FP/FIFO Scheduling: Coexistence of Deterministic and Probabilistic QoS Guarantees

In this paper, we focus on applications with quantitative QoS (Quality of Service) requirements on their end-to-end response time (or jitter). We propose a solution allowing the coexistence of two types of quantitative QoS guarantees, deterministic and probabilistic, while providing high resource utilization. Our solution combines the advantages of both approaches: the deterministic approach is based on a worst case analysis, whereas the probabilistic approach uses a mathematical model to obtain the probability that the response time exceeds a given value. We assume that flows are scheduled according to non-preemptive FP/FIFO: the packet with the highest fixed priority is scheduled first, and if two packets share the same priority, the packet that arrived first is scheduled first. We make no particular assumption about the flow priority or the nature of the QoS guarantee requested by a flow. An admission control derived from these results is then proposed, allowing each flow to receive a quantitative QoS guarantee adapted to its QoS requirements. An example illustrates the merits of the coexistence of deterministic and probabilistic QoS guarantees.


1 Context and motivations
We are interested in providing quantitative QoS (Quality of Service) guarantees to various types of applications on their end-to-end response time (or jitter). Accordingly, the goal of this paper is to achieve quantitative QoS guarantees for such applications while providing high resource utilization. Two types of guarantees can be granted to a flow:
• a deterministic guarantee, which ensures that no packet of this flow will encounter an end-to-end response time exceeding a given deadline. For example, the delivery of an alarm in a command and control application requires a bounded delay.
• a probabilistic guarantee, which ensures that the end-to-end response time of any packet of this flow does not exceed a given deadline with at least a given probability. For example, a video flow can tolerate a few packet losses.
We propose a solution offering a good trade-off between the deterministic and the probabilistic approaches. Indeed, the deterministic approach is based on a worst case analysis and can lead to low resource utilization. However, providing only probabilistic guarantees is not acceptable for applications requiring hard deadlines. That is why we have investigated a solution allowing both guarantee types to coexist. Such a solution should lead to a better resource utilization. The admission control presented in this paper will allow us to accept more flows and will offer each of them a quantitative QoS guarantee in accordance with its requirements.
Our solution is based on Fixed Priority scheduling [1,2], which exhibits interesting properties:
• It favors flows with the highest fixed priority. Hence, the fixed priority of a flow can easily be assigned to reflect the flow's degree of importance.
• The impact of a new flow τ i is limited to flows having priorities smaller than that of τ i .
• It is easy to implement.
• It is well adapted for service differentiation: flows with high priorities have smaller response times.
In this paper, we focus on non-preemptive Fixed Priority scheduling. Indeed, with regard to flow scheduling, the assumption generally admitted is that packet transmission is not preemptive. Moreover, in many cases, several flows may have to share the same priority, for example when:
• the number of fixed priorities available on a processor is less than the number of flows;
• the priority of a flow is determined by external constraints and cannot be chosen arbitrarily;
• flows are processed by class of service and the flow priority is that of its class.
In the state of the art, the worst case analysis assumes that flows sharing the same priority are arbitrarily scheduled. However, FIFO is the policy generally used by Fixed Priority implementations to schedule flow packets having the same fixed priority. In this paper, we consider that such packets are scheduled FIFO and, unlike the state of the art, we take this scheduling into account to compute deterministic and probabilistic guarantees. The resulting scheduling policy is called FP/FIFO. Our solution enables us to improve the worst case response times of such flows. Indeed, a packet cannot be delayed by other packets of the same priority released after it. Notice that there is no relationship between the nature of the QoS guarantee required by a flow (deterministic or probabilistic) and its fixed priority. The paper is organized as follows. In Section 2, we define the problem we address. In Section 3, we show how to conduct a worst case analysis to provide deterministic end-to-end response times to flows requiring firm QoS guarantees. In Section 4, we present a mathematical model to obtain, for any flow requesting a probabilistic QoS guarantee, the probability that its response time does not exceed a given value. We derive from the deterministic and probabilistic results an admission control, presented in Section 5.
The mathematical study is validated on an example given in Section 6. An extended example illustrating these results is presented in Section 7. Finally, we give some perspectives in Section 8 and conclude the paper in Section 9.

2 The problem of providing quantitative QoS guarantees
We investigate the problem of providing a quantitative end-to-end response time guarantee to any flow in a distributed system. This guarantee can be either deterministic or probabilistic depending on the flow QoS requirements. We make no assumption concerning the scheduling priority of flows with deterministic QoS requirements versus flows with probabilistic QoS requirements.

2.1 Scheduling model
We adopt the following assumption concerning the scheduling model.
Assumption 1 Flows are scheduled according to FP/FIFO. With FP/FIFO, packets are first scheduled according to their fixed priority. Packets with the same fixed priority are scheduled according to their arrival order on the node considered.
Notice that this solution has no particular requirement regarding the priority of flows requesting deterministic QoS guarantees versus flows requesting probabilistic ones.
Assumption 2 Packet scheduling is non-preemptive: the scheduler of the node considered waits for the completion of the current packet transmission (if any) before selecting the next packet.
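Assumptions 1 and 2 can be sketched as a packet selection rule. The sketch below is illustrative (the class and method names are ours, not the paper's): packets are ordered by fixed priority, with ties broken FIFO by arrival time on the node, and selection happens only when the link is idle, which captures non-preemption.

```python
import heapq

class FpFifoScheduler:
    """Non-preemptive FP/FIFO (Assumptions 1 and 2): packets are ordered by
    fixed priority, ties broken by arrival order on this node; the packet in
    transmission, if any, is never preempted."""

    def __init__(self):
        self._queue = []   # min-heap of (-fixed_priority, arrival_time, seq, packet_id)
        self._seq = 0      # tie-breaker for identical (priority, arrival) pairs

    def enqueue(self, packet_id, fixed_priority, arrival_time):
        # heapq is a min-heap, so the fixed priority is negated:
        # a higher F is served first, then FIFO on arrival_time.
        heapq.heappush(self._queue,
                       (-fixed_priority, arrival_time, self._seq, packet_id))
        self._seq += 1

    def next_packet(self):
        """Select the next packet; called only when the link is idle
        (non-preemption: the current transmission always completes first)."""
        if not self._queue:
            return None
        return heapq.heappop(self._queue)[3]
```

With two packets of the same priority, the one that arrived first is served first; a packet with a strictly higher fixed priority is served before both.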

2.2 Network model
We adopt the following assumptions concerning the network considered.
Assumption 3 Links interconnecting nodes are FIFO.

Assumption 4 The network delay between two nodes has known lower and upper bounds: Lmin and Lmax.
Assumption 5 The network is reliable: neither network failures nor packet losses are considered.

2.3 Traffic model
We consider a set {τ 1 , τ 2 , ..., τ n } of n flows and adopt the following assumptions.
Assumption 6 Each flow τ i follows a fixed route H i (i), that is, an ordered sequence of nodes whose first node is the ingress node of the flow.
Assumption 7 Flows are characterized by sporadic arrivals. Hence, each flow τ i is defined by:
• T i , the minimum interarrival time (called period) between two successive packets of τ i ;
• C h i , the maximum processing time on node h of a packet of τ i . This parameter depends on the maximum packet size and the capacity of its output link;
• J i , the maximum release jitter of packets of τ i arriving in the network considered. A packet is subject to a release jitter if there exists a non-null delay between its generation time and its release time, the time at which it is taken into account by the scheduler;
• D i , the end-to-end deadline required by τ i ;
(i) For instance, MPLS [3] can be used to fix the route followed by a flow.
• F i , the fixed priority of τ i ;
• P i , the probability required by τ i to meet its deadline.
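The traffic parameters above can be grouped in a small record; a minimal sketch, with field names mirroring the paper's notation (the `Flow` type and its layout are our assumptions):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Flow:
    """Sporadic-flow parameters of Assumption 7; field names mirror the
    paper's notation. The Flow type itself is illustrative."""
    T: float                  # minimum interarrival time (period) T_i
    C: Dict[str, float]       # C_i^h: maximum processing time on each node h
    J: float                  # maximum release jitter J_i at network entry
    D: float                  # end-to-end deadline D_i
    F: int                    # fixed priority F_i (higher value = higher priority)
    P: float                  # required probability P_i of meeting D_i
    path: List[str] = field(default_factory=list)  # fixed route H_i (Assumption 6)

    @property
    def deterministic(self) -> bool:
        # Set D = {tau_i : P_i = 1}; otherwise tau_i belongs to set P.
        return self.P == 1.0
```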
For any flow τ i , we then define the following three sets:
• hp i = {j such that F j > F i }, the set of flows having a fixed priority strictly higher than that of τ i ;
• sp i = {j ≠ i such that F j = F i }, the set of flows having a fixed priority equal to that of τ i ;
• lp i = {j such that F j < F i }, the set of flows having a fixed priority strictly lower than that of τ i .
Moreover, a flow requires either a deterministic or a probabilistic guarantee. We can then define two disjoint sets:
• D = {τ i such that P i = 1}, the set of flows requiring deterministic guarantees;
• P = {τ i such that P i < 1}, the set of flows requiring probabilistic guarantees.
To provide probabilistic guarantees to flows belonging to P, we use the following property.

Property 1
The sporadic arrivals of any flow τ i can be upper bounded by Poisson arrivals characterized by the rate λ i = 1/T i .
Proof: See [4]. ✷

3 Deterministic approach for computing the worst case end-to-end response times

3.1 Related work
Deterministic and quantitative guarantees can be provided by at least three approaches, which compute the worst case end-to-end response time of any flow:
The holistic approach [5,6]. This approach, the first introduced in the literature, considers the worst case scenario on each node visited by a flow, taking into account the maximum possible jitter introduced by the previously visited nodes. The minimum and maximum response times on a node h induce a maximum jitter on the next visited node h + 1, which leads to a worst case response time and then a maximum jitter on the following node, and so on. This approach can be pessimistic, as it considers worst case scenarios on every node, possibly leading to impossible scenarios. Indeed, a worst case scenario for a flow τ i on a node h does not generally result in a worst case scenario for τ i on any node visited after h.
The network calculus approach [7]. Network Calculus is a powerful tool that has recently been developed to solve flow problems encountered in networking. Considering a network element characterized by a service curve and the arrival curves of all flows visiting this element, it is possible to compute the maximum delay of any flow, the maximum size of the waiting queue and the departure curves of the flows. The results of such an analysis are deterministic, provided that the arrival and service curves are deterministic. As bounds are generally used instead of the exact arrival and service curves, this approach can lead to an overestimation of the bounds on the end-to-end response times.
The trajectory approach. This approach considers the worst case scenario that can happen to a packet along its trajectory, i.e., the sequence of nodes visited. This approach is described in this section.

3.2 Notations
We consider any flow τ i , i ∈ [1, n], following a path H i , and focus on the packet m of τ i generated at time t. We adopt the following definition and notations:
Definition 1 Let m be the packet of flow τ i generated at time t. Let m ′ be the packet of flow τ j generated at time t ′ . On any node h ∈ H i ∩ H j , the priority of packet m is higher than or equal to that of packet m ′ if and only if: (F i > F j ) or (F i = F j and m arrives before m ′ on node h).
• τ i , a sporadic flow of the set {τ 1 , ..., τ n };
• R i , the worst case response time of flow τ i ;
• m, the packet of flow τ i generated at time t;
• W h i,t , the latest starting time of packet m on node h;
• first i , the first node visited by flow τ i in the network;
• last i , the last node visited by flow τ i in the network;
• H i = [first i , ..., last i ], the path followed by flow τ i ;
• |H i |, the number of nodes visited by flow τ i ;
• slow i , the slowest node visited by flow τ i on path H i ;
• first j,i , the first node visited by flow τ j on path H i ;
• last j,i , the last node visited by flow τ j on path H i ;
• slow j,i , the slowest node visited by τ j on path H i ;
• Smin h i , the minimum time taken by a packet of flow τ i to go from its source node to node h;
• Smax h i , the maximum time taken by a packet of flow τ i to go from its source node to node h;
• δ i , the maximum delay incurred by a packet of flow τ i directly due to non-preemption when visiting path H i ;
• pre i (h), the node visited by τ i just before node h;
• τ (g), the index of the flow which packet g belongs to;
• ∀a ∈ R, (1 + ⌊a⌋) + stands for max(0; 1 + ⌊a⌋).
Figure 1 illustrates the notations first i,j , first j,i , last i,j and last j,i when flows τ i and τ j are (1) in the same direction and (2) in reverse directions. Moreover, we assume, with regard to flow τ i following path H i , that any flow τ j , j ∈ hp i ∪ sp i , following path H j with H j ≠ H i and H j ∩ H i ≠ ∅ never visits a node of path H i after having left this path.
Assumption 8 For any flow τ i following path H i , for any flow τ j , j ∈ hp i ∪ sp i , following path H j such that H j ∩ H i ≠ ∅, we have either [first j,i , last j,i ] ⊆ H i or [last j,i , first j,i ] ⊆ H i . To achieve this, the idea is to consider a flow crossing path H i after it has left H i as a new flow. We proceed by iteration until Assumption 8 is met.

Definition 2
The end-to-end jitter of any flow τ i , i ∈ [1, n], is the difference between the maximum and minimum end-to-end response times of τ i packets.

3.3 Study of the trajectory of packet m
Unlike the holistic approach, the trajectory approach is based on the analysis of the worst case scenario experienced by a packet on its trajectory and not on each node visited [8]. Thus, only possible scenarios are examined. For instance, the fluid model is relevant to the trajectory approach. More precisely, we consider any flow τ i , i ∈ [1, n], following a path H i consisting of q nodes numbered from 1 to q. We focus on the packet m of τ i generated at time t.
As we consider non-preemptive scheduling, the processing of a packet can no longer be delayed once it has started. That is why we compute the latest starting time of m on the last node it visits. For this, we adopt the trajectory approach, which consists in moving backwards through the sequence of nodes m visits, each time identifying the preceding packets and busy periods that ultimately affect the delay of m.
To compute the latest starting time of packet m, we proceed as follows. We first determine bp q , the busy period (ii) of the level corresponding to the priority of m in which m is processed on node q. We define f (q) as the first packet processed in bp q with a priority higher than or equal to that of m. Due to non-preemption, f (q) can be delayed by at most one packet with a priority lower than that of m. As flows do not necessarily follow the same path in the network considered, it is possible that f (q) does not come from node q − 1. We then define p(q − 1) as the first packet processed between f (q) and m such that p(q − 1) comes from node q − 1. This packet has been processed on node q − 1 in a busy period bp q−1 of the level corresponding to the priority of p(q − 1). We then define f (q − 1) as the first packet processed in bp q−1 with a priority higher than or equal to that of p(q − 1). And so on, until the busy period, on node 1, of the level corresponding to the priority of packet p(1), in which the packet f (1) is processed (see Fig. 2).
For the sake of simplicity, on a node h, we number consecutively the packets processed after f (h) and before p(h) (with p(q) = m). We then denote m ′ − 1 (respectively m ′ + 1) the packet preceding (respectively succeeding) m ′ . Moreover, we denote a h m ′ the arrival time of m ′ on node h and consider that a 1 f (1) = 0. By adding up the parts of the busy periods considered, we can express the latest starting time of packet m on node q as: the processing time on node 1 of packets f (1) to p(1) + Lmax + the processing time on node q of packets f (q) to (m − 1) − (a q p(q−1) − a q f (q) ) + δ i .
(ii) A busy period of level L is defined by an interval [t, t ′ ) such that t and t ′ are both idle times of level L and there is no idle time of level L in (t, t ′ ). An idle time t of level L is a time such that all packets with a priority greater than or equal to L generated before t have been processed at time t.
In the worst case, p(h) = f (h + 1) on any node h ∈ H i . Moreover, in the worst case, on any node h visited by τ i , the fixed priority of the packet f (h) is that of packet m. Thus, the latest starting time of packet m, generated at time t, consists of three parts:
• X i,t , the delay due to packets with a priority higher than or equal to that of m;
• δ i , the delay due to the non-preemptive effect;
• (q − 1) · Lmax, the maximum network delay.
We evaluate X i,t and δ i in the two following subsections.

3.4 Delay due to higher priority packets
We now evaluate the maximum delay incurred by m due to packets with a priority higher than or equal to that of m. This delay is denoted X i,t . By definition, for any node h ∈ [1, q), f (h + 1) is the first packet with a priority higher than or equal to that of m, processed in bp h+1 and coming from node h. Moreover, f (h + 1) is the last packet considered in bp h . Let us show that, if we count the packets processed in bp h and bp h+1 , only f (h + 1) is counted twice.

Lemma 1 For any flow τ i , if there exists a node h ∈ H
Proof: By induction. Let us consider any packet m ′ processed in (f (1), f (2)) on node 1. As links are FIFO, m ′ arrives on node 2 before f (2). Consequently, on node 2, m ′ has a priority higher than that of f (2). Having arrived before f (2), m ′ starts its transmission before f (2) on node 2. As the busy period on this node starts with f (2), the processing of m ′ is completed at the latest at the arrival of f (2). We now distinguish the nodes visited before slow i , the node slow i itself, and the nodes visited after slow i . By definition, ∀h ∈ [1, slow i ), f (h + 1) is the first packet with a priority higher than or equal to that of m, processed in bp h+1 and coming from node h. Moreover, f (h + 1) is the last packet considered in bp h . Hence, if we count the packets processed in bp h and bp h+1 , only f (h + 1) is counted twice. In the same way, ∀h ∈ (slow i , q], f (h) is the first packet with a priority higher than or equal to that of m, processed in bp h and coming from node h − 1. Moreover, f (h) is the last packet considered in bp h−1 . Thus, f (h) is the only packet counted twice when counting the packets processed in bp h−1 and bp h . Hence, X i,t is equal to the sum of the processing times of the packets counted in the busy periods bp 1 , ..., bp q , each doubly counted packet being counted on its slowest node. Moreover, the processing time of any packet g visiting a node h ∈ H i is bounded by C slow i τ (g) . Then, as packets are numbered consecutively from f (1) to f (q + 1) = m, we get the corresponding bound on X i,t .
In addition, as in the worst case f (h + 1) is a packet coming from node h, X i,t is maximized when the workload generated by the flows of hp i ∪ sp i ∪ {i} is maximum. Then, we get:
Lemma 2 Let m be the packet of flow τ i generated at time t. When flows are scheduled FP/FIFO, the maximum delay incurred by m due to packets having a priority higher than or equal to that of m is bounded by the maximum workload that the flows of hp i ∪ sp i ∪ {i} can generate in the intervals identified below.
Proof: Considering a packet m of τ i generated at time t:
• packets of flow τ j , j ∈ hp i , can delay m if they are generated at the earliest at time a − J j and at the latest at time W last i,j i,t − Smin last i,j j ;
• packets of flow τ j , j ∈ sp i , can delay m if they are generated at the earliest at time a and, packets of the same priority being served FIFO, only if they arrive before m;
• packets of τ i can delay m if they are generated at the earliest at time −J i and at the latest at time t.
The maximum workload generated by any flow τ j in the interval [a, b] on node h is equal to (1 + ⌊(b − a + J j )/T j ⌋) + · C h j . Hence the lemma. ✷
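The workload term used in the proof can be computed directly. A minimal sketch, assuming the standard sporadic-workload form (1 + ⌊(b − a + J j )/T j ⌋) + · C h j suggested by the paper's (1 + ⌊a⌋) + notation:

```python
import math

def max_workload(a, b, T_j, J_j, C_j):
    """Upper bound on the workload a sporadic flow tau_j can generate in
    [a, b] on a node where its processing time is C_j: packets have release
    jitter J_j and minimum interarrival time T_j. Uses the paper's
    (1 + floor(x))^+ notation, i.e. max(0, 1 + floor(x))."""
    x = (b - a + J_j) / T_j
    return max(0, 1 + math.floor(x)) * C_j
```

For example, with T_j = 4, J_j = 0 and C_j = 2, at most three packets fit in an interval of length 10, for a workload of 6.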

3.5 Delay due to non-preemption
We recall that packet scheduling is non-preemptive. Hence, despite the high priority of any packet m of any flow τ i , a packet with a lower priority can delay the processing of m due to non-preemption. Indeed, if m arrives on node h while a packet m ′ belonging to lp i is being processed, m has to wait until the completion of m ′ . By definition of FIFO scheduling, m cannot be delayed by a packet belonging to sp i due to non-preemption. It is important to notice that the non-preemptive effect is not limited to this waiting time. The delay incurred by packet m on node h directly due to m ′ may lead to considering packets belonging to hp i that arrive after m on the node but before m starts its execution.
Property 2 When flows are scheduled FP/FIFO, the maximum delay incurred by a packet of flow τ i directly due to flows belonging to lp i , denoted δ i , is bounded by Σ h∈H i (max j∈lp i {C h j } − 1) + , where max j∈lp i {C h j } = 0 if lp i = ∅.
Proof: On each node h visited by τ i , the delay incurred by m due to a packet m ′ of a flow τ j having a lower priority is maximum when (i) m ′ starts its processing on node h one time unit before the beginning of the busy period considered in the decomposition illustrated by Figure 2, and (ii) τ j has the maximum processing time among the flows belonging to lp i . ✷
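The bound on δ i sums, over the nodes of H i , the largest lower-priority processing time minus one. A minimal sketch (the `C[j][h]` table layout is an assumption of ours):

```python
def non_preemption_delay(path, C, lp):
    """Bound on delta_i: on each node h of H_i, at most one lower-priority
    packet, started one time unit before the busy period, delays the flow by
    (max_{j in lp_i} C_j^h - 1)^+ time units; the bound is 0 if lp_i is empty.
    C[j][h] is the maximum processing time of flow j on node h."""
    total = 0
    for h in path:
        if lp:
            total += max(0, max(C[j][h] for j in lp) - 1)
    return total
```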

3.6 Latest starting time expression
From subsections 3.4 and 3.5, we can express the latest starting time of packet m on its last visited node.
Property 3 Let m be the packet of flow τ i generated at time t. When flows are scheduled FP/FIFO, the latest starting time of m on its last visited node, W last i i,t , is bounded by the sum of the three terms identified above (the bound of Lemma 2, δ i and the network delay).
Proof: By Lemma 2 and Property 2. ✷
The expression of W last i i,t is recursive. Let us then consider the following series, computed for any node h ∈ H i , with:
• first h j,i , the first node visited by τ j on H h i ;
• last h j,i , the last node visited by τ j on H h i ;
• slow h j,i , the slowest node visited by τ j on H h i ;
• δ h i , the maximum delay incurred by a packet of τ i directly due to non-preemption when visiting H h i .
When the series W lasti i,t converges, W lasti i,t is its limit.
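Since the expression of W last i i,t is recursive, it is naturally evaluated by fixed-point iteration; a generic sketch, where `update` stands in for the right-hand side of Property 3 (an assumption of this sketch, supplied by the caller):

```python
def latest_starting_time(update, w0=0.0, eps=1e-9, max_iter=10_000):
    """Solve the recursive expression of Property 3 by fixed-point iteration:
    W^{(k+1)} = update(W^{(k)}). `update` encapsulates the right-hand side of
    the bound (workloads of hp_i/sp_i flows, delta_i, network delays).
    Returns the limit if the series converges, None otherwise."""
    w = w0
    for _ in range(max_iter):
        w_next = update(w)
        if abs(w_next - w) <= eps:
            return w_next
        w = w_next
    return None  # the series did not converge within the iteration budget
```

For a toy contraction such as w ↦ 2 + 0.5 w, the iteration converges to the fixed point 4; a non-contracting update diverges and yields None.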

3.7 Worst case end-to-end response time
The worst case end-to-end response time of the packet of flow τ i generated at time t is equal to W last i i,t + C last i i − t. The worst case end-to-end response time of flow τ i is then equal to: R i = max t≥−J i {W last i i,t + C last i i − t}. In order not to test all times t ≥ −J i , we establish Lemma 3.
Lemma 3 Let us consider a flow τ i following a path H i . If flows are scheduled FP/FIFO, then the times t ≥ −J i to be tested can be restricted to a finite set.
Proof: In [9]. ✷
From the worst case analysis given in this section and the previous lemma, we get the following property.
Property 4 When flows are scheduled FP/FIFO, the worst case end-to-end response time of any flow τ i is bounded by max t {W last i i,t + C last i i − t}, where t ranges over the finite set of times given by Lemma 3.

3.8 Computation algorithm
To compute the worst case response times of a flow set, we proceed by decreasing fixed priority order. We first compute the response times of the flows having the highest fixed priority. We then continue with the flows having the highest priority among those whose response time is not yet computed, and so on. Let F i be the highest priority of the flows whose response time has not yet been computed. Let τ i , i ∈ [1, n], be a flow of priority F i . We compute the set S i of flows crossing τ i directly or indirectly and apply Property 4 to compute the worst case response time of τ i . Notice that if a flow exceeds its deadline, we stop the computation. We proceed in the same way for any flow having priority F i .
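The computation algorithm above can be sketched as follows, with `worst_case_response_time` standing in for the bound of Property 4 (both names are illustrative, not from the paper):

```python
def compute_all_response_times(flows, worst_case_response_time):
    """Sketch of the computation algorithm: process flows by decreasing fixed
    priority F_i; `worst_case_response_time(flow, flows)` stands in for the
    bound of Property 4. Returns None as soon as a flow exceeds its deadline,
    since the paper stops the computation in that case."""
    results = {}
    for flow_id, flow in sorted(flows.items(), key=lambda kv: -kv[1].F):
        r = worst_case_response_time(flow, flows)
        if r > flow.D:
            return None  # deadline exceeded: stop the computation
        results[flow_id] = r
    return results
```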
4 Probabilistic approach for computing the probabilities of meeting the deadlines

The probabilistic approach will be able to guarantee to each flow τ i belonging to the set P that the end-to-end response time of any of its packets does not exceed the deadline D i with a probability higher than P i .
To obtain this probability, the end-to-end response time distribution must be computed.

4.1 Notations
We focus on the set {τ 1 , τ 2 , ..., τ n } of n flows. We consider in this section that these flows are characterized by Poisson arrivals (see Property 1) and adopt (or recall) the following notations:
• τ i , a sporadic flow of the set {τ 1 , ..., τ n };
• λ i , the average arrival rate of packets of flow τ i ;
• µ h i , the average service rate of packets of flow τ i on node h;
• P success (D i ), the probability that flow τ i does not miss its deadline;
• H i = [first i , ..., last i ], the path followed by flow τ i ;
• |H i |, the number of nodes visited by flow τ i ;
• |F |, the number of fixed priorities shared by the flows considered;
• pre i (h), the node visited by τ i just before node h;
• suc i (h), the node visited by τ i just after node h;
• ρ ab i , the utilization factor of the link server ab by the packets of flow τ i , equal to λ ab i /µ ab i ;
• ρ ab , the utilization factor of the link server ab, equal to Σ n j=1 ρ ab j .
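The utilization factors just defined can be checked directly; the M/G/1 model of this section implicitly requires ρ ab < 1 for stability, which is the usual queuing condition. A small helper (the function name is ours):

```python
def link_utilization(arrival_rates, service_rates):
    """rho_ab = sum_j lambda_j^ab / mu_j^ab, with mu_j^ab the average service
    rate of tau_j on link ab. The M/G/1 model requires rho_ab < 1."""
    return sum(lam / mu for lam, mu in zip(arrival_rates, service_rates))
```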

4.2 Node response time distribution
To compute the end-to-end response time distribution of any flow τ i , we first focus on its node response time distribution. A node can be considered as a set of queuing systems. Arriving packets are stored in a first queue, to be processed and switched over the appropriate link. Each link corresponds to a queuing system, where the service is the transmission of a packet over this link. By supposing that the processing time at the first queue is instantaneous, the node response time, for any packet of τ i going through node a to node b, corresponds to the response time of the queuing system modelling the link ab [10]. To simplify this study, we introduce Assumption 9.
Assumption 9 Packet arrivals of any flow τ i to a link ab also form a Poisson process, with parameter λ ab i equal to λ i if ab belongs to the path of τ i , and 0 otherwise.
According to the traffic description, Assumption 9 and the FP/FIFO scheduling, each link can be modelled by an M/G/1 station with n classes of customers, non-preemptive Priority Queuing (with |F | priorities) and the Head Of Line (HOL) discipline [11]. The average arrival rate of packets of class i (flow τ i ) on the link ab is λ ab i and the average service rate of packets of τ i on the link ab is µ ab i . The node response time distribution for packets of flow τ i at the link ab is obtained by inverting its Laplace transform, denoted S ab i * (s) and given by S ab i * (s) = W ab Fi * (s) B ab i * (s), where W ab Fi * (s) is the Laplace transform of the waiting time density of the packets having priority F i at the link ab and B ab i * (s) is the Laplace transform of the service time probability density function of the packets of flow τ i at the link ab. Indeed, packets having the same priority have the same waiting time distribution. We first focus on the computation of W ab Fi * (s). A packet having priority F i must wait for [12]:
• the packets with a priority ≥ F i found in the queue upon the arrival of our tagged packet;
• the packets with a priority > F i which arrive before the beginning of service of our tagged packet;
• the packet found in service upon the arrival of our tagged packet.
As in [13], we define two categories of packets: those belonging to flows in hp i ∪ sp i ∪ {i} (called priority packets) and those belonging to flows in lp i (called ordinary packets). The Poisson arrival rates of these two packet categories are given by: λ + Fi = Σ j∈hp i ∪sp i ∪{i} λ ab j and λ − Fi = Σ j∈lp i λ ab j .
The Laplace transforms of the service time densities of priority and ordinary packets are defined accordingly. Notice that the waiting time of a packet having priority F i is invariant to a change in the order of service. This waiting time can be computed as follows [14]:
• the service time of the packet in service upon the arrival of our tagged packet, plus those of the packets having priority ≥ F i and waiting in the system at this time; this duration is denoted W + Fi ;
• the service time of the packets having priority > F i that arrive during W + Fi , plus the duration of all busy periods generated by these packets.
Let W + Fi * (s) be the Laplace transform of the waiting time density of priority packets; W + Fi * (s) is given in [15].
By coming back to the original system, the waiting time of the packets having priority F i at the link ab corresponds to W + Fi plus the sum of the service times of the packets having priorities higher than F i that arrive during the busy period initiated by W + Fi . Hence, the Laplace transform of this waiting time density, W ab Fi * (s), involves θ + Fi+1 * (s), the Laplace transform of the density of the duration of a busy period generated by the packets having a priority strictly greater than F i , which is the solution to the equation θ + Fi+1 * (s) = B + Fi+1 * (s + λ + Fi+1 − λ + Fi+1 θ + Fi+1 * (s)). θ + Fi+1 * (s) can be obtained numerically [15] by successive substitutions, the n-fold convolution of the corresponding service time density with itself representing the probability density function of the sum of the service times of n independent packets belonging to hp i . Let σ Fi+1 = s + λ + Fi+1 − λ + Fi+1 θ + Fi+1 * (s); then we get W ab Fi * (s) = W + Fi * (σ Fi+1 ). The Laplace transform of the node response time distribution for packets of flow τ i at the link ab then follows as S ab i * (s) = W ab Fi * (s) B ab i * (s).
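The busy-period transform can be evaluated numerically by successive substitution of the fixed-point equation θ * (s) = B * (s + λ − λ θ * (s)), which is consistent with the σ Fi+1 substitution above. A sketch (function names are ours), illustrated with exponential service, for which the M/M/1 busy-period transform has a closed form usable as a check:

```python
def busy_period_transform(s, lam, B_star, iters=200):
    """Evaluate theta*(s), the Laplace transform of the busy-period duration
    density, as the fixed point of
        theta*(s) = B_star(s + lam - lam * theta*(s)),
    iterated from 0; the iteration converges for a stable system.
    B_star is the Laplace transform of the service time density."""
    theta = 0.0
    for _ in range(iters):
        theta = B_star(s + lam - lam * theta)
    return theta

# Illustration (our choice): exponential service with rate mu, B*(s) = mu/(mu+s).
mu, lam = 2.0, 0.5
theta = busy_period_transform(1.0, lam, lambda x: mu / (mu + x))
```

For this M/M/1 case the closed form is ((λ + µ + s) − sqrt((λ + µ + s)² − 4λµ)) / (2λ), which the iteration matches to high precision.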

4.3 End-to-end response time distribution
Let s i be the random variable representing the end-to-end response time of a packet of flow τ i and S * i (s) the Laplace transform of its probability density function. The end-to-end response time corresponds to the time needed to go from the ingress node to the egress node. The random variable s i is the sum of the response times on the nodes crossed by the packet while going through the network and of the transmission delays between the different nodes, where d is the random variable corresponding to the transmission delay between two nodes. The random variables corresponding to these different durations being independent, S * i (s) is obtained as the product of their Laplace transforms.
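The independence argument above can also be exploited numerically: on a discretized time grid, the density of a sum of independent delays is the convolution of the per-node densities. A minimal sketch with illustrative pmfs (the grid and function names are our assumptions):

```python
def convolve(p, q):
    """Discrete convolution of two probability mass functions given on a
    common time grid: the distribution of a sum of independent delays."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def end_to_end_distribution(node_pmfs, link_pmf):
    """Distribution of s_i: the sum of the per-node response times along the
    path plus the transmission delay between consecutive nodes, all assumed
    independent (as in the text). Each pmf is a list over a common grid."""
    dist = node_pmfs[0]
    for pmf in node_pmfs[1:]:
        dist = convolve(dist, link_pmf)
        dist = convolve(dist, pmf)
    return dist
```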

4.4 Probabilistic QoS guarantee
The end-to-end response time distribution enables us to determine, for a given configuration, the probability that a flow packet does not stay in the network beyond a given duration.
Property 5 A packet belonging to the flow τ i with a relative deadline D i meets its deadline with the probability P success (D i ) = ∫ 0 Di s i (t) dt, where s i (t) is the end-to-end response time density obtained by inverting its Laplace transform.
The study developed here allows us to provide probabilistic QoS guarantees to flows having quantitative constraints on their end-to-end response times.
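Property 5 reduces to integrating the end-to-end response time density up to the deadline. A minimal numerical sketch (names are ours), checked against an exponential density for which the integral has a closed form:

```python
import math

def p_success(density, deadline, steps=100_000):
    """Property 5: the probability that the end-to-end response time does not
    exceed the deadline, P_success(D_i) = integral of s_i(t) over [0, D_i],
    approximated here with the trapezoidal rule."""
    h = deadline / steps
    total = 0.5 * (density(0.0) + density(deadline))
    for k in range(1, steps):
        total += density(k * h)
    return total * h

# Illustrative check with an exponential density of rate 1:
p = p_success(lambda t: math.exp(-t), deadline=3.0)
```

For the exponential density of rate 1, the exact value is 1 − e^{−D}, which the numerical integral matches closely.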

5 Admission control
Now we will show how to manage the coexistence of deterministic and probabilistic QoS guarantees in a network.This is done by an admission control, derived from the results established in the previous sections.We will see a numerical example in the next section.
The admission control is in charge of deciding whether a new flow τ k can be accepted in the network. This decision is based on the following conditions:
1. the acceptance of τ k should not compromise the guarantees granted to the already accepted flows.
This condition requires that both the flows belonging to D and the flows belonging to P meet their deadlines with the requested probability (this probability is 1 for flows in D). For this purpose, the admission control proceeds as follows:
• for each flow τ j belonging to D, we recompute its end-to-end response time R j and check that R j ≤ D j , by applying Property 4;
• for each flow τ j belonging to P, we recompute its end-to-end response time distribution and check that P success (D j ) ≥ P j , by applying Property 5.
We recall that for this computation, packet arrivals of all flows in D ∪ P are upper bounded by Poisson arrivals.
2. the guarantee requested by τ k can be met taking into account the available resources. Depending on the type of QoS guarantee required by τ k , we apply Property 4 or Property 5 to check either that R k ≤ D k or that P success (D k ) ≥ P k .
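The two admission conditions can be sketched as a single check over the candidate flow set; `det_ok` and `prob_ok` stand in for the checks of Property 4 and Property 5 respectively (all names here are ours, not the paper's):

```python
def admit(new_flow, accepted, det_ok, prob_ok):
    """Sketch of the admission control: tau_k is accepted only if, on the
    candidate set (accepted flows plus tau_k), every flow of D still meets
    R_j <= D_j (det_ok, Property 4) and every flow of P still meets
    P_success(D_j) >= P_j (prob_ok, Property 5)."""
    candidate = accepted + [new_flow]
    for flow in candidate:
        check = det_ok if flow.P == 1.0 else prob_ok
        if not check(flow, candidate):
            return False   # an existing guarantee (or the new one) would break
    return True            # tau_k is accepted
```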
Remark: The impact of a new flow τ k on flows belonging to D with a priority strictly higher than that of τ k is due to the non-preemptive effect. In order to avoid recomputing the worst case end-to-end response time of any flow τ j ∈ D such that F j > F k , we upper bound the processing time of any potential flow that could be accepted with a priority lower than that of τ j by C max,Fj . With no particular knowledge of the paths followed by the potential flows, we can assume that, in the worst case, τ j can be delayed by C max,Fj − 1 on each node visited, because of a potential flow with a fixed priority strictly lower than F j . Then, Property 2 becomes Property 6.
When flows are scheduled FP/FIFO, the maximum delay incurred by a packet of flow τ i directly due to flows belonging to lp i , denoted δ i , is bounded by: lasti h=f irsti (C max,Fi − 1), where C max,Fi is the maximum processing time of any possible flow with a priority lower than F i and C max,Fi − 1 = 0 if F i is the smallest possible fixed priority.
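With the same value C_max,Fi on every node visited, the bound of Property 6 reduces to simple arithmetic. The sketch below, with hypothetical names, evaluates it:

```python
def lower_priority_delay_bound(c_max, first, last):
    """Property 6 bound on delta_i: sum over the visited nodes first..last
    of (C_max,Fi - 1), where c_max is the maximum processing time of any
    possible lower-priority flow.  Use c_max = 1 (a zero per-node term)
    when F_i is the smallest possible fixed priority."""
    per_node = max(c_max - 1, 0)   # non-preemptive effect on one node
    nodes = last - first + 1       # number of nodes visited by tau_i
    return nodes * per_node
```

For instance, with C_max,Fi = 560 (the value appearing in the example of the next section) and a four-node path, the bound is 4 × 559 = 2236 time units.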
Thanks to this property, the acceptance of a new flow τ_k does not impact the flows having a priority higher than that of τ_k.

Model validation
In this section, we focus on the validation of our analytical model and of the mathematical study presented in the previous sections. More precisely, we show, on an example, that the theoretical results are (i) very close to those obtained by simulation with NS2 for the probabilistic model and (ii) reached in a given worst case scenario for the deterministic study. A more general example is considered in the next section.

Example
Let us consider six flows: two command and control flows (τ_1 and τ_2), two video flows (τ_3 and τ_4), and two flows corresponding to file transfers (τ_5 and τ_6). The characteristics of these flows are given in Table 1 and their paths are illustrated in Figure 3, where the node identifiers have been omitted for the sake of clarity. Moreover, for each flow, we have emphasized its name in its input node. Notice that only flows τ_1 and τ_2 require deterministic QoS guarantees. Flow τ_3, for instance, requires its end-to-end response time to be lower than 10 milliseconds with a probability higher than or equal to 99%. Moreover, all links are 10 Mbit/s.
On the other hand, for the deterministic guarantees, the worst case end-to-end response times obtained by simulation for flows τ_1 and τ_2 are smaller than those obtained with the mathematical study. This can be explained by the fact that no packet of τ_1 or τ_2 went through the worst case scenario during the simulation. However, we can show that the worst case end-to-end response time can be reached. For example, that of τ_1 given in Table 2 is reached in the worst case scenario illustrated in Figure 4. Indeed, if we number the nodes visited by τ_1 from 1 to 4, the following scenario leads to a worst case end-to-end response time equal to 5537:
• On node 1, flow τ_3 generates a packet at time 0 and flow τ_1 generates a packet at time 1. Hence, the packet of τ_1 is delayed by the packet of τ_3.
• On node 2, the packets of τ_3 and τ_1 arrive respectively at times 2240 and 2340. As flow τ_5 follows an independent path until node 2, a packet of this flow can arrive at any time. We therefore assume that a packet of τ_5 arrives at time 1779. Hence, the packet of τ_3 starts its execution at time 2339, that is, one time unit before the arrival of the packet of τ_1. The non-preemptive effect is thus maximized for this packet. Moreover, if a packet of τ_2 arrives during the processing of the packet of τ_3, it is processed before the packet of τ_1.
• On node 3, the packets of flows τ_2 and τ_1 arrive respectively at times 4679 and 4779. If a packet of τ_6 arrives at time 4678 (this is possible for the same reasons as for τ_5 on node 2), then the packets of flows τ_2 and τ_1 experience a maximum non-preemptive effect equal to 560 − 1 time units.
• On node 4, the packet of flow τ_1 arrives at time 5438. Hence, as τ_1 is the only flow visiting this node, its worst case end-to-end response time is equal to 5438 plus 100 (its maximum processing time on this node) minus 1 (its generation time), that is, 5537.
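The final arithmetic of this scenario can be replayed directly; all the numbers below come from the text:

```python
# Worst case end-to-end response time of tau_1 in the scenario above:
# its packet reaches node 4 at time 5438, needs at most 100 time units
# of processing there, and was generated at time 1.
arrival_node4 = 5438
processing_node4 = 100
generation_time = 1

worst_case_r1 = arrival_node4 + processing_node4 - generation_time
assert worst_case_r1 == 5537  # matches the bound reported in Table 2
```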
We can therefore conclude that both deterministic and probabilistic studies are validated in this example.

Coexistence benefits
To illustrate the interest of the coexistence of deterministic and probabilistic QoS guarantees, we first show, by computing the worst case end-to-end response times, that this network fails to provide deterministic QoS guarantees to all flows. These response times are obtained by applying Property 4 and are given in Figure 5. We notice that flows τ_4 and τ_5 miss their deadlines. Hence, with a pure deterministic approach, only four flows (τ_1, τ_2, τ_3 and τ_6) would be accepted. Nevertheless, flows τ_1 and τ_2 do not tolerate any deadline violation, and the probabilistic approach fails to provide such a guarantee. Indeed, the probability P_success can never equal 1, as a Poisson arrival process and an exponentially distributed service time have been considered. To accept all the flows considered and meet their QoS requirements, we provide deterministic QoS guarantees for flows τ_1 and τ_2 and probabilistic ones for the others. The success probabilities given in Table 3 are computed by applying Property 5.
Providing both deterministic and probabilistic guarantees enables a better resource utilization rate. Indeed, if τ_4, for instance, required a deterministic QoS guarantee, it would be rejected, as the computed bound on its end-to-end response time is R_4 = 1519 µs > D_4 = 1000 µs. By requesting a probabilistic QoS guarantee, τ_4 is accepted in the network with a probability of 98.54% of meeting its deadline. Consequently, this example shows that the coexistence of deterministic and probabilistic QoS guarantees enables us to accept more flows than a pure deterministic approach does.

Extended example
In this section, we present an example illustrating the benefits brought by the coexistence of deterministic and probabilistic QoS guarantees in a network consisting of 48 nodes. Let us consider the 24 flows presented in Table 4. We suppose that all links are 10 Mbit/s; the paths of the flows considered are illustrated in Figure 6. As in the previous section, the node identifiers have been omitted for the sake of clarity and, for each flow, we have emphasized its name in its input node. We present in Figure 7 the worst case end-to-end response times of the flows requiring deterministic QoS guarantees (i.e., τ_1, τ_3, τ_7, τ_9, τ_17, τ_19 and τ_23). Notice that the deterministic bounds are computed according to Property 4. As we can see, each of these flows meets its end-to-end deadline. Moreover, flows τ_12 and τ_24 have the same characteristics, except for the deadline and the type of QoS guarantee. If we compute the worst case end-to-end response times of these flows, we obtain R_12 = 9235 and R_24 = 10215. Thus, these flows do not meet their end-to-end deadlines. However, we see in Table 6 that the considered deadlines are met with a probability higher than 97%.
The deadline success probabilities of the flows requiring probabilistic QoS guarantees are given in Table 5. As in Section 6, these results highlight the benefits brought by our solution. Providing only deterministic bounds on the end-to-end response times would lead either to accepting a small number of flows or to a low resource utilization, since the worst case scenario occurs infrequently. Nevertheless, probabilistic guarantees are not satisfactory when specific applications, such as command and control applications, have strict end-to-end response time requirements. Notice that our solution makes no restriction on the importance degree of a flow or on its type of guarantee. As a consequence, each flow receives the guarantee requested and, globally, the network achieves a higher resource utilization.

Perspectives
In this paper, we have established new results to provide quantitative QoS guarantees when flows are scheduled according to non-preemptive FP/FIFO. The coexistence of deterministic and probabilistic QoS guarantees allows a higher resource utilization. Consequently, more flows can be accepted in the network.
To further improve the resource utilization, we can investigate two directions. First, we can focus on techniques that drop packets as soon as it can be proved that they cannot meet their deadlines. Such techniques spare resources. However, they should be selected carefully, because overly aggressive techniques discard a packet as soon as its local deadline is missed, even if this excess delay could be compensated on other nodes such that the end-to-end deadline is finally met. The second direction is the study of other scheduling strategies combining the flow's degree of importance and its end-to-end deadline. For instance, scheduling flows having the same fixed priority according to EDF (Earliest Deadline First) would lead to a better schedulability of a flow set by taking the end-to-end deadlines into account. Moreover, in future work, we will see how to apply our results to networks using shaping techniques. Different cases will be considered: (i) shaping done only in the ingress nodes, (ii) shaping done in every node and (iii) shaping done in specific nodes.
Finally, flow aggregation techniques could be interesting to study.

Conclusion
FP scheduling is used when flows have different degrees of importance. FP/FIFO is the most commonly used implementation of FP: packets having the same fixed priority are scheduled according to their arrival order on the node considered. In this paper, we have shown how to provide quantitative QoS guarantees to flows having constraints on their end-to-end response times. We have proposed a solution that achieves this goal while preserving a high resource utilization rate. This solution allows the coexistence of two types of quantitative QoS guarantees: deterministic and probabilistic. No restriction is made concerning the relationship between the fixed priority value and the type of QoS guarantee that can be granted to a flow.
On the one hand, deterministic guarantees are obtained from a worst case end-to-end response time analysis based on the trajectory approach. This ensures that the worst case end-to-end response time of the flow considered does not exceed the required deadline. On the other hand, probabilistic guarantees are obtained from a mathematical model based on Poisson arrivals: the distribution of the end-to-end response time of the flow considered is computed. This ensures that the end-to-end response time of this flow does not exceed the given deadline with a probability higher than or equal to the one required.
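The paper computes the full end-to-end distribution; as a much simpler single-node illustration of a probabilistic guarantee, consider one M/M/1 FCFS queue, whose sojourn time is exponentially distributed with rate μ − λ. This stand-in model is ours, not the paper's:

```python
import math

def mm1_success_probability(lam, mu, deadline):
    """P(response time <= deadline) for a single M/M/1 FCFS queue.
    The sojourn time is Exp(mu - lam), so P(T <= D) = 1 - exp(-(mu - lam) * D).
    Requires lam < mu (stable queue)."""
    if lam >= mu:
        raise ValueError("queue must be stable (lam < mu)")
    return 1.0 - math.exp(-(mu - lam) * deadline)

# Example: 0.5 packets/ms arrivals, 1 packet/ms service, 10 ms deadline.
p = mm1_success_probability(0.5, 1.0, 10.0)  # 1 - e^-5, about 0.9933
```

Note that this probability is strictly below 1 for any finite deadline, which is precisely why a Poisson/exponential model cannot provide deterministic guarantees.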
Finally, we have shown how to derive an admission control from our results. This admission control, in charge of deciding the acceptance of a new flow, allows us to accept more flows, leading to a better resource utilization than a pure deterministic approach. Moreover, each flow receives the quantitative QoS guarantee in accordance with its requirements.

Fig. 2: Response time of packet m

L*(s) denotes the Laplace transform of the probability density function of the delay d. According to Assumption 4, we have L*(s) = (e^(−s·Lmin) − e^(−s·Lmax)) / (s·(Lmax − Lmin)). The end-to-end response time distribution is obtained by inverting its Laplace transform.
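This closed form is the Laplace transform of a density that is uniform on [Lmin, Lmax]. It can be cross-checked numerically; the sketch below uses hypothetical helper names:

```python
import math

def laplace_uniform_closed_form(s, lmin, lmax):
    """Closed form: (exp(-s*Lmin) - exp(-s*Lmax)) / (s * (Lmax - Lmin))."""
    return (math.exp(-s * lmin) - math.exp(-s * lmax)) / (s * (lmax - lmin))

def laplace_uniform_numeric(s, lmin, lmax, steps=100000):
    """Midpoint-rule integration of exp(-s*t) against the uniform density
    1/(Lmax - Lmin) over [Lmin, Lmax]."""
    width = (lmax - lmin) / steps
    total = 0.0
    for k in range(steps):
        t = lmin + (k + 0.5) * width
        total += math.exp(-s * t) * width / (lmax - lmin)
    return total
```

The two functions agree to high precision, confirming the stated transform.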

Fig. 3: Paths followed by the flows considered

Fig. 4: Worst case scenario for τ_1: the upper bound is reached

Table 2 presents the results (the worst case end-to-end response time R_i and the deadline success probability P_success(D_i)) obtained by the analytical study and by simulation. First, we can see, for the probabilistic guarantees, that the deadline success probabilities obtained by simulation are very close to the results obtained with the mathematical study. As a consequence, the mathematical model is validated.

Tab. 5: Simulation results for flows with probabilistic QoS guarantees