Discrete Mathematics & Theoretical Computer Science
Given a boolean predicate $\Pi$ on labeled networks (e.g., proper coloring, leader election, etc.), a self-stabilizing algorithm for $\Pi$ is a distributed algorithm that can start from any initial configuration of the network (i.e., every node has an arbitrary value assigned to each of its variables) and eventually converge to a configuration satisfying $\Pi$. It is known that leader election does not have a deterministic self-stabilizing algorithm using a constant-size register at each node, i.e., for some networks, some of their nodes must have registers whose sizes grow with the size $n$ of the networks. On the other hand, it is also known that leader election can be solved by a deterministic self-stabilizing algorithm using registers of $O(\log \log n)$ bits per node in any $n$-node bounded-degree network. We show that this latter space complexity is optimal. Specifically, we prove that every deterministic self-stabilizing algorithm solving leader election must use registers of $\Omega(\log \log n)$ bits per node in some $n$-node networks. In addition, we show that our lower bound goes beyond leader election and applies to all problems that cannot be solved by anonymous algorithms.
A distributed graph algorithm is basically an algorithm in which every node of a graph can look at its neighborhood at some distance in the graph and choose its output. As distributed environments are subject to faults, an important issue is to be able to check that the output is correct, or more generally that the network is in a proper configuration with respect to some predicate. One would like this checking to be very local, to avoid using too many resources. Unfortunately, most predicates cannot be checked this way, and that is where certification comes into play. Local certification (also known as proof-labeling schemes, locally checkable proofs or distributed verification) consists in assigning labels to the nodes that certify that the configuration is correct. There are several points of view on this topic: it can be seen as a part of self-stabilizing algorithms, as a labeling problem, or as a non-deterministic distributed decision. This paper is an introduction to the domain of local certification, giving an overview of the history, the techniques and the current research directions.
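To give the flavor of local certification, here is a minimal sketch (our own illustration, not taken from the survey) of the classic distance-labeling certificate for a rooted spanning tree: each node is labeled with its hop distance to the root, and every node verifies only its own label against its neighbors'. The graph encoding and function names are ours.

```python
# Toy local verifier: accept iff parent pointers + distance labels are
# locally consistent; strictly decreasing distances rule out cycles.

def verify_spanning_tree(neighbors, parent, dist, root):
    """neighbors: dict node -> set of nodes; parent, dist: the labels."""
    for v in neighbors:
        if v == root:
            if dist[v] != 0 or parent[v] is not None:
                return False          # the root must certify distance 0
        else:
            p = parent[v]
            # the parent must be an actual neighbor, one hop closer to root
            if p not in neighbors[v] or dist[v] != dist[p] + 1:
                return False
    return True

# Example: a path 0-1-2 rooted at 0 with correct labels is accepted.
nbrs = {0: {1}, 1: {0, 2}, 2: {1}}
print(verify_spanning_tree(nbrs, {0: None, 1: 0, 2: 1},
                           {0: 0, 1: 1, 2: 2}, 0))  # True
```

If any label is corrupted, some node rejects, which is exactly the local detection that certification provides.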
Assume that two robots are located at the centre of a unit disk. Their goal is to evacuate from the disk through an exit at an unknown location on the boundary of the disk. At any time the robots can move anywhere they choose on the disk, independently of each other, with maximum speed $1$. The robots can cooperate by exchanging information whenever they meet. We study algorithms for the two robots to minimize the evacuation time: the time when both robots reach the exit. In [CGGKMP14] the authors gave an algorithm defining trajectories for the two robots yielding evacuation time at most $5.740$ and also proved that any algorithm has evacuation time at least $3+ \frac{\pi}{4} + \sqrt{2} \approx 5.199$. We improve both the upper and lower bound on the evacuation time of a unit disk. Namely, we present a new non-trivial algorithm whose evacuation time is at most $5.628$ and show that any algorithm has evacuation time at least $3+ \frac{\pi}{6} + \sqrt{3} \approx 5.255$. To achieve the upper bound, we design an algorithm that forces a meeting between the two robots, even if neither of them has found the exit. We also show that such a strategy is provably optimal for a related problem of searching for an exit placed at the vertices of a regular hexagon.
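A quick numeric check of the two closed-form lower-bound constants quoted above:

```python
from math import pi, sqrt

print(3 + pi/4 + sqrt(2))   # 5.1996...  the bound of [CGGKMP14]
print(3 + pi/6 + sqrt(3))   # 5.2556...  the improved bound of this paper
```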
The \emph{matching preclusion number} of a graph is the minimum number of edges whose deletion results in a graph that has neither perfect matchings nor almost perfect matchings. As a generalization, Liu and Liu recently introduced the concept of fractional matching preclusion number. The \emph{fractional matching preclusion number} of $G$ is the minimum number of edges whose deletion leaves the resulting graph without a fractional perfect matching. The \emph{fractional strong matching preclusion number} of $G$ is the minimum number of vertices and edges whose deletion leaves the resulting graph without a fractional perfect matching. In this paper, we obtain the fractional matching preclusion number and the fractional strong matching preclusion number for generalized augmented cubes. In addition, all the optimal fractional strong matching preclusion sets of these graphs are categorized.
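To make the central definition concrete, here is a hedged sketch (our illustration, not from the paper, assuming scipy is available): a graph has a fractional perfect matching iff edge weights $x_e \in [0,1]$ exist whose sum around every vertex equals $1$, an LP feasibility question.

```python
# Test for a fractional perfect matching by LP feasibility.
from scipy.optimize import linprog

def has_fractional_perfect_matching(n, edges):
    # One equality constraint per vertex: incident edge weights sum to 1.
    A_eq = [[1.0 if v in e else 0.0 for e in edges] for v in range(n)]
    b_eq = [1.0] * n
    res = linprog(c=[0.0] * len(edges), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, 1.0)] * len(edges))
    return res.status == 0  # 0 = a feasible solution was found

# A triangle has one (weight 1/2 on every edge); delete an edge and it does
# not -- so one edge deletion suffices, illustrating a preclusion set.
print(has_fractional_perfect_matching(3, [(0, 1), (1, 2), (0, 2)]))  # True
print(has_fractional_perfect_matching(3, [(0, 1), (1, 2)]))          # False
```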
We initiate the study of a new problem on searching and fetching in a distributed environment concerning treasure-evacuation from a unit disk. A treasure and an exit are located at unknown positions on the perimeter of a disk and at a known arc distance. A team of two robots start from the center of the disk, and their goal is to fetch the treasure to the exit. At any time the robots can move anywhere they choose on the disk, independently of each other, with the same speed. A robot detects an interesting point (treasure or exit) only if it passes over the exact location of that point. We are interested in designing distributed algorithms that minimize the worst-case treasure-evacuation time, i.e., the time it takes for the treasure to be discovered and brought (fetched) to the exit by any of the robots. The communication protocol between the robots is either wireless, where information is shared at any time, or face-to-face (i.e., non-wireless), where information can be shared only if the robots meet. For both models we obtain upper bounds for fetching the treasure to the exit. Our main technical contribution pertains to the face-to-face model. More specifically, we demonstrate how robots can exchange information without meeting, effectively achieving a highly efficient treasure-evacuation protocol which is minimally affected by the lack of distant communication. Finally, we complement our positive results above by providing a lower bound in the face-to-face model.
We deal with the problem of maintaining a shortest-path tree rooted at some process $r$ in a network that may be disconnected after topological changes. The goal is then to maintain a shortest-path tree rooted at $r$ in its connected component, $V_r$, and make all processes of other components detect that $r$ is not part of their connected component. We propose, in the composite atomicity model, a silent self-stabilizing algorithm for this problem working in semi-anonymous networks, where edges have strictly positive weights. This algorithm does not require any a priori knowledge about global parameters of the network. We prove its correctness assuming the distributed unfair daemon, the most general daemon. Its stabilization time in rounds is at most $3n_{\max}+D$, where $n_{\max}$ is the maximum number of non-root processes in a connected component and $D$ is the hop-diameter of $V_r$. Furthermore, if we additionally assume that edge weights are positive integers, then it stabilizes in a polynomial number of steps: namely, we exhibit a bound in $O(W_{\max} \cdot n_{\max}^3 \cdot n)$, where $W_{\max}$ is the maximum weight of an edge and $n$ is the number of processes.
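As a point of reference, here is a minimal sketch (assumptions ours: shared-memory reads of neighbors' variables and a simple sweep daemon) of the Bellman-Ford-style local rule that silent self-stabilizing shortest-path-tree algorithms build on; the paper's actual algorithm additionally makes processes outside $r$'s component detect the disconnection.

```python
# Starting from ARBITRARY dist/parent values, repeatedly apply the local
# correction rule until no process is enabled (a silent configuration).

def stabilize(neighbors, w, r, dist, parent):
    changed = True
    while changed:
        changed = False
        for v in neighbors:
            if v == r:
                good = (0, None)       # the root certifies distance 0
            else:
                # adopt the neighbor offering the smallest distance via it
                good = min((dist[u] + w[frozenset((u, v))], u)
                           for u in neighbors[v])
            if (dist[v], parent[v]) != good:
                dist[v], parent[v] = good
                changed = True
    return dist, parent

nbrs = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
w = {frozenset((0, 1)): 1, frozenset((1, 2)): 1, frozenset((0, 2)): 3}
# Garbage initial state; converges to dist {0:0, 1:1, 2:2} via parent chain.
print(stabilize(nbrs, w, 0, {0: 7, 1: -4, 2: 9}, {0: 2, 1: 2, 2: 1}))
```

With strictly positive integer weights, spurious too-small distance values strictly increase each time they travel around a cycle, which is why step complexity depends on the maximum edge weight.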
In this paper we introduce and study a new family of combinatorial simplicial complexes, which we call immediate snapshot complexes. Our construction and terminology are strongly motivated by theoretical distributed computing, as these complexes are combinatorial models of the standard protocol complexes associated with the immediate snapshot read/write shared memory communication model. In order to define the immediate snapshot complexes we need a new combinatorial object, which we call a witness structure. These objects index the simplices in the immediate snapshot complexes, while a special operation on them, called ghosting, describes the combinatorics of taking the simplicial boundary. In general, we develop the theory of witness structures and use it to prove several combinatorial as well as topological properties of the immediate snapshot complexes.
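For intuition (our illustration; the paper's witness structures are a richer indexing object): the executions of one round of immediate snapshot correspond to ordered set partitions of the process set, where the processes of a block write together and then snapshot together, seeing all earlier blocks.

```python
# Enumerate ordered set partitions: pick the first (non-empty) block,
# then recurse on the remaining processes.
from itertools import combinations

def ordered_partitions(procs):
    procs = sorted(procs)
    if not procs:
        yield []
        return
    for k in range(1, len(procs) + 1):
        for block in combinations(procs, k):
            rest = [p for p in procs if p not in block]
            for tail in ordered_partitions(rest):
                yield [set(block)] + tail

# 2 processes give 3 one-round executions; 3 processes give 13
# (the Fubini / ordered Bell numbers).
print(list(ordered_partitions([0, 1])))
print(sum(1 for _ in ordered_partitions([0, 1, 2])))  # 13
```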
The notion of Shared Risk Link Groups (SRLG) captures survivability issues when a set of links of a network may fail simultaneously. The theory of survivable network design relies on basic combinatorial objects that are rather easy to compute in the classical graph models: shortest paths, minimum cuts, or pairs of disjoint paths. In the SRLG context, the optimization criterion for these objects is no longer the number of edges they use, but the number of SRLGs involved. Unfortunately, computing these combinatorial objects is NP-hard and hard to approximate with this objective in general. Nevertheless some objects can be computed in polynomial time when the SRLGs satisfy certain structural properties of locality which correspond to practical ones, namely the star property (all links affected by a given SRLG are incident to a unique node) and the span 1 property (the links affected by a given SRLG form a connected component of the network). The star property is defined in a multi-colored model where a link can be affected by several SRLGs while the span property is defined only in a mono-colored model where a link can be affected by at most one SRLG. In this paper, we extend these notions to characterize new cases in which these optimization problems can be solved in polynomial time. We also investigate the computational impact of the transformation from the multi-colored model to the mono-colored one. Experimental results are presented to validate the proposed algorithms and […]
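To fix the objective in mind, here is a small sketch (our own, in the multi-colored model) of the SRLG cost of a path: what is counted is the number of distinct risk groups covering its links, not the number of links.

```python
# Cost of a path = number of distinct SRLGs (colors) over its links.

def srlg_cost(path_edges, srlgs_of):
    """srlgs_of maps an edge to the set of SRLGs that can take it down."""
    risk_groups = set()
    for e in path_edges:
        risk_groups |= srlgs_of[e]
    return len(risk_groups)

# Two links share one risk group: the path has 3 hops but cost only 2.
srlgs = {("a", "b"): {"g1"}, ("b", "c"): {"g1"}, ("c", "d"): {"g2"}}
print(srlg_cost([("a", "b"), ("b", "c"), ("c", "d")], srlgs))  # 2
```

Minimizing this quantity over paths is what becomes NP-hard in general, and polynomial under the structural properties discussed above.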
In this work we present a decentralized deployment algorithm for wireless mobile sensor networks focused on deployment Efficiency, connectivity Maintenance and network Reparation (EMR). We assume that a group of mobile sensors is placed in the area of interest to be covered, without any prior knowledge of the environment. The goal of the algorithm is to maximize the covered area and cope with sudden sensor failures. By relying on the locally available information regarding the environment and neighborhood, and without the need for any kind of synchronization in the network, each sensor iteratively chooses its next-step movement location so as to form a hexagonal lattice grid. Relying on the graph of wireless mobile sensors, we prove properties regarding the quality of coverage, the connectivity of the graph and the termination of the algorithm. We run extensive simulations to evaluate the compactness of the deployment and its robustness against sensor failures. We show through the analysis and the simulations that the EMR algorithm is robust to node failures and can restore the lattice grid. We also show that even after a failure, the EMR algorithm can still provide a compact deployment in a reasonable time.
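A hedged sketch of the geometric core (parameter names ours, not the paper's): in a hexagonal lattice deployment with sensing radius $r_s$, each sensor considers six candidate next-step positions at distance $\sqrt{3}\,r_s$, at angles $60°$ apart, the classical spacing that leaves no coverage hole between three mutually neighboring disks.

```python
from math import cos, sin, pi, sqrt

def hex_candidates(x, y, r_s, phase=0.0):
    """Six candidate positions around (x, y) on the hexagonal lattice."""
    d = sqrt(3) * r_s                      # lattice spacing for hole-free cover
    return [(x + d * cos(phase + k * pi / 3),
             y + d * sin(phase + k * pi / 3)) for k in range(6)]

for p in hex_candidates(0.0, 0.0, 1.0):
    print(f"({p[0]:+.3f}, {p[1]:+.3f})")
```

In the actual algorithm each sensor would pick, among such candidates, an unoccupied one using only local neighborhood information.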
When nodes can repeatedly update their behavior (as in agent-based models from computational social science or repeated-game play settings), the problem of optimal network seeding becomes very complex. For a popular spreading-phenomena model of binary-behavior updating based on thresholds of adoption among neighbors, we consider several planning problems in the design of \textit{Sticky Interventions}: when adoption decisions are reversible, the planner aims to find a Seed Set where temporary intervention leads to long-term behavior change. We prove that completely converting a network at minimum cost is $\Omega(\ln(\mathrm{OPT}))$-hard to approximate and that maximizing conversion subject to a budget is $(1-\frac{1}{e})$-hard to approximate. Optimization heuristics which rely on many objective function evaluations may still be practical, particularly in relatively sparse networks: we prove that the long-term impact of a Seed Set can be evaluated in $O(|E|^2)$ operations. For a more descriptive model variant in which some neighbors may be more influential than others, we show that under integer edge weights from $\{0,1,2,\ldots,k\}$, objective function evaluation requires only $O(k|E|^2)$ operations. These operation bounds are based on improvements we give for bounds on time-steps-to-convergence under discrete-time reversible-threshold updates in networks.
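A minimal sketch of evaluating a Seed Set's long-term impact (assumptions ours: the standard unweighted reversible-threshold update, synchronous rounds): clamp the seeds to the behavior, run to a fixed point, then release the seeds and run to a fixed point again, which is what makes the intervention "sticky" or not.

```python
def run_to_fixpoint(adopters, neighbors, theta, clamped):
    while True:
        new = {v for v in neighbors
               if v in clamped
               or sum(u in adopters for u in neighbors[v]) >= theta[v]}
        if new == adopters:
            return adopters
        adopters = new

def sticky_impact(seeds, neighbors, theta):
    during = run_to_fixpoint(set(seeds), neighbors, theta, set(seeds))
    after = run_to_fixpoint(during, neighbors, theta, set())
    return after  # who keeps the behavior once the intervention ends

# 4-cycle, threshold 1 everywhere: one temporary seed converts the whole
# network, and the behavior persists after the seed is released.
nbrs = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(sorted(sticky_impact({0}, nbrs, {v: 1 for v in nbrs})))  # [0, 1, 2, 3]
```

Both phases are monotone (the adopter set only grows while seeds are clamped, and only shrinks after release), so each run converges; the paper's contribution is sharper bounds on how fast.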
State-of-the-art telecommunication technologies have been widely adopted for sensing and collecting traffic-related information. Vehicular Ad-Hoc Networks (VANETs) have emerged as a novel technology for revolutionizing the driving experience. The most effective and widely recognized way to achieve mutual authentication among entities in VANETs is the digital signature scheme. Certificateless cryptography is a new and attractive paradigm that eliminates the use of certificates in public key cryptography and solves the key escrow problem in identity-based cryptography. In this paper, a new certificateless aggregate signature scheme with constant pairing computations is proposed for VANETs. Assuming the hardness of the computational Diffie-Hellman problem, the scheme is proved to be existentially unforgeable against adaptive chosen-message attacks in the random oracle model.
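To show only the interface an aggregate signature provides (many signers, one constant-size aggregate, one verification), here is a deliberately toy sketch; it is NOT the paper's scheme and NOT secure, replacing the pairing-based group operations with plain modular arithmetic.

```python
P = 2**31 - 1  # toy modulus standing in for the group order

def sign(secret_key, message):
    return (secret_key * hash(message)) % P

def aggregate(signatures):
    return sum(signatures) % P          # aggregation is a single "group op"

def verify_aggregate(agg, public_keys, messages):
    # Toy check with pk = sk; a real scheme verifies the analogous linear
    # relation with a constant number of pairings instead.
    return agg == sum(pk * hash(m) for pk, m in zip(public_keys, messages)) % P

keys = [11, 22, 33]
msgs = ["brake", "lane change", "speed 50"]
agg = aggregate(sign(k, m) for k, m in zip(keys, msgs))
print(verify_aggregate(agg, keys, msgs))  # True
```

The point for VANETs is bandwidth: many vehicles' signatures compress into one aggregate that a roadside unit verifies with constant pairing cost.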
We study the time and message complexity of the problem of building a BFS tree by a spontaneously awakened node in an ad hoc network. Computation proceeds in synchronous rounds, and messages are sent via point-to-point bi-directional links. The network topology is modeled by a graph. Each node knows only its own id and the ids of its neighbors in the network, and no pre-processing is allowed; therefore the solutions to the problem of spanning a BFS tree in this setting must be distributed. We deliver a deterministic distributed solution that trades time for messages, namely, with time complexity $O(D \cdot \min(D, n/f(n)) \cdot \log D \cdot \log n)$ and with $O(n \cdot (\min(D, n/f(n)) + f(n)) \cdot \log D \cdot \log n)$ point-to-point messages sent, for any $n$-node network with diameter $D$ and for any monotonically non-decreasing sub-linear integer function $f$. The function $f$ in the above formulas comes from the threshold value on node degrees used by our algorithms, in the sense that nodes with degree at most $f(n)$ are treated differently than the other nodes. This yields the first deterministic distributed BFS-finding algorithm for ad hoc networks working in time $o(n)$ and with $o(n^2)$ message complexity, for some suitable functions $f(n) = o(n/\log^2 n)$, provided $D = o(n/\log^4 n)$.
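For context, a minimal sketch (ours) of the natural baseline: synchronous layer-by-layer BFS by flooding, where in round $t$ exactly the nodes at distance $t$ join the tree. It runs in $D$ rounds but can send $\Theta(n^2)$ messages on dense graphs, which is the cost the degree threshold $f(n)$ is designed to avoid.

```python
from collections import deque

def bfs_tree(neighbors, source):
    parent = {source: None}
    frontier = deque([source])
    while frontier:
        next_frontier = deque()
        for v in frontier:              # all frontier nodes act in one round
            for u in neighbors[v]:      # one message per incident link
                if u not in parent:     # first message received wins
                    parent[u] = v
                    next_frontier.append(u)
        frontier = next_frontier
    return parent

nbrs = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2}}
print(bfs_tree(nbrs, 0))  # e.g. {0: None, 1: 0, 2: 0, 3: 1}
```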
For a parallel computer system with $m$ identical computers, we study optimal performance precaution for one possible computer crash. We want to calculate the cost of crash precaution in the case of no crash. We thus define a tolerance level $r$, meaning that we only tolerate that the completion time of a parallel program after a crash is at most a factor $r + 1$ larger than if we use optimal allocation on $m - 1$ computers. This is an $r$-dependent restriction of the set of allocations of a program. Then, what is the worst-case ratio of the optimal $r$-dependent completion time in the case of no crash and the unrestricted optimal completion time of the same parallel program? We denote this maximal ratio of completion times by $f(r, m)$, i.e., the ratio for worst-case programs. In the paper we establish upper and lower bounds on the worst-case cost function $f(r, m)$ and characterize worst-case programs.
Monitoring physical phenomena in Sensor Networks requires guaranteeing permanent communication between nodes. Moreover, in an effective implementation of such infrastructure, the delay between any two consecutive communications should be minimized. The problem is challenging because, in a restricted Sensor Network, the communication is carried out through a single and shared radio channel without collision detection. Dealing with collisions is crucial to ensure effective communication between nodes. Additionally, minimizing them yields energy consumption minimization, given that sensing and computational costs in terms of energy are negligible with respect to radio communication. In this work, we present a deterministic recurrent-communication protocol for Sensor Networks. After an initial negotiation phase of the access pattern to the channel, each node running this protocol reaches a steady state, which is asymptotically optimal in terms of energy and time efficiency. As a by-product, a protocol for the synchronization of a Sensor Network is also proposed. Furthermore, the protocols are resilient to an arbitrary node power-up schedule and a general node failure model.
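A hedged sketch (ours) of what the steady state reached after the negotiation phase behaves like: a TDMA-style schedule in which each node owns one slot per frame of the single shared channel, so transmissions recur periodically and collision-free, and a node can power down its radio outside its neighbors' slots.

```python
def transmissions(node_slots, frame_len, num_rounds):
    """node_slots: dict node -> its negotiated slot within a frame."""
    for t in range(num_rounds):
        slot = t % frame_len
        speakers = [v for v, s in node_slots.items() if s == slot]
        assert len(speakers) <= 1, "two owners of one slot would collide"
        yield (t, speakers[0] if speakers else None)

slots = {"a": 0, "b": 1, "c": 2}
for t, speaker in transmissions(slots, frame_len=3, num_rounds=6):
    print(t, speaker)   # each node transmits every 3 rounds, with no collision
```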
This article studies the fundamental trade-off between delay and communication cost in networks. We consider an online optimization problem where nodes are organized in a tree topology. The nodes seek to minimize the time until the root is informed about the changes of their states and to use as few transmissions as possible. We derive an upper bound of $O(\min(h, c))$ on the competitive ratio, where $h$ is the tree's height and $c$ is the transmission cost per edge. Moreover, we prove that this upper bound is tight in the sense that any oblivious algorithm has a ratio of at least $\Omega(\min(h, c))$. For chain networks, we prove a tight competitive ratio of $\Theta(\min(\sqrt{h}, c))$. Furthermore, we introduce a model for value-sensitive aggregation, where the cost depends on the number of transmissions and the error at the root.
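A hedged sketch (ours, for a single node reporting to its parent; not the paper's tree algorithm) of the rent-or-buy intuition behind such bounds: buffer state changes and flush once the accumulated delay penalty matches the per-edge transmission cost $c$, so that neither cost component can exceed the other by more than a constant factor.

```python
def online_aggregation(events, c):
    """events[t] = number of state changes arriving at time t."""
    pending, pending_since, total_cost = 0, None, 0
    for t, arrivals in enumerate(events):
        if arrivals and pending == 0:
            pending_since = t               # oldest buffered change
        pending += arrivals
        if pending and (t - pending_since) >= c:
            # pay one transmission plus the delay the buffer waited
            total_cost += c + (t - pending_since)
            pending, pending_since = 0, None
    return total_cost

print(online_aggregation([1, 0, 0, 1, 0, 0, 0, 0], c=2))  # two flushes
```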
In the permutation routing problem, each processor is the origin of at most one packet and the destination of no more than one packet. The goal is to minimize the number of time steps required to route all packets to their respective destinations, under the constraint that each link can be crossed simultaneously by no more than one packet. We study this problem in a hexagonal network, i.e. a finite subgraph of a triangular grid, which is a widely used network in practical applications. We present an optimal distributed permutation routing algorithm on full-duplex hexagonal networks, using the addressing scheme described by F.G. Nocetti, I. Stojmenovic and J. Zhang (IEEE TPDS 13(9): 962-971, 2002). Furthermore, we prove that this algorithm is oblivious and translation invariant.
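For intuition, here is a hedged sketch using the standard three-axis "cube" coordinates for the triangular grid, which need not be the exact addressing scheme of Nocetti et al., and which ignores the boundary of the finite subgraph: each node gets $(x, y, z)$ with $x + y + z = 0$, each of the six links changes two coordinates by $\pm 1$, and a greedy route simply steps toward the destination.

```python
DIRS = [(1, -1, 0), (1, 0, -1), (0, 1, -1),
        (-1, 1, 0), (-1, 0, 1), (0, -1, 1)]

def grid_distance(a, b):
    # Hop distance in the triangular grid under cube coordinates.
    return (abs(a[0]-b[0]) + abs(a[1]-b[1]) + abs(a[2]-b[2])) // 2

def greedy_route(src, dst):
    path = [src]
    while path[-1] != dst:
        here = path[-1]
        # move to the neighbor closest to the destination
        step = min((tuple(h + d for h, d in zip(here, dirv)) for dirv in DIRS),
                   key=lambda nxt: grid_distance(nxt, dst))
        path.append(step)
    return path

print(greedy_route((0, 0, 0), (2, -1, -1)))  # a 2-hop route
```

An oblivious routing algorithm, as in the paper, decides each packet's route from source and destination addresses alone, independently of the other packets.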
In this paper, we focus on applications having quantitative QoS (Quality of Service) requirements on their end-to-end response time (or jitter). We propose a solution allowing the coexistence of two types of quantitative QoS guarantees, deterministic and probabilistic, while providing high resource utilization. Our solution combines the advantages of the deterministic approach and the probabilistic one. The deterministic approach is based on a worst-case analysis. The probabilistic approach uses a mathematical model to obtain the probability that the response time exceeds a given value. We assume that flows are scheduled according to the non-preemptive FP/FIFO policy: the packet with the highest fixed priority is scheduled first, and if two packets share the same priority, the packet that arrived first is scheduled first. We make no particular assumption concerning the flow priorities or the nature of the QoS guarantee requested by each flow. An admission control derived from these results is then proposed, allowing each flow to receive a quantitative QoS guarantee adapted to its QoS requirements. An example illustrates the merits of the coexistence of deterministic and probabilistic QoS guarantees.
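A minimal sketch (ours) of the non-preemptive FP/FIFO policy described above, on a single link: when the link is free, serve the queued packet with the highest fixed priority, breaking ties by arrival time, and never preempt a packet in service.

```python
import heapq

def fp_fifo_schedule(packets):
    """packets: list of (arrival, priority, size); higher priority first."""
    packets = sorted(packets)           # process arrivals in time order
    queue, completions, now, i = [], [], 0, 0
    while i < len(packets) or queue:
        if not queue and now < packets[i][0]:
            now = packets[i][0]         # idle until the next arrival
        while i < len(packets) and packets[i][0] <= now:
            arr, prio, size = packets[i]
            heapq.heappush(queue, (-prio, arr, size, i))  # FP, then FIFO
            i += 1
        neg_prio, arr, size, pid = heapq.heappop(queue)
        now += size                     # non-preemptive: run to completion
        completions.append((pid, now))
    return completions

# Packets 1 and 2 share a priority and finish in FIFO order; both overtake
# the lower-priority packet 3 that arrived earlier.
print(fp_fifo_schedule([(0, 1, 4), (1, 2, 3), (2, 2, 3), (3, 1, 2)]))
```

A worst-case response-time analysis of exactly this policy underlies the deterministic guarantee, while the probabilistic one bounds the tail of the same response-time distribution.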