A coded shared atomic memory algorithm for message passing architectures


Abstract

This paper considers the communication and storage costs of emulating atomic (linearizable) multi-writer multi-reader shared memory in distributed message-passing systems. The paper contains three main contributions: (1) we present an atomic shared-memory emulation algorithm that we call Coded Atomic Storage (CAS). This algorithm uses erasure coding methods. In a storage system with N servers that is resilient to f server failures, we show that the communication cost of CAS is \(\frac{N}{N-2f}\). The storage cost of CAS is unbounded. (2) We present a modification of the CAS algorithm known as CAS with garbage collection (CASGC). The CASGC algorithm is parameterized by an integer \(\delta \) and has a bounded storage cost. We show that the CASGC algorithm satisfies atomicity. In every execution of CASGC where the number of server failures is no bigger than f, we show that every write operation invoked at a non-failing client terminates. We also show that in an execution of CASGC with parameter \(\delta \) where the number of server failures is no bigger than f,  a read operation terminates provided that the number of write operations that are concurrent with the read is no bigger than \(\delta \). We explicitly characterize the storage cost of CASGC, and show that it has the same communication cost as CAS. (3) We describe an algorithm known as the Communication Cost Optimal Atomic Storage (CCOAS) algorithm that achieves a smaller communication cost than CAS and CASGC. In particular, CCOAS incurs read and write communication costs of \(\frac{N}{N-f}\) measured in terms of number of object values. We also discuss drawbacks of CCOAS as compared with CAS and CASGC.


Notes

  1. The algorithm of Attiya et al. [8] allows only a single node to act as a writer. Also, it does not distinguish between client and server nodes as we do in our paper.

  2. Informally, an operation \(\pi \) is ongoing at a point P in an execution \(\beta \) if the point P is after the invocation of the operation \(\pi \), and there are steps taken on behalf of the operation \(\pi \) after point P in \(\beta \).

  3. As we shall see later, the server gossip is not essential to correctness of CAS. It is however useful as a theoretical tool to prove correctness of CASGC.

  4. The idea of registering a client’s identity was introduced originally in [29] and plays an important role in our CCOAS algorithm as well.

  5. Strictly speaking, we need \(\lceil \log _{2}|\mathcal {V}|\rceil \) bits since the number of bits has to be an integer. We ignore this rounding error.

  6. We assume that \(N > 2f,\) since correctness cannot be guaranteed if \(N \le 2f\) [26].

  7. The ‘\(\mathrm {null}\)’ entry indicates that no coded element is stored; the storage cost associated with storing a \(\mathrm {null}\) coded element is negligible.

  8. It is worth noting that \(Q_{fw}\) and \(Q_{pw}\) need not be the same quorum.

  9. An operation is said to have failed if the client performing the operation fails after its invocation but before its termination.

  10. It is instructive to note that CAS\('\) does not satisfy the same liveness properties as CAS since servers may never respond to finalize messages from a reader in CAS\('\), even in a fair execution.

  11. If the number of writes that are concurrent with a read operation is larger than \(\delta \), then the read simply may not terminate.

  12. Note that in this second scenario, the server does not respond with a coded element in CAS, where the server only sends an acknowledgement. In contrast to the proof here, the liveness proof of CAS involved showing that at least k servers satisfy the condition imposed by the first scenario.

  13. The coding theory literature often studies the rate of a code, which is the reciprocal of the redundancy factor; that is, the rate of an \((N, k)\) code is \(\frac{k}{N}.\) In this paper, we use the redundancy factor in our discussions since it enables a somewhat more intuitive connection with the costs of our algorithms in Theorems 4, 7, 13 and 14.

  14. This is unlike ABD where the servers store only the latest version of the data object received.

References

  1. Common RAID disk data format specification. SNIA, Advanced Storage and Information Technology Standard, version 2 (2009)

  2. Abd-El-Malek, M., Ganger, G.R., Goodson, G.R., Reiter, M.K., Wylie, J.J.: Fault-scalable byzantine fault-tolerant services. ACM SIGOPS Oper. Syst. Rev. 39, 59–74 (2005)

  3. Agrawal, A., Jalote, P.: Coding-based replication schemes for distributed systems. IEEE Trans. Parallel Distrib. Syst. 6(3), 240–251 (1995). doi:10.1109/71.372774

  4. Aguilera, M.K., Janakiraman, R., Xu, L.: Using erasure codes efficiently for storage in a distributed system. In: Proceedings of International Conference on Dependable Systems and Networks (DSN), pp. 336–345. IEEE (2005)

  5. Aguilera, M.K., Keidar, I., Malkhi, D., Shraer, A.: Dynamic atomic storage without consensus. J. ACM 58, 7:1–7:32 (2011). doi:10.1145/1944345.1944348

  6. Anderson, E., Li, X., Merchant, A., Shah, M.A., Smathers, K., Tucek, J., Uysal, M., Wylie, J.J.: Efficient eventual consistency in pahoehoe, an erasure-coded key-blob archive. In: IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), pp. 181–190. IEEE (2010)

  7. Androulaki, E., Cachin, C., Dobre, D., Vukolić, M.: Erasure-coded byzantine storage with separate metadata. In: Aguilera, M.K., et al. (eds.) Principles of Distributed Systems. 18th International Conference, OPODIS 2014, Cortina d’Ampezzo, Italy, December 16–19, 2014. Proceedings, pp. 76–90. Springer, New York (2014)

  8. Attiya, H., Bar-Noy, A., Dolev, D.: Sharing memory robustly in message-passing systems. J. ACM (JACM) 42(1), 124–142 (1995)

  9. Cachin, C., Tessaro, S.: Asynchronous verifiable information dispersal. In: Fraigniaud, P. (ed.) Distributed Computing. 19th International Conference, DISC 2005, Cracow, Poland, September 26–29, 2005. Proceedings, pp. 503–504. Springer, Berlin, Heidelberg (2005)

  10. Cachin, C., Tessaro, S.: Optimal resilience for erasure-coded byzantine distributed storage. In: 2006 International Conference on Dependable Systems and Networks (DSN), pp. 115–124. IEEE (2006)

  11. Cadambe, V.R., Lynch, N., Medard, M., Musial, P.: A coded shared atomic memory algorithm for message passing architectures. In: 13th International Symposium on Network Computing and Applications (NCA), pp. 253–260. IEEE (2014)

  12. Cassuto, Y.: What can coding theory do for storage systems? ACM SIGACT News 44(1), 80–88 (2013)

  13. Datta, A., Oggier, F.: An overview of codes tailor-made for better repairability in networked distributed storage systems. ACM SIGACT News 44(1), 89–105 (2013)

  14. Dobre, D., Karame, G., Li, W., Majuntke, M., Suri, N., Vukolić, M.: PoWerStore: proofs of writing for efficient and robust storage. In: Proceedings of the 2013 ACM SIGSAC Conference on Computer and Communications security, pp. 285–298. ACM (2013)

  15. Dutta, P., Guerraoui, R., Levy, R.R.: Optimistic erasure-coded distributed storage. In: Taubenfeld, G. (ed.) Distributed Computing. 22nd International Symposium, DISC 2008, Arcachon, France, September 22–24, 2008. Proceedings, pp. 182–196. Springer, New York (2008)

  16. Fan, R., Lynch, N.: Efficient replication of large data objects. In: Proceedings of the 17th International Symposium on Distributed Computing (DISC), pp. 75–91 (2003)

  17. Fekete, A., Lynch, N., Shvartsman, A.: Specifying and using a partitionable group communication service. ACM Trans. Comput. Syst. 19(2), 171–216 (2001). doi:10.1145/377769.377776

  18. Gifford, D.K.: Weighted voting for replicated data. In: Proceedings of the Seventh ACM Symposium on Operating Systems Principles, SOSP ’79, pp. 150–162. ACM, New York (1979). doi:10.1145/800215.806583

  19. Gilbert, S., Lynch, N., Shvartsman, A.: RAMBO: a robust, reconfigurable atomic memory service for dynamic networks. Distrib. Comput. 23(4), 225–272 (2010)

  20. Goodson, G.R., Wylie, J.J., Ganger, G.R., Reiter, M.K.: Efficient byzantine-tolerant erasure-coded storage. In: 2004 International Conference on Dependable Systems and Networks, pp. 135–144. IEEE (2004)

  21. Hendricks, J., Ganger, G.R., Reiter, M.K.: Low-overhead Byzantine fault-tolerant storage. In: Proceedings of the Seventh ACM Symposium on Operating Systems Principles (SOSP) vol. 41, no. 6, pp. 73–86 (2007)

  22. Herlihy, M.P., Wing, J.M.: Linearizability: a correctness condition for concurrent objects. ACM Trans. Program. Lang. Syst. 12, 463–492 (1990). doi:10.1145/78969.78972

  23. Lamport, L.: On interprocess communication. Part I: basic formalism. Distrib. Comput. 2(1), 77–85 (1986)

  24. Lin, S., Costello, D.J.: Error Control Coding, 2nd edn. Prentice-Hall, Upper Saddle River (2004)

  25. Lynch, N., Shvartsman, A.: Robust emulation of shared memory using dynamic quorum-acknowledged broadcasts. In: Twenty-Seventh Annual International Symposium on Fault-Tolerant Computing, FTCS-27. Digest of Papers, pp. 272–281. IEEE (1997)

  26. Lynch, N.A.: Distributed Algorithms. Morgan Kaufmann Publishers, San Francisco (1996)

  27. Lynch, N.A., Tuttle, M.R.: An introduction to input/output automata. CWI Q. 2, 219–246 (1989)

  28. Malkhi, D., Reiter, M.: Byzantine quorum systems. Distrib. Comput. 11(4), 203–213 (1998). doi:10.1007/s004460050050

  29. Martin, J.P., Alvisi, L., Dahlin, M.: Minimal byzantine storage. In: Malkhi, D. (ed.) Distributed Computing. 16th International Conference, DISC 2002, Toulouse, France, October 28–30, 2002. Proceedings, pp. 311–325. Springer, New York (2002)

  30. Plank, J.S.: T1: erasure codes for storage applications. In: Proceedings of the 4th USENIX Conference on File and Storage Technologies, pp. 1–74 (2005)

  31. Reed, I.S., Solomon, G.: Polynomial codes over certain finite fields. J. Soc. Ind. Appl. Math. 8(2), 300–304 (1960)

  32. Roth, R.: Introduction to Coding Theory. Cambridge University Press, Cambridge (2006)

  33. Saito, Y., Frølund, S., Veitch, A., Merchant, A., Spence, S.: Fab: building distributed enterprise disk arrays from commodity components. In: ACM SIGARCH Computer Architecture News, vol. 32, pp. 48–58. ACM (2004)

  34. Thomas, R.: A majority consensus approach to concurrency control for multiple copy databases. ACM Trans. Database Syst. 4(2), 180–209 (1979)

  35. Vukolić, M.: Quorum systems: with applications to storage and consensus. Synth. Lect. Distrib. Comput. Theory 3(1), 1–146 (2012). doi:10.2200/S00402ED1V01Y201202DCT009

  36. Wang, Z., Cadambe, V.R.: Multi-version coding in distributed storage. In: 2014 IEEE International Symposium on Information Theory (ISIT) (2014)

Author information

Corresponding author

Correspondence to Viveck R. Cadambe.

Additional information

This work was supported in part by AFOSR Contract Numbers FA9550-13-1-0023, FA9550-14-1-0043, NSF Award Numbers CCF-1217506, CCF-0939370, CCF-1553248, and by BAE Systems National Security Solutions, Inc., award 739532-SLIN 0004.

The results of this work partially appeared in a conference paper [11]. The conference paper [11] does not contain the proofs of atomicity, liveness, or the cost bounds of the published algorithms, nor does it include the CCOAS algorithm of Sect. 6.

Appendices

Appendix A: Discussion on erasure codes

For an \((N, k)\) code, the ratio \(\frac{N}{k}\), also known as the redundancy factor of the code, represents the storage cost overhead in the classical erasure coding model. Much of the literature in coding theory involves the design of \((N, k)\) codes for which the redundancy factor (see Note 13) can be made as small as possible. In the classical erasure coding model, the extent to which the redundancy factor can be reduced depends on f, the maximum number of server failures that are to be tolerated. In particular, an \((N, k)\) MDS code, when employed to store the value of the data object, tolerates \(N-k\) server node failures; this is because the definition of an MDS code implies that the data can be recovered from any k surviving nodes. Thus, for an N-server system that uses an MDS code, we must have \(k \le N-f\), meaning that the redundancy factor is at least \(\frac{N}{N-f}\). It is well known [32] that, given N and f, the parameter k cannot be made larger than \(N-f\), so the redundancy factor is lower bounded by \(\frac{N}{N-f}\) for any code, even one that is not MDS; in fact, an MDS code can equivalently be defined as one that attains this lower bound on the redundancy factor. In coding theory, this lower bound is known as the Singleton bound [32]. Given parameters N and k, the question of whether an \((N, k)\) MDS code exists depends on the code alphabet \(\mathcal {W}\). We next discuss some of the relevant assumptions that we (implicitly) make in this paper to enable the use of an \((N, k)\) MDS code in our algorithms.
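For concreteness, here is a small numerical instance (the numbers are illustrative and not taken from the paper): with \(N = 5\) servers and \(f = 1\) tolerated failure, an MDS code requires \(k \le N - f = 4\), so

$$\begin{aligned} \frac{N}{k} \ge \frac{N}{N-f} = \frac{5}{4}, \end{aligned}$$

that is, the storage overhead of coding can be as low as 1.25, compared with a redundancy factor of \(N = 5\) under full replication (which corresponds to \(k = 1\)).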

Fig. 8 Write protocol of the ABD algorithm

1.1 Assumption on \(|\mathcal {V}|\) due to erasure coding

Recall that, in our model, each value v of a data object belongs to a finite set \(\mathcal {V}\). In our system, for the use of coding, we assume that \(\mathcal {V}=\mathcal {W}^{k}\) for some finite set \(\mathcal {W}\) and that \(\varPhi :\mathcal {W}^{k} \rightarrow \mathcal {W}^{N}\) is an MDS code. Here we refine these assumptions using classical results from erasure coding theory. In particular, the following result is useful.

Theorem 12

Consider a finite set \(\mathcal {W}\) such that \(|\mathcal {W}| \ge {N}.\) Then, for any integer \(k < N\), there exists an \((N, k)\) MDS code \(\varPhi :\mathcal {W}^{k} \rightarrow \mathcal {W}^{N}\).

One proof of the above result in the coding theory literature is constructive: it is well known that when \(|\mathcal {W}| \ge {N}\), \(\varPhi \) can be constructed using the Reed-Solomon code construction [24, 31, 32]. The above theorem implies that, to employ a Reed-Solomon code in our system, we need the following two assumptions:

  • k divides \(\log _2|\mathcal {V}|,\) and

  • \(\log _{2}|\mathcal {V}|/k ~\ge ~ \log _{2}N\).

Thus all our results are applicable under the above assumptions.
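A minimal sketch, in Python, of checking these two assumptions for a given configuration; the function name and the example parameters are ours (illustrative), not part of the paper's algorithms.

```python
import math

def rs_assumptions_hold(num_servers: int, k: int, log2_value_size: int) -> bool:
    """Check the two assumptions above for a value set V with log2|V| = log2_value_size bits.

    Assumption 1: k divides log2|V|, so a value splits into k equal-size pieces.
    Assumption 2: log2|V| / k >= log2(N), so each piece is a symbol of an alphabet W
    with |W| >= N, as required by Theorem 12 for a Reed-Solomon construction.
    """
    if not 0 < k < num_servers:
        return False                                        # Theorem 12 needs k < N
    if log2_value_size % k != 0:
        return False                                        # assumption 1
    return log2_value_size / k >= math.log2(num_servers)    # assumption 2

# Example with illustrative numbers: N = 5 servers, k = 3, a 3-megabit value.
print(rs_assumptions_hold(num_servers=5, k=3, log2_value_size=3 * 2**20))  # True
```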

Fig. 9 Read protocol of the ABD algorithm

Fig. 10 Server protocol of the ABD algorithm

Fig. 11 Write protocol of the LDR algorithm

Fig. 12 Read protocol of the LDR algorithm

In fact, the first assumption above can be replaced by a different assumption with only a negligible effect on the communication and storage costs. Specifically, if \(\log _2|\mathcal {V}|\) were not a multiple of k, one could pad the value with \(\left( \lceil \frac{\log _2|\mathcal {V}|}{k}\rceil k - \log _2|\mathcal {V}|\right) \) “dummy” bits, all set to 0, to ensure that the (padded) object has a size that is a multiple of k; note that this padding is an overhead. The size of the padded object would be \(\lceil \frac{\log _2|\mathcal {V}|}{k}\rceil k\) bits and the size of each coded element would be \(\lceil \frac{\log _2|\mathcal {V}|}{k}\rceil \) bits. If we assume that \(\log _{2}|\mathcal {V}| \gg k\), then \(\lceil \frac{\log _2|\mathcal {V}|}{k}\rceil \approx \frac{\log _{2}|\mathcal {V}|}{k}\), meaning that the padding overhead can be neglected. Consequently, the first assumption can be replaced by the assumption that \(\log _{2}|\mathcal {V}| \gg k\), with only a negligible effect on the communication and storage costs.
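A minimal sketch of this padding arithmetic (our own illustration; the helper name is hypothetical):

```python
import math

def padded_sizes(log2_value_size: int, k: int):
    """Return (padded object size, coded element size, padding overhead), in bits,
    when a value of log2|V| bits is padded with zero bits up to a multiple of k
    and then split into k equal pieces for an (N, k) code."""
    element_bits = math.ceil(log2_value_size / k)   # ceil(log2|V| / k)
    padded_bits = element_bits * k                  # ceil(log2|V| / k) * k
    return padded_bits, element_bits, padded_bits - log2_value_size

# Example (illustrative): a 10-bit value split into k = 3 pieces.
print(padded_sizes(10, 3))   # (12, 4, 2): two dummy bits of padding overhead
```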

Appendix B: Descriptions of the ABD and LDR algorithms

As baselines for our work we use the MWMR versions of the ABD and LDR algorithms [8, 16]. Here, we describe the ABD and LDR algorithms, and evaluate their communication and storage costs. We present the ABD algorithm in Figs. 8, 9 and 10. We present the LDR algorithm in Figs. 11, 12 and 13. The costs of these algorithms are stated in Theorems 13 and 14.

Theorem 13

The write and read communication costs of ABD are respectively equal to \(N \log _{2}|\mathcal {V}|\) and \(2N \log _{2}|\mathcal {V}|\) bits. The storage cost is equal to \(N \log _{2}|\mathcal {V}|\) bits.

The LDR algorithm divides its servers into directory servers that store metadata and replica servers that store object values. The write protocol of LDR involves sending object values to \(2f+1\) replica servers. The read protocol is less taxing since, in the worst case, it involves retrieving the data object values from \(f+1\) replica servers. We state the communication costs of LDR next (for a formal proof, see the proof of Theorem 14 below).

Theorem 14

In LDR, the write communication cost is \((2f+1)\log _2|\mathcal {V}|\) bits, and the read communication cost is \((f+1)\log _2|\mathcal {V}|\) bits.

In the LDR algorithm, each replica server stores every version of the data object it receives (see Note 14). Therefore, the (worst-case) storage cost of the LDR algorithm is unbounded.
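To make Theorems 13 and 14 concrete, the following sketch (our own illustration; all names are hypothetical) tabulates the per-operation communication costs in multiples of the object size \(\log _{2}|\mathcal {V}|\), together with the CAS write/read cost of \(\frac{N}{N-2f}\) quoted in the abstract; metadata is ignored, as in the paper.

```python
def communication_costs(num_servers: int, f: int) -> dict:
    """Per-operation communication cost in multiples of the object size log2|V|,
    ignoring metadata such as tags and identifiers (treated as negligible above)."""
    n = num_servers
    return {
        "ABD write": n,                      # put phase: value sent to all N servers
        "ABD read": 2 * n,                   # N values received in get + N sent in put
        "LDR write": 2 * f + 1,              # value sent to 2f+1 replica servers
        "LDR read": f + 1,                   # up to f+1 replica servers respond with values
        "CAS write/read": n / (n - 2 * f),   # N coded elements of size log2|V|/k, k = N-2f
    }

# Example with illustrative numbers: N = 5 servers tolerating f = 1 failure.
for op, cost in communication_costs(5, 1).items():
    print(f"{op}: {cost:g} x log2|V| bits")
```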

Fig. 13 Replica and directory server protocols of the LDR algorithm

Proof of Theorem 13

We first present arguments that upper bound the communication and storage costs for every execution of the ABD algorithm. The ABD algorithm presented here is fitted to our model; specifically, in [8, 25] there is no clear-cut separation between clients and servers, but introducing this separation does not change the costs of the algorithm. We then present worst-case executions that incur the costs stated in the theorem.

Upper bounds First consider the write protocol. It has two phases, get and put. The get phase of a write involves the transfer of a tag, but not of actual data, and therefore has negligible communication cost. In the put phase of a write, the client sends a value from the set \(\mathcal {T} \times \mathcal {V}\) to every server node; the total communication cost of this phase is at most \(N \log _2|\mathcal {V}|\) bits. Therefore, the total write communication cost is at most \(N \log _{2}|\mathcal {V}|\) bits. In the get phase of the read protocol, the message from the client to the servers contains only metadata, and therefore has negligible communication cost. However, in this phase, each of the N servers could respond to the client with a message from \(\mathcal {T} \times \mathcal {V}\); therefore, the total communication cost of the messages involved in the get phase is upper bounded by \(N \log _{2}|\mathcal {V}|\) bits. In the put phase of the read protocol, the reader sends an element of \(\mathcal {T} \times \mathcal {V}\) to all N servers. Therefore, this phase incurs a communication cost of at most \(N \log _{2}|\mathcal {V}|\) bits. The total communication cost of a read is therefore upper bounded by \(2N \log _{2}|\mathcal {V}|\) bits.
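Since Figs. 8, 9 and 10 are not reproduced in this version of the text, the following is a minimal single-process sketch (ours, not the paper's pseudocode) of the ABD write and read phases whose message costs are counted above; servers are simulated as in-memory records, messages are direct function calls, and quorum waiting and failures are omitted.

```python
# Tags are (integer, client_id) pairs ordered lexicographically.
N = 5
servers = [{"tag": (0, ""), "value": None} for _ in range(N)]  # one replica per server

def abd_write(value, client_id):
    # get phase: collect tags from the servers (metadata only, negligible cost)
    z = max(s["tag"][0] for s in servers)
    new_tag = (z + 1, client_id)
    # put phase: send (tag, value) to all N servers -> N * log2|V| bits
    for s in servers:
        if new_tag > s["tag"]:
            s["tag"], s["value"] = new_tag, value

def abd_read():
    # get phase: each of the N servers responds with its (tag, value) -> up to N * log2|V| bits
    tag, value = max(((s["tag"], s["value"]) for s in servers), key=lambda p: p[0])
    # put (write-back) phase: send the highest (tag, value) to all N servers -> N * log2|V| bits
    for s in servers:
        if tag > s["tag"]:
            s["tag"], s["value"] = tag, value
    return value

abd_write("v1", client_id="w1")
print(abd_read())  # prints v1
```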

The storage cost of ABD is no bigger than \(N\log _{2}|\mathcal {V}|\) bits because each server stores at most one value, namely the latest value it receives.

Worst-case executions Informally speaking, due to asynchrony and the possibility of failures, clients always send requests to all servers and in the worst case, all servers respond. Therefore the upper bounds described above are tight.

For the write protocol, the client sends the value to all N nodes in its put phase, so the write communication cost in an execution where at least one write terminates is \(N \log _{2}|\mathcal {V}|\) bits. For the read protocol, consider the following execution, in which there is one read operation and one write operation concurrent with it, and none of the N servers fail. Suppose that the writer completes its get phase and commits to a tag t; note that t is the highest tag in the system at this point. The writer then begins its put phase, in which it sends N messages carrying the value and tag t. At least one of these messages, say the message to server 1, arrives; the remaining messages are delayed, i.e., they are assumed to arrive after the portion of the execution described here. At this point, the read operation begins and receives (tag, value) pairs from all N server nodes in its get phase. Of these N messages, at least one contains the tag t and the corresponding value, and t is the highest tag the read receives. Therefore, the put phase of the read has to send N messages carrying the tag t and the corresponding value, one to each of the N servers, since all but server 1 responded to the read in the get phase with an older tag.

The read protocol has two phases. The cost of a read operation in an execution is the sum of the communication costs of the messages sent in its get phase and those sent in its put phase. The get phase involves the communication of N messages from \(\mathcal {T} \times \mathcal {V}\), one message from each server to the client, and therefore incurs a communication cost of \(N \log _{2}|\mathcal {V}|\) bits provided that every server is active. The put phase involves the communication of a message in \(\mathcal {T} \times \mathcal {V}\) from the client to every server, thereby incurring a communication cost of \(N \log _{2}|\mathcal {V}|\) bits as well. Therefore, in any execution where all N servers are active, the communication cost of a read operation is \(2N\log _{2}|\mathcal {V}|\) bits, so the upper bound is tight.

The storage cost is equal to \(N \log _2|\mathcal {V}|\) bits since each of the N servers stores exactly one value from \(\mathcal {V}\). \(\square \)

Proof of Theorem 14

Upper bounds: In LDR, servers are divided into two groups: directory servers, used to manage object metadata, and replica servers, used for object replication. The read and write protocols have three sequentially executed phases. The get-metadata and put-metadata phases incur negligible communication cost since only metadata is sent over the message-passing system. In the put phase, the writer sends its messages, each of which is an element from \(\mathcal {T} \times \mathcal {V},\) to \(2f+1\) replica servers and awaits \(f+1\) responses; since the responses have negligible communication cost, this phase incurs a total communication cost of at most \((2f+1)\log _2|\mathcal {V}|\) bits. The read protocol is less taxing: during the get phase the reader queries \(f+1\) replica servers and, in the worst case, all of them respond with a message containing an element from \(\mathcal {T} \times \mathcal {V}\), thereby incurring a total communication cost of at most \((f+1)\log _2|\mathcal {V}|\) bits.

Worst-case executions It is clear that in every execution where at least one write terminates, the writer sends out \(2f+1\) messages containing the value to replica servers, thus incurring a write communication cost of \((2f+1)\log _{2}|\mathcal {V}|\) bits. Similarly, for a read, in certain executions all \(f+1\) replica servers queried by the read respond to the get request from the client. So the upper bounds derived above are tight. \(\square \)

Appendix C: Proof of Lemma 1

Proof of property (i): by definition, each \(Q \in \mathcal {Q}\) has cardinality at least \(\lceil \frac{N+k}{2}\rceil \). Therefore, for \(Q_1, Q_2 \in \mathcal {Q},\) we have

$$\begin{aligned} |Q_1 \cap Q_2| = |Q_1| + |Q_2| - |Q_1 \cup Q_2| \mathop {\ge }\limits ^{(a)} 2\left\lceil \frac{N+k}{2}\right\rceil - N \ge k, \end{aligned}$$

where we have used the fact that \(|Q_1 \cup Q_2| \le N\) in (a).

Proof of property (ii): let \(\mathcal {B}\) be the set of all the server nodes that fail in an execution, where \(|\mathcal {B}| \le f\). We need to show that there exists at least one quorum set \(Q \in \mathcal {Q}\) such that \(Q \subseteq \mathcal {N}-\mathcal {B}\), that is, at least one quorum survives. To show this, because of the definition of our quorum system, it suffices to show that \(|\mathcal {N}-\mathcal {B}| \ge \lceil \frac{N+k}{2}\rceil \). We show this as follows:

$$\begin{aligned} |\mathcal {N}-\mathcal {B}| \ge N-f \mathop {\ge }\limits ^{(b)} N-\left\lfloor \frac{N-k}{2}\right\rfloor = \left\lceil \frac{N+k}{2} \right\rceil , \end{aligned}$$

where (b) follows because \(k \le N-2f\) implies that \(f \le \lfloor \frac{N-k}{2}\rfloor \).
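A small numerical check of both properties (our own illustration; it enumerates all minimal quorums, which is feasible only for tiny N):

```python
from itertools import combinations
from math import ceil

# Sanity check of the two quorum properties for small, illustrative parameters.
N, f = 5, 1
k = N - 2 * f                       # k <= N - 2f
quorum_size = ceil((N + k) / 2)     # minimum quorum cardinality
servers = set(range(N))
quorums = [set(q) for q in combinations(servers, quorum_size)]  # minimal quorums

# Property (i): any two quorums intersect in at least k servers.
assert all(len(q1 & q2) >= k for q1 in quorums for q2 in quorums)

# Property (ii): for every failure set B with |B| = f (the worst case), some quorum survives.
assert all(any(q <= servers - set(b) for q in quorums)
           for b in combinations(servers, f))

print(f"Lemma 1 properties hold for N={N}, f={f}, k={k}")
```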


Cite this article

Cadambe, V.R., Lynch, N., Médard, M. et al. A coded shared atomic memory algorithm for message passing architectures. Distrib. Comput. 30, 49–73 (2017). https://doi.org/10.1007/s00446-016-0275-x
