
Research Article | Open Access

Session Coalgebras: A Coalgebraic View on Regular and Context-free Session Types

Published: 15 July 2022

Abstract

Compositional methods are central to the verification of software systems. For concurrent and communicating systems, compositional techniques based on behavioural type systems have received much attention. By abstracting communication protocols as types, these type systems can statically check that channels in a program interact following a certain protocol—whether messages are exchanged in the intended order.
In this article, we put on our coalgebraic spectacles to investigate session types, a widely studied class of behavioural type systems. We provide a syntax-free description of session-based concurrency as states of coalgebras. As a result, we rediscover type equivalence, duality, and subtyping relations in terms of canonical coinductive presentations. In turn, this coinductive presentation enables us to derive a decidable type system with subtyping for the π-calculus, in which the states of a coalgebra will serve as channel protocols. Going full circle, we exhibit a coalgebra structure on an existing session type system, and show that the relations and type system resulting from our coalgebraic perspective coincide with existing ones. We further apply to session coalgebras the coalgebraic approach to regular languages via the so-called rational fixed point, inspired by the trinity of automata, regular languages, and regular expressions, whose roles are here played by session coalgebras, the rational fixed point, and session types, respectively. We establish a suitable restriction on session coalgebras that determines a similar trinity, and reveals the mismatch between usual session types and our syntax-free coalgebraic approach. Furthermore, we extend our coalgebraic approach to account for context-free session types, by equipping session coalgebras with a stack.

1 Introduction

Communication protocols enable interactions between humans and computers alike, yet different scientific communities rely on different descriptions of protocols: one community may use textual descriptions, another uses diagrams, and yet another may use types. There is then a mismatch, which is fruitful and hindering at the same time. Fruitful, because different views on protocols lead to different insights and technologies. But hindering, because exactly those insights and technologies cannot be easily exchanged. With this article, we will provide a view of protocols that opens up new links between communities and that, at the same time, contributes new insights into the nature of communication protocols.
What would such a view of communication protocols be? Software systems typically consist of concurrent, interacting processes that pass messages over channels. Protocols are then a description of the possible exchanges on channels, without ever referring to the exact structure of the processes that use the channels. Since we may, for example, expect to get an answer only after sending a question, it is clear that such exchanges have to happen in an appropriate order. Therefore, protocols have to be a state-based abstraction of communication behaviour on channels. Because coalgebras provide an abstraction of general state-based behaviour, our proposed view of communication protocols becomes: model the states of a protocol as states of a coalgebra and let the coalgebra govern the exchanges that may happen at each state of the protocol.
The above view of protocols allows us to model protocols as coalgebras. However, protocols are usually not studied for the sake of their description but to achieve certain goals: ensuring correct composition of processes, comparing communication behaviour, or refining and abstracting protocols. Session types [29, 30] are an approach to communication correctness for processes that pass messages along channels. The idea is simple: describe a protocol as a syntactic object (a type), and use a type system to statically verify that processes adhere to the protocol. This syntactic approach allows the automatic and efficient verification of many correctness properties. However, the syntactic approach depends on choosing one particular representation of protocols and one particular representation of processes. We show in this article that our coalgebraic view of protocols can guarantee correct process composition, and allows us to reason about key notions in the world of session types (type equivalence, duality, and subtyping) while being completely independent of protocol and process representations.
Our coalgebraic view is best understood by following the distillation process of ideas on a concrete session type system by Vasconcelos [61]. Consider the session type \(S = \mathop {? \texttt {int}.\,} \mathop {! \texttt {bool}.\,} \mathrm{end}\), which specifies the protocol on one endpoint of a channel that receives an integer, then outputs a Boolean, and finally terminates the interaction. Note that the protocol \(S\) specifies three different states: an input state, an output state, and a final state. Moreover, we note that \(S\) specifies only how the channel is seen from one endpoint; the other endpoint needs to use the channel with the dual protocol \(\mathop {! \texttt {int}.\,} \mathop {? \texttt {bool}.\,} \mathrm{end}\). Thus, session type systems ensure that the states of \(S\) are enabled only in the specified order and that the two channel endpoints implement dual protocols.
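The three states of \(S\) and the flip to its dual can be made concrete in a small executable sketch. The encoding below (states as dictionary entries, actions as tagged tuples) is our own illustration, not the formal definition of session coalgebras developed later:

```python
# The protocol S = ?int.!bool.end as three explicit states; the encoding
# (action tag, payload type, successor state) is a hypothetical illustration.
S = {
    "s0": ("?", "int", "s1"),   # receive an integer, continue at s1
    "s1": ("!", "bool", "s2"),  # send a Boolean, continue at s2
    "s2": ("end",),             # terminated protocol
}

def dual(protocol):
    """For simple (non-delegating) types, the dual protocol just swaps
    every send with a receive and keeps everything else unchanged."""
    flip = {"?": "!", "!": "?"}
    return {q: (flip.get(t[0], t[0]),) + t[1:] for q, t in protocol.items()}

# dual(S) is the protocol !int.?bool.end of the other endpoint.
assert dual(S)["s0"] == ("!", "int", "s1")
assert dual(S)["s2"] == ("end",)
```

As the article explains below, this naive swap is only correct for basic payload types; session delegation will require a more careful treatment of duality.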
A state-based reading of session types is intuitive and is already present in programming concepts such as typestates [22, 55, 56], theories of behavioural contracts [9, 11, 12, 19], and connections between session types and communicating automata [16, 40]. The novelty and insight of the coalgebraic view is that
(1)
it describes the state-based behaviour of protocols underlying session types, supporting unrestricted types and delegation, without adhering to any specific syntax or target programming model;
(2)
it offers a general framework in which key notions such as type equivalence, duality, and subtyping arise as instances of well-known coinductive constructions; and
(3)
it allows us to derive type systems for specific process languages, like the \(\pi\)-calculus.
Session Coalgebras at Work. How does this coalgebraic view of protocols work for session types? Consider a “mathematical server” that offers three operations to clients: integer multiplication, Boolean negation, and quitting. The following session type \(T\) specifies a protocol to communicate with this server:
\begin{equation*} T = \mathop {\mu X.\,} \&{\left\lbrace \begin{array}{ll} {\it mul}: & \mathop {? \texttt {int}.\,} \mathop {? \texttt {int}.\,} \mathop {! \texttt {int}.\,} X \\ {\it neg}: & \mathop {? \texttt {bool}.\,} \mathop {! \texttt {bool}.\,} X \\ {\it quit}: & \mathrm{end}\end{array}\right.} \end{equation*}
\(T\) is a recursive protocol, as indicated by “\(\mathop {\mu X.\,}\),” which can be repeated. A client can choose, as indicated by \(\&\), between the three operations (\({\it mul}\), \({\it neg}\), and \({\it quit}\)) and the protocol then continues with the corresponding actions. For instance, after choosing \({\it mul}\), the server requests two integers and, once received, promises to send an integer over the channel. We can see states of the protocol \(T\) emerging, and it remains to provide a coalgebraic view on the actions of the protocol to obtain what we will call session coalgebras.
Figure 1 depicts a session coalgebra that describes protocol \(T\). It consists of states \(q_0, \ldots , q_6\), each representing a different state of \(T\), and transitions between these states to model the evolution of \(T\). Meaning is given to the different states and transitions through the labels on the states and transitions. The state labels, written in red at the top-left of the state name, indicate the branching type of that state. Depending on the branching type, the labels of the transitions bear different meanings. For instance, \(q_{0}\) is labelled with “\(\&\),” which indicates that this state initiates an external choice. The labels on the three outgoing transitions for \(q_{0}\) (\({\it mul}\), \({\it neg}\), \({\it quit}\)) then correspond to the possible kinds of messages for selecting one of the branches. Continuing, states \(q_{1}, \ldots , q_{5}\) are labelled with a request for data (label \(?\)) or the sending of data (label \(!\)), and the outgoing transition labels indicate the type of the exchanged values (e.g., \(\texttt {bool}\)). Finally, state \(q_{6}\) decrees the end of the protocol (label \(\mathrm{end}\)).
Fig. 1.
Fig. 1. Protocol of the mathematical server as a session coalgebra.
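The state and transition structure of Figure 1 can be written down directly. The following Python sketch uses an ad hoc encoding of our own choosing (state label plus a dictionary of labelled transitions), not the formal definition given in Section 3:

```python
# Hypothetical encoding of the session coalgebra in Figure 1: each state
# maps to its label and a dictionary of labelled transitions.
MATH_SERVER = {
    "q0": ("&", {"mul": "q1", "neg": "q4", "quit": "q6"}),
    "q1": ("?", {"int": "q2"}),
    "q2": ("?", {"int": "q3"}),
    "q3": ("!", {"int": "q0"}),   # transition back to q0: recursion, no binder
    "q4": ("?", {"bool": "q5"}),
    "q5": ("!", {"bool": "q0"}),
    "q6": ("end", {}),
}

def run(coalg, state, labels):
    """Follow a sequence of transition labels through the coalgebra."""
    for lab in labels:
        _, trans = coalg[state]
        state = trans[lab]
    return state

# Choosing mul and exchanging three integers returns to the initial state.
assert run(MATH_SERVER, "q0", ["mul", "int", "int", "int"]) == "q0"
```

Note how the cyclic character of \(T\) appears simply as transitions back to `q0`, with no explicit recursion operator.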
A session coalgebra models the view on one channel endpoint, but to correctly execute a protocol we also need to consider the dual session coalgebra that models the other endpoint’s view. In our example, the dual of Figure 1 is given by the diagram in Figure 2, which concerns states \(s_0, \ldots , s_6\). More precisely, the states \(q_{i}\) and \(s_{i}\) are pairwise dual in the following sense. The external choice of \(q_{0}\) becomes an internal choice for \(s_{0}\), expressed through the label \(\oplus\), with exactly the same labels on the transitions leaving \(s_{0}\). This means that whenever the server’s protocol is in state \(q_{0}\) and the client’s protocol is in state \(s_{0}\), then the client can choose to send one of the three signals to the server, thereby forcing the server protocol to advance to the corresponding state. All other states turn from sending states into receiving states and vice versa. We will see that this duality relation between states of session coalgebras has a natural coinductive description that can be obtained with the same techniques as bisimilarity. The duality relation for \(T\) will then give us the full picture of the intended protocol.
Fig. 2.
Fig. 2. Session coalgebra for the client’s view of the protocol of the mathematical server.
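Pointwise dualisation of such a diagram is mechanical: flip the polarity of every state label and keep the transition labels. A sketch on a fragment of the server, in the same assumed dictionary encoding as before (with `+` standing in for \(\oplus\)):

```python
# The state-wise dual: flip each polarity label, keep all transitions.
# '+' stands for the internal-choice label (the paper's ⊕).
FLIP = {"&": "+", "+": "&", "?": "!", "!": "?"}

def dual_coalgebra(coalg):
    return {q: (FLIP.get(lbl, lbl), trans) for q, (lbl, trans) in coalg.items()}

server = {"q0": ("&", {"mul": "q1", "quit": "q6"}),
          "q1": ("?", {"int": "q0"}),
          "q6": ("end", {})}

client = dual_coalgebra(server)
assert client["q0"][0] == "+"             # external choice becomes internal
assert client["q1"][0] == "!"             # receive becomes send
assert dual_coalgebra(client) == server   # dualising twice is the identity
```

As discussed later in the introduction, this naive flip is sound only while payloads are basic data types; with session delegation the dual must also fix the transmitted type.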
Suppose a client wants to use multiplication only once but can also handle real numbers as inputs. Such a client has to follow the protocol given by the session coalgebra in Figure 3, with states \(r_0, \ldots , r_5\).
Fig. 3.
Fig. 3. Session coalgebra that uses only part of a mathematical server.
In theories of session types, the protocol of Figure 2 is a subtype of the protocol in Figure 3 (cf. References [23, 24]). Concretely, this new client can also follow the subtype protocol, and can thus communicate with a server following the protocol of Figure 1. For session coalgebras, we recover the same notion of subtyping by using specific simulation relations that allow us to prove that the behaviour of \(r_{0}\) can be simulated by \(s_{0}\). Together, simulations and duality provide the foundation of typical session type systems.
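The simulation underlying subtyping can also be sketched operationally. Everything below is illustrative only: `S` encodes the client of Figure 2, `R` guesses a plausible shape for Figure 3 (which is not reproduced here), and the variance rules are simplified to exactly the cases occurring in this example (internal-choice states may offer more labels in the subtype, received payloads are compared with the basic subtyping order, and sent payloads must coincide).

```python
# Illustrative subtype check; '+' stands for the internal-choice label ⊕.
BASIC_LE = {("int", "int"), ("real", "real"), ("bool", "bool"), ("int", "real")}

S = {"s0": ("+", {"mul": "s1", "neg": "s4", "quit": "s6"}),
     "s1": ("!", {"int": "s2"}), "s2": ("!", {"int": "s3"}),
     "s3": ("?", {"int": "s0"}),
     "s4": ("!", {"bool": "s5"}), "s5": ("?", {"bool": "s0"}),
     "s6": ("end", {})}

R = {"r0": ("+", {"mul": "r1"}),                     # guessed shape of Fig. 3
     "r1": ("!", {"int": "r2"}), "r2": ("!", {"int": "r3"}),
     "r3": ("?", {"real": "r4"}),
     "r4": ("+", {"quit": "r5"}), "r5": ("end", {})}

def subtype(sub, sup, x, y, seen=None):
    """Naive coinductive subtype check, covering only the cases above."""
    seen = set() if seen is None else seen
    if (x, y) in seen:
        return True                      # coinductive hypothesis on cycles
    seen.add((x, y))
    (lx, tx), (ly, ty) = sub[x], sup[y]
    if lx != ly:
        return False
    if lx == "end":
        return True
    if lx == "+":                        # subtype may offer more selections
        return all(k in tx and subtype(sub, sup, tx[k], ty[k], seen)
                   for k in ty)
    ((dx, nx),), ((dy, ny),) = tx.items(), ty.items()
    if lx == "?":                        # received payloads: covariant
        return (dx, dy) in BASIC_LE and subtype(sub, sup, nx, ny, seen)
    return dx == dy and subtype(sub, sup, nx, ny, seen)  # '!': payloads equal

assert subtype(S, R, "s0", "r0")         # s0 can stand in for r0
```

The `seen` set implements the greatest-fixed-point reading: a pair revisited on a cycle is assumed to hold, exactly as in the coinductive definitions developed in Section 4.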
Thus far, we have used session types and coalgebras for protocols that exchange simple data values. In contrast, rich session type systems can regulate session delegation, the dynamic allocation and exchange of channels by processes. Imagine a process that creates a channel, which should adhere to some protocol \(T\). From an abstract perspective, the process holds both endpoints of the new channel, and has to send one of them to the process it wishes to communicate with. To ensure statically that the receiving process respects the protocol of this new channel, we need to announce this communication as a transmission of the session type \(T\) (via an existing channel) and use \(T\) to verify the receiving process. Session delegation adds expressiveness and flexibility, but it may cause problems in the characterisation of a correct notion of duality [25]. Remarkably, our coalgebraic view of session types makes this characterisation completely natural.
As an example, consider the type \(T = \mathop {\mu X.\,} \mathop {? X.\,} X\), which models a channel endpoint that infinitely often receives channel ends of its own type \(T\). To obtain the dual of \(T\), we may naïvely try to replace the receive with a send, which results in the type \(\mathop {\mu X.\,} \mathop {! X.\,} X\). The problem is that the two channel endpoints would not agree on the type they are sending or receiving, as any dual type of \(T\) needs to send messages of type \(T\). Thus, the correct dual of \(T\) would be the type \(U = \mathop {\mu X.\,} \mathop {! T.\,} X\). Both \(T\) and \(U\) specify the transmission of non-basic types, either the recursion variable \(X\) or \(T\), in contrast to the mathematical server that merely stipulated the transmission of basic data values (integers or Booleans).
In our session coalgebras for the mathematical server it sufficed to have simple data types and branching labels on transitions. However, to represent \(T\) and \(U\), we will need another mechanism to express session delegation. We observe that a transmission in session types consists of the transmitted data and the session type that the protocol must continue with afterward. Thus, a transition out of a transmitting state in a session coalgebra encompasses both a data transition and a continuation transition. In diagrams of session coalgebras, we indicate the data transition by a coloured arrow and an arrow connecting the data to the continuation transition. Using the combined transitions, Figure 4 redraws the multiplication part of the mathematical server in Figure 1.
Fig. 4.
Fig. 4. The multiplication part of the mathematical server as a session coalgebra with explicit data transitions.
This way, the \(\texttt {int}\)-labelled transition out of \(q_{1}\) has been replaced by both a data transition into a new state \(q\) and a continuation transition into \(q_{2}\). Moreover, \(q\) has been declared as a data state that expects an integer to be exchanged (label \(\texttt {int}\)).
Having added these transitions to our toolbox, we can present the two types \(T\) and \(U\) as session coalgebras. The diagram in Figure 5 shows such a session coalgebra, in which we name the states suggestively \(T\) and \(U\).
Fig. 5.
Fig. 5. Session coalgebra for a recursive type \(T\) and its dual \(U\).
Using this presentation as session coalgebras, it is now straightforward to coinductively prove that the states \(T\) and \(U\) are dual:
(1)
the states have opposite actions;
(2)
their data transitions point to equal types; and
(3)
their continuations are dual by coinduction.
Clearly, the last step needs some justification but it will turn out that we can appeal to a standard definition of coinduction in terms of greatest fixed points. This demonstrates that our coalgebraic view on session types makes the definition of duality truly natural and straightforward.
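The three-step duality check can be turned directly into a coinductive procedure. In the sketch below (an assumed encoding of ours: com states as (polarity, data state, continuation state) triples), visited pairs are assumed dual, which is exactly the greatest-fixed-point reading; equality of state names stands in for type equivalence in step (2):

```python
# T = mu X. ?X. X and U = mu X. !T. X as two states of one session coalgebra;
# com states are encoded (by assumption) as (polarity, data, continuation).
C = {"T": ("?", "T", "T"),   # receives channels of its own type, forever
     "U": ("!", "T", "U")}   # sends channels of type T, forever

FLIP = {"?": "!", "!": "?"}

def dual_states(c, x, y, seen=None):
    """(1) opposite actions, (2) equal data, (3) dual continuations --
    with visited pairs assumed dual, i.e., a greatest-fixed-point check."""
    seen = set() if seen is None else seen
    if (x, y) in seen:
        return True
    seen.add((x, y))
    (px, dx, kx), (py, dy, ky) = c[x], c[y]
    return FLIP[px] == py and dx == dy and dual_states(c, kx, ky, seen)

assert dual_states(C, "T", "U")
# The naive syntactic flip mu X. !X. X is not dual to T: wrong payload type.
N = dict(C, N=("!", "N", "N"))
assert not dual_states(N, "T", "N")
```

The second assertion reproduces the failure of the naive dual discussed above: the flipped type sends messages of its own type rather than of type \(T\).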
Up to here, we have discussed session types and coalgebras that are linear, i.e., they enforce that protocols complete exactly once. In many situations, one also needs unrestricted types, which enable sharing of channels between processes that access these channels concurrently. This is the case for a process that offers a service to other processes, for instance a web server. Session delegation allows us to dynamically create channels and check their protocols, but the shared channel for initiating a session [24] has to offer its protocol to an arbitrary number of clients. Unrestricted types enable us to specify this kind of service offer.
As an example, consider a process that provides a channel for communicating integers to anyone asking, like a town hall official handing out citizen numbers. The session type \(U^{\prime } = \mathop {\mu X.\,} \mathsf {un}\mathop {! \texttt {int}.\,} X\) represents the corresponding protocol, where “\(\mathsf {un}\)” qualifies the type \(\mathop {! \texttt {int}.\,} X\) as unrestricted. This allows the process holding the end of a channel with type \(U^{\prime }\) to transmit an integer to any process that is connected to the shared channel, without any restriction on their number. Using this, it is surprisingly simple to express \(U^{\prime }\) in our coalgebraic view: We introduce a new state label “\(\mathrm{un}\)” (unrestricted). The idea is that states reachable from an \(\mathrm{un}\)-labelled state can be used arbitrarily as protocols across different processes connecting to a channel that follows the protocol given by those states. The following diagram shows a session coalgebra with a state that corresponds to \(U^{\prime }\):
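The diagram itself is not reproduced here, but its shape is easy to record. A hypothetical encoding of a session coalgebra for \(U^{\prime }\): the \(\mathrm{un}\) state has a single \(*\)-transition to the unrestricted send, which loops back:

```python
# U' = mu X. un !int. X as a two-state session coalgebra (assumed encoding).
U_PRIME = {
    "u0": ("un", {"*": "u1"}),     # unrestricted marker state
    "u1": ("!", {"int": "u0"}),    # send an integer, loop forever
}

def unrestricted_protocol(c, x):
    """The protocol state that an 'un' state makes arbitrarily shareable."""
    label, trans = c[x]
    assert label == "un"
    return trans["*"]

assert unrestricted_protocol(U_PRIME, "u0") == "u1"
```

The states reachable from `u0` (here just `u1` and `u0` itself) are exactly those that may be used by arbitrarily many processes sharing the channel.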
Contributions. In this article, we make the following contributions.
(1)
We introduce the notion of session coalgebra, which justifies the state-based behaviour of session types from a coalgebraic perspective. Using coalgebra as a unifying framework for session types has two advantages:
session coalgebras can be defined and studied independently from specific syntactic formulations, while keeping the operational behaviour of session types; and
we can uncover the innate coinductive nature of key notions in session types, such as duality, subtyping, and type equivalence through standard coalgebraic techniques.
(2)
To enable the verification of processes against protocols described by session coalgebras, we also contribute a type system for \(\pi\)-calculus processes, in which channel types are given by states of an arbitrary session coalgebra. Exploiting our coalgebraic perspective, our type system supports subtyping by combining elements from type systems by Gay and Hole [24] and by Vasconcelos [61].
(3)
We provide a type checking algorithm for that system, provided that the underlying session coalgebra fulfils two intuitive conditions. In doing so, we show how a specific type syntax can be equipped with a session coalgebra structure and how the two decidability conditions are reflected in the type system.
(4)
Finally, we show how the coalgebraic approach naturally extends to the more expressive context-free session types by Thiemann et al. [58], by equipping session coalgebras with a stack, which allows the finitary presentation of session types with sequential composition.
Related Work. To our knowledge, a coalgebraic justification of session types is novel, although specific state-based descriptions of protocols have been considered before [9, 11, 12, 15, 16, 19, 22, 40, 55, 56]. In particular, although communicating automata can also provide syntax-independent characterisations of session types (see, e.g., References [16, 17]), such characterisations do not support delegation, an expressive feature that is cleanly justified in our coalgebraic approach.
Coinduction has already been exploited in the definition of type equivalence [58], subtyping [23, 24] and, especially, duality for systems with recursive types [7, 25, 39]. Unlike ours, these previous definitions are language-dependent, as they are tailored to specific process languages and/or syntactic variants of the type discipline. Session coalgebras thus enable the generalisation of insights and technologies from specific languages to any protocol specification that fits under the umbrella of state-based sessions.
We show how a specific type syntax can be equipped with an appropriate coalgebraic structure. This is in contrast to starting with a specific type syntax and then employing category theoretical ideas, as done in Reference [59], where coinductive session types are encoded in a session type system with parametric polymorphism [10].
Organisation. Throughout the article, we will turn the ideas sketched above into a coalgebraic framework. We introduce in Section 2 a concrete session type syntax that we will use as an illustration of our framework. In Section 3, we will define session coalgebras as coalgebras for an appropriate functor and show that the type system from Section 2 can be equipped with a coalgebraic structure. The promised coinductive view on type equivalence, duality, subtyping, and so on, will be provided in Section 4. Moreover, we will show that these notions are decidable under certain conditions that hold for any reasonable session type syntax, including the one from Section 2. Up to that point, session coalgebras have only intrinsic meaning and are not associated with any process representation. Section 5 sets forth a type system for \(\pi\)-calculus, in which channels are assigned states of a session coalgebra as types. The resulting type system features subtyping and algorithmic type checking, presented in Section 6. In Section 7, we illustrate how to expand the coalgebraic presentation to account for context-free session types. Some final thoughts are gathered in Section 8.
This article is an extended version of the conference paper [34]; it goes into more detail regarding proofs and collects additional new material on context-free session types.

2 Session Types

To motivate the development of session coalgebras, we recall in this section the concrete syntax of an existing session type system by Vasconcelos [61]. After building up our intuition, we introduce session coalgebras in Section 3 and show that they can represent this concrete type system.
The types of the system that we will be using are generated by the grammar in Figure 6, relative to a set of basic data types \(D\) and a countable set of type variables \(\mathsf {Var}\). This grammar has four syntactic categories: pretypes \(p\), qualifiers \(q\), session types \(S\), and types \(T\), which comprise both session and basic types. A pretype \(p\) is simply a communication action: send (\(!\)), receive (\(?\)), external choice (\(\&\)), and internal choice (\(\oplus\)) indexed by a finite set \(I\) of labels, followed by one or more session types. The simplest types are basic data types in \(D\) and the completed, or terminated, protocol represented by \(\mathrm{end}\). A pretype and qualifier also form a session type, written as \(q\ {p}\). The “\(\mathsf {lin}\)” qualifier enforces that the communication action \(p\) has to be carried out exactly once, while the “\(\mathsf {un}\)” qualifier allows arbitrary use of \(p\). Finally, we can form recursive session types with the fixed point operator \(\mu\) and the use of type variables. We use the usual notions of \(\alpha\)-equivalence, (capture-avoiding) substitution, and free and bound type variables for session types.
Fig. 6.
Fig. 6. Session types over sets of basic data types \(D\) and of variables \(\mathsf {Var}\).
The grammar allows arbitrary recursive types. We let \(\mathsf {Type}\) be the set of all \(T\) in which recursive types are contractive and closed, which means that they contain no substrings of the form \(\mu X_1. \mu X_2 \ldots \mu X_n . X_1\) and no free type variables. In a slight abuse of notation, we will use \(T\) to range over elements of \(\mathsf {Type}\) from here on.
To lighten notation, we will usually omit the qualifier \(\mathsf {lin}\) and the final \(\mathrm{end}\) of a type. With these conventions, we write, e.g., \(\mathop {? \texttt {int}.\,}\) instead of \(\mathsf {lin}\mathop {? \texttt {int}.\,} \mathrm{end}\) and \(\mathsf {un}\mathop {? \texttt {int}.\,}\) for a single unrestricted receive.
We assume there is some decidable subtyping preorder \(\le _D\) over the basic types. A type is a subtype of another if the subtype can be used anywhere the supertype is accepted. In examples, we use the basic types int, real, and bool, and we assume that int is a subtype of real, as usual.
An important notion is the unfolding of a session type, which we define next:
Definition 2.1 (Unfolding).
The unfolding of a recursive type \(\mu X . {S}\) is defined recursively
\begin{equation*} \mathit {unfold}(\mu X . {S}) = \mathit {unfold} ({S} [\mu X.{S} / X ]) \end{equation*}
For all other \(T\) in \(\mathsf {Type}\), \(\mathit {unfold}\) is the identity: \(\mathit {unfold}(T) = T\).
Because we assume that types are contractive, \(\mathit {unfold}(T)\) terminates for all \(T\). Also, because all types are required to be closed, \(\mathit {unfold}(T)\) can never be a variable \(X\). Any such variable would have to be bound somewhere before use, meaning it would have been substituted. Furthermore, unfolding a closed type always yields another closed type, as each removed binder always causes a substitution of the bound variable.
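Definition 2.1 can be implemented directly on a tuple representation of types. The sketch below uses an encoding of our own (types as nested tuples); capture-avoidance is ignored because elements of \(\mathsf {Type}\) are closed, and the loop terminates because types are contractive:

```python
# Types as nested tuples: ("mu", x, S), ("?", T, S), ("!", T, S), ("var", x),
# or "end". This encoding is ours, chosen for illustration.
def subst(t, x, r):
    """Substitute r for the variable x in t."""
    if t == ("var", x):
        return r
    if isinstance(t, tuple) and t[0] == "mu":
        return t if t[1] == x else ("mu", t[1], subst(t[2], x, r))
    if isinstance(t, tuple) and t[0] in ("?", "!"):
        return (t[0], subst(t[1], x, r), subst(t[2], x, r))
    return t

def unfold(t):
    """unfold(mu X. S) = unfold(S[mu X. S / X]); the identity otherwise.
    Contractivity guarantees that this loop terminates."""
    while isinstance(t, tuple) and t[0] == "mu":
        t = subst(t[2], t[1], t)
    return t

T = ("mu", "X", ("?", ("var", "X"), ("var", "X")))   # T = mu X. ?X. X
assert unfold(T) == ("?", T, T)   # one layer exposed, binder gone
assert unfold("end") == "end"
```

Note that the result of `unfold` is never a bare variable, matching the observation above: closedness means every bound variable has been substituted away.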

3 Session Coalgebras

Here, we will discuss session coalgebras, the main contribution of this article. The idea is that session coalgebras will be coalgebras for a specific functor \(F\), which will capture the state labels and the various kinds of transitions that we discussed in Section 1. An important feature of coalgebras in general, and session coalgebras in particular, is that the states can be given by an arbitrary set. We will leverage this to define a session coalgebra on the set of types \(\mathsf {Type}\) introduced in Section 2.

3.1 Preliminaries

Before coming to the definition, let us briefly recall some notions of category theory [4, 8, 47]. We will not require a lot of category theoretical terminology; in fact, we will only use the category \(\mathbf {Set}\) of sets and functions. Moreover, we will be dealing with functors \(F :\mathbf {Set}\rightarrow \mathbf {Set}\) on the category \(\mathbf {Set}\). Such a functor allows us to map a set \(X\) to a set \(F(X)\), and functions \(f: X \rightarrow Y\) to functions \(F(f): F(X) \rightarrow F(Y)\). A functor must preserve identity and compositions, that is, \(F\) maps the identity function \(\mathrm{id}_X :X \rightarrow X\) on \(X\) to the identity on \(F(X)\): \(F(\mathrm{id}_X) = \mathrm{id}_{F(X)}\); and, given functions \(f :X \rightarrow Y\) and \(g :Y \rightarrow Z\), we must have \(F(g \circ f) = F(g) \circ F(f)\).
A central notion is that of the coalgebras [33, 49, 50] for a functor \(F\). A coalgebra is given by a pair \((X, c)\) of a set \(X\) and a function \(c :X \rightarrow F(X)\). For simplicity, we often leave out \(X\) and refer to \(c\) as the coalgebra. The general idea is that the set \(X\) is the set of states and that \(c\) assigns to every state its one-step behaviour. In the case of session coalgebras this will be the state labels and outgoing transitions. Given two coalgebras \(c :X \rightarrow F(X)\) and \(d :Y \rightarrow F(Y)\), we say that \(h :X \rightarrow Y\) is a homomorphism, if \(d \circ h = F(h) \circ c\). Coalgebras and their homomorphisms form a category \(\mathbf {CoAlg}(F)\), with the same identity maps and composition as in \(\mathbf {Set}\).
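The homomorphism condition \(d \circ h = F(h) \circ c\) can be checked concretely. Below is a small sketch for a polynomial-style functor acting as \(F(h)(a, g) = (a, h \circ g)\), with transition functions represented as dictionaries; this encoding is an assumption for illustration, anticipating the functor defined in Section 3.2:

```python
# Check d . h = F(h) . c for coalgebras mapping states to (a, g),
# where g is a dictionary of transitions and F(h)(a, g) = (a, h . g).
def F_map(h, pair):
    a, g = pair
    return (a, {b: h[s] for b, s in g.items()})

def is_homomorphism(c, d, h):
    return all(d[h[x]] == F_map(h, c[x]) for x in c)

c = {"x0": ("!", {"int": "x1"}), "x1": ("end", {})}
d = {"y0": ("!", {"int": "y1"}), "y1": ("end", {})}

assert is_homomorphism(c, d, {"x0": "y0", "x1": "y1"})      # renaming works
assert not is_homomorphism(c, d, {"x0": "y1", "x1": "y0"})  # wrong map fails
```

Intuitively, a homomorphism preserves both the label of each state and where its transitions lead, which is why the swapped renaming in the last line is rejected.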

3.2 Session Coalgebras

Coming to the concrete case of session coalgebras, we now construct a functor that allows us to capture the state labels and the different kinds of transitions. Keeping in mind that states of a session coalgebra correspond to states of a protocol, we need to be able to label the states with enabled operations. Depending on their operations, states may also have polarities.
Definition 3.1 (Operations and Polarities).
We let \(O\) be the set of all operations \(O = \lbrace \mathrm{com},\mathrm{branch},\mathrm{end},\mathrm{bsc},\mathrm{un}\rbrace\) and \(P\) the set of polarities \(P = \lbrace \mathrm{in}, \mathrm{out}\rbrace\).
The operation of a state describes the action it represents: “\(\mathrm{com}\)” marks the transmission (sending or receiving) of a value; “\(\mathrm{branch}\)” marks an (internal or external) choice; “\(\mathrm{end}\)” marks the completed protocol; “\(\mathrm{bsc}\)” marks a basic data type; and “\(\mathrm{un}\)” marks an unrestricted type. States that transmit data (labelled with “\(\mathrm{com}\)”) or allow for choice (labelled with “\(\mathrm{branch}\)”) also have a polarity, which can be either “\(\mathrm{in}\)” (a receiving action or external choice) or “\(\mathrm{out}\)” (a sending action or internal choice).
Note that pairs in \(\lbrace \mathrm{com}, \mathrm{branch}\rbrace \times P\) correspond directly to the actions that can appear in session types: \(? = (\mathrm{com}, \mathrm{in})\), \(! = (\mathrm{com}, \mathrm{out})\), \(\& = (\mathrm{branch}, \mathrm{in}),\) and \(\oplus = (\mathrm{branch}, \mathrm{out})\). We will be using these markers to abbreviate the pairs.
Now that we have the possible operations of a protocol, we need to define the transitions that may follow each operation. Recall that the transition at a choice state has to be labelled with messages that resolve that choice. We therefore assume a given set \(\mathbb {L}\) of possible choice labels. The variable \(l\) will be used to refer to an element of \(\mathbb {L}\). Then, \(\mathcal {P}^+_{\lt \aleph _0}(\mathbb {L})\) is the set of all finite, non-empty subsets of \(\mathbb {L}\). Variables \(L, L_1, L_2,\ldots\) refer to these finite, non-empty subsets of \(\mathbb {L}\).
Our goal is to define a polynomial functor [21] that captures the state labels and transitions. This requires some further formal language. First, given sets \(X\) and \(Y\), we denote by \(X^Y\) the set of all (total) functions from \(Y\) to \(X\). Second, given a family of sets \({\lbrace }X_i{\rbrace }_{i \in I}\) indexed by some set \(I\), their coproduct is the set \(\coprod _{i \in I} X_i = \lbrace (i, x) \mid i \in I, x \in X_i \rbrace\).
We are now ready to define session coalgebras:
Definition 3.2 (Session Coalgebras).
Let \(*\) and \(\mathcal{d}\) be some fixed distinct elements, and let \(A\) and \(B_a\) be sets defined as follows, where \(a \in A\). Recall that \(D\) is the set of all basic data types:
\begin{equation*} \begin{array}{rlcl} A = & \lbrace \mathrm{com}\rbrace \times P & \quad & B_{\mathrm{com},p} = \lbrace *, \mathcal{d}\rbrace \\ & \mathrel {\cup } \lbrace \mathrm{branch}\rbrace \times P \times \mathcal {P}^+_{\lt \aleph _0}(\mathbb {L}) & & B_{\mathrm{branch},p,L} = L \\ & \mathrel {\cup } \lbrace \mathrm{end}\rbrace & & B_{\mathrm{end}} = \emptyset \\ & \mathrel {\cup } \lbrace \mathrm{bsc}\rbrace \times D & & B_{\mathrm{bsc},d} = \emptyset \\ & \mathrel {\cup } \lbrace \mathrm{un}\rbrace & & B_{\mathrm{un}} = \lbrace *\rbrace \\ \end{array} \end{equation*}
The polynomial functor \(F: \mathbf {Set}\rightarrow \mathbf {Set}\) is defined by
\begin{align*} F(X) & = \displaystyle \coprod _{a \in A} X^{B_a}, \\ F(f)(a, g) & = (a, f \circ g). \end{align*}
A coalgebra \((X,c)\) for the functor \(F\) is called a session coalgebra.
Let us unfold this definition. Given a session coalgebra \(c :X \rightarrow F(X)\) and a state \(x \in X\), we find in \(c(x) \in F(X)\) the information of \(x\) encoded as a tuple \((a, f)\) with \(a \in A\) and \(f :B_a \rightarrow X\). From \(a\), we directly obtain the operation, together with the polarity for \(\mathrm{com}\) and \(\mathrm{branch}\) states, the type of values communicated for \(\mathrm{bsc}\) states, and the message labels of \(\mathrm{branch}\) states. The function \(f\) encodes the transitions out of \(x\). The domain of \(f\) is exactly the set of labels that have a transition, and it is dependent on the kind of state declared by \(a\). If \(f\) has a domain of cardinality two or more, then one can think of the state \(x\) “splitting” into states that will be visited one after another. This idea corresponds to the intuition expressed in the diagrams of session coalgebras above, and has been previously used to model processes with binary “splitting” and communication [35, 52]. Here, we are not so much interested in the coalgebraic semantics [27] of session types; rather, we aim to define a coalgebraic view on them that allows the derivation of everything a type system for processes needs.
It is convenient to partition the domain of the transition map \(f\) into data and continuations. Notice that only \(\mathrm{com}\) states have data transitions; for all other states, all transitions are continuations. As usual, we write \(dom(f)\) for the domain of \(f\).
Definition 3.3.
Suppose \(c(x) = (\mathrm{com}, p, f)\), then the data domain of \(f\) is \(dom_D(f) = \lbrace \mathcal{d}\rbrace\) and the continuation domain is \(dom_C(f) = \lbrace *\rbrace\). In all other cases, \(dom_D(f) = \emptyset\) and \(dom_C(f) = dom(f)\).
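On the tuple encoding used in our diagrams (an assumption, not the formal definition), the split of Definition 3.3 reads:

```python
# Definition 3.3 sketched: com states look like
# ("com", pol, {"d": data_state, "*": continuation_state}).
def domains(entry):
    label, f = entry[0], entry[-1]
    if label == "com":
        return {"d"}, {"*"}     # data domain, continuation domain
    return set(), set(f)        # everything else: continuations only

assert domains(("com", "in", {"d": "q", "*": "q2"})) == ({"d"}, {"*"})
assert domains(("branch", "in", {"mul": "q1"})) == (set(), {"mul"})
```

Here `"d"` plays the role of the distinguished element \(\mathcal{d}\) and `"*"` that of \(*\) from Definition 3.2.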
We will have to analyse subsets of coalgebras that are closed under transitions. Given a session coalgebra \(c :X \rightarrow F(X)\), we say that \(d :Y \rightarrow F(Y)\) with \(Y \subseteq X\) is a subcoalgebra of \(c\) if the inclusion map \(Y \rightarrow X\) is a coalgebra homomorphism. If \(F\) weakly preserves pullbacks, then \(c(Y) \subseteq F(Y)\) and thus \(d\) is the restriction of \(c\) to \(Y\) [49, Theorem 6.3]. As all functors that we consider are weakly pullback preserving, we will also refer to \(Y\) as a subcoalgebra. The subcoalgebra generated by \(x \in X\) in \(c\), denoted by \({\langle }x{\rangle }_c\), is the least subset of \(X\) that contains \(x\) and is a subcoalgebra of \(c\) [49]. Intuitively, it is the set that includes \(x\) and all states that are reachable from \(x\).
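To make these definitions concrete, here is a minimal Python sketch (purely illustrative, not part of the formal development): a session coalgebra is represented as a dictionary mapping each state to a pair \((a, f)\), where the label plays the role of \(a \in A\) and a small dictionary plays the role of \(f :B_a \rightarrow X\). The state names and the keys `"data"`/`"cont"` are our own conventions. The generated subcoalgebra \({\langle }x{\rangle }_c\) is then computed by a standard worklist closure.

```python
# Illustrative encoding of a session coalgebra: state -> (label, transitions).
# The label encodes the tuple a in A; the transitions dict encodes f : B_a -> X.
# For com states we use the keys "data" and "cont" for the data and
# continuation transitions, respectively.
c = {
    "s0": (("branch", "in", ("plus", "quit")), {"plus": "s1", "quit": "end"}),
    "s1": (("com", "in"), {"data": "int", "cont": "s2"}),
    "s2": (("com", "out"), {"data": "int", "cont": "s0"}),
    "int": (("bsc", "int"), {}),
    "end": (("end",), {}),
}

def generated_subcoalgebra(c, x):
    """Compute <x>_c: the least subset containing x and closed under all
    transitions (data and continuation alike)."""
    seen, todo = set(), [x]
    while todo:
        y = todo.pop()
        if y in seen:
            continue
        seen.add(y)
        _, trans = c[y]
        todo.extend(trans.values())
    return seen
```

For instance, `generated_subcoalgebra(c, "s1")` returns all five states, since every state is reachable from `"s1"` through the loop via `"s0"`.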

3.3 An Alternative Presentation of Session Coalgebras

Session coalgebras \((X, c)\) are rather complex. We show how to build up \(c\) as the combination of two simpler functions, denoted \(\sigma\) and \(\delta\), so that \(c(x) = (\sigma (x), \delta (x))\) with \(\sigma :X \rightarrow A\) and \(\delta (x) :B_{\sigma (x)} \rightarrow X\). Observe that every state is assigned an operation in \(O\); thus, we may assume that there is a map \(\text{op}:X \rightarrow O\). Depending on the operation given by \(\text{op}(x)\), the label of \(x\) then has further ingredients, which are captured in the following proposition.
To formulate the proposition, we need some notation. Suppose \(f :X \rightarrow I\) is a map and \(i \in I\). We define the fibre \(X^f_i\) of \(f\) over \(i\) to be \(X^f_i = {\lbrace }x \in X \; | \; f(x) = i{\rbrace }\). Moreover, we let the pairing of functions \(f\) and \(g\) be \({\langle }f, g{\rangle }(x) = (f(x), g(x))\).
Proposition 3.4.
A session coalgebra \((X, c)\) can equivalently be expressed by providing the following maps:
\begin{equation*} \begin{array}{rll} \text{op}& : X \rightarrow O & {maps\;each\; state\; to\; an\; operation } \\ \text{pol}& : X^{\text{op}}_{\mathrm{com}} + X^{\text{op}}_{\mathrm{branch}} \rightarrow P \quad & {maps\; \text{com}\; and\; \text{branch}\; states\; to\; a\; polarity } \\ \text{la}& : X^{\text{op}}_{\mathrm{branch}} \rightarrow \mathcal {P}^+_{\lt \aleph _0}(\mathbb {L}) & {maps\;\text{branch}\; states\; to\; a\; set\; of\; labels} \\ \text{da}& : X^{\text{op}}_{\mathrm{bsc}} \rightarrow D & {maps\; \text{bsc}\;states\; to\; their\; basic\; type} \\ \delta _a & : X^{\sigma }_a \rightarrow X^{B_a} & {maps\; each\; state\; to\; a\; transition\; function}, \end{array} \end{equation*}
where
\begin{equation*} \sigma (x) = {\left\lbrace \begin{array}{ll} {\langle }\text{op}, \text{pol}{\rangle }(x) & \text{if } \text{op}(x) = \mathrm{com}, \\ {\langle }\text{op}, \text{pol}, \text{la}{\rangle }(x) & \text{if } \text{op}(x) = \mathrm{branch}, \\ {\langle }\text{op}, \text{da}{\rangle }(x) & \text{if } \text{op}(x) = \mathrm{bsc}, \\ \text{op}(x) & \text{if } \text{op}(x) = \mathrm{end}\text{ or } \text{op}(x) = \mathrm{un}. \end{array}\right.} \end{equation*}
We specified \(\delta _a\) as a family of transition functions so that we can express the necessary constraints in terms of the codomains. We can define a single global transition function as \(\delta (x) = \delta _{\sigma (x)}(x)\), and the coalgebra then becomes \(c(x) = (\sigma (x), \delta (x))\). As long as the provided functions map the appropriate elements of \(X\) to the specified codomains, this derived function conforms to \(c: X \rightarrow F(X)\).
The procedure also works backward: given any session coalgebra, we can derive functions \(\text{op}(x)\), \(\text{pol}(x)\), and so on, from \(c(x)\). We will often use \(\text{op}(x)\), \(\sigma (x)\), and \(\delta (x)\) to refer to those specific parts of an arbitrary session coalgebra.

3.4 Coalgebra of Session Types

In Section 1, we informally explained how session types can be represented as states of a session coalgebra. We will now justify this claim by showing that session types are, in fact, states of a specific session coalgebra \((\mathsf {Type}, c_{\mathsf {Type}})\).
We define the functions \(\text{op}\), \(\text{pol}\), \(\delta\), and \(\text{la}\) (see proposition 3.4) on \(\mathsf {Type}\). Using proposition 3.4, we can then derive \(c_{\mathsf {Type}}: \mathsf {Type}\rightarrow F(\mathsf {Type})\). Let us begin with the linear types:
Under this definition, \(\text{la}(T)\) is finite, by virtue of an expression being a finite string. The completed protocol \(\mathrm{end}\) and basic types \(d\) are straightforward: \(c(\mathrm{end}) = (\mathrm{end}, {f_\emptyset })\) and \(c(d) = (\mathrm{bsc}, d, {f_\emptyset })\) for any \(d \in D\), where \(f_\emptyset :\emptyset \rightarrow X\) is the empty function. Recursive types are handled according to their unfolding, \(c(\mathop {\mu X.\,} T) = c(\mathit {unfold}(\mathop {\mu X.\,} T))\). Recall that contractivity ensures that \(\mathit {unfold}\) always terminates. As our types are closed, all recursion variables are substituted during the unfolding of their binder. Consequently, we do not need to define \(c\) on these variables. Also note that this definition results in an equi-recursive interpretation of recursive types, i.e., a recursive type and its unfolding are considered as equal.
Session types can also be unrestricted, and consist of a pretype \(p\) with a qualifier \(\mathsf {un}\). Session coalgebras have \(\mathrm{un}\) states to mark unrestricted types; the continuation describes what the actual interaction is. Thus, we define \(\text{op}(\mathsf {un}\ p) = \mathrm{un}\) and \(\delta (\mathsf {un}\ p)(*) = \mathsf {lin}\ p\).
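The equi-recursive treatment of \(\mathop {\mu X.\,} T\) can be sketched concretely. Assuming a purely illustrative tuple encoding of the type syntax (tags such as `"mu"`, `"var"`, and `"bsc"` are our own, not the paper's formal grammar), unfolding substitutes the whole \(\mu\)-type for its bound variable; contractivity guarantees that the loop below terminates.

```python
# Illustrative sketch of equi-recursive unfolding.  Types are tuples whose
# first component is a tag; tuple-valued tail entries are subterms, other
# entries (variable names, basic-type names) are plain data.
def substitute(T, X, S):
    """Replace free occurrences of variable X in T by S.  A mu-binder for
    the same X shadows it, so we stop there."""
    if T[0] == "var":
        return S if T[1] == X else T
    if T[0] == "mu" and T[1] == X:
        return T  # X is rebound here
    return (T[0],) + tuple(
        substitute(U, X, S) if isinstance(U, tuple) else U for U in T[1:])

def unfold(T):
    """Unfold top-level binders: unfold(mu X. T) = T[mu X. T / X].
    Contractivity of types guarantees termination."""
    while T[0] == "mu":
        T = substitute(T[2], T[1], T)
    return T
```

For example, unfolding `("mu", "X", ("out", ("bsc", "int"), ("var", "X")))` yields an output type whose continuation is the original \(\mu\)-type again, matching the equi-recursive reading \(c(\mathop {\mu X.\,} T) = c(\mathit {unfold}(\mathop {\mu X.\,} T))\).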
Remark 1 (Alternative Syntaxes and their Functors).
The unrestricted session types that we have adopted are fairly standard, but they are not the only ones in the literature. Most notably, Gay and Hole [24] defined a type \(\widehat{\ }[T_1,\ldots ,T_n]\) that allows infinite reading and writing. To allow for such behaviour in session coalgebras, we can change \(B_{\mathrm{un}}\) to a set of two elements, such as \(\lbrace *_1, *_2\rbrace\). Like internal choice, the two transitions describe an option of which behaviour to follow, but without sending synchronisation signals. One transition could go to a read, and the other to a write, both recursively continuing as the original type \(\widehat{\ }[T_1,\ldots ,T_n]\).
It is possible, although not entirely trivial, to change the further definitions appropriately and obtain a decidable type checking algorithm encompassing both the syntax presented in this work and Gay and Hole’s syntax. We choose not to, so as to keep the presentation simple.

4 Type Equivalence, Duality, and Subtyping

Up to here, we have represented session types as session coalgebras, but we have not yet given a precise semantics to them. As a first step, we will define three relations on states: bisimulation, duality, and simulation. Bisimulation is also called behavioural equivalence for types; we will show that bisimilar types are indeed equivalent. Duality specifies complementary types: it tells us which types can form a correct interaction. Simulation will provide a notion of subtyping: it tells us when a type can be used where another type was expected. Besides relations on session coalgebras, we also introduce the parallelizability of states, which allows us to rule out certain troubling unrestricted types. Finally, we will obtain conditions on coalgebras that ensure the decidability of the three relations and therefore of the type system that we derive in Section 5.
In the following, we will denote by \(\mathrm{Rel}_X\) the poset \(\mathcal {P}(X \times X)\) of all relations on \(X\) ordered by inclusion. Recall that a post-fixpoint of a monotone map \(g :\mathrm{Rel}_X \rightarrow \mathrm{Rel}_X\) is a relation \(R \in \mathrm{Rel}_X\) with \(R \subseteq g(R)\). Note that \(\mathrm{Rel}_X\) is a complete lattice and that therefore any monotone map has a greatest post-fixpoint, which is also the greatest fixpoint, by the Knaster-Tarski Theorem [57].
We will define bisimulation, simulation, and duality as the greatest (post-)fixpoints of monotone functions, which is why we call these definitions coinductive. The definitions turn out to capture exactly what we would intuitively expect, and the interaction of infinite behaviour with other type features is automatically correct. The coinductive definitions also immediately give us proof techniques for equivalence, duality, and subtyping: to show that two states are, say, dual, we only have to establish a relation that relates the two states and show that it is a post-fixpoint. This technique can be enhanced in various ways [48], and we will show that it is decidable for reasonable session coalgebras.

4.1 Bisimulation

Two states of a coalgebra are said to be bisimilar if they exhibit equivalent behaviour. We abstract away from the precise structure of a coalgebra and only consider its observable behaviour. Two states of a session coalgebra are bisimilar if their labels are equal and if the states at the end of matching transitions are again bisimilar. There is one exception to the equality of labels: basic types can be related via their pre-order, which does not have to coincide with equality.
Definition 4.1.
Given some coalgebra \((X, c)\), we define \(c^*: \mathrm{Rel}_{F(X)} \rightarrow \mathrm{Rel}_X\), the binary preimage of \(c\), as follows:
\begin{equation*} c^*(R) = {\lbrace } (x, y) \; | \; (c(x), c(y)) \in R {\rbrace } \,. \end{equation*}
Definition 4.2.
We define the function \(f_\sim : \mathrm{Rel}_X \rightarrow \mathrm{Rel}_{F(X)}\) as
\begin{align*} f_\sim (R) = {} & \lbrace ((a, f), (a, f^{\prime })) \mid (\forall \alpha \in dom(f)) \quad f(\alpha) \mathrel {R} f^{\prime }(\alpha) \rbrace \\ {} \cup {} & \lbrace ((\mathrm{bsc}, d, f_\emptyset), (\mathrm{bsc}, d^{\prime }, f_\emptyset)) \mid d \le _D d^{\prime } \wedge d^{\prime } \le _D d\rbrace , \end{align*}
where \(f_\emptyset :\emptyset \rightarrow X\) is the empty function.
It can easily be checked that \(c^*\) and \(f_\sim\) are both monotone maps, and thus so is their composition. Hence, the greatest fixpoint in the following definition exists.
Definition 4.3 (Bisimilarity).
Let \(g_\sim = c^* \circ f_\sim\). A relation \(R\) is called a bisimulation if it is a post-fixpoint of \(g_{\sim }\). We call the greatest fixpoint of \(g_\sim\) bisimilarity and denote it by \(\sim\).
It should be noted that \(f_{\sim }\) is not the canonical relation lifting typically considered to construct bisimilarity as coinductive relation [20, 28, 54]. The reason is simply that we only assume the data types \(D\) to form a preorder, and thus we need to check the order in both directions. The map \(f_{\sim }\) would be the canonical relation lifting if we turned \(D\) into a poset.

4.2 Duality

Duality describes exactly opposite types in terms of their polarity. That is, the dual of input is output and the dual of output is input: \(\overline{in} = out\) and \(\overline{out} = in\). We can extend this to tuples \(a\) in \(A\), see definition 3.2, with the exception of basic types, because they do not describe channels:
\begin{align*} \overline{(\mathrm{com}, p)} & = (\mathrm{com}, \overline{\,p\,}) & \overline{(\mathrm{end})} & = (\mathrm{end}) \\ \overline{(\mathrm{branch}, p, L)} & = (\mathrm{branch}, \overline{\,p\,}, L) & \overline{(\mathrm{un})} & = (\mathrm{un}) \\ \overline{(\mathrm{bsc}, d)} & \text{ is undefined} \end{align*}
The next step is to compare transitions. Continuations of \(dom_C(f)\) need to be dual. The data types that are sent or received need to be equivalent, hence transitions of \(dom_D(f)\) need to go to bisimilar states. We capture this idea with the monotone map \(f_\bot : \mathrm{Rel}_X \rightarrow \mathrm{Rel}_{F(X)}\) defined as follows:
\begin{equation*} \begin{array}{rll} f_\bot (R) = & \bigg \lbrace \; ((a, f), (\overline{\,a\,}, f^{\prime })) & \bigg | \begin{array}{l} (\forall \alpha \in dom_C(f)) \quad f(\alpha) \mathrel {R} f^{\prime }(\alpha) \text{ and} \\ (\forall \beta \in dom_D(f)) \quad f(\beta) \mathrel {\sim } f^{\prime }(\beta) \end{array}\!\bigg \rbrace . \\ \end{array} \end{equation*}
Definition 4.4 (Duality).
Let \(g_\bot = c^* \circ f_\bot\). A relation \(R\) is called a duality relation if it is a post-fixpoint of \(g_{\bot }\). We call the greatest fixpoint of \(g_\bot\) duality and denote it by \(\bot\).

4.3 Parallelizability

Unlike a linear endpoint, a channel endpoint with an unrestricted type may be shared between multiple parallel processes.
There is no coordination of messages over a single channel: messages consecutively sent from one process may be received by different processes, and a single process may receive messages originating from different processes.
Furthermore, we wish to type-check each process independently. That is, to type-check \(P\) running in parallel with \(Q\), we first look at \(P\) (forgetting entirely about \(Q\)), and then consider \(Q\) (again, not taking into account \(P\) at all). The type of the unrestricted channel should carry all information needed to make sure the parallel composition of \(P\) and \(Q\) will work correctly, assuming \(P\) and \(Q\) are individually well-formed. This is not the case for all types that can be represented by session coalgebras.
Example 4.5.
Suppose we would allow a channel \(x\) of type \(\mathop {\mu X.\,} \mathsf {un}\mathop {! \texttt {int}.\,} \mathsf {un}\mathop {! \texttt {bool}.\,}X\) to be used by, say, some parallel processes \(P_1\) and \(P_2\) that first write an integer, and then a Boolean value, to the channel:
\begin{align*} P_1 &: \;\text{send}\; ``1'' \;\text{on}\; x,\; \text{then send}\; ``\texttt {true}'' \;\text{on}\; x, \\ P_2 &: \;\text{send}\; ``2'' \;\text{on}\; x,\; \text{then send}\; ``\texttt {false}'' \;\text{on}\; x. \end{align*}
Both processes are checked independently, and both conform to the protocol, so all seems well. Nonetheless, the following is a possible trace of \(P_1\) and \(P_2\) running in parallel:
\(P_1\) sends “1” \(\rightarrow\) \(P_2\) sends “2” \(\rightarrow\) \(P_1\) sends “true\(\rightarrow\) \(P_2\) sends “false”.
Although both processes locally conform to the type, when running in parallel the channel can carry two consecutive integers. A process reading from the channel might receive these consecutive integers, even though the type specifies that each integer must be followed by a Boolean.
So, only a subset of unrestricted types can safely be used by parallel processes; we call such types parallelizable. We will later define the type system such that well-formed processes can only interact with unrestricted types that are parallelizable. Hence, unrestricted, non-parallelizable types can be thought of as being invalid in a certain sense.
Definition 4.6.
Given a coalgebra \((X, c)\), some subset \(Y \subseteq X\) is parallelizable, written \(\text{par}(Y)\), if for every \(x\) and \(y\) in \(Y\) one of the following holds: \(x \sim y\), or \(\sigma (x) = \mathrm{un}\), or \(\sigma (y) = \mathrm{un}\).
Intuitively, a set of states is parallelizable if every message that is sent or received is of the same form. That is, either all messages are branch labels, drawn from the same set, or all messages are communications of the same data type. We make this slightly stronger and say that all states should be pairwise bisimilar. The only exceptions are \(\mathrm{un}\) states: since they do not (directly) represent any channel interactions, they can effectively be ignored.
Often, we are only interested in the parallelizability of a specific state. Recall that \({\langle }x{\rangle }_c\) denotes the subcoalgebra generated by \(x \in X\) in \(c\).
Definition 4.7.
Let \(\langle {x}\rangle ^{{\gg }}_c\) be the smallest subset of \({\langle }x{\rangle }_c\) that contains \(x\) and is closed under continuation transitions:
\begin{equation*} \langle {x}\rangle ^{{\gg }}_c = \bigcap {\lbrace }Y \subseteq X \; | \; x \in Y \text{ and } \delta (y)(\alpha) \in Y \text{ for all } y \in Y \text{ and } \alpha \in dom_C(\delta (y)){\rbrace } \end{equation*}
A state \(x\) is parallelizable, written \(\text{par}(x)\), if \(\langle {x}\rangle ^{{\gg }}_c\) is parallelizable.
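Both the continuation closure \(\langle {x}\rangle ^{{\gg }}_c\) and the check of Definition 4.6 can be sketched on the illustrative dictionary representation used earlier. Since parallelizability refers to bisimilarity, the sketch below takes a bisimilarity predicate as a parameter; a real checker would plug in the decision procedure of Section 4.6. All state names and dictionary keys are our own conventions, not the paper's.

```python
# Sketch of Definition 4.7: closure under continuation transitions only.
def continuation_closure(c, x):
    """Compute <x>^>>_c: x plus everything reachable via continuation
    transitions; data transitions of com states are skipped."""
    seen, todo = set(), [x]
    while todo:
        y = todo.pop()
        if y in seen:
            continue
        seen.add(y)
        label, trans = c[y]
        for key, target in trans.items():
            if not (label[0] == "com" and key == "data"):
                todo.append(target)
    return seen

# Sketch of Definitions 4.6/4.7: every pair of non-un states in the
# continuation closure must be bisimilar.
def parallelizable(c, x, bisimilar):
    ys = continuation_closure(c, x)
    non_un = [y for y in ys if c[y][0][0] != "un"]
    return all(bisimilar(a, b) for a in non_un for b in non_un)
```

With a (crude) syntactic-equality stand-in for bisimilarity, the type of Example 4.5, \(\mathop {\mu X.\,} \mathsf {un}\mathop {! \texttt {int}.\,} \mathsf {un}\mathop {! \texttt {bool}.\,}X\), is rejected: its closure contains an \(\texttt {int}\)-sending and a \(\texttt {bool}\)-sending state that are not bisimilar, while \(\mathop {\mu X.\,} \mathsf {un}\mathop {! \texttt {int}.\,}X\) is accepted.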

4.4 Simulation and Subtyping

Intuitively, a coalgebra simulates another if the behaviour of the latter “is contained in” the former. Subtyping, originally defined on session types by Gay and Hole [24], is a notion of substitutability of types [23]. We will define our notion of simulation such that it coincides with subtyping, just like bisimulation provides a notion of type equivalence [24].
We motivate subtyping in session types, and its related notions of covariance and contravariance. Consider a process \(P\) with a channel \(x\) with type \(T =\ ?\texttt {real}\). That is, \(P\) is ready to receive on \(x\) a real value and to treat it as such. Suppose \(P\) is meant to interact with another process \(Q\), which is ready to provide values on \(x\). Clearly, \(P\) can operate correctly if \(Q\) were to send an \(\texttt {int}\) over \(x\), because every int is a real. This way, to formalize the correct interaction between \(P\) and \(Q\), channel \(x\) in \(P\) can also be safely specialized to have the type \(S =\ ?\texttt {int}\). Indeed, because \(\texttt {int}\) is a subtype of \(\texttt {real}\), we will have that \(?\texttt {int}\) is a subtype of \(?\texttt {real}\). Thus, the session types (i.e., \(?\texttt {int}\) and \(?\texttt {real}\)) are related in the same order as the data types (i.e., \(\texttt {int}\) and \(\texttt {real}\)); this is called covariance.
Now consider a process \(Q\) with a channel \(y\) with type \(T =\ !\texttt {int}\), and a process \(R\) with a channel with type \(?\texttt {real}\). Again, because every \(\texttt {int}\) is a \(\texttt {real}\), \(R\) can operate correctly with an integer received from \(Q\). To formalize this, channel \(y\) in \(Q\) can be safely generalized to have the session type \(S =\ !\texttt {real}\). That is, \(!\texttt {real}\) is a subtype of \(!\texttt {int}\). Thus, in this case, the session types (i.e., \(!\texttt {real}\) and \(!\texttt {int}\)) are related in the opposite orders as the data types; this is called contravariance.
Subtyping can be justified similarly for labelled choices: the subtype of an external choice can have a subset of choices, while the subtype of an internal choice can add more options. For all types, it holds that states reached through transitions are covariant, i.e., if \(T\) is a subtype of \(U\), then continuations of \(T\) must be subtypes of continuations (of the same label) of \(U\).
Finally, consider a process \(P\) with a channel \(x\) of type \(T = \mathop {\mu X.\,} \mathop {? \texttt {real}.\,}\mathop {? \texttt {real}.\,} X\). Following the reasoning above, one might suspect \(x\) could be specialized to have type \(U = \mathop {\mu X.\,} \mathop {? \texttt {real}.\,}\mathop {? \texttt {int}.\,} X\), but this is not the case. \(T\) is parallelizable, so \(P\) can safely interact with it, but \(U\) is not parallelizable, so if \(x\) were of the latter type, the channel could not be used. Thus, subtyping of unrestricted types comes with the extra condition that subtypes must have the same parallelizability.
The monotone map \(h_\sqsubseteq\) in Figure 7 captures these ideas formally.
Fig. 7.
Fig. 7. Monotone map \(h_{\sqsubseteq } :\mathrm{Rel}_X \rightarrow \mathrm{Rel}_{F(X)}\) that defines simulations.
Definition 4.8 (Similarity).
Let \(g_\sqsubseteq = c^* \circ h_\sqsubseteq\). A relation \(R\) is called a simulation if it is a post-fixpoint of \(g_\sqsubseteq\). We call the greatest fixpoint of \(g_\sqsubseteq\) similarity and denote it by \(\sqsubseteq\).
Let us illustrate similarity by means of an example.
Example 4.9.
Recall the two client protocols for our mathematical server in Figures 2 and 3, which concern states \(s_0, s_1, \ldots\) and \(r_0, r_1, \ldots\), respectively. We can now prove that the protocol in Figure 3 can also connect to the server, because it is a supertype of the protocol in Figure 2. To do that, we establish a simulation relation between the states of both client protocols. In Figure 8, we display a part of both session coalgebras side-by-side and indicate with dotted arrows the pairs that have to be related by a simulation relation to show that these states are similar, that is, related by \(\sqsubseteq\). It should be noted that we simulate states of the second coalgebra by those of the first, that is, we show \(s_k \sqsubseteq r_k\) for the shown states. There is one exception, namely, \(q_\texttt {int} \sqsubseteq q_\texttt {real}\).
Fig. 8.
Fig. 8. Simulation for two clients of the mathematical server (indicated by dotted arrows).
The following proposition records some properties of and tight connections between the relations that we introduced.
Proposition 4.10.
Bisimilarity \(\sim\) is an equivalence relation, duality \(\bot\) is symmetric, and similarity \(\sqsubseteq\) is a preorder. Moreover, for all states \(x\), \(y\), and \(z\) of a session coalgebra, we have that
(1)
\(x \mathrel \sim y\) iff \(x \mathrel \sqsubseteq y\) and \(y \sqsubseteq x\);
(2)
\(x \mathrel \bot y\) and \(x \mathrel \bot z\) implies \(y \mathrel \sim z\); and
(3)
\(x \mathrel \bot y\) and \(y \mathrel \sim z\) implies \(x \mathrel \bot z\).
The first claim, that bisimilarity is an equivalence, duality is symmetric, and similarity is a preorder, follows from a straightforward coinductive analysis.
To prove the rest of the proposition, we start with a technical lemma.
Lemma 4.11.
If \(x \sim y,\) then \(x\) is parallelizable if and only if \(y\) is parallelizable.
Proof.
Suppose \(x\) is parallelizable. That means all pairs \(s, r\) of states in \(\langle x \rangle _c^\gg\) are either bisimilar, or one of \(s\) or \(r\) is an “\(\mathrm{un}\)” state. Because bisimilarity requires matching transitions to lead to bisimilar states, any state in \(\langle y \rangle _c^\gg\) has a bisimilar state in \(\langle x \rangle _c^\gg\). So, any \(a, b\) in \(\langle y \rangle _c^\gg\) with \(\text{op}(a) \not= \mathrm{un}\) and \(\text{op}(b) \not= \mathrm{un}\) have \(a^{\prime }, b^{\prime }\) in \(\langle x \rangle _c^\gg\) with \(a \sim a^{\prime } \sim b^{\prime } \sim b\). Bisimilarity is transitive, so \(a \sim b\). Thus, \(y\) is parallelizable. The reverse direction is analogous.□
Given a subtyping relation \(\le\), type equivalence is generally defined as the derived equivalence relation: \(x \equiv y\) iff \(x \le y\) and \(y \le x\). We defined bisimulation separately from simulation, but it coincides with this derived equivalence; this is precisely claim (1), that \(x \sim y\) if and only if \(x \sqsubseteq y\) and \(y \sqsubseteq x\).
Proof of Proposition 4.10
Recall that bisimilarity is symmetric, so if \(x \sim y\), then \(\delta (x)(\alpha) \sim \delta (y)(\alpha)\) and \(\delta (y)(\alpha) \sim \delta (x)(\alpha)\) for any \(\alpha \in dom(\delta (x))\). Using lemma 4.11 in the case that \(\text{op}(x) = \text{op}(y) = \mathrm{un}\), it is easy to confirm that the bisimilarity relation is a simulation. So, \(x \sim y\) implies \(x \sqsubseteq y\); but also \(y \sim x\) by symmetry, and thus \(y \sqsubseteq x\).
Suppose \(x \sqsubseteq y\) and \(y \sqsubseteq x\), which means that there are simulation relations \(R_{1}\) and \(R_{2}\) with \(x \mathrel {R_{1}} y\) and \(y \mathrel {R_{2}} x\). It is straightforward to prove that the relation \(R = R_{1} \cap R_{2}^{\mathrm{op}}\), where \(R_{2}^{\mathrm{op}}\) is the dual relation, is a bisimulation relation. To this end, suppose that \(x_{1} \mathrel {R} y_{1}\), \(c(x_{1}) = (a, f)\) and \(c(y_{1}) = (b, g)\). Let us first consider the case when \(a = (\mathrm{bsc}, d)\). Since \(x_{1} \mathrel {R_{1}} y_{1}\), we necessarily have \(b = (\mathrm{bsc}, e)\) with \(d \le e\). Symmetrically, we also have \(y_{1} \mathrel {R_{2}} x_{1}\) and thus \(e \le d\). Then clearly \(x_{1} \mathrel {g_\sim (R)} y_{1}\) by definition. Second, we treat the case where \(a = (\mathrm{com}, \mathrm{in}) = b\); the labels again have to be equal because \(R_{1}\) (or \(R_{2}\)) is a simulation. Since \(x_{1} \mathrel {R_{1}} y_{1}\), we also have that \(f(\ast) \mathrel {R_{1}} g(\ast)\) and \(f(\mathcal{d}) \mathrel {R_{1}} g(\mathcal{d})\). Symmetrically, we obtain \(g(\ast) \mathrel {R_{2}} f(\ast)\) and \(g(\mathcal{d}) \mathrel {R_{2}} f(\mathcal{d})\) from \(y_{1} \mathrel {R_{2}} x_{1}\). Putting these together, we thus have \(f(\ast) \mathrel {R} g(\ast)\) and \(f(\mathcal{d}) \mathrel {R} g(\mathcal{d})\), and so \(x_{1} \mathrel {g_\sim (R)} y_{1}\). All the other cases are analogous, and \(R\) is indeed a bisimulation, which implies that \(x \sim y\).
The second claim is that \(x \mathrel \bot y\) and \(x \mathrel \bot z\) implies \(y \mathrel \sim z\). By definition of duality (definition 4.4), we have \(\sigma (x) = \overline{\sigma (y)}\) and \(\sigma (x) = \overline{\sigma (z)}\). Thus, \(\sigma (y) = \sigma (z)\). That the transitions are bisimilar follows from a coinductive analysis. Let \(f = \delta (x)\), \(g = \delta (y)\), and \(h = \delta (z)\); then \(f(\alpha) \mathrel \bot g(\alpha)\) and \(f(\alpha) \mathrel \bot h(\alpha)\) for all \(\alpha \in dom(f)\) (which is equal to \(dom(g)\) and \(dom(h)\)). The coinductive hypothesis implies \(g(\alpha) \sim h(\alpha)\). Bisimilarity of \(y\) and \(z\) follows directly.
The third claim, that \(x \mathrel \bot y\) and \(y \mathrel \sim z\) implies \(x \mathrel \bot z\), is proven similarly.□
Property (1) of proposition 4.10 unfortunately does not follow from the general principles established by Hughes and Jacobs [32], because \(h_{\sqsubseteq }\) does not arise as a lifting induced by an order on the functor \(F\). The problem is that the relation \(R\) occurs in \(h_{\sqsubseteq }(R)\) both normally and dually, which forces us to define the lifting \(h_{\sqsubseteq }\) and prove proposition 4.10 manually.

4.5 Duality Functions

It is useful to have a function mapping any state \(x\) to its dual, whenever duality is defined on \(x\). Note that although there may be multiple states that are dual to \(x\), by proposition 4.10 all such states are bisimilar, so it does not matter which one we designate as the dual of \(x\).
Definition 4.12.
A partial function \(f: X \rightarrow X\) is called a duality function on session coalgebra \((X, c)\) if for all \(x \in X\), we have
(1)
If \(\overline{\sigma (x)}\) is defined, then \(f(x) \mathrel \bot x\);
(2)
If \(\overline{\sigma (x)}\) is undefined, then \(f(x)\) is also undefined.
Not every session coalgebra admits a duality function. However, we can close any coalgebra \((X, c)\) under duality as follows:
\begin{align*} X^\bot &= \lbrace (x, \uparrow) \mid x \in X \rbrace \ \cup \ \lbrace (x, \downarrow) \mid x \in X, \overline{\sigma (x)} \text{ is defined} \rbrace , \end{align*}
where \(\uparrow\) and \(\downarrow\) are distinct, arbitrary objects used to distinguish the states that keep the same meaning as in \(X\) from those that get a dual meaning. The coalgebra map \(c^\bot : X^\bot \rightarrow F(X^\bot)\) is defined as
\begin{align*} c^\bot (x, \uparrow) &= (\sigma (x), \alpha \mapsto (\delta (x)(\alpha), \uparrow)), \\ c^\bot (x, \downarrow) &= \left(\overline{\sigma (x)}, \alpha \mapsto {\left\lbrace \begin{array}{ll} (\delta (x)(\alpha), \downarrow) & \text{ if } \alpha \in dom_C(\delta (x)) \\ (\delta (x)(\alpha), \uparrow) & \text{ if } \alpha \in dom_D(\delta (x)) \end{array}\right.} \right). \end{align*}
The dual closure admits an almost trivial duality function \(f: X^\bot \rightarrow X^\bot\):
\begin{align*} f(x, \uparrow) &= (x, \downarrow) \qquad \text{if } (x, \downarrow) \in X^\bot \\ f(x, \downarrow) &= (x, \uparrow) \end{align*}
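The dual closure is a finite, mechanical construction, which we can sketch on the illustrative dictionary representation used earlier (the markers `UP` and `DOWN` play the roles of \(\uparrow\) and \(\downarrow\); all other names are our own conventions).

```python
UP, DOWN = "up", "down"

DUAL_POLARITY = {"in": "out", "out": "in"}

def dual_label(label):
    """The operation on labels a in A; returns None where the dual is
    undefined, i.e., for bsc states."""
    op = label[0]
    if op == "bsc":
        return None
    if op in ("com", "branch"):
        return (op, DUAL_POLARITY[label[1]]) + label[2:]
    return label  # end and un are self-dual

def dual_closure(c):
    """Build (X^bot, c^bot): UP-states behave as in c; DOWN-states carry the
    dual label, with continuation transitions staying dual (DOWN) and data
    transitions going back to UP-states."""
    cbot = {}
    for x, (label, trans) in c.items():
        cbot[(x, UP)] = (label, {k: (t, UP) for k, t in trans.items()})
        dl = dual_label(label)
        if dl is not None:
            cbot[(x, DOWN)] = (dl, {
                k: (t, UP if (label[0] == "com" and k == "data") else DOWN)
                for k, t in trans.items()})
    return cbot
```

The almost trivial duality function then just flips the marker, sending \((x, \uparrow)\) to \((x, \downarrow)\) when the latter exists, and vice versa.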
Proposition 4.13.
Let \((X^\bot , c^\bot)\) be the dual closure of some session coalgebra, then \((x, \uparrow) \mathrel \bot (x, \downarrow)\) for every state \(x \in X\) such that \((x, \downarrow) \in X^\bot\).
Proof.
Using coinductive reasoning, we have to show
\begin{align*} \sigma ^\bot (x, \uparrow) &= \overline{\sigma ^\bot (x, \downarrow)} \\ \delta ^\bot ((x, \uparrow))(\alpha) &\mathrel \bot \delta ^\bot ((x, \downarrow))(\alpha) \qquad \forall \alpha \in dom_C(\delta ((x, \uparrow))) \\ \delta ^\bot ((x, \uparrow))(\beta) &\mathrel \sim \delta ^\bot ((x, \downarrow))(\beta) \qquad \forall \beta \in dom_D(\delta ((x, \uparrow))) \end{align*}
The first claim follows by construction. For the second, note that \(\delta ^\bot ((x, \uparrow))(\alpha) = (\delta (x)(\alpha), \uparrow)\) and \(\delta ^\bot ((x, \downarrow))(\alpha) = (\delta (x)(\alpha), \downarrow)\), so the claim follows from the coinductive hypothesis. For the final claim, we see that \(\delta ^\bot ((x, \uparrow))(\beta) = (\delta (x)(\beta), \uparrow)\) and \(\delta ^\bot ((x, \downarrow))(\beta) = (\delta (x)(\beta), \uparrow)\), so the two sides are in fact equal. Bisimilarity is reflexive, thus concluding the coinductive proof.□
Since duality is symmetric, the above proposition suffices to conclude that \(f\) is indeed a duality function. Furthermore, it should be apparent that \(f\) is in fact a computable function.
In the following sections, we will assume that session coalgebras under consideration have a computable duality function, since we can always use the above procedure to construct one. In a slight abuse of notation, we reuse the \(\overline{\ \cdot \ }\) operation to mean this assumed duality function on states, i.e., \(\overline{x}\) denotes the dual of \(x\).

4.6 Decidability

In a practical type checker, we need an algorithm to decide the relations defined above. In this subsection, we give such an algorithm, which computes the answer in finite time for a certain class of types.
Definition 4.14.
A coalgebra \(c\) is finitely generated if \({\langle }x{\rangle }_c\) is finite for all \(x\).
This restriction is not problematic for types, as the following lemma shows.
Lemma 4.15.
The coalgebra of types \((\mathsf {Type}, c_{\mathsf {Type}})\) is finitely generated.
Proof.
The full proof is quite technical, so we give a sketch. Consider some \(T \in \mathsf {Type}\) and the coalgebra generated by \(T\). Note that, instead of substituting recursion variables \(X\) at every unfold, we can generate an equivalent coalgebra (up to the names of states) by defining \(\mathit {unfold}(\mu X. U) = U\). Then, every non-recursive transition goes to a strictly smaller expression. Simply removing the binder, without substituting anything for \(X\) in \(U\), does mean that \(X\) is now free in \(U\). So, whenever we unfold a type (i.e., remove a binder), we keep a record of the bound value (which, in this case, is \(\mu X. U\)). Let \(\zeta\) denote this record, mapping variables to their bound values (so, following the preceding example, we have \(\zeta (X) = \mu X. U\)); then we slightly augment the transition function of the coalgebra of types:
\begin{equation*} \delta ^{\prime }(U)(\alpha) = {\left\lbrace \begin{array}{ll} \zeta (X) & \text{if } \delta (U)(\alpha) = X \text{ for some variable }\;{X,} \\ \delta (U)(\alpha) & \text{otherwise.} \end{array}\right.} \end{equation*}
That is, whenever the original definition of the coalgebra of types calls for a transition from some state \(U\) to a variable \(X\), we look up the value of \(X\) in the record we kept and point the transition to the corresponding state. This state is guaranteed to have been seen previously, i.e., to lie on the path from \(T\) to \(U\). Finally, there is only a finite number of transitions per state. Hence, the total number of states reachable from \(T\) is finite.□
To determine whether two states \(x\) and \(y\) are bisimilar, we need to find a bisimulation \(R\) with \(x \mathrel {R} y\). We start with the simplest relation \(R = \lbrace (x, y)\rbrace\), and ask if this is a bisimulation.
First, we check that for all \((u, w) \in R\), \(\sigma (u) = \sigma (w)\), or in the case of \(\mathrm{bsc}\) states that \(\text{da}(u) \le _D \text{da}(w)\) and \(\text{da}(w) \le _D \text{da}(u)\). If \(\sigma (u) \not= \sigma (w)\) for some pair in \(R\), then we know that no superset of \(R\) is a bisimulation, and the algorithm rejects.
Second, we check the matching transitions. For every \((u, w) \in R\) and \(\alpha \in dom(\delta (u))\), we check whether \((\delta (u)(\alpha), \delta (w)(\alpha)) \in R\). If we encounter a missing pair, then we add it to \(R\) and ask whether this new relation is a bisimulation, i.e., return to the first step. If all destinations for matching transitions are present in \(R\), then \(R\) is by construction a bisimulation for \((x, y)\). Hence, \(x \sim y\).
This algorithm tries to construct the smallest possible bisimulation containing \((x, y)\), by only adding strictly necessary pairs. If the algorithm rejects, then there is no such bisimulation; hence, \(x \not\sim y\). Additionally, the algorithm only examines pairs in \({\langle }x{\rangle }_c \times {\langle }y{\rangle }_c\). If there are finitely many such pairs, then the algorithm will terminate in finite time.
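The algorithm just described can be rendered as a short program. The following Python sketch uses a hypothetical `State` representation of our own (an operation label `op` playing the role of \(\sigma\), and a transition table `delta`); for \(\mathrm{bsc}\) states it simply compares labels for equality, a special case of the mutual \(\le _D\) check:

```python
class State:
    """A coalgebra state: an operation label `op` (sigma) and a transition
    table `delta` from actions to states. Basic types are compared by
    equality here, a special case of the mutual <=_D check in the text."""
    def __init__(self, op, delta=None):
        self.op, self.delta = op, (delta or {})

def bisimilar(x, y):
    R = {(x, y)}                  # start from the smallest candidate relation
    todo = [(x, y)]
    while todo:
        u, w = todo.pop()
        # first step: operations (and transition labels) must agree
        if u.op != w.op or u.delta.keys() != w.delta.keys():
            return False          # no superset of R can be a bisimulation
        # second step: add missing destination pairs and recheck them
        for a in u.delta:
            p = (u.delta[a], w.delta[a])
            if p not in R:
                R.add(p)
                todo.append(p)
    return True                   # R is a bisimulation by construction

# two syntactically different cycles, both realising "un ?int" forever
a, b, i = State("un"), State("un"), State("int")
ca, cb = State("?"), State("?")
a.delta, ca.delta = {"*": ca}, {"*": i, "d": a}
b.delta, cb.delta = {"*": cb}, {"*": i, "d": b}
```

On the two cycles above, `bisimilar(a, b)` succeeds after adding only the pairs reachable from \((a, b)\), while `bisimilar(a, ca)` is rejected at the very first comparison of operations.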
The above described algorithm can be suitably adapted to similarity and duality, which gives us the following result.
Theorem 4.16.
Bisimilarity, similarity, and duality of any states \(x\) and \(y\) are decidable if \({\langle }x{\rangle }_c\) and \({\langle }y{\rangle }_c\) are finite. Parallelizability of any state \(x\) is decidable if \(\langle {x}\rangle ^{{\gg }}_c\) (as in Definition 4.7) is finite.
Proof.
We start with the decidability of bisimilarity.
A relation \(R\) is a post-fixpoint of \(g_\sim\) if \((a, b) \in g_\sim (R)\) for all \((a, b) \in R\). This involves computing a pre-image, which is, in general, not easy. Because we are not interested in pairs in \(g_\sim (R)\) that are not in \(R\), and because \(g_\sim (R) = c^*(f_\sim (R))\), \(R\) is a post-fixpoint if \((c(a), c(b)) \in f_\sim (R)\) for all \((a, b) \in R\). When \(R\) is finite, the latter is decidable—either trivially or by the assumption that \(\le _D\) is decidable.
By assumption, there are only finitely many states to be transitioned to, so the algorithm described above can only add finitely many pairs before reaching a relation that either already is a bisimulation or can never be made into a bisimulation.
The decidability of parallelizability for a finite set is straightforward.
If \(Y = \langle {x}\rangle ^{{\gg }}_c\) is finite, then the set \(Y \times Y\) of all pairs is finite. We can enumerate all such pairs and decide, in finite time, whether each either is in the bisimilarity relation or contains a \(\mathrm{un}\) state. Once we encounter a pair for which this does not hold, we know that \(x\) is not parallelizable. If we have checked all pairs without encountering such a counterexample, then we know that \(x\) is parallelizable.
Each continuation is a transition, so for any state \(x\) such that \({\langle }x{\rangle }_c\) is finite, \(\langle {x}\rangle ^{{\gg }}_c\), i.e., the smallest set that contains \(x\) and is closed under continuations, is also finite. Thus, parallelizability of \(x\) is decidable.
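Assuming decision procedures for bisimilarity and for recognising \(\mathrm{un}\) states are available as oracles, the pairwise check described above is a one-liner; the following Python sketch (our own reading of the check, not the paper's formal definition of parallelizability) illustrates it on toy data:

```python
def parallelizable(Y, bisim, is_un):
    """Every pair in the finite closure Y must be bisimilar or contain an
    un state; `bisim` and `is_un` are assumed decision oracles."""
    return all(bisim(u, w) or is_un(u) or is_un(w)
               for u in Y for w in Y)

# toy instance: identity as bisimilarity, state 2 marked as a un state
ok = parallelizable([1, 2], lambda u, w: u == w, lambda s: s == 2)
bad = parallelizable([1, 3], lambda u, w: u == w, lambda s: False)
```

Since \(Y \times Y\) is finite, the `all` quantifier terminates, mirroring the decidability argument.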
Finally, the algorithm, and decidability proof, for duality and similarity are analogous to that for bisimilarity.□
Corollary 4.17.
Bisimilarity, similarity, duality, and parallelizability are decidable for \(c_\mathsf {Type}\).
Proof.
The coalgebra \((\mathsf {Type}, c_\mathsf {Type})\) is finitely generated, so theorem 4.16 applies.□

4.7 Session Coalgebras as Type Equations and the Rational Fixed Point

We have seen that the coalgebra \(c_{\mathsf {Type}}\) is finitely generated, which means that every type \(T\) gives rise to a finite subcoalgebra \({\langle }T{\rangle }_{c_{\mathsf {Type}}}\) that contains \(T\). However, all the examples of session coalgebras, apart from \(c_{\mathsf {Type}}\), were finite. In the theory of automata, there is a familiar correspondence between finite deterministic automata and regular expressions. Thus, one may ask whether there is such a correspondence between finite state session coalgebras and session types. The answer to this question is not a definite “yes” but requires a bit of care.
Indeed, it is simple to provide an example of a session coalgebra that has no corresponding session type, see Figure 16. But there is more to be said. Coming back to language theory, the above mentioned correspondence actually relates finite automata and regular languages with each other, and uses this to establish the correspondence with regular expressions. This means that we have to find an analogue to regular languages for session coalgebras.
In the theory of coalgebra, the correspondence of regular languages and regular expressions is approached by relating the so-called rational fixed point of the functor for deterministic automata to a coalgebra on the set of regular expressions [53]. To parallel this approach, we will thus have to find a correspondence between the rational fixed point for our session coalgebra functor \(F\) and the coalgebra \(c_{\mathsf {Type}}\). Let us begin by introducing the rational fixed point. We will use a simplified definition [41], which works in our setting and makes the theory more elementary.
Definition 4.18.
Let \(\mathbf {CoAlg}_{\mathrm{fg}}(F)\) be the category of finitely generated coalgebras, that is, the full subcategory of \(\mathbf {CoAlg}(F)\) in which all coalgebras are finitely generated. The rational fixed point of \(F\) is the final object \((\rho F, r)\) in \(\mathbf {CoAlg}_{\mathrm{fg}}(F)\).
It should be noted that the rational fixed point does indeed exist in our case, because the category \(\mathbf {Set}\) is locally finitely presentable [3] and the functor \(F\) preserves filtered colimits [2]. Moreover, every finite session coalgebra is of course finitely generated and thus an object in \(\mathbf {CoAlg}_{\mathrm{fg}}(F)\).
We can now ask ourselves whether the coalgebra on session types corresponds to the rational fixed point, which can be seen as a semantic domain for finitely generated session coalgebras, thus playing a role similar to that of regular languages.
Unfortunately, this is not the case. Figure 9 depicts a session coalgebra that does not have a syntactical equivalent; it consists of a state \(T_{\mathrm{end}}\) with \(\sigma (T_{\mathrm{end}}) = \mathrm{un}\) and \(\delta (T_{\mathrm{end}})(*) = T_{\mathrm{end}}\).
Fig. 9.
Fig. 9. An alternative completed protocol.
Theorem 4.19.
There is a unique homomorphism \(t :(\mathsf {Type}, c_{\mathsf {Type}}) \rightarrow (\rho F, r)\) from the coalgebra of types to the rational fixed point of \(F\). This homomorphism is not an isomorphism.
Proof.
In lemma 4.15, we proved that \(c_{\mathsf {Type}}\) is finitely generated and therefore finitely presentable [41, 42]. This gives the unique homomorphism \(t\) into the rational fixed point.
Suppose, toward a contradiction, that the homomorphism \(t\) has an inverse \(t^{-1}\). Then for any finitely presentable coalgebra \((X, c)\), we get a homomorphism \(h :X \rightarrow \rho F\) and thus a homomorphism \(t^{-1} \circ h :X \rightarrow \mathsf {Type}\). In Figure 9, we give an example of a finite session coalgebra that has no corresponding type, i.e., there is no homomorphism \((X, c) \rightarrow (\mathsf {Type}, c_{\mathsf {Type}})\). Therefore, \(t\) cannot have an inverse.□
This result is disappointing, as session types are not in correspondence with finitely generated session coalgebras. However, the proof also instructs us how we can remedy this problem. The issue with the session coalgebra in Figure 9 is that the \(\mathrm{un}\)-state \(T_{\mathrm{end}}\) has a direct self-loop without going through a pretype, which would be required to be able to associate a session type with this state. It turns out that this is the only real obstacle and that we can obtain an inverse of sorts of the homomorphism \(t\) from theorem 4.19, if we restrict to appropriate session coalgebras.
Theorem 4.20.
Let \(c :X \rightarrow F(X)\) be a finite session coalgebra. Suppose that for all \(x \in X\), whenever \(c(x) = (\mathrm{un}, f)\), then \(c(f(\ast)) = (\mathrm{com}, p, f^{\prime })\) or \(c(f(\ast)) = (\mathrm{branch}, p, L, f^{\prime })\). Then there is a homomorphism \(h :(X,c) \rightarrow (\mathsf {Type}, c_{\mathsf {Type}})\).
Proof.
We follow the usual idea of creating a finite system of type equations induced by the states of \(c\) and show that this system is solvable in \(\mathsf {Type}\) [2]. The homomorphism then maps a state to the corresponding solution.
To this end, we will use a type variable \(X_{y}\) for every state \(y \in X\), a metavariable \(T_{y}\) that represents \(y\) as a type equation, and a family of pretypes \(p_{u}\) indexed by elements \(u \in F(X)\). This family \(p_{u}\) of pretypes is defined as follows:
\begin{align*} p_{(\mathrm{com}, \mathrm{in}, f)} & = \mathop {? X_{f(\ast)}.\,} X_{f(d)} & p_{(\mathrm{com}, \mathrm{out}, f)} & = \mathop {! X_{f(\ast)}.\,} X_{f(d)} \\ p_{(\mathrm{branch}, \mathrm{in}, L, f)} & = \& \lbrace l : X_{f(l)} \rbrace _{l \in L} & p_{(\mathrm{branch}, \mathrm{out}, L, f)} & = \oplus \lbrace l : X_{f(l)} \rbrace _{l \in L} \end{align*}
For every state \(y \in X\), we require the following equation to hold:
\begin{equation*} T_{y} = {\left\lbrace \begin{array}{ll} \mathsf {lin}\; p_{(\mathrm{com}, p, f)}, & c(y) = (\mathrm{com}, p, f), \\ \mathsf {lin}\; p_{(\mathrm{branch}, p, L, f)}, & c(y) = (\mathrm{branch}, p, L, f), \\ \mathrm{end}, & c(y) = (\mathrm{end}, f), \\ d, & c(y) = (\mathrm{bsc}, d), \\ \mathsf {un}\; p_{c(f(\ast))}, & c(y) = (\mathrm{un}, f). \end{array}\right.} \end{equation*}
Note that the last case is well-defined because of the condition on the coalgebra \(c\). Moreover, it is important to note that the variables only occur inside the pretypes \(p_{u}\), which makes their occurrences contractive (or guarded). Therefore, we can solve the system of equations with types \(\overline{T_{y}}\) by putting \(\overline{T_{y}} = \mathop {\mu X_{y}.\,} T_{y}\) and then repeatedly substituting \(\overline{T_{y^{\prime }}}\) for all \(X_{y^{\prime }}\) with \(y^{\prime } \ne y\) in \(\overline{T_{y}}\). This repeated substitution terminates, because each \(\overline{T_{y^{\prime }}}\) binds one or more variables. That we get closed and contractive types is thus ensured.
We can then define the map \(h :X \rightarrow \mathsf {Type}\) by \(h(y) = \overline{T_{y}}\). That this is a homomorphism is easily proven by using the system of equations:
\begin{align*} c_{\mathsf {Type}}(h(y)) = c(\overline{T_{y}}) & = {\left\lbrace \begin{array}{ll} c_{\mathsf {Type}}(\mathsf {lin}\; p_{(\mathrm{com}, \mathrm{out}, f)}), & c(y) = (\mathrm{com}, \mathrm{out}, f) \\ \vdots \\ c_{\mathsf {Type}}(\mathsf {un}\; p_{c(f(\ast))}), & c(y) = (\mathrm{un}, f) \end{array}\right.} \\ & = {\left\lbrace \begin{array}{ll} (\mathrm{com}, \mathrm{out}, [\ast \mapsto \overline{T_{f(\ast)}},\, d \mapsto \overline{T_{f(d)}}]), & c(y) = (\mathrm{com}, \mathrm{out}, f) \\ \vdots \\ (\mathrm{un}, [\ast \mapsto \mathsf {lin}\; p_{c(f(\ast))}]), & c(y) = (\mathrm{un}, f) \end{array}\right.} \\ & = {\left\lbrace \begin{array}{ll} (\mathrm{com}, \mathrm{out}, h \circ f), & c(y) = (\mathrm{com}, \mathrm{out}, f) \\ \vdots \\ (\mathrm{un}, h \circ f), & c(y) = (\mathrm{un}, f) \end{array}\right.} \\ & = F(h)(c(y)). \end{align*}
As this holds for all \(y \in X\), \(c_{\mathsf {Type}} \circ h = F(h) \circ c\) and \(h\) is a homomorphism.□
Even though the homomorphism that we found in theorem 4.20 seems canonical, it is not unique. This is because types are considered equal only syntactically, up to \(\alpha\)-equivalence, but uniqueness would require that bisimilar types are equal. For instance, any closed type \(T\) is bisimilar to \(\mathop {\mu X.\,} T\) for any variable \(X\), but these are not syntactically equal. Clearly, \(h\) could be turned into a homomorphism \(X \rightarrow \mathsf {Type}/ \sim\) to the quotient of \(\mathsf {Type}\) by bisimilarity. This is a standard technique [41, 53] to express the bisimulation proof principle that we have devised in Section 4.1 as a unique mapping property. Thus, \(\mathsf {Type}/ \sim\) with the coalgebra induced by the quotient [49] becomes final among appropriately restricted session coalgebras.
Corollary 4.21.
Let \(\mathbf {CoAlg}_{\mathrm{fg}}^{p}(F)\) be the full subcategory of \(\mathbf {CoAlg}_{\mathrm{fg}}(F)\) that consists of coalgebras that fulfil the condition of theorem 4.20. The bisimilarity quotient \((\mathsf {Type}/ \sim , c_{\mathsf {Type}/ \sim })\) of \((\mathsf {Type}, c_{\mathsf {Type}})\) is final in \(\mathbf {CoAlg}_{\mathrm{fg}}^{p}(F)\).
Proof.
We note that the quotient of a finitely generated coalgebra is also finitely generated [41]. Moreover, \((\mathsf {Type}/ \sim , c_{\mathsf {Type}/ \sim })\) fulfills the condition and is therefore an object in \(\mathbf {CoAlg}_{\mathrm{fg}}^{p}(F)\). By theorem 4.20 and the above discussion, it is final in this category.□
Let us illustrate the constructed homomorphism into \(c_{\mathsf {Type}}\) in practice.
Example 4.22.
We apply the method in the proof of theorem 4.20 to the session coalgebra in Figure 14. First, we have to represent the coalgebra as a system of type equations:
\begin{align*} T_{U} & = \mathsf {un}\mathop {? X_{q}.\,} X_{{U}} & T_{T} & = \mathsf {lin}\mathop {? X_{q}.\,} X_{{U}} & T_{q} & = \texttt {int} \\ T_{V} & = \mathsf {un}\mathop {! X_{q}.\,} X_{{V}} & T_{S} & = \mathsf {lin}\mathop {! X_{q}.\,} X_{{V}} \end{align*}
Next, we need to solve this system for every type by prefixing the equation with a fixed point operator and then substituting for the unbound variables. For instance, we get for the state \(U\) the following solution:
\begin{align*} \overline{T_{U}} & = \mathop {\mu X_{U}.\,} T_{U} \\ & = \mathop {\mu X_{U}.\,} \mathsf {un}\mathop {? X_{q}.\,} X_{{U}} \\ & = \mathop {\mu X_{U}.\,} \mathsf {un}\mathop {? \overline{T_{q}}.\,} {X_U} \\ & = \mathop {\mu X_{U}.\,} \mathsf {un}\mathop {? (\mathop {\mu X_{q}.\,} \texttt {int}).\,} {X_U}. \end{align*}
This type is very generous in using fixed point operators. In the coalgebra \(\mathsf {Type}/ \sim\), this type would be considered equal to
\begin{equation*} {\mathop {\mu X_U.\,}} \mathsf {un}\mathop {? \texttt {int}.\,} {X_{U}}, \end{equation*}
which is the type that we may intuitively derive from Figure 14. The homomorphism \(h :X \rightarrow \mathsf {Type}/ \sim\) assigns to every state \(x\) of the coalgebra (the equivalence class of) the solution \(\overline{T_{x}}\). In particular, we have \(h(T) = {\mathsf {lin}\mathop {? \texttt {int}.\,} \mathop {\mu X_{U}.\,} \mathsf {un}\mathop {? \texttt {int}.\,} X_{U}}\).
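The construction of Example 4.22 can be mimicked in code. The following Python sketch is restricted to the two states \(U\) and \(q\), with a string encoding of types and a representation of the coalgebra that are entirely our own; it generates the equation for each state and solves it by prefixing a fixed point operator and substituting the other solutions:

```python
# The two-state fragment: U is "un ?int. U" (payload on *, continuation
# on d), and q is the basic type int.
coalg = {
    "U": ("un?", {"*": "q", "d": "U"}),
    "q": ("bsc", "int"),
}

def equation(y):
    """The right-hand side T_y of the type equation for state y."""
    op, rest = coalg[y]
    if op == "un?":
        return f"un ?X_{rest['*']}. X_{rest['d']}"
    return rest                        # a bsc state is the data type itself

def solve(y, visiting=()):
    """mu X_y. T_y, with every other variable replaced (parenthesised)
    by its own solution."""
    t = f"mu X_{y}. {equation(y)}"
    for z in coalg:
        if z != y and z not in visiting and f"X_{z}" in t:
            t = t.replace(f"X_{z}", "(" + solve(z, visiting + (y,)) + ")")
    return t
```

Calling `solve("U")` reproduces the generous solution \(\mathop {\mu X_{U}.\,} \mathsf {un}\mathop {? (\mathop {\mu X_{q}.\,} \texttt {int}).\,} X_U\) computed above, superfluous fixed points included.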
Theorem 4.20 and Corollary 4.21 show that there is a correspondence between session types (quotient by bisimilarity), finite session coalgebras (with an appropriate condition), and the rational fixed point. This is similar to the trinity of regular expressions (quotient thereof), deterministic automata, and regular languages. However, theorem 4.19 also shows that this correspondence does not work out fully, because the rational fixed point is not in the category \(\mathbf {CoAlg}_{\mathrm{fg}}^{p}(F)\). We suspect that the correspondence can be restored by altering the functor \(F\). More precisely, one would have to provide a comonad, such that \(\mathbf {CoAlg}_{\mathrm{fg}}^{p}(F)\) is isomorphic to the category of coalgebras over that comonad and then show that a rational fixed point exists in that category. Thus, the comonad would encode the condition that we had to impose on \(F\)-coalgebras in \(\mathbf {CoAlg}_{\mathrm{fg}}^{p}(F)\) and the rational fixed point would be final among them. It seems that such a comonad can be easily obtained as sub-comonad of the cofree comonad of \(F\), but we shall not pursue this route further in the present article. For the time being, we content ourselves with the correspondence between certain session coalgebras and session types as in corollary 4.21.

5 Processes and A Type System Based On Session Coalgebras

Session types are meant to discipline the behaviour of the channels of an interacting process, to ensure that prescribed protocols are executed as intended. Up to here, we have focused on session types (i.e., their representation as session coalgebras and coinductively-defined relations on them) without committing to a specific syntax for processes. This choice is on purpose: Our goal is to provide a truly syntax-independent justification for session types.
In this section, we introduce a syntax of processes and rely on session coalgebras to define a session type system, similar to the one defined by Vasconcelos [61]. Because our goal is to precisely illustrate how session coalgebras (and their underpinning notions) arise in a concrete formulation of the typing rules, our type system includes only a basic support for processes with infinite behaviours and recursive session types—indeed, following Reference [24], we consider processes with replication (noted \(!P\)) rather than with recursion. There is no fundamental difficulty in extending our type system based on session coalgebras to match the expressive forms of recursive session types defined in Reference [61].
A note on terminology. In the following, we often use “type” to refer to the corresponding “state” of the underlying session coalgebra. In our developments, both “state” and “type” can be used interchangeably.

5.1 A Session π-calculus

The \(\pi\)-calculus is a formal model of interactive computation in which processes exchange messages along channels (or names) [43, 51]. As such, it is an abstract framework in which key features such as name mobility, (message-passing) concurrency, non-determinism, synchronous communication, and infinite behaviour have rigorous syntactic representations and precise operational meaning.
We consider a session \(\pi\)-calculus, i.e., a variant of the \(\pi\)-calculus whose operators are tailored to the protocols expressed by session types. We assume base sets of variables, ranged over by \(x, y, z, v, \ldots\), and labels \(\mathbb {L}\), ranged over by \(l, l^{\prime }, \ldots\). The syntax of processes \(P, Q, \ldots\) is based on References [24, 61] and given by the grammar in Figure 10. We discuss the salient aspects of the syntax.
Fig. 10.
Fig. 10. Syntax of processes.
A process \(\overline{x}\langle y \rangle . P\) denotes the output of channel \(y\) along channel \(x\), which precedes the execution of \(P\). Dually, a process \(x(z). P\) denotes the input of a value \(v\) along channel \(x\), which precedes the execution of process \(P[v / z]\), i.e., the process \(P\) in which all free occurrences of \(z\) have been substituted by \(v\).
Processes \(x \rhd \lbrace l_i: P_i\rbrace _{i \in I}\) and \(x \lhd l.P\) implement a labelled choice mechanism. Given a finite index set \(I\), the branching process \(x \rhd \lbrace l_i: P_i\rbrace _{i \in I}\) denotes an external choice: the reception of a label \(l_j\) (with \(j \in I\)) along channel \(x\) precedes the execution of the continuation \(P_j\). The selection process \(x \lhd l.P\) denotes an internal choice; it is meant to interact with a complementary branching.
Given processes \(P\) and \(Q\), process \(P \mid Q\) denotes their parallel composition, which enables their simultaneous execution.
The process \(!P\), the replication of \(P\), denotes the composition of infinite copies of \(P\) running in parallel, i.e., \(P \mid P \mid \cdots\).
Process \(\mathbf {0}\) denotes inaction.
Process \((\nu xy) P\) denotes a restriction operator that declares \(x\) and \(y\) as co-variables, i.e., as complementary endpoints of the same channel, with scope \(P\). This construct is arguably the main difference with respect to usual presentations of the \(\pi\)-calculus.
Our process syntax can be seen as a variant of that in Reference [24] with (co)variables and restriction as in Reference [61]. Binding occurrences of names are \(z\) in \(x(z).P\) and both \(x\) and \(y\) in \((\nu xy) P\).
The operational semantics for processes is defined as a reduction relation, denoted \(\longrightarrow\), by relying on a notion of structural congruence on processes, denoted \(\equiv\). Figure 11 defines these two notions. Intuitively, two processes are structurally congruent if they are identical in behaviour but not necessarily in structure. It is the smallest congruence relation satisfying the axioms in Figure 11 (bottom), which formalise expected equalities for parallel composition, replication, and restriction.
Fig. 11.
Fig. 11. Reduction semantics.
We say a process \(P\) reduces to \(Q\), written \(P \longrightarrow Q\), when there is a single execution step yielding \(Q\) from \(P\). We comment on the rules in Figure 11 (top). Following Reference [61], r-com formalises the exchange of a value over a channel formed by two covariables. Similarly, r-sync formalises the synchronisation between a branching and a selection that realises the labelled choice. Rules r-res and r-par are contextual rules, which allow reduction to proceed under restriction and parallel composition. Finally, Rule r-cong says that reduction is closed under structural congruence; we can use \(\equiv\) to promote interactions that match the structure of the rules above.
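To illustrate Rule r-com concretely, the following Python sketch uses a toy process AST of our own devising (tagged tuples, with a naive capture-ignoring substitution that suffices for closed examples) and performs one communication step between covariables \(x\) and \(y\):

```python
def subst(p, v, z):
    """Replace free occurrences of variable z by v in process p (naive:
    ignores capture, which suffices for this closed example)."""
    if p[0] == "out":                    # ("out", chan, value, cont)
        return ("out", p[1], v if p[2] == z else p[2], subst(p[3], v, z))
    if p[0] == "in":                     # ("in", chan, binder, cont)
        return p if p[2] == z else ("in", p[1], p[2], subst(p[3], v, z))
    if p[0] == "par":
        return ("par", subst(p[1], v, z), subst(p[2], v, z))
    return p                             # ("nil",) denotes inaction

def r_com(x, y, p):
    """One r-com step: (nu x y)(x<v>.P | y(z).Q) -> (nu x y)(P | Q[v/z])."""
    _, left, right = p
    if left[0] == "out" and right[0] == "in" and left[1] == x and right[1] == y:
        v, P = left[2], left[3]
        z, Q = right[2], right[3]
        return ("par", P, subst(Q, v, z))
    return None                          # no communication possible

# x<v>.0 in parallel with y(z).u<z>.0: the receiver forwards the value on u
proc = ("par", ("out", "x", "v", ("nil",)),
               ("in", "y", "z", ("out", "u", "z", ("nil",))))
stepped = r_com("x", "y", proc)
```

After the step, the received value has been substituted into the receiver's continuation, exactly as \(Q[v/z]\) prescribes.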

5.2 Typing Rules

Let \(T, U, V, \ldots\) denote states of some fixed, but arbitrary, session coalgebra \((X, c)\) (cf. Definition 3.2). Variables are associated with these states in a context \(\Gamma\), as described by \(\Gamma ::= \emptyset \mid \Gamma , x : T\;\). Notice that the use of \(\Gamma , x: T\) carries the assumption that \(x\) is not in \(\Gamma\). A context is an unordered, finite set of pairs with at most one pair \((x, T)\) for each variable \(x\). A context is thus isomorphic to a (partial) function from a finite set of variables to the coalgebra states that represent their types. We use \(\Gamma\) to denote this isomorphic function as well: \(\Gamma (x) = T\) if \((x, T) \in \Gamma\). The domain of a context is defined accordingly.
We know \(\mathrm{un}\) states are unrestricted, but they are not the only ones.
Definition 5.1.
A state is unrestricted, written \(\text{un}(T)\), if its operation is \(\mathrm{un}\), \(\mathrm{end}\), or \(\mathrm{bsc}\). A context is unrestricted, written \(\text{un}(\Gamma)\), if all states in \(\Gamma\) are unrestricted, i.e., if \((x, T) \in \Gamma\) implies \(\text{un}(T)\). A state is linear, written \(\text{lin}(T)\), if it is not unrestricted. A context is linear, if all its states are linear.
A context \(\Gamma\) may be split into two parts \(\Gamma _1\) and \(\Gamma _2\), such that the linear states are strictly divided between \(\Gamma _1\) and \(\Gamma _2\), but unrestricted states can be copied. Following Reference [61], context split is a ternary relation, defined by the axioms in Figure 12. We may write \(\Gamma _1 \circ \Gamma _2\) to refer to a context \(\Gamma\) for which \(\Gamma = \Gamma _1 \circ \Gamma _2\) is in the context split relation. Such a split is not defined for arbitrary pairs of contexts; we implicitly assume its existence when writing \(\Gamma _1 \circ \Gamma _2\). Otherwise, \(\Gamma , x: T\) would have two pairs with \(x\), which is not allowed.
Fig. 12.
Fig. 12. Context split.
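The context split relation of Figure 12 can be read as an enumeration procedure: unrestricted entries are copied to both halves, while each linear entry must go to exactly one. The following Python sketch (with our own encoding of contexts as dictionaries from variables to pairs of a kind and a type; the kinds `"un"` and `"lin"` are our labels) enumerates all splits of a context:

```python
def splits(gamma):
    """All pairs (Gamma_1, Gamma_2) with Gamma = Gamma_1 o Gamma_2."""
    items = list(gamma.items())
    def go(i, g1, g2):
        if i == len(items):
            yield dict(g1), dict(g2)
            return
        x, (kind, t) = items[i]
        if kind == "un":                 # unrestricted: copied to both halves
            yield from go(i + 1, g1 + [(x, (kind, t))], g2 + [(x, (kind, t))])
        else:                            # linear: goes to exactly one half
            yield from go(i + 1, g1 + [(x, (kind, t))], g2)
            yield from go(i + 1, g1, g2 + [(x, (kind, t))])
    return list(go(0, [], []))

# a linear channel x and an unrestricted integer v
gamma = {"x": ("lin", "?int.end"), "v": ("un", "int")}
all_splits = splits(gamma)
```

For this \(\Gamma\) there are exactly two splits: \(v\) appears in both halves, whereas the linear \(x\) appears in exactly one.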
The type system considers judgments of the form \(\Gamma \vdash P\), and is defined by the rules in Figure 13. A process \(P\) is well-formed, under a context \(\Gamma\), if there is some inference tree whose root is \(\Gamma \vdash P\) and whose nodes are all valid instantiations of these typing rules. As T-Inact is the only rule that does not depend on the correctness of another process, it forms the leaves of such trees.
Fig. 13.
Fig. 13. Declarative typing rules.
We discuss the typing rules, which can be conveniently read keeping in mind the notations introduced in Definition 3.2 and Proposition 3.4. Rule T-Inact ensures that all linear channels in the context are interacted with until the type becomes unrestricted. If our context contains a variable \(x\) of type \(?\texttt {int}\), then the process is required to read an \(\texttt {int}\) from it. Thus, \(x : {?\texttt {int}} \nvdash \mathbf {0}\). In contrast, process \(x(z).\mathbf {0}\) is well-formed for the same context, using Rules T-Inact and T-In:
Rule T-Res creates a channel by binding together two covariables \(x\) and \(y\), of dual type. Rule T-Par causes unrestricted channels to be copied and linear channels to get split between composite processes, ensuring the latter occur in only a single process (cf. Figure 12). Rule T-Rep handles the replicated process \(!P\), which denotes an infinite composition of copies of process \(P\); hence, a replicated process can only use an unrestricted context.
Together, Rules T-Par and T-Res allow us to introduce new co-variables, with new types, and distribute them. But, only unrestricted types may be copied. Notice that a process does not specify which types to give the newly bound variables:
\begin{align*} v: \texttt {int} \quad & \vdash \quad (\nu xy) \: x(z).\mathbf {0} \mid \overline{y}\langle v \rangle .\mathbf {0} \\ {x:\mathop {\mu X.\,} {\mathrm{un}} \mathop {? \texttt {int}.\,} X} \quad & \vdash \quad \: x(z).\mathbf {0} \mid x(z).\mathbf {0} \\ x: ?\texttt {int} \quad & \nvdash \quad \: x(z).\mathbf {0} \mid x(z).\mathbf {0} \end{align*}
Each action on a channel has its own rule: Rule T-In handles input, binding the channel \(x\) to the continuation type and \(y\) to some supertype of the received type. Rule T-Out handles output, which requires the sent variable to have a subtype of whatever type the channel expects to send. Rule T-Branch handles external choice, where the process needs to offer at least all choices the type describes, coupled with processes that are correctly typed under the respective continuation types. Rule T-Sel only checks whether the single label that was chosen by the process is a valid option, and whether the rest of the process is correct under the continuation type. These rules are only specified for linear states; Rule T-Unpack allows a “\(\mathrm{un}\)” state to be used as if it were the underlying type, as long as it is parallelizable (cf. Definition 4.6).

5.3 Properties of Well-Formed Processes

For well-formed processes, the type system guarantees that:
If the process terminates, then all linear sessions were completed.
If a process reads a value from a channel, then the value has the type specified by the channel’s session type. If a process receives a label, then it is one of the labels specified by the channel’s session type.
We establish a standard type preservation result, which rests upon two auxiliary properties: substitution and subject congruence. Furthermore, substitution preserves the height of inference trees.
Lemma 5.2 (Substitution).
If \(U \sqsubseteq T\) and \((y, V) \not\in \Gamma\) and \(\Gamma , x : T \vdash P\) as shown by some inference tree of height \(n\), then \(\Gamma , y : U \vdash P[y/x]\) can be shown by an inference tree of height \(n\).
Proof.
By induction on the derivation of \(\Gamma , x : T \vdash P\), with a case analysis on the last typing rule used, exploiting the fact that \(\sqsubseteq\) is a pre-order (Proposition 4.10) and therefore transitive. The case analysis constructs an inference tree of the conclusion by simply copying the rules from the premise inference tree, only changing the concrete types and processes, so the transformation is indeed height-preserving.□
Lemma 5.3 (Subject Congruence).
If \(\Gamma \vdash P\) and \(P \equiv Q,\) then \(\Gamma \vdash Q\).
Proof.
By induction on the derivation of \(P \equiv Q\), with a case analysis on the last axiom used.□
Theorem 5.4 (Type Preservation).
If \(\Gamma \vdash P\) and \(P \longrightarrow Q,\) then \(\Gamma \vdash Q\).
Proof.
By induction on the derivation of \(P \longrightarrow Q\), with a case analysis on the last applied rule, using Lemmas 5.2 and 5.3, and inversion principles derived from the assumption \(\Gamma \vdash P\)—all rules, except Rule T-Unpack, are syntax-directed, but ambiguities can be avoided by the conditions on types.□
In Section 4, we defined simulation through the intuition of subtyping as substitutability in one direction. The substitution lemma tells us this is indeed allowed for simulated types.
Theorem 5.5.
The following, more common, rule is admissible from the rules in Figure 13:
\begin{equation*} \begin{array}{cc} \Gamma , x : T \vdash P & U \sqsubseteq T \\ \hline {\Gamma , x : U \vdash P} \end{array} \end{equation*}
Proof.
Direct consequence of lemma 5.2.□
That is, we could add the rule as an axiom, without changing the set of typable processes. As a corollary, bisimilarity of states implies that the states are equivalent with respect to the type system, in a height-preserving fashion.
Corollary 5.6.
For all bisimilar types \(T \sim U\), contexts \(\Gamma\) and processes \(P\), it holds that \(\Gamma , x : T \vdash P\) can be shown by some inference tree of height \(n\) if and only if \(\Gamma , x : U \vdash P\) can be shown by an inference tree of height \(n\).
Proof.
If \(T \sim U\), then by lemma 4.11, we have \(T \sqsubseteq U\) and \(U \sqsubseteq T\), the rest follows by lemma 5.2.□

5.4 Examples

To illustrate unrestricted behaviours and the T-Unpack rule, we first consider the process
\begin{equation*} P_0 =~ !(x(y).\overline{z}\langle y\rangle .\mathbf {0}). \end{equation*}
\(P_0\) is an unrestricted forwarder: It represents arbitrary copies of a process that first inputs a value \(y\) on \(x\), which is then sent along \(z\). Clearly, these are two unrestricted behaviours that determine separate protocols. Suppose the exchanged value is of type \(\texttt {int}\), then the protocols can be specified using the session coalgebra in Figure 14.
Fig. 14.
Fig. 14. Session coalgebra for \(P_0\).
We have the following typing derivation for \(x(y).\overline{z}\langle y\rangle .\mathbf {0}\):
Note that the shift from \(x:U\) to \(x:T\) (and from \(z:V\) to \(z:S\)) is justified by applications of Rule T-Unpack. This is a silent rule on processes—only the type context changes. Their associated premise holds easily by Definitions 4.6 and 4.7. The derivation for \(P_0\) finishes with an application of Rule T-Rep:
Consider now the process
\begin{equation*} P_1 = x \lhd a.x(y).\overline{z}\langle y\rangle .x \lhd b.\mathbf {0}. \end{equation*}
Clearly, \(P_1\) implements three different actions on \(x\): first a selection, then an input of a channel \(y\), and finally another selection. Before the second selection, the output on \(z\) forwards \(y\).
Rather than considering a sequential protocol involving the three actions on \(x\), we will consider the two selections on \(x\) as indicating two independent branches of the same protocol, as specified by the session coalgebra in Figure 15.
Fig. 15.
Fig. 15. Session coalgebra for process \(P_1\).
Accordingly, we have the following typing derivation for \(P_1\):
Recall the alternative completed protocol \(T_{\mathrm{end}}\) of Figure 9. Just like regular \(\mathrm{end}\), \(T_{\mathrm{end}}\) allows no interactions on the channel, but it does not cause a “\(\mathrm{un}\)” type to be unparallelizable. The following example illustrates a more complex session coalgebra that uses \(T_\mathrm{end}\), here called \(q_{2}\).
Figure 16 describes a parallelizable unrestricted state \(T\) such that each copy of a channel in state \(T\) can only do a single receive. However, because it is unrestricted, we can still copy the channel across threads and read a value per copy. We can even read infinitely many values through replication:
\begin{align*} x : T \quad & \nvdash \quad x(y_1).x(y_2).x(y_3).\mathbf {0} \\ x : T \quad & \vdash \quad x(y_1).\mathbf {0} \;\mid \; x(y_2).\mathbf {0} \;\mid \; x(y_3).\mathbf {0} \\ x : T \quad & \vdash \quad !(x(y).\mathbf {0}) \end{align*}
Such a structure might be interesting in combination with session delegation. Figure 17 takes the type of the previous example, and replaces \(\texttt {int}\) by the session \(U = \mathop {! \texttt {int}.\,}\mathop {? \texttt {bool}.\,}\mathrm{end}\) as the type to be communicated over the unrestricted channel.
Fig. 16.
Fig. 16. Session coalgebra using an alternative completed protocol.
Fig. 17.
Fig. 17. Session coalgebra using session delegation and an alternative completed protocol.
The unrestricted channel still allows at most one read per thread, but now each such read establishes a linear session of type \(U\). Let \(P_i = \overline{y_i}\langle z_i \rangle . y_i (b_i) . \mathbf {0}\) be a family of processes that each use a single channel of type \(U\); then
\begin{align*} y_i : U, z_i: \texttt {int} \quad & \vdash \quad P_i.\end{align*}
We can use a channel of type \(T\) to run arbitrarily many instances of \(P_i\) in parallel:
\begin{align*} x : T, \ z_1 : \texttt {int}, \ z_2 : \texttt {int}, \ z_3 : \texttt {int} \quad & \vdash \quad x(y_1).P_1 \;\mid \; x(y_2).P_2 \;\mid \; x(y_3).P_3 \\ x : T, \ z_0 : \texttt {int} \quad & \vdash \quad !(x(y_0).P_0) \\\end{align*}
However, we still cannot use the channel to create multiple sessions of type \(U\) in a single thread. For any context \(\Gamma\) and process \(Q\), we have
\begin{align*} \Gamma , \ x : T \quad & \nvdash \quad x(y_1).x(y_2).Q. \end{align*}

6 Algorithmic Type Checking

As observed by Vasconcelos [61], the typing rules describe what well-formed processes look like, but do not directly allow us to decide whether an arbitrary process is well-formed or not. This is because, beforehand, we do not know:
(1)
Which type to introduce in reading (Rule T-In) or scope restriction (Rule T-Res), or
(2)
How to split the context among processes in parallel composition (Rule T-Par).
We closely follow Vasconcelos [61] to address these issues. Rather than trying to infer the introduced types, we augment the language of processes in Figure 10 with type annotations:
\begin{equation*} P ::= \ldots \mid (\nu xy : T)\, P \mid x (y : T) . P. \end{equation*}
Recall that we assumed all session coalgebras we work with come equipped with a duality function, so we only need to annotate one type for scope restrictions. That is, in writing \((\nu xy : T)\, P\), we assume that \(T\) is the type of the first variable, \(x\), and that its dual \(\overline{T}\) is the type of the second variable, \(y\). Other productions are kept unchanged.
The key idea to circumvent splitting is that a process \(P \mid Q\) is checked as follows: we first pass the entire context to \(P\), keeping track of which linear variables it uses; then, we remove the used linear variables from the context given to \(Q\). To do this, we consider rules of the form \(\Gamma _1 \mathrel {\vdash } P \mathrel {;} \Gamma _2\), where \(\Gamma _1\) and \(P\) are the input to the algorithm and \(\Gamma _2\) is the output: in an execution, \(\Gamma _2\) is the subset of \(\Gamma _1\) containing only those variables that had unrestricted types or were not used in \(P\). We say subset, because we want these variables, if present, to have the same type in \(\Gamma _2\) as in \(\Gamma _1\).
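To make the shape of these judgements concrete, the following Python sketch threads a context through a parallel composition exactly as just described. The processes and types here are toy values of our own invention, not the system of the paper; only the threading discipline is the point.

```python
# Sketch of the algorithmic judgement Γ1 ⊢ P ; Γ2 as a function from an
# input context to an output context. Toy processes: "nil" uses nothing,
# ("use", x) consumes the linear variable x entirely, and ("par", P, Q)
# threads the context left to right, as in Rule A-Par.

def check(gamma, proc):
    if proc == "nil":
        return dict(gamma)            # accept; the whole context is left over
    if proc[0] == "use":
        leftover = dict(gamma)
        leftover.pop(proc[1])         # x is gone from the output context
        return leftover
    if proc[0] == "par":              # Γ1 ⊢ P ; Γ2   and then   Γ2 ⊢ Q ; Γ3
        _, p, q = proc
        return check(check(gamma, p), q)
    raise ValueError(f"unknown process: {proc!r}")

g1 = {"x": "?int.end", "y": "?bool.end"}
assert check(g1, ("par", ("use", "x"), ("use", "y"))) == {}
```

Because `x` is removed from the output of the left component, a second `("use", "x")` in the right component fails, mirroring how the derived context split prevents two threads from sharing a linear channel.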
Figure 18 lists the algorithmic versions of the typing rules. Rule A-Par, for example, checks parallel processes as just described. By construction, \(\Gamma _2\) is one part of the context split required to instantiate Rule T-Par. The linear variables of the other part are exactly those that are present in \(\Gamma _1\) but not in \(\Gamma _2\). This change in A-Par requires adjusting the other rules. First, we need the algorithm to accept even when \(P\) does not complete all sessions of \(\Gamma _1\). We do this by unconditionally accepting the terminated process. Note that acceptance of the algorithm now only implies well-formedness if the returned context is unrestricted.
Fig. 18.
Fig. 18. Algorithmic Type Checking Rules.
Second, the algorithm needs to remove linear variables from the output as we use them. We do not, however, want to remove any variable that has a linear type, as that would allow us to accept processes that do not complete all linear sessions. Thus, we introduce the context difference operator \(\div\) in Figure 19. We write \(\Gamma \div \lbrace x\rbrace\) to denote the context of all variable/type pairs in \(\Gamma\) minus a potential pair including \(x\); this operation is only defined if \((x, T) \in \Gamma\) implies that \(T\) is unrestricted.
Fig. 19.
Fig. 19. Context Difference.
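Read operationally, the context difference is a partial function: it deletes the given variables but is undefined whenever one of them still has a linear type. A small Python sketch, with linearity modelled as a boolean flag on each type (an encoding of our own):

```python
# Γ ÷ {x1, ..., xn}: remove the names from the context; undefined
# (here: a raised ValueError) if any removed name has a linear type.

def context_diff(gamma, names):
    out = dict(gamma)
    for x in names:
        if x in out:
            linear, _ty = out[x]
            if linear:                # (x, T) ∈ Γ with lin(T): undefined
                raise ValueError(f"÷ undefined: {x} is still linear")
            del out[x]
    return out

gamma = {"x": (False, "end"), "y": (True, "?int.?int")}
assert context_diff(gamma, {"x"}) == {"y": (True, "?int.?int")}
```

Applying the operator to a variable that is still linear raises an error, which is exactly the situation in which the algorithm rejects.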
We elaborate on Rule A-Branch; the algorithm is called once for every branch, yielding a context \(\Gamma _l\) each time. Excluding \(x\), each branch must use the exact same set of linear variables. Thus, we require that all these contexts are equal up to a potential \((x, U_l)\) pair. Specifically, there is some \(\Gamma _2\) such that \(\Gamma _2 = \Gamma _l \div \lbrace x\rbrace\) for every \(l \in L_2\); this \(\Gamma _2\) is the output context.
To motivate this, consider a type \(T = \&\lbrace a: T_{un} ,\, b: \mathrm{end}\rbrace\), where \(T_{un}\) is some unrestricted type distinct from \(\mathrm{end}\), and some process \(P = x \rhd \lbrace a: \mathbf {0} ,\, b: \mathbf {0} \rbrace\). Let \(\Gamma\) be some unrestricted context. The process \(\mathbf {0}\) is well-formed for both \(\Gamma , x: T_{un}\) and \(\Gamma , x: \mathrm{end}\); the algorithm agrees:
\begin{align*} & \Gamma , x : T_{un} \mathrel {\vdash } \mathbf {0} \mathrel {;} (\Gamma , x : T_{un}), \\ & \Gamma , x : \mathrm{end} \mathrel {\vdash } \mathbf {0} \mathrel {;} (\Gamma , x : \mathrm{end}). \end{align*}
The resulting contexts are not equal. Still, \(P\) is well-formed for \(\Gamma , x : T\), so a complete algorithm has to allow \(x\) to have different types in the output of different branches. Rules A-In, A-Out, and A-Sel do not have multiple branches to check, but follow similar ideas. When introducing a new variable, either through a read or scope restriction, the new variable is also removed from the output. Rule A-Unpack only unpacks unrestricted types. We want those to have the same type in the input as in the output, so we remove the variable and add a pair with the original type.
Take, for example, the process
\begin{equation*} x:\ ?\texttt {int},\; y:\ ?\texttt {int} \quad \vdash \quad x(z_1).\mathbf {0} \mid y(z_2).\mathbf {0}. \end{equation*}
The variables are split correctly, and both split contexts are unrestricted when the process is completed, thus it is well-formed.
If, however, the left process did not complete the linear session, then the context difference would not have been defined. Take one such process:
\begin{equation*} x:\ ?\texttt {int}.?\texttt {int},\; y:\ ?\texttt {int} \quad \nvdash \quad x(z_1).\mathbf {0} \mid y(z_2).\mathbf {0}. \end{equation*}
We succeed in checking the terminated process of the left part:
\begin{equation*} x:\ ?\texttt {int},\; y:\ ?\texttt {int} \quad \mathrel {\vdash } \quad \mathbf {0} \mathrel {;} \quad (x:\ ?\texttt {int}, y:\ ?\texttt {int}). \end{equation*}
But \(x\) has a linear type in the output. \((x:\ ?\texttt {int}, y:\ ?\texttt {int}) \div \lbrace x\rbrace\) is undefined, so the algorithm rejects this input entirely. The process was indeed not well-formed, and no further parallel processes could fix it; the rejection is expected.
For each process and context there is at most one applicable algorithmic rule: which one applies is determined by the process syntax and by whether the channel being interacted with is unrestricted.
Under the same assumptions as before (i.e., the session coalgebra describing the types is finitely generated), this induced type checking algorithm is decidable, sound, and complete with respect to the typing rules defined in Section 5.
Theorem 6.1 (Decidability).
The type checking algorithm terminates in finite time for every input, assuming a finitely generated session coalgebra.
Proof.
The input of the algorithm is a finite context \(\Gamma\) and a process, a finite expression, \(P\). Just like a proof of well-formedness is a tree of typing rules, an execution of the algorithm produces a tree of algorithmic rules. For any non-Unpack node in the tree, the rule removes some element from the process(es) to be recursively type checked. The process of any such node is thus strictly larger than the concatenation of all its children's processes. Because a process is a finite expression, one can only remove finitely many elements; hence, there can only be finitely many of these non-Unpack nodes in the tree.
For example, A-Par checks a process \(P \mid Q\). Its children check \(P\) and \(Q\), and \(length(P) + length(Q) \lt length(P \mid Q)\), in terms of their string concatenation.
A-Unpack does not change the process, but it does change the type of a variable. Because we assumed a finitely generated coalgebra, each \(\mathrm{un}\) state can either be unpacked into a non-\(\mathrm{un}\) state, or forms a finite cycle of purely \(\mathrm{un}\) states. In the former case, the algorithm proceeds with one of the other, non-A-Unpack, rules. In the latter case, we know the channel in question does not allow any interactions. Because the algorithm only tries to unpack types of variables that the process in question interacts with, detecting such a cycle immediately allows the algorithm to reject. There are finitely many variables in a context, finitely many non-Unpack nodes in the tree, and finitely many Unpack nodes per regular node. Thus, there are finitely many nodes in total.
All of the non-recursive premises are decidable (see Theorem 4.16). As such, a finite tree corresponds to an execution that finishes in finite time.□
Having defined algorithmic typechecking, we can go back to the process syntax that we used to define our typing rules (Figure 10) by erasing type annotations in input and restriction operators. Let \(erase(\cdot)\) denote a function on processes defined as
\begin{align*} erase((\nu xy : T).Q) & = (\nu xy). erase(Q), \\ erase(x(y : {T}) . Q) & = x(y) . erase(Q) \end{align*}
and as a homomorphism on the remaining process constructs. We have:
Theorem 6.2 (Correctness).
\(\Gamma _1 \vdash P\) iff there is an annotated process \(P^{\prime }\) with \(P = erase(P^{\prime })\), \(\Gamma _1 \vdash P^{\prime } ; \Gamma _2\) and \(\text{un}(\Gamma _2)\).
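The erasure function is straightforward to write down concretely. The sketch below uses a small tuple-based process syntax of our own; only the annotated constructs do real work, and all other constructs recurse homomorphically.

```python
# erase(·): drop type annotations from restriction and input prefixes.
# Processes are tuples; "res" and "in" carry the annotation in position 3.

def erase(proc):
    tag = proc[0]
    if tag == "res":                 # (ν x y : T) Q   ->   (ν x y) erase(Q)
        _, x, y, _annot, q = proc
        return ("res", x, y, erase(q))
    if tag == "in":                  # x(y : T).Q      ->   x(y).erase(Q)
        _, x, y, _annot, q = proc
        return ("in", x, y, erase(q))
    if tag == "par":                 # one homomorphic case; others are similar
        return ("par", erase(proc[1]), erase(proc[2]))
    return proc                      # leaves such as ("nil",)

p = ("res", "x", "y", "T", ("in", "x", "z", "U", ("nil",)))
assert erase(p) == ("res", "x", "y", ("in", "x", "z", ("nil",)))
```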
The rest of this section is dedicated to proving this theorem. Correctness is generally broken down into two parts: soundness and completeness. An algorithm is sound if every accepted program is valid (the right-to-left implication) and it is complete if every valid program—annotated with the correct types—is accepted (left-to-right). We start by formalising the algorithm output. Let \(\mathcal {L}(\Gamma)\) and \(\mathcal {U}(\Gamma)\) denote the linear and unrestricted parts, respectively, of any context \(\Gamma\). That is,
\begin{align*} \mathcal {L}(\Gamma) & = \lbrace (x: T) \in \Gamma \mid \text{lin}(T) \rbrace , \\ \mathcal {U}(\Gamma) & = \lbrace (x: T) \in \Gamma \mid \text{un}(T) \rbrace . \end{align*}
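With linearity encoded as a boolean flag, the two parts are plain comprehensions, and algorithmic monotonicity becomes a pair of checkable assertions on any input/output pair of the algorithm. A sketch with made-up contexts:

```python
# L(Γ) and U(Γ): the linear and unrestricted parts of a context, with each
# type encoded as (is_linear, description).

def lin_part(gamma):
    return {x: t for x, t in gamma.items() if t[0]}

def un_part(gamma):
    return {x: t for x, t in gamma.items() if not t[0]}

# Monotonicity on one plausible run: Γ2 ⊆ Γ1 and U(Γ2) = U(Γ1).
g1 = {"x": (True, "?int.end"), "y": (False, "un ?int")}
g2 = {"y": (False, "un ?int")}       # x was consumed by the checked process
assert set(g2.items()) <= set(g1.items())
assert un_part(g2) == un_part(g1)
assert lin_part(g2) == {}
```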
Lemma 6.3 (Algorithmic monotonicity).
If \(\Gamma _1 \vdash P ; \Gamma _2\), then
(1)
\(\Gamma _2 \subseteq \Gamma _1\), and
(2)
\(\mathcal {U}(\Gamma _2) = \mathcal {U}(\Gamma _1).\)
Proof.
The proof is an induction on the structure of the execution tree. We elaborate on A-Unpack. Suppose \(\Delta _1, x : T \mathrel {\vdash } P \mathrel {;} (\Delta _2 \div x), x: T\), for some unrestricted \(T\). We start from the premise of the rule
\begin{equation*} \Delta _1, x : \delta (T)(*) \mathrel {\vdash } P \mathrel {;} \Delta _2. \end{equation*}
By induction,
\begin{align*} \Delta _2 &\subseteq \Delta _1, x : \delta (T)(*), \\ \mathcal {U}(\Delta _2) &= \mathcal {U}(\Delta _1, x : \delta (T)(*)). \end{align*}
Neither of these relations is invalidated by removing \(x\) from both contexts:
\begin{align*} \Delta _2 \div x &\subseteq \Delta _1, \\ \mathcal {U}(\Delta _2 \div x) &= \mathcal {U}(\Delta _1). \end{align*}
Nor by adding the same \(x: T\) pair to both sides.
\begin{align*} (\Delta _2 \div x), x : T &\subseteq \Delta _1, x : T, \\ \mathcal {U}((\Delta _2 \div x), x : T) &= \mathcal {U}(\Delta _1, x : T). \end{align*}
In our proof of soundness, we need algorithmic linear strengthening. In A-Par the entire context is passed along to the first process, but the typing rules require a strict split of linear variables. Linear variables that are still present in the output (thus, not referenced in the process) are safe to remove from the input context.
Lemma 6.4 (Algorithmic linear strengthening).
If \(\text{lin}(T)\) and \(\Gamma _1, x : T \mathrel {\vdash } P \mathrel {;} \Gamma _2, x : T\) can be shown by a tree of height \(n\), then \(\Gamma _1 \mathrel {\vdash } P \mathrel {;} \Gamma _2\) can also be shown by a tree of height \(n\).
Proof.
The proof requires an inductive analysis on the structure of the execution tree. Let us detail two cases; the rest are done in a similar fashion.
When the root of the execution tree is \({\rm A}{\rm\small{-}}{\rm P}{\rm\small{AR}}\), suppose that
\begin{equation*} \Delta _1, x : T \mathrel {\vdash } P \mid Q \mathrel {;} \Delta _3, x: T. \end{equation*}
Then, by premise of the rule
\begin{equation*} \Delta _1, x : T \mathrel {\vdash } P \mathrel {;} \Delta _2, x: T \qquad \text{and}\qquad \Delta _2, x : T \mathrel {\vdash } Q \mathrel {;} \Delta _3, x: T. \end{equation*}
Note that monotonicity and \(x: T\) in the output context imply that \(x : T\) is an element of the input and intermediate contexts as well. Since these are the direct subtrees of the root, they are of height \(n - 1\).
We can use induction on both to get \(\Delta _1 \mathrel {\vdash } P \mathrel {;} \Delta _2\) and \(\Delta _2 \mathrel {\vdash } Q \mathrel {;} \Delta _3\), with trees of height \(n - 1\), so we can construct a tree of height \(n\) with \({\rm A}{\rm\small{-}}{\rm P}{\rm\small{AR}}\) at the root, and \(\Delta _1 \mathrel {\vdash } P \mid Q \mathrel {;} \Delta _3\) as conclusion.
When the root is \({\rm A}{\rm\small{-}}{\rm I}{\rm\small{N}}\), suppose
\begin{equation*} \Gamma _1, x : T, z : V \mathrel {\vdash } z(y : U).P \mathrel {;} \Gamma _2, x : T. \end{equation*}
The premise of the rule tells us, for the continuation type \(V_* = \delta (V)(*),\)
\begin{equation*} \Gamma _1, x : T, y: U, z : V_* \mathrel {\vdash } P \mathrel {;} \Gamma _3, x : T. \end{equation*}
Again, this is a direct subtree, so it is of height \(n- 1\). The inductive hypothesis lets us remove \(x\) from both sides, while preserving the height:
\begin{equation*} \Gamma _1, y: U, z : V_* \mathrel {\vdash } P \mathrel {;} \Gamma _3. \end{equation*}
The rule specifies that \(\Gamma _2, x : T = (\Gamma _3, x : T) \div \lbrace z, y\rbrace\), for some \(\Gamma _3\). The variable \(x\) is preserved through the difference, so it must be distinct from \(z\) and \(y\):
\begin{equation*} (\Gamma _3, x : T) \div \lbrace z, y\rbrace = (\Gamma _3 \div \lbrace z, y\rbrace), x : T. \end{equation*}
Hence, \(\Gamma _3 \div \lbrace z, y\rbrace = \Gamma _2\); the desired result follows from an application of A-In, resulting in a tree of height \(n\):
\begin{equation*} \qquad \qquad \qquad \qquad \qquad \Gamma _1, z : V \mathrel {\vdash } z(y : U).P \mathrel {;} \Gamma _2.{\qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \Box } \end{equation*}
Finally, we can prove soundness.
Theorem 6.5.
\(\Gamma _1 \mathrel {\vdash } P \mathrel {;} \Gamma _2\) and \(\text{un}(\Gamma _2)\) implies \(\Gamma _1 \vdash erase(P)\).
Proof.
By induction on the height of inference trees, using a case analysis on the last rule used. Cases other than A-Par are straightforward; let us illustrate the procedure with A-Branch. Let \(c(T) = (\&, L_1, f)\) and suppose that
\begin{equation*} \Gamma _1, x: T \mathrel {\vdash } x \rhd \lbrace l : P_l\rbrace _{l \in L_2} \mathrel {;} \Gamma _2. \end{equation*}
The premise says that \(\Gamma _1, x : f(l) \mathrel {\vdash } P_l \mathrel {;} \Gamma _2\) for every \(l \in L_1\). By induction, \(\Gamma _1, x : \delta (T)(l) \vdash erase(P_l)\) for the same \(l \in L_1\). The result is directly implied by T-Branch.
Let us elaborate on A-Par. Suppose that \(\Gamma _1 \mathrel {\vdash } P \mid Q \mathrel {;} \Gamma _3\). We know that both processes are accepted, as \(\Gamma _1 \mathrel {\vdash } P \mathrel {;} \Gamma _2\) and \(\Gamma _2 \mathrel {\vdash } Q \mathrel {;} \Gamma _3\). Obviously, any type in \(\mathcal {L}(\Gamma _2)\) is linear, so we can strengthen the first premise: \(\Gamma _1 - \mathcal {L}(\Gamma _2) \mathrel {\vdash } P \mathrel {;} \Gamma _2 - \mathcal {L}(\Gamma _2)\). The output is trivially unrestricted (all linear types were removed), so we can apply the inductive hypothesis to \(P\). Similarly, \(\Gamma _3\) is unrestricted by assumption, so we can apply the inductive hypothesis to \(Q\) as well:
\begin{align*} \Gamma _1 - \mathcal {L}(\Gamma _2) &\vdash erase(P), \\ \Gamma _2 &\vdash erase(Q). \end{align*}
Monotonicity tells us that \(\Gamma _1 = (\Gamma _1 - \mathcal {L}(\Gamma _2)) \circ \Gamma _2\) is in the context split relation. We can conclude \(\Gamma _1 \vdash erase(P \mid Q).\)
Just like we can strengthen a context by removing variables, we can weaken it by adding variables. The linearity of these additions does not matter.
Lemma 6.6 (Algorithmic Weakening).
If \(\Gamma _1 \mathrel {\vdash } P \mathrel {;} \Gamma _2\), then \(\Gamma _1, x : T \mathrel {\vdash } P \mathrel {;} \Gamma _2, x : T\) for any pair \(x: T\) where \(x \not\in dom(\Gamma _1)\) and \(x\) does not occur bound in \(P\).
Proof.
A fairly simple inductive analysis. We detail a single case.
Suppose that \(\Gamma _1 \mathrel {\vdash } P \mid Q \mathrel {;} \Gamma _3\). We can apply induction to both premises, yielding \(\Gamma _1, x : T \mathrel {\vdash } P \mathrel {;} \Gamma _2, x : T\) and \(\Gamma _2, x : T \mathrel {\vdash } Q \mathrel {;} \Gamma _3, x : T\). Therefore, \(\Gamma _1, x : T \mathrel {\vdash } P \mid Q \mathrel {;} \Gamma _3, x : T\).□
Theorem 6.7 (Algorithmic Completeness).
If \(\Gamma _1 \vdash P\), then there exists a \(P^{\prime }\) with \(erase(P^{\prime }) = P\), \(\Gamma _1 \mathrel {\vdash } P^{\prime } \mathrel {;} \Gamma _2\) and \(\Gamma _2\) unrestricted.
Proof.
An inductive analysis on the height of the tree of typing rules. We detail two inductive cases.
Suppose the root of the tree is [T-Rep], then the process being checked must be \(!P\). That is, \(\Gamma _1 \vdash \ !P\). By the inductive hypothesis, there is some \(P^{\prime }\) such that \(\Gamma _1 \mathrel {\vdash } P^{\prime } \mathrel {;} \Gamma _2\) and \(P = erase(P^{\prime })\). Context \(\Gamma _1\) is unrestricted, by premise of the typing rule, and \(\Gamma _2\) is a subset containing at least all unrestricted variables. Hence, the two must be equal, implying \(\Gamma _1 \mathrel {\vdash } !P^{\prime } \mathrel {;} \Gamma _2\) with unrestricted \(\Gamma _2\).
Suppose the root is scope restriction, i.e., \({\Gamma _1} \vdash {(\nu xy) Q}\). By that rule, there exist types \(T\) and \(U\) such that \({\Gamma _1}, x : T, y : U \vdash Q\) and \(T \bot U\). Since duality is unique up to bisimulation (Proposition 4.10), and the duality function yields dual states, we have that \(\overline{T} \sim U\). We use that we can freely substitute bisimilar types (Corollary 5.6) to conclude \(\Gamma _1, x: T, y: \overline{T} \vdash Q\). Furthermore, this substitution is height-preserving, so we can apply the inductive hypothesis and get some annotated process \(Q^{\prime }\) such that \(Q = erase(Q^{\prime })\) and \(\Gamma _1, x: T, y: \overline{T} \mathrel {\vdash } Q^{\prime } \mathrel {;} \Gamma _2\), where \(\Gamma _2\) is unrestricted. In particular, the types of \(x\) and \(y\), if present in \(\Gamma _2\), must be unrestricted, so \(\Gamma _3 = \Gamma _2 \div \lbrace x, y\rbrace\) is a well-defined, unrestricted context. Continuing, \(Q = erase(Q^{\prime })\) implies that \((\nu xy)Q = erase((\nu xy: T)Q^{\prime })\), so A-Res is applicable, yielding the desired result: \(\Gamma _1 \mathrel {\vdash } (\nu xy: T)Q^{\prime } \mathrel {;} \Gamma _3\).□
Correctness, Theorem 6.2, follows directly from soundness and completeness.

7 Context-free Session Types

Besides the session types we have considered so far, there is also a strictly more powerful system called context-free session types [58]. There is a strong resemblance between regular languages and the session types as we have seen them. Indeed, context-free session types expand the discipline to be more like context-free languages.
At the basis lies the sequencing operator “\(\mathrel {;}\)”. A session type \(T ; U\) will first execute \(T\), and once that is finished, execute \(U\). Combined with recursion, this allows for interesting interactions that we could not describe before.
Example 7.1.
Suppose our protocol dictates that we send some number of integers, and then read exactly as many Boolean values. Notice that this behaviour is not expressible in the session types we have discussed, but we can define a session coalgebra for it. That session coalgebra will, however, not be finitely generated (Definition 4.14):
\begin{equation*} T = \mathop {\mu X.\,} \oplus {\left\lbrace \begin{array}{ll} {\it next}: & \mathop {! \texttt {int}} \mathrel {;}X\mathrel {;}\mathop {? \texttt {bool}} \mathrel {;}\mathrm{end}\\ {\it quit}: & \mathrm{end}\end{array}\right.} \end{equation*}
The grammar for these context-free session types is very similar to before (i.e., as defined in Figure 6); we remove the continuations from in- and output, and replace it with the sequencing operator between session types.
We let \(\mathsf {CFType}\) be the set of all \(T\) in the grammar of Figure 20 that are closed and contractive, where contractive means that any type variable \(X\) must occur in or after a pretype \(p\). For instance, the type \(\mathop {\mu X.\,} \mathop {? \texttt {int}} \mathrel {;}X\) is contractive, while \(\mathop {\mu X.\,} X \mathrel {;}X\) is not.
Fig. 20.
Fig. 20. Context-free session types over sets of basic data types \(D\) and of variables \(\mathsf {Var}\).
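Contractivity can be decided by a simple recursion over the syntax tree. The sketch below uses a tuple encoding of our own for the grammar of Figure 20, grouping all payloads and branch continuations under a single `"pre"` constructor; a μ-bound variable is accepted only if it sits inside or after a pretype.

```python
# Types as tuples: ("mu", X, S), ("var", X), ("seq", S1, S2),
# ("pre", sub-types...), ("end",), ("basic", d).

def guards(t):
    """Does t necessarily perform a pretype before anything that follows it?"""
    if t[0] == "pre":
        return True
    if t[0] == "seq":
        return guards(t[1])
    if t[0] == "mu":
        return guards(t[2])
    return False                      # end, basic, var

def contractive(t, unguarded=frozenset()):
    """`unguarded` holds the μ-variables not yet protected by a pretype."""
    tag = t[0]
    if tag == "var":
        return t[1] not in unguarded
    if tag in ("end", "basic"):
        return True
    if tag == "pre":                  # inside a pretype, everything is guarded
        return all(contractive(s) for s in t[1:])
    if tag == "seq":
        after = frozenset() if guards(t[1]) else unguarded
        return contractive(t[1], unguarded) and contractive(t[2], after)
    if tag == "mu":
        return contractive(t[2], unguarded | {t[1]})
    raise ValueError(tag)

# μX. ?int ; X is contractive, μX. X ; X is not:
recv_int = ("pre", ("basic", "int"))
assert contractive(("mu", "X", ("seq", recv_int, ("var", "X"))))
assert not contractive(("mu", "X", ("seq", ("var", "X"), ("var", "X"))))
```

The two checks at the end are exactly the types mentioned above: \(\mathop {\mu X.\,} \mathop {? \texttt {int}} \mathrel {;}X\) passes, while \(\mathop {\mu X.\,} X \mathrel {;}X\) does not.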

7.1 Session Coalgebras with Stacks

In principle, it is possible to equip the set of context-free session types with a session coalgebra structure by using a stack of active states. The issue, however, is that the coalgebra is no longer finitely generated, because the stacks may grow indefinitely and thus a state may reach infinitely many other states. This is the same reason one has to consider push-down automata to recognise context-free languages instead of (non-)deterministic finite automata. The goal of this section is to devise a notion of session coalgebra with stack, which allows the finitary presentation of sessions with sequential composition, together with trace semantics and a determinisation procedure. Trace semantics allows us to understand the behaviour of session coalgebras with stack, while the determinisation procedure allows us to turn them back into session coalgebras, albeit not finitely generated ones.
We first introduce session coalgebras with stacks. The idea is that any state can push states on a stack, which keeps track of the states that will be visited after the current execution has terminated in an \(\mathrm{end}\)- or \(\mathrm{bsc}\)-state.
Definition 7.2 (Stack Session Coalgebras).
Let \(A\) be the set of labels as in Definition 3.2 and define the \(A\)-indexed family \(C\), which differs from \(B\) only for communication labels, by
\begin{equation*} C_{a} = {\left\lbrace \begin{array}{ll} {\lbrace }{\mathcal{d}}{\rbrace }, & a = (\mathrm{com}, p), \\ B_{a}, & \text{otherwise.} \end{array}\right.} \, \end{equation*}
We define the functor \(G :\mathbf {Set}\rightarrow \mathbf {Set}\) on objects by
\begin{equation*} G(X) = \bigg ({\coprod _{a \in A} X^{C_a}}\bigg) \times X^{\ast } \qquad \text{and} \qquad G(f){(a, \beta , u)} = (a, f \circ \beta , f^{\ast }(u)), \end{equation*}
where \((-)^{\ast }\) is the list functor. A coalgebra \((X, c)\) for the functor \(G\) is called a stack session coalgebra.
Note that \(G\) is isomorphic to the polynomial functor given by \(X \mapsto \coprod _{(a, n) \in A \times \mathbb {N}} X^{C_{a} + N_{n}}\), where \(N_n = \lbrace 1, \dots , n\rbrace\). This makes \(G\) again a very well-behaved functor, which we will use later on. For instance, we can easily define bisimilarity for a session coalgebra \(c :X \rightarrow G(X)\) as the greatest fixpoint of the map \(c^{\ast } \circ g_{\sim }\) for a relation lifting \(g_{\sim }\) that is defined analogously to \(f_{\sim }\) from Definition 4.2.
Stack session coalgebras should, intuitively, be an extension of session coalgebras with the possibility of composing executions. This intuition is confirmed by the following lemma.
Lemma 7.3.
There is a natural transformation \(\rho :F \rightarrow G\) defined by
\begin{align*} \rho _{X}(\mathrm{com}, p, \beta) & = (\mathrm{com}, p, {\mathcal{d}} \mapsto \beta ({\mathcal{d}}), [\beta (\ast)]), \\ \rho _{X}(a, \beta) & = (a, \beta , \varepsilon) \qquad \qquad a \not\in {\lbrace }\mathrm{com}{\rbrace } \times P, \end{align*}
where \([-]\) denotes the singleton list and \(\varepsilon\) the empty list. This natural transformation induces a functor \(\mathbf {CoAlg}(F) \rightarrow \mathbf {CoAlg}(G)\) between the categories of session coalgebras without and with stack.
Proof.
We only need to check naturality of \(\rho\); the existence of the induced functor is standard. Let \(f :X \rightarrow Y\) be a map. We need to prove that \(\rho _{Y} \circ F f = Gf \circ \rho _{X}\), where we write \(F f\) instead of \(F(f)\) for the application of \(F\) to \(f\). This shorter notation will be used only in proofs in this section. For \(p \in P\) and \(\beta :{\lbrace }\ast , {\mathcal{d}}{\rbrace } \rightarrow X\), we have
\begin{align*} (\rho _{Y} \circ Ff)(\mathrm{com}, p, \beta) & = \rho _{Y}(\mathrm{com}, p, f \circ \beta) \\ & = (\mathrm{com}, p, {\mathcal{d}} \mapsto (f \circ \beta)({\mathcal{d}}), [(f \circ \beta)(\ast)]) \\ & = (\mathrm{com}, p, f \circ ({\mathcal{d}} \mapsto \beta ({\mathcal{d}})), f^{\ast }[\beta (\ast)]) \\ & = (Gf) (\mathrm{com}, p, {\mathcal{d}} \mapsto \beta ({\mathcal{d}}), [\beta (\ast)]) \\ & = (Gf \circ \rho _{X}) (\mathrm{com}, p, \beta) \, . \end{align*}
Similarly, we also have for all other \((a, \beta)\) that
\begin{equation*} (\rho _{Y} \circ Ff)(a, \beta) = (a, f \circ \beta , \varepsilon) = (a, f \circ \beta , f^{\ast }(\varepsilon)) = (Gf \circ \rho _{X})(a, \beta) \end{equation*}
and thus \(\rho\) is a natural transformation.□
The natural transformation from Lemma 7.3 shows that the continuation transitions originating from communication states are equivalent to sequentially composing a simple data transmission with the continuation state. This interpretation allows us to extend the graphical representation of session coalgebras to stack session coalgebras.
Example 7.4.
Let us return to the type of Example 7.1, but now as a stack session coalgebra. We present the coalgebra graphically in Figure 21, where numbers on transitions indicate where the state will be in the list of states that is pushed on the stack. Formally, the stack session coalgebra induced by Figure 21 is given as follows. We put \(X = {\lbrace }T{\rbrace } \cup {\lbrace }q_{k} \; | \; 1 \le k \le 6{\rbrace }\) and then define \(c :X \rightarrow G(X)\):
\begin{align*} c(T) & = (\oplus , \beta _{0}, \varepsilon) \text{ with } \beta _{0}({\it next}) = q_{1} \text{ and } \beta _{0}({\it quit}) = q_{6}, \\ c(q_{1}) & = (!, \beta _{1}, [{T}, q_{2}]) \text{ with } \beta _{1}({\mathcal{d}}) = q_{4}, \\ c(q_{2}) & = (?, \beta _{2}, [q_{3}]) \text{ with } \beta _{2}({\mathcal{d}}) = q_{5}, \\ c(q_{3}) & = (\mathrm{end}, f_{\emptyset }, \varepsilon), \\ c(q_{4}) & = ({\mathrm{bsc}}, \texttt {int}, f_{\emptyset }, \varepsilon), \\ c(q_{5}) & = ({\mathrm{bsc}}, \texttt {bool}, f_{\emptyset }, \varepsilon), \\ c(q_{6}) & = (\mathrm{end}, f_{\emptyset }, \varepsilon). \end{align*}
Fig. 21.
Fig. 21. A stack-based session coalgebra.
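To see the stack discipline in action, the following Python sketch runs the coalgebra of Figure 21: entering a state first pushes its list of states, a selection is resolved by a scripted choice, and communication or \(\mathrm{end}\) states continue by popping the stack. The encoding is our own; the basic-type states \(q_4\) and \(q_5\) are folded into the payload field.

```python
# States of Figure 21 as (kind, data, pushed-states); "sel" is the internal
# choice ⊕, and "!"/"?" carry the basic type of their payload.

coalg = {
    "T":  ("sel", {"next": "q1", "quit": "q6"}, []),
    "q1": ("!", "int",  ["T", "q2"]),
    "q2": ("?", "bool", ["q3"]),
    "q3": ("end", None, []),
    "q6": ("end", None, []),
}

def run(state, script):
    """Execute from `state`, resolving each ⊕ with the next scripted label;
    return the trace of performed actions."""
    stack, trace, script = [], [], list(script)
    while True:
        kind, data, push = coalg[state]
        stack = push + stack                 # pushed states run first
        if kind == "sel":
            label = script.pop(0)
            trace.append(("sel", label))
            state = data[label]
            continue
        if kind in ("!", "?"):
            trace.append((kind, data))
        if not stack:                        # nothing left to execute
            return trace
        state, stack = stack[0], stack[1:]   # pop the continuation

# Two "next"s buy two sends; after "quit", exactly two receives remain.
trace = run("T", ["next", "next", "quit"])
assert [e for e in trace if e[0] == "!"] == [("!", "int")] * 2
assert [e for e in trace if e[0] == "?"] == [("?", "bool")] * 2
```

The number of pending \(q_2\) states on the stack counts how many \(\texttt {bool}\)s are still owed, which is precisely the non-regular behaviour of Example 7.1.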
More generally, we can see the set of (closed and contractive) context-free session types as a stack session coalgebra.
Lemma 7.5.
There is a stack session coalgebra \(c_{\mathsf {CFType}} :\mathsf {CFType}\rightarrow G(\mathsf {CFType})\).
Proof.
We define first a map \(o\) from pretypes to \(G(\mathsf {CFType})\) as follows:
\begin{align*} o(\mathop {? T}) & = (\mathrm{com}, \mathrm{in}, \mathcal{d}\mapsto T, \varepsilon), & o(\mathop {! T}) & = (\mathrm{com}, \mathrm{out}, \mathcal{d}\mapsto T, \varepsilon), \\ o(\&\lbrace \ell _i : S_i\rbrace _{i \in I}) & = (\mathrm{branch}, \mathrm{in}, \ell _{i} \mapsto S_{i}, \varepsilon), & o(\oplus \lbrace \ell _i : S_i\rbrace _{i \in I}) & = (\mathrm{branch}, \mathrm{out}, \ell _{i} \mapsto S_{i}, \varepsilon). \end{align*}
Using this map, we can define \(c_{\mathsf {CFType}}\) recursively:
\begin{align*} c_{\mathsf {CFType}}(d) & = (\mathrm{bsc}, d, {f_\emptyset }, \varepsilon), \\ c_{\mathsf {CFType}}(\mathrm{end}) & = (\mathrm{end}, {f_\emptyset }, \varepsilon), \\ c_{\mathsf {CFType}}(\mathsf {lin}\; p) & = o(p), \\ c_{\mathsf {CFType}}(\mathsf {un}\; p) & = (\text{un}, {* \mapsto {}} \mathsf {lin}\; p, \varepsilon), \\ c_{\mathsf {CFType}}(S_{1} \mathrel {;}S_{2}) & = (a, f, u : S_{2}), \text{ where } c_{\mathsf {CFType}}(S_{1}) = (a, f, u), \\ c_{\mathsf {CFType}}(\mathop {\mu X.\,} S) & = c_{\mathsf {CFType}}(\mathit {unfold}(\mathop {\mu X.\,} S)). \end{align*}
This recursion terminates, because all types in \(\mathsf {CFType}\) are closed and contractive.□
The reader may have noticed that we chose to allow a stack session coalgebra to push an arbitrary, but finite, number of states on the stack. This is not strictly necessary, as it could be simulated by repeated pushing. However, it is more convenient to allow an arbitrary increase of the stack. For instance, in the definition of \(c_{\mathsf {CFType}}\) above, we turned the sequential composition into adding an element to the end of the list of types that we push on the stack. If we were to restrict this, then we would have to find the left-most type in a sequence of sequential compositions, use this type for the output, and then push the sequential composition of the remaining types on the stack. For instance, instead of \(c_{\mathsf {CFType}}((\mathrm{end}\mathrel {;}T) \mathrel {;}S) = (\mathrm{end}, [T, S])\), we would have \(c_{\mathsf {CFType}}((\mathrm{end}\mathrel {;}T) \mathrel {;}S) = (\mathrm{end}, [T \mathrel {;}S])\). This would be an inconvenient and rather odd definition.

7.2 Trace Semantics of Stack Session Coalgebras

Example 7.4 gave an intuitive description of the behaviour of the stack session coalgebra in Figure 21. As \(G\) is a polynomial functor, there is a final stack session coalgebra [1, 62]. However, this final coalgebra will just consist of coinductively defined trees with potentially infinite depth that are labelled in \(A\) and where the branching at nodes is given by the family \(C\) and lists [60]. Such trees correspond to unravelling stack session coalgebras, but these trees carry no information about the order of execution. Instead, we need to enforce the sequential execution in the semantics of stack session coalgebras, which essentially corresponds to visiting the unravelled tree depth-first from left to right. This can be achieved by either altering the functor to a monad that enforces the stack behaviour [26] or by considering traces in stack session coalgebras. We will opt for the latter approach, because it is more elementary and it demonstrates how functors can be used to formalise semantics, called functorial semantics. Traces can then be used to verify the execution order that a type system based on stack session coalgebras has to enforce.
Trace semantics are well-studied for various kinds of transition systems. We will follow here the approach taken in coalgebraic modal logic [36, 37], although there is some discrepancy that we are unable to resolve for the time being. This will be discussed toward the end.
To use coalgebraic modal logic, let us recall the contravariant powerset functor \(2^{-} :\mathbf {Set}^{\text{op}} \rightarrow \mathbf {Set}\). This functor maps a set \(X\) to the set of maps into the two element set 2, or in other words, \(2^{X}\) is the set of predicates on \(X\). The action of this functor on maps is given by \(2^{f}(p) = p \circ f\). Note that for \(f :X \rightarrow Y\), we have \(2^{f} :2^{Y} \rightarrow 2^{X}\), which makes this functor contravariant. An important property of the contravariant powerset functor is that it forms a contravariant adjunction with itself, meaning that the set of maps \(X \rightarrow 2^{Y}\) is isomorphic to the set of maps \(Y \rightarrow 2^{X}\). This can be recognised as the transposition of maps.
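As a concrete sketch (ours, not part of the formal development; predicates are modelled as Boolean-valued Python functions), the action \(2^{f}(p) = p \circ f\) and the transposition between maps \(X \rightarrow 2^{Y}\) and \(Y \rightarrow 2^{X}\) look as follows:

```python
def powerset_on_maps(f):
    """2^f : 2^Y -> 2^X for f : X -> Y, i.e. precomposition with f."""
    return lambda p: (lambda x: p(f(x)))

def transpose(g):
    """Turn g : X -> 2^Y into its transpose Y -> 2^X by swapping arguments.
    Applying transpose twice gives back (an extensionally equal copy of) g,
    witnessing the contravariant self-adjunction of 2^-."""
    return lambda y: (lambda x: g(x)(y))

# Example with X = Y = int: f doubles, p tests evenness.
f = lambda x: 2 * x
p = lambda y: y % 2 == 0
q = powerset_on_maps(f)(p)          # q(x) = p(2x), which always holds

g = lambda x: (lambda y: x <= y)    # g : X -> 2^Y
h = transpose(g)                    # h(y)(x) = g(x)(y), i.e. x <= y
```

The transposition is exactly currying/argument swapping, which is why the two hom-sets are isomorphic.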
The variant of coalgebraic modal logic, which we will use here, starts with the following diagram. In this diagram, we use arrows with a tail to indicate contravariant functors. The (covariant) functor \(L\) will determine the formulas of a testing logic.
The idea behind this diagram is that the left-hand side determines the coalgebraic behaviour of (stack session) coalgebras, while the right-hand side determines the logical behaviour. These two sides of the diagram are intimately related through the contravariant adjunction. We use this picture to provide trace semantics for stack session coalgebras [37].
Definition 7.6.
Let \(T\) be the set of trace labels given by
\begin{equation*} T = \bigg ({\coprod _{a \in A \setminus ({\lbrace }\mathrm{end}{\rbrace } \cup {\lbrace }\mathrm{bsc}{\rbrace } \times D)} C_{a}}\bigg) + {\lbrace }\mathrm{end}{\rbrace } + D, \end{equation*}
and for \(\mathbf {1}= {\lbrace }\ast {\rbrace }\), we define the logic functor \(L :\mathbf {Set}\rightarrow \mathbf {Set}\) by
\begin{equation*} L(X) = T \times X + \mathbf {1}\qquad \text{and} \qquad L(f) = \mathrm{id}_{T} \times f + \mathrm{id}_{\mathbf {1}} \, . \end{equation*}
Note that \(T\) is defined as the coproduct of all \(C_{a}\) with the exception of \(C_{\mathrm{end}}\) and \(C_{\mathrm{bsc}}\). The reason for this is that the latter two sets are empty, but we want to retain the labels in the traces, even though states labelled with \(\mathrm{end}\) or \(\mathrm{bsc}\) are terminal. Also, we use the coproduct here, instead of set union, to guarantee that we obtain a disjoint union. Our goal is to use lists over \(T\) as traces for stack session coalgebras. For this, we will make use of the fact that \(L\) has the set \(T^{\ast }\) of lists over \(T\) as its initial algebra. The following theorem shows that we can compute traces that are compatible with bisimilarity.
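Concretely, initiality of \(T^{\ast }\) means that every \(L\)-algebra \(\alpha :T \times X + \mathbf {1} \rightarrow X\) induces a unique map \(T^{\ast } \rightarrow X\) by folding. The following Python sketch (our own encoding, with the algebra split into a `base` value for the \(\mathbf {1}\) summand and a `step` function for the \(T \times X\) summand) illustrates this:

```python
def fold(step, base, word):
    """The unique L-algebra homomorphism T* -> X for the algebra
    alpha(*) = base and alpha(t, x) = step(t, x), computed by
    structural recursion on the list."""
    if not word:
        return base
    return step(word[0], fold(step, base, word[1:]))

# Instance 1: X = int, the algebra that counts labels.
length = lambda w: fold(lambda t, n: n + 1, 0, w)

# Instance 2: X = 2, a toy trace predicate that accepts a word
# exactly when every label equals "end".
accepts = lambda w: fold(lambda t, b: t == "end" and b, True, w)
```

Both instances arise from the same fold; only the algebra varies, which is how the logic functor \(L\) packages trace semantics.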
Theorem 7.7.
There is a contravariant functor \(\Gamma :\mathbf {CoAlg}(G) \rightarrow \mathbf {Alg}(L)\) from the category of stack session coalgebras to the category of \(L\)-algebras that makes the following diagram commute, where \(2^{\mathrm{Id}\times \mathrm{Id}^{\ast }}(X) = 2 ^{X \times X^{\ast }}\) is the functor mapping a set \(X\) to predicates over \(X \times X^{\ast }\) and the downward arrows are the forgetful functors. This makes \(\Gamma\) a so-called lifting of \(2^{\mathrm{Id}\times \mathrm{Id}^{\ast }}\):
Proof.
We first define \(\Gamma\) on a coalgebra \(c :X \rightarrow GX\):
\begin{align*} & \Gamma (c) :L\bigg ({ {2^{X \times X^{\ast }}}}\bigg) \rightarrow 2^{X \times X^{\ast }} \\ & \begin{aligned}\Gamma (c)(\ast)(x, u) & = 1 \\ \Gamma (c)(\mathrm{end}, P)(x, u) & = {\left\lbrace \begin{array}{ll} P(x^{\prime }, u^{\prime }), & c(x) = (\mathrm{end}, f_{\emptyset }, v) \text{ and } v + u = x^{\prime } : u^{\prime } \\ 0, & \text{otherwise} \end{array}\right.} \\ \Gamma (c)(d, P)(x, u) & = {\left\lbrace \begin{array}{ll} P(x^{\prime }, u^{\prime }), & c(x) = (\mathrm{bsc}, d, f_{\emptyset }, v) \text{ and } v + u = x^{\prime } : u^{\prime } \\ 0, & \text{otherwise} \end{array}\right.} \\ \Gamma (c)(a, b, P)(x, u) & = {\left\lbrace \begin{array}{ll} P(\beta (b), v + u), & c(x) = (a, \beta , v)\\ 0, & \text{otherwise} \end{array}\right.} \end{aligned} \end{align*}
and for another coalgebra \({c^{\prime }}:Y \rightarrow GY\) and a homomorphism \(h :c \rightarrow {c^{\prime }}\), we define
\begin{align*} & \Gamma (h) :2^{Y \times Y^{\ast }} \rightarrow 2^{X \times X^{\ast }}, \\ & \Gamma (h) = 2^{h \times h^{\ast }} \, . \end{align*}
Clearly, \(\Gamma (\mathrm{id}) = \mathrm{id}\) and \(\Gamma (h \circ h^{\prime }) = \Gamma (h^{\prime }) \circ \Gamma (h)\). Also, the diagram clearly commutes by the definition of \(\Gamma\).
Note that the case in which \(dom(\beta) = \emptyset\) means that \(a\) is either \(\mathrm{end}\) or \((\mathrm{bsc}, d)\) for some basic type \(d\). In either case, a state is taken from the top of the stack.
Thus, it remains to prove that \(\Gamma (h)\) is an algebra homomorphism, which means we have to prove that the following diagram commutes:
We will prove this by going through the four cases in the definition of \(\Gamma ({c^{\prime }})\). The first case is the application to \(\ast \in \mathbf {1}\), where we use that \(L(\Gamma h)(\ast) = \ast\):
\begin{equation*} (\Gamma h \circ \Gamma {c^{\prime }})(\ast)(x, u) = (\Gamma {c^{\prime }})(\ast)(h(x), h^{\ast }(u)) = 1 = (\Gamma c)(\ast)(x, u) = (\Gamma c \circ L(\Gamma h))(\ast)(x, u). \end{equation*}
The second case is the application to \(\mathrm{end}\) and \(P :Y \times Y^{\ast } \rightarrow 2\):
\begin{align*} (\Gamma c \circ L(\Gamma h))(\mathrm{end}, P)(x, u) & = (\Gamma c)(\mathrm{end}, P \circ (h \times h^{\ast }))(x, u) \\ & = {\left\lbrace \begin{array}{ll} (P \circ (h \times h^{\ast }))(x^{\prime }, u^{\prime }), & c(x) = (\mathrm{end}, \beta , v) \text{ and } v + u = x^{\prime } : u^{\prime } \\ 0 , & \text{otherwise} \end{array}\right.} \\ & = {\left\lbrace \begin{array}{ll} P(h(x^{\prime }), h^{\ast }(u^{\prime })), & c(x) = (\mathrm{end}, \beta , v) \text{ and } v + u = x^{\prime } : u^{\prime } \\ 0 , & \text{otherwise} \end{array}\right.} \\ & = {\left\lbrace \begin{array}{ll} P(y^{\prime }, w^{\prime }), & c(x) = (\mathrm{end}, \beta , v) \text{ and } h^{\ast }(v) + h^{\ast }(u) = y^{\prime } : w^{\prime } \\ 0 , & \text{otherwise,} \end{array}\right.} \qquad \text{using } y^{\prime } : w^{\prime } = h(x^{\prime }) : h^{\ast }(u^{\prime }) = h^{\ast }(x^{\prime } : u^{\prime }) \text{ and } h^{\ast }(v + u) = h^{\ast }(v) + h^{\ast }(u) \\ & = {\left\lbrace \begin{array}{ll} P(y^{\prime }, w^{\prime }), & Gh(c(x)) = (\mathrm{end}, h \circ \beta , h^{\ast }(v)) \\ & \text{ and } h^{\ast }(v) + h^{\ast }(u) = y^{\prime } : w^{\prime } \\ 0 , & \text{otherwise} \end{array}\right.} \\ & = {\left\lbrace \begin{array}{ll} P(y^{\prime }, w^{\prime }), & {c^{\prime }}(h(x)) = (\mathrm{end}, \gamma , w) \text{ and } w + h^{\ast }(u) = y^{\prime } : w^{\prime } \\ 0 , & \text{otherwise} \end{array}\right.} \qquad \text{using } Gh(c(x)) = {c^{\prime }}(h(x)),\ \gamma = h \circ \beta ,\ w = h^{\ast }(v) \\ & = (\Gamma {c^{\prime }})(\mathrm{end}, P)(h(x), h^{\ast }(u)) \\ & = (\Gamma h \circ \Gamma {c^{\prime }})(\mathrm{end}, P)(x, u). \end{align*}
The third case, application to an element in \({\lbrace }\mathrm{bsc}{\rbrace } \times D\) and a predicate \(P\) follows the same reasoning.
The last case is the application to \(a \in A \setminus {\lbrace }\mathrm{end}{\rbrace }\), \(b \in C_{a}\) and \(P :Y \times Y^{\ast } \rightarrow 2\):
\begin{align*} (\Gamma c \circ L(\Gamma h))(a, b, P)(x, u) & = (\Gamma c)(a, b, P \circ (h \times h^{\ast }))(x, u) \\ & = {\left\lbrace \begin{array}{ll} (P \circ (h \times h^{\ast }))(\beta (b), v + u), & c(x) = (a, \beta , v) \\ 0, & \text{otherwise} \end{array}\right.} \\ & = {\left\lbrace \begin{array}{ll} P(h(\beta (b)), h^{\ast }(v + u)), & c(x) = (a, \beta , v) \\ 0, & \text{otherwise} \end{array}\right.} \\ & = {\left\lbrace \begin{array}{ll} P(\gamma (b), w + h^{\ast }(u)), & Gh(c(x)) = (a, \gamma , w) \\ 0, & \text{otherwise,} \end{array}\right.} \qquad \text{using } (a, \gamma , w) = (a, h \circ \beta , h^{\ast }(v)) \\ & = {\left\lbrace \begin{array}{ll} P(\gamma (b), w + h^{\ast }(u)), & {c^{\prime }}(h(x)) = (a, \gamma , w) \\ 0, & \text{otherwise} \end{array}\right.} \\ & = (\Gamma {c^{\prime }})(a, b, P)(h(x), h^{\ast }(u)) \\ & = (\Gamma h \circ \Gamma {c^{\prime }})(a, b, P)(x, u). \end{align*}
This shows that \(\Gamma c \circ L(\Gamma h) = \Gamma h \circ \Gamma {c^{\prime }}\) in all cases; thus, \(\Gamma (h)\) is an algebra homomorphism and \(\Gamma\) is a functor.□
Using Theorem 7.7 and the fact that \(L\) has the set \(T^{\ast }\) of lists over \(T\) as its initial algebra, we obtain traces of stack session coalgebras.
Corollary 7.8.
For every stack session coalgebra \(c :X \rightarrow G(X)\) there is a map \(\tilde{c} :X \rightarrow 2^{T^{\ast }}\) that assigns to a state the (finite) traces originating from it. Moreover, whenever two states \(x\) and \(y\) are bisimilar in \(c\), then \(\tilde{c}(x) = \tilde{c}(y)\).
Proof.
Since \(T^{\ast }\) is an initial \(L\)-algebra, we obtain from \(c\), in the diagram below on the left, the unique algebra homomorphism \(h\) by applying Theorem 7.7. We can then transpose \(h\) to \(\hat{h}\) on the right and compose it with the map that sends any \(x \in X\) to the pair \((x, \varepsilon) \in X \times X^{\ast }\). This gives us the map \(\tilde{c}\):
Since \(G\) is a polynomial functor, a bisimulation on \(c\) is given equivalently [54] by a span of coalgebra homomorphisms in \(\mathbf {CoAlg}(G),\)
where \(R \subseteq X \times X\) is a relation with a coalgebra \({c_{R}} :R \rightarrow G(R)\) on it, and \(\pi _{1}\) and \(\pi _{2}\) are the product projections. This means that \(\pi _{1}\) and \(\pi _{2}\) must be coalgebra homomorphisms. By Theorem 7.7, we then obtain algebra homomorphisms in the other direction:
As \(\Gamma (\pi _{1})\) and \(\Gamma (\pi _{2})\) are algebra homomorphisms, the following diagram commutes for \(k = 1, 2\):
Since \(T^{\ast }\) is an initial algebra, we obtain \(\Gamma (\pi _{1}) \circ h = \Gamma (\pi _{2}) \circ h\). We can use this to obtain trace equivalence for bisimilar states: Let \(x, y \in X\) be bisimilar, then there is a relation \(R\) that gives rise to a span of coalgebras as above. Applying the equality we just obtained, we then have for all \(w \in T^{\ast }\) that
\begin{align*} \tilde{c}(x)(w) & = \hat{h}(x, \varepsilon)(w) \\ & = h(w)(x, \varepsilon) \\ & = (\Gamma (\pi _{1}) \circ h)(w)((x, y), \varepsilon) \\ & = (\Gamma (\pi _{2}) \circ h)(w)((x, y), \varepsilon) \\ & = \tilde{c}(y)(w) \, . \end{align*}
Since this holds for all \(w\), we have \(\tilde{c}(x) = \tilde{c}(y)\), as desired.□
Let us try to understand this result a bit better. For any coalgebra \(c :X \rightarrow G(X)\), \(x \in X\) and \(w \in T^{\ast }\), we obtain from the proof that
\begin{equation*} \tilde{c}(x)(w) = h(w)(x, \varepsilon), \end{equation*}
and thus for the empty list,
\begin{equation*} \tilde{c}(x)(\varepsilon) = h(\varepsilon)(x, \varepsilon) = \Gamma (c)(\ast)(x, \varepsilon) = 1 \, . \end{equation*}
This means that the empty list is a trace for any stack session coalgebra. For composed lists, however, we have for \(a \ne \mathrm{end}\) that
\begin{align*} h((a, b) : w)(x, u) & = \Gamma (c)(a, b, h(w))(x, u) \\ & = h(w)(\beta (b), v + u) \qquad \qquad \text{for } c(x) = (a, \beta , v), \end{align*}
and similarly for \(a = \mathrm{end}\). Combining this with the fact that the empty list is always a trace, we have that any list of labels that is compatible with the transitions of a stack session coalgebra is a (partial) trace.
More concretely, let us compute some traces for the coalgebra given in Example 7.4.
Example 7.9.
We shall refer to the stack session coalgebra induced by Figure 21 as \(c :X \rightarrow G(X)\) with \(X = {\lbrace }T{\rbrace } \cup {\lbrace }q_{k} \; | \; 1 \le k \le 6{\rbrace }\). As any list of labels that is compatible with \(c\) is a trace, we would expect that, for instance, \([(\oplus , {\it next}), (!, {\mathcal{d}}), \texttt {int} ]\) is a trace. Indeed, using the above formulas and
\begin{align*} c(T) & = (\oplus , \beta _{1}, \varepsilon) \text{ with } \beta _{1}({\it next}) = q_{1}, \\ c(q_{1}) & = (!, \beta _{2}, [{T}, q_{2}]) \text{ with } \beta _{2}({\mathcal{d}}) = q_{4}, \end{align*}
we have
\begin{align*} \tilde{c}(T)[(\oplus , {\it next}), (!, {\mathcal{d}}), \texttt {int} ] & = h[(!, {\mathcal{d}}), \texttt {int} ](q_{1}, \varepsilon) \\ & = h[\texttt {int} ](q_{4}, [{T}, q_{2}]) \\ & = h(\varepsilon)({T}, [q_{2}]) \\ & = 1 \, . \end{align*}
It should be noted that in the second to last line, we stop computing in state \(T\) and still have state \(q_{2}\) remaining on the stack. Using a similar computation, it can easily be verified that the word \(w\) with
\begin{equation*} w = (\oplus , {\it next}) : (!, {\mathcal{d}}) : \texttt {int}: (\oplus , {\it quit}) : \mathrm{end}: (?, {\mathcal{d}}) : \texttt {bool}: \mathrm{end} \end{equation*}
is a trace of \(c\) and that the computation ends in state \(q_{6}\) with an empty stack.
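The recurrence for \(h\) can be run directly. The following Python sketch (our own encoding of states and labels, with \(\beta\) as a dictionary) implements the trace check on the fragment of the coalgebra of Example 7.9 that the first computation uses; the clause for \(q_{4}\) as a basic \(\texttt{int}\) state is reconstructed from the step \(h[\texttt {int} ](q_{4}, [T, q_{2}]) = h(\varepsilon)(T, [q_{2}])\):

```python
# Each state maps to (label, {choice: successor}, pushed-states).
# End/basic states have an empty choice map and pop from the stack instead.
c = {
    "T":  ("internal-choice", {"next": "q1"}, []),
    "q1": ("!",               {"d": "q4"},    ["T", "q2"]),
    "q4": ("int",             {},             []),   # basic type: pops
}

def is_trace(word, state, stack):
    """h(word)(state, stack): True iff `word` is a (partial) trace."""
    if not word:
        return True                      # the empty word is always a trace
    label, beta, pushed = c[state]
    head, *rest = word
    if beta:                             # communication/choice step (a, b)
        kind, choice = head
        if kind != label or choice not in beta:
            return False
        return is_trace(rest, beta[choice], pushed + stack)
    # end / basic state: the label must match, then pop from v + u
    if head != label:
        return False
    popped = pushed + stack
    if not popped:
        return False
    return is_trace(rest, popped[0], popped[1:])

ok = is_trace([("internal-choice", "next"), ("!", "d"), "int"], "T", [])
```

Running the check reproduces the computation above: the recursion passes through \((q_{1}, \varepsilon)\) and \((q_{4}, [T, q_{2}])\) and stops at \((T, [q_{2}])\) with value 1.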
Keeping track of the stack allows us to “determinise” stack session coalgebras into session coalgebras [37].
Theorem 7.10.
There is a determinisation functor \(D\) that turns stack session coalgebras into session coalgebras, such that the following diagram commutes, where the downward arrows are again the respective forgetful functors:
Proof.
Toward defining the determinisation functor, we define two natural transformations \(\lambda :G \rightarrow F(-^{\ast })\) and \(\sigma :F(-^{\ast }) \times (-)^{\ast } \rightarrow F(-^{\ast })\) by putting
\begin{align*} & \lambda _{X} :GX \rightarrow F(X^{\ast }) \\ & \lambda _{X}(\mathrm{com}, p, \beta , u) = (\mathrm{com}, p, \beta ^{\prime }), & & \text{where } \beta ^{\prime }({\mathcal{d}}) = [\beta ({\mathcal{d}})] \text{ and } \beta ^{\prime }(\ast) = u \\ & \lambda _{X}(a, \beta , u) = (a, \beta ^{\prime }), & & \text{where } \beta ^{\prime }(s) = \beta (s) : u \end{align*}
and
\begin{align*} & \sigma _{X} :F(X^{\ast }) \times X^{\ast } \rightarrow F(X^{\ast }) \\ & \sigma _{X}((\mathrm{com}, p, \beta), v) = (\mathrm{com}, p, \beta ^{\prime }), & & \text{where } \beta ^{\prime }({\mathcal{d}}) = \beta ({\mathcal{d}}) \text{ and } \beta ^{\prime }(\ast) = \beta (\ast) + v \\ & \sigma _{X}((a, \beta), v) = (a, \beta ^{\prime }), & & \text{where } \beta ^{\prime }(s) = \beta (s) + v. \end{align*}
It is straightforward to show that these definitions are natural in \(X\). Using the transformations \(\lambda\) and \(\sigma\), the determinisation functor is defined on coalgebras \(c :X \rightarrow GX\) by
\begin{align*} & D(c) :X^{\ast } \rightarrow F(X^{\ast }), \\ & D(c)(\varepsilon) = (\mathrm{end}, f_{\emptyset }), \\ & D(c)(x : v) = \sigma _{X}(\lambda _{X}(c(x)), v), \end{align*}
and on homomorphisms \(h :(X, c) \rightarrow (Y, c^{\prime })\) by \(D(h) = h^{\ast }\). A routine calculation shows that \(D(h)\) is a homomorphism from \(D(c)\) to \(D({c^{\prime }})\) by using naturality of \(\lambda\) and \(\sigma\). That \(D\) preserves identities and composition is immediate by \((-)^{\ast }\) being a functor.□
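Unfolding the definitions for the non-\(\mathrm{com}\) clause, \(D(c)(x : v) = \sigma _{X}(\lambda _{X}(c(x)), v)\) sends a configuration \(x : v\) with \(c(x) = (a, \beta , u)\) to \((a, s \mapsto \beta (s) : (u + v))\). A small Python sketch of \(D\) under this reading (our encoding; states as strings, \(\beta\) as a dictionary, stacks as lists):

```python
def determinise(c):
    """Turn a stack session coalgebra c, given as c[x] = (label, beta, pushed),
    into D(c) : X* -> F(X*) for the non-com clauses."""
    def d(config):
        if not config:                      # D(c)(epsilon) = (end, f_emptyset)
            return ("end", {})
        x, *v = config
        label, beta, pushed = c[x]
        # lambda: beta'(s) = beta(s) : pushed; sigma: append the rest v.
        return (label, {s: [beta[s]] + pushed + v for s in beta})
    return d

# Example: from x, choosing "l" leads to y while pushing z; w is the
# remainder of the configuration list.
c = {"x": ("internal-choice", {"l": "y"}, ["z"])}
label, beta = determinise(c)(["x", "w"])
```

The successor configuration for choice `"l"` is the list \([y, z, w]\): the direct successor, followed by the pushed state, followed by the rest of the old configuration, exactly as \(\lambda\) and \(\sigma\) prescribe.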
Both Theorems 7.7 and 7.10 follow the outline for constructing traces and for determinising stack session coalgebras via coalgebraic modal logic, as found in the coalgebra literature [37]. However, there are some discrepancies that we are unable to resolve:
(1)
Theorem 7.7 is typically proven by providing a certain natural transformation. However, it is unclear how such a transformation could be constructed in our case.
(2)
Similarly, Theorem 7.10 typically arises from a natural transformation, and it is, again, unclear how such a transformation could be given in our case.
These issues suggest that we may either have to use a different setup, for example, by replacing the category of sets with the category of monoids to deal with lists, or that the framework of coalgebraic modal logic and trace semantics needs to be extended. We leave this as future work. In any case, Theorems 7.7 and 7.10 provide a clear picture akin to functorial semantics.

8 Concluding Remarks

We have developed a new language-independent foundation for session types by relying on coalgebras. We introduced session coalgebras, which elegantly capture all communication structures of session types, both linear and unrestricted, without committing to a specific syntactic formulation for processes and types. Session coalgebras allow us to rediscover language-independent coinductive definitions for duality, subtyping, and type equivalence. A key idea is to assimilate channel types to the states of a session coalgebra; we demonstrated this insight by deriving a session type system for the \(\pi\)-calculus, unlocking decidability results and algorithmic type checking.
Interesting strands for future work include extending our coalgebraic toolbox to give a language-independent justification to other session-type systems, such as multiparty session types [31] or nested session types [14]. Another line concerns extending our coalgebraic view to include language-dependent issues and properties that require a global analysis on session behaviours. Salient examples are liveness properties such as (dead)lock-freedom and progress: advanced type systems [6, 13, 38, 45, 46] typically couple types with advanced mechanisms (such as priority-based annotations and strict partial orders), which provide a global insight to rule out the circular dependencies between sessions that are at the heart of stuck processes.
Last, the whole area of coalgebra now becomes available for exploring session types. One possible direction is to make use of final coalgebras and modal logic, which would allow us to analyse the behaviour of session coalgebras. This would be particularly powerful in combination with composition operations for session coalgebras to break down protocols and type checking. Another direction is to use session coalgebras to verify other coalgebras that take on the role of the syntactic \(\pi\)-calculus [18, 44], thereby also allowing for the exploration of other semantics like manifest sharing [5, 6] without resorting to a specific syntax.

Acknowledgments

We are grateful to the anonymous reviewers for their useful remarks and suggestions.

References

[1]
Jiří Adámek, Stefan Milius, and Lawrence S. Moss. 2021. Initial Algebras, Terminal Coalgebras, and the Theory of Fixed Points of Functors. Retrieved from http://www.stefan-milius.eu.
[2]
Jiří Adámek, Stefan Milius, and Jiří Velebil. 2006. Iterative algebras at work. Math. Struct. Comput. Sci. 16, 6 (2006), 1085–1131.
[3]
Jiří Adámek and Jiří Rosický. 1994. Locally Presentable and Accessible Categories. Cambridge University Press, Cambridge.
[4]
Steve Awodey. 2010. Category Theory (2nd ed.). Number 52 in Oxford Logic Guides. Oxford University Press.
[5]
Stephanie Balzer and Frank Pfenning. 2017. Manifest sharing with session types. Proc. ACM Program. Lang. 1, ICFP (2017), 37:1–37:29.
[6]
Stephanie Balzer, Bernardo Toninho, and Frank Pfenning. 2019. Manifest Deadlock-Freedom for Shared Session Types. In Proceedings of the 28th European Symposium on Programming (ESOP’19). 611–639.
[7]
Giovanni Bernardi and Matthew Hennessy. 2016. Using higher-order contracts to model session types. Log. Methods Comput. Sci. 12, 2 (2016).
[8]
Francis Borceux. 2008. Handbook of Categorical Algebra: Volume 1, Basic Category Theory. Cambridge University Press.
[9]
Mario Bravetti and Gianluigi Zavattaro. 2007. Towards a unifying theory for choreography conformance and contract compliance. In Proceedings of the 6th International Symposium on Software Composition (SC@ETAPS’07)(Lecture Notes in Computer Science, Vol. 4829), Markus Lumpe and Wim Vanderperren (Eds.). Springer, 34–50.
[10]
Luís Caires, Jorge A. Pérez, Frank Pfenning, and Bernardo Toninho. 2013. Behavioral polymorphism and parametricity in session-based communication. In Proceedings of the 22nd European Symposium on Programming (ESOP’13)(Lecture Notes in Computer Science, Vol. 7792), Matthias Felleisen and Philippa Gardner (Eds.). Springer, 330–349.
[11]
Samuele Carpineti, Giuseppe Castagna, Cosimo Laneve, and Luca Padovani. 2006. A formal account of contracts for web services. In Proceedings of the 3rd Conference on Web Services and Formal Methods (WS-FM’06)(Lecture Notes in Computer Science, Vol. 4184), Mario Bravetti, Manuel Núñez, and Gianluigi Zavattaro (Eds.). Springer, 148–162.
[12]
Giuseppe Castagna, Nils Gesbert, and Luca Padovani. 2009. A theory of contracts for web services. ACM Trans. Program. Lang. Syst. 31, 5 (2009), 19:1–19:61.
[13]
Mario Coppo, Mariangiola Dezani-Ciancaglini, Nobuko Yoshida, and Luca Padovani. 2016. Global progress for dynamically interleaved multiparty sessions. Math. Struct. Comput. Sci. 26, 2 (2016), 238–302.
[14]
Ankush Das, Henry DeYoung, Andreia Mordido, and Frank Pfenning. 2021. Nested session types. In Proceedings of the 30th European Symposium on Programming (ESOP’21), Held as Part of the European Joint Conferences on Theory and Practice of Software (ETAPS’21)(Lecture Notes in Computer Science, Vol. 12648), Nobuko Yoshida (Ed.). Springer, 178–206.
[15]
Luca de Alfaro and Thomas A. Henzinger. 2001. Interface automata. In Proceedings of the Symposium on the Foundations of Software Engineering (FSE’01), A. Min Tjoa and Volker Gruhn (Eds.). ACM, 109–120.
[16]
Pierre-Malo Deniélou and Nobuko Yoshida. 2012. Multiparty session types meet communicating automata. In Proceedings of the 21st European Symposium on Programming (ESOP’12)(Lecture Notes in Computer Science, Vol. 7211), Helmut Seidl (Ed.). Springer, 194–213.
[17]
Pierre-Malo Deniélou and Nobuko Yoshida. 2013. Multiparty compatibility in communicating automata: Characterisation and synthesis of global session types. In Proceedings of the 40th International Colloquium on Automata, Languages, and Programming (ICALP’13)(LNCS, Vol. 7966), Fedor V. Fomin, Rusins Freivalds, Marta Z. Kwiatkowska, and David Peleg (Eds.). Springer, 174–186.
[18]
Clovis Eberhart, Tom Hirschowitz, and Thomas Seiller. 2015. An Intensionally Fully-abstract Sheaf Model for Pi. In Proceedings of the Conference on Algebra and Coalgebra in Computer Science (CALCO’15). 86–100.
[19]
Cédric Fournet, C. A. R. Hoare, Sriram K. Rajamani, and Jakob Rehof. 2004. Stuck-free conformance. In Proceedings of the 16th International Conference on Computer-Aided Verification (CAV’04)(Lecture Notes in Computer Science, Vol. 3114), Rajeev Alur and Doron A. Peled (Eds.). Springer, 242–254.
[20]
Clément Fumex, Neil Ghani, and Patricia Johann. 2011. Indexed Induction and Coinduction, Fibrationally. In Proceedings of the Conference on Algebra and Coalgebra in Computer Science (CALCO’11)(Lecture Notes in Computer Science, Vol. 6859). Springer, 176–191.
[21]
Nicola Gambino and Joachim Kock. 2009. Polynomial functors and polynomial monads. Math. Proc. Cambridge Philos. Soc. 154 (June 2009).
[22]
Ronald Garcia, Éric Tanter, Roger Wolff, and Jonathan Aldrich. 2014. Foundations of typestate-oriented programming. ACM Trans. Program. Lang. Syst. 36, 4 (2014), 12:1–12:44.
[23]
Simon J. Gay. 2016. Subtyping supports safe session substitution. In A List of Successes That Can Change the World—Essays Dedicated to Philip Wadler on the Occasion of His 60th Birthday(Lecture Notes in Computer Science, Vol. 9600), Sam Lindley, Conor McBride, Philip W. Trinder, and Donald Sannella (Eds.). Springer, 95–108.
[24]
Simon J. Gay and Malcolm Hole. 2005. Subtyping for session types in the pi calculus. Acta Info. 42, 2/3 (2005), 191–225.
[25]
Simon J. Gay, Peter Thiemann, and Vasco T. Vasconcelos. 2020. Duality of session types: The final cut. Electr. Proc. Theor. Comput. Sci. 314 (Apr. 2020), 23–33.
[26]
Sergey Goncharov, Stefan Milius, and Alexandra Silva. 2014. Towards a Coalgebraic Chomsky Hierarchy. In Theoretical Computer Science, Josep Diaz, Ivan Lanese, and Davide Sangiorgi (Eds.). Number 8705 in LNCS. Springer, 265–280.
[27]
Daniel Hausmann, Till Mossakowski, and Lutz Schröder. 2006. A coalgebraic approach to the semantics of the ambient calculus. Theor. Comput. Sci. 366, 1-2 (2006), 121–143.
[28]
Claudio Hermida and Bart Jacobs. 1997. Structural Induction and Coinduction in a Fibrational Setting. Info. Comput. 145 (1997), 107–152.
[29]
Kohei Honda. 1993. Types for dyadic interaction. In Proceedings of the 4th International Conference on Concurrency Theory (CONCUR’93)(Lecture Notes in Computer Science, Vol. 715), Eike Best (Ed.). Springer, 509–523.
[30]
Kohei Honda, Vasco Thudichum Vasconcelos, and Makoto Kubo. 1998. Language Primitives and Type Discipline for Structured Communication-Based Programming. In Proceedings of the European Symposium on Programming (ESOP’98)(LNCS, Vol. 1381), Chris Hankin (Ed.). Springer, 122–138.
[31]
Kohei Honda, Nobuko Yoshida, and Marco Carbone. 2008. Multiparty asynchronous session types. In Proceedings of the 35th Symposium on Principles of Programming Languages (POPL’08), George C. Necula and Philip Wadler (Eds.). ACM, 273–284.
[32]
Jesse Hughes and Bart Jacobs. 2004. Simulations in coalgebra. Theor. Comput. Sci. 327, 1–2 (2004), 71–108.
[33]
Bart Jacobs. 2016. Introduction to Coalgebra: Towards Mathematics of States and Observation. Number 59 in Cambridge Tracts in Theoretical Computer Science. Cambridge University Press.
[34]
Alex C. Keizer, Henning Basold, and Jorge A. Pérez. 2021. Session coalgebras: A coalgebraic view on session types and communication protocols. In Proceedings of the 30th European Symposium on Programming (ESOP’21)(Lecture Notes in Computer Science, Vol. 12648), Nobuko Yoshida (Ed.). Springer, 375–403.
[35]
Bartek Klin. 2005. The Least Fibred Lifting and the Expressivity of Coalgebraic Modal Logic. In Proceedings of the Conference on Algebra and Coalgebra in Computer Science (CALCO’05). 247–262.
[36]
Bartek Klin. 2007. Coalgebraic Modal Logic Beyond Sets. Electr. Notes Theor. Comput. Sci. 173 (2007), 177–201.
[37]
Bartek Klin and Jurriaan Rot. 2016. Coalgebraic trace semantics via forgetful logics. Logic. Methods Comput. Sci. 12, 4 (2016).
[38]
Naoki Kobayashi. 2006. A new type system for deadlock-free processes. In Proceedings of the International Conference on Concurrency Theory (CONCUR’06)(Lecture Notes in Computer Science, Vol. 4137), Christel Baier and Holger Hermanns (Eds.). Springer, 233–247.
[39]
Sam Lindley and J. Garrett Morris. 2016. Talking bananas: Structural recursion for session types. In Proceedings of the 21st ACM SIGPLAN International Conference on Functional Programming (ICFP’16), Jacques Garrigue, Gabriele Keller, and Eijiro Sumii (Eds.). ACM, 434–447.
[40]
Étienne Lozes and Jules Villard. 2011. Reliable contracts for unreliable half-duplex communications. In Proceedings of the 8th International Workshop on Web Services and Formal Methods (WS-FM’11)(Lecture Notes in Computer Science, Vol. 7176), Marco Carbone and Jean-Marc Petit (Eds.). Springer, 2–16.
[41]
Stefan Milius. 2010. A Sound and Complete Calculus for Finite Stream Circuits. In Proceedings of the Symposium on Logic in Computer Science (LICS’10). 421–430.
[42]
Stefan Milius, Marcello M. Bonsangue, Robert S. R. Myers, and Jurriaan Rot. 2013. Rational Operational Models. In Proceedings of the 29th Conference on the Mathematical Foundations of Programming Semantics (MFPS’13). 257–282.
[43]
Robin Milner, Joachim Parrow, and David Walker. 1992. A calculus of mobile processes, I. Info. Comput. 100, 1 (1992), 1–40.
[44]
Ugo Montanari and Marco Pistore. 2005. Structured coalgebras and minimal HD-automata for the pi-calculus. Theor. Comput. Sci. 340, 3 (2005), 539–576.
[45]
Luca Padovani. 2014. Deadlock and lock freedom in the linear \(\pi\)-calculus. In Proceedings of the Joint Meeting of the 23rd EACSL Annual Conference on Computer Science Logic and the 29th Annual ACM/IEEE Symposium on Logic in Computer Science (CSL-LICS’14), Thomas A. Henzinger and Dale Miller (Eds.). ACM, 72:1–72:10.
[46]
Luca Padovani, Vasco Thudichum Vasconcelos, and Hugo Torres Vieira. 2014. Typing liveness in multiparty communicating systems. In Proceedings of the 16th IFIP WG 6.1 International Conference on Coordination Models and Languages (COORDINATION’14), Held as Part of the 9th International Federated Conferences on Distributed Computing Techniques (DisCoTec’14)(Lecture Notes in Computer Science, Vol. 8459), Eva Kühn and Rosario Pugliese (Eds.). Springer, 147–162.
[47]
Paolo Perrone. 2021. Notes on Category Theory with Examples from Basic Mathematics. Retrieved from arXiv:1912.10642.
[48]
Damien Pous. 2007. Complete Lattices and Up-To Techniques. In Proceedings of the Asian Symposium on Programming Languages and Systems (APLAS’07)(LNCS, Vol. 4807), Zhong Shao (Ed.). Springer, 351–366.
[49]
Jan Rutten. 2000. Universal Coalgebra: A Theory of Systems. Theor. Comput. Sci. 249, 1 (2000), 3–80.
[50]
Jan Rutten. 2019. The Method of Coalgebra: Exercises in Coinduction. CWI, Amsterdam. Retrieved from http://persistent-identifier.org/?identifier=urn:nbn:nl:ui:18-28550.
[51]
Davide Sangiorgi and David Walker. 2001. The Pi-Calculus—A Theory of Mobile Processes. Cambridge University Press.
[52]
Lutz Schröder. 2008. Expressivity of coalgebraic modal logic: The limits and beyond. Theor. Comput. Sci. 390, 2–3 (2008), 230–247.
[53]
Alexandra Silva, Marcello M. Bonsangue, and Jan Rutten. 2010. Non-Deterministic Kleene Coalgebras. Logic. Methods Comput. Sci. 6, 3 (2010) arXiv:1007.3769.
[54]
Sam Staton. 2011. Relating coalgebraic notions of bisimulation. Logic. Methods Comput. Sci. 7, 1 (2011), 1–21.
[55]
Robert E. Strom and Shaula Yemini. 1986. Typestate: A programming language concept for enhancing software reliability. IEEE Trans. Softw. Eng. 12, 1 (1986), 157–171.
[56]
Joshua Sunshine, Karl Naden, Sven Stork, Jonathan Aldrich, and Éric Tanter. 2011. First-class state change in Plaid. In Proceedings of the 26th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA’11), Cristina Videira Lopes and Kathleen Fisher (Eds.). ACM, 713–732.
[57]
Alfred Tarski. 1955. A lattice-theoretical fixpoint theorem and its applications. Pacific J. Math. 5, 2 (1955), 285–309. https://projecteuclid.org:443/euclid.pjm/1103044538.
[58]
Peter Thiemann and Vasco T. Vasconcelos. 2016. Context-free session types. In Proceedings of the 21st ACM SIGPLAN International Conference on Functional Programming (ICFP’16), Jacques Garrigue, Gabriele Keller, and Eijiro Sumii (Eds.). ACM, 462–475.
[59]
Bernardo Toninho and Nobuko Yoshida. 2019. Polymorphic session processes as morphisms. In The Art of Modelling Computational Systems: A Journey from Logic and Concurrency to Security and Privacy—Essays Dedicated to Catuscia Palamidessi on the Occasion of Her 60th Birthday(Lecture Notes in Computer Science, Vol. 11760), Mário S. Alvim, Kostas Chatzikokolakis, Carlos Olarte, and Frank Valencia (Eds.). Springer, 101–117.
[60]
Benno van den Berg and Federico de Marchi. 2007. Non-well-founded trees in categories. Ann. Pure Appl. Logic 146, 1 (Apr. 2007), 40–59. arXiv:math/0409158
[61]
Vasco T. Vasconcelos. 2012. Fundamentals of session types. Info. Comput. 217 (2012), 52–70.
[62]
James Worrell. 2005. On the final sequence of a finitary set functor. Theor. Comput. Sci. 338, 1–3 (2005), 184–199.

        Published In

        ACM Transactions on Programming Languages and Systems  Volume 44, Issue 3
        September 2022
        302 pages
        ISSN:0164-0925
        EISSN:1558-4593
        DOI:10.1145/3544000
        This work is licensed under a Creative Commons Attribution International 4.0 License.

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        Published: 15 July 2022
        Accepted: 01 January 2022
        Revised: 01 January 2022
        Received: 01 April 2021
        Published in TOPLAS Volume 44, Issue 3


        Author Tags

        1. Session types
        2. coalgebra
        3. process calculi
        4. coinduction
