Electronic Notes in Theoretical Computer Science 49 (2002)

URL: http://www.elsevier.nl/locate/entcs/volume49.html pp. 1 — 166

Continuous Domains in Logical Form*

Mathias Kegelmann

Fachbereich Mathematik, Technische Universität Darmstadt, Schloßgartenstraße 7, 64289 Darmstadt, Germany

Abstract

This thesis investigates the mathematical foundations that are necessary for an extension of Abramsky's domain theory in logical form to continuous domains.

We present a multi-lingual sequent calculus, that is, a positive logic allowing sequents that relate propositions from different languages. This setup necessitates a number of syntactic adjustments. In particular, we discuss different reformulations of the cut rule and how they can be used as a basis for a category MLS of logical systems. Then we investigate cut elimination in this logic. From a semantic point of view this can be seen as enabling us to perform domain constructions in purely syntactic form.

The category MLS has a number of different manifestations, and we study it with logical, localic, topological and categorical methods. From a topological point of view, we show that MLS is equivalent to the category of stably compact spaces with certain closed relations. By putting together cut elimination and representation theorems for these spaces we get a continuous domain theory in logical form.

* A thesis submitted to The University of Birmingham for the degree of Doctor of Philosophy, June 1999.

©2002 Published by Elsevier Science B. V.

Contents

Title and Abstract 1

Contents 2

Introduction 4

1 The Basics 12

1 Topology and Order 12

1.1 Topology 12

1.2 Domains 15

2 Stone Duality 20

2.1 The Adjunction O ⊣ pt 20

2.2 Sobriety and Spatiality 22

2.3 Hierarchy of Stone Dualities I 28

3 Stably Compact Spaces 32

3.1 The Definition 32

3.2 Compact Pospaces 35

3.3 Stably Compact Domains 40

3.4 Hierarchy of Stone Dualities II 43

4 Abramsky's Domain Theory in Logical Form 47

4.1 Prelocales 47

4.2 Prelocalic Description of Domains 48

4.3 Domain Constructions 49

4.4 Logic 51

2 Syntax 53

1 Multi-Lingual Sequent Calculus 53

1.1 The Logic 55

1.2 A Category of Consequence Relations 60

2 Cut Elimination 66

2.1 Simple Elimination 66

2.2 Construction of Consequence Relations 69

2.3 Coproducts and Products 76

3 Semantics 86

1 Logic and Topological Spaces 86

1.1 Theories and Models 87

1.2 Algebraisation of the Logic 93

1.3 Topological Semantics 98

1.4 Semantics of Morphisms 110

2 Domain Constructions 119

2.1 Representing Stably Compact Spaces 119

2.2 Individual Constructions 124

2.3 Function Spaces 129

2.4 Relation Spaces 137

3 Relations and Functions 145

3.1 Diagonals 145

3.2 Regular Categories 150

3.3 Closed Relations 156

Conclusion 158

Bibliography and Index 160

References 160

Categories 165

Notation 166

Introduction

Domains are used in the theory of semantics of programming languages to give denotational models for programs. In this approach a data type corresponds to a structured set, called a domain, and a program is interpreted as a function between the domains corresponding to the input and the output types of the program. One can then employ mathematical reasoning on the domain theoretic side to get insights into the behaviour of the program. This approach goes back to the early work of Scott [55] and [56]. Domain theory also has strong links to other areas of mathematics, most notably T0 topology [19]. One reason for this is that the specialisation order on every sober space is a dcpo. Moreover, the lattice of open sets of a topological space is a particular dcpo, and so domain theoretic methods are often useful for its analysis.

Another branch of semantics, program logics, studies logical systems that either describe properties of programs or how fragments of programs transform them. The best known example is probably Hoare logic [21] which studies triples

{P} S {Q}

where P and Q are predicates and S is a statement or a fragment of a program. The intended reading of such a triple is that if P holds before the execution of S, then Q holds afterwards. In [1] and [4] Abramsky explicates the connection between denotational semantics and program logics via his programme of domain theory in logical form. The basic idea is to utilise the correspondence between topology and certain logics.

From the computer science perspective, topology enters the theory in several ways. For one, functions between domains arise as the semantics of programs, and the continuity of these functions is an abstract way of capturing computability. The Scott topology, the topology usually considered on domains, encodes the idea that to produce any output a computable function can only look at a finite portion of its input. We can also give some meaning to the topology on any space, namely as a logic of observable properties. This logic has arbitrary disjunctions but only finite conjunctions, analogous to the definition of a topology; sometimes it is also referred to as geometric logic. The asymmetry is due to the fact that an infinite disjunction of properties can be observed by witnessing any one of them; testing an infinite conjunction, on the other hand, requires infinitely many 'experiments', rendering such a property non-observable in general. So, the open sets of a topological space may be identified with observable or semi-decidable properties of its points. This idea was introduced by Smyth [59]. For a much more thorough discussion of these issues see [65, Chapter 2], [1] and [61]. Coming back to domains we conclude that the propositions of a program logic should correspond to Scott-open sets. Program fragments, in the form of continuous functions, act on these propositions by taking the preimage and can thus be understood as predicate transformers in the sense of Dijkstra's weakest precondition calculus [10].
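The preimage reading of predicate transformers can be made concrete in a few lines. The following sketch is my own illustration, not from the thesis: a program fragment denotes a function on states, a proposition is a predicate, and the weakest precondition is simply composition with the program's semantics, i.e. the preimage of the postcondition.

```python
def wp(f, post):
    """Weakest precondition: the preimage of `post` under the semantics f."""
    return lambda state: post(f(state))

# The Hoare triple {P} S {Q} for the assignment  x := x + 1
S = lambda x: x + 1          # denotation of the program fragment S
Q = lambda x: x > 1          # postcondition Q
P = wp(S, Q)                 # weakest precondition; here equivalent to x > 0

assert all(P(x) == (x > 0) for x in range(-100, 100))
```

The point of the sketch is only that `wp` is preimage: no search or inference is involved, exactly as in the localic reading where a continuous function acts on opens by inverse image.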

Topology and the study of the logic of observable properties are essentially the same thing. We have just seen how we can go from a topological space to such a logic. Conversely, for every logic of observable properties we can look at its Lindenbaum algebra, the quotient of the logic under equivalence. This algebra is a lattice with arbitrary suprema and finite infima, which are connected by a distributivity law. Such lattices are called frames or locales. This is the starting point of locale theory [25], which can also be seen as topology without points [26]. Among other things, locales have the advantage that free locales exist and that one can thus construct them from generators and relations. The connection between topological spaces and locales is the subject of Stone duality, which says that certain categories of spaces are equivalent to certain categories of locales.
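On a finite space the frame law can be checked directly. The sketch below is my own (the three-point chain is an assumed example): in the lattice of open sets joins are unions and meets are intersections, and the law U ∧ ⋁ᵢ Vᵢ = ⋁ᵢ (U ∧ Vᵢ) holds for every family of opens because intersection distributes over arbitrary unions.

```python
from itertools import chain, combinations

# Opens of the chain 0 ⊑ 1 ⊑ 2 (its upper sets): a small example frame.
opens = [frozenset(), frozenset({2}), frozenset({1, 2}), frozenset({0, 1, 2})]

def join(family):
    """Supremum in the frame: the union of a family of opens."""
    return frozenset().union(*family) if family else frozenset()

# verify the frame law over every family of opens and every U
for family in chain.from_iterable(combinations(opens, r) for r in range(len(opens) + 1)):
    for U in opens:
        assert U & join(family) == join([U & V for V in family])
```

In an infinite frame only the families are infinite; the shape of the law is the same.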

Stone duality is the main foundation of domain theory in logical form as pioneered by Abramsky [1]. A domain can be seen as a topological space by endowing it with the Scott topology, thus also turning it into a locale. A logical description of the domain is a logic of observable properties, a so-called prelocale, such that its Lindenbaum algebra generates this locale. Now suppose we take the product or the function space of domains, form a power domain or perform any other domain construction. If we already have prelocales corresponding to these domains, then the question is how we can translate such a construction into prelocalic terms. The aim is to come up with purely syntactic rules to build a new prelocale from the given ones such that it corresponds to the resulting domain. In [4] Abramsky gives such rules for the domain constructions that are commonly considered in domain theory. On top of the logic and the constructions for prelocales one can build different logics to talk about elements in different domains or about functions between them. We discuss some details of Abramsky's programme briefly in Section 4.

There are several benefits of this logical viewpoint. On the one hand, it yields new insights for domain theory because we now have to understand how a construction acts on properties rather than on points. The localic description is sometimes even smoother than the domain theoretic one, e.g. in the case of the Plotkin power domain and for bilimits. This new angle may also help us understand why and how certain constructions work. Furthermore, there are concrete applications of this theory: If we have a good denotational semantics of a phenomenon we want to model, this theory gives us a corresponding logic almost for free. Its advantage over ad hoc formalisms is that the close connection to the denotational model guarantees that the logic is equally suitable for reasoning about the particular system. For examples of this technique see [3] and [2].

The theory as described by Abramsky does not encompass all domains; his notion of prelocale corresponds to a class of certain algebraic domains, the so-called bifinite or SFP domains. In fact, most of the classical domain theory in computer science used to be focused on algebraic domains. These are domains where every element can be approximated by compact elements, which in turn can be thought of as finite pieces of information. This is good enough for many applications since most programs process discrete data and we are often only interested in discrete properties like termination or correctness. However, apart from their mathematical appeal, it has long been argued that continuous domains have a similar importance (see for example [30]). The reasons for this are mainly real number computation, the modelling and computational analysis of other continuous mathematical structures, and probabilistic power domains which are used to study non-deterministic and stochastic phenomena.

We consider some examples of activities in these fields. In [15] Escardó studies PCF with an additional ground type for real numbers. In another paper he and Edalat look at integration in this extension of PCF [13]. This is based on Edalat's research on computational measure theory [12] and integration [11] using continuous domains. Apart from real number computation, which we have already mentioned, this theory has applications in theoretical physics, neural networks and fractal image compression. Edalat's approach uses the probabilistic power domain, which was introduced by Plotkin and Jones [28]. It is a space of valuations, which are the domain theoretic analogue of a measure. The original purpose of this power domain was to give a model for a non-deterministic programming language [27]. It can also be used to model other types of stochastic behaviour. Sünderhauf studies computational models for uniform totally bounded spaces [64]. His models are dcpo's which are unfortunately not necessarily continuous. Edalat and Heckmann give a domain theoretic description of metric spaces [14], and Lawson shows in [44] that every Polish space is a maximal point space, i.e. homeomorphic to the maximal points of a continuous domain. Hence, continuous domains can be used as computational models for classical spaces.

Continuous domain theory is an active field, as the previous paragraph illustrates, and its importance is likely to grow as its applications become more widespread. This is the motivation for trying to extend the domain theory in logical form to continuous domains. In a nutshell, this is the main aim of this thesis. It is worth mentioning that also from a mathematical point of view it is interesting to extend the class of spaces considered. The study of continuous domains often unravels certain features of the theory that are not prominent in the algebraic case. A good illustration of this is the paper [33], which can be seen as the first steps of continuous domains in logical form. There the dual roles played by compact saturated and open subsets become apparent, something which has been obscured in the algebraic case.

There are other developments starting from Abramsky's theory. Zhang studies the logic of stable domain theory [66]. Bonsangue gives a Stone-type duality for non-sober spaces and discusses the link to an infinitary logic with arbitrary conjunctions and disjunctions [7]. In [53] Brink and Rewitzky explore the connection with Priestley duality to study the exact relationship between information systems, power domains, predicate transformers and relational models.

One advantage of restricting domain theory in logical form to algebraic domains is the following: Their Stone duals are algebraic lattices, which have the pleasant property that the compact elements form a sub-lattice. This unlocks the door to a finitary, localic description of these spaces. In the continuous case the Stone dual is only a certain kind of continuous lattice and as such no longer has a canonical basis. In their papers [33] and [34] Jung and Sünderhauf show how one can, nonetheless, get a good description of such spaces via strong proximity lattices. These structures have an order of approximation in addition to their (logical) lattice order. While these two papers were motivated by purely topological considerations, their findings are the starting point for a continuous domain theory in logical form. The connection to our setup as explained in Section 1.2 and the results of Section 2 show that they have a logical and proof-theoretic content. In particular, this constitutes an independent justification for the two axioms which distinguish strong proximity lattices from the structures studied in [60].

The spaces considered in [33] are the stably compact spaces, which are compact, locally compact, sober spaces such that the intersection of two compact saturated subsets is again compact. In the literature they appear in many guises and under a number of names. In [5] and [33], for example, they are called 'coherent'. This term, however, is a possible source of confusion: The spaces corresponding to the 'coherent' locales of [25] and [65] are called 'spectral' in the latter book. Reserving the use of the term 'coherent' for algebraic locales is justified by the link to 'coherent logic', which is a well-established term. As there is also the danger of mixing them up with Girard's 'coherence spaces', we use the more descriptive term 'stably compact'. The domains that are normally used in semantics, like Scott domains, bifinite and FS domains, are stably compact with respect to their Scott topology. The stably compact spaces are also closely related to the compact ordered spaces introduced by Nachbin [48], and they can be seen as the T0 equivalent of compact Hausdorff spaces.

This thesis is a mathematical investigation of different aspects of a logical description for the class of stably compact spaces and extends Abramsky's work in three directions. The first one is syntactic and goes back to a suggestion by Andrew Moshier. The idea is to see the auxiliary relation on strong proximity lattices as the entailment ⊢ of a sequent calculus. This allows us to understand the completeness proofs for individual domain constructions in Abramsky's programme as cut elimination. To make this precise we introduce a multi-lingual sequent calculus, i.e. a logic that considers sequents between different languages or worlds. From the semantic point of view we can think of them as logical descriptions of different domains. But the calculus can also be motivated independently as one that allows reasoning about inferences between different areas of reasoning. The logic is very weak; the only logical connectives are conjunction and disjunction, and we exclude the identity axiom, while retaining all structural rules. Given this set-up we consider an alternative to Gentzen's cut rule and explore how it interacts with the logical rules and how it relates to other cut rules. We argue on purely proof theoretic grounds that consequence relations ⊢ should be interpolative. This property is well known from semantics, where it is justified by reference to effective computability. These results are then used to define a category MLS of logical systems which we study from a number of different angles. There is also a good theory of cut elimination in the multi-lingual setting. This can be used to perform domain constructions in purely logical form: Rather than showing completeness with respect to a domain which is known to satisfy a certain universal property, we can perform and verify the construction directly in MLS.

The two other directions come out of the semantics of the multi-lingual sequent calculus. We study its model theory in the style of Stone's work on Boolean algebras, which in spite of the ostensible simplicity of our logic is quite intricate. The category MLS turns out to be equivalent to stably compact spaces with certain relations between them. Taking a closer look at this proof, we get a representation theorem that characterises when a sequent calculus in MLS encodes a given stably compact space. It can also be used to construct such calculi: Given a space X and a 'language' together with open and compact interpretations of its formulae in X, the theorem tells us how we have to define entailment to turn it into a logical description of X. The problem is then to come up with inductive rules that generate this logic. This technique is applied to a number of domain constructions, thus extending domain constructions in logical form to the category of all stably compact spaces which contains a large class of continuous domains. Moreover, we have two methods to perform such constructions: One, purely syntactic, using cut elimination and a second one which relies on the topological semantics. The constructions that we perform as examples for either method show that both options are feasible. Purely logical constructions avoid the heavy topological machinery to some extent, but there is quite some overhead, as the construction of coproducts shows. Doing this example in full detail allows us to see exactly what is involved and how it compares to the semantic approach.

As mentioned before, a morphism in MLS corresponds to a relation, rather than a continuous function, between stably compact spaces, and this is the third direction in which the present work extends Abramsky's. The relations that arise in this way can be characterised topologically and are in one-to-one correspondence with topological set-valued functions. It is not clear whether it is possible to give a logical function space construction in the current setup, but for the relation space, on the other hand, the situation is very satisfactory. We discuss the corresponding construction semantically and syntactically, and also identify the universal property that defines it. This almost makes MLS a symmetric monoidal category, but the mediating morphism is only unique as a function, not as a relation. To make this precise we need a characterisation of morphisms that are functions in disguise. This can be done in syntactic, localic, topological or categorical terms.

The thesis brings together ideas from proof theory, topology, domain theory and category theory, and some familiarity with the basic concepts from these fields is presupposed. Our principal reference is the handbook article [5], which covers domain theory and some aspects of order theory and T0 topology. When we need more advanced ideas from topology, locale theory or other areas, references are given in the text. We feel free to use the language of category theory throughout the thesis; a good introduction to the necessary material are the first chapters of [45] or [46]. As a convention we compose functions from right to left, i.e. the composite of f : X → Y followed by g : Y → Z is written as g ∘ f; composition of relations and of consequence relations ⊢ is from left to right. This may be slightly confusing, but it is in accordance with the usual mathematical practice.

The outline of the thesis is as follows: Chapter 1 provides the technical background for the thesis. It reviews material that can be found in the literature and contains no new results. The only original contribution of this chapter is the short note on complete distributivity in Section 3.3, as well as the organisation of the exposition and maybe some of the proofs. The chapter does not discuss all the basics needed in the rest of the thesis, only those that lead to the logical description of stably compact spaces. Later we also require information about exponentials of topological spaces and regular categories, but as they are not part of the main thrust of this work we discuss them when we actually need them, that is at the beginning of Sections 2.3 and 3.2, respectively. They can be seen as continuations of Chapter 1, and the same disclaimers apply.

The first two sections cover basic T0 topology, domain theory and Stone duality, mainly to fix notation. For this reason many proofs are omitted. We only include them if they are not easily found in the literature or if they contain an interesting idea that is useful for the subsequent development of the theory. Many subsections give explicit pointers to the literature where more details can be found. If a theorem is given without proof and no other reference is given, then it can be found there.

Section 3 introduces stably compact spaces. As their logical description is at the centre of this thesis and the relevant details are scattered over a number of books and articles in the literature, we discuss them in quite some detail, including all the proofs. The section is meant as an exposition of the basic theory of stably compact spaces, their Stone duality, stably compact domains and the link to compact pospaces. The latter are interesting from a mathematical standpoint as they show the connection to classical spaces, but they are also an important technical tool and we make heavy use of them in Section 1.4.

In the last section of the introductory chapter we give a quick overview of Abramsky's work. Its main purpose is to allow the reader who is not familiar with it to see where the approaches differ and in what sense and to what degree Abramsky's programme has been extended successfully to continuous domains.

In Chapter 2 we take a completely different point of view. In the first section we study the multi-lingual sequent calculus and how we can turn it into a category. We can think of the objects and morphisms in this category MLS as sequent calculi for internal reasoning and for reasoning about inferences between different logical domains, respectively. There are a number of interesting properties such sequent calculi can have and we investigate to some extent how they interact.

Section 2 discusses cut elimination for this logic and explains how it can be used to construct new objects and morphisms from old ones. As an application we construct products and coproducts in a worked example. In the light of the last chapter this can be understood as performing domain constructions in purely logical terms.

The last chapter ties all these ideas together. Section 1 contains the core of this thesis and is technically the most demanding part. It shows that we can think of MLS as a syntactic description of the category of stably compact spaces and certain closed relations or multi-functions between them. This is done by proving that these categories are equivalent. The category has a number of syntactic, localic and topological manifestations and we discuss their relationships.

In the second section we apply this to do domain constructions. First we characterise when an object from MLS represents a stably compact space. This is the key to doing concrete domain constructions in logical form and we go through a number of them.

The final section describes a categorical way of characterising the closed relations between stably compact spaces that actually correspond to continuous functions. This complements earlier results of Section 1.4 where we do this in logical, localic and topological terms.

Throughout the thesis there are remarks typeset in a slanted font. They offer a different, often categorical, angle on the topic under consideration. Their purpose is to provide a different point of view or to mention connections to other fields, usually without proofs. Other results in the thesis are always logically independent from these remarks.

Chapter 2 and a large part of Section 1 are based on the conference paper [31]. The material has been improved in several points and some of these improvements will be included in the journal version [32].

Acknowledgements

I would like to mention a number of people who contributed in different ways to this thesis. First of all I want to express my gratitude towards my supervisor Achim Jung. Whenever I had an idea or a problem he went out of his way to make time for discussions. I also wish to thank the other members of my thesis group, Valeria de Paiva and Marta Kwiatkowska, for their advice.

Andrew Moshier suggested the multi-lingual sequent calculus and the new cut rule. Moreover, the insight that cut elimination gives a proof-theoretic account of domain constructions in logical form is his.

I used Paul Taylor's macro packages diagrams and prooftree for typesetting this thesis. Paul also pointed out the connection to splitting idempotents mentioned on page 65. Peter Selinger introduced me to diagonals as a means of identifying functions in categories of relations as discussed in Section 3.1.

For proofreading I want to thank Thomas Erker and Paola Maneggia. In particular, Thomas read large parts of this thesis, checked the proofs and made a number of valuable suggestions to improve it.

Finally, I thank Alessio Lomuscio and Axel Großmann for their friendship and support.

Chapter 1 The Basics

The material in this chapter provides the necessary prerequisites for most of this thesis. We begin with elementary topology and order theory, mainly to fix notation. Then we discuss Stone duality in some detail as it is the foundation of domain theory in logical form. This allows us to define stably compact spaces, the class of topological spaces at the heart of this investigation. We study them from the point of view of topology, locale theory and domain theory. A quick review of Abramsky's original domain theory in logical form finishes the preliminaries.

1 Topology and Order

I assume that the reader is acquainted with basic topology, lattices, order theory and domain theory. The principal reference for the latter two is [5]. I adhere to the terminology used there, and rely on familiarity with the material of its Chapters 2 and 3, which discuss basic facts and constructions for domains. A good general introduction to order and lattice theory is [9]. As a reference for topology the handbook article [61] is particularly well suited for our purposes as it emphasises T0 spaces.

1.1 Topology

Our motivating example of a topological space is a dcpo with the Scott topology. Unless such a dcpo is discrete, the resulting topological space is not Hausdorff, or even T1, which means that the spaces at the centre of our attention will only satisfy the T0 separation axiom.

Hence, we start by recalling some definitions and very basic facts of T0 topology. As these concepts do not appear in the Hausdorff case, they are less well known and usually not covered in textbooks on topology.

1.1.1 T0 Spaces

For every topological space X we can define the specialisation preorder by

x ⊑ y  :⟺  cl{x} ⊆ cl{y}
       ⟺  x ∈ cl{y}
       ⟺  N(x) ⊆ N(y)
       ⟺  (∀U ∈ N(x)) y ∈ U,

where N(x) is the neighbourhood filter of the point x and cl denotes topological closure.

The connection between this preorder and convergence is as follows: If a net or a filter converges to a point x and y ⊑ x, then y is also a limit.

The specialisation preorder is antisymmetric, and hence an order relation, if and only if the space is T0. As mentioned above we will focus on such spaces and, thus, will not state the following observations for the preordered case. A topological space satisfies the T1 axiom if and only if the specialisation order is trivial, i.e. the equality on the space.

Note that all open sets of a topological space are upper sets with respect to the specialisation order, and the closure of a point can be written as cl{x} = ↓x := {y | y ⊑ x}.
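These facts are easy to test on a finite example. The following sketch is my own (the three-point space is an assumed example, not from the text): it computes the specialisation preorder from a topology and confirms that open sets are upper sets and that the closure of a point is its lower set.

```python
X = {0, 1, 2}
opens = [set(), {2}, {1, 2}, {0, 1, 2}]      # a T0 topology: the chain 0 ⊑ 1 ⊑ 2

def below(x, y):
    """x ⊑ y iff every open set containing x also contains y."""
    return all(y in U for U in opens if x in U)

# every open set is an upper set for ⊑
assert all(y in U for U in opens for x in U for y in X if below(x, y))

def closure(x):
    """Smallest closed set containing x (closed sets are complements of opens)."""
    return set.intersection(*([X - U for U in opens if x not in U] or [set(X)]))

# cl{x} = ↓x = {y | y ⊑ x}
for x in X:
    assert closure(x) == {y for y in X if below(y, x)}
```

Here the order computed is 0 ⊑ 1 ⊑ 2, and e.g. cl{1} = {0, 1} = ↓1.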

The specialisation order also allows us to order the functions between two given topological spaces. We say f ⊑ g in the so-called extensional order if this is the case point-wise, i.e. for all x we have f(x) ⊑ g(x).

Remark. It is easily verified that every continuous function is automatically monotone for the respective preorders of specialisation. Hence, going from a topological space to a preordered set by using the specialisation preorder defines a functor. This functor restricts and co-restricts to the categories of T0 spaces and of posets.

In fact, using the extensional order on hom-sets it is easy to see that topological spaces form a (pre-)order enriched category.

In general, several topologies can induce the same specialisation order. The coarsest such topology is the lower topology, whose closed sets are generated by the principal ideals ↓x. The finest is the Alexandrov topology, which has all lower sets as closed subsets.

Remark. The Alexandrov topology gives rise to the left adjoint to the functor that takes the specialisation order. The lower topology, however, is not the right adjoint.

We call a subset A of a topological space saturated if it is an intersection of open sets. This is equivalent to A being equal to the intersection of all open sets that contain it. Using the specialisation preorder we can also characterise the saturated sets as being exactly the upper sets with respect to the specialisation order. Hence, in a T1 space every subset is saturated. Note that arbitrary unions and intersections of saturated sets are again saturated.

Any subset A of a topological space has a saturation, i.e. a smallest saturated subset containing it, which is given by the intersection of all open supersets of A. In fact, the intersection of any basis of the neighbourhood filter of A, not necessarily made up from open sets, yields the saturation of A. In the language of the specialisation order this saturation is also given by ↑A := {x | (∃a ∈ A) a ⊑ x}, the union of the sets ↑a for a ∈ A.
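On a finite example the two descriptions of the saturation, as the intersection of all open supersets and as the upper set ↑A, can be compared directly (a sketch of mine, using an assumed three-point space):

```python
X = {0, 1, 2}
opens = [set(), {2}, {1, 2}, {0, 1, 2}]      # the chain 0 ⊑ 1 ⊑ 2 again

def below(x, y):
    """The specialisation preorder of the topology."""
    return all(y in U for U in opens if x in U)

def saturation(A):
    """Intersection of all open supersets of A."""
    supersets = [U for U in opens if A <= U]
    return set.intersection(*supersets) if supersets else set(X)

# saturation(A) coincides with the upper set ↑A = {x | a ⊑ x for some a in A}
for A in [set(), {0}, {1}, {2}, {0, 2}, {1, 2}]:
    up_A = {x for x in X if any(below(a, x) for a in A)}
    assert saturation(A) == up_A
```

For instance the saturation of {1} is {1, 2} = ↑1, while {0} saturates to the whole space.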

1.1.2 Compactness

For us the term compactness comprises the Heine-Borel property but not the Hausdorff property. As a consequence, local compactness has to be defined to mean that for every point x and every neighbourhood U ∈ N(x) there is a compact neighbourhood K ∈ N(x) such that K ⊆ U. In the non-Hausdorff case this is strictly stronger than every point having a compact neighbourhood. Also, for a T0 space compactness does not imply local compactness. Compactness is well behaved under saturation.

Lemma 1.1 A subset of a topological space is compact if and only if its saturation is.

Proof. This is an immediate consequence of the following observation: A union of open sets covers a given set K if and only if it covers the saturation ↑K. □

As a corollary we get that a space is locally compact if and only if every point has a neighbourhood filter basis of compact saturated sets. Note that this implies that the neighbourhood filter of every compact set also has a basis of compact saturated sets.

Given a topological space X we denote its topology by O(X) and the collection of compact saturated sets by K(X). We can construct a new topology on X by taking K(X) as a subbasis for the closed sets. This topology is called the co-compact topology and we write Xκ for the resulting topological space. The coarsest refinement of the original topology and the co-compact topology is called the patch topology, and we call the corresponding space Xπ. It has O(X) ∪ {X \ K | K ∈ K(X)} as a subbasis for its topology.
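For a finite space, where every subset is compact and the compact saturated sets are therefore exactly the upper sets, both topologies can be computed naively. This is my own sketch; the generation procedure and the three-point example are assumptions for illustration.

```python
from itertools import chain, combinations

X = frozenset({0, 1, 2})
opens = [frozenset(), frozenset({2}), frozenset({1, 2}), X]   # the chain 0 ⊑ 1 ⊑ 2

def powerset(S):
    S = list(S)
    return [frozenset(c) for c in chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))]

def below(x, y):
    return all(y in U for U in opens if x in U)

def is_upper(A):
    return all(y in A for x in A for y in X if below(x, y))

# K(X): in a finite space every subset is compact, so these are the upper sets
K = [A for A in powerset(X) if is_upper(A)]

def generate(subbasis):
    """Topology generated by a subbasis: close under finite meets, then joins."""
    basis = {X}
    for S in subbasis:
        basis |= {B & S for B in basis} | {frozenset(S)}
    return {frozenset().union(*fam) if fam else frozenset() for fam in powerset(basis)}

cocompact = generate([X - A for A in K])               # opens of the co-compact space
patch = generate(list(opens) + [X - A for A in K])     # opens of the patch space

assert cocompact == {A for A in powerset(X) if is_upper(X - A)}   # the lower sets
assert patch == set(powerset(X))                                  # discrete here
```

That the patch topology of this finite T0 space is discrete is an instance of it separating points that the original topology could only order.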

The real importance of these concepts will become apparent later (Section 3) in the special case of stably compact spaces, and the same goes for pospaces that we consider briefly in the next section. The reason to introduce them here is that we want to have the terminology available when we discuss the Lawson topology in the following section.

The specialisation order and the co-compact topology are linked by the following observation.

Proposition 1.2 For a space X the specialisation preorder ⊑_κ of the co-compact topology is the dual of that for the original topology.

Proof. Suppose x ⊑ y with respect to the original order, and K ∈ 𝒦(X) is such that y ∉ K. Then x ∉ K because K is a saturated, that is upper, set. The sets X \ K form a subbasis of the co-compact topology, and so we have y ⊑_κ x with respect to the topology generated by them.

Conversely, take x ⋢ y. Then ↑x is compact saturated and does not contain y. Since it contains x we see that y ⋢_κ x. □

For stably compact spaces we will see that the co-compact topology is indeed the dual topology in a certain sense to be made precise in Section 3.2.

1.1.3 Pospaces

A (partially) ordered space or simply pospace is a topological space X with an order relation ⊑ such that its graph is a closed subset of X × X, endowed with the product topology. This can be recast equivalently in a way that makes it clearer that this is a condition on how convergence and order interact: given converging nets (x_i)_{i∈I} and (y_i)_{i∈I} such that x_i ⊑ y_i holds point-wise, the limits also satisfy lim x_i ⊑ lim y_i.

Proposition 1.3 A pospace is Hausdorff.

Proof. The definition of pospace is self-dual, so if (X, ⊑) is a pospace then so is (X, ⊒). This implies that the diagonal, as the intersection of ⊑ and ⊒, is closed in X × X, and thus that X is Hausdorff. □

As a consequence we see that T₀ spaces can only be pospaces with respect to their specialisation order if this order is trivial. Using the patch topology they are, however, a rich source of ordered spaces.

Proposition 1.4 Let X be a locally compact T₀ space. Then X_π is a pospace with respect to the specialisation order of the original space X.

Proof. Given x ⋢ y we find a compact saturated neighbourhood K ∈ 𝒩(x) such that y ∉ K. Hence int(K) × (X \ K) is a patch open neighbourhood of (x, y). Suppose x′ ∈ int(K) and x′ ⊑ y′; then y′ is also in the saturated set K = ↑K. This shows that the graph of ⊑ does not meet the neighbourhood int(K) × (X \ K). This holds for all (x, y) ∈ (X × X) \ ⊑, and thus the order is closed. □

1.2 Domains

As the conventions in domain theory differ slightly from author to author we begin by repeating the basic definitions and results from [5] to fix our notation. Then we cover some domain theoretic concepts that are not treated in the introductory chapters there.

1.2.1 Dcpo's

A subset D of a poset is called directed if it is non-empty and for any two elements in D there is an upper bound in D. As a consequence all finite subsets of D have upper bounds in D. A directed complete partial order or dcpo is a poset where every directed subset D has a supremum ⨆D. Note that we do not require that a dcpo is non-empty or has a least element ⊥. If it has a least element we call the dcpo pointed.

A function f between dcpo's is Scott-continuous if it preserves directed suprema, i.e. f(⨆D) = ⨆f(D) for directed sets D. In particular, f is monotone.

A subset C of a dcpo X is Scott-closed if it is a lower set, i.e. C = ↓C, and closed under directed suprema, i.e. if D ⊆ C is directed then ⨆D ∈ C. Equivalently, the Scott topology Σ(X) consists of the upper sets that are inaccessible by directed suprema. As the sets ↓x are Scott-closed the specialisation order for the Scott topology is just the order of the dcpo. This topology justifies calling the above functions "continuous": a function is Scott-continuous if and only if it is continuous with respect to the Scott topologies on the dcpo's.

The specialisation order for the Scott topology is the original order as it is finer than the lower but coarser than the Alexandrov topology.
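In a finite poset every directed set has a greatest element, so the Scott-open sets are exactly the upper sets and the claims above can be tested directly. A short Python sketch (the diamond poset and all names are our own illustration):

```python
from itertools import combinations

# The diamond poset: bot < a, b < top, with a and b incomparable.
elems = ["bot", "a", "b", "top"]
le_pairs = ({(x, x) for x in elems}
            | {("bot", x) for x in elems}
            | {(x, "top") for x in elems})

def le(x, y):
    return (x, y) in le_pairs

def powerset(xs):
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

# In a finite poset the Scott-open sets are precisely the upper sets.
scott_opens = [U for U in powerset(elems)
               if all(y in U for x in U for y in elems if le(x, y))]

def specialisation(x, y):
    """x is below y iff every Scott-open set around x contains y."""
    return all(y in U for U in scott_opens if x in U)

# The specialisation order of the Scott topology recovers the poset order.
assert all(specialisation(x, y) == le(x, y) for x in elems for y in elems)
print(len(scott_opens), "Scott-open sets; specialisation = original order")
```

The assertion verifies, for this finite example, that the specialisation order of the Scott topology is the order we started from.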

1.2.2 Approximation

The 'order' of approximation or way-below relation is derived from the order on a dcpo X as follows: x ≪ y if for all directed sets D, y ⊑ ⨆D implies that there is a d ∈ D such that x ⊑ d. This relation clearly satisfies the implications x ≪ y ⟹ x ⊑ y and x ⊑ x′ ≪ y′ ⊑ y ⟹ x ≪ y. In particular, ≪ is transitive.

A dcpo is continuous if every element is the directed supremum of elements that approximate it. This turns out to be equivalent to the set ⇓x := {y | y ≪ x} being directed and x = ⨆⇓x for all x. Following [5] we reserve the term domain for dcpo's that are at least continuous.

An element x is compact if x ≪ x. We denote the set of compact elements by K(X). If in a domain every element is the directed supremum of compact elements below it we call it algebraic. In an algebraic domain we have x ≪ y if and only if there is a compact element k such that x ⊑ k ⊑ y.

We now come to what [5] calls the "single most important feature of the order of approximation" for continuous domains. As we have already seen, ≪ is always a transitive relation. For a continuous domain it is also interpolative, i.e. for x ≪ x′ we can find an interpolating element y satisfying x ≪ y ≪ x′. This interpolation property can be stated more generally as follows (see [5, Lemma 2.2.15]): given M ≪ x, where M is a finite set and the relation is required to hold for every element of M, we can find an interpolating y such that M ≪ y ≪ x.

As a consequence of interpolation the sets of the form ⇑x := {y | x ≪ y} form a basis of the Scott topology. Moreover, for any element x the sets ↑y, where y ≪ x, are compact neighbourhoods of x, and they form a basis of the neighbourhood filter 𝒩(x). This shows in particular that a continuous domain is locally compact. We will make use of the following consequence for the neighbourhood filters of compact saturated sets later:

Lemma 1.5 Let O be an open and K a compact saturated subset of a domain X with K ⊆ O. Then there is a finite set M such that K ⊆ ⇑M ⊆ ↑M ⊆ O.

Proof. For every x ∈ K we can find an x′ ∈ O such that x′ ≪ x because of the continuity of X. The union of the sets ⇑x′ covers K and because of the compactness of K finitely many of the x′ suffice; we can take them as the finite set M. □

A basis of a domain is a subset B such that every element is the directed supremum of elements from B that approximate it. Again this is equivalent to ⇓x ∩ B being directed and yielding x as its supremum. A dcpo is algebraic if and only if its compact elements form a basis.

Note that if we are given a basis B we get a more economical basis for the Scott topology; it is given by {⇑x | x ∈ B}. If we specialise this to the algebraic case we can use the sets ⇑k = ↑k, for k compact, as basic open sets.

1.2.3 Lawson Topology

As we have already seen the Scott topology on a continuous domain is locally compact. By Proposition 1.4 this implies that the corresponding patch space is a pospace.

There is a different description of this patch topology. For any dcpo X the Lawson topology is given by the subbasis of Scott-open sets and sets of the form X \ ↑x. It is the common refinement of the Scott topology and the upper topology; the latter has the sets ↑x as subbasic closed sets.

In general the upper topology is coarser than the co-compact topology for the Scott topology. For continuous domains, however, they agree.

Proposition 1.6 For a continuous domain X the Lawson topology is the patch topology of the Scott topology.

Proof. As the co-compact topology is finer than the upper topology the patch topology is at least as fine as the Lawson topology.

Conversely, let us suppose that U is Scott-open, K compact saturated and x ∈ U \ K. By the previous lemma there is a finite set M such that K ⊆ ↑M ⊆ X \ ↓x. This implies that the point x lies in the Lawson-open set U \ ↑M which, in turn, is contained in U \ K. □

1.2.4 Special Classes of Domains

Unless stated otherwise the proofs for the statements in this section can be found in [5, Section 4].

We call a domain a Scott domain if it has a least element ⊥ and any two bounded elements have a supremum. Note that some authors require Scott domains to also be ω-algebraic; we use the term in the more liberal fashion.

Remark. Using induction and directed suprema we see that this implies that in such a domain every bounded subset has a supremum. Thus they are sometimes also known as bounded complete domains. An equivalent condition is that every non-empty subset has an infimum. Another characterisation is that they are precisely the Scott-closed subsets of continuous lattices, i.e. complete lattices that are also continuous domains.

The ambient category of dcpo's is already cartesian closed. Products and exponentials are created by the forgetful functor to Set; the order in both cases is given point-wise. Unfortunately, neither the category of algebraic nor that of continuous domains is cartesian closed and so one has to restrict the class of domains further (see [5, Section 4]). Scott domains form a cartesian closed subcategory, and, moreover, they satisfy a very strong extension property [16].

Sometimes, for example for several power domain constructions, one is forced to consider more general domains. We do the algebraic case first.

Proposition 1.7 For an algebraic domain X with least element the following two conditions are equivalent:

(i) K(X) is MUB-complete, i.e. for every upper bound x of a finite subset M there is a minimal upper bound of M below x; and every finite subset M has a finite MUB-closure, which is the smallest set N ⊇ M such that N contains all minimal upper bounds of all its subsets.

(ii) There is a directed family of continuous idempotents with finite image on X whose supremum is the identity.

If an algebraic domain satisfies the equivalent conditions of the proposition we call it bifinite. They were introduced by Gordon Plotkin [50].
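Condition (i) of the proposition can be explored concretely on a small example. The following Python sketch (the six-element poset and the function names are our own illustration, not from the thesis) computes minimal upper bounds and the MUB-closure in a poset where the pair {a, b} has two incomparable minimal upper bounds:

```python
from itertools import combinations

# Hasse diagram: bot < a, b < c, d < top (a, b incomparable; c, d incomparable).
elems = ["bot", "a", "b", "c", "d", "top"]
le_pairs = ({("bot", x) for x in elems} | {(x, x) for x in elems}
            | {("a", "c"), ("a", "d"), ("b", "c"), ("b", "d"),
               ("a", "top"), ("b", "top"), ("c", "top"), ("d", "top")})

def le(x, y):
    return (x, y) in le_pairs

def upper_bounds(M):
    return {x for x in elems if all(le(m, x) for m in M)}

def mubs(M):
    """Minimal elements among the upper bounds of M."""
    ub = upper_bounds(M)
    return {x for x in ub if not any(le(y, x) and y != x for y in ub)}

def mub_closure(M):
    """Smallest superset closed under taking MUBs of all its subsets."""
    N = set(M)
    while True:
        new = set(N)
        for r in range(1, len(N) + 1):
            for S in combinations(sorted(N), r):
                new |= mubs(set(S))
        if new == N:
            return N
        N = new

assert mubs({"a", "b"}) == {"c", "d"}
print("MUB-closure of {a, b}:", sorted(mub_closure({"a", "b"})))
```

Here {a, b} has the two minimal upper bounds c and d, and closing up adds their own minimal upper bound top, so the MUB-closure is finite, as condition (i) demands.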

Remark. There is another characterisation of bifinite domains as bilimits (see [5, Section 3]) of finite posets which explains the name "bifinite" and also the common acronym SFP domains which stands for "sequence of finite posets".

As an immediate corollary to the proposition every algebraic Scott domain is bifinite. Moreover, bifinite domains form a cartesian closed category, in fact it is a maximal cartesian closed sub-category of pointed algebraic domains (see [58] and [29]).

Let C be a cartesian closed category. If C' is a full subcategory of C that is itself cartesian closed, then so is the full sub-category of retracts of objects from C' (see [57]). As the continuous domains are precisely the retracts of the algebraic domains, we get a fairly large category of continuous domains by applying this procedure to bifinite domains. The resulting domains are called RSFP for "retracts of SFP".

Proposition 1.8 A domain X is an RSFP domain if and only if there is a directed family of continuous endo-functions on X with finite image such that their supremum is the identity.

Proof. [29, Theorem 4.1] □

Every continuous Scott domain is an RSFP domain. The idea of the proof is as follows: given any finite subset M of a Scott domain there are only finitely many suprema of subsets of M. Hence, we can assume, without loss of generality, that M is closed under bounded suprema. This allows us to define a function as required in the proposition by mapping an element to the supremum, and hence the largest, of those members of M that approximate it. It is fairly straightforward to verify that this function is continuous. Furthermore, every infinite set is the directed union of its finite subsets, and hence the identity is the directed supremum of such functions.

It is not known whether the category of RSFP domains is a maximal cartesian closed sub-category of pointed continuous domains. It is contained in the category of FS domains—to be defined shortly—which is maximal, but it is an open problem whether this containment is proper.

We say a function f: X → X is finitely separated from the identity if there is a finite subset M ⊆ X such that for all x ∈ X there is an m ∈ M satisfying f(x) ⊑ m ⊑ x. An FS domain is a pointed domain for which there is a directed family of endo-functions that are finitely separated from the identity and whose supremum is the identity. This condition is clearly weaker than that of Proposition 1.8; hence RSFP is a subcategory of FS.

FS domains were first introduced in [30]. For more information see there or [5, Section 4]. They can also be described in purely topological terms [34]. This paper also contains more detailed proofs of some properties of FS domains than the two other sources.

A function that is finitely separated from the identity maps any element to one that approximates it:

Lemma 1.9 Let X be a dcpo and f: X → X a continuous function that is finitely separated from the identity. Then for all x ∈ X we have f(x) ≪ x.

Proof. Let M ⊆ X be a finite set that shows that f is finitely separated from id and x ∈ X an arbitrary element. Now, suppose D is directed and x ⊑ ⨆D, which clearly implies f(x) ⊑ f(⨆D) = ⨆f[D]. For every y ∈ D there is an m ∈ M such that f(y) ⊑ m ⊑ y. As M is finite and D directed there must be at least one such m that works for a cofinal subset of D, and this m satisfies ⨆f[D] ⊑ m. This implies f(x) ⊑ ⨆f[D] ⊑ m and, moreover, there is an element y ∈ D such that f(y) ⊑ m ⊑ y. This proves f(x) ≪ x. □

As an immediate corollary we get:

Corollary 1.10 An FS domain is continuous.

We will come back to these special classes of domains in the next section when we study their Stone duality. Later we will see that their Scott topologies give rise to stably compact spaces.

2 Stone Duality

We proceed with a brief introduction to Stone duality. This provides the link between topology and logic in the form of locales. They can be thought of as Lindenbaum algebras for a logic of observable properties that has (normal) conjunction and infinite disjunction. For details see [5, Sections 7.1 and 7.2], [25, Chapters 2 and 3] or [65, Chapters 3–5].

2.1 The Adjunction Ω ⊣ pt

2.1.1 Frames and Locales

The open sets of a topological space are closed under arbitrary suprema and hence form a complete lattice. As only finite infima can be calculated by intersections it is natural to look at these finite intersections. It turns out that they distribute over arbitrary joins motivating the following definition.

Definition 2.1 A complete lattice is called a frame if it satisfies the following frame distributivity:

x ∧ ⋁ᵢ yᵢ = ⋁ᵢ (x ∧ yᵢ).

A frame homomorphism is a function between frames that preserves all suprema and finite infima. We denote the category of frames and frame homomorphisms by Frm. Its opposite is the category of locales, Loc.
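In the lattice of open sets of any space, joins are unions and finite meets intersections, so the frame distributivity law can be verified exhaustively for a small topology. A Python sketch (the topology chosen here is our own illustration):

```python
from itertools import combinations

# Opens of a small finite space on {0, 1, 2}: joins are unions, meets intersections.
X = frozenset({0, 1, 2})
opens = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1}), X]

def join(sets):
    """Arbitrary join: the union (the empty join is the bottom open)."""
    return frozenset().union(*sets)

# Frame distributivity: x AND (OR of the y_i)  ==  OR of (x AND y_i),
# checked for every open x and every family {y_i} of opens.
for x in opens:
    for r in range(len(opens) + 1):
        for ys in combinations(opens, r):
            assert x & join(ys) == join([x & y for y in ys])
print("frame distributivity holds in this open-set lattice")
```

The check runs over all families of opens at once, i.e. it tests the infinitary law, not just the binary distributive law.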

Observe that the definition of a locale mirrors the logic of observable properties: The equations used in the definition are made up from arbitrary disjunctions but only finite conjunctions.

Remark. The category of frames has a number of nice properties. It is, for example, algebraic over Set which means that free frames exist. More importantly one can use generators and relations to construct frames with certain properties. For details see [25, Chapter 11].

A continuous function f between topological spaces X and Y gives rise to a frame homomorphism f⁻¹[−]: Ω(Y) → Ω(X) which takes an open set to its preimage. Hence we get a functor Ω from Top, the category of topological spaces, to Loc.

It is worth mentioning that open sets in a topological space X are in bijection with continuous functions from X to 2, the poset of two elements 0 ⊑ 1 equipped with the Scott topology. The bijection takes an open set U to its characteristic function χ_U and a map χ: X → 2 to the open set χ⁻¹[{1}]. This correspondence is an order isomorphism since U ⊆ V is equivalent to χ_U ⊑ χ_V.

Taking the preimage under f can then be described as composing the characteristic function of an open set with f, i.e. χ_{f⁻¹[U]} = χ_U ∘ f. This implies that Ω followed by the forgetful functor to Set is isomorphic to the Hom-functor Top(−, 2).
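For a finite space the bijection between opens and continuous maps into the Sierpiński space 2 can be enumerated outright. A Python sketch (the three-point space is our own choice of example):

```python
from itertools import product

# Sierpinski space 2 = {0, 1} with opens: empty, {1}, {0, 1}; so 0 is below 1.
X = [0, 1, 2]
opens_X = [frozenset(), frozenset({2}), frozenset({1, 2}), frozenset(X)]
opens_2 = [frozenset(), frozenset({1}), frozenset({0, 1})]

def continuous(f):
    """f: X -> 2 is continuous iff preimages of opens of 2 are open in X."""
    return all(frozenset(x for x in X if f(x) in V) in opens_X for V in opens_2)

def chi(U):
    """Characteristic function of a subset U of X."""
    return lambda x: 1 if x in U else 0

# Every open gives a continuous characteristic map ...
assert all(continuous(chi(U)) for U in opens_X)

# ... and every continuous map X -> 2 arises this way.
cont_maps = [f for f in product([0, 1], repeat=len(X))
             if continuous(lambda x: f[x])]
assert len(cont_maps) == len(opens_X)
print(len(cont_maps), "continuous maps to Sierpinski =", len(opens_X), "opens")
```

A map into 2 is determined by the preimage of {1}, so the count of continuous maps must equal the count of opens, which is exactly what the enumeration confirms.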

2.1.2 Reconstructing a Space

The natural question now is whether we can retrieve a space X from its locale of open subsets Ω(X). For a Hausdorff space all sets of the form X \ {x} are open and co-atoms in Ω(X). These co-atoms turn out to correspond exactly to points of X.

For T₀ spaces, i.e. the ones we are particularly interested in, this is no longer true. In general we observe that the closure cl{x} of a point x is a ∪-irreducible closed set, which implies that X \ cl{x} is ∧-irreducible in Ω(X). In a distributive lattice, in particular in a locale, the ∧-irreducible and the ∧-prime elements coincide. Hence, our candidates for points are the ∧-prime elements.

Another approach is to observe that the open neighbourhood filter of a point x is a completely prime filter in Ω(X) which we denote by 𝒩°(x). This suggests that we might want to take these filters as points. Fortunately, in a complete lattice the ∧-prime elements and the completely prime filters are in one-to-one correspondence: if x is ∧-prime then the complement of ↓x is a completely prime filter, and conversely if F is such a filter the supremum of its complement is still in the complement and prime.
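For a finite frame both descriptions of points can be enumerated and compared. The following Python sketch (the four-element chain of opens and all names are our own illustration) computes the completely prime filters and matches them with the ∧-prime elements via the correspondence just described:

```python
from itertools import combinations

# A finite frame: the opens of a three-point space, forming a chain under inclusion.
X = frozenset({0, 1, 2})
L = [frozenset(), frozenset({2}), frozenset({1, 2}), X]

def join(S):
    return frozenset().union(*S)

def is_cp_filter(F):
    """Non-empty upper set, closed under binary meets, completely prime."""
    if not F:
        return False
    if any(a in F and a <= b and b not in F for a in L for b in L):
        return False
    if any(a in F and b in F and (a & b) not in F for a in L for b in L):
        return False
    return all(join(S) not in F or any(s in F for s in S)
               for r in range(len(L) + 1) for S in combinations(L, r))

cp_filters = {frozenset(F) for r in range(len(L) + 1)
              for F in combinations(L, r) if is_cp_filter(frozenset(F))}

def meet_prime(p):
    """p is not top, and a MEET b below p forces a or b below p."""
    return p != X and all(not ((a & b) <= p) or a <= p or b <= p
                          for a in L for b in L)

primes = [p for p in L if meet_prime(p)]

# The bijection: a prime p corresponds to the completely prime filter L minus down-set of p.
assert cp_filters == {frozenset(a for a in L if not a <= p) for p in primes}
print(len(primes), "points found")
```

The final assertion is exactly the correspondence p ↦ L \ ↓p from the text, checked in both directions by set equality.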

There is a third way of describing points. The characteristic function of a completely prime filter is a frame homomorphism¹ to 2, now considered as a two element locale, and again the correspondence is one-to-one.

The last description allows for the simplest definition of the contravariant functor pt from CLat, the category of complete lattices with frame homomorphisms, to Top: its image under the forgetful functor can be taken to be CLat(−, 2); an open subset of the space pt(L) is a set of 'points' which take a specific x ∈ L to 1 ∈ 2. Describing opens by their characteristic functions they are of the form λf. f(x).

The two functors pt and Ω form a dual adjunction. If we restrict pt to locales we can consider it as an adjunction Ω ⊣ pt between Top and Loc. Since Loc is simply the dual category of Frm both functors are now covariant.

From now on we are going to use the term 'point' to refer to a completely prime filter in a locale or more generally in a complete lattice. We might also use the term to refer to the corresponding characteristic function as long as it is obvious from the context what is meant. Sometimes, if we want to make clear that we are talking about the elements of a locale and not its 'points' we refer to the elements as the 'opens' of the locale.

Given a locale L one can directly read off the specialisation order of the topological space pt(L) it describes; it is just subset inclusion. The extensional order on functions between such spaces, introduced in Section 1.1, is also manifest on the localic side.

Proposition 2.2 For two functions f and g between topological spaces we have f ⊑ g if and only if for all open sets U the containment f⁻¹[U] ⊆ g⁻¹[U]

1 We use this as a shorthand for "preserves arbitrary suprema and finite infima" even if the lattices involved are just complete lattices and not frames.

holds, or in other words if Ω(f) ⊑ Ω(g).

Conversely, if f ⊑ g holds point-wise for frame homomorphisms f and g then pt(f) ⊑ pt(g).

2.2 Sobriety and Spatiality

We still have not answered the question of which spaces can be reconstructed from their lattice of opens. It is obvious that a space has at least to satisfy the T₀ separation axiom. For the dual problem of reconstructing a lattice from its 'points' it is clear that the lattice has to be at least a locale. So, the question is tantamount to restricting and co-restricting the above adjunction to an equivalence. That is to say we have to characterise those spaces and locales whose units and co-units are isomorphisms.

To do this we need some information on how the units η_X : X → pt(Ω(X)) and co-units ε_L : L → Ω(pt(L)) work. As we have already observed there is a perfect symmetry between the two functors pt and Ω which allows us to write them both as Hom-functors (−, 2) in the respective categories. So, it may not come as a surprise that both η_X and ε_L can be written as the evaluation function λx. λf. f(x), disregarding the types of the arguments.

For the unit this is just a fancy way of saying that η_X takes a point to its neighbourhood filter. Thinking of 'points' as completely prime filters the co-unit takes the form ε_L(x) := {P ∈ pt(L) | x ∈ P}. When we want to stress that this is an open set in pt(L) we sometimes refer to it as O_x. As ε_L is a frame homomorphism we infer O_{⋁xᵢ} = ⋃ O_{xᵢ} and O_{x∧y} = O_x ∩ O_y.

Now, we can state the relevant facts concerning the reconstruction of spaces:

Proposition 2.3 For a topological space X the following are equivalent:

(i) The unit η_X is a homeomorphism.

(ii) The function η_X is bijective.

(iii) Every ∪-irreducible closed set is the closure of a unique point.

(iv) Every completely prime filter in Ω(X) is the open neighbourhood filter of a unique point.

A space satisfying these equivalent conditions is called sober. Note that if we restrict ourselves to T₀ spaces we only have to ask for η_X to be surjective, or equivalently we can ignore uniqueness in conditions (iii) and (iv). On the complete lattice side we get:
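Finite T₀ spaces are always sober, and condition (iii) can be verified by brute force on an example. A Python sketch (the three-point space and the helper names are our own illustration) lists the ∪-irreducible closed sets and finds the unique generic point of each:

```python
X = [0, 1, 2]
opens = [frozenset(), frozenset({2}), frozenset({1, 2}), frozenset(X)]
closeds = [frozenset(X) - U for U in opens]

def closure(A):
    """Smallest closed superset of A."""
    return frozenset.intersection(*[C for C in closeds if A <= C])

def union_irreducible(C):
    """Non-empty and not a union of two strictly smaller closed sets."""
    return C and not any(A | B == C for A in closeds for B in closeds
                         if A < C and B < C)

irr = [C for C in closeds if union_irreducible(C)]
for C in irr:
    generic = [x for x in X if closure(frozenset({x})) == C]
    assert len(generic) == 1  # unique generic point: the space is sober
print("irreducible closed sets:", [sorted(C) for C in irr])
```

Each irreducible closed set turns out to be the closure of exactly one point, which is condition (iii) for this space.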

Proposition 2.4 For a complete lattice L the following conditions are equivalent:

(i) The co-unit ε_L is an order-isomorphism.

(ii) ε_L is injective.

(iii) ε_L is order-reflecting.

(iv) The elements of L are separated by completely prime filters.

(v) For elements x ≰ y there exists a completely prime filter containing x but not y.

(vi) L is ∧-generated by its ∧-prime elements, i.e. every element is the meet of the meet-prime elements above it.

A complete lattice is called spatial if it satisfies the conditions of the above proposition.

The functors pt and Ω give rise to closure operators in the following sense:

Proposition 2.5 For a complete lattice L the space pt(L) is sober, and for a topological space X the locale Ω(X) is spatial.

The functors pt and Ω restrict and co-restrict to an equivalence of the full subcategories of sober spaces and spatial locales.

The composition pt ∘ Ω of these two functors is known as sobrification, Ω ∘ pt as spatialisation.

Proposition 2.6 Arbitrary products and coproducts of sober spaces are sober.

Proof. For products this is an immediate consequence of the fact that the functor pt as a right adjoint preserves all limits.

For coproducts let A ⊆ ∐ᵢ Xᵢ be an irreducible closed subset of the disjoint union of the sober spaces Xᵢ. We claim that A must be a subset of one of the Xᵢ. This is true since A ∩ Xᵢ ≠ ∅ and A ∩ Xⱼ ≠ ∅ for i ≠ j would imply that we can write A as a non-trivial union

A = (((∐ₖ Xₖ) \ Xᵢ) ∩ A) ∪ (((∐ₖ Xₖ) \ Xⱼ) ∩ A)

of closed sets, contradicting its irreducibility. The claim now follows immediately from the sobriety of the space Xᵢ containing A. □

2.2.1 Sobriety and Domains

For the programme of domain theory in logical form we will concentrate on spaces which are domains with their Scott topology. With respect to our current viewpoint these spaces are 'well-behaved':

Proposition 2.7 If D is a continuous domain then (D, Σ(D)), i.e. D with its Scott topology, is sober.

Continuity cannot be dropped from the previous proposition [24]. Later on we will characterise the topologies that arise as Scott topologies of continuous domains.

There is also a connection between continuity and spatiality: for a distributive complete lattice continuity implies frame distributivity because binary meet on a continuous lattice is Scott-continuous. Hence, such lattices are locales. Moreover they turn out to be spatial.

Proposition 2.8 A distributive, continuous lattice is a spatial locale.

For a thorough discussion of continuous lattices see [19]. Most of the results about the hierarchy of Stone dualities, given later in this chapter, can also be found there.

As mentioned above the specialisation order on pt(L) is just subset inclusion. As the directed union of 'points', i.e. completely prime filters, is again a 'point' this specialisation order is always a dcpo. We get even more:

Proposition 2.9 The specialisation order of a sober space is a dcpo, and the topology is coarser than the Scott topology.

We can extend this to morphisms as well. The standard proof that for dcpo's Scott-continuity and topological continuity with respect to the Scott topologies coincide actually shows slightly more: if f: X → Y is a continuous function from a dcpo X with the Scott topology to any space Y then f is Scott-continuous, i.e. f preserves directed suprema. In particular, suprema of images of directed sets in X exist in Y. As a consequence we see that continuous maps between sober spaces are Scott-continuous.

Remark. Categorically, this makes the specialisation order a functor from sober spaces to dcpo's.

Considered as a functor to posets, the left adjoint of the specialisation order functor is given by the sobrification of the Alexandrov topology. The resulting space is just the ideal completion of the poset with the Scott topology.

We can also take directed suprema of functions between sober spaces.

Lemma 2.10 If X and Y are sober spaces and (fᵢ): X → Y is a directed family of continuous maps then the point-wise supremum ⨆fᵢ is continuous.

This corresponds to taking the point-wise supremum of the locale morphisms: Ω(⨆fᵢ) = ⨆Ω(fᵢ).

Proof. We first claim that for a Scott-open set U we have (⨆fᵢ)⁻¹[U] = ⋃fᵢ⁻¹[U]. The latter set is clearly contained in the former. For the other subset inclusion note that x ∈ (⨆fᵢ)⁻¹[U] implies ⨆fᵢ(x) ∈ U and hence fᵢ(x) ∈ U for some index i.

From this observation we instantly get that ⨆fᵢ is continuous as for a sober space every open set is Scott-open. The second claim is also an immediate consequence. □

The proof of the first assertion of the lemma did not even use sobriety. It works whenever Y carries a topology that is coarser than the Scott topology.

2.2.2 Compact Saturated Sets

Given a compact subset K of a topological space X its open neighbourhood filter 𝒩°(K) = {O ∈ Ω(X) | K ⊆ O} is clearly a Scott-open filter in the locale Ω(X). (Without loss of generality we can assume K to be saturated as any set has the same open neighbourhood filter as its saturation.)

This suggests thinking of Scott-open filters in any locale L as compact saturated 'subsets'. To justify this we have to show that every such filter does indeed correspond to a compact saturated subset of pt(L). This is the statement of the (localic) Hofmann–Mislove Theorem [23]. We discuss the proof in detail so we can compare it to the logical version later. It is essentially a localic version of the proof given in [36]. The following is the key lemma.

Lemma 2.11 If F is a Scott-open filter in a distributive complete lattice L and x ∈ L \ F then there is a completely prime filter P ⊇ F such that x ∉ P.

Proof. The set L \ F is Scott-closed and so in particular inductively ordered. By Zorn's Lemma we find a maximal element x′ ≥ x that does not lie in F. Any maximal element not in a filter is ∧-irreducible, and in a distributive lattice ∧-irreducible and ∧-prime elements coincide. This in turn implies that P := L \ ↓x′ is again a filter. This filter is clearly completely prime as the complement of the principal ideal ↓x′. The conditions x ∉ P and F ⊆ P follow immediately from x ≤ x′ ∉ F = ↑F. □

The lemma can be rephrased to say that completely prime filters, i.e. the 'points', separate elements from Scott-open filters, or that every Scott-open filter is the intersection of the completely prime filters containing it.
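This separation property can be tested exhaustively on a finite distributive lattice, where every filter is automatically Scott-open. The following Python sketch (our own example: the powerset frame of a two-point discrete space) checks that for every filter F and every element outside F some completely prime filter above F avoids that element:

```python
from itertools import combinations

# The frame of opens of a two-point discrete space D = {0, 1}: its full powerset.
top = frozenset({0, 1})
L = [frozenset(), frozenset({0}), frozenset({1}), top]

def join(S):
    return frozenset().union(*S)

def powerset(xs):
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def is_filter(F):
    return bool(F) and all(b in F for a in F for b in L if a <= b) \
        and all((a & b) in F for a in F for b in F)

def completely_prime(F):
    return is_filter(F) and all(join(S) not in F or any(s in F for s in S)
                                for S in powerset(L))

filters = [F for F in powerset(L) if is_filter(F)]   # all Scott-open here
points = [P for P in powerset(L) if completely_prime(P)]

# points separate elements from filters, as in the lemma
for F in filters:
    for x in L:
        if x not in F:
            assert any(F <= P and x not in P for P in points)
print("every filter is the intersection of the points containing it")
```

Note that the filter {top} is itself not a point (top is the join of the two atoms, neither of which it contains), yet the two points above it still separate it from every element outside it.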

We can now prove the above claim that all Scott-open filters in a locale correspond to compact saturated sets.

Theorem 2.12 Given a distributive complete lattice L the Scott-open filters on L are in one-to-one correspondence with the compact saturated subsets of pt(L).

The isomorphism takes a filter F ⊆ L to the set of all 'points' containing it and a compact saturated set K ⊆ pt(L) to ⋂K = {x ∈ L | K ⊆ O_x}. These maps are order-isomorphisms with respect to inclusion on filters and reverse inclusion on compact saturated sets.

Proof. Suppose K is a compact saturated set. If we consider the equivalences

K ⊆ O_x ⟺ (∀P ∈ K) x ∈ P ⟺ x ∈ ⋂K

we see that the sets {x | K ⊆ O_x} and ⋂K are indeed equal, and as ⋂K is an intersection of filters it is again a filter. That it is Scott-open follows from the compactness of K and the fact that O_(·) = ε_L is a frame homomorphism and thus commutes with arbitrary suprema.

Now take a Scott-open filter F ⊆ L. We only have to show that the set K := {P ∈ pt(L) | F ⊆ P} is compact, because it is obviously an upper set and thus saturated. Suppose it is covered by a directed union of open sets ⋃O_{xᵢ} = O_{⋁xᵢ}, which is equivalent to (∀P ⊇ F) ⋁xᵢ ∈ P, where P ranges over 'points'. By the previous lemma we get ⋁xᵢ ∈ F, and as F is Scott-open there must be an index i for which xᵢ ∈ F. This implies K ⊆ O_{xᵢ}, and thus K is compact.

It remains to show that the above mappings are mutually inverse. From the lemma we already know that ⋂{P ∈ pt(L) | F ⊆ P} = F holds. For the converse we get

{P ∈ pt(L) | ⋂K ⊆ P} = {P | (∀x ∈ ⋂K) x ∈ P}
                      = ⋂{O_x | K ⊆ O_x}
                      = ↑K = K. □

Starting with a sober space, the more common, topological version of the theorem is an immediate consequence.

Corollary 2.13 The compact saturated subsets of a sober space X are order isomorphic to the Scott-open filters of Ω(X).

The corresponding maps take a Scott-open filter to its intersection and a compact saturated set to its open neighbourhood filter.
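For a finite sober space the whole correspondence can be enumerated: every subset is compact and every filter of opens is Scott-open. A Python sketch (the space and all names are our own illustration) checks that the two maps of the corollary are mutually inverse:

```python
from itertools import combinations

X = [0, 1, 2]
opens = [frozenset(), frozenset({2}), frozenset({1, 2}), frozenset(X)]

def below(x, y):  # specialisation order of the space
    return all(y in U for U in opens if x in U)

def powerset(xs):
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

# In a finite space every filter of opens is Scott-open and every subset
# compact, so K(X) is simply the collection of upper (= saturated) sets.
def is_filter(F):
    return bool(F) and all(b in F for a in F for b in opens if a <= b) \
        and all((a & b) in F for a in F for b in F)

filters = [F for F in powerset(opens) if is_filter(F)]
ksat = [A for A in powerset(X)
        if all(y in A for a in A for y in X if below(a, y))]

# the two maps: a filter goes to its intersection, a compact saturated
# set to its open neighbourhood filter
to_ksat = {F: frozenset.intersection(*F) for F in filters}
to_filter = {K: frozenset(U for U in opens if K <= U) for K in ksat}

assert all(to_filter[to_ksat[F]] == F for F in filters)
assert all(to_ksat[to_filter[K]] == K for K in ksat)
print(len(filters), "Scott-open filters <->", len(ksat), "compact saturated sets")
```

The two assertions verify both round trips of the order-isomorphism on this finite example.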

Remark. We have already observed that a subset of a space and its saturation have the same neighbourhood filter, and we can clearly reconstruct a saturated set from this filter by taking the intersection. This suggests a one-to-one correspondence between saturated sets and filters in the locale. Unfortunately, this does not quite work; in general, several filters represent the same saturated set. As an example consider the filter

𝔉 := ↑{ℝ \ {1, 2, …, n} | n ∈ ℕ} ⊆ Ω(ℝ).

Its intersection is ⋂𝔉 = ℝ \ ℕ, an open set that does not lie in 𝔉.

The filters that do arise as neighbourhood filters of saturated sets are precisely those that are intersections of completely prime filters.

These observations can be used to give some meaning to infinite meets in locales, although they are not part of the language. The infimum of any subset in a complete lattice is the same as that of the filter generated by the set. Hence ⋀ is essentially an operation on filters. In a locale the result yields the interior of the saturated set that such a filter represents.

We summarise the correspondence between topological and localic concepts that we have been discussing up to here in the following table. In the rest of this thesis we are going to make heavy use of these equivalences, sometimes without explicitly mentioning it.

Space                    Locale

point                    completely prime filter
open set                 element
compact saturated set    Scott-open filter
(saturated set)          (filter)

In many cases the Hofmann–Mislove Theorem is used in the form of the following corollary. It is slightly stronger than just stating that for a sober space X the poset (𝒦(X), ⊇) is a dcpo.

Corollary 2.14 In a sober space X a filtered intersection of compact saturated sets is compact saturated.

Moreover, if such an intersection lies in an open set O then one of the compact saturated sets is already a subset of O. In particular, a filtered intersection of non-empty compact saturated sets is non-empty.

Proof. We can translate the filtered intersection into a directed union of Scott-open filters. The result is then again such a filter. This implies that the infimum of the directed family of compact saturated sets exists in (𝒦(X), ⊆) and that it is given by the intersection of the union of the neighbourhood filters of the original compact saturated sets.

Now, it is generally true in a complete lattice that the meet of a union of subsets is the meet of the meets of the individual sets—this is 'general associativity'. Applying this to the power set 𝔓(X) we see that the above infimum is just the filtered intersection of the compact saturated sets.

Thinking on the side of Scott-open filters it is obvious that an open set is only in the directed union if it was already in one of the original filters. □

As another application of the Hofmann-Mislove Theorem we prove the localic equivalent of the well-known theorem that the continuous image of a compact set is compact. But first we need a lemma.

Lemma 2.15 If P is a completely prime filter and ⋂ᵢ Qᵢ ⊆ P, where the Qᵢ are upper sets, then there is an index i such that Qᵢ ⊆ P.

Proof. Suppose otherwise, that for every i there is an element xᵢ ∈ Qᵢ \ P. Then ⋁ᵢ xᵢ lies in ⋂ᵢ Qᵢ as these are upper sets, but on the other hand the supremum cannot be a member of the completely prime filter P. Hence we have ⋁ᵢ xᵢ ∈ ⋂ᵢ Qᵢ \ P, a contradiction. □

Topologically, the following proposition is almost trivial. The reason for proving a localic form of it is that we need it later for the proof of Lemma 3.16.

Proposition 2.16 If f: L → M is a frame homomorphism and F ⊆ M a Scott-open filter then so is its preimage f⁻¹[F] ⊆ L.

Topologically this can be seen as a map f_𝒦: 𝒦(pt(M)) → 𝒦(pt(L)) that takes a compact saturated set K to ↑(pt f)[K].

Proof. The first assertion is trivial because frame homomorphisms preserve finite infima and arbitrary, hence in particular directed, suprema.

Now, take a 'point' P in the compact saturated set K corresponding to a Scott-open filter F—in other words, let F ⊆ P. Then we get f⁻¹[F] ⊆ f⁻¹[P] = (pt f)(P), and we see that (pt f)[K] is a subset of the compact saturated set corresponding to f⁻¹[F]. This implies that the saturation ↑(pt f)[K] is also contained in this set.

Conversely, if we have f⁻¹[F] ⊆ P, then we can re-write the first filter using Lemma 2.11 as

f⁻¹[F] = f⁻¹[⋂{Q ∈ pt(M) | F ⊆ Q}] = ⋂{f⁻¹[Q] | F ⊆ Q}.

So, by the previous lemma we get a 'point' Q ⊇ F such that f⁻¹[Q] ⊆ P, which shows the other subset inclusion. □

2.3 Hierarchy of Stone Dualities I

We now investigate how the duality between sober spaces and spatial locales can be restricted and co-restricted. For a road map to the hierarchy of Stone dualities we are studying in this section see Figure 1 on page 30.

2.3.1 Local Compactness

From Proposition 2.8 we know that a continuous locale is spatial. We claim that these locales are exactly those which are isomorphic to the lattice of open subsets of a locally compact sober space. To see this we need more information about the order of approximation for the opens and compact saturated subsets of a space.

Lemma 2.17 Let X be a locally compact space. Then O ≪ O′ holds in the lattice (Ω(X), ⊆) if and only if there is a compact (saturated) set K such that O ⊆ K ⊆ O′.

In (𝒦(X), ⊇) the condition K ≪ K′ implies that there is an open set O satisfying K ⊇ O ⊇ K′. The converse is true if, in addition, X is sober.

Proof. In the situation O ⊆ K ⊆ O′, where K is compact, we clearly have O ≪ O′. For the reverse implication observe that local compactness implies that O′ is the union of compact neighbourhoods of its points, and that finite unions of compact sets are compact.

Now, suppose K ≪ K′. As a saturated set, K′ is the intersection of any filter basis of its neighbourhood filter. By local compactness we can assume this to be made up from compact saturated sets, and so we find L ∈ 𝒦(X) such that K′ ⊆ int(L) ⊆ L ⊆ K.

The implication from K ⊇ O ⊇ K′ to K ≪ K′ is a direct consequence of Corollary 2.14. □

From the first half of the lemma it follows that local compactness of a space entails the continuity of the corresponding locale.

Proposition 2.18 The equivalence of Proposition 2.5 between sober spaces and spatial locales restricts and co-restricts to locally compact spaces and continuous locales.

Proof. For the remaining direction consider u ∈ P ∈ pt(L), where L is a continuous locale. We can think of u ∈ P as saying that the 'point' P lies in the 'open set' u. P is completely prime and thus Scott-open, and as L is continuous we find u′ ∈ P that approximates u. Using the interpolation property repeatedly we construct a sequence u′ ≪ ⋯ ≪ u₂ ≪ u₁ ≪ u. It is now easy to check that F := ⋃ₙ ↑uₙ is a Scott-open filter and that we have u ∈ F ⊆ ↑u′ ⊆ P. Using the Hofmann-Mislove Theorem in the form of Lemma 2.11 we see that, thinking in terms of the space pt(L), this means that the compact saturated set K corresponding to F is contained in O_u and that the interior of K contains O_{u′}, which in turn contains the 'point' P. Consequently, pt(L) is locally compact. □

Remark. This result can actually be slightly sharpened: because of Ω(X) = Ω(pt(Ω(X))), the open set lattice Ω(X) is continuous if and only if pt(Ω(X)), the sobrification of X, is locally compact.

In the following compact saturated and open sets will play equally important roles; this link will feature more prominently in the next section. Hence, we state the analogue of the proposition for compact saturated sets.

Proposition 2.19 For a locally compact sober space X, (𝒦(X), ⊇) is a continuous dcpo.

The sets □O := {K ∈ 𝒦(X) | K ⊆ O}, for O open, form a basis of the Scott topology.

Proof. We have already seen in Corollary 2.14 that 𝒦(X) is a dcpo under reverse inclusion. Now, from local compactness we can infer that every compact set has a neighbourhood filter basis of compact saturated sets. By Lemma 2.17 this shows that this dcpo is continuous.

A set □O is Scott-open by the second observation of Corollary 2.14, and interpolation together with Lemma 2.17 yields that such sets form a basis of the topology. □

In general, we can say nothing about unions of compact sets. But if the set that we take the union over is itself compact the situation is much better. The following lemma is taken from [54, Section 7.3.2] and will be important later.

[Figure: a diagram of inclusions between categories of topological spaces, with nodes including Top and Sob.]

Fig. 1. Hierarchy of categories of topological spaces I

Lemma 2.20 Let X be a locally compact sober space. If 𝒞 is a compact subset of 𝒦(X) then ⋃𝒞 is compact saturated.

Proof. The set ⋃𝒞 is an upper set as a union of upper sets. To see that it is compact suppose 𝒪 is a directed open cover, i.e. ⋃𝒞 ⊆ ⋃𝒪. For every K ∈ 𝒞 we can find an O ∈ 𝒪, by compactness, such that K ⊆ O, or in other words K ∈ □O. The set {□O | O ∈ 𝒪} is clearly a directed family of open subsets of 𝒦(X), and by the previous observation it covers 𝒞. Hence, we can use the compactness of 𝒞 to find an O ∈ 𝒪 such that 𝒞 ⊆ □O and thus ⋃𝒞 ⊆ O. □

Restricting locales further it is quite natural to ask what happens if we require them to be not just continuous but algebraic. The compact elements of Ω(X) are exactly the compact-open subsets of X, which we denote by KΩ(X). So, algebraicity of Ω(X) is tantamount to saying that the space X has a basis of compact-open subsets.

Proposition 2.21 The functors Ω and pt restrict to an equivalence between spaces with a basis of compact-open sets and algebraic locales.

2.3.2 Domains

Up to now we have discussed general topological spaces. In the following we want to concentrate on those that are continuous domains with their Scott topology. We let Dom denote the category of continuous domains with Scott-continuous functions and use Alg for the full subcategory of algebraic domains. Their Stone duals have been identified by Lawson [41] and Hoffmann [22] to be the completely distributive lattices.

Definition 2.22 A complete lattice is called completely distributive if the following infinite distributivity law holds for every family (Aᵢ)_{i∈I} of subsets:

⋀_{i∈I} ⋁Aᵢ = ⋁_{f ∈ ∏_{i∈I} Aᵢ} ⋀_{i∈I} f(i)

As with the finite version of this law, complete distributivity is a self-dual concept.
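For a small finite case the law can be checked exhaustively. The sketch below (illustrative only, names are mine) takes the powerset lattice of a two-element set, where joins are unions and meets are intersections, and verifies the law for all families consisting of two non-empty subsets of the lattice:

```python
from itertools import product, combinations

points = frozenset({0, 1})
L = [frozenset(s) for s in [(), (0,), (1,), (0, 1)]]   # the powerset lattice

def join(xs):
    out = frozenset()        # empty join is the bottom element
    for x in xs:
        out |= x             # supremum = union
    return out

def meet(xs):
    out = points             # empty meet is the top element
    for x in xs:
        out &= x             # infimum = intersection
    return out

def nonempty_subsets(xs):
    return [list(c) for r in range(1, len(xs) + 1)
            for c in combinations(xs, r)]

for A1 in nonempty_subsets(L):
    for A2 in nonempty_subsets(L):
        lhs = meet([join(A1), join(A2)])
        # the join over all choice functions picking one element of each A_i
        rhs = join(meet([x1, x2]) for (x1, x2) in product(A1, A2))
        assert lhs == rhs
```

Powerset lattices are completely distributive, so every pair of families passes; the check is a sanity test of the shape of the law, not a proof.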

Proposition 2.23 The Stone dual of Dom is the category of completely distributive lattices.

To understand the intersection of Dom and CpOpen we have to study compact-open sets in continuous domains. We claim that they are exactly those of the form ↑M, for a finite set M of compact elements. Such sets are clearly compact-open. To see the converse take a compact-open set K, and note that being an open set it can be written as K = ⋃_{x∈K} ↟x. By compactness we find a finite subset M ⊆ K such that K = ⋃_{x∈M} ↟x, and hence K = ↑M; after eliminating superfluous elements from M we see that it consists of compact minimal elements of K.
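In the finite case this claim can be tested mechanically: in a finite poset every element is compact and the Scott opens are exactly the upper sets, so every open set should be generated by its finitely many minimal elements. A small sketch (the poset is an arbitrary example of mine):

```python
from itertools import combinations

X = {0, 1, 2, 3}
leq = {(a, a) for a in X} | {(0, 2), (0, 3), (1, 3)}   # a small poset

def up(M):
    return {y for y in X if any((m, y) in leq for m in M)}

# the Scott opens of a finite poset are its upper sets
upper_sets = [set(c) for r in range(len(X) + 1) for c in combinations(X, r)
              if up(set(c)) == set(c)]

# every (compact-)open set is ↑M for the finite set M of its minimal elements
for U in upper_sets:
    mins = {x for x in U if not any((y, x) in leq and y != x for y in U)}
    assert up(mins) == U
```

The interesting content of the proposition lies of course in the infinite case, where compactness of K is needed to cut the cover down to a finite one.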

Proposition 2.24 The category Alg is the intersection of Dom and CpOpen. Its Stone dual is the category of algebraic completely distributive lattices.

Proof. In an algebraic domain X sets of the form ↑M, for M ⊆fin K(X), are a basis of the Scott topology.

Conversely, for any point x the sets ↑y, where y ≪ x, form a basis of its neighbourhood filter. If the compact-open sets are a basis for the topology then the considerations prior to this proposition show that x is the directed supremum of compact elements. □

3 Stably Compact Spaces

Another direction to take from locally compact spaces is to study stably compact spaces. They play an important role in domain theory as the commonly used classes of domains give rise to stably compact spaces when equipped with the Scott topology; for details see [5, Section 4], [29] and [30]. Stably compact spaces are also closely related to compact pospaces, and they have many of the 'nice' properties of Hausdorff spaces.

In this section we will use some more advanced tools from general topology, in particular ultrafilters and nets, which are, for example, discussed in [8] and [38].

3.1 The Definition

There are several ways to define stably compact spaces. As these spaces are at the centre of this thesis we explicitly prove these definitions equivalent.

But first we need some auxiliary definitions. The order of approximation in a complete lattice is called multiplicative if x ≪ y, z implies x ≪ y ∧ z.

A space is supersober if, given an ultrafilter 𝒰 on it, the set of its limits L(𝒰) = ⋂{cl(F) | F ∈ 𝒰} is either empty or has a largest element x. The latter is equivalent to the condition that the set of limits is the closure of that point, i.e. L(𝒰) = cl({x}).

This terminology is justified by the following observation.

Proposition 3.1 A supersober space is sober.

Proof. Let X be supersober and A an irreducible closed subset. We define

B := {U ∩ A | U ∈ Ω(X)} \ {∅}

which is a filter basis because of the irreducibility of A, and hence we can refine the filter it generates to an ultrafilter 𝒰. By the construction of B every element of A is a limit of this ultrafilter, and as A ∈ B ⊆ 𝒰 and A is closed no point outside of A can be in the set of its limits L(𝒰). We have thus shown A = L(𝒰), and by supersobriety this set is the closure of a unique point, completing the proof that X is sober. □
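A toy case that can be computed directly (the names below are mine): on a finite poset with its Alexandrov topology every ultrafilter is principal, and the limit set of the principal ultrafilter at x is the closure of {x}, so every finite T₀ space is supersober with x as the largest limit.

```python
from itertools import combinations

X = {0, 1, 2}
below = {0: {0}, 1: {1}, 2: {0, 1, 2}}     # 2 sits above both 0 and 1

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def closure(S):
    # closure in the Alexandrov topology = downward closure
    return frozenset(y for y in X if any(y in below[s] for s in S))

def limits(x):
    # limit set of the principal ultrafilter at x:
    # the intersection of the closures of all sets containing x
    out = frozenset(X)
    for S in powerset(X):
        if x in S:
            out &= closure(S)
    return out

# the limit set is the closure of the point, with x as its largest element
assert all(limits(x) == closure(frozenset({x})) for x in X)
assert all(y in below[x] for x in X for y in limits(x))
```

The non-principal ultrafilters that make supersobriety a genuine condition only appear on infinite spaces, so this is an illustration of the definitions rather than of the theorem.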

If X is a Hausdorff space then the specialisation order is simply equality. This implies that such spaces are trivially supersober and thus sober. In the T₀ setting these concepts become more interesting. A particularly important class of supersober spaces is given by the locally compact ones, which can be characterised in a number of different ways:

Theorem 3.2 For a locally compact sober space X the following conditions are equivalent:

(i) X is supersober;

(ii) the intersection of any two compact saturated sets is again compact;

(iii) the order of approximation on Ω(X) is multiplicative;

(iv) given two Scott-open filters F, G ⊆ Ω(X), the filter generated by F ∪ G is again Scott-open.

Proof. We begin by showing the last three conditions equivalent.

"(2) => (3)": Let U, V, W be open sets such that U < V, W. Since X is locally compact we find compact saturated sets K and L satisfying U C K C V and U C L C W. We get U C K fl L C V fl W, and as K fl L is compact by assumption we see U <C V fl W.

"(3) =>■ (4)": Suppose that F and G are Scott-open filters in the lattice O(X). First, we observe that the filter generated by F U G is just {U n V I u G F, V G G} =: H. This set is clearly filtered and generates the same filter. Since O(X) is distributive we see that W DUD V implies W = W U(Ufl V) = {W U U) fl (W U V) which shows that the set is already a filter.

We now verify that H is Scott-open. For a locally compact space Ω(X) is continuous, and so for U ∈ F and V ∈ G we can find U′ ∈ F and V′ ∈ G such that U′ ≪ U and V′ ≪ V. We infer U′ ∩ V′ ≪ U, V and thus U′ ∩ V′ ≪ U ∩ V by assumption. This shows that H is a union of Scott-open sets of the form ↟(U′ ∩ V′), hence Scott-open.

"(4) (2)": Let K and L be compact saturated sets. Then the neighbourhood filters №{K)-.={U G Q(X) | K C U) and №{L) are Scott-open. By Corollary 2,13 the set KDL is simply the intersection over the filter generated by J\f°{K) U J\f°{L). By assumption this filter is Scott-open and so because of the Hofmann-Mislove Theorem K fl L is compact,

"(1) =>■ (2)": Let K and L be compact saturated. We have to show that Kf]L is compact. To this end let U be an ultrafilter on Kf]L. Considered as a filter basis it induces ultrafilters Uk, Ul and Ux on K, L and X, respectively. As K and L are compact Uk and Ul have limits xk G K and xl G L. They are also limits of Ux in X. By supersobrietv of X the set L(Ux) has a largest element x, so xK,xL C x with respect to the specialisation order. Now, K and L are saturated which implies .r G K P I.. Moreover, since Ux is the filter in X generated by U, the point x is also a limit of the original ultrafilter U. This shows that every ultrafilter on K fl L converges,

"(2) (1)": Let U be an ultrafilter with L(U) 0. Then the set L(U) = f){F | F G U} is closed, and we will show that it is irreducible. Assume L(W) = AiUA2 where both are proper subsets. Then there are elements Oj G L(U) \ Ai, for i G {1,2}, and as X is locally compact we find compact saturated neighbourhoods Ki satisfying o« G int(Kj) and KiCiAi = 0, The set Kif]K2 is compact saturated by assumption and we get (Kif]K2)f](Ai\jA2) = (Ki fl K2) fl L(U) = 0. The sets Kx and K2 are neighbourhoods of the limits Oi and o2 and we infer Ki,K2 G U and thus /\'i P K-> G U. But this means

that the restriction

UKi K., := {F n Ki n K2 | F E U} = {F E U \ F C Kl fl K2}

is an ultrafilter on the compact set Ki fl K2 and so converges to a point x of that set. Thus U also converges to x E Ki fl K2 which is disjoint from L(W), a contradiction. This implies that L(U) is an irreducible closed set, and by sobriety of X it must be the closure of a point, □

Definition 3.3 A space is called stably compact if it is sober, compact, locally compact and satisfies the equivalent conditions of the proposition.

As supersobriety implies sobriety, a space is stably compact if it is locally compact and every ultrafilter has a largest limit; the latter requirement comprises supersobriety and compactness. In domain theory compactness is quite a mild condition as, for example, every domain with ⊥ is trivially compact. On the localic side we can express compactness as 1 ≪ 1.

In a stably compact space we can take finite intersections of compact saturated sets. But in any sober space filtered intersections of compact saturated sets are again compact saturated by the Hofmann-Mislove Theorem, or more precisely Corollary 2.14. Hence, for a stably compact space X we can infer from Proposition 2.19 that (𝒦(X), ⊇) is a continuous lattice where suprema are given by intersection and finite infima by union.

As a consequence the closed sets of the co-compact topology for stably compact spaces are precisely the compact saturated sets for the original topology. We will improve on this result in the next section.

Like sober spaces, stably compact spaces are well-behaved under products and coproducts:

Proposition 3.4 Stably compact spaces are closed under arbitrary products and finite coproducts.

Proof. Sober spaces are closed under these constructions as we have seen in Proposition 2.6. Compactness and local compactness follow from Tychonoff's Theorem in the case of products, and for finite coproducts they are trivial.

For the remaining condition take an ultrafilter 𝒰 on ∏ᵢ Xᵢ, a product of stably compact spaces. The projection of 𝒰 onto an individual compact supersober space Xᵢ is also an ultrafilter and hence has a largest limit xᵢ. Since the product topology is the topology of point-wise convergence we see that 𝒰 converges to (xᵢ)ᵢ. Moreover, the projections are continuous and thus monotone, which implies that (xᵢ)ᵢ is the largest limit of 𝒰. This shows that ∏ᵢ Xᵢ is supersober.

Given an ultrafilter 𝒰 on X + Y, where both spaces are compact supersober, we must have either X ∈ 𝒰 or Y ∈ 𝒰. This implies that all limits of 𝒰 must lie in one of the two spaces. As this space is supersober, 𝒰 must have a largest limit. □

The only reason why we cannot take arbitrary coproducts of stably compact spaces is that infinite coproducts of non-empty spaces are never compact. The above proof is easily adapted to show all the other properties.

3.2 Compact Pospaces

A compact pospace is just an ordered space with a compact topology. They were introduced in [48], and as we will see there is a very close connection to stably compact spaces. Unfortunately, the necessary details are hard to find in one place in the literature: the Compendium [19] discusses the link to Stone duality and also characterises them as compact supersober spaces (see also [42]). More recently they have been suggested as the adequate T₀ substitute for Hausdorff spaces [43,39]. This section is an exposition of the relevant facts taken from these different sources.

As for pospaces, the definition of compact ordered space is self-dual, i.e. if (X, ⊑) is such a space then so is (X, ⊒). Often this means that we only have to prove one half of the propositions. When this is as obviously the case as, for example, in the following lemma, we do not usually mention it in the proof.

We begin with a series of observations that will allow us to make the link to stably compact spaces explicit. Some of them are also of independent interest as they illuminate the structure of compact pospaces.

Lemma 3.5 If K is a compact subset of a compact pospace X then ↑K and ↓K are compact as well.

Proof. Suppose we are given a net (xᵢ)ᵢ in ↑K. We can assume that it converges to a point x ∈ X; otherwise we replace it by a converging subnet, which exists by compactness of X. We find a net (yᵢ)ᵢ in K such that for all i we have yᵢ ⊑ xᵢ. The latter net has a converging subnet (y_{i_j})_j → y ∈ K as K is compact. We still have (x_{i_j})_j → x, and since X is an ordered space we infer y ⊑ x. This shows that x lies in ↑K and thus that ↑K is compact. □

Lemma 3.6 Suppose A = ↓A and B = ↑B are closed subsets of a compact pospace X such that A ∩ B = ∅. Then there are open sets U = ↓U and V = ↑V that do not intersect and satisfy A ⊆ U and B ⊆ V.

Proof. A compact Hausdorff space is normal. So there are disjoint open sets U and V such that A ⊆ U and B ⊆ V. Hence we have compact sets X \ U and X \ V, and because of the previous lemma the sets ↑(X \ U) and ↓(X \ V) are also compact. We set U′ := X \ ↑(X \ U) and V′ := X \ ↓(X \ V). It is now straightforward to verify that U′ and V′ have the desired property. □

The following lemma is a fact from general topology.

Lemma 3.7 If σ ⊆ τ are comparable topologies, where σ is Hausdorff and τ is compact, then the topologies coincide.

Proof. Note that a topology that is finer than a Hausdorff topology is also Hausdorff, and that one coarser than a compact topology is also compact as there are fewer open covers. Hence, both σ and τ are compact Hausdorff topologies. Now take U ∈ τ. Its complement X \ U is closed and hence compact for τ. Thus it is also compact for σ, which implies that it is closed, and we see U ∈ σ. We conclude σ = τ. □

Remark. As a consequence, a compact Hausdorff topology is a maximal compact and a minimal Hausdorff topology. The converse is false: neither is every minimal Hausdorff topology compact, nor is every maximal compact topology Hausdorff [62].

We can now show that there are 'enough' upper open and lower open sets.

Proposition 3.8 The lower open and upper open sets of a compact pospace form a subbasis of the topology.

Proof. The topology generated by these sets is clearly coarser than the original topology. Given x ⋢ y the sets ↑x and ↓y are disjoint and compact by Lemma 3.5. Because of Lemma 3.6 they can be separated by upper, respectively lower, open sets. This shows that the generated topology is Hausdorff, and by Lemma 3.7 it must be the original topology. □

The next two results provide some extra information about the patch topology for supersober spaces. We need them as the final prerequisites for the main proposition.

Proposition 3.9 Let X be a supersober space and suppose K ⊆ X is a compact saturated subset. Then K is also compact with respect to the patch topology.

Proof. Take an ultrafilter 𝒰 on K and extend it to an ultrafilter 𝒰′ on X. As it has a limit in K, its largest limit x lies in the saturated set K. We claim that 𝒰′ also converges to x with respect to the patch topology. For this it suffices to check that all co-compact neighbourhoods x ∈ X \ L, where L is compact, are in 𝒰′. If such a neighbourhood is not in 𝒰′ then the ultrafilter has an Ω(X)-limit in the corresponding compact saturated set L. This limit must be smaller than or equal to x and hence x ∈ L, a contradiction. □

Lemma 3.10 Let X be a supersober space. Then the patch-open upper sets are precisely the original open sets.

Proof. Closed sets are exactly those sets with the property that every converging ultrafilter on them converges only to points of the set. For a supersober space these are the lower sets A such that the largest limit of any converging ultrafilter on A lies in A.

Let us say that a set A has property (†) if the largest limit of any converging ultrafilter on A is an element of A. In addition to closed sets, the compact saturated sets also satisfy (†). We claim that the sets with (†) are closed under arbitrary intersections and finite unions. For intersections this is clear. Suppose we are given a converging ultrafilter 𝒰 on A ∪ B, where both sets have (†). Either A or B must be an element of 𝒰 and, consequently, the largest limit of 𝒰 lies in that set.

So, the sets satisfying (†) form the closed sets of a topology which is clearly finer than the patch topology, and the closed sets of the original topology are the lower sets with (†). Hence the patch-closed lower sets must be closed with respect to the original topology.

By taking complements we get that the original open sets are precisely the upper patch-open sets. □

We now come to the central result of this section. Essentially it can be understood as yet another equivalent description of stably compact spaces.

Theorem 3.11 For a stably compact space X the specialisation order makes the patch space Xπ into a compact ordered space. Conversely, for a compact ordered space (X, ⊑) the open upper sets U = ↑U ∈ Ω(X) form the topology of a stably compact space X↑, and the two operations are mutually inverse.

Moreover, for a stably compact space X the upper closed sets of Xπ are precisely the compact saturated sets of X.

Proof. The space Xπ with the specialisation order is a pospace by Proposition 1.4. That it is also compact follows from Proposition 3.9.

Let us conversely assume that (X, ⊑) is a compact pospace. To see that X↑ is locally compact take x ∈ U where U = ↑U is open. Then ↑x and X \ U are disjoint closed upper, respectively lower, sets which by Lemma 3.6 we can separate by disjoint open sets V = ↑V and W = ↓W. We get x ∈ V ⊆ X \ W ⊆ U, and X \ W is a compact neighbourhood of x as required.

We next show compactness and supersobriety in one go. Every ultrafilter 𝒰 on X converges to a unique point x. We claim that x is also the largest limit of 𝒰 with respect to the coarser topology of X↑. The topology being coarser, it is clearly a limit. For y ⋢ x we find disjoint open sets U = ↑U and V = ↓V such that y ∈ U and x ∈ V. But as 𝒰 converges to x we get V ∈ 𝒰 and hence U ∉ 𝒰. So, 𝒰 cannot converge to y in X↑. This shows that every ultrafilter on X↑ has a largest limit, completing the proof that X↑ is stably compact.

It remains to show that the two translations are mutually inverse. In Lemma 3.10 we have already seen that (Xπ)↑ = X.

For the other composition take a compact pospace (X, ⊑). We first look at the order: as in X↑ all open sets are upper sets, the topology is coarser than the Alexandrov topology. The sets ↓x are compact by Lemma 3.5 and hence closed. This shows that the topology of X↑ is finer than the lower topology and thus has ⊑ as its specialisation order. In the light of Proposition 3.8 the topology of (X↑)π is at least as fine as the original topology. But as we already know that (X↑)π carries a compact Hausdorff topology, it must be the original topology because of Lemma 3.7.

The final assertion of the theorem is an immediate consequence of Proposition 3.9 and the fact that a compact set is also compact for any coarser topology. □

The last observation of the theorem can be rephrased as follows: under the above isomorphism, reversing the order of a compact ordered space corresponds to taking the co-compact topology on a stably compact space. This makes precise the slogan that in stably compact spaces open and compact saturated sets play dual roles.

Corollary 3.12 If X is a stably compact space then the co-compact topology also defines a stably compact space Xκ. Taking the co-compact topology of a space is an involution and, moreover, we have Ω(Xκ) ≅ 𝒦(X) and 𝒦(Xκ) ≅ Ω(X). Both of these isomorphisms are given by complementation.

Remark. This can also be seen in the light of Lawson duality for complete semilattices [41]. One defines the dual of a semilattice to be the set of semilattice morphisms, i.e. Scott-continuous functions that preserve ∧, to 2, ordered point-wise. These are essentially the Scott-open filters and hence, for a locale, the compact saturated sets of the corresponding space. Not all semilattices have duality, i.e. are isomorphic to their double dual. But as we have seen, the topologies of stably compact spaces do. Going to the Lawson dual on the localic side corresponds to taking the co-compact topology for the space.
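On a finite example (where the Alexandrov topology of a poset is stably compact) the corollary can be checked directly; the names below are illustrative, not from the thesis:

```python
from itertools import combinations

X = {0, 1, 2}
leq = {(0, 0), (1, 1), (2, 2), (0, 2), (1, 2)}   # 0 and 1 below a top 2

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# opens = upper sets; the compact saturated sets coincide with them here
upper = [S for S in powerset(X)
         if all((a, b) not in leq or b in S for a in S for b in X)]
lower = [S for S in powerset(X)
         if all((b, a) not in leq or b in S for a in S for b in X)]

# the co-compact opens are the complements of compact saturated sets ...
assert {frozenset(X) - K for K in upper} == set(lower)
# ... and complementing once more gives the original topology back
assert {frozenset(X) - U for U in lower} == set(upper)
```

Complementation exchanges the two topologies, which is the finite shadow of the involution Ω(Xκ) ≅ 𝒦(X).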

Next, we are going to generalise the correspondence between stably compact spaces and compact ordered spaces to the non-compact case. As we will see most of the work has already been done in the above theorem. Some of the ideas needed for the locally compact case are taken from an unpublished note by Klaus Keimel and Regina Tix [37] where the authors tackle the general situation directly.

It is very easy to compactify a T₀ space X; one just adds a bottom element ⊥ and one open set X ∪ {⊥}. This construction is known as lifting and we call the resulting space X⊥. It is clear that if X satisfies the equivalent conditions of Theorem 3.2 then so does X⊥, which then implies that X⊥ is stably compact. Conversely, if a space X has a least element ⊥ then we can strip it off by removing the element from the space and the open set X from the topology, as it is the only open set that contains ⊥. We call the resulting space X \ {⊥}. It is again obvious that for a stably compact X with least element the resulting space X \ {⊥} still satisfies the conditions of Theorem 3.2. Moreover, these operations are mutually inverse.
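The lifting construction and its inverse are easy to write down executably; the finite example and names below are mine:

```python
BOT = "bot"          # the fresh bottom element

def lift(X, opens):
    # lifting adds a bottom element and one new open set, the whole space
    Xl = frozenset(X | {BOT})
    return Xl, opens | {Xl}

def strip(X, opens):
    # stripping removes the bottom element and the only open set containing it
    return X - {BOT}, {U for U in opens if BOT not in U}

X = frozenset({0, 1})
opens = {frozenset(), frozenset({0}), frozenset({0, 1})}   # a T0 topology

# the two operations are mutually inverse
assert strip(*lift(X, opens)) == (X, opens)
```

The code makes visible why the operations invert each other: the new open set is the unique one containing ⊥, so stripping removes exactly what lifting added.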

In the Hausdorff case we have to be slightly more subtle. We disregard the order at first since it is not an intrinsic part of the topology. Given a locally compact Hausdorff space X we get the one point compactification X* by adding a new point * and defining the topology to be

Ω(X) ∪ {X* \ K | K ⊆ X compact}.

It is straightforward to verify that this does indeed define a topology and that it is compact. Given a compact Hausdorff space X and an element x ∈ X we can delete this point by endowing the set X \ {x} with the topology {U ∈ Ω(X) | x ∉ U} = {U \ {x} | U ∈ Ω(X)}. This process yields a locally compact space, and if we keep track of the distinguished point in the space the two operations are mutually inverse.

If X is a compact ordered space with a least element ⊥ then X \ {⊥} is a locally compact ordered space. Unfortunately, we cannot simply add a bottom element to a locally compact ordered space to get a compact ordered space because in general the resulting space need not be a pospace.

As an example consider the one point compactification ℝ∞ of the real numbers, where the new point ∞ is incomparable to all others. We have 1 ⋢ ∞, but every neighbourhood of ∞ contains arbitrarily large real numbers, and hence this space cannot be an ordered space.

The following lemma characterises the pospaces for which we can add a least element in this fashion.

Lemma 3.13 For a locally compact ordered space X the one point compactification X⊥, where ⊥ is added as a new smallest element, is an ordered space if and only if for every compact set K ⊆ X the upper set ↑K it generates is again compact.

Proof. The latter condition is necessary because it holds in every compact ordered space (Lemma 3.5).

Suppose, for the converse implication that for every compact K the set fK is compact. We want to show that Qx is closed, and to this end we take two elements x % y. We have to find neighbourhoods U and V for these points such that for all x' E U and all y' E V we have x' % y'. If both x and y come from X then this is no problem as X is an ordered space. The only other case is x E X and y = ±. We take a compact neighbourhood K of x in X and by assumption fK is also a compact neighbourhood. Then fK and X± \ fK are neighbourhoods of x and y = -L, respectively, that have the required property. □

We call the spaces that satisfy the equivalent conditions of the lemma properly ordered. There is obviously an order-dual version of this lemma characterising the spaces for which we can add the new point as a top element. Since we focus on the link to stably compact spaces this is, however, not as interesting for us.

With the machinery we have just set up we get the general correspondence between locally compact pospaces and stably compact spaces sans compactness as a corollary of Theorem 3.11.

Corollary 3.14 If X is a locally compact supersober space, then the specialisation order makes Xπ into a locally compact properly ordered space. Conversely, for a locally compact properly ordered space (X, ⊑) the open upper sets form the topology of a locally compact supersober space X↑. Furthermore, these two operations are mutually inverse.

Proof. Given such a T₀ space X, we can form (X⊥)π \ {⊥}, which is a locally compact pospace by Theorem 3.11 and the considerations of the previous paragraphs. As (X⊥)π is a compact pospace it is also properly ordered, and it is easy to check that we have (X⊥)π \ {⊥} = Xπ.

Conversely, suppose X is a properly ordered locally compact space. Then we get the space (X⊥)↑ \ {⊥} = X↑. The operations are mutually inverse because their constituent parts are. □

3.3 Stably Compact Domains

A stably compact domain is just a continuous domain whose Scott topology makes it a stably compact space. These domains can be characterised via their Lawson topologies.

Proposition 3.15 A Scott-compact domain is stably compact if and only if it is Lawson-compact.

Proof. If X is a stably compact domain then its patch topology makes it a compact pospace by Theorem 3.11. As the patch and the Lawson topology agree by Proposition 1.6, X is Lawson-compact.

For the converse, let us assume that X is Lawson-compact. By Propositions 1.4 and 1.6 the Lawson topology with the original order forms a pospace, and because of Theorem 3.11 the upper Lawson-open sets form a stably compact topology. We claim that this is precisely the Scott topology. The proof of this claim is similar to that of Lemma 3.10. It suffices to show that Lawson-closed sets are closed under directed suprema. This is clear for Scott-closed sets and the other subbasic closed sets ↑x that generate the closed sets of the Lawson topology. The collection of all sets closed under directed suprema is clearly closed under taking arbitrary intersections. It is also closed under finite unions:

Let A and B be two sets closed under directed suprema and D ⊆ A ∪ B a directed subset of their union. If for every x ∈ D there is an x′ ∈ D ∩ A such that x ⊑ x′, then A ∩ D is directed and satisfies ⨆D = ⨆(A ∩ D) ∈ A. Otherwise there is an x ∈ D such that ↑x ∩ A ∩ D = ∅, and in this case we have ↑x ∩ D ⊆ B and thus ⨆D = ⨆(B ∩ D) ∈ B.

This implies that all Lawson-closed sets are closed under directed suprema, which concludes the proof. □

The next proposition shows that all the classes of special domains discussed in Section 1.2 are stably compact. All the categories considered there are subcategories of FS. Hence, we begin by studying FS domains in some more detail. Most of this material is taken from [34].

Lemma 3.16 Let f: X → Y be a continuous function between sober spaces. Then the preimage function f⁻¹[·]: Ω(Y) → Ω(X) and the induced function

f_K : (K(X), ⊇) → (K(Y), ⊇),  K ↦ ↑f[K]

are Scott-continuous.

Proof. The preimage function is a frame morphism regardless of X and Y, so it is in particular Scott-continuous. For f_K this is an immediate consequence of Proposition 2.16 since on the localic side this function is given by taking preimages of Scott-open filters. □

We know from Lemma 1.9 that for a finitely separated endo-function f we have f(x) ≪ x. The next lemma shows that for such an f the induced functions of Lemma 3.16 provide approximants for open and compact saturated sets as well.

Lemma 3.17 Let f: X → X be a continuous function on a dcpo X that is finitely separated from the identity. For each open set O there is a compact saturated set K such that f⁻¹[O] ⊆ K ⊆ O, and for each compact saturated set K there is an open set O such that K ⊆ O ⊆ f_K(K).

Proof. If M is a finite separating set for f and O ⊆ X open, then we can infer f⁻¹[O] ⊆ ↑(M ∩ O) ⊆ O. The set M ∩ O is finite, which implies that ↑(M ∩ O) is compact, thus showing the first assertion.

Now suppose that K is compact. We already know that f(x) ≪ x holds for all x ∈ X and so we infer K ⊆ ⇑f[K] ⊆ ↑f[K] = f_K(K). □

The next lemma essentially says that for an FS domain there are enough such approximants for open and compact saturated sets.

Lemma 3.18 Let X be any domain and {f_i | i ∈ I} a directed set of continuous endo-functions on X such that ⊔_i f_i = id_X. Then we have O = ⋃_i f_i⁻¹[O] and K = ⋂_i (f_i)_K(K) for every open set O and every compact saturated set K.

Proof. The claim for open sets follows immediately from Lemma 2.10.

For compact saturated sets we use the localic form of the functions (f_i)_K, which simply take preimages of Scott-open filters F ⊆ Ω(X) under the corresponding frame homomorphisms (see Proposition 2.16). Hence, we have to calculate ⋃_i (f_i⁻¹[·])⁻¹[F]. We apply Lemma 2.10 to the continuous lattice Ω(X), also endowed with the Scott topology, and the original space X and get

⋃_i (f_i⁻¹[·])⁻¹[F] = (⊔_i f_i⁻¹[·])⁻¹[F] = ((⊔_i f_i)⁻¹[·])⁻¹[F] = (id_X⁻¹[·])⁻¹[F] = F. □

Now we can prove the main result of this section.

Proposition 3.19 Every FS domain is stably compact.

Proof. Let (f_i)_{i∈I} be a family of finitely separated functions on the domain X whose supremum is the identity, and let (M_i)_{i∈I} be corresponding finite separating sets. For any i we have ↑M_i = X, which shows that X is compact.

Suppose K and K′ are compact saturated sets, i.e. each is the intersection of the open sets that contain it. Thus, we can write K ∩ K′ as a filtered intersection of open sets O ∩ O′ where K ⊆ O and K′ ⊆ O′. By the previous lemma we have K = ⋂_i (f_i)_K(K) ⊆ O, and analogously for K′, and from the Hofmann-Mislove Theorem in the form of Corollary 2.14 we can infer that there is an i such that (f_i)_K(K) ⊆ O and (f_i)_K(K′) ⊆ O′. This implies K ⊆ f_i⁻¹[O], K′ ⊆ f_i⁻¹[O′] and thus

K ∩ K′ ⊆ f_i⁻¹[O] ∩ f_i⁻¹[O′] = f_i⁻¹[O ∩ O′] ≪ O ∩ O′,

where the final 'way below' follows from Lemma 3.17 and Lemma 2.17. This shows that the filter that these sets O ∩ O′ generate in Ω(X) is Scott-open. As K ∩ K′ is the intersection of this filter, the set K ∩ K′ is also compact saturated. This completes the proof that X is stably compact. □

3.3.1 Complete Distributivity

As we know, the completely distributive lattices are precisely the topologies of continuous domains (Proposition 2.23). Throughout our investigation of stably compact spaces we have stressed the point that for them compact saturated and open sets play equally important roles. So one might hope that for a stably compact domain X the lattice (K(X), ⊇) is also completely distributive. Unfortunately, this is generally not the case.

Theorem 3.20 If X is a stably compact domain then K(X) is completely distributive if and only if X is bi-continuous and the co-compact topology agrees with the Scott topology on the order dual of X.

Proof. The lattice K(X) is isomorphic to Ω(X_κ), which by Corollary 3.12 is the topology of a stably compact space. This sober space has the same points as X, and its order of specialisation is the dual of the original order because of Proposition 1.2.

The lattice K(X) ≅ Ω(X_κ) is completely distributive if and only if X_κ is a domain with the Scott topology (Proposition 2.23). □

To give an explicit example that shows that continuous domains, in general, are far from being bi-continuous we make the link to complete distributivity explicit: The completely distributive lattices are exactly the distributive bi-continuous lattices. This result appears in several places in the Compendium [19]. Since this is not too difficult to show we prove it here.

In a complete lattice L we define x ⋘ y if whenever y ≤ ⋁M holds there is an element m ∈ M such that x ≤ m. With this auxiliary relation we get Raney's characterisation of complete distributivity [52].

Proposition 3.21 A complete lattice is completely distributive if and only if for every x we have x = ⋁{y | y ⋘ x}.

Proof. See [5, Theorems 7.1.3 and 7.1.1]. □
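For finite lattices Raney's criterion can be checked by brute force. The following Python sketch (purely illustrative and not part of the thesis; the lattice encodings and function names are my own) confirms the criterion on the four-element powerset lattice, which is completely distributive, and refutes it on the non-distributive diamond M3:

```python
from itertools import chain, combinations

def subsets(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def join(elems, leq, M):
    """Least upper bound of M in the finite lattice (elems, leq)."""
    ubs = [u for u in elems if all(leq(m, u) for m in M)]
    return next(u for u in ubs if all(leq(u, v) for v in ubs))

def wavy_below(elems, leq, y, x):
    """y <<< x: whenever x <= \\/M there is some m in M with y <= m."""
    return all(any(leq(y, m) for m in M)
               for M in subsets(elems) if leq(x, join(elems, leq, M)))

def raney(elems, leq):
    """Proposition 3.21: x = \\/{y | y <<< x} for every x."""
    return all(x == join(elems, leq,
                         [y for y in elems if wavy_below(elems, leq, y, x)])
               for x in elems)

# The powerset of {1, 2}, ordered by inclusion: completely distributive.
power = [frozenset(s) for s in subsets([1, 2])]
assert raney(power, frozenset.issubset)

# The diamond M3 (bottom, three incomparable atoms, top): not distributive,
# and indeed the top element is not the join of the elements wavy-below it.
m3 = ['0', 'a', 'b', 'c', '1']
m3_leq = lambda x, y: x == y or x == '0' or y == '1'
assert not raney(m3, m3_leq)
```

Note that the quantification over all subsets M makes this exponential in the size of the lattice, so it is only a sanity check for very small examples.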

The relation ⋘ is closely related to the ordinary order of approximation. The following connection is an immediate consequence of the definitions.

Lemma 3.22 If p is a ∨-prime element of a complete lattice and p ≪ x then we have p ⋘ x.

To get enough primes in our situation we use the following proposition that goes essentially back to Birkhoff [6]. It appears as Theorem 7.1.7 in [5].

Proposition 3.23 In a continuous lattice every element is an infimum of ∧-irreducible elements.

Proposition 3.24 A complete lattice is completely distributive if and only if it is distributive and bi-continuous.

Proof. One implication follows directly from Proposition 3.21 and the fact that complete distributivity is a self-dual concept.

For the other direction let L be distributive and bi-continuous. By the previous proposition, applied to the order dual of L, there are enough ∨-irreducible elements, and they are ∨-primes as L is distributive. Every element x ∈ L is the directed supremum of elements y ≪ x, and these y's are suprema of the ∨-primes below them. This implies that x is the supremum of such primes p satisfying p ≤ y ≪ x, and by Lemma 3.22 this means p ⋘ x. It follows from Proposition 3.21 that L is completely distributive. □

Now let X be the topology of a locally compact sober space that is not a domain endowed with the Scott topology, for example any non-discrete locally compact Hausdorff space. Then X is not completely distributive and hence not bi-continuous by the previous proposition. If we consider the Scott topology on X we get a stably compact space such that K(X) is not completely distributive because of Theorem 3.20.

3.4 Hierarchy of Stone Dualities II

We now complete the hierarchy of Stone dualities that we started in Section 2.3. For a survey of the categories involved see Figure 2, which might also be helpful as a map to the rest of this section.

We call a locale arithmetic if it is continuous, compact, i.e. 1 ≪ 1, and the order of approximation is multiplicative. As we have seen in Theorem 3.2, these locales correspond to stably compact spaces.

Fig. 2. Hierarchy of categories of topological spaces II

Proposition 3.25 The Stone duality for locally compact spaces as given in Proposition 2.18 restricts and co-restricts to stably compact spaces and arithmetic locales.

Combining this with algebraicity of the locale yields stably compact spaces which have a basis of compact-opens because of Proposition 2.21. The importance of this concept is that we now have a basis which is closed under finite intersections and finite unions. Hence, the elements of KΩ(X) can be used as tokens for a finitary description of these spaces; the space itself, or its corresponding locale to be more precise, arises as the ideal completion of this lattice. We have thus in a way changed our localic description from an infinitary to a finitary one. The analogue for the non-algebraic case is the main theme of this thesis.

Definition 3.26 A spectral space is a stably compact space which has a basis of compact-open sets.

From our discussion it is clear that these spaces correspond to algebraic arithmetic locales. But we can go one step further by just looking at the distributive lattice KΩ(X). In the corresponding category the morphisms are not functions but certain approximable relations (see [5, Def. 7.2.24]) which code frame morphisms, much in the spirit of approximable relations between bases of domains (see [5, Def. 2.2.27]). We do not go into the details here, but we will come back to these relations in Section 1.2 where we present them in a slightly different form.

Proposition 3.27 The equivalence of Proposition 2.18 (co-)restricts to spectral spaces and arithmetic algebraic locales.

These categories are also equivalent to the category of distributive lattices with least element and approximable relations.

The space corresponding to such a distributive lattice is known as its spectrum. The functor from the category of distributive lattices to the category of spectral spaces giving rise to the equivalence of the above theorem is called spec.

Remark. At this point we are very close to Stone's original representation theorem for Boolean algebras [63]. The distributive lattices of the previous theorem turn out to be Boolean algebras if and only if the corresponding spaces are compact and Hausdorff. In this case the morphisms corresponding to continuous maps are Boolean algebra homomorphisms and not just approximable relations. Thus we get the classical duality between the category of Boolean algebras and the category of compact, totally disconnected Hausdorff spaces, the so-called Stone spaces.

We now consider spaces that are algebraic domains with their Scott topology. In Section 2.3 we have seen that algebraic domains correspond to completely distributive algebraic lattices. As observed before, in an algebraic domain every compact-open set is a finite union of principal filters ↑x for compact x. These principal filters in turn are exactly the ∨-prime elements in the lattice of compact-open subsets. The condition of being generated by prime elements is stronger than complete distributivity, and thus characterises the locales that come from algebraic domains.

Proposition 3.28 The equivalence of Proposition 2.18 (co-)restricts to stably compact algebraic domains with their Scott topology and arithmetic algebraic completely distributive lattices.

The corresponding category of approximable relations is the one of distributive lattices in which every element is a finite supremum of ∨-prime elements.

We now want to combine this with stable compactness. The category of stably compact algebraic domains is not usually studied in domain theory because it fails to be cartesian closed. Such domains might be called 2/3-bifinite domains in the light of Plotkin's "2/3-SFP Theorem" [51]. It says that a stably compact algebraic domain is MUB-complete, and that for every finite set of compact elements the set of minimal upper bounds is again finite (compare Proposition 1.7). The only extra condition required for bifinite domains is that finite sets of compact elements must have finite MUB-closures. In general, this is not the case for stably compact domains. To complete our localic description of bifinite domains we have to translate precisely this condition. This is the content of the following proposition, which is also the most restricted Stone duality we investigate in this section:

Proposition 3.29 A lattice L is isomorphic to the compact-opens of a bifinite domain if and only if every element is a finite supremum of ∨-primes and if for every finite subset M ⊆ L there is a finite set N ⊇ M such that

(∀A ⊆ N)(∃B ⊆ N) ⋀A = ⋁B.
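To make the condition concrete, here is a small Python check (illustrative only; the example and names are mine, not from the thesis) in the lattice of divisors of 12 ordered by divisibility, where meet is gcd and join is lcm. The set M = {4, 6} violates the condition, but the superset N = {1, 2, 4, 6} satisfies it:

```python
from functools import reduce
from itertools import chain, combinations
from math import gcd, lcm  # math.lcm requires Python 3.9+

def subsets(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def meet(A):
    return reduce(gcd, A, 12)   # empty meet = top of the lattice, here 12

def join(B):
    return reduce(lcm, B, 1)    # empty join = bottom, here 1

def mub_condition(N):
    """(forall A <= N)(exists B <= N)  /\\A = \\/B  among the divisors of 12."""
    return all(any(meet(A) == join(B) for B in subsets(N)) for A in subsets(N))

assert not mub_condition([4, 6])     # gcd(4, 6) = 2 is no lcm of a subset of {4, 6}
assert mub_condition([1, 2, 4, 6])   # enlarging M to N repairs this
```

Intuitively, enlarging M to N plays the role of taking a (finite) MUB-closure: the missing meets have to be expressible as joins of elements already present.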

This concludes our exposition of Stone duality. Abramsky's domain theory in logical form uses a localic description of domains which is very close to the one indicated in the previous proposition. Later on we will discuss a finitary description of stably compact spaces for our extension of the programme to continuous domains.

4 Abramsky's Domain Theory in Logical Form

This section gives a quick overview of Samson Abramsky's work [4]; we will not rely on any of the material discussed here later in the text. Its purpose is to give an exposition of the established domain theory in logical form, so we can compare it to the results of this thesis. This also allows us to discuss the differences in approach. The introduction given here only sketches the main ideas; for details see [4] or [5, Section 7.3].

Domain theory in the context of the title means performing the usual constructions of domain theory like products, sums, function spaces, solving recursive domain equations, and power domains. In the light of the previous section we now want to describe these constructions solely in terms of the lattices of compact-open sets of the domains involved.

4.1 Prelocales

As we think of the open sets of a domain as certain observable properties, the constructions are presented in the form of a logical system. Hence, we do not handle the elements of the lattices corresponding to the domains directly, but these lattices can be understood as the Lindenbaum algebras for the logic. This means that we have to formulate the preordered equivalent of the lattices described in Proposition 3.29.

Let us begin with some notes on preordered sets. If (P, ≤) is a preorder, then ≤ induces an equivalence relation ≈ which is the intersection of ≤ and ≥. The quotient P/≈ is a poset, and from a categorical viewpoint this quotient can also be seen as a skeleton of the preorder seen as a category. In a preorder we understand the usual order theoretic concepts as meaning that the corresponding property holds in the quotient. For example, an element z is a (rather than the) supremum of x and y if x, y ≤ z and for all z′ ≥ x, y we have z ≤ z′. Clearly, all suprema of x and y are equivalent.
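The passage from a preorder to its poset quotient is easy to compute for finite examples. A throwaway Python sketch (names and example mine), using divisibility on nonzero integers, where n and -n are equivalent:

```python
def poset_quotient(elems, leq):
    """Pick one representative per class of the equivalence x ~ y iff x <= y <= x."""
    reps = []
    for x in elems:
        if not any(leq(x, r) and leq(r, x) for r in reps):
            reps.append(x)
    return reps

# Divisibility is a preorder on the nonzero integers: it is reflexive and
# transitive, but n | -n and -n | n, so it is not antisymmetric.
divides = lambda a, b: b % a == 0

# 1 ~ -1 and 2 ~ -2 collapse, leaving three equivalence classes.
assert poset_quotient([1, -1, 2, -2, 3], divides) == [1, 2, 3]
```

On the representatives the relation leq is antisymmetric, so it is a genuine partial order, mirroring the skeleton construction described above.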

Definition 4.1 A domain prelocale (A; ≤; ∨, ∧, 0, 1; C) is a preordered algebra of type (2, 2, 0, 0) with a unary predicate C such that a ∨ b is a supremum for a and b, a ∧ b is an infimum, 0 is a least and 1 is a largest element, and C characterises the ∨-prime elements.

Furthermore, every element must be equivalent to a finite supremum of ∨-primes, C(1) must hold, and for all finite subsets M of C(A)—the set of all ∨-primes in A—there has to be a finite superset N ⊆ C(A) of M such that (∀O ⊆ N)(∃P ⊆ N) ⋀O ≈ ⋁P.

The predicate C characterises the elements of the prelocale that can be thought of as points: As we have observed before, the ∨-prime compact-open sets in an algebraic domain are precisely the upper sets ↑k for a compact element k. This predicate is only needed for the function space construction as it is notoriously hard to describe function spaces in purely logical terms.

In [4] there is an additional unary predicate to perform the coalesced sum construction. As it stands, we cannot tell whether a 'point' of the domain, i.e. a ∨-prime element, represents ⊥, but we will not go into the details of the different constructions anyway.

The above definition caters for two extremes: As we have seen in Proposition 3.29, the compact-opens of a bifinite domain give rise to a domain prelocale where ≤ is actually an order. The logical approach yields prelocales whose elements are terms made up from certain generators and the operations ∨, ∧, 0 and 1. These prelocales can then be seen as purely syntactic objects.

For a prelocale P, the relation ≈ is not just an equivalence but a congruence with respect to the operations of P. The quotient P/≈ is a prime generated and hence distributive lattice.

4.2 Prelocalic Description of Domains

Prelocales are used to represent concrete domains, and hence we are interested in describing when two prelocales represent the same domain.

Definition 4.2 A pre-isomorphism between domain prelocales A and B is a monotone and order-reflecting function φ: A → B such that every element in B is equivalent to an element in the image of φ.

If B is of the form KΩ(X) for a bifinite domain X then we call A a prelocalic description of X via φ. In this case the pre-isomorphism is usually denoted by semantic brackets ⟦·⟧.

Remark. The definition of pre-isomorphism is tantamount to the two prelocales A and B being equivalent as categories (see [45, IV.4, Theorem 1]). In the case of a prelocalic description the conditions boil down to surjectivity, monotonicity and order-reflection.

From the remark it is obvious that a pre-isomorphism φ: A → B preserves and reflects suprema, infima, least and largest elements, as well as primeness. Hence, it restricts to a map φ° between C(A) and C(B).

Given a prelocalic description ⟦·⟧: A → KΩ(X) of a domain X we get an isomorphism of domains r: spec(A) → X as indicated in the following diagram.

(diagram: the isomorphism r: spec(A) → X)

Formally, the functor spec can only be applied to distributive lattices and not to prelocales, but this can easily be remedied by either pre-composing with the factorisation by ≈ or by a straightforward extension of the definition of spec to prelocales.

The converse of this observation holds as well, which confirms our intuition about the meaning of prelocalic descriptions:

Proposition 4.3 For a domain prelocale A and a bifinite domain X there exists a pre-isomorphism ⟦·⟧: A → KΩ(X) if and only if spec(A) and X are isomorphic.

We now turn our attention to embeddings and sub-prelocales. The importance of embeddings is that they are needed for the bilimit construction of domains.

Definition 4.4 We say that a prelocale A is a sub-prelocale of another one B if it is a subalgebra with respect to the four prelocale operations and if the relations C and ≤ on A are the restrictions of their counterparts on B.

This definition captures our intended meaning:

Proposition 4.5 If A is a sub-prelocale of B then there is an embedding-projection pair between spec(A) and spec(B).

4.3 Domain Constructions

We now have the machinery to outline what, concretely, doing domain theory by prelocalic descriptions is about. We illustrate it by considering any binary domain constructor F: SFP × SFP → SFP.

Let us assume we have two domains X and X′ and prelocalic descriptions ⟦·⟧_A: A → KΩ(X) and ⟦·⟧_A′: A′ → KΩ(X′) for them. We want to construct a prelocale T(A, A′) from A and A′—not by looking at X, X′ or F(X, X′), of course—and a prelocalic description

⟦·⟧: T(A, A′) → KΩ(F(X, X′)).

Furthermore, this construction has to be natural with respect to sub-prelocales so that we can solve recursive domain equations which make use of this construction. That is to say, for sub-prelocales B and B′ of A and A′ describing sub-domains of X and X′ we demand that T(B, B′) is a sub-prelocale of T(A, A′) and that

spec(T(B,B′)) ────I────→ spec(T(A,A′))
      │                        │
      ≅                        ≅
      ↓                        ↓
   F(Y,Y′) ───F(e,e′)───→  F(X,X′)

commutes, where e, e′ and I are the embeddings corresponding to the sub-prelocale inclusions via Proposition 4.5.

There is a general technique that one can follow to define a type constructor on the prelocalic side. One advantage of having such a programme is that it identifies the points that follow from general principles and those steps that have to be devised and checked for each individual type constructor. To give an impression of how this is done we go through one rather straightforward example. For an explicit list of the individual steps of the programme see [5, Section 7.3.2] and [4, Section 3.4].

Example 4.6 [Product Prelocale] We assume we are given two domains X and X′ and their prelocalic descriptions via A and A′ as in the generic scenario above. The compact-opens of X × X′ are finite unions of products of compact-opens of X and X′. Hence, we use the pairs (a, a′) with a ∈ A and a′ ∈ A′ as generators of the term algebra which is going to be the underlying set of our domain prelocale A × A′.

Apart from the general rules for ≤, ∨, ∧, 0 and 1 which guarantee that our structure becomes the preordered equivalent of a distributive lattice we state the following rules:

 a ≤ b    a′ ≤ b′
──────────────────          0 ≈ (0, 0)
 (a, a′) ≤ (b, b′)

 C(a)    C(a′)
───────────────
   C(a, a′)

We define the interpretation ⟦·⟧ for generators as ⟦(a, a′)⟧ := ⟦a⟧ × ⟦a′⟧ and extend it to the term algebra as a (pre-)lattice homomorphism.

The rules are obviously sound, which implies that ⟦·⟧ is monotone and restricts and co-restricts to a map ⟦·⟧°: C(A × A′) → C(KΩ(X × X′)).

Because of distributivity we can prove (a ∧ b, a′ ∧ b′) ≈ (a, a′) ∧ (b, b′) and (a ∨ b, a′ ∨ b′) ≈ (a, a′) ∨ (b, a′) ∨ (a, b′) ∨ (b, b′), and thus by induction we get that every element in A × A′ is equivalent to a finite supremum of elements which satisfy the predicate C.
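Under the interpretation ⟦(a, a′)⟧ = ⟦a⟧ × ⟦a′⟧, the second equivalence mirrors the set-theoretic identity (U ∪ V) × (U′ ∪ V′) = U×U′ ∪ V×U′ ∪ U×V′ ∪ V×V′, which is easy to confirm mechanically (a throwaway Python check, not from the thesis):

```python
def prod(U, V):
    """Cartesian product of two finite sets, the semantics of a generator."""
    return {(u, v) for u in U for v in V}

U, V, U2, V2 = {1, 2}, {2, 3}, {'a'}, {'b', 'c'}

# (U u V) x (U2 u V2) splits into the four-fold union of products.
lhs = prod(U | V, U2 | V2)
rhs = prod(U, U2) | prod(V, U2) | prod(U, V2) | prod(V, V2)
assert lhs == rhs
```

This is exactly why a four-fold (rather than two-fold) disjunction is needed on the right-hand side of the equivalence.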

The map ⟦·⟧° is clearly surjective and order-reflecting. It now follows from general considerations that this entails that ⟦·⟧ is surjective and order-reflecting, as well. Hence, it is a pre-isomorphism, but as we already know that KΩ(X × X′) is a domain prelocale this provides a semantic proof that A × A′ is also a domain prelocale.

Now, suppose we are given sub-prelocales B and B′. It is clear that B × B′ is a subset of A × A′ and that everything that can be proved in the former can also be proved in the latter. Again, it follows in general that the interplay of sub-prelocales and the prelocalic descriptions ⟦·⟧ has only to be verified for primes, i.e. we have to check the commutativity of the diagram

  C(B × B′) ⊂──────────→ C(A × A′)
      │⟦·⟧°                  │⟦·⟧°
      ↓                      ↓
C(KΩ(Y × Y′)) ────ι────→ C(KΩ(X × X′))

where Y and Y′ are the sub-domains induced by B and B′ and ι is the map induced by the two sub-prelocale containments: If b and b′ represent ↑x and ↑x′, respectively, and we denote the embeddings by e: Y ↪ X and e′: Y′ ↪ X′, then both compositions map the pair (b, b′) to ↑e(x) × ↑e′(x′).

This shows that the above rules do indeed define the product on the prelocalic side: The syntactic construction yields a prelocale A × A′ and it corresponds to the product of the spaces corresponding to A and A′, respectively.

4.4 Logic

Once we have all the constructions for the prelocalic side we can interpret each type built up from these constructors as a domain prelocale. This is analogous to the usual procedure of interpreting a type by a domain.

We can then compare these two interpretations. Assuming that for each free variable the prelocales we choose are prelocalic descriptions of the corresponding domain, it is clear from our setup that the resulting prelocale is a prelocalic description of the domain interpreting that type.

The constructions also yield a logical system for 'properties', i.e. the elements of the domain prelocales. We have seen some of the rules in our example; for the complete system see [4, p. 49ff]. Our previous work immediately yields a semantics for this logic by mapping an element of a prelocale to the compact-open subset it represents in the corresponding domain. It turns out that the syntactic relation ≤ and subset containment in the semantics agree; in other words, the semantics is sound and complete. Another interesting result is that the language of properties is decidable.

On top of that we can add terms of an untyped lambda calculus, extended by term constructors corresponding to the type constructors. We get judgements of the form Γ ⊢ M : φ, where M is a lambda term, φ a property (an element in a prelocale of appropriate type) and Γ a list of assumptions x ↦ ψ_x on the free variables of M. The intended meaning is that, assuming the variables meet the requirements stated in Γ, the term M satisfies φ.

There is a straightforward logical system for these judgements, and as usual most of the terms can be seen as encodings of the derivation of the property φ in the previous system. From classical domain theory we know how to interpret such lambda terms in the domain interpreting the type of that term. Again this semantics is sound and complete with respect to the logical system for these judgements.

In [4, Section 4.4] there is also a corresponding exogenous logic in which the terms describe morphisms rather than elements of domains. This is an extension of the meta-language for cartesian closed categories (see [40]), but we will not go into this here as it is not relevant for our further considerations.

Chapter 2 Syntax

We now change our point of view to a purely syntactic one and study a Gentzen style sequent calculus. Its special feature is that the formulae on the left and right of the turnstile need not come from the same logical system. Such a sequent can be seen as a consequence between different domains of reasoning. The usual identity and cut rules do not make sense for sequents which connect different logical systems because they mix formulae from antecedent and succedent. This necessitates certain syntactic adjustments.

After an independent motivation for such a logical system we investigate the ingredients needed to set up this logic. In particular, we discuss an appropriate cut rule. It can be used as a basis for composition in a suitable category MLS of logical systems, and as we will see in the next chapter on semantics this category is equivalent to stably compact spaces with certain closed relations between them.

We also study cut elimination in this set-up. The upshot is—as might be expected—that we can push up cuts in a proof to cuts between atomic formulae. This can be used to define new objects and morphisms from known ones, which can be understood as performing domain constructions in logical form. As an example we construct products and coproducts in MLS. The procedure is a bit tedious, but shows exactly what is involved if one wants to do such constructions in a purely syntactic fashion.

1 Multi-Lingual Sequent Calculus

Our main objects of study are sequents in the tradition of Proof Theory [18]

φ_1, …, φ_n ⊢ ψ_1, …, ψ_m

and we read this as "if all φ_i hold then at least one ψ_j holds", as usual.

We want to allow the formulae φ_i to come from a different language than the ψ_j, i.e. we want to be able to consider a situation where inferences are to be drawn between different logical systems. There are many situations where such a separation is desirable or even necessary. We discuss two of them.

Consider ordinary propositional logic. Someone could say

"It is very cold in here, I need to put on a jumper."

thus drawing an inference from an observation about the temperature to a certain action. Note that there is nothing logical about this inference and, indeed, someone else might say

"It is very cold in here, I will turn on the heating."

The inference relation in this example is a subjective one and there can be many different such relations. Although it is common to combine arbitrary propositions in logic, we may wish to distinguish in a situation like this between propositions about the state of the environment and propositions about actions of a certain individual.

A second example is given by Hoare Logic. When we write a triple like

{x > 0} x:=-x {x < 0}

we certainly do not mean that x > 0 logically implies x < 0; rather, we read this as

"If x > 0 holds before the execution of x:=-x then x < 0 holds afterwards."

In this example every program fragment gives rise to a characteristic relationship between preconditions and postconditions. The logical formulae are (typically) all about the contents of program variables and there is no syntactic reason to keep pre- and postconditions separate, as in the previous example, but the separation becomes necessary because the formulae refer to the state at different times.

Having different languages for formulae on the left and the right of the turnstile ⊢ forces us to restrict ourselves to positive logic, i.e. conjunction and disjunction. Moreover, we leave out the identity axiom scheme, φ ⊢ φ. If the two domains of reasoning related by ⊢ are different it is impossible to formulate it.

But even if they are the same this is justified by the fact that observing a certain state of the world does not always imply that the corresponding proposition is actually true, the reason being that our instruments for observing the world are not precise enough.

Many features of our logical system are a direct consequence of these special properties. Leaving out the identity axiom, for example, makes it necessary to check carefully how to retain some of its essential consequences. The cut rule also has to be adjusted to this new situation. There are several related formulations of the appropriate cut rule, and we are going to study how they are related. Considering several versions of essentially the same rule may seem redundant at first, but the development of the theory shows that they all have their particular strengths. So we can always use the rule that is most suitable in any given situation.

Note that we allow classical sequents. At first glance there seems to be no point in this because there is no difference between intuitionistic positive logic and classical positive logic. However, this formulation emphasises the rather pretty self-symmetry of the whole set-up and mirrors the duality between open and compact subsets of the next chapter.

1.1 The Logic

We take a very liberal approach as to what structures the actual formulae of our sequents are drawn from. All we require is the presence of conjunction and disjunction, and the units ⊥ (falsity) and ⊤ (truth). Each system embodies a certain 'logic' in the sense that certain formulae imply others. We capture the internal logic by referring to arbitrary (2, 2, 0, 0)-algebras instead of syntactically defined sets of formulae.

Such an algebra could, for example, have been obtained as the Lindenbaum algebra by factorising the set of formulae of the system by logical equivalence, and in the next chapter we will study how to construct such algebras from any given stably compact space. At the other extreme, the syntactically defined set of formulae for a logical system can be regarded as such an algebra, provided the logic contains the connectives of positive logic. Proposition 2.5 and Theorem 1.16 show that we could have settled for either extreme, but as we will see they have different applications. Hence, we choose to work in this more flexible framework. Regardless of the particular algebra at hand we will refer to its elements as formulae.

The logical part of our system is given by the rules

    ──────────── (L⊥)              ──────────── (R⊤)
     ⊥, Γ ⊢ Δ                       Γ ⊢ Δ, ⊤

       Γ ⊢ Δ                          Γ ⊢ Δ
    ═══════════ (L⊤)               ═══════════ (R⊥)
     ⊤, Γ ⊢ Δ                       Γ ⊢ Δ, ⊥

     φ, ψ, Γ ⊢ Δ                    Γ ⊢ Δ, φ    Γ ⊢ Δ, ψ
    ══════════════ (L∧)            ══════════════════════ (R∧)
     φ ∧ ψ, Γ ⊢ Δ                     Γ ⊢ Δ, φ ∧ ψ

     φ, Γ ⊢ Δ    ψ, Γ ⊢ Δ           Γ ⊢ Δ, φ, ψ
    ══════════════════════ (L∨)    ══════════════ (R∨)
       φ ∨ ψ, Γ ⊢ Δ                 Γ ⊢ Δ, φ ∨ ψ

where, as usual, small Greek letters refer to individual formulae and capital Greek letters to finite sets of formulae. Furthermore, a double line indicates that the rule can be used in both directions. The backwards rules are not present in the usual sequent calculus since there they are consequences of the identity and the cut rule. The essential difference in character between the forwards and the backwards rules will become apparent in Section 2.

The rules (L∧) and (R∨) force the comma which separates formulae on the left to refer to conjunction and the comma on the right to disjunction.

Note that we cannot refer to implication or negation in the logical systems as the corresponding rules would make it necessary to transfer formulae from one side to the other. However, the logical systems themselves may still support these connectives.

On the side of structural rules we will only refer to weakening

        Γ ⊢ Δ
    ─────────────── (W)
     Γ′, Γ ⊢ Δ, Δ′

and keep exchange and contraction implicit. Thus we are working with sets of formulae rather than sequences. The forwards rules (R⊥) and (L⊤) are special cases of weakening.

As the examples above suggest, this calculus is not about finding tautologies; rather, each relation ⊢ between formulae of two logical systems embodies a particular, possibly subjective, inference. Whatever the reasons are for holding such an inference as true, there are other inferences which should in such a situation also be held as true. The rules above formalise precisely this reasoning: If φ, ψ, Γ entails Δ then φ ∧ ψ, Γ should also entail Δ, and so on. Our objects of study are therefore relations between sets of formulae which are closed under the rules from above. We fix this in a definition:

Definition 1.1 For two algebras $(L;\wedge,\vee,\top,\bot)$ and $(M;\wedge',\vee',\top',\bot')$ of type $(2,2,0,0)$, a consequence relation $\vdash$ from $L$ to $M$ is a binary relation between finite subsets of $L$ and $M$ closed under $(L\bot)$, $(R\bot)$, $(L\top)$, $(R\top)$, $(L\wedge)$, $(R\wedge)$, $(L\vee)$, $(R\vee)$, and $(\mathrm{W})$.

If, according to a consequence relation $\vdash$, the formula $\phi$ implies $\psi$, and if, according to a second relation $\vdash'$, $\psi$ implies $\alpha$, then it makes sense to combine these two inferences and to say that $\phi$ implies $\alpha$ according to the composition $\vdash\circ\vdash'$ of the two given consequence relations.

The obvious way to formulate the cut rule is the following:

$$
\frac{\Gamma\vdash\phi \qquad \phi\vdash'\Delta}{\Gamma\,(\vdash\circ\vdash')\,\Delta}\,(\mathrm{Cut}).
$$
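The composition defined by (Cut) is easy to model concretely. The following Python sketch is our illustration, not part of the text: a relation is a set of pairs of frozensets, formulae are plain strings (a simplifying assumption), and `compose` collects the sequents obtainable by a single application of (Cut).

```python
# Sequents are pairs (gamma, delta) of frozensets of formulae (strings).
# A relation is a set of such pairs.  The rule (Cut) composes relations:
# from gamma |- phi and phi |-' lam we may infer gamma (|- o |-') lam.

def seq(gamma, delta):
    return (frozenset(gamma), frozenset(delta))

def compose(r1, r2):
    """All sequents derivable by a single application of (Cut)."""
    result = set()
    for gamma, delta in r1:
        if len(delta) != 1:        # (Cut) cuts over a single formula
            continue
        (phi,) = delta
        for theta, lam in r2:
            if theta == frozenset({phi}):
                result.add((gamma, lam))
    return result

r1 = {seq({"a"}, {"phi"})}
r2 = {seq({"phi"}, {"b"})}
print(compose(r1, r2))             # contains the sequent a (|- o |-') b
```

Note that `compose` performs only single cuts; the closure conditions of Definition 1.1 are not enforced here.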

As consequence relations relate sets of formulae rather than single formulae one might look for a version that allows sets of formulae on both sides of the turnstile. A logically correct alternative is given by the following rule whose premise is meant to be read as two families of sequents, not as proof trees:

$$
\frac{\Gamma\vdash\Delta_1 \;\cdots\; \Gamma\vdash\Delta_n \qquad \Theta_1\vdash'\Lambda \;\cdots\; \Theta_m\vdash'\Lambda}{\Gamma\,(\vdash\circ\vdash')\,\Lambda}\,(\mathrm{Cut}^*)
$$

subject to the condition that for every choice function $f\in\prod_{i=1}^{n}\Delta_i$ there exists an index $j$ so that $\Theta_j\subseteq\{f_1,\ldots,f_n\}$.

The intuition behind this cut rule and its side condition is that if $\Gamma$ entails all the $\Delta_i$'s then at least one formula in each $\Delta_i$ is true. If for every possibility, coded by a choice function, these formulae cover one of the $\Theta_j$'s then $\Gamma$ also entails $\Lambda$.

Remark. The side condition includes the degenerate case $n=0$. Then there is only one choice function, namely the empty one. Its image is $\emptyset$ and thus the side condition says that one of the premises $\Theta_j$ must also be empty. This agrees with the intuition that $\vdash\Lambda$ means that the disjunction of the formulae in $\Lambda$ is true and hence is a consequence of any set of premises $\Gamma$.

Similarly, $m=0$ is admissible if and only if one of the $\Delta_i$ is empty. Note, however, that $m=n=0$ does not satisfy the side condition.

This cut rule looks rather asymmetric whereas the other rules are perfectly symmetric. That is to say, if we take a rule and interchange left and right as well as the connectives $\vee$ and $\bot$ with their duals $\wedge$ and $\top$, then we again get a rule. We can, however, reformulate the side condition in a way that shows that it is also inherently symmetric:

Lemma 1.2 The side condition of (Cut*) is equivalent to

$$
\forall f\in\prod_{i=1}^{n}\Delta_i\;\;\forall g\in\prod_{j=1}^{m}\Theta_j:\;\{f_1,\ldots,f_n\}\cap\{g_1,\ldots,g_m\}\neq\emptyset.
$$

Proof. Assume $f$ and $g$ are two such choice functions. If the side condition of (Cut*) holds then there is an index $j$ such that $\Theta_j\subseteq\{f_1,\ldots,f_n\}$ which implies that $g_j\in\Theta_j$ is in the intersection.

Conversely, if the side condition fails then we can find a choice function $f\in\prod_i\Delta_i$ that does not cover any of the $\Theta_j$. This implies that all sets $\Theta_j\setminus\{f_1,\ldots,f_n\}$ are non-empty which yields a choice function $g\in\prod_j\Theta_j$ satisfying $\{f_1,\ldots,f_n\}\cap\{g_1,\ldots,g_m\}=\emptyset$. □

As the next proposition shows, both cut rules are in fact equivalent. The first rule (Cut) is usually easier to handle in proofs. An advantage of (Cut*) is that many properties of the system can be proved without using any logical rules, thus exposing their combinatorial character. Moreover, as we will see in Section 2, the seemingly more complicated rule (Cut*) is easier to handle

from the point of view of cut elimination. We are going to use both cuts—and even a third formulation later—depending on which one is more convenient.

Proposition 1.3 The cut rules (Cut) and (Cut*) are inter-definable.

Proof. The rule (Cut) is clearly an instance of (Cut*) as the side condition is trivially satisfied.

For the other direction consider a cut

$$
\frac{\Gamma\vdash\Delta_1 \;\cdots\; \Gamma\vdash\Delta_n \qquad \Theta_1\vdash'\Lambda \;\cdots\; \Theta_m\vdash'\Lambda}{\Gamma\,(\vdash\circ\vdash')\,\Lambda}\,(\mathrm{Cut}^*).
$$

We prove that we can rewrite it in terms of (Cut) by induction on the number of choice functions in the product $\prod_{i=1}^{n}\Delta_i$.

The case where there is no such function has to be dealt with separately: We infer that one of the $\Delta_i$ is empty and we get

$$
\frac{\dfrac{\Gamma\vdash}{\Gamma\vdash\bot}\,(\mathrm{W}) \qquad \dfrac{}{\bot\vdash'\Lambda}\,(L\bot)}{\Gamma\,(\vdash\circ\vdash')\,\Lambda}\,(\mathrm{Cut}).
$$

If there is precisely one such function then all $\Delta_i$ are singletons $\{\delta_i\}$, and we can assume without loss of generality that $m=1$ and $\Theta_1\subseteq\{\delta_1,\ldots,\delta_n\}$. We construct the proof

$$
\frac{\dfrac{\Gamma\vdash\delta_1 \;\cdots\; \Gamma\vdash\delta_n}{\Gamma\vdash\delta_1\wedge\cdots\wedge\delta_n}\,(R\wedge) \qquad \dfrac{\dfrac{\Theta_1\vdash'\Lambda}{\delta_1,\ldots,\delta_n\vdash'\Lambda}\,(\mathrm{W})}{\delta_1\wedge\cdots\wedge\delta_n\vdash'\Lambda}\,(L\wedge)}{\Gamma\,(\vdash\circ\vdash')\,\Lambda}\,(\mathrm{Cut})
$$

where $(R\wedge)$ and $(L\wedge)$ here indicate multiple applications of the respective rules.

Now for the induction step: If there is more than one choice function then one $\Delta_i$ contains at least two elements, say $\Delta_n$. We are going to reduce the number of choice functions by replacing the sequent $\Gamma\vdash\Delta_n$ by $\Gamma\vdash\bigvee\Delta_n$ as follows: For each (restricted) choice function $f\in\prod_{i=1}^{n-1}\Delta_i$ and $\phi\in\Delta_n$ there is an index $j$ such that $\Theta_j\subseteq\{f_1,\ldots,f_{n-1},\phi\}$, and because of weakening we can assume that the two sets are equal.

We apply $(L\vee)$ to these sequents and get $\bigvee\Delta_n,f_1,\ldots,f_{n-1}\vdash'\Lambda$. Then we repeat this procedure for all elements of $\prod_{i=1}^{n-1}\Delta_i$ and consider the cut

$$
\frac{\Gamma\vdash\Delta_1 \;\cdots\; \Gamma\vdash\Delta_{n-1} \quad \Gamma\vdash\bigvee\Delta_n \qquad \bigvee\Delta_n,f_1,\ldots,f_{n-1}\vdash'\Lambda \;\cdots\; \bigvee\Delta_n,h_1,\ldots,h_{n-1}\vdash'\Lambda}{\Gamma\,(\vdash\circ\vdash')\,\Lambda}\,(\mathrm{Cut}^*)
$$

where $f,g,\ldots,h$ range over all such choice functions. The side condition is still satisfied as all possible choices are explicitly listed on the right, but the number of choice functions on the left is now the previous number divided by $|\Delta_n|>1$. Hence, we can apply the induction hypothesis which concludes the proof. □

We make one more observation about the logic to make some proofs later more convenient. We have weakening in the system and hence the following alternative multiplicative formulations of several logical rules are derivable. By slight abuse of notation we refer to them by the same names.

$$
\frac{\phi,\Gamma\vdash\Delta \qquad \psi,\Gamma'\vdash\Delta'}{\phi\vee\psi,\Gamma,\Gamma'\vdash\Delta,\Delta'}\,(L\vee)
\qquad
\frac{\Gamma\vdash\Delta,\phi \qquad \Gamma'\vdash\Delta',\psi}{\Gamma,\Gamma'\vdash\Delta,\Delta',\phi\wedge\psi}\,(R\wedge)
$$

The advantage of these rules is that we do not have to apply weakening explicitly when it is only peripheral to the proof. Note, however, that there is no multiplicative formulation of the backwards rules.

The second version of the cut rule can also be similarly reformulated as

$$
\frac{\Gamma_1\vdash\Delta_1 \;\cdots\; \Gamma_n\vdash\Delta_n \qquad \Theta_1\vdash'\Lambda_1 \;\cdots\; \Theta_m\vdash'\Lambda_m}{\Gamma_1,\ldots,\Gamma_n\,(\vdash\circ\vdash')\,\Lambda_1,\ldots,\Lambda_m}\,(\mathrm{Cut}^*)
$$

which, of course, is again subject to the side condition.

Having different formulations of the rules available allows us to use the following general strategy in the next section: When we have to transform one proof into another we will always assume that the given proof uses the most restrictive formulation of the rules. In the construction of the new proof

we can, however, feel free to use the most convenient version of the rules. This enables us to get rid of a lot of the bookkeeping that goes on in these proofs and to focus on the essential features.

1.2 A Category of Consequence Relations

We are interested in constructing a category of consequence relations. As a first step we observe that (Cut) preserves consequence relations and that it is associative.

Lemma 1.4 Given consequence relations $\vdash$ from $L$ to $M$ and $\vdash'$ from $M$ to $N$, the sequents $\Gamma\,(\vdash\circ\vdash')\,\Lambda$ that arise from cuts form a consequence relation.

Proof. Weakening is obvious because no sequent that is relevant for the cut is altered, and for the same reason $(R\bot)$, $(R\vee)$ and backwards $(R\wedge)$ are trivial.

For forwards $(R\wedge)$ assume we have two cuts:

$$
\frac{\Gamma\vdash\alpha \qquad \alpha\vdash'\Delta,\phi}{\Gamma\,(\vdash\circ\vdash')\,\Delta,\phi}\,(\mathrm{Cut})
\qquad
\frac{\Gamma\vdash\beta \qquad \beta\vdash'\Delta,\psi}{\Gamma\,(\vdash\circ\vdash')\,\Delta,\psi}\,(\mathrm{Cut})
$$

Then we can directly construct the new proof:

$$
\frac{\Gamma\vdash\alpha \qquad \Gamma\vdash\beta \qquad \dfrac{\alpha\vdash'\Delta,\phi \qquad \beta\vdash'\Delta,\psi}{\alpha,\beta\vdash'\Delta,\phi\wedge\psi}\,(R\wedge)}{\Gamma\,(\vdash\circ\vdash')\,\Delta,\phi\wedge\psi}\,(\mathrm{Cut}^*)
$$

For $(R\top)$ we have the proof

$$
\frac{\dfrac{}{\Gamma\vdash\top}\,(R\top) \qquad \dfrac{}{\top\vdash'\top}\,(R\top)}{\Gamma\,(\vdash\circ\vdash')\,\top}\,(\mathrm{Cut}).
$$

Since the set of rules is self-dual, as discussed in the paragraph preceding Lemma 1.2, it is now clear that $\vdash\circ\vdash'$ is closed under the remaining rules. □

Lemma 1.5 The composition of consequence relations induced by (Cut) is associative.

Proof. We can easily rewrite a proof of the form

$$
\frac{\dfrac{\Gamma\vdash\phi \qquad \phi\vdash'\psi}{\Gamma\,(\vdash\circ\vdash')\,\psi}\,(\mathrm{Cut}) \qquad \psi\vdash''\Delta}{\Gamma\,((\vdash\circ\vdash')\circ\vdash'')\,\Delta}\,(\mathrm{Cut})
$$

as

$$
\frac{\Gamma\vdash\phi \qquad \dfrac{\phi\vdash'\psi \qquad \psi\vdash''\Delta}{\phi\,(\vdash'\circ\vdash'')\,\Delta}\,(\mathrm{Cut})}{\Gamma\,(\vdash\circ(\vdash'\circ\vdash''))\,\Delta}\,(\mathrm{Cut})
$$

and vice versa. □

1.2.1 A Proof-Theoretic Analysis of Identities

What remains to be done, to make consequence relations into a category, is finding identities. One might be tempted to employ ordinary logical implication between formulae of one world for this. However, this is somewhat against the spirit of our whole setup where we want to suppress purely logical equivalences in order to exhibit the properties of inferences which are, in some sense, subjective or observational. As we have argued, for such inferences it is not necessarily the case that a formula $\phi$ implies itself. That is, we refuse the identity axioms

$$
\frac{}{\phi\vdash\phi}\,(\mathrm{Id}).
$$

We reserve the symbol $\Vdash$ to represent a consequence relation that has identical source and target algebra $L$.

On the other hand, Gentzen's original cut rule

$$
\frac{\Gamma\vdash\Delta,\phi \qquad \phi,\Theta\vdash\Lambda}{\Gamma,\Theta\vdash\Delta,\Lambda}\,(\mathrm{Cut}')
$$

makes sense even in an observational interpretation. The point of introducing yet another cut rule is the following: The rule (Cut) is clearly structurally simpler than (Cut*), and hence it is also easier to check whether a given relation $\Vdash$ is closed under the former. Unfortunately, (Cut) is not very well suited for cut elimination as we will see in Section 2. The new rule (Cut') is a compromise that to some extent combines the advantages of the two other cut rules; it is easy as it has only two premises and it is still well-behaved under cut elimination.

We might hope that (Cut') holds for an identity consequence relation. More precisely, if $\Vdash\circ\Vdash\;=\;\Vdash$ for a consequence relation on some algebra $L$ then one might expect that $\Vdash$ is closed under Gentzen's cut. As it turns out, this can be shown if consequence relations are assumed to be interpolative in the following sense:

Definition 1.6 We say that $\Vdash$ has interpolants if the following are satisfied:

(L-Int) If $\phi,\Gamma\Vdash\Delta$ then there exists $\phi'\in L$ so that $\phi\Vdash\phi'$ and $\phi',\Gamma\Vdash\Delta$.

(R-Int) If $\Gamma\Vdash\Delta,\phi$ then there exists $\phi'\in L$ so that $\Gamma\Vdash\Delta,\phi'$ and $\phi'\Vdash\phi$.

Of course, if the identity axioms of sequent calculus are adopted then interpolation is trivial. Looking at this from the other end, we can say that

interpolation will provide us with some of the consequences of the identity axiom scheme.

Lemma 1.7 If $\Vdash$ has interpolants, then $\Vdash\;\subseteq\;\Vdash\circ\Vdash$.

Proof. A sequent of the form $\Vdash\Delta$ can be cut with itself using (Cut*) to give $(\Vdash\circ\Vdash)\,\Delta$ since the side condition is trivial (see also the remark on page 57). If it is of the form $\gamma_1,\ldots,\gamma_n\Vdash\Delta$ we use interpolation $n$ times to get $\gamma_1',\ldots,\gamma_n'\Vdash\Delta$ and $\gamma_i\Vdash\gamma_i'$ for $i=1,\ldots,n$. This allows us to form the proof

$$
\frac{\gamma_1\Vdash\gamma_1' \;\cdots\; \gamma_n\Vdash\gamma_n' \qquad \gamma_1',\ldots,\gamma_n'\Vdash\Delta}{\gamma_1,\ldots,\gamma_n\,(\Vdash\circ\Vdash)\,\Delta}\,(\mathrm{Cut}^*). \qquad\Box
$$

Proposition 1.8 A consequence relation $\Vdash$ with interpolants is closed under (Cut') if and only if $\Vdash\circ\Vdash\;\subseteq\;\Vdash$.

Proof. Suppose that $\Vdash\circ\Vdash\;\subseteq\;\Vdash$ holds. We want to show that it is closed under (Cut'). To this end, let $\Gamma\Vdash\Delta,\phi$ and $\phi,\Theta\Vdash\Lambda$ be two sequents with $\Delta=\{\delta_1,\ldots,\delta_n\}$ and $\Theta=\{\theta_1,\ldots,\theta_m\}$. As $\Vdash$ has interpolants there are elements $\delta_i'$ and $\theta_j'$ such that $\delta_i'\Vdash\delta_i$; $\Gamma\Vdash\delta_1',\ldots,\delta_n',\phi$; $\theta_j\Vdash\theta_j'$; and $\phi,\theta_1',\ldots,\theta_m'\Vdash\Lambda$. Hence, we can form the cut

$$
\frac{\Gamma\Vdash\delta_1',\ldots,\delta_n',\phi \quad \theta_1\Vdash\theta_1' \;\cdots\; \theta_m\Vdash\theta_m' \qquad \delta_1'\Vdash\delta_1 \;\cdots\; \delta_n'\Vdash\delta_n \quad \phi,\theta_1',\ldots,\theta_m'\Vdash\Lambda}{\Gamma,\Theta\,(\Vdash\circ\Vdash)\,\Delta,\Lambda}\,(\mathrm{Cut}^*)
$$

as the side condition is straightforward to verify. By assumption this implies $\Gamma,\Theta\Vdash\Delta,\Lambda$.

The converse follows immediately from the observation that (Cut) is an instance of (Cut'). □

Corollary 1.9 If $\Vdash$ has interpolants, then it is an idempotent with respect to (Cut) if and only if it is closed under Gentzen's cut rule.

From this, we take our cue to define the objects of a category.

Definition 1.10 $(L;\wedge,\vee,\top,\bot;\Vdash)$ is a continuous sequent calculus if $\Vdash$ is a consequence relation from $L$ to $L$ such that $\Vdash$ has interpolants and is closed under (Cut), or equivalently (Cut').

The relations $\Vdash$ are in fact idempotents but not identities for all consequence relations. This is not surprising because, as yet, we do not have any axioms that make sure that identities and other consequence relations interact in a sensible way.

Definition 1.11 Say that a consequence relation $\vdash$ from $L$ to $M$ is compatible with $\Vdash_L$ and $\Vdash_M$ if

$$
\Vdash_L\circ\vdash\;=\;\vdash\;=\;\vdash\circ\Vdash_M.
$$

We let MLS (for multi-lingual sequents) be the category that has continuous sequent calculi as objects and compatible consequence relations between them as arrows.

The facts that $\Vdash_L$ is self-compatible on both sides, and that composition of compatible consequence relations preserves compatibility, are both evident from the definition. From the discussion preceding Lemma 1.2 it is clear that MLS is self-dual.

It is also obvious that for any consequence relation $\vdash\colon L\to M$ the derived consequence relation $\Vdash_L\circ\vdash\circ\Vdash_M$ is compatible.

The properties (L-Int), (R-Int), and being closed under (Cut') of continuous sequent calculi are inherited by compatible consequence relations:

Proposition 1.12 For a consequence relation $\vdash$ between continuous sequent calculi $L$ and $M$ the following are equivalent:

(i) The relation $\vdash$ is compatible with $\Vdash_L$ and $\Vdash_M$.

(ii) It is closed under

(L-Int') $\phi,\Gamma\vdash\Delta$ implies that there exists $\phi'\in L$ so that $\phi\Vdash_L\phi'$ and $\phi',\Gamma\vdash\Delta$;

(R-Int') $\Gamma\vdash\Delta,\phi$ implies that there exists $\phi'\in M$ so that $\phi'\Vdash_M\phi$ and $\Gamma\vdash\Delta,\phi'$;

(L-Cut) if $\Gamma\Vdash_L\phi$ and $\phi,\Theta\vdash\Lambda$, then $\Gamma,\Theta\vdash\Lambda$; and

(R-Cut) if $\Gamma\vdash\Delta,\phi$ and $\phi\Vdash_M\Lambda$, then $\Gamma\vdash\Delta,\Lambda$.

(iii) It satisfies $\Vdash_L\circ\vdash\;\subseteq\;\vdash$ and $\vdash\circ\Vdash_M\;\subseteq\;\vdash$, and is closed under (L-Int') and (R-Int').

Proof. If $\vdash$ is compatible with $\Vdash_L$, then any sequent $\phi,\Gamma\vdash\Delta$ can be written as a cut

$$
\frac{\phi,\Gamma\Vdash_L\alpha \qquad \alpha\vdash\Delta}{\phi,\Gamma\,(\Vdash_L\circ\vdash)\,\Delta}\,(\mathrm{Cut}).
$$

Because of interpolation in $L$ there exists a $\phi'\in L$ such that $\phi\Vdash_L\phi'$ and $\phi',\Gamma\Vdash_L\alpha$. We cut and get

$$
\frac{\phi',\Gamma\Vdash_L\alpha \qquad \alpha\vdash\Delta}{\phi',\Gamma\,(\Vdash_L\circ\vdash)\,\Delta}\,(\mathrm{Cut})
$$

which proves (L-Int').

Now, suppose $\Gamma\Vdash_L\phi$ and $\phi,\Theta\vdash\Lambda$, where $\Theta=\{\theta_1,\ldots,\theta_m\}$. By the existence of interpolants that we have just proved, there are elements $\theta_1'$ through $\theta_m'$ so that $\phi,\theta_1',\ldots,\theta_m'\vdash\Lambda$ and $\theta_j\Vdash_L\theta_j'$. These sequents allow the proof

$$
\frac{\Gamma\Vdash_L\phi \quad \theta_1\Vdash_L\theta_1' \;\cdots\; \theta_m\Vdash_L\theta_m' \qquad \phi,\theta_1',\ldots,\theta_m'\vdash\Lambda}{\Gamma,\Theta\,(\Vdash_L\circ\vdash)\,\Lambda}\,(\mathrm{Cut}^*).
$$

The two dual rules are proved analogously. This concludes the proof of (i) ⇒ (ii).

The implication (ii) ⇒ (iii) is clear since from (L-Cut) and (R-Cut) we immediately get $\Vdash_L\circ\vdash\;\subseteq\;\vdash$ and $\vdash\circ\Vdash_M\;\subseteq\;\vdash$.

For (iii) ⇒ (i) we only have to observe that (L-Int') and (R-Int') yield $\vdash\;\subseteq\;\Vdash_L\circ\vdash$ and $\vdash\;\subseteq\;\vdash\circ\Vdash_M$; the trivial case, where the relevant side of the sequent is empty, is taken care of by (W), $(L\bot)$ and $(R\top)$. □

As with (Cut'), the advantage of introducing the new rules (L-Cut) and (R-Cut) will only become clear in Section 2.

A closer inspection of the above proof shows that it did not really depend on $\vdash$ being a consequence relation. We only needed (W), $(L\bot)$ and $(R\top)$ to prove that a relation $\vdash$ satisfying the conditions of the proposition is compatible. Such a compatible relation automatically satisfies most rules for consequence relations, since $\Vdash_L$ and $\Vdash_M$ do:

Corollary 1.13 A relation between continuous sequent calculi is a compatible consequence relation if and only if it satisfies (W), $(L\bot)$, $(R\top)$, forwards $(L\vee)$ and $(R\wedge)$, and the rules of Proposition 1.12, namely (L-Int'), (R-Int'), (L-Cut) and (R-Cut).

Proof. The conditions of the corollary are clearly necessary. If we assume conversely that the conditions hold for such a relation $\vdash$ then we know already that it is compatible. That it also satisfies all the remaining logical rules is shown using the pattern of first interpolating, then applying the appropriate rule for $\Vdash$ and then cutting. We consider the only interesting case:

For a sequent $\Gamma\vdash\Delta,\phi,\psi$ interpolation yields the sequents $\Gamma\vdash\Delta,\phi',\psi'$; $\phi'\Vdash\phi$; and $\psi'\Vdash\psi$, enabling us to construct the proof:

$$
\frac{\Gamma\vdash\Delta,\phi',\psi' \qquad \dfrac{\dfrac{\phi'\Vdash\phi}{\phi'\Vdash\phi,\psi}\,(\mathrm{W})}{\phi'\Vdash\phi\vee\psi}\,(R\vee)}{\Gamma\vdash\Delta,\phi\vee\psi,\psi'}\,(\text{R-Cut})
\qquad
\frac{\Gamma\vdash\Delta,\phi\vee\psi,\psi' \qquad \dfrac{\dfrac{\psi'\Vdash\psi}{\psi'\Vdash\phi,\psi}\,(\mathrm{W})}{\psi'\Vdash\phi\vee\psi}\,(R\vee)}{\Gamma\vdash\Delta,\phi\vee\psi}\,(\text{R-Cut}). \qquad\Box
$$

It is worthwhile to note that all definitions and theorems up to this point still make sense if one does not allow the application of the logical rules from the lower sequent to the upper sequent. We will come back to the significance and problems of backwards rules in the next section where we discuss cut elimination and later in Section 1.2 when we study semantics.

Remark. A different perspective on the definition of MLS is given by the following. We consider consequence relations and the composition given by (Cut). As we observed before this is not quite a category but only because it fails to have identities. We now split the class of idempotents that have interpolants, a technique which is well-known in category theory [17, 1.28]. It works even if the original 'category' fails to have identities. The resulting category in our case is precisely MLS.

Alternatively, we can build a proper category before we split the idempotents: We can restrict the 'internal' logic to situations where a proposition $\phi$ does imply itself. As the identity morphism on a $(2,2,0,0)$-algebra we can then take the smallest consequence relation generated by the identity rules, which will yield precisely the classically valid sequents of the system. Compatibility is then not an issue and we immediately obtain a (self-dual) category RMLS. Now we can again obtain MLS by splitting idempotents in RMLS that have interpolants.

2 Cut Elimination

The famous Cut Elimination Theorem of Gentzen [18] states that every valid sequent in the sequent calculus can be derived without employing the cut rule. Sequents in our setting, however, are not about absolute validity but about derivability of sequents from assumed sequents. The analogous theorem for this situation says that in every such derivation cuts between arbitrary sequents can be eliminated in favour of cuts between assumed sequents. We will exhibit a similar result which applies to the rule (Cut*).

2.1 Simple Elimination

If $R$ is any relation between finite sets of elements of $(2,2,0,0)$-algebras let $R^+$ denote the smallest such relation which contains $R$ and is closed under application of forwards rules, including weakening. In this definition we do not close $R$ under backwards rules like

$$
\frac{\phi\wedge\psi,\Gamma\vdash\Delta}{\phi,\psi,\Gamma\vdash\Delta}
$$

as they eliminate connectives and hence decrease the complexity of a sequent. For algebras themselves we say that $B\subseteq L$ is a generating set if the smallest subalgebra $B^+$ of $L$ containing $B$ is $L$ itself. This generation process can also be described by finitary rules:

$$
\frac{\phi\in B}{\phi\in B^+} \qquad \frac{\phi,\psi\in B^+}{\phi\wedge\psi\in B^+} \qquad \frac{\phi,\psi\in B^+}{\phi\vee\psi\in B^+} \qquad \frac{}{\top,\bot\in B^+}
$$

That is to say, the elements of B+ are precisely those whose membership in B+ can be derived using these rules.

If $B\subseteq L$ and $C\subseteq M$ and if $\vdash$ is a consequence relation from $L$ to $M$, write $\vdash|^B_C$ to denote the restriction of $\vdash$ to sequents made up entirely from the respective subsets, $\vdash\cap(\mathcal{P}_{\mathrm{fin}}(B)\times\mathcal{P}_{\mathrm{fin}}(C))$.

For a fixed set of generators $B$ we define the rank $r(\phi)$ of a formula $\phi$ to be the minimum height of a derivation for $\phi\in B^+$ using the rules given above, setting $r(\phi)=0$ for $\phi\in B$. For a finite set $\Gamma\subseteq L$, let $r(\Gamma)$ be the maximum rank of any member of $\Gamma$. Also let $rc(\Gamma)$ denote the number of elements of $\Gamma$ of maximum rank. For finite $\Gamma$, define the grade $g(\Gamma)$ as the pair $g(\Gamma):=(r(\Gamma),rc(\Gamma))$, with the lexicographical order on the set of all grades to make it into a well-order.
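For a term algebra these definitions are directly computable. The Python sketch below is our illustration: terms are nested tuples over string generators, and the convention that the constants top and bot receive rank 1 (their axioms counting as height-1 derivations) is our assumption about how height is measured.

```python
def rank(t, gens):
    """r(t): minimum height of a derivation of t in B+, with r = 0 on B.
    Assumption: the axioms for top and bot count as height-1 derivations."""
    if isinstance(t, str):              # a generator from B
        if t not in gens:
            raise ValueError("not generated: " + t)
        return 0
    op = t[0]
    if op in ("top", "bot"):
        return 1
    # op is "and" or "or": one rule application above both subderivations
    return 1 + max(rank(t[1], gens), rank(t[2], gens))

def grade(gamma, gens):
    """g(Gamma) = (r(Gamma), rc(Gamma)): maximal rank together with the
    number of elements attaining it.  Python tuples compare
    lexicographically, so grades are ordered as in the text."""
    ranks = [rank(t, gens) for t in gamma]
    if not ranks:
        return (0, 0)
    top_rank = max(ranks)
    return (top_rank, ranks.count(top_rank))

B = {"p", "q"}
phi = ("or", ("and", "p", "q"), "p")        # (p and q) or p
print(rank(phi, B), grade([phi, "q"], B))   # 2 (2, 1)
```

Splitting a maximal-rank formula, as in the proof of Lemma 2.1 below, either lowers the first component of the grade or keeps it and lowers the second, which is why the lexicographic order supports the induction.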

We come to the first important lemma relating sets of generators for algebras and freely generated consequence relations. Note that although the definition of (-)+ excludes backwards rules we nonetheless require consequence relations to be closed under these rules. In fact, many of the results on cut elimination depend on them. As we will see, for example, in the proof of

the following lemma they are essential for many induction arguments to go through.

Lemma 2.1 Let $\vdash$ be a consequence relation from $L$ to $M$. If $B$ and $C$ are generating sets of $L$ and $M$, respectively, then $\vdash\;=\;(\vdash|^B_C)^+$.

Proof. We obviously have $(\vdash|^B_C)^+\subseteq\;\vdash$. For the interesting containment we have to verify that $\Gamma\vdash\Delta$ implies $\Gamma\,(\vdash|^B_C)^+\,\Delta$. We do so by induction on the grades $g(\Gamma)$ and $g(\Delta)$: The basis is immediate, and if $r(\Gamma)>0$, then we can write $\Gamma$ as $\phi,\Gamma'$ for some token $\phi$ such that $r(\phi)=r(\Gamma)$. Now, $\phi$ is of the form $\top$, $\bot$, $\psi\wedge\chi$ or $\psi\vee\chi$ where $\psi$ and $\chi$ are of lower rank.

Let us look at the case $\phi=\psi\vee\chi$ as an example, where this is the decomposition giving rise to the rank of $\phi$. We get

$$
\frac{\psi\vee\chi,\Gamma'\vdash\Delta}{\psi,\Gamma'\vdash\Delta \qquad \chi,\Gamma'\vdash\Delta}
$$

by using backwards $(L\vee)$. As for the grades involved we have

$$
g(\psi,\Gamma'),\,g(\chi,\Gamma') \;<\; g(\psi\vee\chi,\Gamma') \;=\; g(\Gamma)
$$

and so the induction hypothesis yields

$$
\frac{\psi,\Gamma'\,(\vdash|^B_C)^+\,\Delta \qquad \chi,\Gamma'\,(\vdash|^B_C)^+\,\Delta}{\psi\vee\chi,\Gamma'\,(\vdash|^B_C)^+\,\Delta}\,(L\vee).
$$

The other cases (and their duals in $\Delta$) are proved similarly. □

The lemma shows that we can restrict our attention to the behaviour of consequence relations on generators for the algebras involved. Note that this is, in particular, true for the relations $\Vdash$ that define the objects of our category MLS. In the following we examine how far this idea can be pushed.

We start with the composition of consequence relations. Suppose $R$ and $S$ are binary relations between finite sets of formulae. We write $R\circ S$ for the set of sequents that can be derived by using (Cut*) which, of course, includes all sequents derivable by (Cut).

In the context of cut elimination it actually matters which formulation of cut we use: If we go back to the proof of Proposition 1.3 we see that cuts using (Cut*) can be reformulated as (Cut)-cuts by using logical forward rules. Unfortunately, this operation increases the grades of the sequents involved and hence cannot be used in an induction proof based on these grades. If we had formulated our logical system only in terms of the simple cut rule (Cut) most of the following results would not hold.

Lemma 2.2 If $C$ is a generating set for $M$ and $\vdash$, $\vdash'$ are consequence relations from $L$ to $M$, and $M$ to $N$, respectively, then

$$
\vdash\circ\vdash' \;=\; \vdash|_C\circ\vdash'|^C,
$$

where the composition on the right hand side stands for the sequents that can be derived by either cut rule.

Proof. One containment is obvious. For the other one the proof is by induction on the maximal grade of $\Delta$'s and $\Theta$'s involved in a cut

$$
\frac{\Gamma\vdash\Delta_1 \;\cdots\; \Gamma\vdash\Delta_n \qquad \Theta_1\vdash'\Lambda \;\cdots\; \Theta_m\vdash'\Lambda}{\Gamma\,(\vdash\circ\vdash')\,\Lambda}\,(\mathrm{Cut}^*)
$$

and the number of sequents with the maximal grade. If all $\Delta$'s and $\Theta$'s have rank $0$, i.e. they contain only formulae from $C$, then we clearly have $\Gamma\,(\vdash|_C\circ\vdash'|^C)\,\Lambda$. For the induction step we look at the sequent $\Delta_i$ or $\Theta_j$ with the maximal grade. Without loss of generality we can assume that $\Delta_n=\tilde\Delta_n,\phi$ is that sequent, where $\phi$ is a formula of maximal rank. We distinguish cases on the basis of the decomposition determining the rank of $\phi$.

If $\phi=\psi\vee\chi$ we replace the sequent $\Gamma\vdash\tilde\Delta_n,\psi\vee\chi$ by $\Gamma\vdash\tilde\Delta_n,\psi,\chi$ and consider the choice functions that now arise. The only ones that might be a problem are those that include $\psi$ or $\chi$. Suppose $f\in\prod_{i=1}^{n-1}\Delta_i$ is a choice function such that $\{f_1,\ldots,f_{n-1},\psi\vee\chi\}$ covers a $\Theta_j$ but $\{f_1,\ldots,f_{n-1}\}$ does not. Then $\Theta_j$ must be of the form $\tilde\Theta_j,\psi\vee\chi$. We deduce

$$
\frac{\psi\vee\chi,\tilde\Theta_j\vdash'\Lambda}{\psi,\tilde\Theta_j\vdash'\Lambda \qquad \chi,\tilde\Theta_j\vdash'\Lambda}\,(L\vee)
$$

and adding these two sequents ensures that the side condition is satisfied whether we extend $f$ by $\psi$ or $\chi$. Note that we have to add these sequents rather than just replacing the old one since we cannot be sure that none of the $f_i$ is $\psi\vee\chi$. However, the new sequents that arise in this way by 'splitting' $\psi\vee\chi$ where it appears in a $\Theta$ do not have the maximal grade. Hence, having already decreased the maximal grade or the number of its occurrences by substituting $\Gamma\vdash\tilde\Delta_n,\psi,\chi$ for $\Gamma\vdash\tilde\Delta_n,\psi\vee\chi$, we can simply add all these sequents and apply the induction hypothesis to get $\Gamma\,(\vdash|_C\circ\vdash'|^C)\,\Lambda$.

For $\phi=\psi\wedge\chi$ we analogously replace $\Gamma\vdash\tilde\Delta_n,\psi\wedge\chi$ by the two sequents $\Gamma\vdash\tilde\Delta_n,\psi$ and $\Gamma\vdash\tilde\Delta_n,\chi$ and add all results of applying $(L\wedge)$ to sequents of the form $\psi\wedge\chi,\tilde\Theta_j\vdash'\Lambda$.

If $\phi=\bot$ we apply $(R\bot)$ to get the new sequent $\Gamma\vdash\tilde\Delta_n$. This reduces the number of choice functions, and the side condition is valid without making any modifications to sequents on the right.

Finally we have to consider the case $\phi=\top$. Here we delete all sequents $\Gamma\vdash\Delta_i,\top$ and replace all sequents $\top,\Theta\vdash'\Lambda$ by $\Theta\vdash'\Lambda$. Any choice function $f$ for the remaining $\Delta$'s can be extended to one $\bar f\in\prod_{i=1}^{n}\Delta_i$ by picking $\top$ for the other $\Delta$'s. Since $\{\bar f_1,\ldots,\bar f_n\}$ covers one $\Theta_j$ the image of $f$ contains $\Theta_j\setminus\{\top\}$. Note that in the case that there is no longer a sequent on the left there was a choice function picking only $\top$. This implies that a sequent on the right is of the form $\top\vdash'\Lambda$ or $\vdash'\Lambda$, the latter being derivable from the former using $(L\top)$. This is precisely the degenerate case of the side condition discussed in the remark on page 57.

The dual cases, where one of the $\Theta$'s is the sequent with the maximal rank, are argued in the same way. □

Theorem 2.3 (Cut Elimination) Let $B\subseteq L$, $C\subseteq M$ and $D\subseteq N$ be sets of generators. Then for any consequence relations $\vdash$ and $\vdash'$ between $L$, $M$ and $N$ it is the case that

$$
\vdash\circ\vdash' \;=\; (\vdash|^B_C\circ\vdash'|^C_D)^+.
$$

Proof. Because of the previous two lemmata we immediately get

$$
\vdash\circ\vdash' \;=\; \vdash|_C\circ\vdash'|^C \;=\; ((\vdash|_C\circ\vdash'|^C)|^B_D)^+.
$$

The set $(\vdash|_C\circ\vdash'|^C)|^B_D$ contains precisely the sequents that arise as cuts

$$
\frac{\Gamma\vdash\Delta_1 \;\cdots\; \Gamma\vdash\Delta_n \qquad \Theta_1\vdash'\Lambda \;\cdots\; \Theta_m\vdash'\Lambda}{\Gamma\,(\vdash\circ\vdash')\,\Lambda}\,(\mathrm{Cut}^*)
$$

where $\Gamma$ is a subset of $B$, the $\Delta_i$ and $\Theta_j$ are subsets of $C$, and $\Lambda$ is a subset of $D$. In other words $(\vdash|_C\circ\vdash'|^C)|^B_D \;=\; \vdash|^B_C\circ\vdash'|^C_D$ which together with the first equation proves the theorem. □

2.2 Construction of Consequence Relations

So far we have looked at given consequence relations. Now, we want to use similar techniques to construct consequence relations by specifying them on generators. The question arises what is needed to guarantee that, for a given such relation $R$, the resulting $R^+$ is actually a morphism in our category. As it turns out, it is very difficult to derive general rules in this direction. The problem is that originally we allowed formulae to be drawn from general $(2,2,0,0)$-algebras. The situation becomes much more manageable if we restrict ourselves to term algebras over a given set of generators. In logic, formulae are usually freely defined, so this is a quite normal restriction. As we will see in the following discussion, it is not even a serious one.

2.2.1 Term Algebras

For any set $B$ let $T(B)$ be the term algebra for the signature $(\wedge,\vee,\top,\bot)$. We are now going to show that in MLS every object is isomorphic to one where the underlying algebra is a term algebra. It is well known from universal algebra that every algebra $L$ is a quotient of the term algebra $T(L)$ [47, Corollary 5.1.7]. The canonical quotient map takes a term from $T(L)$ and evaluates it in $L$. Writing $[\cdot]\colon T(L)\to L$ for this map we can define

$$
\gamma_1,\ldots,\gamma_n \;\Vdash_{T(L)}\; \delta_1,\ldots,\delta_m \quad:\Longleftrightarrow\quad [\gamma_1],\ldots,[\gamma_n] \;\Vdash_L\; [\delta_1],\ldots,[\delta_m].
$$

We usually abbreviate expressions of the form $[\gamma_1],\ldots,[\gamma_n]$ by $[\Gamma]$.
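The quotient map $[\cdot]$ is ordinary bottom-up evaluation of terms. As a small illustration (ours, not the thesis's) take for $L$ the powerset of $\{1,2\}$ with intersection as meet and union as join; terms of $T(L)$ are nested tuples whose leaves are elements of $L$:

```python
# A toy (2, 2, 0, 0)-algebra L: the powerset of {1, 2} with intersection
# as meet and union as join.  Terms of T(L) are nested tuples whose leaves
# are elements of L; eval_term is the canonical quotient map [.]: T(L) -> L.

TOP, BOT = frozenset({1, 2}), frozenset()

def eval_term(t):
    if isinstance(t, frozenset):       # a generator: an element of L
        return t
    op = t[0]
    if op == "top":
        return TOP
    if op == "bot":
        return BOT
    a, b = eval_term(t[1]), eval_term(t[2])
    return a & b if op == "and" else a | b

def eval_set(gamma):
    """[Gamma]: apply the quotient map to every term of a finite set."""
    return {eval_term(t) for t in gamma}

# [({1} or {2}) and top] evaluates to {1, 2}:
phi = ("and", ("or", frozenset({1}), frozenset({2})), ("top",))
print(eval_term(phi))                  # frozenset({1, 2})
```

With `eval_set` in place, the defining condition of $\Vdash_{T(L)}$ above is literally a lookup of $([\Gamma],[\Delta])$ in the relation $\Vdash_L$.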

Lemma 2.4 If $(L,\Vdash_L)$ is a continuous sequent calculus, then $(T(L),\Vdash_{T(L)})$ is as well.

Proof. To prove that $\Vdash_{T(L)}$ is a consequence relation we have to check each of the rules of the calculus. This can be done by using the fact that $\Vdash_L$ satisfies these rules. We do one case to give the flavour of the argument: Let us suppose

$$
\Gamma\Vdash_{T(L)}\Delta,\phi \qquad \Gamma\Vdash_{T(L)}\Delta,\psi
$$

which, by definition, is equivalent to

$$
[\Gamma]\Vdash_L[\Delta],[\phi] \qquad [\Gamma]\Vdash_L[\Delta],[\psi].
$$

We now apply the relevant rule in $L$

$$
\frac{[\Gamma]\Vdash_L[\Delta],[\phi] \qquad [\Gamma]\Vdash_L[\Delta],[\psi]}{[\Gamma]\Vdash_L[\Delta],[\phi]\wedge[\psi]}\,(R\wedge)
$$

and, as $[\phi]\wedge[\psi]=[\phi\wedge\psi]$, the result is defined to be

$$
\Gamma\Vdash_{T(L)}\Delta,\phi\wedge\psi.
$$

Hence $\Vdash_{T(L)}$ is a consequence relation. The proof that it is closed under (Cut) follows exactly the same pattern.

The argument for the interpolation axioms (L-Int) and (R-Int) is also very similar: Suppose $\phi,\Gamma\Vdash_{T(L)}\Delta$, or equivalently $[\phi],[\Gamma]\Vdash_L[\Delta]$. Interpolation in $L$ yields an element $\psi\in L$ such that $[\phi]\Vdash_L\psi$ and $\psi,[\Gamma]\Vdash_L[\Delta]$. Now, $\psi$ is also a term in $T(L)$ and it satisfies $[\psi]=\psi$. Thus we get $\phi\Vdash_{T(L)}\psi$ and $\psi,\Gamma\Vdash_{T(L)}\Delta$. □

Proposition 2.5 The continuous sequent calculi $(L,\Vdash_L)$ and $(T(L),\Vdash_{T(L)})$ are isomorphic in MLS.

Proof. Given the definition of $\Vdash_{T(L)}$ it is easy to come up with the isomorphisms between $L$ and $T(L)$. We define $\vdash\colon L\to T(L)$ and $\vdash'\colon T(L)\to L$ by

$$
\Gamma\vdash\Delta \;:\Longleftrightarrow\; \Gamma\Vdash_L[\Delta] \qquad\text{and}\qquad \Gamma\vdash'\Delta \;:\Longleftrightarrow\; [\Gamma]\Vdash_L\Delta.
$$

To prove that $\vdash$ and $\vdash'$ are compatible consequence relations we have to check the rules listed in Corollary 1.13, namely (W), $(L\bot)$, $(R\top)$, forwards $(L\vee)$ and $(R\wedge)$, (L-Int'), (R-Int'), (L-Cut) and (R-Cut).

Apart from the last two rules we can use essentially the same proof as in the previous lemma. And the cut rules require only a minor new ingredient: Consider two sequents $\Gamma\vdash\Delta,\phi$ and $\phi\Vdash_{T(L)}\Lambda$. We infer $\Gamma\Vdash_L[\Delta],[\phi]$ and $[\phi]\Vdash_L[\Lambda]$, and because of Proposition 1.8 we can form

$$
\frac{\Gamma\Vdash_L[\Delta],[\phi] \qquad [\phi]\Vdash_L[\Lambda]}{\Gamma\Vdash_L[\Delta],[\Lambda]}\,(\mathrm{Cut}')
$$

which shows $\Gamma\vdash\Delta,\Lambda$. The other cases involving cut rules are analogous.

It remains to show that $\vdash$ and $\vdash'$ are mutually inverse. The containment $(\vdash'\circ\vdash)\;\subseteq\;\Vdash_{T(L)}$ follows from the fact that cuts of the form

$$
\frac{\Gamma\vdash'\phi \qquad \phi\vdash\Delta}{\Gamma\,(\vdash'\circ\vdash)\,\Delta}\,(\mathrm{Cut})
$$

correspond directly to cuts

$$
\frac{[\Gamma]\Vdash_L\phi \qquad \phi\Vdash_L[\Delta]}{[\Gamma]\,(\Vdash_L\circ\Vdash_L)\,[\Delta]}\,(\mathrm{Cut})
$$

and the last sequent, by definition, means $\Gamma\Vdash_{T(L)}\Delta$. The other containment follows by the same argument since $\Vdash_L\;=\;\Vdash_L\circ\Vdash_L$.

The proof of $(\vdash\circ\vdash')\;=\;\Vdash_L$ is practically the same. □

The proposition shows that we can restrict our study of continuous sequent calculi to the purely syntactic ones, i.e. the term algebras. We can express this categorically by saying that the full subcategory of MLS whose objects are algebras of the form T(B) is equivalent to MLS,

We now return to the main thrust of this section. Our first lemma shows that (-)+ yields consequence relations:

Lemma 2.6 Let $R$ be a relation between finite subsets of $B$ and $C$. Then $R^+$ is a consequence relation from $T(B)$ to $T(C)$.

The restriction $R^+|^B_C$ is just the closure of $R$ under weakening with formulae from $B$ and $C$, respectively.

Proof. The algebras $T(B)$ and $T(C)$ are freely generated. So every sequent in $R^+$ is derived from the sequent or the sequents required by the corresponding backwards rule. The only exception are formulae that have been introduced by weakening. In this case an inspection of the rules shows that the sequents resulting from a backwards application of a rule can then likewise be derived by weakening.

All rules other than (W) introduce composite formulae which shows the second claim. □

A condition that is needed for compatibility and identities in MLS is interpolation. We prove a slightly technical lemma that is strong enough so we can use it to show both compatibility of generated consequence relations and interpolation for candidates of continuous sequent calculi.

Lemma 2.7 Let $R\subseteq\mathcal{P}_{\mathrm{fin}}(B)\times\mathcal{P}_{\mathrm{fin}}(B)$ and $S\subseteq\mathcal{P}_{\mathrm{fin}}(B)\times\mathcal{P}_{\mathrm{fin}}(C)$ be relations such that $\phi,\Gamma\,(S)\,\Delta$ implies the existence of an interpolant $\phi'\in T(B)$ satisfying $\phi\,(R^+)\,\phi'$ and $\phi',\Gamma\,(S^+)\,\Delta$. Then for all $S^+$-sequents there are $R^+$-interpolants: i.e. for all $\phi,\Gamma\,(S^+)\,\Delta$ there is a $\phi'\in T(B)$ such that $\phi\,(R^+)\,\phi'$ and $\phi',\Gamma\,(S^+)\,\Delta$.

Proof. The proof proceeds by induction on the rank of $\phi$ and the height of the derivation of $\phi,\Gamma\,(S^+)\,\Delta$, the former taking precedence. The base case concerning sequents that contain only elements of $B$ and $C$ is trivial, and so are the cases where $\phi$ is not the principal formula and the rule depends on only one sequent.

For rules taking two sequents, consider the derivation

$$
\frac{\phi,\Gamma\,(S^+)\,\Delta,\psi \qquad \phi,\Gamma\,(S^+)\,\Delta,\chi}{\phi,\Gamma\,(S^+)\,\Delta,\psi\wedge\chi}\,(R\wedge)
$$

as an example. The induction hypothesis yields two formulae $\phi'$ and $\phi''$ such that $\phi\,(R^+)\,\phi'$; $\phi\,(R^+)\,\phi''$; $\phi',\Gamma\,(S^+)\,\Delta,\psi$ and $\phi'',\Gamma\,(S^+)\,\Delta,\chi$. From this we can deduce

$$
\frac{\phi\,(R^+)\,\phi' \qquad \phi\,(R^+)\,\phi''}{\phi\,(R^+)\,\phi'\wedge\phi''}\,(R\wedge)
$$

and

$$
\frac{\dfrac{\phi',\Gamma\,(S^+)\,\Delta,\psi}{\phi'\wedge\phi'',\Gamma\,(S^+)\,\Delta,\psi}\,(\mathrm{W}),(L\wedge) \qquad \dfrac{\phi'',\Gamma\,(S^+)\,\Delta,\chi}{\phi'\wedge\phi'',\Gamma\,(S^+)\,\Delta,\chi}\,(\mathrm{W}),(L\wedge)}{\phi'\wedge\phi'',\Gamma\,(S^+)\,\Delta,\psi\wedge\chi}\,(R\wedge)
$$

so that $\phi'\wedge\phi''$ is the required interpolant.

For the rules actually introducing $\phi$, the cases for the constants $\top$, $\bot$ and weakening are trivial because we can interpolate using either $\top$ or $\bot$.

Next we consider the case that the principal formula is $\phi=\psi\wedge\chi$. Given the proof

$$
\frac{\psi,\chi,\Gamma\,(S^+)\,\Delta}{\psi\wedge\chi,\Gamma\,(S^+)\,\Delta}\,(L\wedge)
$$

we interpolate twice using the induction hypothesis and get $\psi'$, $\chi'$ such that $\psi\,(R^+)\,\psi'$; $\chi\,(R^+)\,\chi'$ and $\psi',\chi',\Gamma\,(S^+)\,\Delta$. It is here that we actually need that the induction is not only on the derivation of the sequent but also on the rank of $\phi$. The first interpolation may well increase the height of the derivation, but as we have already reduced the rank of the formula this is not a problem. From these sequents we construct the two derivations

$$
\frac{\psi\,(R^+)\,\psi' \qquad \chi\,(R^+)\,\chi'}{\psi\wedge\chi\,(R^+)\,\psi'\wedge\chi'}
\qquad\text{and}\qquad
\frac{\psi',\chi',\Gamma\,(S^+)\,\Delta}{\psi'\wedge\chi',\Gamma\,(S^+)\,\Delta}\,(L\wedge)
$$

where the first uses (W), $(R\wedge)$ and $(L\wedge)$.

Mutatis mutandis, the same argument also proves the last case $(L\vee)$. □

To construct continuous sequent calculi and to show compatibility of constructed consequence relations we have to reconsider cut elimination. Looking at the generators of consequence relations, the original rule (Cut*) and the multiplicative one with implicit weakening are not interchangeable unless the relations in question are already closed under (W). In the following, if we refer to the rule (Cut*) or the composition defined in terms of it, we understand this in the stronger sense of the multiplicative formulation of (Cut*) as given on page 59.

The only reason that we cannot directly use Theorem 2.3 is that there we started from a consequence relation, whereas now we want to begin with a relation R that generates one. Lemma 2.6 tells us that R contains almost all sequents of R⁺; the ones that are missing can be derived by weakening. Hence, we have to study how weakening and (Cut*) interact. Consider a cut

  Γ′, Γ ⊢ Δ₁, Δ′    Θ′₁, Θ₁ ⊢′ Δ, Δ′
  ⋮                 ⋮
  Γ′, Γ ⊢ Δₙ, Δ′    Θ′ₘ, Θₘ ⊢′ Δ, Δ′
  ─────────────────────────────────── (Cut*)
  Γ′, Γ (⊢ ∘ ⊢′) Δ, Δ′

where all the primed formulae were added by weakening. Then the proof

  Γ ⊢ Δ₁    Θ₁ ⊢′ Δ
  ⋮          ⋮
  Γ ⊢ Δₙ    Θₘ ⊢′ Δ
  ────────────────── (Cut*)
  Γ (⊢ ∘ ⊢′) Δ
  ──────────────────── (W)
  Γ′, Γ (⊢ ∘ ⊢′) Δ, Δ′

is also valid: it is easy to verify the side condition in the formulation of Lemma 1.2, as before weakening there are fewer choice functions on both sides.

Proposition 2.8 For a relation R between finite subsets of B and C and a relation S between finite subsets of C and D we have R⁺ ∘ S⁺ = (R ∘ S)⁺.

Proof. Theorem 2.3 yields R⁺ ∘ S⁺ = ((R⁺ ∘ S⁺)↾)⁺, where ↾ denotes restriction to the generators. The upshot of the discussion preceding this proposition is that the only difference between (R⁺ ∘ S⁺)↾ and R ∘ S are sequents that are derivable from the latter by weakening. Hence we conclude

  R⁺ ∘ S⁺ = ((R⁺ ∘ S⁺)↾)⁺ = (R ∘ S)⁺.  □
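To make the combinatorics of the multiplicative (Cut*) concrete, here is a minimal executable sketch. It is our own modelling, not the thesis's notation: sequents are pairs of frozensets of atomic tokens, and the names `side_condition` and `cut_star` are invented for illustration.

```python
from itertools import product

def side_condition(deltas, thetas):
    """(Cut*) side condition: every choice function picking one formula
    from each right-hand side Delta_i must entirely cover some Theta_j.
    For n = 0 the only choice function is empty, so some Theta_j must
    be empty, matching the n = 0 base case of Lemma 2.10."""
    return all(any(set(theta) <= set(choice) for theta in thetas)
               for choice in product(*deltas))

def cut_star(gamma, r_seqs, s_seqs, delta):
    """One instance of the composition R o S: collect the premises
    Gamma R Delta_i and Theta_j S Delta and check the side condition."""
    deltas = [d for (g, d) in r_seqs if g == gamma]
    thetas = [t for (t, d) in s_seqs if d == delta]
    return side_condition(deltas, thetas)

# Toy data: Gamma R {a, b}, and both a S Delta and b S Delta.
R = [(frozenset({'g'}), frozenset({'a', 'b'}))]
S = [(frozenset({'a'}), frozenset({'d'})),
     (frozenset({'b'}), frozenset({'d'}))]
```

With both S-premises present every choice function is covered, so the cut goes through; dropping the premise for `b` breaks the side condition.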

Corollary 2.9 If a binary relation R on finite subsets of B is closed under (Cut*) then so is R⁺.

If we want to construct continuous sequent calculi, then it is sometimes more convenient to use (Cut′). The following lemma shows that we can do so.

Lemma 2.10 Let R be a binary relation on finite subsets of a set B and R′ its closure under weakening with formulae from B. If R is closed under (Cut′) then R′ is closed under (Cut*).

Proof. We show that R′ is closed under (Cut*) by rewriting cuts using this rule in terms of (Cut′). But first we observe that R′ is also closed under (Cut′): this follows from the fact that every such cut over a formula introduced by weakening can be entirely replaced by weakening.

Now, we prove by induction on n that if Δ₁, …, Δₙ and Θ₁, …, Θₘ satisfy the side condition, then the sequents Γ R′ Δ₁, …, Γ R′ Δₙ and Θ₁, Γ R′ Δ, …, Θₘ, Γ R′ Δ imply Γ R′ Δ. Note that this immediately shows that R′ is closed under cuts of the form

  Γ R′ Δ₁    Θ₁ R′ Δ
  ⋮           ⋮
  Γ R′ Δₙ    Θₘ R′ Δ
  ───────────────────
  Γ R′ Δ

as we can weaken the sequents on the right with Γ.

For n = 0 the side condition boils down to one Θⱼ being empty, and we get Γ R′ Δ by weakening.

In the induction step we 'eliminate' the sequent Γ R′ Δₙ₊₁. Given any choice function f ∈ ∏ᵢ₌₁ⁿ Δᵢ and an element φ ∈ Δₙ₊₁ there is a j such that {f₁, …, fₙ, φ} ⊇ Θⱼ. Cutting Γ R′ Δₙ₊₁ with the sequent Θⱼ, Γ R′ Δ (weakening it to Θⱼ, φ, Γ R′ Δ if φ does not occur in Θⱼ) produces a sequent

  Γ, Θⱼ \ {φ} R′ Δₙ₊₁ \ {φ}, Δ.

The important observation is now that Θⱼ \ {φ} ⊆ {f₁, …, fₙ}. We pick the next formula ψ ∈ Δₙ₊₁ and cut the sequent we just generated with the corresponding Θⱼ′, Γ R′ Δ to get

  Γ, Θⱼ \ {φ}, Θⱼ′ \ {ψ} R′ Δₙ₊₁ \ {φ, ψ}, Δ.

We iterate this procedure of cutting the resulting sequents with the appropriate Θₖ, Γ R′ Δ and finally get a sequent of the form

  Γ, Ξ_f R′ Δ

where Ξ_f ⊆ {f₁, …, fₙ}.

Repeating this process for all such choice functions f yields sequents such that Δ₁, …, Δₙ and (Ξ_f)_{f ∈ ∏Δᵢ} satisfy the side condition, and hence the induction hypothesis applies. □

The proof of the lemma essentially shows that (Cut*) is derivable from (Cut′) and (W) without any other rules. The statement of the lemma is reminiscent of one direction of Proposition 1.8. There the proof was much simpler since we could use (Cut). As the rewriting of (Cut*) in terms of (Cut) uses logical rules we cannot use it in the current context.

Corollary 2.11 If a binary relation R on finite sets is closed under (Cut′) then R⁺ is closed under (Cut*).

We can put all these results together to get the central theorem about the construction of continuous sequent calculi:

Theorem 2.12 Let R be a binary relation on finite subsets of a set B that is closed under either (Cut*) or (Cut′). If it also satisfies the condition that for all φ, Γ R Δ there is a φ′ ∈ T(B) such that φ′, Γ R⁺ Δ and φ R⁺ φ′, and dually for interpolation on the right, then R makes T(B) into a continuous sequent calculus.

Proof. The relation R⁺ is a consequence relation by Lemma 2.6. From Lemma 2.7 and its dual we infer the existence of interpolants. That it is idempotent then follows from Corollary 2.9 or Corollary 2.11 together with Lemma 1.7. □

An analogous result for the generation of arbitrary morphisms, rather than just identities, is now also easy to prove.

Theorem 2.13 Let T(B) and T(C) be continuous sequent calculi and R a binary relation between finite subsets of B and C that is closed under (Cut*) with respect to the generators of ⊩_{T(B)} and ⊩_{T(C)}. If, moreover, for all φ, Γ R Δ there is a φ′ ∈ T(B) such that φ′, Γ R⁺ Δ and φ ⊩_{T(B)} φ′, and the dual interpolation on the right, then R⁺ is a compatible consequence relation from T(B) to T(C).

Proof. Because of Lemma 2.6, R⁺ is a consequence relation, and Proposition 2.8 implies that R⁺ is closed under cuts with ⊩. The remaining conditions of Proposition 1.12 are consequences of Lemma 2.7. □

In applications it is quite common that we want to describe a consequence relation between T(B) and T(C) by referring to arbitrary formulae and not just the generators, i.e. elements from B and C. The following lemma explores when we can do this without any additional overhead:

Lemma 2.14 Let R be a relation between finite subsets of T(B) and T(C) that is closed under the backwards rules. Then R⁺ is a consequence relation, and moreover it is equal to (R↾)⁺, where ↾ denotes restriction to the generators.

Proof. Since R is closed under backwards rules and T(B) as well as T(C) are freely generated, we have R ⊆ (R↾)⁺. The claims of the lemma are an immediate consequence of this observation. □

Suppose R is an 'over-specified' relation as in this lemma. If we look at the formulation of Theorems 2.12 and 2.13, we see that if R satisfies the premises of these theorems (apart from being over-specified) then so does its restriction to the generators.

There is another condition that we can relax, namely being closed under the appropriate cut rules. In the applications it often happens that the result of a cut does not quite lie in R but can be derived from such a sequent by weakening. Adding such sequents does not create new problems for interpolation. Moreover, the discussion before Proposition 2.8 shows that this does not introduce essentially new cuts, either. Hence, we get slightly more liberal versions of the above theorems:

Corollary 2.15 Let R be a binary relation on finite subsets of a freely generated (2,2,0,0)-algebra that is closed under backwards rules. If R satisfies the interpolation conditions of Theorem 2.12 and is closed under (Cut*) or (Cut′) up to weakening, then R⁺ is a continuous sequent calculus.

Corollary 2.16 Let R be a binary relation between finite subsets of continuous sequent calculi T(B) and T(C) which is closed under backwards rules. If it satisfies the interpolation conditions of Theorem 2.13 and is closed under (Cut*) with respect to the generators of ⊩_{T(B)} and ⊩_{T(C)} up to weakening, then R⁺ is a compatible consequence relation from T(B) to T(C).

2.3 Coproducts and Products

As an example application of the results of the previous section we construct coproducts in MLS. We do this in considerable detail to show what is involved in performing a construction like this in a purely syntactic fashion.

Let L and M be continuous sequent calculi whose underlying algebras are freely generated. We take the disjoint union {0} × L ∪ {1} × M and let L + M be the algebra that is freely generated by it. We already know that we only have to come up with a suitable relation on the basis to define the continuous sequent calculus on L + M. For this, we specify the rules:

  Γ ⊩_L Δ                              Γ ⊩_M Δ
  ─────────────────────── (Γ ≠ ∅)      ─────────────────────── (Γ ≠ ∅)
  {0} × Γ ⊩_{L+M} {0} × Δ              {1} × Γ ⊩_{L+M} {1} × Δ

  ⊩_L Γ    ⊩_M Δ
  ─────────────────────────            ───────────────────────
  ⊩_{L+M} {0} × Γ, {1} × Δ             (0, φ), (1, ψ) ⊩_{L+M}

To understand these rules it might be helpful to jump ahead a bit and to consider them semantically: in the next chapter we will see that coproducts in MLS correspond to taking the disjoint union of spaces. Let us suppose L corresponds to a space X and M to Y. In these terms the restriction Γ ≠ ∅ in the first rule says, in effect, that the embedding of X in X + Y is only a part of the latter; and analogously for the second rule. The two other rules can be read as saying that the union of the embeddings of X and Y is all of X + Y and that the intersection of any part of X with any part of Y is the empty set.
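On the generators these rules are simple set manipulations. The following sketch (our own modelling: sequents as pairs of frozensets, tags 0 and 1 marking L- and M-formulae; the function names are invented for illustration) implements the tagging rules and the rule combining two empty-left sequents.

```python
def inject(seqs, tag):
    """First/second coproduct rule: tag an L- or M-sequent into L + M,
    subject to the side condition that the left-hand side is non-empty."""
    return {(frozenset((tag, f) for f in g), frozenset((tag, f) for f in d))
            for (g, d) in seqs if g}

def mix(l_right, m_right):
    """Third rule: from two sequents with empty left-hand sides, one in L
    and one in M, form the L + M sequent with right side {0}xGamma u {1}xDelta."""
    return (frozenset(),
            frozenset({(0, f) for f in l_right} | {(1, f) for f in m_right}))
```

Note how `inject` silently drops a sequent with empty left-hand side, which is exactly the role of the side condition Γ ≠ ∅ in the first two rules.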

Lemma 2.17 The relation (⊩_{L+M})⁺ is a continuous sequent calculus.

Proof. Clearly, the relation ⊩_{L+M} has interpolants precisely because ⊩_L and ⊩_M do. It is also closed under (Cut′), as is readily checked by considering the different cuts that arise. Let us verify one case explicitly to give the flavour of the argument: consider the proof

  ⊩_L φ, Γ    ⊩_M Δ                  φ, Θ ⊩_L Λ
  ────────────────────────────────   ────────────────────────────────
  ⊩_{L+M} (0, φ), {0} × Γ, {1} × Δ   (0, φ), {0} × Θ ⊩_{L+M} {0} × Λ
  ─────────────────────────────────────────────────────────── (Cut′)
  {0} × Θ ⊩_{L+M} {0} × Γ, {0} × Λ, {1} × Δ.

If Θ is not empty we can construct

  ⊩_L φ, Γ    φ, Θ ⊩_L Λ
  ──────────────────────── (Cut′)
  Θ ⊩_L Γ, Λ
  ──────────────────────────────────
  {0} × Θ ⊩_{L+M} {0} × Γ, {0} × Λ
  ─────────────────────────────────────────── (W)
  {0} × Θ ⊩_{L+M} {0} × Γ, {0} × Λ, {1} × Δ;

otherwise we simply use a different rule and get

  ⊩_L φ, Γ    φ ⊩_L Λ
  ───────────────────── (Cut′)
  ⊩_L Γ, Λ    ⊩_M Δ
  ──────────────────────────────────
  ⊩_{L+M} {0} × Γ, {0} × Λ, {1} × Δ.

All other cases are equally straightforward. So (⊩_{L+M})⁺ is a continuous sequent calculus by Theorem 2.12. □

We claim that this is the identity for a coproduct of ⊩_L and ⊩_M. The embeddings from L and M to L + M are defined by their behaviour on the generators {0} × L ∪ {1} × M. We define them by:

  Γ ⊩_L Δ              Γ ⊩_M Δ
  ──────────────       ──────────────
  Γ ⊢_ι0 {0} × Δ       Γ ⊢_ι1 {1} × Δ

For these rules we do not need the restriction Γ ≠ ∅ since the preimage of X ⊆ X + Y under the embedding is all of X.

Note that on the left hand side of these sequents arbitrary formulae from L and M may appear. Of course, we could restrict them to elements from the generators of L and M, but this would actually make it harder for us to show the necessary interpolation properties.

Lemma 2.18 The relations (⊢_ι0)⁺ and (⊢_ι1)⁺ are compatible consequence relations.

Proof. They are closed under backwards rules since ⊩_L and ⊩_M are. For the same reason they satisfy the interpolation properties needed in Corollary 2.16. If we consider (Cut*)-cuts between ⊢_ι0 and ⊩_{L+M}-sequents we see that those of the form (0, φ), (1, ψ) ⊩_{L+M} are always redundant since no choice function can cover the formula (1, ψ). If we have to use a sequent that is derived as

  ⊩_L Γ    ⊩_M Δ
  ─────────────────────────
  ⊩_{L+M} {0} × Γ, {1} × Δ

we get

  ⊩_L Γ
  ──────────────
  ⊢_ι0 {0} × Γ
  ─────────────────────── (W)
  Θ ⊢_ι0 {0} × Γ, {1} × Δ

for any Θ ⊆fin L. The remaining case follows immediately from the fact that ⊩_L is closed under (Cut*), and so does closure under cuts with ⊩_L. Thus, all conditions of Corollary 2.16 are satisfied. □

To show that L + M is indeed the coproduct we have to come up with unique mediating morphisms for arbitrary co-cones. Suppose that ⊢_f : L → N and ⊢_g : M → N are compatible consequence relations, where N is also freely generated. We define ⊢_fg by:

  Γ ⊢_f Δ              Γ ⊢_g Δ
  ──────────────       ──────────────
  {0} × Γ ⊢_fg Δ       {1} × Γ ⊢_fg Δ

  ⊢_f Γ    ⊢_g Δ
  ───────────────      ───────────────────
  ⊢_fg Γ, Δ            (0, φ), (1, ψ) ⊢_fg

Lemma 2.19 The rules generate a compatible consequence relation (⊢_fg)⁺, and it satisfies (⊢_ι0)⁺ ∘ (⊢_fg)⁺ = ⊢_f and (⊢_ι1)⁺ ∘ (⊢_fg)⁺ = ⊢_g.

Proof. As before the existence of interpolating formulae for ⊢_fg follows directly from the fact that ⊢_f and ⊢_g have interpolants, and for the same reason it is clear that ⊢_fg is closed under backwards rules.

The main difficulty of the proof that ⊢_fg is closed under cuts with ⊩_{L+M} and ⊩_N is to make sure that we consider all the different ways in which such cuts can arise. Let us begin with cuts of the form ⊢_fg ∘ ⊩_N. If such a cut uses a sequent (0, φ), (1, ψ) ⊢_fg, then we can get the result simply by weakening this sequent, whatever the other formulae that are involved in the cut. The same is the case if we have at least one sequent of the form {0} × Γ ⊢_fg Δ and one of the form {1} × Γ′ ⊢_fg Δ′, where Γ, Γ′ ≠ ∅. Now, suppose that on the left hand side of (Cut*) there is at least one sequent of the form {i} × Γ ⊢_fg Δ with Γ ≠ ∅. Then any sequent derived as

  ⊢_f Θ₀    ⊢_g Θ₁
  ─────────────────
  ⊢_fg Θ₀, Θ₁

can be replaced by {i} × Γ ⊢_fg Θ₀, Θ₁ by applying weakening to the corresponding premise. This shows that we can assume, without loss of generality, that the sequents on the left are all derived by the same rule.

If it is the first or the second rule then closure under cut with ⊩_N follows immediately from the compatibility of ⊢_f and ⊢_g. Otherwise, the cut must be of the form

  ⊢_fg Γ₁, Δ₁    Θ₁ ⊩_N Λ
  ⋮               ⋮
  ⊢_fg Γₙ, Δₙ    Θₘ ⊩_N Λ
  ───────────────────────── (Cut*)
  ⊢_fg ∘ ⊩_N Λ.

The sequents ⊢_fg Γᵢ, Δᵢ are derived from ⊢_f Γᵢ and ⊢_g Δᵢ, and we construct the proof

  ⊢_f Γ₁    Θ₁ ⊩_N Λ          ⊢_g Δ₁    Θ₁ ⊩_N Λ
  ⋮          ⋮                 ⋮          ⋮
  ⊢_f Γₙ    Θₘ ⊩_N Λ          ⊢_g Δₙ    Θₘ ⊩_N Λ
  ─────────────────── (Cut*)   ─────────────────── (Cut*)
  ⊢_f Λ                        ⊢_g Λ
  ────────────────────────────────────────────────
  ⊢_fg Λ.

The side conditions are satisfied since there are fewer choice functions to consider in each of the two cuts. We have now shown that, up to weakening, we have

  ⊢_fg ∘ ⊩_N ⊆ ⊢_fg.

For cuts between ⊩_{L+M} and ⊢_fg essentially the same argument shows that we can assume the ⊩_{L+M}-sequents involved in such a cut to be of one type. This restricts the ⊢_fg-sequents that can appear on the right of such a cut. The only new case is

a cut in which the sequents on the left are of the form ⊩_{L+M} {0} × Γᵢ, {1} × Δᵢ, for i = 1, …, k, and the sequents on the right are of the three forms {0} × Θⱼ ⊢_fg Λ, {1} × Θ′ⱼ ⊢_fg Λ′ and (0, φⱼ), (1, ψⱼ) ⊢_fg, with conclusion ⊩_{L+M} ∘ ⊢_fg Λ, Λ′.

We observe that the side condition must in particular be satisfied for choice functions that pick elements exclusively from the {0} × Γᵢ or exclusively from the {1} × Δᵢ. For such choice functions only certain sequents on the right can be relevant. Since we know how the individual sequents in the cut were derived from the respective continuous sequent calculi ⊩_L, ⊩_M, ⊢_f and ⊢_g, this implies that we can construct the proof

  ⊩_L Γ₁    Θ₁ ⊢_f Λ          ⊩_M Δ₁    Θ′₁ ⊢_g Λ′
  ⋮          ⋮                 ⋮          ⋮
  ⊩_L Γₖ    Θⱼ ⊢_f Λ          ⊩_M Δₖ    Θ′ₗ ⊢_g Λ′
  ─────────────────── (Cut*)   ──────────────────── (Cut*)
  ⊢_f Λ                        ⊢_g Λ′
  ─────────────────────────────────────
  ⊢_fg Λ, Λ′.

We have thus proved ⊩_{L+M} ∘ ⊢_fg ⊆ ⊢_fg up to weakening. So, ⊢_fg satisfies all conditions of Corollary 2.16, showing that (⊢_fg)⁺ is a compatible consequence relation.

Now we check (⊢_ι0)⁺ ∘ (⊢_fg)⁺ = (⊢_ι0 ∘ ⊢_fg)⁺ = ⊢_f, where the first equality follows from Proposition 2.8. For Γ ⊢_f Δ we can find a φ ∈ L such that Γ ⊩_L φ and φ ⊢_f Δ since ⊢_f is compatible. We construct the proof

  Γ ⊩_L φ           φ ⊢_f Δ
  ───────────────   ─────────────
  Γ ⊢_ι0 (0, φ)     (0, φ) ⊢_fg Δ
  ─────────────────────────────── (Cut)
  Γ (⊢_ι0 ∘ ⊢_fg) Δ

which shows ⊢_f ⊆ ⊢_ι0 ∘ ⊢_fg ⊆ (⊢_ι0 ∘ ⊢_fg)⁺.

Because of the structural similarities between the rules for ⊩_{L+M} and ⊢_fg the proof of the other containment is almost identical to the argument for ⊢_ι0 ∘ ⊩_{L+M} ⊆ ⊢_ι0 in the previous lemma. □

Unfortunately, the uniqueness of the mediating morphism (⊢_fg)⁺ is harder to show. For this we need more information about the exact relationship between ⊩_{L+M} and the embeddings ⊢_ι0 and ⊢_ι1. We establish the necessary prerequisites in a series of lemmata.

Lemma 2.20 If (⊢_ι0)⁺ Δ then we can prove {0} × Γ (⊩_{L+M})⁺ Δ for all non-empty Γ ⊆fin L.

Proof. The proof is by induction on the derivation of (⊢_ι0)⁺ Δ. For the base case consider ⊩_L Δ′, from which ⊢_ι0 {0} × Δ′ is derived. Weakening yields Γ ⊩_L Δ′, and we can immediately construct the proof

  Γ ⊩_L Δ′
  ────────────────────────── (Γ ≠ ∅)
  {0} × Γ ⊩_{L+M} {0} × Δ′.

The rest of the induction poses no problems since the term {0} × Γ on the left does not interfere with the logical rule that is used on the right. □

Lemma 2.21 For any derivable sequent (0, φ), Γ (⊩_{L+M})⁺ Δ and any ψ ∈ L we can derive (0, φ ∧ ψ), Γ (⊩_{L+M})⁺ Δ.

Proof. The proof by induction is straightforward. For the base case

  φ, Γ′ ⊩_L Δ′
  ──────────────────────────────────
  (0, φ), {0} × Γ′ ⊩_{L+M} {0} × Δ′

we construct

  φ, Γ′ ⊩_L Δ′
  ───────────────── (W)
  φ, ψ, Γ′ ⊩_L Δ′
  ───────────────── (L∧)
  φ ∧ ψ, Γ′ ⊩_L Δ′
  ────────────────────────────────────────
  (0, φ ∧ ψ), {0} × Γ′ ⊩_{L+M} {0} × Δ′.

No logical rule can introduce the term (0, φ), so it is always a side formula. And for weakening it is clear that if we can weaken with (0, φ) then we may as well weaken with (0, φ ∧ ψ) instead. □

Lemma 2.22 If the sequent (0, φ), (0, ψ), Γ (⊩_{L+M})⁺ Δ is derivable then so is (0, φ ∧ ψ), Γ (⊩_{L+M})⁺ Δ.

Proof. As before the only interesting bits of the induction are the cases where either (0, φ) or (0, ψ) is introduced. For the base case

  φ, ψ, Γ′ ⊩_L Δ′
  ──────────────────────────────────────────
  (0, φ), (0, ψ), {0} × Γ′ ⊩_{L+M} {0} × Δ′

we immediately get

  φ ∧ ψ, Γ′ ⊩_L Δ′
  ────────────────────────────────────────
  (0, φ ∧ ψ), {0} × Γ′ ⊩_{L+M} {0} × Δ′,

and for weakening we can use Lemma 2.21. □

Lemma 2.23 If (0, φ), Γ (⊩_{L+M})⁺ Δ and (0, ψ), Γ′ (⊩_{L+M})⁺ Δ′ are derivable, then so is (0, φ ∨ ψ), Γ, Γ′ (⊩_{L+M})⁺ Δ, Δ′.

Proof. The proof is essentially the same as before; we perform an induction on the derivations of both sequents. The base case is done by using multiplicative (L∨), and all the other cases, even weakening, are trivial. □

We can now make the connection between ⊩_{L+M} and ⊢_ι0:

Proposition 2.24 (i) If {0} × Γ (⊩_{L+M})⁺ Δ then we also have Γ (⊢_ι0)⁺ Δ. (ii) If Γ (⊢_ι0)⁺ Δ and Γ ≠ ∅ then we can derive {0} × Γ (⊩_{L+M})⁺ Δ.

Proof. Both claims are proved by induction on the derivation of the sequent in question. The first claim is straightforward and does not need any of the previous lemmata.

For the second claim the base case is not a problem because of the side condition Γ ≠ ∅. The only difficulty with weakening is the case

  ⊢_ι0 Δ
  ──────── (W)
  Γ ⊢_ι0 Δ.

This is, however, taken care of by Lemma 2.20. The only non-trivial logical rules are (L∧) and (L∨); this is where we need Lemmata 2.22 and 2.23. □

This allows us to show finally that we have actually constructed the coproduct of L and M:

Theorem 2.25 The continuous sequent calculus L + M, together with the compatible consequence relations (⊢_ι0)⁺ and (⊢_ι1)⁺, is a coproduct of L and M in MLS.

Proof. Suppose that ⊢_f : L → N and ⊢_g : M → N are compatible consequence relations, where N is also freely generated. Lemma 2.19 says that (⊢_fg)⁺ is a compatible consequence relation and, moreover, that it satisfies (⊢_ι0)⁺ ∘ (⊢_fg)⁺ = ⊢_f and (⊢_ι1)⁺ ∘ (⊢_fg)⁺ = ⊢_g. Hence, we only have to show that it is unique.

To this end let us suppose that ⊢_d : L + M → N is any morphism with this property. We first show that the generators of ⊢_fg are contained in ⊢_d. If we have Γ ⊢_f Δ, where Γ ≠ ∅, then compatibility of ⊢_f allows us to find a φ ∈ L such that Γ ⊩_L φ and φ ⊢_f Δ. Since we assume ⊢_f = (⊢_ι0)⁺ ∘ ⊢_d we find a ψ ∈ L + M such that φ (⊢_ι0)⁺ ψ and ψ ⊢_d Δ. From φ (⊢_ι0)⁺ ψ we get (0, φ) (⊩_{L+M})⁺ ψ because of Proposition 2.24.(ii), and we construct the proofs

  Γ ⊩_L φ
  ───────────────────────
  {0} × Γ ⊩_{L+M} (0, φ)

and

  (0, φ) (⊩_{L+M})⁺ ψ    ψ ⊢_d Δ
  ──────────────────────────────── (Cut)
  (0, φ) ⊢_d Δ

which combine by another (Cut) into {0} × Γ ⊢_d Δ.

We still have to check the two remaining rules that generate ⊢_fg-sequents. Because of

  (0, φ), (1, ψ) ⊩_{L+M}
  ─────────────────────────────
  (0, φ), (1, ψ) (⊩_{L+M})⁺ ⊥        ⊥ ⊢_d
  ────────────────────────────────────────── (Cut)
  (0, φ), (1, ψ) ⊢_d

those of the form (0, φ), (1, ψ) ⊢_fg are not a problem. We consider the last rule

  ⊢_f Γ    ⊢_g Δ
  ───────────────
  ⊢_fg Γ, Δ.

Because of weakening and what we have already shown we get the sequents (0, ⊤) ⊢_d Γ and (1, ⊤) ⊢_d Δ, and we can use them in the following derivation:

  ───── (R⊤)  ───── (R⊤)
  ⊩_L ⊤       ⊩_M ⊤
  ─────────────────────────
  ⊩_{L+M} (0, ⊤), (1, ⊤)        (0, ⊤) ⊢_d Γ    (1, ⊤) ⊢_d Δ
  ──────────────────────────────────────────────────────────── (Cut)
  ⊢_d Γ, Δ.

This completes the proof of ⊢_fg ⊆ ⊢_d.

For the reverse containment we know from Lemma 2.1 that we only have to show that ⊢_d-sequents that relate generators lie in (⊢_fg)⁺. As before, the only interesting such sequents are those of the form {i} × Γ ⊢_d Δ; those containing both (0, φ) and (1, ψ) formulae on the left are trivial, and we can take care of the case where the left hand side is empty by (L⊤) and (R⊤).

So, we take a sequent {0} × Γ ⊢_d Δ, where Γ ≠ ∅. We exploit compatibility of ⊢_d to get {0} × Γ (⊩_{L+M})⁺ φ and φ ⊢_d Δ for a φ ∈ L + M. By Proposition 2.24.(i) this implies Γ (⊢_ι0)⁺ φ and we get

  Γ (⊢_ι0)⁺ φ    φ ⊢_d Δ
  ──────────────────────── (Cut)
  Γ ⊢_f Δ
  ────────────────
  {0} × Γ ⊢_fg Δ.

Hence, we have shown that ⊢_d and ⊢_fg are equal.

If N is not freely generated, we construct T(N), which is isomorphic to N by Proposition 2.5; let us call the isomorphism ⊢_ε : T(N) → N. We get a mediating morphism ⊢_d for ⊢_f and ⊢_g composed with (⊢_ε)⁻¹. It is straightforward to check that ⊢_d ∘ ⊢_ε is the unique mediating morphism for ⊢_f and ⊢_g. So, L + M is a coproduct in the category MLS. □

It is clear from Lemma 2.1 that the free algebra on no generators supports exactly two continuous sequent calculi; one is created by the empty relation, the other by the empty sequent ∅ ⊩ ∅. Because of weakening, the continuous sequent calculus 0 generated by the latter relates all finite subsets of formulae. We infer that it is a zero object in MLS, i.e. it is initial and terminal.

Corollary 2.26 MLS has all finite coproducts.

Proof. The only thing we have to show is that we can also form the coproduct of continuous sequent calculi that are not necessarily freely generated. This is a consequence of Proposition 2.5 and the following observation: if objects Xᵢ and X′ᵢ are isomorphic in any category and the coproduct of the Xᵢ exists, then it is also a coproduct of the X′ᵢ. □

Remark. There is also a more conceptual way of explaining why the objects of MLS that are not freely generated do not cause any problems: Our constructions show that the full subcategory of freely generated continuous sequent calculi has coproducts. As we already observed after Proposition 2.5 this category is equivalent to MLS. But as equivalent categories have the same categorical properties (see [17, 1.39]), MLS has all coproducts.

The construction for coproducts also yields products:

Corollary 2.27 The category MLS has all finite products.

Proof. As pointed out directly after the definition of MLS on page 63, the category is self-dual. Unfortunately, the corresponding involution (−)^op does not fix objects; it reverses ⊩ and interchanges ∧, ∨, ⊤ and ⊥. Hence, we only know L × M ≅ (L^op + M^op)^op. □
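On sequents between atomic generators the involution amounts to exchanging the two sides of every sequent; on compound formulae it additionally swaps ∧ with ∨ and ⊤ with ⊥. A two-line sketch (our own modelling, sequents as pairs of frozensets):

```python
def dualize(seqs):
    """The action of (-)^op on sequents between atomic generators:
    swap the left- and right-hand sides.  On compound formulae one
    would additionally swap meets with joins and top with bottom."""
    return {(d, g) for (g, d) in seqs}
```

Applying `dualize` twice gives back the original relation, reflecting that (−)^op is an involution.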

There is no reason why L + M and the product (L^op + M^op)^op should be equal. Somewhat surprisingly, they are, however, isomorphic. In the next chapter we will see a semantic argument why this is the case.

An alternative proof is to give directly the projections and the corresponding pairing which make L + M into the product of L and M. The projections are generated by the rules:

  Γ ⊩_L Δ              Γ ⊩_M Δ
  ──────────────       ──────────────
  {0} × Γ ⊢_π0 Δ       {1} × Γ ⊢_π1 Δ

  ──────────── (Γ ≠ ∅)     ──────────── (Γ ≠ ∅)
  {1} × Γ ⊢_π0             {0} × Γ ⊢_π1

For compatible consequence relations ⊢_f : N → L and ⊢_g : N → M the mediating morphism is generated by the single rule

  Γ ⊢_f Δ    Γ ⊢_g Θ
  ─────────────────────────────
  Γ ⊢_⟨f,g⟩ {0} × Δ, {1} × Θ.

The proof that these rules generate compatible consequence relations and that they satisfy the conditions defining products uses the same techniques as the one for coproducts, and we do not consider it in any detail. To show that (⊢_⟨f,g⟩)⁺ is unique we need an auxiliary observation similar to Proposition 2.24, namely that the rule

  Γ (⊢_π0)⁺ Δ    Γ (⊢_π1)⁺ Θ
  ──────────────────────────────
  Γ (⊩_{L+M})⁺ {0} × Δ, {1} × Θ

is admissible.

Chapter 3 Semantics

In this chapter we make the connection between the topological spaces we studied in Chapter 1 and the syntactic objects of the last chapter. As mentioned before, continuous sequent calculi correspond to stably compact spaces and compatible consequence relations to 'closed' relations between them. A continuous function between two spaces can be considered to be a special instance of such a relation, and we can characterise the consequence relations that give rise to functions in purely syntactic, i.e. logical, terms.

We first establish this link between syntax and semantics in an abstract way that says that the respective categories are equivalent. Then we investigate how we can determine whether a concrete continuous sequent calculus actually represents a certain stably compact space. Once we have these tools available, we can use them to perform domain constructions in logical form. We consider several examples in some detail. Function spaces are notoriously hard and we look at some of the problems that arise.

The space of relations, on the other hand, is much easier to handle. Certain relations can be understood to correspond to functions, and we study this problem with logical, topological and categorical methods.

1 Logic and Topological Spaces

We begin by looking at theories, sets of formulae that are closed under internal reasoning ⊩. This is essentially still a proof-theoretic concept, but a closer scrutiny of the poset of all theories on a continuous sequent calculus reveals that they are arithmetic locales, the Stone duals of stably compact spaces. Hence, these considerations lead directly to the semantics of continuous sequent calculi.

There is a close resemblance between theories, or filters, for continuous sequent calculi and those for strong proximity lattices, which were introduced in [33,34]. The latter can in fact be understood as Lindenbaum algebras for our logic. We make this connection precise, and finally we give a topological semantics for both continuous sequent calculi and compatible consequence relations.

1.1 Theories and Models

Let ⊢ be a compatible consequence relation from L to M. For X ⊆ L and Y ⊆ M we define

  X[⊢] := {φ ∈ M | (∃Γ ⊆fin X) Γ ⊢ φ}
  [⊢]Y := {φ ∈ L | (∃Δ ⊆fin Y) φ ⊢ Δ}

which we can think of as the ⊢-consequences of X, and the dual for Y. As usual, in the case of singleton sets we write φ[⊢] for {φ}[⊢], and [⊢]φ for [⊢]{φ}. A filter of L is a set F ⊆ L such that F = F[⊩_L]; an ideal of L is a set I ⊆ L such that I = [⊩_L]I. Let filt(L) and idl(L) denote the partial orders of filters and ideals, respectively, both ordered by inclusion.
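For a finitely presented relation the closure X[⊢] is directly computable. A minimal sketch (our own modelling: we list only sequents with singleton right-hand sides, which suffices for X[⊢] by its definition; the names are invented for illustration):

```python
def consequences(X, entails):
    """X[|-]: all formulae phi such that Gamma |- phi for some finite
    Gamma contained in X.  'entails' is a set of pairs (Gamma, phi)
    with Gamma a frozenset of formulae."""
    X = set(X)
    return {phi for (gamma, phi) in entails if gamma <= X}
```

Note that a sequent with empty left-hand side contributes its conclusion to X[⊢] for every X, since ∅ ⊆fin X always holds.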

Consider the role that filters and ideals play in logic. Roughly, a filter corresponds to a theory. One typically says that a theory is consistent if it is not the entire language, and one formulation of soundness and completeness is that a theory is consistent if and only if it has a model. The latter means essentially that the corresponding filter is contained in a prime filter. We will return to this point later. First we prove some properties about filters and the poset filt(L).

Proposition 1.1 A subset F of a continuous sequent calculus is a filter if and only if

(i) ⊤ ∈ F;

(ii) φ, ψ ∈ F implies φ ∧ ψ ∈ F; and

(iii) φ ∈ F if and only if ψ ⊩ φ for some ψ ∈ F.

Furthermore, in any filter F, if φ ∈ F (or ψ ∈ F) then φ ∨ ψ ∈ F, and φ ∧ ψ ∈ F implies φ, ψ ∈ F.

Proof. If F is a filter then the first two conditions are immediate from the rules (R⊤) and (R∧). The third one follows from Proposition 1.12.

Conversely, the last condition implies F ⊆ F[⊩]. For the other inclusion, suppose φ ∈ F[⊩], i.e. Γ ⊩ φ for some Γ ⊆fin F. This implies ⋀Γ ⊩ φ by (L∧), and ⋀Γ ∈ F because of the second condition; weakening and the first one take care of the trivial case Γ = ∅. So we have φ ∈ F by the third condition.

Weakening and (R∨) show that for a filter F, if φ ∈ F or ψ ∈ F then the element φ ∨ ψ must also lie in F. For the last statement suppose φ ∧ ψ is an element of a filter F. From condition (iii) we get a χ ∈ F such that χ ⊩ φ ∧ ψ. So backwards (R∧) implies χ ⊩ φ and χ ⊩ ψ, thus proving φ, ψ ∈ F. □

Condition (iii) of the proposition is usually called roundness of filters. We will use it, and the other properties listed in the proposition, quite frequently in the remainder of this chapter.
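On a finite fragment the three conditions of the proposition can be checked mechanically. A sketch under our own assumptions: ⊩ is given as a set of pairs, ∧ as a partial table, and the toy data below (including the pairs ⊩ happens to relate) is invented for illustration, not taken from the thesis.

```python
def is_filter(F, forces, top, meet):
    """Check the conditions of Proposition 1.1 on a finite fragment:
    (i) top lies in F; (ii) F is closed under the recorded meets;
    (iii) roundness: phi in F iff psi forces phi for some psi in F."""
    if top not in F:                                          # (i)
        return False
    if any(meet[p, q] not in F
           for p in F for q in F if (p, q) in meet):          # (ii)
        return False
    up_closed = all(phi in F for (psi, phi) in forces if psi in F)
    rounded = all(any((psi, phi) in forces for psi in F) for phi in F)
    return up_closed and rounded                              # (iii)

# Toy fragment with two formulae T and a, where T forces a.
forces = {('T', 'T'), ('T', 'a'), ('a', 'a')}
meet = {('T', 'a'): 'a', ('a', 'T'): 'a', ('T', 'T'): 'T', ('a', 'a'): 'a'}
```

Here {⊤, a} passes all three conditions, while {⊤} fails: it is not closed under the consequences of ⊤ ⊩ a.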

For filters and ideals we can restrict ⊢ to the singletons it relates:

Corollary 1.2 Let F ⊆ L be a filter and ⊢ : L → M a compatible consequence relation. Then we have

  F[⊢] = {φ ∈ M | (∃ψ ∈ F) ψ ⊢ φ},

and dually for ideals.

Proof. Suppose we have Γ ⊆fin F and a sequent Γ ⊢ φ. By (L∧) we get ⋀Γ ⊢ φ, and from the previous proposition we infer ⋀Γ ∈ F. This shows one inclusion; the other one is trivial. □

We can use this observation in many proofs, and as it is very elementary we will do so tacitly. Next, we observe that there is always a wealth of filters and ideals, namely the ones generated as 'consequences' of an arbitrary set.

Lemma 1.3 X[⊢] is always a filter and [⊢]Y is always an ideal.

Proof. Given any X the set X[⊢] clearly satisfies the first two conditions of Proposition 1.1. Since we made the general assumption that ⊢ is compatible, the third condition follows from Proposition 1.12.

The result for [⊢]Y comes for free by duality. The same goes for the remaining arguments in this section; we hence omit pointing it out every time. □

We now embark on a first investigation of the internal structure of filt(L) and idl(L), namely the part that is inherited from the lattice structure of the power set of L.

Lemma 1.4 The posets filt(L) and idl(L) are closed under directed unions and finite intersections.

Proof. Since sequents are relations between finite sets it is clear that a directed union of filters is again a filter.

For finite meets we first observe that L itself is always a filter. Given two filters F and G we verify that F ∩ G is again a filter using Proposition 1.1. The first two conditions and the implication F ∩ G ∋ ψ ⊩ φ ⟹ φ ∈ F ∩ G are immediate consequences of F and G being filters. For φ ∈ F ∩ G we get elements ψ_F ∈ F and ψ_G ∈ G such that ψ_F ⊩ φ and ψ_G ⊩ φ, since F and G are round. This means ψ_F ∨ ψ_G ⊩ φ by (L∨), and again from Proposition 1.1 we know that ψ_F ∨ ψ_G ∈ F ∩ G. □

These posets filt(L) and idl(L) are dcpo's and ∧-semilattices. In fact, they have much more structure, and we come back to this point when we make the connection to topological spaces in Section 1.3. The reasons to emphasise finite infima and directed suprema at this point are the following: firstly, these operations are very simple because they are just ordinary intersection and union. Secondly, filt(L) and idl(L) being ∧-semilattices is a cue that it may be worthwhile to characterise the ∧-prime elements. Directed suprema are needed later to show that there are 'enough' prime ideals and filters.

But before we tackle the primes we list some properties of the map (−)[⊩] which will be useful later:

Lemma 1.5 The following properties hold:

(i) X ⊆ Y implies X[⊩_L] ⊆ Y[⊩_L];

(ii) ({φ} ∪ X)[⊩_L] ∩ ({ψ} ∪ X)[⊩_L] = ({φ ∨ ψ} ∪ X)[⊩_L];

(iii) if φ ∈ X[⊩_L] then ({φ} ∪ Y)[⊩_L] ⊆ (X ∪ Y)[⊩_L];

(iv) φ ∈ X[⊩_L] implies ({φ} ∪ X)[⊩_L] = X[⊩_L]; and

(v) ({φ} ∪ X)[⊩] = ⋃_{φ ⊩ ψ} ({ψ} ∪ X)[⊩], the union being directed.

Proof. The first two properties are obvious from the definition and the rule (L∨). The next one follows from (Cut′), which is admissible by Proposition 1.8, and the fourth is an immediate consequence of (i) and (iii). For the last property one containment, namely

  ⋃_{φ ⊩ ψ} ({ψ} ∪ X)[⊩] ⊆ ({φ} ∪ X)[⊩],

follows directly from (iii). For Γ ⊆fin X and Γ, φ ⊩ θ we interpolate to get φ ⊩ ψ and Γ, ψ ⊩ θ, showing the other containment. We still have to prove that the union is directed. Given ψ, χ ∈ φ[⊩] we find interpolants ψ′ ⊩ ψ and χ′ ⊩ χ such that φ ⊩ ψ′ and φ ⊩ χ′. By (W), (L∧) and (R∧) we get ψ′ ∧ χ′ ⊩ ψ and ψ′ ∧ χ′ ⊩ χ, as well as φ ⊩ ψ′ ∧ χ′. Because of (iii), this implies

  ({ψ} ∪ X)[⊩], ({χ} ∪ X)[⊩] ⊆ ({ψ′ ∧ χ′} ∪ X)[⊩]

and thus directedness. □

Proposition 1.6 For a filter F ⊆ L the following are equivalent:

(i) F is a ∧-prime element of filt(L);

(ii) F is inaccessible by finite disjunctions, i.e. φ ∨ ψ ∈ F implies φ ∈ F or ψ ∈ F, and ⊥ ∉ F;

(iii) if Γ ⊩_L Δ for some Γ ⊆fin F then F ∩ Δ ≠ ∅.

The dual conditions characterise the ∧-prime elements of idl(L).

Proof. "(1) =>■ (3)": Suppose r lb ,,,, Sn for some T Cfin F. Then we can find interpolants S[ lb Si such that T lb S[,... ,S'n and iterated applications of (RV) yields T lb \/A', where A' = {S[,... ,S'n}. Using the previous lemma and the fact that F is a filter we get

F = F[lb] D r[lb] = (r U {\/ A'}) [lbL] = f| ((r U {5,'}) [lb]).

Since F is A-prime there is a S[ such that 5-[lb] C (ru {5i})[lb] C F. Because of 61 lb St this shows F fl A # 0.

"(3) =>■ (2)": Evidently, ± £ F, for otherwise we have ± lh 0, If oVi: e /•' then there is a set T Cfin F such that T lh <p V ip which implies T lh (p,ip by backwards (RV). This means that either (p or ip is in F.

"(2) =>■ (1)": If G and H are filters not contained in F then we can pick <p E G \ F and ip E H \ F. Thus, we have <p V ip E (G fl H) \ F by (2) and Proposition 1,1, This says that the intersection G fl H is also not contained in F which is hence A-prime, □

Remark. The converse of the third condition is equivalent to F ⊆ F[⊩]. Hence, we can directly characterise the sets that are prime filters: they are precisely those F ⊆ L such that for all Δ ⊆fin L we have F ∩ Δ ≠ ∅ if and only if there is a subset Γ ⊆fin F such that Γ ⊩ Δ.

Also note that to get to (ii) we needed backwards rules for the first time.

A set satisfying any of these equivalent conditions is called a prime filter. A set satisfying the dual conditions is called a prime ideal. The following lemma is a useful tool to construct prime filters.
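As an illustration of how concrete these notions are, the equivalent conditions of Proposition 1.6 can be checked mechanically on a finite model. The following Python sketch is our own toy example, not from the thesis: it uses the divisor lattice of 6 with Γ ⊩ Δ read as '⋀Γ below ⋁Δ'. In this idealised finite setting interpolation is trivial, so the sketch only illustrates the agreement of conditions (ii) and (iii), not the general theory.

```python
from itertools import chain, combinations

# Toy model: the divisor lattice of 6, {1, 2, 3, 6}, ordered by
# divisibility; meet is gcd, join is lcm, bottom is 1, top is 6.
# We read Gamma |- Delta as: meet(Gamma) divides join(Delta).
L = [1, 2, 3, 6]

def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

def lcm(a, b):
    return a * b // gcd(a, b)

def meet_all(xs):           # empty meet is the top element 6
    m = 6
    for x in xs:
        m = gcd(m, x)
    return m

def join_all(xs):           # empty join is the bottom element 1
    j = 1
    for x in xs:
        j = lcm(j, x)
    return j

def entails(gamma, delta):  # Gamma |- Delta
    return join_all(delta) % meet_all(gamma) == 0

def subsets(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def is_filter(F):           # nonempty, upward closed, closed under meets
    return bool(F) and all(y in F for x in F for y in L if y % x == 0) \
                   and all(gcd(x, y) in F for x in F for y in F)

def prime_ii(F):            # condition (ii): no bottom, join-inaccessible
    return 1 not in F and all(x in F or y in F
                              for x in L for y in L if lcm(x, y) in F)

def prime_iii(F):           # condition (iii): Gamma |- Delta forces F to meet Delta
    return all(set(d) & F
               for g in subsets(sorted(F)) for d in subsets(L) if entails(g, d))

filters = [set(s) for s in subsets(L) if is_filter(set(s))]
assert all(prime_ii(F) == prime_iii(F) for F in filters)

primes = sorted(sorted(F) for F in filters if prime_ii(F))
print(primes)               # [[2, 6], [3, 6]]
```

In this lattice the four filters are {6}, {2, 6}, {3, 6} and the improper filter; only the middle two are prime, since 2 ∨ 3 = 6 lies in {6} while neither disjunct does.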

Lemma 1.7 If I is an ideal and F a filter, maximal with the property F ∩ I = ∅, then F is prime.

Proof. Assume F not to be prime; then by the previous proposition there are sets Γ ⊆fin F and Δ = {δ₁, …, δₙ} such that Γ ⊩ Δ but F ∩ Δ = ∅. Using interpolation we can make sure that for all i we have δᵢ[⊩] \ F ≠ ∅, i.e. the filters (F ∪ {δᵢ})[⊩] properly contain F. Because of the maximality of F this implies that for each i = 1, …, n there is a φᵢ ∈ I and a set Γᵢ ⊆fin F such that Γᵢ, δᵢ ⊩ φᵢ. We put this together to get

 Γ ⊩ Δ    Γ₁, δ₁ ⊩ φ₁   ⋯   Γₙ, δₙ ⊩ φₙ
───────────────────────────────────────── (Cut′)*
           Γ, Γ₁, …, Γₙ ⊩ φ₁, …, φₙ
───────────────────────────────────────── (R∨)*
           Γ, Γ₁, …, Γₙ ⊩ φ₁ ∨ ⋯ ∨ φₙ

We have Γ ∪ Γ₁ ∪ ⋯ ∪ Γₙ ⊆ F, and as F is a filter and I is an ideal, φ₁ ∨ ⋯ ∨ φₙ lies in F and in I. Note that the case Δ = ∅ is trivial as it implies ⊥ ∈ F ∩ I. In either case we get a contradiction to F ∩ I = ∅; thus F must be a prime filter. □

Prime filters and prime ideals are almost complements:

Proposition 1.8 If P is a prime filter then [⊩](L \ P) is a prime ideal. Conversely, for a prime ideal I the set (L \ I)[⊩] is a prime filter, and these two maps are mutually inverse order anti-isomorphisms.

Proof. The set I := [⊩](L \ P) is an ideal by Lemma 1.3. It does not meet P since P ∋ φ ⊩ Δ ⊆ (L \ P) implies φ ⊩ ⋁Δ ∈ (L \ P) as P is prime, and the latter contradicts P being a filter. As I is clearly the largest ideal satisfying I ∩ P = ∅ it is prime by the previous lemma.

Because of duality all that remains to show is P = (L \ I)[⊩]. Obviously, P is contained in the right hand side as the latter is the largest filter that has an empty intersection with I. To see the converse, suppose φ ∈ ((L \ I)[⊩]) \ P. We get a set Γ ⊆fin (L \ I) such that Γ ⊩ φ and hence ⋀Γ ⊩ φ. This implies ⋀Γ ∈ I by the construction of I, which contradicts the primeness of I. □
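Proposition 1.8 can likewise be tested on a finite toy model. In the sketch below (our own example, not from the thesis: the divisor lattice of 6, where X[⊩] and [⊩]X degenerate to up- and down-closures of finite meets and joins), the two 'almost complement' operations map the prime filter {2, 6} to the prime ideal {1, 3} and back.

```python
# Toy model: divisor lattice of 6; Gamma |- Delta is read as
# 'gcd(Gamma) divides lcm(Delta)'.  In this finite setting X[|-] is the
# set of elements above the meet of X, and [|-]X the set of elements
# below the join of X.
L = [1, 2, 3, 6]

def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

def gcd_all(xs):
    m = 6
    for x in xs:
        m = gcd(m, x)
    return m

def lcm_all(xs):
    j = 1
    for x in xs:
        j = x * j // gcd(x, j)
    return j

def conseq_up(X):           # X[|-]: everything entailed by X
    return {p for p in L if p % gcd_all(X) == 0}

def conseq_down(X):         # [|-]X: everything entailing into X
    return {p for p in L if lcm_all(X) % p == 0}

P = {2, 6}                                        # a prime filter
I = conseq_down({x for x in L if x not in P})     # [|-](L \ P)
Q = conseq_up({x for x in L if x not in I})       # (L \ I)[|-]
print(sorted(I), sorted(Q))                       # [1, 3] [2, 6]
```

The round trip returns the original prime filter, as the proposition predicts.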

1.1.1 Consistency

As discussed at the beginning of this section a filter F ⊆ L is consistent if it is a proper subset of L, and completeness can be expressed as each such filter being contained in a prime filter. But closer inspection of the proofs of completeness theorems, say for Gentzen's system LK [18], shows that more is proved. In particular, we have nearly complete freedom to choose, apart from the formulae in F, what formulae are not to be satisfied in a particular model. Say that a pair of sets (X, Y) for X ⊆ L and Y ⊆ M is ⊢-consistent provided that for all Γ ⊆fin X and Δ ⊆fin Y, it is the case that Γ ⊬ Δ. The idea here is to understand X as a set of formulae that 'hold' in L and Y as a set of formulae that do not hold in M. So the least we should expect is that ⊢ does not contradict this understanding.

Among other things, the next proposition shows that consistency has to do essentially with filters and ideals.

Proposition 1.9 For every compatible consequence relation ⊢ from L to M the following are equivalent:

(i) (X, Y) is ⊢-consistent;

(ii) (X, [⊩M]Y) is ⊢-consistent;

(iii) (X[⊩L], Y) is ⊢-consistent;

(iv) (X[⊢], Y) is ⊩M-consistent;

(v) (X, [⊢]Y) is ⊩L-consistent;

(vi) X[⊢] ∩ [⊩M]Y = ∅;

(vii) X[⊩L] ∩ [⊢]Y = ∅;

(viii) (X, I) is ⊢-consistent for some prime ideal I ⊇ [⊩M]Y; and

(ix) (F, Y) is ⊢-consistent for some prime filter F ⊇ X[⊩L].

Proof. "(1) ⇒ (3)": Suppose the sets Δ = {δ₁, …, δₙ} ⊆ X[⊩] and Θ ⊆fin Y are such that Δ ⊢ Θ. Then for each δᵢ ∈ Δ there is a Γᵢ ⊆fin X such that Γᵢ ⊩ δᵢ. Multiple application of (L-Cut) yields Γ₁, …, Γₙ ⊢ Θ which shows that (X, Y) is ⊢-inconsistent.

"(3) ⇒ (5)": Given sequents δ₁ ⊢ Θ₁; …; δₙ ⊢ Θₙ and Γ ⊩ Δ, where Δ = {δ₁, …, δₙ}, Θᵢ ⊆fin Y and Γ ⊆fin X, we get ⋁Δ ⊢ Θ₁, …, Θₙ and Γ ⊩ ⋁Δ by iterated application of (L∨) and (R∨), respectively. The element ⋁Δ clearly lies in X[⊩] and because of Θ₁ ∪ ⋯ ∪ Θₙ ⊆ Y the pair (X[⊩], Y) is ⊢-inconsistent.

"(5) ⇒ (7)": The contraposition of this implication is obvious.

"(7) ⇒ (1)": Let Γ = {γ₁, …, γₙ} ⊆ X and Δ ⊆fin Y be sets such that Γ ⊢ Δ. By interpolation in the form of (L-Int′) we get sequents γᵢ ⊩ γᵢ′ and Γ′ = {γ₁′, …, γₙ′} ⊢ Δ. From these we construct Γ ⊩ ⋀Γ′ and ⋀Γ′ ⊢ Δ, which shows ⋀Γ′ ∈ X[⊩] ∩ [⊢]Y, thus contradicting (7).

We have shown the equivalence of the conditions (1), (3), (5) and (7). We proceed by proving (9) equivalent. To this end we assume that the filter X[⊩] does not meet the ideal [⊢]Y. From Lemma 1.4 we know that a directed supremum of filters is simply the union and hence if such filters have an empty intersection with any given ideal then so does their supremum. Thus we can apply Zorn's Lemma and get a maximal filter F ⊇ X[⊩] such that F ∩ [⊢]Y = ∅. By Lemma 1.7 this F is a prime filter and since (7) is equivalent to (1) we also have that (F, Y) is ⊢-consistent.

Conversely, if (F, Y) is ⊢-consistent for a prime filter F ⊇ X[⊩] then clearly (X[⊩], Y) is also ⊢-consistent.

By duality (1), (2), (4), (6) and (8), and hence all conditions of the proposition, are equivalent. □

This proposition is quite central to the rest of the development of the theory. The equivalent conditions (8) and (9) in particular have many applications. For one thing, they show that there is a wealth of prime ideals and prime filters. They also correspond to completeness, as mentioned above. This will be made more precise in Section 1.3 where we discuss in which sense a prime filter can be considered to be a model. In topological terms this equivalence can also be understood as the Hofmann-Mislove Theorem in disguise.

Remark. Consistency provides the following bridge between filt and idl: A morphism ⊢: L → M induces a function (−)[⊢]: filt(L) → filt(M) because of Lemma 1.3. As we will see in Lemma 1.23 this can be used to turn filt into a functor, but at the moment we are not interested in composition. Analogously, we get a function [⊢](−): idl(M) → idl(L) which makes idl a contravariant functor.

As consistency is a relation, let us, for the moment, regard them both as covariant functors to Rel, the category of sets and relations. Now, consider

the diagram

    filt(L) ──(−)[⊢]──→ filt(M)
       │                    │
      c_L                  c_M
       ↓                    ↓
     idl(L) ───────────→ idl(M)

where 'normal' arrows indicate functional relations and the arrows c_L and c_M the relations induced by ⊩L- and ⊩M-consistency. In these terms we see that the equivalence "(4) ⟺ (5)" says, in effect, that consistency acts as a natural transformation between filt and idl.

1.2 Algebraisation of the Logic

Before we go on to the topological semantics of continuous sequent calculi and compatible consequence relations we approach the issue from a different angle. We want to find the analogue of a Lindenbaum algebra for our logic. To this end we state the basic definitions of the paper [33].

Definition 1.10 A strong proximity lattice (A; ∨, ∧, ⊥, ⊤, ≺) is a distributive lattice together with a binary transitive relation ≺ satisfying ≺ ∘ ≺ = ≺; the algebraic structure given by the lattice and the approximation structure are connected by the following four axioms:

(∨–≺) (∀a ∈ A, M ⊆fin A) M ≺ a ⟺ ⋁M ≺ a

(≺–∧) (∀a ∈ A, M ⊆fin A) a ≺ M ⟺ a ≺ ⋀M

(≺–∨) (∀a, x, y) a ≺ x ∨ y ⟹ (∃x′, y′) x′ ≺ x, y′ ≺ y and a ≺ x′ ∨ y′

(∧–≺) (∀a, x, y) x ∧ y ≺ a ⟹ (∃x′, y′) x ≺ x′, y ≺ y′ and x′ ∧ y′ ≺ a

We use a ≺ M to mean a ≺ m for all m ∈ M, and analogously for M ≺ a.
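In a finite distributive lattice the order itself qualifies as an order of approximation, which gives a degenerate but machine-checkable instance of Definition 1.10. The following Python sketch is our own toy example (≺ taken to be divisibility on the divisor lattice of 6, an assumption for illustration only; the interesting examples have a relation ≺ genuinely stronger than the order) and verifies the four axioms and ≺ ∘ ≺ = ≺ by brute force.

```python
from itertools import chain, combinations

# Toy model: the divisor lattice of 6 with  a < b  taken to be
# 'a divides b'.  On a finite distributive lattice the order itself is
# an (uninteresting) order of approximation, so all axioms must hold.
L = [1, 2, 3, 6]

def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

def lcm(a, b):
    return a * b // gcd(a, b)

def meet_all(xs):
    m = 6
    for x in xs:
        m = gcd(m, x)
    return m

def join_all(xs):
    j = 1
    for x in xs:
        j = lcm(j, x)
    return j

def prec(a, b):             # a < b (the approximation relation)
    return b % a == 0

def finsubsets(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# (join-prec):  M < a  iff  join(M) < a
v_prec = all(all(prec(m, a) for m in M) == prec(join_all(M), a)
             for a in L for M in finsubsets(L))
# (prec-meet):  a < M  iff  a < meet(M)
prec_m = all(all(prec(a, m) for m in M) == prec(a, meet_all(M))
             for a in L for M in finsubsets(L))
# (prec-join):  a < x v y  implies  some x' < x, y' < y with a < x' v y'
prec_v = all(any(prec(x1, x) and prec(y1, y) and prec(a, lcm(x1, y1))
                 for x1 in L for y1 in L)
             for a in L for x in L for y in L if prec(a, lcm(x, y)))
# (meet-prec):  x ^ y < a  implies  some x < x', y < y' with x' ^ y' < a
m_prec = all(any(prec(x, x1) and prec(y, y1) and prec(gcd(x1, y1), a)
                 for x1 in L for y1 in L)
             for a in L for x in L for y in L if prec(gcd(x, y), a))
# interpolation: < composed with < equals <  (trivial here)
comp = all(any(prec(a, c) and prec(c, b) for c in L) == prec(a, b)
           for a in L for b in L)

print(v_prec, prec_m, prec_v, m_prec, comp)   # True True True True True
```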

Mappings between proximity lattices are certain relations.

Definition 1.11 A relation G ⊆ A × B between strong proximity lattices A and B is called approximable if it satisfies the following five conditions:

(G–≺) G ∘ ≺_B = G

(≺–G) ≺_A ∘ G = G

(∨–G) (∀M ⊆fin A, b ∈ B) M (G) b ⟹ ⋁M (G) b

(G–∧) (∀a ∈ A, M ⊆fin B) a (G) M ⟹ a (G) ⋀M

(G–∨) (∀a ∈ A, M ⊆fin B) a (G) ⋁M ⟹ (∃N ⊆fin A) a ≺_A ⋁N and (∀n ∈ N) (∃m ∈ M) n (G) m

If a relation satisfies all conditions but (G–∨) we call it a weak approximable relation.

Note that strong proximity lattices and weak approximable relations are self-dual notions. This means that as in Chapter 2 we usually get away with proving only half of our assertions.

Sometimes it is useful to have an alternative formulation of (G–∨). But first we prove a technical lemma that we use several times in the following; it appears as Lemma 7 in [33].

Lemma 1.12 If G is a weak approximable relation, then:

(i) x (G) y ⟹ x (G) y ∨ y′;

(ii) x (G) y, x′ (G) y′ ⟹ x ∨ x′ (G) y ∨ y′.

Proof. In every lattice the equation y = y ∧ (y ∨ y′) holds. Hence, the axiom (G–∧) implies (1).

To prove (2) suppose x (G) y and x′ (G) y′. From (1) we get x (G) y ∨ y′ and x′ (G) y ∨ y′ which implies x ∨ x′ (G) y ∨ y′ because of (∨–G). □

Lemma 1.13 For a weak approximable relation G from A to B the condition (G–∨) is equivalent to the conjunction of the two implications

a (G) ⊥ ⟹ a ≺_A ⊥  and  a (G) x ∨ y ⟹ (∃x′, y′) (x′ (G) x, y′ (G) y and a ≺_A x′ ∨ y′).

Proof. Let us begin by observing that the first implication is an instance of (G–∨) where M = ∅.

Now suppose (G–∨) holds and we are given a (G) x ∨ y. We find a set N as in the condition (G–∨) and define N_x := {n ∈ N | n (G) x} and N_y := {n ∈ N | n (G) y}. We clearly have N = N_x ∪ N_y and thus can infer

a ≺ ⋁N = ⋁(N_x ∪ N_y) = (⋁N_x) ∨ (⋁N_y).

Using (∨–G) we also get ⋁N_x (G) x and ⋁N_y (G) y.

We show the converse by induction on |M|. The case M = ∅ has already been taken care of, and if M is a singleton then (≺–G) allows us to return a singleton as N.

For the induction step assume we have a (G) ⋁M and M can be written as M = M₁ ∪ M₂ where M₁ and M₂ are proper subsets. Because of a (G) ⋁M = (⋁M₁) ∨ (⋁M₂) we find elements m₁ and m₂ such that a ≺ m₁ ∨ m₂, m₁ (G) ⋁M₁ and m₂ (G) ⋁M₂.

The induction hypothesis yields sets N₁ and N₂ according to (G–∨); in particular we have m₁ ≺ ⋁N₁ and m₂ ≺ ⋁N₂. By the previous lemma this implies m₁ ∨ m₂ ≺ (⋁N₁) ∨ (⋁N₂) = ⋁(N₁ ∪ N₂) and, as a ≺ m₁ ∨ m₂, also a ≺ ⋁(N₁ ∪ N₂). The only thing remaining to be checked is that for all n ∈ N₁ ∪ N₂ there is an m ∈ M = M₁ ∪ M₂ such that n (G) m. But this is clear since the analogous conditions relating N₁ and M₁ as well as N₂ and M₂ are satisfied by construction. □

An immediate corollary of the lemma is that ≺ is an approximable relation itself. Hence, strong proximity lattices together with approximable relations as well as with weak approximable relations form categories SPL and SPLW, respectively. Composition is given by relational product ∘, and the orders of approximation ≺ act as identities.

Now, we want to compare this with continuous sequent calculi and consequence relations. For a weak approximable relation G we set

Γ ⊢_G Δ :⟺ ⋀Γ (G) ⋁Δ

which, as we will see in a moment, defines a compatible consequence relation.

Lemma 1.14 For weak approximable relations G and H and the order of approximation ≺ on a strong proximity lattice the following hold:

(i) ⊢_G is a consequence relation;

(ii) ⊢_(G ∘ H) = ⊢_G ∘ ⊢_H; and

(iii) ⊢_≺ has interpolants.

Proof. The only rules for consequence relations that do not follow immediately from the respective axioms for approximable relations are (L∨), (R∧) and (W). Let us consider (L∨): As strong proximity lattices are in particular distributive we have (φ ∧ ⋀Γ) ∨ (ψ ∧ ⋀Γ) = (φ ∨ ψ) ∧ ⋀Γ which together with (∨–G) shows both directions of (L∨). Weakening follows from Lemma 1.12 and its dual.

Using the cut rule (Cut) the second claim of the proposition is trivial. To see that ⊢_≺ has interpolants take a 'sequent' ⋀Γ ≺ ⋁Δ ∨ φ. From (≺–∨) we get φ′ and δ′ such that φ′ ≺ φ, δ′ ≺ ⋁Δ and ⋀Γ ≺ δ′ ∨ φ′. Interpolating again we find a φ″ with φ′ ≺ φ″ ≺ φ, and then Lemma 1.12 shows ⋀Γ ≺ δ′ ∨ φ′ ≺ ⋁Δ ∨ φ″. □

Corollary 1.15 The translation ⊢_(−) defines a functor from SPLW to MLS.

Proof. By the previous lemma ⊢_≺ has interpolants, is closed under (Cut) and hence is a continuous sequent calculus. The compatibility of consequence relations ⊢_G that arise from weak approximable relations is again a consequence of ⊢_(−) preserving composition. □

Conversely, we may wonder whether we can turn continuous sequent calculi into strong proximity lattices and compatible consequence relations into weak approximable relations. We begin by turning a continuous sequent calculus into one where the underlying algebra is a distributive lattice. To this end we factor a given (2,2,0,0)-algebra L by the equations of such a lattice, and it turns out that ⊩ is invariant with respect to these equations. To make this precise, let ≡ denote the least congruence such that L/≡ is a distributive lattice and write [φ] := {ψ | φ ≡ ψ} for its congruence classes. We now claim that if γ₁ ≡ γ₁′, …, γₙ ≡ γₙ′ and δ₁ ≡ δ₁′, …, δₘ ≡ δₘ′ then

γ₁, …, γₙ ⊩ δ₁, …, δₘ ⟺ γ₁′, …, γₙ′ ⊩ δ₁′, …, δₘ′.

This is proved as follows. We take the rules that say that ≡ is a congruence, i.e. symmetry, transitivity etc. (see for example [47, Definition 5.2.7]), plus the ones for distributive lattices and then show by induction over the derivation of φ ≡ φ′ that

  φ, Γ ⊩ Δ              Γ ⊩ Δ, φ
 ═══════════   and   ═══════════
  φ′, Γ ⊩ Δ             Γ ⊩ Δ, φ′

These verifications are a bit tedious but all straightforward. Hence we restrict ourselves to the example of the distributive law to give the flavour:

                  φ ∧ (ψ ∨ χ), Γ ⊩ Δ
                 ═══════════════════ (L∧)
                   φ, ψ ∨ χ, Γ ⊩ Δ
                 ═══════════════════ (L∨)
          φ, ψ, Γ ⊩ Δ            φ, χ, Γ ⊩ Δ
         ══════════════ (L∧)    ══════════════ (L∧)
          φ ∧ ψ, Γ ⊩ Δ           φ ∧ χ, Γ ⊩ Δ
         ═════════════════════════════════════ (L∨)
               (φ ∧ ψ) ∨ (φ ∧ χ), Γ ⊩ Δ

Note that for this to work it is essential that the logical rules can be used in both directions. It is worth pointing out that it is not distributivity in particular but already the lattice laws where the backwards rules are needed. These considerations allow us to define a relation ⊩≡ on the quotient algebra L/≡ unambiguously by setting:

[φ₁], …, [φₙ] ⊩≡ [ψ₁], …, [ψₘ] :⟺ φ₁, …, φₙ ⊩ ψ₁, …, ψₘ.

The axioms for a continuous sequent calculus are readily checked as they are directly inherited from ⊩. This argument is very similar to the one in Lemma 2.4.

We now have most of the information needed to support the following:

Theorem 1.16 The categories MLS and SPLW are equivalent.

Proof. From Lemma 1.14 we already know that there is a functor from SPLW to MLS. For singletons we note that φ (G) ψ is equivalent to φ ⊢_G ψ which already implies that the functor is faithful.

This observation also tells us how to prove fullness. Given an arbitrary compatible consequence relation ⊢ between objects that are in the image of the functor we define

a (G_⊢) b :⟺ a ⊢ b.

The relation G_⊢ satisfies (G–≺) and (≺–G) because of ⊢ ∘ ⊩M = ⊢ and ⊩L ∘ ⊢ = ⊢. Similarly, (∨–G) and (G–∧) are direct consequences of (L∨) and (R∧). Hence, G_⊢ is a weak approximable relation and we have

Γ ⊢_(G_⊢) Δ ⟺ ⋀Γ (G_⊢) ⋁Δ ⟺ ⋀Γ ⊢ ⋁Δ ⟺ Γ ⊢ Δ.

To show that we have an equivalence of categories we have to prove that every isomorphism class of objects of MLS is hit by this functor. Here we use the considerations preceding this proposition: Given a continuous sequent calculus L we look at L/≡ with ⊩≡ as given there. To make sure that this object is in the image it suffices that ⊩≡ restricted to singletons is a ≺-relation: The conditions (∨–≺), (≺–∧) and ≺ ∘ ≺ = ≺ follow immediately from (L∨), (L⊥), (R∧), (R⊤) and ⊩≡ = ⊩≡ ∘ ⊩≡. For (≺–∨) consider φ ⊩ ψ ∨ χ. We apply backwards (R∨) and get φ ⊩ ψ, χ. Interpolation now yields ψ′, χ′ such that ψ′ ⊩ ψ, χ′ ⊩ χ and φ ⊩ ψ′, χ′, which gives us φ ⊩ ψ′ ∨ χ′ by (R∨). The remaining condition follows from duality.

We also have to prove that L and L/≡ are isomorphic in MLS, a situation that is reminiscent of Proposition 2.5. We define maps between L and L/≡ by

Γ ⊢ [δ₁], …, [δₙ] :⟺ Γ ⊩ δ₁, …, δₙ

[δ₁], …, [δₙ] ⊢′ Γ :⟺ δ₁, …, δₙ ⊩ Γ.

Clearly, ⊢ and ⊢′ are well-defined compatible consequence relations, and we have ⊢ ∘ ⊢′ = ⊩ and ⊢′ ∘ ⊢ = ⊩≡ since the original relation ⊩ satisfies ⊩ ∘ ⊩ = ⊩. □

The proof also shows that MLS is equivalent to its full subcategory whose objects are distributive lattices.

In the light of this proposition we can say that compatible consequence relations are the appropriate extension of approximable relations from individual elements to finite sets. More importantly, the cut rule (Cut*) is the corresponding purely structural generalisation of the relational product.

1.2.1 Filters

The close connection between continuous sequent calculi and strong proximity lattices, in particular the translation giving rise to the previous theorem, can be extended to filters: If F is a filter in a continuous sequent calculus L we write [F] := {[φ] | φ ∈ F} for its image under the quotient map to the distributive lattice L/≡. The next proposition says, among other things, that [F] is a filter in the sense of [33]. There a filter is defined to be a subset X such that X = ↟X and M ⊆fin X ⟹ ⋀M ∈ X, where ↟ refers to the order of approximation ≺ which in our case is merely ⊩≡ restricted to singletons.

Proposition 1.17 For F ∈ filt(L) the following hold:

(i) F is closed under ≡, i.e. it is a (disjoint) union of ≡-equivalence classes;

(ii) [F] is a filter in the strong proximity lattice L/≡; and

(iii) the map [·] is an order isomorphism between filt(L) and the strong proximity lattice filters in L/≡.

Proof. If ψ ≡ φ ∈ F we know from roundness of filters (Proposition 1.1) that there is a χ ∈ F such that χ ⊩ φ. From the discussion preceding Theorem 1.16 we know that this implies χ ⊩ ψ and hence ψ ∈ F.

As a consequence of this and Proposition 1.1 we get that [F] is a strong proximity lattice filter.

Given such a filter 𝔉 in L/≡ we set ⋃𝔉 := {φ | [φ] ∈ 𝔉}. Again by Proposition 1.1 it is clear that this defines an element of filt(L). This map as well as [·] are clearly monotone with respect to subset inclusion, so it remains to show that they are mutually inverse. On the one hand we have [φ] ∈ [⋃𝔉] ⟺ φ ∈ ⋃𝔉 ⟺ [φ] ∈ 𝔉, and on the other φ ∈ ⋃[F] ⟺ [φ] ∈ [F] ⟺ φ ∈ F, where the last equivalence is true because of (1). □

As [·] is an order isomorphism it takes prime filters to the ∧-prime filters in the corresponding strong proximity lattice. In the distributive lattice L/≡ 'ordinary' filters and those of strong proximity lattices coincide because of the previous proposition. Hence, Proposition 1.6 tells us that these ∧-primes are exactly the prime filters as defined in [33].

We could thus use this observation to transfer the topological results from that paper to continuous sequent calculi. However, in the interest of keeping this thesis reasonably self-contained we prove the relevant results directly in the next section. Actually, this does not take much longer than quoting the results from [33] and translating them via Proposition 1.17: We want to give explicit constructions in terms of continuous sequent calculi anyway, and moreover we are studying a larger class of morphisms.

1.3 Topological Semantics

We now embark on a first investigation of the structure of the posets filt(L) and idl(L). From Lemma 1.4 we already know that they are dcpo's and ∧-semilattices.

Lemma 1.18 The assignments X ↦ X[⊩L] and X ↦ [⊩L]X are Scott-continuous retractions on the power set of the continuous sequent calculus L.

Proof. Both assignments are obviously monotone, Scott-continuity follows immediately from the fact that sequents are relations between finite sets, and idempotence is a consequence of Lemma 1.3. □

This shows that much more structure is inherited from the power set 𝔓(L):

Corollary 1.19 The posets filt(L) and idl(L) are continuous lattices.

Proof. The previous lemma shows that filt(L) and idl(L) are continuous retracts [5, Proposition 3.1.7.1] of the algebraic lattice (𝔓(L), ⊆). Hence they are continuous lattices [5, Proposition 3.1.2, Theorem 3.1.4]. □

As the next step we give an internal characterisation of the order of approximation in these lattices. Any filter F in a continuous sequent calculus is round and hence can be written as

F = ⋃_{φ ∈ F} φ[⊩].

We now claim that this union is directed: Given φ, ψ ∈ F we have φ ∧ ψ ∈ F by Proposition 1.1 and the containment (φ ∧ ψ)[⊩] ⊇ φ[⊩], ψ[⊩] follows readily from (W) and (L∧). This allows us to characterise ≪ for filters and ideals:

Lemma 1.20 In the continuous lattice filt(L) we have F ≪ G if and only if there is a φ ∈ G such that F ⊆ φ[⊩].

Proof. We know from Lemma 1.4 that directed suprema are simply given by unions. Hence, the condition is clearly sufficient for F ≪ G. That it is also necessary follows from our discussion preceding this lemma. □

The lemma gives us an alternative proof for Corollary 1.19 that does not rely on facts about retractions: The discussion preceding the lemma shows that every filter is the directed union of filters way below it. Hence, filt(L) is continuous. To see that it is a complete lattice note that ∅[⊩] is clearly the smallest filter of L. As we know that filt(L) has directed suprema we only have to construct binary suprema. We can read off the construction from Lemma 1.18, but let us state it explicitly for future reference:

Lemma 1.21 The supremum of two filters F and G is given by (F ∪ G)[⊩].

Proof. We already know that (−)[⊩] is monotone. As F and G are filters we have F[⊩] = F and G[⊩] = G which implies that (F ∪ G)[⊩] contains F and G. Also because of monotonicity it is the least such filter. □
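Lemma 1.21 makes the binary supremum of filters directly computable in a finite model. The Python sketch below is again our own hypothetical divisor-of-6 example (where X[⊩] collapses to the up-set of the meet of X); it shows in particular that the join of the two prime filters of that lattice is the improper filter.

```python
# Toy model: divisor lattice of 6; a filter's set of consequences is
# the up-set of its meet.  Lemma 1.21 then computes binary joins of
# filters as (F union G)[|-].
L = [1, 2, 3, 6]

def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

def gcd_all(xs):
    m = 6
    for x in xs:
        m = gcd(m, x)
    return m

def conseq(X):              # X[|-] = up-set of the meet of X
    return {p for p in L if p % gcd_all(X) == 0}

def filter_join(F, G):      # Lemma 1.21: F v G = (F union G)[|-]
    return conseq(F | G)

F, G = {2, 6}, {3, 6}       # the two prime filters of this lattice
print(sorted(filter_join(F, G)))    # [1, 2, 3, 6], the improper filter
print(sorted(filter_join({6}, F)))  # [2, 6]
```

That the join of two proper (indeed prime) filters can be improper is exactly why compactness of the top element, proved in Proposition 1.22, is not vacuous.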

These two lemmata enable us to infer even more about the structure of the lattices of filters and ideals:

Proposition 1.22 For a continuous sequent calculus L the lattices filt(L) and idl(L) are arithmetic.

Proof. We begin by proving that filt(L) is distributive. For this it suffices to verify F ∧ (G ∨ H) ⊆ (F ∧ G) ∨ (F ∧ H) for filters F, G and H; the other containment is true in any lattice. We can write the left hand side as F ∩ (G ∪ H)[⊩] because of the previous lemma. So let us suppose we have φ₁, …, φₙ ∈ G ∪ H and φ₁, …, φₙ ⊩ ψ for a ψ ∈ F. As F is a filter we can find a χ ∈ F such that χ ⊩ ψ using roundness of F. Applying (L∨) multiple times yields φ₁ ∨ χ, …, φₙ ∨ χ ⊩ ψ and by Proposition 1.1 we know that all the formulae φᵢ ∨ χ come either from (F ∩ G) or (F ∩ H). This means that we have shown ψ ∈ ((F ∩ G) ∪ (F ∩ H))[⊩] = (F ∧ G) ∨ (F ∧ H).

Suppose F, G and H are filters and F ≪ G, H. By Lemma 1.20 this means that we can find φ ∈ G and ψ ∈ H such that F ⊆ φ[⊩], ψ[⊩]. The infimum of G and H is given by the intersection because of Lemma 1.4, and by Proposition 1.1 we have φ ∨ ψ ∈ G ∩ H. Finally, Lemma 1.5.(2) allows us to conclude

F ⊆ φ[⊩] ∩ ψ[⊩] = (φ ∨ ψ)[⊩]

which shows F ≪ G ∩ H by the previous lemma.

The fact that the top element L is compact is an immediate consequence of L = ⊥[⊩]. □

We know from Proposition 3.25 that arithmetic lattices are the Stone duals of stably compact spaces. Hence, the previous proposition is a first glimpse at the topological meaning of a continuous sequent calculus. At the moment we can describe the space as pt(idl(L)), but we will find a much more economical description of the space. Before we continue in this vein we study how we can turn idl and filt into functors since we are also interested in the meaning of compatible consequence relations.

1.3.1 Morphisms

Lemma 1.23 Let L and M be continuous sequent calculi and ⊢: L → M a consequence relation. Then (−)[⊢] is a Scott-continuous ∧-semilattice homomorphism from filt(L) to filt(M).

Moreover, this assignment is functorial, i.e. (−)[⊢ ∘ ⊢′] = ((−)[⊢])[⊢′].

Proof. We know from Lemma 1.3 that (−)[⊢] takes filters to filters and the map is clearly monotone. By Lemma 1.4 the supremum of a directed set of filters is just the union, and as ⊢ relates finite sets the function (−)[⊢] is Scott-continuous.

We have L[⊢] = M since ⊥ ∈ L and ⊢ satisfies the rules (L⊥) and (W). For binary meets it is clear that (F ∩ G)[⊢] ⊆ F[⊢] ∩ G[⊢] by monotonicity. For the other direction take a φ ∈ F[⊢] ∩ G[⊢], i.e. there are ψ ∈ F and χ ∈ G such that ψ ⊢ φ and χ ⊢ φ. From this we get

 ψ ⊢ φ    χ ⊢ φ
──────────────── (L∨)
   ψ ∨ χ ⊢ φ

and because of Proposition 1.1 we know that ψ ∨ χ ∈ F ∩ G. This shows φ ∈ (F ∩ G)[⊢].

Finally, we have to show filt(⊢ ∘ ⊢′) = filt(⊢′) ∘ filt(⊢) — note that relational composition is from left to right whereas concatenation of functions is from right to left. We know that for any filter F the sets F[⊢], (F[⊢])[⊢′] and F[⊢ ∘ ⊢′] are filters. Because of Corollary 1.2 and the original formulation of the cut rule (Cut) we can thus restrict our considerations to singletons:

φ ∈ (F[⊢])[⊢′] ⟺ (∃ψ, χ) ψ ∈ F and ψ ⊢ χ and χ ⊢′ φ
              ⟺ (∃ψ) ψ ∈ F and ψ (⊢ ∘ ⊢′) φ
              ⟺ φ ∈ F[⊢ ∘ ⊢′].   □

We introduce the auxiliary category ASL (arithmetic semilattices) whose objects are arithmetic lattices and whose morphisms are Scott-continuous semilattice morphisms. The category of arithmetic lattices is a non-full subcategory as not every continuous semilattice morphism is a frame morphism; we will come back to the exact connection later. ASL is a full subcategory of the category of complete semilattices that we have already seen in the remark following Corollary 3.12.

The previous lemma almost shows that filt can be considered as a functor from MLS to ASL. The only thing that we have not shown yet is that it preserves identities. Fortunately, this is trivial as (−)[⊩] is by definition the identity on filters. Using some of the information we have gained so far, in particular Proposition 1.9, we can actually prove a much stronger result:

Theorem 1.24 The functor filt: MLS → ASL is full and faithful. Dually, idl is a contravariant, full and faithful functor from MLS to ASL.

Proof. Assume ⊢ and ⊢′ are two compatible consequence relations from L to M such that F[⊢] = F[⊢′] for all filters F ⊆ L. We start by observing that, because of weakening, for two finite sets Γ, Δ the condition Γ ⊢ Δ is equivalent to (Γ, Δ) being ⊢-inconsistent. Hence, using Proposition 1.9, we can conclude

Γ ⊢ Δ ⟺ (Γ, Δ) is ⊢-inconsistent ⟺ (Γ[⊩L], Δ) is ⊢-inconsistent ⟺ (Γ[⊩L][⊢], Δ) is ⊩M-inconsistent.

Now Γ[⊩L][⊢] is equal to Γ[⊩L][⊢′] by assumption and turning the above argument upside down we see that Γ ⊢ Δ is equivalent to Γ ⊢′ Δ. Hence the functor filt is faithful.

To show that it is also full take any Scott-continuous semilattice homomorphism f: filt(L) → filt(M). We have to find a consequence relation ⊢_f such that filt(⊢_f) = f. Looking at the chain of equivalences of the previous paragraph we see that we are more or less forced to define

Γ ⊢_f Δ :⟺ (f(Γ[⊩L]), Δ) is ⊩M-inconsistent.

To show that this is a compatible consequence relation we have to check the conditions of Corollary 1.13.

Weakening is obvious and so are the rules (L⊥), (R⊤) concerning the constants. To check (L∨) assume φ, Γ ⊢_f Δ and ψ, Γ ⊢_f Δ. Because of Proposition 1.9 we can find two formulae α ∈ f((φ, Γ)[⊩L]) ∩ [⊩M]Δ and β ∈ f((ψ, Γ)[⊩L]) ∩ [⊩M]Δ. By Proposition 1.1 the element α ∨ β lies in [⊩M]Δ as well as in

f((φ, Γ)[⊩L]) ∩ f((ψ, Γ)[⊩L]) = f((φ, Γ)[⊩L] ∩ (ψ, Γ)[⊩L]) = f((φ ∨ ψ, Γ)[⊩L])

where the last equality follows from Lemma 1.5.(2). Thus, we have shown φ ∨ ψ, Γ ⊢_f Δ.

For (L-Int′) suppose φ, Γ ⊢_f Δ which again means that we can find an element θ ∈ f((φ, Γ)[⊩L]) ∩ [⊩M]Δ. By Lemma 1.5.(5) we get

(φ, Γ)[⊩L] = ⋃↑ {(ψ, Γ)[⊩L] | φ ⊩L ψ}

and as f is continuous there is a ψ such that φ ⊩L ψ and θ ∈ f((ψ, Γ)[⊩L]) which shows ψ, Γ ⊢_f Δ.

The admissibility of (L-Cut) is a consequence of Lemma 1.5.(3). The right rules (R∧), (R-Int′) and (R-Cut) are proved using the same techniques; they follow from (R∧), (R-Int) and (Cut′) for ⊩M, respectively.

It remains to verify that filt(⊢_f) = f. For a filter F ⊆ L we calculate

F[⊢_f] = {φ | (∃Γ ⊆fin F) Γ ⊢_f φ}
       = {φ | (∃Γ ⊆fin F) (f(Γ[⊩]), φ) is ⊩M-inconsistent}
       = {φ | (∃Γ ⊆fin F) f(Γ[⊩]) ∩ [⊩M]φ ≠ ∅}
       = {φ | (∃Γ ⊆fin F) φ ∈ f(Γ[⊩])}
       = ⋃ {f(Γ[⊩]) | Γ ⊆fin F}
       = f(F)

using Proposition 1.9 to get from the second line to the third and the fact that f(Γ[⊩]) is a filter to get to the next. □

The theorem implies that the image of filt is equivalent to MLS, and as we shall later see the image is itself equivalent to all of ASL. Whereas the category MLS is clearly self-dual, this property is not obvious for ASL. It was originally discovered by Jimmie Lawson [41].

1.3.2 Compact Saturated Sets

We return to the topological spaces that arise from a continuous sequent calculus L. We know that both idl(L) and filt(L) are arithmetic lattices and hence correspond to stably compact spaces. If we consider the category ASL, we see that some of the morphisms there are actually frame morphisms and thus can be thought of as continuous functions via Stone duality. This means that we should think of ASL as a category of frames rather than locales. Because of this it is more natural to take the contravariant functor idl to define the open sets of the space corresponding to a continuous sequent calculus.

If we make this choice, what is the meaning of filt(L)? In the light of Corollary 3.12 and the surrounding discussion it is not far-fetched to surmise that it is the co-compact topology, or equivalently, the compact saturated sets of the topology given by idl(L).

Coming back to Theorem 2.12 we see that a compact saturated set is essentially the same thing as a Scott-open filter in the topology. Given a filter F we consider

𝔉 := {I ∈ idl(L) | I ∩ F ≠ ∅}.

It is clear from the definition that 𝔉 is a Scott-open subset of idl(L) as directed suprema are simply directed unions. To see that it is filtered suppose I, J ∈ 𝔉, which is witnessed by φ ∈ F ∩ I and ψ ∈ F ∩ J. Because of Proposition 1.1 we get φ ∧ ψ ∈ F ∩ I ∩ J which shows that I ∩ J ∈ 𝔉.

Now, let us assume that we are given a Scott-open filter 𝔉 ⊆ idl(L). We define

F := {φ | [⊩]φ ∈ 𝔉}

and verify that F is a filter by checking the conditions of Proposition 1.1: As filters must not be empty and are upper sets we have L = [⊩]⊤ ∈ 𝔉, and hence trivially ⊤ ∈ F. For [⊩]φ, [⊩]ψ ∈ 𝔉 we get [⊩](φ ∧ ψ) = [⊩]φ ∩ [⊩]ψ by the dual of Lemma 1.5.(2), and this intersection lies in the filter 𝔉. Given [⊩]φ ∈ 𝔉 and φ ⊩ ψ we get [⊩]φ ⊆ [⊩]ψ and hence [⊩]ψ ∈ 𝔉. Finally, take [⊩]φ ∈ 𝔉. From the dual of Lemma 1.5.(5) we know that we can write it as [⊩]φ = ⋃↑ {[⊩]ψ | ψ ⊩ φ} which implies the existence of a ψ ⊩ φ such that ψ ∈ F since we assumed that 𝔉 is Scott-open.

Proposition 1.25 For a continuous sequent calculus L the Scott-open filters of idl(L) are order-isomorphic to filt(L).

Proof. The two translations that we have just discussed are clearly monotone. So, all that we have to show is that they are mutually inverse: Consider

F ↦ {I ∈ idl(L) | I ∩ F ≠ ∅} ↦ {φ | [⊩]φ ∩ F ≠ ∅}.

Because of the roundness of filters the result is simply F. Now let 𝔉 be a Scott-open filter. We construct

𝔉 ↦ {φ | [⊩]φ ∈ 𝔉} ↦ {I | (∃φ ∈ I) [⊩]φ ∈ 𝔉}

and claim that the result is 𝔉. Suppose we have [⊩]φ ∈ 𝔉 for a φ ∈ I; then [⊩]φ ⊆ I and hence I ∈ 𝔉. For the other containment assume I ∈ 𝔉. By the argument preceding Lemma 1.20 we can write I as a directed union of ideals [⊩]φ for φ ∈ I. As 𝔉 is Scott-open this implies that there is a φ ∈ I such that [⊩]φ ∈ 𝔉. □

1.3.3 The Spectrum

We can use this result to get an alternative description of the 'points' of idl(L), i.e. the completely prime filters. Every supremum can be written as a directed supremum of finite suprema. So, a Scott-open filter is completely prime if and only if it is inaccessible by finite suprema.

Theorem 1.26 Let L be a continuous sequent calculus. The order-isomorphism of Proposition 1.25 restricts and co-restricts to prime filters in L and completely prime filters in idl(L).

Proof. The prime filters of L are precisely the ∧-prime elements in filt(L). We already have an order-isomorphism between filt(L) and the Scott-open filters of idl(L). Hence, all that we have to show is that such a filter is completely prime if and only if it is prime with respect to finite intersections of Scott-open filters.

By Lemma 2.15 a completely prime filter is even prime with respect to arbitrary intersections of upper sets.

Suppose conversely that 𝒱 ⊆ idl(L) is a Scott-open filter that is prime with respect to finite intersections of Scott-open filters. Our considerations preceding this proposition show that we only have to check that 𝒱 is inaccessible by binary suprema. So let us assume I ∨ J ∈ 𝒱. As idl(L) is a continuous lattice we can write this as a directed supremum of elements I′ ∨ J′ where I′ ≪ I and J′ ≪ J. Now, 𝒱 is Scott-open which implies that there are such ideals I′ and J′ that satisfy I′ ∨ J′ ∈ 𝒱. We have ↟I′ ∩ ↟J′ = ↟(I′ ∨ J′) ⊆ 𝒱. Since idl(L) is arithmetic the Scott-open sets ↟I′ and ↟J′ are also filters and we can infer that one of them must be contained in 𝒱. This shows that either I or J is an element of 𝒱 and hence that 𝒱 is completely prime. □

The following lemma is a consequence of the equivalent formulations of consistency given in Section 1.1. It can be seen as a reformulation of the Hofmann-Mislove Theorem, in the form of the key Lemma 2.11, and has many applications.

Lemma 1.27 If I is an ideal which does not meet a filter F in a continuous sequent calculus, then there is a prime filter P ⊇ F such that P ∩ I = ∅. Consequently, every filter is the intersection of the prime filters that contain it.

Proof. The first statement is an immediate consequence of Proposition 1.9 as F ∩ I = ∅ is equivalent to (F, I) being ⊢-consistent.

For the second claim take an arbitrary φ ∉ F. This implies [⊢]φ ∩ F = ∅ because of F = F[⊢]. Hence there is a prime filter P ⊇ F such that [⊢]φ ∩ P = ∅. As P = P[⊢] it cannot contain φ, and thus F = ⋂{P ∈ spec(L) | P ⊇ F}. □

By Proposition 1.25 a filter F ⊆ L corresponds to the Scott-open filter {J ∈ idl(L) | J ∩ F ≠ ∅} in idl(L). Hence, the lemma says, in effect, that if an ideal I does not lie in this Scott-open filter then there is a prime filter P ⊇ F such that I is not an element of the Scott-open filter corresponding to P, either. This is exactly the statement of Lemma 2.11, the key ingredient for the proof of the Hofmann-Mislove Theorem.

We define the spectrum of a continuous sequent calculus L to be

spec(L) := {P ∈ filt(L) | P prime}.

Theorem 1.26 tells us that this is essentially the topological space pt(idl(L)). To complete the description of these spaces in terms of prime filters of L we also have to translate the topology: in pt(idl(L)) an open set is given by an ideal I ∈ idl(L) and contains the 'points' {V ∈ pt(idl(L)) | I ∈ V}. We apply the translations given before Proposition 1.25 and get:

Corollary 1.28 An ideal I ∈ idl(L) corresponds to the open set of 'points'

O_I := {P ∈ spec(L) | P ∩ I ≠ ∅}.

Proof. A prime filter P corresponds to V := {J ∈ idl(L) | J ∩ P ≠ ∅}. We can read off immediately that I ∈ V is equivalent to I ∩ P ≠ ∅. This shows that O_I corresponds to {V ∈ pt(idl(L)) | I ∈ V}. □

We can also describe compact saturated sets in this way. A filter F corresponds to the Scott-open filter {I ∈ idl(L) | I ∩ F ≠ ∅}. By the Hofmann-Mislove Theorem (2.12) this in turn corresponds to

{V ∈ pt(idl(L)) | {I ∈ idl(L) | I ∩ F ≠ ∅} ⊆ V}

= {V ∈ pt(idl(L)) | (∀I ∈ idl(L)) I ∩ F ≠ ∅ ⇒ I ∈ V}.

Corollary 1.29 A filter F ∈ filt(L) describes the following compact saturated set of 'points':

K_F := {P ∈ spec(L) | P ⊇ F}.

Proof. As in the previous proof we consider a prime filter P and its translation V. If we have F ⊆ P and F ∩ I ≠ ∅, then we get P ∩ I ≠ ∅ and hence I ∈ V.

For the other containment suppose F ⊄ P. Then we can find a φ ∈ F \ P and by roundness another ψ ∈ F such that ψ ⊢ φ. This shows that the ideal [⊢]φ meets F, but by construction it does not meet P. We conclude [⊢]φ ∉ V, which implies that V is not a member of the set of completely prime filters corresponding to F. □

In Section 3 we saw that for a stably compact space X the co-compact topology X_κ is also stably compact, and that the topology of the latter is given by the complements of compact saturated sets of X. We can now express this in terms of the underlying continuous sequent calculus:

As we observed several times in Chapter 2 the setup of our logical system is symmetric: we can go from a continuous sequent calculus L to the dual structure L^op by turning around the ⊢ and interchanging ∧ and ∨ as well as ⊤ and ⊥. Furthermore, this process turns filters into ideals and vice versa.

In addition, Proposition 1.8 tells us that L^op has essentially the same 'points' as L. The connection to the topological dualisation of taking the co-compact topology is given by the following:

Theorem 1.30 Let L be a continuous sequent calculus. Then we have

(spec(L))_κ ≅ spec(L^op)

and this homeomorphism is the function given by the pseudo-complementation of Proposition 1.8.

Proof. Because of idl(L^op) = filt(L) the first statement is an immediate consequence of Proposition 1.25 and Corollary 3.12.

For the second statement we only have to show the following: for a filter F, pseudo-complementation translates K_F into the complement of O_F, considering F as an ideal in L^op. Take F ∈ filt(L), P ∈ spec(L) and define P′ := [⊢](L \ P) according to Proposition 1.8. As this P′ is the largest ideal that is disjoint from P, we have F ⊆ P ⇔ F ∩ P′ = ∅. This shows the claim. □

1.3.4 Logic

A formula φ gives rise to an ideal [⊢]φ and a filter φ[⊢]. So we can ask about the open and the compact reading of such a formula. We use O_φ and K_φ as abbreviations for O_{[⊢]φ} and K_{φ[⊢]}, respectively. Since filters are round we can express O_φ more economically as

O_φ = {P ∈ spec(L) | P ∩ [⊢]φ ≠ ∅} = {P ∈ spec(L) | φ ∈ P}.

We can interpret this in the following way: as in propositional logic, a prime filter on a continuous sequent calculus represents a model. The spectrum spec(L), then, is the space of all models, and every formula φ of L defines a subset of models, namely those in which φ is true. The definition of the topology on spec(L) is such that all these extents of formulae are open. We will see shortly that the O_φ are in fact a basis of this topology. In the classical setting of Boolean algebras and Stone spaces the extents are also compact. This is not the case here. However, every formula has the canonical compact subset K_φ associated with it. The logic is translated into set-theoretic operations both through the open and the compact interpretation:
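To make the model reading concrete, here is a small executable sketch (our own illustration, not from the thesis; it works in the finite distributive case, where the continuity issues discussed above are trivial): we enumerate the prime filters of a five-element lattice of sets and compute the extent of a formula as the set of models containing it.

```python
from itertools import combinations

# A five-element distributive lattice of sets, a toy stand-in for a logic L.
elements = [frozenset(s) for s in [(), (1,), (2,), (1, 2), (1, 2, 3)]]

def is_filter(F):
    """Non-empty upper set closed under binary meets (here: intersections)."""
    return (bool(F)
            and all(b in F for a in F for b in elements if a <= b)
            and all(a & b in F for a in F for b in F))

def is_prime(F):
    """Proper filter such that a v b in F implies a in F or b in F."""
    return (frozenset() not in F
            and all((a | b) not in F or a in F or b in F
                    for a in elements for b in elements))

def all_subsets(xs):
    for r in range(len(xs) + 1):
        for c in combinations(xs, r):
            yield set(c)

prime_filters = [F for F in all_subsets(elements)
                 if is_filter(F) and is_prime(F)]

def extent(phi):
    """O_phi: the prime filters ('models') in which phi holds."""
    return [F for F in prime_filters if phi in F]
```

For this lattice there are three prime filters, and the extent of the join {1} ∨ {2} = {1,2} is the union of the extents of {1} and {2}, in accordance with Proposition 1.31.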

Proposition 1.31 The following are true for a continuous sequent calculus L:

O_{φ∧ψ} = O_φ ∩ O_ψ    O_{φ∨ψ} = O_φ ∪ O_ψ

K_{φ∧ψ} = K_φ ∩ K_ψ    K_{φ∨ψ} = K_φ ∪ K_ψ

Proof. The two equations for O follow from Propositions 1.1 and 1.6, and the remaining claims are then a consequence of Theorem 1.30. □

We summarise the different concepts in a table which complements the one given in Section 2.2:

Logic              | Spectrum                 | Dual Spectrum
-------------------|--------------------------|-------------------------
formula            | open and compact reading | open and compact reading
prime filter/ideal | point                    | point
ideal              | open set                 | compact saturated set
filter             | compact saturated set    | open set

There are many more connections between the logic and the topological interpretation:

Proposition 1.32 Let L be a continuous sequent calculus and φ, ψ ∈ L formulae. Then the following properties hold:

(i) O_φ ⊆ K_φ;

(ii) φ ⊢ ψ ⇔ K_φ ⊆ O_ψ; and

(iii) for a compact saturated set K ⊆ spec(L) and an open O ⊆ spec(L) satisfying K ⊆ O there is a formula φ ∈ L such that K ⊆ O_φ ⊆ K_φ ⊆ O.

Proof. Property (i) and one direction of (ii) can be read off directly from the descriptions of O and K. If we have K_φ ⊆ O_ψ, this means that all prime filters P ⊇ φ[⊢] meet the ideal [⊢]ψ. By Lemma 1.27 this implies φ[⊢] ∩ [⊢]ψ ≠ ∅ and hence φ ⊢ ψ.

As in the previous argument, we can translate K ⊆ O into F ∩ I ≠ ∅ for the ideal I and the filter F corresponding to O and K, respectively. We can pick a φ ∈ F ∩ I and get K = K_F ⊆ O_φ ⊆ K_φ ⊆ O_I = O; the last inclusion follows from the roundness of I. □

The proposition shows, in particular, that φ ⊢ ψ implies O_φ ≪ O_ψ and K_ψ ≪ K_φ (see Lemma 2.17).

Remark. The condition φ ⊢ ψ is strictly stronger than the conjunction of [⊢]φ ⊆ [⊢]ψ and ψ[⊢] ⊆ φ[⊢]; a counterexample is easily found in the continuous sequent calculus constructed in Proposition 1.34. This matches up with the theory of abstract bases for domains [5, Definition 2.2.20]: if (B, ≺) is such a basis then x ≺ y implies ↓x ≪ ↓y, but not vice versa. In fact, ⊢ restricted to singletons is an abstract basis, and the continuous sequent calculus ideals are precisely the ideals for this abstract basis. Unlike the general case, the relation ⊢ has a concrete meaning for individual tokens of the basis as given by the preceding proposition.

The third condition of the proposition implies that there are 'enough' formulae to generate the topology:

Corollary 1.33 For a continuous sequent calculus L the sets O_φ form a basis for the topology on spec(L).

Proof. The third property of the previous proposition applies in particular in the situation ↑x ⊆ O. □

1.3.5 From Spaces to Continuous Sequent Calculi

There is one obvious question that we have not tackled yet:

Which stably compact spaces arise as spectra of continuous sequent calculi?

As we will see, up to isomorphism, the answer is: all of them.

Given a stably compact space X we know that O(X) and K(X) are arithmetic lattices. Hence, it is quite natural to define a strong proximity lattice from them first:

Proposition 1.34 For any stably compact space X, the following defines a strong proximity lattice:

• L_X := {(O, K) ∈ O(X) × K(X) | O ⊆ K}

• (O, K) ∨ (O′, K′) := (O ∪ O′, K ∪ K′)

• (O, K) ∧ (O′, K′) := (O ∩ O′, K ∩ K′)

• ⊥ := (∅, ∅), ⊤ := (X, X)

• (O, K) ≺ (O′, K′) :⇔ K ⊆ O′

Proof. It is obvious that L_X is a distributive lattice and that ≺ ∘ ≺ ⊆ ≺. For the converse inclusion assume K ⊆ O′. As X is in particular locally compact, the neighbourhood filter of K has a basis of compact saturated sets, which means that there is such a set L that satisfies K ⊆ int(L) ⊆ L ⊆ O′. This shows that ≺ is interpolative, i.e. ≺ ⊆ ≺ ∘ ≺.

Verifying the two axioms (∨–≺) and (≺–∧) is trivial since all involved operations are set-theoretic.

For (≺–∨) let K be a subset of O₁ ∪ O₂. Since X is locally compact this implies that for every point x ∈ K we can find a compact neighbourhood K_x such that either K_x ⊆ O₁ or K_x ⊆ O₂. The interiors of the K_x cover K and by compactness we find a finite sub-cover. We collect them in two finite unions depending on which of the two open sets they are a subset of. The two resulting compact sets and their interiors give us the required interpolating elements.

If we replace X by X_κ we get L_{X_κ} which is just the dual of L_X. This means that we can think of the axiom (∧–≺) as (≺–∨) for the co-compact topology, which implies that it also holds. □

Section 1.1, and in particular the discussion of consistency, helps to explain the meaning of the open and compact sets O and K making up the tokens (O, K) in this proposition. An open set O can be seen to represent positive information, and a compact set K to represent the negative information X \ K. The constraint O ⊆ K avoids self-contradiction of tokens.
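For a finite T0 space every subset is compact and the saturated sets are exactly the upper sets, so the construction of Proposition 1.34 can be carried out literally. The following sketch (our own illustration under these finite assumptions) builds the tokens of L_X over a two-point space and confirms that ≺ is both transitive and interpolative, i.e. ≺ ∘ ≺ = ≺.

```python
# Two-point stably compact space: the poset 0 < 1 with the upper-set topology.
# Opens = upper sets; compact saturated sets = upper sets as well.
uppersets = [frozenset(), frozenset({1}), frozenset({0, 1})]

# Tokens of L_X: pairs (O, K) with O a subset of K.
tokens = [(O, K) for O in uppersets for K in uppersets if O <= K]

def prec(t, u):
    """(O, K) prec (O', K')  iff  K is a subset of O'."""
    return t[1] <= u[0]

# The relation prec and its relational square prec o prec:
rel = {(t, u) for t in tokens for u in tokens if prec(t, u)}
comp = {(t, u) for t in tokens for u in tokens
        if any(prec(t, m) and prec(m, u) for m in tokens)}
```

Meets, joins, ⊥ = (∅, ∅) and ⊤ = (X, X) are computed componentwise as in the proposition; interpolation holds here because (K, K) is itself a token whenever K is open.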

The results from Section 1.2 show that, by a slight abuse of notation, we can think of L_X as a continuous sequent calculus where ≺ is just ⊢ restricted to singletons. We now have to check that L_X actually represents X:

Theorem 1.35 If X is a stably compact space, then we have spec(L_X) ≅ X.

Proof. We map an ideal I ∈ idl(L_X) to the set

⋃{O | (∃K) (O, K) ∈ I}

and as a union of open sets this set is clearly open. Conversely, we take an open set O to

{(O′, K′) ∈ L_X | K′ ⊆ O}.

We verify that this is an ideal using Proposition 1.1: the first two conditions are trivial, and the argument for the third one is the same as for ≺ ⊆ ≺ ∘ ≺ in the proof of the previous proposition.

The next step is to check that these two mappings are mutually inverse. Beginning with an open set, we see that we get it back because X is locally compact. Starting with an ideal I we get

I ↦ ⋃{O | (∃K) (O, K) ∈ I} ↦ {(O′, K′) | K′ ⊆ ⋃{O | (∃K) (O, K) ∈ I}}.

For every element (O, K) ∈ I of the original ideal we can find a token (O′, K′) ∈ I that satisfies (O, K) ≺ (O′, K′), or in other words K ⊆ O′. This proves that (O, K) appears in the resulting ideal.

For the other containment take a token (O′, K′) with K′ ⊆ ⋃{O | (∃K) (O, K) ∈ I}. Since K′ is compact and I as an ideal is closed under finite suprema, there must be a (O, K) ∈ I such that K′ ⊆ O. This implies (O′, K′) ≺ (O, K) and hence (O′, K′) ∈ I.

The two translations are clearly monotone which shows idl(L_X) ≅ O(X). From this we immediately get spec(L_X) ≅ pt(idl(L_X)) ≅ pt(O(X)) ≅ X. □

A token (O, K) has an open and a compact reading. We might expect that they are essentially O and K, which is indeed the case: the token gives rise to the ideal

[⊢](O, K) = {(O′, K′) ∈ L_X | K′ ⊆ O},

precisely the ideal corresponding to O in the above proof.

It also gives rise to a filter (O, K)[⊢] in L_X which corresponds to the Scott-open filter {I ∈ idl(L_X) | (O, K) ∈ I} by roundness. Every ideal I in it stands for a concrete open set [I] := ⋃{O′ | (∃K′) (O′, K′) ∈ I}. By the Hofmann-Mislove Theorem it suffices to show the equality of the following two Scott-open filters:

{O′ ∈ O(X) | O′ ⊇ K} = {[I] | (O, K) ∈ I}

If O′ ⊇ K, then we have (O, K) ∈ [⊢](O′, X), and as we already know [[⊢](O′, X)] = O′ we get O′ ∈ {[I] | (O, K) ∈ I}. For (O, K) ∈ I we get a token (O′, K′) ∈ I satisfying K ⊆ O′ which implies [I] ⊇ K.

As an immediate consequence of the previous theorem we get that the functor filt: MLS → ASL reaches all objects up to isomorphism.

Corollary 1.36 The categories MLS and ASL are equivalent.

Proof. This follows directly from the previous theorem, Theorem 1.24 and the fact that the stably compact spaces are precisely the Stone duals of the arithmetic lattices (Proposition 3.25). □

1.4 Semantics of Morphisms

Given a compatible consequence relation ⊢ between continuous sequent calculi L and M it is natural to consider the function

P ↦ P[⊢].

By Lemma 1.3 the result is a filter, whatever the P. But even if P is a prime filter there is no reason why P[⊢] should be prime.

As an example consider the relation that comprises all sequents. This is always a compatible consequence relation and for all P ⊆ L we get P[⊢] = M, which is never prime as it contains ⊥.

1.4.1 Morphisms as Multi-Functions

We can, however, consider (-)[⊢] as a function from spec(L) to filt(M). Filters in M correspond to compact saturated subsets of spec(M), but if we want to give a functional interpretation of morphisms we have to turn filt(M) into a topological space in its own right. The poset (filt(M), ⊆) ≅ (K(spec(M)), ⊇) is an arithmetic lattice. Endowed with the Scott topology it is the so-called Smyth or upper power domain, though usually the top element ∅ is omitted. We can directly describe its topology from the continuous sequent calculus:

Lemma 1.37 Let L be a continuous sequent calculus. Then the sets

Ô_φ := {F ∈ filt(L) | φ ∈ F}

form a basis for the Scott topology on (filt(L), ⊆).

Proof. By Lemma 1.4 directed suprema are unions and hence the sets Ô_φ are Scott-open.

Assume that a filter F lies in a basic open set ↟F′. Then we can find an interpolating filter F″ satisfying F′ ≪ F″ ≪ F and by Lemma 1.20 there is a φ ∈ F such that F″ ⊆ φ[⊢]. This implies F ∈ Ô_φ ⊆ ↑F″ ⊆ ↟F′. □

Using the terminology of Proposition 2.19 we can also understand the elements of this basis as Ô_φ = □O_φ.

Before we tackle the correspondence between consequence relations and functions spec(L) → filt(M) we provide some useful facts for the proof of the following proposition:

Lemma 1.38 For a continuous sequent calculus L, formulae φ, ψ ∈ L and a finite set Δ ⊆fin L the following hold:

(i) φ ⊢ ψ ⇒ K_φ ⊆ O_ψ ⇒ Ô_φ ⊆ Ô_ψ;

(ii) Ô_{φ∧ψ} ⊆ Ô_φ ⊆ Ô_{φ∨ψ}; and

(iii) Ô_{∨Δ} = ⋃{Ô_ψ | ψ ⊢ ∨Δ}, and this union is directed.

Proof. The first two properties follow immediately from Propositions 1.31, 1.32 and 1.1. The union in (iii) is directed because of (L∨) and (ii). Finally, we get the equality of the two expressions from the fact that filters are round. □

Proposition 1.39 Let L and M be continuous sequent calculi. The compatible consequence relations from L to M are in bijection with the continuous maps from spec(L) to K(spec(M)) ≅ filt(M).

Proof. Let us begin by checking that the function (-)[⊢] is continuous. To this end we look at the preimage of a basic open set:

((-)[⊢])⁻¹[Ô_ψ] = {P ∈ spec(L) | P[⊢] ∈ Ô_ψ}

= {P ∈ spec(L) | ψ ∈ P[⊢]}

= {P ∈ spec(L) | (∃φ ∈ P) φ ⊢ ψ}   (Corollary 1.2)

which equals ⋃{O_φ | φ ⊢ ψ} and is therefore obviously open.

Next we show that the map (⊢ ↦ (-)[⊢]) is injective. Suppose ⊢ ≠ ⊢′; then by Theorem 1.24 we have a filter F ⊆ L such that F[⊢] ≠ F[⊢′]. Without loss of generality, we can assume that there is a ψ ∈ F[⊢] \ F[⊢′]. This implies that F does not meet the ideal [⊢′]ψ and because of Lemma 1.27 there is a prime filter P ⊇ F such that P ∩ [⊢′]ψ = ∅. Because of Corollary 1.2 this means ψ ∉ P[⊢′], but note that we do have ψ ∈ P[⊢]. So, (-)[⊢] and (-)[⊢′] are different maps even when we restrict the domain to prime filters.

For surjectivity take a continuous function f: spec(L) → filt(M). We set

Γ ⊢_f Δ :⇔ K_{∧Γ} ⊆ f⁻¹[Ô_{∨Δ}]

and show that it is a consequence relation. Permutation of Γ and Δ, the implicit structural rule, follows from Proposition 1.17.(1), weakening from Lemma 1.38.(ii).

The only non-trivial logical rules are (L∨) and (R∧). The former follows from Proposition 1.31 since we can calculate

K_{(φ∨ψ)∧(∧Γ)} = K_{φ∨ψ} ∩ K_{∧Γ} = (K_φ ∪ K_ψ) ∩ K_{∧Γ}

= (K_φ ∩ K_{∧Γ}) ∪ (K_ψ ∩ K_{∧Γ}) = K_{φ∧(∧Γ)} ∪ K_{ψ∧(∧Γ)}.

For (R∧) we have to be more subtle since Ô_{φ∧ψ} = Ô_φ ∩ Ô_ψ holds, as a consequence of Proposition 1.1, but Ô_{φ∨ψ} = Ô_φ ∪ Ô_ψ fails as Ô does not refer to prime filters only. Nonetheless, if F is a filter we know from Proposition 1.17.(1) that φ ∨ (ψ ∧ χ) ∈ F is equivalent to (φ ∨ ψ) ∧ (φ ∨ χ) ∈ F because of φ ∨ (ψ ∧ χ) = (φ ∨ ψ) ∧ (φ ∨ χ). This suffices to verify that K_{∧Γ} ⊆ f⁻¹[Ô_{∨Δ∨ψ}] and K_{∧Γ} ⊆ f⁻¹[Ô_{∨Δ∨χ}] hold together if and only if K_{∧Γ} ⊆ f⁻¹[Ô_{∨Δ∨(ψ∧χ)}]. Having shown that ⊢_f is a consequence relation we now check that it is compatible: from property (i) of the previous lemma it is clear that we have ⊢_L ∘ ⊢_f ∘ ⊢_M ⊆ ⊢_f.

If, conversely, we have K_{∧Γ} ⊆ f⁻¹[Ô_{∨Δ}], then we can apply Proposition 1.32.(3) and find a formula φ ∈ L such that

K_{∧Γ} ⊆ O_φ ⊆ K_φ ⊆ f⁻¹[Ô_{∨Δ}].

This implies Γ ⊢_L φ, by Proposition 1.32.(2) and (L∧) read backwards, while φ ⊢_f Δ holds by definition.

We also want to interpolate on the right. To this end we observe that by property (iii) of the previous lemma we have Ô_{∨Δ} = ⋃{Ô_ψ | ψ ⊢_M ∨Δ}. Taking preimages preserves directed unions and as K_φ is compact, one of these ψ satisfies K_φ ⊆ f⁻¹[Ô_ψ]. This shows Γ ⊢_L φ ⊢_f ψ ⊢_M Δ and hence ⊢_f = ⊢_L ∘ ⊢_f ∘ ⊢_M which is clearly compatible. The last thing that remains to be done is to verify that ⊢_f is mapped to f. For a prime filter P we calculate:

P[⊢_f] = {ψ | (∃φ ∈ P) φ ⊢_f ψ}   (Corollary 1.2)

= {ψ | (∃φ ∈ P) K_φ ⊆ f⁻¹[Ô_ψ]}

= {ψ | P ∈ f⁻¹[Ô_ψ]}   (†)

= {ψ | f(P) ∈ Ô_ψ}

= {ψ | ψ ∈ f(P)}

= f(P)

The step to (†) follows from the continuity of f and Proposition 1.32.(3). □

We can reformulate this further: for a stably compact space X the Smyth power domain (K(X), ⊇) is an arithmetic lattice. Every continuous lattice is an FS-domain when equipped with the Scott topology. Hence, we can apply Proposition 3.19 and see that K(X) is again stably compact. A continuous function between stably compact spaces lifts to a mapping between the compact set lattices: the lifted function takes a compact saturated set to the saturation of its image, a Scott-continuous mapping by Lemma 3.16. This lifting is functorial, which can easily be seen from the localic description of Proposition 2.16. Consequently, K defines an endofunctor on StCp, the category of stably compact spaces.

This functor is also part of a monad: its 'unit' takes a point x to ↑x, its upper closure with respect to the specialisation order. The 'multiplication' of the monad takes a Scott-compact family of compact saturated sets C ∈ K(K(X)) to ⋃C ∈ K(X), which is again compact by Lemma 2.20. For the full details that this defines a monad see [54, Proposition 7.21]. We will show in a moment that the category of compatible theories is exactly the Kleisli category of this monad.
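On a finite poset the compact saturated sets are exactly the upper sets, so the monad structure can be written down directly. The sketch below (our own illustration under these finite assumptions, not the thesis's construction) implements the unit x ↦ ↑x, the multiplication as union, and Kleisli composition of multifunctions.

```python
# Reflexive order on three points: 0 below both 1 and 2.
order = {(0, 0), (1, 1), (2, 2), (0, 1), (0, 2)}

def up(s):
    """Saturation: the upper closure of a set of points."""
    return frozenset(y for (x, y) in order if x in s)

def unit(x):
    """The monad unit: a point goes to its upper closure."""
    return up({x})

def mult(family):
    """The multiplication: union of a family of upper sets."""
    return frozenset().union(*family) if family else frozenset()

def kleisli(f, g):
    """Kleisli composite of multifunctions f, g: point -> upper set."""
    return lambda x: mult([g(y) for y in f(x)])

# A sample Kleisli morphism (hypothetical): 0 may evolve to 1 or 2.
g = lambda x: frozenset({1, 2}) if x == 0 else unit(x)
```

The unit acts as the identity of the Kleisli category: composing g with unit on either side gives g back, matching the claim that the identity consequence relation corresponds to the unit of the monad.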

Some aspects of the following argument are more transparent if we start with semilattices rather than continuous sequent calculi. For this reason we relate the previous proposition and Lemma 1.23. The latter tells us that a compatible consequence relation ⊢: L → M gives rise to a Scott-continuous semilattice homomorphism [⊢](-): idl(M) → idl(L). We claim that taking preimages under this map corresponds to the function (-)[⊢]: spec(L) → filt(M) via Proposition 1.25. A prime filter P ⊆ L corresponds to the completely prime filter {I ∈ idl(L) | I ∩ P ≠ ∅} and its preimage under [⊢](-) is

{I ∈ idl(M) | P ∩ [⊢]I ≠ ∅} = {I ∈ idl(M) | P[⊢] ∩ I ≠ ∅},

where the equality follows from Proposition 1.9. The resulting Scott-open filter is just the translation of P[⊢].

Theorem 1.40 The categories MLS, SPLW and ASL are equivalent to the Kleisli category StCp_K of the Smyth power monad (K, ↑(·), ⋃).

Proof. Because of Theorem 1.35 we already know that we reach all objects up to isomorphism. Moreover, most of the work for the morphisms has already been done in Proposition 1.39. We only have to show that the translation appearing in the proof there is functorial. Then we know that the functor is full and faithful, and hence part of an equivalence.

First, we check identities: P[⊢] = P, but now this prime filter, seen as a compact saturated set rather than a point of the spectrum, corresponds to the saturation {Q ∈ spec(L) | P ⊆ Q}. This means that the identity ⊢ is mapped to the unit of the monad, which is the identity in the Kleisli category.

For composition we consider the composite of two semilattice homomorphisms ρ: A → B and σ: B → C:

[diagram: the spaces pt(A), pt(B), pt(C) with the continuous maps r: pt(B) → K(pt(A)) and s: pt(C) → K(pt(B)), together with K(r): K(pt(B)) → K(K(pt(A))) and the multiplication into K(pt(A))]

In the diagram the continuous functions r and s correspond to ρ and σ, respectively, and from the preceding discussion we know that they are essentially given by taking preimages under ρ and σ. The Kleisli composition of r and s is given by μ ∘ K(r) ∘ s, where the 'multiplication' μ of the monad is union.

We think of K(pt(B)) and K(pt(A)) as spaces of Scott-open filters of B and A by Theorem 2.12. The space K(K(pt(A))), however, we prefer to consist of the compact saturated subsets of K(pt(A)). In these terms we can describe K(r) as follows: it takes a Scott-open filter F ⊆ B, which corresponds to {P ∈ pt(B) | P ⊇ F}, to the saturation of

{ρ⁻¹[P] | P ∈ pt(B) and P ⊇ F}.

Normally, the multiplication μ takes the union of compact saturated sets. In terms of Scott-open filters this translates into intersection, as an argument very similar to the proof of Corollary 2.14 shows. If we compose this with K(r) we can disregard the saturation of the compact set as this does not add anything new to the intersection. Using Lemma 1.27 we can calculate

μ((K(r))(F)) = ⋂{ρ⁻¹[P] | P ∈ pt(B) and P ⊇ F} = ρ⁻¹[F].

From this observation it follows that for all P ∈ pt(C) we have (σ ∘ ρ)⁻¹[P] = ρ⁻¹[σ⁻¹[P]] = (μ ∘ K(r) ∘ s)(P), which concludes the proof of the theorem. □

1.4.2 Morphisms as Relations

We can alternatively describe these morphisms as certain relations between stably compact spaces. For any set-theoretic function f: X → P(Y) we can define a binary relation

R_f := {(x, y) | y ∈ f(x)} ⊆ X × Y.

Conversely, every relation R ⊆ X × Y determines a function

f_R(x) := {y | x R y}: X → P(Y),

and these translations are mutually inverse. Categorically speaking, the category of sets and relations is the Kleisli category for the power set monad on Set. We now tackle the analogue in our topological setting.
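The set-level translation is easy to state executably. This sketch (our own, with arbitrary sample data) checks that R_f and f_R are mutually inverse.

```python
# Carrier sets for a toy multifunction f: X -> P(Y).
X, Y = [0, 1], ['a', 'b']

def rel_of(f):
    """R_f := {(x, y) | y in f(x)}."""
    return frozenset((x, y) for x in X for y in f(x))

def fun_of(R):
    """f_R(x) := {y | x R y}."""
    return lambda x: frozenset(y for (x2, y) in R if x2 == x)

# A sample multifunction and its relation.
f = lambda x: frozenset({'a'}) if x == 0 else frozenset({'a', 'b'})
R = rel_of(f)
```

Round-tripping through either translation gives back what one started with, which is the content of the bijection described above.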

Proposition 1.41 Let X and Y be stably compact spaces. For a continuous function f: X → K(Y) ⊆ P(Y) the relation R_f is closed in X × Y_κ.

Proof. Suppose (x, y) ∉ R_f, which is equivalent to y ∉ f(x). The set f(x) ⊆ Y is compact saturated, and as Y is locally compact its neighbourhood filter has a basis of compact saturated sets. So there is such a set K satisfying

y ∉ K ⊇ int(K) ⊇ f(x).

Since f is continuous and the set {L ∈ K(Y) | L ⊆ int(K)} is open by Lemma 2.17, there is an open neighbourhood V of x such that for all x′ ∈ V we have f(x′) ⊆ int(K) ⊆ K. This implies (V × (Y \ K)) ∩ R_f = ∅. The set V × (Y \ K) is open in X × Y_κ and contains (x, y), thus showing that R_f is closed. □

Unfortunately, the converse is slightly more involved. We will make use of the fact that we can freely translate between a stably compact space X, the stably compact space X_κ arising from the co-compact topology and the compact ordered space X_π carrying the patch topology, as discussed in Section 3.2.

Proposition 1.42 Let X and Y be stably compact and R ⊆ X × Y_κ a closed relation. Then f_R is a continuous function from X to K(Y).

Proof. First, we show that for all x ∈ X the set f_R(x) = {y | x R y} is compact saturated: the specialisation order of the product topology is the product of the two orders, i.e. point-wise. Now R is closed and hence a lower set, and we see that f_R(x) is a lower subset of Y_κ. By Proposition 1.2 this is equivalent to f_R(x) being an upper, or equivalently saturated, subset of Y.

The topology on X_π × Y_π is finer than that on X × Y_κ. Thus, R is a closed subset of the compact Hausdorff space X_π × Y_π. Moreover, the subspace f_R(x) ⊆ Y_π is homeomorphic to the closed set ({x} × Y) ∩ R ⊆ X_π × Y_π. We conclude that it is compact in the patch topology and consequently also in the coarser original topology. Putting the two observations together we see that f_R(x) is compact saturated.

Now assume that f_R is not continuous. Because of Lemma 2.17 this implies that there is an x ∈ X and an open set V ⊆ Y such that f_R(x) ⊆ V, but for all neighbourhoods U of x there is an element x′ ∈ U for which f_R(x′) ⊄ V. We can reformulate this as K_U := (U × (Y \ V)) ∩ R ≠ ∅. As X is locally compact we can restrict ourselves to compact saturated neighbourhoods U of x. The sets Y \ V are patch-compact, which implies that all the sets K_U are patch-compact as well. They are also clearly filtered, which means that they form a filter basis of compact non-empty sets in the compact Hausdorff space X_π × Y_π. This allows us to find an element (x′, y) ∈ ⋂K_U ≠ ∅. By construction we have x′ R y and x ⊑ x′, which implies x R y because R as a closed set is also a lower set. But we also have y ∈ Y \ V which contradicts f_R(x) ⊆ V. This shows that f_R is continuous. □

The last two propositions already characterise the relations that arise from MLS-morphisms. The only thing that remains to be done is to investigate how composition behaves under this translation.

Note that for the function (-)[⊢] we can write R_⊢ := R_{(-)[⊢]} as

P R_⊢ Q :⇔ P[⊢] ⊆ Q

using the correspondence of Corollary 1.29.

Theorem 1.43 The category MLS is equivalent to the category of stably compact spaces with closed relations R ⊆ X × Y_κ. Composition is given by the ordinary relational product.

Proof. Suppose ⊢: L → M and ⊢′: M → N are compatible consequence relations. We need to show:

R_{⊢∘⊢′} = R_⊢ ∘ R_⊢′

Suppose O ⊆ L and Q ⊆ N are two prime filters such that O[⊢∘⊢′] = O[⊢][⊢′] ⊆ Q, the equality being a consequence of Lemma 1.23. Then we must have O[⊢] ∩ [⊢′](N \ Q) = ∅, and using Proposition 1.9 we find a prime filter P ⊆ M such that O[⊢] ⊆ P and P ∩ [⊢′](N \ Q) = ∅. As P is a filter, Corollary 1.2 applies and thus we have P[⊢′] ⊆ Q.

For the other containment assume O[⊢] ⊆ P and P[⊢′] ⊆ Q. This immediately implies O[⊢∘⊢′] = O[⊢][⊢′] ⊆ P[⊢′] ⊆ Q. □
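At the level of sets, the content of Theorem 1.43 is that Kleisli composition of multifunctions and the relational product of their graphs agree. A finite sketch (our own, with made-up data):

```python
# Three carrier sets and two sample multifunctions between them.
X, Y, Z = [0, 1], [10, 11], [20, 21]

f = lambda x: {10} if x == 0 else {10, 11}      # multifunction X -> P(Y)
g = lambda y: {20} if y == 10 else {20, 21}     # multifunction Y -> P(Z)

Rf = {(x, y) for x in X for y in f(x)}
Rg = {(y, z) for y in Y for z in g(y)}

# Relational product R_f ; R_g ...
prod = {(x, z) for (x, y1) in Rf for (y2, z) in Rg if y1 == y2}

# ... versus the graph of the Kleisli composite x -> U{ g(y) | y in f(x) }.
kleisli = {(x, z) for x in X for y in f(x) for z in g(y)}
```

Both constructions yield the same relation, which is what makes the relational description of MLS a category in the first place.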

We now have several descriptions of essentially the same category. Syntactically we introduced it as MLS and showed that it is equivalent to strong proximity lattices SPLW, a formulation that is very close to Abramsky's prelocales. From Corollary 1.36 we know that we can also see it as arithmetic semilattices with Scott-continuous semilattice morphisms; this is the localic viewpoint. Topologically we can understand it either as the Kleisli category StCp_K or as stably compact spaces with closed (in the appropriate sense) relations, the category that we will call StCp*. Note that in this category the identities are given by the specialisation order.

Having several different ways of seeing this category has the advantage that for any construction we want to perform in it, we can choose the most convenient point of view for that particular purpose.

Remark. It is instructive to consider the self-duality of this category in its different manifestations. We already discussed the case of MLS in Section 1.1: it flips a consequence relation ⊢: L → M around and interchanges the connectives ∨ and ⊥ with their duals ∧ and ⊤. The same procedure takes care of strong proximity lattices.

For arithmetic semilattices the self-duality is given by exponentiation 2^(-). The elements of 2^A correspond precisely to the Scott-open filters in A and hence to the compact saturated subsets of the Stone dual by the Hofmann-Mislove Theorem. Because of Corollary 3.12 this gives rise to an involution on ASL. The key ingredients for the proof that it does indeed correspond to the dualisation in MLS are contained in the discussion preceding Theorem 1.40.

The hardest case is that of the Kleisli category. Given f: X → K(Y) we claim that the dual map f*: Y → K(X) is given by

f*(y) := {x ∈ X | y ∈ f(x)}.

For this to work it suffices to verify that for a compatible consequence relation ⊢: L → M and P ∈ spec(M^op) we have

{Q ∈ spec(L^op) | Q ⊇ [⊢]P} = {Q ∈ spec(L^op) | Q*[⊢] ⊆ P*},

where (-)* denotes the pseudo-complementation given in Proposition 1.8 and Theorem 1.30. To this end let Q ⊇ [⊢]P, φ ∈ Q* and φ ⊢ ψ. This implies φ ∉ [⊢]P since Q ∩ Q* = ∅, and because of interpolation on the right we get ψ ∈ P*. The argument for the other containment is almost identical.

As an immediate consequence we get y ∈ f*(x) ⇔ x ∈ f(y), which means that the dual of a closed relation R ⊆ X × Y_κ is simply R again.

1.4.3 Functions

In general, consequence relations correspond to relations between stably compact spaces, but some of them are really functions in disguise: if X is a stably compact space, then we can think of it and K(X) as spec(L) and filt(L) for the same continuous sequent calculus. We see from the bases of the respective topologies given in Corollary 1.33 and Lemma 1.37 that X is a subspace of K(X). Consequently, we can compose a continuous function from spec(L) to spec(M) with the embedding spec(M) ↪ filt(M). Conversely, if a function f: spec(L) → filt(M) can be co-restricted to the subspace spec(M) then this is also a continuous function.

This is readily rephrased in terms of closed relations. Given a continuous function f: X → Y between stably compact spaces we can consider the hypergraph

{(x, y) ∈ X × Y | f(x) ⊑ y}.

If R ⊆ X × Y_κ is a closed relation with the property that for all x there is a least y such that x R y, we can consider the function that maps x to this y. These translations correspond directly to the ones given in the previous paragraph, using Propositions 1.41 and 1.42 and Corollary 1.29. Hence, it is a priori clear that the hypergraph is a closed relation, that the other translation yields a continuous function and that they are mutually inverse.
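The function/relation translation of this paragraph can be replayed over a finite poset (our own illustration under finite assumptions): the hypergraph of a monotone map has a least witness in every row, and taking least witnesses recovers the map.

```python
# Specialisation order on Y: 0 below both 1 and 2 (reflexive).
order = {(0, 0), (1, 1), (2, 2), (0, 1), (0, 2)}
leq = lambda a, b: (a, b) in order

X = Y = [0, 1, 2]
f = lambda x: x                       # the identity map, trivially monotone

# Hypergraph of f: all pairs (x, y) with f(x) below y.
R = {(x, y) for x in X for y in Y if leq(f(x), y)}

def least(ys):
    """The least element of ys with respect to leq, if any."""
    for y in ys:
        if all(leq(y, z) for z in ys):
            return y
    return None

def recover(x):
    """Read f back off the relation via least witnesses."""
    return least([y for (x2, y) in R if x2 == x])
```

Every row of R here has a least element, and taking it gives back f; relations without this least-witness property are genuinely multi-valued and correspond to proper consequence relations rather than 'functions'.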

For completeness' sake we also look at the situation in ASL. The discussion before Theorem 1.40 can be summed up as follows: a Scott-continuous semilattice morphism ρ: A → B corresponds to a continuous function r: pt(B) → K(pt(A)). It is given by taking preimages if we identify K(pt(A)) with the Scott-open filters of A. Hence, we can certainly think of ρ as a 'function' if it is in fact a frame morphism.

If it is not, then there are x, y ∈ A such that ρ(x) ∨ ρ(y) < ρ(x ∨ y). The Scott topology on any continuous domain has a basis of Scott-open filters (see [5, Lemma 2.3.8] or the proof of Proposition 2.18). This implies, together with Lemma 2.11, that there is a completely prime filter P ⊆ B such that ρ(x ∨ y) ∈ P, but ρ(x) ∨ ρ(y) ∉ P. The preimage ρ⁻¹[P] contains x ∨ y but neither x nor y and hence is not a completely prime filter. We have shown:

A Scott-continuous semilattice homomorphism corresponds to a 'function' if and only if it is a frame morphism.

The problem we are really interested in is to give a syntactic characterisation of these semantic notions of being a function:

Theorem 1.44 A compatible consequence relation ⊢: L → M corresponds to a 'function' in the above sense if and only if for all Γ ⊢ Δ there is a set Θ ⊆fin L such that Γ ⊢_L Θ and for all φ ∈ Θ there is a ψ ∈ Δ with φ ⊢ ψ.

Proof. For a prime filter P ⊆ L we check that P[⊢] is also prime using Proposition 1.6. Assume that P[⊢] contains ⊥. We get φ ∈ P such that φ ⊢ ⊥ and hence φ ⊢ ∅. The condition of the theorem then implies φ ⊢_L ∅, which contradicts the primeness of P.

Now let us suppose P ∋ φ ⊢ ψ ∨ χ. We take the right-hand side apart to get φ ⊢ ψ, χ, which implies that there is a set Δ ⊆fin L such that φ ⊢_L Δ and for all δ ∈ Δ we have δ ⊢ ψ or δ ⊢ χ. Because of P ∋ φ ⊢_L Δ we see that ∨Δ lies in P = P[⊢_L] and as P is prime this implies that there is a δ ∈ Δ ∩ P. Hence, we either have ψ ∈ P[⊢] or χ ∈ P[⊢].

For the converse we study the relation ⊢_f that arises from a continuous function f: spec(L) → filt(M) that can actually be co-restricted to spec(M). According to the proof of Proposition 1.39, Γ ⊢_f Δ is given by K_⋀Γ ⊆ f⁻¹[Ô_⋁Δ]. Since we can actually consider f as a map to the subspace spec(M), we can write this as K_⋀Γ ⊆ f⁻¹[O_⋁Δ]. The advantage of this is that O_(.) preserves ∨ whereas Ô_(.) does not.

Given Γ ⊢_f δ₁, …, δₙ we get

K_⋀Γ ⊆ f⁻¹[O_{δ₁ ∨ ⋯ ∨ δₙ}] = f⁻¹[O_{δ₁} ∪ ⋯ ∪ O_{δₙ}] = f⁻¹[O_{δ₁}] ∪ ⋯ ∪ f⁻¹[O_{δₙ}]

and by Proposition 1.32.(3) this means that for all x ∈ K_⋀Γ we can find an i ∈ {1, …, n} and a formula φ ∈ L such that x ∈ O_φ ⊆ K_φ ⊆ f⁻¹[O_{δᵢ}]. As K_⋀Γ is compact, finitely many of these O_φ cover it. Taking the disjunctions of the formulae that correspond to each δᵢ, we get φ₁, …, φₙ ∈ L such that K_⋀Γ ⊆ O_{φ₁} ∪ ⋯ ∪ O_{φₙ} = O_{φ₁ ∨ ⋯ ∨ φₙ} and K_{φᵢ} ⊆ f⁻¹[O_{δᵢ}] for i = 1, …, n. This is just a translation of the condition stated in the theorem. □

We now have explicit descriptions of 'functions' in all of the equivalent categories MLS, ASL, StCp^ and StCp with closed relations. Later we will see how we can characterise the subcategory of functions in purely categorical terms.

Note that the identities are 'functions'. For the semantic categories this is obvious, and hence it is also true for the syntactic category MLS where it is a consequence of interpolation.

2 Domain Constructions

We are now in a position where we can perform domain constructions in logical form. Our first step is to improve on our representation theorem for stably compact spaces by characterising all continuous sequent calculi that correspond to a given stably compact space. This allows us to make the connection between syntactic and topological constructions. We can also turn the argument of the characterisation around to construct continuous sequent calculi.

Then we look at several domain constructions to show how this technique can be applied. In particular we consider lifting, sums, products, the Smyth power domain, the function space and the relation space. There are only partial results about the function space construction. We contrast this with the space of relations, which has a very nice logical description.

2.1 Representing Stably Compact Spaces

Theorem 1.35 tells us that for every stably compact space there is a continuous sequent calculus that represents it. As its tokens are pairs of open and compact sets, this is far from being a syntactic representation. Normally, we want to construct a continuous sequent calculus syntactically and then prove that it corresponds to a certain stably compact space.

We have already done most of the work in Section 1.3 under the heading "Logic". The next theorem says in effect that Proposition 1.32 identifies the key properties.

Theorem 2.1 Let X be a stably compact space and L a continuous sequent calculus. Then spec(L) and X are homeomorphic if and only if there are maps o: L → 𝒪(X) and k: L → 𝒦(X) satisfying the following properties:

(i) (∀φ ∈ L) o(φ) ⊆ k(φ);

(ii) (∀φ, ψ ∈ L) φ ⊩ ψ ⟺ k(φ) ⊆ o(ψ); and

(iii) for K ⊆ O ⊆ X, where K is compact and O open, there is a formula φ such that K ⊆ o(φ) ⊆ k(φ) ⊆ O.

In this case o corresponds to O_(.), k to K_(.), and both maps translate finite conjunctions and disjunctions into intersections and unions, respectively.

Proof. Let us start by making some observations: the first two conditions, together with Lemma 2.17, imply that o is a monotone map from (L, ⊩) to (𝒪(X), ≪) and that k is antitone from (L, ⊩) to (𝒦(X), ≪). This, together with (iii), ensures that both maps take ⊥ to the empty set and ⊤ to X.

If X = spec(L) we use O_(.) and K_(.) as our maps o and k. The conditions of the theorem are then verbatim the ones of Proposition 1.32, and it is clear that they are invariant under passing to a homeomorphic space X ≅ spec(L). Conversely, assume we are given o and k satisfying the three conditions of the theorem. The outline of the proof is to extend o as indicated in the diagram

L ↪ idl(L),    o: L → 𝒪(X),    O: idl(L) → 𝒪(X)

and to prove that O is an isomorphism. We define the extension by

O(I) := ⋃{o(φ) | φ ∈ I}.

To see that the union is directed, take φ, ψ ∈ I. We get φ ∨ ψ ∈ I and by roundness we find a χ ∈ I such that φ ∨ ψ ⊩ χ. This implies φ ⊩ χ and ψ ⊩ χ, and as o is monotone with respect to ⊩ we get o(φ), o(ψ) ⊆ o(χ). The map O is clearly monotone and we claim that it also reflects the order. Given O(I) ⊆ O(J) and φ ∈ I, we find an element ψ ∈ I satisfying φ ⊩ ψ, by roundness of I, and we get

o(φ) ⊆ k(φ) ⊆ o(ψ) ⊆ O(I) ⊆ O(J) = ⋃{o(χ) | χ ∈ J}.

As k(φ) is compact, there is a χ ∈ J such that k(φ) ⊆ o(χ), which implies φ ⊩ χ ∈ J and thus φ ∈ J. So O reflects the order and is, in particular, injective.

To prove that it is an isomorphism it remains to show that it is surjective. Given U ∈ 𝒪(X) we define

I_U := {φ ∈ L | k(φ) ⊆ U}

and verify that this is an ideal using Proposition 1.1. Closure under ∨ is the only property that is not an immediate consequence of (ii) and (iii). So, let us suppose k(φ), k(ψ) ⊆ U. By (iii) we find a χ ∈ L such that

k(φ), k(ψ) ⊆ k(φ) ∪ k(ψ) ⊆ o(χ) ⊆ k(χ) ⊆ U.

We get φ ⊩ χ and ψ ⊩ χ, which entails φ ∨ ψ ⊩ χ and thus φ ∨ ψ ∈ I_U.

Obviously, O(I_U) ⊆ U holds since for all φ we have o(φ) ⊆ k(φ). For the converse inclusion, take an x ∈ U. The saturation ↑x is compact and contained in U, so the third condition ensures the existence of a formula φ ∈ L such that {x} ⊆ ↑x ⊆ o(φ) ⊆ k(φ) ⊆ U. Thus, we get in particular φ ∈ I_U and x ∈ o(φ), which entails U = O(I_U). So O is a monotone, order-reflecting and surjective map and hence an isomorphism.

At this point we already know that L corresponds to X:

X ≅ pt(𝒪(X)) ≅ pt(idl(L)) = spec(L)

Now, we prove that O is an extension of o. Because of the monotonicity of o we have O([⊩]φ) = ⋃{o(ψ) | ψ ⊩ φ} ⊆ o(φ). For x ∈ o(φ) we again find a ψ such that {x} ⊆ o(ψ) ⊆ k(ψ) ⊆ o(φ), which implies ψ ⊩ φ and thus x ∈ O([⊩]φ). This proves that o corresponds to O_(.). For k we define the map K: filt(L) → 𝒦(X) by

K(F) := ⋂{k(φ) | φ ∈ F}.

The proof that it extends k is analogous to the argument for O. We claim that this map is also the one induced by O via the Hofmann–Mislove Theorem. We know from Proposition 1.25 that the isomorphism between Scott-open filters on idl(L) and filt(L) is given by the map

Φ(F) := {I ∈ idl(L) | I ∩ F ≠ ∅}.

So, we have to chase the diagram

filt(L) ——K——→ 𝒦(X)
   ↓              ↓
ΣOFilt(idl(L)) → ΣOFilt(𝒪(X))

i.e. we need to check the equality

F ↦ K(F) ↦ {U ∈ 𝒪(X) | K(F) ⊆ U} ↦ {I ∈ idl(L) | K(F) ⊆ O(I)}
         = {I ∈ idl(L) | I ∩ F ≠ ∅}.

Expanding the condition K(F) ⊆ O(I) yields ⋂{k(φ) | φ ∈ F} ⊆ O(I).

By Corollary 2.14 such a filtered intersection of compact saturated sets is contained in an open set if and only if one of the members of the family already is. So, the condition is equivalent to

(∃φ ∈ F) k(φ) ⊆ O(I) = ⋃{o(ψ) | ψ ∈ I}

and this, in turn, is equivalent to (∃φ ∈ F)(∃ψ ∈ I) k(φ) ⊆ o(ψ). Now, k(φ) ⊆ o(ψ) is tantamount to φ ⊩ ψ, and because of the roundness of filters and ideals we see that K(F) ⊆ O(I) is equivalent to I ∩ F ≠ ∅. This means that k corresponds to K_(.) and, because of Proposition 1.31 and our considerations at the beginning of this proof, we can infer that o and k respect ⊤, ⊥, ∧ and ∨. □

The first condition of the theorem is usually straightforward to check; it is just a sanity requirement relating o and k. The two implications of the second condition can be understood as soundness and completeness. Finally, the third one means that the interpretations of tokens are dense with respect to open and compact saturated sets. Density implies that the open sets of the form o(φ) are a basis of the topology and that the complements of the sets k(φ) are a basis for the co-compact topology. The first statement is the content of Corollary 1.33; the latter is an immediate consequence of Theorem 1.30.

We can turn the above characterisation around to construct continuous sequent calculi:

Theorem 2.2 Let X be a stably compact space, L a (2, 2, 0, 0)-algebra, and o: L → 𝒪(X) and k: L → 𝒦(X) two maps that satisfy

(i) (∀φ ∈ L) o(φ) ⊆ k(φ);

(ii) for K ⊆ O ⊆ X, where K is compact and O open, there is a formula φ such that K ⊆ o(φ) ⊆ k(φ) ⊆ O;

and that translate finite conjunctions and disjunctions into intersections and unions. Then

Γ ⊩ Δ :⟺ ⋂{k(φ) | φ ∈ Γ} ⊆ ⋃{o(ψ) | ψ ∈ Δ}

defines a continuous sequent calculus that represents X.

Proof. The rules for consequence relations are all easy to check. Let us do (L∨) as an example: suppose we have φ, Γ ⊩ Δ and ψ, Γ ⊩ Δ, i.e. k(φ) ∩ k(⋀Γ), k(ψ) ∩ k(⋀Γ) ⊆ o(⋁Δ). From this we immediately get

(k(φ) ∩ k(⋀Γ)) ∪ (k(ψ) ∩ k(⋀Γ)) = (k(φ) ∪ k(ψ)) ∩ k(⋀Γ) = k(φ ∨ ψ) ∩ k(⋀Γ) ⊆ o(⋁Δ)

and hence φ ∨ ψ, Γ ⊩ Δ.

To see that ⊩ is a continuous sequent calculus it suffices to check that it is closed under (Cut) and has interpolation. The former is trivial, and for (L-Int) consider φ, Γ ⊩ Δ, or in other words k(φ) ∩ k(⋀Γ) ⊆ o(⋁Δ). As a saturated set, k(φ) is the filtered intersection of its open neighbourhoods, and thus the density condition (ii) implies that we can write it as a filtered intersection of sets k(ψ) with φ ⊩ ψ. Thus we can also write k(φ) ∩ k(⋀Γ) as a filtered intersection of terms k(ψ) ∩ k(⋀Γ), with ψ as above. By Corollary 2.14 this implies that there is a ψ such that k(φ) ⊆ o(ψ) and k(ψ) ∩ k(⋀Γ) ⊆ o(⋁Δ). Translating these subset containments yields φ ⊩ ψ and ψ, Γ ⊩ Δ.

Having shown that ⊩ is a continuous sequent calculus, it is clear that o and k satisfy the conditions of Theorem 2.1 by definition, and consequently that L represents X. □


2.1.1 Examples

To show how these results can be used, we construct continuous sequent calculi that represent two well-known examples of stably compact spaces:

Example 2.3 [The Extended Positive Reals] We begin with a very simple space, the extended positive real numbers:

ℝ₊^∞ := {r ∈ ℝ | 0 ≤ r} ∪ {∞}

The topology is the Scott topology for the canonical order on ℝ₊^∞ with the new top element ∞.

We take L to be the term algebra over the non-negative rational numbers ℚ₊ and set

o(q) := ]q, ∞]        k(q) := [q, ∞]

There is a unique way to extend o and k to L such that they respect the algebraic structure; in particular we have to set o(⊥) = k(⊥) = ∅ and o(⊤) = k(⊤) = ℝ₊^∞.

It is clear that o and k satisfy the conditions of Theorem 2.2, as the rational numbers are dense in the real numbers. So the theorem tells us how we have to define ⊩_L such that L represents ℝ₊^∞.

We now want to give syntactic rules that generate the consequence relation ⊩_L. From Lemma 2.1 we know that we only have to cater for sequents made up exclusively from the generators of L. Since ℝ₊^∞ is linearly ordered, so are 𝒪(ℝ₊^∞) and 𝒦(ℝ₊^∞). This means that sequents relating generators arise from ones of the form φ ⊩ ψ by weakening. These are obviously generated by the single rule

q < r
---------
r ⊩ q

where we can assume that, whatever our syntactic representation of rational numbers, we can decide the inequality q < r.
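Since k(q) = [q, ∞] and o(q) = ]q, ∞], a sequent between generator tokens reduces to a single comparison of rationals. The following minimal Python sketch (the function name and the conventions for empty sides are our own, not notation from the text) decides such sequents semantically:

```python
from fractions import Fraction

def entails(gamma, delta):
    """Decide a generator sequent Γ ⊩ Δ for the calculus representing the
    extended positive reals.  Semantically Γ ⊩ Δ means
    ⋂{k(q) | q ∈ Γ} ⊆ ⋃{o(r) | r ∈ Δ}, i.e. [max Γ, ∞] ⊆ ]min Δ, ∞].
    An empty Γ stands for ⊤ (the whole space), an empty Δ for ⊥ (∅)."""
    if not delta:
        return False  # a non-empty ray is never inside the empty union
    if not gamma:
        return False  # the whole space is never inside a ray ]r, ∞]
    return min(delta) < max(gamma)

# The single rule "from q < r infer r ⊩ q" appears as the singleton case:
assert entails([Fraction(3)], [Fraction(2)])       # 2 < 3, so 3 ⊩ 2
assert not entails([Fraction(2)], [Fraction(2)])   # [2,∞] ⊄ ]2,∞]
```

Note that weakening is built in: adding tokens on the left can only increase max Γ, and adding tokens on the right can only decrease min Δ, so derivability is preserved.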

Example 2.4 [The Interval Domain] The interval domain can be seen as a model for computations that produce real numbers, in our case taken from the unit interval [0,1]. It is given by

𝕀 := {[x, y] | 0 ≤ x ≤ y ≤ 1},

the compact sub-intervals of [0,1], ordered by reverse inclusion. This yields a continuous Scott domain and hence, in particular, a stably compact space. The order of approximation is given by

[x, y] ≪ [x′, y′] ⟺ [x′, y′] ⊆ int([x, y]).

Here, the interior is taken in [0,1], and not in ℝ.

We construct a continuous sequent calculus to represent it by refining the previous example. We take the atomic formulae λ(q) and ρ(q), where q is a rational number such that 0 < q < 1, and call the term algebra generated by them L. The intended reading of the atomic formulae is as follows:

o(λ(q)) := {I ∈ 𝕀 | I ⊆ [0, q[} = ⇑[0, q]
o(ρ(q)) := {I ∈ 𝕀 | I ⊆ ]q, 1]} = ⇑[q, 1]
k(λ(q)) := {I ∈ 𝕀 | I ⊆ [0, q]} = ↑[0, q]
k(ρ(q)) := {I ∈ 𝕀 | I ⊆ [q, 1]} = ↑[q, 1]

As before there is a unique way of extending o and k to L that translates logical connectives into the corresponding set-theoretic operations.

The only condition of Theorem 2.2 that is non-trivial is density. First, note that we get o(λ(r) ∧ ρ(q)) = ⇑[q, r] and k(λ(r) ∧ ρ(q)) = ↑[q, r], i.e. we have tokens for all ⇑I and ↑I where I is an interval with rational endpoints. Now, let us suppose we are given O ∈ 𝒪(𝕀) and K ∈ 𝒦(𝕀) such that K ⊆ O. As 𝕀 is a continuous domain, the sets of the form ⇑I form a basis of the Scott topology, and since ℚ is dense in ℝ we can restrict ourselves to intervals with rational endpoints. By the compactness of K we get finitely many intervals I₁, …, Iₙ with ↑Iᵢ ⊆ O such that K ⊆ ⇑I₁ ∪ ⋯ ∪ ⇑Iₙ, and by our previous observations L contains the tokens to guarantee density.

One task remains to be done, namely to come up with syntactic rules for the continuous sequent calculus ⊩ that we can extract via Theorem 2.2. As in the previous example we only have to worry about sequents containing atomic formulae. Furthermore, as [0,1] is linearly ordered, we see that the interpretation of λ(q) ∧ λ(r) must be that of either λ(q) or λ(r), and similarly for λ(q) ∨ λ(r) and the corresponding terms involving ρ. This implies that any sequent that contains more than one λ-atom or more than one ρ-atom on either side of the turnstile is derivable from a simpler sequent by weakening.

We can go even further: the meaning of a sequent

ρ(q), λ(r) ⊩ ρ(q′), λ(r′)

is ↑[q, r] ⊆ ⇑[q′, 1] ∪ ⇑[0, r′], which is equivalent to the disjunction of [q, r] ⊆ ]q′, 1] and [q, r] ⊆ [0, r′[. This shows that we can restrict ourselves to singletons on the right and the left, unless the right-hand side is empty.

We sum up the discussion with the three sound and complete rules for our continuous sequent calculus L that represents the interval domain 𝕀:

q < r            q < r            q < r
-----------      -----------      --------------
λ(q) ⊩ λ(r)      ρ(r) ⊩ ρ(q)      λ(q), ρ(r) ⊩

2.2 Individual Constructions

Our next topic is domain constructions in logical form. We discuss a number of them in some detail to show how the techniques we have developed so far can be applied.

From now on we use the following convention: if the correspondence between a continuous sequent calculus L and a stably compact space X is understood, in particular if we say that L represents X, then we assume that we have already constructed the functions required by Theorem 2.1, and we refer to them as O⟦·⟧ and K⟦·⟧.

Let us begin with the most elementary domain construction:

2.2.1 Lifting

For any topological space X its lifting X⊥ := X ∪ {⊥} has one new point ⊥, which is the new bottom element. We can ensure this topologically by adding just one open set to the topology:

𝒪(X⊥) := 𝒪(X) ∪ {X⊥}.

Now suppose that L is a continuous sequent calculus that represents X. We let L⊥ be the algebra freely generated by L, but where we identify the old and the new falsum, ⊥ = ⊥′. Note that we distinguish between the old and the new verum, ⊤ ≠ ⊤′. We generate the logic for L⊥ from the one rule

Γ ⊩_L Δ
-----------
Γ ⊩_{L⊥} Δ

and set o(φ) := O⟦φ⟧ and k(φ) := K⟦φ⟧ for φ ∈ L. As in our previous examples we can extend o and k to all of L⊥, and we then have to check that the conditions of Theorem 2.2 are satisfied. But this is the case because of X⊥ = o(⊤′) = k(⊤′) and the fact that for all compact K ⊆ X we have

K ⊆ O⟦⊤⟧ = K⟦⊤⟧ = X ⊆ X⊥.

This completes the proof that L⊥ represents X⊥.

2.2.2 Sums

The sum or coproduct of topological spaces is simply the disjoint union with the topology that is generated by the disjoint union of the respective topologies.

This construction is also very easy because we have already done all the necessary work in Section 2.3: in MLS the coproduct L + M can be constructed from L and M using the rules given on page 77.

From Theorem 1.40 we know that MLS and the Kleisli category StCp^ are equivalent. Furthermore, the usual left adjoint from StCp to StCp^ (see [45, Chapter VI, Theorem 1]) is the identity on objects. This means in particular that coproducts in StCp^ and StCp are the same, since left adjoints preserve them. Thus, we get spec(L + M) ≅ spec(L) + spec(M), where the latter coproduct is taken in StCp.

It is also not hard to verify that the functions o and k are the extensions of O⟦·⟧ and K⟦·⟧ for L and M.

Remark. Semantic considerations along these lines can be used to see that slight changes to the rules given in Section 2.3 produce related sum constructions: without the rule

we get the lifted sum, and if we omit the side condition Γ ≠ ∅ for the first two rules given there we get the coalesced sum.

At the end of Section 2.3 we have seen that we can endow the coproduct with projections that also make it into a product. Hence, in MLS and all the equivalent categories finite products and finite coproducts are isomorphic. Semantically, the reason for this is easy to see if we consider how disjoint unions and the co-compact topology interact:

Lemma 2.5 If X and Y are stably compact spaces, then X_K + Y_K = (X + Y)_K holds.

Proof. This follows immediately from the following two observations: if K ⊆ X is compact saturated, then K ∪ Y is a compact saturated subset of X + Y. Starting with a compact saturated K ⊆ X + Y, the sets K ∩ X and K ∩ Y are again compact saturated. □

If L corresponds to X and M to Y, then we know from Theorem 1.30 that (L^op + M^op)^op corresponds to (X_K + Y_K)_K. This space is simply X + Y because of the previous lemma and the fact that taking the co-compact topology is an involution (Corollary 3.12).

Remark. There is another, more direct semantic argument that in StCp* finite products and coproducts are isomorphic. The injections from X and Y to X + Y are given by x I₀ x′ ⟺ x ⊑_X x′ and y I₁ y′ ⟺ y ⊑_Y y′. In the light of our discussion at the end of the previous chapter, we can also see them as the relations corresponding to the continuous functions given by the two subset inclusions. Given two closed relations F: X ⇸ Z and G: Y ⇸ Z, the unique mediating morphism D is given by

a D z :⟺ a F z, if a ∈ X;    a G z, if a ∈ Y,

where we assume without loss of generality that X and Y are disjoint.

These morphisms correspond directly to the MLS-morphisms constructed in Section 2.3, but it is also straightforward to check that they satisfy the universal property.

The projections from the product are very similar to the injections:

x P₀ x′ :⟺ x ⊑_X x′    and    y P₁ y′ :⟺ y ⊑_Y y′.

Note that these relations almost correspond to functions; they are, however, partial. It is readily checked that P₀ and P₁ are closed subsets of (X + Y) × X_K and (X + Y) × Y_K, respectively, and hence morphisms in StCp*. For two closed relations F: Z ⇸ X and G: Z ⇸ Y we define:

z ⟨F, G⟩ a :⟺ z F a, if a ∈ X;    z G a, if a ∈ Y.

This is clearly a morphism and uniquely determined by F and G, thus showing that X + Y is a product of X and Y.
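The pairing ⟨F, G⟩ can be phrased directly as executable code. In the small Python sketch below (entirely our own: relations are modelled as boolean-valued functions and elements of the sum as tagged pairs, which is an encoding assumption), the pair relation dispatches on the tag exactly as in the definition above:

```python
def pairing(F, G):
    """Form the relation z <F, G> a for relations F: Z -> X and G: Z -> Y:
    elements of X + Y are tagged (0, a) or (1, a), and the pairing
    dispatches on the tag."""
    return lambda z, tagged: F(z, tagged[1]) if tagged[0] == 0 else G(z, tagged[1])

# Instantiating F and G with an order relation (cf. the injections above,
# which are given by the orders of X and Y):
leq = lambda z, a: z <= a
d = pairing(leq, leq)
assert d(2, (0, 3))       # 2 is related to 3 in the X-component
assert not d(2, (1, 1))   # but not to 1 in the Y-component
```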

2.2.3 Products

We know from Proposition 3.4 that the product of two stably compact spaces X and Y is again stably compact. Let the continuous sequent calculi L and M represent X and Y, respectively. Similarly to sums we freely generate L ⊗ M from {0} × L ∪ {1} × M, the disjoint union of L and M. The intended reading of the tokens is:

o(0, φ) := O⟦φ⟧ × Y        o(1, ψ) := X × O⟦ψ⟧
k(0, φ) := K⟦φ⟧ × Y        k(1, ψ) := X × K⟦ψ⟧

The extensions of o and k to L ⊗ M clearly satisfy the first condition of Theorem 2.2. To show density suppose we have K ⊆ O ⊆ X × Y, where K is compact saturated and O open. For every point (x, y) ∈ K there are open sets U ∈ N(x) and V ∈ N(y) such that U × V ⊆ O. Hence, there are tokens φ_{xy} ∈ L and ψ_{xy} ∈ M satisfying x ∈ O⟦φ_{xy}⟧ ⊆ K⟦φ_{xy}⟧ ⊆ U and y ∈ O⟦ψ_{xy}⟧ ⊆ K⟦ψ_{xy}⟧ ⊆ V. As K is compact, finitely many of the sets

O⟦φ_{xy}⟧ × O⟦ψ_{xy}⟧ = o((0, φ_{xy}) ∧ (1, ψ_{xy}))

cover it. This means that we can find a token

χ := ((0, φ_{(xy)₁}) ∧ (1, ψ_{(xy)₁})) ∨ ⋯ ∨ ((0, φ_{(xy)ₙ}) ∧ (1, ψ_{(xy)ₙ}))

such that K ⊆ o(χ) ⊆ k(χ) ⊆ O.

As we have seen several times before, we can now extract ⊩_{L⊗M} and only have to come up with rules that generate enough sequents relating atomic formulae. To this end we consider a generic situation where a compact saturated set

K = (K₁ × Y) ∩ ⋯ ∩ (K_l × Y) ∩ (X × C₁) ∩ ⋯ ∩ (X × C_m)

is contained in an open set of the form

(U₁ × Y) ∪ ⋯ ∪ (U_n × Y) ∪ (X × V₁) ∪ ⋯ ∪ (X × V_p).

The set K is of the form K′ × C and hence has to be a subset of either (U₁ × Y) ∪ ⋯ ∪ (U_n × Y) or (X × V₁) ∪ ⋯ ∪ (X × V_p); let us call it O. Depending on which is the case, we get that either (K₁ × Y) ∩ ⋯ ∩ (K_l × Y) is already a subset of O, or (X × C₁) ∩ ⋯ ∩ (X × C_m) is. This means that because of weakening we get all the sequents that we are interested in from ones relating atomic formulae of the same kind. Hence, we can generate our logic by the two rules:

Γ ⊩_L Δ                      Γ ⊩_M Δ
------------------------     ------------------------
{0} × Γ ⊩_{L⊗M} {0} × Δ      {1} × Γ ⊩_{L⊗M} {1} × Δ

Note that this also takes care of extreme cases like Γ ⊩ ∅, which might be a problem because of ∅ × Y = ∅.
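The two product rules can be prototyped on top of any decision procedures for L and M. In the sketch below (our own; it hard-codes the observation just made, that up to weakening a mixed atomic sequent holds iff one of its two projections does) both factors are instantiated with the one-line generator entailment of Example 2.3:

```python
def product_entails(entails_L, entails_M, gamma, delta):
    """Tokens of L (x) M are tagged (0, phi) or (1, psi).  An atomic sequent
    holds iff its L-projection or its M-projection holds; this mirrors
    the two generating rules together with weakening."""
    g0 = [t for i, t in gamma if i == 0]
    g1 = [t for i, t in gamma if i == 1]
    d0 = [t for i, t in delta if i == 0]
    d1 = [t for i, t in delta if i == 1]
    return entails_L(g0, d0) or entails_M(g1, d1)

# generator entailment of the extended positive reals (Example 2.3):
ray = lambda g, d: bool(d) and bool(g) and min(d) < max(g)

assert product_entails(ray, ray, [(0, 3), (1, 5)], [(0, 2)])
assert not product_entails(ray, ray, [(0, 1)], [(0, 2), (1, 4)])
```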

2.2.4 The Smyth Power Domain

For a space X this power domain is simply 𝒦(X), i.e. the set of compact saturated subsets ordered by reverse inclusion, though usually without the compact top element ∅.

It has a very simple description which follows almost immediately from some of the results in Section 1.4: let us suppose that L represents a stably compact space X. Then Lemma 1.37 tells us that the sets □O⟦φ⟧, for φ ∈ L, form a basis for the Scott topology on 𝒦(X). From Lemma 1.5 we know that every compact saturated set in 𝒦(X) can be written as an intersection of finite unions of sets ↑K.

This suggests that we can essentially reuse the tokens from L. We freely generate P_S(L) from atomic formulae □φ, where φ ∈ L, and set:

o(□φ) := □O⟦φ⟧ = {K ∈ 𝒦(X) | K ⊆ O⟦φ⟧}
k(□φ) := □K⟦φ⟧ = ↑K⟦φ⟧ = {K ∈ 𝒦(X) | K ⊆ K⟦φ⟧}

We clearly have o(□φ) ⊆ k(□φ), and hence the first condition of Theorem 2.2 is satisfied for the extensions of o and k. For density we have to combine the observations from the previous paragraph. To do this we have to refine the proof of Lemma 1.37 slightly: let 𝒞 ⊆ 𝒦(X) be compact saturated and 𝒰 ⊇ 𝒞 Scott-open in 𝒦(X). For every K ∈ 𝒞 we find L′ ∈ 𝒰 and V such that L′ ≪ V ≪ K. By Lemma 2.17 this means that there is an open set O such that K ⊆ O ⊆ V, and thus we can find a token φ ∈ L satisfying K ⊆ O⟦φ⟧ ⊆ K⟦φ⟧ ⊆ L′. This implies K ∈ o(□φ) and k(□φ) ⊆ ↑L′ ⊆ 𝒰. As 𝒞 is compact, finitely many of the sets o(□φ) suffice to cover it and hence we get

𝒞 ⊆ o(□φ₁ ∨ ⋯ ∨ □φₙ) ⊆ k(□φ₁ ∨ ⋯ ∨ □φₙ) ⊆ 𝒰.

As before we can extract ⊩_{P_S(L)} via o and k, and we only have to characterise the sequents relating atomic formulae and come up with rules that generate enough such sequents. To begin with we observe that

k(□φ₁) ∩ ⋯ ∩ k(□φₙ) = □K⟦φ₁⟧ ∩ ⋯ ∩ □K⟦φₙ⟧ = □(K⟦φ₁⟧ ∩ ⋯ ∩ K⟦φₙ⟧).

This is a principal filter in 𝒦(X) and as such contained in a union of Scott-open sets o(□ψ) = □O⟦ψ⟧ if and only if it is contained in one of them already. We infer that up to weakening we can restrict ourselves to singletons on the right, disregarding for the moment the case of an empty right-hand side. Moreover, a containment □K = ↑K ⊆ □O, for K ⊆ X compact saturated and O ⊆ X open, holds if and only if K ⊆ O. This means that we need only one rule:

φ₁, …, φₙ ⊩_L ψ
---------------------------
□φ₁, …, □φₙ ⊩_{P_S(L)} □ψ

If we consider the lattice 𝒦(X), we never have □φ₁, …, □φₙ ⊩_{P_S(L)} with empty right-hand side, since the empty set lies in every □K. If we want to describe the usual Smyth power domain P_S(X) = 𝒦(X) \ {∅}, we need one extra rule that expresses □∅ = ∅:

φ₁, …, φₙ ⊩_L
---------------------------
□φ₁, …, □φₙ ⊩_{P_S(L)}
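Both rules lift a given decision procedure for ⊩_L uniformly. A Python sketch (our own; the function names and the single-right-token normalisation are assumptions justified by the weakening argument above):

```python
def smyth_entails(entails, boxed_gamma, boxed_delta):
    """Box phi_1, ..., phi_n entails Box psi iff phi_1, ..., phi_n entails
    psi in the base calculus; up to weakening a single Box psi on the right
    suffices.  An empty right-hand side (the extra rule for the power
    domain without the empty set) is inherited from the base calculus."""
    if not boxed_delta:
        return entails(boxed_gamma, [])
    return any(entails(boxed_gamma, [psi]) for psi in boxed_delta)

# instantiated with the generator entailment of Example 2.3:
ray = lambda g, d: bool(d) and bool(g) and min(d) < max(g)
assert smyth_entails(ray, [3, 4], [2])   # 3, 4 ⊩_L 2 lifts to boxes
assert not smyth_entails(ray, [3], [])   # ray sequents never have empty Δ
```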

2.3 Function Spaces

We begin with a brief exposition of the central results about topological function spaces, very much in the spirit of Chapter 1. It also contains the prerequisites for the next section on relation spaces. Most of the material follows the exposition in [42].

2.3.1 More Basics

Given two topological spaces X and Y we can endow Y^X, the set of all continuous functions from X to Y, with the compact-open topology. It has the sets

N(K, O) := {f: X → Y | f[K] ⊆ O},

for K ⊆ X compact and O ⊆ Y open, as a subbasis. In general this does not make Y^X into an exponential in Top. This is, however, the case if X is locally compact:

Proposition 2.6 For a locally compact space X the function space Y^X, endowed with the compact-open topology, is an exponential of X and Y.

Proof. We have to check a number of details that appear in the following diagram that defines exponentials:

Y^X × X --e--> Y,    f: Z × X → Y,    f̃: Z → Y^X with e ∘ (f̃ × id_X) = f.

The evaluation e is given by e(f, x) := f(x). Let us prove that it is continuous: for f(x) ∈ O open, we find a compact neighbourhood K of x such that f[K] ⊆ O, since f is continuous and X locally compact. This immediately implies e[N(K, O) × K] ⊆ O and (f, x) ∈ N(K, O) × K.

It is clear that we must define f̃ by f̃(z) := λx. f(z, x). So, we only have to show that for all z the function f̃(z) is continuous and, moreover, that f̃ itself is continuous. The former is easy, as we can write f̃(z) as a composition of continuous functions

X ≅ {z} × X ↪ Z × X --f--> Y.

For the latter suppose f̃(z) ∈ N(K, O). This implies that for all x ∈ K we have f(z, x) ∈ O, and by continuity of f we find neighbourhoods U_x ∈ N(z) and V_x ∈ N(x) such that f[U_x × V_x] ⊆ O. As the set K is compact, finitely many V_{x₁}, …, V_{xₙ} cover it, and for the corresponding neighbourhoods U_{xᵢ} in Z we get f̃[U_{x₁} ∩ ⋯ ∩ U_{xₙ}] ⊆ N(K, O). □

Remark. One can generalise this result to core-compact spaces, i.e. spaces whose sobrification is locally compact, using the Isbell topology. The proposition also has a converse: the compact-open topology (or the Isbell topology in the non-sober case) is the only candidate for a topology on the exponential, i.e. if it does not yield one, then the two spaces do not have an exponential in Top.

Unfortunately, the compact-open topology is in general not locally compact. This is one reason why it is hard to find cartesian closed categories of topological spaces. Continuous domains are always locally compact, and so the question arises how the exponentials taken in Top and those taken in DCPO are related. Furthermore, to construct a CCC of domains one has to come up with a class of domains such that exponentiation does not destroy continuity. We mentioned several such classes in Section 1.2.

Our focus is on sober spaces, and we know from Proposition 2.9 that they form dcpo's with respect to their specialisation orders. The following lemma shows that this order is well behaved with respect to taking exponentials.

Lemma 2.7 The specialisation order on Y^X is the extensional order of functions, i.e. f ⊑ g ⟺ (∀x ∈ X) f(x) ⊑ g(x).

Proof. Let us call the extensional order ⊑_e and the specialisation order with respect to the compact-open topology ⊑_s. If we have f ⊑_e g and f[K] ⊆ O, where K is compact and O is open, we get g[K] ⊆ ↑f[K] ⊆ O and thus f ⊑_s g.

Conversely, suppose f ⊑_s g and f(x) lies in an open set O. We get f[↑x] ⊆ O and hence f ∈ N(↑x, O), which implies g ∈ N(↑x, O) and thus g(x) ∈ O. This proves f ⊑_e g. □

If Y is a continuous Scott domain with the Scott topology, then Y^X gets most of its structure from that space. In particular, we will see that it is again a continuous Scott domain and that the Scott topology on Y^X agrees with the compact-open topology.

The most important tool for understanding such an exponential as a domain are step functions. Given an open set O ⊆ X and a point y ∈ Y we define:

(O ↘ y)(x) := y, if x ∈ O;    ⊥, otherwise.

Lemma 2.8 Let f be a continuous function from a space X to a pointed continuous domain Y equipped with the Scott topology, O ∈ 𝒪(X) and y ∈ Y. If O ≪ f⁻¹[↟y], then (O ↘ y) ≪ f.

Proof. Suppose we have f ⊑ ⨆↑ᵢ gᵢ. From Lemma 2.10 and the comment following it we know that the preimage of a Scott-open set under a directed supremum of continuous functions is just the directed union of the preimages under the individual functions. This means that we get

O ≪ f⁻¹[↟y] ⊆ (⨆↑ᵢ gᵢ)⁻¹[↟y] = ⋃↑ᵢ gᵢ⁻¹[↟y]

and thus O ⊆ gᵢ⁻¹[↟y] for an index i. This implies (O ↘ y) ⊑ gᵢ and thus (O ↘ y) ≪ f. □
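Step functions and their pointwise suprema are easy to experiment with. A small Python sketch (our own modelling choices: open sets as membership predicates, Y = ([0, ∞), ≤) with ⊥ = 0.0):

```python
def step(O, y, bottom=0.0):
    """The step function (O step-to y): value y on the open set O,
    bottom elsewhere.  The open set is modelled as a predicate."""
    return lambda x: y if O(x) else bottom

def sup(fns):
    """Pointwise supremum of finitely many functions into ([0, inf), <=)."""
    return lambda x: max(f(x) for f in fns)

# Approximating f(x) = x on [0, 1] from below by two step functions, as in
# the lemma: each step function with O inside the preimage of the way-up
# set of y stays (way) below f.
f = sup([step(lambda x: x > 0.5, 0.5), step(lambda x: x > 0.25, 0.25)])
assert f(0.75) == 0.5 and f(0.3) == 0.25 and f(0.1) == 0.0
```

Adding more step functions with rational parameters refines the approximation, mirroring the (non-directed) supremum used in the proof of the next proposition.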

Proposition 2.9 For a locally compact space X and a pointed continuous domain Y with the Scott topology the following hold:

(i) If Y is a Scott domain then so is Y^X.

(ii) If the exponential Y^X is a continuous stably compact domain, then the Scott topology and the compact-open topology agree.

Proof. We begin by showing that every continuous function f: X → Y is the supremum of step functions (U ↘ z) with U ≪ f⁻¹[↟z]. For any x ∈ X and y ≪ f(x) we have x ∈ f⁻¹[↟y], and as X is locally compact there is an open set O such that x ∈ O ≪ f⁻¹[↟y]. This implies that the supremum

⨆{(U ↘ z)(x) | U ≪ f⁻¹[↟z]}

is larger than y, and, as Y is continuous, that this supremum is in fact f(x). Hence, f is the supremum of such step functions, and by the previous lemma we know that these step functions approximate f. In general, however, this supremum is far from directed.

Next, we check that Y^X has bounded suprema if Y does. The constant bottom function is the least element of the exponential, and for f and g bounded we construct the pointwise supremum f ⊔ g as follows: we add a compact top element to Y to get the continuous lattice Y^⊤ and call the embedding e: Y ↪ Y^⊤. Taking binary suprema on Y^⊤ is clearly Scott-continuous, as directed suprema commute with binary suprema. Now, we consider

X --⟨f, g⟩--> Y × Y --e×e--> Y^⊤ × Y^⊤ --⊔--> Y^⊤

and see that if f and g are bounded, then this map co-restricts to Y. This yields precisely f ⊔ g which, as a composition of continuous maps, is continuous and hence the supremum in Y^X.

In any dcpo we have that x, y ≪ z implies x ⊔ y ≪ z, provided the supremum exists. This implies that for a Scott domain Y any function f: X → Y is the directed supremum of approximating functions, namely of the finite suprema of step functions as constructed above. This shows that Y^X is a continuous Scott domain.

It remains to prove that under the conditions of the proposition the two topologies on the function space coincide. From Lemma 2.7 we infer that the subbasic open neighbourhoods N(K, O) are upper sets. To show that they are Scott-open, suppose K ⊆ (⨆↑ᵢ fᵢ)⁻¹[O] = ⋃↑ᵢ fᵢ⁻¹[O]. Then compactness implies K ⊆ fᵢ⁻¹[O] for an index i and hence fᵢ ∈ N(K, O).

For the other direction note that the sets {f ∈ Y^X | O ≪ f⁻¹[↟y]} are Scott-open because of the interpolation property in 𝒪(X). We claim that moreover these sets form a subbasis of the Scott topology: suppose f lies in the Scott-open set U ⊆ Y^X. We know that f is the supremum of step functions (O ↘ y), where O ≪ f⁻¹[↟y], or equivalently

↑f = ⋂{↑(O ↘ y) | O ≪ f⁻¹[↟y]} ⊆ U.

As Y^X is stably compact by assumption, we get from Corollary 2.14 that the intersection of finitely many such sets ↑(O ↘ y) is already contained in U. The finite intersection of the corresponding Scott-open sets {g | O ≪ g⁻¹[↟y]} ⊆ ↑(O ↘ y) is a neighbourhood of f that is a subset of U. Hence, it suffices to consider such Scott-open sets. If f satisfies O ≪ f⁻¹[↟y], then local compactness allows us to find a compact saturated set K such that O ⊆ K ⊆ f⁻¹[↟y], which implies f ∈ N(K, ↟y) ⊆ ↑(O ↘ y): the function f clearly belongs to N(K, ↟y), and for any g ∈ N(K, ↟y) we get O ⊆ K ⊆ g⁻¹[↟y] and thus (O ↘ y) ≪ g by the previous lemma. □

A continuous Scott domain is stably compact. Thus, for a locally compact space X and a Scott domain Y the exponential Y^X is a Scott domain and carries the Scott topology. The reason for formulating the proposition in a slightly more general way is that it also covers the case of function spaces between pointed FS-domains.

Remark. We could relax the condition that Y has to be pointed by using the techniques described in [5, Section 4.3.2] and [29, Chapter 3]. As this extra generality is not necessary for the further development of the theory, we avoid the technical complications that it would introduce.

2.3.2 The Function Space Construction

Suppose L and M are continuous sequent calculi representing the stably compact spaces X and Y. The space X is locally compact by definition, and hence the exponential Y^X in Top exists because of Proposition 2.6. Looking at the definition of the compact-open topology gives us a hint for a candidate for the continuous sequent calculus corresponding to the function space.

We take terms (ф —ф), for ф E L and ф E M, as basic tokens and let [L—yM] be the algebra freely generated by it. The intended meaning of such a token is that it stands for the set of functions mapping the set represented by ф into that represented by ф. However, each token x denotes an open set О [x] and a compact saturated set K[x]; on the other hand we have to provide an open and a compact saturated reading of the new token (ф —у ф). The following interpretations of the new tokens suggest themselves:

o(φ → ψ) := {f: X → Y | f[K[φ]] ⊆ O[ψ]} = N(K[φ], O[ψ])

k(φ → ψ) := {f: X → Y | f[O[φ]] ⊆ K[ψ]}

It is clear from the definition that the interpretation o(φ → ψ) is open. Also note that the expression N(A, B) := {f: X → Y | f[A] ⊆ B} (we use this notation, by slight abuse, even if A is not compact and B not open) is antitone in A and monotone in B with respect to subset inclusion. From this observation and the fact that O[x] ⊆ K[x] holds for every formula x in any continuous sequent calculus we get o(φ → ψ) ⊆ k(φ → ψ), the sanity condition of Theorem 2.2.
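As a quick illustration, the two readings and the sanity condition can be checked mechanically on a tiny finite example. The following Python sketch (all names are ours, not the author's) takes X = Y to be the Sierpiński space and computes o and k for a single token whose open and compact interpretations both happen to be {1}:

```python
from itertools import product

# X = Y = Sierpinski space: points {0, 1}, order 0 below 1, opens {}, {1}, {0, 1}.
points = [0, 1]

def continuous_maps():
    # On this finite poset the continuous maps are exactly the monotone ones.
    for f0, f1 in product(points, repeat=2):
        if f0 <= f1:                     # preserve 0 below 1
            yield {0: f0, 1: f1}

def N(A, B, fs):
    # N(A, B) = {f | f[A] is contained in B}, used for both readings
    return [f for f in fs if all(f[a] in B for a in A)]

fs = list(continuous_maps())
# One token (phi -> psi) with O[phi] = K[phi] = {1} and O[psi] = K[psi] = {1}.
o = N({1}, {1}, fs)   # o(phi -> psi) = {f | f[K[phi]] subset of O[psi]}
k = N({1}, {1}, fs)   # k(phi -> psi) = {f | f[O[phi]] subset of K[psi]}
assert all(f in k for f in o)            # sanity condition: o is contained in k
# N is antitone in the first argument: enlarging K[phi] shrinks the set.
assert len(N({0, 1}, {1}, fs)) <= len(N({1}, {1}, fs))
```

In this degenerate example o and k even coincide; in general only the inclusion o ⊆ k is guaranteed.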

The following proposition shows that from the open point of view we have defined the right continuous sequent calculus together with the right open semantics.

Proposition 2.10 The sets o(φ → ψ) with φ ∈ L and ψ ∈ M form a subbasis of the compact open topology on Y^X.

Proof. Suppose f ∈ N(K, O) for a compact saturated set K ⊆ X and an open set O ⊆ Y. Then we infer K ⊆ f⁻¹[O], and by Theorem 2.1(3) we can find a token φ ∈ L such that K ⊆ O[φ] ⊆ K[φ] ⊆ f⁻¹[O]. This implies f[K[φ]] ⊆ O, and so we can apply the same theorem again to get a token ψ in M such that f[K[φ]] ⊆ O[ψ] ⊆ K[ψ] ⊆ O. Putting it all together we get f ∈ o(φ → ψ) ⊆ N(K, O), where the subset inclusion follows from the fact that N(·, ·) is antitone in the first and monotone in the second argument. □

For a more restricted case we will prove a stronger property in Theorem 2.12, namely density in the sense of Theorem 2.2(2). But prior to this we consider the compact reading of a token (φ → ψ). Unfortunately, here the situation is not as nice as in the open case. The sets k(φ → ψ) are clearly saturated, but in general they fail to be compact, as the following example illustrates.

Take I := [0, 1] to be the unit interval with the usual topology. It is a compact Hausdorff space and hence, in particular, stably compact. As I is both open and compact we have N(I, I) = I^I, and this corresponds to both an open and a compact reading. The functions

f_i(x) = 0         if x > 1/i
f_i(x) = 1 − ix    otherwise

converge point-wise to the constant zero function with the exception of the argument 0, where the limit value is 1. As the compact open topology is finer than the topology of point-wise convergence, there cannot be a subnet that converges to a continuous function in I^I, which thus cannot be compact.
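The failure can be observed numerically. This small Python sketch (helper names are ours, not the author's) checks that the functions above converge pointwise to the discontinuous limit while each f_i stays at sup-distance close to 1 from it:

```python
# The functions f_i from the text and their pointwise limit.
def f(i, x):
    return 0.0 if x > 1.0 / i else 1.0 - i * x

def limit(x):                       # constantly zero except at the argument 0
    return 1.0 if x == 0.0 else 0.0

# Pointwise convergence: for each fixed x the values eventually agree.
for x in [0.0, 0.01, 0.5, 1.0]:
    assert abs(f(10**6, x) - limit(x)) < 1e-9

# But the convergence is not uniform: f_i is far from the limit in the
# sup-norm, witnessed by points just to the right of 0.
i = 100
xs = [j / 10**5 for j in range(10**5 + 1)]
assert max(abs(f(i, x) - limit(x)) for x in xs) > 0.99
```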

In a way the counterexample is not very surprising: the fact that exponentials of locally compact spaces are in general not locally compact is the reason why it is so hard to find cartesian closed subcategories of Top. It makes clear, however, that here, for the first time, we need extra assumptions about the continuous sequent calculi or the spaces they represent.

In spite of this observation, we can prove the compactness of k(φ → ψ) under additional assumptions. The following proposition is given in a general form so as to be applicable to two situations: the function space between domains and the relation space construction of the next section.

Theorem 2.11 Let X be locally compact and Y a pointed continuous domain such that the exponential Y^X is stably compact. For an open O ⊆ X and a compact saturated K ⊆ Y the set

N(O, K) = {f: X → Y | f[O] ⊆ K}

is compact if either X is a continuous domain or K is of the form ↑y.

Proof. The idea of the proof is the following: we show that the set N(O, K) is the intersection of finitely generated compact saturated sets, i.e. sets ↑M where M ⊆_fin Y^X. Because we assume that Y^X is stably compact, this implies that N(O, K) is itself compact.

To this end, suppose f ∉ N(O, K), which implies that there is an x ∈ O such that f(x) ∉ K. If K is a principal filter ↑y, then we consider the step function (O ↘ y) and instantly see f ∉ ↑(O ↘ y) ⊇ N(O, ↑y).

In the other case, where K is an arbitrary compact saturated set but X is a domain, we proceed as follows. By Lemma 1.5 the compact set K has a neighbourhood ↑{m₁, ..., mₙ} that does not contain f(x). Now pick an x' ∈ O satisfying x' ≪ x and consider the finite set of step functions M := {(↟x' ↘ m₁), ..., (↟x' ↘ mₙ)}, where ↟x' denotes the Scott-open set of points way above x'. We claim f ∉ ↑M ⊇ N(O, K): the function f is clearly not in ↑M as it maps x to a point that is not above any of the mᵢ. For the second part of the claim, suppose g ∈ N(O, K). Then g(x') must be larger than at least one mᵢ, which implies g ⊒ (↟x' ↘ mᵢ).

In either case this implies that the intersection of the sets ↑M ⊇ N(O, K) with M finite is exactly N(O, K). □

Note that the theorem applies in particular if X and Y are Scott domains or pointed FS domains. Moreover, as an immediate corollary we get that if the continuous sequent calculi L and M represent such spaces, then the compact interpretation k(φ → ψ) of any basic token (φ → ψ) is indeed compact saturated.

Remark. It is not clear at the moment under which exact conditions the saturated set N(O, K) is compact. For the approach of the above proof to work we have to assume that Y^X is supersober. The domain structure of Y is only needed to guarantee that for every point y that does not lie in a compact set K there is a finitely generated compact set containing K but not y. This appears to be related to Y being a quasicontinuous domain (see [20]). For X we seem to need the property that for an open neighbourhood U of x there is an x' ∈ U such that x ∈ int(↑x'), which, for example, is clearly not the case if X is Hausdorff.

We now return to density for o(·) and k(·), i.e. property (2) of Theorem 2.2.

Theorem 2.12 Let L and M be continuous sequent calculi representing stably compact spaces X and Y such that Y has a least element and Y^X is a stably compact continuous domain. For an open O ⊆ Y^X and K ∈ K(Y^X) with K ⊆ O there is a formula χ ∈ [L→M] such that K ⊆ o(χ) ⊆ k(χ) ⊆ O.

Proof. We begin by simplifying the problem. By Lemma 1.5 we can find finitely many functions f₁, ..., fₙ such that K ⊆ ↑{f₁, ..., fₙ} ⊆ O. Suppose we produce formulae χ₁, ..., χₙ such that ↑fᵢ ⊆ o(χᵢ) ⊆ k(χᵢ) ⊆ O, for i = 1, ..., n. Then we get

K ⊆ ↑f₁ ∪ ⋯ ∪ ↑fₙ ⊆ o(χ₁ ∨ ⋯ ∨ χₙ) ⊆ k(χ₁ ∨ ⋯ ∨ χₙ) ⊆ O

which proves the theorem. Hence, it suffices to show that for any f ∈ Y^X and any open set O ⊆ Y^X containing f there is a formula χ such that ↑f ⊆ o(χ) ⊆ k(χ) ⊆ O.

Let us therefore assume we are given such a function f ∈ O ∈ Ω(Y^X). Then for every x ∈ X and every y ≪ f(x) we can find a ψ ∈ M satisfying

f(x) ∈ O[ψ] ⊆ K[ψ] ⊆ ↟y,

where ↟y denotes the set of points way above y. We get x ∈ f⁻¹[O[ψ]] ∈ Ω(X) and hence another formula φ ∈ L such that

x ∈ O[φ] ⊆ K[φ] ⊆ f⁻¹[O[ψ]].

We now claim:

f ∈ o(φ → ψ) ⊆ k(φ → ψ) ⊆ ↑(O[φ] ↘ y)    (1)

The only thing that is not obvious is the last containment. To check it, note that g ∈ k(φ → ψ) is by definition equivalent to g[O[φ]] ⊆ K[ψ] ⊆ ↟y. This implies y ≪ g(x) for all x ∈ O[φ], which proves (O[φ] ↘ y) ⊑ g.

Repeating the argument from the proof of Proposition 2.9 we see that f is the supremum of such step functions (O[φᵢ] ↘ yᵢ), or equivalently ⋂ᵢ ↑(O[φᵢ] ↘ yᵢ) = ↑f. Since we assumed that Y^X is stably compact, we can invoke Corollary 2.14 and infer that there are finitely many such functions that already satisfy

f ∈ ↑(O[φ₁] ↘ y₁) ∩ ⋯ ∩ ↑(O[φₙ] ↘ yₙ) ⊆ O.

Together with equation (1) this implies

f ∈ o((φ₁ → ψ₁) ∧ ⋯ ∧ (φₙ → ψₙ)) ⊆ k((φ₁ → ψ₁) ∧ ⋯ ∧ (φₙ → ψₙ)) ⊆ O

which concludes the proof. □

For continuous sequent calculi L and M satisfying the premises of the theorem this is a strengthening of Proposition 2.10. It implies that the sets o(φ → ψ) form a basis of the topology on Y^X, that the complements of the sets k(φ → ψ) form a basis for the co-compact topology, and moreover that they do so in a joint way.

Putting the last two theorems together we see that under reasonably mild conditions, for example if L and M represent pointed FS domains, the algebra [L→M] satisfies the properties of Theorem 2.2. Hence, it can be equipped with a continuous consequence relation ⊩ such that it represents the exponential Y^X of the corresponding spaces.

Unfortunately, this does not mean that we have a function space construction in logical form. For this we lack a syntactic description of the sequents Γ ⊩ Δ relating atomic formulae. Even under more restrictive conditions, such as the case of continuous Scott domains, it is not clear how to proceed.

It is instructive to compare this to the situation in Abramsky's original work [4]. It is at this point that the primality predicate C is needed. We always have:

(φ → (ψ ∧ χ)) ≈ (φ → ψ) ∧ (φ → χ)  and  ((φ ∨ ψ) → χ) ≈ (φ → χ) ∧ (ψ → χ)

For disjunction on the right and conjunction on the left we only get

(φ → (ψ ∨ χ)) ≥ (φ → ψ) ∨ (φ → χ)  and  ((φ ∧ ψ) → χ) ≥ (φ → χ) ∨ (ψ → χ)

which is a consequence of (· → ·) being antitone in the first and monotone in the second argument. The first inequality becomes an equivalence if C(φ) holds.

As we observed in the discussion leading up to Proposition 2.24, the compact open subsets in a continuous domain are precisely those of the form ↑M, where M is a finite set of compact elements. The ∨-irreducible elements in KΩ(X) for a stably compact algebraic domain X are thus simply the principal filters ↑x. In effect, the C-predicate says that a formula corresponds to a point in the domain. Given that one of the slogans underlying locales and domains in logical form is to consider open sets, or properties, as primary objects and points as derived ones, this is a bit strange. For algebraic domains this is, however, justified by the fact that compactness of an element is an intrinsic property, unlike being a member of an arbitrarily chosen basis for a domain. Thus the compact elements give rise to a canonical subbasis of the Scott topology. Moreover, in applications they typically correspond to finite objects, and thus, even from a constructivist point of view, the nature of their existence is not in doubt.

In the continuous case points are more problematic. It is also not clear to what extent formulae whose compact interpretation is a point and whose open reading is a Scott-open filter can be used as a substitute for formulae satisfying Abramsky's C-predicate.

Surprisingly, the situation for the relation space is much nicer. This is the topic of the next section.

2.4 Relation Spaces

Let us take two continuous sequent calculi L and M corresponding to the topological spaces X and Y and consider the hom-set MLS(L, M), or equivalently StCp*(X, Y) ≅ StCp(X, K(Y)). From the last description we can see that we can endow it with the compact open topology and make it into a topological space again; in fact, this space [X⇒Y] turns out to be stably compact. Unfortunately, this does not give rise to a cartesian closed structure, nor to a symmetric monoidal closed one. Nonetheless, it is not just an arbitrary object, and we will explore the universal property it satisfies. This relation space also has a nice description in syntactic terms.

2.4.1 Closure

We begin by showing that StCp* is not cartesian closed. It has a zero object 0, namely the singleton {*}. Moreover, it is non-trivial, in the sense that it has morphisms that are different from the identities on the objects. The claim now follows immediately from the following well-known general observation:

Lemma 2.13 In a cartesian closed category with a zero object all objects are isomorphic.

Proof. The zero object 0 is both initial and terminal, and hence we have X × 0 ≅ X for all objects X. If the product functor X × − has a right adjoint, it preserves colimits and in particular 0. This implies that for all objects X we get X ≅ X × 0 ≅ 0. □

A candidate for a symmetric monoidal structure suggests itself: the product taken in the category of continuous functions rather than in the category of relations. We have studied the construction of L ⊗ M in Section 2.2 and have seen that this continuous sequent calculus corresponds to X × Y, where the latter is the cartesian product with the product topology. To avoid confusion with the categorical product, which is the same as the coproduct, we also refer to this space as X ⊗ Y.

Similar to the cartesian product in Rel, this tensor gives rise to a symmetric monoidal closed structure on StCp*, albeit a rather trivial one. We have StCp*(X ⊗ Y, Z) ≅ StCp*(X, Y_K ⊗ Z), as is readily checked.

In the following we are more interested in the relation space construction, although it cannot have a left adjoint: as we know, in StCp* finite products and coproducts agree, and we can easily construct finite counterexamples showing that [X⇒−] does not preserve products. For example, with 2 the Sierpiński space,

[1⇒1 + 2] ≅ StCp*(1, 1 + 2) ≅ StCp(1, K(1 + 2)) ≅ K(1 + 2)

has six elements, but on the other hand

[1⇒1] + [1⇒2]

has only 2 + 3 = 5.

Nonetheless, X ⊗ − and [X⇒−] are almost adjoint, as we will see later. In fact, [X⇒Y] is a weak exponential, in the sense that there is an evaluation relation, and for every morphism R: Z ⊗ X ⇸ Y there is a closed relation R̄: Z ⇸ [X⇒Y] (though not necessarily a unique one) making the relevant diagram commute. The relation R̄ turns out to be a function, and as a function it is unique. Note that this is precisely the same situation as for the relation space in Rel, the category of sets and relations. We will come back to this universal property once we have set up the necessary machinery.

2.4.2 The Topological Relation Space

As we have already discussed at the beginning of this section, the hom-set StCp*(X, Y) is isomorphic to StCp(X, K(Y)), and hence we can equip it with the topology inherited from the compact open topology on K(Y)^X. We call this space [X⇒Y], and by Proposition 2.9 it is a continuous Scott domain with the Scott topology, which implies, in particular, that it is stably compact because of Proposition 3.19.

Let us rephrase the topology of this space in terms of relations rather than functions into the Smyth power domain. If O ⊆ Y is open, then the set □O = {K ∈ K(Y) | K ⊆ O} is Scott-open by Lemma 2.17. Hence we get the open neighbourhoods

N(K, □O) = {f: X → K(Y) | f[K] ⊆ □O}
         = {R: X ⇸ Y | (∀x ∈ K)(x R y implies y ∈ O)} = {R: X ⇸ Y | R[K] ⊆ O}

and we will soon prove that they form a subbasis of the topology.

For an open set O ⊆ X and K ⊆ Y compact saturated we can also consider the sets

N(O, □K) := {R: X ⇸ Y | R[O] ⊆ K},

where we use □K to denote {L ∈ K(Y) | L ⊆ K} by slight abuse of notation. As this set □K is simply ↑K taken in K(Y), where the order is reverse inclusion, Theorem 2.11 applies and we see that N(O, □K) is compact. Next we study the order structure on the relation space:

Proposition 2.14 The specialisation order on the space [X⇒Y] is reverse inclusion. Furthermore, arbitrary suprema and finite infima are given by intersection and union, respectively.

Proof. By Lemma 2.7, the inequality R ⊑ S holds for relations from X to Y if and only if for the corresponding functions f_R, f_S: X → K(Y) we have

(∀x ∈ X) f_R(x) ⊇ f_S(x).

This is equivalent to R ⊇ S.

The second claim follows immediately from the fact that closed sets are closed under intersections and finite unions. □
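On a finite example one can watch the proposition at work. The sketch below (a toy encoding of ours, not the author's formalism) takes X = Y to be the Sierpiński space, computes the specialisation preorder from the subbasic opens N(K, □O), and confirms that it is reverse inclusion:

```python
# X = Y = Sierpinski space; relations are sets of pairs.
K_sets = [set(), {1}, {0, 1}]        # compact saturated subsets of X
O_sets = [set(), {1}, {0, 1}]        # open subsets of Y

def in_N(R, K, O):
    # the subbasic open N(K, box O) = {R | R[K] subset of O}
    return {y for (x, y) in R if x in K} <= O

def below(R, S):
    # specialisation preorder: every subbasic open containing R contains S
    return all(in_N(S, K, O) for K in K_sets for O in O_sets if in_N(R, K, O))

R1 = {(0, 0), (0, 1), (1, 1)}
R2 = {(1, 1)}
assert R2 <= R1                              # R2 is the smaller relation ...
assert below(R1, R2) and not below(R2, R1)   # ... hence the larger one in the order
```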

Before we can tackle the universal property of the relation space we first have to turn ⊗ into a bifunctor. To this end let us take two closed relations R: X ⇸ X' and S: Y ⇸ Y'. We define R ⊗ S: X ⊗ Y ⇸ X' ⊗ Y' by:

(x, y) (R ⊗ S) (x', y') :⇔ x R x' and y S y'

This assignment is clearly functorial; the only question is whether it defines a closed relation. If x and x' are not related by R, we find O ∈ Ω(X) and K ∈ K(X') such that (O × (X' \ K)) ∩ R = ∅. This implies

((O × Y) × ((X' \ K) × Y')) ∩ (R ⊗ S) = ∅,

where (X' \ K) × Y' = (X' × Y') \ (K × Y') is the complement of a compact saturated set, since Y' is compact. The case where y and y' are not related by S is argued analogously.

As we have discussed before, we can think of a continuous function f: X → Y as a relation, namely its hypergraph F ⊆ X × Y, by setting x F y :⇔ f(x) ⊑ y. This defines a faithful functor ι: StCp → StCp*. Using this embedding we can see that ⊗ extends the product bifunctor on StCp:

For f: X → X' and g: Y → Y' we get f × g: X × Y → X' × Y', and its embedding is given by:

(x, y) (ι(f × g)) (x', y') ⇔ (f(x), g(y)) ⊑ (x', y')

We can also embed f and g first and then compute ι(f) ⊗ ι(g), which yields the exact same relation since the order on the product is component-wise. This shows that the following diagram of functors commutes:

              ×
   StCp²  ---------->  StCp
     |                   |
     | ι²                | ι
     v                   v
  (StCp*)² ---------->  StCp*
              ⊗

As a final preliminary consideration we study the composition of a continuous function f, or more precisely of the hypergraph of f, with a closed relation R. In the following we will use crossed arrows (⇸) for closed relations and normal ones for continuous functions considered as relations. We claim that the composite of

X --f--> Y --R--⇸ Z

is simply given by

x (f ; R) z ⇔ f(x) R z

where, as before, we write the composition of relations as the usual relational product, i.e. from left to right. The composition certainly contains all these pairs, and for the other inclusion it suffices to observe that f(x) ⊑ y R z implies f(x) R z.
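The hypergraph embedding, the tensor and this composition can all be written out concretely for finite posets. The following Python sketch (hypothetical encoding) verifies on the two-point chain that ι(f × g) = ι(f) ⊗ ι(g) and that composing a hypergraph with a closed relation behaves as claimed:

```python
from itertools import product

# Relations as sets of pairs; composition is the relational product,
# written left to right as in the text.
def compose(R, S):
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def tensor(R, S):
    # (x, y) (R tensor S) (x2, y2)  iff  x R x2 and y S y2
    return {((x, y), (x2, y2)) for (x, x2) in R for (y, y2) in S}

def hypergraph(f, dom, cod, leq):
    # x F y  iff  f(x) below y
    return {(x, y) for x in dom for y in cod if leq(f(x), y)}

# Two-point chain 0 below 1, with f = g = identity.
dom = [0, 1]
leq = lambda a, b: a <= b
f = g = lambda x: x

# Embedding the product map f x g directly ...
prod_dom = list(product(dom, dom))
prod_leq = lambda p, q: leq(p[0], q[0]) and leq(p[1], q[1])
fxg = lambda p: (f(p[0]), g(p[1]))
emb_of_product = {(p, q) for p in prod_dom for q in prod_dom if prod_leq(fxg(p), q)}

# ... agrees with tensoring the embeddings, as the order is component-wise.
assert emb_of_product == tensor(hypergraph(f, dom, dom, leq), hypergraph(g, dom, dom, leq))

# The composition f ; R of a hypergraph with a closed relation:
R = {(x, z) for x in dom for z in dom if x <= z}
assert compose(hypergraph(f, dom, dom, leq), R) == \
       {(x, z) for x in dom for z in dom if f(x) <= z}
```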

Theorem 2.15 The relation space construction [X⇒−] is right adjoint to the functor ι ∘ (X × −) = (X ⊗ −) ∘ ι.

Proof. We begin by defining the evaluation relation ε: [X⇒Y] ⊗ X ⇸ Y:

(R, x) ε y :⇔ x R y

To see that this is indeed a morphism, suppose x and y are not related by R. The relation R is closed in X × Y_K by assumption, and these two spaces are locally compact. Hence we find compact neighbourhoods k ∋ x and l ∋ y such that (k × l) ∩ R = ∅. Note that l is a neighbourhood with respect to the co-compact topology on Y. From this we immediately get

(N(k, □(Y \ l)) × k × l) ∩ ε = ∅

and the product is a neighbourhood of (R, x, y). This proves that ε is closed in [X⇒Y] × X × Y_K.

Now we have to verify the details of the universal property indicated in the following diagram:

       Z ⊗ X --------R--------⇸ Y
          |                     ^
 ι(R̄) ⊗ X |                     | ε
          v                     |
      [X⇒Y] ⊗ X ----------------+

We define the function R̄ by:

x (R̄(z)) y :⇔ (z, x) R y

Rather than checking all the details 'by hand' we use the fact that certain exponentials exist in StCp: the relation R corresponds to a function f_R: X × Z → K(Y). Now, X is locally compact and hence exponentiable, which means that f_R has an exponential transpose. This transpose is a continuous function from Z to K(Y)^X, namely R̄ up to the isomorphism K(Y)^X ≅ [X⇒Y]. This shows both that for all z ∈ Z the relation R̄(z) is closed in X × Y_K and that R̄ is continuous.

It follows from the discussion preceding the theorem that the function R̄ is uniquely determined. □

2.4.3 The Relation Space Construction

Now suppose L and M represent X and Y, respectively. We generate [L⇒M] freely from the tokens (φ ⇒ ψ) for φ ∈ L and ψ ∈ M. We have already proved that the two functions

o(φ ⇒ ψ) := N(K[φ], □O[ψ]) = {R: X ⇸ Y | R[K[φ]] ⊆ O[ψ]}

k(φ ⇒ ψ) := N(O[φ], □K[ψ]) = {R: X ⇸ Y | R[O[φ]] ⊆ K[ψ]}

define an open and a compact saturated reading. It is also clear that o and k satisfy the sanity condition o(φ ⇒ ψ) ⊆ k(φ ⇒ ψ).

The proof of density of o and k, in the sense of Theorem 2.2(2), is essentially the same as that for the function space. We have to modify the proof of Theorem 2.12 only slightly: take a relation, seen as a multi-function f ∈ O ∈ Ω(K(Y)^X). For every x ∈ X and open set O' containing the compact saturated set f(x) we find a token ψ ∈ M such that

f(x) ⊆ O[ψ] ⊆ K[ψ] ⊆ O'.

This implies x ∈ f⁻¹[□O[ψ]] ∈ Ω(X), and hence we find a formula φ ∈ L such that

x ∈ O[φ] ⊆ K[φ] ⊆ f⁻¹[□O[ψ]].

This implies

f ∈ o(φ ⇒ ψ) ⊆ k(φ ⇒ ψ) ⊆ ↑(O[φ] ↘ K[ψ]),

where by slight abuse of notation we consider o(φ ⇒ ψ) and k(φ ⇒ ψ) as subsets of K(Y)^X rather than of the isomorphic [X⇒Y]. The saturated set f(x) is the intersection of sets K[ψ] as constructed above, which implies that f is the supremum of such step functions, and we can conclude the proof as in the case of the function space.

To complete the syntactic relation space construction we have to come up with rules that generate sufficiently many ⊩-sequents between atomic tokens. First, note that there are no such sequents with an empty right-hand side, because the empty relation is a member of every k(φ ⇒ ψ). Next we show that we can restrict the right-hand side to singletons: take a situation

k(φ₁ ⇒ ψ₁) ∩ ⋯ ∩ k(φₘ ⇒ ψₘ) ⊆ o(σ₁ ⇒ τ₁) ∪ ⋯ ∪ o(σₙ ⇒ τₙ)

where we cannot leave out any of the o(σᵢ ⇒ τᵢ) on the right. If n > 1 we can find closed relations R₁, ..., Rₙ that lie in the intersection of the compact saturated sets, but Rᵢ ∉ o(σᵢ ⇒ τᵢ). The union ⋃ᵢ Rᵢ still satisfies all the conditions on the left, but it is not a member of any of the o(σᵢ ⇒ τᵢ), a contradiction. This proves that in such an irreducible situation we have n = 1, or in other words that, modulo weakening, we can restrict ourselves to singletons on the right.
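Both facts used in this argument, that the k-sets are closed under unions of relations while membership in an o-set can be destroyed by enlarging a relation, are easy to confirm on a finite example (hypothetical encoding of ours):

```python
# Relations between two-point spaces as sets of pairs.
O, K = {1}, {1}          # O[phi] and K[psi] for a single token on the left
Kp, Op = {0, 1}, {1}     # K[sigma] and O[tau] for a token on the right

def image(R, A):
    return {y for (x, y) in R if x in A}

in_k = lambda R: image(R, O) <= K     # R in k(phi => psi)
in_o = lambda R: image(R, Kp) <= Op   # R in o(sigma => tau)

R1, R2 = {(0, 1), (1, 1)}, {(0, 0), (1, 1)}
assert in_k(R1) and in_k(R2) and in_k(R1 | R2)   # k is closed under unions
assert in_o(R1) and not in_o(R2)                 # R2 maps 0 outside O[tau]
assert not in_o(R1 | R2)                         # the union escapes o as well
```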

This means that we have to characterise the sequents:

(φ₁ ⇒ ψ₁), ..., (φₙ ⇒ ψₙ) ⊩_[L⇒M] (σ ⇒ τ)    (2)

The key is the following relation. We construct the multi-function

f := (O[φ₁] ↘ K[ψ₁]) ∨ ⋯ ∨ (O[φₙ] ↘ K[ψₙ])

from X to K(Y). A component (O[φᵢ] ↘ K[ψᵢ]) in this disjunction is the smallest function such that the corresponding relation lies in k(φᵢ ⇒ ψᵢ). Hence, the relation R_f: X ⇸ Y corresponding to the supremum f of these step functions is the largest relation, in terms of subset inclusion rather than the specialisation order on [X⇒Y], that satisfies the premise of the above sequent. If the sequent (2) holds, then we must also have R_f[K[σ]] ⊆ O[τ]. We now study what this means for the relative position of K[σ] and O[φ₁], ..., O[φₙ] on the one side, and of O[τ] and K[ψ₁], ..., K[ψₙ] on the other.

We first consider a trivial case, although it will later be subsumed by the general one: suppose K[σ] ⊈ O[φ₁] ∪ ⋯ ∪ O[φₙ]. Then R_f relates points from K[σ] to all of Y, which implies O[τ] = Y. Because of o(⊥ ⇒ τ) = o(σ ⇒ ⊤) = [X⇒Y] this implies ⊩_[L⇒M] (σ ⇒ τ).

If K[σ] is covered by the O[φᵢ], then we proceed as follows. For each x ∈ K[σ] we determine the O[φᵢ] that x is an element of. If we get

x ∈ O[φᵢ₁] ∩ ⋯ ∩ O[φᵢₖ],

where x is not contained in any other O[φⱼ], then we have

f(x) = K[ψᵢ₁] ∩ ⋯ ∩ K[ψᵢₖ]

and thus

K[ψᵢ₁] ∩ ⋯ ∩ K[ψᵢₖ] ⊆ O[τ].

Putting it all together we get a covering of K[σ] by a union of intersections of sets O[φᵢ] such that the corresponding set made up from the K[ψᵢ] instead of the O[φᵢ] is contained in O[τ].

This leads to the single rule:

    σ ⊩_L t(φ₁, ..., φₙ)        t(ψ₁, ..., ψₙ) ⊩_M τ
    ------------------------------------------------
    (φ₁ ⇒ ψ₁), ..., (φₙ ⇒ ψₙ) ⊩_[L⇒M] (σ ⇒ τ)

where t is a term in n variables using the connectives ⊥, ⊤, ∨ and ∧.

Since we allow the usage of the constants ⊥ and ⊤, we have the trivial case above as an instance if we pick t := ⊤. The rule is clearly sound, and our discussion shows that it is sufficient to derive all ⊩-sequents with a singleton on the right.

We can reformulate the rule in a way that does not resort to the logical connectives: beginning from the two sequents

σ ⊩_L t(φ₁, ..., φₙ)  and  t(ψ₁, ..., ψₙ) ⊩_M τ

we apply rules backwards until we end up with sequents

σ ⊩_L Φ₁   ⋯   σ ⊩_L Φₖ        Ψ₁ ⊩_M τ   ⋯   Ψₗ ⊩_M τ

such that the Φᵢ are subsets of {φ₁, ..., φₙ} and the Ψⱼ of {ψ₁, ..., ψₙ}, i.e. the sequents no longer contain any logical connectives. We now claim that these sequents satisfy a slightly modified form of the side condition of (Cut*) (see page 56): let ι: {φ₁, ..., φₙ} → {ψ₁, ..., ψₙ} be the bijection that maps φ₁ to ψ₁, φ₂ to ψ₂, and so on. Using ι we can express the condition as:

(∀f ∈ Φ₁ × ⋯ × Φₖ)(∃j) Ψⱼ ⊆ ι[{f₁, ..., fₖ}]    (3)

where fᵢ denotes the i-th component of the choice function f.

To understand why the sequents we constructed earlier satisfy this condition, we take a closer look at the Φᵢ and Ψⱼ. The sets Ψⱼ give rise to the following disjunctive normal form of t(ψ₁, ..., ψₙ):

⋀Ψ₁ ∨ ⋯ ∨ ⋀Ψₗ

The Φᵢ, on the other hand, are more naturally understood as giving rise to a conjunctive normal form of t(φ₁, ..., φₙ). To get a disjunctive normal form we have to distribute all the conjunctions over the disjunctions. This leads precisely to choice functions:

⋁_{f ∈ Φ₁ × ⋯ × Φₖ} (f₁ ∧ ⋯ ∧ fₖ)

Both these disjunctive normal forms come essentially from the same term t. In general they need not be identical, but considered as elements of the free distributive lattice on n generators they must be equivalent. Thus we see that condition (3) must be satisfied.
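The side condition can be checked by brute-force enumeration of choice functions. The sketch below (our own encoding; formulae are represented by the indices 1..n) verifies it for the term t = (x₁ ∧ x₂) ∨ x₃ and shows that dropping one of the right-hand sequents breaks it:

```python
from itertools import product

def side_condition(Phis, Psis, iota):
    # For every choice function picking one formula from each Phi_i there
    # must be some Psi_j contained in the iota-image of the chosen formulae.
    for choice in product(*Phis):
        image = {iota[i] for i in choice}
        if not any(Psi <= image for Psi in Psis):
            return False
    return True

iota = {1: 1, 2: 2, 3: 3}              # maps phi_i to psi_i
# t = (x1 and x2) or x3:
Phis = [{1, 3}, {2, 3}]                # CNF on the left: (x1 or x3) and (x2 or x3)
Psis = [{1, 2}, {3}]                   # DNF on the right: (x1 and x2) or x3
assert side_condition(Phis, Psis, iota)

# Dropping the sequent with Psi = {3} destroys the condition:
assert not side_condition(Phis, [{1, 2}], iota)
```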

Hence, we can give the following alternative rule for the relation space:

    σ ⊩_L Φ₁   ⋯   σ ⊩_L Φₖ        Ψ₁ ⊩_M τ   ⋯   Ψₗ ⊩_M τ
    -------------------------------------------------------
    (φ₁ ⇒ ψ₁), ..., (φₙ ⇒ ψₙ) ⊩_[L⇒M] (σ ⇒ τ)

where the sets Φᵢ and the Ψⱼ are contained in {φ₁, ..., φₙ} and {ψ₁, ..., ψₙ}, respectively, and they satisfy the above side condition (3).

Of course, we could also formulate the side condition in a more symmetric fashion by adding the isomorphism ι to the condition given in Lemma 1.2. Moreover, note that the possibility of having empty left or right hand sides in the sequents allows us to dispense with the constants ⊥ and ⊤.

As before, our derivation of the rule shows that it is complete. To prove that it is sound we only have to translate what the side condition means for the disjunctive normal forms: it says that for every conjunction of φᵢ appearing on the left there is a conjunction of ψⱼ that contains at most the formulae corresponding to those φᵢ. This implies that if the normal form on the left is given by s(φ₁, ..., φₙ), and the one on the right by t(ψ₁, ..., ψₙ), then we have s ≤ t in the free distributive lattice. From this we infer R_f[K[σ]] ⊆ s(K[ψ₁], ..., K[ψₙ]) and

s(K[ψ₁], ..., K[ψₙ]) ⊆ t(K[ψ₁], ..., K[ψₙ]) ⊆ O[τ]

which implies soundness.

3 Relations and Functions

As our final topic illuminating the relationship between the function space and the relation space, we come back to the last problem of Section 1.4, namely characterising the 'functions' among the morphisms of StCp*. In this section we do this in categorical terms.

The idea is simple: take a category with finite products. If we choose binary products and a terminal object, then the products give rise to a symmetric monoidal structure on the category. Moreover, the unique maps to the terminal object and the diagonals Δ_A: A → A × A are natural transformations. Now consider the relationship between Set and Rel. Cartesian products in Set can be extended to a bifunctor on Rel, although they are no longer categorical products. Moreover, a relation is a function if and only if the diagonals and the unique functions into the chosen singleton are natural with respect to it. This suggests defining 'functions' in these terms.

The plan of this section is as follows: we begin by introducing categories with diagonals. In such a category we can define an intrinsic notion of a morphism being a function. We then explore how this relates to the standard construction of a category of relations from any regular category. All this material is essentially category theory folklore. Finally, we apply this machinery to our category StCp*.

3.1 Diagonals

Suppose C is a symmetric monoidal category. Recall that a commutative comonoid in C is an object A ∈ |C| together with morphisms t: A → I and Δ: A → A ⊗ A such that the counit, coassociativity and cocommutativity diagrams commute; written as equations, modulo the structural isomorphisms I ⊗ A ≅ A ≅ A ⊗ I and (A ⊗ A) ⊗ A ≅ A ⊗ (A ⊗ A), they read:

(t ⊗ A) ∘ Δ = id_A = (A ⊗ t) ∘ Δ

(Δ ⊗ A) ∘ Δ = (A ⊗ Δ) ∘ Δ

κ_{A,A} ∘ Δ = Δ

Here I is the unit for the tensor, the structural isomorphisms are those that are part of the definition of the monoidal structure, and κ_{A,A}: A ⊗ A → A ⊗ A is the isomorphism that makes C symmetric monoidal.

Definition 3.1 A diagonal structure on a symmetric monoidal category C is a family of commutative comonoids (t_A, Δ_A)_{A ∈ |C|}, indexed by the objects of C, that respects the symmetric monoidal structure of C in the sense that t_I = id_I, that t_{A⊗B} is, up to the isomorphism I ⊗ I ≅ I, the morphism t_A ⊗ t_B, and that Δ_{A⊗B} is, up to the structural isomorphism

(A ⊗ B) ⊗ (A ⊗ B) ≅ (A ⊗ A) ⊗ (B ⊗ B),

the morphism Δ_A ⊗ Δ_B.

We also call a symmetric monoidal category with a chosen diagonal structure a category with diagonals.

The guiding example is Rel, the category of sets with relations as morphisms. This category is self-dual, since for every relation R from X to Y the opposite relation {(y, x) | x R y} is a morphism from Y to X, and taking the opposite relation is clearly functorial. As the self-duality fixes objects, products and coproducts agree, and in fact they are given by disjoint union. Cartesian products and any terminal object of Set, say I := {*}, give rise to a symmetric monoidal structure on Rel: for objects we set X ⊗ Y := X × Y, and for morphisms we take the relations component-wise:

(x, y) (R ⊗ S) (x', y') :⇔ x R x' and y S y'

In the following we consider the following diagonal structure on Rel:

t_A := A × {*} = {(a, *) | a ∈ A}

a Δ_A (b, c) :⇔ a = b = c

The t_A and Δ_A are readily seen to define commutative comonoids and to satisfy the conditions of the previous definition. We will see shortly that this is a consequence of the fact that in Set the tensor is just the categorical product and the singleton is a terminal object.
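For a finite set A the comonoid laws can be verified by direct computation. The following Python sketch (our own encoding of relations as sets of pairs, with composition written left to right) checks the counit law, modulo the isomorphism I ⊗ A ≅ A, and cocommutativity:

```python
# The diagonal structure on Rel for the two-element set A = {0, 1}.
A = [0, 1]
STAR = '*'
t = {(a, STAR) for a in A}            # t_A relates every element to *
Delta = {(a, (a, a)) for a in A}      # Delta_A relates a to (a, a)

def compose(R, S):                    # relational product, left to right
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def tensor(R, S):
    return {((x, y), (x2, y2)) for (x, x2) in R for (y, y2) in S}

idA = {(a, a) for a in A}
# Counit law (t tensor A) after Delta: the result is the graph of
# a |-> (*, a), i.e. the identity up to the isomorphism I x A = A.
left = compose(Delta, tensor(t, idA))
assert left == {(a, (STAR, a)) for a in A}

# Cocommutativity: composing Delta with the twist relation gives Delta back.
twist = {((x, y), (y, x)) for x in A for y in A}
assert compose(Delta, twist) == Delta
```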

Let us return to the general situation. Without additional assumptions neither (t_A)_A nor (Δ_A)_A is a natural transformation. They are if and only if the symmetric monoidal structure on C is given by finite products. One direction of the proof is easy: if C has (chosen) finite products, then we can take the unique arrows into the terminal object as t_A and let Δ_A := ⟨id_A, id_A⟩. These morphisms are always natural, since they are determined by a universal property, and they define a diagonal structure with respect to binary products and the terminal object. The converse is the content of the following proposition.

Proposition 3.2 (i) If (t_A)_A is a natural transformation from the identity functor on C to the constant functor with value I, then I is a terminal object.

(ii) If in addition (Δ_A)_A is natural from the identity functor on C to the functor A ↦ A ⊗ A, then ⊗ is the categorical product.

Proof. The object I is weakly terminal, and if f: A → I is any arrow into I, then naturality gives t_I ∘ f = t_A; as t_I = id_I, this implies f = t_A.

Suppose Δ: Id → (− ⊗ −) is natural. To show that ⊗ = × we first have to define the projections: we take π₀ := (A ⊗ t_B): A ⊗ B → A ⊗ I ≅ A and π₁ := (t_A ⊗ B): A ⊗ B → I ⊗ B ≅ B.

If we are given two morphisms f: C → A and g: C → B, we define ⟨f, g⟩ to be the composition

⟨f, g⟩: C --Δ_C--> C ⊗ C --f ⊗ g--> A ⊗ B.

Now we have to consider the composition with the projections; modulo the unit isomorphism A ⊗ I ≅ A we compute:

π₀ ∘ ⟨f, g⟩ = (A ⊗ t_B) ∘ (f ⊗ g) ∘ Δ_C
            = (f ⊗ (t_A ∘ f)) ∘ Δ_C        (t_B ∘ g = t_C = t_A ∘ f, as I is terminal)
            = (A ⊗ t_A) ∘ (f ⊗ f) ∘ Δ_C
            = (A ⊗ t_A) ∘ Δ_A ∘ f          (naturality of Δ)
            = f                            (one of the comonoid axioms)

Hence we have shown π₀ ∘ ⟨f, g⟩ = f, and the proof for π₁ ∘ ⟨f, g⟩ = g is analogous.

It remains to show that ⟨f, g⟩ is unique. For another mediating arrow h: C → A ⊗ B we compute, modulo the unit isomorphisms:

h = ((A ⊗ t_A) ⊗ (t_B ⊗ B)) ∘ i ∘ Δ_{A⊗B} ∘ h
  = ((A ⊗ t_A) ⊗ (t_B ⊗ B)) ∘ i ∘ (h ⊗ h) ∘ Δ_C
  = ((A ⊗ t_B) ⊗ (t_A ⊗ B)) ∘ (h ⊗ h) ∘ Δ_C
  = ((π₀ ∘ h) ⊗ (π₁ ∘ h)) ∘ Δ_C = (f ⊗ g) ∘ Δ_C = ⟨f, g⟩

The first equation is the tensor of the equations (A ⊗ t) ∘ Δ = A and (t ⊗ B) ∘ Δ = B, both of which are instances of the comonoid axioms, combined with the description of Δ_{A⊗B} from Definition 3.1; the second is naturality of Δ. To understand the third, i.e.

((A ⊗ t_A) ⊗ (t_B ⊗ B)) ∘ i = (A ⊗ t_B) ⊗ (t_A ⊗ B),

we have to take a closer look at the isomorphism i: (A ⊗ B) ⊗ (A ⊗ B) → (A ⊗ A) ⊗ (B ⊗ B). It is made up from the isomorphisms saying that ⊗ is associative and commutative, and these are natural transformations by definition. Note that I ⊗ I ≅ I, and so I ⊗ I is also terminal, which implies that the commutativity morphism κ_{I,I}: I ⊗ I → I ⊗ I is the identity. The last line uses π₀ ∘ h = f and π₁ ∘ h = g, and thus we have shown h = ⟨f, g⟩. □

We now use the lack of naturality of (t_A)_A and (Δ_A)_A to define an abstract notion of function.

Definition 3.3 We say that a morphism f: A → B is total if t is natural with respect to it, or in other words if the equation

t_B ∘ f = t_A

holds. It is deterministic if Δ is natural for it, i.e. if

Δ_B ∘ f = (f ⊗ f) ∘ Δ_A.

If f is total and deterministic, then we call it functional or a function, and we denote the subcategory of functions by Fun(C).

The following lemma ensures that Fun(C) is indeed a category.

Lemma 3.4 The classes of total and of deterministic morphisms are closed under composition and contain all identity morphisms.

Proof. All claims follow immediately from the definitions. □

To get a better understanding of the properties that we have just defined, we investigate what they mean for our principal example Rel. A relation R: X → Y makes the naturality square for t commute if and only if for every x ∈ X there is a y ∈ Y such that x R y. This is equivalent to R being a total relation in the usual sense.

For determinism consider the naturality square for Δ, comparing the two composites from X to Y ⊗ Y. For x R y Δ_Y (y, y) we clearly also have x Δ_X (x, x) (R ⊗ R) (y, y). But if conversely

x Δ_X (x, x) (R ⊗ R) (y, y')

holds, then we can find a y'' satisfying

x R y'' Δ_Y (y, y')

only if y = y' = y''. This means that R is deterministic as a morphism if and only if it is a single-valued relation. Putting the two things together, we see that in Rel the functional relations are precisely those that are functions in the classical, concrete sense.
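This characterisation is small enough to test exhaustively. The following Python sketch (hypothetical encoding) enumerates all sixteen relations between two two-element sets and confirms that 'total and deterministic' coincides with 'graph of a function':

```python
# Relations between X and Y as sets of pairs.
X, Y = [0, 1], ['a', 'b']

def total(R):
    # every x is related to at least one y
    return all(any((x, y) in R for y in Y) for x in X)

def deterministic(R):
    # the relation is single-valued
    return all(y1 == y2 for (x1, y1) in R for (x2, y2) in R if x1 == x2)

def is_function(R):
    # exactly one y for every x, i.e. R is the graph of a map X -> Y
    return all(sum(1 for y in Y if (x, y) in R) == 1 for x in X)

pairs = [(x, y) for x in X for y in Y]
for bits in range(2 ** len(pairs)):
    R = {p for i, p in enumerate(pairs) if bits >> i & 1}
    assert (total(R) and deterministic(R)) == is_function(R)
```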

All structure maps are functional, but since we do not need this fact later on, we simply state the result without proof:

Proposition 3.5 Let C be a symmetric monoidal category with diagonals. Then the structural natural isomorphisms that make C symmetric monoidal, as well as all arrows t_A and Δ_A of the diagonal structure, are functional.

With the new terminology we can reformulate Proposition 3.2 as follows:

Corollary 3.6 If C is a symmetric monoidal category with diagonals, then the category of functions Fun(C) is the largest subcategory of C such that the tensor—together with the diagonal structure—gives rise to a product.

3.2 Regular Categories

There is a standard way of talking about relations in an arbitrary category. A relation from A to B is simply a subobject R ↣ A × B. For two arrows u: C → A and v: C → B the mediating morphism ⟨u, v⟩: C → A × B is monic if and only if u and v are jointly monic. Hence, we can equivalently think of relations as jointly monic pairs r₀: R → A and r₁: R → B. To be more precise we should say that a relation is an equivalence class of such pairs; the equivalence is given by isomorphisms that make the obvious diagram commute.

To define a proper composition of relations we need more structure on the category C; it has to be a regular category. We quickly review the basic definitions and results. Our exposition follows [46, Chapter 25], in particular we will make use of generalised elements which will be introduced shortly.

An arrow q: A → B is surjective or a surjection if the only subobject of B it factors through is B itself. In the following we will use two-headed arrows ↠ to denote surjections. If C has equalisers then all surjections are epi. A surjective image of an arrow f: A → B is a factorisation f = e ∘ q where q: A ↠ Q is surjective and e: Q ↣ B is monic. A regular category is a category with finite limits such that all morphisms have surjective images and surjections are stable under pullback.
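In Set the surjective image is the ordinary image factorisation, which is one reason Set is regular. A minimal sketch (the helper name image_factorisation is ours):

```python
def image_factorisation(f, X):
    """Factor f: X -> B as a surjection q onto the image Q,
    followed by the inclusion e of Q into the codomain."""
    Q = {f(x) for x in X}        # the image, a subobject of the codomain
    q = lambda x: f(x)           # surjective part X ->> Q
    e = lambda y: y              # monic part Q >-> B (an inclusion)
    return Q, q, e

# example: reduction mod 3 on {0, ..., 9}
Q, q, e = image_factorisation(lambda x: x % 3, range(10))
```

The composite e ∘ q reproduces the original map, and q is surjective onto Q by construction.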

The meaning of these definitions will become clearer when we use them to compose relations. But to explain this composition it is useful to introduce the language of generalised elements first.

A generalised element is simply an arrow x: A → B. We write x ∈_A B and call A the stage of definition of the element x. Given another morphism f: B → C, we can apply it to x to get the generalised element f ∘ x ∈_A C, which we also write as f(x) ∈_A C. If x ∈_A B is a generalised element and i: C ↣ B monic, then we write x ∈ i if x factors through i.

In this case we also say that the element x is a member of i. Provided that i is implicitly understood we may also write x ∈ C and say x is a member of C. One use of this notation lies in the following:

Proposition 3.7 Let i: A ↣ B and i′: A′ ↣ B be two monics. Then we have i ⊆ i′, i.e. i factors through i′ and hence is also a subobject of i′, if and only if all generalised elements that are members of i are members of i′.

Proof. The "only if" part follows immediately from the definitions. For the other implication we consider the generalised element i ∈_A B, which is clearly a member of i. If this implies that i is also a member of i′, then this says precisely that i factors through i′. □

As an immediate consequence we see that generalised elements characterise subobjects:

Corollary 3.8 Two monics i: A ↣ B and i′: A′ ↣ B represent the same subobject of B if and only if they have the same generalised elements.

Before we come back to the composition of relations we have to investigate how membership of a generalised element in a subobject is affected by changing the stage of definition. Suppose x ∈_A B is a generalised element which is a member of a subobject C ↣ B. For a morphism y: A′ → A we get a generalised element x ∘ y ∈_{A′} B, and this element clearly satisfies x ∘ y ∈ C. The converse is false in general but holds if the earlier stage A′ covers A, i.e. if y is surjective:

Proposition 3.9 Let C be a subobject of B in a regular category. If x ∈_A B is a generalised element and y: A′ ↠ A surjective, then x ∘ y ∈ C implies x ∈ C.

Proof. We take the pullback of the monic C ↣ B along x and get the corresponding square. Since x ∘ y factors through C we get a unique arrow d as in the diagram. Pullbacks of monics are monic, and as y is surjective we infer that the monic at the top of the square is an isomorphism. This implies that x factors through C. □

Putting the proposition and the discussion preceding it together we can say that membership of generalised elements in subobjects is invariant under going to covering stages of definition.

Remark. We can also read this proposition as the non-trivial part of the proof that in a regular category the surjections and the monics form a factorisation system. Both these classes of arrows contain all isomorphisms and are closed under composition with isomorphisms. Every morphism factors as a surjection followed by a monic by definition, and the diagonal fill-in property is exactly the property given in the proposition.

The relevance of the proposition is the following. Suppose R is a relation from A to B and S another one from B to C. Trying to mimic the situation in Rel we might try to define the composition of R and S as the subobject of A × C that contains a pair (a, c) ∈_X A × C if and only if there is an element b ∈_X B such that (a, b) ∈ R and (b, c) ∈ S. Unfortunately, it is possible that we cannot find such a b at stage X but that we can at a covering stage X′ ↠ X, contradicting the previous proposition. This means that the best we can try is to define the elements of the composition at stage X as those (a, c) ∈_X A × C that satisfy:

(∃u: X′ ↠ X, b ∈_{X′} B) (a ∘ u, b) ∈ R and (b, c ∘ u) ∈ S (4)

If such a subobject of A × C exists, then Corollary 3.8 tells us that this condition determines it uniquely. In the following we show that there is such an object in C using the constructions provided by the definition of a regular category.

We first take the pullback of r₁ and s₀:

This gives us a map from P to A × C which we factorise as

P ↠ R ∘ S ↣ A × C.

The monic part of this factorisation gives us a subobject of A × C. By composing with the projections we can also consider it as a jointly monic pair of morphisms from R ∘ S to A and C as indicated in the diagram.

Lemma 3.10 The composition R ∘ S contains precisely the generalised elements given in (4).

Proof. Suppose we have a surjection u: X′ ↠ X and a generalised element b ∈_{X′} B such that (a ∘ u, b) ∈ R and (b, c ∘ u) ∈ S. By the definition of the pullback this yields a unique arrow d: X′ → P such that the corresponding diagram commutes. We infer that (a ∘ u, c ∘ u) factors through R ∘ S, which implies (a, c) ∈ R ∘ S by Proposition 3.9.

For the converse we take the pullback of the surjection P ↠ R ∘ S along the element (a, c): X → R ∘ S, which yields a covering stage X′ ↠ X as indicated in the diagram. This gives us the required generalised element b ∈_{X′} B. □
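In Rel(Set) the pullback-then-image construction above reduces to the familiar composition of relations: the pullback P collects the pairs agreeing in the middle coordinate, and the image keeps only the outer coordinates. A sketch under that reading (names ours):

```python
def compose(R, S):
    # pullback of r1 and s0: pairs of pairs with matching middle component
    P = {((a, b), (b2, c)) for (a, b) in R for (b2, c) in S if b == b2}
    # surjective image of the induced map P -> A x C
    return {(a, c) for ((a, _), (_, c)) in P}

R = {(1, 'x'), (2, 'y')}
S = {('x', True), ('y', True)}
```

Here `compose(R, S)` relates a to c exactly when some b mediates, which is condition (4) at the trivial stage of definition.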

As an immediate consequence we see that composition of relations is associative; the proof is essentially the one for Rel expressed in terms of generalised elements. It is also clear that the relations ⟨id_A, id_A⟩: A ↣ A × A act as identities. From now on we let Rel(C) denote the category with the same objects as C and relations as morphisms. We can embed the original category C in Rel(C) by taking a morphism f: A → B to its graph Γ_f.

At every stage of definition the graph contains exactly the generalised elements (a, f(a)). This immediately implies that the embedding is functorial and faithful because we can test it on the generic element id_A ∈_A A.

Conversely, if for a relation (r₀, r₁): R → A × B the arrow r₀ is an isomorphism, then the relation is equivalent to one of the form Γ_f and hence corresponds to a 'function' in the original category.
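In Set the graph embedding can be checked concretely: composing the graphs of two functions yields the graph of their composite, which is the functoriality claimed above. A small sketch (names ours):

```python
def graph(f, X):
    # the graph relation Gamma_f, as the set of pairs (a, f(a))
    return {(x, f(x)) for x in X}

def compose(R, S):
    # relational composition in diagrammatic order: first R, then S
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

X = range(3)
f = lambda x: x + 1
g = lambda y: 2 * y

# functoriality: Gamma_f composed with Gamma_g is Gamma_{g o f}
lhs = compose(graph(f, X), graph(g, [f(x) for x in X]))
rhs = graph(lambda x: g(f(x)), X)
```

Faithfulness is visible here too: two different functions on X have different graphs, since the graphs list all values.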

The category Rel(C) has a lot of additional structure; it is, in particular, an allegory. For such categories there is a standard way of defining a notion of 'function'. As it turns out, in Rel(C) they are precisely the morphisms embedded from the original category C. For more details see [17, 2.132].

In the previous section we discussed an alternative way of defining 'functions' in a given category, namely via a diagonal structure. The reason for this is that some categories that look like categories of relations are not allegories. One such example is MLS = StCp*. It satisfies most axioms of an allegory, but its self-duality does not fix objects. In the following we will show that for Rel(C), instead of the allegory structure, we can also use an induced diagonal structure to recover C.

Having assumed that C has finite limits we can extend the products to a symmetric tensor on Rel(C): We choose any terminal object 1 in C as I, and for the tensor on objects we simply define A ⊗ B := A × B. Since we can embed the symmetric monoidal structure given by products in C into the new category Rel(C), it is clear that this makes the latter category symmetric monoidal if we can define the tensor on arbitrary relations. Suppose R is a relation from A to A′, and S one from B to B′. We construct

R®S=RxS

Ax B A' x B'

which is a jointly monic pair of arrows and hence represents a relation from A®B to A'®B'. In terms of generalised elements this relation is characterised by

((x, y), (x′, y′)) ∈ R ⊗ S ⟺ (x, x′) ∈ R and (y, y′) ∈ S

at all stages of definition. From this description it is clear that (− ⊗ −) is a bifunctor and that it extends products, i.e. Γ_f ⊗ Γ_g = Γ_{f×g}.
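In Rel(Set) the displayed characterisation gives a direct computation of the tensor on finite relations; a sketch (names ours):

```python
def tensor(R, S):
    # ((x, y), (x', y')) is in R (x) S  iff  (x, x') in R and (y, y') in S
    return {((x, y), (x2, y2)) for (x, x2) in R for (y, y2) in S}

R = {(1, 'a')}
S = {(2, 'b'), (3, 'b')}
```

Each pair of related pairs contributes one element, so the tensor acts componentwise, exactly as the displayed equivalence states.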

The category Rel(C) also inherits the diagonal structure from the original category. We set

t_A := Γ_! and Δ_A := Γ_⟨id,id⟩,

where ! is the unique arrow from an object into the chosen terminal. It is clear that this satisfies the axioms of a diagonal structure given in Definition 3.1 since all maps are embedded from C where the corresponding diagrams commute. Using the same argument we see that all morphisms Γ_f are functional with respect to this diagonal structure. The converse also holds:

Theorem 3.11 The functional morphisms in Rel(C) are exactly those of the form Γ_f, or in other words Fun(Rel(C)) = C.

Proof. We only have to show that all functional relations in Rel(C) come from a morphism in C. Suppose R, given by the two maps r₀: R → A and r₁: R → B, is such a relation. From totality we infer that we have a commuting diagram which implies that r₀ is surjective. We now show that it is also monic and hence an isomorphism. Suppose we have two parallel arrows x, y: X → R such that r₀ ∘ x = r₀ ∘ y =: a; then we consider the generalised elements (a, r₁ ∘ x), (a, r₁ ∘ y) ∈_X A × B. By construction we have

(a, r₁ ∘ x), (a, r₁ ∘ y) ∈ R

⟹ ((a, a), (r₁ ∘ x, r₁ ∘ y)) ∈ R ⊗ R

⟹ (a, (r₁ ∘ x, r₁ ∘ y)) ∈ Δ_A ∘ (R ⊗ R) = R ∘ Δ_B

⟹ (∃u: X′ ↠ X, b ∈_{X′} B) (a ∘ u, b) ∈ R

and (b, (r₁ ∘ x ∘ u, r₁ ∘ y ∘ u)) ∈ Δ_B.

From this we infer b = r₁ ∘ x ∘ u = r₁ ∘ y ∘ u, and as u is epi this means r₁ ∘ x = r₁ ∘ y. The morphisms r₀ and r₁ are jointly monic, which allows us to conclude that x and y are equal, thus showing that r₀ is monic. Since we have seen before that it is also surjective it must be an isomorphism, and hence the relation R is a graph Γ_f. □
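For C = Set the theorem is effective: from a total and deterministic relation we can extract the unique function whose graph it is. A sketch (the helper name extract_function and the checks are ours):

```python
def extract_function(R, A):
    """Assuming R (a set of pairs) is total and deterministic on A,
    return the function f with R = graph(f)."""
    f = {}
    for (a, b) in R:
        # determinism: a must not already be mapped to a different value
        assert f.get(a, b) == b, "not deterministic"
        f[a] = b
    # totality: every element of A must be in the domain
    assert set(f) == set(A), "not total"
    return f

R = {(1, 'a'), (2, 'b')}
f = extract_function(R, {1, 2})
```

The two assertions are exactly the totality and determinism hypotheses of the theorem; when they hold, R is recovered as the graph of f.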

3.3 Closed Relations

Now we apply this machinery to our category StCp*. Although it is not of the form Rel(StCp), we still have an embedding from the cartesian category StCp into StCp*, and the tensor extends the product as discussed in Section 2.4. This implies that we can simply embed the diagonal structure as well. Explicitly, the diagonals Δ_X are given by

x Δ_X (y, y′) :⟺ x ⊑ y and x ⊑ y′,

and the nullary equivalents t_X relate all elements of X to the one element of I.

As in the previous section it is also clear that the embedding of a continuous function from StCp is functional in StCp*. The converse is a refinement of the proof for Rel.

Theorem 3.12 The functional morphisms in StCp* are precisely the hypergraphs of continuous functions between stably compact spaces.

Proof. We only have to prove that functional closed relations come from continuous functions. So, let us take such a relation R: X → Y. If it is total, then for all x ∈ X there is a y ∈ Y such that x R y. If we consider the corresponding function f_R: X → 𝒦(Y) this means f_R(x) ≠ ∅ for all x ∈ X. Let us fix an x and suppose y, y′ are in the compact saturated set f_R(x). We get

x Δ_X (x, x) (R ⊗ R) (y, y′), and if R is deterministic there must be a y″ ∈ Y such that

x R y″ Δ_Y (y, y′)

which implies y″ ⊑ y, y′. This shows that for a functional R the set f_R(x) is filtered, or directed with respect to the co-compact topology.

The space Y_κ is sober by Corollary 3.12, and thus Proposition 2.9 implies that the directed supremum of f_R(x) exists and that it does not lie in the open set Y \ f_R(x). In terms of the original space Y this says that f_R(x) has a least element, i.e. it is a principal filter. We have thus shown that R corresponds to a continuous function from X to Y. □

This means that the categorical characterisation of functions agrees with the concrete one given at the end of Section 1.4.
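In the finite case the proof's recipe can be made explicit: a functional closed relation sends each point to an upper set with a least element, and choosing that least element recovers the continuous function. A toy sketch on a three-element chain (the poset, the sample map and all names are ours):

```python
# A finite chain 0 <= 1 <= 2. In this toy setting we take the hypergraph
# of a monotone map f to be the relation  x R y  iff  f(x) <= y,
# so that each x is related to the principal filter above f(x).
f = {0: 1, 1: 2, 2: 2}                                   # a sample monotone map
R = {(x, y) for x in f for y in range(3) if f[x] <= y}   # its hypergraph

# f_R(x) is the upper set {y | x R y}; its least element recovers f
recovered = {x: min(y for (a, y) in R if a == x) for x in f}
```

Taking the minimum of each upper set gives back the original map, mirroring how the proof extracts a continuous function from a principal filter.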

Conclusion

Let us come back to the aims laid out in the introduction to see what has been achieved and which questions are still open.

The results on cut elimination of Section 2 and the representation Theorem 2.2 allow us to perform a number of domain constructions in logical form, and they are defined for all stably compact spaces. This class contains the bifinite domains of Abramsky's theory as well as the FS domains, which include the continuous domains most commonly studied in semantics. Of the power domain constructions we have only considered the Smyth power domain, and so the Hoare and the Plotkin power domains are canonical candidates to look at. A more interesting object of study, however, is the probabilistic power domain since there is no equivalent for algebraic domains. The probabilistic power domain of a stably compact domain is again stably compact [35] but, apart from this, very little is known about its structure. It is not even clear whether we can extend the construction to arbitrary stably compact spaces or not. This means that performing this construction in logical form will be difficult. We also have not looked at bilimits, but as it is straightforward in Abramsky's case this should be quite easy. To tackle them in our setup means that we have to characterise the embeddings in MLS.

The situation for function spaces is very unsatisfactory. It is obvious that we cannot have function spaces of arbitrary stably compact spaces, but we can for subclasses like FS domains. At the moment we are not even able to perform the syntactic construction for the much more restrictive class of continuous Scott domains. The problem is Abramsky's primality predicate which, in the algebraic case, allows him to talk about points, something necessary for his construction. It is not clear how to translate that to our setup or what other modification to the system could replace it.

This has a direct connection to the question whether it is possible to do domain theory in purely logical form, i.e. without referring back to the semantics. The construction of products and coproducts in Section 2.3 shows how this could be done. Because of our categorical characterisation of relation spaces in Theorem 2.15 and the syntactic characterisation of functions given in Theorem 1.44, something similar should be possible for the relation space construction. The problem, again, is the function space, and the first step is to translate Abramsky's construction for algebraic domains into a form that can be expressed in RMLS. As it is not known to what extent this can be done, the question remains open.

As we have seen in Section 1.3, an MLS formula has an open and a compact interpretation. It has been argued in [33] that compact and open sets play dual roles, and the discussion of consistency in Section 1.1 shows that the compact interpretation can be understood as the negative information contained in the formula. This poses the question what this signifies on the syntactic side, and in particular what it means for the semantics of a programming language using this logic.

Compared to the function spaces, the relation space construction is very smooth. This may be an indication that it could be beneficial to focus more on relational rather than on functional semantics, as is traditionally done in denotational semantics. It remains to be seen what exactly can be expressed in such a semantics and what the possible benefits are.

Bibliography and Index

References

[1] Abramsky, S., "Domain Theory and the Logic of Observable Properties," Ph.D. thesis, University of London (1987).

[2] Abramsky, S., The lazy lambda calculus, in: D. Turner, editor, Research Topics in Functional Programming, Addison Wesley, 1990 pp. 65-117.

[3] Abramsky, S., A domain equation for bisimulation, Information and Computation 92 (1991), pp. 161-218.

[4] Abramsky, S., Domain theory in logical form, Annals of Pure and Applied Logic 51 (1991), pp. 1-77.

[5] Abramsky, S. and A. Jung, Domain theory, in: S. Abramsky, D. M. Gabbay and T. S. E. Maibaum, editors, Semantic Structures, Handbook of Logic in Computer Science 3, Clarendon Press, 1994 pp. 1-168.

[6] Birkhoff, G., "Lattice Theory," AMS Colloq. Publ. 25, American Mathematical Society, Providence, 1967, third edition.

[7] Bonsangue, M., "Topological Dualities in Semantics," Ph.D. thesis, Vrije Universiteit Amsterdam (1996), 258pp.

[8] Bourbaki, N., "General Topology—Chapters 1-4," Elements of Mathematics, Springer Verlag, 1988.

[9] Davey, B. A. and H. A. Priestley, "Introduction to Lattices and Order," Cambridge University Press, Cambridge, 1990.

[10] Dijkstra, E. W., "A Discipline of Programming," Prentice-Hall, Englewood Cliffs, New Jersey, 1976.

[11] Edalat, A., Domain theory and integration, in: Ninth Annual IEEE Symposium on Logic in Computer Science (1994), pp. 115-124.

[12] Edalat, A., Dynamical systems, measures and fractals via domain theory, Information and Computation 120 (1995), pp. 32-48.

[13] Edalat, A. and M. H. Escardó, Integration in real PCF (1996), to appear in LICS'96.

[14] Edalat, A. and R. Heckmann, Computational models for metric spaces (1995), manuscript.

[15] Escardô, M. H., PCF extended with real numbers, Theoretical Computer Science 162 (1996), pp. 79-115.

[16] Escardô, M. H., Properly injective spaces and function spaces, Topology and Its Applications 89 (1998), pp. 75-120.

[17] Freyd, P. J. and A. Scedrov, "Categories, Allegories," North-Holland, 1990.

[18] Gentzen, G., Untersuchungen über das logische Schließen, Mathematische Zeitschrift 39 (1934), pp. 176-210, 405-431, (Reprint 1969, Wissenschaftliche Buchgesellschaft Darmstadt).

[19] Gierz, G., K. H. Hofmann, K. Keimel, J. D. Lawson, M. Mislove and D. S. Scott, "A Compendium of Continuous Lattices," Springer Verlag, 1980.

[20] Gierz, G., J. D. Lawson and A. Stralka, Quasicontinuous posets, Houston Journal of Mathematics 9 (1983), pp. 191-208.

[21] Hoare, C. A. R., An axiomatic basis for computer programming, Communications of the ACM 12 (1969), pp. 576-580.

[22] Hoffmann, R.-E., Continuous posets, prime spectra of completely distributive complete lattices, and Hausdorff compactification, in: B. Banaschewski and R.-E. Hoffmann, editors, Continuous Lattices, Proceedings Bremen 1979, Lecture Notes in Mathematics 871 (1981), pp. 159-208.

[23] Hofmann, K. H. and M. Mislove, Local compactness and continuous lattices, in: B. Banaschewski and R.-E. Hoffmann, editors, Continuous Lattices, Proceedings Bremen 1979, Lecture Notes in Mathematics 871, Springer Verlag, 1981 pp. 209-248.

[24] Johnstone, P. T., Scott is not always sober, in: B. Banaschewski and R.-E. Hoffmann, editors, Continuous Lattices, Proceedings Bremen 1979, Lecture Notes in Mathematics 871 (1981), pp. 282-283.

[25] Johnstone, P. T., "Stone Spaces," Cambridge Studies in Advanced Mathematics 3, Cambridge University Press, Cambridge, 1982.

[26] Johnstone, P. T., The point of pointless topology, Bulletin of the American Mathematical Society 8 (1983), pp. 41-53.

[27] Jones, C., "Probabilistic Non-Determinism," Ph.D. thesis, University of Edinburgh, Edinburgh (1990), also published as Technical Report No. CST-63-90.

[28] Jones, C. and G. Plotkin, A probabilistic powerdomain of evaluations, in: Logic in Computer Science (1989), pp. 186-195.

[29] Jung, A., "Cartesian Closed Categories of Domains," CWI Tracts 66, Centrum voor Wiskunde en Informatica, Amsterdam, 1989, 107 pp.

[30] Jung, A., The classification of continuous domains, in: Proceedings, Fifth Annual IEEE Symposium on Logic in Computer Science (1990), pp. 35-40.

[31] Jung, A., M. Kegelmann and M. A. Moshier, Multi lingual sequent calculus and coherent spaces, in: S. Brookes and M. Mislove, editors, 13th Conference on Mathematical Foundations of Programming Semantics, Electronic Notes in Theoretical Computer Science 6 (1997), 18 pages.

[32] Jung, A., M. Kegelmann and M. A. Moshier, Multi lingual sequent calculus and coherent spaces, Fundamenta Informaticae 37 (1999), pp. 369-412.

[33] Jung, A. and P. Sünderhauf, On the duality of compact vs. open, in: S. Andima, R. C. Flagg, G. Itzkowitz, P. Misra, Y. Kong and R. Kopperman, editors, Papers on General Topology and Applications: Eleventh Summer Conference at the University of Southern Maine, Annals of the New York Academy of Sciences 806, 1996, pp. 214-230.

[34] Jung, A. and P. Sünderhauf, Uniform approximation of topological spaces, Topology and its Applications 83 (1998), pp. 23-38.

[35] Jung, A. and R. Tix, The troublesome probabilistic powerdomain, in: A. Edalat, A. Jung, K. Keimel and M. Kwiatkowska, editors, Proceedings of the Third Workshop on Computation and Approximation, Electronic Notes in Theoretical Computer Science 13 (1998), 23 pages.

[36] Keimel, K. and J. Paseka, A direct proof of the Hofmann-Mislove theorem, Proceedings of the AMS 120 (1994), pp. 301-303.

[37] Keimel, K. and R. Tix, Valuations and measures (1996), unpublished manuscript.

[38] Kelley, J. L., "General Topology," Graduate Texts in Mathematics 27, Springer Verlag, 1975.

[39] Kopperman, R., The skew fields of topology (1994), preprint.

[40] Lambek, J. and P. J. Scott, "Introduction to Higher Order Categorical Logic," Cambridge Studies in Advanced Mathematics 7, Cambridge University Press, 1986.

[41] Lawson, J. D., The duality of continuous posets, Houston Journal of Mathematics 5 (1979), pp. 357-394.

[42] Lawson, J. D., The versatile continuous order, in: M. Main, A. Melton, M. Mislove and D. Schmidt, editors, Mathematical Foundations of Programming Language Semantics, Lecture Notes in Computer Science 298 (1988), pp. 134-160.

[43] Lawson, J. D., Order and strongly sober compactification, in: G. M. Reed, A. W. Roscoe and R. F. Wachter, editors, Topology and Category Theory in Computer Science, Clarendon Press, 1991 pp. 179-205.

[44] Lawson, J. D., Spaces of maximal points, Mathematical Structures in Computer Science 7 (1997), pp. 543-555.

[45] Mac Lane, S., "Categories for the Working Mathematician," Graduate Texts in Mathematics 5, Springer Verlag, 1971.

[46] McLarty, C., "Elementary Categories, Elementary Toposes," Oxford Logic Guides 21, Oxford University Press, 1992.

[47] Meinke, K. and J. V. Tucker, Universal algebra, in: S. Abramsky, D. M. Gabbay and T. S. E. Maibaum, editors, Background: Mathematical Structures, Handbook of Logic in Computer Science 1, Clarendon Press, 1992 .

[48] Nachbin, L., Sur les espaces uniformes ordonnés, C. R. Acad. Sci. (MR 9, 455) (1948), pp. 774-775, English translation in [49].

[49] Nachbin, L., "Topology and Order," Van Nostrand, Princeton, N.J., 1965, reprinted by Robert E. Krieger Publishing Co., Huntington, NY, 1967.

[50] Plotkin, G. D., A powerdomain construction, SIAM Journal on Computing 5 (1976), pp. 452-487.

[51] Plotkin, G. D., Post-graduate lecture notes in advanced domain theory (incorporating the "Pisa Notes") (1981), Dept. of Computer Science, Univ. of Edinburgh.

[52] Raney, G. N., A subdirect-union representation for completely distributive complete lattices, Proceedings of the AMS 4 (1953), pp. 518-522.

[53] Rewitzky, I. and C. Brink, Unification of four versions of program semantics, Research Reports RR 10/96, The University of Cape Town (1996).

[54] Schalk, A., "Algebras for Generalized Power Constructions," Doctoral thesis, Technische Hochschule Darmstadt (1993), 174 pp.

[55] Scott, D. S., Outline of a mathematical theory of computation, in: 4th Annual Princeton Conference on Information Sciences and Systems, 1970, pp. 169-176.

[56] Scott, D. S., Data types as lattices, SIAM J. Computing 5 (1976), pp. 522-587.

[57] Scott, D. S., Relating theories of lambda calculus, in: J. R. Hindley and J. P. Seldin, editors, To H. B. Curry: Essays in Combinatory Logic, Lambda Calculus and Formalism, Academic Press, 1980 pp. 403-450.

[58] Smyth, M. B., The largest cartesian closed category of domains, Theoretical Computer Science 27 (1983), pp. 109-119.

[59] Smyth, M. B., Powerdomains and predicate transformers: a topological view, in: J. Diaz, editor, Automata, Languages and Programming, Lecture Notes in Computer Science 154 (1983), pp. 662-675.

[60] Smyth, M. B., Stable compactification I, Journal of the London Mathematical Society 45 (1992), pp. 321-340.

[61] Smyth, M. B., Topology, in: S. Abramsky, D. M. Gabbay and T. S. E. Maibaum, editors, Handbook of Logic in Computer Science, vol. 1, Clarendon Press, 1992 pp. 641-761.

[62] Smythe, N. and C. A. Wilkins, Minimal Hausdorff and maximal compact spaces, Journal of the Australian Mathematical Society 3 (1963), pp. 167-171.

[63] Stone, M. H., The theory of representations for Boolean algebras, Trans. American Math. Soc. 40 (1936), pp. 37-111.

[64] Sünderhauf, P., "Discrete Approximation of Spaces," Ph.D. thesis, Technische Hochschule Darmstadt (1994), 91 pp.

[65] Vickers, S. J., "Topology Via Logic," Cambridge Tracts in Theoretical Computer Science 5, Cambridge University Press, 1989.

[66] Zhang, G.-Q., "Logic of Domains," Progress in Theoretical Computer Science, Birkhauser, 1991.

Categories

For the hierarchy of full subcategories of Top, the category of topological spaces, see Figure 2 on page 44. It contains all categories listed in (i) and (ii) below.

(i) Categories of dcpo's with Scott-continuous functions

AlgScott algebraic Scott domains, 17

Alg algebraic domains, 16

DCPO dcpo's, 15

Dom continuous domains, 16

FS FS domains, 19

ACp Lawson-compact domains, 40

RSFP retracts of bifinite domains, 18

Scott continuous Scott domains, 17

SFP bifinite (or SFP) domains, 18

⅔SFP 2/3-bifinite domains, 45

(ii) Categories of topological spaces with continuous functions

CpOpen spaces with a basis of compact opens, 30

LocCp locally compact spaces, 29

Sob sober spaces, 22

Spec spectral spaces, 44

StCp stably compact spaces, 34

Stone Stone spaces, 45

Top topological spaces

(iii) Other categories

ASL arithmetic semilattices with Scott-continuous semilattice homomorphisms, 101

CLat complete lattices with frame homomorphisms, 21

Frm frames with frame morphisms, 20

Loc locales, the opposite category of Frm, 20

MLS continuous sequent calculi with compatible consequence relations, 63

RMLS reflexive sequent calculi (i.e. with identity axiom) with compatible consequence relations, 65

Rel sets with relations

SPL strong proximity lattices with approximable relations, 95

SPLW strong proximity lattices with weak approximable relations, 95

Set sets with functions

StCp* stably compact spaces with closed relations, 116

StCp^ Kleisli category for the Smyth power monad on stably compact spaces, 113

Notation

(i) Order

27 C, 13 t, 14 13

Lf, 15 <, 16

Î, 16 1,16 K(X), 16

43 < 47 47 ~<, 93 = , 96

(ii) Topology L(W), 32 Af(x), 13 Af°(x), 21 iî(X), 14 )C(X), 14 UO, 29 UK, 129 KÎÎ(X), 30 £(X), 16 v„- 14 X«, 14 X^, 37 Ox, 22 Oj, 105 Of, 106 KF, 105

106 OUI 125 KW, 125

o*, no H, 48 2, 20 X± , 38 X <g> Y, 138 Vs(X), 129 Yx, 129

N(A, B), 133 (0\y), 131 [X=^Y], 137

(iii) MLS ⊢, 56 ⊩, 61 ∘, 56

(•)°p, 84 X[\~], 87 ¿[h], 87 [\-]X, 87 [h]¿ 87 [<t>], 70, 96 /»> . 66 B+, 66

Hb> 66

r(r), 66 rc(r), 66 g(T), 66

(iv) Named filt, 87 Fun(C), 149 ¡di, 87

pt, 21 Rel(C), 154 spec, 45, 105