Chapter 2

Mathematical preliminaries

In this chapter we present some basic notions and properties from category theory, partial orders and metric spaces which we will need in the subsequent chapters. This chapter is not intended to provide a comprehensive introduction to the subjects treated. Rather, it aims to list those major facts we shall assume in the next chapters (providing references for their proofs), and to make the reader familiar with the notation we will use. The only original material is Proposition 2.3.3 (stemming from [43]).

2.1 Category theory

In this section we introduce some concepts of category theory and universal algebra. Statements and facts in this section will be used only in the third part of this monograph; the first two parts will not need any categorical prerequisites. Unless stated otherwise, our reference for the categorical concepts below is the book of Mac Lane [136].

We begin with the following three fundamental notions: category, functor, and natural transformation. A category C consists of

(i) a class of objects,

(ii) a class of morphisms. Each morphism has a domain and a codomain which are objects of the category. The collection of all morphisms with domain A and codomain B is denoted by C(A, B).

(iii) a composition law which assigns to each pair of morphisms f ∈ C(A, B) and g ∈ C(B, C) a morphism g ∘ f ∈ C(A, C),

(iv) an identity morphism id_A ∈ C(A, A) for each object A such that for all morphisms f, f ∘ id_A = f and id_A ∘ f = f whenever the composites are defined.

For example, there is a category Set whose objects are sets and whose morphisms are functions with the usual composition. Similarly, Set² is the category whose objects are functions f : A → B between two sets, and whose morphisms between f : A → B and g : A' → B' are pairs of functions (h : A → A', k : B → B') such that g ∘ h = k ∘ f.
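A commuting square of finite functions can be checked mechanically. The following sketch (an illustration added here; the particular sets and functions are made up) represents functions as Python dicts and tests the morphism condition g ∘ h = k ∘ f of Set².

```python
def compose(g, h):
    """Composition of finite functions as dicts: (g . h)(x) = g(h(x))."""
    return {x: g[h[x]] for x in h}

def is_set2_morphism(f, g, h, k):
    """(h, k) is a Set^2-morphism from f : A -> B to g : A' -> B'
    exactly when the square commutes: g . h = k . f."""
    return compose(g, h) == compose(k, f)

# Made-up example: f and g are the parity maps {0,...,3} -> {0,1};
# h rotates the source by one, k flips the target. The square commutes.
f = {n: n % 2 for n in range(4)}
g = dict(f)
h = {n: (n + 1) % 4 for n in range(4)}
k = {0: 1, 1: 0}
```

Replacing k by the identity on {0, 1} breaks commutativity, so is_set2_morphism then returns False.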

A functor F : C → D is a morphism between two categories. It assigns to each object A of C an object F(A) of D, and to each morphism f ∈ C(A, B) a morphism F(f) ∈ D(F(A), F(B)) such that

(i) it preserves identities, i.e. F(id_A) = id_F(A), and

(ii) it preserves composition whenever defined, i.e. F(f ∘ g) = F(f) ∘ F(g).

A natural transformation η : F ⇒ G is a morphism between two functors F, G : C → D. It maps each object A of C to a morphism η_A ∈ D(F(A), G(A)) such that for all morphisms f ∈ C(A, B) we have G(f) ∘ η_A = η_B ∘ F(f).

In a category C a morphism f ∈ C(A, B) is said to be an isomorphism (and A and B are called isomorphic) if there exists another morphism g ∈ C(B, A) such that f ∘ g = id_B and g ∘ f = id_A. An object A of C is called a fixed point of a functor F : C → C if A is isomorphic to F(A). A functor F : C → D is said to reflect isomorphisms if for each morphism f of C, f is an isomorphism whenever F(f) is.

The opposite category C^op of a category C has the same objects as C, and f ∈ C^op(A, B) if and only if f ∈ C(B, A). Composition and identities are defined in the obvious way. A category C is a full sub-category of D if every object of C is an object of D, and C(A, B) = D(A, B) for all objects A and B of C.

The main concept of category theory is that of adjunction. For two functors F : C → D and G : D → C, we say that F is left adjoint to G if there is a bijection, natural in A and B, between C(A, G(B)) and D(F(A), B). In this case the functor G is said to be right adjoint to F. Every adjunction induces two natural transformations: the unit η : id_C → G ∘ F (for each A, η_A : A → G(F(A)) is the morphism which corresponds to id_F(A) : F(A) → F(A)), and its dual, the counit ε : F ∘ G → id_D (for each B, ε_B : F(G(B)) → B is the morphism which corresponds to id_G(B) : G(B) → G(B)). In general, to establish an adjunction we do not even need to know in advance that F is a functor [136, page 81, Theorem 2]:

Proposition 2.1.1 A functor G : D → C has a left adjoint if for each object A of C there exists an object F(A) in D and a morphism η_A : A → G(F(A)) such that for every other morphism f ∈ C(A, G(B)) there is a unique morphism h ∈ D(F(A), B) satisfying G(h) ∘ η_A = f. □

A reflection is an adjunction for which the counit ε_B is an isomorphism for all B. If both unit and counit are isomorphisms then the adjunction is called an equivalence and the categories involved are said to be equivalent. We say that two categories C and D are dual if C is equivalent to D^op. An adjunction is called Galois if it restricts to an equivalence between the categories F(C) and G(D) (here F(C) denotes the full sub-category of D whose objects are in the image of F, and G(D) denotes the full sub-category of C whose objects are in the image of G). An adjunction is Galois if and only if it restricts to a reflection from C into F(C) [110].

Let C be a category and J a small category (i.e. one whose objects and morphisms form a set rather than a proper class). The category C^J has as objects the functors from J to C (in this context often called diagrams), and as morphisms the natural transformations between them. There is a functor Δ : C → C^J which maps every object A to the constant functor with value A. We say that C has limits of type J if the functor Δ has a right adjoint lim_J, and we refer to lim_J(D) as the limit of the diagram D. Dually, if Δ has a left adjoint then we say that C has colimits of type J. If J is the empty category then the limit of a diagram is called a terminal object while the colimit is called an initial object. If J is a discrete category (i.e. one with only identity morphisms) then the limit of a diagram is called a product and the colimit a coproduct. Finally, if J is the category with two objects, two identity morphisms and two parallel morphisms between the two objects, then the limit of a diagram is called an equalizer and the colimit a coequalizer.

A category is complete if it has all small limits, and cocomplete if it has all small colimits. Both categories Set and Set² are complete and cocomplete.

A monad on a category C is a triple (T, η, μ) where T : C → C is a functor, and η : id_C → T and μ : T ∘ T → T are natural transformations satisfying certain commutativity laws (see [136, page 133]). Every functor F : C → D which is left adjoint to G : D → C induces a monad (G ∘ F, η, G(ε_F(−))) on C, where η is the unit of the adjunction and ε is the counit.

Conversely, given a monad (T, η, μ) on C we can always find an adjunction inducing it. To this aim, denote the category of algebras of the monad T on C by C^T. Its objects are morphisms (called T-algebras) h ∈ C(T(A), A) such that h ∘ η_A = id_A and h ∘ μ_A = h ∘ T(h); its morphisms from a T-algebra h ∈ C(T(A), A) to a T-algebra k ∈ C(T(B), B) are morphisms f in C(A, B) such that f ∘ h = k ∘ T(f). The obvious forgetful functor G^T : C^T → C which sends a T-algebra h ∈ C(T(A), A) to A has a left adjoint F^T : C → C^T which maps any object A in C to the free T-algebra μ_A ∈ C(T(T(A)), T(A)). The monad defined by this adjunction is exactly the original monad T (see [136, page 136, Theorem 1]); hence every monad is defined by its algebras.

A functor G : D → C is said to be monadic if it has a left adjoint F and the comparison functor K : D → C^T defined by K(A) = G(ε_A) is an equivalence of categories, where T is the monad induced by G ∘ F and ε is the counit of the adjunction between F and G. The following version of Beck's Theorem (see [136, page 147]) gives conditions on a functor G ensuring that G is monadic.

Proposition 2.1.2 Let G : D → C be a functor with a left adjoint F. If D has all coequalizers, G preserves these coequalizers, and G reflects isomorphisms, then G is monadic. □

We conclude this section by mentioning the connection between the theory of monads and universal algebra. For a more detailed account we refer to [31,61] for the finitary case, to [133] for the infinitary one, and to the comprehensive [137].

An algebraic theory T = (O, E) consists of a class O of operators, each with an associated set I denoting its arity, and a class E of identities of the form e_l = e_r, where e_l and e_r are expressions formed from a convenient set of variables by applying the given operators.

A T-algebra is a set A together with a corresponding function ω_A : A^I → A for each operator ω of O of arity I, such that, independently of the way we substitute elements of A for the variables, the identities of E hold in A. More formally, if e is an expression formed using a set of variables x_i for i ∈ I by applying the given operators in O, the substitution of elements of A for the x_i's gives us a corresponding function e : A^I → A. If A is a T-algebra and e_l = e_r is an identity in E using free variables x_i for i ∈ I, then the two corresponding functions e_l, e_r : A^I → A must be equal.

A T-homomorphism between two T-algebras A and B is a function f : A → B such that ω_B ∘ f^I = f ∘ ω_A for each operator ω ∈ O of arity I (here f^I denotes the componentwise application of f). The collection of all T-algebras and T-homomorphisms between them forms a category T-Alg. There is a forgetful functor U : T-Alg → Set mapping every T-algebra to its underlying set A (its action on morphisms is the obvious one). The following proposition [137, Chapter 1] summarizes some important features of the category of T-algebras.

Proposition 2.1.3 For every algebraic theory T the category of T-algebras T-Alg has all small limits: they are constructed exactly as in Set, and the forgetful functor U : T-Alg → Set preserves them. Moreover, if the forgetful functor U : T-Alg → Set has a left adjoint, then the functor U is monadic and T-Alg has all small colimits. □

Let T = (O, E) be an algebraic theory. A presentation T(G | R) consists of a set G (its elements are called generators in this context) and a set R of pairs (called relations in this context) of the form (e_l, e_r), where e_l and e_r are expressions formed from generators in G by applying the given operators in O. A model for a presentation T(G | R) is a T-algebra A together with a function [−]_A : G → A such that if (e_l, e_r) is a relation in R then [e_l]_A = [e_r]_A. In the latter equality, we have applied the function [−]_A : G → A to an expression e (built up from generators in G and operators in O) by replacing the generators g by their interpretations [g]_A, replacing the operators ω by the corresponding operations ω_A, and evaluating the resulting function in A to give [e]_A ∈ A.

Notice that in a presentation we make two different uses of equations: the identities of E, which are part of T, contain variables and must hold whatever values from an algebra are substituted for the variables, while the relations of R contain generators and must hold when the generators are given their particular values in a model.

A T-algebra A is presented by a presentation T(G | R) if it is a model for the presentation and, for every other model B, there exists a unique morphism h : A → B in T-Alg such that h([g]_A) = [g]_B for every generator g ∈ G. Clearly, the algebra presented by generators and relations, if it exists, is unique up to isomorphism in T-Alg. Once we know that the forgetful functor U : T-Alg → Set has a left adjoint F, the standard theory of congruences gives us a way to force the relations R on the free T-algebra F(G). The resulting T-algebra is presented by T(G | R).

For a finitary algebraic theory T, i.e. one with a set (and not a proper class) of operators, each of finite arity, every presentation T(G | R) presents a T-algebra. It can be constructed as a suitable quotient of the set of terms formed from G by applying the operators of T. This implies that the forgetful functor T-Alg → Set is monadic [137, Chapter 1], with left adjoint given by the assignment which takes every set G to the algebra presented by T(G | ∅).

More generally, given any monad on Set we can describe its algebras by operations and equations, provided that we allow infinitary theories T (ones with proper classes of operators and equations, respectively). The converse is, unfortunately, false: there are infinitary theories T for which a presentation T(G | R) does not present any T-algebra. Technically, what goes wrong is that the collection of terms formed from G by applying the operators of O and satisfying the equations of E can be too big to be a set (i.e. it can be a proper class). Examples are the theory of complete Boolean algebras [73,87] and the theory of complete lattices [87].

2.2 Partial orders

Partially ordered sets occur in many different places in mathematics, and their theory belongs to the fundamentals of any book on lattice theory. A classic reference on lattice theory, representative of the status of the theory in 1967, is the book of Birkhoff [31]. A good modern introductory book on lattice theory and ordered structures is that of Davey and Priestley [54]. In recent years, partial orders on information have been successfully used in the semantics of programming languages [173,177]. The mathematical part of this approach is called domain theory. Here the word domain qualifies a mathematical structure which embodies both a notion of convergence and a notion of approximation [4]. A discussion of domain theory is presented in the subsection below. The reader who wishes to consult a more detailed introduction to domain theory and continuous lattices may find [160,4] and [77] useful references.

Let P be a set. A preorder ≤ on P is a binary relation which is reflexive and transitive. A partial order on P is a binary relation ≤ which is reflexive, transitive and also antisymmetric. A poset is a set equipped with a partial order. A poset P is said to be discrete if the partial order coincides with the identity relation. Hence every set can be thought of as a discrete poset.

In every preordered set P we denote, for x ∈ P, by ↑x the upper set of x, and by ↓x the lower set of x, that is,

↑x = {y ∈ P | x ≤ y} and ↓x = {y ∈ P | y ≤ x}.

The set ↑x is also called the principal filter generated by x.

Let P be a poset and S a subset of P. An element x of P is a join (or least upper bound) of S, written x = ⋁S, if s ≤ x for all s ∈ S, and if s ≤ y for all s ∈ S then x ≤ y. Since the partial order is antisymmetric, the join of S, if it exists, is unique. If S is a two-element set {s, t} then we write s ∨ t for ⋁S; and if S is the empty set ∅ then we write ⊥ for ⋁S. Clearly, if ⊥ exists then it is the least element of P.

Dually, in any poset P we can define the notion of meet by reversing all the inequalities in the definition of the join. We write ⋀S, s ∧ t and ⊤ for the meet of an arbitrary subset S of P, the binary meet, and the empty meet, respectively. Notice that for every upper-closed subset S of P, if ⋀S exists in S then S = ↑(⋀S).
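As a concrete illustration (added here, not in the original): in the poset of positive integers ordered by divisibility, binary joins and meets are least common multiples and greatest common divisors, and the empty join is the least element 1.

```python
from math import gcd
from functools import reduce

def join(s, t):
    """Binary join (least upper bound) under divisibility: the lcm."""
    return s * t // gcd(s, t)

def meet(s, t):
    """Binary meet (greatest lower bound) under divisibility: the gcd."""
    return gcd(s, t)

def big_join(S):
    """Join of a finite collection of positive integers; the empty join
    is 1, the bottom element of the divisibility order."""
    return reduce(join, S, 1)
```

For example, big_join([4, 6, 10]) yields 60, the least integer that every element of the set divides.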

A function f : P → Q between two posets is said to be monotone if p ≤ q in P implies f(p) ≤ f(q). If the reverse implication holds then f is said to be order-reflecting. The collection of all posets with monotone functions between them forms the category PoSet.

A function f : P → Q between two posets is said to be strict if, whenever P has a least element ⊥, f(⊥) is the least element of Q.

A function f \ P Q between two posets is said to be finitely additive if it preserves all finite joins of P, and completely additive if it preserves all joins of P. Dual notions are finitely multiplicative and completely multiplicative.

An element x of a poset P is said to be a least fixed point of f : P → P if f(x) = x, and f(y) = y implies x ≤ y for all other y ∈ P. Dually, x is said to be a greatest fixed point of f : P → P if f(x) = x, and f(y) = y implies y ≤ x for all other y ∈ P. The following proposition is due to Knaster [120]; it was later reformulated by Tarski [189] to assert that the set of fixed points of a monotone function f on a complete lattice forms a complete lattice (and therefore f has a least fixed point).

Proposition 2.2.1 Let P be a poset and let f : P → P be a monotone function. If y = ⋀{x | f(x) ≤ x} exists in P, then y is the least fixed point of f. Similarly, if z = ⋁{x | x ≤ f(x)} exists in P, then z is the greatest fixed point of f. □
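On the complete lattice of subsets of a finite set, ordered by inclusion, the meet in Proposition 2.2.1 can be computed by brute force: intersect all pre-fixed points. The sketch below (an illustration over a made-up four-element universe) does exactly this for a monotone map.

```python
from itertools import chain, combinations

U = {0, 1, 2, 3}

def subsets(s):
    """All subsets of s: the complete lattice P(s) ordered by inclusion."""
    xs = list(s)
    return [frozenset(c)
            for c in chain.from_iterable(combinations(xs, r)
                                         for r in range(len(xs) + 1))]

def lfp_tarski(f):
    """Least fixed point of a monotone f as the intersection (meet)
    of all pre-fixed points X with f(X) <= X, as in Proposition 2.2.1."""
    y = frozenset(U)
    for X in subsets(U):
        if f(X) <= X:
            y = y & X
    return y

# A monotone map: add 0, and add x+2 whenever x is present and x+2 is in U.
f = lambda X: frozenset(X) | {0} | {x + 2 for x in X if x + 2 in U}
```

Here the least fixed point is {0, 2}: the closure of the empty set under the two rules.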

A poset in which every finite subset has a join is called a join-semilattice and, analogously, a poset in which every finite subset has a meet is called a meet-semilattice. A lattice is a poset in which every finite subset has both a meet and a join. By considering arbitrary subsets, and not just finite ones, we can define complete join-semilattices, complete meet-semilattices and complete lattices. A poset is a complete join-semilattice if and only if it is a complete meet-semilattice. Thus every complete semilattice is actually a complete lattice. However, it is convenient to distinguish between the two concepts, since a morphism of complete join-semilattices (a function preserving arbitrary joins) is not necessarily a morphism of complete lattices (a function preserving both arbitrary joins and arbitrary meets). We thus have two categories CSLat and CLat with the same objects but with different morphisms.

A lattice L is called distributive if

a ∧ (b ∨ c) = (a ∧ b) ∨ (a ∧ c)

for all a, b and c in L. The above equation holds in a lattice if and only if so does its dual [172]

a ∨ (b ∧ c) = (a ∨ b) ∧ (a ∨ c).
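Distributivity can be checked mechanically on small lattices. The sketch below (an illustration added here) verifies the law on the lattice of divisors of 30 (meet = gcd, join = lcm) and shows that it fails in the "diamond" lattice M₃, whose three atoms are pairwise incomparable.

```python
from math import gcd
from itertools import product

def is_distributive(elements, meet, join):
    """Check a ∧ (b ∨ c) = (a ∧ b) ∨ (a ∧ c) for all triples."""
    return all(meet(a, join(b, c)) == join(meet(a, b), meet(a, c))
               for a, b, c in product(elements, repeat=3))

# Divisors of 30 under divisibility form a distributive lattice.
divisors_30 = [d for d in range(1, 31) if 30 % d == 0]
lcm = lambda a, b: a * b // gcd(a, b)

# The diamond M3: bottom "0", top "1", three incomparable atoms a, b, c.
def m3_meet(x, y):
    if x == y: return x
    if x == "1": return y
    if y == "1": return x
    return "0"

def m3_join(x, y):
    if x == y: return x
    if x == "0": return y
    if y == "0": return x
    return "1"
```

In M₃ we have a ∧ (b ∨ c) = a but (a ∧ b) ∨ (a ∧ c) = 0, so the check fails.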

The class of all distributive lattices together with functions preserving both finite meets and finite joins defines a category, denoted by DLat. A distributive lattice L is called a Heyting algebra if for all a and b in L there exists an element a → b such that, for all c,

c ≤ (a → b) if and only if (c ∧ a) ≤ b.

A Heyting algebra L is said to be a Boolean algebra if and only if for all a ∈ L,

(a → ⊥) → ⊥ = a.

In this case, the element a → ⊥ is called the complement of a, and is usually denoted by ¬a.

A complete lattice L satisfying the infinite distributive law

a ∧ ⋁S = ⋁{a ∧ s | s ∈ S}

for all a ∈ L and all subsets S of L is called a frame. Frames with functions preserving arbitrary joins and finite meets form a category called Frm. Every frame F defines a Heyting algebra by putting

a → b = ⋁{c ∈ F | c ∧ a ≤ b}.

Conversely, every Heyting algebra which has a join for every subset is also a frame. However, frame morphisms need not preserve the → operation.

A complete lattice L is called completely distributive if, for all sets A of subsets of L,

⋀{⋁S | S ∈ A} = ⋁{⋀f(A) | f ∈ Φ(A)},

where f(A) denotes the set {f(S) | S ∈ A} and Φ(A) is the set of all functions f : A → ⋃A such that f(S) ∈ S for all S ∈ A. The above equation holds in a complete lattice if and only if so does its dual [164]

⋁{⋀S | S ∈ A} = ⋀{⋁f(A) | f ∈ Φ(A)}.

Because of the presence of arbitrary choice functions in the statement of the above law, proofs involving complete distributivity require the axiom of choice. For example, the statement that the set P(X) of all subsets of a set X is a completely distributive lattice when ordered by subset (or superset) inclusion is equivalent to the axiom of choice [53]. Completely distributive lattices with functions preserving both arbitrary meets and arbitrary joins form a category, denoted by CDL. Every completely distributive lattice is a frame and every frame is a distributive lattice. Moreover, there are obvious forgetful functors CDL → Frm and Frm → DLat. Every complete ring of sets, that is, a set of subsets of a set X closed under arbitrary intersections and arbitrary unions, is a completely distributive lattice.

For a meet-semilattice L, a non-empty subset T of L is said to be a filter if

(i) T is upper closed, i.e. a ∈ T and a ≤ b imply b ∈ T; and

(ii) T is a sub-meet-semilattice, i.e. a ∈ T and b ∈ T imply a ∧ b ∈ T.

The collection of all filters of a meet-semilattice L is denoted by Fil(L). If L is also a lattice then a filter T ⊆ L is prime if for all finite subsets S of L, ⋁S ∈ T implies there exists s ∈ S ∩ T. Finally, if L is a complete lattice then a filter T ⊆ L is completely prime if for all subsets S of L, ⋁S ∈ T implies there exists s ∈ S ∩ T. For example, for any a ∈ L the subset ↑a = {b ∈ L | a ≤ b} is a filter.

For a complete lattice L, an element p ∈ L is said to be prime if p ≠ ⊤, and a ∧ b ≤ p implies a ≤ p or b ≤ p. The collection of all prime elements of L is denoted by Spec(L). The map which sends each prime element p ∈ L to the completely prime filter L \ ↓p and the map which sends each completely prime filter T of L to the prime element ⋁(L \ T) form an isomorphism between the completely prime filters and the prime elements of a complete lattice L.

Directed complete partial orders

There is a special class of joins in a poset that we will consider next. A non-empty subset S of a poset P is said to be directed if for all s and t in S there exists x ∈ S with both s ≤ x and t ≤ x. For example, the set of elements of an ω-chain of a poset P forms a directed set, where an ω-chain is a countable sequence (x_n)_n of elements of P such that x_n ≤ x_{n+1} for all n ≥ 0.

We say that P is a directed complete partial order (dcpo) if ⋁S exists for every directed subset S of P. A dcpo P with a least element ⊥ is called a complete partial order (cpo). If a poset has finite joins and directed joins then it has arbitrary joins.

An element b of a dcpo P is compact if for every directed subset S of P, b ≤ ⋁S implies b ≤ s for some s ∈ S. The set of all compact elements of P is denoted by K(P). In any dcpo, the join of finitely many compact elements, if it exists, is again a compact element. A dcpo P is said to be algebraic if every x ∈ P is the join of the directed set of compact elements below it, that is, x = ⋁{b ∈ K(P) | b ≤ x}. A dcpo P is ω-algebraic if it is algebraic and the set K(P) is countable. When a dcpo is ω-algebraic we do not need to consider general directed joins: joins of ω-chains suffice.

A monotone function f : P → Q between two dcpo's is said to be continuous (or Scott continuous) if it preserves all directed joins. The collection of all dcpo's with continuous functions forms a category, denoted by DCPO. The full sub-category of DCPO whose objects are complete partial orders is denoted by CPO, whereas the full sub-category of DCPO whose objects are algebraic dcpo's is denoted by AlgPos. The forgetful functor CPO → DCPO has a left adjoint (−)⊥ mapping every dcpo P to the lift P⊥, that is, the poset P with a new least element adjoined. If P is a set (i.e. a discrete dcpo), then the lift P⊥ is said to be a flat cpo. A flat cpo is algebraic and every element is compact. Dually to the lift, for a poset P we denote by P⊤ the poset P with a new top element adjoined.

The forgetful functor U : AlgPos → PoSet has a left adjoint Idl(−), defined by the map which assigns to a poset P the algebraic dcpo Idl(P) of directed ideals of P (i.e. the directed lower subsets of P) ordered by subset inclusion. Moreover, Idl(P) is also a cpo if and only if P has a least element. The algebraic dcpo Idl(P) is often referred to as the ideal completion of the poset P. More generally, for a preordered set P, the poset Idl(P) of all directed ideals of P ordered by subset inclusion forms an algebraic dcpo with compact elements ↓x for x ∈ P.

Let P be an algebraic cpo. There are three standard preorders defined on subsets X and Y of P:

- the Hoare preorder, defined as X ≤_H Y if ∀x ∈ X ∃y ∈ Y: x ≤ y;

- the Smyth preorder, defined as X ≤_S Y if ∀y ∈ Y ∃x ∈ X: x ≤ y; and

- the Egli-Milner preorder, defined as X ≤_E Y if X ≤_H Y and X ≤_S Y.

Powerdomains can be constructed from these preorders by ideal completion: the Hoare powerdomain H(P), the Smyth powerdomain S(P) and the Plotkin powerdomain E(P) of an algebraic cpo P are defined as the ideal completions of (P_fin(K(P)), ≤_H), (P_fin(K(P)), ≤_S), and (P_fin(K(P)), ≤_E), respectively, where P_fin(K(P)) consists of all finite, non-empty sets of compact elements of P.
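On finite subsets of a poset the three preorders can be computed directly from their definitions. The sketch below (an illustration added here, using divisibility on positive integers as the underlying order) implements all three.

```python
def leq(x, y):
    """Underlying order for the example: divisibility on positive integers."""
    return y % x == 0

def hoare(X, Y):
    """X <=_H Y: every x in X is below some y in Y."""
    return all(any(leq(x, y) for y in Y) for x in X)

def smyth(X, Y):
    """X <=_S Y: every y in Y is above some x in X."""
    return all(any(leq(x, y) for x in X) for y in Y)

def egli_milner(X, Y):
    """X <=_E Y: both the Hoare and the Smyth conditions hold."""
    return hoare(X, Y) and smyth(X, Y)
```

For instance {2, 3} is below {4, 6, 9} in all three preorders, while {2} is below {4, 9} only in the Hoare preorder, since 9 has no lower bound in {2}.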

Let P and Q be two algebraic cpo's. The coalesced sum P ⊕ Q is defined as the disjoint union of P and Q with the bottom elements identified, whereas the separated sum P + Q is the disjoint union of P and Q with a new bottom element ⊥ adjoined. The product P × Q is defined as the Cartesian product of the underlying sets ordered componentwise. All these constructions can be generalized to arbitrary sets of algebraic cpo's. For example, the separated sum Σ_I P_i is the algebraic cpo obtained from the disjoint union of all the algebraic cpo's P_i by adjoining a new bottom element ⊥.

Let P be a cpo. A minimal upper bound x of a subset S of P is an upper bound of S (that is, s ≤ x for all s ∈ S) such that for all y ∈ P,

(∀s ∈ S: s ≤ y) and y ≤ x imply x ≤ y.

In other words, x is a minimal upper bound of S if x is above every element of S and there is no other element y above every element of S but below x. Note that, in contrast with a least upper bound, a minimal upper bound need not be unique. The set mub(S) denotes the set of minimal upper bounds of S, and the set mub*(S) is the smallest set Y ⊆ P such that S ⊆ Y and X ⊆ Y implies mub(X) ⊆ Y; that is, mub*(S) is the least set containing S and closed under mub(−). An algebraic cpo P is said to be an SFP-domain if for every finite subset S of compact elements K(P):

(i) if y is an upper bound of S then x ≤ y for some x ∈ mub(S), and

(ii) the set mub*(S) is finite.

Alternatively, one can define SFP-domains as those algebraic cpo's which arise both as limits and as colimits in CPO of countable sequences (via embedding-projection pairs) of finite posets [158]. The full sub-category of AlgPos whose objects are SFP-domains is denoted by SFP. The justification for studying this category is that it is the largest Cartesian closed category with ω-algebraic cpo's as objects and continuous functions as morphisms [178]. The only fact about SFP which we will need in the sequel is that it is closed under the following constructors: lift, coalesced sum, countable separated sum, and Plotkin powerdomain. Moreover, SFP admits recursive definitions of SFP-domains using these constructors [160, Chapter 5, Theorem 1].

Fixed points

In Proposition 2.2.1 sufficient conditions are given to guarantee the existence of least and greatest fixed points of a monotone function on a poset. Next we recall other characterizations of least and greatest fixed points of monotone functions. For an overview of fixed point theorems we refer to [129].

Let P be a poset and let f : P → P be a function. For any ordinal α define f⟨α⟩ and f[α] as the following elements of P (if they exist):

(2.1) f⟨α⟩ = f(⋁{f⟨β⟩ | β < α}) and f[α] = f(⋀{f[β] | β < α}).

In general they need not exist, since ⋁{f⟨β⟩ | β < α} and ⋀{f[β] | β < α} may not exist. Notice that for α = 0, f⟨0⟩ = f(⊥) when the least element ⊥ ∈ P exists, since in this case the join over an empty index set is the bottom element. Similarly, for α = 0, f[0] = f(⊤) when the top element exists. The following proposition, originally formulated by Kleene [119] in a different context, characterizes the least fixed point as a directed join, in contrast with Proposition 2.2.1 where the least fixed point is characterized as an infinite meet.

Proposition 2.2.2 Let P be a complete partial order. If f : P → P is a continuous function then f⟨α⟩ exists in P for every ordinal α, and f⟨ω₀⟩ is the least fixed point of f (here ω₀ is the first limit ordinal). □
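On a finite cpo every monotone map is continuous and the chain of Proposition 2.2.2 stabilizes after finitely many steps, so the least fixed point can be computed by plain iteration from the bottom element. The sketch below (an illustration added here) computes graph reachability as such a least fixed point on the powerset cpo.

```python
def lfp_kleene(f, bottom=frozenset()):
    """Iterate bottom, f(bottom), f(f(bottom)), ... until the chain
    stabilizes; on a finite poset this yields the least fixed point."""
    x = bottom
    while f(x) != x:
        x = f(x)
    return x

# Made-up example: the set of nodes reachable from node 0.
edges = {0: {1}, 1: {2}, 2: set(), 3: {4}, 4: set()}
reach = lambda X: frozenset({0}) | {m for n in X for m in edges[n]}
```

Starting from ∅ the chain is {0}, {0, 1}, {0, 1, 2}, which is the fixed point; nodes 3 and 4 are unreachable.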

Hitchcock and Park have extended the above proposition by weakening the constraint on f from continuity to monotonicity [100].

Proposition 2.2.3 Let P be a complete partial order. If f : P → P is a monotone function then f⟨α⟩ exists in P for every ordinal α, and there exists an ordinal β such that f⟨α⟩ = f⟨β⟩ whenever β ≤ α. The latter implies that f⟨β⟩ is the least fixed point of f. □

The dual of the above proposition holds as well. To guarantee the existence of certain meets we rephrase it for complete lattices.

Proposition 2.2.4 Let P be a complete lattice. If f : P → P is a monotone function then f[α] exists in P for every ordinal α, and there exists an ordinal β such that f[α] = f[β] whenever β ≤ α. The latter implies that f[β] is the greatest fixed point of f. □

Under certain circumstances, the least fixed point of a function on a cpo can be enough to guarantee the existence of the least fixed point of another function on a poset which is not necessarily complete [10],

Proposition 2.2.5 Let P be a cpo and let Q be a poset such that there is a strict and continuous function h : P → Q. If x ∈ P is the least fixed point of a monotone function f : P → P and g : Q → Q is another monotone function such that h ∘ f = g ∘ h, then the least fixed point of g exists and equals h(x). □

Several generalizations and applications of the above proposition (often called the transfer lemma) can be found in [140].

2.3 Metric spaces

We conclude this chapter with a section on some basic notions related to metric spaces. The results in this section will play a key role only in the second part of this monograph. Like partially ordered sets, metric spaces are fundamental structures in mathematics, especially in topology. For details we refer the reader to Engelking's standard work [63] and Dugundji's classical book [59]. We use metric spaces as a mathematical structure for the semantics of programming languages, following the work of Arnold and Nivat [11]. For a comprehensive survey of the use of metric spaces in the semantics of a large variety of programming notions, we refer the reader to [23].

A (one-bounded) metric space consists of a set X together with a function d_X : X × X → [0,1], called a metric or distance, satisfying, for all x, y and z in X,

(i) d_X(x, x) = 0
(ii) d_X(x, z) ≤ d_X(x, y) + d_X(y, z)
(iii) d_X(x, y) = d_X(y, x)
(iv) d_X(x, y) = d_X(y, x) = 0 ⟹ x = y

A set X with a function d_X : X × X → [0,1] satisfying only (i), (ii), and (iii) is called a (one-bounded) pseudo-metric space. A quasi-metric space is a set X with a function d_X : X × X → [0,1] satisfying axioms (i), (ii), and (iv). We shall usually write X instead of (X, d_X) and denote the metric of X by d_X. A metric space X with a distance function which satisfies, for all x, y and z in X,

d_X(x, z) ≤ max{d_X(x, y), d_X(y, z)}

is said to be an ultra-metric space. Clearly the above axiom implies axiom (ii).
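A standard example (added here as an illustration) is the set of finite words over some alphabet, with d(x, y) = 2^(-n) where n is the length of the longest common prefix of x and y; this is a one-bounded ultra-metric.

```python
def prefix_distance(x, y):
    """d(x, y) = 2^-n with n the length of the longest common prefix;
    equal words are at distance 0. The distance never exceeds 1."""
    if x == y:
        return 0.0
    n = 0
    while n < min(len(x), len(y)) and x[n] == y[n]:
        n += 1
    return 2.0 ** -n

# The strong triangle inequality holds for all triples of words.
words = ["", "a", "ab", "abc", "b", "ba"]
ultra = all(prefix_distance(x, z) <= max(prefix_distance(x, y),
                                         prefix_distance(y, z))
            for x in words for y in words for z in words)
```

Intuitively, two words are close when they agree on a long prefix; the common prefix of x and z is at least as long as the shorter of those of (x, y) and (y, z), which is exactly the strong triangle inequality.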

A countable sequence of points (x_n)_n of a metric space X is said to converge to an element x ∈ X if

∀ε > 0 ∃k > 0 ∀n ≥ k: d_X(x_n, x) < ε.

Every sequence converges to at most one point which, if it exists, is said to be the limit of the sequence. It is denoted by lim_n x_n. A countable sequence of points (x_n)_n of a metric space X is said to be Cauchy if

∀ε > 0 ∃k > 0 ∀m, n ≥ k: d_X(x_m, x_n) < ε.

As can easily be seen, every convergent sequence is Cauchy. A metric space X is called complete if every Cauchy sequence converges to some point in X.

The simplest example of a complete metric space is the following. A metric space X is called discrete if

∀x ∈ X ∃ε > 0 ∀y ∈ X: d_X(x, y) < ε ⟹ x = y.

Every discrete metric space is complete, since it has no non-trivial Cauchy sequences. A set X can be seen as a discrete metric space if endowed with the distance function which assigns to x, y ∈ X distance 1 if x ≠ y and distance 0 otherwise.

Let X and Y be two metric spaces. A function f : X → Y is said to be non-expansive if d_Y(f(x₁), f(x₂)) ≤ d_X(x₁, x₂) for all x₁, x₂ ∈ X. The set of all non-expansive functions from X to Y is denoted by X →¹ Y. Complete metric spaces together with non-expansive maps form a category, denoted by CMS.

Of special interest in the study of metric spaces are contracting functions, that is, functions f : X → Y such that

∃ε < 1 ∀x₁, x₂ ∈ X: d_Y(f(x₁), f(x₂)) ≤ ε · d_X(x₁, x₂).

The following proposition is known as the Banach fixed point theorem [25].

Proposition 2.3.1 If X is a complete metric space and f : X → X is a contracting function then f has a unique fixed point x such that, for every y ∈ X, x = lim_n y_n, where (y_n)_n is the Cauchy sequence defined inductively by y₀ = y and y_{n+1} = f(y_n). □
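The theorem is effective: iterating f from any starting point yields the fixed point in the limit. The sketch below (an illustration added here; it uses the ordinary distance on the reals, which is not one-bounded, but the argument is the same) approximates the fixed point of a contraction with factor 1/2.

```python
def banach_fixed_point(f, y, eps=1e-12, max_iter=10_000):
    """Iterate y, f(y), f(f(y)), ... until two successive points are
    eps-close; for a contracting f this converges to its unique fixed point."""
    for _ in range(max_iter):
        z = f(y)
        if abs(z - y) < eps:
            return z
        y = z
    raise RuntimeError("no convergence within max_iter steps")

# f(t) = t/2 + 1 is contracting with factor 1/2; its fixed point is 2.
x = banach_fixed_point(lambda t: t / 2 + 1, y=0.0)
```

The contraction factor also bounds the error: after n steps the distance to the fixed point has shrunk by at least ε^n.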

Next we define some of the constructors on metric spaces. For every ε < 1 and metric space X, define the metric space ε · X as the set X with distance function, for all x₁ and x₂ in X,

d_{ε·X}(x₁, x₂) = ε · d_X(x₁, x₂).

The product X × Y of two metric spaces X and Y is defined as the Cartesian product of their underlying sets together with the distance, for (x₁, y₁) and (x₂, y₂) in X × Y,

d_{X×Y}((x₁, y₁), (x₂, y₂)) = max{d_X(x₁, x₂), d_Y(y₁, y₂)}.

The exponent of X and Y is defined by

Y^X = {f : X → Y | f is non-expansive},

with distance, for f and g in Y^X,

d_{Y^X}(f, g) = sup{d_Y(f(x), g(x)) | x ∈ X}.

Notice that if Y is a set endowed with the discrete metric then every function from Y to X is non-expansive. The disjoint union X + Y of two metric spaces X and Y is defined by taking the disjoint union of their underlying sets with distance, for z_1 and z_2 in X + Y,

d_{X+Y}(z_1, z_2) = { d_X(z_1, z_2)  if z_1 ∈ X and z_2 ∈ X
                    { d_Y(z_1, z_2)  if z_1 ∈ Y and z_2 ∈ Y
                    { 1              otherwise.

If both X and Y are complete metric spaces then X × Y, Y^X and X + Y are also complete metric spaces.
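The product, exponent, and disjoint-union distances can be sketched on finite data as follows; the truncated metric on the reals serves as both d_X and d_Y (an illustrative assumption, keeping distances bounded by 1).

```python
# Distances on constructed spaces: product (max over coordinates),
# exponent (sup over the carrier, here finite), and disjoint union
# (tagged elements; distance 1 across the two summands).

def d(x, y):
    return min(abs(x - y), 1.0)

def d_prod(p, q):
    (x1, y1), (x2, y2) = p, q
    return max(d(x1, x2), d(y1, y2))

def d_exp(f, g, carrier):
    return max(d(f(x), g(x)) for x in carrier)

def d_sum(z1, z2):
    tag1, v1 = z1
    tag2, v2 = z2
    return d(v1, v2) if tag1 == tag2 else 1.0

assert abs(d_prod((0.0, 0.5), (0.2, 0.1)) - 0.4) < 1e-9
assert abs(d_exp(lambda x: x, lambda x: x + 0.25, [0.0, 0.5]) - 0.25) < 1e-9
assert abs(d_sum(('X', 0.3), ('X', 0.5)) - 0.2) < 1e-9
assert d_sum(('X', 0.3), ('Y', 0.3)) == 1.0
```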

The Hausdorff distance between two subsets A and B of a metric space X is defined by

d_{𝒫(X)}(A, B) = max{ sup{inf{d_X(a, b) | b ∈ B} | a ∈ A},
                      sup{inf{d_X(a, b) | a ∈ A} | b ∈ B} }

with the convention that inf ∅ = 1 and sup ∅ = 0. In general (𝒫(X), d_{𝒫(X)}) is a pseudometric space: different subsets of X can have null distance. In order to turn sets of subsets of a metric space into a metric space, we need the following notions. A subset S of a metric space X is said to be closed if the limit of every convergent sequence in S is an element of S. For example, the set X itself is closed, as well as the empty set. Also, every singleton set {x} is closed. In general, every subset S can be extended to a closed set

cl(S) = {lim_n x_n | (x_n)_n is a convergent sequence in S}.

Clearly, cl(S) is the smallest closed set containing S. Notice that if S is a closed subset then cl(S) = S. A subset S of a metric space X is compact if every sequence in S has a subsequence converging to some element of S. Every compact set is closed, and every finite set is compact. A metric space X is compact if the set X is compact. It follows that every compact metric space is complete.
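For finite subsets the Hausdorff distance can be computed directly from its definition, including the conventions inf ∅ = 1 and sup ∅ = 0; the ground metric min(|x − y|, 1) is an illustrative assumption.

```python
# Hausdorff distance between finite subsets of a metric space:
# max of the two directed distances sup_a inf_b d(a, b), with
# inf over the empty set taken as 1 and sup over the empty set as 0.

def d(x, y):
    return min(abs(x - y), 1.0)

def hausdorff(A, B):
    def directed(A, B):
        return max((min((d(a, b) for b in B), default=1.0) for a in A),
                   default=0.0)
    return max(directed(A, B), directed(B, A))

assert hausdorff({0.0, 0.5}, {0.0, 0.5}) == 0.0
assert abs(hausdorff({0.0}, {0.0, 0.4}) - 0.4) < 1e-9
assert hausdorff(set(), {0.0}) == 1.0   # inf over the empty set is 1
```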

Both the collection of compact subsets of a metric space X, denoted by 𝒫_co(X), and the collection of closed subsets of X, denoted by 𝒫_cl(X), are metric spaces when taken with the Hausdorff distance. Moreover, if X is a complete metric space then 𝒫_co(X) is a complete metric space [85], and also 𝒫_cl(X) is a complete metric space [124]. We refer to them as the compact and closed powerdomain of X, respectively. Below we give two properties of the Hausdorff distance which will be useful later.

Proposition 2.3.2 Let X be a metric space. For all δ ≥ 0 and subsets A and B of X, d_{𝒫(X)}(A, B) ≤ δ if and only if, for all ε > 0,

∀a ∈ A ∃b ∈ B: d_X(a, b) ≤ δ + ε and ∀b ∈ B ∃a ∈ A: d_X(a, b) ≤ δ + ε. □

While the above proposition is standard, the following one seems to be new.

Proposition 2.3.3 Let X be a metric space and let V and W be two sets of subsets of X such that, for all C ⊆ X, if ⋂V ⊆ C then C ∈ V, and if ⋂W ⊆ C then C ∈ W. Then

d_{𝒫(𝒫(X))}(V, W) = d_{𝒫(X)}(⋂V, ⋂W).

Proof. Put V_0 = ⋂V and W_0 = ⋂W. Let also δ = d_{𝒫(𝒫(X))}(V, W) and δ_0 = d_{𝒫(X)}(V_0, W_0). We claim

(2.2) ∀X ∈ V ∃Y ∈ W: d_{𝒫(X)}(X, Y) ≤ δ_0 + ε and ∀Y ∈ W ∃X ∈ V: d_{𝒫(X)}(Y, X) ≤ δ_0 + ε

for an arbitrary ε > 0. From the above claim, δ ≤ δ_0 follows by Proposition 2.3.2. Next we prove the claim. Choose some ε > 0 and X ∈ V. Put Y = W_0 ∪ X. By the closure property of W, since W_0 ⊆ Y, also Y ∈ W. Since X ⊆ Y we have that for all x ∈ X we can find y ∈ Y such that d_X(x, y) = 0. On the other hand, for all y ∈ Y, either y ∈ W_0 or y ∈ X by definition. So either d_X(y, x) ≤ δ_0 + ε for some x ∈ V_0 ⊆ X (since d_{𝒫(X)}(V_0, W_0) ≤ δ_0 and V_0 ⊆ X because X ∈ V), or d_X(y, x) = 0 ≤ δ_0 + ε taking x = y ∈ X. Hence the first part of (2.2) follows by Proposition 2.3.2. Symmetrically we derive

∀Y ∈ W ∃X ∈ V: d_{𝒫(X)}(Y, X) ≤ δ_0 + ε,

that is, the second part of (2.2), from which our claim follows, and we conclude δ = d_{𝒫(𝒫(X))}(V, W) ≤ δ_0. In order to show the converse, i.e. δ_0 ≤ δ, we establish

(2.3) ∀x ∈ V_0 ∃y ∈ W_0: d_X(x, y) ≤ δ + ε and ∀y ∈ W_0 ∃x ∈ V_0: d_X(y, x) ≤ δ + ε,

for all ε > 0. Since d_{𝒫(𝒫(X))}(V, W) = δ and, by the closure properties, V_0 ∈ V and W_0 ∈ W, we can find Y ∈ W and X ∈ V by Proposition 2.3.2 such that

d_{𝒫(X)}(V_0, Y) ≤ δ + ε and d_{𝒫(X)}(X, W_0) ≤ δ + ε.

From this we obtain both parts of (2.3), using V_0 ⊆ X and W_0 ⊆ Y, respectively. Therefore δ_0 = d_{𝒫(X)}(V_0, W_0) ≤ δ. □
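Proposition 2.3.3 can be sanity-checked on a finite example (a spot-check, not a proof): take V and W to be all subsets of a small universe containing V_0 and W_0 respectively, so that the closure hypothesis holds with ⋂V = V_0 and ⋂W = W_0. The universe and the sets below are illustrative assumptions.

```python
# Compare d_{P(P(X))}(V, W) with d_{P(X)}(V0, W0) on a finite universe.
from itertools import combinations

def d(x, y):
    return min(abs(x - y), 1.0)

def hausdorff(A, B, dist):
    def directed(A, B):
        return max((min((dist(a, b) for b in B), default=1.0) for a in A),
                   default=0.0)
    return max(directed(A, B), directed(B, A))

universe = (0.0, 0.3, 0.6)

def supersets(base):
    # all subsets of the universe that contain base (closure hypothesis)
    rest = [x for x in universe if x not in base]
    return [frozenset(base) | frozenset(c)
            for r in range(len(rest) + 1)
            for c in combinations(rest, r)]

V0, W0 = {0.0}, {0.0, 0.3}
V, W = supersets(V0), supersets(W0)

lhs = hausdorff(V, W, lambda A, B: hausdorff(A, B, d))  # d_{P(P(X))}(V, W)
rhs = hausdorff(V0, W0, d)                              # d_{P(X)}(V0, W0)
assert abs(lhs - rhs) < 1e-12
```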

In [9, 169], generalizing the results of [24], a method has been developed to justify recursive definitions of complete metric spaces as solutions of domain equations of the form X = F(X), where F : CMS → CMS is a functor. A solution of the domain equation X = F(X) exists and is unique (up to isometry) if the functor F is locally contractive, that is, for every two complete metric spaces X and Y, the mapping

F_{X,Y} : Y^X → F(Y)^{F(X)}

is contractive, where F_{X,Y}(f) = F(f) for every non-expansive f : X → Y. If, for all objects X and Y, F_{X,Y} is non-expansive then the functor is called locally non-expansive. The composition of a locally non-expansive functor with a locally contractive one yields a locally contractive functor.

For example, the constructors 𝒫_co(−) and 𝒫_cl(−) can be extended to functors from CMS to CMS which are locally non-expansive, while the constructor ε · (−) (for ε < 1) can be extended to a functor from CMS to CMS which is locally contractive. Also, for a fixed set S (understood as a discrete metric space), the constructors S × (−) and S + (−) can be extended to functors from CMS to CMS which are locally non-expansive [9, 169].
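The local contractivity of the scaling functor can be illustrated on a finite carrier: ε · (−) acts as the identity on functions, but distances in the scaled exponent metric shrink by the factor ε. The metric, maps, and sample points below are assumptions chosen for the sketch.

```python
# Local contractivity of eps * (-): for non-expansive f, g : X -> Y,
# the induced map F_{X,Y} sends f to f itself, and
# d over (eps*Y)^(eps*X) equals eps times d over Y^X.

EPS = 0.5

def d(x, y):
    return min(abs(x - y), 1.0)     # ground metric, bounded by 1

def d_scaled(x, y):
    return EPS * d(x, y)            # metric of eps * X

def d_exp(f, g, carrier, dist):
    return max(dist(f(x), g(x)) for x in carrier)

f = lambda x: x
g = lambda x: min(x + 0.25, 1.0)    # both maps are non-expansive
carrier = [0.0, 0.25, 0.5, 0.75]

plain = d_exp(f, g, carrier, d)
scaled = d_exp(f, g, carrier, d_scaled)
assert abs(scaled - EPS * plain) < 1e-12
```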