
Electronic Notes in Theoretical Computer Science 1 (1995)

URL: http://www.elsevier.nl/locate/entcs/volume1.html 22 pages

The Smyth Completion: A Common Foundation for Denotational Semantics and Complexity Analysis.

M. Schellekens ¹

Department of Mathematics, Carnegie Mellon University, Pittsburgh, PA 15213. E-mail: Michel.Schellekens@cs.cmu.edu

Abstract

The Smyth completion ([15], [16], [18] and [19]) provides a topological foundation for Denotational Semantics. We show that this theory simultaneously provides a topological foundation for the complexity analysis of programs via the new theory of "complexity (distance) spaces". The complexity spaces are shown to be weightable ([13],[8],[10]) and thus belong to the class of S-completable quasi-uniform spaces ([19]). We show that the S-completable spaces possess a sequential Smyth completion. The applicability of the theory to "Divide & Conquer" algorithms is illustrated by a new proof (based on the Banach theorem) of the fact that mergesort has optimal asymptotic average running time.

1 History and Related Work

Smyth in [15] and [16] has provided a topological framework for Denotational Semantics based on the theory of quasi-uniform spaces (Nonsymmetric Topology [6], [8]). This work has been continued in [18] and [19] by Sünderhauf, in the context of the topological quasi-uniform spaces (a category extending the quasi-uniform spaces). The theory of the Smyth completion (or S-completion) has as a typical application to Denotational Semantics: the topological completion of a quasi-uniform space ("representing" a partial order) to a topological quasi-uniform space (which "represents" a cpo). The class of S-completable (topological) quasi-uniform spaces consists of the quasi-uniform spaces whose S-completion is again a quasi-uniform space. The S-completable quasi-uniform spaces have been introduced and characterized by Sünderhauf ([18] and [19]) and were shown to have an S-completion quasi-unimorphic (i.e. "isomorphic" in the context of quasi-uniform spaces) to the bicompletion ([19]). The weightable quasi-pseudo-metric spaces have been introduced by Matthews in the context of the semantic analysis of data flow networks ([13],[14]). Künzi and Vajner have continued the study of the weightable spaces ([8],[9]) and these spaces have been shown to be S-completable by Künzi ([8]).

¹ I thank Stephen Brookes, Dana Scott and Rick Statman for helpful suggestions on the presentation of the paper.

We introduce a new quasi-pseudo-metric on function spaces suitable for the complexity analysis of programs: "the complexity distance". The complexity spaces (i.e. the spaces equipped with the complexity distance) are shown to be weightable and thus in particular to be S-completable.

We obtain a sequential version of the Smyth completion for S-completable quasi-uniform spaces and show that the Divide & Conquer algorithms induce contraction maps on the sequential Smyth completion of the complexity spaces. As an application an alternative proof is presented (based on the Banach theorem) of the fact that mergesort has optimal asymptotic average time.

2 Introductory Notions

We give the standard definitions. For an introduction to the theory of quasi-uniform spaces and Nonsymmetric Topology we refer the reader to [6] and [8].

We use ∘ to denote the composition of relations; if R is a relation, then R⁻¹ is its inverse, and Δ stands for the diagonal relation, i.e. Δ = {(x, x) | x ∈ X}. ℕ denotes the set of the natural numbers and ℝ denotes the set of the reals. ℕ₀ = ℕ − {0}. For any set X, 𝒫(X) denotes the powerset of X. A filter ℱ on a set X is a subset of 𝒫(X) such that

1) (∀F, G ∈ ℱ) F ∩ G ∈ ℱ,

2) (∀G ⊆ X) [(∃F ∈ ℱ) F ⊆ G] ⇒ G ∈ ℱ,

3) ∅ ∉ ℱ.

A quasi-uniform space (X, 𝒰) is a space such that

1) 𝒰 is a filter on X × X,

2) (∀U ∈ 𝒰)(∃V ∈ 𝒰) V ∘ V ⊆ U,

3) (∀U ∈ 𝒰) Δ ⊆ U.

𝒰 is a quasi-uniformity on X, and U, V, W, ... denote elements of 𝒰, called entourages.

The space (X, 𝒰) is uniform when moreover:

4) (∀U ∈ 𝒰) U⁻¹ ∈ 𝒰.

𝒰 is then referred to as the uniformity on X.

A base ℬ for a (quasi-)uniformity 𝒰 is a subset of 𝒰 such that

(∀U ∈ 𝒰)(∃B ∈ ℬ) B ⊆ U.

A function f : (X, 𝒰) → (Y, 𝒱) is quasi-uniformly continuous iff (∀V ∈ 𝒱)(∃U ∈ 𝒰) f²(U) ⊆ V, where f²(x, y) = (fx, fy). A quasi-unimorphism is a bijection f between quasi-uniform spaces such that f and f⁻¹ are quasi-uniformly continuous. The topology 𝒯(𝒰) associated with a quasi-uniformity (X, 𝒰) is the topology generated by the family of neighbourhood filters {U[x] | U ∈ 𝒰}.

A topology is T₀ iff for all pairs of distinct points x, y either there exists a neighborhood V of x such that y ∉ V or there exists a neighborhood W of y such that x ∉ W. A topology is T₁ iff for all pairs of distinct points x, y there exist neighborhoods V of x and W of y such that y ∉ V and x ∉ W. The following properties hold (cf. [6]):

• Given a quasi-uniform space (X, 𝒰), ∩𝒰 is always a preorder (i.e. reflexive and transitive). This preorder is called the preorder associated to 𝒰, and is denoted by ≤_𝒰.

• The topology 𝒯(𝒰) is T₀ iff ∩𝒰 is a partial order, and is T₁ iff ∩𝒰 = Δ.

For any entourage U, we define U* = U ∩ U⁻¹. Given a quasi-uniformity 𝒰, 𝒰* is defined to be the uniformity generated by the base ℬ = {U* | U ∈ 𝒰}, i.e.

𝒰* = {U | U ⊆ X × X and (∃B ∈ ℬ) B ⊆ U}.

A 𝒰*-Cauchy sequence (xₙ)ₙ is a sequence such that

(∀U* ∈ 𝒰*)(∃n₀)(∀m, n ≥ n₀) xₘ U* xₙ.

A quasi-uniform space (X, 𝒰) is bicomplete iff the uniform space (X, 𝒰*) is complete. A bicompletion of a quasi-uniform space (X, 𝒰) is a bicomplete quasi-uniform space (Y, 𝒱) which has a 𝒯(𝒱*)-dense subspace quasi-unimorphic to (X, 𝒰). T₀ quasi-uniform spaces have a unique (up to quasi-unimorphism) T₀ bicompletion ([6]), indicated by "the bicompletion". A function d : X × X → ℝ⁺ is a quasi-pseudo-metric iff

1) d satisfies the triangle inequality: (∀x, y, z) d(x, y) + d(y, z) ≥ d(x, z), and

2) (∀x) d(x, x) = 0.

d is a quasi-metric when d also satisfies

3) d(x, y) = 0 ⇒ x = y.

Given a quasi-pseudo-metric d, the induced metric d* is defined by d*(x, y) = max{d(x, y), d(y, x)}. Note ([6]): a quasi-pseudo-metric d generates a quasi-uniform space 𝒰_d via the countable base ℬ_d = (Bₙ)ₙ, where Bₙ = {(x, y) | d(x, y) < 1/2ⁿ}. So,

𝒰_d = {U | U ⊆ X × X, (∃Bₙ ∈ ℬ_d) Bₙ ⊆ U}.

The associated preorder is defined by: x ≤_d y iff d(x, y) = 0. This preorder coincides with the preorder associated to 𝒰_d. Given a quasi-pseudo-metric space (X, d), a d-contraction map f : X → X is a map such that (∃c < 1)(∀x, y ∈ X) d(fx, fy) ≤ c·d(x, y).

3 The Sequential Completion

For uniform spaces, the construction of the topological completion can be obtained via the standard minimal Cauchy filter completion ([5]). It is well known that in the case of a uniform space with a countable base the filter completion can be carried out by the usual Cauchy sequence completion (a property denoted in [5] by "adequacy of sequences"). This includes the particular case of the Cauchy sequence completion of a metric space. A completion via sequences will be referred to as a "sequential completion".

For Nonsymmetric Topology, for example for the theory of quasi-uniform spaces and in particular for the S-completion, the question of the existence of sequential completions is not settled. The S-completion of a quasi-uniform space as presented in the literature ([18]) is obtained via the (from a topological point of view) non-standard "round S-Cauchy" filters. In [19] Sünderhauf obtains a reduction of this filter S-completion to the "Cauchy net completion". However there is no guarantee that this completion further reduces to a sequential completion. In [15], a sequential (non-topological) completion of quasi-uniform spaces is given (of which we will use the definition of the base on the completion). However this sequential completion has not yet been generalized in the literature to the topological context of the S-completion. We obtain a partial answer to the problem by assuming the extra condition of "S-completability" ([18] and [19]) on the quasi-uniform space. (This extra assumption will be justified below, as the spaces we will consider will be S-completable.) Recall that a quasi-uniform space (as part of the topological quasi-uniform spaces) is S-completable iff its completion in the class of all topological quasi-uniform spaces is again a quasi-uniform space. Under the S-completability assumption the S-completion of a quasi-uniform space is quasi-unimorphic to its bicompletion [19]. This reduces the problem to finding a sequential version of the bicompletion of quasi-uniform spaces. To the author's knowledge no such version is available in the literature ². Theorem 3.1 sets up the sequential bicompletion of quasi-uniform spaces with countable base. In Corollary 3.2 we obtain the sequential bicompletion for the case of the quasi-pseudo-metric spaces. We define a quasi-pseudo-metric inducing the quasi-uniformity on the completion from a given quasi-pseudo-metric inducing the original quasi-uniformity.
It is this last result which will be used to obtain the application discussed under section 7. The general result given under Theorem 3.1 combined with the fact that (as we will show) the space used in the application of section 7 is S-completable, implies that this application is based on a sequential S-completion, and thus on the topological foundation of Denotational Semantics.

² A sequential bicompletion for quasi-pseudo-metric spaces has been obtained by A. Di Concilio in [4], as pointed out recently to the author by H.P. Künzi. This result corresponds to our Corollary 3.2, which follows from the more general result given under Theorem 3.1.

Theorem 3.1 Every T₀ quasi-uniform space (X, 𝒰) with countable base (Bₖ)ₖ has a sequential bicompletion (X̃, 𝒰̃). The base set X̃ is defined by:

X̃ = {(xₙ)ₙ | (xₙ)ₙ is a 𝒰*-Cauchy sequence}/∼, where

(xₙ)ₙ ∼ (yₙ)ₙ ⟺ (∀Bₖ*)(∃n₀)(∀m, n ≥ n₀) xₘ Bₖ* yₙ.

The quasi-uniformity 𝒰̃ is defined to be generated by the base elements:

B̃ₖ = {(x̃, ỹ) | (∃ representatives (xₙ)ₙ, (yₙ)ₙ of x̃, ỹ)(∃n₀)(∀m, n ≥ n₀) xₘ Bₖ yₙ}.

(The definition of the base corresponds to the one given in [15].)

Proof (Sketch) The details are lengthy, so we only give a sketch. We show that the obtained sequential completion is a T₀ bicompletion, so by the uniqueness of T₀ bicompletions (up to quasi-unimorphism) it is quasi-unimorphic to the filter bicompletion as given in [6]. □

Corollary 3.2 When 𝒰 = 𝒰_d, there are a quasi-pseudo-metric d̃ and a metric d̃* on X̃, defined by:

d̃([(xₙ)ₙ], [(yₙ)ₙ]) = limₙ d(xₙ, yₙ) and

d̃*([(xₙ)ₙ], [(yₙ)ₙ]) = limₙ d*(xₙ, yₙ),

such that 𝒰̃ = 𝒰_d̃, and such that (d̃)* = d̃*. In particular: (𝒰̃)* = 𝒰_(d̃)* = 𝒰_d̃*. □

Two results for the sequential bicompletion are obtained next: the extension theorem (a straightforward adaptation of the extension theorem in [6] for the filter bicompletion to the sequential completion) and the Banach theorem (for the classical version of the Banach theorem we refer the reader to [5]).

Theorem 3.3 (Extension Theorem) Suppose (X, 𝒰) is a T₀ quasi-uniform space with a countable base, and let f : (X, 𝒰) → (X, 𝒰) be a quasi-uniformly continuous function. Then f extends uniquely to a quasi-uniformly continuous function f̃ : (X̃, 𝒰̃) → (X̃, 𝒰̃), defined by f̃[(xₙ)ₙ] = [(f(xₙ))ₙ]. f̃ is unique in the sense that any quasi-uniformly continuous function g̃ : (X̃, 𝒰̃) → (X̃, 𝒰̃) which agrees with f on the constant sequences must coincide with f̃. □

Theorem 3.4 (Banach Theorem) Given a T₀ space (X, 𝒰_d) and a d-contraction map f, the function f̃ obtained via the extension theorem is a d̃-contraction map, and has a unique fixed point FIX(f̃), given by: FIX(f̃) = lim_{k→∞} f̃ᵏ[(xₙ)ₙ], where [(xₙ)ₙ] is an arbitrary element of X̃. In particular, for each constant sequence [(x)ₙ]: FIX(f̃) = limₖ [(fᵏx)ₙ]. □
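As a down-to-earth numeric illustration of the fixed point arising as the limit of iterates (our toy example, not from the paper): the map f(x) = x/2 + 1 is a contraction with constant 1/2, and iterating it from any starting point yields a Cauchy sequence converging to the unique fixed point 2.

```python
# Toy illustration of the Banach fixed-point iteration FIX(f) = lim_k f^k(x0):
# f(x) = x/2 + 1 is a contraction with constant 1/2; its unique fixed point is 2.
def f(x):
    return x / 2 + 1

x = 0.0
for _ in range(80):   # the iterates form a Cauchy sequence converging to 2
    x = f(x)
print(x)
```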

4 The Complexity Distance

Given a partial recursive function f and a programming language ℒ, let [f] be the set of all programs of ℒ computing a partial recursive function which approximates f (in the usual pointwise ordering on partial functions). We will use P, Q, ... in what follows to denote programs.

Recall that a complexity measure ([3],[11]) is a binary partial function C(k, n) on ℕ satisfying the Blum axioms ([3]):

1) C(k, n) is defined iff the program with coding k converges on input n.

2) The predicate C(k, n) ≤ y is recursive.

So C(k, n) represents the complexity of a program P (with code k) on input n.

We only use this abstract setting to introduce the complexity distance in its full generality. We will not use the recursion theoretic machinery connected with this axiomatization, so the reader who is not familiar with abstract complexity measures can keep one measure in mind (for example the running time) while reading the results.

The output value undefined is indicated by ⊥, and has infinite complexity. Let C_P denote the complexity (function) of a program P, that is the function λn.C(k, n) (where k is the code of P). We assume C_P to be non-zero on all inputs.

The intuition behind the "complexity distance" (defined below) between programs P and Q is that d(P, Q) measures the relative progress made in lowering the complexity by replacing P by Q.

Definition 4.1 Given 𝒫 ⊆ [f], we define d : 𝒫² → ℝ⁺ by

d(P₁, P₂) = Σₙ (1/2ⁿ) · d₁,₂(n), where

1) d₁,₂(n) = 0 if C_{P₁}(n) ≤ C_{P₂}(n), and

2) d₁,₂(n) = 1/C_{P₂}(n) − 1/C_{P₁}(n) if C_{P₁}(n) > C_{P₂}(n).

Note that d is not necessarily a quasi-metric, that is, there may be programs P and Q such that d(P, Q) = 0 and P ≠ Q. For example, consider a language which supports assignments and a complexity measure counting each assignment. Consider any program P in the language, and obtain the program Q from P by adding a dummy assignment to P (that is, an assignment to a variable not occurring in P). We have d(P, Q) = 0 and P ≠ Q.

A more precise motivation for the definition of the distance is given below. We follow the intuition given above, that is, we aim at measuring relative progress in complexity. Assume P and Q are programs such that on a given input n, C_P(n) ≥ C_Q(n). We obtain a relative measure of progress by replacing the absolute difference C_P(n) − C_Q(n) by (C_P(n) − C_Q(n))/C_P(n). However, progress from C_P(n) = ∞ to a finite value C_Q(n) with this distance would (by taking "the limit") yield the constant value 1, independently of C_Q(n). The limit we have in mind is lim_{k→∞} (C_{P_k}(n) − C_Q(n))/C_{P_k}(n), where P_k is the program obtained from P by limiting P to run on each input for at most k steps.

Lemma 4.2 The complexity distance is a quasi-pseudo-metric.

To be consistent with the wish to measure relative progress, we should distinguish between improvements from infinity to different finite values. This can be obtained by replacing the above quotient by (C_P(n) − C_Q(n))/(C_P(n) · C_Q(n)), which amounts to (1/C_Q(n)) − (1/C_P(n)) when C_P and C_Q are finite. This last expression is sensitive to differences in finite values of C_Q when C_P is infinite. The distance d₁,₂(n) from a defined value to an undefined one (note the directedness) is defined to be 0, as this is consistent with the idea that there is no progress in an increase in complexity, in particular in an increase to infinite complexity.

It is clear that d suffers from an indifference against increases in complexity. This is unavoidable as the nonsymmetry of d avoids the problem which occurs for the induced metric d*, namely the loss of information of which program choice (P or Q) is progress. The metric d* does not give any information, in the sense that from the value d*(P, Q) it is impossible to determine which program would be more efficient. For instance assume that the program Q is more efficient on all inputs than the program P. In that case d(P, Q) and d*(P,Q) have exactly the same value, but the last measure does not indicate which program is more efficient, while the first measure provides this information by the fact that d(Q, P) = 0. A second motivation for the use of a nonsymmetric distance is given below.
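For concreteness, the two clauses of the complexity distance can be sketched numerically (a truncated series; the helper names and the truncation bound N are ours, not the paper's):

```python
from math import inf

def progress(cp, cq):
    """Pointwise clause of the complexity distance: 0 when there is no
    progress (including 'progress' to an undefined, infinite-complexity
    value), else the relative progress 1/C_Q(n) - 1/C_P(n)."""
    if cp <= cq:                        # case 1: no progress
        return 0.0
    inv = lambda c: 0.0 if c == inf else 1.0 / c
    return inv(cq) - inv(cp)            # case 2: relative progress

def d(CP, CQ, N=60):
    """Truncated complexity distance d(P, Q) between complexity functions."""
    return sum(progress(CP(n), CQ(n)) / 2 ** n for n in range(N))

# Q is twice as fast as P on every input, so replacing P by Q is progress,
# while replacing Q by P is not -- the distance is genuinely nonsymmetric:
CP = lambda n: 2.0 * (n + 1)
CQ = lambda n: float(n + 1)
print(d(CP, CQ) > 0, d(CQ, CP) == 0.0)
```

The symmetrized d*(P, Q) = max{d(P, Q), d(Q, P)} loses exactly the directional information the text describes.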

Lemma 4.3 In general ([f], d) is not T₀, but it can be made T₀ by the identification of programs with the same outputs and complexity. The resulting quotient space [f]′ equipped with the induced distance d′, defined by d′([P], [Q]) = d(P, Q), is T₀. d′ is well-defined and is a quasi-pseudo-metric. The space ([f]′, d′) is in general not T₁ (equivalently, d′ is in general not a quasi-metric). □

Convention: the equivalence class notation for elements of the resulting quotient ([f]′, d′) will be dropped in what follows; in particular, elements of [f]′ will still be called "programs" belonging to [f]′. We will also use (with abuse of notation) ([f], d) for the quotient ([f]′, d′), that is, we assume the spaces to be quotiented.

5 S-Completability

Given 𝒫 ⊆ [f], define C_𝒫 to be the set of complexity functions corresponding to the representatives of the elements of 𝒫. As the space ([f], d) is obtained by taking the quotient which identifies programs which have the same outputs and complexity, there is a bijection between 𝒫 and C_𝒫. So it is clear that the complexity distance can be directly defined on C_𝒫, that is, the spaces ([f], d) and (C_𝒫, d) trivially are quasi-unimorphic. In what follows we will not distinguish between the two approaches. In fact we will make the following generalization and work on the general function space (0, ∞]^ℕ, containing the set of all possible complexity functions C_P (recall our restriction: C_P(n) ≠ 0). Note that we do not necessarily have that functions of this space actually are complexity functions of a program.

The following results are stated for the function-space approach, that is, we work with a space (X, d), where X ⊆ (0, ∞]^ℕ and where

d(f, g) = Σₙ (1/2ⁿ) · d_{f,g}(n), with d_{f,g}(n) = 1/g(n) − 1/f(n) if f(n) > g(n), and d_{f,g}(n) = 0 otherwise.

Note that d is still a quasi-pseudo-metric, and that its associated order is the pointwise order on the function space (0, ∞]^ℕ. This pointwise order associated to the complexity distance is essential in comparing programs with respect to complexity, as will become clear in the complexity analysis of mergesort presented below.

We can now give a second motivation for the use of the nonsymmetric distance. As both the pointwise order and the induced metric d* are definable from the nonsymmetric distance d, there is no need to introduce the metric and the order separately, that is the introduction of the nonsymmetric distance suffices.

Definition 5.1 A quasi-pseudo-metric space (X, d) is weightable iff there exists a function w : X → ℝ⁺ such that ∀x, y ∈ X : d(x, y) + w(x) = d(y, x) + w(y). The function w is called a "weighting function" and w(x) is called the "weight" of x; we say "d is weighted via w".

Weightable spaces were introduced by Matthews ([13]) in the context of the study of the semantics of data flow networks. Künzi and Vajner have continued their study ([8],[9]). We recall the following result by Künzi:

Proposition 5.2 (Künzi) The weightable quasi-pseudo-metric spaces are S-completable. □

Notation: given P, Q ∈ [f], Σ_{C_P(n) ∘ C_Q(n)} is the sum ranging over all n such that C_P(n) ∘ C_Q(n), where ∘ is one of the following orders: <, >, ≤, ≥.

Proposition 5.3 The complexity distance d on X ⊆ (0, ∞]^ℕ is weighted via the weighting function w defined by:

(∀f ∈ X) w(f) = Σₙ (1/2ⁿ) · (1/f(n)).

Proof. For all P, Q ∈ [f]:

d(P, Q) + w(P) = Σ_{C_P(n) > C_Q(n)} (1/2ⁿ)(1/C_Q(n) − 1/C_P(n)) + Σₙ (1/2ⁿ)(1/C_P(n))

= Σ_{C_P(n) > C_Q(n)} (1/2ⁿ)(1/C_Q(n)) + Σ_{C_P(n) ≤ C_Q(n)} (1/2ⁿ)(1/C_P(n))

= Σₙ (1/2ⁿ) max{1/C_P(n), 1/C_Q(n)}.

The last expression is symmetric in P and Q, so by the same computation it also equals w(Q) + d(Q, P). □

The following S-completability result is an immediate corollary of Proposition 5.2 and justifies the existence of the sequential S-completion (X̃, d̃) of the complexity spaces (X, d).

Corollary 5.4 For any X ⊆ (0, ∞]^ℕ, the space (X, d) is S-completable. □

6 Divide & Conquer Algorithms

"Divide & Conquer" algorithms solve a problem by recursively splitting it into subproblems, each of which is solved separately by the same algorithm, after which the results are combined into a solution for the original problem. The Divide & Conquer strategy is an important and widely applicable technique for designing efficient algorithms ([1]). The complexity C of a Divide & Conquer algorithm typically is the solution to a recurrence equation of the form C(1) = c and (∀n > 1) C(n) = a·C(n/b) + h(n), where a ≥ 1 represents the number of subproblems a problem is divided into, n/b represents the size of each subproblem, and h(n) represents the time it takes to combine the subproblems of a problem of size n into the solution. Divide & Conquer algorithms usually are assumed to be total, which is the assumption we make from here on. In particular we work on the space (0, ∞)^ℕ₀ rather than on (0, ∞]^ℕ₀. Note that we exclude 0 as an argument, since the recurrence equation has a base case determined by 1 rather than by 0 (this will be convenient in the presentation of the application of section 7, but is not an essential requirement).

Comment: Since we assume that the complexity on any input is never zero, we should require that c ≠ 0 in order to guarantee that (∀n ≥ 1) C(n) ≠ 0. Instead of requiring this condition (which will actually be violated in the application discussed under section 7) we impose the following natural condition: for each Divide & Conquer algorithm S we require that its complexity function satisfies C_S(1) = c. That is, we assume that each algorithm is such that it has the same complexity on the base case. Whatever recursive function the program actually computes, we can assume that the program has a built-in test for the base case and behaves the same on this input. We are only interested in differences in complexity caused by the recursion. (Because of our assumption on the shape of the recurrence relation, we only consider recursion with one base case.)

The assumption on the base case implies that the value c = 0 does not cause any problems. Indeed, by this assumption we have d₁,₂(1) = 0 (Definition 4.1, case 1) and thus division by 0 does not occur. So we continue to work with [0, ∞)^ℕ₀. Let [0, ∞)_c = {f | f ∈ [0, ∞)^ℕ₀ and f(1) = c}. We denote this set in what follows by C_c. Define for b ∈ ℕ, b > 1,

C_c|b = {f′ | f′ is the restriction of f ∈ C_c to arguments n = bᵏ, k ≥ 0}.

The following theorem establishes the fact that Divide & Conquer algorithms induce contraction maps on the complexity spaces. This opens up the way to applications of the Banach theorem. We give an example of such an application in Section 7 below.

Theorem 6.1 (Contraction Map Theorem) Let Φ_ℰ be the functional induced on C_c|b by the recurrence equation ℰ defined by C(1) = c and (∀n > 1) C(n) = a·C(n/b) + h(n), that is:

Φ_ℰ : C_c|b → C_c|b, where Φ_ℰ = λf.λn. if n = 1 then c else a·f(n/b) + h(n).

Then Φ_ℰ is a d-contraction map iff a > 1, in which case the contraction constant is 1/a.

Proof.

d(Φ_ℰ(f), Φ_ℰ(g)) = Σ_{n=bᵏ, k≥0; Φ_ℰ(f)(n) > Φ_ℰ(g)(n)} (1/2ⁿ) (1/Φ_ℰ(g)(n) − 1/Φ_ℰ(f)(n)).

Note that for k = 0, Φ_ℰ(f)(1) = Φ_ℰ(g)(1) = c, so:

d(Φ_ℰ(f), Φ_ℰ(g)) = Σ_{n=bᵏ, k≥1; f(n/b) > g(n/b)} (1/2ⁿ) (1/(a·g(n/b) + h(n)) − 1/(a·f(n/b) + h(n)))

= Σ_{n=bᵏ, k≥1; f(n/b) > g(n/b)} (1/2ⁿ) · a·(f(n/b) − g(n/b)) / ((a·g(n/b) + h(n)) · (a·f(n/b) + h(n)))

≤ Σ_{n=bᵏ, k≥1; f(n/b) > g(n/b)} (1/2ⁿ) · (1/a) · (f(n/b) − g(n/b)) / (f(n/b)·g(n/b))

= (1/a) · Σ_{m=bᵏ, k≥0; f(m) > g(m)} (1/2^{bm}) (1/g(m) − 1/f(m))

≤ (1/a) · d(f, g),

since 2^{bm} ≥ 2^m. □
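A small sketch of the contraction/fixed-point phenomenon (a toy instance with a = 2, b = 2, h(n) = n and c = 0, the shape of the mergesort recurrence; all names are ours): iterating Φ_ℰ from an arbitrary starting function reproduces the solution C(n) = n·log₂(n) on powers of 2.

```python
# Iterating the functional Phi (a = 2, b = 2, h(n) = n, c = 0) converges to
# its unique fixed point, the solution C(n) = n * log2(n) of the recurrence.
def phi(f):
    return lambda n: 0 if n == 1 else 2 * f(n // 2) + n

f = lambda n: 1   # arbitrary starting function
for _ in range(12):
    f = phi(f)

print(f(8), f(1024))  # 24 10240, i.e. n * log2(n) at n = 8 and n = 1024
```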

Note that a functional Φ_ℰ induced by a Divide & Conquer recurrence equation ℰ is monotone on C_c|b, that is, (∀f, g ∈ C_c|b) f ≤ g ⇒ Φ_ℰ(f) ≤ Φ_ℰ(g).

We conclude the section with an application of the Banach Theorem to a special kind of functionals: the "(complexity) improvers". The intuition is that an improver is a functional which corresponds to a syntactic transformation on programs and which satisfies the following property: the iterative applications of the transformation to a given program yield an improved program at each step of the iteration.

Definition 6.2 A functional Φ on C_c|b is an improver with respect to a function f ∈ C_c|b iff (∀n) Φⁿ⁺¹f ≤ Φⁿf.

Note that when Φ is monotone, to show that Φ is an improver with respect to a function f, it suffices to verify that Φf ≤ f.

The following proposition plays a crucial role in obtaining the application of section 7.

Proposition 6.3 A Divide & Conquer recurrence equation ℰ has a unique solution. If f is the solution to ℰ, and Φ_ℰ is an improver with respect to some function g, then f ≤ g.

Proof. (Sketch) Let ℰ be a Divide & Conquer recurrence equation. Note that ℰ is always solvable (cf. [1]). If f is a solution of ℰ, then we have that Φ_ℰ f = f and thus Φ̃_ℰ [(f)ₙ] = [(Φ_ℰ f)ₙ] = [(f)ₙ], that is, [(f)ₙ] is a fixed point of Φ̃_ℰ. By Theorem 6.1, Φ_ℰ is a contraction map on C_c|b, and by the Extension theorem Φ_ℰ extends uniquely to a contraction map Φ̃_ℰ on the completion. So by the Banach theorem Φ̃_ℰ has a unique fixed point and thus ℰ has a unique solution. By the Banach theorem we also have FIX(Φ̃_ℰ) = limₖ [(Φ_ℰᵏ g)ₙ], and thus, again by uniqueness of fixed points, FIX(Φ̃_ℰ) = [(f)ₙ].

Since Φ_ℰ is an improver with respect to g, we have ∀n ≥ 0: Φ_ℰⁿ⁺¹ g ≤ Φ_ℰⁿ g, and hence Φ_ℰⁿ g ≤ g, that is, d(Φ_ℰⁿ g, g) = 0 for every n. So limₙ d(Φ_ℰⁿ g, g) = 0, and thus d̃(FIX(Φ̃_ℰ), [(g)ₙ]) = 0. Hence d̃([(f)ₙ], [(g)ₙ]) = 0, or equivalently f ≤_d g. □

7 An Application: Mergesort

We will demonstrate the applicability of the theory to the complexity analysis of sorting algorithms for the specific complexity measure of average running time. All sorting algorithms S are assumed to be comparison based ([1], [7]). Comparison based programs are programs such that (ultimately) all computation steps carried out by S on any input list have to be based on a comparison between list elements. For this class of algorithms a lower bound on the average time is known: T̄(n) ∈ Ω(n log₂ n) ([1], [7]). We will present a novel proof (based on the Banach theorem) of the well known result that the (comparison based) sorting program mergesort has optimal asymptotic average time. We denote the sorting function (this is the function mapping each list to its sorted version) by s, and we will work on the set of all total programs computing the function s. This set is denoted by [s]′.

7.1 Introductory notions

Definition 7.1 Given a countable total order (A, ≤), a list from A is a finite sequence of pairwise distinct (!) elements from A. We use this restricted version of lists (that is, lists consisting of pairwise distinct elements) in order to simplify the presentation. Define Listsᴬ to be the set of all lists obtained from A. For any list L ∈ Listsᴬ, |L| denotes the length of the list L, and we use Listsₙᴬ for the set of lists of length n.

A list is sorted when its elements (from left to right) are in increasing order with respect to the ordering ≤ on A. ∼ denotes the equivalence relation on Listsᴬ which identifies lists up to order isomorphism. Listsᴬ/∼ and Listsₙᴬ/∼ are denoted by ℒᴬ and ℒₙᴬ respectively.

Note that the cardinality of ℒₙᴬ is n!.

In what follows we assume we have a fixed given total order (A, ≤) in mind and we will drop the superscript "A" in ℒᴬ and in ℒₙᴬ. This will simplify the notation without introducing ambiguities. Since we will always work with lists identified up to order isomorphism, we indicate the elements of ℒₙ and ℒ by L, L′, ..., that is (with abuse of notation) we don't indicate the equivalence classes. Given a list L ∈ ℒₙ, we write L = (L(1), ..., L(n)).

Definition 7.2 Given a list L in ℒₙ, where n ≥ 2: L₁ = (L(1), ..., L(⌊n/2⌋)) and L₂ = (L(⌊n/2⌋+1), ..., L(n)).

Definition 7.3 A sorting program is a program which takes lists as inputs and returns the sorted version of these lists. A comparison made by a sorting program S between two different elements of a list L, say L(i) and L(j), is a determination (during the computation of S(L)) of their relative order, of the form "L(i) < L(j)" or "L(j) < L(i)". The running time of a sorting program S is defined to be:

T_S(L) = the total number of comparisons made by S on input L during the computation of the sorted output S(L).

The average running time (assuming uniform distribution on inputs) is defined by:

T̄_S(n) = (Σ_{|L|=n} T_S(L)) / n!.

Note that this running time might be 0. For example, for a program S which checks whether a list has length |L| ≤ 1, and when this is the case returns L as output, we have T̄_S(1) = 0. We excluded 0-valued running times in order for the complexity distance not to result in a division by 0. However, as noted above, division only occurs through the second clause in the definition of the complexity distance (that is: "T_{P₁}(n) > T_{P₂}(n)", cf. section 4, Definition 4.1), so inputs with zero time can be allowed as long as they don't fall under this clause. This will be the case under the (harmless) assumption that all sorting algorithms start with a length-check on the input L, and in case |L| ≤ 1, return L. (Note that this assumption implies that all inputs L such that |L| ≤ 1 have zero time, and that, as we work with comparison based sorting algorithms, these are the only inputs with zero time.) So we continue to work with the function space C₀ in what follows. We give the usual definition of "asymptotic time".

Definition 7.4 (∀f, g ∈ C₀):

1) f ≤_O g iff (∃n₀)(∃c > 0)(∀n ≥ n₀) f(n) ≤ c·g(n). We also use the (more standard) notation "f ∈ O(g)" instead of "f ≤_O g".

2) f ≈_O g iff f ≤_O g and g ≤_O f. That is, ≈_O is the equivalence relation induced by the preorder ≤_O.

Definition 7.5 A program S has optimal asymptotic average time iff [T̄_S] is the minimum of the partial order (C₀/≈_O, ≤_O).

Definition 7.6 A merging program is a program taking two sorted lists as inputs and returning the sorted list consisting of the union of their elements as output. Given a merging program Merge, a mergesort program (denoted by M) is defined by the following pseudo-code:

M(L) = if |L| ≤ 1 then return L else return Merge(M L₁, M L₂).
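The pseudo-code above can be transcribed directly, instrumented to count comparisons as in Definition 7.3 (a sketch; the counter mechanics and names are ours):

```python
# Mergesort M with a comparison-counting Merge: cnt[0] ends up holding T_M(L),
# the total number of comparisons between list elements (Definition 7.3).
def merge(a, b, cnt):
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        cnt[0] += 1                    # one comparison between list elements
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def M(L, cnt):
    if len(L) <= 1:                    # the length check costs no comparison
        return L
    mid = len(L) // 2
    return merge(M(L[:mid], cnt), M(L[mid:], cnt), cnt)

cnt = [0]
print(M([3, 1, 4, 2], cnt), cnt[0])   # [1, 2, 3, 4] 5
```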

Definition 7.7 A merge pair is a pair of sorted lists.

MPairs(m, n) = {(L₁, L₂) | (L₁, L₂) is a merge pair and the lists L₁, L₂ are sublists of the unique sorted list of ℒ_{m+n}, of length m and n respectively}.

Remark: The cardinality of MPairs(m, n) is (m+n)!/(m!·n!).

Definition 7.8 T̄_Merge(m, n) = (Σ T_Merge(L₁, L₂)) · m!·n!/(m+n)!, where the sum ranges over MPairs(m, n).
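Definition 7.8 can be evaluated by brute force for small m and n: a merge pair in MPairs(m, n) corresponds to a choice of which m elements of the sorted list (1, ..., m+n) go to L₁ (a sketch with our own names, using the standard merge to supply T_Merge):

```python
from itertools import combinations

def merge_comparisons(a, b):
    """Comparisons used by the standard merge on the sorted lists a and b."""
    i = j = c = 0
    while i < len(a) and j < len(b):
        c += 1
        if a[i] <= b[j]:
            i += 1
        else:
            j += 1
    return c

def T_merge_avg(m, n):
    """Average of T_Merge(L1, L2) over all merge pairs in MPairs(m, n)."""
    universe = list(range(1, m + n + 1))
    pairs = [(list(s), [x for x in universe if x not in s])
             for s in combinations(universe, m)]
    return sum(merge_comparisons(a, b) for a, b in pairs) / len(pairs)

print(T_merge_avg(2, 2))  # 2.666..., i.e. 8/3
```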

7.2 Recursion

Recall that [s]′ is the set of all comparison based sorting programs. Assume Merge is a merging program.

Definition 7.9 M₂ : [s]′ → [s]′ is defined by: for any program S ∈ [s]′, M₂S(L) = [if |L| ≤ 1 then return L else return Merge(S L₁, S L₂)].

Note that with abuse of notation we do not indicate the dependence of M₂ on Merge.

Since the mergesort program will split each given list in two sublists, the identification up to order isomorphism will in general be broken as two lists which are distinct up to order isomorphism might have sublists with equivalent orders. We introduce the following notation to deal with this situation.

Definition 7.10 ∀i ∈ {1, 2}: Listᵢ = {Lᵢ | L ∈ ℒₙ} and ℒᵢ = Listᵢ/∼.

Lemma 7.11 For all n ≥ 2 and all L ∈ ℒₙ, consider [L₁] ∈ ℒ₁ and [L₂] ∈ ℒ₂. The cardinality of [L₁] is n!/⌊n/2⌋! and the cardinality of [L₂] is n!/(n − ⌊n/2⌋)!.

Proof. The equalities follow by an easy standard combinatorial argument. We give the proof for the cardinality of [L₁]. Given L ∈ ℒₙ, say L = (L(1), ..., L(⌊n/2⌋), L(⌊n/2⌋+1), ..., L(n)). Note that since L ∈ ℒₙ, there are exactly n choices for L(n). Once a choice is made for L(n), there are n − 1 choices for L(n − 1). Continuing this for each of the elements of L₂, one obtains that the number of possible choices for the elements of L₂ is exactly n!/⌊n/2⌋!, that is, each fixed ordering of L₁ can occur as a sublist of L in exactly n!/⌊n/2⌋! many ways, where L₂ ranges over the possible orderings of the second part of the list L. □

Corollary 7.12 Let n ≥ 2 and let L̄ᵢ be the sorted version of Lᵢ, where i ∈ {1, 2}. Then

Σ_{|L|=n} T_Merge(L̄₁, L̄₂) = Σ_{(L₁,L₂)} T_Merge(L₁, L₂) · ⌊n/2⌋! · (n − ⌊n/2⌋)!,

where (L₁, L₂) ranges over MPairs(⌊n/2⌋, n − ⌊n/2⌋).

Proof. This follows by an easy combinatorial argument. □
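Both counting arguments can be checked by exhaustive enumeration for a small $n$. The helper names below are ours, not the paper's:

```python
import itertools
import math
from collections import Counter

def order_type(lst):
    """Canonical representative of a list up to order isomorphism (its rank pattern)."""
    return tuple(sorted(range(len(lst)), key=lambda i: lst[i]))

n = 4
half = n // 2  # floor(n/2)
counts = Counter()
for perm in itertools.permutations(range(1, n + 1)):
    counts[order_type(perm[:half])] += 1

# Lemma 7.11: every class [L1] has cardinality n!/floor(n/2)!
expected = math.factorial(n) // math.factorial(half)
assert all(c == expected for c in counts.values())
```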

Proposition 7.13 $(\forall S \in [S]')(\forall n \ge 2)$

$$\overline{T}_{M_2S}(n) = \overline{T}_S(\lfloor\tfrac n2\rfloor) + \overline{T}_S(n - \lfloor\tfrac n2\rfloor) + \overline{T}_{Merge}(\lfloor\tfrac n2\rfloor,\, n - \lfloor\tfrac n2\rfloor).$$

Proof. $(\forall S \in [S]')(\forall n \ge 2)$

$$\overline{T}_{M_2S}(n) = \frac{\sum_{|L|=n} T_{M_2S}(L)}{n!} = \frac{\sum_{|L|=n}\bigl(T_S(L_1) + T_S(L_2) + T_{Merge}(L_1',L_2')\bigr)}{n!}.$$

By Lemma 7.11 and Corollary 7.12 we obtain:

$$\overline{T}_{M_2S}(n) = \frac{\sum_{|L_1|=\lfloor n/2\rfloor} T_S(L_1)\frac{n!}{\lfloor n/2\rfloor!}}{n!} + \frac{\sum_{|L_2|=n-\lfloor n/2\rfloor} T_S(L_2)\frac{n!}{(n-\lfloor n/2\rfloor)!}}{n!} + \frac{\sum T_{Merge}(L_1',L_2')\,\lfloor\frac n2\rfloor!\,(n-\lfloor\frac n2\rfloor)!}{n!}$$

$$= \frac{\sum_{|L_1|=\lfloor n/2\rfloor} T_S(L_1)}{\lfloor\frac n2\rfloor!} + \frac{\sum_{|L_2|=n-\lfloor n/2\rfloor} T_S(L_2)}{(n-\lfloor\frac n2\rfloor)!} + \frac{\sum T_{Merge}(L_1',L_2')}{\binom{n}{\lfloor n/2\rfloor}}$$

$$= \overline{T}_S(\lfloor\tfrac n2\rfloor) + \overline{T}_S(n - \lfloor\tfrac n2\rfloor) + \overline{T}_{Merge}(\lfloor\tfrac n2\rfloor,\, n - \lfloor\tfrac n2\rfloor).\qquad\square$$

Recall the following well known result:

Lemma 7.14 If $f$ is a monotone increasing function, then

$$(\forall n \in \{2^k \mid k \ge 0\})\ f(n) \le n\log_2(n) \;\Rightarrow\; (\exists c > 0)(\forall n > 1)\ f(n) \le c\,n\log_2(n).$$

As we work with comparison based sorting algorithms we can without loss of generality make the assumption that the complexity functions of these programs are monotone increasing. So in what follows we will assume the list lengths to be a power of 2. In particular Proposition 7.13 reduces to:

$$(\forall n \ge 2)(\forall S \in [S]')\quad \overline{T}_{M_2S}(n) = 2\,\overline{T}_S(\tfrac n2) + \overline{T}_{Merge}(\tfrac n2, \tfrac n2).$$

In order to obtain the application, we need to make a careful analysis of the average time required to merge, that is, we will determine the term $\overline{T}_{Merge}(\frac n2,\frac n2)$ of the above recurrence equation.

7.3 Optimal Merging

The study of the minimum average number of comparisons necessary to merge "m things with n" is an open problem stated in [7] (cf. the research problem listed as exercise 22, section 5.3.3). In [7] the worst-case analysis of merging is made via the function M(m, n) representing the minimum number of comparisons sufficient to merge m things with n. A lower bound and an upper bound for M(m, n) are obtained, and in the special case where m = n, a precise value for M(m,m) is obtained: M(m,m) = 2m − 1. The proof shows that no (comparison based) algorithm can have better worst-case time, and secondly notes that the standard merge algorithm (cf. [1],[7]) has worst-case time T(m,m) = 2m − 1. Knuth notes that the right hand side expression of this equality corresponds to the upper bound obtained for M(m, n) when m = n (cf. [7]), and argues that the lower bound is therefore "at fault". We

show that the lower bound actually represents the minimum average number of comparisons necessary to merge m things with n.

We solve the problem of the minimum average number of comparisons required to merge, by determining a lower bound and by solving the case m = n.

Proposition 7.15 (Lower bound on average mergetime) For any comparison based merge algorithm $M$, $\overline{T}_M(m,n) \ge \lceil\log_2\binom{m+n}{m}\rceil$.

Proof. Follows by a standard comparison tree analysis. □
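The bound can be sanity-checked by enumerating all merge pairs for small m and n and averaging the comparison count of the standard merge. This brute-force harness is our own illustration, not part of the paper:

```python
import itertools
import math

def merge_comparisons(l1, l2):
    """Number of comparisons the standard merge makes on two sorted lists."""
    i = j = c = 0
    while i < len(l1) and j < len(l2):
        c += 1
        if l1[i] <= l2[j]:
            i += 1
        else:
            j += 1
    return c

def avg_merge_time(m, n):
    """Average of T_Merge over all merge pairs in MPairs(m, n)."""
    universe = range(1, m + n + 1)
    total = count = 0
    for l1 in itertools.combinations(universe, m):
        l2 = [x for x in universe if x not in l1]
        total += merge_comparisons(list(l1), l2)
        count += 1
    assert count == math.comb(m + n, m)  # cardinality of MPairs(m, n)
    return total / count

# Proposition 7.15: the average merge time is at least log2(C(m+n, m))
for m, n in [(2, 3), (3, 3), (4, 4)]:
    assert avg_merge_time(m, n) >= math.log2(math.comb(m + n, m))
```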

Corollary 7.16 For any comparison based merge algorithm $M$, $\overline{T}_M(n,n) \ge n$.

Proof. By Proposition 7.15, $\overline{T}_M(n,n) \ge \lceil\log_2\binom{2n}{n}\rceil$. Note that

$$\Bigl\lceil\log_2\binom{2n}{n}\Bigr\rceil = \Bigl\lceil\log_2\frac{(2n)!}{n!\,n!}\Bigr\rceil = \Bigl\lceil\log_2\frac{2n(2n-1)\cdots(n+1)}{n!}\Bigr\rceil = \Bigl\lceil\log_2\frac{(2n)(2n-1)\cdots(2n-(n-1))}{n(n-1)\cdots 1}\Bigr\rceil.$$

Since $(\forall y \ge 0)\ 2x - y \ge 2(x - y)$, we obtain

$$\overline{T}_M(n,n) \ge \Bigl\lceil\log_2\frac{(2n)(2(n-1))\cdots(2(n-(n-1)))}{n(n-1)\cdots 1}\Bigr\rceil = \lceil\log_2 2^n\rceil = n.\qquad\square$$

In the worst-case analysis performed in [7] the upper bound is shown to be "exact" in the sense that M(m,m) = 2m − 1. We obtain a similar result for the lower bound for average merging time via the introduction of an oracle O.

Definition 7.17 A consecutive sublist of a list $L = (L(1),\dots,L(n))$ is a sublist $(L(i_1),\dots,L(i_k))$ of $L$ such that for each $j : 1 \dots k-1$: $L(i_{j+1})$ is the successor of $L(i_j)$ with respect to the total order on list elements.

There is no guarantee that a merge algorithm exists with optimal average time $\overline{T}(n,n) = n$. However, an "ideal" merge program $I$ can be defined, using an oracle $O$ which for every merge pair $(L_1,L_2)$ yields the positions of the heads of the consecutive sublists in both lists without cost in comparison time.

Recall that the standard merge algorithm is defined by the following pseudo-code (where $L_1$ and $L_2$ represent sorted lists, head(L) represents the head of the list L, and tail(L) is the list obtained by removing the head from L):

Merge($L_1$,$L_2$) =

[Let $L' = \emptyset$

While $L_1 \ne \emptyset$ and $L_2 \ne \emptyset$ repeat:

If head($L_1$) $\le$ head($L_2$)

then append($L'$, head($L_1$)) and let $L_1$ = tail($L_1$) else append($L'$, head($L_2$)) and let $L_2$ = tail($L_2$)

If $L_1 = \emptyset$

then append($L'$, $L_2$) else append($L'$, $L_1$)]

Given the oracle $O$, we define the following operations. For every sorted list $L$, let $H(L)$ denote the list consisting of the heads of the consecutive sublists of $L$ in left to right order, obtained via the oracle $O$. Note that this list is still sorted. For any element $a$ of $H(L)$, $\hat a$ denotes the consecutive sublist in $L$ starting with $a$. Note that this operation is well defined by our assumption that list elements are different from one another. We extend the operation to lists $L = (L(1),\dots,L(n))$ by defining $\hat L$ to be the concatenation of the lists $\widehat{L(1)},\dots,\widehat{L(n)}$ (where we assume that the list elements of $L$ are heads of consecutive sublists of a given number of disjoint sorted lists).

We define the ideal merge program $I$ via the following pseudo-code (where Merge is the standard merge algorithm):

$I(L_1,L_2) =$

[Let $L' = Merge(H(L_1), H(L_2))$, Return $\widehat{L'}$]

The ideal mergesort algorithm defined via the ideal merge algorithm $I$ is denoted by $\mathcal{M}$.
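Because the list elements of a merge pair are drawn from a totally ordered set of integers, the oracle O can be simulated directly: a head is any element whose predecessor (in the global order) is absent from the same list. The following rendering of I is our own illustration under that assumption, not the paper's code:

```python
def heads(lst):
    """H(L): heads of the consecutive sublists of the sorted integer list lst."""
    return [x for i, x in enumerate(lst) if i == 0 or lst[i - 1] + 1 != x]

def run_from(head, lst):
    """The 'hat' of a head: the consecutive sublist of lst starting at head."""
    out = [head]
    for x in lst[lst.index(head) + 1:]:
        if x != out[-1] + 1:
            break
        out.append(x)
    return out

def ideal_merge(l1, l2):
    """I(L1, L2): merge only the run heads, then expand every head to its run."""
    h1, h2 = heads(l1), heads(l2)
    picked, comparisons = [], 0
    i = j = 0
    while i < len(h1) and j < len(h2):
        comparisons += 1
        if h1[i] <= h2[j]:
            picked.append((h1[i], l1)); i += 1
        else:
            picked.append((h2[j], l2)); j += 1
    picked += [(h, l1) for h in h1[i:]] + [(h, l2) for h in h2[j:]]
    out = []
    for h, src in picked:
        out.extend(run_from(h, src))  # the hat-expansion of the merged heads
    return out, comparisons

# e.g. the pair (1,3,6) and (2,4,5) costs 4 comparisons
result, cost = ideal_merge([1, 3, 6], [2, 4, 5])
assert result == [1, 2, 3, 4, 5, 6] and cost == 4
```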

The following theorem gives the exact value of the average time the ideal merge program spends on merging lists of identical length.

Theorem 7.18 $(\forall n \ge 1)\ \overline{T}_I(n,n) = n$.

Before presenting the proof, we state the following corollary. Given the oracle $O$, define

$\overline{M}_O(m,n)$ = the minimum average time sufficient to merge m things with n.

The following corollary gives the exact value of $\overline{M}_O(n,n)$.

Corollary 7.19 $\overline{M}_O(n,n) = n$.

Proof. Immediate from Corollary 7.16 and Theorem 7.18. □

In order to prove Theorem 7.18, we introduce an encoding of merge pairs, and a lemma dealing with the number of "alternations" of merge pair encodings.

Recall that we assume that lists have a length $n$ which is a power of 2. Note that when $n = 2k$, $\overline{T}_I(k,k) = \frac{\sum T_I(L_1',L_2')}{\binom{2k}{k}}$, where the sum ranges over $MPairs(k,k)$.

The number of non-order-isomorphic merge pairs obtainable from elements of the sorted list of length $n$, say $L'$, is $\binom{n}{n/2}$. This number corresponds to the possible number of ways a sorted list $L_1'$ can be obtained from the sorted list $L' = (1,\dots,n)$.

In order to simplify the analysis, we encode the merge pairs via a "binary encoding", which associates a binary list with each given merge pair. Each choice of $L_1'$ from $L'$ can be encoded via a binary list $l$ of length $n$, where an occurrence of 1 indicates a choice of an element of $L_1'$, and an occurrence of 0 indicates a choice of an element of $L_2'$.

For example, consider the sorted list of length 6: $L' = (1,2,3,4,5,6)$. The pair consisting of $L_1' = (1,3,6)$ and $L_2' = (2,4,5)$ is encoded via the binary list $l = (1,0,1,0,0,1)$.

Notation: $l_{1,2}$ denotes the binary encoding of the pair $(L_1',L_2')$. An alternation in a binary list $l$ is a change from 1 to 0 or from 0 to 1. Given a binary list $l$, let $A(l)$ denote the number of alternations in $l$. For example, the binary encoding $l_{1,2}$ of the pair $(L_1',L_2')$ given above has 4 alternations. Note:

$$T_I(L_1',L_2') = A(l_{1,2}).$$
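The encoding and the alternation count are straightforward to make executable (the helper names are ours):

```python
def encode(l1, l2):
    """Binary encoding l_{1,2}: 1 marks an element of L1', 0 an element of L2'."""
    members = set(l1)
    return [1 if x in members else 0 for x in sorted(l1 + l2)]

def alternations(bits):
    """A(l): number of changes 1->0 or 0->1 in the binary list."""
    return sum(1 for a, b in zip(bits, bits[1:]) if a != b)

# the example pair above: encoding (1,0,1,0,0,1) with 4 alternations
l = encode([1, 3, 6], [2, 4, 5])
assert l == [1, 0, 1, 0, 0, 1]
assert alternations(l) == 4
```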

In order to prove the theorem, we need the following lemmas.

Lemma 7.20 Let $k = |L_1'| = |L_2'| = \frac n2$. Then

$$\sum_{(L_1',L_2')} T_I(L_1',L_2') = 2\Bigl[\sum_{i=0}^{k-1}\binom{k-1}{i}^2(2i+1) + \sum_{i=1}^{k-1}\binom{k-1}{i}\binom{k-1}{i-1}\,2i\Bigr].$$

Proof. From the note made above, it suffices to show:

$$\sum_{(L_1',L_2')} A(l_{1,2}) = 2\Bigl[\sum_{i=0}^{k-1}\binom{k-1}{i}^2(2i+1) + \sum_{i=1}^{k-1}\binom{k-1}{i}\binom{k-1}{i-1}\,2i\Bigr].$$

To count the number of alternations it suffices to consider only the binary lists starting with 1, and to multiply the result by 2 (remark that any binary list $l$ has the same number of alternations as its "negative" version, that is, the version obtained by replacing each element $i$ of $l$ by $1-i$).

Given $i \in \{1,0\}$: a binary list $l$ of which all the elements are equal to $i$ is referred to as an "$i$-list". Consider $l_1$, the 1-list of length $k$. Let $n = 2k$.

The possible alternations in a binary list $l$ of length $n$, which has $k$ occurrences of the element 1 and $k$ occurrences of the element 0, are determined by the possible insertions of 0-lists among the elements of $l_1$. There are $k-1$ possibilities for these insertions, and one extra possibility corresponding to the appending of a 0-list to the list $l_1$. Since insertions of 0-lists each contribute 2 alternations and an appended 0-list only contributes one alternation, we consider two cases:

1) $A(l_{1,2})$ is odd (that is, $l$ ends in a 0-list): We have $\binom{k-1}{i}$ ways to obtain $i$ insertions, where $i : 0,\dots,k-1$. The insertion of $i$ 0-lists (in the $i$ places chosen between the elements of the 1-list $l_1$) and the appending of the 0-list to $l_1$ can happen in different ways, depending on how the 0-list is split into the $i+1$ sublists used in this process. There are $i$ divisions to be chosen in order to split a 0-list into $i+1$ sublists, that is, there are $\binom{k-1}{i}$ ways to split the 0-list $l_2$ into $i+1$ sublists.

So when $A(l_{1,2})$ is odd:

$$\sum A(l_{1,2}) = \sum_{i=0}^{k-1}\binom{k-1}{i}\binom{k-1}{i}(2i+1) = \sum_{i=0}^{k-1}\binom{k-1}{i}^2(2i+1).$$

2) $A(l_{1,2})$ is even (that is, $l$ ends in a 1-list):

$$\sum A(l_{1,2}) = \sum_{i=1}^{k-1}\binom{k-1}{i}\binom{k-1}{i-1}\,2i.$$

The result is obtained by a similar argument. Note that the sum starts from one on, since, as we have no 0-list appended to $l_1$ in this case, we must have at least one insertion of a 0-list in $l_1$. Note that there is a spot to insert this 0-list, as the length of $l_1$ must be at least 2.

Recall that we need to multiply the result by 2 in order to account for the negative versions of the binary lists, that is, the lists starting with 0. $\square$
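The closed form of Lemma 7.20 can be verified by brute force over all binary lists of length 2k with exactly k ones. This check is ours, not part of the paper:

```python
import itertools
import math

def alternations(bits):
    """A(l): number of changes 1->0 or 0->1 in the binary list."""
    return sum(1 for a, b in zip(bits, bits[1:]) if a != b)

def lemma_7_20_rhs(k):
    """Right hand side of Lemma 7.20: odd and even alternation counts, doubled."""
    odd = sum(math.comb(k - 1, i) ** 2 * (2 * i + 1) for i in range(k))
    even = sum(math.comb(k - 1, i) * math.comb(k - 1, i - 1) * 2 * i
               for i in range(1, k))
    return 2 * (odd + even)

for k in range(1, 7):
    brute = sum(alternations(bits)
                for bits in itertools.product([0, 1], repeat=2 * k)
                if sum(bits) == k)
    assert brute == lemma_7_20_rhs(k)
```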

Lemma 7.21 For $n = 2k$, $k \ge 1$,

$$\overline{T}_I(k,k) = \frac{2k\Bigl[\sum_{i=0}^{k-1}\binom{k-1}{i}^2 + \sum_{i=1}^{k-1}\binom{k-1}{i}\binom{k-1}{i-1}\Bigr]}{\binom{2k}{k}}.$$

Proof. Note that

$$2\sum_{i=1}^{k-1}\binom{k-1}{i}\binom{k-1}{i-1}\,2i = \Bigl[\binom{k-1}{1}\binom{k-1}{0}2 + \dots + \binom{k-1}{k-1}\binom{k-1}{k-2}(2k-2)\Bigr] + \Bigl[\binom{k-1}{k-1}\binom{k-1}{k-2}(2k-2) + \dots + \binom{k-1}{1}\binom{k-1}{0}2\Bigr].$$

Adding up the two sums term by term (in left to right order), and using the symmetry $\binom{k-1}{i} = \binom{k-1}{(k-1)-i}$, each pair of terms contributes a factor $2k$, so

$$\sum_{i=1}^{k-1}\binom{k-1}{i}\binom{k-1}{i-1}\,2i = k\sum_{i=1}^{k-1}\binom{k-1}{i}\binom{k-1}{i-1}.$$

Similarly,

$$\sum_{i=0}^{k-1}\binom{k-1}{i}^2(2i+1) = k\sum_{i=0}^{k-1}\binom{k-1}{i}^2,$$

since pairing the term in $i$ with the term in $(k-1)-i$ yields the factor $(2i+1) + (2((k-1)-i)+1) = 2k$.

So we have

$$\sum_{(L_1',L_2')} T_I(L_1',L_2') = 2k\Bigl[\sum_{i=0}^{k-1}\binom{k-1}{i}^2 + \sum_{i=1}^{k-1}\binom{k-1}{i}\binom{k-1}{i-1}\Bigr],$$

that is, for $n = 2k$:

$$\overline{T}_I(k,k) = \frac{2k\Bigl[\sum_{i=0}^{k-1}\binom{k-1}{i}^2 + \sum_{i=1}^{k-1}\binom{k-1}{i}\binom{k-1}{i-1}\Bigr]}{\binom{2k}{k}}.\qquad\square$$

Lemma 7.22 For $k, l$ such that $k \le 2l$,

$$\binom{2l}{k} = \sum_{i=\max\{0,\,k-l\}}^{\min\{k,\,l\}}\binom{l}{i}\binom{l}{k-i}.$$

Proof. We distinguish two cases: $k \le l$ and $k > l$. The result follows for the first case by the remark that the number of ways to choose $k$ objects among $2l$ given objects is the sum, over $i$, of the number of ways to choose $i$ objects among the first $l$, multiplied by the number of ways to choose the remaining $k-i$ objects among the $l$ remaining given objects.

The case where $l < k \le 2l$ follows by a similar argument, noting that one is forced to pick at least $k-l$ objects among the first $l$. $\square$

Lemma 7.23 $(\forall k > 1)$

$$\binom{2k-2}{k-2} + 2\binom{2k-2}{k-1} + \binom{2k-2}{k} = \binom{2k}{k}.$$

Proof. Since $k > 1$ we have $2k > 2$, so there are at least 2 elements to choose from. Note that $k$ elements can be chosen among $2k$ given elements by

1) selecting two elements among the last two of the $2k$ given ones, and by picking the $k-2$ remaining elements among the first $2k-2$ given elements: $\binom{2k-2}{k-2} = \binom{2k-2}{k}$ possibilities;

2) selecting one element in one of the last two positions among the $2k$ given ones, and the $k-1$ remaining ones among the first $2k-2$ given elements: $2\binom{2k-2}{k-1}$ possibilities;

3) selecting all $k$ elements among the first $2k-2$ given elements: $\binom{2k-2}{k}$ possibilities. $\square$
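The identity is two applications of Pascal's rule and is easily machine-checked (Python's math.comb returns 0 when the lower index exceeds the upper one; we start at k = 2 to avoid the negative index k − 2 = −1):

```python
import math

for k in range(2, 30):
    lhs = (math.comb(2 * k - 2, k - 2)
           + 2 * math.comb(2 * k - 2, k - 1)
           + math.comb(2 * k - 2, k))
    assert lhs == math.comb(2 * k, k)
    # the symmetry used in case 1) of the proof
    assert math.comb(2 * k - 2, k - 2) == math.comb(2 * k - 2, k)
```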

Finally we are ready to show the theorem, that is: $(\forall k \ge 1)\ \overline{T}_I(k,k) = k$, where $k = \frac n2$.

Proof of Theorem 7.18. By Lemma 7.22 we obtain

(1) $\sum_{i=0}^{k-1}\binom{k-1}{i}^2 = \sum_{i=0}^{k-1}\binom{k-1}{i}\binom{k-1}{(k-1)-i} = \binom{2k-2}{k-1}$, and

(2) $\sum_{i=1}^{k-1}\binom{k-1}{i}\binom{k-1}{i-1} = \sum_{i=1}^{k-1}\binom{k-1}{i}\binom{k-1}{k-i} = \binom{2k-2}{k}$,

so by Lemma 7.21

$$\overline{T}_I(k,k) = \frac{2k\Bigl[\binom{2k-2}{k-1} + \binom{2k-2}{k}\Bigr]}{\binom{2k}{k}},$$

and hence

$$\overline{T}_I(k,k) = k \iff 2\Bigl[\binom{2k-2}{k-1} + \binom{2k-2}{k}\Bigr] = \binom{2k}{k}.$$

The last equality holds by Lemma 7.23 (together with $\binom{2k-2}{k-2} = \binom{2k-2}{k}$), and thus $\overline{T}_I(k,k) = k$. $\square$
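The chain of equalities in the proof reduces to the identity 2k[C(2k−2, k−1) + C(2k−2, k)] = k·C(2k, k), which can be confirmed numerically (our check, not the paper's):

```python
import math

for k in range(1, 40):
    numerator = 2 * k * (math.comb(2 * k - 2, k - 1) + math.comb(2 * k - 2, k))
    # Theorem 7.18: the average ideal merge time is exactly k
    assert numerator == k * math.comb(2 * k, k)
```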

7.4 Optimal asymptotic average time of mergesort

Our goal is to show an optimization result for mergesort programs. For these programs the (average) time is monotone in the (average) time of the merge algorithm on which they are based (Proposition 7.13). Since linear time merge programs exist (for example the standard merge program), we need only consider linear time merge programs in the optimization analysis. The following lemma provides a justification for the fact that in this analysis the ideal merge program is representative for the class of the linear time merge programs.

Lemma 7.24 $O(\overline{T}_M) = O(\overline{T}_{\mathcal M})$, for any mergesort program $M$ based on a merge program "Merge" such that $\overline{T}_{Merge} \in O(n)$.

Proof. By Corollary 7.16 we know that for any merge program "Merge": $\overline{T}_{Merge}(n,n) \ge n$. So since $\overline{T}_I(n,n) = n$, we have $\overline{T}_I \le \overline{T}_{Merge}$ and thus $\overline{T}_{\mathcal M} \le \overline{T}_M$.

Conversely, since $\overline{T}_{Merge} \in O(n)$, $(\exists c > 0)(\forall n \ge 1)\ \overline{T}_{Merge}(n) \le cn$. So $(\forall n \ge 2)\ \overline{T}_M(n) = 2\overline{T}_M(\frac n2) + \overline{T}_{Merge}(\frac n2, \frac n2) \le 2\overline{T}_M(\frac n2) + c\frac n2$. We also have $(\forall n \ge 2)\ \overline{T}_{\mathcal M}(n) = 2\overline{T}_{\mathcal M}(\frac n2) + \frac n2$, and thus $(\forall n)\ \overline{T}_M(n) \le c\,\overline{T}_{\mathcal M}(n)$ (easy proof by induction). $\square$

Note that by Theorem 7.18, we obtain that the equation given under Proposition 7.13 reduces to

$$(\forall n \ge 2)(\forall S \in [S]')\quad \overline{T}_{M_2S}(n) = 2\,\overline{T}_S(\tfrac n2) + \tfrac n2.$$

The functional $\Phi_2$ corresponding to the recurrence equation which determines $\overline{T}_{\mathcal M}$ therefore is defined by:

$$\Phi_2 : \mathcal{C}_0|_b \to \mathcal{C}_0|_b,\ \text{where}\ \Phi_2 = \lambda f.\lambda n.\ \text{if } n = 1 \text{ then } 0 \text{ else } 2f(\tfrac n2) + \tfrac n2.$$

Lemma 7.25 $\Phi_2$ is an improver with respect to the function $g(n) = c'n\log_2(n)$ $\iff c' \ge \frac12$.

Proof. Since $\Phi_2$ is monotone on $\mathcal{C}_0$, it suffices to show that $\Phi_2 g \le g \iff c' \ge \frac12$. Note that when $n = 1$, we have $(\Phi_2 g)(n) = 0 = g(n)$, and when $n > 1$, $(\Phi_2 g)(n) = 2(c'\frac n2\log_2\frac n2) + \frac n2$, so

$$(\Phi_2 g)(n) \le g(n) \iff 2\bigl(c'\tfrac n2\log_2\tfrac n2\bigr) + \tfrac n2 \le c'n\log_2(n)$$

$$\iff c'n(\log_2(n) - 1) + \tfrac n2 \le c'n\log_2 n \iff c'n - \tfrac n2 \ge 0 \iff c' \ge \tfrac12.\qquad\square$$

So by Proposition 6.3 and Lemma 7.25 we have $\overline{T}_{\mathcal M} \le \frac12 n\log_2 n$, or $\mathcal M \in O(n\log_2 n)$, and thus by Lemma 7.24 we obtain that every mergesort program based on a linear average time merge algorithm has optimal asymptotic average time.
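As an empirical illustration of this optimization claim, one can average the comparison count of a standard (non-ideal, linear-merge) mergesort over all permutations of small power-of-2 lengths and check that it stays below n·log₂(n). The harness is ours, and the constant c = 1 is merely what these small sizes exhibit, not a proved bound:

```python
import itertools
import math

def mergesort_comparisons(lst):
    """Return (sorted list, number of comparisons) for standard mergesort."""
    if len(lst) <= 1:
        return list(lst), 0
    mid = len(lst) // 2
    l1, c1 = mergesort_comparisons(lst[:mid])
    l2, c2 = mergesort_comparisons(lst[mid:])
    out, i, j, c = [], 0, 0, 0
    while i < len(l1) and j < len(l2):
        c += 1
        if l1[i] <= l2[j]:
            out.append(l1[i]); i += 1
        else:
            out.append(l2[j]); j += 1
    out += l1[i:] + l2[j:]
    return out, c1 + c2 + c

for n in (2, 4, 8):
    total = sum(mergesort_comparisons(list(p))[1]
                for p in itertools.permutations(range(n)))
    average = total / math.factorial(n)
    assert average <= n * math.log2(n)
```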

8 Conclusion

A complexity distance on programs has been defined, providing a new means to perform complexity analysis. Its properties guarantee that the spaces equipped with this distance, that is the complexity (distance) spaces, have a sequential S-completion. This implies that these spaces can be studied within the topological framework of Denotational Semantics (suggesting the possibility that this framework might allow for a future combination of Denotational Semantics and Complexity Analysis into an intensional semantics). A version of the Banach theorem has been shown for this particular sequential S-completion. The Divide & Conquer algorithms have been shown to induce contraction

maps on this completion. The applicability of the theory has been illustrated by the complexity analysis of a particular Divide & Conquer algorithm. The distance thus has interesting properties both from a theoretical and a practical point of view.

References

[1] A. V. Aho, J. Hopcroft, J. Ullman, Data structures and algorithms, 1987, Addison-Wesley.

[2] B. Bjerner, Time complexity of programs in type theory, 1989, University of Goteborg.

[3] M. Davis, E. Weyuker, Computability, complexity and languages, 1983, N.Y. Academic Press.

[4] A. Di Concilio, Spazi quasimetrici e topologie ad essi associate, Accademia di Scienze Fisiche e Matematiche, Lettere ed Arti in Napoli, Serie 4 - Vol. XXXVIII, 1971.

[5] J. Dugundji, Topology, 1966, Allyn and Bacon, Inc., Boston.

[6] P. Fletcher, W. Lindgren, Quasi-uniform spaces, 1982, Marcel Dekker, Inc., NY.

[7] D. Knuth, The art of computer programming vol.3, 1973, Addison-Wesley.

[8] H. P. Kunzi, Nonsymmetric Topology, Proceedings Szekszard Conference, 1993.

[9] H. P. Kunzi, Complete quasi-pseudo-metric spaces, Acta Math. Hung. 59 ,1992.

[10] H. P. Kunzi, V. Vajner, Weighted quasi-metrics, Proc. Summer Conf. Queens College, Gen. Top. Appl., 1993, Proc. NY Acad. Sci.

[11] Ming Li, P. Vitanyi, An introduction to Kolmogorov Complexity and its applications, 1993, Springer Verlag.

[12] L. Nachbin, Topology and order, New York Mathematical Studies (vol. 4), 1965, Princeton, N.J.

[13] S.G. Matthews, Partial metric spaces, research report RR212, 1992, University of Warwick.

[14] S.G. Matthews, The topology of partial metric spaces, research report RR222, 1992, University of Warwick.

[15] M. Smyth, Completeness of quasi-uniform and syntopological spaces, manuscript, Imperial College.

[16] M. Smyth, Quasi-uniformities: Reconciling domains with metric spaces, LNCS 298, 1987, Springer Verlag.

[17] M. Smyth, Totally bounded spaces and compact ordered spaces as domains of computation, Topology and Category Theory in Computer Science, p. 207-229, Oxford, 1991, Oxford University Press.

[18] P. Sünderhauf, The Smyth completion of a quasi-uniform space, preprint 1427, 1991, Technische Hochschule Darmstadt.

[19] P. Sünderhauf, Quasi-uniform completeness in terms of Cauchy nets, preprint 1529, 1992, Technische Hochschule Darmstadt.