Lauri Carlson
University of Helsinki
Department of General Linguistics
Wading through the vast literature on event structure, tense, mood, aspect, and diathesis,[1] the feeling one gets is that many of the key observations in the field have been not only made but repeated and reinvented many times over.[2]
Philosophers of action since Aristotle have, largely independently of linguists, been interested in the nature of action, events and causes (Bennett 1988). Logicians have sought notations and principles to regiment and explain tense, aspect, modality and action (van Benthem 1982, 1984, 1985, 1986). A rich variety of action and process calculi has more recently arisen in computer science (Petri 1962, Pratt 1978, 1992, 1994b, Milner 1993, 1995).
In linguistics, typological surveys have given rise to wide-coverage theories that aim to describe and explain the distribution and development of tense, aspect, mood and diathesis systems across languages (Dahl 1985, 1998, Bybee et al. 1994, Lapolla/Van Valin 1997). Literary theory has studied the use of tense and aspect to create narrative discourse (Benveniste 1966, Genette 1980 [1972], Fludernik 1993).
If anything is missing, it is ways to bring together different strands of research. The main goal of this work is to develop notation for that purpose. I would like it to show that many different approaches to tense, mood, aspect and diathesis (TMAD) are not only compatible but comparable in a common calculus. My ambition in this work is thus to take steps toward an axiomatisation of the wealth of data and generalisations accrued in the literature.
The only way to enable cumulative progress is to recognise earlier results and indicate where (if anywhere) new proposals differ from them. I try to mention previous work that I am aware of to further this aim, not to do full justice to everyone. This is not a survey or a history of the field, but an attempt at a unification of its results. Nor am I out to show anyone wrong, or to claim some particular perspective (like mine) on TMAD superior to another. Like Aristotle, I would like to save the appearances (σῴζειν τὰ φαινόμενα) of received views. I am happy to repeat what others have said earlier, in order to fit it into the notation. I would like to see TMAD theory converge (as I think it has been doing lately) instead of just going round in circles (which it has always been doing).
Much of the traditional literature on tense and aspect is informal, the more so the further one moves from words and phrases toward texts: there is a lot of imagery, examples and intuitive terms. Examples, impressions and imagery are very much to the point, as data or explananda. Impressions fall out as corollaries from the interaction of simple structures with complex contexts.
A striking observation that emerged during the work reported here is that intuitions, metaphors and images typically associated with particular TMAD forms can be quite as predictable and universal as the truth conditional constraints governing them. Connotations repeat themselves with awesome regularity from one language or form to another, where the calculus shows the denotation the same or similar enough. Many observations originally made about one language (English) reflect logical relations of compatibility and transpose surprisingly well to unrelated languages with similar constructions. This latter observation is confirmed by the survey in Johanson (1998).
But metaphors, when formalised, also qualify as explanantia (witness cognitive linguistics). I too shall try to operationalise central metaphors in the field, such as figure-ground (Wallace 1982), viewpoint (DeLancey 1982), and perspective, by giving them formal content. Giving courses of events a topology explicates the figure-ground metaphor. Logical notions of index (global variable) and abstraction (the abstraction operator) explicate viewpoint.
It is possible and advantageous to allow many interrelated perspectives at once, given well-defined exchange relationships between them (van Benthem 1986, 1996). Many phenomena allow several alternative compatible analyses: the same entailments follow from different ways of splitting up the facts. This turns out to be one of the leading insights of the present approach. The network of ideas surrounding tense, mood, and aspect is densely populated, and there is a large number of interchangeable formalisations of many distinctions. My belief is that the sum total of significant, really different distinctions is much smaller. In compensation, they command a vast wealth of interrelations and mutual interpretations.
Instead of choosing between conceptualisations, it pays to determine the interrelationships in order to be able to shift perspective at will. If I am right, there is no absolute distinction between accidental and significant analogies in language. There are just smaller and bigger analogies.
The original program of the work was this: develop a calculus of TMAD devices; based on the calculus, define possible TMAD systems as combinations of options from the calculus; use the conceptual grid to support TMAD typology and to allow logical inference as well as separation between forms. Example: Jespersen (1924:277) puts the past tenses of some well-known languages into a table as follows:

                       Greek      Latin      French            English      German
real perfect           gegraphe   scripsit   a écrit           has written  hat geschrieben
perfective             egrapse    scripsit   écrivit, a écrit  wrote        schrieb
habitual imperfect     egraphe    scribebat  écrivait          wrote        schrieb
descriptive imperfect  egraphe    scribebat  écrivait          was writing  schrieb

Table 1
My goal was to define a fine enough grid to catch both the rows and the items in tables of this sort, so that I can actually prove that the items fill the cells in just the different ways they do.
This means that the calculus which provides the grid should optimally allow explicit formation rules, model theory, and deduction theory. I shall not provide all of that here. I do base my calculus on established formal systems whose syntax and axiomatics are well known.
Model theoretic semantics, in contrast to translation, corresponds to going over to the dual of a language to reflect it. Different mathematics may become available, and gratifying new analogies may be gained in the conversion (such as the possible worlds interpretation of modality). In the last analysis, it really does not matter which way one looks at language. Pitting intensional (possible worlds) and extensional (event/situation) approaches against each other as competitors turns a useful duality into an ideological choice (Barwise/Perry 1983, Hintikka 19??, Krifka 1998:199).
The formalisms I develop are the main thesis of the essay. I consider myself to have accounted for those observations and impressions that are captured in the formalism. Descriptions of usage reported here are secondary to the relevant formalisations; they may be wrong as long as the formalisation is not. Impressions that cannot be derived as contextual entailments of the formalisations remain just impressions, and food for further thinking. I don’t mind platitudes; I am not after novelty. I mention facts if and when they may fit into a deductive system I try to formalise.
On the other hand, the particular shape of the formalism is not essential. It is enough to have any one formalism which unifies different approaches. There are others, interdefinable or in various respects superior to the one presented here. I sympathise with van Benthem’s catholic view on formalisms, relating logical varieties to one another through classical logic. There is linguistic interest in studying various logics which hide parts of classical logic from view, because natural language seems to do likewise.
My particular aim is to make simple things look as simple as I can make them, at the risk of losing detail. My growing conviction is that core TMAD is simple. There are relatively few genuinely different distinctions, once formal grain is sifted from impressionistic and terminological chaff (Johanson 1998). What complicates the analysis of tense and aspect is that it is intimately connected to other areas of grammar, in particular, diathesis, modality, reference and quantification, and discourse. In order to do justice to these connections, one cannot avoid forays into the interconnected areas.
This brings up another point of attitude. The fact that natural language and commonsense thinking are characteristically qualitative and vague does not mean that their explication could not be formal. The fact that people do not consciously use topological theorems does not prevent them from behaving as if they knew them. In general, I do not believe in a separation of natural language semantics and common sense thinking from formal semantics or exact thinking. There is no particular place where one ends and the other begins, no linguistic logic essentially different from mathematical logic.
I try to start from simple and undeniable distinctions such as: does an event have to go on or come to an end? Is this always so, or is it just generally the case? And so on. In compensation, I shall try to stick to my principles: apply distinctions literally, paying close attention to just what event it is that aspect distinctions apply to in each case. I try to distinguish between entailment and implicature, hard-and-fast distinction and contextual inference. Unfashionably perhaps, I look for a logic in the imagery, associations, motivations, metaphors and prototypes. Though there are no sharp lines between analytic and synthetic, knowledge of language and knowledge of fact, grammar and lexicon, semantics and pragmatics, necessity and probability, there are clear cases of all of these distinctions.[3]
My current research program could perhaps be described as the study of the category of natural language TMAD systems: algebras of events U through which different TMAD systems S1, S2 can be mapped into one another so that diagrams S1 → S2 = S1 → U → S2 commute. The object of study is thus not a single system, but morphisms in the category of TMAD systems. This applies both to object natural language systems and metalanguage theories and models about them. Together with this perspective goes a category theoretic interpretation of “is”: one thing (algebraic structure) “is” another when they are the same up to an equivalence.
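The commuting-diagram idea can be illustrated with a toy sketch (the mappings, labels and forms here are invented for illustration, not offered as an analysis): map past tense forms of two languages into a shared algebra of event types U, and check that the direct translation agrees with the composition through U.

```python
# Toy sketch: two TMAD "systems" mapped through a shared event algebra U.
# The claim S1 -> S2 = S1 -> U -> S2 becomes:
# direct[f] == from_U[to_U[f]] for every form f.

# Hypothetical event-type labels standing in for the algebra U.
english_to_U = {
    "has written": "perfect",
    "wrote": "aorist",
    "was writing": "imperfect",
}

french_from_U = {
    "perfect": "a écrit",
    "aorist": "écrivit",
    "imperfect": "écrivait",
}

# Direct arrow S1 -> S2 (English past forms to French ones).
english_to_french = {
    "has written": "a écrit",
    "wrote": "écrivit",
    "was writing": "écrivait",
}

def commutes(direct, to_u, from_u):
    """Check that the direct arrow equals the composition through U."""
    return all(direct[f] == from_u[to_u[f]] for f in direct)

print(commutes(english_to_french, english_to_U, french_from_U))  # True
```

The toy deliberately collapses the aorist/habitual ambiguity of English wrote visible in Table 1; a serious U would need finer event types to separate the two rows.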
A related insight is that there is no unique set of privileged primitives for a TMA formalism. There are many interdefinable choices of primitives and sufficient sets of axioms relative to them. This much already follows from the underlying mathematics: Boolean algebra, regular algebra, relation algebra, topology can all be based on many alternative choices of primitives. Before one runs different semantic approaches against one another in competition, one must put them on a common footing to see which differences are substantial and which are only notational. Two systems are compatible until one says p and the other not p for the same p. Even so, good features of one system can often be incorporated in another, or a third system found which combines the advantages of two.
This is another point of attitude: there is no fixed semantics for natural language, rather, there is free play with category theoretic morphisms. There does not have to be any way to choose between perspectives for good. Natural language has no one intended model of events and time, it accommodates different world views (Benthem 1998:§8). Linguistic facts have many compatible explanations, corresponding to different partial theories sometimes subsumed by a more general one.[4] There need not be one correct semantic analysis of tense and aspect. As long as all observations agree, many theories may be correct at the same time (Quine 1960). (Compare also van Benthem's program for semantics in 1986:§10).
The same may be true of grammar as a whole. Conceivably, there is no one perspective, logical form, model class, or syntactic theory which does justice to the grammar of a given language, or even to one construction. We may have to switch perspective in the middle of an analysis. This is a logical analogue of etymology, where no one schematic meaning or prototype may cover the entire spectrum of uses of a word. The grammarian’s “is” may have to be as abstract as the mathematician’s, that is, operate on the level of morphisms, or analogies, as they are called in grammar.
I think grammatical constructions in natural language are multiply mapped, rather like its semantics: they allow alternative partial analogies which are used to extend the system in different directions as it evolves. This is a Wittgensteinian toolbox view, or more fashionably, a radical constructionist view of language, instead of a quest for the underlying structure (Croft 2001). There need not be any one underlying structure of language or ideal notation for linguistics, any more than there is one for mathematics.
It may also be possible to explain the same phenomenon by a grammatical, semantic, or discourse rule, and predict the other rules from the choice. If there is no way to choose, why choose? Any one, or all three, can be right: language is redundant. Redundancy is a locus for neutralisation, which language change may tell apart.
Language typology can be approached top-down or bottom-up. The (currently perhaps more popular) top-down, semasiological or functional typology starts from some function like passive, perfect, or future and studies how it is manifested across languages. The shared function is used to explain variation and unity of structure. Bottom-up, onomasiological or structural typology starts from some construction type like word order or case system and studies its variations. The shared structure is used to explain variation and unity of function. I work mainly bottom-up. Both perspectives are useful, and dually complement one another.
It used to be customary in logic grammar works of Montague vintage to present a fragment of a natural language complete with syntax and semantics, just as a proof of concept. The exercise has lost some of its initial interest given the degree of freedom in the task (van Benthem 1986:200).
I won’t provide separate rules for translating natural language fragments into the calculus. I hold on to a hope that the calculus itself will become flexible enough to allow surface true representation of natural languages. This hope is a radical form of Montague’s project of English as a formal language. Only I am going to venture past English to languages of many types. Compared to Montague’s semantics, a difference is that I do not hang on to categorial type theory, but go to dual or adjoint types when I want to.
The opus magnum is divided into three parts. The first part lays out the calculus of events (and objects). The second part applies it to a selection of individual languages. The third part pulls together results from Parts I and II toward a logical (and developmental) typology of tense, mood, aspect, and diathesis.
The order of the four accidents of the verb in the title corresponds to their logical scope in the Creole prototype. The logical order of presentation in the book reflects my progress in digging into event types.
The first part is divided into chapters on the formalism, aspect, tense, time adverbials, time within the sentence and discourse, modality, diathesis, and object types.
The second part consists of case studies of the TMAD systems of a range of languages, including English, Portuguese, French, German, Russian, Classical Greek, Bulgarian, Finnish, Chinese, and casts cursory glimpses at a number of others, including Creole languages, Hungarian, Icelandic, Inuktitut, Irish, Japanese, Kikuyu, Lezgian, mainland Scandinavian, and Turkish.
The third part pulls together consequences of the theory and the empirical studies for the typology of natural languages. The part is divided into a chapter each on traditional structural ideas, more fashionable evolutionary or developmental typology, and logical studies.
A work which traces the interconnections of a wide range of facts really ought to be shaped as a hypertext. In print, anticipation and repetition are unavoidable, as the same facts take part in generalisations in different directions.
The work will appear abstract at the start, and its fabric loose from close up, but the plot thickens. It can be faulted for being too long, or for trying to fit everything in. The only defense is that this is just what the work is about: it is an attempt to fit a lot of things together.
The study of concepts related to TMAD will profit from elementary use of the many branches of mathematics they involve. Some relevant distinctions, such as open/closed, point/region, (dis)connected, (dis)continuous, are topological.[5] Some are metric, like short/long, (un)bounded. Some are order concepts, like before/after. Some are algebraic, like product, sum, iteration and complement. I shall be using something from all of them. I shall try to use terms in their usual mathematical senses, as defined e.g. in Kelley (1955), Chang and Keisler (1973), Salomaa (1973), and Halmos (1956, 1974).
Lattice theory provides the concepts of partial order, lower and upper bounds, limits, completeness and continuity. It builds on the theory of order relations, which defines linear, dense, continuous, partial, backwards linear, or weak orders. At the concrete end, there is the theory of the real time line. Relation algebra studies properties and operations of relations.
Topology is qualitative geometry. It explicates the concepts of discrete and continuous, separate and connected, inside and outside, open and closed, neighborhood, adjacency, and boundary.[6] Ordered topological spaces define the concept of convexity. Metric (uniform) topological spaces produce the concepts of duration and distance (long and short, near and far).
Projective and affine geometry have marginal aspectual relevance too. A projection, generally, is a map from a space to a quotient space with simpler structure (Kelley 1957, Maclane/Birkhoff 1967). Projective geometry in particular defines perspective, which is an important metaphor in aspect theory. In a projective space parallel lines meet at a point or line at infinity: a point of view or horizon. Viewed up close or from within, a region fills the horizon and seems unbounded, viewed from outside at a distance, it seems bounded and contracts to a point (Kamp 1981b:51). Changes of scale in turn are linear (affine) transformations.
The theory of regular languages belongs to formal language theory, which deals with semigroups and monoids. Graph theory combines the theories of relations and orders with group theory and topology. Its key concepts include graphs, trees, cyclicity, and connectedness.
Significant parts of these different mathematics are closely related, in fact identical in the sense of category theory. Thus at the abstract end comes category theory, the theory of invariants of algebraic systems in terms of functors and morphisms, which allows a bird’s-eye view on notions such as resolution (Link 1998:255) or duality (van Benthem 1982:§I.2.3).
A key concept turns out to be the category theoretic notion of duality (MacLane/Birkhoff 1967:32ff, Benthem 1986, Pratt 1997, Barr/Wells 2002:5). In terms of category theory, the dual of a diagram is obtained by reversing arrows. Duality is relative to a diagram, so a concept can have more than one dual relative to different diagrams. For instance, meet is the Boolean dual of join and the ring dual of +.
Once one puts on the right glasses, there are dualities all over. Boolean dualities actually form Aristotelian squares or Klein four-groups between a concept, its complement, dual and antidual (Halmos 1974:8, Löbner 1986, 1990:§4). Those that have been collected here will come up in one form or another in the body of the work.
Or is dual to and. Some is dual to all. Also is dual to only. Part is dual to whole. Big and small are dual. If is dual to only if. Possible and necessary are dual. Negation, equivalence, one and the are self-dual (Löbner 1990:76). Boolean dual and complement fall together in the two-element algebra 2.
Boolean algebras and discrete topological spaces are dual (Stone). Ideals and filters are dual. A set is dual to its power set. The converse of a linear order is its dual. Minimum and maximum are dual. Partial and total functions are dual. Discrete and continuous are dual. Product and sum, quotient and difference are dual.
Category theory objects and arrows are dual. Morphisms and their inverses are dual. Morphisms and composition are dual. Initial and final algebras are dual (Baillie 1989). Induction and coinduction are dual (Rutten 1995). Properties and relations, Boolean and relation algebras are dual.
Geometry generates many dualities. Long and short, straight and round are dual pairs. Inside and outside, point and line, line and plane, line and circle (angle/arc) are dually related. Edges and vertices, paths and boundaries are dual. Vectors and matrices are dual. Integral and differential are dual.
Inanimate nature exhibits dualities. Time and space can be dually related. Potential and flow are dual.[7] Open and closed systems are dual. Concatenation and superposition are dual. Particles and waves are dual. Information and entropy are dual.
Animate nature likewise. Organism and environment are dual. Freedom and determinism are dual. Agency and accident are dual. Strategies and utilities are dual. Knowledge and power are dual. Brain and brawn are dual.
Language abounds with dualities. Types and tokens are dual. Objects and their properties are dual. Languages and models are dual. Form and content, structure (category) and function are dual. Morphology and syntax, syntax and semantics are dual.
Duality turns up in tense and aspect. Events and states are dual (Pratt 1992). Events and objects are dual. Propositions and times are dual. Past and future are dual. Open and closed events are dual. Imperfective and perfective are dual. Loose (location) and tight (duration) adverbials are dual. Before and until are dual. Between and fromto are dual. In and for are dual. Duration and frequency are dual. Tense (non locally testable, dense event types, live properties) and aspect (locally testable, closed event types, safety properties) are dual.
Mood has its dualities. Possible worlds and situations are dual. Possible worlds frames and modal algebras are dual. Cause and let are dual. Able and happen are dual. Habits and dispositions are dual. Subjunctive and optative are dual. Liveness and safety are dual.
So does diathesis. Cartesian product and identity, or passive and reflexive are dually related. Active and passive are order duals. Reflexivity and iteration turn out to be dual as well.
In grammar, word order and morphology are dual. External and internal cases are dual. Count and mass, singular and plural are dualities.
On the side of theory, generative and constraint grammar, and generative phonology and optimality theory, can be considered dual respectively. Parsing and generation, left and right, top-down and bottom-up, deterministic and nondeterministic, depth-first and breadth-first, are all dualities. Languages and automata are dual. Variables and combinators are dual. Imperative (procedural) and declarative programming are dual. Control flow and data flow are dual. Sorting and permutation are dual.[8]
In discourse, foreground and background are dual. Settings and episodes, frames and scripts are dual. Metaphor and metonymy are dual.
Chu spaces and two-person games form a self-dual category (Pratt 1999, van Benthem 2002).
Duality saves work, as one only has to solve one half of a problem. Often a complex problem becomes simple by going over to the dual (well, complex and simple are dual concepts).
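As a small illustration of work-saving by duality (my own sketch, not drawn from the works cited): Boolean duality can be computed mechanically by swapping and/or, and De Morgan’s laws guarantee that the dual of a truth function agrees with the complement of the function applied to complemented arguments.

```python
from itertools import product

# Boolean duality as a syntactic operation: swap "and"/"or",
# leaving atoms alone. dual(dual(e)) == e.
def dual(expr):
    if isinstance(expr, str):            # atom
        return expr
    op, left, right = expr
    swapped = {"and": "or", "or": "and"}[op]
    return (swapped, dual(left), dual(right))

def evaluate(expr, env):
    if isinstance(expr, str):
        return env[expr]
    op, left, right = expr
    l, r = evaluate(left, env), evaluate(right, env)
    return (l and r) if op == "and" else (l or r)

# De Morgan: the dual evaluated at p, q equals the negation of the
# original evaluated at not-p, not-q.
e = ("and", "p", ("or", "q", "p"))
for p, q in product([True, False], repeat=2):
    lhs = evaluate(dual(e), {"p": p, "q": q})
    rhs = not evaluate(e, {"p": not p, "q": not q})
    assert lhs == rhs
print("De Morgan duality verified")
```

Having solved a problem for one of the pair, the solution for the other comes for free by running it through `dual`.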
Table 2 represents the events on a Monday morning. Time runs in columns, simultaneous events in rows. Each cell represents a (simple or complex) event token. The white cells are (relatively) simple event tokens, the shaded ones are (more) complex event tokens involving simpler events on the previous rows.
lie            | sit
lie            | sit up
breathe in     | breathe out | breathe in | breathe out | breathe in | hold breath
breathe        | sigh        | brace oneself
sleep          | drowse      | awake
sleep          | wake up     | awake
wake up
silent         | ring        | silent
alarm goes off | alarm is on | alarm shuts off
arm on pillow  | arm in air  | arm over clock | arm on clock
arm at rest    | arm moves   | arm at rest
arm at rest    | arm starts moving | arm stops
turn off alarm

Table 2
A starting point of my way of modeling events is that events of a type are not unique. An event of a given type can contain, be contained in, precede or follow other events of the very same type (Declerck 1997:64,104). This should be particularly clear with open event types.
For instance, the complex event of waking up can be stretched to cover anywhere from the whole course of events to an arbitrarily narrow neighborhood of the instantaneous change of state (if any) from sleep to vigilance. Although there are two cells labeled wake up in the table, they count as parts of the same event of waking up, whose boundaries are not fixed. A description of an event in process like I was waking up can denote a complex course of events including the alarm clock going off, sitting up in the bed and yawning, and so on. When pressed, we may be able to zero in on the actual moment of awakening, but usually we don’t, and there doesn’t even need to be one. Although we know more or less when the event happened, we are not exact about when it started and when it ended and what details were involved in it. Events are not individuated up to identity as concerns their boundaries or internal detail, although we can count how many of them there are within a given course of events. Compare counting waves. At any given scale or granularity, we can count how many waves there are without being able to say exactly where one ends and the next one begins. At a finer resolution, we can count more waves (Bennett 1988:§34, §49).
A consequence is that all events, closed events as much as open ones, are vague about their boundaries. Closed events are vague too because they include their boundaries, and the boundaries are states.[9] Looking back, this was one of the emancipating ideas in this approach. We tend to think of events as regions and their boundaries as points. But we can dually think of the bordering regions as the boundaries of a point. This may be part of what writers on aspect have in mind when they talk of the perfective and imperfective perspectives on an event as viewing an event from the inside as extended, or from the outside, contracted to a point (Lyons 1977:708-710, Smith 1991:104).[10] The difficulty of turning the metaphor into formal sense comes from thinking too concretely in terms of some fixed model of time. The point-region distinction is generalised by the open-closed distinction in general topology, of which our models are an instance. I return to this in the section on topology.
A good model of events should embody the ideas of resolution, i.e. granularity and scale of events. Granularity concerns possibly discontinuous projections that smooth or coarsen a course of events (Hobbs 1985, Euzenat 1993, Link 1998). Scale concerns continuous affine transformations changing unit and origin of measurement (Krantz 1971).
Scale splits a course of events into sequences of subevents with varying scale, which corresponds to continuous stretching or contracting of the course of time. Scale helps in understanding continuity and in relating it to discrete change. It is sound mathematical practice to capture continuous change as the limit of a series of discrete changes of ever finer scale.
Granularity sums up, selects the significant and leaves out insignificant or eventless bits, thus redefining the contiguity of events at each level of resolution. It is entirely true to say, at one level of resolution, that I got up immediately after I woke up, although it is also true that I did sit up and slam on the alarm clock in between, if we look at the events more closely. This is part and parcel of the qualitative, generic nature of natural language.[11]
There is a dependence between distance and scale due to perspective. Given constant resolution (number of noticeable differences), one can discern finer detail close by, while the same number of distinctions will produce coarser divisions far away. Perspective contracts distant objects to a point and expands nearby ones.
This is consistent with the recurrent idea of constructing pointlike events as atoms in a Boolean algebra of events, while an event is extended if it has common parts with other events (Wiener 1914, Kamp 1979, 1981b:48ff, Löbner 1988:169).[12] Kamp’s discourse representation corresponds to a level of resolution at which events appear as points, while at a finer scale of physical time they turn out to have internal parts. Thus what counts as a point depends on the topology one is working with, which depends on granularity. So it is possible for an event type (say, a generic state like be reliable or smoke) to be true pointwise at a given resolution, while not true at each and every point at a finer resolution. That He is reliable and He smokes are true does not imply that He is being reliable now or He is smoking now are.
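The resolution-dependence of pointwise truth can be sketched operationally (the data and the coarsening rule are invented for illustration): a generic state like smoke holds at every coarse-grained point, while failing at most of the finer points each coarse point sums up.

```python
# A "smokes" record at minute resolution: True where an actual
# smoking event is going on (invented sample data).
minutes = [False] * 60
for m in (10, 11, 12, 40, 41):
    minutes[m] = True

def coarsen(cells, factor):
    """Granularity projection: a coarse cell counts as true if any of
    the finer cells it sums up is true."""
    return [any(cells[i:i + factor]) for i in range(0, len(cells), factor)]

hours = coarsen(minutes, 60)

print(hours)          # [True]: "he smokes" holds at the coarse point
print(all(minutes))   # False: not true at every finer point
```

The existential coarsening rule is of course only one choice; a majority or measure-based rule would model other generic readings.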
There are branches of mathematics that deal with scaling, smoothing, and simplification of complex quantitative phenomena so as to turn a detailed image or quantified proposition into a coarser picture or a qualitative, categorical assertion. So far, applications to linguistics have been of marginal relevance (Thom 1970, Zadeh 1978, Yang 2001). Granularity in space is studied in Narashimhan/Cablitz (2002).
Following one common practice, I use ‘event’ as aspect neutral cover term like Comrie’s (1976) ‘situation’ or Bach’s (1980) ‘eventuality’ (Tenny/Pustejovsky 2000).
Imagine a set A of (atomic) event (token)s partially ordered by concatenation. From it construct a set S of longer or shorter courses (sequences) of events from A. In formal language terms, A is a vocabulary and S is a language, i.e. a subset of A+. A complex event (token) x is a set of simple courses of events, i.e. a sublanguage of S. Thus for instance a complex event token of type build a house can include all sorts of subevents like carrying stuff, casting concrete, nailing, etc. A complex state like be helpful can include helpful acts. The set of such complex event (token)s R is a subset of the power set of S. Complex event tokens need not be connected. A complex event token d is a subevent of e if d ⊆ e. This way of defining subevents makes subevent structure like a Boolean algebra, i.e. there is no unique fixed split of an event into subevents.
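A minimal sketch of this set-theoretic first approximation (the atoms and event names are invented for illustration): atomic tokens form a vocabulary, courses of events are sequences over it, and a complex event token is a set of such courses, with the subevent relation as set inclusion.

```python
# Atomic event tokens: the vocabulary A.
A = {"carry", "cast", "nail"}

# Courses of events: sequences over A (a finite fragment of A+).
course1 = ("carry", "cast", "nail")
course2 = ("cast", "nail")

# A complex event token is a set of simple courses,
# i.e. a sublanguage of S.
build_house = frozenset({course1, course2})
finish_house = frozenset({course2})

# Subevent relation: d is a subevent of e iff d is a subset of e.
def subevent(d, e):
    return d <= e  # frozenset inclusion

print(subevent(finish_house, build_house))  # True
print(subevent(build_house, finish_house))  # False
```

As in a Boolean algebra, nothing here fixes one canonical split of build_house into subevents: any subset of its courses qualifies.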
Event tokens can be split into smaller events along orthogonal dimensions (Link 1997). An event may be a temporal part of another event (for instance, coughing consists of a series of coughs), or it may be a spatial or logical part of another event (carrying two suitcases may involve carrying one in each hand; kissing by definition involves touching; Ar. Phys. VI.4). These relations may hold at the same time. Consider for instance Herweg’s (1991) example tune a violin, consisting of interleaved subevents of iteratively tuning the strings of the violin in turn. Times are sets of simultaneous events. Events in a complex event do not have to be simultaneous in general, but events at a given time are. Neither times nor events need be connected (e.g. sleep at nights).
An event type in turn corresponds (in this set-theoretical first approximation) to a set of complex events. The set of event types E is a subset of the power set of R, corresponding to families of languages in formal language terms. A time t is also an event type, i.e. a set of complex events (possibly in different situations or alternative futures). Thus the set of times T is another subset of the power set of R. An event type is on the same level of complexity as a time (i.e. a set of sets of simple courses of events); they only cut the set of simple event tokens differently. The set of times T must be congruent with concatenation, so that events x, y ∈ t only if x ≤ y entails y ≤ x. Times are equivalence classes of the indifference relation induced by concatenation (van Benthem 1982:1189).[13]
This formulation is impartial about possible worlds: simultaneous courses of events may happen at the same location, at different locations, or in different alternative possible worlds.
The above construction will not be developed in the sequel. It only serves as a guide to intuitions on the way to an algebraic treatment, where event types are individuals in their own right (Keenan and Faltz 1985:52, Link 1998:252).
My event calculus will be built on a number of well-known and closely related algebras: Boolean algebra; regular algebra for events, closely related to relation algebra for objects; and an algebra of transducers to describe relations between objects and events. These algebras can be related to one another using common concepts from universal algebra or category theory.
All of the algebras build on the basic category theoretic constructions of product, projection, adjoint, identity, terminal element, and their duals.
The algebras are related as product algebras of one another. Regular algebra is a monoid of Boolean algebras, or equivalently, a Boolean algebra of monoids. Relation algebra extends it by having converses. Transduction algebra in turn is a relation algebra of regular algebras.
I distinguish several sorts of variables: nonempty free (existentially bound) event or object type variables d,e,f,g, untyped possibly null event or object token variables x,y,z, either free or bound, time variables t,u,v, sorted event type variables a,b,c,d,m,p,q,r,s with aspect restrictions on their values, and contextually bound (indexical and anaphoric) token variables I, here, now and then. Despite the traditional name, now and then here do not (only) range over times but over event types; i.e. they can exhibit aspect.
I follow a common convention according to which identical variable tokens corefer within a formula while different variable tokens do not (so different tokens can denote different events). Given that I do not make a sortal distinction between types and tokens, the difference between type and token variables is not in their range but in how they are instantiated. Type variables are call by name, evaluated outside in; token variables are call by value, or inside out. For instance, event type (eat x)+ describes rumination, or eating the same thing again, eat x . eat x, while eating twice eat+ is the same as (eat _)+ and instantiates to eat _ . eat _ or eat x . eat y. (Comon et al. 2002:66)
< is an anonymous free string variable over times/event types. It is a notational variant of the identity or top element 1* of the regular event algebra. _ is an anonymous variable over concatenation atoms, so _ equals 1^{0} and < equals _*. These variables are never shared. The interpretation of _ as an atomary variable only makes sense on discrete time. In dense time, let _ denote the event type of extended events in 1*\∅*.
Genetically, the starting point of event calculus was the language algebra of extended regular expressions (McNaughton/Yamada 1960), or a combination of Boolean algebra and regular expressions (Aho/Ullman 1972, Rozenberg/Salomaa 1997). It is related to relation algebra (Jonsson/Tarski 1951, Ng/Tarski 1977), which in turn is related to dynamic algebra/logic and two-dimensional logic (Pratt 1978, 1992, 1994, van Benthem 1996), thus indirectly even to such less obvious cousins as Petri nets (Petri 1962, Winskel 1986) and linear logic (Girard 1987). For a family tree, see Pratt (1992c).
Boolean algebra (Halmos 1974) defines join (union) ⋃, meet (intersection) ⋂, complement ¬, sum (symmetric difference) +, relative complement (difference) \, material conditional → and equivalence ↔.
I occasionally leave out ⋂ between event types, i.e. use them as homonymous intersecting modifiers (equivalently, one-place prefix or postfix operators under composition, cf. relation algebra). A particular natural language, as an instance of the calculus, will include event type constants for individual verbs.
The Boolean sum of events a+b is the same thing as symmetric difference a⋃b\a⋂b, exclusive disjunction a⋂¬b ⋃ ¬a⋂b, or nonequivalence ¬a↔b. It is commutative and associative, so a+b = b+a and (a+b)+c = a+(b+c) = a+b+c. Join is defined in terms of sum as a⋃b = a+b(1\a) = a+(b\a) = a+b+ab (Boole 1958:56)[14]
Boolean algebra is also an idempotent ring relative to meet and sum, obeying the arithmetic of 0 and 1 modulo 2 (Halmos 1974), in which a^{2} = aa = a and a+∅ = a. In this arithmetic, every element is self-inverse: a+a = ∅, so there is no sign: −a = a, and ∅⊆a⊆1.
Specifically, 1\a = 1+a. This is the only time sum and difference coincide. In general a+b = a\b + b\a, for instance a\∅ = a and ∅\a = ∅, so a+∅ = a. The law of cancellation does not hold for meet, so a⋂b = a⋂c does not entail b = c even for a > ∅ (Boole 1958:14). It fails for join as well, but holds for sum and equivalence, so a+b = a+c and a↔b = a↔c entail b = c.
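These ring laws can be checked on a concrete Boolean algebra. The following illustrative sketch takes finite sets as the algebra, with sum + as symmetric difference ^ and meet as intersection &; the example elements are mine.

```python
a, b, c = {1, 2}, {2, 3}, {2, 4}    # arbitrary example elements

assert a | b == (a ^ b) ^ (a & b)   # join via sum: a∪b = a+b+ab
assert a ^ a == set()               # every element is self-inverse: a+a = ∅
assert a ^ set() == a               # a+∅ = a

# Cancellation fails for meet: a∩b = a∩c does not entail b = c ...
assert a & b == a & c and b != c
# ... but holds for sum: b is recovered from a+b by adding a again
assert a ^ (a ^ b) == b
```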
A Boolean algebra is also an additive Abelian (commutative) group of order 2 where every element is nilpotent: 2a = a+a = ∅ (Halmos 1974:3)
The initial object of the category is the two-element ring 2 of integers modulo 2, where 2 = 0 (the arithmetic of even and odd). The only smaller object is the one-element group whose cycle length is 0, the trivial Boolean algebra where ∅ = 1 (Chang/Keisler 1971:291).
Boolean sum a+b is at once the dual and the complement of equivalence. It has dual disjunctive and conjunctive normal forms a\b ⋃ b\a and (a⋃b) ⋂ (¬a⋃¬b). Equivalence has the dual forms a⋂b ⋃ ¬a⋂¬b and (a⋃¬b) ⋂ (b⋃¬a). Note the following equivalences.
a↔b = ¬a↔¬b = ¬(a+b) = ¬(¬a+¬b)
a+b = ¬a+¬b = ¬ (a↔b) = ¬ (¬a↔¬b) = ¬a↔b = a↔¬b
A Boolean algebra is also a complemented distributive lattice ordered by a partial order of inclusion ⊆. Lattice theory defines monotonicity a⊆b → fa ⊆ fb, distributivity f(a⋃b) = fa ⋃ fb and continuity f(⋃e) = ⋃f(e). These notions are related through the lattice equivalence of a⋃b = b (absorption) and a ⊆ b. A tree is a one-way linear semilattice and linear order is a two-way linear lattice.
Boolean inclusion as an event type always denotes an element of the Boolean algebra 2. Material implication a→b (residual, Pratt 1991, Kozen 1992, 1994, van Benthem 1991, 1996:69) is a dual to difference (quotient) b\a. Inclusion ⊆ can be defined in terms of identity and join as a⋃b = b, in terms of identity and meet as a⋂b = a, or in terms of residual or difference as a\b = ∅ or a→b = 1.
It is also possible to let a⊆b denote the Boolean interval of elements between and including a and b. Any nonempty Boolean interval is also a Boolean algebra. In particular, ∅⊆a equals a and a=a is the two-element algebra 2 of ∅ and a. On this interpretation, a⊆b is true just when it is not ∅. This is a dual of the usual definition of inclusion (Hughes/Cresswell 1968:318fn317). It compares to my treatment of ≤ in regular algebra.
A complete Boolean algebra adds infinitary versions of join and meet. Complete join ⋃e and complete meet ⋂e denote the top (least upper bound) and bottom (greatest lower bound) of e with respect to inclusion. These are second order operators. Kleene star e* can be explicitly defined as the fixpoint ⋂x: x = xe.
(Complete) sum ∑e is an analogous generalisation of disjoint join. ∑e ⊆ ⋃e. The (complete) sum ∑e is a(n infinitary) join of pairwise disjoint elements (relative atoms). When the events in the sum belong to the same event type e, we may define multiples of the event type by setting 0e = e0 = ∅*, (n+1)e = e(n+1) = ne+e. Plural definite the men denotes ∑man. Note that the regular event type e^{n} is included in the Boolean event type ne. For instance, e.e is a subtype of e+e.
∑e = ⋃e when e is a partition, i.e. a set of relative atoms which forms a basis for a quotient algebra of e, i.e. a resolution of e. Note the connection to event resolution above.
Perspectives on Boolean operators are diagrammed in the following. Boolean product or meet is symbolised by concatenation.
1, a→1, a→a, a↔a multiplication identity , join zero
a, 1→a, a⋃¬1 idempotent, addition inverse
¬a, a+1, a→0, 1\a, 1⋂¬a join inverse, meet complement
ab, a⋂b multiplication, meet, intersection
b\a, b(a+1), ab+b quotient, difference, relative complement, meet adjoint
a→b, b⋃¬a material implication, residual, join adjoint
a+b, (a⋃b) \(a⋂b) sum, disjoint join, symmetric difference, ring dual of meet
a⋃b, a+ab+b, (a+1)(b+1)+1 join, disjunction, union, Boolean dual of meet
∅, ¬1, a+a multiplication zero, join/sum identity
The product algebra a´b of Boolean algebras with operators defined pointwise is a Boolean algebra. The atoms of the product are the disjoint joins of concatenations of the atoms of the factors (Halmos 1971§26). There is an isomorphism between sum and direct product of Boolean algebras. All Boolean algebras are direct powers of 2. A regular algebra in turn is the concatenation closure of a Boolean algebra, or equivalently, a Boolean sum of (morphisms on finite) monoids.
An atom[15] b in a Boolean algebra is an element without proper parts, i.e. whose only parts are ∅ and b itself. Note that ∅ is an atom; 1 is one only in 2. Atoms are terminal elements in a category (Barr/Wells 2002:4).
An element or an algebra is atomic if it is a sum of atoms, and atomless if it contains no atoms. An atomless algebra is dense: every element contains another, i.e. b ⊂ a for some b for all a. This means every nonempty element a is divisible into two, b and a\b. In an atomic algebra every element is a sum of atoms. Atomic and atomless are contraries.
A Boolean algebra b can be split into two at any element a so that the product (b⋂a) ´ (b\a) of the quotient algebras is isomorphic to the original algebra b. (The product is dual to the Boolean sum.) A Boolean algebra can be split into an atomic and an atomless part in this way (Chang/Keisler 1971:295).
Every element a of a Boolean algebra b defines a smaller quotient algebra a⋂b whose unit is that element. The sum algebra of Boolean algebras a+b with operators defined pointwise is a Boolean algebra. Then the atoms of the sum are sums of atoms of the terms.
Given a Boolean algebra with elements a and b, call a an atom relative to b if a⋂b is an atom in the quotient algebra by b. Then a⋂b is either a or ∅. This is a symmetric relation.
Atomic inclusion a ∊ b can be defined in Booleans to mean ‘a ⊆ b and a is an atom’. It is the Boolean counterpart of the set theoretic notion of membership. Atoms allow defining immediate inclusion x ⊂_{+} y as the relation ‘x+a = y and a is an atom’. In an atomic algebra, proper inclusion ⊂ is the transitive closure ⊂_{+}+ of immediate inclusion and inclusion ⊆ is its reflexive transitive closure ⊂_{+}*. Meet, inclusion and identity fall together among atoms. So do join and sum. If a is an atom,
(a ⊆ b) = (a⋂b>∅)
If a and b are atoms,
(a=b) = (a⊆b) = (a⋂b>∅)
a⋃b = a+b
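These atom laws can be tried out on a concrete model. The sketch below (data mine) takes singleton sets as the non-null atoms of a Boolean algebra of sets: among atoms, meet, inclusion and identity fall together, and for distinct atoms join coincides with sum.

```python
atoms = [{1}, {2}, {1}]            # illustrative atoms (singletons)
for a in atoms:
    for b in atoms:
        assert (a <= b) == bool(a & b)   # a ⊆ b iff a∩b > ∅, for atomic a
        assert (a == b) == (a <= b)      # identity coincides with inclusion
        if a != b:
            assert (a | b) == (a ^ b)    # join = sum for distinct (disjoint) atoms
```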
The Boolean dual of meet is join, the ring dual of meet is sum. Identity and sum are complementary in 2. It is the largest allatom Boolean algebra. The dual of an atom is the complement of an atom, or a coatom. Any element of an atomic Boolean algebra is a meet of coatoms. The meet of all coatoms is ∅.
A Boolean algebra is dual to a discrete topological space (a field of subsets). A topological base of a Boolean algebra A is a subset B such that every element of A is a join of a subset of B. The atoms of an atomic A form a base. It is also a factorisation in that every element is a unique sum of atoms.
A cobase of A is a subset C such that every element of A is a meet of elements of C. It is a factorisation if the meet is unique. The set of coatoms is a factorisation. A cobase C is free (independent) if every proper subset of C has nonempty meet.
A Boolean base (set of generators) for a Boolean algebra A is a set B of elements so that A is the closure of B under Booleans. Topological base and cobase are Boolean bases.
A Boolean valuation on a set F of elements e of a Boolean algebra A is a function f from F to A whose value for each e is e itself or its complement. The set of such valuations is of type 2^2^F. A Boolean base F is free if every valuation f on F is consistent (has a nonempty meet).
A Boolean algebra is free if it has a free base B. This means that any mapping from B to a Boolean algebra can be extended to a Boolean morphism. Boolean algebras 2, 4, 16, and 256 are free (they are freely generated by zero, one, two, and three elements or atomic features, respectively). The three-atom, eight-element Boolean algebra 8 is not free.
Any Boolean algebra is a product of free Boolean algebras, so it has a base which is a sum of free bases. The size of the smallest complete featurisation of a Boolean algebra is log_{2} of the size of the algebra. For instance, the Boolean algebra generated by two features (independent elements) a, b is the four-atom, sixteen-element Boolean algebra 16. Its atoms are the fourfold a⋂b, a\b, b\a, ¬a⋂¬b.
The size of a Boolean algebra is always a power of two, and that of a free Boolean algebra 2^2^n for n the size of the base. There are 2 Booleans in 2, four in 4 of which 2 are the same as in 2, and sixteen in 16 of which 6 are the same as in 4. This leaves 10 live binary ones in 16, of which 8 have been given short symbols. The two left over are each alone sufficient for defining the lot, the Sheffer stroke a↓b = (a ¬⋃ b) = ¬(a⋃b) and its dual a↑b = (a ¬⋂ b) = ¬(a⋂b). More on Boolean algebra in the Appendix.
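That either of the two leftover connectives alone suffices to define the lot can be verified by truth table. A minimal sketch in Python, taking ¬(a⋃b) as the generator (function names are mine):

```python
def nor(a, b):
    return not (a or b)                 # the dual stroke ¬(a∪b)

def neg(a):
    return nor(a, a)                    # ¬a = a ↓ a

def join(a, b):
    return nor(nor(a, b), nor(a, b))    # a∪b = ¬(a ↓ b)

def meet(a, b):
    return nor(neg(a), neg(b))          # a∩b = ¬a ↓ ¬b

# exhaustive check over the two-element algebra 2
for a in (False, True):
    for b in (False, True):
        assert neg(a) == (not a)
        assert join(a, b) == (a or b)
        assert meet(a, b) == (a and b)
```

The same construction works starting from ¬(a⋂b) by duality.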
The language of regular events consists of concatenation, symbolised by juxtaposition between one-letter variables and by a dot between longer terms, the Booleans, join (alias alternation |) ⋃, optionality ?, Kleene star ^{*}, iteration ^{+} (at least once), and the empty event type ∅. Relations include identity = and inclusion ⊆, which can be defined in terms of identity and join as a⋃b = b.
The regular operator ^{+} called iteration is concatenation closure, i.e. implies adjacency (contiguity) just as much as concatenation does. There is no bias for or against discreteness involved although the term may suggest it. Discrete repetition or series is the special case of iteration of closed events.[16]
Empty event type ∅ is distinct from the null event type ∅^{*}. The former denotes nothing (is contradictory), the latter denotes the null event (concatenation identity, a virtual event that happens all the time yet takes no time).[17] ¬∅ is the Boolean unit event type 1. Optionality e? can be defined in terms of join and the null event type as e⋃∅^{*}. The usual Kleene star is defined by e* = e^{+}⋃∅^{*}. Finite powers e^{n} can be defined in the usual way. For instance, ∅^{*} is e^{0}. Notation e^{(n,m]} denotes the join of powers from n (exclusive) to m (inclusive). For instance, e? is e^{[0,1]} and e^{+} is e^{(0,∞)}.
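These definitions can be modelled finitely, with languages as sets of strings and ∅* as the set containing only the empty string. The sketch below (names mine) approximates the star by a finite bound:

```python
def cat(e, f):
    return {x + y for x in e for y in f}     # concatenation e.f

def power(e, n):
    return {""} if n == 0 else cat(power(e, n - 1), e)   # e^n, e^0 = ∅*

def star(e, bound=3):
    out = set()                               # e* approximated up to e^bound
    for n in range(bound + 1):
        out |= power(e, n)
    return out

def plus(e, bound=3):
    out = set()                               # e+ = at least one iteration
    for n in range(1, bound + 1):
        out |= power(e, n)
    return out

e = {"a"}
assert power(e, 0) == {""}                    # ∅* is e^0
assert star(e, 3) == plus(e, 3) | {""}        # e* = e+ ∪ ∅*
assert e | {""} == plus(e, 1) | {""}          # e? = e ∪ ∅*
```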
More precisely: regular algebra, or Kleene algebra (Conway 1971, Kozen 1991,1994), or the algebra of monoids (Pin 1991), consists of a noncommutative but associative operation of concatenation with zero ∅ and unity ∅*. It is denoted by the language of extended regular expressions.
A monoid is a semigroup with unit, i.e. an algebra with an associative and noncommutative product called (con)catenation a.b and a twoway concatenation identity ∅*. For instance, a category is a monoid of arrows. Left and right projections a, b of concatenation ab are denoted by a´b and a`b. Quotients a/b and a\b are adjoint to concatenation so a.(a\b) = ab = (a/b).b.
Concatenation is a free (cartesian) product satisfying cancellation a.b = ∅ iff a = ∅ or b = ∅ as long as the domain of event tokens is a free monoid. (Cf. relativisation in van Benthem 1996:66)
A Kleene algebra or the algebra of basic regular expressions adds the empty element ∅ and complete join ⋃, which allows defining alternation a|b and concatenation closure (Kleene star) a* (Kozen 19??, Desharnais 19??). Regular algebra or the algebra of extended regular expressions adds the rest of the Booleans, including meet ⋂ and complement \, and the unit element of regular algebra 1* dual to ∅. The carrier of a regular algebra is the sum of its atoms, singled out when need arises by 1^{1} (also known as the alphabet in formal language theory). The type of nonnull events _ is 1*\∅*. Note that ∅* = 1^{0}.
Boolean algebra is not an extensive category, for multiplication does not satisfy cancellation, i.e. a⋂b = ∅ does not entail that one of the factors is zero, except in 2. Concatenation does (van Benthem 1991:244). Concatenation is associative; Boolean meet is associative and also commutative. Both distribute with join. Join and meet are idempotent, concatenation is not. Concatenation closure (Kleene star) is.
There is a subclass of regular languages of particular interest here. That is the class of starfree languages. A star free extended regular expression denotes a (possibly infinite) language obtained from finite languages with Booleans and concatenation. More explicitly, the starfree languages are the closure under the following conditions:
unit 1* is star free
atom a is star free
ef is star free when e,f are
e⋃f and e\f are star free when e,f are
Infinite (cofinite) languages arise here from complementing finite ones. For instance, a* “only a’s” is denoted by the expression ¬(<¬a<) “no non-a’s”, where a is a concatenation atom. This description also includes (a⋃b)* ‘any number of a’s or b’s’, defined by ¬(<¬(a⋃b)<). Another example of a noncounting language is alternation (ab)*, definable as
(ab)* = ¬(b< ⋃ <aa< ⋃ <bb< ⋃ <a) = (a⋃b)* \ <aa< \ <bb< ⋂ a< ⋂ <b
An example of a nondegenerate star is the language (aa)* of an even number of a’s. Starfree languages are noncounting. A regular event e is noncounting if there is a k such that for all x,y,z in 1*, xy^{k}z ∊ e iff xy^{k+1}z ∊ e (McNaughton/Papert 1971). A noncounting language only allows some maximum number or else any number of repetitions of any factor. Noncounting languages allow expressing “twice”, but not “an even number of times”. They can count up to a constant, but they cannot cycle the counter.
The alternating language (ab)* is star free, while (aa)* “an even number of a’s” is of star height one. Comparing the automata, one can identify the state from its transitions in the former, but not in the latter. Hence the former can be started anywhere in the string, while the latter must run from end to end. The former needs no counter, the latter does.
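The noncounting criterion can be tested mechanically. A hedged check using Python regular expressions (helper names mine): for (aa)* no k can work, since a^k and a^(k+1) always differ in parity, while (ab)* passes the test with k = 2 on a sample of factorisations.

```python
import re

def in_even(w):
    return re.fullmatch("(aa)*", w) is not None   # (aa)*: an even number of a's

def in_alt(w):
    return re.fullmatch("(ab)*", w) is not None   # (ab)*: alternation

# (aa)* violates xy^k z ∈ e iff xy^{k+1} z ∈ e for every k (take x = z = "", y = "a")
for k in range(1, 6):
    assert in_even("a" * k) != in_even("a" * (k + 1))

# (ab)* satisfies the criterion with k = 2 on a small sample of x, y, z
for y in ("ab", "ba", "a", "b"):
    for x in ("", "a", "ab"):
        for z in ("", "b", "ab"):
            assert in_alt(x + y * 2 + z) == in_alt(x + y * 3 + z)
```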
Noncounting languages restrict iteration to event types whose prime factors are of length 1 (Pin 1997:56) or whose cycles are permutation invariant (McNaughton/Papert 1971:§5, van Benthem 1991:§8.3). In terms of group theory, first order definable, starfree or noncounting languages are that subset of regular languages whose automata are aperiodic, i.e. have only trivial cyclic subgroups. A finite automaton is aperiodic (group free) if there is an n such that for all states s and words w, sw^{n} = sw^{n+1}.
The class of noncounting languages is also the closure of locally testable languages under Booleans and concatenation. Membership of a locally testable language can be decided in a finite window. Two words are indistinguishable in a window of width n if they have the same set of factors of length n. A locally testable language cannot distinguish locally indistinguishable words, i.e. if one is in the language then all are. An ntestable language is identical to its ngram approximation. [18]
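Local testability can be sketched directly in terms of n-grams (the helper name is mine): the factors of width n of a word are its n-grams, and words with the same factor set are indistinguishable in a window of width n.

```python
def factors(w, n):
    # the set of length-n factors (n-grams) of word w
    return {w[i:i + n] for i in range(len(w) - n + 1)}

# "abab" and "ababab" share the 2-grams {ab, ba}, so they are
# 2-indistinguishable: a 2-testable language contains both or neither
assert factors("abab", 2) == factors("ababab", 2) == {"ab", "ba"}
```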
The class of all regular languages is the closure of locally testable languages under homomorphism (Medvedev 1964, McNaughton/Papert 1971). The proof of Medvedev’s theorem is quite simple: it is based on the observation that the set of derivations (sequences of compositions of productions) of a finite state automaton considered as a production system (Salomaa 1973:26) is a locally testable language. In other words: the set of traces of an automaton is locally testable, because all nonlocal dependencies are coded into (“remembered by”) the states.
Element b is a concatenation atom or nilpotent if b⋂bb = ∅. Elements a, b are concatenation atoms if they are disjoint from their concatenations, i.e. a⋂a.b = a.b⋂b = b.a⋂a = b ⋂ b.a = ∅.
Element a is concatenation open if a.a ⊆ a and concatenation dense if a ⊆ a.a. An open and dense element a = a.a is concatenation idempotent. A regular algebra is atomless (dense) relative to concatenation if a=b.b for some b for all a in it. Concatenation idempotent event types distribute over meet:
a⋂(b.c) = a.b⋂a.c
States are idempotent. For instance, if it rains through the show, then it rains through all numbers. Atomary (singular closed) events are nilpotent of order 2: trying to do them twice in a row fails.
Relation algebra (Peirce 1870, Tarski 1941, Quine 19??, Suppes 19??, Maddux 1983, van Benthem 1991, 1996, Pratt 1992, Ladkin/Maddux 1993, Marx 2001) is to objects what regular algebra is to events. The formal difference is that relation algebra has converse ~r, also denoted by r^{−1}, while events in real life do not: what is done cannot be undone. Converse is self-inverse, an involution: ~~r = r.
My approach to relation algebra differs from the usual one in that I do not start from sets, but develop relation algebra directly on Boolean algebra. Relations get the type 2^{a´b} of mappings from the cartesian product of two Boolean algebras to the Boolean algebra 2. We can thus consider a relation r as a Boolean function of pairs xy where xy ⊆ a´b. From the Boolean point of view, the notations for pair ab and cartesian product a´b are synonymous (modulo the algebra we are in), for a pair is the cartesian product of two atoms.
Building relation algebra on Boolean algebra, I do not have to distinguish singular and collective instances of relations like touch by type. For instance, touch is a relation between wholes and their parts: when two regions touch, their boundaries, which are smaller regions, touch as well, but the relation need not distribute to all parts.
The relations in 2^{a´b} form a Boolean algebra whose zero is the empty relation ∅ and whose unit 1 is the cartesian product a´b. A binary relation r = arb with domain and codomain object types a and b is a subelement of the cartesian product a´b. Cartesian product is associative, so (a´b)´c = a´(b´c).
Some notations introduced below may look unusual but work out (Marx 2001:694). The equation r = arb = a°r°b means that the domain and codomain of a relation are idempotent on the relation. This convention is in agreement with natural language, where redundant arguments are commonly left out: eat is short for you eat it or somebody eat something, x eat y.
Left and right projections of r are in binary relations equivalently denoted by a´r and r`b, or a//r and r\\b, respectively, and equal the quotients r/b and a\r, thanks to the idempotence r = arb. Since product is associative and relation converse exists, projection to any subset of argument places can be defined.
Think of relation symbols r as infix operators on object types. In particular, identity = and cartesian product ´ are idempotent on object types, so that the relations a=a and a´a both reduce to the object type a, considered as a relation. The unit object type, the domain or universe 1 ‘anything’, is in particular a reduct of both identity = and the universal relation ´.
The identity relation is denoted as usual by = and equals 1=1, 1=x, x=1, x=x, x=y, xx, x^{2} and is the relation algebraic identity ∅* or 1^{0}. The universal relation ´ is the unit 1 of relation algebra, also denoted by 1´1, 1^{2}, 1´x, x´1 (where 1 is the unit of Boolean algebra), x´x, x´y, xy, _´_, etcetera. a´b = a1b = ab because the cartesian product is uniquely determined by its domain and codomain. If x is an absolute variable, xr = x°r and rx = r°x, the difference being that x gets bound to the domain of the relation in the first two expressions and to the codomain in the other two.
A nonrelational absolute object type a can be uniquely retrieved from, hence represented as, a relation: the identity relation in a, designated by a=a, a=x, x=a, a^{2}⋂=, a^{0} and many other variants. For absolute object types a,b, meet and composition coincide: a°b = a⋂b. Note also a = a^{0} = 1´a^{0} = a^{0}`1 = a°a = a^{n}.
Absolute object types a are also in one-one relationship to their cartesian products a^{2} = a´a, equally designated by a⋂´, a⋂1^{2} and many other variants.
An absolute object type a can also be uniquely represented by its test relation a´1 or 1´a, a relation which passes through objects of the given type a and screens out the rest. This idea is applied in dynamic logic (van Benthem 1996:69).
Conversely, relation tests r°1 or 1°r reduce a relation to an absolute object type. For instance, the relation test parent°1 is the property of being a parent (of someone). The converse test 1°parent only fails for Adam. This is a binary version of Tarski’s cylindric operator (see the section on relational algebra). For atoms, all three representations coincide.
The universal relation 1 is a unit of composition for 1°r°1 = 1 iff r > ∅ (van Benthem 1996:66). In general, if r = xry, then arb = a°xr°y°b = (a⋂x)r(y⋂b). In particular a°r°b = r where a and b are the domain and codomain of r. Thus we can consider the event type arb as the composition of a, r and b as well as the meet r⋂a´b. Note also a´a = (a´b)°(b´a).
Identity = is an identity of composition, that is, (=°r) = r = (r°=). Note also 1°r = 1´(r`1) and (1´r)´1 = r°1.
It was noted that absolute object types can be represented as degenerate cases of relations, either as diagonal or as cartesian product. Conversely, relation projections (domain and codomain) 1´r and r`1 can be represented as identity relations as follows:
1´r corresponds to =⋂r°~r
r`1 corresponds to =⋂~r°r
The image of a relation r under a given object type a is represented by the composition (a°r)`1 = ar`1, and the inverse image under b correspondingly by 1´rb. For instance 1´love°animal contains those who love some animal (x: x love animal). It is distinct from (1´love)°animal = (1´love)⋂animal, which contains those which are animals and lovers. (Note that English animal lover can also mean either.) Image is continuous (preserves joins), so if a = b⋃c then a°r = b°r ⋃ c°r. Note the equivalences
(r°s)`1 = (r`1°s)`1
ar`1 = (a^{0}°r)`1 = (a^{0}`1°r)`1
(Kaplan/Kay 1994:342) The field of r is the join of the domain and the codomain, the absolute object type 1´r ⋃ r`1, equivalently its identity relation r^{0} = r⋂=, cartesian product r^{2} = r°~r ⋃ ~r°r, or test relations r´1 and 1´r. The restriction or relativisation of r to field a is r⋂a^{2}. Positive powers of composition are defined by r^{n+1} = r°r^{n}.
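These operators and equivalences can be modelled with sets of pairs. In the sketch below (all data and helper names are illustrative), ° is composition, ~ converse, 1´r and r`1 the domain and codomain projections, and (a°r)`1 the image of a under r:

```python
def compose(r, s):
    return {(x, y) for (x, z) in r for (w, y) in s if z == w}   # r°s

def converse(r):
    return {(y, x) for (x, y) in r}                             # ~r

def dom(r):
    return {x for (x, _) in r}                                  # 1´r

def cod(r):
    return {y for (_, y) in r}                                  # r`1

def image(a, r):
    return {y for (x, y) in r if x in a}                        # (a°r)`1

love = {("al", "cat"), ("bo", "dog"), ("bo", "eve")}
animal = {"cat", "dog"}

# those who love some animal: 1´(love°animal)
assert {x for (x, y) in love if y in animal} == {"al", "bo"}

# the field is the join of domain and codomain
assert dom(love) | cod(love) == {"al", "bo", "cat", "dog", "eve"}

# codomain of a composition: (r°s)`1 = (r`1°s)`1
r = {(1, 2)}; s = {(2, 3), (4, 5)}
assert cod(compose(r, s)) == image(cod(r), s)
```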
Combining the conventions, we can derive the equation r = r⋂a´b = a°r°b. In other words, a relational object type is the meet of itself with the cartesian product of its domain and codomain, and the composition of its domain, itself, and its codomain. For instance,
bite ⋂ dog´man = dog°bite°man
This is the restriction of the relation bite to dogs and men.
When the domain and codomain a and b are elements of Boolean algebras it also makes sense to talk about the dual of a relation, ¬(¬ar¬b). The relation dual of inclusion (all) a⊆b is the opposite noninclusion (not only) b⊄a. This can be compared to its quantifier dual some. The meet of all and some and not only is the default implicature of all. Inclusion ⊆ is a preorder, so ⊆* = ⊆ = ⊂*.
Here is a list of relation algebra operators. The second column indicates definitions of the operators in terms of transduction.
1        xy´:1                   top
=        xx´:x                   identity
r°s      xy´:z´:xrz⋂z´:zsy       composition
~r       yx´:xry                 converse
r\s      ~r°s                    left quotient
r/s      s°~r                    right quotient
r→s      ¬(~r°¬s)                left residual
r¬s      ¬(¬r°~s)                right residual
r*                               composition closure
Composition has a dual called relative sum r+s, defined by x¬r`1 ⋂ 1´¬sy = ∅, whose left and right adjoints are the relation quotients r\s and r/s defined by ~r°s and s°~r. They obey s ⊆ r°r\s and r ⊆ r°s/s respectively. For instance, after a wrong left turn, one way to go right is to back up and turn right.
Relation residuals, defined by the equations
r¬s = ¬(¬r°~s)        r→s = ¬(~r°¬s)
are (right/left) adjoint to composition (Pratt 1997). Composition and its variants are characterised by Boolean relations between codomain and domain:
xr°sy ↔ xr`1⋂1´sy > ∅
xr→sy ↔ xr`1⊆1´sy
xr¬sy ↔ xr`1 ⊇ 1´sy
Residual and composition license a modus ponens (cut, cancellation, application):
r°(r→s) ⊆ s
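This cut law can be checked by brute force on a small finite universe. The sketch below (data mine) computes the left residual r→s = ¬(~r°¬s) directly from its definition and verifies the law:

```python
U = {0, 1, 2}
top = {(x, y) for x in U for y in U}           # the universal relation 1

def compose(r, s):
    return {(x, y) for (x, z) in r for (w, y) in s if z == w}

def conv(r):
    return {(y, x) for (x, y) in r}

def neg(r):
    return top - r

def residual(r, s):
    # r→s = ¬(~r°¬s): x(r→s)y iff for all z, z r x implies z s y
    return neg(compose(conv(r), neg(s)))

r = {(0, 1), (1, 2)}
s = {(0, 1), (0, 2), (1, 2)}
assert compose(r, residual(r, s)) <= s         # modus ponens: r°(r→s) ⊆ s
```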
An example of the right residual ¬ is the relation do¬want between x and y where x does whatever is wanted by y. Thus do¬want is obey, and its converse ~(do¬want) is power. Power over opposites is control. The reflexive subset of that, by the Aristotelian definition, is freedom: a free agent does precisely what he wants, that is, obeys himself, or controls himself. (This does not have to prevent him from doing what others want, if they want what he wants.)
An example of the left residual is the relation enemy between them and us, they hurt→help us: whatever hurts them helps us. This relation holds symmetrically between the players of a zero-sum game, where in general I oppose you: whatever I want you don’t, and vice versa: I ~want→¬want you. The opposite relation help→help holds between aims p and means q. The Latin proverb quae nocent docent is the reflexive left residual x hurt→teach x.
The dual of image and inverse image, respectively, is right and left residual a¬r`1 and 1´r→b. For instance, the residual 1´love→man contains those who love only men, or x: x love y → y man. This relation will reappear in connection with modalities. The relation love→self is loving oneself only. The relation property of almost reflexivity (quasireflexivity in van Benthem 1986) xry → xrx may be characterised by the left residual r→r^{0}.
Residual with respect to inclusion gives another idiom for agent and object nominalisations: eater is whoever eats, 1´eat or r→⊆, and food is whatever is eaten, eat`1 or 1´~eat or ~eat→⊆ or ⊇¬r.
Further relation properties can be characterised by considering instead of the usual angelic composition a dual of it, demonic composition r·s (Bergstra/Stefanescu 200?). It is defined by
x r·s z iff xry ↔ ysz.
(Asymmetric cases where ↔ is replaced by ¬ or→ are called backward and forward demonic composition, respectively.) For instance, the demonic composition of a partial order with itself produces a lattice, and that of an equivalence relation (undirected graph) defines its equivalence classes.
Relation application is composition plus projection; thus for instance my parents is I°parent`_. The relation grandfather is faithfully represented by the relation algebraic formula father^{2} = (parent⋂male_)^{2} = parent^{2}⋂(male_)^{2}.
Relation algebra can be axiomatised with Booleans, relation composition (alternatively, cartesian product plus selection) and projection as a minimal base. The equational theory of relation algebra is finitely axiomatisable but undecidable (Pratt 1990, van Benthem 1996). It corresponds to the threevariable fragment of first order logic (Tarski/Givant 1987, van Benthem 1996:69). Compare also Kamp’s theorem.
Binary relation algebra is thus a proper subset of first order logic. With only three variables, one cannot express relations among four objects (Marx 2001). Relation algebra extended with variables regains the power of first order logic. For example the fourway relation sibling of two children sharing two parents is represented in relation algebra with variables as
≠⋂(~parent°x°parent°~parent°(x°≠⋂parent))
Putting it into English, one’s full sibling is another child of one’s parent’s child’s other parent, or
sibling = other child of parent of child of other parent
This is not the way we usually say it, given the much simpler plural idiom siblings share parents.
Relation algebra with complete join or Kleene star (Jónsson/Tarski 1951) includes regular algebra as a special case. (In fact, regular algebra is complete relation algebra without conversion.) Accordingly, its expressive power goes to second order. Recursive relations like ancestor become definable as parent^{+}.
The composition closure has the usual properties of Kleene star, including
r⋃r* = r* = r**
(r⋃s)* = (r*.s*)*
Regular algebra can be represented in complete relation algebra by formalising events as regular relations instead of regular languages. A change is a relation between two states or courses of events. Let each event e be represented by the transducer xe:x which continues a course of events x with event e. The transduction relation defines a binary relation between courses of events. The translation defines an isomorphic embedding between relation algebra and regular algebra which preserves Boolean and regular operations (van Benthem 1991:242).
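The embedding can be sketched with courses of events as strings and each event e as the function continuing a course with e; composition of these transducers then represents concatenation of events. The event names are invented for illustration.

```python
def event(e):
    # the transducer xe:x, continuing a course of events x with event e
    return lambda course: course + e

def then(f, g):
    # composition of transducers models concatenation of events
    return lambda course: g(f(course))

rain, snow = event("r"), event("s")
```

Running the composed transducer on the empty course of events yields the concatenated course, as the isomorphism requires.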
Relation algebra can be extended to many-place relations by letting the cartesian product of relations iterate. Cartesian product of relations r´s is associative. A many-place relation rxyz can be considered a two-place relation between any partition rx(yz), r(xy)z into domain and codomain object types. (This is currying for the monoidal category of relations.)
Codd’s (1972) n-place relation algebra, or relational algebra, rearranges the interrelationships between relation composition, concatenation, cartesian product, and meet. It singles out composition (a special case of relation join) from cartesian product using selection (meet) and projection: select x,z from r,s where r.y = s.y. Thus composition ° is defined in terms of cartesian product, identity, meet and projection. Borrowing transducer notation, if r = arb and s = csd, then
r°s = ad`:(r´s ⋂ a(b=c)d)
which is the relation obtained by projecting the domain a and codomain d columns out of the result of selecting from the cartesian product of r and s the rows where codomain b of r and domain c of s agree. In particular ar1⋂1sb = a(r⋂s)b and ar1⋂1rb = arb. To hit somebody and for somebody to get hurt in one go is to hit and hurt somebody. The two-place relation hit is the meet of the one-place relations be hitting and be hit.
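The select-then-project recipe can be sketched directly on finite relations; the relations r and s below are invented for the test, and the result agrees with ordinary relation composition.

```python
def codd_compose(r, s):
    # select x, z from r, s where r.y = s.y:
    # cartesian product, then selection (meet), then projection
    product = {(a, b, c, d) for (a, b) in r for (c, d) in s}
    selected = {t for t in product if t[1] == t[2]}   # codomain of r = domain of s
    return {(t[0], t[3]) for t in selected}           # keep columns a and d

r = {(1, 2), (1, 3)}
s = {(2, 4), (3, 4), (5, 6)}
```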
Another algebra of n-place relations is Tarski’s cylindric algebra (Henkin et al. 1971, 1985). It contains Booleans, the identity (diagonal) relation x=y and a relational operation of cylindrification, in effect existential quantification. Cylindrification cyr is the n-ary counterpart of the test relation described in the previous section. It can be defined as
cyr = rx1z: rxyz>∅
For instance, binary relation conversion is captured in cylindric algebra by representing a binary relation rxy as a cylindrified three-place one czxryz and the converse relation as cy(y=x⋂cx(x=z⋂xryz)). This is natural language passive: first copy subject x to a free oblique position z, then cylindrify (delete) subject x, then copy object y to subject and delete object.
Cylindric algebra, like Codd’s relational algebra, defines all first-order definable relations (Van den Bussche 200?).
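Cylindrification itself has a compact finite-model sketch: existentially quantifying out a column of an n-place relation and refilling it with every element of the universe. The three-place relation and universe below are invented for illustration.

```python
def cylindrify(r, i, universe):
    # c_i r: existentially quantify out column i of an n-place relation,
    # refilling it with every element of the universe
    return {t[:i] + (v,) + t[i + 1:] for t in r for v in universe}

U = {1, 2}
r3 = {(1, 1, 2)}
```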
Natural language is good at packing relations. She gave me of the tree and I did eat is the three-place composite relation x give y for z to eat of the relations give and eat. We shall sort out how this sort of event type is built up from binary relations in the style of eve cause apple go adam go eat.
Relational algebra gives a new perspective on the interdefinability of order relations and choice functions. For instance, the animacy hierarchy says human<animal<object. This formula can be interpreted as a cartesian product of the three object types. At any point, it can be split into two sides, so it matches the relation algebra expression animate.¬animate. From a regular algebra point of view, the comparative concept animate represents a prefix and ¬animate a suffix of the order relation.
In the choice set of the man Bill and his donkey Sue animate assigns Bill to the animate end of the hierarchy and Sue to the inanimate end. Formally:
animate. ¬animate ⋂ human.animal = human.animal
Relational algebra and regular algebra are related by the equation
regular algebra = relational algebra + composition closure – converse
Function application can be defined in relation algebra. Consider a one-place function in postfix notation xf = y, or (in the more usual prefix notation) y = fx as a two-place relation xfy. In prefix notation y = fx denotes the codomain xf`y = xf`1 of the relation f. It is the same as (x_⋂f)\\_, or the suffix of the meet of f with the cartesian product of x with the universe. The match is better if relations and functions are written in the same sense xf = y. I use both notations when need arises, so yf=x equals y = f^{−1}x.
A many-many relation afc ∧ afd ∧ bfc ∧ bfd can be represented one-one by a meet of joins of equations (af = c ∨ af = d) ∧ (bf = c ∨ bf = d), enumerating the functions which are included in the relation. It can equally be represented as the Boolean relation (a⋃b)f(c⋃d) which equals the equations (a⋃b)f = (c⋃d) and (a⋃b) = f^{−1}(c⋃d), or by one-way distributive variants af = (c⋃d) ∧ bf = (c⋃d) (Salomaa/Yu 2000).
In this section I consider the algebra of functions h: A→B from one Boolean algebra A to another B. Such a function h is a Boolean morphism if it preserves Booleans, among them
h∅=∅ h1 = 1 zero and unit
h(x⋃y) =hx⋃hy join
h(x⋂y) =hx⋂hy meet
h1\hx = h(1\x) complement
(x ⊆ y) ⊆ (hx ⊆ hy) inclusion
For instance, the Boolean quotient a⋂b of a with b is a Boolean morphism.
For another example, an atomic object type like I is dually represented by an atomic Boolean morphism from people to 2 which returns 1 for me and ∅ for others. The algebra of Boolean morphisms in b^{a} is a Boolean algebra with the constant function ∅ as zero, the constant function 1 as unit, and the Boolean morphisms of atoms as a base.
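The morphism conditions can be checked mechanically on a small powerset algebra. In this sketch (all names invented), h is the direct-image map of a function on atoms: a bijection on atoms passes all the conditions, while a map collapsing two atoms fails meet and complement.

```python
from itertools import combinations

def subsets(A):
    xs = list(A)
    return [frozenset(c) for n in range(len(xs) + 1) for c in combinations(xs, n)]

def is_boolean_morphism(h, A):
    # check the join, meet and complement conditions over all subsets of A
    one = frozenset(A)
    return all(h(x | y) == h(x) | h(y) and
               h(x & y) == h(x) & h(y) and
               h(one - x) == h(one) - h(x)
               for x in subsets(A) for y in subsets(A))

def image(f):
    # the direct-image map of a function between atoms
    return lambda x: frozenset(f[e] for e in x)

swap = image({"a": "b", "b": "a"})    # a bijection on atoms
merge = image({"a": "c", "b": "c"})   # collapses two atoms
```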
Many mappings are not Boolean morphisms. Distribution properties are systematically related to quantifier profile. For instance touch is a topological (existential-universal) predicate which does not distribute over arbitrary meets. Objects can touch without all of their parts touching. Weaker conditions may hold. A filter preserves meets and inclusions, an ideal preserves joins and subelements (van Benthem 1986:52).
hx ⋃ hy ⊆ h(x⋃y)
h(x⋂y) ⊆ hx ⋂ hy
h(¬x) ⊆ ¬hx
If h is an endomorphism, conditions like the following can be considered.
hx ⊆ x
hhx ⊆ hx
A necessary and sufficient condition for distributivity is that the mapping be a positive Boolean morphism, continuous on joins and meets and monotone on inclusion (Jónsson/Tarski 1951, Keenan/Faltz 1985, van Benthem 1991). Most mappings do not preserve complements, for instance, parents of men do not exclude parents of women. An injective morphism preserves disjoint joins and complements; for instance men’s heads are not women’s heads.
A binary relation r is a Boolean function from pairs ab to 2. The class of Boolean functions from one Boolean domain to another is much larger than the class 2^{ab} of binary relations in it. Distributivity conditions reduce the former to the latter, letting Boolean functions represent, or reveal, relations.
A binary relation r can also be viewed as a function that maps domain to codomain by the projection or (co)image function ry = 1´ry = x: xry, for instance the teachers x of a class of schoolchildren y, or xs = xs´1 = y: xsy, for instance the students y taught by a team of teachers x. This is the natural language process of nominalisation by which the verb x parent y goes to the nominalisation x be parent of y.
Constraints on functions reveal interesting special cases of relations. Here is a short list:
x ⊆ rx (reflexive)
rrx ⊆ rx (transitive)
r¬rx ⊆ x (symmetric)
More correspondences have been identified in choice function theory (Fishburn 1977, cf. dyadic conditional modal logic, Hansson 1969, Chellas 1975, Spohn 1975, van Benthem 19??, Carlson 1994). A choice function is a Boolean function whose values are subsets of the same set:
sx ⊆ x (choice function)
A binary comparative relation is revealed by a choice function which selects from any set of options those that are good, better, or best (Fishburn 1977). Alternative statements for conditions on choice functions include
sx = ∅ only if x = ∅ (serial)
s(sx ⋃ sy) = s(x⋃y) (transitive)
if x ⊆ y\sy then s(y\x) = sy (transitive, aka path independence)
if x ⊆ y then s(y\x) = sy\x (cotransitive)
Setting ¬s for s in the last condition shows it to be a dual of the previous one. Combining all conditions gives an equivalence relation, which reduces choice function to meet, so that sx = p⋂x for some type p. (This is the defining property of an absolute adjective, Kamp 1975, Klein 1982, van Benthem 1991.) Properties of choice functions correspond to axioms of basic conditional logic (van Benthem 1986:94).
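The absolute-adjective case sx = p⋂x can be sketched and its choice-function properties checked on a toy domain (the types people and tall_ones are invented for the test; path independence holds for any such meet-defined choice function).

```python
people = frozenset({"ann", "bob", "cid"})
tall_ones = frozenset({"ann", "bob"})   # the fixed type p (invented)

def absolute(p):
    # the absolute-adjective case: s x = p ⋂ x
    return lambda x: p & x

tall = absolute(tall_ones)
```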
One application of Boolean functions is (algebraic) modal logic. Here may and must are oneplace functions on a Boolean algebra satisfying the conditions
may ∅ = ∅ must 1 = 1
may (p ⋃ q) = may p ⋃ may q must (p ⋂ q) = must p ⋂ must q
may p = 1 \ must (1\ p) must p = 1 \ may (1\ p)
Then there exists a unique binary relation r on 1 so that may p is the relation image 1´rp and must p is its dual, the residual 1´r→p. The following conditions are equivalent (Jónsson/Tarski 1951, Hughes/Cresswell 1968:§17, Bull/Segerberg 1983:11):
p ⊆ may p and must p ⊆ p (r is reflexive)
must p ⋃ q = 1 iff p ⋃ must q = 1, and may p ⋂ q = ∅ iff p ⋂ may q = ∅ (r is symmetric)
may may p ⊆ may p and must p ⊆ must must p (r is transitive)
Combinations of these conditions define well known modal logics. With no conditions we get the minimal normal modal logic K. Reflexivity gives T. S4 matches preorders. S5 matches equivalence relations. Symmetry is also characterised by the Brouwerian B axiom may must p ⊆ p or its dual p ⊆ must may p.
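These correspondences can be checked on a small frame, with may as relational image and must as its dual. The frame below (a reflexive, transitive order, hence an S4 frame) and the proposition p are invented for the test.

```python
def may(r, p, W):
    # relational image: x satisfies may p iff some r-successor of x is in p
    return {x for x in W if any((x, y) in r and y in p for y in W)}

def must(r, p, W):
    # the dual: x satisfies must p iff every r-successor of x is in p
    return {x for x in W if all(y in p for y in W if (x, y) in r)}

W = {1, 2, 3}
r = {(x, y) for x in W for y in W if x <= y}   # reflexive and transitive
p = {2, 3}
```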
The weaker the logic, the more iterated modalities can be distinguished. K and T distinguish all (Hughes/Cresswell 1968:70). S4 distinguishes fourteen (sixteen counting p and its negation). These are all the alternating sequences of at most three modals in may and must. Other sequences reduce due to the idempotence of the modalities may, must, may must and must may.[19] S5 only distinguishes two: may p and must p (six if we count in p and the negations). In a sequence of modals, only the innermost counts (Hughes/Cresswell 1968:48).
Given reflexivity, must defines a morphism of event types. The algebra of must is not closed under joins or complements so it does not form a Boolean algebra.
must p ⋃ must q ⊆ must (p ⋃ q) but not vice versa
must (1\ p) ⊆ 1 \ must p but not vice versa
Knowledge can be represented as a reflexive modality (Hintikka 19??). The modal logic S4 of partial order forms the intuitionistic Heyting algebra (Gödel 1933). This makes formal sense of the idea that what one knows is a partial image of what happens. (Cf. information set in game theory.)
Choice functions also allow characterising weaker modal logics than K. Classical modal logic only preserves equivalence. Regular modalities preserve inclusion but do not distribute over join or meet. The dynamic modalities able/happen are an example. Their logic matches that of the S4 existential-universal modalities may must and must may, respectively (Segerberg 1971, Carlson 1994).
The difference between relation algebra and modal logic by Boolean function algebras is one of perspective. It is the same classical logic in different wrappers. (van Benthem 1996).
Modal logic can be extended with variables and variable binding operators on indices (hybrid modal logic, Marx 2001). The idea of bindable modalities has been repeatedly introduced in tense logic, modal logic and relation algebra (Rescher/Urquhart 1971, Vlach 1973, Blackburn 2000, Marx 2001). It is available in event calculus as well. For instance, transitivity of time is expressible in hybrid tense logic as
fut (fut (then (now (fut then))))
which says that two hops to the future can also be made by one hop. But the statement of the same fact in regular events looks a lot simpler.
<< = <
A general calculus of functions is lambda calculus or combinatory logic, of which more later.
What I call transduction algebra is in turn a relational algebra of regular algebras (or the other way round). Formally, it is no different from relation algebra or regular algebra, all the operators of the former two have natural equivalents here. The transduction product : is a notational variant of concatenation or (cartesian) relation product.
The left side of the transduction relation represents the input language of the transducer and the right side its output language — or the other way round, the direction is a matter of perspective and grammatical choice. Variables x range over event tokens in a shared alphabet between the input and output languages, so that they stay fixed in the transduction.[20]
In this section, I denote transduction composition by whitespace. The left and right projections of a transduction will be denoted as a´:b and a:`b. If r = x:y is a transduction, then a´r = 1´(a r) and b`r = (r b)`1 are its left and right images (Kaplan/Kay 1994:340). The abstraction operator ´: relates to transduction as projection relates to product, i.e. abstraction and transduction are adjoint.
As a notational variant of concatenation or relation product, transduction product : is associative, so e:(f:g) = (e:f):g. An event type e:f where e and f share no variables denotes the free cartesian product of the event types e and f. Shared variables may constrain e:f. For instance, e.f:f.e represents the permutation of event types e and f. Meet and join distribute over : componentwise:
(e:f) ⋂ (g:h) = (e⋂g) : (f⋂h)
(e:f) ⋃ (g:h) = (e⋃g) : (f⋃h)
Hence the transducer e:f is equivalently represented by the meet e:1⋂1:f.
Concatenation distributes componentwise as well: a concatenation of relations r.s = e:f.g:h between event types is equivalent to a relation of concatenations of event types eg:fh.
(e:f).(g:h) = eg:fh
(e_{11}:…:e_{1n})…(e_{m1}:…:e_{mn}) = (e_{11}…e_{m1}):…:(e_{1n}…e_{mn})
The latter equation represents m-way concatenation of n-tuples of events (Kaplan/Kay 1994:338).
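The componentwise law (e:f).(g:h) = eg:fh can be checked on string-pair relations; the singleton relations below are invented for the test.

```python
def concat(r, s):
    # (e:f).(g:h) = eg:fh on string-pair relations
    return {(a + c, b + d) for (a, b) in r for (c, d) in s}

ef = {("e", "f")}
gh = {("g", "h")}
```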
The identity transducer x:x is not the same as the forgetful transducer 1:1 which stands for the universal relation. ∅:1 and 1:∅ are final and initial elements for composition, respectively. Note also the transducers e:1 or e:x. The two are equivalent and denote the constant mapping to e, so e:1 f = e. Composition with e⋂x:x equals meet with e, i.e. e⋂x:x f = e⋂f. It thus codes the grammar and semantics of an intersective premodifier. The complement transducer ¬x:x is characterised by ¬x:x e = ¬e. An event type e is equivalently represented by the identity transducer e:e (e:1⋂x:x or 1:e⋂x:x) which maps e to itself and fails for other inputs.
I use ° or whitespace to represent composition of transducers. The class of transducers is a concrete category with the class of event types as objects and transducers as arrows. In fact, transducers form a monoid under composition with the identity transducer x:x as the composition identity. Transducers are arrows for events and objects for composition. Events are dual to transducers; transduction (abstraction) obeys the equation
e:f ° f = e
and the commutative diagram
Figure 1 
Composition is characterised by
(e:f) ° (g:h) = e:(f⋂g) ° (f⋂g):h = (e:h):(f⋂g)
from which follows the cancellation law
e:f ° f:g = (e:g):f
and further, the inverse law
e:f ° f:e = (e:e):f
Another principle governing transduction and composition is
(e:f) ° (g:h) = (e:h):(f:g) = (e:h):(e:f):(h:g)
In particular, then, we get for composition
(e:f) ° (f:g) = (e:g):(f:f) = (e:g):(e:f):(g:f)
and inverse
(e:f) ° (f:e) = (e:e):(f:f) = (e:e):(e:f):(e:f)
The following connection between the abstraction and transduction interpretations of : is obtained from the above.
x:y e = x:y e:e = x:(y⋂e) (y⋂e):e = x:y⋂e e:e = x:y⋂e
Meet and composition (both fibred products) come down to the same when the law constraining composition is vacuous.
For instance, axb:cxd applied to aeb where a and b are concatenation atoms gives ced, for aeb = 1:aeb, and composition 1:aeb ° axb:cxd yields 1:(aeb⋂axb) ° (aeb⋂axb):cxd = 1:aeb ° aeb:ced = 1:ced = ced.
Varieties of transduction are graph rewriting and tree transduction (Comon et al. 2002). In general, transductions with variables are not linear or regular. A simple example is x:xx, which represents the copy combinator, or copy language. The restriction of tree transduction to linear terms with one occurrence of each variable per event type is regular (Comon et al. 2002).
Regular (rational, finite state) relations (transducers, transductions) are a generalisation of regular languages to relations (Kaplan/Kay 1994, Roche/Schabes 19??, Karttunen/Beesley 200?). They are finite automata with input and output that map regular languages to others, i.e. represent string relations. More specifically, regular transductions are the closure of atomic transductions of form a:b and ∅*:b under inverses and the regular operations. In other words again, they are regular languages over the alphabet of pairs of atoms or ∅* (Roche/Schabes 1997). The class of regular languages is closed under finite transduction (Rozenberg and Salomaa 1997:88), and so is the class of regular relations. Regular relations generalise straightforwardly to n-place relations, with regular languages as one-place (Kaplan/Kay 1994).
The class of regular transductions is closed under concatenation, inverse, union and composition, but not under complementation or meet if the languages include ∅* (Rozenberg/Salomaa 1997, Roche/Schabes 1997). A simple counterexample is the event type
(a:b)*(∅*:c*) ⋂ (∅*:b*)(a:c)* =
(a:b.c*)* ⋂ (a:b*.c)* =
a^{n}:b^{n}c^{n}
which transduces any number of a’s into an equal number of b’s followed by an equal number of c’s (Kaplan/Kay 1994).
Any binary regular relation r is of form r = e:f for some regular expressions with variables e,f. There is a rewrite of r in the form of a regular expression over the pair alphabet (1^{0}+∅* : 1^{0}+∅*)* (a path language for r, Kaplan/Kay 1994:343). r is a same-length relation iff it has a rewrite in the alphabet (1^{0}:1^{0})*. Same-length relations are closed under meet and complement (Kaplan/Kay 1994:343) so they are regular languages over an alphabet of pairs. So are relations with an upper bound to the differences of the lengths of strings in the relation. Any regular relation is also equivalent to a regular language and a pair of monoid morphisms from it to the input and output (Comon et al. 2002:174).
More interestingly, star-free regular relations are closed under meet and complement. This should follow from the morphism between regular relations and relational algebra, which is closed under complement. A rational relation is star-free if and only if it is aperiodic and deterministic. Star-freeness is decidable for deterministic rational relations but undecidable for nondeterministic ones (Madona/Varricchio 1994).
Events which go backwards in time can be represented by formalising events as regular relations instead of regular languages. Here is one way: let event e be represented by the transducer xe:x which suffixes a course of events x with event e. It is easy to see that this translation preserves the properties of concatenation, now represented by composition of transducers. Define the inverse event e^{−1} of event xe:x as the inverse transducer x:ex. It is easy to see that ee^{−1}x = e^{−1}ex = x. Compare also two-way finite automata (Hopcroft/Ullman 1979:36).
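The event/inverse-event pair can be sketched with courses of events as strings: the event appends, its inverse removes what was just appended, so the two cancel out. This is an illustrative sketch; the partiality of the inverse (it is only defined right after e) is made explicit by an assertion.

```python
def ev(e):
    # event e as the transducer suffixing a course of events with e
    return lambda x: x + e

def inv(e):
    # the inverse event: undoes a final e, moving time backward
    def undo(x):
        assert x.endswith(e), "inverse event only defined right after e"
        return x[: len(x) - len(e)]
    return undo
```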
Inverse events which go backward in time undoing real events provide a denotation to after as the inverse of before. Under this extension, > is just the event type <^{−1}. Imaginary events have a use in counterfactuals. They help make formal sense of the metaphor between past and irrealis: adding an event moves time forward, while adding the inverse of an event (subtracting an event) moves time backward. This allows interpreting past future tense >< literally as the replacement of one course of events with another.
Aspect and diathesis operators can be construed as transducers on event types. Since transducers form a monoid under composition, optionality and iteration make sense on event type transducers. For instance, pf? e can be defined as pf e ⋃ e:e, and pf^{+} e as pf pf^{+} e ⋃ pf e. In fact, we can define a nested context-free event type in this manner as (¬ax¬a:x)^{+} a. It is the fixpoint of the recursive event type equation x = ¬ax¬a ⋃ a.
The abstraction operator x´:y is the left projection of the transduction operator (:). It is a variable binding type abstraction (lambda) operator resembling a natural language relative clause x´:e ‘the x such that e’. Strictly x:e denotes a relation between types and x´:e its left domain. In practice, I leave out the projection operator ´ from x´:e much of the time.
The abstraction operator allows denoting the lefthand type of events that satisfy the righthand type. Or vice versa; the direction is a choice of grammar. The right to left reading for the operator is more natural for headinitial prefix languages like English. Sometimes it is more iconic to work left to right as in postfix or suffix languages (see section on transduction).
In lambda calculus, λx.e denotes a function which returns whatever e denotes for any x. The application λx.e x denotes the same as e. If e is a truth function, λx.e x denotes true or false according as x satisfies e, so λx.e can be dually construed as denoting the set e is the characteristic function of. Event type abstraction x:e on the reading just suggested is an inverse dual of lambda. It denotes a subtype of x, not e. It need not be applied to anything to do so, for it will turn out equivalent to the composition x´:e e.
With the abstraction operator we can define projections on an event, the prefix and suffix e//f = e:(ex⋂f) and e\\f = f:(e⋂xf) which single out a typed beginning and end of an event, respectively. The double slash distinguishes these operators from the left and right quotient operators e\f = x:ex⋂f and e/f = x:e⋂xf (Salomaa 1973), which subtract a typed beginning and end, respectively.[21] An untyped beginning of an event e can be written as e/< or <//e and the end as <\e or e\\<. The complement of the prefix <_\e contains medial and final events x: e⋂(_<x<) or in e\<//e. Proper medial subevents are in <_\e/_< or in e\<//e⋃e\\<. (For in see section on ideals and filters.)
Further useful shorthands are Perl style left and right abstraction operators e´f = e:ef and e`f = f:ef for events immediately preceding and following given events.
Parentheses bind tightest, then comes dotless concatenation. Beyond that, one place operators bind tighter than twoplace ones. Among operators of the same arity, other operators bind tighter than Booleans. Of the unary operators, the (relatively) basic ones bind more tightly than defined ones, and ones abbreviating intersection less tightly than the others.
The abstraction operator allows two event types to be equivalent (entail one another) but denote different things, for instance the projection and quotient operators are equivalent to ef but denote various parts of it. We might call the part denoted by the bound variable x focus (foreground) and the rest presupposition (background)[22]. The abstraction operator also allows representing disconnected event types and partially ordered courses of events.
An important insight captured in this formalism is that events combine by concatenation and meet, both of which are associative operations. Associativity provides for the categorial ambiguity or polymorphism characteristic of natural language.
Meet is essential in that it allows the event calculus to make sense of superposition of events in addition to event concatenation and composition. Superposition allows us to do justice to the redundancy of natural language coding of events. (Unification in situation semantics is another instance of superposition, Cooper 19??.) There is another duality here between an atomic, compositional view on events and an atomless, superpositional one.
Altogether there are at least four ways to compose events: superposition, concatenation, composition and substitution, formalised by Boolean algebra, regular algebra, relation algebra, and transduction algebra, respectively.
A substitution e(x\y) of y for x in e can in general be decomposed into addition of y and deletion of x. Conversely, addition can be defined as substitution for identity and deletion as substitution by identity.
Addition of y to x gives xy, which followed by deletion of x leaves over y. Function application defined through relation product and projection is also an instance of substitution, and vice versa. They are the same thing under different descriptions.
A monoid morphism is a function (substitution) which preserves concatenation: x' = y, z' = z otherwise, and (xy)' = x'y'. The last clause is the distribution law which extends the morphism from atoms to concatenations. Distribution is the algebraic law of a morphism.
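On strings of one-letter atoms, such a morphism is exactly a letter-for-letter substitution; defining it atom by atom and letting it distribute over concatenation gives the sketch below (the atom names are invented).

```python
def subst(term, x, y):
    # the substitution (x\y) as a monoid morphism on strings of atoms:
    # atom x goes to y, every other atom to itself, and the map
    # distributes over concatenation
    return "".join(y if atom == x else atom for atom in term)
```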
Proper substitution of values to variables as defined in logic combines the following two ideas:
(i) substitution is not a string (monoid) morphism, but a tree morphism which preserves both horizontal order (concatenation) and vertical order (dominance). It is defined inductively by complexity of formula, not just on atoms and concatenation.
(ii) The substitution must be proper, so that free variables do not get bound. A variable gets bound when two variables which were different become the same. In terms of morphisms, the substitution is one-one on variables. The condition that the mapping is one-one is a higher-order statement. What it entails locally is the equation x = 1 in the following definition:
x.(x\y) = x\x.y = ∅*.y = y (substitution consists of addition and deletion)
z.(x\y) = z(1\1) = z (x = 1 when e is linear in x)
(x:e).(x\y) = x(x\y):(e(x\y)) = y:(e(x\y)) (if not empty)
(x:e).(x\y) = x:e (otherwise)
Here x:e and y:(e(x\y)) are alphabetic variants if the latter is not empty.
The key part of condition (ii) is the principle x = 1 applied in the second clause of the above definition. It holds for linear terms which contain only one occurrence of variable x (Comon et al. 2002:13). The equality fails for nonlinear event types which contain shared variables. Although xx ⊆ 1 is true, the converse does not hold. As a result a proper substitution fails to cancel out if the substituend contains the substitution value. For instance,
(xy)(x\wyz) = (x.(x\wyz)).(y.(x\wyz)) = (x\x)wyz . y.(x\wyz) = wyz . y(x\wyz)
There is no way to get rid of the remaining substitution term when it shares variables with the substituend (Curry/Feys 1958:94, Stoy 1977:62).
The notion of substitution has wide application. Any mapping, for instance a change of place, is a substitution in this generalised sense. Multiplication of an element a of a field by the fraction b/a, or adding the term b−a, carries out a substitution of a with b. The replacement operator of Karttunen (1994, 1995) defines regular substitution. Except thus codes a Boolean substitution or replacement.
The key algebraic notions function, substitution, morphism, variable, abstraction and combinator are interdefinable (Curry/Feys 1958:86, Barr/Wells 2002). Quantification involves Boolean inequality and binding. Monadic quantification does not need variables, it is captured by Boolean (in)equalities. The notion of Boolean unit already involves quantification, being a closure.
Binding entails coreference, shared variables denote the same. Binding is related to substitution for xrx > ∅ means xrx has a nonempty substitution instance. Variables and binding are also interchangeable with combinators, which do the copying and deletion needed for substitution.
Variables involve morphisms. Substitutions are morphisms, and proper substitution characterises variables. 1r1 is of the same form as xrx; the difference is proper substitution. Each occurrence of the Boolean unit 1 is freely substituted for by the inequality a ⊆ 1, which expresses existential generalisation. xrx is instantiated by doing proper substitution, i.e. xrx.(x\a) = x(x\a).r(x\a).x(x\a) = ara. This is a morphism. It is also a transduction, for the substitution (x\a) is a notational variant of the transduction x:a.
Combinatory logic (Schönfinkel 1924, Curry/Feys 1958, Hindley/Seldin 1986, Quine 1966, 1971, van Benthem 1986:59, 1991) is a variant of functional (lambda) calculus which eliminates variables and parentheses in favor of operators called combinators. Combinators operate on left associative sequences of objects. The result of an application of a combinator is a sequence made out of some of the objects it combines (possibly with repetitions). Combinatory logic thus replaces variables and parentheses with operators in Polish notation.
Combinators can be classified by what they do to arguments. An identity is just a placeholder that does nothing. A composition adds or removes parentheses. A variation copies or permutes arguments. Specifically, a permutation permutes arguments, a duplication repeats some arguments, and a cancellation (or projection) selects some arguments and drops others.
One fascination of combinators is that they open an algebraic perspective on grammatical operations such as deletion, copying, permutation, and distribution of symbols. This helps recognise and exploit analogies between differently interpreted, but structurally similar domains.
The best known combinators include the following.

     translation    axiom           associated algebraic ideas
I    x:x            Ix = x          identity
K    x:xy           Kxy = x         left projection, right cancellation
C    xzy:xyz        Cxyz = xzy      inversion, permutation, commutativity
W    xyy:xy         Wxy = xyy       duplication, copying, idempotence
B    x(yz):xyz      Bxyz = x(yz)    composition, associativity
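The axioms in the table can be realised directly as curried functions; the sample function pair is invented to make the permutation and duplication axioms visible in the output.

```python
# curried versions of the combinators in the table
I = lambda x: x
K = lambda x: lambda y: x
C = lambda f: lambda y: lambda z: f(z)(y)
W = lambda f: lambda x: f(x)(x)
B = lambda f: lambda g: lambda x: f(g(x))

pair = lambda a: lambda b: (a, b)   # a sample curried function
```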
Combinators have been proposed as a way of systematising observations about natural language diathesis (Curry/Feys 1958:§8S2, van Benthem 1991:47,128). This idea will be pursued below. For more details on combinatory logic, see Appendix.
Choose a set of dimensions, say, event type, participants, time and place and construe events as a product space of the dimensions each considered as Boolean algebra:
event ⊆ type ´ objects ´ time ´ place
This representation opens up a way of looking at events as a linear algebra or vector space. The representation of events in time through regular expressions is a onedimensional projection of this idea onto time. Any partition of the space into two sets of dimensions mapped to a third can be interpreted as a Chu space (Pratt 1997, 1999).
The Boolean product space of events can also be considered a linear algebra or vector space with scalars in 2.
Boolean operations in the product algebra are taken componentwise. For instance, rain⋂yesterday abbreviates the componentwise meet of a vector whose type field is rain and all other fields 1 and another whose time field is yesterday and other fields 1. The former denotes the event type rain which includes all tokens of rain, the latter the event type yesterday all tokens yesterday. This is the duality of vector spaces and linear algebra.
rain yesterday = rain_ ⋂ _yesterday
The duality of vector spaces and linear equations is one of the key ideas here. It also explains why it is that natural language so freely equivocates between Boolean and and concatenation and then. See appendix.
There is work for group theory in this as well. There is a group theoretic duality between cyclicity and iteration (MacLane/Birkhoff 1967:81). An iterated event goes round in a cycle. The smallest cyclic group is the one-element group of order 1, the unit, followed by groups of order 2 (reflexivity, idempotence, Boolean algebra), 3 (symmetry, converse, involution), and 4 (transitivity). The Klein four group is the product group 2*2.
The cornerstone of the algebraic theory of machines is the observation that any finite monoid can be seen as a finite state machine and that recognition of regular languages reduces to multiplication in a monoid. More formally, a finite monoid M recognises a language L if there is a monoid morphism from L to M. Kleene's Theorem can then be stated as follows: a language is regular if and only if it is recognised by a finite monoid (Pin 1986, Beaudry et al. 2001).
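Monoid recognition can be sketched on the simplest nontrivial case: over the one-letter alphabet {a} (an assumption of the sketch), the morphism sending a to 1 in the two-element group Z2 recognises the language (aa)* as the preimage of 0.

```python
def phi(word):
    # monoid morphism from {a}* to the two-element group Z2, a -> 1
    value = 0
    for _ in word:
        value = (value + 1) % 2
    return value

def recognised(word):
    # the language (aa)* is the preimage phi^{-1}(0)
    return phi(word) == 0
```

Because Z2 has a nontrivial cyclic subgroup, (aa)* is a counting language; this is the aperiodicity boundary mentioned below.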
The duality is put to use in the language-automaton translatability. Group theory allows distinguishing first order definable, star-free or noncounting languages as that subset of regular languages whose automata are aperiodic, i.e. have only trivial cyclic subgroups (of order 1).
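Monoid recognition can be sketched in a few lines. The parity language below is my own example, not the text's: words with an even number of a's are recognised by the two-element cyclic group, so the language is regular but, having a nontrivial cyclic group in its syntactic monoid, not star-free.

```python
from itertools import product

def phi(word):
    """Monoid morphism from {a, b}* to the cyclic group (Z/2, +): a's mod 2."""
    return sum(1 for c in word if c == "a") % 2

accepting = {0}                      # L = phi^-1({0}), the even-parity words

def recognises(word):
    return phi(word) in accepting

# phi is a morphism: phi(uv) = phi(u) + phi(v) mod 2 for all words u, v.
for u, v in product(["", "a", "ab", "ba", "aab"], repeat=2):
    assert phi(u + v) == (phi(u) + phi(v)) % 2
assert recognises("abab") and not recognises("ab")
```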
To put it simply, the automaton for the event type a^{+} consists of one cyclic a arc xax from a state x to itself, which is the graph of a reflexive relation. In fact a^{+} can be viewed as a reflexive transitive relation in a:
a^{+} = aa^{+} = a^{+}a = aa^{+}a = a^{++}
The term reflexive means ‘bent back’. We tend to understand reflexivity through symmetry and transitivity. Transitivity plus symmetry imply reflexivity. Translated into event talk, a symmetric cycle keeps us in the same state by chaining an event and its inverse. Cyclic events produce open event types. I shall point out phenomena where languages switch between reflexivity and iteration (see section on reflexives).
For Aristotle, categories were the most general concepts together with their logical properties and relations. Modern category theory retains the spirit of this enterprise. Category theory or abstract or universal algebra (empty set theory for some) takes morphisms, or arrows (structure preserving mappings) between structures (called objects) as the primitive notion instead of sets. A category consists of objects and arrows between them. A category is a monoid of arrows under composition (of arrows). A category is a subcategory of another if there is an injective morphism that inserts it into the supercategory. A full subcategory keeps all the morphisms of the supercategory.
An initial object of a category is an object from which there is exactly one arrow to every object. A final or terminal object is an object to which there is exactly one arrow from every object. In the category of Boolean algebras, 2 is initial and the free Boolean algebra on a set of generators terminal.
Figure 2

Category theory is built round commutative diagrams and dualities. A commutative diagram is a graph of objects and arrows in which any two paths leading to the same object compose to the same arrow. Given two objects and an arrow between them, find a third one and arrows to it from both so that the diagram commutes. The third object is a colimit if it is the initial object of the category (or cocone) of such objects. This is what the Parmenidean megista gene, the different modes of being (existence, inclusion, identity, unity), have in common. In category theoretic terms, the various senses of be are initial or terminal objects of interrelated abstract categories. This fact has linguistic interest too, for such connections are a prime locus of neutralisation (witness the dense etymological networks among these notions in natural languages).
The dual of a diagram is obtained by reversing arrows. The dual of a colimit is a limit. Instances of limit and colimit are initial and terminal elements, and pushout and pullback. A pushout of two arrows is an object and two arrows to it that make a square diagram commute. Again the object is the initial object of the cocone of such candidates. Examples of pushout are coproduct, sum, join, unification, or domain of a metonymy. A pullback is the dual of a pushout. Examples of pullback are (free or fibred) product, Boolean meet, generalisation, or image of a metaphor.
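A pullback in the category of sets can be sketched as the subset of the cartesian product whose pairs agree on the shared object; the sets and maps below are arbitrary illustrations.

```python
def pullback(A, B, f, g):
    """Pairs of the cartesian product A x B that agree on the shared object C."""
    return {(a, b) for a in A for b in B if f(a) == g(b)}

A = {1, 2, 3, 4}
B = {"x", "y", "z"}
f = lambda a: a % 2                   # map A to C = {0, 1} by parity
g = lambda b: 0 if b == "x" else 1    # map B to C by classifying letters
P = pullback(A, B, f, g)
# The square commutes: both paths from P to C agree on every pair.
assert all(f(a) == g(b) for (a, b) in P)
assert (2, "x") in P and (1, "y") in P and (2, "y") not in P
```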
Figure 3

A product of objects a and b is an object a×b together with arrows (projections) from a×b to the components a and b. Dually, a coproduct (sum) of a family of objects is an object a+b with arrows (injections) from the objects to the coproduct.
Examples of products are event type as product of type, participants, time and place, and refinement of a Boolean algebra. Examples of coproducts are Boolean sum (disjoint join) and quotient of a Boolean algebra.
Figure 4

A product can have a right adjoint (one sided inverse) a^{b} so that a^{b×c} = (a^{b})^{c}. There is an adjunction or Galois connection between arrow and product:
a×b ≤ c if and only if a ≤ b→c
Product is the coadjoint (left adjoint) of the arrow. Instances of adjointness are product and exponential, Boolean meet and implication, binary comparative relations and choice functions, function arity and currying, and categorial grammar. The carrier or forgetful functor of a Boolean algebra which returns the sum of its atoms is adjoint to the functor which maps atoms to the free Boolean algebra generated by them. There is also an adjunction between the product and its projections, so that π(a×b) ≤ a×b iff a ≤ a×1.
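The adjunction between product (meet) and arrow (implication) can be checked by brute force in a small Boolean algebra; the two-element universe below is an arbitrary choice.

```python
from itertools import chain, combinations

U = {0, 1}

def powerset(s):
    s = list(s)
    return [set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

# Check a * b <= c iff a <= b -> c, where b -> c is -b + c and <= is inclusion.
for a in powerset(U):
    for b in powerset(U):
        for c in powerset(U):
            implication = (U - b) | c
            assert ((a & b) <= c) == (a <= implication)
```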
Turning the diamond diagram forty five degrees gives a Klein four group.
A category is a quotient category if there is a morphism onto it from the original category which is bijective on objects and surjective on arrows. A quotient coarsens the category without losing its arrows; it just takes out the slack from the objects. That can happen when there is slack, i.e. the bigger category is not packed full of arrows (enough to distinguish its objects). For instance, in model theory, there is slack due to logical equivalence. The quotient, the Lindenbaum algebra construction, removes it by mapping synonyms to synsets. The dual of a quotient is called a residual. It is a refinement of a category by another category.
A functor is an arrow (morphism) between categories which maps objects to objects and morphisms to morphisms. A functor is contravariant if it reverses arrows and covariant if it does not. Two contravariant functors compose to a covariant one. An example of contravariance is logical type shift one level up. Two shifts are covariant. What is a product is a coproduct at the next level up or down. Examples of covariance are logical type shift two storeys up or the Stone duality of Boolean algebras and discrete topological spaces.
For instance, any partition, like a classification of objects into big and small, is a coproduct. A function from context sets to such partitions is a choice function, whose adjoint is a revealed binary preference relation.
The category theoretic concept of product can be used to explicate the duality of word order and morphology in natural language. A product, consisting of an object ab and projections f: ab→a and g:ab→b, can be equivalently represented as an indexed product (a,f)(b,g) = (b,g)(a,f). This product is commutative, for the projections are now carried along with the terms of the product.
Alternatively, a product ab can be represented as the product of the left and right projections paired with identity on the right or left respectively: ab = (a,1)(1,b) = (1,b)(a,1). This product is again commutative, for componentwise multiplication gives back (a1)(1b) = (1a)(b1) = ab in either order. (Compare also section on linear algebra.)
A natural transformation is a morphism of functors. Categories connected by invertible natural transformations are equivalent. For instance, adjoint categories, like sets and the free Boolean algebras on them, are equivalent. Meaning preserving type shift in natural language semantics (the category theorist’s “is”) is natural equivalence.
A language-like feature of category theory is that it does not count. Categories which only differ numerically are equivalent, and represented by their quotient under equivalence, or skeletal category.
The strength of the category theoretic perspective is that it gives precise meaning to the analytic-synthetic duality, explicating general analytic concepts such as quotient, partition, disjoint union, factoring out, finding the common denominator, passing to equivalence classes, generalisation, abstraction, or schematic meaning (all instances of finding colimits) and synthetic concepts such as product, factorisation, fixing dimensions or coordinates, (de)composition (instances of limits). One side puts together what the other takes apart. For instance, we may represent events as combinations of properties and time, then project those components, and combine them back. It allows doing language specification piecewise and then combining the pieces together.
In the section on linear algebra, I win will be represented as the meet I⋂win of the event types where I am the subject with those event types where someone wins. At the same time I win “is” also the relation composition I°win of I and win, and the concatenation I.win of the event types I and win. These are all interrelated perspectives on the same event, and all instances of category theoretic product.
Category theory relevant for event structures includes cartesian closed categories and monoidal categories.
A fibred product or pullback e ×_{x} f is a product subject to a constraint or law x (Barr/Wells 2002).[23] In other words, maps d → e, d → f that agree on x correspond one to one to maps d → e ×_{x} f. The pairs in the fibre product are mapped to a shared third object x. For instance, the composition of two relations contains those pairs in their product which share the middle member. Any binary relation can be thought of as a fibred product (cartesian product subject to a law).
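Relation composition as a fibred product can be sketched directly: the composite keeps just those pairs of the cartesian product that share the middle member. The relations below are hypothetical.

```python
def compose(R, S):
    """Fibred product of two relations: pairs sharing the middle member."""
    return {(a, c) for (a, x) in R for (y, c) in S if x == y}

go = {("home", "station")}      # a path from home to the station
ride = {("station", "work")}    # a path from the station to work
assert compose(go, ride) == {("home", "work")}
```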
Different notions of product can be ordered on a scale according to the strength of the law constraining them.
cartesian product < concatenation < composition < meet < identity
Free products satisfy cancellation; composition and meet do not. The last two are commutative (rings with idempotency < 2):
ef = ∅ iff e = ∅ or f = ∅
Free products commute with Booleans, composition does not (it commutes with join and meet but not with complement).
The failure of cancellation indicates the presence of a constraint (a fibred product). Here are some linguistic instances of this.
Plural symmetric (reciprocal) predicates: A and B are unmarried entails A is unmarried, but A and B are not married does not entail A is not married. Only mutual marriage is excluded.
Motion: The composition of two paths from x by y to z entails from x to z, but the negation of the latter does not entail the negation of either component, for there may be alternative paths.
Causality: A causation event kick upstairs entails the sequence of events kick.upstairs, i.e. someone kicks something and something goes upstairs. The negation of the former does not entail the negation of either component, for there are further subevents linking them: the shared object, and the causal connection.
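The failure of cancellation under composition can be shown with a minimal counterexample (my encoding of relations as sets of pairs): two nonempty relations whose composite is empty because no middle member is shared.

```python
def compose(R, S):
    """Relation composition: pairs of the product sharing the middle member."""
    return {(a, c) for (a, x) in R for (y, c) in S if x == y}

e = {("x", "y1")}    # a nonempty path relation ending at y1
f = {("y2", "z")}    # a nonempty path relation starting at y2
# Both factors are nonempty, yet the composite is empty: ef = 0 does not
# entail e = 0 or f = 0, so the product is constrained, not free.
assert e and f and compose(e, f) == set()
```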
A Chu space (Pratt 1999) is a map from a product of a category and its dual to a third category of (truth) values. The two dimensions can represent any duality: types versus tokens, objects versus properties, objects versus places, times versus events, for instance. Pratt (1997, 1999, …) shows that the construct can be tweaked to represent all and sundry mathematical objects.
The interest of Chu spaces here is that they match the natural language articulation of what there is by an extensional token dimension of objects against an intensional type dimension of concepts. The token dimension is one type lower than the type dimension, applies Booleans in extension (join is more) and constitutes the subject of an event. The type dimension is one type higher than the object dimension, applies Booleans in intension (meet is more), and constitutes the predicate of an event. In category theoretic terms, the basic operator of forming events is not a join of tokens or a meet of types, but an adjunction of a token and a type.
The initial object of the category of Chu spaces is a map from a four-field of two tokens against two types to 2, exemplifying polynomial (x vs. x^{2}) and exponential (x vs. 2^{x}) duality in one diagram. This Chu space already makes sense of identity x = y as the meet of two types on one token, as well as the sharing of one type by two tokens. Aristotle’s square of opposites, the Klein four group, and the commutative square diagram of category theory are two-by-two distinctions, like the four corners of the initial object of the category of Chu spaces. If one of the dimensions is contracted to a point, a two way duality of opposites remains.
Points in the Chu space are values of some sort. In the above figure, they are event tokens. Each one is a product e⋂t of an event type and a time. Going by rows or columns gives two dual one-dimensional spaces: one maps times to events (the usual view), the other maps events to times (the dual view). The dual view is what physics calls phase space. A given time is described by the spectrum of events happening then. Dually, a given event is described by the spectrum of times it occurs in. By the superposition principle, an (aperiodic) event appears as the (infinite) sum of (periodic) events, or vice versa.
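A toy Chu space makes the two readings concrete; the times, event types and their 0/1 matrix below are invented for illustration.

```python
# A map from times x event types to 2, presented as a 0/1 matrix.
times = ["t1", "t2", "t3"]
events = ["rain", "wind"]
M = {("t1", "rain"): 1, ("t1", "wind"): 0,
     ("t2", "rain"): 1, ("t2", "wind"): 1,
     ("t3", "rain"): 0, ("t3", "wind"): 1}

def spectrum_of_time(t):
    """The usual view: a time described by the events happening then."""
    return {e for e in events if M[(t, e)]}

def spectrum_of_event(e):
    """The dual (phase space) view: an event described by its times."""
    return {t for t in times if M[(t, e)]}

assert spectrum_of_time("t2") == {"rain", "wind"}
assert spectrum_of_event("rain") == {"t1", "t2"}
```

Transposing the matrix swaps the two views, which is the duality the text describes.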
Cartesian closed categories are a category theoretic abstraction of the adjointness of product, quotient and exponent. Any category is a monoid of arrows (morphisms) under composition °. In cartesian closed categories there is a product a×b which has a right adjoint (inverse) a→b for every object, so that an adjunction (Galois connection) holds between arrow and product[24]
a×b ≤ c iff a ≤ b→c.
Boolean algebras are cartesian closed. Lambda calculus, combinatory logic and categorial grammar are other well known instances of c.c.c. A Heyting algebra is cartesian closed and finitely cocomplete, but not complete. (Cf. section on priority Booleans.) Relation algebra composition is a tensor product distinct from its cartesian product (Pratt 1997).
Categorial vagueness or polymorphism of natural language is studied in categorial grammar (Lambek 1958, van Benthem 1985:Ch.7, 1991, Parsons 1990:216). The idea is, instead of assigning expressions a fixed logical type, to define a calculus which allows unmarked shifts between categories. Unmarked aspect shifts and coercion in Moens (1988) are an instance of this idea.[25]
Categorial grammar can be thought of as the grammar of type theory. Categorial grammar builds sentences using type inference on type assignments for their parts. Different versions of categorial grammar buy into different subsets of the types and rules of inference valid for classical logic. A categorial grammar hierarchy has been laid out in van Benthem (1991:247, 1996:252). Each richer system includes the earlier ones. Each adds a structural rule of inference characteristic of a given combinator.
Ajdukiewicz calculus: application, cut, modus ponens (K)
Lambek calculus: composition, conditionalisation, lifting (B)
Linear logic: commutativity, permutation (C)
Relevant logic: cancellation, contraction, idempotence (W)
Intuitionistic logic: monotonicity, expansion (K^{1})
Classical logic: Peirce’s law ((a → b) → a) → a

Lambek calculus is sound and complete with respect to relation algebra by the following correspondence (van Benthem 1996).
Lambek type ab: relation composition
Lambek type a\b: left residual (relation inverse)
Lambek type b/a: right residual (relation inverse)
Dynamic logic (Segerberg 19??, Pratt 19??, van Benthem 1991, 1996) combines unary Boolean algebra and binary relation algebra into a twosorted system. Modal logic style, the first implicit argument is a state index. The characteristic operations map between states and actions, which are binary relations between states. In relational algebra, dynamic logic weak modality <e>r ‘e can produce r’ is representable as s<⋂e⋂<r. This should be compared to the counterfactual definition of e let r as s ⋂ s<⋂e⋂<r. The dual of enablement, necessitation [e]r ‘e will produce r’ will then correspond to e cause r. The operators ?p for ‘test p’ and !p for ‘ensure p’ map from states back to actions. The former is definable using priority operators as ?p< and the latter as <p, corresponding to a preconditionfree variant of become. The dynamic logic primitives thus provide an alternative choice of primitives for the causebecome fragment of the present calculus. The program constructs if then else and while do or repeat until are definable in a similar fashion.
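The relational reading of the dynamic logic modalities can be sketched as follows; the encoding of actions as sets of state pairs and the tiny model are my assumptions.

```python
def diamond(e, r):
    """<e>r: the states from which action e can reach an r-state."""
    return {s for (s, t) in e if t in r}

def box(e, r, states):
    """[e]r: the states all of whose e-successors are r-states."""
    return {s for s in states if all(t in r for (u, t) in e if u == s)}

states = {0, 1, 2}
e = {(0, 1), (0, 2), (1, 2)}   # a nondeterministic action on the states
r = {2}
assert diamond(e, r) == {0, 1}
assert box(e, r, states) == {1, 2}   # state 2 satisfies [e]r vacuously
```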
Kleene algebra with tests (Kozen 1994, 1996, Cohen et al. 1997) corresponds to an equational subset of propositional dynamic logic without complement (van Benthem 1996). The coalesced product in Kozen/Smith (1996) is a formal language analogue of relation join or composition.
The event-object duality is realised in computational theories of events as the duality of control flow and data flow. Control flow takes events through a state space, defining alternative courses of events. Data flow takes objects through a network of relations, defining event participant structures. There is a spectrum of systems from single-agent active ones with complex control flow (flowcharts) to multi-agent reactive ones where all the action is in the wiring (Petri nets, neural nets).
The formal analogy between relation algebra and regular events, states corresponding to properties and events to relations, is exploited in dynamic logic (van Benthem 1991, 1996). A finite network of binary relations is formally similar to a finite state automaton. This analogy is exploited in the theory of Kleene algebras over matrices (Kozen 1992, Desharnais 200?).
van Benthem (1996) uses relation algebra to compare various dynamic logics as restricted varieties of classical reasoning. The idea is to restrict valuations (assignments) from a static set (corresponding to the cartesian product, or universal relation, of individual assignments) into a dynamic net which imposes constraints between successive assignments. This is analogous to the move from classical logic to modal or resource bounded logics.[26]
Untyped Boolean combinatory logic is inconsistent (Curry/Feys 1958). Combinatory logic or lambda calculus operates on reflexive domains (Scott 1973), countable cartesian closed spaces isomorphic to their own function spaces (Stoy 1979). They allow expressing recursive arithmetic, closures, and fixpoints. In particular, combinatory logic defines the fixpoint combinator Y which returns for any function f the fixpoint of f, i.e. f(Yf) = Yf. If implication → or negation ¬ is introduced to the language, their closure or fixpoint is the contradiction ⊥, and the entire language is derivable. ∅ = 1 becomes true, so duality reduces to triviality.
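The fixpoint combinator can be sketched in an eagerly evaluated language. Under strict evaluation the classic Y diverges, so the sketch uses the eta-expanded variant Z, which still satisfies f(Zf) = Zf on arguments.

```python
# Z = \f.(\x.f(\v.x x v))(\x.f(\v.x x v)), the strict fixpoint combinator.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial obtained as the fixpoint of a non-recursive functional.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
assert fact(5) == 120
```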
In category theoretic terms, the category of complete atomic Boolean algebras is not cartesian closed. Conversely, the logic, or topos, of cartesian closed categories is not the two-valued Boolean algebra, but the four-valued intuitionistic Heyting algebra.
The paradox of selfcontradiction goes back to the ancient paradox of talking about what is not. Since paradox is deduced from a combination of negation plus reflexion (abstraction), there are (apparently) alternative ways out: restrict negation, restrict reflexion, or restrict deduction. All of them have been explored, leading to constructive logic, set theory, type theory, or nonclassical proof theory among other things. No one winner has emerged. Instead, logic has branched off in different directions.
One option is to just ignore the paradox. This is what natural language does. It blithely generates this sentence is not true and leaves logicians to wonder what to do about it. I shall do the same. I use what I need from Booleans and combinators, leaving foundational worries aside.
Regular operators on simple event tokens can be lifted to complex event tokens and further on to event types in the usual way of formal language theory (Salomaa 1973, van Benthem 1991, 1996). In formal language terms, a word is represented by the family of languages containing only the unit language containing it, a language by the family of languages including only it, and a language family by itself. There are two levels of abstraction here, from simple event tokens (strings) to complex event tokens (languages) and from complex event tokens to event types (families of languages).
The domain of actual events is in general not closed under regular operations (not free). Operators can be thought of as restricted by or relativised to an underlying universe S (van Benthem 1991). If x and y are complex event tokens then
xy = {uv ∊ S: u ∊ x ∧ v ∊ y}
and if e and f are event types then
ef = {xy ∊ R: x ∊ e ∧ y ∊ f}
A simple event token is lifted to a complex event token by considering its unit set. A complex event token is lifted to an event type by considering its unit set. After these identifications, we can let all variables range over event types, some just more restricted than others, and build up the calculus using primitives of Boolean and regular algebra only, fading out the typetoken distinction. The loss of information from identifying a set with its singleton is compensated by viewing events implicitly or explicitly as elements of specific Boolean algebras. Being a member of a set translates to being an atom of a Boolean algebra. A unit set generates a Boolean algebra isomorphic with the trivial Boolean algebra 2.
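The relativised concatenation defined above can be sketched with strings standing in for event tokens; the universe S and the event types are hypothetical.

```python
S = {"ab", "ba", "abc"}     # the underlying universe of actual complex tokens

def concat(e, f, universe=S):
    """Concatenation of event types, relativised to the universe."""
    return {u + v for u in e for v in f if u + v in universe}

e, f = {"a", "b"}, {"b", "c"}
assert concat(e, f) == {"ab"}    # "ac", "bb" and "bc" do not occur in S
```

The restriction to S is what makes the domain of actual events unfree: possible concatenations that never occur are simply dropped.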
In Montagovian terms, simple event tokens (type e), complex event tokens (t/e) and event types (t/t/e) are all lifted to the logical type of properties of complex events (t/t/e). The technique is familiar from typed higher order logic. The duality of extension: sets of individuals (type t/e), and intension: sets of properties of individuals (type t/(t/e)), is a case of category theoretic contravariance (McLane/Birkhoff 1967). This is the duality of points and their neighborhoods, individuals and properties, situations or propositions and possible worlds, theories and their models, or partial valuations and sets of total valuations (Barwise and Perry 1983, van Benthem 1985:44ff). Arrows are inverted: as intension increases, extension decreases and vice versa. In general, going one step up, operators are inverted. Universal quantifiers translate into existential (in fact, definite) ones one type higher (van Benthem and Doets 1983:309).
Similar constructions are possible in the domain of objects, allowing analogous talk of object tokens and object types.
Figure 5

Two inversions cancel one another, so that there is a covariant mapping between individuals (type e) and first order quantifiers (type 2^{2^{e}}). It is exemplified by the Stone representation theorem saying that the second dual of any Boolean algebra is isomorphic to it (Halmos 1974:78). Thus types can be isomorphically lifted two storeys up, as is done in Montague grammar or Boolean semantics (van Benthem 1986:67, Keenan/Faltz 1985). Individuals map by Leibniz’s law (an individual is the set of its properties) to Boolean homomorphisms, or principal filters in the algebra of first order quantifiers. The following diagram describes the situation:
This diagram can be related to the observation that natural language types do not go beyond third order (van Benthem 1986:65). The first two types appear in a subject predicate event type, the third is obtained by subject-predicate inversion. A second inversion gets us back to where we started from.
It does not make sense to ask what the logical type of a natural language expression is. However, an expression may have a lowest type in which its logic can be captured. For uniformity (conjunctive instead of disjunctive definition), one climbs up the ladder to where all expressions become special cases of the same type.
I shall not develop a separate type theory as a metatheory to a concrete syntax. Instead, the entire calculus is built from types: event types, object types, property types, relation types, et cetera. Everything is types, from tokens to logical constants. The highest types may well be dubbed logical (those closed under permutations, or automorphisms, as suggested by van Benthem). Between logical types and concrete tokens there is a full spectrum of intermediates. I allow that types of natural language expressions are not fixed, but may shift along various natural transformations between categories with or without explicit warning.
The theory assumes a liberal supply of combinators or type shifting operators between categories. In fact, such operators are the main object of study here.
Under my catholic notion of type and type shift, I subsume types and combinators described in functional type theories like lambda calculus, combinatory logic or Lambek calculus, but also shifts between functional and relational types, Boolean types, even shifts through category theoretic morphisms and equivalences. Perhaps it is right to say that my types and type shifts are objects and morphisms. In particular, I don’t bank on the (e,t) regime of Montague grammar (van Benthem 1986:65, Keenan/Faltz 1985).
I shall denote types by typed variables and operators on them. I won’t stick to any one notation, but vary notations as seems fit. Aspect types denoted by a, b, c, … are subtypes of type e of events. Untyped bindable variables x, y … range over all types. Concatenation of event types produces more event types, as do other operators. Functional event types can be produced with the abstraction operator. For instance, the characteristic function of an event type e is x:e⋂x, which denotes 1 if e⋂x > ∅, else ∅. (This will be used to produce a truth definition below.)
Types may exemplify a variety of different categories. Among them is Boolean algebra, which allows defining notions of subtype and compatible type, among other things.
The type of one-place events px is a product of one-place event types p = p:px and objects x = x:px. Under an appropriate type shift, for instance the projection p = x:px which identifies an event type with its participants, px is equivalent to the meet p⋂x. (Equivalent in a category theoretic sense, viz. related by a reversible natural transformation.)
There are similar shifts among different representations of two-place event types xry. A binary relation maps to the one-place subject-predicate event type by subject abstraction x:xry. Different word orders are produced by the right combinators, for instance rxy = CIxry. Free word order with dependent marking can be represented by a type shift which replaces composition with meet:
xry = x_ ⋂ _r_ ⋂ _y
Here the roles of x and y in the event type are marked on the arguments by cases. Head marking can be represented as argument abstraction
xry = (xy:r) x y = x: (y: r y) x
where the roles of x and y are marked on the head.
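Word-order shift by combinator can be sketched with curried functions. The verb is invented, and using the flip combinator C alone (rather than the text's CI) is a simplification.

```python
# C is the flip combinator: C f x y = f y x (curried).
C = lambda f: lambda x: lambda y: f(y)(x)

love = lambda subj: lambda obj: f"{subj} loves {obj}"   # the order xry
assert love("Kim")("Lee") == "Kim loves Lee"
assert C(love)("Lee")("Kim") == "Kim loves Lee"         # arguments flipped
```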
In sum, I give up trying to stick to a single underlying logical form. Generalising the idea of Lambek calculus, I allow unmarked type shift, or type inference, to happen dynamically during the process of interpretation. Any shift of language model will do which makes sense of the grammar and semantics of a given expression. It need not be the same language model for all constructions, nor need it be unique within even one construction, as long as types can be systematically mapped to others. This is the spirit of category theory. To quote one mathematical physicist:
If you find this confusing, take heart.
Getting confused this way is crucial to learning n-category theory! After all,
n-category theory is all about how every “process” is also a “thing” which can
undergo higher-level “processes”. Complex, interesting structures emerge from
very simple ones by the interplay of these different levels. It takes work to
mentally hop up and down these levels, and to weather the inevitable “level
slips” one makes when one screws up. If you expect it to be easy and are
annoyed when you mess up, you will hate this subject. When approached in the
right spirit, it is very fun; it teaches one a special sort of agility.
John Baez, This week’s finds in mathematical physics 75, http://…
Quine (1960, 1985) proposes an ontology where objects and events are basically the same kind of (filled) four-dimensional regions of spacetime: objects are stable events and events unstable objects. Weather words like rain straddle the distinction. Pace Quine, we don’t normally think of events as individuals on a par with you and me, with temporal continuity and spatial boundaries, but rather as types classifying situations, occasions, or times where events take place (Eckardt 1997, Bennett 1988:§42). I return to this topic in the section on locative cases.
Differences between events and objects have been pointed out in the literature. Objects exist in their entirety from the time they first appear; events (well, closed complex ones) only when they are all there (Schmitt 1983). In this respect, objects are like states. Objects, like states, exist at moments. There has been related controversy about whether events can move about in space (Dretske 1967) or objects in time (Kripke 19??). Cooper (1986) suggests (some) states have no location (e.g. love), but cf. Link (1998:306).
Bennett (1988; cf. Zucchi 1993, Link 1998) makes a distinction between events as facts and events as individuals. Brutus killing Caesar is a fact, The murder of Caesar is an individual event. A fact carries its identity criteria on its sleeve: any two nonequivalent descriptions of a fact describe different facts. Caesar dying is a different fact from Brutus killing Caesar. An individual event has parts and it can be described in different ways: The murder of Caesar and Caesar’s death (can) refer to the same event. I see this distinction as one between event type and event token. The event type Brutus killing Caesar does not fix any particular way for Brutus to commit the act. The factual complex event token Caesar’s death contained many other event tokens, including a murderous stab by Brutus. The type extreme is Kim’s fact metaphysics of events, the token extreme Quine’s spacetime region metaphysics, with Link (1998) taking an Aristotelian midway position. I go with Bennett, who says that event identity talk covers the whole spread (Bennett 1988:§49).
Events and objects are in fact dual. An object can be characterised by the events it participates in. An event can be characterised by the objects that participate in it. This standpoint gets formal support from computer science, where the duality of data flow diagrams (which describe the wiring of a network, or the topology of objects) and control flow diagrams (which describe the flow of control, or the topology of events) is well established (Stefanescu 19??).
A category theoretic viewpoint may help here. An event can be viewed as a product of various dimensions; dually, a product space can be constructed from equivalence relations between events in Russellian fashion. Choose a set of dimensions, say, event type, participants, time and place and construe events as products of the dimensions:
event ⊆ type × participants × time × place
By taking a suitable projection of the product, keeping some of the dimensions constant and currying some, the set of events can be viewed as a characteristic function of any combination of the dimensions, and each event retrieved as a set of sets in that domain. For instance, an object can be viewed as the set of all events in its life, or as a Quinean four (or more) dimensional world line. Or an event can be construed as a function from individuals and their properties to spatiotemporal locations. This is pure category theory.
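The product view of events can be sketched with a record type; the dimensions and the sample events are hypothetical.

```python
from collections import namedtuple

Event = namedtuple("Event", ["type", "participant", "time", "place"])

e1 = Event("walk", "Kim", "t1", "park")
e2 = Event("eat", "Kim", "t2", "home")
e3 = Event("walk", "Lee", "t1", "park")
events = {e1, e2, e3}

def life(x):
    """An object viewed as the set of all events in its life."""
    return {e for e in events if e.participant == x}

def at(t):
    """A projection keeping the time dimension constant."""
    return {e for e in events if e.time == t}

assert life("Kim") == {e1, e2}
assert at("t1") == {e1, e3}
```

Each projection fixes one dimension and recovers the others, which is the interchangeability of perspectives the text describes.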
Given the degrees of freedom in this game (van Benthem 1986), nothing deep can be associated to the choice of ontological primitives. Rather, natural language semantics should allow alternative perspectives to coexist and shift freely (van Benthem 1986:§3,§7).
The extensional Boolean algebra of complex event or object tokens can be lifted in a covariant fashion to extensional Boolean operators between event or object types, in the manner of situation schemata in situation semantics (Barwise and Perry 1983:91, van Benthem 1996:§3). Token join ⋃ is lifted one level up by the definition
e⋃f = {x⋃y ∊ R: x ∊ e ∧ y ∊ f}.
Token join has been called non-Boolean conjunction (Hoeksema 1988, Krifka 1989, van Benthem 1991:64,90). It is Boolean all right, being the extensional dual of intensional (propositional) conjunction. For instance, a token of work at night is a join of tokens of work and night (the events just happen together). Walk to work is a token join of walk and go to work, for the two form one event (one causes the other). Jack fell down and broke his crown is the token join of fall and break. Similarly, an extensional relation of involvement e ⊆ f between event types is defined by e⋃f = f. For instance, kissing f involves touching e because every kiss is a touch.
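The lifted token join can be sketched as follows; the encoding (complex tokens as frozensets of simple tokens, R the universe of actual complexes) is my assumption.

```python
def token_join(e, f, R):
    """Lifted token join: joins of member tokens that occur in the universe R."""
    return {x | y for x in e for y in f if x | y in R}

w1, n1 = "work1", "night1"
R = {frozenset({w1}), frozenset({n1}), frozenset({w1, n1})}
work = {frozenset({w1})}
night = {frozenset({n1})}
# A token of work at night is a join of a work token and a night token.
assert token_join(work, night, R) == {frozenset({w1, n1})}
```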
On the extensional view, the event type walk and chew gum cannot be the meet of walk and chew gum, because as spatiotemporal individuals, walking events and chewing gum events are separate and have no common members.
The contravariant, intensional approach to lifting event tokens to types defines Booleans that are dual to the extensional ones. In the intensional perspective, a complex event is a situation or location at which different types of events occur contiguously, and an event type is a set of such situations. This intensional sense is the one apparently intended in Galton’s (1984:55) definition of subevent:
An event (type) E´ is a subevent of an event (type) E if every occurrence of E´ is also an occurrence of E.
I propose to call Galton’s subevent relation type inclusion or subtype relation ⊆ and the related Booleans type join ⋃ and type meet ⋂. For instance, carry a suitcase in the right hand is type included in carry a suitcase which is type included in carry, and carry a suitcase in each hand is the type meet of carry a suitcase in the right hand and carry a suitcase in the left hand. The token join ⋃ of event types corresponds to type meet ⋂ in that both constitute a further specification of an event type. Saying extensionally that kissing involves touching touch ⊆ kiss equals saying intensionally that kissing is touching kiss ⊆ touch. Token join and type meet are dual: the former is an extensional (mereological) operation, the latter an intensional (inferential) one.
For a concrete example, enumerate individual walks and chewings as w_{1}, w_{2}, …, c_{1}, c_{2}, … . Then the extensional event type (only/all) walk would be a set of the form {{w_{1}}, {w_{2}}, {w_{1}, w_{2}}, …}, or the set of complex events including only walks, and similarly for chewings. The token join of only walk and only chew contains complex events of walking and chewing, while their meet is empty.
The corresponding intensional event type (also/some) walk would be a set of the form {{w_{1}}, {w_{2}}, {w_{1}, w_{2}}, …, {w_{1}, c_{1}}, {w_{2}, c_{2}}, …} consisting of all complex events which include some walk, and similarly for chewing. These event types are in effect situation types, so walk is the type of walking situations and chew gum is the type of chewing gum situations, whose intersection walk⋂chew gum is the type of walking while chewing gum situations. I.e. we have some walk = {x ∊ R: w ⊆ x for some w ⊆ all walk}. Both of the dual viewpoints appear in our thinking about events (Galton 1984, Zucchi 1993, Link 1998).
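The walk/chew construction can be carried out mechanically over the tokens w1, w2, c1, generating both the extensional (only/all) and intensional (also/some) types and checking their meets:

```python
from itertools import combinations

def complexes(atoms):
    """All nonempty complex event tokens over a set of atomic tokens."""
    return {frozenset(c) for r in range(1, len(atoms) + 1)
            for c in combinations(sorted(atoms), r)}

universe = complexes({"w1", "w2", "c1"})
walks, chews = frozenset({"w1", "w2"}), frozenset({"c1"})

def only(atoms):
    """Extensional (only/all) type: complexes built only from these atoms."""
    return {x for x in universe if x <= atoms}

def some(atoms):
    """Intensional (also/some) type: complexes including some such atom."""
    return {x for x in universe if x & atoms}

# Extensionally, walk and chew have an empty meet; intensionally their
# meet is the walking-while-chewing type.
assert only(walks) & only(chews) == set()
assert frozenset({"w1", "c1"}) in some(walks) & some(chews)
```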
This duality seems in fact to distinguish nominal and verbal individuation, or objects and events (Boole 1858: 176, Bennett 1988:18fn10). As tokens, events and objects “are” both Quinean spacetime regions. But we tend to treat objects as tokens, events, properties and relations as types. From a Boolean point of view, identifying events by situations and identifying them by individuals participating in them do essentially the same work, i.e. shift from viewing events as individuals to a dual view of them as types or properties.[27]
For a finite model for the calculus, construe a printed page (like this one) as an event universe. Character tokens stand for event tokens, lines stand for sequences of event tokens. Sets of columns of the page represent times and paragraphs count as complex event tokens. Strings of character c represent tokens of the state c. Event type e is dually represented by the set of paragraphs which include tokens of e. The type meet e⋂f is the set of those paragraphs which include tokens of e and f. The token ideal of c is equal to c, because strings of c only include other strings of c. Tokens of event type lowercase include tokens of all lowercase event types.
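The page model is easy to implement; the three-paragraph "page" below is a toy stand-in for an actual printed page:

```python
# A toy page: three paragraphs standing for complex event tokens,
# characters standing for event tokens.
page = ["aaabbb", "bbbccc", "aaaccc"]

def type_of(char):
    """The event type of char: the paragraphs including tokens of it."""
    return {p for p in page if char in p}

def type_meet(e, f):
    """Type meet e ⋂ f: paragraphs with tokens of both e and f."""
    return type_of(e) & type_of(f)

assert type_meet("a", "b") == {"aaabbb"}
assert type_meet("a", "c") == {"aaaccc"}
```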
There is a school of thought which construes the distinction between states (or open event types) and events (or closed event types) as a sortal one (Pratt 1979, Partee 1984, Galton 1984, Löbner 1988, Herweg 1991, Pratt 1992).[28] Formally, states are properties of times, formalised as monadic first order predicates on times, while event tokens are first order quantifiable individuals. Thus It was raining translates as rain(t) & t < now and There was a flash of lightning as (∃e)(flash(e) & t(e) < now). States as predicates of time represent arbitrary sets of times without quantifying over them. Events as quantifiable individuals are discrete. They are related to but not identified with the times they hold of, which is another way to avoid quantification over sets of times.
This idea has seemed particularly attractive to writers who start out from first order quantification theory and/or Priorian tense logic. First order logic is essentially the logic of finite number and discrete individuals. More precisely, it is unable to distinguish finite from infinite and discrete from continuous. First order predicates are not individuated (there is no quantification or identity for predicates), individuals are. As long as one sticks to first order logic, there is no other obvious way to make the count/noncount and discrete/continuous distinctions than a two-sorted approach. People who feel events are categorically different from states usually contrast changes with simple states. Lombard (1986) actually says all events are changes. Those who feel events and states are the same sort of thing don’t make this simplification (Broad 1923, Davidson 1980, Bennett 1988).
Galton (1984:27-28) summarizes the differences: a state is homogeneous, dissective (has the subinterval property), and negatable, and obtains or fails to obtain from moment to moment; an event, on the other hand, is inhomogeneous and unitary (has the non-subinterval property), and occurs a definite number of times (possibly not at all) within a period of time.
A claim for a sortal difference is a generic statement, one which is not true or false simpliciter but useful or not. Let us consider some of the arguments. One argument by Löbner (1988:166) is that states are closed under complementation, events are not. True in a sense: the pointwise complement of a state is also a state, while complement in general does not preserve event type. Another argument of Löbner’s (1988:168) is that an event analysis would not work for a state like Es war kalt ('it was cold'). An analysis 'there is an event e of the kind it is cold which takes place before the speech event' would wrongly imply that the state no longer holds, since 'an event of this kind could only be the occurrence of a closed cold phase before the speech event'. This argument begs the question, as it identifies event with closed event.
Herweg (1991) is a detailed examination of what he calls the imperfective or time/state/proposition based paradigm and the perfective or event/individual based paradigm. In the first camp, he counts followers of both Reichenbach (1947) and Prior (1967): Bennett/Partee (1972/78), Bäuerle (1979), Dowty (1979,1982), Cresswell (1985), Nerbonne (1985), Fabricius-Hansen (1986), Hinrichs (1987), Ballweg (1988). The second camp (with Davidson 1967,1970 as a locus classicus) features Wunderlich (1970), Kamp/Rohrer (1983), Partee (1984), Saurer (1984), Bach (1986), Reyle (1987), Bäuerle (1988). Herweg’s thesis is that each approach can only properly treat its native side of the aspect opposition. The constructive aspects of the paper will be discussed later. Some of the impossibility claims can be examined here.
Herweg feels a propositional approach cannot account for the present tense in performative statements, because ‘an interval does not become the truth interval for an event-type proposition until the event has come to an end’. I suspect this is a residual intuition from thinking in terms of truth at points.
One objection against time as the primitive is that time is not enough to individuate events: one can break an arm at once in two different places. The example does show that at least time and space are needed to distinguish concrete events from one another. (In this respect, concrete events are no different from concrete objects.) Eckardt (1997) argues that time and space do not suffice either (say, one can insult twice in one sentence).
Herweg allows that 'rather sophisticated provisions using possible worlds' might suffice to individuate events. There is nothing particularly sophisticated about possible worlds, situations, or indexes; they are just extensional duals of intensional event types. The main attraction of event types over situations is that there appear to be fewer of the former when only partial information is available. Situations (Barwise/Perry 1983) or model sets (Hintikka 1969) combine features of both approaches.
Herweg’s objection against an event based treatment of states is predicated on his assumption that an event semantics must provide a discrete criterion of individuation which makes events atomic and countable. He goes on to say that for states there are two atomic individuation criteria, fixed duration s⋂t and maximal convex subset (cycle) ¬ss¬s. Of these only the latter one produces a countable set of events.[29]
Herweg’s argument that a medial episode of a state and a maximal one cannot belong to the same event type equivocates between ¬ss¬s and ¬s`s´¬s (closure and interior). The interior of a cycle ¬s`s´¬s is at the same time atomic in its native event type and a member of the event type s. The rest of the argument tries to show that an event semantics for states (DRT, Bäuerle 1988) succeeds only if the countability requirement on events is rejected. (I reject it).
Herweg goes over a number of other familiar Aristotelian contrasts between states and events. Following Galton (1984), Herweg does not distinguish between states and processes.[30] For him states are homogeneous: if a state holds over some period of time it also holds at each part of this period, while events are heterogeneous: no proper part of the overall time occupied by an event is also a time at which the event occurs (Frege 1884:66). According to Herweg, it is exactly this property of homogeneity which durative (measure) adverbials require, and it is the reason why states cannot be counted.
Citing Krifka (1987,1989), Herweg points out characteristic persistence properties of states and events: states and durative adverbials are downward persistent, location adverbials are upward persistent. Here again one has to proceed more cautiously. States are downward entailing, activities are only almost so (up to atoms).[31] Durational adverbials are downward entailing to a resolution, but not arbitrarily: breathing, walking, or working for a while does not entail breathing, walking, or working for every minute of the while. In fact, it is not even certain that one has to breathe for any minute of it. When the scale is brought down to a minute, there may be no cover in seconds of sustained breathing left to be found.
Another qualification is in order concerning Herweg’s formulation of the anti-subinterval property: if an interval is a truth interval of an event-type proposition, no proper subinterval of it is also a truth interval. The anti-subinterval property entails an event is atomic. I have argued the subinterval property characterises states but not activities. The characteristic property of all open events is not the subinterval property, but upward closure under join. Its contrary is an anti-additive property: the join of two events is not in the same event type. I think this is the relevant property for closed events, not the anti-subinterval property. The difference is that the anti-additive property is not only satisfied by atoms, but also by arbitrary nests of events of the same type. The two definitions coincide for Boolean algebras, but the denotation of a closed event is not a Boolean algebra. I have argued that closed events are vague about their boundaries, so around a change of state, there is a nest of changes of the same type converging to the ultimate point of change. This is allowed by the anti-additive property. Recalling my earlier comment on counting waves, it is possible to count vague events by counting the culminations, even if the boundaries are vague. This shows that the anti-additive property is also sufficient for counting.[32]
Galton’s (1987) model theoretic semantics for his two-sorted system makes states denote sets of points of time while events denote pairs of states (the initial and final states of the events). Aspects are defined in terms of the states: the perfect denotes the final state, the near future the initial state, and the progressive the complement of their union. All this agrees with my developments. So does Herweg’s formalism, for all its sortal philosophy.
Instead of claiming a sortal distinction, it seems better to bank on the interdefinabilities between the two sorts (cf. Verkuyl 1993:242, Bennett 1988, 1996, Parsons 1990). There are differences between states and events, but there are obvious interconnections as well. In many sortal approaches, notably dynamic logic, there are aspect operators which take events to states and back (Löbner 1988:180, van Benthem 1996). My conclusion is the opposite of Herweg’s: not only are the state and event perspectives compatible, they are dual and interchangeable. One can construct states from events or events from time, either way (van Benthem 1982, Landman 1992). Duality is at work here, making the state and event world views intertranslatable.
Thus van Benthem (1996) maps the duality of states and events as one between monadic Boolean algebra for states and dyadic relational algebra for events, with projections taking one from procedures to statements and dynamic logic style modes (generate, test) from statements to procedures. Changes correspond to relations between states, and states correspond to identity relations. This is my construal of change as well.
A further reason not to make the stateevent duality an absolute one is that it repeats itself on higher levels of abstraction. A state of change (a change whose derivative is constant) is a state relative to change of change (a change whose derivative is a simpler change). Similarly, oscillation around a point is a state on a coarser resolution, but a change relative to a finer one.
Finally, states and changes can be dually mapped on to one another: a state is what connects two changes just as a change connects two states. This is the geometrical duality of points and lines, or graph theoretic duality of vertices and edges, or the categorial duality of objects and arrows.
Duality ambiguities between Booleans arise depending on at which step of the type hierarchy, even or odd, the operators apply (Keenan/Faltz 1984:270). This explains why natural language and seems to equivocate between token join and type meet (Keenan/Moss 1985, Westerståhl 1989:57, van Benthem 1991):
all officers and gentlemen	⋃(officers ⋂ gentlemen)	⋃(officers ⋃ gentlemen)
all officers or gentlemen	⋃officers ⋂ ⋃gentlemen	⋃officers ⋃ ⋃gentlemen
Similar things happen with questions about time. Consider the following questions and answers:
When did Socrates live?	⋃s	From 469 to 399 BC.	469≤399
When did Plato live?	⋃p	From 427 to 347 BC.	427≤347
When did Aristotle live?	⋃a	From 384 to 322 BC.	384≤322
When did Plato and Aristotle live?	⋃(p⋂a)	From 384 to 347 BC.	384≤347
When did Plato or Aristotle live?	⋃(p≤a)	Between 427 and 322 BC.	427≤322
	⋃p⋃⋃a	From 427 to 347 and from 384 to 322 BC.	427≤347⋃384≤322
When did Socrates and Aristotle live?	⋃(s⋂a)	Never.	∅
When did Socrates or Aristotle live?	⋃(s≤a)	Between 469 and 322 BC.	469≤322
	⋃s⋃⋃a	From 469 to 399 and from 384 to 322 BC.	469≤399⋃384≤322

Table 3
Socrates lives s is a state, an ideal of events. Its join ⋃s is the life of Socrates. Plato and Aristotle live is the meet p⋂a of Plato lives p and Aristotle lives a, whose join ⋃(p⋂a) is the maximal event of Plato and Aristotle both alive. The event type of Plato or Aristotle alive is p⋃a, which has disconnected members. Its join ⋃(p⋃a), the whole span of Plato and Aristotle alive, is connected and equal to ⋃p⋃⋃a, the join of their lives. The event type Socrates and Aristotle live s⋂a is empty, and the event type Socrates or Aristotle lives s⋃a is disconnected, for Socrates died before Aristotle was born.
The maximal element of s, the life of Socrates ⋃s, is atomary in its own event type. Consequently, the meet of the entire lives of Socrates and Plato ⋃s⋂⋃p is empty. By duality, ⋃s equals the meet ⋂(¬ss¬s) of the closed event type ¬ss¬s Socrates lives (once), which is a filter of events around Socrates' life. The dual of the state s Socrates lives (all the time) is the filter <s<, Socrates lives some time. The event types <s< and <a< have a nonempty meet <s<⋂<a<, Socrates and Aristotle live some time, which occupies any time including moments in the lives of both philosophers. The event type ⋂(<⋃s<⋂<⋃p<) or equivalently ⋃(s≤p) is the minimal one spanning the lives of both philosophers from the birth of Socrates to the death of Plato.
The event type Socrates or Aristotle lives s⋃a is nonempty, open and disconnected. The join of their lives ⋃s⋃⋃a is a closed disconnected event type with a gap in between when Socrates was dead and Aristotle not yet born. It holds at the equally disconnected time 469≤399⋃384≤322.
These inferences match the variety of answers available for the last two questions above. The first few answers take the question to be in sensu composito ('When was it that Socrates and Aristotle lived?'). The last answer takes the question in sensu diviso ('When did Socrates live and when Aristotle?').
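The answers in Table 3 can be recomputed with a toy interval sketch, reading s, p, a as the philosophers' life spans modeled as sets of year points (BC years negated so that ordinary order applies):

```python
def span(birth, death):
    """A life span as a set of year points (BC years negated to order them)."""
    return set(range(-birth, -death + 1))

s, p, a = span(469, 399), span(427, 347), span(384, 322)

# Plato and Aristotle both alive: the meet, from 384 to 347 BC.
both = p & a
assert (min(both), max(both)) == (-384, -347)

# Socrates and Aristotle both alive: never.
assert s & a == set()

# Plato or Aristotle alive, in sensu composito: from 427 to 322 BC,
# connected because their lives overlap.
either = p | a
assert (min(either), max(either)) == (-427, -322)
assert len(either) == 427 - 322 + 1

# Socrates or Aristotle alive: disconnected, with a gap 399-384 BC.
assert len(s | a) < 469 - 322 + 1
```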
The contrast between a state and its join (maximal element) will be important in understanding the behavior of time adverbials, which can fit an event either loosely or tightly. An adverbial has tight fit when it denotes the top element of its usual denotation. For instance: It rained yesterday could mean it rained for a while yesterday or It rained all yesterday.
Whether Booleans distribute in an event type depends on their types. For instance, (read⋃write) ⋂ yesterday is equal to (read⋂yesterday) ⋃ (write⋂yesterday) only when the Booleans are of the same order. Reading or writing all yesterday does not entail reading all yesterday or writing all yesterday: in the algebra where yesterday, read and write are elements of the same type, read or write is not a join of event types. The event type is either
(read⋃write) ⋂ yesterday = yesterday
where the Booleans are of the same type and distribution goes through, or its dual
(read⋃write) ⋂ yesterday = (read⋃write)
where the join is one order lower than the meet and does not distribute, the join being an atom in this algebra.
A filter over an event type e is an event type f so that
e ⊆ f
if g,h ⊆ f then g⋂h ⊆ f
if g ⊆ f and g ⊆ h then h ⊆ f
(Equivalently, if g ⊆ f then g⋃h ⊆ f.) Filters are closed under finite meets and inclusion. The smallest filter f over e that contains a is the filter over e generated by a. It is principal if a = ⋂f and proper if ⋂f > ∅. A maximal proper filter over e is an ultrafilter. It contains precisely one of g, ¬g for any event type g.
An ideal is the dual of a filter. A set of all complements of elements of an ideal is a filter and vice versa. An ideal over an event type e is an event type i so that
e ⊆ i
if g,h ⊆ i then g⋃h ⊆ i
if g ⊆ i and h ⊆ g then h ⊆ i
(Equivalently, if g ⊆ i then g⋂h ⊆ i.) Ideals and filters connect rings, partial orders and lattices. Ideals of a ring form a partially ordered set with respect to inclusion and a semilattice with respect to join. An ideal i is principal if it is the quotient algebra b⋂a of b relative to some element a, i.e. a = ⋃i and proper if ⋃i < 1.
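In a finite powerset algebra, the filter and ideal conditions above can be checked exhaustively; the four-element universe below is an arbitrary choice:

```python
from itertools import chain, combinations

U = frozenset({1, 2, 3, 4})
algebra = [frozenset(c) for c in chain.from_iterable(
    combinations(sorted(U), r) for r in range(len(U) + 1))]

def principal_filter(a):
    """The filter generated by a: everything above a."""
    return {x for x in algebra if a <= x}

def principal_ideal(a):
    """The dual ideal: everything below a."""
    return {x for x in algebra if x <= a}

f = principal_filter(frozenset({1, 2}))
# Closed under finite meets and upward inclusion, as the conditions require.
assert all(x & y in f for x in f for y in f)
assert all(y in f for x in f for y in algebra if x <= y)

# Complements of filter elements form the dual ideal, and vice versa.
i = principal_ideal(U - frozenset({1, 2}))
assert {U - x for x in f} == i
```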
Every ideal defines a homomorphism of Boolean algebras. Conversely, the kernel (the inverse image of ∅) of every Boolean homomorphism is an ideal (Halmos 1974:48). Specifically, the relation of a+b belonging to i (equivalent to a and b both belonging to i) is an equivalence relation (Chang/Keisler 1971:294, Halmos 1974:48). Its equivalence classes form another Boolean algebra homomorphic with b. The kernel and zero element is the ideal i and the unit is its dual filter f. When i is principal (generated by some element a) then the quotient algebra b/i is isomorphic to the quotient algebra b\a. (Chang/Keisler 1971:§5.5). Dually, the relation a↔b ∊ f is an equivalence whose equivalence classes form a Boolean algebra. When f is a principal filter generated by a, its quotient algebra b/f is isomorphic to the quotient algebra b⋂a.
For instance, divide time into night and day. Each of them is an ideal of all time and a Boolean algebra in its own right. The times of transition from night to day and back (morning and evening) are filters round the points of change from night to day and back. For a day animal, time is quotiented to daytime, night time mapped to no time ∅.
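The night/day quotient can be sketched as follows; the daylight hours are an arbitrary assumption:

```python
from itertools import combinations

day = frozenset(range(6, 18))              # assumed daylight hours
night = frozenset(range(24)) - day

def quotient(e):
    """Meet with day: the homomorphism quotienting time to daytime."""
    return e & day

assert quotient(night) == frozenset()      # night time mapped to no time
assert quotient(day | frozenset({2})) == day
# The kernel of the homomorphism is the ideal of purely nocturnal events.
assert all(quotient(frozenset(pair)) == frozenset()
           for pair in combinations(sorted(night), 2))
```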
The event type in e = of e = {x: x ⊆ e} is the ideal of subevents of event type e. Extend in to a two-place operator e in f by defining it as e ⋂ in f. The converse of temporal in is round, the filter of events including e. Event type round e equals {x: e in x}, which in turn equals the event type <e< ‘whatever event e is in’, so e in f equals f round e. The meet of in e and round e is the event type of events precisely at e.
In the temporal dimension, event type in e equals {x: <x< = e}. In the logical dimension, of live may include breathe, awake, asleep, eat, run etc (under relevant circumstances c). Breathing, sleeping, eating, etc. are facts of life.
Another dual of in e is the event type at e of event types overlapping e, defined by ¬in ¬e. at e is an improper filter over e. Events e,f overlap temporally when e at f is nonempty. e at 1 equals e. Proper overlap is e at f\∅. The relation of proper overlap is e at f > ∅, which shows that at is a variant of meet. Thus at e equals {t: e⋂t > ∅}, so e at t equals e ⋂ at t equals e⋂t. The prepositions in, at, and round are related by at e = in e ⋂ round e = ⋃in e = ⋂round e. (More prepositions in the section on location.)
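The prepositional operators can be sketched with event types as sets of time points (toy values, chosen only for illustration):

```python
def in_(e, f):
    """e in f: e is a temporal subevent of f."""
    return e <= f

def round_(e, f):
    """e round f: e includes f (converse of in)."""
    return f <= e

def at(e, f):
    """Overlap: e ⋂ f > ∅; symmetric, a variant of meet."""
    return len(e & f) > 0

morning, day = frozenset({6, 7, 8}), frozenset(range(6, 18))
assert in_(morning, day) and round_(day, morning)   # converses
assert at(morning, day) and at(day, morning)        # overlap is symmetric
```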
e at t where e is an event and t a time makes e simultaneous with t. Event type e at t at f where t is a time and e,f event types makes events e and f simultaneous. Two times t=u are simultaneous when they are identical.
Some event calculi take overlap as a primitive notion (either as a function or as a relation) instead of adjacency (Russell 1926, Thomason 1979, van Benthem 1985:37ff). Hamblin (1971) has adjacency (‘abutment’) as the primitive relation. Burgess (1981) starts with inclusion and proper precedence. Given the interdefinabilities, it does not matter which relation is taken as basic.
Atom is the Greek word for individual. In the Boolean framework, the notion of individual appears as that of an atom. By definition, a and b are relatively atomary if they are identical or disjoint. This condition reduces the usual four-field of Boolean relations (same, disjoint, nested, independent) to a two-valued identity. An element of a Boolean algebra is an atom if it is atomary relative to every b. Equivalently, an atom is a minimal nonzero element in the Boolean inclusion order.
A key insight obtained from Boolean algebra is that being an atom is a relative notion. For instance, multinational IBM is both here and there, so places and companies are not relative atoms.
Atoms can be created by quotienting a Boolean algebra relative to some element, i.e. intersecting each element of the algebra with it. This move may exclude some elements, mapping them to ∅ (better still, to ∅*) and turn some previously divisible elements into atoms. This compares to taking a projection of event types relative to an alphabet (Naumovich/Clarke 2000).
This happens in generics. For instance, mapping individual lions to a subset of male lions allows a twovalued answer to a question like does a lion have a mane. Mapping visits to the bathroom to ∅* allows claiming that one worked all day. Lifting the type of the universal quantifier turns it into a higher order atom. Context shift in general involves taking quotients. USA and Iraq may both be bad, but Iraq is worse, so between the two of them, USA is good.
Taking quotients is an instance of defining a Boolean homomorphism: a monotone (continuous, distributive) mapping which preserves Boolean operations and relations, including zero and unity. The part of the domain mapped to zero is known as the kernel of the homomorphism. If the kernel is ∅, the mapping is an isomorphism.
A partition is a quotient of a Boolean algebra. A partition is a disjoint union generated by a set of relative atoms which sum up to 1. The elements are the atoms of the generated algebra. There is a natural injection from smaller elements to the partition. To extend this mapping into a full homomorphism decisions must be made about elements straddling the partition. Generic concept formation involves such decisions: is a grey horse, or a black and white horse, a white horse or a black horse? If light grey is practically white and dark grey is black, where does medium grey go? If it goes to white, then white is not closed under complement. Neither is black. Thus vagueness is apt to lead to truth value gaps.
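The grey-horse decision can be sketched as the construction of a two-block partition; the numeric shade scale and cutoff are assumptions for illustration:

```python
shades = range(0, 101)                 # 0 = black ... 100 = white (assumed)

def classify(shade, cutoff=50):
    """Force every shade into one of two relative atoms."""
    return "white" if shade > cutoff else "black"

blocks = {c: {s for s in shades if classify(s) == c}
          for c in ("black", "white")}

# The blocks are disjoint relative atoms summing to 1 (the whole scale).
assert blocks["black"] | blocks["white"] == set(shades)
assert blocks["black"] & blocks["white"] == set()
# Borderline greys are settled wholesale by the cutoff: the decision point
# where vagueness threatens the homomorphism.
assert classify(50) == "black" and classify(51) == "white"
```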
A partition which is the quotient of a weak order can be mapped to numbers. This road leads to measurement.
Event types are terms which denote nonfinite events. What does it mean for an event type to be grammatically finite?[33] According to one tradition, a finite declarative sentence does not denote anything. When one asserts an event, one is asserting a Boolean identity, not just quoting a Boolean term. By another one, a finite event type denotes its truth value. According to a third one, asserting an event type means inserting a token of it into the maximum event token, the world. (This is the definition of truth in situation semantics and discourse representation theory.)
In my view, the notions are compatible. A finite event type denotes an event token, an event related to here and now. We assert things when we point out event tokens in our environment. In this view, there is no essential difference between pointing out and asserting (the two phrases are synonymous in English, by the way). An event type like rain here now again denotes what it is true of: the meet of rain, here, and now.
By means of a truth definition, one can quantify, or measure how true an event type is. The event type rain here now is entirely true if the event types rain and here now single out the same region of nspace separately as well as put together. It is partly true if rain here now denotes something positive, more than ∅. As a generic assertion, it is mostly true if the type rain covers most of the token here now.
Equivalently, one can consider quantifiers as measures of truth or alternative truth definitions. The common property is that the measure is logical, i.e. invariant under permutation or automorphism of the domain (van Benthem 1986,1991,1996).
Under a tight (maximal) truth definition, rain here now is true only if it is entirely true, i.e.
true e = (e = 1)
Under a loose (minimal) truth definition, it is true if it is partly true.
true e = (e > ∅)
These perspectives on truth are interchangeable. Event type rain ⋂ here now is true on the tight truth definition when rain ⊇ here now is true on the loose truth definition.
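The tight and loose truth definitions can be sketched over a toy grid of space-time points (all data hypothetical):

```python
# A toy grid of space-time points; the whole grid is the unit event 1.
region = {(x, t) for x in range(4) for t in range(4)}
rain = {(x, t) for (x, t) in region if t < 2}      # hypothetical event type
here_now = {(1, 0), (1, 1)}                        # a token region

def true_tight(e, domain):
    """Tight truth: e = 1 relative to the domain."""
    return e >= domain

def true_loose(e):
    """Loose truth: e > ∅."""
    return len(e) > 0

# rain ⋂ here now is tight-true relative to here now exactly when
# rain covers here now, i.e. rain ⊇ here now.
assert true_tight(rain & here_now, here_now) == (rain >= here_now)
assert true_loose(rain & here_now)
```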
Accordingly, there are two notions of equivalence between event descriptions, same denotation and same truth value. Two event descriptions may be logically equivalent i.e. the truth (e.g. nonemptiness) of one entails that of the other but denotationally not equivalent, i.e. denote different things, for instance, die (alive.¬alive) and dead (alive`¬alive).[34]
Propositional connectives ∧,∨,~ abbreviate the composition of a truth definition true (a Boolean morphism to the binary algebra 2) with Booleans ⋂,⋃,¬. Similarly ≤ represents ⊆ in 2.
Consider next binary truth definitions/quantifiers, or mappings in 2^{A×A}. The smallest Boolean algebra is the binary algebra 2 consisting of 1 and ∅. This is the algebra of two truth values, truth and falsity, which can be identified with the unit event, i.e. the world, and the empty event, or nothing, respectively.
The Boolean algebra for one event type a is the product algebra 2×2, or the four-element algebra 4 with elements 1, a, ¬a, ∅ and atoms a, ¬a. Quotienting with a, so that a = 1 (the entire universe of discourse), reduces the algebra back to 2.
The algebra 4 is also the exponential of the smallest Boolean algebra 2. It defines the four corners of Aristotle’s square of opposites, or the four truth values of the Klein four group. 4 relates to 2 as Heyting algebra to Boolean algebra, or topos to logos in category theory.
The Boolean algebra 4 matches the four group all, not all, some, no of the Aristotelian logic of classes and the propositional logic connectives if, but not, and, neither-nor.

all: ¬a⋃b	not all: a⋂¬b
some: a⋂b	no: ¬a⋃¬b

Table 4
The binary some A are B is symmetric and equals the unary some are A and B. Its binary quantifier dual is the asymmetric all A are B, not the symmetric all are A or B. At this size, the quantifier most falls together with all.
Boolean algebras form a cartesian closed category with ⋂ as product and ⋃ as coproduct, having quotient \ and residual → as their respective adjoints. In categorical terms, some A are B and all A are B are not dual, but adjoint (Pratt 1997).
The asymmetry of A and B is the token-type, subject-predicate duality of the two dimensions of a Chu space: A is the domain talked about, B is what is said about it. Put another way, binary quantification is unary quantification restricted to the domain A (van Benthem 1986). In game theoretic terms, a game connected with a binary quantifier involves a choice of a subgame connected with A (Carlson/Hintikka 1976).
A two-valued truth definition (like the ones just described) is a Boolean morphism which sends event types to one or the other of the limits of the category of events, 1 and ∅.
A comparative notion of quantifier is obtained by considering generally Boolean morphisms h under the truth definition
Qa iff ha ≥ h(¬a)
The Boolean quantifiers are obtained as special cases by letting ha test a = 1 for all, a < 1 for not all, a = ∅ for no and a > ∅ for some.
Binary quantifiers like most A are B are not definable in terms of unary ones in this setting. The truth definition takes the form (van Benthem 1986:82)
aQb iff h(a⋂b) ≥ h(a\b)
Let h be the identity mapping and ≥ Boolean inclusion. Then aQb reduces to a⋂b ⊇ a\b, which holds just in case a\b = ∅: the Boolean all. Proper inclusion > gives the Aristotelian some and all with existential import: a⋂b > ∅ and a\b = ∅.
When h is a bijection (automorphism, permutation), we get the counting quantifiers: h maps the smaller member of the comparison to a part of the bigger. A set is infinite if a bijection h maps it to its proper part. There are more fractions than integers in terms of Boolean inclusion (h identity), but as many in number (h bijection).
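The comparative scheme can be sketched directly; the swan data is a hypothetical illustration of h as a counting measure:

```python
def Q(a, b, h):
    """aQb iff h(a ⋂ b) >= h(a \\ b)."""
    return h(a & b) >= h(a - b)

# Hypothetical swan data; h = len is the counting measure, giving "most".
swans = {"s1", "s2", "s3", "s4", "s5"}
white = {"s1", "s2", "s3", "s4"}
assert Q(swans, white, len)                 # most swans are white

# Boolean quantifiers as the special cases of the scheme.
all_q = lambda a, b: len(a - b) == 0        # all: a\b = ∅
some_q = lambda a, b: len(a & b) > 0        # some: a⋂b > ∅
assert not all_q(swans, white) and some_q(swans, white)
```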
Given a measure on the space of events, we can construe a truth definition for rain as a normalised measure ranging between ∅ and 1 of the region covered by the event type rain. Under this definition, the event type is entirely true if its truth value is 1, partly true if it exceeds ∅, and mostly true if it exceeds half. A binary truth definition for rain here similarly divides the region covered by the event type rain by the region covered by the event token here. This way leads to probability.
Quantifying a generic claim amounts to a straight rule of induction. That swans are white is mostly true (its truth value exceeds half) means the truth value of most swans are white is 1.
A logic of counterfactual conditionals is obtained when h is a utility function (van Benthem 1986:87). This view will be detailed further anon.[35]
The Boolean relations a⊆b and a=b are binary Boolean quantifiers (all and all and only, respectively). They reduce events to two (the unit or the empty event), which makes them distinct from the event types a↔b and a→b. The former can be defined in terms of the latter using the unary quantifier all (a = 1): a=b = 1 iff a↔b = 1, and a⊆b = 1 iff a→b = 1.[36]
Yesterday, considered as a binary Boolean quantifier, leaves It rained yesterday vague, ranging from rain⋂yesterday > ∅ (it rained some yesterday) to yesterday→rain = 1 (it rained all yesterday), while It did not rain yesterday ranges over the complement. The truth conditions of it rained yesterday (the conditions under which rain⋂yesterday is rounded up to 1) are correspondingly context dependent and subject to negotiation.
This type of range of vagueness between dual extremes is not only characteristic of event language. It is an instance of genericity, which crops up all over natural language where explicit quantifier words are missing. Yesterday was rainy or It rains this time of the year are equally vague.
The binary logic of truth and falsity thus involves a reduction of the four-member Boolean algebra 4 of the square of opposites to the minimal Boolean algebra 2. Classical two-valued logic (precisely one of e⋂t and ¬e⋂t is (non)empty) holds just when some entails all, i.e. when the mapping defines a Boolean homomorphism of predicates, in effect a singular term (Löbner 1990:16-17, van Benthem 1991:141).
There are three (nonexclusive) situations where this happens. (i) One is when the reference time t is an equivalence class of coextensive events covering all of t. For instance:
It rained all day yesterday.
Then just one of the contraries rain and ¬rain has a nonempty meet with t. (ii) Another situation is when t has indiscrete (atomic) granularity: all but an atomary subevent of t remains under erasure of irrelevant events.
The boss was (not) satisfied with your work yesterday.
The relevant occasion t might be at the end of the day when the boss comes to inspect the day's work. Effectively, the event type has the form satisfied⋂yesterday⋂t for a suppressed atomary t. (iii) A third case is when the event type e is upward monotone (existential). Then e⋂yesterday⋂t nonempty already entails e⋂yesterday nonempty. For instance,
It rained a while yesterday.
These cases thus exemplify how the generic vagueness of yesterday can be eliminated using explicit determiners:
It rained at that point/some(time)/(for) a while/part of the time/occasionally yesterday.
It rained all (day/through)/(for) the whole day/all the time/constantly yesterday.
Two-valued logic is imposed on an event algebra when there is a morphism to the algebra 2. There are different ways of finding such a morphism. A truth definition or quantifier is one way. Genericity by taking quotients is another. Going up in types is a third one. (They really all amount to the same, deep down.)
Two-valuedness also follows from the definition of an atom: a⋂b = a or a⋂b = ∅. An event type A be B is two-valued if the subject is atomary relative to the predicate (or vice versa). In this case, the strong truth definition, tight match or universal quantifier all A be B (a⋂b = a), and the weak truth definition, loose match or existential quantifier some A be B (a⋂b > ∅), coincide into the definite article the A be B: (a⋂b = a) = (a⋂b > ∅) = 1, which equals (a⊆b) ⋂ (a>∅).
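The coincidence of the strong and weak truth definitions on atomary subjects can be checked mechanically; the following brute-force sketch (my own set encoding over a small universe) verifies it.

```python
import itertools

# Sketch: when a∩b = a or a∩b = ∅ (a atomary relative to b) and a > ∅,
# the universal reading (a∩b = a) and the existential one (a∩b > ∅) agree.

def strong(a, b):          # tight match: all A be B
    return a & b == a

def weak(a, b):            # loose match: some A be B
    return len(a & b) > 0

def atomary(a, b):
    return a & b == a or a & b == set()

universe = {1, 2, 3}
subsets = [set(c) for n in range(4) for c in itertools.combinations(universe, n)]
for a in subsets:
    for b in subsets:
        if a and atomary(a, b):          # nonempty atomary subject
            assert strong(a, b) == weak(a, b)
print("strong and weak truth definitions coincide on atomary subjects")
```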
This correspondence is neutral about number. Plural definite the A are B is tantamount to all A are B (except perhaps for the existence of A). Type ascent to plural definites also does the job of singular every A B for monadic first order event types B.
I exemplify this observation with temporal adverbials.
A tight (durative) temporal quantifier like all day in sleep all day, day ⊆ sleep, becomes atomary on events one type up, denoting the type of events that last all day. Sleep is either in or out of it, there is no halfway: e⋂all day = e or e⋂all day = ∅ for any e. With this proviso, sleep all day can be written as a two-valued meet sleep ⋂ all day.
(wake⋃sleep) ⋂ all day does not entail (wake⋂all day) ⋃ (sleep⋂all day). For suppose the meet distributed over the join. Then (wake⋃sleep) lasting all day would entail that wake lasts all day or sleep lasts all day, which rules out the case where each of wake and sleep is half in and half out of all day. But that is exactly a case the left hand side allows.
A loose (frame) time adverbial like walk today, walk ⋂ in today, is the type of events occurring any time today. For loose adverbials, dual conditions hold. (walk⋂in today) ⋂ (talk⋂in today) does not entail (walk⋂talk) ⋂ in today, i.e. walking sometime today and talking sometime today does not entail walking and talking at once. A tight adverbial commutes with its loose dual over negation: ¬(sleep well⋂all last night) is equivalent to ¬sleep well⋂in last night. I did not sleep well last night means I slept unwell part of the night.
Point time adverbials like at that point are atomary, and satisfy all the conditions of a Boolean homomorphism (Löbner 1990:16-17): (e⋃f) ⋂ t = (e⋂t) ⋃ (f⋂t), (e⋂f) ⋂ t = (e⋂t) ⋂ (f⋂t), ¬(e⋂t) = ¬e⋂t. One talks or listens at a point iff one talks at the point or listens at it, talks and listens at a point iff one talks at the point and listens at the point; and one does not talk at a point iff one shuts up at it.
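Löbner's homomorphism conditions for point adverbials can be illustrated with a one-point time; the toy universe below is my own.

```python
# Sketch: an atomic (point) time t induces a two-valued Boolean
# homomorphism e ↦ "e holds at t".

U = set(range(10))          # toy time line
t = 3                       # a point time

def at(e):                  # truth of event type e at the point t
    return t in e

talk, listen = {1, 2, 3}, {3, 4}
assert at(talk | listen) == (at(talk) or at(listen))    # preserves join
assert at(talk & listen) == (at(talk) and at(listen))   # preserves meet
assert at(U - talk) == (not at(talk))                   # preserves complement
print("a point time defines a Boolean homomorphism into 2")
```

For an extended t read existentially, the meet and complement conditions fail: one can talk sometime during t and listen sometime during t without ever doing both at once.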
The event type of states has the structure of a Boolean ideal (Halmos 1974). An ideal is a set closed under joins and subsets. A state s is a complete ideal, which entails that it is the principal ideal generated by ⋃s. A Boolean principal ideal is a Boolean algebra, and so is the complement state, the principal ideal generated by ⋃¬s. The whole time line gets divided into two Boolean algebras, one for s and the other for ¬s, so that any time is covered by a join of times in s and ¬s. If s is compact, time is divided into alternating connected stretches where one of s and ¬s holds continuously. s holds at those stretches where ¬s does not hold.
Since states are principal ideals, there is a (possibly disconnected) maximal or unit element, the state ⋃s. The event type s is the Boolean algebra generated by ⋃s. Its null element ¬⋃s is the unit element ⋃¬s of the complement state ¬s.
However, this particular cover of all time does not exhaust all the times there are. When we move to consider longer times, we find most times are not covered by either state alone. Such other times are covered by the mixed event types in (¬s⋃s)^{+}. This means that there are lots of times at which neither s nor ¬s holds, for instance times of change ¬ss. This is the content of the claim that interval semantics entails truth value gaps (Hamblin 1971:97, Kamp 1981b:44, van Benthem 1982). Complementary states are complementary on time (points) but just contrary on times (intervals). They do not divide up all times between them, although they divide up all the time. This type of truth value gap has little to do with vagueness, for it holds even if the boundary between the two states is sharp (contracts to a point).
van Benthem (1982:§II.3, 1985:41-43) surveys attempts in the tense logical literature to juggle with valuations to preserve classical logic on all periods. It seems best to accept the fact that most event types do leave truth value gaps for most times. If I slept in little fits last night, I was neither asleep nor awake for the night (though I did both in that night). This notion explains the duality of truth in or at/for a time. (Compare the chapter on adverbials of time, and Vlach 1979, 1981, 1993.)
Antonyms create an Aristotelian square of opposites (van Benthem 1986:110, Löbner 1990:106) where a distinction between contradictory (outer, weak, periodwise) and contrary (inner, strong, pointwise) negation arises. The contrary of a state (as asleep to awake) is its antonym: ¬s never asleep of s always asleep. The weak, contradictory negation ~s not always asleep denotes times for which s is not always true, i.e. those times at which ¬s is sometime true. The contradictory contains events not entirely in s. The contrary contains events entirely not in (disjoint from) s.
Figure 6
Figure 7

The contradictory ~s can be written as the relative complement (s⋃¬s)*\s. It is not a state, because it is not downward entailing (tokens of ~s can contain tokens of s). ¬s entails ~s but not vice versa. Both complements are self-dual (¬¬s and ~~s equal s). The dual ~¬s not never asleep of s asleep is equivalent to <s< or sometime asleep. ¬~s never not always asleep is equivalent to always asleep. A strong (tight, downward entailing, universal) event type all e is a dual of a weak (loose, upward entailing, existential) one some e. This duality will come up in connection with time adverbials, for times too can be loose or tight (sometime yesterday vs. all yesterday).
The different negations, identity (¬¬, ~~, or the strong positive ¬~ always), contradictory ~ (not always), contrary ¬ (never) and dual ~¬ (sometime), form a Klein four-group under composition (van Benthem 1986:110, Löbner 1990:72). Its multiplication table can be read off the square of opposites by following the lines of the above commutative diagram.
°     1     ~     ¬     ~¬
1     1     ~     ¬     ~¬
~     ~     1     ~¬    ¬
¬     ¬     ~¬    1     ~
~¬    ~¬    ¬     ~     1
Table 5
In this multiplication table, every operator is self-dual, identity is the composition identity, and the composition of any two of the remaining three operators equals the third one.
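The group structure can be verified directly; below, the four operators are encoded (my own choice) as elements of Z2 × Z2, the standard presentation of the Klein four-group.

```python
# Sketch: identity 1, contradictory ~, contrary ¬ and dual ~¬ as the
# Klein four-group Z2 x Z2 under componentwise addition mod 2.

ops = {"1": (0, 0), "~": (1, 0), "¬": (0, 1), "~¬": (1, 1)}
names = {v: k for k, v in ops.items()}

def compose(x, y):
    return names[tuple((a + b) % 2 for a, b in zip(ops[x], ops[y]))]

for x in ops:
    assert compose(x, x) == "1"              # every operator is self-inverse
# composing any two distinct non-identity operators gives the third:
assert compose("~", "¬") == "~¬"
assert compose("¬", "~¬") == "~"
assert compose("~", "~¬") == "¬"
print("the four negations form a Klein four-group")
```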
A change s¬s is one corner of the square of opposites formed by the four-field (s⋃¬s)^{2} (Ar. Phys. 229b-230, van Benthem 1986:110, Löbner 1984, 1990):
ss      s¬s
¬ss     ¬s¬s
Table 6
The Boolean complement of a change ¬ss ‘become’ with respect to the four-field (s⋃¬s)^{2} can be seen to be ss ⋃ 1¬s ‘stay, stay not, or become not’.
The pointwise contrary of event type ss ‘stay’ is ¬s¬s ‘stay not’, the dual is ¬ss ‘become’, and the contradictory s¬s ‘become not’, or perish. These four operators obey a Klein four-group multiplication table.
Thus the pointwise complement of ¬ss is the converse change s¬s. Its contradictory in that algebra is ¬s¬s, the denial of a change. In this operation, the antecedent state ¬s of a change ¬ss is a lexical presupposition (Fillmore 1968, Givón 1972), i.e. it stays outside the negation, as if the change denoted its result ¬s`s. Another way of expressing the point is to say that a change ¬ss is the meet of a presupposition ¬s< and an assertion <s, and its negation is the relative complement (difference) of the presupposition and the assertion ¬s< \ <s.[37] Given the presupposition, the complement reduces to ¬s¬s which is open. For instance, the denial of leave is stay, of type ss, which is open.
The contradictory of a simple change c = ¬ss relative to (s⋃¬s)* is the join of all other alternating sequences of ¬s and s, i.e. the relative complement (s⋃¬s)*\(¬ss) or
s(s⋃¬s)^{*} ⋃ ¬s(s¬s)^{+}s ⋃ (s⋃¬s)^{*}¬s
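This relative complement can be machine-checked on short alternating sequences; in the sketch below (my own encoding) s stands for a period of s and n for one of ¬s.

```python
import re

# Sketch: among alternating words over {s, ¬s}, everything except the
# bare change ns matches  s(s∪¬s)* ∪ ¬s(s¬s)+s ∪ (s∪¬s)*¬s.

def alternating(max_len):
    words = []
    for first in (0, 1):                  # 0 starts with s, 1 with n
        for length in range(1, max_len + 1):
            words.append("".join("sn"[(first + i) % 2] for i in range(length)))
    return words

pattern = re.compile(r"s[sn]*|n(sn)+s|[sn]*n")
for w in alternating(6):
    assert bool(pattern.fullmatch(w)) == (w != "ns"), w
print("the three-term join is exactly the complement of ns")
```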
The general lesson of these considerations is that complementation is context dependent, relative to a background universe (van Benthem 1986:56). For states, ¬s is the complement of s relative to s⋃¬s and ~s is the complement relative to (s⋃¬s)*. Relative to an algebra where event e is an atom, contrary ¬e and contradictory ~e fall together: the event is either all there or not at all.
Truth value gaps mean that event type s⋃¬s is not 1 in all events (it is in discrete time), but s⋃~s and s⋃(1\s) are. For the same reason, s.s⋃s.¬s is not always 1, but s.s⋃s.~s is.
A consequence of the relativity of complement to algebra is the natural language fact that It always rains sometime or its dual It sometimes always rains are distinct event types, and both distinct from the one quantifier versions it always rains or it rains sometime(s). (Note the singular-plural distinction in English between narrow scope durative sometime and wide scope frequentative sometimes.) If always and sometime ranged over the same domain here, one of the quantifiers would be vacuous. This observation is related to resolution. It will be subjected to closer inspection in the section on temporal quantifiers.
The law of excluded middle does not hold of contraries, so s⋃¬s is not valid. But can any time be divided up into alternating periods of s and ¬s, i.e. is (s⋃¬s)* valid? Yes, for compact time (a space is compact iff every open cover has a finite subcover, Kelley 1955). It holds for finite time, for discrete time, and for closed and bounded subsets of real time.
An observation related to the duality of sometime against always or only is that it rained yesterday rain⋂yesterday can denote anything between them, for it denotes the meet. Simple truth is a case of genericity. This is the topic of the next section.
Truth is an explicit match between object language event types here, not a contextualised notion of truth at an implicit index as in Montague's pragmatics. Otherwise put, indices become explicit event types (now). This will be covered more fully in the chapter on tense.
van Benthem (1996) recalls the Aristotelian view that truth is a relation between language and some aspect of the world, adaequatio ad rem. The algebraic truth definition above makes no sortal distinction, but rather reduces truth to consistency between two descriptions of the same world.
Here is a duality of perspective between syntax and semantics, language and metalanguage, or local versus global variables. The semantic perspective, represented in full by Montague’s pragmatics, with modal and dynamic logic as examples, hides aspects of logical structure from view, or moves them from object language into a metalanguage, and studies logic of the remaining visible tip of the iceberg.
The syntactic perspective, represented by translation of intensional logic into classical logic, codes the index of evaluation in the object language as explicit parameter. It then deduces the form of the tip of the iceberg from the entire iceberg (classical logic) and the water (the translation).
Both perspectives are illuminating. The form of the tip may be interesting in its own right, witness modal logic. The translation may be interesting in its own right, and it allows applying known properties of classical logic to the nonclassical fragment.
The difference between the local and global variable (dually, the global vs. local situation) approaches is described in Rescher and Urquhart (1971:16ff) and discussed in Scott (1970:150) and Dowty (1982:44). It also ties up with our discussion of the notion of perspective later on.
Positive means put: Latin < ponere ‘set forth’. Positive means that there is something: an event type is instantiated by an event token, it is not empty. It is raining means rain > ∅. By duality, a weak (existential) truth definition for positive claims entails a strong (universal) truth definition for negative claims. If it is raining means rain > ∅, then its denial it is not raining means rain = ∅, which in turn means ¬rain = 1.
This gives a starting point for understanding scoping and binding of variables. The starting point is that a variable inside a negation is universal, because it is part of a negative claim, and negative claims are universal (apply the strong truth definition), just because positive ones are existential (apply the weak one). Thus there is nothing special to say about x come > ∅ being understood as existentially quantified and x come = ∅ alias ¬(x come > ∅) or ¬(x come) = 1 being universally quantified. It all follows from the first choice.
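The duality can be made concrete with sets; the toy day below is my own illustration.

```python
# Sketch: a weak (existential) truth definition for a positive claim
# forces a strong (universal) one for its denial.

U = set(range(24))                 # hours of a toy day
rain = {14, 15}                    # hours at which it rains

raining = len(rain) > 0            # it is raining: rain > ∅
not_raining = rain == set()        # denial: rain = ∅

assert raining and not not_raining
assert not_raining == ((U - rain) == U)   # rain = ∅ iff ¬rain = 1
print("denying the existential claim yields a universal one")
```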
The observation also predicts that some is the least marked quantifier and none is its denial. Some may be left out entirely, none may be expressed by negation alone. All is positive but must be marked, and some not is doubly marked.
By duality, the unmarked connective is and. Asyndetic connection is read as conjunction. This follows from the duality of tokens and types. Setting the type of tokens to X, the type of quantifiers is covariant to it, being doubly dual 2^{2^{X}}. The type of connectives 2^{X} is dual to X and contravariant. [38]
Compare now event types If it rains it pours and it pours or it does not rain. These event types are equivalent in Boolean terms, for r→p and p⋃¬r are the same Boolean function. The difference is that the first one is generic, the second need not be. In terms of truth definitions the strong definition is applied for if, the weak one for or. To me this suggests that a conditional event type is at bottom a negative one, of form ¬(r\p > ∅), which converts to r→p = 1 or r⊆p.
The situation gets more involved when variables span event types. For similar operators, scope does not matter: x come ⋃ x go equals _ come ⋃ _ go. For dual ones, it makes a difference: x come ⋂ x go entails _ come ⋂ _ go but not conversely.
Following van Benthem (1986), we can classify occurrences of event types as positive or negative depending on how many negation signs they occur inside. For the count, all Booleans are thought of as spelled out in Boolean lattice operators (meet or join; which one does not matter, the count is the same by the de Morgan laws). Different occurrences of the same variable may get different counts. In px → qx, the occurrence in the conditional is negative and the occurrence in the main clause positive. According to the explication of the conditional event type above, px → qx comes from ¬(px\qx > ∅), which predicts that the shared variable gets universally bound.
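The polarity count can be implemented directly; the nested-tuple encoding of formulas below is my own.

```python
# Sketch: polarity of an occurrence = parity of the enclosing negations;
# meets and joins leave the count unchanged (de Morgan).

def occurrences(formula, polarity=+1):
    op, *args = formula if isinstance(formula, tuple) else ("atom", formula)
    if op == "atom":
        yield args[0], polarity
    elif op == "neg":
        yield from occurrences(args[0], -polarity)
    else:                               # "and" / "or"
        for arg in args:
            yield from occurrences(arg, polarity)

# px → qx rewritten as ¬(px ∧ ¬qx): px comes out negative, qx positive
conditional = ("neg", ("and", "px", ("neg", "qx")))
print(dict(occurrences(conditional)))   # {'px': -1, 'qx': 1}
```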
I shall not write out quantifiers most of the time, but rely on the above laws connecting variable scope and quantifier character. There are many ways to make meaning explicit when needed, such as abstraction x:px → qx or type shifting (p→q)x.
The notion of scope in natural language semantics is a contested notion. Scope concerns the order of operators, which in turn can encode either informational dependency or domain dependency (Carlson/ter Meulen 1979).
Scope does not always make a difference to truth. Effective scope depends on what exchange and reduction axioms given operators satisfy. A commutative, associative and idempotent operator shows no scope at all. The dual distributive Boolean operators allow flipping scopes around negation. In categorial grammar (Lambek 1958) and Montague grammar (Montague 1968), the devices of lambda calculus (Church 1951) and combinatory logic (Curry and Feys 1968) for shifting operators about are used to the full (van Benthem 1985:§7, Landman 1991). On the other hand, on a fixed level of the logical type hierarchy, operator orderings may allow only one solution.
I believe with van Benthem (1986) that ad hoc type conversions and an associated loose or shifting notion of scope are a characteristic feature of natural language. This does not mean that natural language should never exhibit scope; there are phenomena which can be explicated by scope or order of operators. It means that in many cases, natural language expressions do not have fixed scope because operators are interchangeable, either because operators are commutative or because a sequence of operators allows alternative decompositions due to associativity or duality.
The English any is a case in point: any is one word in English though in many occurrences it is ambiguous between wide scope existential and narrow scope universal force. Logically, it can be shown that any must be a narrow scope existential in some of its uses, but a wide scope universal in others. A game theoretical generalisation covering all the uses is that any always marks an opponent’s move. There are many similar polarity sensitive dual purpose words in English and other natural languages.
Vagueness about scope is thus one important source of flexibility in natural language. We shall find many instances of this. In fact, I believe that both variability and invariance in the typology of natural languages can be explained by the existence of such meaning preserving permutations of operators.
The idea that aspects interact compositionally and not (only) intersectively is not new (Verkuyl 1972, Lindstedt 1984, 1985, Dahl 1985:18), but it is not always appreciated. Failure to appreciate it explains the belief that opposite aspects cannot be combined consistently, a belief which has been used as an argument for a strict separation of aspect and Aktionsart.
For instance, Bache (1985:94) claims that a distinction between aspect and Aktionsart is absolutely essential, because cases where both aspects (imperfective and perfective) are compatible with a particular Aktionsart can only be described appropriately if aspect and Aktionsart are autonomous categories (Bache 1985). Examples where open Aktionsart is compatible with perfective aspect or a bounding adverbial are taken to support the claim (Bache 1985:128). Johanson (1998:§10.2) spells out the implicit argument here explicitly:
…for example, the Bulgarian imperfective vs. perfective and Imperfect vs. Aorist distinctions are of different kinds. If perfective and Aorist are both defined as expressing "boundedness", whereas imperfective and Imperfect are both taken to express "nonboundedness", the combinations "imperfective Aorist" and "perfective Imperfect" get contradictory meanings that are impossible to account for.
Similar incompatibility arguments appear in Filip (2000:77).
The distinction between aspect composition and event type intersection will turn up here and there. Scope is relative to logical type. It is possible to represent a composition as a Boolean meet or conversely by juggling types appropriately, just as one can turn function-argument relationships around by raising type. But there will be a lowest type where the logic comes out right (Löbner 1990). For instance, Russian imperfective can be characterised as the composition of its various special cases or as an alternation between them. Alternation is entailed by composition here, because the component aspects reduce to identity as a special case.
Reassignment of types affects scopes. An interesting case is the relative scope of iteration and meet. Consider Deti umirali vse odin za drugim 'the children all died one by one' (Forsyth 1970:156). This may seem at first blush a clear case of scope: "for each child there was a time when it (alone) died". However, lifting types, the event type becomes a match of two disconnected event types, one by one and children die.
From the compositional point of view, two-level theories of aspect which separate ‘viewpoint’ from ‘situational aspect’ (Smith 1991) or ‘aspect proper’ from ‘actionality’ (Bache 1985, Bertinetto/Delfitto 1998) simplify aspect composition into a dichotomy, exploiting the empirical fact that more than two rounds of composition rarely get grammaticalised.
That semantics is no object is demonstrated by Partee’s (2000:488) series of aspectual constructs of increasing complexity: write a letter every day for weeks every summer for years…
It has not gone unnoticed that much of our talk about time and events involves inherently geometrical, specifically topological notions: openness or closure toward the past or future, continuity, adjacency, nearness and separation, distance, or remoteness. People talk of perspectives on an event: viewing an event from the inside as extended, or from the outside, contracted to a point. This is clearly topological or geometrical talk (Rescher and Urquhart 1971:37).[39]
Topology books (Kelley 1955) usually start out with point set topology, taking the notion of point as a primitive. Equally well, one can define a topology directly on open and closed sets and define points as atoms (elements which have no nonempty proper parts). Cf. Wiener (1914), Hamblin (1971), Kamp (1979), Humberstone (1979), Röper (1980), Burgess (1982), van Benthem (1982).
A topological space is a set of objects with a topology. A topology is a family of objects (open sets) containing 0 and 1 and closed under finite meet and complete join. The complement of the topology is its dual, the family of closed sets, closed under finite join and complete meet. An element which contains another element is one of its neighborhoods. The closure of an element is the meet of the closed elements containing it. The interior of an element is the join of the open elements it contains. The boundary of an element is the difference of its closure and interior.
The distinction between point and region is a topological one. A region has a nonempty interior. A point is a closed element whose interior is 0. Two elements are connected if one meets the closure of the other, otherwise they are separated. An element is connected if it is not the join of nonempty separated elements.
A function in a topological space is continuous when it preserves complete joins and finite meets. A homeomorphism is a two-way continuous one-one mapping. Deformations, including contraction and dilatation, involve continuous mappings. For instance, a region of the line can be contracted into a point if and only if it is connected.
Topological spaces and Boolean algebras are dual. More exactly, every Boolean algebra is isomorphic to a topological space. The set of all Boolean (2-valued) homomorphisms on a Boolean algebra A forms a topological space, the dual space of the algebra. Starting from the smallest Boolean algebras, 2 is the indiscrete topology. The two-atom algebra 4 has the discrete topology, where all sets are both closed and open. The first Boolean algebra with nontrivial topology is the three-atom algebra 8, whose open sets are 0, 1, a, b, a+b; the closed sets are their complements. The atom c = ¬(a+b) is closed but not open. This is the smallest algebra where a distinction between open and closed event types can be made. The open event type containing a, b, a+b is closed under joins, while the closed event type c = ¬(a+b) is atomary.
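The three-atom example can be checked concretely; the encoding of 8 as subsets of {a, b, c} is my own.

```python
# Sketch: opens 0, a, b, a+b, 1 of the three-atom algebra form a topology;
# the closed sets are their complements.

U = frozenset("abc")
opens = {frozenset(), frozenset("a"), frozenset("b"), frozenset("ab"), U}

for x in opens:                      # closure under meet and join
    for y in opens:
        assert x & y in opens and x | y in opens

closed = {U - x for x in opens}
not_open = sorted("".join(sorted(s)) for s in closed - opens)
print(not_open)                      # ['ac', 'bc', 'c']: ¬b, ¬a and the atom c
```

Among the closed non-open sets, only the atom c is atomary, which is what singles it out as the closed event type of the example.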
The class of all regular open regions of a nonempty topological space is a Boolean algebra with respect to Boolean operations defined so that the complement of a region is the complement of its closure, and the join of two regions is the interior of the closure of their join (Halmos 1974:13). For instance, the join of two adjacent open regions is their concatenation (their join plus the point in between).
Pin (199?) defines a metric topology on regular events. For more topologies on regular events see Pin (1991).
One can in fact define a topology for the aspect calculus by defining an event type as open if it does not contain its boundary, and closed if it contains its bounds (Bennett 1981, Parsons 1983, Kuhn 1989:531). In general, e is open iff its complement is closed. In topology, open and closed are not a dichotomy: there are regions that are both (the empty set and the universe) or neither. Strictly, an event is closed if it is closed from both sides, and half-closed (neither open nor closed) if it contains its boundary from one side only (Bartsch 1992:32).
My terminology will be topologically imprecise here: what I shall call closed event types are really topologically closed or half-closed. The distinction between closed and half-closed events is important in aspect theory, for it reflects the traditional distinction between bounded and telic events (Dahl 1981, Bertinetto/Delfitto 1998).
An open event token is of type a ⋃ aa ⋃ aaa ⋃ ... Its complement event type is ¬a ⋃ ¬aa ⋃ a¬a ⋃ ¬aa¬a ⋃ a¬aa ⋃ .... These are precisely the different event types that can be formed from a and ¬a. In the limiting case when a is always or never true throughout a reference time, a and ¬a are both open and closed.
To make this more precise, consider a sequence (a⋃¬a)^{+} of alternating connected periods of a and ¬a. A base for the open sets of the intended topology is the set of all factors of form a^{+}, so the set of open sets is the set of all subwords of (a⋃¬a)^{+} of form a^{+} (plus ∅ and (a⋃¬a)^{+} itself). Closed sets are the complements of the open sets, i.e. event types in (a⋃¬a)^{+}\a^{+}. It is easy to verify that this defines a topology (open sets are closed under arbitrary joins and finite meets as required). This topology is the relative topology of (a⋃¬a)^{+} with respect to a^{+}. It is a quotient of the usual topology of (a⋃¬a)^{+}. For the subspace a^{+} it coincides with the usual topology, except that it makes a^{+} connected (an event in ¬a between occurrences of a is closed and has no interior, i.e. contracts to a point). Relative to ¬a^{+} it becomes the indiscrete topology. A closed factor is simply connected, so it is contractible in the topology, while closed events separated by intervening occurrences of a are not.
Basic closed event types, as they are defined below, are actually (half)closed factors of (a⋃¬a)^{+} i.e. (half)closed and (simply) connected (singular count).
A series b^{+} of an event b closed relative to the topology of (a⋃¬a)^{+} is closed in that topology, but open in its relative topology of (b⋃¬b)^{+}. The general lesson again: there is not just one event topology, but every algebra of event types defines its own topology.
A (half)closed event type cannot be continued (in the closed direction) because it contains its bound. What is the bound or boundary of an event type? Intuitively, a bound is what stops an event from continuing, while a boundary lies between two events and separates them. A closed event is closed at its ends by bounds, while one open event goes over to another through a change that marks the boundary. A bound meets an event from one side, a boundary separates it from the other sides. A boundary can be contracted to a point, a bound can be extended at will in the free direction. This ambiguity is preserved in the present topology. Strictly, the boundary between a and ¬a is the vanishing point of change where a goes over to ¬a, the limit of my notion of change a¬a, which is a half-closed region. ¬a (together with the point of change) is really a bound of a. However, in my book, when a closed event strictly contains its boundary (the change), it also loosely contains its bound, a bit of its complement event type, because the bounding event also contains the regions it separates. One way of making sound sense of this is to say that I am representing a point (more generally, a closed set) by the set of its open neighborhoods (its principal filter). My boundary-region representation of a change is thus a vague representation of a point. As resolution is increased, my vague topology falls together with a standard one in the limit. (The same duality of points and their neighborhoods is involved in Russell's derivation of points from periods: Russell 1926, Thomason 1979, van Benthem 1985:37.) Different aspects on the same change are obtained depending on which regions contract to a point: beginning (inception), end (achievement), middle (culmination), or both ends (cycle).
In my topology, the distinction between a point and region, between momentaneous and extended is related to concatenation. aa is extended because it consists of two adjoining open regions, while a in general is open but has no minimum extension. States, changes and cycles are all events, and closed events can be viewed both ways, as regions bounded by points, or as points bounded by regions. Given the elasticity of events, notions of momentaneous and extended have to be defined in terms of admissible range of variation: whether an event can be contracted to a point or not and whether it can be extended at will or not. An event type can be contracted to a point if it contains simple events of the same type, and it can be extended at will if its open subevents are unbounded. States can be both contracted and extended, achievements can be contracted to a point but not extended at will, activities can be extended at will but not contracted to a point, while accomplishments allow neither.
The common use of bounded/unbounded in aspect literature (Partee 1984, Lindstedt 1985:134) is misleading.[40] The intended distinction is one between closed/open event types (events which contain their bounds and events which don’t, Smith 1991:100). In topology, boundedness does not imply closure (Kelley 1955:144). It is a notion of order or metric topology rather than general topology.
A neighborhood of an event e is an event temporally including e. An event of form <e is a left neighborhood of e.
Density and continuity are topological notions. An event is dense if it meets every nonempty open event. In other terms, a set A is dense in a topological space X if the closure of A is X. A is open in X iff the interior of A is A.
Density allows for the inference of a place between any two separate places and the smoothness of motion. We have contiguity (adjacency) of neighboring places given in concatenation. Given density we can prove that go equals go.go and go by. Density is characterised by the identity
a ⊆ a.a
which says that every token of a is divided into two of the same type. We might even consider a density operator ^ dual to iteration defined by
a^ = 1 ⋂ a.a^
and characterise dense event types as those satisfying a = a^.
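The divisibility idea behind the density identity a ⊆ a.a can be sketched on a finite string model. This is my own illustration, not the author's (the names in_concat and state are invented): it also shows why a discrete model satisfies composability a.a ⊆ a for states but can only approximate strict density, since a minimal token has no nonnull division.

```python
# A finite string model of event types: an event type is a predicate on
# nonempty strings. (The helper names are my own.)

def in_concat(p, q, s):
    """Does token s divide as s = u.v with p(u) and q(v), u and v nonnull?"""
    return any(p(s[:i]) and q(s[i:]) for i in range(1, len(s)))

def state(s):
    """A state-like type: any nonempty run of 'a'."""
    return len(s) > 0 and set(s) == {'a'}

# States compose: a.a ⊆ a (two adjoining runs of 'a' are again a run of 'a').
assert all(state(u + v) for u in ('a', 'aa') for v in ('a', 'aa'))

# Density a ⊆ a.a holds above the minimal token but fails at it: a discrete
# model has a smallest extension, so it is dense only 'at large'.
assert in_concat(state, state, 'aa')
assert not in_concat(state, state, 'a')
```

The failing case at the minimal token is the discrete analogue of the remark above that momentaneous and extended have to be defined in terms of an admissible range of variation.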
Continuity of, say, motion (of place p,q against time t,u) can be expressed by the distribution identity
(p⋂t).(q⋂u) = (p.q)⋂(t.u)
It says adjacent places meet at adjacent times.
The classical and medieval incipit-desinit problem (Ar. Phys. VI.5, William of Heytesbury 1335, cf. Hamblin 1971:101) has not gone away. The problem is this: if states are always true or false pointwise, there is no point left for a change of state to happen at (assuming change is an event between two states, Kamp 1980). Hamblin (1971:86) states the problem as follows:
At 8 am I get in my car and set off for work. At 7:59, before I started it, my car was at rest; at 8:01 am, it is in motion. When a thing is not in motion, it is at rest, and when it is not at rest, it is in motion. But what was the state of the car at 8:00 am, as I was starting it?
van Benthem (1982:45) points out that the problem of the dividing instant is not a problem for a period view of truth:
For one thing, on a pure period view, it does not make sense at all to talk about a dividing instant. The periods of burning/nonburning may be neighbours (in the sense that no period separates the two) without there being any 'marker' for the transition. This does not exclude periods overlapping both, even of a descending sequence of these. For such overlapping periods, indeed, it cannot be said truly either that 'the fire burns' or that 'the fire is out'. But then, the two qualifications do not form an exhaustive pair for periods anyway, as these can record more complex happenings, like the dying out of the fire. (In a sense, that is exactly what periods are all about.) Thus, on the period view the problem evaporates.
Hamblin (1971) frames the problem as a clash between continuous-variable language and discrete two-valued language: if two-valued statements are predicates of time-instants, they are discontinuous: they cannot flow smoothly from falsehood to truth in continuous time. On the other hand, if time is continuously mapped to a continuous change, then the change is open and the steady states are closed, as there is a first and last point of a steady state but no first or last point of change (there is no smallest difference). Hamblin agrees with Medlin (1963) that natural language seems noncommittal here. If the change is gradual, it can be seen as an open event bounded by two closed steady states[41]; if it is instantaneous, it can be pictured as a dimensionless point bounding two states, belonging to either (or neither).
Galton (1984:3334) votes for the latter:
At any moment just one of p and ~p must be true, and therefore any change from ~p’s being true to p’s being true must be instantaneous. ... When something starts to move, there is a last moment when it is at rest; at any earlier moment, it has not yet started to move, whereas at any later moment it has: this is the sense in which the change is instantaneous. But now a curious paradox arises, for it seems to follow from this that the last moment at which it is at rest is the moment when it actually starts moving. This is one of the paradoxes which led Hamblin to reject the idea of instants in favour of intervals (...), and in so far as this involves denying that the car starts at an instant I am in full agreement with Hamblin. I also agree with him that the correct assertion is that the car starts within an interval. I cannot join Hamblin, however, in concluding from this that we should call the sentence The car starts moving true of, at, in, or through an interval. ... although the change can only be located within an interval, and not at a moment, it is still an instantaneous change because there is no lower limit to the length of intervals which contain it.
My solution is to treat both open and closed events as vague. van Benthem (1982:96, 1985:38) considers approximating the point of change through nested neighborhoods of this type (cf also Ar. Phys. 234a). Dowty’s (1979:140) solution is to define become so that it cannot be contracted to a point. My representation of change as a vague event converging to a point (a filter of events) combines both views. Cf. Lewis (1970), Kamp (1975), Fine (1975), Dowty (1979:88), van Benthem (1980:48ff) for approaches to vagueness in line with these ideas.
Events can be connected or disconnected (Montague 1972:150, Bennett/Partee 1972, Bennett 1981:20, Johanson 1998, Link 1998:305).[42] The formula a¬aa denotes two stretches of a separated by ¬a. The event type is not connected in a, because it has an internal boundary towards ¬a. Or it denotes a connected stretch of ¬a closed by a on both sides. The distinction here is one between figure and ground, foreground and background. The quiet times in between form the background on which the events in focus take place. The course of events is the same, whichever selection of events is lifted out to be in focus.
Vlach (1993:245fn) notes that time adverbials can denote disconnected times, e.g. in my spare time, at daytime, now and then. In my calculus a disconnected event can be represented in both ways: as a connected stretch of alternating events summed up by an iteration operator (a⋃¬a)^{+}, or as a disconnected collection of intermittent events a:(a⋃¬a)^{+}. Meets of iterated event types are often intended the latter way. (When we dated boys they came to pick us up.)[43] Time measures like minute generate disconnected events because of their closure properties (cf. section on order).
Granularity involves coarsening of the resolution of a sequence of events in that it preserves important features of the structure but leaves out detail. For instance, a workaholic restricts attention to events at the office leaving out all domestic ones, so that workdays seem to follow one another end to end. Under this resolution I will handle your order immediately can be true although there is a weekend in between. Even What are you doing at the moment / right now? is vague about granularity (Johanson 1998). The term occasion seems apt to refer to a bounded situation at a resolution suitable for each event type.
Coarsening granularity, or smoothing a course of events by leaving out irrelevant detail, is a quotient mapping which leaves out (maps to null) from a sequence of events those that don’t matter, making events that on closer look are not strictly adjacent appear adjacent. Conversely, the refinement of granularity takes an event algebra to a product algebra. These ideas can be given practical content in the calculus, for regular languages are closed under transduction, substitution and (inverse) homomorphism (Salomaa 1973).
In the theory of regular languages, a connected subsequence u of an event xuy is called a factor, and a sequence u_{1}…u_{n} of nonadjacent factors is a (scattered) subword of x_{0}u_{1}…u_{n}x_{n} (Rozenberg and Salomaa 1997:333). The class of regular languages is closed under factors (Rozenberg and Salomaa 1997:89) as well as under subwords (Rozenberg and Salomaa 1997:23,736). Scattered operators are regular in virtue of the closure of regular languages under morphisms and inverse morphisms, or equivalently, under finite transductions (Rozenberg and Salomaa 1997).
Since regular languages are closed under subwords, we can define another set of Booleans in the time dimension on the basis of the subword relation. The scattered join or shuffle operator (Conway 1971:59) e⋃^{*}f is defined inductively by
x ⋃* ∅* = ∅* ⋃* x = x
xu ⋃* yv = x(u ⋃* yv) ⋃ y(xu ⋃* v)
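The inductive definition can be transcribed directly; the following sketch (mine, for illustration) computes the shuffle of two tokens as the set of their interleavings, recursing on the head symbols exactly as in the two clauses above.

```python
# Shuffle (scattered join) of two strings, following the inductive
# definition: the base clause returns the other operand when one is the
# null event, and the recursive clause branches on which head comes first.

def shuffle(u, v):
    """All interleavings of strings u and v."""
    if not u:
        return {v}
    if not v:
        return {u}
    return ({u[0] + w for w in shuffle(u[1:], v)} |
            {v[0] + w for w in shuffle(u, v[1:])})

assert shuffle('ab', '') == {'ab'}          # base clause: x ⋃* ∅* = x
assert shuffle('a', 'b') == {'ab', 'ba'}
assert len(shuffle('ab', 'cd')) == 6        # C(4,2) interleavings
```

For distinct symbols the count of interleavings of strings of lengths m and n is the binomial coefficient C(m+n, m), which the last assertion checks for m = n = 2.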
The ignore operator of Kaplan/Kay (1994) and Karttunen (1994) is the special case of shuffle where one of the languages consists of strings of one symbol. Cf. bishuffle in Conway (1971:59).
Scattered meet or projection e⋂^{*}f allows coarsening an event type e into those subevents that meet some significant description f.
Scattered difference can be defined as the event type e\^{*}f satisfying the equation
(e\*f) ⋃*f = e.
This operation formalises an erasure homomorphism as the scattered difference e\^{*}f of a significant foreground event type e and an irrelevant background event type f. The equation
(a⋃¬a)^{*}\^{*}¬a = a
relates the two ways of looking at an alternation of states as a connected event or a scattered join of disconnected sequences discussed above.
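The erasure homomorphism can be modelled as simple symbol filtering; this sketch (mine, with 'a' as foreground and 'b' standing for the background ¬a) illustrates how deleting the background from an alternation recovers the connected foreground event.

```python
# Erasure homomorphism as scattered difference: map background symbols to
# the null event and keep the foreground.

def erase(s, background):
    """Delete every occurrence of a background symbol from s."""
    return ''.join(c for c in s if c not in background)

assert erase('ababab', 'b') == 'aaa'   # (a⋃¬a)* with ¬a erased yields a*
assert erase('bbb', 'b') == ''         # pure background erases to the null event
```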
Scattered versions of other Booleans can be defined analogously. The scattered complete join or shuffle closure ⋃^{+}e is the event type of all scattered subwords of e. ⋃^{*}e is star free (Rozenberg and Salomaa 1997:23). For states e and f , scattered join e⋃^{*}f equals (e⋃f)^{ *}. The scattered subevent relation between two events e⊆^{*}f can be defined by e⋃^{*}f = f. This is a regular relation (Conway 1971:64). It allows defining a scattered version in* of in defined by x:x⊆^{*}e which includes all scattered subevents of e. It is related to in by in^{*} e = ⋃^{* }in e.
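On tokens, the scattered subevent relation e ⊆* f is just the subword (subsequence) relation, which has a compact one-pass test; the sketch below is my own illustration of that relation.

```python
# Scattered subword test: u ⊆* x iff the symbols of u occur in x in order,
# possibly with gaps. Searching 'in' on an iterator consumes it, so each
# symbol of u must be found to the right of the previous match.

def is_subword(u, x):
    """Is u a scattered subword of x?"""
    it = iter(x)
    return all(c in it for c in u)

assert is_subword('ace', 'abcde')       # scattered factors a, c, e
assert not is_subword('ca', 'abcde')    # order must be preserved
assert is_subword('', 'abc')            # the null event is a subword of anything
```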
Tokens of time (e.g. individual minutes) map onto a quotient algebra of times (or events) that last a minute. A minute, as a measure, or unit of counting, is a type of times, without a specific place in time.
In virtue of the second level of abstraction, times must be disconnected. Since minute is defined by duration, any two minutes are interchangeable, i.e. substitutable modulo duration. By implication, the events measured may be equally disconnected: work for 40 hours a week usually denotes a disconnected event type. Any disjoint join of sixty seconds is a minute, not only connected ones. Disconnected times are measured using connected sequences of periodic times (clock ticks) as a yardstick (Krantz et al. 1971).
Partitions of time in terms of periods of varying granularity relate to genericity. A generic event type such as work sums up distributions of activities on time under some given resolution. What is true under one resolution may be false under another one.
Changes of scale (choice of unit of measurement, say fractions of seconds or millions of years) are rigid affine (linear) transformations, a special case of homeomorphisms (continuous isomorphisms) which bend the metric of time smoothly but arbitrarily. Scale and granularity are interconnected through perspective.
Perspective is a notion of projective geometry which studies reflections of a space. In projective geometry lines and points are dual and interchangeable: the dual of a point of view is the unending horizon surrounding it. A perspective is a point (line, plane) through which the surrounding space is reflected. Each point in the space is rotated or flipped over to a mirror image on the other side of the point (line, plane) of reflection. The closer an object, the larger it appears; as it moves further away, it contracts to a point. If the point of view is inside a region, the region appears unbounded; if it is outside, the region recedes to a point.[44] Such images, familiar from works on aspect, flow from the existence of morphisms to the event algebra from the richer space of projective geometry.
In terms of projective geometry, a perspective fixes an origin, a point from which distances to other points are measured, a line of vision and a plane perpendicular to it where events are in sharpest focus. A projective space is preserved under projective transformations which map the space through a point and a line. Notions of point of view have had wide application in linguistics (deixis: Fillmore 1971, Gruber 1976, empathy: Kuno/Kaburaki 1977, Carlson 1988, diathesis: Delancey 1982).
Can we give some common characterisation for these different notions of point of view or perspective? We can start from the spatial metaphor, a space and a point of view as a point in that space, with distances measured from that point. Now replace the space image with that of a network of relations between individuals of any sort. A point of view is defined by choosing a designated member of the network. Its point of view on the network is a set of sentences that have a distinguished variable for it occurring in them (Langacker 1987:120ff.). A verbal description or narrative can be solved relative to a selected point in it, rather like a geometrical space can be mapped through any one of its points.
The algebraic root of the notion of point of view is solvability or combinatorial completeness of an algebra. Instances are the combinatorial completeness of combinatory logic (Hindley/Seldin 1986:109), lambda calculus (Hindley/Seldin 1986:269) or categorial grammar, solvability of Boolean (Boole 1858), regular (Salomaa 1966, Salomaa/Xu 2000), or relational equations (binary constraint satisfaction, Ladkin/Maddux 1994), explicit definability in logic (Beth’s theorem), or the fundamental theorem of algebra (Mac Lane/Birkhoff 1967:184). A set of equations in an algebra has a closed form solution if there are enough terms to name the solutions. What we are studying under the name of point of view is combinatorial completeness of natural language fragments.
Logical equivalences shift point of view. Indexical expressions (personal and demonstrative pronouns and tenses: I, here and now, you, he, there, and then) indicate a point of view, systematically shifted in conversation and indirect speech. Syntactic dependency, for instance subject choice, main-subordinate clause relationships and referential chains, defines foreground/background relationships which reveal a point of view. Lexical alternations affecting argument structure, diathesis, and markedness relationships change point of view. Perceptual and evaluative vocabulary reveals the point of view of a perceiver and a preference subject. Discourse structure (what questions are posed, what inferences invited) reveals the background beliefs and interests of some attitude subject. Examples:
A is in front of B / B is behind A.
A gives C to B / B gets C from A / C is given by A to B.
Making a point of view explicit may reveal a hidden parameter or several: A is to the left reveals A is to the left of B as seen from C, A is good reveals C prefers A to B for D, A is big reveals A appears bigger to C than B, etc.
A particular perspective is associated with the direction of time. We face the future, move toward the future or the future flows toward us. Future events are in front of us, ahead of us and are drawing nearer. We are in the middle of things not yet past, and out of things that are already over, passing through events, while events pass by us into the past and the past recedes away. (Benveniste 1965, Traugott 1978).
Animals, subject to natural selection, have a natural perspective on events that puts me first, you second, and them in third place. This person hierarchy affects the choice among equivalent formulations in many places of grammar, from nominal reference (Carlson 1988) to voice (Croft 2001). It is a prefix of the animacy hierarchy discussed in the section on animacy.
A principle of temporal logic that has a topological interpretation is the Aristotelian principle of inference between progressive and simple aspects and to perfect aspect for open events: V and is Ving coincide, and imply has Ved, for open event types (cf. Eth. Nic. 1174a14-b14; Ryle 1949, Kenny 1963, Vendler 1967, Dik 1989, Smith 1991, Johanson 1998:§10.2.1.1):
the movement in which the end is present is an activity. E.g. at the same time we see and have seen, understand and have understood, and are thinking and have thought (while it is not true that at the same time we are learning and have learnt, or are being cured and have been cured.) Met. 1048b:18ff.
Aristotle specifically derives this as a consequence of open events being topologically open, so that every point (is Ving) is an interior point of (Vs), with a left neighborhood (has Ved) of the same event type (Ar. Phys. 236b, Bennett 1972:59). If the open event has some minimal granularity (Link 1998:255), this inference is not strictly true, for at a fine enough resolution the event appears discrete and closed, with a first point where the inference fails, but it is true ‘at large’ or ‘soon enough’ (Taylor 1977:214, Dowty 1979:172). This is not to say that the aspects are without effect for open event types. Though the types match, for a given event token, progressive, simple and perfect do pick out different subevents. The focal (marked) cases are different: simple aspect covers maximal subevents, progressive interiors and perfect suffixes. Because of these differences, different and even opposite implicatures may arise in each case.
Viewed as a principle of modal or tense logic, Aristotle’s activity principle reflects the condition that the point of evaluation is an interior point in neighborhood semantics (Segerberg 1971). It is a consequence of Scott’s neighborhood semantics for the progressive tense (Scott 1970:160).[45]
The converse of the activity (energeia) principle, the change (kinesis) principle Vs (is Ving) entails has not Ved holds for closed event types. As Aristotle put it:
In their parts and during the time they occupy, all movements are incomplete, and are different in kind from the whole movement and from each other. Eth Nic. 1174a21
The contraposition of Aristotle’s activity principle has not Ved entails is not Ving explains an implicature of present perfect. The contraposition of the kinesis principle, has Ved entails is not Ving is a common implicature of closed event types.
Aristotle's principle also entails that one who almost Vs, hence has not yet Ved, is not yet Ving either.
Time T imposes an order on events. An order is a transitive, antisymmetric binary relation on events. Its structure depends on further constraints on the relation, reflected in temporal axioms imposed on T. There is a rich literature on this theme (Rescher and Urquhart 1971, van Benthem 1982, Burgess 1984).
Time is a dimension, or a quotient algebra, of event tokens. That is, there is a projection, or Boolean morphism, which sends each event token to the time at which it happens. Time can be directly constructed out of events by taking the morphism to be an insertion of event tokens to a subset of their power set. A time then is the set of events which happen then.
Time may or may not be actually defined as an equivalence class of simultaneous events. An old theme is this: if nothing changes, there is no way to tell time, no way to tell eternity from an instant (McTaggart 1927). This idea fits the class of noncounting languages which cannot count times.
Time simplifies reasoning about courses of events by the equivalence
e.f at t.u = e at t ∧ f at u ∧ t.u
This separates information about the order of events from information about their Boolean structure. For instance, using the above equivalence, one can deduce the distribution of join over concatenation (e⋃f)g = eg⋃fg from the corresponding principle for Boolean meet.
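The reduction of concatenation laws to Boolean ones can be checked mechanically on finite languages. The following sketch (mine, with an invented concat helper modelling concatenation as elementwise string concatenation) verifies the distribution (e⋃f)g = eg⋃fg on a small sample, which illustrates rather than proves the general law.

```python
# Checking the distribution of join over concatenation, (e⋃f)g = eg ⋃ fg,
# on finite sample languages represented as Python sets of strings.

def concat(p, q):
    """Elementwise concatenation of two finite languages."""
    return {u + v for u in p for v in q}

e, f, g = {'a', 'ab'}, {'b'}, {'c', 'cc'}
assert concat(e | f, g) == concat(e, g) | concat(f, g)
```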
A(n indicative) tensed sentence is true or false. It states that an event happens at a given time. In the model at hand, a complex event token happens at a given time if it belongs to the time, and an event type p happens at t if p and t intersect. A complex event e happens in a given time t if some course of events xey happens at t.
The natural topology of the real line has a base consisting of all pasts and futures (called the order topology). An order is dense if there is a point between any two points. An order is connected iff it is dense and complete (each nonempty bounded set has a least upper bound). The notions of completeness and continuity involve transfinite generalisations of meet and join. A Boolean lattice is complete if ⋂⋃e = ⋃⋂e. A Boolean algebra is complete if it is also completely distributive, i.e. join and meet are dually related: ¬⋃e = ⋂¬e. A lattice morphism h is continuous if h(⋃e) = ⋃h(e).
Times in general are joins of periods. A period is a convex set of points.[46] Two times u,v are consecutive (adjacent) if uv is a time. In my calculus, u<v is a notational variant of uxv, i.e. < is an anonymous free variable over events (including ∅* but not ∅).
Concatenation being associative, t<u<v equals (t<u)<v and t<(u<v). The null event ∅* is concatenation identity, while the anonymous event < is the Boolean unit 1. For any event type e,
e = <⋂e = e⋂< ⊆ e<
The notation reflects the duality of concatenation and order, which allows defining one in terms of the other. For t to be next to u means that u is the nearest time after t, i.e. there is no time between them. Conversely, t is (properly) before u if there is a (nonnull) time next to t and u is next to that. Concatenation is tight order, or dually, order is loose concatenation (McNaughton/Papert 1971).
I allow event variables x and < to denote the null event ∅*, so that tu entails t<u. It follows that double << equals <. I allow « to abbreviate _^{+}, which denotes separate, remote or distant past, i.e. < \ ∅*. If e<_<now, some nonnull event separates e from the present.
While before < agrees with the inherent sense of the regular expression calculus, after > does not. > has a contextual definition by the equation t>u = u<t. Short of negative events going backward in time, we have no denotation for > alone.[47]
Hybrid combinations like t<u>v or t>u<v are ambiguous without parentheses:

(t<u)>v equals v<t<u
(t>u)<v equals u<t<v
t<(u>v) equals t<v<u
t>(u<v) equals u<v<t

Formulas (t<`u)>v, (t>`u)<v, t<(u´>v), t>(u´<v) leave the order of t and v open. These forms will play a role in the analysis of past future and future past tenses.
Near past e.now comprises events extending to the present. Near nonfuture is the event type e⋂<now, which means e covers now and an immediate past of now. The term extended now (McCoard 1978 and subsequent writing) refers to such a now extended to the near past. Extended now <now differs in denotation from an immediate (near) past <´now, which is a past continuing up to but not (necessarily) including the present (the main thing is that there is no gap between the past and the present).
Symmetrically, we have the near future now.e and now<⋂e for a near nonpast, i.e. a present extended to the future.
If t≤u were short for t<u ⋃ t⋂u, it would not allow t to partially overlap with u. It is better to define t≤u differently. A good choice is t<⋂<u which has the first definition as a special case. This one says t is no later than u: t starts no later than u and ends no later than it. In other terms, t≤u is the smallest interval including t and u.
The two definitions coincide when t and u are atomary. ≤ alone does not denote a real event type, but it can be defined contextually: ≤t equals <//<t or <<//t or <<\t, and t≤ equals t<\\< or t\\<< or t/<<. The event type e⋂≤now is a near nonfuture, denoting events which overlap a suffix of now (i.e. the endpoints of e and now coincide).[48]
In acyclic time, time’s arrow is irreversible: the same time never returns. Times are token identical (atomary) with respect to concatenation, and antisymmetric with respect to order:
¬t.t = 1
t.t = ∅
t<⋂<t = t
t<u ⊆¬(t⋂u)
t<u ⊆¬(u<t)
Note also that e≤f = e<f for incompatible events e⋂f=∅, and t≤¬t = t<⋂<¬t = t¬t for times or event tokens t.
Eternity, always or forever may also be an event type. Positive eternity future or henceforth is the event type 1^{w} or <^{w} of events going to go on forever. The omega exponent produces a maximum (perhaps infinite) number of iterations (Perrin 1984, Pin 1991, Rozenberg/Salomaa 1997). Negative eternity past or hitherto is the event type 1^{-w} of events having gone on forever. Eternity ever or always is the concatenation past.now.future.
Eternity ever is thus a dual to the event type of <, meaning sometime. The adverb forever is literally the event type for ever. in and for are dual, which means among other things that ever = in for ever.
Finite automata allow a dual characterisation of the event type forever. An automaton all of whose states are exit states produces a prefix closed language. Its complement, an automaton with no exits produces the empty language. Generalise the notion of language generated by an automaton to the set of all finite or infinite words which can be homomorphically mapped to the automaton (Salomaa/Rozenberg 1997). Looping automata without exits describe events which go on forever.
The regular language operator omega ^{w} is a dual of the Kleene star *. In the extended domain of infinite strings, the complement of the empty language includes infinite languages over the alphabet: ¬∅ = 1^{w} and ¬(e*) = (¬e)^{w}. Another equivalent characterisation of eternity is until the end of time, or more generally, e^{w} = e until ∅.
The complement of future is nonfuture past.now, and its order dual is past. A Boolean dual of future is in future, in the future or eventually. What does not go on forever eventually ends. The Boolean dual of past is in past, in the past. Another paraphrase which falls out from these stipulations is for the future, meaning henceforth. Note also the paraphrase for the time being = for now.
If future branches, what happens eventually does not have to happen at any specific time. What is meant is the universal-existential quantification must in future e or must <e: in every future there is a time when e happens. (See section on liveness and safety.)
Declerck (1997) is an approach to tense which operates with such timespheres and sectors. (See section on TMAD systems.)
It will be important to note that future <´e and past e`< of an event have no beginning and end, respectively, so they are unbounded to the past/future. This is the common mathematical sense of bounded. In aspect literature, the term is often used for what I (and mathematicians) call closed.
In linear time, it does not make sense to ask since when it is true that an event will eventually happen if it will. A near future is bounded, for soon e is true from some short time before e until e. In forward branching time there can be a bounded foreseeable, impending or certain future, for there can be a first time at which the event will happen in the sense that it has become inescapable (it happens in all future branches). It is not any one time, for different events become (un)certain at different times.
A past e`< has a beginning just at the end of the event e, but no end. A near past He has just left ought to be bounded. If time is backward linear, there is no notion of bounded certain past symmetric to impending or certain future. But a case can be made that natural language evidential and mythical past constructions code such a notion. (See section on evidentiality.)
This will help understand the combinatorics of tenses with adverbials.
Jespersen (1924) suggests that now can denote an extended time, not a point but a region now^{+}.now.now^{+} round the time of speaking, an open neighborhood of a point (Palmer 1987:§3.1.3, 3.2.2., Abusch 1998:21, Guillaume 1929: Ch.IV, Kuhn 1989, Declerck 1997:62):
"Theoretically, it is a point, which has no duration, any more than a point in theoretic geometry has dimension. The present moment, 'now', is nothing but the ever-fleeting boundary between the past and the future ... But in practice 'now' means a time with an appreciable duration, the length of which varies greatly according to circumstances ..." (Jespersen 1924:258, cf. also Cohen 1989:81).
On the other hand, now has no minimum duration. As Aristotle (Ar. Phys. II.3) argued, the present can be contracted to a point. These characterisations make now aspectually a state (simple and open). This insight will come to use in the section on simple present. Formally, now satisfies the identity
now = <now< = now^{+}
Thus technically, now ranges from vanishing point of present to the extended now <now< containing all time round it.
Jespersen’s observation about open now is important for understanding open perfects like I have been waiting for you. Overlapping atomless event types are simultaneous and adjacent at once: r⋂s entails r.s. Given wait and now are open, present wait⋂now equals progressive wait⋂<now<, extended now wait⋂<now, near past wait.now, open perfect wait`now, simple past wait´<now and existential perfect wait<`now. This is an important locus of neutralisation. There is a generalisation lurking here relating the two senses "round now" and "so far" of extended now (see section on paradigm).
The distinctions concerning now are quite subtle. They may make a difference as to what is entailed about the present. For instance Portuguese não saiu ‘he did not go out/he hasn’t gone out’ is vague about it. One translation only excludes a proper past (does not quantify over now), the second one extends to the present. If a negative perfect(ive) does not cover the present, then the positive form can be narrowed to a proper past. The intended coverage is figured out in context by game theoretical reasoning (Carlson 1983). Compare section on markedness.
The English free present tense does not denote an extended now in the technical sense. It cannot replace the perfect in I have been here since yesterday yesterday<⋂now⋂here. It will turn out that as a dependent tense, the English simple present can denote an extended now.
At the same time, the English simple present is transparent to event type, so if the time denoted by now is shorter than the event around it, progressive must appear: I am running. now⋂in run.
Languages without the progressive distinction just don’t make the distinction. For instance, Finnish Juoksen ‘I run, I am running’ does not distinguish between a simple match now⋂run, an unmarked progressive now⋂in run, an unmarked extended present <now<⋂run, or round now⋂run ‘I run about now’. These are equivalent, given now and run are open.
Jespersen’s open now is not the only event type now can have. Just like its distal counterpart then, now can match any event type. It is open when the time of speaking is open, that is, when we are speaking about the status quo. In Now it comes or now I dropped it we are speaking about events.
In a narrative, now can also refer to the narrative present (Rescher/Urquhart 1971:33). The now of the following sentence is not the deictic centre of the narrator nor that of the prosecuting attorney, but a third one belonging to the story being told:
It was 3 o’clock in the morning when the old lady rang for the nurse on duty. The prosecuting attorney claims that the nurse was tired now, and didn’t pay much attention to the old lady.
More on this in the chapter on tense in discourse.
Intervals can be temporally related in seven ways:

t⋂u	simultaneity
t«⋂u	proper prefix
«t⋂u	proper suffix
«t«⋂u	proper infix
t/«\u	proper overlap
t.u	contiguity
t«u	proper separation

Table 7
Corresponding improper relations are obtained by replacing separation « with temporal order <. More relations are obtained by taking Booleans and mirror images of these, for instance t<u equals t.u ⋃t«u. The event type t⋃u in 1 = <t<⋂<u< is the join of the lot.
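The seven relations can be classified mechanically from interval endpoints. The following sketch is a hypothetical helper of my own (not the author's notation), for closed intervals t = (t1, t2) and u = (u1, u2) with t1 < t2 and u1 < u2; mirror images are obtained, as noted above, by swapping the arguments.

```python
# Classify the Table 7 relation of interval t to interval u.

def relation(t, u):
    (t1, t2), (u1, u2) = t, u
    if (t1, t2) == (u1, u2):
        return 'simultaneity'       # t⋂u
    if t1 == u1 and t2 < u2:
        return 'proper prefix'      # t«⋂u
    if t1 > u1 and t2 == u2:
        return 'proper suffix'      # «t⋂u
    if t1 > u1 and t2 < u2:
        return 'proper infix'       # «t«⋂u
    if t2 == u1:
        return 'contiguity'         # t.u
    if t2 < u1:
        return 'proper separation'  # t«u
    return 'proper overlap'         # t/«\u

assert relation((0, 2), (0, 2)) == 'simultaneity'
assert relation((0, 1), (0, 2)) == 'proper prefix'
assert relation((1, 2), (0, 2)) == 'proper suffix'
assert relation((1, 2), (0, 3)) == 'proper infix'
assert relation((0, 2), (1, 3)) == 'proper overlap'
assert relation((0, 1), (1, 2)) == 'contiguity'
assert relation((0, 1), (2, 3)) == 'proper separation'
```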
< is a partial order on the set of event tokens and transitive for event types. However, it is not asymmetric for arbitrary event types. It is possible for e<f and f<e to hold at once (for different tokens). For instance, when I walk I move my left foot both before and after my right foot. The application must be here before five allows, and probably also expects, that the application must be here after five.
The relations correspond to ways to lift an order relation on points to a relation among extended events, with different logics (Tedeschi 1981:264fn, van Benthem 1982:9, Ladkin/Maddux 1994). The simplest ones (Hamann 1989) are listed below. The operators ∨, ∧ can be read as lattice glb and lub with respect to temporal order.
1. t starts before u starts ∨t<∨u
2. t ends before u ends ∧t<∧u
3. t starts before u ends ∨t<∧u
4. t ends before u starts ∧t<∨u
Dual versions are obtained by replacing before < with not after ≤. A combination worth singling out is t<u (4) where t ends before u starts. Another is t≤u where t starts and ends no later than u does (1+2).
Different properties of the underlying order are lost in the different definitions. The weakest definition 3 (overlap) is symmetric, so it is hardly an order at all. Definitions 1 and 2 preserve linear order by ordering events by designated points. The strongest one 4 loses transitivity of complement, turning the order of events into a partial order.
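The four lifting rules can be sketched directly over intervals as (start, end) pairs, reading ∨t as the start and ∧t as the end of t:

```python
# A sketch of the four ways to lift point order < to extended events,
# with intervals as (start, end) pairs: ∨t = start, ∧t = end.

def starts_before_starts(t, u):   # rule 1: ∨t < ∨u
    return t[0] < u[0]

def ends_before_ends(t, u):       # rule 2: ∧t < ∧u
    return t[1] < u[1]

def starts_before_ends(t, u):     # rule 3 (overlap, the weakest): ∨t < ∧u
    return t[0] < u[1]

def ends_before_starts(t, u):     # rule 4 (full precedence, the strongest): ∧t < ∨u
    return t[1] < u[0]

t, u = (0, 5), (3, 8)             # two overlapping intervals
assert starts_before_starts(t, u) and ends_before_ends(t, u)
# rule 3 holds in both directions for overlapping intervals (symmetric)
assert starts_before_ends(t, u) and starts_before_ends(u, t)
# rule 4 fails: the intervals are not fully separated
assert not ends_before_starts(t, u)
```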
Compare
John stretches before and after he runs.
I was betting (even/only) before the horses were running.
I was betting (even/only) after they were running.
The first sentence compares two closed events using the end<start rule 4 (Heinämäki 1974). The second sentence with before compares two beginnings using the start<start rule 1 (Miller/Johnson-Laird 1976, Ritchie 1979), while the third sentence with after follows the start<end rule 3. Adding only forces rule 4; even blocks it.
There is a connection between ordering rule and aspect here, predictable from the characterisation of open and closed aspect in terms of meet and join. Open events obey the weak any rule: an event may precede one token of an open event type and follow another one:[49]
Odd numbers come before even numbers.
John worked at IBM both before and after you did.
Closed events obey the strong all rule: if an event precedes or follows a closed event at all, it precedes or follows all of it.
The following connections between aspect and order are predicted by these considerations:
open < open start<end
open < closed start<start
closed < open end<end
closed < closed end<start
Everything works as predicted as far as after is concerned. The exception is before: an open event type in a before clause often appears closed, as if it were in the scope of an implicit closure operator:
We left before it was (i.e. got) dark. (Heinämäki 1974:69)
This asymmetry between before and after is well known (Anscombe 1964, Heinämäki 1974, Hamann 1989:91). Before and after are asymmetric in natural languages in that before is a negative polarity context but after is not. I return to this in the section on before and after.
The characterisations of order types in terms of lattice operators predict their behavior on Boolean joins and meets. < is preserved in meets but lost in joins: A precedes B and C precedes D does not entail that A and C precede B and D, except respectively. ≤ is preserved in joins and meets.
There is a duality between sorting (total order) and permutation (partial order). A partial order corresponds to a set of total orders or permutations. A single ordered pair in a domain is a partial order too, so it too stands for the set of all permutations it belongs to. The more order, the fewer permutations. The complement of a strict linear order is the mirror-image nonstrict order. The complement of a partial order is a partial order only if the order is total.
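The correspondence between a partial order and its set of permutations can be sketched over a small finite domain, representing a partial order by a set of ordered pairs and computing its linear extensions:

```python
# A sketch of the order/permutation duality: a partial order on a
# finite domain corresponds to the set of total orders (permutations)
# consistent with it, and the more order, the fewer permutations.
from itertools import permutations

def linear_extensions(domain, pairs):
    """All permutations of domain in which every (a, b) pair has a before b."""
    return [p for p in permutations(domain)
            if all(p.index(a) < p.index(b) for a, b in pairs)]

abc = "abc"
assert len(linear_extensions(abc, set())) == 6                     # no order: all 3!
assert len(linear_extensions(abc, {("a", "b")})) == 3              # one pair: half
assert len(linear_extensions(abc, {("a", "b"), ("b", "c")})) == 1  # total order: one
```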
Time flies like an arrow. We move in time from the past through the present into the future, facing the future, living in the present, and leaving the past behind us, while future time comes forever toward us, becoming first present and then past (Traugott 1978, 1985). Even if we agree that time is asymmetric in the sense of an asymmetric order relation (a directed arrow), we can still argue whether it is symmetric round the present. Even if we are sure that time never returns, can we tell which way it is going? (Reichenbach 1956.)
Here are some of the many indications that past and future are not symmetric over the present.
· Existence. Past and present are already there, future is not.
· Nondeterminism. Past and present are necessary, future is possible (contingent).
· Future tenses come from modals, pasts not.
· Causation. Cause cannot follow effect (the past cannot be changed).
· Backtracking counterfactual conditionals are rare and failure-prone (Lewis 1979, Nute 1984:418)[50].
· There is a preparatory process but no postparatory process. There is a future progressive is going to V but no perfect progressive is having Ved.
· Imperfective paradox of the progressive: is Ving entails has started Ving but not will finish Ving.
· Unfinished objects: destroying an object entails the object existed, creating an object does not entail it will exist (Parsons 1990)
· Imperfective paradox of before: leap is entailed by leap after you look but not by look before you leap.
· before is a negative polarity item, after is not: French ne appears with avant, not with après.
· before can be counterfactual, after not: French uses subjunctive with avant, indicative with après.
· When means (immediately) after rather than (immediately) before.
· (Immediately) after grammaticalises more often than (immediately) before (Gruzdeva 1994).
· Past future would is counterfactual (conditional), future past will have been is not.
· Perfective paradox: perfective can denote the beginning of an open event or the complete event, not just the end (Talmy 1985:92, Smith 1991:79).
· There are fewer verbs denoting cessation than inception (Ar. Phys. 229a26, Löbner 1988).
· Time adverbials denote change or its final state but not the initial state (Schopf 1987)
· Time’s arrow: there is a fixed association of past/present/future with source/location/goal rather than vice versa.
· Orientation of perspective: past is behind us, future is ahead of us.
· Events take time. Free time is in the future. When events come about, time is used up.
· Direction of positive scalar implicature: already means earlier or more than expected (scales increase toward future).
· Narratives proceed from past to future.
· Languages make more tense and aspect distinctions in the past than in the future (Ultan 1978).
· The present perfect–simple past distinction is not mirrored in the future (Ikola 1949:49).
· Perfect (past) participles mortuus 'dead' are common, prospective (future) participles moriturus 'going to die' rare.
· Sequence of tense exists for the past but not for the future (Comrie 1976, Fabricius-Hansen 1989).
· Temporal and conditional clauses allow perfect but rarely future. before/after he has arrived is common, future before/after he will arrive is unusual.
Basically, the asymmetry of time is the division of events into those that are already there and those that are not yet there relative to some now. Conversely, there is (free, unused) time in the future, just because there are no events (yet) there. Time is where events take place.
Aristotle’s principle for open events is backward looking too: it says that an open event has begun, not that it will continue or end.
Time is directed from past toward the future, so it goes (passes us, here and now) from future to past. Temporal order is based on the pasts, or initial suffixes, of events. e<f means there is a past of some now that contains e but not f. The minimal and maximal subevents (lower and upper bounds) of an open event type relative to directed time are its beginning and the complete event. A change ¬ss is not definitely there before it is over and its final state s holds. This is why any change is primarily a change into a state, and any state is primarily the end state of a change. The asymmetry of time thus explains why the perfective of an open event can denote the beginning or the complete event, but not just the end.
Real time is isotropic, hence symmetric (van Benthem 1982:37): there is no way to distinguish between past and future from the structure of time, thus no explanation for the asymmetry. That explanation must come from a combination of time and modality. The direction of causation (cause cannot follow effect) is really just a restatement of time's arrow.
A genuine source of asymmetry is nondeterminism: backwards linear, future branching time. The wide end of time has less information and more entropy.[51] We gain information, the world at large loses it as time goes by.[52] These notions have been studied for the tense logics they imply (Thomason 1972, van Benthem 1985, Burgess 1982,1984:§6.26.4).
Anisotropy only concerns temporal order (McTaggart’s B series before/after). In contrast, now, the fixed point of McTaggart’s A series past/present/future, is not an absolute, structural feature ('objective'), only indexical ('subjective'): the present is whatever time is singled out as the reference time, index, perspective, or point of reference (Grünbaum 1971, van Benthem 1982:67, Kamp 1971).[53]
The ternary relation y is between x and z reduces to two asymmetric transitive binary relations x is before y and y is before z (Russell 1903:217). As van Benthem (1982:13) shows, McTaggart's two series are thus interdefinable: the step from B to A involves contextualisation (choice of index), the step from A to B decontextualisation (making explicit reference to the index). Classical Kripkean model theory for Priorean tense logic reduces the A series to the B series, see van Benthem (1982:154). The reduction is analogous to the interdefinability of the positive and the comparative in the theory of adjectives, and that in turn is the correspondence of binary first order relations and second order choice functions (Fishburn 1977). Choice functions are asymmetric, because the rejects of a choice are not chosen: there is a center and the edges.
McTaggart’s A and B series are dual. They can be diagrammed as the duality of three points and two line segments connecting them. The two segments are symmetric and complementary, while there is an asymmetry about the three points: one is inside or between the other two.
An interesting but less studied Boolean operator is priority join (Kaplan and Kay 1994), which I will symbolise as
e ⋃? f
and read as e if possible else f, or equivalently as e or f if necessary. It is defined by the formula
(e ≠ ∅ ⋂ e) ⋃ f
which equals (e ≠ ∅ → e) ⋂ (e=∅ → f) or (e ≠ ∅ ⋂ e) ⋃ (e=∅ ⋂ f). In other words, it denotes e if e is nonempty and f otherwise. The definition of priority join is quite classical, but the operator is neither commutative nor associative, unlike the usual join. Rather, it compares to the procedural interpretation of Booleans in imperative programming languages or the English connective else.
Priority operators are nonassociative and noncommutative, and nonmonotone in the first argument, so it makes a difference in which order they apply: e ⋃? f is distinct from f ⋃? e and e ⋃? (f ⋃? g) is distinct from (e ⋃? f) ⋃? g. I will assume priority join associates to the right, so e ⋃? f ⋃? g is short for e ⋃? (f ⋃? g). Distributivity also fails: (e ⋃? f) ⋂ g is not equal to (e⋂g) ⋃? (f⋂g).
Priority meet e ⋂? f is read as e and f if possible, and defined by
e ⋂ (e⋂f =∅ ⋃ f)
In other words, it is the meet of e and f if it is nonempty and e otherwise.
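The two definitions can be sketched over a finite universe of event tokens, with event types modelled as sets. The function names pjoin and pmeet are mine; the sketch checks non-commutativity and the interdefinability and inclusion laws stated in the text.

```python
# Priority Booleans over a finite universe, a sketch: e ⋃? f denotes
# e if e is nonempty, else f; e ⋂? f denotes e ⋂ f if nonempty, else e.

def pjoin(e, f):
    """e if possible, else f."""
    return e if e else f

def pmeet(e, f):
    """e and f if possible, else e."""
    return (e & f) if (e & f) else e

e = frozenset({1, 2})
f = frozenset({2, 3})
g = frozenset({5})

# Non-commutativity: e ⋃? f and f ⋃? e pick different options.
assert pjoin(e, f) == e and pjoin(f, e) == f
# Interdefinability: e ⋂? f = (e⋂f) ⋃? e  and  e ⋃? f = (e⋃f) ⋂? e.
assert pmeet(e, f) == pjoin(e & f, e)
assert pjoin(e, f) == pmeet(e | f, e)
assert pjoin(frozenset(), g) == pmeet(frozenset() | g, frozenset())
# Join is decreasing, meet increasing: e ⊆ e ⋃? f ⊆ e ⋃ f, e ⋂ f ⊆ e ⋂? f ⊆ e.
assert e <= pjoin(e, f) <= e | f
assert e & f <= pmeet(e, f) <= e
```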
The dual of priority join, ¬(¬e ⋃? ¬f), comes down to (e ≠ 1 → e) ⋂ (e=1 → f), read as e if necessary, else f. It denotes e if e is not foregone and f otherwise. It defines the same preferences as priority meet.
Priority meet is interdefinable with priority join:
e ⋂? f = (e⋂f ) ⋃? e e ⋃? f = (e⋃f) ⋂? e
The interdefinability extends to longer sequences, as typified by
e ⋂? f ⋂? g = e⋂f⋂g ⋃? e⋂f ⋃? e e ⋃? f ⋃? g = e⋃f⋃g ⋂? e⋃f ⋂? e
Priority meet is left associative. The priority of event type e over its complement ¬e is expressed by e ⋃? ¬e, which is equivalent to e ⋃? 1 and to 1 ⋂? e.
As these equivalences show, priority meet and join invert the order of priorities: priority join starts with the best option and bargains down, priority meet starts with the minimum and bargains up. Priorities stated in terms of meet are easier to apply: take the status quo and bargain up from it. Priority join is decreasing, and meet is increasing:
e ⊆ e ⋃? f ⊆ e ⋃ f. e ⋂ f ⊆ e ⋂? f ⊆ e.
The first step, e if possible, expressed in terms of priority meet, is 1 ⋂? e, which denotes e if nonempty, else 1. The exit clause of a priority join is the equivalent event type e ⋃? 1, also read e if possible. Then
e ⋃? f = e ⋃ (e = ∅ ⋂ f) = (e ⋃ e = ∅) ⋂ (e ⋃ f) = (e ⋃? 1) ⋂ (e ⋃ f)
which shows that priority join is compositionally definable as e or f, but e if possible. Its complement ¬(e ⋃? f) thus says neither e nor f, or not e though possible. Translated to preference terms, either e and f are not preferred, or else e is not preferred over f.
Priority join can be characterised by the notion of a test event from dynamic logic. The test event ?e or true e (may e) asserts e is nonempty. That is, ?e is the event type e > ∅ which denotes 1 if e is nonempty and ∅ otherwise. It is the order-preserving morphism from a Boolean algebra to 2.
The complement of true is false (¬may), which obeys the equivalences
false p = ¬true p = true ¬p= ?¬p = ¬?p
The dual of true e is the event type !e or must e which denotes e = 1 (always, all, only, must). Its complement is e < 1, which equals ¬!e or ?¬e (maybe not e). This four-group is fleshed out more fully in the section on truth.
The relation of test event to priority join is given by the following derivation.
e ⋃? f
e ⋃ (e = ∅ ⋂ f)
(e ⋃ e = ∅) ⋂ (e ⋃ f)
(e ⋃ f) ⋂ (e ⋃ e = ∅)
(e ⋃ f) ⋂ (e > ∅ → e)
(e ⋃ f) ⋂ (?e → e)
The conditional ?e → e here expresses the notion e if possible. So the last formula shows priority join can really be written and read as the meet e or f, and e if possible. Thus in particular the simple preference e ⋃? 1 ‘e or don’t care’ equals ?e → e ‘e if possible’.
Once order counts, there is a host of derived priority operators. Priority difference e \? f is defined as e ⋂? ¬f. It removes f from e if possible. Its dual is priority conditional e →? f, defined by ¬e ⋃? f. Priority residual e ¬? f is defined by e ⋃? ¬f. The fourth corner of the four-field, inverse priority difference ¬e ⋂? f, has no shorthand.
Four items already allow 4! or 24 permutations. There is no need to name all of them. For instance, the expression (e⋂?¬f) ⋃? (¬e⋂?f) codes the order (e⋂¬f) ⋃? (e⋂f) ⋃? (¬e⋂f) ⋃? (¬e⋂¬f) and (e⋃?¬e) ⋂? (f⋃?¬f) the order (e⋂f) ⋃? (e⋂¬f) ⋃? (¬e⋂f) ⋃? (¬e⋂¬f).
I define priority sum e +? f as the event type
(e⋂¬f) ⋃?(f⋂¬e) ⋃? (e⋂f) ⋃? (¬f⋂¬e)
It prefers exclusion over equivalence, and beyond that the leftmost term.
Priority equivalence e ↔? f will be defined as
(e⋂f) ⋃? (¬e⋂¬f) ⋃? (e⋂¬f) ⋃? (f⋂¬e)
It prefers equivalence over exclusion, and beyond that the leftmost term.
Priority only makes sense if there are options: ⋃? reduces to ⋃ in 2. The multiplication tables for priority join/meet look like this.
p	q	p⋃q	p⋂q	p⋃?q	p⋂?q
p	q	p⋃q	p⋂q	p	p⋂q
p	q	p⋃q	∅	p	p
p	∅	p	∅	p	p
∅	q	q	∅	q	∅
∅	∅	∅	∅	∅	∅
Table 8
It is easy to see that priority operators reduce to the usual Booleans in the two-valued algebra 2.
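The rows of the table, and the reduction in 2, can be checked mechanically. A sketch over finite sets; note that in a one-point universe priority join reduces to plain join, while priority meet collapses to its first argument (since 1 ⋂? ∅ = 1):

```python
# A mechanical check of the multiplication table for priority
# join/meet, and of their behavior in the two-element algebra 2.

def pjoin(e, f):
    return e if e else f               # e ⋃? f: e if possible, else f

def pmeet(e, f):
    return (e & f) if (e & f) else e   # e ⋂? f: e and f if possible, else e

E = frozenset()                        # the empty event type ∅
p, q = frozenset({1, 2}), frozenset({2, 3})
disjoint_q = frozenset({4})

assert (pjoin(p, q), pmeet(p, q)) == (p, p & q)                # both nonempty, meeting
assert (pjoin(p, disjoint_q), pmeet(p, disjoint_q)) == (p, p)  # nonempty but disjoint
assert (pjoin(p, E), pmeet(p, E)) == (p, p)                    # q = ∅
assert (pjoin(E, q), pmeet(E, q)) == (q, E)                    # p = ∅
assert (pjoin(E, E), pmeet(E, E)) == (E, E)                    # both ∅

# In 2 the only event types are ∅ and 1 (a one-point universe).
one = frozenset({0})
for a in (E, one):
    for b in (E, one):
        assert pjoin(a, b) == a | b    # ⋃? reduces to ⋃
        assert pmeet(a, b) == a        # ⋂? collapses to its first argument
```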
Consider the generalisation of priority operators to concatenation. Optionality e? is defined as the join ∅* ⋃ e. This yields two prioritised versions, a positive one e ⋃? ∅* ‘e once if possible’ and a negative one ∅* ⋃? e ‘e once if necessary’.
Given that e^{+} can be defined as e ⋃ ee^{+}, priority iteration or priority closure e^{+?} and e^{*?} ‘as long/often as necessary’ ought to be defined as e ⋃? ee^{+?} and ∅* ⋃? e^{+?}, respectively. It codes a preference for the shortest possible iteration of e. Its dual e^{w?} ‘as long/often as possible’, defined by ∅* ⋂? e ⋂? e^{w?}, codes the opposite preference for as long an iteration as possible. One who likes an activity a holds the preference
(¬a)^{*?}a^{w?}
read as ‘a as soon as possible and as long as possible’. One who hates it holds the opposite preference (¬a)^{w?}a^{*?} ‘put off a as long as possible and put an end to it as soon as possible’. Impatience is expressed by ((¬a)^{+?}a^{+?})^{w?} and perseverance by ((¬a)^{w?}a^{w?})^{+?}. Lexicographic order of words in (a ⋃ ¬a)^{*} becomes simply (a ⋃? ¬a)^{*?}.
An interesting case is lenient transduction, defined in terms of priority join as f⋃?x:x, or equivalently in terms of priority meet as x:x⋂?f. It applies f if possible, else does nothing. Note the equivalences
f⋃?(x:x) = (f⋃?x):x (x:x)⋂?f = x:(x⋂?f)
Lenient composition f °? g can be defined as composition with lenient transduction, i.e. as (f ⋃? x:x) ° g or x:x⋂?f ° g. Iteration of a transduction f^{+} is understood as iteration of compositions of f, or f ⋃ f ° f^{+ }. Priority iterations of a transduction f^{*?} and f^{w? }prefer fewer and more iterations, respectively. [54]
Complete priority join and meet are not worth a notation. To the extent they make sense, they are expressed by ?⋃e and ?⋂e, respectively.
Reflexive priority e ⋃? e or e ⋂? e reduces to e. Symmetric priority (e ⋃? f) ⋂ (f ⋃? e) reduces to e⋂f ⋃? e⋃f ⋃? 1. The consistency of these preferences shows that priority operators code a preorder.
Unlike usual Booleans, e ⋃? ¬e is contingent and e ⋂? ¬e consistent (it is equal to e). Note that ?1 is well formed and true, as are ?1 = 1, !1 = 1, ¬?∅, and ?∅ = ∅.
e ⋃? ¬e is the same as e ⋃? 1, 1 ⋂? e, and ?e → e, and the opposite of ¬e ⋃? e, ¬e ⋃? 1, ?¬e → ¬e, and 1 ⋂? ¬e. 1 shows indifference.
The complement of a preference e ⋃? f, the event type ¬(e ⋃? f), comes down to ¬e ⋂ (?e → f) ‘not e and f if e is possible’. It is the same as the opposite preference when indifference is transitive (preference is a weak order). In particular, the complement of a simple preference ¬(e ⋃? ¬e) becomes ¬e ⋂ (?e → ¬e) and equals ¬(e ⋃? 1) or (?e → 1) ⋂ ¬e or ?e ⋂ ¬e ‘e is possible yet not e’.
Strict preference, i.e. preference for e over f and dispreference for the other way round, is expressed by
(e ⋃? f ) ⋂ ¬ (f ⋃? e)
which can be also written as (e ⋃? f ) \ (f ⋃? e). A simple strict preference for e becomes
(?e → e) ⋂ ?¬e ⋂ e
This says ‘e if possible and e even if ¬e is possible’. An equivalent idiom is (?e → e) \ (?¬e → ¬e). Use ≽ as shorthand for (?e → e) and ≻ for strict preference. Strict preference is transitive, so e ≻ f ≻ g is well defined and entails e ≻ g.
Priority meet e ⋂? f is a tensor product (Pratt 1999), a noncommutative fibred product subject to the law e⋂f > ∅ and characterised by the Galois connection
e ⋂? f ⊆ g iff e ⊆ f →? g
Consider defining true e as e ≻ ¬e ‘rather e than not’. Under this truth definition, we do not get a full Boolean algebra, but a Heyting algebra. Every Boolean algebra is a Heyting algebra, but not conversely. A Heyting algebra is a distributive lattice which satisfies the conditions
x → x = 1
x ⋂ ∅ = ∅
x ⋃ 1 = 1
(x → y) ⋂ y = y
x ⋂ (x → y) = x ⋂ y
x → (y⋂z) = (x → y) ⋂ (x → z)
(x ⋃ y) → z = (x → z) ⋂ (y → z).
Heyting algebra is the algebra of partial information, modal and intuitionistic logic, vagueness and fuzziness, truth value gaps, and topoi (van Benthem 1985). It fails the classical laws of bivalence (tertium non datur), double negation, modus tollens, de Morgan, and Peirce.
These observations help relate the asymmetry of time to Aristotle’s futura contingentia principle, according to which the future is nondeterministic: (some) future events are not determined (neither true nor false) before the fact. Define must e as e ≻ ¬e ‘rather e than not’. Then an event is determined if must e ⋃ must ¬e holds and contingent if the dual may e ⋂ may ¬e holds. A weak futura contingentia principle says that only future events are contingent:
if may e ⋂ may ¬e then <e
A strong futura contingentia principle strengthens this to an equivalence, or adds the converse
if <e then may e ⋂ may ¬e
These principles can be stated in the reverse:
must e if (and only if) ≤e
The strong only if direction is implicated in the use of present or past tense to describe future events as in you have lost the game/ your chances / You are dead / I’m gone.
This only begins to unleash the value of priority Booleans. There are other mind-expanding applications on the horizon, including vagueness, comparatives, markedness, and pragmatics.
Priority operators will come to use in the sections on counterfactuals, causality, markedness and prototypes. They are the algebraic counterpart of binary order relations and choice functions for expressing comparative notions. A priority join defines a strategy, or choice function. A join of choice functions is a strategy set, which is equivalent to a game tree. This equivalence creates a correspondence between the algebraic calculus and the game model of modal logic introduced later on.
Priority operators satisfy the following identities:
(e⋃f ) ⋂? g = (e⋂?g ⋃ f⋂?g) ⋂? g
((e⋃f ) ⋂? g) ⋂? f = f ⋂? g
Define a strategy as an event type s defined in terms of priority operators. The composition sp of strategies s and p is their priority meet s ⋂? p. An event e is a degenerate case of a strategy. Note that se does not entail e, while es does. The strategies defined in terms of priority operators are pure strategies. As choice functions, they reveal a lexicographic preference relation (Lehmann 1995, Williams 1996, Benferhat et al. 2001).
I exemplify the representation of games with priority operators by encoding three well-known games: Left or Right (Even or Odd, Battleship and Merchant Ship), Prisoner’s Dilemma, and Battle of Sexes (Chicken) (Howard 1971).
Left or Right is a twoperson zerosum game with imperfect information. One player hides something in either hand behind his back. The other player gets it if he guesses the hand. The preferences of the hider and the guesser are expressed by
I left.you right⋃I right.you left ⋃? I left.you left⋃I right.you right
and
you left.I left⋃you right.I right ⋃? you left.I right⋃you right.I left
respectively. The meet of the preferences is ∅ (the game is zerosum). The strategies of the hider are I left_ and I right_, the strategies of the guesser are _I left and _I right. The outcomes are all the meets of the player strategies. Although the guesser moves last, his strategies do not depend on what the hider does (the game has imperfect information). The players might as well move simultaneously, in which case their strategies would be just I left and I right against you left and you right (Carlson 1994). What a player can do with a given strategy is what the strategy guarantees against the entire strategy set of the other(s). For instance, the guesser’s strategy _I left brings about
_I left ⋂ (you left_ ⋃ you right_)
i.e.
you left.I left ⋃ you right.I left
which does not guarantee a win. Neither player has a winning strategy in this game. The best play is to guess at random (a mixed strategy).
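The strategic content of this argument can be checked by brute force. A sketch, assuming moves are just the two labels left and right: with imperfect information a guesser strategy is a constant guess, with full information it is any function of the hider's move.

```python
# Left or Right, a brute-force sketch: no blind (constant) guessing
# strategy wins against both hider moves, but with full information
# the copying strategy g(h) = h always wins.
from itertools import product

MOVES = ("left", "right")

# Imperfect information: a guesser strategy cannot depend on the
# hider's move, so it is one of two constant guesses.
blind = [lambda h, g=g: g for g in MOVES]
assert not any(all(s(h) == h for h in MOVES) for s in blind)

# Full information: a strategy is any function from the hider's move
# to a guess (four of them), including the winning one that copies it.
informed = [dict(zip(MOVES, image)) for image in product(MOVES, repeat=2)]
assert any(all(s[h] == h for h in MOVES) for s in informed)
```

The same enumeration bears out the variant discussed below the table of strategies: two blind guesses, left then right, hit whichever hand was chosen, so extra moves can substitute for extra knowledge.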
Compare the same game with full information. The hider chooses first and the guesser knows the choice. The hider’s strategies are as before, but the guesser has four strategies, including the winning strategy
you left.I left⋃you right.I right
The meet of this strategy and the hider’s entire strategy set
you left.I left⋃you right.I right ⋂ (you left_ ⋃ you right_)
is again the winning strategy, which is the guesser’s optimal outcome.
There are at least three ways to win a game. One is to be stronger. Another is to be cleverer. A third one is not to care. The game of Left or Right with imperfect information is a draw: neither player has a winning strategy. The same game with full information makes the guesser stronger.
Another way of ensuring the guesser a win is to give him two guesses. This gives him more power without adding to his knowledge. He does not need to know, because one of the two guesses wins anyway. The game consists of three moves. The preferences over the outcomes for the hider are
(I left.you right.you right ⋃ I right.you left.you left)
⋃?
(I left.you left.you right ⋃ I left.you right.you left ⋃ I right.you right.you left ⋃ I right.you left.you right)
The preferences for the guesser are the opposite. He now has the strategy _(I left.I right) whose meet with the opponent’s strategy set gives
_I left.I right ⋂ (you right_ ⋃ you left_)
which equals the event type
you left.I left.I right ⋃ you right.I left.I right
which is optimal for the guesser.
The third way to win is to change the preferences of the game so that a loss becomes a win. Trivially, the guesser has a winning strategy if he gets the gift whichever way he guesses, or if he does not care, i.e. if his preference relation does not distinguish between outcomes.
Prisoner’s dilemma is the game of cheating between sexes. For good biological reasons, he and she have opposite preferences about faithfulness. It is best if I cheat and you don’t, next best if neither cheats, then if both cheat, and the worst is if you cheat and I don’t. Both players’ preferences are coded as either of the dual forms
(¬you cheat ⋃? you cheat) ⋂? (I cheat ⋃? ¬I cheat)
(¬you cheat ⋂? I cheat) ⋃? (you cheat ⋂? I cheat)
These define the preference ordering
(¬you cheat ⋂ I cheat) ⋃? (¬you cheat ⋂ ¬I cheat) ⋃? (I cheat ⋂ you cheat) ⋃? (¬I cheat ⋂ you cheat)
The composition of the preferences in either order produces a different optimum. The meet of the two players’ preferences is the joint preference
(¬he cheat ⋂ ¬she cheat) ⋃? (he cheat ⋂ she cheat)
Again neither player can alone bring about what is best for both. The solution is to team up and cooperate. The problem here is that the cooperative solution is unstable.
Both players have the first-order strategies I cheat, ¬I cheat. The outcome ¬he cheat ⋂ ¬she cheat is stable for the male if it is no worse than he cheat ⋂ ¬she cheat. Which it is not. So ¬he cheat ⋂ ¬she cheat flips over to he cheat ⋂ ¬she cheat. This is stable for the female if it is no worse than he cheat ⋂ she cheat. Which it is not. So it flips over to he cheat ⋂ she cheat. This is stable for the male because it is better than ¬he cheat ⋂ she cheat. Symmetrically for the female. So the stable equilibrium is I cheat ⋂ you cheat, which is worse for both. Two independent agents reach a worse equilibrium than they would if they were one agent.
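The flip-over argument is ordinary best-response dynamics. A sketch, with hypothetical ordinal payoffs encoding the stated preference order (best: I cheat and you don't; then neither; then both; worst: you cheat and I don't):

```python
# Best-response dynamics for the Prisoner's Dilemma, a sketch with
# illustrative ordinal payoffs (True = cheat).

def payoff(me, other):
    """Ordinal rank of an outcome for one player."""
    return {(True, False): 3,    # I cheat, you don't: best
            (False, False): 2,   # neither cheats
            (True, True): 1,     # both cheat
            (False, True): 0}[(me, other)]  # you cheat, I don't: worst

def best_response(other):
    return max((True, False), key=lambda me: payoff(me, other))

# Start from mutual cooperation and let the players flip in turn.
he, she = False, False
for _ in range(4):
    he = best_response(she)
    she = best_response(he)

# The stable equilibrium is mutual cheating, which is worse for both
# than mutual cooperation, as the text argues.
assert (he, she) == (True, True)
assert payoff(True, True) < payoff(False, False)
```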
The solution singled out by natural selection seems to be a mixed strategy: be faithful most of the time and cheat at random only so much that the other partner does not bail out of the deal.
The preferences of the Battle of Sexes are expressed by priority equivalence. By the usual story, the spouses prefer to do the same thing but each prefers different things: the man likes games, the wife opera. Simplifying this a bit, the wife wants out, the man does not. The wife’s preferences are coded as the priority equivalence I out ↔? you out, equivalently expressed as
((I out ↔ you out) ⋂? I out) ⋃? I out
saying ‘I’d rather for us to do the same thing, and if possible go out, else I go out’. It produces the preference ordering
(I out ⋂ you out) ⋃? (¬I out ⋂ ¬you out) ⋃? (I out ⋂ ¬you out) ⋃? (¬I out ⋂ you out)
The man’s preferences are symmetrically opposite ¬I out ↔? ¬you out. The composition of the preferences in either order produces a different optimum. The meet of the preferences
(she out ↔? he out) ⋂ (¬he out ↔? ¬she out)
is the joint preference of going to the same place
(he out ↔ she out) ⋃? ¬(he out ↔ she out)
What they disagree about is the best implementation of this event type. Neither player can alone bring about what is best for both. One solution is to team up and flip a coin, i.e. to form a joint agent we who uses a mixed strategy to equalise between the two equilibria. Another solution is to split up so going together is no longer a shared goal (preference deterioration, Howard 1971:200).
For the present concerns, the main point of interest is the formalisation of the game-theoretical reasoning using priority operators. The argument is: if he does not cheat and she does not cheat, then he cheats. Classically, this is just false, because if he does not cheat then he does not. But this is a counterfactual argument: suppose cooperation; given the male preferences and his choices, what would the male do (next)? To know that, we must remove from the premises what is needed for the male to have a choice, and then determine his choice from his preferences and strategies.
Look at a typical exception in a timetable:
I wake up daily at eight except Sundays at ten.
What except does is take out from the general rule that precedes it as little as is necessary to make room for the exception that follows. In the case at hand, the result is easily defined in Booleans over times as
(day⋂eight \ Sunday) ⋃ Sunday⋂ten
In other words, take out Sunday and put in Sunday at ten instead. The expression can be rewritten as
(((weekday⋃Sunday) ⋂ eight) \ Sunday) ⋃ Sunday⋂ten
(((weekday⋂eight) ⋃ (Sunday⋂eight)) \ Sunday) ⋃ Sunday⋂ten
(weekday⋂eight) ⋃ Sunday⋂ten
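The extensional calculation can be checked over a toy model of times as (day, hour) pairs, with Boolean subtraction as set difference:

```python
# The timetable exception as Boolean subtraction over times, a sketch:
# (day⋂eight \ Sunday) ⋃ Sunday⋂ten  =  weekday⋂eight ⋃ Sunday⋂ten.

DAYS = ("Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun")
day = {(d, h) for d in DAYS for h in range(24)}   # any day, any hour
eight = {(d, 8) for d in DAYS}
ten = {(d, 10) for d in DAYS}
sunday = {("Sun", h) for h in range(24)}
weekday_eight = {(d, 8) for d in DAYS if d != "Sun"}

# Take out Sunday and put in Sunday at ten instead.
rule = (day & eight) - sunday | (sunday & ten)
assert rule == weekday_eight | {("Sun", 10)}
```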
Consider next the dual of times, event types, as in The weather is fine except it is cold. What we want to see is a parallel between the following reductions:
any day (weekday or sunday), except sunday ⇒ weekday.
fine (warm and dry), except not warm ⇒ dry.
Say fine is defined as warm⋂dry. The dual substitution operation should subtract warm using residual and add cold using meet. That operation is not expressible by simple dualisation. The candidate
(fine ¬ warm) ⋂ cold
reduces to
((warm⋂dry) ⋃ ¬warm) ⋂ cold
((warm⋃¬warm) ⋂ (dry⋃¬warm)) ⋂ cold
1 ⋂ (dry⋃¬warm) ⋂ cold
(dry⋂cold) ⋃ cold
cold
Here is what is going on. Extensional except is Boolean subtraction. Intensional except is its dual, feature subtraction or abstraction. This operation is not expressed as a boolean function of the terms of except, because the universe relative to which the subtraction is taken is a dual universe of features. To abstract a feature is to subtract an element from a dual Boolean algebra of features. To do so one must fix the dual base of features relative to which the subtraction happens. Assume the complement is taken relative to the feature space warm+dry, giving
((warm+dry)\warm)+cold = cold+dry
The calculation then proceeds as indicated in
((warm\warm) + (dry\warm)) + cold = (∅ + dry) + cold = cold+dry.
which works because atomic features are disjoint dry⋂warm = ∅, so dry\warm = dry. The exact dual calculation
((warm⋂dry)¬warm)⋂cold = cold⋂dry
((warm¬warm) ⋂ (dry¬warm)) ⋂ cold = (1⋂dry) ⋂ cold = cold⋂dry.
goes through just when dry¬warm = dry or dry⋃warm > ∅. This condition changes the last few steps of the original failed reduction so that the desired subtraction is obtained. A definition of subtraction e except f where e and f are event types can thus only be given relative to a factorisation e = f⋂g, where it is the event type g = (e¬f) ⋂ (f⋃g). It can be expressed as a complete Boolean function
⋂p: fine ⊆ p \ p ⊆ warm
where p ranges over the features of the base. In the case at hand, this formula instantiates to dry.
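The complete meet formula can be checked over a hypothetical two-feature base, modelling features as sets of situations:

```python
# Feature abstraction as a complete meet, a sketch of
# ⋂p: fine ⊆ p \ p ⊆ warm  over a toy base {warm, dry}:
# meet all base features entailed by fine that do not entail warm.

WORLDS = frozenset(range(4))          # illustrative universe of situations
warm = frozenset({0, 1})
dry = frozenset({0, 2})
base = {"warm": warm, "dry": dry}     # the dual base of features
fine = warm & dry                     # fine is defined as warm ⋂ dry

kept = [p for p in base.values() if fine <= p and not p <= warm]
result = WORLDS
for p in kept:
    result &= p

# The formula instantiates to dry, the part of fine weather
# compatible with the exception 'not warm'.
assert result == dry
```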
Daily except Sundays means almost the same as daily except not Sundays. The same duality holds for all but (not) one. The meaning is not quite the same: Everyone agrees except you (not) entails you disagree just when not is there. The logical form without not is (everyone except you) agrees, where except operates on objects. With not it is everyone agrees except you do not agree, where except operates on event types. The difference seems to be one between a subtraction and a substitution.
Consider event type There is dirt here except (not) there (pointing to one place), represented by the event type
(dirt here \ dirt there) ⋃ ¬dirt there
The deletion removes the region of dirt there from the event type, and the addition adds the opposite event type. The above event type is equivalent to
dirt (here\there) ⋃ clean there
where the subtraction is factored into the location subtype.
The dual behavior of not in except clauses gets an explanation from the above construction. The negation reflects the positive half of the substitution operator, its absence the negative half.
The substitution except is idempotent, so I came except I came means just I came. Except reduces to meet when the exception clause is consistent with the rule: Everybody came except I was late entails just Everybody came and/but/though I was late, if I did come. You came except everybody came reduces to everybody came. Except reduces to just the exception clause if the exception leaves nothing over of the rule: I came except I did not is just a correction boiling down to I did not come. come except ¬come is just ¬come.
Except as construed here subsumes Boolean difference in the following sense. For instance, exclusive we can be defined in terms of Boolean difference we\you ‘we excluding you’, or just ‘we and not you’. The deletion can be equivalently although redundantly characterised in terms of except as substitution with ∅
we \ you ⋃ ∅ ‘we except (not) you’
which equals taking out of us the minimum necessary to entail ‘not you’.
The notion of substitution of events in the above argument, removing a premise to add another, links priority to nonmonotone and counterfactual inference and causality. An argument If I were you, I would be rich substitutes the hypothesis I am you in a set of premises, i.e. removes premises which contradict the hypothesis and adds the hypothesis (Ramsey 1931, Ryan/Schobbens 1997).
Application of the Ramsey rule can be construed as a maximisation problem. Intuitively, the corrected wake-up time contains besides the exception the maximum of times which meet the rule and are disjoint from the exception, i.e. something of the order of
⋃t:(day⋂eight⋂t \Sunday)
Dually, the intensional except not event type contains besides the exception the maximum of event types which meet the rule and are compatible with the exception, i.e. something of the order of
⋂p: fine ⊆ p \ p ⊆ warm
We can think of this as the closest event type to fine weather compatible with the exception. This perspective is applied in conditional modal logic.
The maximum may be empty. For instance, say I am poor and you are rich: if I were you, would I be rich, or would you be poor? If I were you I would be rich. If you were me you would be poor. Consider this case as an exception event:
I am poor, you are rich, I am not you. What if I were you?
As the dual of the extensional characterisation of except, this event type comes to
((I poor ∧ you rich ∧ ¬I=you) ∨ I=you) ∧ I = you
i.e.
I poor ∧ you rich ∧ I=you
I poor ∧ I rich
∅
The last step is based on the logical truth of poor+rich. The same result can be obtained by looking at the definition of the exception event as the minimum of different ways of accommodating the exception.
⋂p:(I poor ∧ you rich ∧ ¬I=you ∧ p\I=you)
What can p be? We must drop ¬I=you because it contradicts the exception. But what else to drop and what to keep? The consistent choices for p are
I poor ∧ I=you
you rich ∧ I=you
But the meet of these is empty, so the maximum is ∅. Unless we can agree on priorities, there is no saying what else would happen if I were you and you were me.
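The search for maximal accommodations can be made concrete. A small sketch of the poor/rich example, with premises as plain strings and the two clashes the argument turns on hard-coded into a toy consistency test:

```python
from itertools import combinations

# Sketch of Ramsey-style revision in the "if I were you" example.
# Premises and the counterfactual hypothesis are plain strings; the
# consistency test hard-codes the two clashes the text relies on.

premises = ["I poor", "you rich", "I≠you"]
hypothesis = "I=you"

def consistent(facts):
    facts = set(facts)
    if "I=you" in facts and "I≠you" in facts:
        return False
    # under I=you, "I poor" and "you rich" collapse to "I poor and I rich"
    if "I=you" in facts and "I poor" in facts and "you rich" in facts:
        return False
    return True

# find the maximal subsets of the premises consistent with the hypothesis
candidates = []
for r in range(len(premises), -1, -1):
    for subset in combinations(premises, r):
        if consistent(set(subset) | {hypothesis}):
            candidates.append(set(subset))
    if candidates:
        break

assert candidates == [{"I poor"}, {"you rich"}]
assert set.intersection(*candidates) == set()
```

The two maximal choices keep I poor and you rich respectively, and their meet is empty, as in the derivation above: without priorities, no unique closest event type exists.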
Given priorities, a most preferred, or closest counterfactual event type can be found. For instance, if the priorities over our assumptions are given by the strategy
I poor ∧? you rich
then the exception event type becomes I=you ∧ I poor.
One can factorise the premises into laws, boundary conditions, and consequences and give priorities to them in this order. Perhaps I am poor because I am me, while you are rich because you are different from me. Then the premise set is something like
¬I=you ∧ (I=you + you rich) ∧? I poor
The substitution of I=you is consistent: I stay poor, while you, being me, get poor too.
The inverse relationship between except and counterfactual conditional is patent in
The weather is fine except it is cold (not warm).
The weather would be fine if it were not cold (warm).
Ryan/Schobbens (1997) point out the dualities among different nonmonotone operators. My summary and interpretation of their results in the present terms is this. The belief revision modality used to capture counterfactual inference is my priority meet (cf. section on strategies above). The update operator is the dual substitution operator except, which turns out to be the inverse (right-to-left mirror image) of priority join. Thus the following duality diagram is obtained.


                          dual
priority meet             priority join
inverse                   inverse
substitution in meet      substitution in join

Table 9
Priority operators order terms in decreasing order of priority. Substitution operators order terms in increasing order of priority. Priority meet adds information until it fails. Priority join retracts information until it succeeds. Meet substitution always keeps the last information. Join substitution always retracts the last information.
Order begins with a(nti)symmetry. With transitivity, or acyclicity, it gives the category of (strict) partial orders, a cartesian closed category with monotone (order-preserving) functions as morphisms. The linguistic category of comparison involves order relations too: a comparative property reveals an order relation.
Order relations appear in many guises. These include concatenation, binary order relations, choice functions, priority operators, and metrics. (Compare also section on convolution.)
In this section, I try to relate alternative ways of looking at order.
Looking at the signatures of preference relations, choice functions and priority operators on a Boolean algebra A, they are respectively
2^{A×A} = 2^{A^2}    A^{A}    A^{A×A} = A^{A^2}
The types grow going from left to right, so denotations must be constrained to let the operations on the right match or reveal those on the left. In the Boolean algebra 2, they all boil down to 16: there are 16 relations (all of them preference relations), choice functions and Boolean operators in 2. Priority Booleans reduce to the usual ones here. As the size of A grows, the properties of preference relations cut down the number of relations in the first signature, followed by similar cuts in choice functions and binary Boolean operators.
(i) There is a straightforward morphism between binary order relations and concatenation, described more fully in the section on relation algebras. One consequence of the morphism is the equivalence
x ≤ y = x<⋂<y
(ii) The suffix operator \\< is a choice function. It satisfies the choice function properties of reflexivity, transitivity and cotransitivity. Any event is its own prefix (reflexivity). The prefix of a prefix is a prefix (transitivity). The nonprefix of a nonprefix is a nonprefix (cotransitivity).
x\\< ⊆ x            reflexive
x\\<\\< ⊆ x\\<      transitive
x/<_x/<_ ⊆ x/<_     cotransitive
The event type
¬p<p
is equivalent to that specialisation of \\< which, applied to the choice set ¬p<, includes p but not ¬p. This means that a motion or direction event type can also be read as a representation of a preference. The event calculus codes the notion that one who goes somewhere prefers it there. The encoding of the will and the action is the same; it is a question of how the event type is read.
(iii) Priority Booleans define choice functions. The strategy followed in the previous example is
1 ⋂? <p
which is just the choice function described in (ii). Mapping the choices on time, this represents the development of a nondeterministic future from an open choice point to a branch where p is chosen.
Strategies thus are choice functions. As shown in the theory of revealed preference (Fishburn 1977:299), a choice function reveals a preference relation by the definition
y ≤ x iff s(x⋃y) = x.
This preference relation defines a dyadic modal operator on event types by the formula
csw → e
which says that strategy s brings about e given c and w (Hansson 1969, van Fraassen 1972, Lewis 1969:103). This will be put to use in the chapter on mood.
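The revealed-preference definition can be exercised in a few lines. A sketch, assuming an arbitrary stand-in choice function (here the alphabetical maximum); only the definition y ≤ x iff s(x⋃y) = x comes from the text:

```python
# Sketch: a choice function over sets of options reveals a preference
# relation via  y ≤ x  iff  s(x ⋃ y) = x  (revealed preference in
# Fishburn's sense). The particular choice function is an assumption.

def s(options):
    """A stand-in strategy: choose the alphabetically last option."""
    return max(options)

def weakly_preferred(x, y):
    """y ≤ x: choosing from {x, y}, the strategy picks x."""
    return s({x, y}) == x

assert weakly_preferred("b", "a")       # "b" is revealed preferred
assert not weakly_preferred("a", "b")
```

Any other choice function can be plugged in for s; the revealed relation changes accordingly.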
The family of choice functions over a set of options forms a monoid under composition, with the identity choice function as unit. This allows defining notions of strategy composition and strategy combination. Simple strategy choices are not commutative: it is better to know what the other player has chosen before making one’s own choice. Game theory works with commutative strategy sets, where strategic advantage is coded in the complexity of each player’s strategy set. This amounts to a type shift, or skolemisation of operator orderings. In a commutative set of strategies, strategy composition reduces to pointwise meet, so sp equals s⋂p (Carlson 1994).
Composition with a simple proposition p equals the meet of p and the choice set when nonempty, and 1 otherwise. (It is a two-way partition, a special case of a choice function determined by an equivalence relation, Fishburn 1977, van Benthem 1991.) The complement ¬s of a choice function s can be defined as the function choosing the complement of what s chooses. Similarly for union, so we have a full Boolean algebra of strategies.
Finally, we can define inclusion or entailment between strategies s⊆p as sp =s. This amounts to saying that s is a refinement of p.
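The monoid structure can be sketched directly with two made-up strategies; composition lets the inner strategy narrow the options before the outer one chooses:

```python
# Sketch: choice functions over subsets of a fixed option set form a
# monoid under composition, with the identity choice function as unit.
# The two sample strategies are illustrative assumptions.

def compose(s, p):
    """Strategy composition: let p narrow the options first, then s."""
    return lambda options: s(p(options))

identity = lambda options: options                      # the unit
prefer_even = lambda options: {x for x in options if x % 2 == 0} or options
prefer_small = lambda options: {min(options)}

opts = {1, 2, 3, 4}

# identity is a two-sided unit
assert compose(identity, prefer_even)(opts) == prefer_even(opts)
assert compose(prefer_even, identity)(opts) == prefer_even(opts)

# composition is not commutative in general
assert compose(prefer_small, prefer_even)(opts) == {2}
assert compose(prefer_even, prefer_small)(opts) == {1}
```

The last two lines illustrate the noncommutativity of simple strategy choices noted above.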
Multiattribute measurement (Krantz et al. 1971) involves projecting a set of preference relations into a single preference relation. Looking at it another way, it concerns defining an order on a vector space.
One special case is where the attribute dimensions are orthogonal and the composite preference is a lexicographic ordering of the component preference relations p, r. The lexicographic order follows p when the two orders are incompatible and r when p is indifferent.
p ⋃ (r\ ~p)
The relation can be represented equivalently as a strategy whose value at any set of options is a priority meet s⋂?p of the values of the component choice functions s, p. For instance, the lexicographical order of the event types aa, ab, ba, bb can be defined positionwise as follows.
(a_ ⋃? b_) ⋂? (_a ⋃? _b) = (aa ⋃? ab ⋃? ba ⋃? bb)
(A tabular proof is given in the appendix.)
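The positionwise definition can be checked mechanically. A sketch, assuming the component preference a before b on letters; the first position decides unless it is indifferent, as in p ⋃ (r \ ~p):

```python
# Sketch: the lexicographic preference on the event types aa, ab, ba, bb,
# built positionwise from one component preference on letters.

order = {"a": 0, "b": 1}        # component preference: a before b

def lex_leq(x, y):
    if order[x[0]] != order[y[0]]:        # first position decides
        return order[x[0]] < order[y[0]]
    return order[x[1]] <= order[y[1]]     # else defer to the second

words = ["ba", "ab", "bb", "aa"]
ranked = sorted(words, key=lambda w: (order[w[0]], order[w[1]]))
assert ranked == ["aa", "ab", "ba", "bb"]
assert lex_leq("ab", "ba") and not lex_leq("ba", "ab")
```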
Lexicographical preference considers the dimensions of choice orthogonal: no increase in the less important attribute can offset the preference on the more important one. In terms of multiattribute measurement, the weight of the second criterion vanishes relative to a positive weight on the first one (the ratio of the weights is 0).
Another special case is where the attribute dimensions satisfy cancellation or joint independence, so that a preference in one dimension can always be offset by preferences on other dimensions. In this case, the composite preference can be expressed as a linear combination of the attributes and measured in reals as a weighted sum of the component preferences (Fishburn 1996). This corresponds to the assumption that the dimensions are not orthogonal, but skewed (have a positive projection on one another).
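A minimal sketch of the weighted-sum case, with invented weights and attribute scores, showing that a loss on the more important attribute can be offset by a large enough gain on the other:

```python
# Sketch: under joint independence, the composite preference is measured
# as a weighted sum of component scores. Weights and option scores are
# made-up illustrations.

weights = (0.7, 0.3)

def utility(option):
    return sum(w * v for w, v in zip(weights, option))

a = (1.0, 0.0)
b = (0.0, 2.0)
assert utility(b) < utility(a)            # 0.6 < 0.7: a still wins
assert utility((0.0, 3.0)) > utility(a)   # enough gain on the second
                                          # attribute offsets the first
```

Contrast the lexicographic case, where no finite gain on the second attribute could ever offset the first.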
These two extremes are thus dual to one another: joint independence is dual to orthogonality.
The smallest Boolean algebra 2 is a small order relation of 0 and 1, or yes or no. Any nominal scale is a product of 2 (see section on categorisation), an element of one is a vector of 2 (a characteristic function, or valuation).
A finite binary relation is a Boolean algebra whose atoms are isomorphic to 2, or ordered pairs. At the same time it is a fibred product of two monadic Boolean algebras under the laws of a(nti)symmetry and transitivity.
The laws allow extending a binary order relation to both Boolean (commutative and idempotent) and relational (concatenation) products of the relation. An order relation on atoms is extended to one on arbitrary elements through the choice function representation. An order relation on pairs is extended to an order relation on sequences so that any prefix of a sequence precedes its suffix. Conversely, a binary order can be retrieved as the revealed binary relation or as a convolution (shifted product) of binary cuts on a set of sequences.
The laws also allow defining metrics, or morphisms from the order relation to the algebra of reals. The slack (invariance) of the metric depends on the laws of the order relation in well known ways.
A multiattribute order relation in turn is a product of binary orders, or equivalently, an ordered vector space. There are two types: lexicographic, where the vectors are orthogonal and the dimensions ordered, and linear sum, where the vectors are not orthogonal and the points are ordered so that there is a weighted linear metric on the components.
Order connects up with language theory. Comparison is context-free. To sort objects by size one has to form a monotone map between two orders. Context-free languages are formalised in monadic second-order logic with two successor functions, against one for regular languages. See section on trees.
A concrete way to represent natural language as a formal language is the original Montague plan of giving construction and inference rules directly for natural language (van Benthem 1986). For instance: from x gave z y infer x gave y to z, z got y, and y went from x to z; from x gave y to z infer x had y before z had y, and so on. The advantage is a minimum of theoretical baggage. The disadvantage is a proliferation of similar rules for different items. In category theoretic spirit, it is more common to look for universal elements for the similarities, and abstract a formal language to represent them. Semantics and rules of inference are given once for the formal language plus translations between it and the natural language. But the two points of view can also be combined, construing natural languages as an alternative syntax for an abstract calculus.
The opposition between syntax and semantics, rules of formation and rules of interpretation, proof theory and model theory is a duality which goes to the core of modern logic.
In a way, my approach is at odds with this tradition, or at least can be seen as an attempt to straddle the duality. I do not fix the syntax of the event calculus at the outset, but extend it as need arises. I do not fix a semantics or set of intended models for it at the outset either, but allow myself to move from one framework of concepts to another as need arises. Like Russell’s old lady, I like to think of my world as being constituted of event and object types in the manner of “turtles all the way down”: there is always room for more distinctions, no rock bottom semantics to stop at and map a fixed syntax on.
Largely, this is a statement of attitude. But it translates to descriptive options in places. For instance, I suspect n-level theories for any fixed small n, like two-level aspect (Smith 1991), three-index tense (Reichenbach 1947), or three-level diathesis (Genusiene 1987). If more than two levels of decomposition are needed, why stop at three? Why not rather “turtles all the way down”? Language does not count. Or so I believe.
Syntax too can be done in two dual ways, known as generative grammar and constraint grammar, respectively. Context-free languages and recursively enumerable languages are closed under union, but not under complement or intersection, so grammar rules are naturally stated positively through a generative grammar which is a conjunction of production rules of form
type1 → type2
The language generated by type1 is the smallest one satisfying the productions. Regular languages and recursive languages are closed under booleans, so grammar can also be written as a conjunction of constraints of the dual form
type1 ⊆ type2
The generated language is the largest one satisfying the constraints. Looking at the event calculus from this point of view, it becomes clear that we already have the makings of a formal syntax in it.
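The two dual readings can be illustrated on a toy language. The sketch below takes aⁿbⁿ (up to a length bound) once as the smallest set closed under productions, and once as the largest subset of a sample universe satisfying a constraint:

```python
# Sketch: a generative grammar's language is the smallest set closed
# under its productions; a constraint grammar's language is the largest
# set satisfying its constraints. Both toy grammars describe a^n b^n
# up to a length bound; the bound and universe are assumptions.

def generate(max_len=6):
    # productions: S -> ab, S -> aSb  (smallest fixed point)
    lang = set()
    frontier = {"ab"}
    while frontier:
        lang |= frontier
        frontier = {"a" + w + "b" for w in frontier
                    if len(w) + 2 <= max_len} - lang
    return lang

def satisfies(w):
    # constraint form: a string must be a's followed by as many b's
    n = len(w) // 2
    return len(w) % 2 == 0 and w == "a" * n + "b" * n

universe = {"ab", "aabb", "abab", "aaabbb", "ba"}
constrained = {w for w in universe if satisfies(w)}
assert generate() == {"ab", "aabb", "aaabbb"}
assert constrained == {"ab", "aabb", "aaabbb"}
```

Generation grows the language from below; the constraint filters a universe from above. On this fragment the two coincide.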
Looking at a grammar as a set of constraints liberates from the tree bias of context-free grammar. Constraints do not have to be properly nested, they can also be superimposed. (There is a good computational reason for the tree bias: context-free grammars are parsable in polynomial time.)
Pragmatics differs from semantics or syntax in that it tries to derive the structure and entailments of a discourse from minimal content and rich context, using general-purpose reasoning. My analysis of Thai syntax later on suggests a picture of a loose-knit sequence of content words whose functions are inferred from word order and context. A pragmatic approach to syntax seems appropriate for speech; a lot indeed remains implicit and fragmentary there. Instead of writing a surface grammar for fragments, should we rather have a more explicit if abstract background grammar, plus principles to infer missing structure from the surface?
That depends on where the complexity lies: on the kind of interaction between the system and its environment. The suggested approach pays off if a system, its environment and their interaction are simpler, or more interesting, to describe separately than the result of their interaction directly. In Thai syntax, for instance, digging deeper seems to reveal more interesting cross-linguistic morphisms than staying just on the surface.
Another example: Barber (1975) suggests representing the middle voice as a reflexive which identifies arguments in event types. Klaiman (1991) points out as a problem that the arguments so identified may not have any syntactic reflex. That is a problem only if the decomposition must stop at surface syntax.
For a third example, in the Cree sentence niwa:pamawak sisipak, the morphology tells that ducks are patient and subject and I am agent. Each morpheme can be written as a constraint on event types.
ni = <I<
wa:p = see
am = goal⋂near I\I
aw = subj⋂near I\I
ak = subj⋂pl _
The first constraint describes an event type which involves the first person. The second one describes a seeing event. The third one fits an event type which is directed to an object near but distinct from first person, the fourth to one whose subject is that type. The last one is true of an event type whose subject is plural.
Now the parsing problem is to find an event type which satisfies these constraints. It is clear that it must be a complex event type, for the two subject constraints cannot be true of the same event type. The smallest solution is the event type I see pl duck, where I am the agent and the ducks are subject and patient. Note that the different constraints pertain to different subevents of the solution. It is not necessary to decide on a unique surface or underlying analysis, if there is more than one solution of the equations that fits the facts. Different speakers or generations can have different theories, whose differences only surface in a few critical forms if at all.
In more complex cases the subevents which satisfy the constraints need not be disjoint, but superimposed. What we have here is a variant of the idea of simultaneous description of a sentence on several strata or dimensions characteristic of unification grammar. No surprise, for the event calculus supports unification (meet of event types containing variables). Dahlstrom’s (1991) treatment of Cree morphology implements the same basic idea in terms of LFG constraint equations.
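The parsing-as-constraint-solving idea can be sketched in code. The feature names and the split into subevents below are illustrative assumptions, not the notation of the Cree analysis:

```python
# Sketch: each morpheme contributes a feature constraint on some
# subevent; a parse is an assignment of features to subevents that
# satisfies all constraints by unification (meet of feature sets).
# Feature names and the subevent split are assumptions.

constraints = [
    ("matrix", {"agent": "1sg"}),                 # ni
    ("matrix", {"pred": "see"}),                  # wa:p
    ("goal",   {"near": "1", "distinct": "1"}),   # am
    ("subj",   {"near": "1", "distinct": "1"}),   # aw
    ("subj",   {"number": "pl"}),                 # ak
]

def unify(features, new):
    merged = dict(features)
    for k, v in new.items():
        if merged.get(k, v) != v:
            return None          # clash: no event type satisfies both
        merged[k] = v
    return merged

solution = {}
for subevent, c in constraints:
    solution[subevent] = unify(solution.get(subevent, {}), c)

assert all(f is not None for f in solution.values())
assert solution["subj"] == {"near": "1", "distinct": "1", "number": "pl"}
```

A clash (unify returning None) signals that a constraint must be shifted to another subevent, which is how the complex event type is forced in the text.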
Grammatical types, for instance traditional parts of speech, can be classified in event calculus categorial grammar style starting from two Boolean types e and b for events and objects, or why not sentences s and nouns n or truth values t or 2 and entities e. We get the usual development of functional types (van Benthem 1986:??). The semantic type 1 and the syntactic type ∅* are identities of dual monoids (relation algebra composition and regular algebra concatenation).
A major problem with ordinary categorial grammar is that its types are too fine. One traditional category corresponds to an entire family of categorial ones. Booleans and variables over types allow generalising more freely than standard categorial grammar. The direction of the slash does not affect the definitions below, only word order.
The same few simple categorial distinctions are hidden under a lot of traditional grammatical terminology. A denominal noun affix is an adjective, a denominal verb affix is a transitive verb, and a deverbal verb affix is an auxiliary, i.e. an adverb, as types go.
s           e    t             event, sentence                             en rain
n           b    e             object, noun                                en rain
n\s                            intransitive verb, denominal verb           de regen – regnen
(¬s\s)\n
n\n         a    a 1 ⋂ 1 n     adjective, denominal noun                   klein, chen
v\v         p    v 1 ⋂ 1 p     adverb, deverbal verb
n\v              v 1 ⋂ 1 n     transitive verb, denominal verb
v\n              n 1 ⋂ 1 v     complemented noun, deverbal noun            fi sataa – sade
n\a                            denominal adjective
v\a                            deverbal adjective, participle
p/n                            preposition, case, denominal adverb
s/s                            negation, modal
n/s                            complementiser
s\s/s                          coordinating conjunction
p/s                            subordinating conjunction
v\(n\v)                        transitiviser, causative
(n\v)\v                        detransitiviser, reflexive, (anti)passive

Table 10
If x is not y, x is the complement of x\y. x\x is an adjunct or free modifier of x. Adjuncts are idempotent under concatenation (composition) so they can be iterated freely: a = aa = n\n.n\n = n\n = a^{+}. In particular, an adjunct of type x\x is an idempotent of its category x so that x\x.x = x.
The category v of verbs is not available in standard categorial grammar. Given Booleans and variables it can be approximated as (¬s\s)\n ‘a nonnoun which can form a sentence alone or with a complement’. Whether this excludes anything depends on type inference. For instance, in hit hard, hit is a verb because it can form a sentence alone, so it is of category s = ∅*\s but hard is not a verb if it is of the type s\s. What about may in it may (rain)? If its category is v?\v then it is an adverb and verb at once, i.e. an auxiliary.
Chomsky’s four-field of categories n, v, a, p forms another Klein four-group.
The third column shows a corresponding relational identity. An adjective can be represented as a relation join with a noun. If both adjective and noun are absolute (monadic), we get an intersective adjective. A difference between adjectives and nouns is that nouns are monadic more often.
This concludes the chapter on conceptual tools. On to TMAD.
Aspect types are (types of) event types, defined by regular schemata, and used to classify phrases according to aspectual behavior.[55] What aspect types are worth distinguishing for a given language depends on the grammatical means available in the language. Phrases are not assigned to aspect types by fiat (i.e. there are few rules specifically saying that a verb belongs to a given lexical aspect type), rather, the aspect type is inferred from the inherent semantics of the phrase, its grammatical behavior and the context at hand. Intuitions about the aspect type of a phrase, say, a verb, are influenced by images of prototypical uses of the verb, and may be refined by bringing up alternative scenarios where the verb is at home. Aspect type is just a projection of the compositional structure of expressions in the temporal dimension. For instance, paint is vague between activity and accomplishment, because by applying paint one can both affect and effect things (paint a fence/picture). Watch TV is an activity but watch a TV program an accomplishment.
The list of aspect types below is not exhaustive, for the simple reason that there is no exhaustive list of aspect types. Any regular expression on events is an aspect type. Aspect types do not form a taxonomy of mutually exclusive and jointly exhaustive classes either, they can nest and intersect as freely as regular expression languages do. Still, certain simple aspect types are typologically most common, and deserve most attention.
For brevity of expression, I represent the commonest aspect types by sorted variables a,b,c,d,o,m,p,q,r,s, which thus imply aspect restrictions on their values.
What is a tense and what is an aspect? By a largely Anglo-Saxon tradition, tense orders events in time, in particular, relates them to a deictic or indexical utterance time or to a relative or anaphoric reference time, while aspect relates events to their parts or phases (Sweet 1900, Holt 1943, Jakobson 1957, Comrie 1976, Lyons 1977, Lapolla/Van Valin 1997:40). Thus Sweet (1900 [1955:101]): ‘aspects involve distinctions of time independent of any reference to past, present or future’. For Comrie (1976:3) ‘Aspects are different ways of viewing the internal temporal constituency of a situation’ (similarly Smith 1991:xvi). Roman Jakobson defines aspect negatively as characterising “the narrated event itself without involving its participants and without reference to the speech event” (Jakobson 1957:4). For Frawley (1992), aspect is 'the nontemporal, internal contour of an event.'
A continental tradition (Brugmann 1904, Agrell 1908, Jacobsohn 1926, 1933, Hermann 1927, 1933, Porzig 1927, Koschmieder 1928/29, Bühler 1934, Goedsche 1940, Sørensen 1943, Rundgren 1959, Pollak 1970, Forsyth 1970, Bache 1982, 1985, 1992, 1995:226, 314, Raible 1990:195, Verkuyl 1993:10, Johanson 1998, Bertinetto/Delfitto 1998) separates aspect (aspectuality) as point of view from Aktionsart (actionality) as event type.
According to this tradition, aspect (Russian vid) is "Gesichtspunkt, unter dem ein Vorgang betrachtet wird" (‘the point of view under which a process is regarded’, Porzig 1927:52), while 'Aktionsart ist, im Gegensatz zu Zeitstufe, die Art und Weise, wie die Handlung des Verbums vor sich geht' (‘Aktionsart, in contrast to temporal stage, is the manner in which the action of the verb proceeds’, Brugmann 1904:493). However, already Streitberg (1891) and Leskien (1909) gloss over the distinction (Bache 1985). Johanson (1998:§11.7) traces the conflation back to Curtius (1846).
For many authors Aktionsart means lexical or derivational aspect (Lyons 1977:705-706). Some Slavic aspectologists classify Aktionsarten (sposoby dejstvija) by morphology (Cohen 1989:37). For some, Aktionsart is a ‘combinatory variant’ or use of a (grammatical) aspect, i.e. an occasional meaning created by the ‘sum of all the elements which contribute to a particular interpretation of the basically invariant meaning’ (McCoard 1978:142). Among traditional Aktionsart distinctions are punctual vs. durative[56], dynamic vs. stative, telic (resultative) vs. atelic (irresultative), ingressive vs. terminative, semelfactive vs. iterative and habitual vs. nonhabitual (Bache 1985:13).
As Frawley (1992) sums it up,
the literature on aspect frequently draws a distinction between two kinds of computation of event structure: those that derive from modification of the event proper, called Aktionsart ('kind of action') or lexical aspect, and those that are a function of a perspectival change on an event as induced by discourse structure and information flow. There is little agreement on the proper terminology here, though see Bybee (1985) and Brinton (1988), and it is not always clear that this distinction can be drawn consistently (Comrie 1976).
By and large, aspect/Aktionsart distinctions involve degree of grammaticalisation and indexicality (Maslov 1959:160). Convictions may vary with language: those dealing with grammaticalised finite aspects/tenses (perfective/imperfective oppositions: Romance, South Slavic, German) appreciate the dichotomy (Bertinetto/Delfitto 1998), those without them (English, Nordic, Russian, Arabic) may play it down. Some authors concede that aspect and actionality distinctions are ultimately based on the same or similar ontological distinctions or semantic primes (Lyons 1977:706, Bertinetto/Delfitto 1998:§4.2). Others (Johanson 1998) specifically deny this. The first camp is open to aspect composition; the second camp goes for two-level theories.
The aspect/Aktionsart distinction is not a primitive opposition in the present calculus, but it can be characterised (see section below on aspect as point of view).
Some writers make a terminological distinction between event-internal aspect and sequential phase (Joos 1968:138-146, Cattell 1969:120-123, Mourelatos 1981:195fn) and others between indexical tense and infinitival taxis (Jakobson 1957).
To sum up, the tense/aspect/Aktionsart distinctions play on three logically independent but typologically related criteria: (i) temporal order vs. event structure, (ii) finiteness (absolute or indexical vs. relative or bound reference), and (iii) grammar vs. lexicon.[57]
Dahl (1985:25) points out that notional tense/aspect definitions may not make tense and aspect disjoint. This is no problem from my point of view. The calculus of event types covers both, making no sortal difference. The logical notion is event type. Tense and aspect are grammatical categories, subject to cross-linguistic variation around prototypes where many criteria agree (Bertinetto/Delfitto 1998). Given a calculus, taxonomical terminology is often more of a hindrance than help (e.g. Declerck 1997:59).
When has a language got a tense or aspect? By one definition, when marking the temporal relation explicitly is the default or obligatory, a grammatical accident rather than a lexical option (Dahl 1985, 1998). This is admittedly vague, and probably inherently so.[58] For TMA semantics, the grammatical makeup of a tense or aspect expression is not essential. Tenses and aspects are indicated by members of almost any grammatical category from both open and closed classes: verbs and their inflections, nouns and their inflections, adverbs, prepositions, conjunctions and so on. For this reason, it is essential to develop a theory which allows capturing and combining the contribution of all of them at once. Dixon (1995) lists a number of languages (Ainu, Mundari, Tunica, Hopi, Paumari, Pirahã, Warrgamay) which don’t have anything that could be called a tense system in their grammar. All these do show an aspectual system. !Xu is said to have no grammatical marking for tense or aspect; but it too has a set of temporal adverbs (Snyman 1970).
It is a feature of my calculus that the difference between verbs, aspects, tenses, and temporal adverbials is only a grammatical one, they all denote the same sort of thing. A consequence of this is that not only verbs, but adverbials too, may exhibit aspect. Similarly, nothing crucial hangs on the decision whether a given form is a tense or an aspect or perhaps both. This also avoids concentrating too exclusively on verbs and their inflections. Chung/Timberlake (1985) show that TMA devices use a wide range of structural means (verbs, adverbs, clitics and affixes). Not all languages have grammatical tenses (Bybee et al. 1994:119). Tenses are optional in some languages (Comrie 1985:103). Tenses and temporal adverbials are complementary in some languages (Comrie 1985:31). What are temporal adverbials in some languages occur as bound morphemes in others (Comrie 1985:18).
Surely there are interesting crosslinguistic statistical and developmental universals to be found. Nominal reference is largely independent of TMA, which is one sense in which tense and aspect specifically concern verbs (Comrie 1985, Chung/Timberlake 1985, Langacker 1987:189). It is an old observation that languages tend to follow developmental paths where new forms come in as compositional phrases, get grammaticalised into affixes, and die out as lexicalised inflections ready to enter a new cycle. The criteria of obligatoriness and bound morphology are two sides of the same grammaticalisation coin (Comrie 1985:10). There are interesting regularities concerning the grammaticalisation of TMA notions (Dahl 1985, Bybee 1991, Bybee et al. 1994, Lindstedt 1996, Leinonen 1996), many of which can be explicated if not explained in a formal semantics.
The formalism of this book is an event calculus, a formal language whose terms denote events. Event normally refers to happenings rather than states of affairs. I use it here to cover (courses of) events, episodes, situations, states, occasions, even times. Each candidate for a catch-all term has its own aspectual type. Given the duality of dynamic events and static situations, most choices are awkward for one side or the other. On the plus side, event is both everyday and short.
Historical courses of events like Caesar's murder or the life of Jesus are complex closed event tokens including many others: love, hate, intrigue, suffering and death. A generic event type like murder has common structure: a victim, a perpetrator, a place and time, but no specific scene or date. A situation in my usage is a complex open event (type or token) of no specific duration; an occasion a bounded situation which defines the current resolution.[59]
In this section, I develop a taxonomy of event types.[60]
The main dichotomy in aspect types is between open and closed event types. An open event type a (for activity[61]) is cumulative (Quine 1960, Krifka 1987) or summative (Löbner 1990), i.e. closed under arbitrary joins. Open event types take durative adverbials (for t) and are inceptive with bounding adverbials (in/by t). They can stop or continue or resume (Heinämäki 1974:10).
if e ⊆ a then ⋃e ⊆ a
In particular, then, open event types are closed under join and concatenation, i.e. a⋃a ⊆ a, aa ⊆ a, and a^{+} = a. The last property shows that open event types are noncounting languages, or aperiodic monoids (McNaughton/Papert 1971, Pin 1997). The open/closed distinction has been recognised since Aristotle as an instance of the divisible/indivisible or count/noncount distinction (Verkuyl 1972, Gabbay and Moravcsik 1973, 1980, Mourelatos 1978, Bach 1980, Hoepelman and Rohrer 1980, Bennett 1981, Carlson 1981, Link 1983, Krifka 1987, Langacker 1987, Frawley 1992:331).
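These closure properties can be checked in a small sketch that models event tokens as strings and event types as regular languages; the letters a and n (for the open event and its complement) are my own illustrative encoding, not the notation of the calculus:

```python
import re

# Toy model: an event token is a string over {'a', 'n'}, where 'a' means
# the open event holds and 'n' means its complement holds.
# An open (activity) type is then the regular language a+.
OPEN = re.compile(r"a+")

def in_type(pattern, token):
    """Check whether a token instantiates an event type."""
    return pattern.fullmatch(token) is not None

tokens = ["a", "aa", "aaa"]

# closure under join/concatenation: aa is again of type a
assert all(in_type(OPEN, x + y) for x in tokens for y in tokens)

# a+ = a: iterating an open type yields the same type
assert all(in_type(OPEN, x * k) for x in tokens for k in (1, 2, 5))
```

The same two-letter encoding will serve below for changes and cycles; it is a sketch of the aperiodicity claim, not a full model of the algebra.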
Open event types are structurally similar to noncount (mass or plural) nouns, i.e. they are divisible and amorphous in the same restricted sense. This is reflected in deverbal nouns from open verbs: work or rain is a noncount noun. Open events satisfy the characteristic axiom (Aristotle’s principle) that the present (progressive) implies the perfect: one who is Ving has Ved. Also one who stops Ving has Ved, while one who almost Vs is not Ving (Johanson 1998). Relative frequency adverbs continually, occasionally, frequently produce open event types out of open event types.
Many aspectologists (e.g. Bertinetto/Delfitto 1998) feel that open event types exclude loose bounding adverbials like in t. In my calculus, there is no rule against it. The only problem is that, by Aristotle's principle, an open event type holds already from the start. I will be with you in a moment thus comes to mean I will come to you in a moment. Languages may prefer to state this explicitly by using an inceptive change instead of an open event type.
A state s is a simple open event. A simple (or pure) state is atomless in time and atomary in type. It is fully homogeneous or homeomerous in that all temporal and logical parts of it are members of it, i.e. it is an atom of the event type algebra (no other event type is entailed by it).
s = in s = of s
(Taylor 1977, Dowty 1979:166, Mourelatos 1981:192, Vlach 1981a:273, Frawley 1992:146). This is the downward closure property known as the hereditary or subinterval property (Bennett 1974:64, Hirtle 1975:27-28, Taylor 1977, Dowty 1979, Humberstone 1979, Röper 1980, Löbner 1988, Parsons 1990:184, Smith 1991:37, Krifka 1986, 1989, Verkuyl 1993:195ff, Link 1997). Since states are also open (closed under joins), states are in fact ideals in the event algebra (Halmos 1974:48, Verkuyl 1993:56).
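The hereditary property lends itself to a quick check in a toy model where a state token is a string of a single letter (an illustrative encoding of my own, not the book's notation):

```python
import re

STATE = re.compile(r"s+")   # a simple state: the letter s repeated
token = "ssss"

# hereditary / subinterval property: every nonempty part of a state
# token is itself a token of the same state type
parts = [token[i:j] for i in range(len(token))
                    for j in range(i + 1, len(token) + 1)]
assert all(STATE.fullmatch(p) for p in parts)
```

Together with closure under joins (checked as for open types above), this makes the state's tokens an ideal in the string model.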
States are analogous to abstract or mass terms. The algebra of a state is a complete Boolean algebra. Its topology is continuous relative to the assumed topology of time.
Given that in x of e denotes the event type of arbitrary parts of an event e, pure states satisfy
s = in x of s
This entails that the progressive, if defined by in x of e, will be vacuous when applied to simple states. For instance, pest entails animal and harmful.
The English progressive excludes simple states because they have no associated process to be in (Vlach 1993:242). He was being a pest shows be a pest is not a simple state here (there is something one does, to be a pest[62]). A state in the simple tenses implies the state actually holds at the reference time (whatever the reference time). Since points are nothing but events without proper parts, it follows that simple states are true and false pointwise (Galton 1984:15).
Call an event type timeless if it is two valued (denotes the unit event or the empty event). A timeless event is constant relative to time. Examples of timeless event types are eternal truths like one plus one is two, boys are boys and que será, será and temporally definite ones like It always rains in London or Christ died for our sins. (Rescher and Urquhart 1971:256).
There are also events that are independent of time in the weaker sense of not depending on times other than when they hold. Events that are independent of time in this sense need not be eternal. They can begin and end, they just don’t have ‘anything to do with time’. 'Being a function of time' here means not only that e varies with time, i.e. is true and false at different times, but that its occurrence at one time depends on what happens at other times. Simple states are timeless because they do not depend on a comparison between states of affairs at different times for their verification (Frawley 1992, Santos 1996). They can thus be predicated of times of any duration. Examples are round, large, red, and other geometric and qualitative properties that can be verified from a still image (Rescher and Urquhart 1971:23, Langacker 1987:220ff). Many qualities are orthogonal to time in this sense. For want of a better term, call such states atemporal. There are temporal qualities, for instance age.
Simple states are not agentive, because agency implies action, hence activity. They fail agentivity tests like force/persuade, the imperative, deliberately/carefully, and do anaphora (Lakoff 1965, Heinämäki 1974:9, Dowty 1979, Smith 1991:42, Frawley 1992:146ff). Being static, states have inertia: they continue to hold by default until something happens to cancel them (Dowty 1986:51). They do not consume energy (Comrie 1976, Nedjalkov 1988). All such ascriptions pertain to the essence of the state, not its causes. That star is bright denotes the effect the luminosity of that star is high, not the cause that star emits a lot of radiation.
Aristotle (Cat. 8b-9a, Met. 1022b) introduces a distinction between temporary states (skhesis, diathesis) like ill, permanent states (hexis) like learned, and capacities (dynamis) like hard. The classification shows up in many languages, and is worth registering as follows.
Some states are temporary, i.e. typically bounded by their opposites: he is awake/asleep, he sits/stands/lies, one goes in and out of them (Carlson 1977, Smith 1991:38). Following Galton (1984:58) and Santos (1996), I call such conditions temporary states. Portuguese uses the auxiliary ser ‘be’ for permanent states (qualities and habits), and estar, etymologically 'stand’, for temporary states (conditions).[63] Permanent states include timeless states and capacities.
What is the representation of temporary state in our calculus? Extensionally, a maximal temporary state is a state bounded by its opposites, i.e. a cycle ¬s s ¬s. Any factor of such a temporary state is also a temporary state, i.e. any event of type s in ¬s s^{+} ¬s. Intensionally, we want to account for typically: a typically temporary state typically ends, though not necessarily in every instance. Someone can be ill and never get well, but not everyone can: the average is by definition normal. The outcome: we may characterise temporary state by the formula gen prog pf e: a state which as a rule happens in cycles.
Generic states (habits or dispositions[64]) summarise states of affairs through longer periods, from which it follows that they need time to be verified (Smith 1991:39). Examples: smokes, is reliable. Predicating generic states of very short times seems awkward (Kucera 1981:184-185). This shows that they have coarse granularity. Simple and generic states are (relatively) permanent or stable (Santos 1996, Nedjalkov 1988). Generic states have modal and counterfactual import. The distinction between habits and dispositions will be studied more closely below.
A dynamic state is vague between a simple state, a temporary state, and a process maintaining a state (e.g. a position, a posture, and maintaining a posture). Stand is a prime example: a building stands in a simple state, movable objects stand or are standing in a temporary state, and a person is standing in a dynamic state. A dynamic state can only be verified by looking at a neighborhood of a point (one cannot tell for sure that an object is stationary from a still picture, G.Carlson 1977a:428). This does not prevent it from holding as a simple state at a point with the right kind of neighborhood. (An event type can entail other events without denoting them.) A dynamic state (state of rest like stand or float) can only be verified at an interval, although the interval can become arbitrarily small. This distinguishes dynamic states from simple states, which are timelessly true at points.
d = s ⋃ p
The process of maintaining a state can be described as the limit of a series of small changes between the steady state and its complement. The appositeness of this analysis is seen in the trembling associated with difficulty of maintaining a posture. When resolution is decreased, a dynamic state becomes an oscillation of small adjustments around an equilibrium. Often (but not exclusively) the distinction correlates with the type of subject; animate (agentive) subjects allow the dynamic reading, inanimate objects the stative one. Examples: posture verbs stand, sit, lie, hang, hold, keep (Dowty 1979:173ff). I will return to dynamic states in the section on counterfactuals.
The interesting thing about dynamic states is that the result state is continuous and simultaneous with a continuous process. The imperfective can denote the preparatory stage or progress of the maintaining process, the perfective the acquisition or affirmation of the result state, and the perfect the result state. For instance hold in the imperfective means ‘try to get/keep a hold’, in the perfective ‘manage to get/keep a hold’, and in the perfect ‘have got a hold’. Simple and progressive aspect can denote the same event for dynamic event types: sit, stand, feel, hurt, think, hope. Examples: My feet hurt/are hurting. She thinks, or thinks she is thinking about her, as Paul reads. – I'm not sure I know fully what you are suggesting. – I suggest nothing immediate. (Bache 1985:230). Temporary and dynamic state are the Aktionsarten of progressive aspect. Holding a plan is a dynamic state, hence the future progressive is one.
Closed events are dual to open ones. An atomary (singular closed) event type, i.e. one that is closed, bounded and simply connected, is nondurative, noncumulative, indivisible, or integrative (Löbner 1990): it cannot be continued, summed or divided, only repeated. It is really the notion of a singular count event that is involved here. For such an event type b, the Boolean sum b+b or the concatenation bb is not of type b (Galton 1984:103, Krifka 1987):
b+b ⋂ b = ∅
This property allows nesting, i.e. vagueness about boundaries as in ¬s(¬ss)s. The main thing is that there is numerically just one event of type b there. Closed sets in general are closed under finite joins, while for instance closed and connected sets are not. The part algebra of an atomary closed event type is the trivial Boolean algebra. Its relative topology is the indiscrete topology. Atomary closed event types are analogous to singular count nouns. Their iterations are analogous to plural count nouns, whose algebra is an atomic Boolean algebra and which are again open in a discrete topology, see below.
Atomary closed event types have a temporal profile (Hirtle 1975:§3.4, Langacker 1987:244, Löbner 1988:184), a beginning, middle and end (Aristotle), so the sequence of two such events in a row just has not got that same profile. Closed events satisfy Aristotle’s movement principle: one who is Ving has not (yet) Ved. One who stops Ving may not have Ved. One who almost Vs may already be Ving (Smith 1991, Johanson 1998).
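Noncumulativity and indivisibility can be sketched by encoding the beginning-middle-end profile as the three letters b, m, e (an illustrative encoding of my own, not the book's notation):

```python
import re

# An atomary closed event: one complete beginning-middle-end profile.
ATOM = re.compile(r"bme")

assert ATOM.fullmatch("bme")

# noncumulative: b+b ∩ b = ∅ — two complete events in a row
# do not form one event of the same type
assert ATOM.fullmatch("bmebme") is None

# indivisible: no proper part carries the whole profile
proper_parts = ["bme"[i:j] for i in range(3)
                           for j in range(i + 1, 4) if (i, j) != (0, 3)]
assert all(ATOM.fullmatch(p) is None for p in proper_parts)
```

The sequence of two such events lacks the single profile precisely because the second beginning follows the first end.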
A simple change c is a filter of nested half closed sequences of an open event and its complement (von Wright 1963, Frawley 1992), converging on, but never coincident with, the vanishing period (point) of change in between. For as Aristotle (Ar. Phys. VI.8) argues, change (as well as rest) always takes time, because it includes its boundaries, and there are two of them. At the same time, simple change has no minimum extent.
c = ¬aa
The point of change is covered by one of the complement states: that is what it means for states to be complementary. For instance, if a house is built, in the beginning there is no house, at the end there is one, and in the middle there is an unfinished house; but really, there is no house until there is one. Thus the event type ¬aa does not only describe instantaneous change (but it can describe one). A change is closed, because cc = ¬aa¬aa is not of type ¬aa. On the other hand, change can be (improperly) nested, i.e.
c = ¬aca
which entails c = ¬a^{+}a^{+}. Changes can be simple/atomic (a is a state s) or complex/extended (a has proper parts). Half closed event types exhibit an event/result ambiguity with for t and again (The sheriff jailed Robin Hood for years/again).
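Modelling a as the letter a and ¬a as n (an encoding of my own), a change becomes the regular language n+a+, and its closedness can be verified directly:

```python
import re

# A simple change c = ¬a a, with improper nesting absorbed: c = ¬a+ a+
CHANGE = re.compile(r"n+a+")

assert CHANGE.fullmatch("na")      # minimal change
assert CHANGE.fullmatch("nnaa")    # nested: ¬a(¬a a)a

# closed: two changes in a row are not themselves a change,
# so cc is not of type c
assert CHANGE.fullmatch("na" + "nnaa") is None
```

The failure of the last match reflects the relapse into ¬a between the two changes, which a single change excludes.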
A change means that a certain open event type a becomes true, so we may write become a for ¬aa. The denial of become a is stay a i.e. aa. I define become here in terms of the concatenation operator. Dowty (1979:143ff) takes become as a primitive but finds that it is not sufficient alone for defining noncomplementary changes like go from the post office to the bank. He notes that things would be simpler taking concatenation (von Wright’s T) as the primitive, but is reluctant to do so as become seems a more natural linguistic primitive to him. Instead, he adds Cresswell’s (1977) and as a second primitive and writes from the post office to the bank as become ¬post and become bank, which equals post.(¬post⋂¬bank).bank.[65]
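Dowty's point that concatenation handles noncomplementary changes can be illustrated by encoding the three locations as single letters (P, N, B are hypothetical labels of mine for at the post office, at neither place, and at the bank):

```python
import re

# P = at the post office, N = at neither place, B = at the bank
# (labels are illustrative, not part of the calculus)
FROM_TO = re.compile(r"P+N+B+")   # post . (¬post ∩ ¬bank) . bank

assert FROM_TO.fullmatch("PPNNB")   # a walk from P to B
assert FROM_TO.fullmatch("PNB")     # minimal instance

# the path must pass through the intermediate region
assert FROM_TO.fullmatch("PPBB") is None
```

With concatenation as primitive the intermediate phase ¬post ⋂ ¬bank is simply the middle factor; no second primitive is needed.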
Ryle (1949:149), attributing the insight to Aristotle, distinguishes achievements (‘success words’ or ‘got it words’) from activity or process or task words (‘try words’). The contrast between Ryle’s achievements and his activities, processes and tasks is exemplified by kick/score, treat/heal, hunt/find, clutch/hold fast, listen/hear, look/see, travel/arrive.
An interesting point is that Ryle’s achievement words include both (more or less) sudden climaxes and protracted proceedings, i.e. both gettings and keepings. They include win, unearth, find, cure, convince, prove, cheat, unlock, safeguard, and conceal, but also keep a secret, hold the enemy at bay, retain the lead. Both descrying a hawk and keeping it in view are sorts of success for Ryle. This is a telicity (resultativity) distinction. A change or an absence of change is resultative because it ends in a state complementary to the one it starts with or would have ended with. A causative event type is resultative if it causes or prevents a change. Ryle's point here is that both closed and open events can be resultative. Let us have a closer look at the latter.
Absence of change, the event type aa of a continued open event, could be called, with Aristotle, rest, or permanence, or continuative event type (cf. continue in the section on phasal verbs below). Permanences are extended: Stay entails stay for some time. Poutsma (1921:31) has a list of continuative event types, including those formed with aspectualiser on: read on, go on, keep on. Absence of change is open: closed under arbitrary joins and finite meets.
From this one might expect to find absence of change exclusively imperfective. Yet crosslinguistically, verbs for remaining are perfective when they indicate the absence of a change at or for a definite time. In Bulgarian, phasal verbs for stopping, staying, keeping and continuing are perfective (Lindstedt 1985:180). In Finnish, there are two different verbs, perfective jäädä ‘get stuck, fall, be left behind, get off, remain’ taking goal complements and result state time adverbials, and imperfective pysyä ‘stay in place’ taking location complements and durative time adverbials. Jäädä means 'not continue to move, stop', pysyä means 'continue to not move, not start'. In practice, the two are often interchangeable. Compare
Jäin huoneeseeni koko päiväksi. ‘I remained in my room for the whole day.’
Pysyin huoneessani koko päivän. ‘I stayed in my room all day.’
But there is a difference as to when I did it: in the morning, or throughout the day:
Tänään (=tänä aamuna) jäin huoneeseeni koko päiväksi. 'Today (=this morning) I remained in my room for the whole day.'
Tänään (*tänä aamuna) pysyin huoneessani koko päivän. 'Today (*this morning) I stayed in my room all day.'
The following example is revealingly ambiguous:
Mies jäi junasta asemalle. ‘The man left the train at the station/the train left the man at the station.’
The first reading says the man arrived by train at the station. It can be symbolised as man on train⋂¬train at station.train at station.¬man on train.¬train at station, whose subevents are The man was on the train, the train arrived at the station, the man left the train, the train left the station. The second reading says the man missed the train. It is the denial of ‘The man took the train at the station’, symbolised by man at station⋂¬train at station.train at station.man on train.¬train at station, whose subevents are The man was at the station, the train arrived at the station, the man entered the train, the train left the station. The only part actually negated by the second reading is the man entered the train. Thus the representation of the second reading is man at station⋂¬train at station.train at station.¬man on train.¬train at station, whose subevents are The man was at the station, the train arrived at the station, the man failed to enter the train, the train left the station.
The curious thing is that the Finnish sentence portrays the man as ‘moving’ from the train (junasta ‘from the train’) to the station (asemalle ‘to the station’), while in actual fact he fails to make the opposite move from the station to the train. The assertion side of the case assignment is right: at the final state, the man is at the station and not on the train, in conformity with the cases. It is the presupposition side that fails, for the man was at the station and not on the train already at the initial state.
Tommola (1986) observes that Finnish resultative object case alternation appears even in permanences like
Simonides kantoi aina kaiken/kaikkea mukanaan. ‘Simonides always took/carried everything with him.’ ¬(carry⋂t).carry⋂t
Antti piti hevosen jalkaa/jalan paikoillaan. ‘Antti tried/managed to hold the horse’s leg in place.’ ¬(hold⋂t).hold⋂t.
Perfective permanence presents a problem only if we define resultativity through actual noncontextual (lexical) change. Looking at the instances more closely, it appears that the resultative total object form here indicates permanence at a point or through a time. When the object is total, an adverb of time is implied. Simonides carried everything along always when he left or all through his life, Antti held the leg in place until the operation was done. Given that the time t is future relative to the initial state ¬(hold⋂t), this means that the initial state is in effect expected state: the hoof was going to budge at t, but it did not.
This insight connects absence of change to branching future. The man’s plan was to get on the train. The actual course taken by the events causes a change from the expected future (the man was (going) to be on the train and not at the station) to the actual present (the man is at the station and not on the train). This step is not purely temporal (future), but counterfactual (past future): At the initial state, the man was to be on the train, at the final state, he was not (Lindstedt 1985:214). Although the facts do not change, their likelihoods do: at the initial state, the final state is not likely or at least not certain, at the final state, it is. Illustration:
Figure 8

The diagram can be read in terms of classical kinetics. In distance per time coordinates, rest is traced by the constant horizontal line, and steady motion by the straight diagonal line. Starting and stopping are mapped by curved lines where velocity changes. Continuation (constant velocity) is described by straightness of line. Rotating the right half plane 45 degrees clockwise we can visualise the situation where a straight line passes from rest to motion, while the absence of motion traces a curved path. As physics tells us, all change is relative to context, the system of coordinates. In terms of classical dynamics, if the force field changes around the origin, a force is present in the absence (cancellation) of expected acceleration. Opposition to change becomes permanence.
The verb jäädä 'remain' is the passive of causative jättää 'leave', which makes the above picture more concrete. Although the object left behind does not change position relative to its own coordinate system, the subject leaving it behind does, so relative to the coordinate system of the subject, the object does change position.
A prediction of this counterfactual analysis is that perfective permanence appears at genuine branching points in the tree of possibility. I (perfectively) stay at home at those points where I was supposed to leave home but did not (Ar. Phys. 226b14-15). Those are the points at which I cause it that I go on being at home by preventing an alternative course of events. Though there is no change in actual fact, there is a change in possibilities (Heinämäki 1974:182).
In Portuguese, the same verb ficar means ‘stop, remain’ and ‘get, become’, e.g. ficou ali ‘he stopped/got/remained there’, formally <s. In Swedish too bli is ‘become/remain’. It seems initially paradoxical that one verb can denote apparently contradictory things (change and its negation), but really this is a simpler vague event type. The final state is the same, only the presupposition (initial state) is unspecified. Become and remain and their negations form another square of opposites (Löbner 1990:89).
In general, verbs be, become, remain, their causatives make, cause, have, keep, media or passives get, as well as the corresponding verbs of position and motion stand, sit, lie, come, go, stay, take, bring, hold form a central locus of neutralisation and metonymy for TMAD systems.
A process (Vendler’s activity[66]) is a state of change (Galton 1984), (the limit of an) iteration of some (complex) event.
p = e^{+}
Here e is not p itself (that defines states). The resolution of p need not be unique (Parsons 1990:184). A process is open and extended (Bennett 1977, 1981:18), because p = e^{+} = (e^{+})^{+} = p^{+}. One of Vendler’s criteria for activities was the Aristotelian entailment that the progressive implies the perfect, i.e. he is walking entails he has walked. Although this entailment may not be strictly valid because of the granularity of some processes (having walked may require a certain number of steps, and one can be in the middle of them before one is through them), it is true for coarser resolutions. As in Carlson (1981), I see no need to add a subinterval or homogeneity principle qualified by granularity (Link 1998:203,303). Such a principle seems already to follow from the upward closure principle just given.
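The equations p = e^{+} = p^{+} can be checked in a toy string model; encoding one stride of walking as the string lr is my own illustrative choice:

```python
import re

STEP = "lr"                      # one stride (illustrative encoding)
PROCESS = re.compile(r"(lr)+")   # p = e+: iteration of the step

assert PROCESS.fullmatch(STEP * 3)

# open: a join of two walks is again a walk, hence p = p+
assert PROCESS.fullmatch(STEP * 2 + STEP * 5)

# granularity: half a step is not yet a walk
assert PROCESS.fullmatch("l") is None
```

The last assertion is the granularity caveat in miniature: mid-stride, one is walking without yet having walked.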
Whether a process appears dynamic or static is relative to scale or resolution (Langacker 1982:269). A process which oscillates around a steady state involves a series of local changes without producing a global change; it is dynamic in the small but static in the large. It can be smooth, continuous and monotone or have complicated discrete internal structure. Lose energy is also a process.
A process can be an oscillation, an undirected iteration of a change or cycle. (Recall that for states s, ss ⊆ s):
p = (¬ss¬s)^{+} = (¬ss)^{+}¬s = ¬s(s¬s)^{+}
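The three formulas are equal only because open phases absorb (ss ⊆ s, so adjacent complement phases merge): the final ¬s of one cycle coincides with the initial ¬s of the next. A sketch, with s and n as my own labels for the state and its complement, and absorption modelled by collapsing repeated letters:

```python
import re

def collapse(token):
    """Model absorption for open phases: nn = n, ss = s."""
    return re.sub(r"(.)\1+", r"\1", token)

# three tokens of (¬s s ¬s)+ concatenated, then absorbed
token = collapse("nsn" * 3)
assert token == "nsnsnsn"

# the same token instantiates all three forms of the oscillation
assert re.fullmatch(r"(nsn)+", "nsn")   # (¬s s ¬s)+, one cycle
assert re.fullmatch(r"(ns)+n", token)   # (¬s s)+ ¬s
assert re.fullmatch(r"n(sn)+", token)   # ¬s (s ¬s)+
```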
An oscillation is irresultative (it produces no overall change). Such states of change have and have not got a temporal profile (Jakobson 1957) depending on granularity. Unlike states, a process is of a different event type than its complement, the absence of a process. Unlike events, a process can go on, it has no built in lifespan (it is periodic, Rozenberg and Salomaa 1997). In a small enough scale, an activity entails change; in a large enough scale, it entails permanence. In the absence of comparative change, there is no limit to approach or reach. In Russian, many nondirected process verbs lack perfective pairs (Johanson 1998).
For iteration of discrete closed events (pf e)^{+} or pf^{+} e the name series (Santos 1996) seems apt. If processes in general are noncount, then a series is specifically the plural of an event type in that it denotes a set of countable and separable individual events. The Boolean algebra of a series is a nontrivial atomic one. Its topology is the discrete topology. Serial event types are usually glossed by ‘one by one’, ‘one after another’. Series is a special case of plural (or distributive) event type, consisting of a plurality of discrete events of the same type (not necessarily successive but, for instance, spatially distributed). Some languages, like Navajo, have distributive or plural aspectualisers.
Or a process can imply progress, directed gradual comparative change where there is a stepwise but for a large enough granularity monotone increase or decrease along some comparative property (scale).[67] It is important to distinguish relative or comparative change such as become larger from absolute or positive change such as become large. It is easy to prove that relative change is open while absolute change (change to a specified value) is closed (one cannot become large twice in a row without becoming small in between, Dowty 1979:168-169). A comparative change produces a global change relative to any given bounded period of time (is perfective), while at the same time, it also satisfies closure under joins, so it is imperfective. Accomplishments denote comparative changes accumulating into an overall absolute change. Progress or gradual adverbs gradually, little by little single out comparative changes (Bertinetto/Delfitto 1998).
Continuous change can be described as the limit of a series of changes where unit of resolution tends to zero. Continuous change is homogeneous (closed under open subperiods). Still, although arbitrarily short, continuous change is still extended (it takes at least two data points to detect it, Ar. Phys 239a20ff).
And then of course there are complex processes, for instance building might be described by a complex regular expression in the style of (hammer⋃saw⋃carry⋃rest⋃think)^{+}. (Thus one can be building something without doing anything.)
The fact that processes or activities allow gaps has been noticed by several authors (Rescher and Urquhart 1971:160, Heinämäki 1974:19-20, Gabbay and Moravcsik 1980, Bennett 1981, Dowty 1979:81-82, Vlach 1981, 1993:242, Palmer 1987:§4.1): it seems okay to say the first sentence while allowing for the second sentence.
I’ve done nothing for the past hour except read this damn book.
Well, actually that’s not true, there’s the two and a half minutes that I went to the bathroom, and the two thirty-second periods I spent looking out the window, and all those fractions of seconds I was blinking...
There are several approaches to accommodating it. One is to say that quantification over times is restricted to relevant occasions. The pauses don’t count as long as they don’t include competing activities to the reading. (It would be different if I had taken up some other reading, watched TV, or been on the phone.) Another one is to say that the quantification is relative to granularity: it is enough to find a cover of the half hour with periods of reading of some reasonable granularity in each member of which it is generically true that I am reading. It is when resolution approaches the level of seconds that the universal quantification becomes false.[68] A third one is to say that one is literally reading the book even as one blinks, because one cannot reasonably read without blinking. Reading is defined by its result, not by what one does each individual minute. A fourth one (Bennett 1981:21) is to admit that activities simply are disconnected. I think all of these explanations are correct and compatible. Compare Vlach’s (1981a,b) example
(Someone walks into a theater, points to an empty seat and asks:)
 Is someone sitting here?
 Yes and no.
A person who has left a seat temporarily has not left it permanently. She is not sitting on it for the moment, but the seat is taken for the duration of the show.
Ritchie (1979) divides English activities into two groups depending whether they appear as cycles or acquisitions in temporal clauses. In
I shouted to him after he ran.
The guests arrived after he slept.
after he ran preferably means after he ran off, while after he slept preferably means after he slept enough. Changing massaged him for shouted to him and left quietly for arrived changes the preferences. In general, the closure of an activity appears as an acquisition when the scale is small (resolution is fine) and as a cycle when the scale is large (resolution is coarse). The interpretation is sensitive to context (the likeliest closure is chosen when a closed event type is required by the context).
A cycle (a momentaneous or transient event) o is a change to an event and back (Klein 1994:96). A cycle is closed, but not resultative, because the final state is the same as the initial state. Cycles will have an important role in explaining the existential perfect (Lindstedt 1985:213ff).
o = ¬ee¬e
Because it cannot be continued, any closed event b is equivalent to a cycle of itself ¬bb¬b (read: b once) but does not denote the same (the events have different parts). Cycles too can be simple or extended, with knock and hit as examples of the former, kiss and visit of the latter. Smith’s (1991:56) semelfactives are simple cycles, which shun the progressive. My definition is noncommittal about this: extended cycles like visit or kiss do allow progressive.
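Modelling the inner event as e and its complement as n (an illustrative encoding of my own), a cycle is the language n+e+n+; the sketch below checks that cycles are closed, while their iteration forms an open series:

```python
import re

CYCLE = re.compile(r"n+e+n+")    # o = ¬e e ¬e

assert CYCLE.fullmatch("nneen")

# closed: two cycles in a row are not one cycle...
assert CYCLE.fullmatch("nen" + "nen") is None

# ...but they do form a series (pf e)+, which is open again
SERIES = re.compile(r"(n+e+)+n+")
assert SERIES.fullmatch("nen" + "nen")
assert SERIES.fullmatch("nen")   # e+ covers a single occurrence
```

The last assertion mirrors the remark below that the iteration operator covers a single occurrence as a special case.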
Cycles are sometimes called punctual (Comrie 1976:41), momentaneous (Cohen 1989:76) or semelfactive (Smith 1991), and discrete iteration of cycles frequentative. Some languages as a rule mark the frequentative/semelfactive distinction in verb morphology (Finnish, Navajo), others as a rule don’t (English). This is analogous to marking plural inflection (Jespersen 1924:210). Note that the iteration operator e^{+} covers a single occurrence as a special case (proper iteration is ee^{+}). Forms in which iteration is unmarked actually put this option to practice. Cases in point are English cycles (cough, flash, blink) and the Russian imperfective, see below.
Smith (1991:67) claims that semelfactives, unlike achievements, do not have a future progressive. While Bright star is winning the race can mean she is in the lead, The canary is flapping its wings should not (in Smith’s view) mean it is going to spread them, only that it is already moving them up and down. This may be a question of scale. You can’t tell flapping from spreading before the wings are coming down, unless you are watching a slow motion film. Canaries' flapping of wings is not planned either. (Compare I am visiting my grandparents next weekend.)
Cycle is a minor lexical event type in many languages. English, for failing to mark number, does not distinguish between states and cycles or cycles and series (touch, sigh). It is a common event type in grammatical aspect, where cycles are produced by bounding open event types with temporal adverbials. The contrast between lexically produced resultative (half closed) event types (punctual-terminative in Ikola 1949) and grammatically produced cycles (closed event types, linear-terminative in Ikola 1949) is a major factor in the aspect-Aktionsart controversy (Johanson 1998).
Cycles shun the resultative perfect. Compare I stand corrected to I stand telephoned, where the phone call leaves no dent in its target (McCoard 1978:227, Parsons 1990:312). Or the door is locked/knocked.
Are closed events telic (resultative)? A definition of resultativity is needed to answer that. In Aristotle’s definition, result is the end (suffix) of an event type, which leaves it to the event type to detail the definition in each case. For a change, the result is the final state of the change. The more surprising consequence of the definition is that the result of an open event is the event itself. Thus an open event has attained its end as soon as it has begun.
Vendler (1967:103) attributes the idea to Gilbert Ryle (1947), who refers it back to Aristotle, Metaphysics 1048b:
Since of the actions which have a limit none is an end but all are relative to the end, [they] are in movement in this way (without being already where the movement aims), this is not an action or at least not a complete one (for it is not an end); but that movement in which the end is present is an action. E.g. at the same time we are seeing and have seen, are understanding and have understood, are thinking and have thought (while it is not true that at the same time we are learning and have learnt, or are being cured and have been cured) [...] Of these processes, then, we must call the one set movements, and the other set actualities.
The test has been used by others too (Lindstedt 1985:155) to distinguish between atelic and telic verbs. Actually Aristotle is not saying that actualities have no telos; quite the opposite, they are ends in themselves.
In the Aristotelian sense, the inception of an open event is resultative. The perfect tethela 'be in bloom' is resultative too, for it denotes the result of the acquisition thallein 'blossom' (Kühner 1896:§384).[69] In contrast, a cycle of an open event type is closed but not resultative, in that it does not produce a change into a state different from the initial state. The only (and irreversible) result of a cycle is that it happened. (This is Galton’s (1984:§5.3) pofective aspect.) Smith (1991:105) claims that some languages (Chinese) give resultative and irresultative perfectives separate treatment, others (French) don’t.[70]
Prototypically, an event type is resultative if it entails a change, i.e. is a half closed event type. On the other hand, the existence of permanences on the one hand and cycles on the other establishes a two-way proof of independence between change and resultativity: permanences entail no actual change but may appear resultative, while cycles entail changes but are irresultative. Yet change and resultativity do not seem unrelated. In some way, the exceptions are very special cases. The perfective of a permanence prevents a change at a specific juncture, while a cycle involves a change undone by an opposite change.
A permanence can be closed by limiting it to a specific time of expected change: (be.be)⋂t is a closed event type for any definite occasion t. But this does not yet capture the change in expectations. More informative is (t→be´¬be).be, which captures the idea that when one manages to stay on, one wasn't going to do so according to some counterfactual theory t, but is in actual fact staying now.
If we resolve the counterexamples in this way, we can keep to the simple idea that resultative or telic event types are topologically half closed, i.e. they contextually entail a one-way change:
telic: t→c
Galton (1984) suggests a definition by which an event type is irresultative if its closure coincides with the closure of its interior: pf prog e = pf e. In my framework, this holds for open events and cycles, so Galton's definition agrees with mine here.
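Galton's criterion can be checked mechanically in a toy string model. In the sketch below (my illustration, not part of the calculus proper), a token is a string of 0 (¬a) and 1 (a); prog maps each of these simple types to its open interior, and pf bounds the interior at both ends. The assumption that pf leaves a half closed change as it stands, since its culmination is already included, is mine:

```python
import re
from itertools import product

# Toy patterns over tokens written as strings of 0 (¬a) and 1 (a)
OPEN, CHANGE, CYCLE = r'1+', r'0+1+', r'0+1+0+'

def lang(pattern, n=6):
    """Finite proxy for an event type: all matching strings up to length n."""
    return {''.join(w) for k in range(1, n + 1)
            for w in product('01', repeat=k)
            if re.fullmatch(pattern, ''.join(w))}

def prog(e):
    # interior: strip the boundary phases (adequate for these three types)
    return OPEN

def pf(e):
    # closure: bound the interior at both ends; a half closed change is
    # assumed to stay as it is, its culmination being already included
    return {OPEN: CYCLE, CHANGE: CHANGE, CYCLE: CYCLE}[e]

# Galton's test: e is irresultative iff pf(prog(e)) = pf(e)
for e in (OPEN, CYCLE, CHANGE):
    print(e, lang(pf(prog(e))) == lang(pf(e)))
# open events and cycles come out irresultative, the change telic
```

Under these assumptions the test returns True for the open event and the cycle and False for the half closed change, matching the claim in the text.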
Vendler’s class of achievements is in my classification a mixed class containing half closed events (simple changes) and closed events (cycles), so it is defined as the disjunction
¬aa¬a?
(Mommer 1986:75, Moens 1987:57, Declerck 1997:201fn). Simple changes (if any) have no proper parts, so they should not allow an interior progressive. A traditional showcase, notice, does tend to get coerced to iteration in the progressive. Here, iteration is hardly distinguishable from incremental process:
People are noticing this stock and overlooking the other. (WSJ)
The prime minister was noticing symptoms. (LOB)
If we want to find out whether someone has been noticing what he has been reading, we are generally content to decide the question by cross-questioning him not long afterwards. (Ryle)
In Ryle’s (1949) unwitting example notice is already close to pay attention.
It seems many momentaneous achievements are in fact closed from both ends in that the result state too is bounded in duration. The awkwardness of I noticed it for a moment as a reference to the duration of resulting awareness suggests notice already has a cycle for result state, unlike recall (compare recall for a moment). Perhaps notice means ‘become aware for a moment’, not just ‘become aware’. Find is more like a change from lost to found: We found her for a moment but then she ran away again somewhere seems passable.
Arrive is similar to notice. One cannot say He is arriving/he has arrived very long before or after arrival. It seems arrive would be something of the order of (¬there.there)⋂near, which makes it a cycle of little or no duration. She came for good is fine but She arrived for good is odd (Mittwoch 1988:215). Interestingly, Russian imperfective movement verbs with pri- 'near', e.g. prixodit' 'come, arrive', can also only be iterative (Forsyth 1970:167).
Reach the top is similar. Reach is a success verb: it means manage to get, which implies you tried. Its denial not reach implies you were near but didn’t. It would be fatuous for me to say I have never reached the top of that mountain if I was never even near it. It makes sense to ask someone Did you reach the client on the phone? only if he was supposed to call. No, I didn’t reach them if I haven’t even tried is a white lie. Reach is not a simple achievement in that reach does not just mean become at, it entails an approach. Compare A message appeared on/reached the screen. In the latter, unlike the former, the message must have got there from somewhere else through a gradual approach. Conclusion: reach is of event type ¬at.at⋂near.
These achievements are absolute (positive) changes, despite appearances. Approach, in contrast, is a relative (comparative) change: it can mean get near, get nearer or get next to something. The sense of approach in reach seems to involve the step from near to next to, i.e. the last leg from a neighborhood of a goal to contact with it. Reaching something only covers a short distance relative to the total distance traveled, and (even allowing for final deceleration), a fraction of the total time taken.
The Vendlerian distinction between achievement and accomplishment concerns the internal structure of events (the ‘stuff in between’). The distinction between simple and complex event types is made clearly in Aristotle, Eth.Nic. 1174a20-b14. Accomplishments, having parts, are complex and extended, achievements simple or at least not essentially extended (Vlach 1993:243). Granularity is involved: Dowty (1986:42-43) suggests achievements are
not only typically of shorter duration than accomplishments, but also those which we do not normally understand as entailing a sequence of subevents, given our usual everyday criteria for identifying events named by the predicate.
Smith (1991:60) suggests that achievements differ from accomplishments by never entailing a preliminary process. I think that is wrong; some achievements do require one (e.g. reach). Achievements with a preliminary process might be called process achievements, in contrast to simple achievements, which do not have one. Process achievements like grow up or reach have no built-in starting point and hence no minimum extent (the time it takes to grow up or reach adulthood depends on where one starts counting), but they do have a built-in endpoint.
The difference between achievement and accomplishment is rather that an achievement happens at the last point of the process, while an accomplishment happens during it. An achievement can contract to a point, because it has no minimum extent. In topological terms, an achievement is a system of left neighborhoods of a point (Kelley 1955).
An accomplishment concerns change between contrary rather than complementary states (from dark to light, as against from dark to not dark). The difference between “instantaneous” switching on a light ¬light.light and “durative” dawn dark.¬(dark⋃light).light is whether we recognise any intermediary states where it is neither dark nor light. The distinction is thus sensitive to resolution (Dowty 1986:43). Though many events usually classed as achievements do in fact have duration (Heinämäki 1974:19, Dowty 1986:42), achievements are “punctual” in a sense akin to that in which events in a narrative are punctual in Kamp’s theories: they are not interrupted by other events in a narrative.
Ryle (1949) (cf. Dowty 1979) notes that achievements do not allow manner adverbs like attentively, studiously, vigilantly, conscientiously, pertinaciously, slowly, rapidly, systematically, haphazardly, which describe the associated process. There is a connection to causativity, in that the activity of bringing about the effect is such a process. One can finish accomplishments but not activities or achievements (Heinämäki 1974:11). Achievements become iterative in durative contexts like V for T, spend T Ving, be Ving, or stop Ving; accomplishments need not (though they may). Mourelatos (1981:194) notes that in equals after with achievements but not with accomplishments: I shall start/run a mile in/after ten minutes. Johanson (1998) points out that accomplishments allow a generic (unanchored) question How long does it take to V?
Another example of the distinction is go/leave: going from one place to another takes time (there are places in between to traverse) unless the places are complementary. Leaving is going away, from one place to its complement, so leaving can be contracted to a point. It need not (there may be things to do in between).
Although there are clear cases of accomplishments (agentive incremental changes like build a house or extended motion like walk from x to y) and relatively good cases of achievements (nonagentive momentaneous verbs like notice or find or touch), a sharp boundary is hard to maintain. The distinction between achievement and accomplishment is a matter of resolution. Many events can be pictured either way by changing the resolution (as in slow or fast motion film). For this reason, it has become something of a commonplace to suspect or reject the achievement-accomplishment distinction, especially as it has relatively little work to do in English grammar (Bach 1981, 1983, 1986, Mourelatos 1981, Verkuyl 1993, Tenny 1994).
Chaput (1990) finds it useful for Russian to distinguish between true achievements (imperfective form is iterative only), inceptive achievements (imperfective denotes the result state) and end-in-sight achievements (imperfective is progressive). Chaput's true achievements are my simple changes and cycles, his inceptive achievements my acquisitions, and his end-in-sight achievements my process achievements. The names don’t matter; the crux is that the calculus can reflect those classes which get lexicalised or grammaticalised.
An accomplishment is a process producing a (comparative) change, or such a change produced by a process (Bennett 1981:17, Vlach 1981, ter Meulen 1983, Mittwoch 1988:248). An accomplishment is finished when the change is reached and stopped when the process stops (Heinämäki 1974:10-11). Almost is ambiguous with accomplishments: John almost opened the door means he almost started or almost finished doing so (Smith 1991:54).
m = p⋂c
The connective here is a type meet p⋂c which corresponds to token join p⋃c. Type join p⋃c would be too weak, for a true accomplishment entails both subevents (this is known as nondetachability: Dowty 1977, Vlach 1981, Smith 1991:50).
The characterisation does not yet express the fact that the process causes the change. (A better approximation will be given in the section on diathesis.) An accomplishment cannot in general be contracted to a point (Heinämäki 1974:11). In accomplishments like build a house or paint a picture the change is between contraries and coextensive with the process: the event includes both the process and the change (Parsons 1990:218). Thus one does not say Ada (finally) wrote the novel at 2 o’clock last night to mean Ada finished writing the novel at 2 o’clock. Only when can mean ‘during the building’ in We helped him when/before he built the house while only before can do so in We helped him when/before the house was ready. Often but not always, the change is a comparative change between two contrary states. The process continues until an absolute change into the final state from its contradictory results:
m = p.c
There need not be a unique culmination point. Examples like open, take off, leave, fall down etc. (Carlson 1981) are vague as to where the change happens, because the change happens between contrary rather than contradictory states. There is an initial and a final change to choose from:
m = c.p.c
For instance, take off can mean leaving ground or reaching cruising altitude. Open can mean ‘not closed’ or ‘wide open’. That is why a plane can be taking off both before and after it takes off, and one can be opening a door both before and after one has opened it. As acquisitions they mark an initial change: After I fell off the cliff I kept falling is fine. After I fell on the ground I kept falling is no good, because the final change is included.
Accomplishments are often complex event types. For example, build a house might be instantiated in a given case as
(hammer⋃saw⋃carry⋃rest⋃plan)^{+} ⋂¬house.house
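To see how the type meet works on a concrete token, here is a rough sketch (the encoding and the sample token are my own illustration): a token is a sequence of moments, each recording the ongoing activity and whether the house exists yet; the token instantiates the accomplishment just in case its activity track matches the iterated process and its state track matches the change.

```python
import re

# Hypothetical token of "build a house": each moment records the
# current activity and the house state (0 = no house yet, 1 = house)
token = [('plan', 0), ('saw', 0), ('hammer', 0), ('rest', 0),
         ('hammer', 0), ('carry', 1)]

acts  = ''.join(m[0][0] for m in token)    # first letters: 'pshrhc'
state = ''.join(str(m[1]) for m in token)  # '000001'

# (hammer⋃saw⋃carry⋃rest⋃plan)+ ⋂ ¬house.house:
# the type meet requires the same token to satisfy both constraints
is_build = (re.fullmatch(r'[hscrp]+', acts) is not None and
            re.fullmatch(r'0+1+', state) is not None)
print(is_build)  # True
```

A token whose state track never reaches 1 would satisfy only the first conjunct and so instantiate the bare activity, not the accomplishment.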
In terms of lexical makeup, two types of accomplishments can be distinguished: those which name the process and entail the change, and those which name the change and entail the process. These types could be called process accomplishments and result accomplishments respectively. Examples of process accomplishments are English manner accomplishments read, write, eat. Process accomplishments appear intransitively as activities with an implied noncount object (read, write, eat) and participate in so-called conative prepositional constructions: read in, write at a book. Result accomplishments include Levin's (1989) change-of-state causatives like break, crack, bend, which are causatives of corresponding intransitive changes. The former border on activities, the latter on achievements, depending on the detachability of the entailed subevent.
The former type may implicate but does not entail a definite change (T type events of Dahl 1981), say paint the fence, lift a weight, take a bath. Thus paint the fence can mean ‘apply paint to the fence’ or ‘coat the fence with paint’, lift a weight can mean ‘lift up’ or ‘lift higher’ or ‘lift and put back’, and take a bath can mean getting clean or just bathing for a while.
The borderline between activities and accomplishments is as negotiable as the boundary between accomplishments and achievements. One can turn an activity into an accomplishment by setting a bound to the activity (e.g. by adding a count object or a goal complement), and turn an accomplishment into an activity by removing one (e.g. adding a noncount object or directional complement). Many English event types are vague between the two classes: read a book, comb one’s hair. Formally, the difference from true accomplishments is that the focus of the event can be on the process alone.
p⋃c:p⋂c
Accordingly, the perfective of a process accomplishment may just denote the closure of the process (a cycle). Process accomplishments take optional or default complements or reflexive forms (medial diathesis) and behave as activities when used intransitively: read, eat, wash (oneself).
Many process accomplishments, like cook the meat, fill the tank, are relative or comparative changes, that is, the change can be expressed by a comparative adjective which allows degrees (more or less done/full), has an absolute positive or norm (like done); some have a maximum (full). The open-closed vagueness of the verb reflects vagueness about whether the change is relative (comparative) or absolute (positive) (Abusch 1986). I cooked the meat in an hour entails the meat was done in an hour, while I filled the tank for an hour entails I made the tank fuller for an hour. Tenny (1994) notes a variability in judgments between different types of accomplishments here.
There is a related diathesis distinction between affected and effected object accomplishments (Smith 1991:52). Effected object accomplishments cause a change in the object, which sets a natural boundary (Smith 1991:48) to the event, while affected object accomplishments are noncommittal about what happens to the object. Verbs of creation and destruction (instances of the cause become schema) make, build, destroy, kill are cases of the former, verbs of application and consumption (instances of the use schema) paint, read, eat, use, mow lawn, comb hair of the latter.
For instance read has a natural boundary at read everything once, but the implicature of closure is easily defeated, as one can read the same thing many times over; feed the puppy has a natural boundary when the puppy has had enough, but one can always feed him a little more a little later.
There are ways to make the intended sense explicit. One is to add a result clause: eat up. One is to make the object oblique: eat at/of something. A durative adverbial may force a process reading: I wrote that report for two hours does not imply the report is finished unlike It took me two hours to write that report (Smith 1991:158). A true accomplishment shuns durative adverbials: Mary walked to school for an hour or We built a house for two weeks are odd (Smith 1991:54,69). Conversely, a bound adverbial turns a process accomplishment into a closed one (an inceptive change or cycle): John pushed the cart in two hours/It took John two hours to push the cart (either to start pushing it, give it a push, or push it somewhere).[71]
The activity-accomplishment distinction is a closedness distinction. The class of derived accomplishments is large in English because English uses prepositions and adverbs to establish bounds on activities. Such derived accomplishments are rare in Portuguese (Santos 1996) and other Romance languages, where English accomplishments are often unpacked into activities and/or achievements, e.g. walk home becomes go home on foot (Vinay/Darbelnet 1958).
Santos (1996) claims that Portuguese does not distinguish between activities and accomplishments. An apparent closed accomplishment like andar à Lisboa ‘walk to Lisbon’ fails to denote the result. For instance, while a resultative adjunct chegada à Lisboa ‘having arrived in Lisbon’ exists, none can be formed from the process accomplishment andar à Lisboa ‘walk to(wards) Lisbon’: *andada à Lisboa ‘having walked to Lisbon’.
Examples from English, Malagasy (Travis 2000:172) and Chinese (Ritter/Rosen 2000:208). A resultative reading is implicated in each case, but it is defeasible.
en I shot him (in the head/*dead), but he did not die.
mg namory (past) /*nahavory (pf past) ny ankizy ny mpampianatra nefa tsy nanan fotoana izy. ‘The teachers (tried to gather/gathered) the children but they ran out of time’.
zh Ta (shale Zhangsan / *ba (obj) Zhangsan shale), keshi Zhangsan mei si. ‘He (tried to kill/killed) Zhangsan, but Zhangsan did not die.’
An acquisition (Santos 1996) denotes an open event (state or process) or a change producing it (Breu 1985, Sasse 1991, Seiler 1993:28, Johanson 1998), e.g. see, hear, remember, forbid, permit, turn, fall, face, head, hide, flee.
q = ¬a?a
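The disjunctive types introduced so far can be pictured concretely. As a rough illustration (my encoding, not the calculus proper), event types over a single state a can be modeled as regular expressions over tokens written as strings of 0 (¬a) and 1 (a); the optionality marker ? then comes out as ordinary regex optionality:

```python
import re

# Toy patterns: 0 = ¬a holds at a moment, 1 = a holds (illustrative only)
TYPES = {
    'open (a)':             r'1+',      # state/process going on
    'change (¬aa)':         r'0+1+',    # half closed: a comes about
    'cycle (¬aa¬a)':        r'0+1+0+',  # closed: a comes and goes
    'acquisition (¬a?a)':   r'0*1+',    # the state or the change into it
    'achievement (¬aa¬a?)': r'0+1+0*',  # simple change or cycle
}

def types_of(token):
    """All event types a given token instantiates."""
    return [name for name, pat in TYPES.items()
            if re.fullmatch(pat, token)]

print(types_of('111'))    # open and acquisition
print(types_of('00111'))  # change, acquisition, achievement
print(types_of('00110'))  # cycle and achievement
```

The overlaps show the subsumptions claimed in the text: every open token and every change token also instantiates the acquisition type, and both changes and cycles instantiate the achievement type.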
The inception-duration relationship of an acquisition is an alternative perspective to the event-result relationship of the perfect, depending on which seems to be the ‘main thing’: if the change is the main thing, the state is its end result; if the state is the main thing, the change is its beginning. Perfective and perfect are sort of inverses here: a result state can be turned into an incipient change with the perfective, an incipient change into a result state with the perfect. Nice minimal pairs are English inceptive have and the state perfect got, or Greek present/perfect ktaomai/kektemai ‘get/have’ and present/perfective ekho/eskhon ‘have/get’ (Kühner 1896:§384). Acquisitions thus denote the end effects of a causal chain, so they tend to be intransitive or nonagentive.
Since an acquisition is vague between change and its result, aspects have a field day. The imperfective denotes the preparatory process or the result state, the perfective the initial change or a cycle, the perfect the result state (Johanson 1998:§10.3.1.2). Examples from Greek (some of them actually translate into English examples of the same type): thallo/ethela/tethela ‘flourish/shoot out/be in bloom’, ozo/ozesa/ozeka, ododa ‘smell/give out a smell/be smelly’, hegeomai/hegesamen/hegemai ‘think/decide/be of the opinion’, keutho/ekeusa/kekeutha ‘lurk/hide/be hidden’, khairo/ekhairesa/kekhareka ‘rejoice/gladden/be glad’.
Santos (1996) observes that the result perfect of a simple acquisition is infelicitous: I have remembered his name now or I have had an idea are odd as immediate perfects. They are fine construed as existential (at least once) or universal (all along). The bare result perfect would be equivalent to the present: I remember his name, I have (got) an idea. The perfect is fine when the associated change is complex: They have agreed/agree on the conditions and I have understood/understand mean different things and are both acceptable (Hirtle 1975:38).
English allows variation between simple and perfect aspect with certain acquisitions (Crystal 1966:271, Declerck 1994:91): be hidden/hiding, I have understood/understand, I have forgotten/forget his name, I have heard/hear you are ill, he has told/tells me that you are not working hard (Hirtle 1975:3839). An interesting case is have (got): Do you have much snow in Quebec? can mean do you generally get/have you got.
Cf. also German erkennen 'be(come) aware', erhalten 'get/have'. English progressive and Chinese imperfective zhe both produce an ambiguity between ongoing event and result readings in acquisitions (Smith 1991:116):
John was sitting in the chair.
Tianli zhongzhe huar. ‘Flowers were (being) planted in the soil.’
Aspect is complicated by the existence of aspect shifts, or unmarked aspect alternations (Poutsma 1921:§331, Allen 1966:198, Joos 1968:114-117, Leech 1969:135, Hirtle 1967:69-84, Scheffer 1975:61-75, Carlson 1981, Mourelatos 1981:196, Zucchi 1998), also known as aspect transitions (Moens 1987) or situation type shifts (Smith 1991:36, §3.3). Instead of assuming that an individual aspect operator or tense creates a range of meanings, it is often preferable to assign the shifts or derived meanings to the event types operated on. If the shifted or derived meaning is not limited to a particular aspect type, it may be convenient to make it an optional lexical rule, or unmarked alternation. Lexical alternations of event types may go unnoticed in forms where they do not make a difference; but they become visible where an aspect operator coerces an event type to another one by forcing an application of an intervening unmarked aspect operator.
The difference between coercion and vagueness/ambiguity[72] is a matter of grammatical division of labor (Zucchi 1998:350). Coercion differs from disambiguation of a vague/ambiguous form only in that the theory does not locate the disjunction in the lexical aspect type but in an unmarked aspect operator applicable to a class of forms. This is the distinction between rules and lexical entries in formal grammar theory, or type assignment and type inference in categorial grammar. The alternation can be localised in the lexicon, in a lexical rule, in a grammatical construction, or derived as an entailment or implicature from the context. It is basically a question of where disjunctions should be placed in grammar. The usual criteria are what gives best explanatory value, closest fit and smallest grammar.
What one language does not have an explicit aspect for becomes an unmarked aspect shift relative to another language. Alternatively, and possibly equivalently, it has looser definitions for lexical aspects. Many languages leave iteration (English), the perfective (English) or the progressive (German) largely unmarked. English simple tenses are nonprogressive, which means that they are closed or generic (Curme 1931:XIX). For unmarked aspect shifts, see Lindstedt (1985:§3.2.5), Moens (1987); against them, see Heinämäki (1984:67), Santos (1996). This is an instance of zero-morpheme controversies.
An important observation is that unmarked aspect shifts happen with unmarked forms, but appear to be blocked for morphologically marked forms. For instance, Russian unmarked imperfective aspect chital is open to progressive, simple, or iterated readings, but marked perfective dochital ‘read to the end’ is not; instead, there is a marked secondary imperfective dochityval ‘was reading to the end’. This observation will be taken up in the section on markedness, where it is suggested that ambiguities registered as unmarked aspect shift arise through Boolean differences of marked and unmarked aspects.
An open event s can in principle be closed in two different ways. One can single out the half closed initial change ¬ss, the beginning of s, or a closed cycle ¬ss¬s. Generally, both options are there, but they are not equally salient for all verbs. Though there are clear crosslinguistic correlations here, there are language particular differences as well (Johanson 1998:§10.3.1.2).
Postulating a lexical aspect type of acquisitions (Santos 1996) instead of a general rule of inceptive (inchoative, ingressive) aspect shift (Smith 1991:78) is motivated by the observation that the choice is lexically governed: not all open event types allow inceptive readings, at least not with the same ease (Poutsma 1921:723). Compare for instance acquisition remember to state be in Norway with after in
After I remembered his name, I remembered it for a long time
After I was in Norway, I stayed there for a long time.
After can mean after the recall in the former, but it tends to mean after the visit in the latter. Once I was in Norway or after I got to Norway would be preferred if I stayed in Norway. This may depend on scale and expectations. Partee (1984:fn31) finds After Mary was in the hospital ambiguous between ‘after she began to be in the hospital’ and ‘after her whole hospital stay’.
Dowty (1986:51) notes that sit, stand and lie admit the inceptive interpretation much more frequently and readily than other statives, and perhaps should be regarded as truly ambiguous between stative and inceptive readings. Location and posture verbs (sit, stand, lie, hang, hide) are acquisitions in many languages, i.e. have disjunctive aspect type ¬seated?seated. Here, stability (liability to change) may be an essential part of the profile: posture is not enough to choose between sit, stand, and lie. Assuming a lexical class here is the object oriented solution to locating the event-state ambiguity. It seems preferable to the alternative proposal (Smith 1991:225-226) that English progressive has a special resultative use which only emerges in this class of verbs.
At the other extreme, Chafe (1970:1972, Hirtle 1975:92) notes that it is difficult to imagine a situation where The door has been open could imply that the door is now open. But open is already a resultative, the result state of the change open. This seems to generalise to result states in general: the biblical sense of I knew her last night is rather marked in English (unlike Portuguese conhecer ’meet, know’, which is an acquisition). Progressives don’t get an inceptive reading either: When Marilyn entered everyone was cheering suggests they had started without her. Such open events are prototypically interior. Be in Norway may belong here.
Many apparent cases of inceptive reading do not call for any special rule. The section on ordering events revealed that temporal relations between extended events logically reduce to relations between endpoints. John fell asleep before he knew it makes know appear inceptive by sheer force of logic. Knew is equivalent to noticed here: he knew it as soon as he noticed it. The same holds of John was asleep before he noticed it: for he was asleep as soon as he fell asleep.
Activity verbs assume an apparent inceptive reading in when sentences: When Marilyn entered everyone cheered. Apparently, they started cheering when they saw her. Yet no inceptive shift need be involved. The when sentence just says some cheering occurred on Marilyn’s entry. Although it entails everyone started cheering the two are not equivalent; the resolution is different. Cheered denotes the reception in its entirety, started cheering its inception.
The following diagram, an aspect transition network of the type introduced in Moens (1987) and applied in Santos (1996), maps some common aspect shifts between event types. Legend:
a      open event type      p      process         mom    momentaneous
b      closed event type    q      acquisition     pf     perfective
c      change               s      state           prog   progressive
m      accomplishment       freq   frequentative   res    resultative
o      cycle                inc    inceptive

Table 11
The diagram loops back from the right end to the left. Unlabeled arrows represent type subsumption (inclusion). Perfective aspect shifts into initial changes ¬ee, closures ¬ee¬e and final changes e¬e are known by many terms, including inceptive/inchoative/ingressive, completive/delimitative/perdurative, and terminative/egressive respectively.
In addition to unmarked transitions (‘null derivation’), aspect transitions are marked by derivation and suppletion. Historically, derivation (as a type of word formation) often goes back to compounding while inflection comes from periphrasis (phrasal constructions). For instance imperfectives arise both ways, from locative periphrasis and from iterative derivation.
Greek has a systematic distinction between imperfective, perfective (aorist) and perfect stems. Russian has a number of suppletive pairs or tuples of verbs that only or mainly differ in aspect. Lindstedt (1985:160) exemplifies the various meanings obtained by prefixing Bulgarian varjà ‘boil’, e.g. ‘parboil, bring to the boil, boil again, overboil, overboil slightly, boil every now and then, underboil, boil more as needed’.
There is a long grammatical tradition of enumerating and naming various lexically governed event types or Aktionsarten. It continues as the study of the meanings of aspectual prefixes in Russian aspectology, where they are known as modes of action (sposoby dejstvija, Johanson 1998) or procedurals (sovershajemost', Isachenko 1962, Forsyth 1970). Here is a combination of the lists in Comrie (1976:§2), Lyons (1977:§12.4, 15.6), Bache (1985), Forsyth (1970) and Johanson (1998), with suggested formalisations.
open                         en                   ru
continuative    a`a          continue reading
resumptive      a¬a`a        resume reading
durative        aa           keep reading
iterative       e^{+}        read repeatedly
frequentative   pf^{+} e     read now and then    chityvat'
habitual        s ⋂ (e ⋂ a)  used to read
dispositional   s → (e → b)  will read

closed
inceptive       ¬aa          start reading        zaplakat' 'start crying'
terminative     a¬a          finish reading       dochitat'
semelfactive    ¬ee¬e        read once
repetitive      e¬ee         reread               perechitat'
delimitative    ¬aa¬a        read some            pochitat'
completive      ¬rar         read all             prochitat'
egressive       a¬a          stop reading

Table 12
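Some of the closed modes of action above can be checked mechanically in the same kind of toy string model used for event types generally (my illustration, not the calculus proper): tokens are strings of 0 (¬a) and 1 (a), and each mode is a pattern over them.

```python
import re

# Toy patterns for three closed modes of action over tokens written
# as strings of 0 (¬a) and 1 (a); illustrative only
MODES = {
    'inceptive (¬aa)':      r'0+1+',    # start reading
    'egressive (a¬a)':      r'1+0+',    # stop reading
    'delimitative (¬aa¬a)': r'0+1+0+',  # read some
}

def modes_of(token):
    """All listed modes of action a given token instantiates."""
    return [name for name, pat in MODES.items()
            if re.fullmatch(pat, token)]

print(modes_of('00110'))  # delimitative only
print(modes_of('0111'))   # inceptive only
```

Since a delimitative token both begins and ends in ¬a, it matches neither the purely inceptive nor the purely egressive pattern, which keeps the three modes disjoint in this simple model.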
The precise event type of English repetitive re- seems to be cause r`¬rr. This matches reenter 'come in again', rebuild 'build up anew', regroup 'make a new group', repaint 'apply a new coat of paint', relive 'experience again', and excludes resleep, rehelp, relose. The differences between some of the types are small, and looser types covering more than one mode of action on the list are common. For instance English read on actually covers both continuation and resumption, expressing a¬a?`a. Russian inceptive za- 'start' seems dedicated to nonprogressive events. A more specialised one is u- 'from' for cognitive states: uznat' 'get to know'. Note that English must use get; start to know is funny. Minor modes like attenuative, absorptive, evolutive (Isachenko 1962, Forsyth 1970) involve nonaspectual implicatures of aspects or aspectual implicatures of nonaspects.
Many languages have derivational affixes which express or imply aspect distinctions. Derivational affixes can be classified as aspect operators by their input and output. Momentaneous or semelfactive suffixes produce closed event types out of open event types. Iterative (durative/continuative or repetitive/frequentative depending on input aspect type) affixes produce iterations of different types. Etymologically, iteration often comes from reduplication or some (other) pluralising device (affix, adverb or quantifier), or goes back to verbs expressing continuation or permanence (sit, remain, live, go on). Iteratives seem predominantly derivational (Bybee 1994:161). Bybee et al. (1994:164,170) suggest that repetitive grams further develop into habituals and continuative ones into progressives. Iteration can turn a transitive verb into an intransitive one (a process accomplishment with an implicit noncount object, Bybee 1994:172).
Alternating compositions of these two inverse derivation types are common. Lat. videre see 'see', visere pf see 'go see', visitare (pf see)^{+} 'go see often, visit'; Finnish heilahtaa pf swing 'swing once', heilua swing^{+} 'swing continuously', heilahdella (pf swing)^{+} 'swing intermittently, back and forth'.
Affixing with bounding particles (intransitive prepositions and adverbs) is a productive way to produce changes (achievements and accomplishments) out of states and activities in many languages. Inceptive affixes can form initial or comparative changes out of states and accomplishments. Inception is often a specialisation of a more general perfective gram (Kühner 1898:156). Latin arere dry ‘be dry’, arescere (become dry)^{+} ‘dry’, exarescere become dry ‘dry out’.
Causative-reflexive-intransitive alternations produce aspect changes. These are not pure aspect shifts in that the diathesis of the verb also changes. Examples: English hide (tr/refl/itr) translates into Finnish variously by piilottaa cause become hidden (put into hiding: accomplishment), piiloutua refl cause become hidden (put oneself into hiding: accomplishment), piillä hidden (be hidden: state) or piileskellä (refl cause stay hidden)^{+} (be in hiding: activity).
Aspect features are event types which crossclassify to generate a matrix or taxonomy to define other aspect types. This section singles out certain central aspect features definable in terms of the event calculus.
An event type is open (noncount) if it is closed under unions (i.e. ee⊆e). An event type is closed (count) if its complement is open (i.e. if ee⋂e=∅). An event type e is complex (extended) if it contains at least two separated parts of the same type, else it is simple (atomary, simply connected). An event is an atom (in e) if it does not contain proper parts (in e). An event type e is atomic if every part of e includes an atom, and nonatomic if e has no atoms.[73]
A state is open and simple. An activity is open and extended. An achievement is closed and simple. An accomplishment is closed and extended. Open events can be extended at will, closed events cannot. Simple events can be contracted to a point, complex events cannot.
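The cross-classification can be checked in a toy finite model. This is a sketch under my own encoding, purely illustrative: event tokens are half-open integer intervals, an event type is a predicate over tokens, and the definitions are tested by concatenating abutting tokens.

```python
# Toy model of the open/closed contrast (my own encoding, illustrative only).
# An event token is a half-open interval (start, end); an event type is a
# predicate over tokens; joining abutting tokens models concatenation ee.

def join(a, b):
    """Concatenate two abutting tokens (a ends where b begins)."""
    return (a[0], b[1])

def abutting_pairs(etype, tokens):
    return [(a, b) for a in tokens for b in tokens
            if etype(a) and etype(b) and a[1] == b[0]]

def is_open(etype, tokens):
    """Open (noncount): closed under concatenation, ee ⊆ e."""
    return all(etype(join(a, b)) for a, b in abutting_pairs(etype, tokens))

def is_closed(etype, tokens):
    """Closed (count): no concatenation of e-tokens is an e-token, ee ⋂ e = ∅."""
    return not any(etype(join(a, b)) for a, b in abutting_pairs(etype, tokens))

universe = [(i, j) for i in range(11) for j in range(i + 1, 11)]

# "be asleep" holds over any stretch of [0, 10): a state, hence open
asleep = lambda t: 0 <= t[0] < t[1] <= 10
# "wake up": a change containing the moment 5, hence closed
wake_up = lambda t: t[0] <= 5 < t[1]

print(is_open(asleep, universe))     # True
print(is_closed(asleep, universe))   # False
print(is_closed(wake_up, universe))  # True
```

Note that wake_up comes out closed because no two wake-up tokens abut; extendedness (two separated parts of the same type) could be checked in the same style.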
The definition of extended in terms of subevents is really equivalent to saying an event takes time, or e⋂tt^{+}. This reflects the intuition that an extended event cannot be contracted to a point, because it has at least two separate parts.
Holisky (1981) suggests question tests for central aspect features: how long (open events), at which point (simple events), and how (impure events). This last test is based on the idea that how checks for subevents. It is not definitive, for why/how questions rather pair up with the explanation of universal and existential claims, respectively: why necessary/how possible. The how test correctly distinguishes impure sleep from pure be asleep (How did you sleep/were you asleep), but it misclassifies pure events how did you know/notice it, where how asks for antecedents, not subevents.
There is a pervasive processproduct ambiguity in aspect terminology: the same names are used of aspect operations and the resulting event types. In my parlance, aspects are primarily grammaticalised expressions or constructions which map event types to event types.
There is a small number of prime candidates for universal aspects: the grammaticised open-closed contrast imperfective and perfective in the first rank, followed by their more complex (and more concrete) antecedents progressive and perfect, with the leftover generic (dispositional and habitual) aspects following next. From there on, there is no sharp cutoff between major and minor aspects, but a grid of coarser and finer distinctions (McCoard 1978:9):
there’s a whole list of relatively concrete meanings which may be associated with perfective or imperfective forms: the perfective is said to indicate short duration, temporal limitation, punctuality, completion, inception, result [...], while the imperfective indicates iteration, durativity, conation, nontermination, habituality, progressiveness, and the like. The trouble is that none of these conceptions (sometimes called Aktionsarten, German for ‘kinds of action’, especially if they involve lexical derivation) applies to more than a subset of cases. [...] For instance, “indicating the end of a situation is at best only one of the possible meanings of a perfective form, certainly not its defining feature” (Comrie 1976:19).
Accordingly, a general definition of the main aspectual contrast is rather abstract:
perfectivity indicates the view of a situation as a single whole, without distinction of the various separate phases that make up that situation; while the imperfective pays essential attention to the internal structure of the situation. (ibid., p. 16)
My definitions of the major aspects are very similar to received wisdom.
In topological terms, the perfective turns an event type into a (half)closed event type, one which contains one or both of its limits (Timberlake 1982:311).[74] The definition specifies the outcome, but not the means. The rest of the calculus provides the means.
pf b:e
To satisfy the definition, a perfective does not have to do anything to a closed event type (but it can), while it coerces an open event into a (half)closed one. Perfective is a count operator in the domain of events: it turns noncount events into count ones. This is reflected among other things in deverbal nouns: departure is a count noun.
Formally, perfective reflects the topological notion of boundary and the Boolean notion of complete join/meet. Given that closed events are filters and open events are ideals, the following Boolean duality holds:
⋂pf e = ⋃ipf e
That is, the imperfective and the perfective have a common bound, approximated from above by the perfective and from below by the imperfective.
The definition of perfective as half closed or closed makes it a disjunctive event type. This is reflected in aspect literature as a schism between two definitions of perfective, based on limit or totality, respectively (Dahl 1981,1984). Both views are represented in Razmusen (1891:379, cf. Leinonen 1982:39):
The perfective verb, it seems to me, originally signals an action as reaching its goal (its boundary), and then in general an action seen as a whole (beginning, middle and end together).
Klein 1995 traces the totality view to Cherny (1877). It is restated in Dahl (1984:9) quoting Comrie (1976:16):
perfectivity indicates the view of a situation as a single whole, without distinction of the various separate phases that make up that situation, while the imperfective pays essential attention to the internal structure of the situation.
The schism can be traced to the following sources. First, there is a genuine difference between closed (irresultative) and halfclosed (resultative) event types. In lexical aspect languages closure is derived in diathesis from result complements, and produces half closed event types. Grammatical aspect is derived in phrase structure from open events using time adverbials, and produces closed event types. Second, in closed aspect languages perfective is marked, in open aspect languages it is imperfective. Slavicists, with a marked lexical perfective, stress telicity (halfclosure). Westerners and orientalists with marked grammatical imperfectives stress totality (closure).
Combining the two views, the perfective prototype for an open event type a can thus be defined as ¬aa¬a?. The perfective thus denotes the smallest or the largest subevent of an open event type. This is the duality between some a and all a, the beginning of a and the completion of a.
The duality of perfective explains its ambiguity between closed and half closed events, punctual and telic. It is another question why language should group event types asymmetrically in just this way. This temporal asymmetry of the perfective in favor of inception over cessation might be called the perfective paradox: unless specifically marked, a perfective of an open event type denotes its beginning or the complete event, not just the end.
One simple observation is that an event is of the same type as its inception, while the event following its cessation is its opposite. By the time an event ends, it is all there, whereas at the time it begins, it is not yet in existence and may never become so. There are beginnings without ends, but there are no ends without beginnings (Galton 1984). This accords with the observation that there are many inceptive verbs but few egressive or desinative ones (Johanson 1998), mostly suppletive: have/get/lose. The halfclosed perfective ¬aa is resultative, the closed perfective ¬aa¬a is irresultative (it is a cycle which ends in the same state it begins with), while the outgoing change a¬a is really the inception of the complement event type ¬a.
There is a topological construal of these intuitions. The half open event type ¬aa is the closure of a in the past order topology of a, or the event type <a of pasts of a, or in the relative topology defined by the event type <a, or all events ending in a. The half closure of a is the closure of a relative to a model of time where future is not actual.
For a closed event, if pf b is written out explicitly as ¬bb¬b, the perfective does do something to a closed event: it turns it into a cycle of itself. For instance, die becomes die once. If the event type is unique in the way dying is, truth is preserved: one who dies dies once, and of course conversely. With noncount event types b, iterated perfective pf^{+} b actually becomes a series, i.e. open.
In general, it does not seem right to presume that He fell yesterday means the same as He fell once yesterday. On the other hand again, He fell a moment ago and He fell once a moment ago seem pretty much the same, when the fit of the time and the event is tight enough. He fell sometime yesterday and He fell just once sometime yesterday are again equivalent. An additional existential quantifier on reference time allows contracting the boundaries to points, and the focus is again on the event between them. In sum, the perfective of a closed event is equivalent to the simple aspect within a short enough reference time (where the event becomes unique).
Analogous observations apply to the perfective of an open event type in a language that has a perfective. The interpretation of Portuguese choveu ‘it (has) rained (pf)’ that first comes to mind is it rained for a while (it started to rain, it rained, and it stopped raining). But it is not the only one (and possibly not even the commonest one). Choveu can also mean ‘it rained at (or for) a specific time t under discussion’ pf(rain⋂t), whether or not it went on raining afterwards. It does not always imply the rain stopped, only that something else in the context it was matched with did. If someone asks (as a geographical question, or as a tourist reminiscence) Choveu em Bergen no inverno passado? ‘Did it rain in Bergen last winter?’ you can answer Sim, choveu, without implying that the rain (has) stopped (it probably hasn’t, knowing Bergen), only that the time talked about is over. Similarly foi bonita? ‘Was she pretty?’ need not imply the prettiness ended, just that looking at her did.
English You were always my best friend or Portuguese Sempre quis saber ‘I always wanted to know’ do not imply that you aren’t my friend any longer or that I have stopped wanting to know. What a perfective aspect really says is that an event expressed with the verb is closed, but it does not say exactly what the event is that is closed. Cases where a verb is perfective and the verb event type isn’t closed are cases where there is some narrower event type expressed or implied which is closed, some further specification which makes an event type expressed in the sentence closed although the event type denoted by the verb alone isn’t. Scoping may be involved. Take for instance
Estivemos aqui bastante tempo, então? ‘We have been here long enough, haven’t we?’
Here, the conversants are still “here” so be here is not closed, nor even be here long enough. The paraphrase would be We have been here for t, and that is long enough, and the underlying be here for t is the closed event licensing Perfeito.
One interesting thing about these observations is that they indicate a perfective event is definite (contextually unique). Of course, this result holds relative to the aspect type it applies to. The uniqueness implication is defeated if the input aspect type is plural or definite, for ‘there is exactly one sequence of several events’ entails ‘there are several events’ and ‘there is exactly one of this’ says nothing more than ‘there is this’. A unique event is trivially perfective (Galton’s 1984:56 onceonly events are such). I shall survey the different senses of a perfective in the section on the Portuguese Perfeito, and return to the question of the definiteness of the perfective in the section on definite tenses below.
The perfective adds (coerces or entails) a boundary to an event. It does not say what that boundary is, except that it belongs to the complement of the event bounded. If the event type is inherently closed, the boundary can be inferred from the event type. Else some other bounding event must be found to explain the perfective, which may in turn help determine just which event is being referred to. It can be an explicit bounding complement or modifier, or it may have to be inferred from context. Examples of this will come up in the sequel.
Perfective says nothing about how long the result state of an event continues after the event. Consider for instance Portuguese já comi ‘I already ate’, which is the normal translation for the English perfect I have (already) eaten. The result state of eating is that one is satiated, so representing eating as become satiated ¬ss, pf eat unpacks to (¬s¬s ⋃ ss ⋃ s¬s) . ¬ss . (¬s¬s ⋃ ss ⋃ s¬s), which implies I ate some time ago, I haven’t eaten since, and I may or may not be hungry now.
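The unpacked formula can be rendered as a pattern over phase strings (a sketch; the encoding is mine): '1' for a phase where s (satiated) holds, '0' for ¬s. The flanking disjunctions (¬s¬s ⋃ ss ⋃ s¬s) admit every transition except ¬ss, i.e. stretches containing no eating, which as strings have the form 1*0*.

```python
import re

# Regex sketch of the unpacked formula for pf eat (my own rendering).
# Phases: '1' = satiated (s), '0' = hungry (¬s); eat = become satiated = ¬ss = "01".
# The flanks (¬s¬s ⋃ ss ⋃ s¬s) are eating-free stretches: as strings, 1*0*.
pf_eat = re.compile(r"1*0*011*0*")

print(bool(pf_eat.fullmatch("0011")))   # ate, still satiated now: True
print(bool(pf_eat.fullmatch("00110")))  # ate, hungry again now: True
print(bool(pf_eat.fullmatch("000")))    # never ate: False
print(bool(pf_eat.fullmatch("0101")))   # ate twice: False (flanks exclude ¬ss)
```

Both of the first two histories match, which is the point: pf eat leaves open whether one is hungry now, while histories with no eating, or a second eating, fall outside the formula.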
Smith (1991:103,109) notes languages with a perfective differ in the treatment of simple states. Some languages (English, Russian) as a rule do not mark aspect on them, others (the Romance languages) do.[75] This correlates with the type of aspect system. Russian, with unmarked derivational imperfective, does not. Romance languages, with marked grammatical perfective, do. English only marks progressive, which is ruled out for simple states.
Perfectives develop from perfects, semelfactive (momentaneous) affixes, result particles (prepositions and adverbs) and phasal verbs. Many if not all languages use result adverbs to close events, but they differ in the extent this is systematic (lexicalised or grammaticalised). English is half way there: useuse up, writewrite down etc. Russian forms perfectives with adverbial prefixes, momentaneous suffixes, and suppletion. Altaic languages exhibit constructions with converbs originally meaning 'give, put, send, reach, throw' (Johanson 1998).
The imperfective forms an open event out of an event. Quote: “The imperfective aspect expresses an action without indicating the two limits of the action, without all the elements constituting a totality, but with particular emphasis on its middle”. (Leinonen 1982:39 citing Sheljakin 1972).[76]
ipf a:e
In Boolean terms, the imperfective takes an event to a Boolean algebra generated by it as the bottom element (plural) or top element (mass). Topologically, the imperfective produces interiors and neighborhoods. Two consequences: the imperfective denotes an open event type, and it coerces its input to an event type which is or includes an open event type. Imperfective is idempotent: ipf^{+} e = ipf e. Thus the imperfective does not have to do anything particular to open events (but it may do something, e.g. change scale), while it turns accomplishments into activities and coerces achievements into iterations. Thus the imperfective subsumes progressive and iteration. We might say that the imperfective is a noncount operator in the domain of events, subsuming the progressive as a partitive operator and iteration as a plural operator.
I leave particular selection and construction operators out of the definition of the imperfective above, defining the aspect instead as a disambiguator, or output filter. The imperfective can also be defined constructively as an arbitrary composition of the iterative, progressive, and generic aspects, i.e. as the regular expression (id | prog | gen)^{+} of aspect operators. This obviously makes ipf idempotent.
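Idempotence can be spot-checked for the iterative component alone. In the following sketch (my simplification, not the definition above), one application of ipf is modelled as cumulative closure, mapping a phase pattern p to (p)+:

```python
import re

# Sketch of ipf idempotence, restricted to the iterative component
# (my simplification: one application of ipf = cumulative closure (p)+).
def ipf(p):
    return f"({p})+"

p = "01"                          # a closed event type, ¬aa, as a phase pattern
once, twice = ipf(p), ipf(ipf(p)) # ipf e  vs.  ipf ipf e
strings = ["01", "0101", "010101", "011", "0"]
print([bool(re.fullmatch(once, s)) for s in strings])
print([bool(re.fullmatch(twice, s)) for s in strings])
# both lines print [True, True, True, False, False]
```

Applying the closure twice accepts exactly the same histories as applying it once, which is the regular-expression face of ipf^{+} e = ipf e.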
A partial compositional definition of imperfective is gen prog e^{+}, which covers as degenerate cases progressive, iterative, transient, and existential uses, by letting one or more of the operators gen, prog and e^{+} reduce to identity in turn. An unmarked imperfective in particular covers e itself.
Narrower definitions may be right for the imperfectives of given languages, depending on what other aspects the language has, i.e. whether imperfective is obligatory, optional or preferred to express progressivity, iterativity, habituality and so on.
Imperfective forms evolve from progressive periphrasis and iterative derivation (Bybee 1994:158-9). For instance, the Latin imperfect ibam ‘I was going’ shows the traces of an auxiliary be in the ending (a progressive), the Portuguese Imperfeito ends in -ia < ibam, and the Russian secondary imperfective is a frequentative stem. A progressive is generalised to an imperfective by extension to iterative and habitual contexts (Bybee et al. 1994:141).
Although iteration of an open event always produces an open event, resolution may change with iteration. Thus the imperfective of a state can give it a ‘habitual’ feel without necessarily implying that the state occurs intermittently (Johanson 1998:§4.2). Czech makes a distinction between jsem ji míval/měl rád ‘I used to like/liked her’, where the imperfective implies remoteness. In my terms, it expands the resolution of the events and thereby also makes the distance to the past longer. Compare English past habituals: I used to smoke yesterday sounds funny (Kucera 1981:184-185, Lindstedt 1985:127).
Crosslinguistic combinatorics of imperfective vs. perfective can be studied best in languages like Russian and Greek which run the whole gamut of imperfective vs. perfective stems (Forsyth 1970, Hedin 1998). Similarities and differences between Russian and Greek observed by Hedin can be traced to corresponding formal properties.
Telicity. The perfective formula ¬ee¬e? divides up into two types of perfective, a half closed telic type, denoting changes, and a closed atelic type, denoting cycles. Russian napisal pis'mo 'wrote up (pf) a letter' is an example of the former type. Russian pochital pis'mo 'read the letter a little/for a while' is of the latter type. Modern Greek egrapse ena gramma 'wrote a letter' is vague between the two, it can denote a change or a cycle.
Negation. The denial of any occurrence of an event type at all is an open aspect type, and the strong denial (contrary or pointwise complement) of one. A closed denial of a closed aspect type denies the assertion but leaves the presupposition in place. The presupposition is that an occasion for the event type is present; the assertion denied is that the event reaches closure then. Though the assertion fails, the implication of a definite occasion remains, and thus the event type of the denial remains a closed one. This works for Russian Passagir ne zagovarival/zagovoril 'the passenger would not start talking (ipf)/did not start talking (pf)' ¬(¬speak.speak)^{+} vs. ¬(¬speak.speak) as well as Greek den miluse/milise 'He did not talk (ipf)/say anything (pf)' ¬speak^{+} vs. ¬(¬speak.speak.¬speak?).
Imperative. The implicatures of an imperfective imperative are strikingly similar in Russian and Greek. If imperfective denotes Ving, the process, not the accomplishment, an imperfective imperative should mean get/go on Ving, and that is what it does mean. Common English turns of phrase here are go ahead, get on with it.
gr Na su kenoso to ghala su? – Kenone! 'Shall I pour (pf) you your milk? – Go ahead, pour away! (ipf)'
Markedness. The key observation here is that the formula ipf e does not entail e while perfective pf e does. But imperfective does not entail ¬e either, but at best implicates it. Even an equipollent marked contrast, defined as ipf e \ pf e, does not entail ¬e, though it of course does entail ¬pf e. The difference ipf e \ pf? e of imperfective and an unmarked perfective does entail ¬e. Russian imperfective is (mostly) the unmarked member of a privative opposition. The modern Greek opposition is equipollent (Hedin 1998). In either case, ipf e is compatible with e simpliciter, or 'the bare fact' or 'simple denotation' of e.
The overall thrust of Hedin (1998) is that (unmarked) imperfective is used to denote event type as contrasted to event token: "the situations are considered in a static, nontemporal perspective, as types". Examples: gr Pjos to eleje (ipf) 'Whose words were those?', ru My zavtrakali (ipf) v vosem chasov 'Breakfast was at eight'. Compare also ru pisal (ipf) karandashom 'he did the writing by pencil', fr L'année dernière je déménageais (ipf) 'what I did last year was move house'. The paraphrase in each case is a support verb of an open event type. It seems right to say that imperfective boils down to simple (i.e. no) aspect here.
Poutsma (1921) distinguishes three senses for progressive, exemplified by
In this picture, the train is arriving at the station.
One is a straightforward interior progressive (Poutsma 1921:§29). Arrive, said of a train, is an event type which is vague between achievement and accomplishment. The event type looks pointlike from far off but is extended from close by; pointlike if one times arrival by one end of the train and extended if by both ends. The slow arrival of a long train at the station is an event that takes appreciable time from first entry to full stop.
The reason the process can appear external to the achievement is that it is detachable (Smith 1991), i.e. the change can be contracted to a point. Although the train keeps arriving for minutes, the exact moment of arrival can be pinpointed as the moment it enters the station (starts arriving), or alternatively, the moment it comes to a stop (has arrived).
The second sense of the progressive is the neighborhood progressive, which for an achievement like the train is arriving “implies only the approach to the transition” (Quirk et al. §3.41, Kucera 1983:179).
The next train is arriving in ten minutes now.
This is a future (futurate) progressive (Poutsma 1921:§32). A future progressive denotes a temporary plan (Allen 1966:215, Johnson 1981:156, Lewis 1986, Declerck 1994:97). It is odd for unalterable futures: The sun is rising at five tomorrow (Gabbay and Moravcsik 1980:73, cf. Dowty 1979:154ff, Huddleston 1967:786).[77] Leech (1987:23) (Vlach 1993:242) points out that the future progressive is a near future. This is not a metric entailment, but a topological one: some preparatory process leading up to the future event is already in progress. This process need not be specific to the future event; only, as things are proceeding now, the future event will materialise.
As Mittwoch notes, John was working for 2 hours can be a future progressive of form prog <(work for 2h) ‘John was temporarily scheduled to work for 2 hours in the future’, perhaps even (prog <work) for 2h, ‘for 2 hours there was a plan that John should work in the future’.
The third sense of progressive is a boundary progressive of imminent future, paraphrased by about to, at the point of, on the verge of, almost, nearly (Poutsma 1921:§30):
You are making a mistake if you hire him.
I was throwing it away when he told me it was valuable.
Here, no protracted process is implied, but rather, an initial state next to a transition. Be cannot be replaced by keep here. Time adverbials for a while, quickly, soon are infelicitous. I call this type a boundary or point progressive (Johanson 1998:§10.2.1.35).
Depending on event type, the progressive thus denotes being in a temporary state, process, or left neighborhood of an event type. Topologically, it denotes an event in the interior of an extended event (in) or in a left neighborhood or at the boundary of an atomic one (at). The progressive is thus inherently a compositional aspect (Cohen 1989:104):
prog in p of e:e
A common type of progressive can be read off directly from the above formula. Bybee et al. (1994) propose that the core meaning of the progressive construction is ‘the subject is located in the midst of doing something’.
As defined so far, progressive is idempotent: prog prog p = prog p. This may help explain why iterated progressives are rare. English progressive in particular allows a stronger explanation: it produces a state out of a nonstate, so it does not satisfy its own input conditions (Dowty 1986:44).
Vlach (1981) presents a series of arguments that the progressive produces a state out of a nonstate (likewise Palmer 1974:73, Taylor 1977; but cf. Bennett/Partee 1972:35, Smith 1991:118). Progressive auxiliaries (be, stand, …) are states.
Progressives, like states, provide background for perspectival aspect: Max was here/running when I arrived. Dowty (1986:44ff) proves that the in p clause of the progressive entails the subinterval property of states.
Many progressives are spacetime metaphors of location (Jespersen 1949:§12, Anderson 1973, Comrie 1976:98-103, Bybee 1994:129-133, Johanson 1998:§7.6). Examples of locative periphrases for the progressive: pt está a trabalhar 'is at work', fr il est en train de travailler 'is in progress of work', sv Han håller på att arbeta 'holds on to work', Han är i färd med att arbeta 'is on the way with work', fi Hän on työssä 'is in work', de Er ist dabei, zu arbeiten 'is at work'.
Progressives are formed from nonpast participles, gerunds, infinitives and nominalisations combined with locative or partitive prepositions and auxiliary verbs (Blansitt 1975, Johanson 1998, Bertinetto/Ebert/deGroot 1998). The development path proceeds from a locative construction through interior progressive toward a general imperfective covering iterative and generic uses (Bybee 1994).
Motion verbs tend to replace progressive with an event noun paraphrase flying/in flight, moving/in motion (Smith 1991). This can be related to the fact that motion (kinetics) does not entail causation (dynamics). I return to this in the chapter on diathesis.
A traditional puzzle for the progressive is the imperfective paradox, i.e. that be building a house does not imply build a house (Bennett 1974:66, Dowty 1977, Nute 1979, 1989:528, Galton 1984, ter Meulen 1985, Dik 1989:94, Smith 1991:98-100, Johanson 1998:§10.2.1.3). A paradox arises only if the progressive is supposed to be an extensional operator in linear time. It can be avoided by allowing either possible worlds or intensions (event types). The gist is that the progressive affords a partial view of an event, while the simple aspect denotes it in its entirety. What is only partially shown may change, while what is entirely in view is fixed.
Goldsmith and Woisetschlaeger (1976, 1982) argue against treatments which try to make the simple/progressive distinction in terms of the time line, on the grounds that the real line is too poor for it. They construe the English simple/progressive aspect distinction as a fundamentally modal distinction between what they call structural and phenomenal properties. Structural properties of an object are those whose going or coming mark a change in the object, while phenomenal properties may come and go without changing the object. In This law/bill raises/is raising the price of oil by 10 cents a gallon the simple aspect explains the essence of the law or bill, the progressive its accidental consequences.
In my view both schools are right, and the extensional time line view and the intensional property view are dual. In the former, the modal content comes in through branching time, in the latter, from the finer grain of intensional objects.
Intensionally, the imperfective paradox can be accounted for by allowing the subevent relation in the definition of the progressive to relate event types, not event tokens. That is, one can be building a house without actually building a house if one is in a process of the type involved in building a house. By the type-token duality, the direction of entailment is reversed. For instance, one is building a house while one builds the foundation of one. Event type build a foundation would entail event type build a house if every build a foundation event token were included in a build a house event token. What we say instead is that event type build a house entails event type build a foundation, that is, every build a house token includes a build a foundation token. Build a house entails build a foundation, not vice versa. In a rather weak sense, then, one who builds a foundation is thereby building a house. It is weak if it licenses saying that one is building a house when one is really only aiming for a foundation.
Usually, we seem to imply something both stronger and weaker with the progressive. A standard example here is Partee’s Mary was making John a millionaire (Link 1998:255). The progressive seems to imply no particular event, but a presumed causal connection: whatever Mary was doing was causally related, i.e. contextually (counterfactually) equivalent against some set of background assumptions, to John becoming a millionaire, i.e. there is a counterfactual theory (situational constraint) s available which entails s ⊆ Mary p< → <John rich. For another example, the event type of stepping off the pavement is typically, ceteris paribus, or at least hopefully, equivalent to the event type of crossing the street, although not all actual starts are included in actual crossings of the street.
With this revision, an intensional type-token solution to the imperfective paradox (ter Meulen 1987, Löbner 1988, Mittwoch 1988, Vlach 1993, Hinrichs 1983, Cooper 1985, Parsons 1990, Glasbey 1998, Hedin 1998) is dual to an extensional possible worlds or branching future solution (Carlson 1981, Dowty 1977, 1979, Tedeschi 1973, 1981, Landman 1992, Portner 1998). Pace Löbner (1988:185), the two are intertranslatable.
One who builds a foundation is possibly building a house (an entailment is consistent with its premise), which is basically what the possible worlds account says: prog (p cause become q) is true iff p is true and q is possible (Dowty 1979:136ff), intended or expected (Dowty 1979:149); in general, entailed by p in some contextually given theory (Galton 1984:127) or channel c (Barwise and Seligman 1994).[78] Asymmetry of time is crucially involved in the imperfective paradox, for entailments are asymmetric: from be building a house one can infer begin building a house but not finish building a house. (This asymmetry is pointed out but left unaccounted for in Smith 1991:99).
The key examples in Goldsmith and Woisetschlaeger also conform to both views:
Help! Police! The statue of Tom Paine stands/is standing at the corner of Kirkland and College streets! (Gleitman)
Is standing is much preferred if the statue is not in its rightful place, i.e. it is at a temporary location.
A difference between announcement/prediction and observation/eyewitness report is clear in the following examples. The announcement format has a comical effect when the interruptions are unintended.
And now I take the flask of sodium nitrate and pour the contents into this beaker; now I light the bunsen burner and heat it to a boil ... And now I – whoops – sneeze; and now I reach into my pocket and take out my handkerchief...
Now he is picking up a glass flask, and pouring its contents into a beaker. Now he’s lighting the bunsen burner and – wait! He’s reaching into his pocket for what seems to be his handkerchief!
The explanation: the progressive is timed by observation events, while events in the simple aspect time themselves.
Thomason/Stalnaker (1973) study inference patterns for adverbs like partly and halfway. They note that unlike the usual restrictive adverbs, these do not entail the head, but if anything vice versa, rather like weak modals:
He climbed partly/halfway/almost/possibly to the top.
This is another instance of the asymmetry of time in a branching future model. A halfway event entails the start but does not entail the end. A contrary change lets the inference go through:
He opened the door halfway.
This does entail he opened the door (it was no longer closed, if not fully open either). Nearly/almost differ from halfway in that they entail that the event denoted was a near miss. He almost opened the door can mean he never started or that he did not quite finish. These words therefore serve to tell apart achievements and accomplishments.
The interior progressive of an extended event type is analogous to a mass term formed out of a count noun (e.g. a text/text) in that it goes from an individual to its divisible internal parts. In other words, it is a partitive or inclusion operator (Mittwoch 1988, Löbner 1988:181, Herweg 1991). In fact, it is expressed by the partitive case in Finnish.
Taylor (1977) and Dowty (1979) in fact argued that progressives can only be formed out of interval predicates, in my terms, extended event types (event types which have temporal parts, and thus cannot be contracted to a point). This is not the whole story, however, as Zucchi (1998) points out, given the difference between John/?the motor was being noisy.
There are two dimensions the partitive operation can operate on, the time dimension and the subevent dimension (Gabbay and Moravcsik 1980:74). Thus the English progressive requires that p is a process (Timberlake 1982:311, Chung/Timberlake 1985). Vlach (1981a:291) specifically argues that ‘topologically specified’ (purely temporal) truth conditions for the English progressive must fail, and that any correct account must make central use of the notion of a process. Zucchi (1998) distinguishes different varieties of the English progressive of states depending on the type of process involved. The copular variety is being A is agentive (entails do), is lying is a dynamic or temporary state, and is resembling more and more a comparative change (Zucchi 1998:359).
The Portuguese progressive allows that p is a temporary state (Santos 1996). Both English and Portuguese progressives exclude simple states. As pointed out by Comrie (1976:35, cf. Cohen 1989:77), the Portuguese case throws doubt on the suggestion that the English progressive is excluded from states just for being redundant for duratives (Lyons 1968, Bartsch 1995:53).
In the degenerate case, p can be any open event type, in which case interior progressive reduces to a temporal part operator.
Interior progressive is open. It is extended to the extent that it entails a process. As a special case of the imperfective aspect, the progressive implicates incompletion (Jespersen 1949, Palmer 1987:§4.1.2). I do not require the progressive to pick out a proper part of an event. Most parts, including the prototypical ones, are proper anyway. More to the point, the absence of an entailment to completion characteristic of the progressive is guaranteed already by allowing the parts to be proper. By markedness, a marked progressive implicates the narrower meaning.[79]
Leech (1971) even claims the progressive conversely entails the negation of the perfective, e.g. that Who has been eating my porridge? should entail that some porridge is left. This much cannot be entailed: given that perfective entails the progressive, a contradiction would ensue. Mittwoch (1988) rightly points out that She has been eating your porridge, it is all gone is consistent. The converse inference is a typical conversational implicature (like some entailing not all). Compare Hatcher’s (1951) I have been writing a difficult letter; thank goodness it is finished.
Mittwoch (1988:224ff, following Bennett/Partee 1972, Cresswell 1977:13, Dowty 1979, Richards 1984, Heny 1984) requires that the progressive denotes proper subintervals. Bennett/Partee (1972) restrict evaluation of the progressive to points. Since the progressive is a state, the distinction does not amount to much of a difference: if there is a subinterval there is a proper subinterval and vice versa. However, some of Mittwoch’s own observations can be used to support the weaker position taken here. The simple tense and the progressive seem simply equivalent in
The exact time John was wearing/wore sunglasses was when I had lunch with him
If wore had to properly include lunchtime in order for was wearing to match it exactly, the two forms could have different truth values. I don’t think they do.
A similar question is whether a progressive always denotes a nonfinal part of an event. Dowty (1979) makes progressive quantify over nonfinal proper subintervals. Langacker (1982:281) says the progressive focuses attention on a single, arbitrarily selected internal point of a process. An often quoted apparent counterexample to these claims (Vlach 1981) is the imperfective paradox: in He was telling me something when he was interrupted, the point of interruption is the last point of actual telling. This is not very convincing: counterfactually, the story went on. Hamann (1989:79) cites as evidence sentences like Before I was working, I was unhappy and After the horses were running, he tried to place a bet. I do not think these sentences show anything about the boundaries of the progressive.
Smith (1991:119), looking for differences between progressive aspect and stative event type, argues progressive must be medial because it does not support an inceptive reading (narrative progression), while states can:
When he became manager, everyone was unhappy.
When he became manager, everyone was (soon) working overtime.
Jameson switched off the light. It was pitch dark around him. (Cooper 1986)
John entered the kitchen. (In the next moment,) Mary was pushing him out again.
While a state in simple aspect is vague between simultaneous and sequential readings, the progressive seems to require soon or in the next moment to get a sequential reading. Vlach (1993:243) too observes that progressives group with locative statives like at work or away against nonlocative statives like alone or free in that they do not support narrative progression.[80]
When Allen left, Betsy cried/was alone.
When Allen left, Betsy was crying/away.
This property of the progressive is exploited in the so called interpretative use (König/Lutzeier 1973, König 1995, Bertinetto/Ebert/deGroot 1998), where the progressive helps fold two events into one:
If we selected the best described languages, we would also be selecting the languages with the largest number of speakers.
This confirms Dowty's (1986:55) point that the progressive need not denote a proper interior subevent to produce Smith's observation. A sufficient condition is that a progressive is not closed, i.e. it does not have to include the boundary. The interior cannot be reached without first reaching the boundary. Narrative progression matches a locative or progressive state with the final state of the preceding event, which places the boundary of the state somewhere earlier. This does not prevent the progressive from denoting any part of an open event, including an open prefix. Markedness makes a difference, however: French imparfait differs from English progressive here (Molendijk 1994). More on this topic in the section on narrative progression.
A suggestion of the progressive is that it has duration, it goes on for some time (it is a ‘continuous tense’; Sweet 1900, Palmer 1987:§3.1.3, §4). This is implied by the Montague-Scott definition (Scott 1970) of the progressive as true of the interior of an event (an open, hence extended region). Processes are extended, so if there is one point within a process there are others next to it. This does not mean that the progressive should not be a state (true pointwise), it only implies that it is not true of isolated points (is not closed).
A complementary suggestion, felt in dynamic states like live or lie, is that a progressive only holds for a limited time (has a ‘temporary aspect’, or is at least subject to change). By denoting parts of some more encompassing event, the progressive implicates that the reference time is bounded by the latter. The progressive of a dynamic state I am living here instead of I live here thus disambiguates for temporary against permanent state (it is susceptible to change, Palmer 1987:§4.6.1).[81]
A related consideration is that I am living here has finer resolution than I live here. The suggestion of short time, due to the contracted time scale, translates into implications of recency I have been living here and immediacy I will be living here in the past and future tenses. The fact that I must be going home feels more immediate than I must go home (Jespersen 1949:§13.4.6) can also be related to the fact that be going home is true at any time from the time of departure while go home denotes the whole trip (cf. I must be on my way).
Sweet (1900) and Poutsma (1921) are hard put to find a difference in meaning between I coughed/was coughing all night long. Sweet says the progressive emphasizes duration, Poutsma says it is more vivid. I venture to suggest that both impressions are due to a finer scale of observation times being implied by the progressive.
Leech (1969) concludes that the progressive has two apparently contradictory meanings, limited time extension and continuity. Examples of the two are The engine works/is working perfectly and The earth turns/is turning on its axis. There is no contradiction in the progressive simultaneously suggesting continuation (long duration) and temporariness (short duration), for there is a two-way comparison here: the event the progressive denotes is short but open, the event it entails around it is longer and possibly closed. It all depends on what is compared to what.[82] Besides, there is iterative shift: He has been leaning on the door may feel longer or shorter than He has leaned on the door depending on whether the progressive is iterated (making be leaning mean keep leaning).
Formally, the progressive does entail both boundedness and openness (these are both properties of in). These implications are not contradictory: it is consistent to have an open but bounded event type. As suggested above, Leech’s (1969) different feelings about is working and is turning reflect different analyses prog pf work and (prog turn)^{+} respectively, selected on the basis of factual knowledge of the world.
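That openness and boundedness are jointly consistent can be seen in a minimal interval picture (my own illustration, with hypothetical numbers, not part of the notation): an open interval has finite bounds yet contains neither boundary point.

```python
# Minimal illustration: an event type modeled as an open interval of
# times is bounded (finite endpoints) yet open (excludes both boundaries).

def make_open_interval(a, b):
    """Characteristic function of the open interval (a, b)."""
    return lambda t: a < t < b

turning = make_open_interval(0.0, 8.0)   # a bounded stretch of turning

print(turning(4.0))   # True: an interior point
print(turning(0.0))   # False: the left boundary is excluded
print(turning(8.0))   # False: the right boundary is excluded
# Bounded: turning holds only between 0 and 8 -- no contradiction with openness.
```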
There are obvious connections between type of progressive and diathesis. Interior progressive is the inverse operator of process accomplishment: be eating up prog pf eat equals eat. The progressive of a result causative which specifies the effect but not the cause is future. This can be expected, since there is no preparatory process for the progressive to pick out here:
The missile is hitting the target in twenty seconds.
In a typical process causative, the entire process is included in the specified time frame:
John is putting up the tent in one minute flat.
Motion verbs tend to have suppletive progressives crosslinguistically. In English, there is a choice between He is coming/going home and He is on his/the way home. The former can just confirm a plan, the second reports a position on the trajectory. Neither entails ongoing motion: one can make a stop while one is on one’s way. Finnish uses a separate set of locative paraphrases with motion verbs. E.g. English He is going is best translated with a derived nominal Hän on menossa ‘He is in going’, while Hän on matkalla literally corresponds to He is on his way. The regular interior progressive Hän on menemässä pois ‘He is in the process of going away’ is marginal, the point progressive Hän on menemäisillään pois awkward.
The rationale for this typological phenomenon is that motion is a simple process composed of changes of position: there is no other process causing motion for the progressive to dig out. Simple (noncausative) intransitive processes in general tend to avoid the progressive or have suppletive forms: sleep – be sleeping – be asleep, live – be living – be alive, drift – be drifting – be adrift, move – be moving – be in motion.
An objection against the Bennett–Partee analysis of the progressive is that it does not allow gappy processes (Kuhn 1989:529). This refers to the possibility of I sit here not entailing I am sitting here, me pointing at my seat taken by someone else. This is a resolution ambiguity. Though simple aspect is more likely to be habitual than the progressive, with a dynamic event type both can be either. When the resolution is the same, the inference follows: I am sitting here this season is entailed by I sit here this season.
Future or prospective progressive can be analysed as the interior progressive of a future simple tense prog fut e (Poutsma 1921:§33), that is,
in p of <e:e
According to the time table, a train arrives every ten minutes, and in fact, one is arriving in ten minutes. The progressive tells us that events are actually proceeding toward, or drawing near, completion in accordance with the generic prediction (a provisional observation, not the rule). Palmer (1987:§4.4.1) contrasts I start/am starting work tomorrow. The simple present reports my schedule, the progressive my plan. The university announces a performative Examinations start tomorrow, the papers may report Examinations are starting tomorrow. Either she leaves or I leave is a threat, Either she is leaving or I am leaving a guess.
The existence of progressive future I am leaving today, in the absence of a progressive perfect prog perf e I am having left today, is an instance of the asymmetry of time: the past is necessary while the future is contingent. The event type p in the interior progressive is by definition a process or a temporary state. The state of going to leave is open to opposite contingencies: my plan could change at any minute. Holding to a plan in the face of future contingencies is a dynamic state. The state of having left is not temporary  it never ends. The past needs no attention, it will keep.
Future progressive, unlike scheduled simple future, manages to make a future reference without a future time adverb.
We are having a picnic. Care to come?
No surprise, since the progressive denotes future on its own, as it projects the end of the event to the future.
Point or boundary progressive be about to V, be on the verge of Ving is the case where the variable p of the progressive formula in p of e denotes an atomic event in the immediate left neighborhood of an event e, i.e. the boundary or verge of e. There are two major differences to the interior progressive. First, interior progressive supports inference from is Ving to has been/begun Ving, while the point progressive has the opposite entailment has not been/begun Ving. Second, the interior progressive allows keep Ving, the point progressive does not. The interior progressive is open, the neighborhood progressive is closed. He is going on describes an uninterrupted process, the point progressive he is about to go on denotes a pause which is about to end.
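The opposite entailments of the two progressives can be made concrete on a toy timeline (a sketch of my own; the function names and numbers are hypothetical): relative to an event with start s, the interior progressive at t requires s < t, hence has begun, while the point progressive requires t < s, hence has not begun.

```python
# Toy check of the two progressive variants against an event's start.
# interior_prog: t lies strictly inside the event   -> it has begun.
# point_prog:    t lies in the immediate left neighborhood of the
#                start (the boundary or verge)      -> not yet begun.

def interior_prog(t, start, end):
    return start < t < end

def point_prog(t, start, eps=0.5):
    return start - eps <= t < start

def has_begun(t, start):
    return start < t

start, end = 10.0, 20.0

t1 = 12.0   # "he is going on": interior, so it has begun
assert interior_prog(t1, start, end) and has_begun(t1, start)

t2 = 9.8    # "he is about to go on": on the verge, so it has not begun
assert point_prog(t2, start) and not has_begun(t2, start)

print("interior progressive entails has begun; point progressive the opposite")
```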
Consider reach (attain). Reach is a momentaneous transition which must be shortlived. It appears that its progressive be reaching imposes a granularity where the final approach is a simple step, an atomic event, not a process. Is reaching equals is about to reach, is on the point/verge of reaching, not keep reaching. It denotes a point where contact is imminent, not a process left to run through.
Though both approach and reach have progressive forms, they do not mean the same. There is a difference between The curve keeps approaching the line and The curve keeps reaching the line. The former can describe an asymptote, the latter can only describe a curve intercepting a line many times.
The difference between interior (process) and boundary (point) progressive is made explicit in Finnish using internal vs. external locative cases olla Vmassa/Vmaisillaan.
The boundary progressive is awkward with open event types which do not have boundaries to be at. Finnish
Olin syömäisilläni / istumaisillani pöydässä kun soitit. ’I was about to eat / sit at the table when you called.’
is quite odd and gets better with
Olin rupeamaisillani syömään / istumaisillani pöytään kun soitit. ’I was about to start eating / sit down to the table when you called.’
Portuguese too makes a difference between ir correr and ir a correr (Hundertmark 1982:§8.251–253). The first one is a near future ‘be going to run’, the second one a point progressive ‘be about to run’ with an implicature of interruption: ‘I was about to run when something happened (to stop it)’.
Formally, the point progressive and the future progressive come to the same thing, prog <e or in p of <e. The difference is whether p is atomic (a point) or a temporary state.
Kuhn (1989:529) points out that a simplistic interior progressive analysis won’t work for a simplistic analysis of achievements such as Baltimore wins or Mary starts to sweat. If these events are true at a point only, how can another event happen in their interior? This objection can be met as follows. For one thing, an achievement true at points need not be true only at points. Though the exact point of winning may exist, there may be an associated process as well. For another, the interior progressive is not the only progressive on the market. Baltimore is winning can also mean Baltimore is about to win. Both readings seem available for this example.
Further variants of the progressive are obtained by combining it with aspectual verbs. French and Portuguese use go to describe comparative change progressive (prog p)^{+}: aller (en) +‑ant in French, ir +‑ndo in Portuguese, and gå og V in Norwegian. They can often be translated by keep Ving or V little by little (the latter is better for a closed reading):
fr La vallée allait (en) s’elargissant ‘the valley kept getting wider’
no Det er nettopp det jeg går og studerer på. ‘That’s just what I go and ask myself.’
pt Ela foi ganhando terreno, já não era a costureira, era a dama da companhia... ‘She had been gaining ground, she was no longer the seamstress, she was a lady in waiting’.
Note the Perfeito foi of ir ‘go’ in the Portuguese example which makes the progress a bounded one (a short bit of comparative change). Not surprisingly, many languages express the iteration through reduplication:
fi Laakso laajeni ja laajeni. 'The valley widened and widened.’
jp gohan o tabe tabe ‘eat rice repeatedly’
The combination andar a + inf ‘go on Ving’ in Portuguese expresses habitual iteration of the progressive (prog e)^{+}. Predictably this shade can in some cases be brought out in English with keep or a habitual adverbial like these days:
Ele anda a aprender português. ‘He is learning Portuguese these days.’
Todos andamos a fingir qualquer coisa. ‘We all keep feigning things’.
Tønne (2001) shows that the Norwegian gå og pseudocoordination progressive selects open event types. It will be shown in the diathesis chapter that go denotes a comparative change. It follows that a pseudocoordination progressive of the type go⋂e will denote a comparative change as well.
Progressives formed with dynamic permanences like sit or lie will similarly share aspectual properties with them.
Bertinetto/Ebert/deGroot (1998) distinguish between focalised, durative, and absentive progressives. The first class maps onto my interior progressive, the durative onto the varieties just discussed. The absentive is a fresh locative progressive meaning be somewhere Ving, for instance Finnish olla Vmassa.
This section surveys the combinatorics of the progressive with other TMAD devices. Compare the section on the English progressive below.
The progressive of abstract accomplishments like John is running a mile in three minutes prog (run ⋂mile in 3min) feels odd in the same way as Cresswell’s (1979) abstract accomplishment He was polishing all the boots or Johnson's (1998) They were playing chess for an hour. Hirtle (1975:§354) notes that the English simple/perfect progressive can distinguish between absolute and relative quantity or frequency (just as perfective/imperfective aspect does in Portuguese, vide infra).
It has snowed/been snowing a lot (much/often).
My car has cost/been costing me $250 (a month) in repairs this year.
It is snowing a lot now or I have been smoking two cigarettes sound odd or incomplete (per day?), unless they entail it is snowing hard or often or that I have lit two cigarettes at once. This feature can be attributed to the absence of associated process in abstract accomplishments.
Vlach (1981b:75) finds progressive progressively less natural with more abstract adverbs of frequency: be shaving every day / regularly / once a week / usually / never, as it becomes harder to identify the associated process.
Mittwoch (1988:224) makes similar observations about It was raining for 2 hours. This too is because (and in so far as) there is no process associated with an abstract accomplishment. Such an accomplishment is more like a closed simple state. The progressive has a future feel to it, because there must be a plan involved, in the absence of any other dedicated process. With the scoping
John was working for 2 hours prog (work for 2h)
thus means his shift was 2 hours long. Mittwoch’s other examples support this. The level of the lake was rising 10 feet when we arrived is odd, for how could one tell? John was drinking three cups of tea when I arrived may be funny, but gets better with John was drinking his usual three cups of tea when I arrived.
Mittwoch (1988) feels very strongly that progressives should not accept duration adverbials, i.e. that the scoping (prog work) for 2h is out for the above example. Similar opinions are voiced by Rohrer (1981) who stars John was leaving for an hour, Hatav (1989), and Bertinetto/Delfitto (1998), who question mark Mary was dancing for two hours/until midnight. Compare
It is raining for 2 hours.
It was raining for 2 hours.
It has been raining for 2 hours.
It will be raining for 2 hours.
The present tense variant is the odd one out. The perfect and the future are fine. The simple past is bad if we add when I arrived but better with steadily. Or try replacing be with keep. The present tense is again odd: It keeps raining for 2 hours is a generic sentence, for keeps is an extended event type in simple aspect.
Note that (prog rain) for 2h denotes at least two hours of rain, while prog (rain for 2h) denotes an abstract state of being inside a rainy two hours. The present tense sentence can only match the point of speaking under the latter scoping. We saw that this scoping with an abstract accomplishment entails a plan; the problem is that rain is normally not a scheduled event.
In contrast, the past tense under the opposite scoping (prog rain) for 2h, disambiguated by steadily or keep, is good. But this scoping makes the present tense sentence detached from the point of speaking, for two hours cannot fit in the point of speaking. The opposite scoping makes better sense as a series: It is raining for two hours daily, prog (rain for 2h)^{+}.[83]
All this works as predicted. No rule is needed to limit the relative scopes of durative adverbials and the progressive. Commutativity with durative adverbials could be one index of grammaticalisation of the progressive.
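The two scopings can be told apart in a toy interval model (a minimal sketch under my own simplifying assumptions: discrete sampling stands in for 'throughout', and all names and numbers are hypothetical): (prog rain) for 2h demands two hours of points interior to rain, while prog (rain for 2h) can hold at a single point inside a two-hour rain.

```python
# Toy model of the two scopings of the progressive with "for 2 hours".
# An event is an interval (start, end); times are in hours.

RAIN = (3.0, 5.0)   # a two-hour rain event

def prog_at(t, event):
    """prog e: t is strictly inside the event."""
    s, e = event
    return s < t < e

def prog_then_for(interval, event, hours=2.0):
    """(prog rain) for 2h: the progressive state holds throughout a
    reference interval of at least 2 hours (sampled at 99 points)."""
    a, b = interval
    return b - a >= hours and all(prog_at(a + k / 100 * (b - a), event)
                                  for k in range(1, 100))

def for_then_prog(t, event, hours=2.0):
    """prog (rain for 2h): t is inside an event lasting at least 2h."""
    s, e = event
    return e - s >= hours and prog_at(t, event)

# A single point of speaking can satisfy the second scoping ...
print(for_then_prog(4.0, RAIN))            # True
# ... but a point cannot contain the two hours the first scoping needs.
print(prog_then_for((4.0, 4.0), RAIN))     # False
# The first scoping needs a genuinely extended reference interval:
print(prog_then_for((3.0, 5.0), RAIN))     # True
```

This matches the text: only prog (rain for 2h) can be matched to the point of speaking, which is why the present-tense sentence forces that scoping (and hence the odd plan reading), while (prog rain) for 2h detaches the sentence from the point of speaking.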
Some examples Mittwoch rules out as ungrammatical seem quite natural. She admits that many native speakers allow such sentences, and they are accepted by e.g. Leech (1969:150), Palmer (1974:55), Bennett (1975:103), Dowty (1979:157), Bennett (1981) and Vlach (1981). The King James Bible has them (Mittwoch 1988:248):
He was working all morning.
He is always losing things.
You were talking on the phone for hours.
They were working on that project for ages.
They were playing chess every evening for three years. (Johanson 1998)
He was coughing for half an hour.
Last year we were wearing winter coats till May.
Last year when I was in Boston I was living in Mary’s house for three months.
I was playing piano from ten to eleven o’clock (Leech 1969).
John was wearing sunglasses when I had lunch with him (Dowty 1979).
And Solomon was building his own house thirteen years and he finished all his house. (1 Kings 7:1).
Lee was going to Radcliffe until she was accepted by Parsons.
Rob was working on the research project until he got the job at the U of M.
One advantage of allowing this lot is that the compound tenses of the progressive can come out compositionally. The complex syncategorematic rules of Mittwoch (1988) are unnecessary.
Contrary intuitions may reflect constraints operative in other languages. French closed event types seem to exclude imparfait: Rohrer (1981) stars Jean traversait l'Atlantique pendant deux jours 'John was traversing the Atlantic for two days', and Vet stars Jeanne copiait la lettre pendant des heures 'Jeanne was copying the letter for hours'. Mittwoch (1988:249) also claims that the Spanish progressive and the French imparfait do not accept duration or frequency adverbials. As Bertinetto/Delfitto (1998) point out, languages like Spanish or Portuguese allow a perfective progressive here. Note the English translation kept, which is a continuation causative.
es Maria estuvo pintando la pared hasta la media noche/durante dos horas. 'Maria kept painting the wall until midnight/ for two hours.'
In Italian, progressive is passable when there is a reason for it, e.g. when it matches the event to another:
it Maria stava giocando a tennis dalle 2 alle 3 quando tu credevi che stesse studiando. 'Mary was playing tennis from 2 to 3 when you thought she was studying.'
In English too, one only uses progressive with tight adverbials of time for a reason. In general, iterative progressives make better sense than interior ones. I was using product X until I discovered product Y sounds perhaps even better than used or kept using because it suggests habitual behavior instead of a considered plan.
In general, a closed cycle of a progressive pf prog e, denoting an irresultative past temporary state (Galton’s pofective), is an identifiable event type crosslinguistically (Declerck 1997:194).
I’ve been thinking. (I may have an idea.)
You’ve been working too hard. (You need a rest.)
You’ve been drinking. (I can smell it.)
He’s been talking about you. (I know something now.)
Someone’s been tampering with my books. (They are upside down.)
Who’s been eating my porridge? (There are traces in it.)
Portuguese uses perfeito progressivo here:
pt Estiveste a falar com ele. 'You have been (pf) talking (prog) to him (You have done some talking to him)'.
The progressive removes result and the perfeito adds closure: something has happened which has not run to completion or has been undone, but has left a trace. This is a prototype existential perfect.
Bertinetto/Delfitto (1998) note that perfect progressive shuns bounding time adverbials:
es María estuvo pintando la pared en dos horas. 'Mary was painting the wall for a while in two hours.'
The English translation is not too felicitous either. Perfect progressive is a cycle, i.e. closed and irresultative, while bounding time adverbials (quod vide) prefer half closed (resultative, telic) event types.
Bertinetto/Delfitto (1998) and Johnson (1971:29, 1998), along with many other writers, exclude progressive with bounding adverbials in/by t, starring Mary was painting the wall in two hours. This seems too restrictive. There is a well known narrative construction which combines a progressive form with a bounding or location time adverbial (imparfait pittoresque), described later in the chapter on discourse.
Where’s Mr Luttrell? he heard her ask. In a moment she was greeting him...
Manning shook off his early Evangelical considerations, started an active correspondence with Newman, and was soon working for the new cause.
It is true that this combination is special, not equivalent to Mary painted the wall in two hours, simply because (prog e) in t does not entail e in t. Instead, its meaning is the expected composition of the meanings of the progressive and the adverbial, as it ought to be. The opposite scoping prog (e in t) is also unusual, but not excluded: Spasski was winning the game within the appointed time when he made a fatal move. The composition is not inconsistent (empty), only such situations are hard to verify.
A progressive imperative is uncommon in English (outside Irish English, Curme 1931:380, Poutsma 1924:67, Matthews 1989) as well as in many other languages (Bertinetto/Ebert/deGroot 1998). An order concerns what is going to happen, not what is already happening, for that cannot be changed. Thus the event type licensing an imperative study is (now<⋂(¬study⋃study)).study, in other words start or keep studying, not be studying now. An interesting exception that confirms the rule comes from Poutsma. The heroine does not know what the hero is thinking about, and makes a silent wish concerning what is already happening:
I hope you’re thinking about me. Please, be thinking about me. (Webster)
This explanation predicts that progressive imperative sounds better if the order concerns a state of affairs timed by another event: don’t be watching TV when I get back. As get/keep studying shows, it is the be part of be studying that causes the problem. A causative make sure you are studying corrects the situation. Imperfective imperatives in general are fine when they can mean get/go on Ving. See section on imperfective combinatorics for examples.
Perfect is variously called a tense (Frawley 1992), an aspect (Friedrich 1974) or a phase (Joos 1967).[84] According to Paul (1886:273) "Das Charakteristische des Perf. im Gegensatz zu Aor. und Imperf. liegt darin, dass es das Verhältnis eines Vorganges zur Gegenwart ausdrückt." ('What is characteristic of the perfect, as opposed to the aorist and imperfect, is that it expresses the relation of an event to the present.') In the same vein, Lindstedt (1996) cites Maslov’s definition of the perfect as “expressing a present state as a result of a preceding action or change, and/or expressing a past action, event or state that is somehow important to the present and is considered from the present point of view, detached from other past facts”. According to Bauer (1970:190), “the action is viewed, not as a past event, but as being an accomplished fact at the moment of speaking, having taken place, once or repeatedly, within a span of time that is not conceived as separated from the moment of speaking.” According to Joos (1967:140), in the perfect phase the event is not mentioned for its own sake but for the sake of its consequences. This is why it cannot be used for narratives, which present past events for their own sake. As these descriptions suggest, the perfect is a complex or compositional aspect (Cohen 1989) in that it separates reference time from event time. In this context, 'event time' and 'reference time' refer to subevents of the complex event type composed by the perfect, specifically the input and output event types of the aspect.
perf ¬rr`r of e:e
Perfect is a prototypical category of closely related variants, straddling the open/closed event type distinction, denoting some part of a final change involved in the event e. A number of typologically significant special cases of the perfect have been distinguished (cf. Bybee et al. 1994). A first division can be made between resultative perfects e\\r (cf. Bybee et al. 1994 resultative) which produce open event types, and anterior perfects e`r (cf. Bybee et al. 1994 anterior) which produce closed event types. A further division can be made between near perfects e`r where event time extends up to now and remote ones e<`r where it ends earlier (while reference time remains an extended now).
A resultative[85] e⋂¬rr`r:e denotes the final or result state (suffix) r:e⋂<r of an event e. This makes it a denotational variant of extended now e⋂<r.
res e\\r:e
An example is English He is gone, the glass is broken. Resultative perfect is idempotent: e\\r.e\\r ⊆ e\\r. As an open event type, the resultative allows still: He is still gone is fine, He has still gone is not (Dahl 1985:133-135). Another concomitant is since (Johanson 1998:§8.5.1.1): de Er ist seit gestern verhaftet 'he has been arrested since yesterday'. The resultative entails that the result state holds at reference time: He is gone implies he is away. A result state perfect cannot be undone: He is gone and come back is inconsistent (compare he has gone and come back).
What the result r is in each case depends on aspect type. For a change, the result is a state following the change. In the simplest case, this is the final state that lexically defines the change, for instance be open is the result state of open. This is the garden variety of resultativity (telicity) discussed in the section on resultativity.
Result perfect of an open event type does not change event type. The suffix of an open event, by definition, is the event itself: a\\a ⊆ a. This makes them somewhat redundant, but not nonexistent. Perfects of open events are known in Classical Greek grammar as perfecta intensiva (Kühner 1899:§384.4), because they as it were only intensify the present (denote continued open events). Examples: dedorka 'look' from derkomai 'look', kekhaira 'rejoice' from khairo 'rejoice', eolpa 'be in the hope' from elpo 'hope'. Johanson (1998:§10.3.2) cites examples from other languages. Another way of looking at a result perfect of an open event type is as the result perfect of the corresponding initial change (Kühner ibid). But this is a distinction without a difference.
This is what Aristotle (Met. 1048b:18-34, Eth. Nic. 1174a14-b14) had in mind in saying that the end (telos) of an activity (energeia) is the activity itself. For it follows from this definition that an activity has reached its result as soon as it has come about. This in turn is indeed one of the criteria Vendler (following Ryle who cites Aristotle) lists for activities. This is also why open events are irresultative: the result implies no change, as it is identical to the initial state. The perfect of an open event holds as soon as the event has started. Or as Langacker (1982:280) puts it: the perfect derives a state from a process by focusing on the point at which the trajectory is fully instantiated, which in the case of closed event types is the endpoint, while in open event types any point within the event meets the condition of the perfect, including the first.
A continuative perfect 'is still gone' found in Chinese, Lezgian, or Archi (Johanson 1998) can be formalised as e\\r`r:e. It denotes a continued stretch of the state resulting from a change. A result state perfect is like the progressive in that it does not have an inceptive reading.
In a lexicalised state perfect r such as dead from die, the result state no longer entails an antecedent change. The event-result relation at most applies at type level; no antecedent event token is implied to exist. Dead matter was never alive, a hidden variable has never been in evidence. A state perfect is no longer a live aspect: Shakespeare’s result perfect My wife is dead tonight is obsolete. State perfects are common in Greek, where they get suppletive translations: eidon/oida ‘see/know’ (more examples below). In English, as in many languages, state perfect participles produce adjectives: closed (not open), extended (wide), impoverished (poor), decided (clear).[86] State perfects do not allow modifiers relating to the event (Smith 1991:68): The door is locked with a key is a passive, not a state perfect (compare the door is locked with a padlock). More on resultatives in the section on perfect typology below.
For a cycle ¬ss¬s, the result state is also the same as the initial state, so transient events such as cough, hiccup, blink, blip, flash (but also take a nap/walk, watch some TV, visit) are irresultative: they produce no change. This is one case where closed does not equal resultative. The perfect of an open event type can denote the aftermath of the state (Hirtle 1975), i.e. time following a cycle of the state (I have slept).
There is, finally, a weak but undeniable sense in which every event produces a result, and that is the historical result of the event itself being past, having happened. This sense comes about from the equivalence (noted above) of any individual (hence closed) event with a cycle of itself: b is equivalent to ¬bb¬b. On a coarser granularity, any event e is a turning point, a historical change, the boundary between time without it and time with it: ¬(e<).e< (the event changes from nonpast to past) or <e(¬<e) (event changes from future to nonfuture) or ¬(<e<).<e< (the event changes from nonexistent to existent). These are all half-closed event types. Thus one result of any event e is its history or perfective perfect perf pf e = <e<`<. In other words, an existential present perfect of e, or e<now, is also the near perfect (e<).now. An existential perfect becomes a result perfect on a coarser resolution where intervening events map to null.
A part of the grammaticalisation of a perfect is that the change ¬rr may become only a contextual consequence of a particular token of the given type rather than inherent to the lexical event type. For instance we have met can imply there is no need to introduce us. This is why the implicature of current (or present) relevance (Dahl/Hedin 1998) associated with the perfect is so elusive: what constitutes the relevant result depends on context (Löbner 1988:178-179). Another reason is granularity. Paul has finished writing his part of the book (Chung/Timberlake 1985) may mean that the part has just been finished but that he is still free to do other things: he has not started any other project of the same sort since then. Immediacy is subject to granularity.
The point of current relevance is that even an existential perfect denotes a result state, namely the state that an event of the type has (not) happened. This is result enough for many purposes, for instance criminal courts and the Guinness book of records. Lindstedt (1998) makes a distinction between material bound (lexical result entailment) and temporal bound (sufficient for current relevance). Löbner (1988:178) allows a derived event type ‘event e has happened’. Palmer (1974, 1987:§3.3.2) speaks of ‘nil results’. See also Chung/Timberlake (1985).
The term anterior (Reichenbach 1947) is a cover term for two closely related perfect types, near perfect and remote perfect. A near perfect is the event type (e⋂¬r)`r or equivalently e`(¬e⋂r) which denotes a state r immediately following e. If r is equal to ¬e, the formula reduces to e`¬e. If r is now, it reduces to e`now.
For open events, one difference between present resultative a\\now and near present perfect a`now concerns whether the event covers the present, i.e. whether it is entailed or just implicated to hold now. Ehrich/Vater (1989) note that German perfect Ich habe seit drei Jahren nicht mehr geraucht ‘I have not smoked in three years’ ¬smoke⋂3years`r⋂now is consistent with smoking at present, while extended present Seit drei Jahren rauche ich nicht mehr ¬smoke⋂3years ⋂<now is not.
Hirtle (1975:35-36, 68) argues that the English present perfect of a state does not entail that the state covers the present, for otherwise the following sentence would be redundant:
Change has been, and is, the breath of our existence and the condition of our growth. (Kruisinga 1931:405)
According to Gabbay and Moravcsik (1980:76), even the perfect of a progressive He has been reading his book is true if he was reading only until now. The progressive fails to guarantee that the reading event continues now. The progressive does help suggest it, especially in contrast to He has read his book, which is closed. Mittwoch (1988) points out that bare activities seem infelicitous in the English present perfect, or rather, that they are coerced to closed events in the perfect. To say that an unfinished bit of activity has occurred, the perfect progressive is preferred (the progressive implicates iteration here, Hirtle 1975:§3.5.5).
John has run (away)/been running (around).
McCoard (1978) notes that At the moment he has been away is rather bad, but gets better with At the moment, he has been away for a week. Without a bound, the perfect is vacuous, for an open perfect is true as soon as (i.e. all the while) the present is. A punctual adverbial is still somewhat awkward. (By) now/so far, he has been away for a week sound much better (Comrie 1985:33-34).
It is only when we are marking a record (the first point when a result is reached) that the exact timing makes a difference. Now I have met him is appropriate right after I met him, not years later. This is sensitive to resolution. I have met him today is no news unless I met him today. Today Asimov has written over three hundred books does not require that Asimov wrote them today, not even that he wrote anything recently, as long as he is alive (Kuhn 1989:537).
I have always liked opera has a strong implicature of continuation, I have liked opera so far is noncommittal, while I had always liked opera until now strongly suggests (but does not entail) a change of mind. Once the event has been over for some time, it is too late to use anterior present perfect:
A person has just been rescued from a remote island where he had/?has been marooned since 1960. (McCawley 1971:109)
A near or 'hot news' perfect of a change c = ¬rr has the form ¬r`r. It denotes the final state immediately following the change. Since it holds of the immediate consequent state, it is used for hot news: He has just/now gone is true at a time immediately after the departure. An anterior perfect of a closed event is closed from the past, so ¬rr¬rr is not of type ¬rr but an iteration of that type. Hence closed near perfect allows already and just but not still. He has already left is fine but in He has still left, leave is coerced to an iteration (Hirtle 1975:81fn, 93, Hoepelman/Rohrer 1981:114, Dahl 1985:134, Nedyalkov and Jaxontov 1988, Bybee et al. 1994:65). Lindstedt (19??) observes the same restriction on the Finnish perfect. Compare
The patient cannot go out because she has still only recently recovered from an illness.
Hot news (McCawley 1971, Comrie 1976:60, Anderson 1982) is a context for closed near perfect (Johanson 1998:§8.5.2). News is hot if the event is recent, in fact immediate: it is the latest news about the matter at hand. As always, immediacy, adjacency of events, is relative to resolution: what matters is that no other event of the same scale and granularity intervenes. Johanson (1998:§5.2.1) observes that Balkan languages with an existential perfect tend to use a perfective past here.
'Now tell me, what news of the mission?' 'We have located the ship.' 'Wonderful,' said Halfrunt, 'wonderful!' (Adams: The Hitch Hiker's Guide to the Galaxy).
Temporal adverbials with near perfects denote an extended now <now like today or this week. The since clause denotes an extended now: Since you have been here, everything has been fine denotes here⋂fine⋂<now. Proper past adverbials like yesterday, last year or before, previously are excluded. My father has died is hot news, My father has died in the past is unlikely. Compare a father of mine is dead or people have died in the past.
Mittwoch (1988:247) notes that Today John has done his homework is ambiguous about whether John did (all) the work today: homework⋂today\\r⋂now vs. homework\\r⋂today⋂now. The former is equivalent to a near past homework⋂today.now. Only the latter reading remains for I have been here for a week today here⋂week`r⋂today, for obviously the past week won't fit within today. Yesterday is no good on either scoping: Yesterday John has done his homework homework⋂yesterday\\r⋂now and homework\\r⋂yesterday⋂now are both empty.
Durative adverbials can measure the result state with an anterior perfect. He had gone out for 20 minutes translates as ¬away`away⋂20min⋂now< where 20 minutes measure time from departure (some of it, even all of it, may still be ahead). The imperfective paradox applies, making the estimate a subjective one, measuring planned absence. In past perfect both interpretations are possible: He had gone out for 20 minutes and returned/but never returned/returned early.
A remote or existential (experiential, indefinite) perfect e<`r:e just says an event has occurred sometime in the past.[87] This trivial result state of the existential perfect never ends (what has happened cannot be undone), so existential perfect is irrevocable (Galton 1984:58) or irreversible (Nedjalkov 1988).[88] A formal reflex of this is that the following event type denotes 1:
(e<now<) ⋂ (<now<(e<`t)) = e<now< ‘What happened will have happened’.
The perfect of an irreversible state is awkward: Napoleon has been dead (Klein 1994:101). This is odd, because has been dead implies is dead. A measure phrase adds a later bound and saves the sentence: Napoleon has long been dead. An experiential perfect sensu stricto is an agentive existential perfect (Lindstedt 1998).
Existential perfect is odd with still. As Hoepelman and Rohrer (1981:114) point out, irrevocability makes still redundant. He has still left would implicate having left could stop, which is not possible. Adverbials of duration are odd in general:
I have been alive/*born for 47 years today.
Our family has been in/*to Wales for a month now.
We are/*have got up since dawn today.
The constraint already follows from the representation of existential perfect. Consider two candidate formalisations of I have been born for 47 years today:
(¬born.born⋂year)^{47}<`r⋂today
(¬born.born<`r⋂year)^{47}⋂today
The first formula makes for 47 years an event time adverb, implying multiple days of birth. The second formula construes it as a reference time adverbial, cramming 47 years into one day. Even a shorter adverbial like since dawn is odd, much as the simple present tense is. All in all, reference time as an event type has a weak existence in anterior perfects: it either coincides with event time (near perfect) or is unbounded (remote perfect).
The event time e of an existential perfect is properly past, so it allows remote event time adverbials. Example: fi Hän on käynyt täällä viime viikolla ‘He has been here last week’. It has been observed in many languages that with a definite remote past event time adverbial existential perfect becomes evidential. Finnish Eilen olen ollut sairaana ‘Yesterday I have been ill’ is unlikely, for I ought to know my past condition by acquaintance. The best interpretation is that I was unconscious yesterday.
Kuhn (1989:537) finds it hard to reconcile the idea of perfect as a near past to the fact that the event can be remote. Why should one ever say I have bought a pair of shoes outside a shoe store, or I have lived in Finland when I am very obviously living in Germany? Simple: in the existential perfect event type e<`r, the event e is remote, but the reference time e< is near.
Indefinite existential perfect is a dual of a habitual or iterative one. Existential perfect is particularly common in negative polarity contexts, for that is where the weak side of the duality turns up. Compare for instance
Have you ever helped him when he has been ill?
No. I have never helped him when he has been ill.
Yes, I have helped him when he has been ill.
The positive answer can be understood as the contradictory or as the contrary of the negative one.
The perfect has no imperative: *Have studied, *have not watched TV. Bad timing (one cannot change the past) is only a part of the explanation. Have studied when I get back avoids that problem but is not much better, though the idea can be expressed by adding a causative Make sure you have studied when I get back. (This is much preferred with the progressive imperative too: Make sure you are studying when I get back). As the paraphrase shows, the other half of the explanation is that the perfect is a state and states are incompatible with the imperative, because the subject is not an agent. The progressive is better than the perfect, being a dynamic state.
Conversely, there is no progressive perfect in English (Mittwoch 1988:243) because having done something is a simple state.[89]
I am nearly having written this paper.
Hirtle (1975:41) notes that the perfect does not get iterated either: Have you always got this much rain in Quebec? (perf have)^{+} is an odd way to ask do you always have/get (pf? have)^{+}. Perfect does not get iterated because it is irrevocable (Galton 1984): it won’t cycle because it never ends. A bound perfect is fine, because it is the main event that cycles: Whenever I see them they have been swimming (see→perf swim). And of course the sentence makes sense as the perfect of an iteration: We have always got this much rain in Quebec so far (perf have^{+}).
Smith (1991:111) considers perfect a special case of perfective. Though this may be a fair description of closed perfects, it hardly fits open perfects.
Because a perfect denotes a state, the unmarked morphology for perfect is adjectival, usually a result participle. Finite perfects are formed by combining an auxiliary with a participle: usually be or a posture verb with an active participle, or have with a passive participle. The participle is perfective or a perfect. The voice of the participle depends on transitivity and the auxiliary depends on voice in ways that are predictable (compositional) until grammaticalisation sets in. The Finnish and Bulgarian perfects have be plus an active participle, the Portuguese and English ones have have plus a passive one. Some languages mix both types (German). The Russian preterit is an old active participle that has lost its auxiliary. Agglutination pastes together finite inflected perfects from periphrastic ones. (Johanson 1998:§8.8.)
The English perfect as a rule does not go with future time adverbials like tomorrow, next week, a week from now (Crystal 1966, Matthews 1987:145). A case in point would be a scheduled future perfect like The last plane on Sunday has landed well before the last train leaves. (This is a constructed example; I have not attested such usage naturally.) Result state adverbials, as in I have turned off the heating for tomorrow/until further notice, are not counterexamples.
Consider temporal generics in particular. A generic sentence like he (generally) smokes should thus be analysed as a generic quantifier Q over smoking occasions can and smoking occurrences smoke. (I leave the restriction to smoking occasions out where it becomes vacuous.)
can Q smoke
The upper limit of Q is always: He always smokes (when he can). This is the special case smoke or can→smoke discussed in the previous section. The lower limit is sometimes: He sometimes smokes can be expressed as ~¬smoke or <smoke<.
Then there are the cases in between. One class are relative frequency adverbials He smokes twice a day of form (((pf smoke) twice)⋂day)^{+} of which many times a day, often/frequently, regularly, every now and then pf^{+} smoke are vague representatives.
Another significant special case is mostly, which says that more smoking occasions are smoking occurrences than not: can⋂smoke > can⋂¬smoke. (Here the greater than sign compares the number or duration of occasions, not temporal order.)
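As an illustration, these quantitative readings can be sketched in a toy extensional model where occasions and occurrences are sets of time points. The function names and data below are illustrative assumptions, not part of the text's calculus:

```python
# Toy model of the generic quantifier "can Q smoke": occasions and
# occurrences as sets of discrete time points (an illustrative assumption).

def always(can, smoke):
    """He always smokes (when he can): can -> smoke, i.e. subset."""
    return can <= smoke

def sometimes(can, smoke):
    """He sometimes smokes: at least one occasion is an occurrence."""
    return bool(can & smoke)

def mostly(can, smoke):
    """He mostly smokes: |can ⋂ smoke| > |can ⋂ ¬smoke|."""
    return len(can & smoke) > len(can - smoke)

can = {1, 2, 3, 4, 5}        # occasions on which he can smoke
smoke = {1, 2, 3, 7}         # times at which he smokes

print(always(can, smoke))    # False: occasions 4 and 5 pass unsmoked
print(sometimes(can, smoke)) # True
print(mostly(can, smoke))    # True: 3 smoked occasions vs 2 unsmoked
```

Note that `mostly` compares cardinalities of occasions, not temporal order, matching the reading of the greater-than sign above.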
Counterfactual generics work the same way. The difference is that the generality extends from actual to merely possible occasions. He usually, habitually, typically, normally smokes have such counterfactual implications, i.e. involve considerations what would happen if things were in a way which they may never be. They differ in what kinds of modifications to the status quo are admissible in each case. The deductions what would happen depend on what other generic event types, invariants, regularities or laws, must be kept unchanged in moving from one situation to another.
I abbreviate the twoplace generic operator can Q smoke as a oneplace operator gen smoke (which leaves the domain of restriction of the generic quantifier implicit). It covers habits and dispositions as special cases, discussed in the next section. The generalised quantifier format pQq where Q is a binary relation on a Boolean algebra nicely brings out the two degrees of freedom in narrowing down generics: intensionally by qualifying conditions p on the domain and extensionally by quantitative constraints Q on the generalisation. Both are involved in spelling out examples like G. Carlson’s Lions have manes as ‘most adult male lions have manes’.
The generalised quantifier point of view also puts my earlier observations about always in sharper focus:
Mary always sleeps.
Mary only sleeps.
Mary always sleeps well.
Mary only sleeps well.
Mary never sleeps badly.
Mary mostly sleeps well.
According to the generalised quantifier analysis, a quantifier has an event type it lives on (can) and the one it quantifies (smoke), related as presupposition and assertion or focus. If the presupposition is empty, we approach a categorical quantification: Mary always sleeps (when she can) becomes can→sleep or just sleep, which (if we restrict the range of only to sleep⋃¬sleep) equals Mary only sleeps. If the sentence has a natural division into focus and presupposition, that can be used, so Mary always sleeps well becomes sleep→well, which is practically equivalent to Mary only sleeps well. This choice of focus also explains why Mary never sleeps badly ¬(sleep⋂badly) seems to mean very much the same. Similar observations apply to Mary mostly sleeps well, whose first reading (though not the only one) is that most of Mary’s sleep is good.
A generic quantifier over occasions is in effect an unspecified frequency adverbial (Vlach 1993). It thus makes it patent why generics have a close affinity to sentences with explicit frequency quantifiers like Every afternoon John ate an apple, Eva got up at noon last summer (Smith 1991:41). It also follows that generics produce open event types (in fact, states).
An event type is almost periodic if there is a partition of time into periods in each member of which there is a token of that event type. The partition constitutes a convex hull of the sequence of events which envelopes the tokens of the event type.
The strength of a claim of almost periodicity depends on the character of the partition. In the degenerate case, any event is almost periodic with period one, in the envelope of eternity. A timeless event is almost periodic relative to the discrete topology. A habit is an almost periodic event which happens every now and then. A disposition is one which happens always on given conditions.
The above definition of almost periodicity can be recursively reapplied to itself by noting that the almost periodic event type may itself in turn be almost periodic on some yet finer event type. Periodicity is thus a case of granularity.
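The definition of almost periodicity can be given a minimal computational sketch. Here a partition is modeled as a list of half-open intervals and an event type as a set of occurrence times; both representations are illustrative assumptions:

```python
# Sketch: an event type (set of occurrence times) is almost periodic
# relative to a partition of time if every period contains a token.

def almost_periodic(occurrences, partition):
    """True if each period (start, end) contains at least one token."""
    return all(any(start <= t < end for t in occurrences)
               for (start, end) in partition)

weeks = [(0, 7), (7, 14), (14, 21)]   # a partition into week-long periods
teaching = {2, 9, 10, 16}             # days on which I teach

print(almost_periodic(teaching, weeks))  # True: a token in every week
print(almost_periodic({2, 9}, weeks))    # False: the third week is empty
```

Refining the partition (shorter periods) strengthens the claim, in line with the remark that periodicity is a case of granularity.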
Extensionally, an almost periodic event happens in a given fraction of the members of the partition, when the cycle of the independent variable does not correlate with the period of the dependent variable. Depending on the choice of the independent variable, different statistical probability estimates may be obtained.
Intensionally, a better match may be approached by refining the rule for picking out the partition. The relevant notion here is one of a general rule leaving exceptions which in turn fall under a minor rule, perhaps again with exceptions. For instance, the fact that the solar year does not fit the calendar year is solved by the Gregorian rule for leap years, which is a typical rule with exceptions:
The leap year is every fourth year; but there is no leap year at even hundreds of years; but there is one at even four hundreds.
What kind of a rule is this? Well, it is a strategy. The general rule is followed until it fails, in which case the exception is taken, until taking the exception again causes an error, in which case an exception is taken to the exception. This is a typical preferential, approximative, or fractal process.
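The rule-exception-exception structure of this strategy can be written out directly (a standard formulation of the Gregorian rule, given here only to make the strategy explicit):

```python
# The Gregorian leap-year rule as a strategy: general rule, exception,
# exception to the exception, checked in order of specificity.

def is_leap(year):
    if year % 400 == 0:    # exception to the exception: even four hundreds
        return True
    if year % 100 == 0:    # exception: no leap year at even hundreds
        return False
    return year % 4 == 0   # general rule: every fourth year

print(is_leap(1996), is_leap(1900), is_leap(2000))  # True False True
```

Checking the most specific clause first is exactly the preferential reading: the minor rule overrides the major one only where the major one would err.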
This analysis of regularity ties up with my analysis of genericity in terms of granularity. It also ties up with an analysis of causation as default reasoning. My analysis of causation is also predicated on the notion of a strategy. It reflects the observation by philosophers of science that causal explanations are ceteris paribus explanations: what needs explaining is a deviation from a general rule. The explanation is a minor rule which allows for the exception.
Take for instance what it means to say I teach on Tuesdays. It does not quite have to mean I teach every Tuesday, although it certainly is entailed by that. What it means is that, as a rule, I teach on Tuesdays. Exceptions are allowed, especially if there is a minor rule covering the exceptions: except every third Tuesday when there is a faculty meeting. That exception again may have exceptions, and so on. These ideas naturally lead up to Fourier transformations.
Adverbs like regularly, usually can be given a semantics using these ideas. What they say is that there is a rule (possibly a strategy, i.e. a preferential rule with built-in exceptions) which governs an almost periodic sequence of events of the given type.
Habits and dispositions can be construed as dual reducts of the notion of an almost periodic event type. Dispositions (Palmer’s potential verbs of state, 1987:§4.3.1,4.6.1) support predictions (counterfactuals concerning outcomes of experiments, i.e. closed events). An absolute disposition denotes a timeless property which under certain counterfactual conditions triggers a closed event. In other words, it asserts the existence of a strategy s which in a quantity Q of cases produces e under conditions c in situation w:
disp: c s wQe
A generic disposition is thus a counterfactual generalised quantifier that turns a closed event into a state of vague generality: 'there are conditions s under which generally (sometimes, often, mostly, always, usually when q) b’. A disposition is a conjunction of conditionals: He smokes in the bathroom ‘whenever he smokes it is in the bathroom/whenever he is in the bathroom he smokes’.
Dispositions don’t entail occurrences. A glass can be brittle and never break. One does not ask how often but how easily it happens. Because of genericity, a glass need not necessarily break when it is dropped; it is only liable to do so.
Higher order iteration pf^{+} which produces series was already mentioned as an implicit aspect shift in English. Through counterfactual generic quantification, a series becomes a habit, which turns it into a dynamic state (Vlach 1993:241). In habits, an event type e is iterated, allowing for gaps of some unspecified event type q.
hab: ¬disp¬
A habit is the dual of a disposition. This turns an open event into a counterfactual state of vague generality and counterfactual import: ‘Conditions c cause that generally (sometimes, often, mostly, always, usually) when q a’. Though a habit does not imply any particular instance at the time of reference, it (even counterfactually) entails a series of instances around it. Habits have existence entailments: It is not possible to have a habit of doing something and never do it. This follows because the range of contexts c includes the actual context.
One common way to put this is to say that genericity turns events into properties of individuals. If one thinks what it means to ascribe an event type as a property to an individual, it seems to imply that the event type is made part of the individual concept of that individual, and is thus less apt to be removed from the set of premises concerning that individual when varying counterfactual situations involving it. Maigret smokes a pipe in all of Simenon’s books, but he only errs in one of them. Another facet of the same observation is that assigning an event type involving several individuals as a property of one of them is a question of explanatory value, of efficacy in classifying facts. If you and I often get to a fight, at least one of us is irritable. Which one of us it is depends on which grouping of incidents is a better index to them.
An open event type a turns into a habit through higher order iterations pf^{+}a. One might characterise a habit as a dynamic state whose sustaining process is a series. This is why habits take considerable time, are not naturally attributed to the immediate here and now (Kucera 1981), but allow the progressive. Logically, a habit is a join of meets: He smokes in the bathroom. ‘there are several instances of his being in the bathroom and smoking, if not today, then tomorrow or next week’. It makes sense to ask how often someone habitually smokes (Kasher and Manor 1980).
Random repetitive event types like smoke or collect stamps are prime habits. Predictable onceonly event types (like brittle or flammable) are prime dispositions. The limiting case of both habits and dispositions is a regularity: The sun rises from the east is fully predictable both ways.
Habits and dispositions are dual ways of looking at a generalisation, extensional (by instances) and intensional (by rules). As modalities, habits are universalexistential (inability type) modalities, and dispositions existentialuniversal (ability type) modalities. One who habitually smokes must let it happen sometime, i.e. cannot always prevent it, one who dispositionally speaks English can bring it about anytime, i.e. will always be able to do it. Habits happen every now and then, dispositions happen any time in given conditions. A habit has a frequency: it is instantiated at least once in every period of sufficient length, a disposition has conditions: it is realised every time in at least one type of situation.
The reason why the duality, in fact the difference, between habits and dispositions is easily lost is that they involve generic quantification, i.e. their quantifier character is vague. A weak disposition is a strong habit. As science progresses, once mysterious habits become predictable dispositions.
An illustration of the duality comes from game theory. Take a two-person zero-sum game of perfect information, say a sequential version of Left or Right, where players Left and Right choose a hand in turn and Left wins if the choices are different. The second player always wins (can always win) in the dispositional sense: there is a strategy which wins against all opponent strategies. But in a simultaneous version of the same game, where players have to choose hands independently, neither player can win. In iterations of the game, both Left and Right keep winning and losing in a habitual sense: there is no way to guess which. This is an instance of actual iteration backed up by a counterfactual theory.
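The dispositional sense of 'can win' is checkable by brute force. The sketch below assumes that a sequential second-player strategy is a function from the first player's hand to a reply, and that the second player is Right, who wins when the hands match; all names are illustrative:

```python
# Dispositional vs habitual winning in the Left-or-Right matching game.
from itertools import product

HANDS = ["L", "R"]

# Sequential version: the second player sees the first hand, so a strategy
# maps each possible first hand to a reply.
second_strategies = [dict(zip(HANDS, reply))
                     for reply in product(HANDS, repeat=2)]

def second_wins(first_hand, strategy):
    # assumption: the second player is Right, who wins iff the hands match
    return strategy[first_hand] == first_hand

# Dispositional 'can always win': some strategy beats every opponent choice.
print(any(all(second_wins(h, s) for h in HANDS)
          for s in second_strategies))   # True: the copying strategy

# Simultaneous version: a strategy is just a hand; no single hand beats
# both opponent hands, so neither player can win dispositionally.
print(any(all(my == other for other in HANDS) for my in HANDS))  # False
```

The existential-universal pattern (there is a strategy that wins in all cases) is exactly the ability-type modality attributed to dispositions above.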
Once the duality of habits and abilities is noticed, indications of it are easy to find. Habits are easy to develop but hard to break, while abilities are hard to acquire but easy to lose. We are slaves of our habits while abilities are power. We try to shake bad habits and gain useful abilities. There are collocations like habitual criminal and capable officer. Ryle (1949:43) explains a disposition such as brittleness as ‘bound (liable) to undergo a change when a condition is realised’, and his habit of smoking as ‘permanent proneness to smoke when I am not eating, sleeping, lecturing, or attending funerals, and have not quite recently been smoking’. Thus one can predict of a disposition when it will happen, of a habit (at best) when it does not. A glass breaks if something makes it, Ryle smokes unless something stops him. Habits tend to be passive (they are succumbed to) and abilities agentive (they are exercised). Etymologically, habit and ability come from the same stem habere ‘have’: habitus is a passive perfect form, habilis is an active future one.
Belief is a habit but knowledge is a capacity. We make people believe things but let them know things. Belief is something one holds and clings to, knowledge is something that one acquires and applies.
Further evidence for the analysis of habits and dispositions as duals is that some languages use different aspects for habits and dispositions (imperfective for habits, perfective for dispositions). The habit describes what tends to happen without specifying when, and the disposition what will happen in given conditions (Hedin 1998). Examples:
Prosto zahodit muzyku poslushat’. Sjadjet v ugol, poslushajet chasok i domoi. ‘He simply drops by to listen to music. He will sit in a corner, listen for a while, and home (he goes).’
On vsegda najdet/nahodit vyhod. ‘He will always find/always tends to find a solution’.
Il lisait/lit toujours une heure après le petit déjeuner. ‘He (would) always read for an hour after breakfast.
Ele sempre faz(ia) os deveres. ‘He always did/would do his homework.’
Simonides kantoi aina kaiken omansa/kaikkea omaansa mukanaan. ‘Simonides always carried (took/lugged) everything (part/nom) he owned with him.’
English uses will to signal nonpast disposition (not habit). Would can mark a past habit or disposition. Used to can report a remote closed habit in the past (hab e « now). This does not entail the habit is over; as usual, it is enough for the reference time to be over: He used to smoke a lot when I knew him. He no longer/still does[90]. Used to is paraphrased I had the habit of (remote past), not I have had the habit of (near past). Would needs a reference time to hang on to and so cannot start a topic; used to doesn’t and can. Further tests to distinguish between habit and disposition come from negative polarity quantifiers like any. These only occur where they can have marked scope, i.e. with weak connectives, quantifiers, modalities and negation. They disambiguate for disposition:
He would defy everybody, go everywhere and do everything (habit).
He would defy anybody, go anywhere and do anything (disposition).
Habituals come from iteratives (but not conversely) and from habitual and dispositional verbs (live, know how, be used to). Dispositions come from futures and (other) agentive modalities (Bybee 1994:159). Czech (Kucera 1981) has a suffixal iterative aspect in -va- specialised to habitual-dispositional meaning. Typical examples are
Nemci mluvivaji spatne cesky. ‘Germans tend to speak Czech badly’ (i.e. of those Germans who speak Czech, the majority speak it badly).
Rusti generalove umiravaji v mladem veku. ‘Russian generals tend to die young.’
Petr mi psaval ‘Petr used to write to me.’
Znaval jsem ho dobre. ‘I used to know him well.’
Habitual aspect is excluded from progressive contexts:
Kdyz jsem vesel do pokoje, Petr hraval na klavir. ‘When I entered (pf) the room, Peter used to play the piano’.
Budu vas cekavat zitra v sedm hodin vecer. ‘I will (usually) wait for you tomorrow at seven in the evening.’
This recalls the awkwardness of John sleeps in his office frequently today/right now. The only present time adverbials that are really felicitous with generics are of vague duration, like now or these days.
Smith (1991:40) notes a typological tendency of perfective sentences in the present tense to signify generic aspect. This tendency is future prediction turned to disposition (cf. English Mary will feed the cat).
A philosophical problem: Aristotle (Nicomachean Ethics) concludes that virtue is a habit, not a passion or a faculty, because it is deliberate and something one is praised or blamed for. I have just described habits as something one cannot help having. Are these views compatible? Perhaps, taking resolution into account. An agent’s long-term plan, once formed, binds him much like another agent. A habit takes time to form or break, but it can be done, given time and motivation. That is just what praise or blame is for.
Aspectual (phasal) verbs and adverbs deal with beginning, ending, continuing and repeating (Freed 1979, Harkness 1987:78, Löbner 1989,1990,1999, Mittwoch 1993, Snessaert 1997,1999, van Baar 1991, van der Auwera (ed.) 1991, van der Auwera 1993,1996). English phasal adverbs include already, still, yet, any more/longer, again, only, just. Phasal adverbs have a presupposition or background and an assertion or focus. They presuppose something about what went before and assert something about what holds now. The denial of a phasal adverb shares its presupposition: the denial of He is still here is He is already away, which shares the presupposition that he was here. Löbner (1989) maps the four phasal adverbs still, not yet, already, no longer onto the square of opposites formed by the four-group (s⋃¬s)^{2} of simple changes:
	still		already
	no longer	not yet

Table 13. The square of opposites for phasal adverbs: horizontal pairs are duals (still p = not already not-p), vertical pairs outer negations (no longer p = not still p, not yet p = not already p), diagonal pairs inner negations (no longer p = already not-p, not yet p = still not-p).
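At the level of assertions (setting the presuppositional component aside), the square can be verified with a toy model in which a state p holds over a half-open interval of integer times. The encoding below is an illustrative assumption, not Löbner's own formalism.

```python
# Toy model: a state p holds over the half-open interval [start, end).
# Each phasal adverb is reduced to its assertion; presuppositions
# (e.g. that the change was expected) are ignored.

def already(t, start, end):    # the change into p has occurred by t
    return t >= start

def not_yet(t, start, end):    # outer negation of already
    return t < start

def still(t, start, end):      # the change out of p has not occurred
    return t < end

def no_longer(t, start, end):  # outer negation of still
    return t >= end

# Duality checks.  The complement state not-p (after p ends) begins
# exactly where p ends; LATER stands in for its open right edge.
START, END, LATER = 3, 7, 10**9
for t in range(10):
    # still p = not already not-p   (duality)
    assert still(t, START, END) == (not already(t, END, LATER))
    # no longer p = already not-p   (inner negation)
    assert no_longer(t, START, END) == already(t, END, LATER)
    # not yet p = not already p     (outer negation)
    assert not_yet(t, START, END) == (not already(t, START, END))
```

The assertions in the loop hold for every time point, confirming that the four adverbs occupy the corners of one square generated by inner and outer negation from already alone.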
Though the mapping is revealing as far as it goes, the picture gets more complicated on a closer view (Snessaert 1998). The full picture includes that phasal adverbs are scalar adverbs (König 1977,1979,1981,1991, Krifka 199?). Scalar adverbs comment on the slope of correlation between two scales (Lai 1999). Since scalar adverbs relate two scales, they have several dualities, corresponding to inversions of the two scales. Already ‘as much as m as early as t’ has a direct dual in still ‘as much as m as late as t’, and a double dual in only (yet) ‘as little as m as late as t’. The fourth possibility, ‘as little as m as early as t’, is exemplified by anymore. In addition to scalar dualities, phasal adverbs come in tight (open) and loose (closed) variants.
Already is a scalar adverb meaning as much as x as early as now. He is already eleven can suggest he is unexpectedly undeveloped for his age, or that he turned eleven earlier than expected. (On a scale of developmental age, his data point is above the expected curve: he is this old too soon, or too old now.) He is already eleven thus means 11⋂now<t or x<11⋂now.
It is already dark either means it has become dark earlier or it will have been dark longer than expected (darkness is ‘all ready’ by now). As a temporal adverbial, already suggests that an event is early. This is really only the temporal projection of the scalar implicature above. Formalisation:
e´<t
Already is a focus adverb which can focus another item in the sentence, presupposing the rest. Already t, where the focus t is a frame adverbial, can be paraphrased with by t or as early as t. Where m is the correlate scale, already m can be paraphrased by as long as/as many times as/as much/many as m. I got up already at six means I got up at six, which was earlier than usual: up⋂6´<usual.
An implicature of already for open events is that the event has begun earlier. If darkness is here now earlier than expected, then it has begun earlier than expected. If she is asleep now then she has already fallen asleep. If it was expected to begin now, it began before now. The special case where the event extends through now can be represented by a⋂≤now. Its dual is e≤now ‘by now’. This is how Portuguese ja vou ‘I already come (pres)’ come⋂≤now translates English I am coming, and Russian Ona uzhe dva chasa tancevajet ‘She dances (pres) already two hours’ dance⋂2h⋂≤now translates English She has been dancing for two hours (in dance)⋂2h`now.
An apparently free occurrence of already is anchored too. If the reference time t is not given, it is taken to be now or then. This explains the interplay of already with English (or Finnish) finite tenses: I am already here here⋂≤now versus I was already there there⋂≤then<now. The latter type requires that the past tense has past focus (reference time) e´<now alias e⋂then<now. An indefinite or unfocused past tense e<now does not provide such an anchor.
I have already lived here for three months can mean the experience is not new to me (live⋂3months´≤now)<`r⋂now, while I have lived here for already three months live⋂3months⋂≤now`r⋂now means that the stay is getting longer every day. Already is not interchangeable with as long as here. Already, anchored to now, guarantees that the event continues through the present; the unanchored as long as does not (Johanson 1998:§8.5.2.1).[91]
Compare also Hoepelman/Rohrer’s (1981:112) Beethoven has already written 5 symphonies. Already ‘by now’ suggests symphony writing has gone on until recently and was expected to go on. Ergo Beethoven must be alive.
I am already going to do it tomorrow (now´<t)<do⋂tomorrow means the decision to do it tomorrow does not need further prompting, while I am going to do it already (as soon as) tomorrow now<do⋂tomorrow´<t implies tomorrow is an early date to do it.
If the expected time is in the future, it is news that the event holds now. By implicature, already denotes the result of a recent change from the absence of an event to its presence, specifically, the first appearance of the event. Thus The baby already walks can be said when the baby has just taken her first steps, and The light (has) already changed as soon as it has. A formalisation which covers this implicature is ¬aa?`a, which allows a recent or an immediate change.
The combination of already with a change produces red.¬red.now, in effect turning the perfective into a near perfect. This is also the definition we get for already from its interdefinability with not (not yet) or not (still not). It also has the desired relation to by now.
In other contexts, the implicature can be almost the opposite. I have already told you what to do does not implicate I told you recently, but that this is not the first time. The house is already a hundred years old does not so much stress that the anniversary came unexpectedly early, but that the house is older now than one would expect.[92]
Altogether, there are three competing implicatures of already, discernible in
fi Kun tulin kotiin, vaimoni oli jo hyvin vihainen.
‘When I came home, my wife was already very angry.’
First, she was angry earlier than expected. Second, she had been angry before my arrival. Third, she had got angry not too long ago. The first implicature is an entailment: it is what already means. The second implicature is entailed by already together with the open event type of be. Substituting got for was, we get the meaning ‘this time she got angrier than expected’. Here already comments on the degree of anger rather than its duration. Languages and contexts may choose one or another of the implicatures as the focus.