What’s the point? The role of punctuation in realising information structure in written English
© Moore. 2016
Received: 10 January 2016
Accepted: 4 May 2016
Published: 26 May 2016
The main claim of this paper is that punctuation marks, in conjunction with spaces between words, function to provide visual rather than auditory cues for information structure in written English. Information structure is defined here as the division of the flow of discourse into units, each containing a newsworthy element, in contrast to the Systemic Functional systems of Reference and Theme. A model of how these three systems interact is further supported by evidence from the historical development of reading and modern studies of the process of fluent silent reading. Reading silently does not require physical articulation, and so written text is constrained by the saccading eye rather than the need to draw breath. The silent reader uses punctuation marks as guides for saccades, focusing on the end of the clause, which provides a non-arbitrary location for New information.
Keywords: Punctuation; Information structure; Theme; Reference; Systemic Functional Linguistics; Embodiment
Punctuation has vexed many writers past and present (e.g. Lowth, 1762; Truss, 2003). Surveys of the development of punctuation (e.g. Baron, 2001; Bruthiaux, 1993) reveal the longstanding divide between those that prescribe punctuation by prosodic principle and those that promote punctuation as the route to clarifying grammatical structure. This paper attempts to find a path through this apparent impasse by proposing not only that prosody and punctuation both realise the same function – that of information structure – but that they do so in a natural relationship to the spoken and written modes of language, respectively.
The following section appraises a range of opinions on the topic of punctuation to reveal the contrast between punctuation by prosody and punctuation by grammar in written English. The next section examines the function and realisation of Information structure within Systemic Functional Linguistics (SFL), contrasting it with the systems of Reference and Theme, and culminating in a model of how the three systems contribute to written text. The paper then attempts to provide explanations for the difference in the realisation of Information structure between spoken and written English, investigating firstly important steps in the historical development of written English, and then examining current understanding of the process of silent reading. The discussion aims to reveal how texts intended to be read aloud are more likely to be punctuated for prosody, while texts that will remain unspoken are likely to be punctuated for grammar, but that punctuation realises the function of information structure in written texts by exploiting the potential of the mode of language and maintaining a natural relationship between function and realisation. The final section discusses what this approach to punctuation may imply for written English.
Punctuating for grammar or punctuating for speech?
Advice on punctuating English often advocates either a syntactic or a prosodic approach. Prescriptivists of the syntactic school insist that punctuation derives from “logical” rules of grammar, while those of the prosodic school encourage writers to read their sentences aloud, listening to intonation and pausing in order to identify the correct positions for punctuation marks (Bruthiaux, 1993). However, discussions and empirical studies of these different viewpoints rarely differentiate consistently between texts intended to be read aloud and those intended to be read in silence.
In addition to the attention of grammarians and pedagogues, punctuation has attracted interest from psycholinguists. Fodor (2002), for instance, views prosody as being imposed onto written sentences and urges psycholinguists to pay more attention to it, suggesting that there is an implicit prosody in all written sentences that frequently aids disambiguation. Similarly, Hill and Murray (2000) highlight the prosodic role of commas in disambiguation, particularly in written relative clauses. Cohen et al. (2001) describe punctuation as the ‘visual analogue’ (p.80) of prosody in experiments that attempt to trace the impact of punctuation and prosody on sentence comprehension. However, these experiments use written sentences that are then read aloud in the prosodic condition, rather than recordings of spontaneous conversation, and so do not reflect all language use; they cannot reliably comment on typical spontaneous spoken language, which is characterised by at least as many grammatically incomplete sequences, false starts and sentence fragments as fully syntactic sequences (Carter and McCarthy, 2006). Comparing punctuation to its intonational equivalent, particularly when most written English will never be spoken, may provide “a theoretically uninteresting account of what is in any event a not very good correlation” (Nunberg, 1990, p.15).
Halliday (1989) conforms to the view that there are two influences on punctuation choice: “punctuation according to grammar, and punctuation according to phonology” (p.37). While it may be possible to identify these two tendencies, this paper argues that the difference between them rests in the intended mode of reading (in silence or aloud), and that both tendencies realise the same function of information structure. Punctuation that responds to phonology results in a text that transposes easily to spoken English – a text that is written to be spoken – while punctuation that responds to grammar results in a text that is not intended to be read aloud – one that is written to be read silently (Gregory and Carroll, 1978). A written-to-be-spoken text works best when it reflects patterns of speech; typical examples include scripts for plays and newscasts. In contrast, the units of a written-to-be-read text reflect grammatical patterns suited to the reading eye. Academic writing, typified by sentences containing multiple clauses of complex nominal groups, serves as the archetype of text that is written-to-be-read. This paper takes the cline from texts-for-speaking to texts-for-reading as key to the debate over punctuation. There is, however, common ground between the two modes. We shall take, from the social-semiotic theory of Systemic Functional Linguistics (SFL), the notion of Information Structure as common to both types of text. The following section describes this concept in detail, and later sections explore how it may be used to account for the various descriptions of punctuation.
Information structure, as originally described (Halliday, 1967a), functions to divide the flow of discourse into units, each containing an obligatory New element and optional not-New (or “Given”) elements. New information is defined as that which the speaker treats as newsworthy (Fries, 2000), i.e. what the speaker wants his/her listener to pay attention to. In speech, Information structure is realised by intonation, so that one intonation contour is equivalent to one unit of information (Halliday, 1967a; 1976; Halliday and Matthiessen, 2014). Within the intonation contour, the tonic foot realises New information, and Given information is simply the remainder or residue of the information unit. There is, then, a non-arbitrary, natural relationship between the function of what the speaker wants the listener to focus on and its realisation: it is the aurally most prominent part of the message.
//a */nurse for a baby//
//about */twenty years old//
//a */nurse for a baby about twenty years old//
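In this transcription convention, ‘//’ delimits the tone unit and ‘*/’ marks the tonic foot. As an illustrative sketch only (the function and regular expression below are mine, not part of the SFL literature), the notation can be unpacked mechanically to recover the stretch realising New information:

```python
import re

def tonic_focus(tone_unit: str) -> str:
    """Return the tonic foot of a tone unit, the locus of New information.

    Assumes Halliday's transcription conventions: '//' delimits the
    unit, '/' marks a foot boundary, and '*/' marks the tonic foot.
    """
    body = tone_unit.strip().strip("/")      # drop the '//' delimiters
    match = re.search(r"\*/([^/]+)", body)   # capture up to the next foot boundary
    if match is None:
        raise ValueError("no tonic foot ('*/') in unit")
    return match.group(1).strip()

# The contours above place the tonic, and hence New, differently:
print(tonic_focus("//a */nurse for a baby//"))      # nurse for a baby
print(tonic_focus("//about */twenty years old//"))  # twenty years old
```

The sketch shows only that the realisation of New is recoverable from the notation; the choice of where to place the tonic remains, of course, the speaker’s.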
Information Structure in speech functions independently of grammatical structure, is realised prosodically rather than grammatically, and is probably processed simultaneously with lexico-grammar (Deacon, 1997). Although there is an unmarked relationship between the intonation contour and the grammatical clause, it is not a defining one (Halliday, 1967a). In speech, an intonation contour operates alongside clausal structure: there may be multiple information units in a single clause, or one unit may extend over two or more clauses. For instance, the exclamation “Fire!” when used to clear a building realises New but not Given information, although it does not conform to clausal structure. An utterance in spoken English may lack grammatical structure but it cannot lack information structure; every utterance in English will have a tonic foot and so will realise New information. That is, the speaker will always select one part of each unit of information and make it more prominent than any other, placing a demand on the listener to focus on that part of the message.
Information structure and reference
It is vital that we identify a function for information structure that is independent of other linguistic functions. In the tradition of SFL, New information does not function within referential systems, but is information being treated by the speaker as newsworthy (Fries, 2000). While Information and Reference are both derived through speaker choice, their meanings and realisations differ. Information structure focuses a listener’s attention on one element of a message through the tonic foot, while Reference assigns status of contextual familiarity to each nominal structure (Martin, 1992) through grammatical markers such as definite and indefinite articles. Perhaps the most telling difference between a referential and prosodic definition of new information is that referentially ‘new’ (or ‘fresh’ or Presented) items in English can only be nominal groups, whereas any item within an intonation contour can be the prosodic New information.
Similarly, Lambrecht’s (1994) attempt to combine cognitive, syntactic and pragmatic factors into a model that explains ordering in a sentence uses the concept of shared and not-shared knowledge to ensure that “information structure belongs to sentence grammar” (p.207). Steedman (2000), Jackendoff (2002) and Vallduvi (1993; Vallduvi and Engdahl, 1996) all define information structure in referential terms, despite the many differences between their approaches.
By contrast to the approaches of Chafe, Givón and Prince, I have argued for separating sharply the distinction between Given and New information in the information unit, as opposed to the distinction between presenting and presuming reference in the nominal group. (Fries 2000, p.103)
Within Systemic Functional theory, Reference and Information in English are distinct systems in the textual metafunction. Partially revising Halliday and Hasan’s (1976) classic treatment of the role of reference in Cohesion for discourse semantics, Martin (1992) distinguishes between the systems of Presenting (typically indefinite) and Presuming (typically definite) reference within the function of Participant identification. The separate function of Participant tracking describes the phoric nature of a tied relationship, encompassing exophora (within a broader context but outside the co-text), endophora (within text), cataphora (succeeding tie beyond group), esphora (succeeding tie within group) and anaphora (preceding tie). Participant identification and Participant tracking encompass reference in all its forms, including the use of repetition, definite and indefinite articles, pronominalisation, comparison, semantic relations such as hyponymy and meronymy, and unique reference (Moore, 2008a), while leaving the function of information structure distinct. Although there is an unmarked relationship between New information and Presenting reference (Martin, 1992), this relationship is not defining; New information is most frequently realised by Presenting reference, but it can also be realised by Presuming reference and Given information can be realised by Presenting reference. New information, i.e. an element highlighted by the speaker as newsworthy, is functionally distinct from referentially new information, i.e. not previously mentioned, because reference is typically realised by deictic markers of definite and indefinite reference, while information structure in spoken English is realised by intonation.
//4. Wally’s got a */pocket//
//4.^ a/breast */pocket//
//4.^ in his */shirt//
//4.^ to/slip the */boarding cards/into//
//.1. then they’re/easily/pulled */out//
//.1. otherwise I/just have to/hold them in my */hand//
//.1.^ because you/don’t want to go/fumbling for */those when you’re/going/through//
(Halliday and Greaves, 2008, p.212)
Information structure and theme
A second function which must be distinguished from Information structure is Theme. Halliday (1967b, p.212) characterises the function of Theme as “the point of departure for the clause as a message” – which some might call the ‘ground’ of the clause – and it is developed in the remainder of the clause, which is referred to as Rheme.
Example of multiple Theme [table not reproduced; only the Rheme “can’t make it to the party tonight” survives]
D://em........ *//1 yes so//4. ^ the problem to*/day/^ was....//
P://well it’s just that I’ve…//4. ^ I/actually */feel like I’ve//
//.1+ got a */brick//.1+ on my */chest//
D: *//1 right//
P://^ that is … em…//
D://-2^ em.. con/stricting your */chest//
(Halliday and Greaves 2008, p.188)
The textual metafunction and information structure
Information, Reference and Theme are the three principal systems in the textual metafunction that operate within the clause (Martin, 1992; Fries, 2000; 2002). Each system operates independently, although there are clearly marked and unmarked patterns of correlation. For instance, in speech, the nominal group receiving indefinite (Presenting) reference in clause-final position (Rheme) is likely to receive the tonic foot (New information) to provide an unmarked combination. Significant rhetorical advantage can be gained from skilful manipulation of these three systems in producing text (Lassen, 2004; Moore, 2006).
The textual metafunction functions to create “relevance to context” (Halliday and Matthiessen, 2014, p.85). Focussing first on ‘context’, the system of Reference locates endophoric (internal to the text) or exophoric (external) meanings (Martin, 1992) to develop cohesion within the co-text and coherence with the context (Halliday and Hasan, 1985). Through Reference, the textual metafunction ties experiential and interpersonal meanings to a particular co-text and context (of culture and of situation); without this contextualising function, those meanings would remain abstract. If we focus on ‘relevance’, the textual metafunction creates relevance by adding prominence through assigning relative value to parts of the clause (Matthiessen, 1992). The Theme, for instance, functions to select meaning(s) from the clause (Hannay & Martínez Caro, 2008) and to assign them thematic prominence, which has consequences for the thematic development of the text (Fries, 1992), in contrast to meanings in Rheme. At the same time, New information functions to highlight specific meanings, directing the listener’s or reader’s attention to those meanings and presenting them as newsworthy (Fries, 2002), in contrast to ‘Given’, non-newsworthy meanings. Importantly, assigning relevance through thematic and informational prominence contributes to the construal of the co-text and context of the text. It is only through the dual processes of positioning experiential and interpersonal meanings in relation to context and giving them relative value that those meanings can operate in text. We can propose, then, that the textual metafunction instantiates experiential and interpersonal meanings.
As discourse progresses, the system of Information structures (i.e. divides or organises) discourse into units, and New information functions to allow the speaker to choose meanings within the unit of information (that may or may not be presented as referentially new to the discourse) to be highlighted as newsworthy within the flow of discourse. We can now characterise New information as the contextually-related selection of one part of a message for the purpose of speaker-directed focus, adding the value of prominence through the naturally corresponding realisation of prominence by the tonic foot in speech in English. The problem with this description, however, is that it can only account for speech. A corresponding description of Information structure in written English is required.
Information structure and written English
There are significant difficulties involved in identifying the written equivalent of spoken Information structure in English, largely because intonation is not realised in written English. It may seem natural to assume that the same model of information structure applies from spoken to written English: after all, anything that is written down can be spoken aloud, and so reading a sentence aloud should reveal its information structure. This is the approach taken by Davies (1989) in explaining the role of prosody in written language. In various empirical studies, Davies (e.g. 1994a, b) has demonstrated that most readers assign the same prosody to a text, revealing an apparently ‘inherent’ information structure. Where there is variation in assigning the tonic foot, this is often explained by contextual factors. For instance, a pupil and an experienced teacher, when reading aloud a transcript from an earlier lesson, will place the tonic foot on different lexico-grammatical items because of the influence of their views of the contexts of situation and culture (Halliday and Hasan, 1985); i.e. they make different assumptions.
Davies’ material is typically spoken English that has been written down – transcripts of radio commentary or classroom talk, poetry, or scripts from plays or newscasts – rather than written text that would tend to remain unspoken, such as most adult fiction or academic prose. Text selection is unlikely to be a trivial matter. If the text is a transcript of unscripted speech, or is written to reflect patterns of speech, it will be constructed with units of information that match patterns of spoken English. This is not true of all written English. While the ability to recreate tonicity is a long-standing function of written English, developments in the system of writing have allowed units of information to expand beyond the limitations of lung capacity (see Developments in the understanding of neurophysiological processes of reading below). What Davies’ studies have not shown is a strong correlation in readers’ tonic placement for more ‘written-to-be-read’ texts. Any difference between types of text matters, because most written texts in the modern world are far more likely to remain unspoken than to be read aloud.
The paragraph was originally completed with sentence iv below, but each of the following final sentences is grammatically possible and appears to construe the same ideational meanings:
Actually the tendency is subtler than this. It is true that the so-called ‘recitation script’ of a closed IRF exchange (initiation-response-feedback) remains dominant; but in many British and American primary/elementary schools, another script is also common: sequences of ostensibly open questions that stem from a desire to avoid overt didacticism, are unfocused and unchallenging, and are coupled with habitual and eventually phatic praise rather than meaningful feedback. (Alexander, 2008, p.93)
i. So we have these two deeply seated pedagogical habits to contend with: recitation and pseudo-enquiry.
ii. Two deeply seated pedagogical habits that we have to contend with, therefore, are recitation and pseudo-enquiry.
iii. So recitation and pseudo-enquiry are two deeply seated pedagogical habits that we have to contend with.
iv. So we have two deeply seated pedagogical habits to contend with: recitation and pseudo-enquiry.
The fundamental function of Information structure is to divide the flow of discourse into manageable units. It seems almost trite, then, to point out that punctuation functions to divide written discourse into manageable units. The parallels between spoken and written modes need not end there, though, as the role and realisation of New information in written English also need to be identified. To date, there has been little attempt to explain how information structure may have developed from a linguistic function realised in speech by highly-flexible intonation into one realised in writing by relatively-rigid sequence. The remainder of this paper attempts to provide historical, physiological and neurological explanations as to how the realisation of the same linguistic function has diverged in different modes, and then to relate that function to the issue of punctuation.
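The claim that punctuation divides written discourse into units, with the end of the unit as the default position for New information, can be made concrete in a toy sketch. The code below is mine, offered purely as illustration under the assumption that punctuation marks serve as the visual boundaries of information units; it is not an analysis tool from the literature:

```python
import re

def information_units(text: str) -> list[str]:
    """Split written text into punctuation-bounded units."""
    # Treat commas, semicolons, colons, dashes and sentence-final
    # marks as the visual boundaries of units of information.
    parts = re.split(r"[,;:.!?]|\u2013|\u2014", text)
    return [p.strip() for p in parts if p.strip()]

def unit_final_words(text: str) -> list[str]:
    """Return the final word of each unit: the default position
    a silent reader's saccade targets for New information."""
    return [unit.split()[-1] for unit in information_units(text)]

sentence = ("So we have these two deeply seated pedagogical habits "
            "to contend with: recitation and pseudo-enquiry.")
print(information_units(sentence))
print(unit_final_words(sentence))   # ['with', 'pseudo-enquiry']
```

The sketch only locates unit boundaries; deciding which element within each unit is actually newsworthy remains, as the paper argues, a matter of writer choice.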
Developments in written English
To investigate the differences between the realisation of Information structure in written and spoken modes of English, this section will first examine key aspects of the historical development of the written form and then examine how this has changed the neurophysiological processes involved in reading, resulting in the modern practice of reading as an individual, silent activity. The significance of silent reading for the structure of the information unit and for punctuation will be examined in the final section.
Without written language, certain achievements are not possible in society. Written language provides the opportunity for record-keeping (Briggs, 2000) and enables bureaucracies to evolve. Written language should not be viewed as spoken language written down, but as an extra set of semiotic resources to perform functions that are unavailable to a non-recorded language: “Writing and speaking are not just alternative ways of doing things; rather they are ways of doing different things.” (Halliday, 1989, p.xv). Developments in writing technology occur under pressure from the demand for certain functions. However, once writing has changed, it may enable further changes in society. As writing systems develop, so the functions that they can achieve develop, enabling further demands to be placed on the system, and so on.
The introduction of spaces into written text in Europe from about the 9th century must have offered an advantage to have become as ubiquitous as it is today. The ancient Greeks and Romans knew about spaces in text, but the innovation did not spread because there was no perceived advantage. That advantage, according to Saenger (1982; 1997), is fast, silent reading, which would have been superfluous in social contexts where readers were considered little more than reciters of key texts (Svenbro, 1993). It was only when society demanded new functions from written language that the available technology was adopted (Saenger, 1997). The apparently trivial technological innovation of adding spaces to make the category of the word material (Linell, 2005) resulted in a radically new approach to the standard way of reading: reading in silence. Using first-hand witness accounts and contemporary descriptions of reading practices, Saenger (1997) deduces that with the introduction of spaces from Ireland, through England to France and beyond, scribes were able for the first time to make copies of texts without sounding out the words.
The introduction of spaces allowed readers to scan the text with their eyes more quickly than they could speak, and aided the change in the social significance of the act of reading (Johnson, 2000; Svenbro, 1993). The net result was a vastly superior reading speed which preceded, and possibly precipitated, the rapid expansion of literacy (Saenger, 1997) and major developments in the writing of vernaculars. A similar increase in reading speed is found when spaces are incorporated into modern unspaced scripts (see Punctuation and information structure).
Without spaces to use for guideposts, the ancient reader needed more than twice the normal quantity of fixations and saccades per line of printed text. The reader of unseparated text also required a quantity of ocular regressions for which there is no parallel under modern reading conditions. (Saenger, 1997, p.7)
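Saenger’s point about extra fixations and regressions can be illustrated computationally. In the sketch below (the toy lexicon and function are my own invention, not drawn from the reading literature), an unspaced string must be actively segmented against a lexicon and may admit several readings, whereas spaced text hands the reader its word boundaries for free:

```python
def segmentations(s, lexicon):
    """All ways of breaking an unspaced string into known words."""
    if not s:
        return [[]]
    results = []
    for i in range(1, len(s) + 1):
        head = s[:i]
        if head in lexicon:                      # candidate word boundary
            for tail in segmentations(s[i:], lexicon):
                results.append([head] + tail)
    return results

# A deliberately ambiguous string: every candidate boundary must be
# tested, which is the work that cost the ancient reader extra
# fixations and regressions.
lexicon = {"god", "is", "now", "here", "nowhere"}
print(segmentations("godisnowhere", lexicon))
# [['god', 'is', 'now', 'here'], ['god', 'is', 'nowhere']]
```

The exponential worst case of this search is one way of seeing why removing it, by materialising the word with a space, produced such a dramatic gain in reading speed.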
When written language is spoken aloud, as is likely for scriptura continua, there is no difference between written and spoken language in a language that uses tone to realise Information Structure, because the tonic foot within a tone unit must be realised in speech. When reading becomes silent, however, the role and realisation of information structure becomes unclear. If a unit of information is realised by a tone unit, and New information is realised by the tonic foot, we must consider what happens in languages such as English when intonation is neither articulated nor coded in the written form. It may be that the patterns of spoken language are ‘heard’ in written language, and so it is fairly trivial to derive information from intonation, just as in speech. It may be, however, that we do not need to hear the sound of the language in order to read fluently. The following section examines this issue using evidence from studies of physiological, psychological and neurological processes.
Developments in the understanding of neurophysiological processes of reading
It may be that there is no difference between information structure in written and spoken English. That is, readers experience an internal voice that sounds out what is on the page (see Chafe (1988) for an earlier discussion). If this is the case, then the system for identifying information units and New information in both reading and speaking could be the same because they use orthographic data for phonological cues. This section reviews evidence from studies of the brain to identify the role for internal voicing while reading.
Psychological studies that evaluate the role of phonological and orthographic data when reading report conflicting conclusions. In ‘priming’ studies, Lukatela et al. (2001) conclude that reading depends on a sub-phonemic level of processing. They support the view that the processing of written words takes two simultaneous routes – one directly to a representation of lexical meanings, and the other via a phonemic path to the same destination. Perfetti and Bolger (2004) use functional neuroimaging to argue that visual, phonological and semantic processes must be operating on written words. There is, then, some psychological evidence to suggest that when reading we may be able to hear words, and therefore that intonation might operate to realise information structure in both written and spoken English. However, while possible, it may be that it is not necessary to hear words when reading. Dehaene et al. (2005) review neuroimaging results to argue that fluent readers recognise combinations of letters as patterns, having learned the likelihood of those patterns. Peereman, Content and Bonin conclude that their experiments “cast doubts on the existence of reciprocal constraints between orthography and phonology at prelexical stages of processing.” (1998, p.171). That is, they see little evidence for the necessity of a phonological level of processing in reading. If so, information structure must have a non-phonological realisation in writing.
The disagreement here may be due to the tools being used. Some tools used to measure brain activity are highly localised; they view, in detail, what happens in one particular region of the brain at a precise time. It is unsurprising, therefore, that these studies produce results detailing local, sequentially executed processes. What is required, however, is an amodal perspective that recognises the distributed nature of brain function. A view of the brain that posits localised functions without considering the complex, coordinated, interconnected structure of the brain is “indefensible” (Edelman, 2004, p.30).
MEG (magnetoencephalography), which can trace the possible pathways that language takes through the brain, reveals that reading is associated with a wide variety of areas and processes: reading stimulates areas of the brain associated with both general visual processing and language production, although the technique cannot specify sequence or causation (Salmelin and Kujala, 2006). One interpretation of these findings would appear to support the view that we hear when we read. This could be considered a rather one-dimensional description, however, based on a view of the brain as operating sequentially, with one location being associated with one function. Alternatively, an amodal, embodied perspective may be able to incorporate these results into a more satisfactory framework.
Recent research suggests that the process of reading activates those areas of the brain used to initiate articulation and those areas associated with aural perception of language. That is to say, while some areas of the brain are particularly active while reading, other areas do not shut down – there is a simulation, echoing or mirroring effect in other parts of the brain (Barsalou, 2008), including those related to speaking and listening. Learning (including language learning) is embodied and takes place through the associations that neural pathways create with a wide range of neural and physiological processes, and when learned behaviour is evoked, all of its associations appear to be activated and are probably strengthened. When encountering a particular phrase through reading or listening, for example, the articulatory process required to pronounce the phrase is simulated in the brain, even if the physical articulation is not enacted. In this amodal model of the brain, the brain ‘re-enacts’ the processes and associations experienced during previous encounters with language, behaviour or sensations (Barsalou, 2008).
It is this effect that probably produces the experience of ‘hearing’ what we are reading, as much as, or more than, any conscious effort to sound out the written language in our head. The effect is so great that the type of language that is ‘experienced’ (action words, physical objects, abstractions etc.) probably has a more significant effect on brain activity than the mode, or channel, through which it is experienced (Tomasino et al., 2007). Action-perception loops appear to be involved in all languaging processes. D’Ausilio et al. (2009) suggest that a listener implements the motor programs required to produce the speech sounds that they hear. The same effect probably simulates the sounds of written language while reading (Pulvermüller et al., 2014).
The mental simulation of a motor act that is not accompanied by an overt body movement … corresponds to a process by which the brain activates a motor plan and monitors its unfolding through internal feed forward models, while holding back (overt) motoneuronal output. (Tomasino et al. 2007 p.T128)
The psychological research reviewed above suggests it may be possible to read without hearing the words in our head. On the other hand, recent neurophysiological studies suggest that, while reading, we cannot stop our brains from simulating the processes involved in listening to, speaking and acting out what we read. (That may also explain why we derive so much pleasure from reading.) What is most significant, however, is that the constraints of the articulatory system are inconsequential because there is no physical reaction – any motor-neuronal activity remains in the brain. The action-perception loop takes place within ‘cranial time’ at the speed of neuro-chemical transfer, and is thus unconstrained by the limitations of motor processes. Regardless of whether we experience ‘hearing’ what we read, we can ignore the physical constraints of articulation via the respiratory system, exploiting instead the cognitive potential, or affordances, of the visual system operating on spaced, punctuated text. The importance of the neurological location of the action-perception loop for the issue of information structure and punctuation is reviewed in the following section.
Punctuation and information structure
This section begins by summarising the main points in the argument thus far, then discusses the details and implications of this perspective, concluding with a sample text. An intonation contour can define a unit of discourse in spoken English. This has been identified as a unit of Information, with an obligatory New, realised in speech by the tonic foot, and an optional residue, Given (Halliday, 1967a) or not-New. New information functions to highlight one element in the message as newsworthy (Fries, 2000). It seems unlikely that this important function has no equivalent in written English, which retains the functions of Reference and Theme using the same grammatical realisations; but written English has not evolved a system to realise intonation, and so it requires a different realisation for information structure. The question raised in this paper centres on what replaces the tone unit and the tonic foot in written English to realise information structure and New information, without replicating the functions of Reference and Theme.
One clue to the distinction between spoken and written information units is to be found in the historical development of written text. The introduction of spaces and punctuation marks into written Greek, Latin and later vernaculars such as English liberated them from the demand to be articulated (Saenger, 1997), resulting in a quantitative and qualitative difference between reading spaced and unspaced text in an alphabetic system (Rayner et al. 1998; Winskel et al. 2009). In studies of readers of Thai, which, like scriptura continua, is an unspaced script, the tone markings make a significant contribution to saccading, reading speed and accuracy in silent reading (Winskel, 2011), supporting the view that the written script ‘satisfices’ the functional demands placed on it (Seidenberg, 2011). As expected from historical experience with Latin, introducing spaces into modern Thai improves reading speed for readers who are familiar with word-spacing in written text (Winskel et al. 2009) – a phenomenon also identified in Japanese syllabic script (Sainio et al., 2007). Without spaces in written English, meaning is far easier to derive when the text is read aloud (Rayner et al. 1998), producing spoken patterns of intonation. That is, intonation contours in unspaced written text match intonation contours in speech, producing equivalent units of information in speech and writing. Spaced text facilitates fast, silent reading by enabling the eye to saccade more effectively (Rayner, 1998; Perea and Acha, 2009). While spaced text can still be spoken, it can also be read silently, bypassing the physical constraints of the respiratory system and the physical articulation of air expelled from the lungs, since the action-perception loops demanded by the silent reading process take place at the vastly superior speed of neurochemical transfer (Vigneau et al., 2006).
This releases text that is written-to-be-read from the same physical constraints as spoken text and text that is written-to-be-spoken, and so the realisation of information structure in written text does not need to match its realisation in spoken text. This may help to explain why written English has not evolved an orthographic system to represent intonation and the tonic foot; they are functionally superfluous.
With spaces between words providing suitable units for saccades, punctuation can contribute to where a reader’s eyes fixate. The eye typically saccades to points preceding the punctuation mark (Rayner et al. 2000); Pynte and Kennedy (2007) record a significantly lengthened fixation immediately preceding punctuation marks. In comparison to versions of the same text presented without commas, fixation time is longer with punctuation marks and overall reading speed is faster (Hirotani et al. 2006). These three studies demonstrate how the silent reader’s eyes easily saccade to fixation points prior to punctuation marks, thereby improving the readability of a text. That is, it seems highly likely that a writer can focus a reader’s attention on whatever is placed before a punctuation mark. The visibly distinct position prior to a punctuation mark therefore seems a ‘natural’ position for the prominent, or newsworthy, item in a written information unit, in the same way that it is ‘natural’ to place the newsworthy item in spoken English at the most audibly prominent point.
It follows from this that similar features of written text, such as high-frequency conjunctive adjuncts, which are typically short words, provide points where the saccading eye knows not to focus; during a saccade, our eyes may recognise conjunctions, as they do punctuation marks, as points to ignore in order to see what precedes them. These easily-recognised marks may have relatively little ideational meaning of their own, but they point the reader to the preceding units, which carry the textual value of an informational culmination in prominence (Matthiessen, 1995b; Fries, 2000). Empirical research is required to support this hypothesis. To my knowledge, reading studies such as those described above, designed to measure the effects of these words on saccades, fixations and reading speed, have yet to be carried out. The effects on reading of other graphological and typographical means of drawing attention, such as italicisation, emboldening, underlining, capitalisation and colourisation, also need further research.
Even if a reader can reproduce tonicity when ‘hearing’ written text (Barsalou, 2008), the demand to draw breath is removed in the action-perception loop inside the reader’s head (Vigneau et al., 2006). Once this limitation on spoken information structure is removed, the alternative potentials and restrictions that apply to written information structure need to be identified. Because a silent reader does not have to articulate the sounds, even if they are sounding out the words in their head, they are limited instead by the constraints of the visual system, only being able to look as far ahead as the eye can saccade. Although studies have not directly compared the potential and constraints of the visual versus the articulatory system for information structure, related research suggests that a visual information unit typically extends further than a respiratory one. Chafe (1988) suggests that, for the same written text, speakers are limited to about 5.7 words per intonation contour as opposed to 8.9 words in a punctuation unit – an increase of more than 50%. There appears to be a need for more empirical studies that directly compare these two measures, although the type of text is not a trivial matter here, and reading a written-to-be-read text aloud can no longer be viewed as typical of spoken English.
Different realisations for spoken and written information structure produce different reflexes in their variation. Within the spoken mode, emphasis can be placed on any part of an utterance by adding a tone unit and making one element the tonic foot in order to make it newsworthy. Without unusual marked choices, such as an italic or bold typeface, this highly variable emphasis is unavailable to a writer. Instead, grammatical strategies that are available in the spoken mode but are more frequently employed in written registers of English redistribute elements within the clause as a response to the demands of written information structure (Biber et al., 1999). That is, elements are redistributed in a clause to appear before a punctuation mark. For instance, passive voice, often cited as indexical of academic text (e.g. Banks, 1991; Tarone et al., 1998), allows re-ordering of a clause to place the process or the agent of a transitive process in clause-final New position. Similarly, special structures, such as cleft and pseudo-cleft clauses, intentionally disrupt the unmarked order of a clause so that elements that would ordinarily be at the start of a written clause, and considered to be the Theme, are instead placed in clause-final position where they are informationally more salient (Herriman, 2004). Finally, in exceptional circumstances, if a writer is unable to focus attention on newsworthy items at the end of the clause using grammatical strategies, graphological strategies, such as emboldening, underlining and italicising, offer the opportunity for marked information structure, drawing attention to a marked position for New information. It has even been proposed that clauses are constructed so that the newsworthy item is chosen first, but placed last (Hannay and Martínez Caro, 2008).
To recap, then, Information structure in spoken English is realised by intonation, with New information being realised by the tonic foot (Halliday and Matthiessen, 2014). Written English does not have a graphological equivalent to intonation, but as Theme and Reference both operate consistently across written and spoken modes (Martin, 1992), it seems most likely that Information also operates in both modes. Through an investigation of the history of written English, it seems that the introduction of spaces into Latin and later vernaculars allowed written English to be read silently far more easily and extensively (Saenger, 1997), releasing the normal realisation of information structure, intonation, from the physiological constraints of the respiratory system. Written, spaced English can be read faster than it is spoken, partly because the visual system has a larger word span than the respiratory system (Chafe, 1988). Reading is also faster than speaking because it operates within action-perception loops (Vigneau et al., 2006) and because the eye saccades, jumping quickly over unimportant and predictable parts of a clause and fixating on key items. The information for what is key is provided by peripheral vision (Rayner, 1998). When reading English, the saccading eye can easily recognise punctuation marks, and so can easily ignore them and fixate on the preceding position for New information (Rayner et al. 2000; Pynte and Kennedy, 2007). This corresponds to the proposed location of New information as Point (Martin, 1992), Culmination (Matthiessen, 1995b) and N-Rheme (Fries, 1992). In this way, the function of information structure – focussing the interlocutor’s attention on what is newsworthy (Fries, 2000) – is maintained across both modes while the realisation maintains a non-arbitrary, natural relationship with the physiological affordances of each mode.
Thus, the limits set by the respiratory system in spoken English and by the length of a saccade in written English define the units available in the different modes. This paper takes the position that, in both modes, the unit they permit can be called a unit of Information.
Security measures must be incorporated into computer systems whenever they are potential targets for malicious or mischievous attacks. This is especially so for systems that handle financial transactions or confidential, classified or other information whose secrecy and integrity are critical. In Figure 7.1, we summarize the evolution of security needs in computer systems since they first arose with the advent of shared data in multi-user timesharing systems of the 1960s and 70s. Today the advent of wide-area, open distributed systems has resulted in a wide range of security issues. (Coulouris et al. 2001. p.252–3)
The text above develops through the following Themes: Security measures; whenever they; This; that; whose; In Figure 7.1, we; since they; Today the advent of wide-area, open distributed systems. The model described in this paper predicts that New information will be found at the end of each clause, resulting in the following New information: computer systems; malicious or mischievous attacks; systems; critical; computer systems; the 1960s and 70s; and security issues. These represent the choices that the writer made in making these clause elements newsworthy.
The first Theme, Security measures, uses a marked choice of Presenting reference. While it provides the grounding for the clause and the paragraph, it might also be considered more of a newsworthy point than the more taxonomically general computer systems Presented in New position in the original. The second clause follows an unmarked pattern, with the adjectival group critical considered most newsworthy. The third clause complex begins with potentially two Themes; the exclusive (Halliday and Matthiessen, 2014) use of exophoric we as Topical Theme is preceded by In Figure 7.1. I would argue that just as tone units allow for multiple information units in a spoken grammatical clause, so do punctuation marks allow for multiple units of information in the written grammatical clause. Thus, the Presented Figure 7.1 has its own newsworthiness and In is the ‘not-New’ (or Given) of the group-length information unit. The New information for the clause is the nominal group the evolution of security needs in computer systems, which realises Presuming reference, but this is esphoric (within-group) and culminates in the Presented computer systems. The following clause is hypotactically combined using the textually unmarked thematic since, with the unmarked Presuming they as Topical Theme. This clause culminates in the homophorically Presuming prepositional group of the 1960s and 70s. The final clause starts with a Textual Theme and culminates in Presenting reference in New. However, the Presuming reference in the first nominal group is esphoric (within-group) to a Presented noun (wide-area, open distributed systems) and could also be a candidate for New information.
Computer systems must incorporate security measures whenever they are potential targets for malicious or mischievous attacks. This is especially so for systems that handle financial transactions or confidential, classified or other information whose secrecy and integrity are critical. We summarize the evolution of security needs in computer systems in Figure 7.1, since they first arose in the 1960s and 70s with the advent of shared data in multi-user timesharing systems. Today a wide range of security issues have resulted from the advent of wide-area, open distributed systems. (adapted from Coulouris et al. 2001. p.252–3)
Security measures must be incorporated into computer systems whenever they are potential targets for attacks that are malicious or mischievous. This is especially so for systems that handle confidential, classified or other information whose secrecy and integrity are critical or financial transactions. In Figure 7.1 the evolution of computer system security needs is summarized since the advent of multi-user timesharing systems with shared data first brought them about in the 1960s and 70s. Today the advent of wide-area, open distributed systems has resulted in a wide range of security issues. (adapted from Coulouris et al. 2001. p.252–3)
An earlier study (Moore, 2008b) provided some evidence that readers not only recognise these changes in text flow and readability, but also take significantly less time to read texts that follow the unmarked pattern. However, more research, especially studies using eye-tracking technology, is required to better understand these initial findings.
By proposing that punctuation functions to realise information structure in written English, this paper enables the function of information structure to remain the same in spoken and written English while retaining a natural non-arbitrary relationship between realisation and function in both modes. The realisation of information structure differs in the different modes of speaking and reading as a result of the different physiological processes involved. Whereas spoken units of information respond to the potential and constraints of the articulatory system, written units of information respond to the potential and constraints of the visual system. When written English is read aloud, it once again becomes subject to the constraints of spoken English. Some registers of written English are better suited to this process of transliteration, and some registers, such as play and news scripts, samples of direct speech in prose, and sermons and speeches, aim to conform to the patterns of spoken English. Other registers of written English, however, are not designed to be read aloud. Readers of legal documents, much academic text, or tax return forms can experience great difficulty articulating clause complexes that are designed for the saccading eye. A survey of Matthiessen’s (2015) taxonomy of register will reveal registers that are more likely to be written-to-be-read, written-to-be-spoken, spoken and transcribed, spoken and re-worked, and so on, to produce units of information with predictable patterns.
Intonation provides a suitable unit of information for the spoken mode by facilitating distinctions in the flow of sound, with the tonic foot realising the focus of information, or New, through the natural association of pitch change with auditory focus. Punctuation provides a suitable unit of information for the written mode by facilitating the saccade of the reading eye, with the position preceding punctuation (or a conjunction) realising the focus of information, or New, through the natural association of easily-identifiable fixation points with visual focus. It is a mistake to confuse the two modes of realisation. The units of Information in speech and in reading do not need to be the same, as they both have a non-arbitrary, natural relationship to the associated physical and cognitive constraints and potential affordances of independent neurophysiological systems. Final position in a clause has long been recognised as the default realisation of New information in written English (Fries, 1992; Matthiessen 1995b), but this was only made possible as a result of spaces and punctuation marks disconnecting the saccading eye of written English from the articulatory organs of spoken English, quite apart from the apparent sensation of hearing written English or the ability to render written text as spoken English. While intonation in spoken English provides flexibility for the speaker to create newsworthy items in almost any position, written English provides considerable flexibility through structures that re-organise the clause to allow newsworthy items to appear before punctuation marks. It is because of this single function, information structure, operating in written English that we can describe punctuation for grammar and punctuation for prosody, or punctuation for texts written to be read and punctuation for texts written to be spoken, respectively.
In an optimal English text that is written to be spoken, the units of information in the written text will be constrained by the potentials of the articulatory system. In an optimal English text that is written to be read, the units of information in the written text will be constrained by the potentials of the visual system. This is achieved most effectively through punctuation marks.
The first intonation contour in this line appears to have no tonic foot, but this is because it is an incomplete tone unit which restarts on the same line and includes feel as the tonic foot.
This article is dedicated to the memory of Geoff Thompson who supervised the PhD thesis from which this paper is derived. Needless to say, the ideas in this paper would not have been realised without his support and encouragement. Sincere thanks are also due to Professors Michael Hoey, David Vernon and Cathy Burnett for further inspiration, discussion and comment, to colleagues past and present at Khalifa and Sheffield Hallam universities, and to the reviewers for Functional Linguistics, although any remaining errors in the paper are entirely my responsibility.
About the Author
Nick Moore is a Senior Lecturer at the TESOL Centre in Sheffield Hallam University where he leads and teaches both English language and teacher education modules. His PhD thesis, awarded in 2010 by the University of Liverpool, examined the role, realisation and interaction of Theme, Reference and Information Structure in a corpus of engineering textbooks. His research interests centre on the application of systemic functional and corpus linguistics to language teaching.
The author declares that he has no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
- Alexander, R. 2008. Essays on Pedagogy. London: Routledge.Google Scholar
- Banks, D. 1991. Some observations concerning transitivity and modality in scientific writing. Language Sciences 13: 59–78.View ArticleGoogle Scholar
- Baron, N. 2001. Commas and canaries: The role of punctuation in speech and writing. Language Sciences 23: 15–67.View ArticleGoogle Scholar
- Barsalou, L.W. 2008. Grounded cognition. Annual Review of Psychology 59: 11.1–11.29.View ArticleGoogle Scholar
- Biber, D, S. Johansson, G. Leech, S. Conrad, and E. Finegan. 1999. Longman Grammar of Spoken and Written English. Harlow: Pearson Education Ltd.Google Scholar
- Briggs, C.F. 2000. Literacy, reading and writing in the medieval West. Journal of Medieval History 26(4): 397–420.View ArticleGoogle Scholar
- Bruthiaux, P. 1993. Knowing when to stop: Investigating the nature of punctuation. Language & Communication 13(1): 27–43.View ArticleGoogle Scholar
- Burnyeat, M.F. 1997. Postscript on Silent Reading. The Classical Quarterly. New Series 47(1): 74–76.Google Scholar
- Carter, R., and M. McCarthy. 2006. Cambridge Grammar of English. Cambridge: Cambridge University Press.Google Scholar
- Chafe, W. 1970. Meaning and the Structure of Language. Chicago: University of Chicago Press.Google Scholar
- Chafe, W. 1988. Punctuation and the prosody of written language. Written Communication 5(4): 395–426.View ArticleGoogle Scholar
- Chafe, W. 1995. Accessing the mind through language. In Of Thoughts and Words - Proceedings of Nobel Symposium ’92, ed. S. Allén, 107–125. London: Imperial College Press.View ArticleGoogle Scholar
- Clark, H.H., and S. Haviland. 1977. Comprehension and the Given-New contract. In Discourse Production and Comprehension, ed. R. Freedle, 1–40. New Jersey: Ablex.Google Scholar
- Cohen, H J. Douaire, and M. Elsabbagh. 2001. The role of prosody in discourse processing. Brain and Cognition 46(1–2): 73–82.View ArticleGoogle Scholar
- Coulouris, G., J. Dollimore, and T. Kindberg. 2001. Distributed Systems: Concepts and Design, 3rd ed. Harlow: Pearson.Google Scholar
- D’Ausillio, A., et al. 2009. The motor somatotopy of speech perception. Current Biology 19(5): 381–385.View ArticleGoogle Scholar
- Davies, M. 1989. Prosodic and non-prosodic cohesion in speech and writing. Word 40(1–2): 255–261.View ArticleGoogle Scholar
- Davies, M. 1994a. “I’m sorry, I'll read that again”: Information structure in writing. In The Syntax of Sentence and Text: A Festschrift for František Daneš, eds. S. Čmejrkova and F. Štícha, 75-89. Amsterdam: John Benjamins
- Davies, M. 1994b. Intonation IS visible in written English. In Writing vs. Speaking: Language, Text, Discourse Communication, eds. S. Čmejrkova, F. Daneš, and E. Havlova, 199-203. Tübingen: Gunter Narr Verlag
- Deacon, T.W. 1997. The Symbolic Species: The Co-evolution of Language and the Brain. New York: W.W. Norton.Google Scholar
- Dehaene, S L. Cohen, M. Sigman, and F. Vinikier. 2005. The neural code for written words: A proposal. TRENDS in Cognitive Science 9(7): 335–341.View ArticleGoogle Scholar
- Edelman, G.M. 2004. Wider Than the Sky - The Phenomenal Gift of Consciousness. New Haven: Yale University Press.Google Scholar
- Findlay, J.M. 2004. Eye scanning and visual search. In Interface of Language, Vision and Action, ed. J.M. Henderson, 135–158. New York: Psychology Press.Google Scholar
- Fodor, J.D. 2002. Psycholinguistics cannot escape prosody, Proceedings of Speech Prosody 2002, Aix-en-Provence, France, April 11–13, 2002, 83–90. France: Aix-en-Provence.Google Scholar
- Fries, P.H. 1992. The structuring of information in written text. Language Sciences 14(4): 461–488.View ArticleGoogle Scholar
- Fries, P.H. 2000. Issues in modelling the textual metafunction. In Patterns of Text: In honour of Michael Hoey, ed. M. Scott and G. Thompson, 83–107. Amsterdam: John Benjamins.Google Scholar
- Fries, P.H. 2002. The flow of information in a written text. In Relations and Functions within and around Language, ed. P. Fries, M. Cummings, D. Lockwood, and W. Spruiell, 117–155. London: Continuum.Google Scholar
- Gavrilov, A.K. 1997. Techniques of reading in classical antiquity. The Classical Quarterly. New Series 47(1): 56–73.Google Scholar
- Gregory, M., and S. Carroll. 1978. Language and Situation: Language Varieties and their Social Contexts. London: Routledge.Google Scholar
- Halliday, M.A.K. 1967a. Notes on transitivity and theme part 2. Journal of Linguistics 3/2: 199-244
- Halliday, M.A.K. 1967b. Intonation and Grammar in British English. The Hague: Mouton
- Halliday, M.A.K. 1976. Theme and information in the English clause. In Halliday: System and Function in Language, ed. G. Kress. London: Oxford University Press.Google Scholar
- Halliday, M.A.K. 1979. Modes of meaning and modes of expression: types of grammatical structure, and their determination by different semantic functions. In Function and Context in Linguistic Analysis, ed. D.J. Allerton, E. Carney, and D. Holcroft, 57–79. Cambridge: CUP.Google Scholar
- Halliday, M.A.K. 1989. Spoken and Written Language. Oxford: OUP.Google Scholar
- Halliday, M.A.K. and W.S. Greaves. 2008. Intonation in the Grammar of English. London: Equinox.Google Scholar
- Halliday, M.A.K. and R. Hasan. 1976. Cohesion in English. London: Longman.Google Scholar
- Halliday, M.A.K. and R. Hasan. 1985. Language, Context and Text: Aspects of Language in a Social-Semiotic Perspective. Victoria: Deakin University Press.Google Scholar
- Halliday, M.A.K., and C.M.I.M. Matthiessen. 2014. An Introduction to Functional Grammar, 4th ed. London: Arnold.Google Scholar
- Hannay, M. and E.M. Martínez Caro. 2008. Last things first: A FDG approach to clause-final focus constituents in Spanish and English. In Languages and Cultures in Contrast: New Directions in Contrastive Linguistics, ed. M.A. Gómez-González, J.L. MacKenzie, and E. González-Alvarez, 33–68. Amsterdam: John Benjamins.View ArticleGoogle Scholar
- Herriman, J. 2004. Identifying relations: The semantic functions of wh-clefts in English. Text 24(4): 447–469.Google Scholar
- Hill, R.L. and W.S. Murray. 2000. Commas and spaces: Effects of punctuation on eye movements and sentence parsing. In Reading as a Perceptual Process, ed. A. Kennedy, R. Radach, D. Heller, and J. Pynte, 565–589. Amsterdam: Elsevier.View ArticleGoogle Scholar
- Hirotani, M L. Frazier, and K. Rayner. 2006. Punctuation and intonation effects on clause and sentence wrap-up: Evidence from eye movements. Journal of Memory and Language 54: 425–443.View ArticleGoogle Scholar
- Jackendoff, R. 2002. Foundations of Language. Oxford: Oxford University Press.View ArticleGoogle Scholar
- Johnson, W.A. 2000. Toward a sociology of reading in classical antiquity. The American Journal of Philology 121(4): 593–627.View ArticleGoogle Scholar
- Lambrecht, K. 1994. Information Structure and Sentence Form. Cambridge: CUP.View ArticleGoogle Scholar
- Lassen, I. 2004. Ideological resources in biotechnology press releases: Patterns of Theme/Rheme and Given/New. In Systemic Functional Linguistics and Critical Discourse Analysis, ed. L. Young and C. Harrison, 264–279. London: Continuum.Google Scholar
- Linell, P. 2005. The Written Bias in Linguistics. London: Routledge.View ArticleGoogle Scholar
- Lowe, E.A. and E.K. Rand. 1922. A Sixth-Century Fragment of the Letters of Pliny the Younger. Cambridge, Mass.: Cambridge University Press. http://www.gutenberg.org/ebooks/16706. Accessed 6 Aug 2008.
- Lowth, R. 1762/1967. A Short Introduction to English Grammar. London: A. Miller and J. Dodsby. Reprinted by Menston: Scholar Press.
- Lukatela, G T. Eaton, C. Lee, and M.T. Turvey. 2001. Does visual word identification involve a sub-phonemic level? Cognition 78: B41–B52.View ArticleGoogle Scholar
- Martin, J.R. 1992. English Text: System and Structure. Amsterdam: John Benjamins.View ArticleGoogle Scholar
- Matthiessen, C.M.I.M. 1992. Interpreting the textual metafunction. In Advances in Systemic Linguistics: Recent Theory and Practice, ed. M. Davies and L. Ravelli, 37–82. London: Pinter.Google Scholar
- Matthiessen, C.M.I.M. 1995a. THEME as an enabling resource in ideational ‘knowledge’ construction. In Thematic Development in English Texts, ed. M. Ghadessy, 20-54. London: Pinter
- Matthiessen, C.M.I.M. 1995b. Lexicogrammatical Cartography: English Systems. Tokyo: International Language Science Publishers
- Matthiessen, C.M.I.M. 2015. Register in the round: Registerial cartography. Functional Linguistics 2(9): 1–48.Google Scholar
- Moore, N. 2006. Aligning Theme and Information Structure to Improve the Readability of Technical Writing. Journal of Technical Writing and Communication 36(1): 43–55.View ArticleGoogle Scholar
- Moore, N. 2008a. Bridging the metafunctions: Tracking participants through taxonomies. In From Language to Multimodality: New Developments in the Study of Ideational Meaning, eds. C. Jones & E. Ventola, 111-129. London: Equinox
- Moore, N. 2008b. Validating a model of information structure in written English through a reading protocol. In Proceedings of the 19th European Systemic Functional Linguistics Conference and Workshop, eds. E. Steiner & S. Neumann. http://scidok.sulb.uni-saarland.de/volltexte/2008/1697/pdf/Moore_form.pdf. Accessed 15 July 2015
- Nunberg, G. 1990. The Linguistics of Punctuation. Stanford, CA: CSLI.Google Scholar
- Parkes, M.B. 1992. Pause and Effect: Punctuation in the West. Farnham: Ashgate.Google Scholar
- Peereman, R A. Content, and P. Bonin. 1998. Is perception a two-way street? The case of feedback consistency in visual word recognition. Journal of Memory and Language 39: 151–174.View ArticleGoogle Scholar
- Perea, M. and J. Acha. 2009. Space information is important for reading. Vision Research 49: 1994–2000.View ArticleGoogle Scholar
- Perfetti, C.A., and D.J. Bolger. 2004. The brain might read that way. Scientific Studies of Reading 8(3): 293–304.View ArticleGoogle Scholar
- Prince, E. 1981. Toward a taxonomy of given-new information. In Radical Pragmatics, ed. S. Cole, 223–255. New York: Academic.Google Scholar
- Pulvermüller, F, O. Hauk, V.V. Nikulin, and R.J. Ilmoniemi. 2005. Functional links between motor and language systems. European Journal of Neuroscience 21: 793–797.View ArticleGoogle Scholar
- Pulvermüller, F, R.L. Moseley, N. Egorova, and Z. Shebani. 2014. Motorcognition–motor semantics: Action perception theory of cognition and communication. Neuropsychologia 55: 71–84.View ArticleGoogle Scholar
- Pynte, J and A. Kennedy. 2007. The influence of punctuation and word class on distributed processing in normal reading. Vision Research 47: 1215–1277.View ArticleGoogle Scholar
- Rayner, K. 1998. Eye movements in reading and information processing: 20 Years of Research. Psychological Bulletin 124(3): 372–422.View ArticleGoogle Scholar
- Rayner, K., M.H. Fischer, and A. Pollatsek. 1998. Unspaced text interferes with both word identification and eye movement control. Vision Research 38(8): 1129–1144.
- Rayner, K., G. Kambe, and S.A. Duffy. 2000. The effects of clause wrap-up on eye movements during reading. The Quarterly Journal of Experimental Psychology 53A(4): 1061–1080.
- Saenger, P. 1982. Silent reading: Its impact on late medieval script and society. Viator 13: 367–414.
- Saenger, P. 1997. Space Between Words: The Origins of Silent Reading. Stanford, CA: Stanford University Press.
- Sainio, M., J. Hyönä, K. Bingushi, and R. Bertram. 2007. The role of interword spacing in reading Japanese: An eye movement study. Vision Research 47: 2575–2584.
- Salmelin, R., and J. Kujala. 2006. Neural representation of language: Activation versus long-range connectivity. Trends in Cognitive Sciences 10(11): 519–525.
- Seidenberg, M.S. 2011. Reading in different writing systems: One architecture, multiple solutions. In Dyslexia Across Languages, ed. P. McCardle, B. Miller, J.R. Lee, and O.J.L. Tzeng, 146–168. Baltimore, MD: Brookes Publishing.
- Steedman, M. 2000. Information structure and the syntax-morphology interface. Linguistic Inquiry 31(4): 649–689.
- Svenbro, J. 1993. Phrasikleia: An Anthropology of Reading in Ancient Greece. Ithaca: Cornell University Press.
- Tarone, E., S. Dwyer, S. Gillette, and V. Icke. 1998. On the use of the passive and active voice in astrophysics journal papers: With extensions to other languages and other fields. English for Specific Purposes 17(1): 113–132.
- Tomasino, B., C.J. Werner, P.H. Weiss, and G.R. Fink. 2007. Stimulus properties matter more than perspective: An fMRI study of mental imagery and silent reading of action phrases. NeuroImage 36: T128–T141.
- Truss, L. 2003. Eats, Shoots and Leaves: The Zero Tolerance Approach to Punctuation. London: Profile Books Ltd.
- Vallduvi, E. 1993. Information Packaging: A survey. Report prepared for WOPIS. http://www.hcrc.ed.ac.uk/publications/rp-44.ps.gz. Accessed 2 Oct 2007
- Vallduvi, E., and E. Engdahl. 1996. The linguistic realization of information packaging. Linguistics 34: 459–519.
- Vigneau, M., et al. 2006. Meta-analyzing left hemisphere language areas: Phonology, semantics, and sentence processing. NeuroImage 30: 1414–1432.
- Winskel, H. 2011. Orthographic and phonological parafoveal processing of consonants, vowels, and tones when reading Thai. Applied Psycholinguistics 32: 739–759.
- Winskel, H., R. Radach, and S. Luksaneeyanawin. 2009. Eye movements when reading spaced and unspaced Thai and English: A comparison of Thai–English bilinguals and English monolinguals. Journal of Memory and Language 61: 339–351.