
Computers & Education 52 (2009) 882–892


Using online annotations to support error correction and corrective feedback
Shiou-Wen Yeh (a), Jia-Jiunn Lo (b,*)

(a) Department of Applied Linguistics and Language Studies, Chung-Yuan Christian University, Chung-Li, Taiwan, ROC
(b) Department of Information Management, Chung-Hua University, No. 707, Sec. 2, Wu-Fu Rd., HsinChu, Taiwan, ROC

Article info

Article history:
Received 28 September 2008
Received in revised form 15 December 2008
Accepted 17 December 2008

Keywords:
Applications in subject areas
Distance education and telelearning
Human–computer interface

Abstract
Giving feedback on second language (L2) writing is a challenging task. This research proposed an interactive environment for error correction and corrective feedback. First, we developed an online corrective
feedback and error analysis system called Online Annotator for EFL Writing. The system consisted of five
facilities: Document Maker, Annotation Editor, Composer, Error Analyzer, and Viewer. With this system,
teachers can mark error corrections on online documents and students can receive corrective feedback
accordingly. The system also classifies and displays error types based on user query. Second, an experiment was conducted to evaluate the effectiveness of this system. Fifty EFL (English as a Foreign Language)
college freshmen were randomly assigned to two groups. The experimental group received corrective
feedback with the developed system whereas the control group used the paper-based error correction
method. After the treatment, students in both groups conducted corrective feedback activities by correcting the same document written by an EFL student. The experimental results were encouraging in that the
analysis of students’ corrective feedback revealed significantly better performance in the experimental
group on recognizing writing errors. Implications for further research are discussed.
© 2008 Elsevier Ltd. All rights reserved.

1. Introduction
Online learning is a fast growing field in education worldwide. It is also an emerging focus in the areas of computer technology and
language learning where scholars and teachers are examining the impact of technology on writing instruction. As the online composition
classroom has become more common on university campuses, many researchers have looked for innovative ways to meet the needs of a
new kind of learner—one no longer limited by constraints of face-to-face conferencing (e.g., Hyland & Hyland, 2006; Miller, 2001; Peterson,
2001; Wible, Kuo, Chien, Liu, & Tsao, 2001). As the importance of feedback emerged with the development of online composition classroom, error correction, and corrective feedback have become important tasks for EFL (English as a Foreign Language) teachers and students.
Many researchers are calling for critical examination of new technologies for writing instruction, specifically the possibilities and limitations of computer-mediated feedback on second language writing (Dagneaux, Denness, & Granger, 1998; Hyland & Hyland, 2006; Nagata,
1993; Ware & Warschauer, 2006). This study takes up researchers’ call for investigation that addresses error correction and corrective feedback in relation to online annotation technology.
Giving feedback on second language writing is a challenging task, particularly when learners do not understand or cannot process this feedback. Berry (1995)
suggested that there is a big gap between teachers’ and students’ understanding of grammatical terms in relation to errors, especially those
commonly used in the correction code. Some students struggled with applying teacher feedback to their writing because they were unfamiliar with the grammatical rules and metalinguistic terminology connected with the errors (Berry, 1995, cited in Lee, 1997). Therefore,
when giving feedback on student errors, teachers need to take into account their students’ previous English language instruction, especially
their metalinguistic backgrounds. According to Lee (1997), second language students were often asked to correct grammatical errors, but
seldom were they told to categorize them. In addition, Lee’s (1997) research suggested that students’ failure in error correction was mainly
due to their failure in detecting errors, and a crucial variable in error correction is ‘‘recognizing the existence of errors” (p. 473). Kubota
(2001) found that students tended to overlook the error symbols and sometimes misunderstood the meanings of the symbols. Also, students ‘‘quite often resort to reduction rather than elaboration for their error correction” (p. 478). In other words, they improve correctness
at the expense of their creativity. Kubota concluded that the way to maintain their creativity while improving accuracy needs to be
explored.

* Corresponding author. Tel.: +886 3 5186045; fax: +886 3 5186546.
E-mail address: [email protected] (J.-J. Lo).
0360-1315/$ - see front matter © 2008 Elsevier Ltd. All rights reserved.
doi:10.1016/j.compedu.2008.12.014


Since EFL learners differ greatly in their error correction and feedback strategies, a more constructive approach and a more interactive environment for error correction are needed. As claimed by Ferris (1995), the goal of error correction should be to equip students with
a range of strategies to help them become more independent writers. To be effective, feedback should be conveyed in a number of modes
and should allow for response and interaction (Brinko, 1993, cited in Hyland & Hyland, 2006, p. 5). Dagneaux et al. (1998) proposed
that error analysis should be reinvented in the form of computer-aided error analysis, a new type of computer corpus annotation. Ware
and Warschauer (2006) suggested that the electronic database of student writing can be helpful for developing metacognitive and metalinguistic awareness. Much research has been conducted to search for effective writing feedback and correction methods. This paper presents a potential direction of using online annotation for error correction and corrective feedback.
Annotations are the notes a reader makes to himself/herself, such as those students make when reading texts or researchers create when noting references they plan to search (Wolfe, 2002). Annotations are also a natural way to record comments and ideas in specific contexts
within a document. Annotations on online documents are easily shared among groups of people, making them valuable for a wide variety
of writing tasks. Compared to paper-based annotations shared merely through printed technology, online annotations provide learners
with more opportunities for dialogue and learning through conversations (Wolfe, 2002). Since annotations involve four major functions:
remembering, thinking, clarifying, and sharing (Ovsiannikov, Arbib, & Mcneill, 1999; Wolfe, 2002), annotation systems can take advantage of electronic databases and networked technologies to provide EFL teachers and learners with a more constructive environment for error correction and feedback.
Du (2004) further suggested that annotations can help learning activities by providing functionalities such as directing attention, building internal connections, and building external connections. For directing attention, annotations can focus the learner’s attention on a very
limited amount of information. For building internal connections, annotations can help the learner see how texts are related to others.
Annotations can also help the learner relate the presented information to existing knowledge for building external connections. As such,
we believe annotations could play an important role in the development of EFL learners’ metacognitive and metalinguistic awareness. Practically,
marking up text with colored and dynamic annotations can focus the learner’s attention on a very limited amount of information, and
therefore is an efficient way to draw learners’ attention to the corrections and feedback in the document. For building internal and external
connections, online annotations can provide a good way for EFL students and teachers to share error comments and allow extended conversations to take place in the context of a common text. Online annotations also can help users navigate documents, that allow them to
look up information or return to earlier sections of a document (Golovchinsky & Marshall, 2000). By facilitating such easy movement between texts, annotation tools can build internal connections and emphasize the intertextual nature of writing. As a language learning tool,
online annotations for EFL writing seem to fit with the current trend of distance learning, cognitive conditions for instructed second language learning (Skehan, 1998), the generally accepted hypothesis of second language acquisition known as the Monitor Model (Krashen,
1985), and collaborative language learning which has roots in Vygotsky’s (1978) cultural psychology.
Based on previous research and related works, we proposed that online annotation functionalities for manipulating, rearranging, searching, displaying and sharing annotations can be used to support EFL error correction and corrective feedback, especially the collaboration
between teachers and students outside the classroom. The purpose of this study is twofold. First, we developed an online annotation system named Online Annotator for EFL Writing to support error correction and corrective feedback for EFL teachers and students. The system
was composed of five unique facilities: (1) Document Maker—for students to enter documents to the system; (2) Annotation Editor—for
teachers to mark error corrections on online documents; (3) Composer—classifying and displaying annotation marks based on user query;
(4) Error Analyzer—accessing Database and displaying the statistical results of error distributions with Viewer in one of the four modes:
single document for an individual student, all documents for an individual student, single document for all students, and all documents
for all students; (5) Viewer—the Document Viewer displaying annotation marks that fulfill query conditions requested by the users,
and the Analyzed Result Viewer displaying the four error analysis modes in bar charts. Pederson (1987) cautioned that to ensure the best
possible use of computer-assisted language learning (CALL), careful programming, thoughtful instructional design and sensitivity to EFL
students’ needs are important but not sufficient. The profession must undertake research that ‘‘provides an empirical base for our assumptions, strategies, and applications of the computer in language teaching” (Pederson, 1987, p. 101). Therefore, the second purpose of this
study was to investigate empirically if the Online Annotator for EFL Writing leads to more effective error correction for EFL writers. The
remaining parts of this paper are structured as follows: (a) review of related works; (b) description of the online annotation system; (c)
experimental study; (d) results and discussion; and (e) conclusion.

2. Related works
2.1. Challenges of error correction and corrective feedback
Error correction and corrective feedback are the areas that bridge the concerns of EFL teachers, researchers, and instructional designers.
As defined by Ellis (2007), error correction is a technique to help learners correct errors by providing them with some kind of prompting,
and corrective feedback takes the form of responses to text containing an error. The responses can consist of (1) an indication that an error
has been committed, or (2) provision of the correct target language form, or (3) metalinguistic information about the error, or any combination of these. For writing instruction, error correction and corrective feedback are important tasks for both teachers and students in
many contexts. Although it is generally agreed that students expect teachers to correct written errors and teachers are willing to provide
them (Enginarlar, 1993; Ferris, 1995; Hedgcock & Lefkowitz, 1994; Lee, 1997; Leki, 1991; Schulz, 1996), the immediate concern of many
teachers ‘‘is not so much to correct or not to correct” (Lee, 1997, p. 466), but rather when and how to respond to what students write (Lee,
2003; Magilow, 1999; Yates & Kenkel, 2002). For example, when teachers mark student errors, do they need to indicate the type of error the
student has made? Which specific errors should be corrected? Also, error feedback and analysis of EFL students’ written work is an extremely time-consuming task for teachers. Considering the time required for error feedback, the most effective way to correct errors is worth
investigating (Kubota, 2001).
For error correction research, theorists are also concerned with whether corrective feedback has any effect on learners’ interlanguage
development and much research has been conducted on effective correction methods in EFL. However, there is no agreement on the most


effective method (Ferris, 2006; Kubota, 2001; Truscott, 1996; Truscott, 1999; Yates & Kenkel, 2002). Ellis (2007) summarized five controversies surrounding corrective feedback in both language pedagogy and second language acquisition: (1) whether corrective feedback contributes to second language acquisition, (2) which errors to correct, (3) who should do the correcting, (4) which type of corrective feedback
is the most effective, and (5) what is the best timing for corrective feedback. Ellis (2007) indicated that a general weakness of current research on corrective feedback is that it has focused narrowly on the cognitive aspects of correction and acquisition. He further emphasized the importance of the social context of corrective feedback and the psychological characteristics of individual learners. Hyland and
Hyland (2006, p. 10) stated that, ‘‘The main reason for [the discrepancies] is that feedback on students’ writing, whether by teachers or
peers, is a form of social action designed to accomplish educational and social goals. Like all acts of communication, it occurs in particular
cultural, institutional, and interpersonal contexts, between people enacting and negotiating particular social identities and relationships,
and is mediated by various types of delivery.” Such positions are in accordance with Zamel’s (1983) claim that writing teachers should
consider ways to incorporate student feedback into their courses more thoroughly, such as establishing a collaborative relationship with
students, drawing attention to problems, offering alternatives and suggesting possibilities.
Several studies have looked at different error correction techniques for improving linguistic accuracy. Lee (1997) conducted a study to
examine whether some errors are more difficult for learners to correct than others. The error correction task contained both surface and
meaning errors and was set in three different conditions: (1) marked condition, (2) slightly marked condition, and (3) unmarked condition.
The results showed that students performed best in the marked condition and performed worst in the unmarked version. However, there
was no significant difference between the slightly marked and the unmarked condition. The findings confirmed that students’ failure in error
correction was mainly due to their failure in detecting errors. Therefore, error correction techniques which can help students detect and correct errors are needed. Lee (1997) suggested that students’ performance in error correction itself can provide a useful source of information
to help teachers formulate their error correction policy. For future investigations, Lee (1997) also suggested the use of a student text where
errors occur naturally to examine learners’ performance in error correction, instead of using a standard text with errors implanted.
Ferris (1995) used a questionnaire to elicit EFL learners’ strategies for error correction. This study relied on learners’ memories of the
strategies they employed for error correction. Ferris reported the following strategies used by EFL students in the USA to correct errors:
ask teachers for help, make corrections themselves, check a grammar book, think about/remember mistakes, ask a tutor for help, and check
a dictionary. Ferris (1995) further suggested that the ultimate goal of error correction should be to equip students with a range of strategies
to help them become more independent editors and better writers. Kubota’s (2001) study deals with error correction strategies employed
by learners of Japanese when revising their written work. This study investigates: (1) the effectiveness of the coding system employed by
the Victorian Certificate of Education; (2) types of code symbols which lead to successful self-correction; (3) strategies used for self-correction; and (4) successful as well as unsuccessful strategies employed by students. Kubota (2001) found that students vary greatly in their error correction strategies.
For EFL learners who have diverse proficiency levels and error correction strategies, a more constructive approach and a more interactive environment for error correction and corrective feedback should be developed. We proposed that such an environment should be able
to help learners ‘‘detect and recognize the existence of errors” (Lee, 1997, p. 473). Specifically, it should provide EFL learners with effective
error-feedback prompts, including (1) an indication that an error has been committed, or (2) provision of the correct target language form,
or (3) metalinguistic information about the error (Ellis, 2007). Besides drawing learners’ attention to the error-feedback prompts, the environment should be able to equip learners with tools for classifying errors (Lee, 1997) and provide the scaffolding needed by learners to
move from grammatical drills to more independent writing (Ferris, 1995). Considering the point of view of teachers, the environment
should support teachers with effective methods and convenient tools for marking error corrections and providing error feedback (Kubota,
2001). It should also provide enough support for teachers to incorporate student feedback into their courses more thoroughly (Lee, 1997;
Zamel, 1983). Finally, teachers should be able to establish a collaborative relationship with students (Hyland & Hyland, 2006). In this sort of
relationship, students and teachers can exchange information about what the writing is trying to communicate and can negotiate ways to
improve it.
2.2. Computer-mediated corrective feedback
In responding to the limitations of paper-based error feedback and analysis, van Els et al. (1984) stated that ‘‘[Error analysis] has too
often remained a static, product-oriented type of research, whereas L2 learning processes require a dynamic approach focusing on the actual course of the process” (cited by Dagneaux et al. (1998, p. 164)). Researchers have suggested a more constructivist approach to designing open-ended learning environments. For instance, Peterson (2001) suggested that the introduction of distance-learning technologies has
reminded us of the importance of this kind of feedback, and online technologies can offer new ways of gathering that information from
students. Teachers should consider new and emerging technologies and the capabilities they add to approaches for teaching and supporting the distant learner (Ware & Warschauer, 2006). From the perspective of instructional design, traditional paper-based error feedback
and analysis can be reinvented in the form of computer-aided error analysis, which is a potential type of computer corpus annotation.
Several studies have been conducted in the area related to technology-enhanced corrective feedback and writing instruction, specifically
the linguistic advantages of technology-enhanced corrective feedback. For example, Nagata (1993) compared the effectiveness of intelligent computer feedback (providing metalinguistic explanations on the learners’ errors) and traditional computer feedback (indicating only
missing or unexpected words without any metalinguistic explanations on the errors) for learning complex grammatical constructions, and
found a significant difference between these two types of feedback, favoring intelligent computer feedback. Nagata (1997) also compared
metalinguistic feedback (detailed explanations about grammatical and semantic functions of the documents) with translation feedback
(English L1 translations of Japanese documents). Nagata explained that the intensive computer exercises with on-going metalinguistic
feedback helped the students to understand the complex grammatical concepts better than translation feedback. Her study suggested that
immediate linguistic feedback provided by a computer program can lead learners to perform better in using complex grammatical structures than translation feedback does, and that it does so by increasing the students’ tendency to actually use such metalinguistic information in their production of the target language.
Dagneaux et al. (1998) suggested that error-tagged learner corpora are a valuable resource for improving EFL materials. They developed
a computerized error analysis system (CEA) which has two major features: (1) the learner data is corrected manually by a native speaker of English; and (2) the analyst assigns to each error an appropriate error tag and inserts the tag in the text file with the correct version. Seven
major category codes were included in the CEA system: Formal, Grammatical, LeXico-grammatical, Lexical, Register, Word redundant/word
missing/word order, and Style. These codes are then followed by one or more subcodes, which provide further information on the type
of error. The overall architecture of the system could be retained with the addition (or deletion) of some subcategories, such as GADJG
(grammatical errors affecting adjectives and involving gender). It was suggested that CEA is a powerful technique and can be used to generate lists of specific error types, count and sort them in various ways and view them in their context.
2.3. Online annotation technology for corrective feedback
Based on the previous discussions, we propose that computer-based error feedback can be further reinvented with online annotation
technology. Functionalities of current annotation systems can be summarized as follows: (1) Highlighting key words—These systems
mainly provide tools much like highlighters to mark keywords or key points in the document, so that they can be quickly found later. Such
marking helps draw the readers’ attention (Du, 2004), while the colors and marks can carry some additional information (Ovsiannikov et al.,
1999). (2) Structuring related annotations—This type of system generates a list of related annotations, in which related annotations are
placed together. When the user adds new annotations to the database, he/she can just press a button and record the annotations into the
list. The user can also delete a certain annotation from the list. (3) Managing annotations—While the systems save the annotation contents,
they also save related data for further management, such as annotators’ IPs, annotation types (deleted, revised, etc.), and the importance of
the annotations.
As Bargeron, Gupta, Sanocki, and Grudin (1999) claimed, annotations can provide ‘‘in context” personal notes and can enable asynchronous collaboration among groups of users. In other words, they allow multiple users to annotate the same document for the purpose of knowledge sharing. With annotations, users are no longer limited to viewing content passively on the Web, but are free to add and share
commentary and links, thus transforming the Web into an interactive medium. In addition, as Virtual Notes (Koch & Schneider, 2000)
showed, the system records the exact location of the icon and the annotations to the database. Once the user clicks on the document,
the system will retrieve related icons from the database. If the user moves his/her mouse pointer over an icon, the annotation will appear
automatically. This system also allows multiple users to work either synchronously or asynchronously. Uniquely, when new annotations come in, the system e-mails the users and invites them to view the new annotations. Another type of annotation system (e.g.,
Ovsiannikov et al., 1999) provides an annotation database (ADB) for the user to integrate information. Specifically, when searching ADB for
annotations, the user can take advantage of information in the related clumps. The list of search results will contain annotations found
both directly and indirectly through their context. When searching for original documents, annotations in ADB can provide additional
information as to which texts are relevant.
Some annotation systems are designed for second language writing and error feedback. CoCoA (Ogata, Feng, Hada, & Yano, 2000) is a
typical example, which provides foreign students learning Japanese writing with an environment for exchanging marked-up documents for
error correction. For the purpose of writing, the learner first writes original text with an editor and sends the original text via e-mail to
the teacher. In the editor interface, the text is double-spaced to allow teacher’s corrections with marks and comments. Then the teacher
saves the corrected text and sends it to the learner via e-mail. Finally, the learner can view the corrected text and the revised text in the
top-down windows. Moreover, the system can retrieve documents with similar error types from the database for the learner to practice.
The valuable error profile helps the teacher understand the students’ error patterns and can be used by teachers to choose materials with similar types of errors for further exercises. To meet the particular challenges faced by EFL learners and teachers, Wible et al. (2001)
designed a novel interactive online environment called IWiLL for second language writing classes. To compose or submit an essay, the student links to a page that displays a row of colored buttons. From this page, the student can resume work on an unfinished essay or revision,
or submit or compose a new essay. The ‘‘Comment Bank” provides teachers with a convenient interface to mark, store and reuse comments. The system is unique in that the essays written by students and the comments given by teachers can be archived in a searchable online
database. With ‘‘View Comments”, the student can see patterns of difficulty from his/her own writing. However, in spite of the advantages
mentioned above, the authors cautioned that, ‘‘in the context of EFL writing, what is needed is research on the differential effects of the two approaches [computer-based marking versus traditional pen-and-paper marking] to providing feedback” (Wible et al., 2001, pp. 308, 310).
Based on the literature review, this study first developed an annotation system which can provide annotation analysis and knowledge
sharing, and can be applied to error correction and corrective feedback in writing instruction. The proposed system was developed to
achieve the following objectives: (1) providing multiple annotation functions for error correction; (2) providing error feedback and analysis; and (3) providing error feedback management. The system not only prompts students to correct grammatical errors but also helps
them to categorize errors. It provides a facility for the classification of errors/feedback. When students receive feedback from their teacher,
they can query the feedback such that only one class of errors/feedback is presented at a time. This facility could provide the scaffolding
needed by learners to move from grammar drills to proofing their own written compositions. Since the question of how annotations may
help students’ writing has not been sufficiently addressed (Jones & Plass, 2002; Wible et al., 2001; Wolfe, 2002; Wolfe & Neuwirth, 2001),
this study further examined the effects of using the proposed system on EFL students’ error correction. It was hypothesized that the system
is helpful for developing students’ metalinguistic awareness, as suggested by Ware and Warschauer (2006), and therefore will contribute to
their error recognition performance.

3. Development of Online Annotator for EFL Writing
The system that we have developed has been designed based upon some motivating ideas. First, the ideal annotation system for error
correction and error feedback should be able to help learners ‘‘detect and recognize the existence of errors” (Lee, 1997, p. 473). Specifically,
it should provide learners with effective error-correction prompts, including (1) an indication that an error has been committed, or (2) provision of the correct target language form, or (3) metalinguistic information about the error, as defined by Ellis (2007). Second, the system
should be able to support teachers with effective methods and convenient means for providing error feedback (Kubota, 2001) and incorporating student feedback into their courses (Lee, 1997; Zamel, 1983). Moreover, the system should be able to provide users (students or teachers) with the annotation marks subject to different query conditions. Specifically, the system should provide a facility for classifying errors
and corrective feedback, and the users involved should be enabled to search and retrieve the stored record of error feedback. Finally, teachers should be able to establish a collaborative relationship with students (Hyland & Hyland, 2006), such as sharing and discussing errors
and feedback with students. Fig. 1 shows the client/server structure of the Online Annotator for EFL Writing system and its major components, which include Document Maker, Annotation Editor, Composer, Error Analyzer, Viewer, and Database. In what follows, each component is presented in detail.
3.1. Document Maker
The Document Maker is depicted in Fig. 2. It is where a registered student inputs his/her documents. To compose a document, the student is first shown a text box and two rows of colored editing tools. After entering the document title, the student can choose either to
compose online by typing the article within the text box or to copy and paste text composed off-line into that box. To submit
the document to the instructor, the student simply clicks on the ‘‘SAVE” button. As a document is edited and saved, the system will convert
it into the HTML format and save it in Database so that it can be displayed with general Web page browsers for error correction marking.
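As a rough sketch of this step, the save handler might look as follows (a minimal illustration in JavaScript; the function names, the db.insert call, and the record fields are our assumptions, not the system’s actual code):

// Hypothetical sketch of the Document Maker save step (all names assumed).
// The student's plain text is wrapped into simple HTML paragraphs and stored
// in Database together with the document title and author.
function saveDocument(db, studentId, title, text) {
  var html = text
    .split(/\n\s*\n/)   // one paragraph per blank-line-separated block
    .map(function (par) { return '<p>' + escapeHtml(par) + '</p>'; })
    .join('\n');
  db.insert('documents', { student: studentId, title: title, html: html });
}

// Escape characters that would otherwise be parsed as HTML markup.
function escapeHtml(s) {
  return s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
}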
3.2. Annotation Editor (error correction marking)
The error correction marking generated by Annotation Editor takes the form of responses to words or sentences containing an error. The
responses consist of (1) an indication that an error has been committed, or (2) metalinguistic information about the nature of the error, or
(3) provision of the correct target language form, or any combination of these. Fig. 3 illustrates the interface of Annotation Editor. To create
a correction and comment, the assessor first highlights the error, referred to as ‘‘annotation keywords”, which he or she wants to correct. Then
the assessor clicks on one of the annotation tools to activate the corresponding function to place the error correction mark into the annotation keywords. The annotation tools include ‘‘DELETE”, ‘‘INSERT-FRONT”, ‘‘INSERT-BACK”, ‘‘HIGHLIGHT”, and ‘‘REPLACE”.
For ‘‘INSERT-FRONT”, ‘‘INSERT-BACK”, and ‘‘REPLACE” annotation tools, a pop-up window is available for entering the ‘‘inserted text” or
‘‘replaced text”. A pop-up window is also available for entering additional explanations for each error. The developed system uses JavaScript to automatically insert the <SPAN> tag of XHTML around the highlighted text (annotation keywords) for showing the effects of annotation marks (see Fig. 4 for an illustration) and store all related annotation information, such as annotator (who creates the annotation),
annotation type, error type, annotation keywords, replaced text, and additional explanation of the annotation, in Database.
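A minimal sketch of this mechanism is given below. It uses the standard browser selection API; the identifier scheme, the record fields, and the currentUserId variable are illustrative assumptions, not the authors’ actual implementation:

// Hypothetical sketch: wrap the highlighted text (annotation keywords) in a
// <SPAN> carrying the chosen annotation tool as its class and a unique id.
// Assumes the selection lies within a single text node.
var nextAnnotationId = 1;

function annotateSelection(tool, errorType, replacedText, explanation) {
  var range = window.getSelection().getRangeAt(0); // the highlighted error
  var span = document.createElement('span');
  span.className = tool;                  // e.g. 'REPLACE', 'HIGHLIGHT'
  span.id = 'ann' + (nextAnnotationId++); // annotation identification code
  range.surroundContents(span);           // insert the <SPAN> around the keywords

  // Record to be stored in Database (field names are assumptions).
  return {
    id: span.id,
    annotator: currentUserId,             // assumed global: who annotates
    annotationType: tool,
    errorType: errorType,
    keywords: span.textContent,
    replacedText: replacedText,
    explanation: explanation
  };
}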
For each annotation, the assessor then assigns an error code using the two pull-down menus (level-one menu and level-two menu) to
indicate its error category and sub error type. Under each level-one error, there are different numbers of level-two sub errors. Table 1 shows
the error categories and part of the sub error types the assessor can assign to each error. Specifically, there are five major error categories:
(a) writing style, (b) document structure, (c) sentences, (d) words and phrases, and (e) agreement, tense, and voice.
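The two-level menus can be driven by a simple nested structure. The sketch below, which follows the categories of Table 1 (sub-lists partial), is an illustrative assumption about how such menus might be populated:

// Hypothetical sketch: a nested structure backing the level-one and
// level-two pull-down menus (categories as in Table 1; sub-lists partial).
var errorCodes = {
  '1. Writing style': ['1.1 Idiomatic English', '1.2 Unnecessary repetition', '1.3 Redundancy'],
  '2. Document structure': ['2.1 Do not begin a new paragraph', '2.2 Paragraph development', '2.3 Paragraph unity'],
  '3. Sentences': ['3.1 Misplaced modifier', '3.2 Sentence construction', '3.3 Dangling modifier'],
  '4. Words and phrases': ['4.1 Faulty capitalization', '4.2 Faulty word division', '4.3 Prepositions'],
  '5. Agreement, tense, and voice': ['5.1 Faulty subject-verb agreement', '5.2 Pronoun agreement', '5.3 Wrong tense']
};

// When the assessor picks a level-one category, repopulate the level-two menu.
function onCategoryChange(levelOneMenu, levelTwoMenu) {
  levelTwoMenu.innerHTML = '';
  (errorCodes[levelOneMenu.value] || []).forEach(function (sub) {
    var opt = document.createElement('option');
    opt.text = sub;
    levelTwoMenu.add(opt);
  });
}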
In Fig. 4, the ‘‘class” feature within the <SPAN> tag indicates the selected annotation tool, such as ‘‘REPLACE”, ‘‘HIGHLIGHT”, etc. The ‘‘class” feature also determines the CSS template to be applied to the annotation keywords within the <SPAN> tag.

Fig. 1. The System Architecture. (Client side: Document Maker, Annotation Editor, Composer, Error Analyzer, and Viewer, with user input and output interfaces; server side: Database.)

Fig. 2. Illustration of Document Maker.


Fig. 3. Illustration of Annotation Editor.

Fig. 4. Illustration of the code to use <SPAN> for inserting annotation. (The upper part shows the annotation effect the users see.)

Table 1
Error categories and error types for error feedback (partial).

Major error category             Sub error types
1. Writing style                 1.1 Idiomatic English; 1.2 Unnecessary repetition; 1.3 Redundancy (8 error types in total)
2. Document structure            2.1 Do not begin a new paragraph; 2.2 Paragraph development; 2.3 Paragraph unity (9 error types in total)
3. Sentences                     3.1 Misplaced modifier; 3.2 Sentence construction; 3.3 Dangling modifier (7 error types in total)
4. Words and phrases             4.1 Faulty capitalization; 4.2 Faulty word division; 4.3 Prepositions (19 error types in total)
5. Agreement, tense, and voice   5.1 Faulty subject-verb agreement; 5.2 Pronoun agreement; 5.3 Wrong tense (5 error types in total)

The ‘‘id” feature within the <SPAN> tag is unique and serves as the annotation identification code. It enables dynamic control of the annotation keywords, specifically, displaying annotation marks subject to different query conditions, by regarding each annotation as an object stored in the annotation Database. As an annotation is created, all the related information, such as user ID, annotation type, error type, annotation identification code, and additional explanations for each error, is recorded in the Database, whose primary key is the annotation identification code. The annotation Database offers the information for error analysis (manipulated by Error Analyzer) and annotation query (manipulated by Composer). In Annotation Editor, the users can make correction marks and comments only; the original document is under a ‘‘read-only” status in that its content cannot be changed.
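For illustration, the per-class styling described above might be realized as one CSS rule per annotation class, selected automatically by the ‘‘class” feature of each <SPAN>. The rule contents below are assumptions, not the system’s actual templates:

// Hypothetical sketch: one CSS template per annotation class, injected at
// load time; each <SPAN>'s 'class' feature then selects its visual style.
var css = document.createElement('style');
css.textContent =
  'span.DELETE       { text-decoration: line-through; color: red; }\n' +
  'span.HIGHLIGHT    { background-color: yellow; }\n' +
  'span.REPLACE      { text-decoration: underline; color: blue; }\n' +
  'span.INSERT-FRONT, span.INSERT-BACK { border-bottom: 1px dashed green; }';
document.head.appendChild(css);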
3.3. Composer (displaying annotation marks based on user query)
This system inserts annotation tags into the HTML code of the original document to show the effects of annotation marks and stores all related annotation information in Database. With Database and the annotation identification code, through Composer, the system can provide users with the annotation marks subject to different query conditions. In this system, a user can query the annotations based on (1)
the assessor who makes annotations, (2) the annotation types, and (3) the error types. In addition, this system is able to implement full-text
search within ‘‘Annotation Keywords”, ‘‘Replaced Text”, and/or ‘‘Additional Explanation”. For those annotations that fulfill the query conditions, this system applies the corresponding CSS templates to the <SPAN> tags by using JavaScript so that the corresponding annotation marks can be shown in Viewer accordingly.
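One plausible way to realize this filtering, assuming each annotation record is keyed by the id of its <SPAN> tag, is sketched below (the query shape, record fields, and the neutral hidden-annotation class are assumptions):

// Hypothetical sketch of Composer-style filtering: annotations whose stored
// records satisfy the query keep their CSS class; the rest are neutralized
// (a 'hidden-annotation' rule with no visual effect is assumed to exist).
function showMatchingAnnotations(records, query) {
  records.forEach(function (rec) {
    var span = document.getElementById(rec.id);
    if (!span) return;
    var matches =
      (!query.annotator || rec.annotator === query.annotator) &&
      (!query.annotationType || rec.annotationType === query.annotationType) &&
      (!query.errorType || rec.errorType === query.errorType) &&
      (!query.text || (rec.keywords + ' ' + rec.replacedText + ' ' +
                       rec.explanation).indexOf(query.text) >= 0); // full-text
    span.className = matches ? rec.annotationType : 'hidden-annotation';
  });
}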
3.4. Error Analyzer
Students’ performance in error correction itself can provide a useful source of information to help teachers formulate their error correction policy. For this purpose, Error Analyzer accesses Database and analyzes students’ errors to display the statistical results of student
error distributions in bar charts in Viewer as requested. This search function is also available to students. Practically, when students receive
feedback from the teacher, they can query the feedback such that only the annotations that fulfill the query conditions will be presented at
a time. In Error Analyzer, four statistical analysis modes are provided: single document for an individual student, all documents for an
individual student, single document for all students, and all documents for all students.
3.4.1. Analysis of single document for an individual student
In this mode, the error ratio of error type $i$ in single document $p$ for an individual student $j$, $E_{pij}/E_{pj}$, is analyzed, where $E_{pij}$ is the number of errors of error type $i$ that student $j$ has made in document $p$ and $E_{pj}$ is the total number of errors that student $j$ has made in document $p$, i.e., $E_{pj} = \sum_{i=1}^{e} E_{pij}$, where $e$ is the total number of error types. This mode helps reveal the error ratio distribution of student $j$ in writing document $p$.
3.4.2. Analysis of all documents for an individual student
The difference between the analysis of all documents and that of a single document for an individual student $j$ is that, instead of $E_{pij}/E_{pj}$, the error ratio student $j$ has made in all documents, $E_{ij}$, is used. $E_{ij}$ is computed as $E_{ij} = \frac{1}{m_j}\sum_{p=1}^{m_j} E_{pij}/E_{pj}$, where $m_j$ is the number of documents that student $j$ has written. This mode helps reveal the most severe barrier that student $j$ faces in writing.
3.4.3. Analysis of single document for all students
The difference between the analysis of single document $p$ for an individual student and that for all students is that the error ratio $\sum_{j=1}^{n_p} E_{pij} \,/\, \sum_{i=1}^{e}\sum_{j=1}^{n_p} E_{pij}$ is used, where $n_p$ is the number of students who have written document $p$ and $e$ is the total number of error types. This mode helps reveal the unclear concepts most students have about document $p$.
3.4.4. Analysis of all documents for all students
In this mode, the average frequency of error type $i$ for all students, $\sum_{p=1}^{m}\sum_{j=1}^{n} E_{pij} \,/\, n$, is used, where $m$ is the total number of documents and $n$ is the total number of students. This mode helps reveal the overall unclear concepts most students have.
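To make the four modes concrete, the following sketch computes them from a flat table of error counts. The record layout { p, j, i, n } (document, student, error type, count) is our assumption for illustration:

// Hypothetical sketch of the four Error Analyzer modes over flat records
// { p: document, j: student, i: error type, n: count }.
function sumWhere(recs, keep) {
  return recs.filter(keep).reduce(function (a, r) { return a + r.n; }, 0);
}

// Mode 1: error ratio E_pij / E_pj for document p, student j, error type i.
function mode1(recs, p, j, i) {
  var epij = sumWhere(recs, function (r) { return r.p === p && r.j === j && r.i === i; });
  var epj = sumWhere(recs, function (r) { return r.p === p && r.j === j; });
  return epj ? epij / epj : 0;
}

// Mode 2: average of the per-document ratios over the m_j documents of student j.
function mode2(recs, docsOfJ, j, i) {
  var total = docsOfJ.reduce(function (a, p) { return a + mode1(recs, p, j, i); }, 0);
  return docsOfJ.length ? total / docsOfJ.length : 0;
}

// Mode 3: share of error type i among all errors made in document p.
function mode3(recs, p, i) {
  var den = sumWhere(recs, function (r) { return r.p === p; });
  return den ? sumWhere(recs, function (r) { return r.p === p && r.i === i; }) / den : 0;
}

// Mode 4: average frequency of error type i per student over all documents.
function mode4(recs, nStudents, i) {
  return sumWhere(recs, function (r) { return r.i === i; }) / nStudents;
}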
3.5. Viewer
Two Viewer modes are provided in this system, Document Viewer and Analyzed Result Viewer. Document Viewer displays annotation
marks that fulfill query conditions requested by the users. In Document Viewer, the users know which parts of their documents are corrected and they can get detailed corrective feedback by clicking on the annotation mark(s). A pop-up window will come up to display additional explanations for each error (see Fig. 5). Analyzed Result Viewer displays the four error analysis modes analyzed by Error Analyzer in
bar charts.
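A minimal sketch of the click-to-explain behavior in Document Viewer follows; the id prefix and the in-memory annotationRecords map are assumptions:

// Hypothetical sketch: clicking an annotation mark opens a pop-up window
// showing that annotation's error type and additional explanation.
document.addEventListener('click', function (ev) {
  var span = ev.target.closest('span[id^="ann"]'); // an annotation <SPAN>?
  if (!span) return;
  var rec = annotationRecords[span.id];             // assumed record lookup
  var popup = window.open('', 'explanation', 'width=320,height=200');
  popup.document.body.innerHTML =
    '<b>' + rec.errorType + '</b><p>' + rec.explanation + '</p>';
});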

Fig. 5. Document Viewer.


4. The experiment
Annotation functionality is a dimension that expands immensely when a paper-based annotation assumes an electronic form (Wolfe,
2002). From the perspective of instructional design, traditional paper-based error feedback and analysis can be reinvented in the form of
computer-aided error analysis, which is a potential type of computer corpus annotation. Compared to paper-based annotations shared
merely through printed technology, ‘‘electronic annotations can not only be as convenient as their paper counterparts, but they are superior in terms of the additional advanced capabilities they can offer” (Ovsiannikov et al., 1999, p. 329). Online annotations provide learners
with more opportunities for dialogue and learning through conversations (Wolfe, 2002). Annotations also can take advantage of electronic
databases to provide EFL teachers and learners with a more constructive environment for error correction and feedback. However, despite the
possible benefits of online annotations for learning, Wolfe and Neuwirth (2001) pointed out that most users prefer to print paper versions
of online documents before reading them. Compared to computer displays, paper is more legible and portable, and allows readers to move
easily between documents. Besides, paper-based documents can be easily annotated.
In responding to the limitations as well as advantages of computer-based versus paper-based annotations, researchers (e.g., Ovsiannikov
et al., 1999; Rice, 1994; Wible et al., 2001; Wolfe & Neuwirth, 2001) claimed that comparison studies of using annotations in paper and
electronic environments are needed. Therefore, an experimental study was conducted to investigate the effectiveness of Online Annotator
for EFL Writing on students’ error correction. Specifically, the study examined if the proposed online annotation system leads to more effective error recognition for EFL error correction. Researchers (e.g., Ferris, 1995; Kubota, 2001; Lee, 1997) suggested that students’ failure in
error correction is mainly due to their failure in detecting and recognizing the existence of errors. In addition to error detection and recognition, students should be able to understand the meanings of the error symbols. The experiment addressed the following question:
What is the relative effectiveness of computer-based corrective feedback versus paper-based corrective feedback on EFL students’ error
recognition? It was hypothesized that the system is helpful for developing EFL students’ metalinguistic awareness and scaffolding their
error correction strategies, and therefore will contribute to their error recognition performance.
4.1. Participants
The participants in this study consisted of 50 freshmen in a university in northern Taiwan. The participants studied Applied Foreign
Languages (Foreign Language Teaching or Translation/Interpretation) in the same department. The subjects’ first language was Chinese.
At the beginning of the semester, the students were randomly assigned into one of the two English Writing classes (each with 25 students)
which were taught by the same instructor under identical conditions. During the experiment, students in Class A belonged to the experimental group and Class B to the control group. The experimental group received corrective feedback with the developed system and the
control group received the paper-based corrective feedback.
4.2. Procedure
To start the experiment, all participants from both groups were asked to write a short essay about their favorite celebrities using the
Document Maker of Online Annotator for EFL Writing as depicted in Fig. 2. To ensure that each student obtained the same instructions
on how to do the writing task, a list of guidelines was developed and printed on the class worksheet for students to follow. After the document was edited, the system automatically converted it into HTML format and saved it in Database for error correction marking. After the
writing session, the control group’s works were printed out and graded by a trained rater using the traditional paper-based corrective feedback
method. On the other hand, the experimental group’s works were graded by the same rater using the Annotation Editor of Online Annotator
for EFL Writing. Although the grading methods were different, all participants’ corrective feedback was coded as metalinguistic data by the
rater with the checklist as shown in Table 1. The feedback types consisted of (1) an indication that an error has been committed, or (2)
metalinguistic information about the nature of the error, or (3) provision of the correct target language form, or any combination of these.
Both groups could see the corrective feedback made by the rater but in different formats: the control group received the paper-based
feedback with different feedback types; the experimental group could see errors they made with the Document Viewer and Analyzed Result Viewer of Online Annotator for EFL Writing.
In a later class, all participants were asked to write another essay related to their freshman life in the past tense. The grading procedures and
criteria were exactly the same as in the former case. After that, all participants were asked to implement the error correction practice in a paper-based way for the same document provided by the instructor. This document was a student text where errors occurred naturally. All
participants handwrote feedback and comments onto the printout of the student text. To ensure that each student obtained the same
instructions on how to do the feedback task, a list of guidelines was generated and printed on the class worksheet for students to follow.
Students were required to identify not only the errors but also the corresponding error types. Two other English writing teachers in the
department were also invited to read the same paper and generate corrective feedback collaboratively. During the data analysis phase,
all participants’ corrective feedback for the document was scored by the rater against the corrective feedback collaboratively generated
by the two writing teachers. In the next step of the analysis, a scoring list was used to examine the quality of students’ corrective feedback;
that is, whether they identified the errors correctly.
4.3. Design
Three variables were employed to examine if the system is helpful for students’ error recognition performance. The first variable, x1, represents the average number of incorrect texts that the student identified as incorrect. The higher x1 is, the more effective the error recognition. The second variable, x2, represents the average number of incorrect texts that the student judged to be correct. In contrast to x1, the higher x2 is, the less effective the error recognition. The third variable, x3, represents the average number of correct texts that the student judged to be incorrect. As with x2, the higher x3 is, the less effective the error recognition. Based on these three variables, the following three hypotheses were tested:


H1 The experimental group identified more incorrect texts than the control group did; that is, the experimental group has a higher x1 value than the control group.
H2 The experimental group judged fewer incorrect texts to be correct than the control group did; that is, the experimental group has a lower x2 value than the control group.
H3 The experimental group judged fewer correct texts to be incorrect than the control group did; that is, the experimental group has a lower x3 value than the control group.
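In effect, x1, x2, and x3 are the group averages of per-student hit, miss, and false-alarm counts obtained by comparing each student’s markings with the reference feedback. A scoring sketch under an assumed data layout:

// Hypothetical scoring sketch: each text segment carries the reference
// judgment (hasError, from the two writing teachers) and the student's
// judgment (markedAsError); x1-x3 are averaged over each group's students.
function scoreStudent(segments) {
  var x1 = 0, x2 = 0, x3 = 0;
  segments.forEach(function (s) {
    if (s.hasError && s.markedAsError) x1++;  // incorrect text judged incorrect
    if (s.hasError && !s.markedAsError) x2++; // incorrect text judged correct
    if (!s.hasError && s.markedAsError) x3++; // correct text judged incorrect
  });
  return { x1: x1, x2: x2, x3: x3 };
}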
5. Results and discussion
The results were analyzed with SPSS 10.0. Tables 2 and 3 list the descriptive statistics and the t-test results of variables x1, x2, and x3.
The results revealed that the experimental group had a higher x1 value [Mean = 29.04] than the control group [Mean = 19.08]. Results of the t-test further showed that the experimental group had a significantly higher x1 value than the control group [p = 0.001] (see Table 3). This implies that students in the experimental group could identify and recognize incorrect texts more effectively than the control group. Table 2 revealed that the experimental group had a lower x2 value [Mean = 25.40] than the control group [Mean = 35.80]. The difference was significant [p < 0.001] (see Table 3). This implies that students in the experimental group missed fewer incorrect texts than the control group did. It also conforms to the research hypothesis that the experimental group judged fewer incorrect texts to be correct than the control group did. Finally, Table 2 showed that the experimental group had a lower x3 value [Mean = 5.36] than the control group [Mean = 6.92]. The results indicate that the experimental group judged fewer correct texts to be incorrect than the control group did. However, the difference between the two groups was not significant [p = 0.276] (see Table 3). The results suggested that students in both groups performed similarly in judging correct texts to be incorrect. Since the experimental results of this study revealed significantly better performance in the experimental group on recognizing writing errors, they supported our hypothesis that the error feedback generated by the Online Annotator for EFL Writing system is helpful for EFL students’ error correction.
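As a quick plausibility check (our own arithmetic, not part of the original analysis), the reported t value for x1 follows from the standard pooled two-sample formula with equal group sizes ($n = 25$ per group):

$$t = \frac{\bar{x}_{exp} - \bar{x}_{ctrl}}{\sqrt{(s_{exp}^2 + s_{ctrl}^2)/n}} = \frac{29.04 - 19.08}{\sqrt{(9.7103^2 + 9.5870^2)/25}} \approx \frac{9.96}{2.73} \approx 3.65,$$

with $d.f. = 2n - 2 = 48$ under the equal-variance assumption, matching the value 3.650 in Table 3.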
As claimed by Ferris (1995), the goal of error correction should be to equip students with a range of strategies to help them become
more independent writers. The Online Annotator for EFL Writing system integrated the potential of online annotation technology and L2
research in error correction and corrective feedback. It provided several unique features that were not available in paper-based corrective feedback. These features worked together to create a more interactive environment for EFL error correction and corrective feedback.
In this study, for example, to make a correction or comment with Annotation Editor, the assessor first highlighted the error which he or she
wanted to correct. Then the assessor clicked on one of the annotation tools to activate the corresponding function to place the error correction mark into the annotation keywords. The assessor also could delete, insert, highlight, or replace any selected target. For ‘‘INSERT-FRONT”, ‘‘INSERT-BACK”, and ‘‘REPLACE” annotation tools, a pop-up window was available for entering the ‘‘inserted text” or ‘‘replaced
text”. A pop-up window was also available for entering additional explanations for each error. Unlike the case in paper-based correction,
marking up text with colored and dynamic annotations could focus the learner’s attention on a very limited amount of information, and
therefore was an efficient way to draw learners’ attention to the corrections and feedback in the document. For each annotation, the teacher
then assigned an error code using the two pull-down menus (level-one menu and level-two menu) to indicate its error category and sub
error type. This feature provided a dynamic and interactive environment for teachers to indicate the type of error that the student has
made.
The system also provided convenient tools for both teachers and students to store and query annotations. When the teacher marked
students’ essays with Annotation Editor and saved them, the system used JavaScript to automatically insert the <SPAN> tag of XHTML
around the highlighted text (annotation keywords) to show the effects of annotation marks and stored all related annotation information in Database. The Composer is unique in that it manages annotation data and allows the users (either teachers or students) to search the annotations subject to different query conditions.
Table 2
Descriptive statistics of variables x1, x2, and x3.

Variable   Group          Number   Mean    SD
x1         Experimental   25       29.04    9.7103
           Control        25       19.08    9.5870
x2         Experimental   25       25.40    8.4705
           Control        25       35.80   10.7510
x3         Experimental   25        5.36    5.1468
           Control        25        6.92    4.8642

Table 3
t-Test results of variables x1, x2, and x3.

                         F-test for variance     t-Test for mean
Variable   Variances     F        P              t        d.f.     P
x1         Equal         0.027    0.871          3.650    48       0.001***
           Unequal                               3.650    47.992   0.001***
x2         Equal         1.474    0.231          3.799    48       0.000***
           Unequal                               3.799    45.508   0.000***
x3         Equal         0.123    0.728          1.101    48       0.276
           Unequal                               1.101    47.848   0.276

*** P < 0.001.


In this system, a user could query the annotations based on (1) the assessor who makes
annotations, (2) the annotation types, and (3) the error types. In addition, this system was able to implement full-text search within ‘‘Annotation Keywords”, ‘‘Replaced Text”, and/or ‘‘Additional Explanation”. For those annotations that fulfilled the query conditions, this system applied the corresponding CSS templates to the <SPAN> tags by using JavaScript so that the corresponding annotation marks could be shown in Viewer accordingly. This function was helpful for building internal connections. The system also could help the users navigate documents, allowing them to look up information, pursue annotations, or return to earlier sections of a document. By facilitating such easy movement between texts, the tools embedded in this system could build internal connections and emphasize the intertextual nature
of writing.
In Document Viewer, students could see which parts of their essays were corrected and they could obtain detailed corrective feedback
by clicking the annotation marks. A pop-up window would come up to display additional explanations for each error. This facility has the
potential to enhance students’ metacognitive awareness of linguistic form and function. It also can be used to scaffold students’ autonomy
in correcting errors and in reflecting on their writing. If the learner makes use of those learning facilities, the features may stimulate or
substitute for the student’s own error corrective strategies. Since second language students were often asked to correct grammatical errors,
but seldom were they told to categorize them (Lee, 1997), we assume that the facilities have the potential to reduce learners’ cognitive
overload and scaffold different error correction strategies that help students learn to move from grammatical drills to independent writing.
Since neither of these dimensions has been explored in depth in the literature, our further work will involve using this system to examine
the relationship between EFL students’ metalinguistic development and error correction strategies as well as the effectiveness of the system to reduce EFL writer’s cognitive overload.
The Analyzed Result Viewer displayed the four error analysis modes analyzed by Error Analyzer in bar charts: single document for an
individual student, all documents for an individual student, single document for all students, and all documents for all students. This function not only helped to raise students’ metacognitive and metalinguistic awareness, it also supported teachers to incorporate student feedback into their courses. Feedback on students’ writing is a form of social action designed to accomplish learning and social goals (Hyland &
Hyland, 2006). For building external connections, the Online Annotator for EFL Writing system provided a good way for EFL students and
teachers to share error comments and allow extended conversations to take place in the context of a common text. As such, teachers could
establish a collaborative relationship with students.
As Krashen (1982) has pointed out, one of the features that encourage second language acquisition is to provide tools—devices that help
control the quantity and quality of input. In this study, the online annotation system provided several unique tools that worked together to
help control the quantity and quality of language input. It is worth mentioning that the Online Annotator for EFL Writing designed in this
study can be easily modified by adding or deleting major category codes or subcodes to fit the instructional or research needs. Because of its
convenience, this system holds great potential for writing instruction. As the online composition classroom has become more common on
university campuses worldwide, we believe that such a system has particular value as a training environment for students who are not
effective in error correction in paper-based situations or when paper-and-pen corrections are not available. In fact, the aforementioned
functions of the system demonstrated its significant value and importance in students’ learning processes in error correction and corrective
feedback.
There are two limitations of this study that should be considered. The first limitation regards the system interface. In the current
version of Online Annotator for EFL Writing, Analyzed Result Viewer and Error Analyzer are available to both teachers and students.
However, teachers could access all the searchable learner documents and annotations stored in Database, while each student could see only his or her own corrective feedback in Document Viewer. This limits the scope of the collaborative relationship. To enhance collaboration between students, the system can be further modified
to facilitate peer editing and annotation sharing among students. For instance, we can add a ‘‘peer editing” component to the system.
By doing so, a student’s writing can be reviewed by other students. The second limitation concerns the experimental design. In this
study, we presented a comparative research on computer-based corrective feedback versus paper-based corrective feedback on EFL students’ error correction. However, as cautioned by researchers (e.g., Felix, 2005; Pederson, 1987), it is also important to investigate how
technologies are impacting students’ learning processes, rather than looking at overall effectiveness of the system. Therefore, this study
can be extended to investigate the differential effects of different variables in this system. We see great value in Online Annotator for EFL
Writing that facilitates the process of corrective feedback and collaborative learning. Error correction strategies used by EFL students in
peer feedback on writing can also be investigated. An additional consideration with regard to research methodology is the sample population. This study obtained permission of 50 EFL students who registered as freshmen in the department. The number of subjects in
each group was limited to 25 because of practical reasons. Future research could replicate this study with more subjects to further test
the hypothesis of this study. Future research could investigate the long-term effects of online annotations on student writing development as well.

6. Conclusion
Our research was motivated by the increasing need for effective writing feedback and correction methods in online composition classrooms. To this end, this study developed an online error correction and corrective feedback system called Online Annotator for EFL Writing.
The system consisted of five modules: Document Maker, Annotation Editor, Composer, Error Analyzer, and Viewer. With this system,
users could make error corrections on online documents with computer annotations. The system could display the grammatical error types
and an additional explanation for each error, and it allowed users to query the annotations stored in the Database. The main focus of the experiment
was to examine the relative effectiveness of computer-based versus paper-based corrective feedback on EFL students’
error recognition. The experimental results revealed that the experimental group, which used Online Annotator for EFL Writing, identified more errors than the control group did; moreover, the experimental group missed fewer incorrect text segments than the control
group did. These results were encouraging and suggested that the system could realize the concept of scaffolding: corrective feedback delivered through the online annotation system can be used by student writers to develop their own correction strategies. Limitations and directions for future research have also been discussed.

Acknowledgements
We gratefully acknowledge the research support of the National Science Council of Taiwan (NSC 93-2411-H033-010). We would also
like to thank the anonymous reviewers for insightful comments on an earlier version of this paper.
References
Bargeron, D., Gupta, A., Sanocki, E., & Grudin, J. (1999). Annotations for streaming video on the web: System design and usage studies. Microsoft Research Report. <http://
www.research.microsoft.com/research/coet/mras/www8/paper.doc> Retrieved 19.01.02.
Berry, R. (1995). Language teachers and metalinguistic terminology. Paper presented at the Third international conference on teacher education in second language teaching,
Hong Kong, City University of Hong Kong, March 14–16, 1995.
Brinko, K. T. (1993). The practice of giving feedback to improve teaching. Journal of Higher Education, 64(5), 574–593.
Dagneaux, E., Denness, S., & Granger, S. (1998). Computer-aided error analysis. System, 26, 163–174.
Du, M. C. (2004). Personalized annotation management for web based learning service. Master thesis. Chungli, Taiwan: National Central University.
Enginarlar, H. (1993). Student response to teacher feedback in EFL writing. System, 21(2), 193–204.
Ellis, R. (2007). Corrective feedback in theory, research and practice [Abstract]. Presented at the 5th international conference on ELT in China & the 1st congress of Chinese applied
linguistics. Beijing, China: Beijing Foreign Studies University, May 17–20, 2007. <http://www.celea.org.cn/2007/edefault.asp> Retrieved 23.10.07.
Felix, U. (2005). Analysing recent CALL effectiveness research-toward a common agenda. Computer Assisted Language Learning, 18(2), 1–32.
Ferris, D. (1995). Teaching ESL composition students to become independent self-editors. TESOL Journal, 4(4), 18–22.
Ferris, D. (2006). Does error feedback help student writers? New evidence on the short- and long-term effects of written error correction. In K. Hyland & F. Hyland (Eds.),
Feedback in second language writing: Contexts and issues (pp. 81–104). New York: Cambridge University Press.
Golovchinsky, G., & Marshall, C. (2000). Hypertext interaction revisited. Hypertext 2000. New York: Association for Computing Machinery.
Hedgcock, J., & Lefkowitz, N. (1994). Feedback on feedback: Assessing learner receptivity to teacher response in L2 composing. Journal of Second Language Writing, 3, 141–163.
Hyland, K., & Hyland, F. (2006). Feedback in second language writing: Contexts and issues. New York: Cambridge University Press.
Jones, L. C., & Plass, J. L. (2002). Supporting listening comprehension and vocabulary acquisition in French with multimedia annotations. The Modern Language Journal, 86(4),
546–561.
Koch, S., & Schneider, G. (2000). Implementation of an annotation service on the WWW – Virtual Notes. In Proceedings of PDP’2000 – 8th euromicro workshop on parallel and
distributed processing. Rhodos, Greece, January 19–21, 2000. IEEE Computer Society Press.
Krashen, S. (1982). Principles and practice in second language acquisition. New York, NY: Pergamon.
Krashen, S. (1985). The input hypothesis: Issues and implications. London: Longman.
Kubota, M. (2001). Error correction strategies used by learners of Japanese when revising a writing task. System, 29, 467–480.
Lee, I. (1997). ESL learners’ performance in error correction in writing: Some implications for teaching. System, 25(4), 465–477.
Lee, I. (2003). L2 writing teachers’ perspectives, practices and problems regarding error feedback. Assessing Writing, 8, 216–237.
Leki, I. (1991). Twenty-five years of contrastive rhetoric: Text analysis and writing pedagogies. TESOL Quarterly, 25, 123–143.
Magilow, D. H. (1999). Case study #2: Error correction and classroom affect. Teaching German, 32(2), 125–129.
Miller, S. K. (2001). A review of research on distance education in computers and composition. Computers and Composition, 18, 423–430.
Nagata, N. (1993). Intelligent computer feedback for second language instruction. The Modern Language Journal, 77(3), 330–339.
Nagata, N. (1997). The effectiveness of computer-assisted metalinguistic instruction: A case study in Japanese. Foreign Language Annals, 30(2), 187–200.
Ogata, H., Feng, C., Hada, Y., & Yano, Y. (2000). Online markup based language learning environment. Computers and Education: An International Journal, 34(1), 51–66.
Ovsiannikov, I. A., Arbib, M. A., & McNeill, T. H. (1999). Annotation technology. International Journal of Human–Computer Studies, 50, 329–362.
Pederson, K. M. (1987). Research on CALL. In S. Flint (Ed.), Modern media in foreign language education: Theory and implementation (pp. 99–131). New York: Cambridge
University Press.
Peterson, P. W. (2001). The debate about online learning: Key issues for writing teachers. Computers and Composition, 18, 359–370.
Rice, G. E. (1994). Examining constructs in reading comprehension using two presentation modes: Paper vs. computer. Journal of Educational Computing Research, 11, 153–178.
Schulz, R. A. (1996). Focus on form in the foreign language classroom: Students’ and teachers’ views on error correction and the role of grammar. Foreign Language Annals,
29(3), 343–364.
Skehan, P. (1998). A cognitive approach to language learning. Oxford: Oxford University Press.
Truscott, J. (1996). The case against grammar correction in L2 writing classes. Language Learning, 46, 327–369.
Truscott, J. (1999). The case for the case against grammar correction in L2 writing classes: A response to Ferris. Journal of Second Language Writing, 8, 111–122.
van Els, T. et al. (1984). Applied linguistics and the learning and teaching of foreign languages. London: Edward Arnold.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
Ware, P. D., & Warschauer, M. (2006). Electronic feedback and second language writing. In K. Hyland & F. Hyland (Eds.), Feedback in second language writing: Contexts and issues
(pp. 118–122). New York: Cambridge University Press.
Wible, D., Kuo, C. H., Chien, F. Y., Liu, A., & Tsao, N. L. (2001). A Web-based EFL writing environment: Integrating information for learners, teachers, and researchers. Computers
and Education: An International Journal, 37, 297–315.
Wolfe, J., & Neuwirth, C. M. (2001). From the margins to the center: The future of annotation. Journal of Business and Technical Communication, 15(3), 333–371.
Wolfe, J. (2002). Annotation technologies: A software and research review. Computers and Composition, 19, 471–497.
Yates, R., & Kenkel, J. (2002). Responding to sentence-level errors in writing. Journal of Second Language Writing, 11, 29–47.
Zamel, V. (1983). The composing process of advanced ESL students: Six case studies. TESOL Quarterly, 17, 165–187.
