
US20050042592A1 - Methods for automated essay analysis - Google Patents

Methods for automated essay analysis

Info

Publication number
US20050042592A1
US20050042592A1 US10/948,417 US94841704A US2005042592A1 US 20050042592 A1 US20050042592 A1 US 20050042592A1 US 94841704 A US94841704 A US 94841704A US 2005042592 A1 US2005042592 A1 US 2005042592A1
Authority
US
United States
Prior art keywords
essay
sentence
discourse
features
probability
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US10/948,417
Other versions
US7729655B2 (en
Inventor
Jill Burstein
Daniel Marcu
Vyacheslav Andreyev
Martin Chodorow
Claudia Leacock
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Educational Testing Service
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US10/948,417 priority Critical patent/US7729655B2/en
Assigned to EDUCATIONAL TESTING SERVICE reassignment EDUCATIONAL TESTING SERVICE ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MARCU, DANIEL, ANDREYEV, VYACHESLAV, BURSTEIN, JILL, CHODOROW, MARTIN SANFORD, LEACOCK, CLAUDIA
Publication of US20050042592A1 publication Critical patent/US20050042592A1/en
Priority to US12/785,721 priority patent/US8452225B2/en
Application granted granted Critical
Publication of US7729655B2 publication Critical patent/US7729655B2/en
Expired - Fee Related legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B7/00Electrically-operated teaching apparatus or devices working with questions and answers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/253Grammatical analysis; Style critique
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B7/00Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B7/02Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student

Definitions

  • This invention relates generally to document processing and automated identification of discourse elements, such as a thesis statement, in an essay.
  • the invention facilitates the automatic analysis, identification and classification of discourse elements in a sample of text.
  • the invention is a method for automated analysis of an essay.
  • the method comprises the steps of accepting an essay; determining whether each of a predetermined set of features is present or absent in each sentence of the essay; for each sentence in the essay, calculating a probability that the sentence is a member of a certain discourse element category, wherein the probability is based on the determinations of whether each feature in the set of features is present or absent; and choosing a sentence as the choice for the discourse element category, based on the calculated probabilities.
  • the discourse element category of preference is the thesis statement.
  • the essay is preferably in the form of an electronic document, such as an ASCII file.
  • the predetermined set of features preferably comprises the following: a feature based on the position within the essay; a feature based on the presence or absence of certain words wherein the certain words comprise words of belief that are empirically associated with thesis statements; and a feature based on the presence or absence of certain words wherein the certain words comprise words that have been determined to have a rhetorical relation based on the output of a rhetorical structure parser.
  • the calculation of the probabilities is preferably done in the form of a multivariate Bernoulli model.
  • the invention is a process of training an automated essay analyzer.
  • the training process accepts a plurality of essays and manual annotations demarking discourse elements in the plurality of essays.
  • the training process accepts a set of features that purportedly correlate with whether a sentence in an essay is a particular type of discourse element.
  • the training process calculates empirical probabilities relating to the frequency of the features and relating features in the set of features to discourse elements.
  • the invention is computer readable media on which are embedded computer programs that perform the above method and process.
  • certain embodiments of the invention are capable of achieving certain advantages, including some or all of the following: (1) eliminating the need for human involvement in providing feedback about an essay; (2) improving the timeliness of feedback to a writer of an essay; and (3) cross-utilization of automated essay analysis parameters determined from essays on a given topic to essays on different topics or responding to different questions.
  • FIG. 1 is a flowchart of a method for providing automated essay feedback, according to an embodiment of the invention.
  • FIG. 2 is a flowchart of a process for training the automated essay feedback method of FIG. 1 , according to an embodiment of the invention.
  • a Bayesian classifier can be built using the following features: a) sentence position, b) words commonly used in thesis statements, and c) discourse features, based on rhetorical structure theory (RST) parses.
  • RST rhetorical structure theory
  • a thesis statement is generally defined as the sentence that explicitly identifies the purpose of the paper or previews its main ideas. Although this definition seems straightforward enough and suggests that identifying the thesis statement in an essay should be clear-cut even for people, this is not always the case. In essays written by developing writers, thesis statements are not stated so clearly, and ideas are repeated. As a result, human readers sometimes independently choose different thesis statements from the same student essay.
  • this system can be used to indicate to students, as feedback, the discourse elements in their essays.
  • Such a system could present to students a guided list of questions to consider about the quality of the discourse. For instance, it has been suggested by writing experts that if the thesis statement of a student's essay could be automatically provided, the student could then use this information to reflect on the thesis statement and its quality.
  • such an instructional application could utilize the thesis statement to discuss other types of discourse elements in the essay, such as the relationship between the thesis statement and the conclusion, and the connection between the thesis statement and the main points in the essay.
  • students are often presented with a “Revision Checklist.”
  • the “Revision Checklist” is intended to facilitate the revision process.
  • FIG. 1 is a flowchart of a method 100 for providing automated essay analysis, according to an embodiment of the invention.
  • the method 100 estimates which sentence in an essay is most likely to belong to a certain discourse category, such as thesis statement, conclusion, etc.
  • the method 100 begins by accepting ( 110 ) an essay.
  • the essay is preferably in electronic form at this step.
  • the method 100 next performs a loop 115 .
  • the method 100 makes one pass through the loop 115 for each sentence in the essay.
  • Each pass of the loop 115 gets ( 120 ) the next sentence and determines ( 130 ) the presence or absence of each feature A1 . . . An (the features A1 . . . An having been predetermined to be relevant to the particular discourse category).
  • the loop 115 next computes ( 140 ) a probability expression for each sentence (S) for the discourse category (T) using the formula below.
  • the method 100 next tests ( 150 ) whether the current sentence is the last and loops back to the getting next sentence step 120 if not. After a probability expression has been evaluated for every sentence, the method 100 chooses ( 160 ) the sentence with the maximum probability expression for the particular discourse category. The method 100 can be repeated for each different discourse category.
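The sentence loop and the final maximum-probability choice (steps 115 through 160) can be sketched compactly. This is a minimal illustration, assuming a scoring function that evaluates the probability expression for each sentence; the function name and signature below are illustrative, not taken from the patent:

```python
def choose_sentence(sentences, score):
    """Steps 115-160: make one pass per sentence, then pick the sentence
    with the maximum probability expression for the discourse category.

    sentences: list of sentence strings
    score:     function mapping (sentence, index) to log P(T|S)
    """
    best = max(range(len(sentences)), key=lambda i: score(sentences[i], i))
    return best, sentences[best]
```

Repeating this call with a different `score` function for each discourse category (thesis statement, conclusion, etc.) mirrors the patent's note that the method can be repeated per category.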
  • the accepting step 110 directly accepts the document in an electronic form, such as an ASCII file.
  • the accepting step 110 comprises the steps of scanning a paper form of the essay and performing optical character recognition on the scanned paper essay.
  • the determining step 130 and computing step 140 repeat through the indexed list of features A1 . . . AN and update the value of the probability expression based on the presence or absence of each feature A1 . . . AN.
  • In another embodiment of the determining step 130 and computing step 140, the presence or absence of all features A1 . . . AN could be determined ( 130 ) first, and then the probability expression could be computed ( 140 ) for that sentence.
  • the steps of the method 100 can be performed in an order different from that illustrated, or simultaneously, in alternative embodiments.
  • the method 100 uses the discourse category to estimate which sentence in an essay is most likely to be the thesis statement. Assume that the method 100 utilizes only positional and word occurrence features to identify the thesis statement, as follows:
  • the method 100 begins by reading ( 110 ) the following brief essay:
  • the method 100 loops through each sentence of the above essay, sentence by sentence.
  • the first sentence denoted S1 is “Most of the time. . . life.”
  • the observed features of S1 are /W_FEEL, SP_1, /SP_2, /SP_3 and /SP_4, as this sentence is the first sentence of the essay and does not contain the word “feel.”
  • the second “sentence,” denoted S2, is actually two sentences, but the method can treat a group of sentences as a single sentence when, for example, the sentences are related in a certain manner, such as in this case where the second sentence begins with the phrase “For example . . . ”.
  • S2 in this example is “We put . . . army.” Its features are /SP_1, SP_2, /SP_3, /SP_4 and W_FEEL, as would be determined by the step 130.
  • the second sentence is chosen ( 160 ) as the most likely thesis statement, according to the method 100 .
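The feature determinations of step 130 in this example can be reproduced with simple indicator functions. This is a sketch under the assumption that W_FEEL tests for the literal word "feel" and SP_k tests sentence position; the sentence texts used below are stand-ins, since the essay is only excerpted above:

```python
def sentence_features(sentence_text, position, n_positions=4):
    """Indicator features from the example: W_FEEL (the word "feel"
    occurs as a token) and SP_1..SP_4 (sentence position; the position
    argument is 0-based)."""
    feats = {"W_FEEL": "feel" in sentence_text.lower().split()}
    for k in range(1, n_positions + 1):
        feats["SP_%d" % k] = (position == k - 1)
    return feats
```

With S1 at position 0 and no "feel", this yields SP_1 true and W_FEEL false, matching the observed features listed above; for S2 the reverse pattern holds.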
  • FIG. 2 is a flowchart of a process 200 for training the method 100 , according to an embodiment of the invention.
  • the process 200 begins by accepting ( 210 ) a plurality of essays.
  • the essays are preferably in electronic form at this step.
  • the method 200 accepts ( 220 ) manual annotations.
  • the method 200 determines ( 225 ) the universe of all possible features A1 . . . An.
  • the method 200 computes ( 260 ) the empirical probability relating to each feature Ai across the plurality of essays.
  • the preferred method of accepting ( 210 ) the plurality of essays is in the form of electronic documents and the preferred electronic format is ASCII.
  • the preferred method of accepting ( 210 ) the plurality of essays is in the form of stored or directly entered electronic text.
  • the essays could be accepted ( 210 ) utilizing a method comprised of the steps of scanning the paper forms of the essays, and performing optical character recognition on the scanned paper essays.
  • the preferred method of accepting ( 220 ) manual annotations is in the form of electronic text essays that have been manually annotated by humans skilled in the art of discourse element identification.
  • the preferred method of indicating the manual annotation of the pre-specified discourse elements is by the bracketing of discourse elements within starting and ending “tags” (e.g. <Sustained Idea> . . . </Sustained Idea>, <Thesis Statement> . . . </Thesis Statement>).
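The bracketing convention above can be read back out of an annotated essay with a small routine. This is a sketch, assuming the annotations are well-formed and non-nested; the function name is illustrative:

```python
import re

def extract_annotations(text, element="Thesis Statement"):
    """Pull out the spans bracketed by starting and ending tags of the
    form <Element> ... </Element>, per the annotation convention."""
    pattern = re.compile(r"<{0}>(.*?)</{0}>".format(re.escape(element)),
                         re.DOTALL)
    return [span.strip() for span in pattern.findall(text)]
```

The same call with `element="Sustained Idea"` (or any other pre-specified discourse element name) recovers those annotations.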
  • the preferred embodiment of method 200 determines ( 225 ) the universe of all possible features for a particular discourse item.
  • the feature determination step 225 begins by determining ( 230 ) the universe of positional features A1 . . . Ak.
  • the feature determination step 225 determines ( 240 ) the universe of word choice features Ak+1 . . . Am.
  • the feature determination step 225 determines ( 250 ) the universe of rhetorical structure theory (RST) features Am+1 . . . AN.
  • An embodiment of the positional features determination step 230 loops through each essay in the plurality of essays, noting the position of demarked discourse elements within each essay and determining the number of sentences in that essay.
  • An embodiment of the word choice features determination step 240 parses the plurality of essays and creates a list of all words contained within the sentences marked by a human annotator as being a thesis statement.
  • the word choice universe determination step 240 can also accept a predetermined list of words of belief, words of opinion, etc.
  • the RST parser of preference utilized in step 250 is described in Marcu, D., “The Rhetorical Parsing of Natural Language Texts,” Proceedings of the 35th Annual Meeting of the Assoc. for Computational Linguistics, 1997, pp. 96-103, which is hereby incorporated by reference. Further background on RST is available in Mann, W. C. and S. A. Thompson, “Rhetorical Structure Theory: Toward a Functional Theory of Text Organization,” Text 8(3), 1988, pp. 243-281, which is also hereby incorporated by reference.
  • the method 200 computes ( 260 ) the empirical frequencies relating to each feature Ai across the plurality of essays. For a sentence (S) in the discourse category (T), the following probabilities are determined for each Ai: P(T), the prior probability that a sentence is in discourse category T; P(Ai|T), the conditional probability of a sentence having feature Ai, given that the sentence is in T; P(Ai), the prior probability that a sentence contains feature Ai; P(Āi|T), the conditional probability that a sentence does not have feature Ai, given that it is in T; and P(Āi), the prior probability that a sentence does not contain feature Ai.
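The frequency computation of step 260 amounts to counting, across all annotated sentences, how often each feature occurs overall and how often it co-occurs with the target discourse category. A minimal sketch, assuming each training sentence arrives as a (feature-dict, label) pair; the data layout is an assumption for illustration:

```python
def empirical_probabilities(labeled_sentences, feature_names):
    """Estimate P(T), P(A_i), and P(A_i|T) by counting frequencies.

    labeled_sentences: list of (features, is_in_category) pairs, where
    features maps feature name -> bool and is_in_category is True for
    sentences annotated as the target discourse element.
    """
    n = len(labeled_sentences)
    n_t = sum(1 for _, in_t in labeled_sentences if in_t)
    p_t = n_t / n  # prior probability a sentence is in category T
    p_a, p_a_given_t = {}, {}
    for name in feature_names:
        have = sum(1 for f, _ in labeled_sentences if f[name])
        have_t = sum(1 for f, in_t in labeled_sentences if in_t and f[name])
        p_a[name] = have / n
        p_a_given_t[name] = have_t / n_t if n_t else 0.0
    return p_t, p_a, p_a_given_t
```

The complement probabilities P(Āi) and P(Āi|T) follow directly as one minus the corresponding estimates.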
  • the method 100 and the process 200 can be performed by computer programs.
  • the computer programs can exist in a variety of forms, both active and inactive.
  • the computer programs can exist as software program(s) comprised of program instructions in source code, object code, executable code or other formats; firmware program(s); or hardware description language (HDL) files.
  • Any of the above can be embodied on a computer readable medium, which includes storage devices and signals, in compressed or uncompressed form.
  • Exemplary computer readable storage devices include conventional computer system RAM (random access memory), ROM (read only memory), EPROM (erasable, programmable ROM), EEPROM (electrically erasable, programmable ROM), and magnetic or optical disks or tapes.
  • Exemplary computer readable signals are signals that a computer system hosting or running the computer programs can be configured to access, including signals downloaded through the Internet or other networks.
  • Concrete examples of the foregoing include distribution of executable software program(s) of the computer program on a CD ROM or via Internet download.
  • the Internet itself, as an abstract entity, is a computer readable medium. The same is true of computer networks in general.
  • Experiment 1 utilizes a Bayesian classifier for thesis statements using essay responses to one English Proficiency Test (EPT) question: Topic B.
  • EPT English Proficiency Test
  • Table 1 indicates agreement between two human annotators for the labeling of thesis statements.
  • the table shows the baseline performance in two ways. Thesis statements commonly appear at the very beginning of an essay. So, we used a baseline method where the first sentence of each essay was automatically selected as the thesis statement. This position-based selection was then compared to the resolved human annotator thesis selection (i.e., final annotations agreed upon by the two human annotators) for each essay (Position-Based&H). In addition, random thesis statement selections were compared with humans 1 and 2 , and the resolved thesis statement (Random&H). The % Overlap column in Table 1 indicates the percentage of the time that the two annotators selected the exact same text as the thesis statement.
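The Position-Based&H baseline in Table 1 simply selects the first sentence of each essay, so its percent overlap with the resolved annotations is a hit rate. A trivial sketch, assuming the resolved thesis for each essay is given as a 0-based sentence index:

```python
def position_baseline_overlap(gold_indices):
    """Percent of essays whose resolved human thesis annotation is the
    first sentence, i.e. the agreement of the first-sentence baseline."""
    hits = sum(1 for g in gold_indices if g == 0)
    return 100.0 * hits / len(gold_indices)
```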
  • Experiment 2 utilized three general feature types to build the classifier: a) sentence position, b) words commonly occurring in a thesis statement, and c) RST labels from outputs generated by an existing rhetorical structure parser (Marcu, 1997). The classifier was trained to predict thesis statements in an essay. Using the multivariate Bernoulli formula, below, this gives us the log probability that a sentence (S) in an essay belongs to the class (T) of sentences that are thesis statements.
  • Experiment 2 utilized three kinds of features to build the classifier. These were a) positional, b) lexical, and c) Rhetorical Structure Theory-based discourse features (RST).
  • Regarding the positional feature, we found that in the human-annotated data the annotators typically marked as the thesis a sentence toward the beginning of the essay. So, sentence position was a relevant feature.
  • Regarding lexical information, our research indicated that using as features the words in sentences annotated as thesis statements also proved useful toward the identification of a thesis statement. In addition, information from RST-based parse trees is or can be useful.
  • For a) the thesis word list, we included lexical information from thesis statements in the following way to build the thesis statement classifier.
  • a vocabulary list was created that included one occurrence of each word used in a thesis statement (in training set essays). All words in this list were used as a lexical feature to build the thesis statement classifier. Since we found that our results were better if we used all words used in thesis statements, no stop list was used.
  • the belief word list included a small dictionary of approximately 30 words and phrases, such as opinion, important, better, and in order that. These words and phrases were common in thesis statement text. The classifier was trained on this set of words, in addition to the thesis word vocabulary list.
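The thesis word vocabulary and the belief word list can be combined into a single lexical feature extractor. This is a sketch assuming simple whitespace tokenization and bag-of-words indicator features; multi-word belief phrases such as "in order that" are ignored here, and the full ~30-item belief dictionary is not reproduced in the patent text, so the sample entries are illustrative:

```python
def build_lexical_features(training_thesis_sentences, belief_words=None):
    """Build the thesis-word vocabulary (one occurrence of each word seen
    in any training thesis statement, with no stop list, as described
    above) plus a belief-word list, and return a feature extractor."""
    vocabulary = set()
    for sentence in training_thesis_sentences:
        vocabulary.update(sentence.lower().split())
    belief = set(w.lower() for w in (belief_words or []))

    def features(sentence):
        words = set(sentence.lower().split())
        return {
            "thesis_word": bool(words & vocabulary),
            "belief_word": bool(words & belief),
        }
    return vocabulary, features
```

A production variant would likely use one indicator feature per vocabulary word (as a multivariate Bernoulli model suggests) rather than the single aggregate indicators used in this sketch.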
  • In RST, one can associate a rhetorical structure tree with any text.
  • the leaves of the tree correspond to elementary discourse units and the internal nodes correspond to contiguous text spans. Text spans are represented at the clause and sentence level.
  • Each node in a tree is characterized by a status (nucleus or satellite) and a rhetorical relation, which is a relation that holds between two non-overlapping text spans.
  • the distinction between nuclei and satellites comes from the empirical observation that the nucleus expresses what is more essential to the writer's intention than the satellite; and that the nucleus of a rhetorical relation is comprehensible independent of the satellite, but not vice versa.
  • when both spans are equally important, the relation is multinuclear. Rhetorical relations reflect semantic, intentional, and textual relations that hold between text spans.
  • one text span may elaborate on another text span; the information in two text spans may be in contrast; and the information in one text span may provide background for the information presented in another text span.
  • the algorithm considers two pieces of information from RST parse trees in building the classifier: a) is the parent node for the sentence a nucleus or a satellite, and b) what elementary discourse units are associated with thesis versus non-thesis sentences.
  • Table 2 indicates performance for 6 cross-validation runs. In these runs, 5/6 of the data were used for training and 1/6 for subsequent cross-validation. Agreement is evaluated on the 1/6 of the data. For this experiment, inclusion of the following features to build the classifier yielded the results in Table 2: a) sentence position, b) both RST feature types, and c) the thesis word list.
  • This cross-validation method was applied to the entire data set (All), where the training sample contained 78 thesis statements, and to a gold-standard set where 49 essays (GS) were used for training.
  • the gold-standard set includes essays where human readers agreed on annotations independently.
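The 6-fold regime of Table 2 (train on 5/6 of the essays, evaluate on the held-out 1/6) can be sketched as follows; this assumes the essay count divides evenly by six, and the seeded shuffle is an implementation choice not specified in the patent:

```python
import random

def six_fold_splits(essays, seed=0):
    """Yield (train, held_out) splits where 5/6 of the essays train the
    classifier and the remaining 1/6 is held out for evaluation."""
    essays = list(essays)
    random.Random(seed).shuffle(essays)
    fold_size = len(essays) // 6
    for k in range(6):
        held = essays[k * fold_size:(k + 1) * fold_size]
        train = essays[:k * fold_size] + essays[(k + 1) * fold_size:]
        yield train, held
```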

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Machine Translation (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

An essay is analyzed automatically by accepting the essay and determining whether each of a predetermined set of features is present or absent in each sentence of the essay. For each sentence in the essay a probability that the sentence is a member of a certain discourse element category is calculated. The probability is based on the determinations of whether each feature in the set of features is present or absent. Furthermore, based on the calculated probabilities, a sentence is chosen as the choice for the discourse element category.

Description

  • This application claims priority to U.S. Provisional Patent Application No. 60/263,223, filed Jan. 23, 2001, which is incorporated herein by reference.
  • FIELD OF THE INVENTION
  • This invention relates generally to document processing and automated identification of discourse elements, such as a thesis statement, in an essay.
  • BACKGROUND OF THE INVENTION
  • Given the success of automated essay scoring technology, such applications have been integrated into current standardized writing assessments. The writing community has expressed an interest in the development of essay evaluation systems that include feedback about essay characteristics to facilitate the essay revision process.
  • There are many factors that contribute to overall improvement of developing writers. These factors include, for example, refined sentence structure, variety of appropriate word usage, and organizational structure. The improvement of organizational structure is believed to be critical in the essay revision process toward overall essay quality. Therefore, it would be desirable to have a system that could indicate as feedback to students, the discourse elements in their essays.
  • SUMMARY OF THE INVENTION
  • The invention facilitates the automatic analysis, identification and classification of discourse elements in a sample of text.
  • In one respect, the invention is a method for automated analysis of an essay. The method comprises the steps of accepting an essay; determining whether each of a predetermined set of features is present or absent in each sentence of the essay; for each sentence in the essay, calculating a probability that the sentence is a member of a certain discourse element category, wherein the probability is based on the determinations of whether each feature in the set of features is present or absent; and choosing a sentence as the choice for the discourse element category, based on the calculated probabilities. The discourse element category of preference is the thesis statement. The essay is preferably in the form of an electronic document, such as an ASCII file. The predetermined set of features preferably comprises the following: a feature based on the position within the essay; a feature based on the presence or absence of certain words wherein the certain words comprise words of belief that are empirically associated with thesis statements; and a feature based on the presence or absence of certain words wherein the certain words comprise words that have been determined to have a rhetorical relation based on the output of a rhetorical structure parser. The calculation of the probabilities is preferably done in the form of a multivariate Bernoulli model.
  • In another respect, the invention is a process of training an automated essay analyzer. The training process accepts a plurality of essays and manual annotations demarking discourse elements in the plurality of essays. The training process accepts a set of features that purportedly correlate with whether a sentence in an essay is a particular type of discourse element. The training process calculates empirical probabilities relating to the frequency of the features and relating features in the set of features to discourse elements.
  • In yet other respects, the invention is computer readable media on which are embedded computer programs that perform the above method and process.
  • In comparison to known prior art, certain embodiments of the invention are capable of achieving certain advantages, including some or all of the following: (1) eliminating the need for human involvement in providing feedback about an essay; (2) improving the timeliness of feedback to a writer of an essay; and (3) cross-utilization of automated essay analysis parameters determined from essays on a given topic to essays on different topics or responding to different questions. Those skilled in the art will appreciate these and other advantages and benefits of various embodiments of the invention upon reading the following detailed description of a preferred embodiment with reference to the below-listed drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a flowchart of a method for providing automated essay feedback, according to an embodiment of the invention; and
  • FIG. 2 is a flowchart of a process for training the automated essay feedback method of FIG. 1, according to an embodiment of the invention.
  • DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT
  • I. Overview
  • Using a small corpus of essay data where thesis statements have been manually annotated, a Bayesian classifier can be built using the following features: a) sentence position, b) words commonly used in thesis statements, and c) discourse features, based on rhetorical structure theory (RST) parses. Experimental results indicate that this classification technique may be used toward the automatic identification of thesis statements in essays. Furthermore, the method generalizes across essay topics.
  • A thesis statement is generally defined as the sentence that explicitly identifies the purpose of the paper or previews its main ideas. Although this definition seems straightforward enough and suggests that identifying the thesis statement in an essay should be clear-cut even for people, this is not always the case. In essays written by developing writers, thesis statements are not stated so clearly, and ideas are repeated. As a result, human readers sometimes independently choose different thesis statements from the same student essay.
  • The value of this system is that it can be used to indicate to students, as feedback, the discourse elements in their essays. Such a system could present to students a guided list of questions to consider about the quality of the discourse. For instance, it has been suggested by writing experts that if the thesis statement of a student's essay could be automatically provided, the student could then use this information to reflect on the thesis statement and its quality. In addition, such an instructional application could utilize the thesis statement to discuss other types of discourse elements in the essay, such as the relationship between the thesis statement and the conclusion, and the connection between the thesis statement and the main points in the essay. In the teaching of writing, students are often presented with a “Revision Checklist.” The “Revision Checklist” is intended to facilitate the revision process. This is a list of questions posed to the student that help the student reflect on the quality of their writing. So, for instance, such a list might pose questions as in the following: (a) Is the intention of my thesis statement clear? (b) Does my thesis statement respond directly to the essay question? (c) Are the main points in my essay clearly stated? and (d) Do the main points in my essay relate to my original thesis statement?
  • The ability to automatically identify, and present to students the discourse elements in their essays can help them to focus and reflect on the critical discourse structure of the essay. In addition, the ability for the application to indicate to the student that a discourse element could not be located, perhaps due to the ‘lack of clarity’ of this element could also be helpful. Assuming that such a capability were reliable, this would force the writer to think about the clarity of a given discourse element, such as a thesis statement.
  • II. Providing Automated Essay Analysis
  • FIG. 1 is a flowchart of a method 100 for providing automated essay analysis, according to an embodiment of the invention. The method 100 estimates which sentence in an essay is most likely to belong to a certain discourse category, such as thesis statement, conclusion, etc. The method 100 begins by accepting (110) an essay. The essay is preferably in electronic form at this step. The method 100 next performs a loop 115. The method 100 makes one pass through the loop 115 for each sentence in the essay. Each pass of the loop 115 gets (120) the next sentence and determines (130) the presence or absence of each feature A1 . . . An (the features A1 . . . An having been predetermined to be relevant to the particular discourse category). If more than one discourse category is evaluated, a different set of features A1 . . . An may be predetermined for each discourse category. The loop 115 next computes (140) a probability expression for each sentence (S) for the discourse category (T) using the formula below.

        log[P(T|S)] = log[P(T)] + Σi { log[P(Ai|T)/P(Ai)]    if Ai is present
                                       log[P(Āi|T)/P(Āi)]    if Ai is not present

    where P(T) is the prior probability that a sentence is in discourse category T; P(Ai|T) is the conditional probability of a sentence having feature Ai, given that the sentence is in T; P(Ai) is the prior probability that a sentence contains feature Ai; P(Āi|T) is the conditional probability that a sentence does not have feature Ai, given that it is in T; and P(Āi) is the prior probability that a sentence does not contain feature Ai. Performance can be improved by using a Laplace estimator to deal with cases when the probability estimates are zero.
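Putting the formula and the Laplace estimator together, the per-sentence log probability can be sketched as follows. This is a minimal illustration that realizes the estimator as add-k smoothing of the counted frequencies (the patent names a Laplace estimator but does not spell out its exact form, so that detail is an assumption):

```python
import math

def log_prob_thesis(feature_present, counts, n_sentences, n_thesis, k=1.0):
    """Evaluate log P(T|S) for one sentence under the multivariate
    Bernoulli model, smoothing each probability estimate with add-k
    (k=1 gives the classic Laplace estimator).

    feature_present: list of bool, one entry per feature A_i
    counts: list of (n_with_Ai, n_thesis_with_Ai) count pairs
    n_sentences, n_thesis: total and in-category sentence counts
    """
    p_t = (n_thesis + k) / (n_sentences + 2 * k)
    score = math.log(p_t)
    for present, (n_a, n_a_t) in zip(feature_present, counts):
        p_a = (n_a + k) / (n_sentences + 2 * k)      # smoothed P(A_i)
        p_a_t = (n_a_t + k) / (n_thesis + 2 * k)     # smoothed P(A_i|T)
        if present:
            score += math.log(p_a_t / p_a)
        else:
            score += math.log((1 - p_a_t) / (1 - p_a))
    return score
```

Because every smoothed estimate is strictly between 0 and 1, the logarithms stay finite even when a feature never occurs in the training data, which is precisely the zero-probability case the estimator is meant to handle.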
  • The method 100 next tests (150) whether the current sentence is the last and loops back to the getting next sentence step 120 if not. After a probability expression has been evaluated for every sentence, the method 100 chooses (160) the sentence with the maximum probability expression for the particular discourse category. The method 100 can be repeated for each different discourse category.
  • Preferably, the accepting step 110 directly accepts the document in an electronic form, such as an ASCII file. In another embodiment, the accepting step 110 comprises the steps of scanning a paper form of the essay and performing optical character recognition on the scanned paper essay.
  • In one embodiment, the determining step 130 and computing step 140 iterate through the indexed list of features A1 . . . An, updating the value of the probability expression based on the presence or absence of each feature. In another embodiment, the presence or absence of all features A1 . . . An is determined (130) first, and then the probability expression is computed (140) for that sentence. Those skilled in the art will appreciate that the steps of the method 100 can be performed in an order different from that illustrated, or simultaneously, in alternative embodiments.
  • III. Example of Use
  • As an example of the method 100, consider the case when the discourse category is a thesis statement, so that the method 100 estimates which sentence in an essay is most likely to be the thesis statement. Assume that the method 100 utilizes only positional and word occurrence features to identify the thesis statement, as follows:
      • A1=W_FEEL=Occurrence of the word “feel.”
      • A2=SP1=Being the first sentence in an essay.
      • A3=SP2=Being the second sentence in an essay.
      • A4=SP3=Being the third sentence in an essay.
      • A5=SP4=Being the fourth sentence in an essay.
      • Etc.
        Assume further that the prior and conditional probabilities for these features have been predetermined or otherwise supplied. Typically, these probabilities are determined by a training process (as described in detail below with reference to FIG. 2). For this example, assume that the above features were determined empirically by examining 93 essays containing a grand total of 2391 sentences, of which 111 were denoted by a human annotator as being thesis statements. From this data set, the following prior probabilities were determined by counting frequencies of feature occurrence out of the total number of sentences (where the preceding slash “/” denotes the “not” or complement operator):
      • P(THESIS)=111/2391=0.0464
      • P(W_FEEL)=188/2391=0.0786
      • P(/W_FEEL)=1−0.0786=0.9213
      • P(SP1)=93/2391=0.0388
      • P(/SP1)=1−0.0388=0.9611
      • P(SP2)=93/2391=0.0388
      • P(/SP2)=1−0.0388=0.9611
      • P(SP3)=93/2391=0.0388
      • P(/SP3)=1−0.0388=0.9611
      • P(SP4)=93/2391=0.0388
      • P(/SP4)=1−0.0388=0.9611
        It can be seen from these numbers that every essay in the training set contained at least four sentences: each of the position features SP1 through SP4 occurs 93 times, once per essay, so all 93 essays have a fourth sentence. One skilled in the art could continue with additional sentence position feature probabilities, but only four are needed in the example that follows.
  • From the same data set, the following conditional probabilities were determined by counting frequencies of feature occurrence out of the thesis sentences only:
      • P(W_FEEL|THESIS)=35/111=0.3153
      • P(/W_FEEL|THESIS)=1−0.3153=0.6847
      • P(SP1|THESIS)=24/111=0.2162
      • P(/SP1|THESIS)=1−0.2162=0.7838
      • P(SP2|THESIS)=15/111=0.1612
      • P(/SP2|THESIS)=1−0.1612=0.8388
      • P(SP3|THESIS)=13/111=0.1171
      • P(/SP3|THESIS)=1−0.1171=0.8829
      • P(SP4|THESIS)=14/111=0.1262
      • P(/SP4|THESIS)=1−0.1262=0.8739
  • With this preliminary data set, the method 100 begins by reading (110) the following brief essay:
      • Most of the time we as people experience a lot of conflicts in life. We put are selfs in conflict every day by choosing between something that we want to do and something that we feel we should do. For example, I new friends and family that they wanted to go to the army. But they new that if they went to college they were going to get a better education. And now my friends that went to the army tell me that if they had that chance to go back and make that choice again, they will go with the feeling that will make a better choice.
  • The method 100 loops through each sentence of the above essay, sentence by sentence. The first sentence, denoted S1, is “Most of the time . . . life.” The observed features of S1 are /W_FEEL, SP1, /SP2, /SP3 and /SP4, as this sentence is the first sentence of the essay and does not contain the word “feel.” The probability expression for this sentence is computed (140) as follows:
      log[P(T|S1)] = log[P(T)] + log[P(/W_FEEL|T)/P(/W_FEEL)] + log[P(SP1|T)/P(SP1)] + log[P(/SP2|T)/P(/SP2)] + log[P(/SP3|T)/P(/SP3)] + log[P(/SP4|T)/P(/SP4)]
      = log[0.0464] + log[0.6847/0.9213] + log[0.2162/0.0388] + log[0.8388/0.9611] + log[0.8829/0.9611] + log[0.8739/0.9611]
      = −0.8537
  • The second “sentence,” denoted S2, is actually two sentences, but the method can treat a group of sentences as a single sentence when, for example, the sentences are related in a certain manner, such as in this case where the second sentence of the pair begins with the phrase “For example . . . ”. Thus, S2 in this example is “We put . . . army.” Its features are W_FEEL, /SP1, SP2, /SP3 and /SP4, as would be determined by the step 130. Computing (140) the probability expression for S2 is done as follows:
      log[P(T|S2)] = log[P(T)] + log[P(W_FEEL|T)/P(W_FEEL)] + log[P(/SP1|T)/P(/SP1)] + log[P(SP2|T)/P(SP2)] + log[P(/SP3|T)/P(/SP3)] + log[P(/SP4|T)/P(/SP4)]
      = log[0.0464] + log[0.3153/0.0786] + log[0.7838/0.9611] + log[0.1612/0.0388] + log[0.8829/0.9611] + log[0.8739/0.9611]
      = −0.2785
  • Likewise, the third sentence's features are /W_FEEL, /SP1, /SP2, SP3 and /SP4, and its probability expression value is −1.1717. The probability expression value for the fourth sentence is −1.1760. The maximum probability expression value is −0.2785, corresponding to S2. Thus, the second sentence is chosen (160) as the most likely thesis statement, according to the method 100.
  • Note that the prior probability term P(T) is the same for every sentence; thus, this term can be ignored for purposes of the method 100 for a given discourse category. Note also that while the preceding calculations were performed using base-10 logarithms, any base (e.g., natural logarithm, ln) can be used instead, provided the same base logarithm is used consistently.
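The worked example above can be checked numerically. The sketch below is our illustration: it recomputes the probability expressions for S1, S2 and the third sentence from the tabulated probabilities, taking each complement as 1 − P, so the results match the printed values to within rounding.

```python
import math

log10 = math.log10

P_T = 0.0464  # P(THESIS)
# feature name -> (P(Ai), P(Ai|THESIS)), from the tables above
probs = {
    "W_FEEL": (0.0786, 0.3153),
    "SP1": (0.0388, 0.2162),
    "SP2": (0.0388, 0.1612),
    "SP3": (0.0388, 0.1171),
    "SP4": (0.0388, 0.1262),
}

def score(present):
    """Probability expression for a sentence whose observed features
    are the set `present`; absent features contribute complement terms."""
    s = log10(P_T)
    for name, (p_a, p_a_t) in probs.items():
        if name in present:
            s += log10(p_a_t / p_a)
        else:
            s += log10((1 - p_a_t) / (1 - p_a))
    return s

s1 = score({"SP1"})            # "Most of the time ... life."  ~ -0.8537
s2 = score({"W_FEEL", "SP2"})  # "We put ... army."            ~ -0.2785
s3 = score({"SP3"})            # third sentence                ~ -1.1717
```

As expected, S2 attains the maximum of these scores, so it is chosen as the most likely thesis statement.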
  • IV. Constructing the Automatic Essay Analyzer
  • FIG. 2 is a flowchart of a process 200 for training the method 100, according to an embodiment of the invention. The process 200 begins by accepting (210) a plurality of essays. The essays are preferably in electronic form at this step. The method 200 then accepts (220) manual annotations. The method 200 then determines (225) the universe of all possible features A1 . . . An. Finally, method 200 computes (260) the empirical probability relating to each feature Ai across the plurality of essays.
  • The preferred method of accepting (210) the plurality of essays is in the form of stored or directly entered electronic text, and the preferred electronic format is ASCII. Alternatively or additionally, the essays could be accepted (210) by scanning the paper forms of the essays and performing optical character recognition on the scanned paper essays.
  • The preferred method of accepting (220) manual annotations is in the form of electronic text essays that have been manually annotated by humans skilled in the art of discourse element identification. The preferred method of indicating the manual annotation of the pre-specified discourse elements is by the bracketing of discourse elements within starting and ending “tags” (e.g. <Sustained Idea>. . . </Sustained Idea>, <Thesis Statement>. . . </Thesis Statement>).
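Annotations in this tagged form can be recovered with a simple pattern match. The following sketch is our illustration and assumes well-nested, non-overlapping tags:

```python
import re

def extract_annotations(text):
    """Map each annotation label to the list of bracketed text spans,
    e.g. <Thesis Statement> ... </Thesis Statement>."""
    spans = {}
    for m in re.finditer(r"<([^/<>]+)>(.*?)</\1>", text, re.DOTALL):
        spans.setdefault(m.group(1), []).append(m.group(2).strip())
    return spans
```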
  • The preferred embodiment of method 200 then determines (225) the universe of all possible features for a particular discourse item. The feature determination step 225 begins by determining (230) the universe of positional features A1 . . . Ak. Next, the feature determination step 225 determines (240) the universe of word choice features Ak+1 . . . Am. Finally, the feature determination step 225 determines (250) the universe of rhetorical structure theory (RST) features Am+1 . . . AN.
  • An embodiment of the positional features determination step 230 loops through each essay in the plurality of essays, noting the position of demarked discourse elements within each essay and determining the number of sentences in that essay.
  • An embodiment of the word choice features determination step 240 parses the plurality of essays and creates a list of all words contained within the sentences marked by a human annotator as being a thesis statement. Alternatively or additionally, the word choice features Ak+1 . . . Am universe determination step 240 can accept a predetermined list of words of belief, words of opinion, etc.
  • An embodiment of the RST (rhetorical structure theory) features determination step 250 parses the plurality of essays to extract pertinent RST features. The preferred RST parser for step 250 is described in Marcu, D., “The Rhetorical Parsing of Natural Language Texts,” Proceedings of the 35th Annual Meeting of the Assoc. for Computational Linguistics, 1997, pp. 96-103, which is hereby incorporated by reference. Further background on RST is available in Mann, W. C. and S. A. Thompson, “Rhetorical Structure Theory: Toward a Functional Theory of Text Organization,” Text 8(3), 1988, pp. 243-281, which is also hereby incorporated by reference.
  • For each discourse element, the method 200 computes (260) the empirical frequencies relating to each feature Ai across the plurality of essays. For a sentence (S) in the discourse category (T) the following probabilities are determined for each Ai: P(T), the prior probability that a sentence is in discourse category T; P(Ai|T), the conditional probability of a sentence having feature Ai, given that the sentence is in T; P(Ai), the prior probability that a sentence contains feature Ai; P({overscore (A)}i|T), the conditional probability that a sentence does not have feature Ai, given that it is in T; and P({overscore (A)}i), the prior probability that a sentence does not contain feature Ai.
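The counting in step 260 might be sketched as follows. The training-data representation (a list of feature-set/label pairs) and the add-one Laplace smoothing, mentioned earlier as a guard against zero probability estimates, are our illustrative choices, not the disclosed implementation.

```python
def estimate_probabilities(sentences, smoothing=1.0):
    """sentences: list of (feature_set, is_in_category) pairs.

    Returns P(T), plus per-feature P(Ai) and P(Ai|T), estimated from
    frequency counts with Laplace (add-one) smoothing on the feature
    probabilities to avoid zero estimates."""
    n = len(sentences)
    n_t = sum(1 for _, in_t in sentences if in_t)
    features = set().union(*(f for f, _ in sentences))
    p_t = n_t / n
    priors, conditionals = {}, {}
    for a in features:
        n_a = sum(1 for f, _ in sentences if a in f)
        n_a_t = sum(1 for f, in_t in sentences if in_t and a in f)
        priors[a] = (n_a + smoothing) / (n + 2 * smoothing)
        conditionals[a] = (n_a_t + smoothing) / (n_t + 2 * smoothing)
    return p_t, priors, conditionals

# Tiny illustration: 4 sentences, 2 of which are in the category.
p_t, priors, conditionals = estimate_probabilities([
    ({"W_FEEL"}, True),
    (set(), False),
    ({"SP1"}, False),
    ({"W_FEEL", "SP1"}, True),
])
```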
  • The method 100 and the process 200 can be performed by computer programs. The computer programs can exist in a variety of forms, both active and inactive. For example, the computer programs can exist as software program(s) comprised of program instructions in source code, object code, executable code or other formats; firmware program(s); or hardware description language (HDL) files. Any of the above can be embodied on a computer readable medium, which includes storage devices and signals, in compressed or uncompressed form. Exemplary computer readable storage devices include conventional computer system RAM (random access memory), ROM (read only memory), EPROM (erasable, programmable ROM), EEPROM (electrically erasable, programmable ROM), and magnetic or optical disks or tapes. Exemplary computer readable signals, whether modulated using a carrier or not, are signals that a computer system hosting or running the computer programs can be configured to access, including signals downloaded through the Internet or other networks. Concrete examples of the foregoing include distribution of executable software program(s) of the computer program on a CD ROM or via Internet download. In a sense, the Internet itself, as an abstract entity, is a computer readable medium. The same is true of computer networks in general.
  • V. Experiments Using the Automated Essay Analyzer
  • A. Experiment 1—Baseline
  • Experiment 1 utilizes a Bayesian classifier for thesis statements using essay responses to one English Proficiency Test (EPT) question: Topic B. The results of this experiment suggest that automated methods can be used to identify the thesis statement in an essay. In addition, the performance of the classification method, given even a small set of manually annotated data, appears to approach human performance, and exceeds baseline performance.
  • In collaboration with two writing experts, a simple discourse-based annotation protocol was developed to manually annotate discourse elements in essays for a single essay topic. This was the initial attempt to annotate essay data using discourse elements generally associated with essay structure, such as thesis statement, concluding statement, and topic sentences of the essay's main ideas. The writing experts defined the characteristics of the discourse labels. These experts then completed the subsequent annotations using a PC-based interface implemented in Java.
  • Table 1 indicates agreement between two human annotators for the labeling of thesis statements. In addition, the table shows the baseline performance in two ways. Thesis statements commonly appear at the very beginning of an essay. So, we used a baseline method where the first sentence of each essay was automatically selected as the thesis statement. This position-based selection was then compared to the resolved human annotator thesis selection (i.e., final annotations agreed upon by the two human annotators) for each essay (Position-Based&H). In addition, random thesis statement selections were compared with humans 1 and 2, and the resolved thesis statement (Random&H). The % Overlap column in Table 1 indicates the percentage of the time that the two annotators selected the exact same text as the thesis statement. Kappa between the two human annotators was 0.733. This indicates good agreement between human annotators. This kappa value suggests that the task of manual selection of thesis statements was well-defined.
    TABLE 1
    Annotators % Overlap
    1&2 53.0%
    Position-Based&H 24.0%
    Random&H 7.0%
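The kappa statistic used above corrects observed agreement (p_o) for the agreement expected by chance (p_e): kappa = (p_o − p_e)/(1 − p_e). A minimal sketch of Cohen's kappa for two annotators follows; the label sequences used to exercise it are invented, not the study's data.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators' parallel label sequences."""
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled alike.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: product of each annotator's marginal rates.
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    categories = set(labels_a) | set(labels_b)
    p_e = sum((count_a[c] / n) * (count_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)
```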
  • B. Experiment 2
  • Experiment 2 utilized three general feature types to build the classifier: a) sentence position, b) words commonly occurring in a thesis statement, and c) RST labels from outputs generated by an existing rhetorical structure parser (Marcu, 1997). We trained the classifier to predict thesis statements in an essay. Using the multivariate Bernoulli formula given above, this gives us the log probability that a sentence (S) in an essay belongs to the class (T) of sentences that are thesis statements.
  • With regard to the positional feature, we found that in the human-annotated data, the annotators typically marked a sentence toward the beginning of the essay as the thesis, so sentence position was a relevant feature. With regard to lexical information, our research indicated that using as features the words in sentences annotated as thesis statements also proved useful for identifying a thesis statement. In addition, information from RST-based parse trees is or can be useful.
  • Two kinds of lexical features were used in Experiment 2: a) the thesis word list, and b) the belief word list. For the thesis word list, we included lexical information in thesis statements in the following way to build the thesis statement classifier. For the training data, a vocabulary list was created that included one occurrence of each word used in a thesis statement (in training set essays). All words in this list were used as a lexical feature to build the thesis statement classifier. Since we found that our results were better if we used all words used in thesis statements, no stop list was used. The belief word list included a small dictionary of approximately 30 words and phrases, such as opinion, important, better, and in order that. These words and phrases were common in thesis statement text. The classifier was trained on this set of words, in addition to the thesis word vocabulary list.
  • According to RST, one can associate a rhetorical structure tree with any text. The leaves of the tree correspond to elementary discourse units, and the internal nodes correspond to contiguous text spans. Text spans are represented at the clause and sentence level. Each node in a tree is characterized by a status (nucleus or satellite) and a rhetorical relation, which is a relation that holds between two non-overlapping text spans. The distinction between nuclei and satellites comes from the empirical observation that the nucleus expresses what is more essential to the writer's intention than the satellite, and that the nucleus of a rhetorical relation is comprehensible independent of the satellite, but not vice versa. When spans are equally important, the relation is multinuclear. Rhetorical relations reflect semantic, intentional, and textual relations that hold between text spans. For example, one text span may elaborate on another text span; the information in two text spans may be in contrast; and the information in one text span may provide background for the information presented in another text span. The algorithm considers two pieces of information from RST parse trees in building the classifier: a) whether the parent node for the sentence is a nucleus or a satellite, and b) which elementary discourse units are associated with thesis versus non-thesis sentences.
  • In Experiment 2, we examined how well the algorithm performed compared to the agreement of two human judges and the baselines in Table 1. Table 2 indicates performance for 6 cross-validation runs. In these runs, ⅚ of the data were used for training and the remaining ⅙ for cross-validation; agreement is evaluated on that held-out ⅙. For this experiment, inclusion of the following features to build the classifier yielded the results in Table 2: a) sentence position, b) both RST feature types, and c) the thesis word list. We applied this cross-validation method to the entire data set (All), where the training sample contained 78 thesis statements, and to a gold-standard set where 49 essays (GS) were used for training. The gold-standard set includes essays where human readers agreed on annotations independently. The evaluation compares agreement between the algorithm and the resolved annotation (A&Res), human annotator 1 and the resolved annotation (1&Res), and human annotator 2 and the resolved annotation (2&Res). “% Overlap” in Table 2 refers to the percentage of the time that there is exact overlap in the text of the two annotations. The results exceed both baselines in Table 1.
    TABLE 2
    Mean percent overlap for 6 cross-validation runs.
    Annotators N Matches % Overlap Agreement
    All: A&Res 15.5 7.7 50.0
    GS: A&Res 9 5.0 56.0
    1&Res 15.5 9.9 64.0
    2&Res 15.5 9.7 63.0
  • C. Experiment 3
  • The next experiment shows that thesis statements in essays appear to be characteristically different from summary sentences in essays, as identified by human annotators.
  • For the Topic B data from Experiment 1, two human annotators used the same PC-based annotation interface in order to annotate one-sentence summaries of essays. A new labeling option was added to the interface for this task called “Summary Sentence”. These annotators had not seen these essays previously, nor had they participated in the previous annotation task. Annotators were asked to independently identify a single sentence in each essay that was the summary sentence in the essay.
  • The kappa values for the manual annotation of thesis statements (Th) as compared to that of summary statements (SumSent) show that the former task is much more clearly defined. The kappa of 0.603 does not show strong agreement between annotators for the summary sentence task, whereas the kappa of 0.733 for the thesis annotation task shows good agreement. In Table 3, the results strongly indicate that there was very little overlap in each essay between what human annotators had labeled as thesis statements in the initial task and what had been annotated as a summary sentence (Th/SumSent Overlap). This strongly suggests that there are critical differences between thesis statements and summary sentences in essays that we are interested in exploring further. Of interest is that some preliminary data indicated that what annotators marked as summary sentences appears to be more closely related to concluding statements in essays.
    TABLE 3
    Kappa and Percent Overlap Between
    Manual Thesis Selections (Th) and
    Summary Statements (SumSent)
    Th SumSent Th/SumSent Overlap
    Kappa .733 .603 N/A
    % Overlap .53 .41 .06
  • From the results in Table 3, we can infer that thesis statements in essays are a different genre than, say, a problem statement in journal articles. From this perspective, the thesis classification algorithm appears to be appropriate for the task of automated thesis statement identification.
  • D. Experiment 4
  • How does the algorithm generalize across topics? The next experiment tests the generalizability of the thesis selection method. Specifically, this experiment answers the question whether there were positional, lexical, and discourse features that underlie a thesis statement, and whether or not they were topic independent. If so, this would indicate an ability to annotate thesis statements across a number of topics, and re-use the algorithm on additional topics, without further annotation. A writing expert manually annotated the thesis statement in approximately 45 essays for 4 additional topics: Topics A, C, D and E. She completed this task using the same interface that was used by the two annotators in Experiment 1. The results of this experiment suggest that the positional, lexical, and discourse structure features applied in Experiments 1 and 2 are generalizable across essay topic.
  • To test the generalizability of the method, for each EPT topic the thesis sentences selected by a writing expert were used for building the classifier. Five combinations of four prompts were used to build the classifier in each case, and that classifier was then cross-validated on the fifth topic, which was not used to build the classifier. To evaluate the performance of each of the classifiers, agreement was calculated for each ‘cross-validation’ sample (single topic) by comparing the algorithm selection to our writing expert's thesis statement selection. For example, we trained on Topics A, B, C, and D, using the thesis statements selected manually. This classifier was then used to select, automatically, thesis statements for Topic E. In the evaluation, the algorithm's selection was compared to the manually selected set of thesis statements for Topic E, and agreement was calculated. Exact matches for each run are presented in Table 4. In all but one case, agreement exceeds both baselines from Table 1. In the two cases where the percent overlap was lower on cross-validation (Topics A and B), we were able to achieve higher overlap by using the vocabulary in the belief word list as features, in addition to the thesis word list vocabulary. In the case of Topic A, we achieved higher agreement only when adding the belief word list feature and applying the classical Bayes approach (see footnote 2). Agreement was 34% (17/50) for Topic B, and 31% (16/51) for Topic A.
    TABLE 4
    Performance on a Single Cross-validation Topic (CV Topic)
    Using Four Unique Essay Topics for Training.
    Training Topics CV Topic N Matches % Overlap
    ABCD E 47 19 40.0
    ABCE D 47 22 47.0
    ABDE C 31 13 42.0
    ACDE B 50 15 30.0
    BCDE A 51 12 24.0
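The evaluation design in Table 4 is leave-one-topic-out cross-validation: each topic in turn is held out, the classifier is built from the remaining topics, and agreement is measured on the held-out topic. A generic sketch follows; the `train` and `evaluate` callables are placeholders for the classifier-building and agreement-scoring steps, not functions from the disclosure.

```python
def leave_one_topic_out(data_by_topic, train, evaluate):
    """data_by_topic: {topic: dataset}. For each topic, train on the
    other topics' pooled data and evaluate on the held-out topic."""
    results = {}
    for held_out in data_by_topic:
        training = [d for t, ds in data_by_topic.items()
                    if t != held_out for d in ds]
        model = train(training)
        results[held_out] = evaluate(model, data_by_topic[held_out])
    return results
```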
  • The experiments described above indicate the following: With a relatively small corpus of manually annotated essay data, a multivariate Bernoulli approach can be used to build a classifier using positional, lexical and discourse features. This algorithm can be used to automatically select thesis statements in essays. Results from both experiments indicate that the algorithm's selection of thesis statements agrees with a human judge almost as often as two human judges agree with each other. Kappa values for human agreement suggest that the task for manual annotation of thesis statements in essays is reasonably well-defined. We are refining the current annotation protocol so that it defines even more clearly the labeling task. We expect that this will increase human agreement in future annotations, and the reliability of the automatic thesis selection since the classifiers are built using the manually annotated data.
  • The experiments also provide evidence that this method for automated thesis selection in essays is generalizable. That is, once trained on a few human annotated prompts, it could be applied to other prompts given a similar population of writers, in this case, writers at the college freshman level. The larger implication is that we begin to see that there are underlying discourse elements in essays that can be identified, independent of the topic of the test question. For essay evaluation applications this is critical since new test questions are continuously being introduced into on-line essay evaluation applications. It would be too time-consuming and costly to repeat the annotation process for all new test questions.
  • VI. CONCLUSION
  • What has been described and illustrated herein is a preferred embodiment of the invention along with some of its variations. The terms, descriptions and figures used herein are set forth by way of illustration only and are not meant as limitations. Those skilled in the art will recognize that many variations are possible within the spirit and scope of the invention, which is intended to be defined by the following claims—and their equivalents—in which all terms are meant in their broadest reasonable sense unless otherwise indicated.

Claims (37)

1-23. (cancelled)
24. A method for providing individualized essay writing instruction, the method comprising:
receiving an essay in an electronic format;
automatically determining a first value for each sentence in the essay that reflects the probability that the sentence is a member of a discourse element category, wherein the probability is based on the presence of each of a predetermined set of features in the sentence;
utilizing the first value to determine whether each sentence in the essay should be assigned to a discourse element category; and
identifying any discourse elements in the essay.
25. The method of claim 24, further comprising:
if the first values do not indicate the presence of a discourse element in the essay, indicating that the essay lacks sufficient clarity.
26. The method of claim 24 wherein the receiving step comprises:
receiving the essay into a computer via a keyboard.
27. The method of claim 24 wherein the discourse element category is thesis statement.
28. The method of claim 24 wherein the receiving step comprises:
scanning a paper form of the essay; and
performing optical character recognition on the scanned paper essay.
29. The method of claim 24 wherein the predetermined set of features comprises a feature based on a position within the essay.
30. The method of claim 24 wherein the predetermined set of features comprises a feature based on a presence or an absence of one or more selected words.
31. The method of claim 30 wherein the one or more selected words comprise one or more words empirically associated with a thesis statement.
32. The method of claim 30 wherein the one or more selected words comprise one or more words of belief.
33. The method of claim 30 wherein the predetermined set of features comprises a feature based on rhetorical relation.
34. The method of claim 30 wherein the first value is determined by utilizing a multivariate Bernoulli model.
35. The method of claim 34 wherein the first value comprises:

log[P(T)]+Σlog[P(Ai|T)/P(Ai)] if Ai present, and
log[P(T)]+Σlog[P({overscore (A)}i|T)/P({overscore (A)}i)] if Ai not present,
wherein
P(Ai|T) is a conditional probability that a sentence has a feature Ai given that the sentence is in a class T;
P({overscore (A)}i|T) is a conditional probability that a sentence does not have a feature Ai given that the sentence is in a class T;
P(T) is a prior probability that a sentence is in a class T;
P(Ai) is a prior probability that a sentence contains a feature Ai; and
P({overscore (A)}i) is a prior probability that a sentence does not contain a feature Ai.
36. The method of claim 35 wherein assigning comprises
assigning the sentence for which the first value is largest to the discourse element category.
37. The method of claim 24 wherein the probability is calculated utilizing a LaPlace estimator.
38. A system for identifying a discourse element of an essay, the system comprising:
an input device, wherein the input device receives an essay;
a first processing device for determining the presence of each of a predetermined set of features in each sentence of the essay;
a second processing device for calculating a probability that each sentence in the essay should be assigned to a discourse element category;
a third processing device for selecting one or more sentences in the essay that should be assigned to the discourse element category; and
a display device for displaying each of the one or more sentences that should be assigned to the discourse element category.
39. The system of claim 38 wherein the input device comprises a keyboard for entering the essay.
40. The system of claim 38 wherein the input device comprises a scanner for converting an essay in a non-electronic format into an essay in an electronic format.
41. The system of claim 38 wherein the first processing device utilizes a rhetorical structure parser.
42. The system of claim 38 wherein the second processing device utilizes a multivariate Bernoulli model.
43. The system of claim 38 wherein the second processing device utilizes a LaPlace estimator.
44. A method for creating a mathematical model for use in identifying discourse elements, the method comprising:
receiving a plurality of first essays relating to a particular subject, wherein each first essay is in an electronic format;
receiving annotations for each first essay, wherein each annotation identifies at least one discourse element;
identifying features, wherein each feature is exhibited by at least one identified discourse element;
computing empirical frequencies, wherein each empirical frequency relates to the presence of a feature with respect to the identified discourse elements across the plurality of first essays;
associating each empirical frequency with the related identified discourse element; and
utilizing the empirical frequencies to select discourse elements in at least one second essay.
45. The method of claim 44 wherein the annotations are prepared by human annotators.
46. The method of claim 44 wherein the annotations are prepared by electronic annotators.
47. The method of claim 44 wherein the features comprise positional features.
48. The method of claim 44 wherein the features comprise word choice features.
49. The method of claim 44 wherein the features comprise rhetorical structure theory features.
50. A method for providing feedback on an essay, the method comprising:
receiving an essay prepared by a writer, wherein the essay is received in an electronic format;
automatically determining a first value for each sentence in the essay that reflects the probability that each sentence in the essay is a member of a discourse element category, wherein the probability is based on the presence of each of a predetermined set of features in each sentence of the essay;
utilizing the first value to determine whether each sentence in the essay should be assigned a discourse element category; and
providing feedback to the writer related to any discourse elements identified in the essay.
51. The method of claim 50, further comprising:
if the first values do not indicate the presence of a discourse element in the essay, indicating that the essay lacks sufficient clarity.
52. The method of claim 50 wherein the discourse element category is thesis statement.
53. The method of claim 50 wherein the predetermined set of features comprises a feature based on a position within the essay.
54. The method of claim 50 wherein the predetermined set of features comprises a feature based on a presence or an absence of one or more selected words.
55. The method of claim 54 wherein the one or more selected words comprise one or more words empirically associated with a thesis statement.
56. The method of claim 54 wherein the one or more selected words comprise one or more words of belief.
57. The method of claim 54 wherein the predetermined set of features comprises a feature based on rhetorical relation.
58. The method of claim 54 wherein the first value is determined by utilizing a multivariate Bernoulli model.
59. The method of claim 50 wherein the probability is calculated utilizing a LaPlace estimator.
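Claims 42-44 and 50-59 describe estimating, for each sentence, the probability of membership in a discourse element category (e.g., thesis statement) from binary features, using a multivariate Bernoulli model with Laplace-smoothed empirical frequencies. The following is a minimal illustrative sketch of that style of classifier, not the patented implementation; the feature names and the four-sentence training set are invented for demonstration:

```python
from collections import defaultdict

class BernoulliDiscourseClassifier:
    """Toy multivariate Bernoulli Naive Bayes over binary sentence
    features, with Laplace-smoothed empirical frequencies."""

    def __init__(self, feature_names):
        self.feature_names = feature_names
        self.class_counts = defaultdict(int)             # label -> #sentences
        self.feature_counts = defaultdict(lambda: defaultdict(int))

    def train(self, annotated_sentences):
        """annotated_sentences: iterable of (features_present, label), where
        features_present is the set of features exhibited by an annotated
        discourse element (cf. claim 44's empirical frequencies)."""
        for features, label in annotated_sentences:
            self.class_counts[label] += 1
            for f in self.feature_names:
                if f in features:
                    self.feature_counts[label][f] += 1

    def prob(self, features, label):
        """P(label) times, for each feature, P(f|label) if present or
        1 - P(f|label) if absent; each conditional probability uses a
        Laplace estimator (count + 1) / (n + 2)."""
        total = sum(self.class_counts.values())
        n = self.class_counts[label]
        p = (n + 1) / (total + len(self.class_counts))   # smoothed prior
        for f in self.feature_names:
            pf = (self.feature_counts[label][f] + 1) / (n + 2)
            p *= pf if f in features else (1.0 - pf)
        return p

    def classify(self, features):
        """Assign the category whose probability (the 'first value' of
        claim 50) is highest."""
        return max(self.class_counts, key=lambda c: self.prob(features, c))

# Hypothetical positional and word-choice features, tiny annotated corpus.
clf = BernoulliDiscourseClassifier(["in_first_paragraph", "has_belief_word"])
clf.train([
    ({"in_first_paragraph", "has_belief_word"}, "thesis"),
    ({"in_first_paragraph"}, "thesis"),
    ({"has_belief_word"}, "other"),
    (set(), "other"),
])
label = clf.classify({"in_first_paragraph", "has_belief_word"})
```

With this toy data, a first-paragraph sentence containing a belief word scores 0.1875 for "thesis" against 0.0625 for "other", so it is labeled "thesis".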
US10/948,417 2001-01-23 2004-09-22 Methods for automated essay analysis Expired - Fee Related US7729655B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US10/948,417 US7729655B2 (en) 2001-01-23 2004-09-22 Methods for automated essay analysis
US12/785,721 US8452225B2 (en) 2001-01-23 2010-05-24 Methods for automated essay analysis

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US26322301P 2001-01-23 2001-01-23
US10/052,380 US6796800B2 (en) 2001-01-23 2002-01-23 Methods for automated essay analysis
US10/948,417 US7729655B2 (en) 2001-01-23 2004-09-22 Methods for automated essay analysis

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/052,380 Continuation US6796800B2 (en) 2001-01-23 2002-01-23 Methods for automated essay analysis

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/785,721 Continuation US8452225B2 (en) 2001-01-23 2010-05-24 Methods for automated essay analysis

Publications (2)

Publication Number Publication Date
US20050042592A1 true US20050042592A1 (en) 2005-02-24
US7729655B2 US7729655B2 (en) 2010-06-01

Family

ID=23000880

Family Applications (3)

Application Number Title Priority Date Filing Date
US10/052,380 Expired - Lifetime US6796800B2 (en) 2001-01-23 2002-01-23 Methods for automated essay analysis
US10/948,417 Expired - Fee Related US7729655B2 (en) 2001-01-23 2004-09-22 Methods for automated essay analysis
US12/785,721 Expired - Fee Related US8452225B2 (en) 2001-01-23 2010-05-24 Methods for automated essay analysis

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/052,380 Expired - Lifetime US6796800B2 (en) 2001-01-23 2002-01-23 Methods for automated essay analysis

Family Applications After (1)

Application Number Title Priority Date Filing Date
US12/785,721 Expired - Fee Related US8452225B2 (en) 2001-01-23 2010-05-24 Methods for automated essay analysis

Country Status (6)

Country Link
US (3) US6796800B2 (en)
JP (1) JP2004524559A (en)
CA (1) CA2436740A1 (en)
GB (1) GB2388699A (en)
MX (1) MXPA03006566A (en)
WO (1) WO2002059857A1 (en)

Families Citing this family (56)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7062220B2 (en) * 2001-04-18 2006-06-13 Intelligent Automation, Inc. Automated, computer-based reading tutoring systems and methods
US7491690B2 (en) * 2001-11-14 2009-02-17 Northwestern University Self-assembly and mineralization of peptide-amphiphile nanofibers
US7371719B2 (en) * 2002-02-15 2008-05-13 Northwestern University Self-assembly of peptide-amphiphile nanofibers under physiological conditions
US20040076930A1 (en) * 2002-02-22 2004-04-22 Steinberg Linda S. Partal assessment design system for educational testing
US8210850B2 (en) 2002-03-07 2012-07-03 Blank Marion S Literacy education system for students with autistic spectrum disorders (ASD)
WO2003088107A2 (en) * 2002-04-10 2003-10-23 Accenture Global Services Gmbh Determination of attributes based on product descriptions
WO2004018628A2 (en) 2002-08-21 2004-03-04 Northwestern University Charged peptide-amphiphile solutions & self-assembled peptide nanofiber networks formed therefrom
US7554021B2 (en) * 2002-11-12 2009-06-30 Northwestern University Composition and method for self-assembly and mineralization of peptide amphiphiles
AU2003298647A1 (en) 2002-11-14 2004-06-15 Claussen, Randal, C. Synthesis and self-assembly of abc triblock bola peptide
AU2003295562A1 (en) * 2002-11-14 2004-06-15 Educational Testing Service Automated evaluation of overly repetitive word use in an essay
CA2515651A1 (en) * 2003-02-11 2004-08-26 Northwestern University Methods and materials for nanocrystalline surface coatings and attachment of peptide amphiphile nanofibers thereon
US20040250209A1 (en) * 2003-06-05 2004-12-09 Gail Norcross Automated composition assistant
US7720675B2 (en) * 2003-10-27 2010-05-18 Educational Testing Service Method and system for determining text coherence
US7831196B2 (en) * 2003-10-27 2010-11-09 Educational Testing Service Automatic essay scoring system
US7544661B2 (en) * 2003-12-05 2009-06-09 Northwestern University Self-assembling peptide amphiphiles and related methods for growth factor delivery
MXPA06006387A (en) * 2003-12-05 2006-09-04 Univ Northwestern Branched peptide amphiphiles, related epitope compounds and self assembled structures thereof.
US7657220B2 (en) 2004-05-21 2010-02-02 Ordinate Corporation Adaptive scoring of responses to constructed response questions
US7835902B2 (en) * 2004-10-20 2010-11-16 Microsoft Corporation Technique for document editorial quality assessment
JP4872214B2 (en) * 2005-01-19 2012-02-08 富士ゼロックス株式会社 Automatic scoring device
CA2590336A1 (en) * 2005-01-21 2006-07-27 Northwestern University Methods and compositions for encapsulation of cells
WO2006093928A2 (en) 2005-02-28 2006-09-08 Educational Testing Service Method of model scaling for an automated essay scoring system
JP2008531733A (en) * 2005-03-04 2008-08-14 ノースウエスタン ユニバーシティ Angiogenic heparin-binding epitopes, peptide amphiphiles, self-assembling compositions, and related uses
CN101300613A (en) * 2005-07-20 2008-11-05 奥迪纳特公司 Spoken language proficiency assessment by computer
US20070192309A1 (en) * 2005-10-12 2007-08-16 Gordon Fischer Method and system for identifying sentence boundaries
US20070166684A1 (en) * 2005-12-27 2007-07-19 Walker Harriette L System and method for creating a writing
US20070218450A1 (en) * 2006-03-02 2007-09-20 Vantage Technologies Knowledge Assessment, L.L.C. System for obtaining and integrating essay scoring from multiple sources
US8608477B2 (en) * 2006-04-06 2013-12-17 Vantage Technologies Knowledge Assessment, L.L.C. Selective writing assessment with tutoring
US7970767B2 (en) * 2006-06-05 2011-06-28 Accenture Global Services Limited Extraction of attributes and values from natural language documents
US7996440B2 (en) * 2006-06-05 2011-08-09 Accenture Global Services Limited Extraction of attributes and values from natural language documents
US20080126319A1 (en) * 2006-08-25 2008-05-29 Ohad Lisral Bukai Automated short free-text scoring method and system
US7757163B2 (en) * 2007-01-05 2010-07-13 International Business Machines Corporation Method and system for characterizing unknown annotator and its type system with respect to reference annotation types and associated reference taxonomy nodes
US8076295B2 (en) * 2007-04-17 2011-12-13 Nanotope, Inc. Peptide amphiphiles having improved solubility and methods of using same
US8027941B2 (en) * 2007-09-14 2011-09-27 Accenture Global Services Limited Automated classification algorithm comprising at least one input-invariant part
US20090176198A1 (en) * 2008-01-04 2009-07-09 Fife James H Real number response scoring method
WO2010120830A1 (en) * 2009-04-13 2010-10-21 Northwestern University Novel peptide-based scaffolds for cartilage regeneration and methods for their use
KR101274419B1 (en) * 2010-12-30 2013-06-17 엔에이치엔(주) System and mehtod for determining rank of keyword for each user group
US8504492B2 (en) 2011-01-10 2013-08-06 Accenture Global Services Limited Identification of attributes and values using multiple classifiers
US8620836B2 (en) 2011-01-10 2013-12-31 Accenture Global Services Limited Preprocessing of text
US20130004931A1 (en) * 2011-06-28 2013-01-03 Yigal Attali Computer-Implemented Systems and Methods for Determining Content Analysis Metrics for Constructed Responses
US10446044B2 (en) * 2013-04-19 2019-10-15 Educational Testing Service Systems and methods for generating automated evaluation models
US10276055B2 (en) 2014-05-23 2019-04-30 Mattersight Corporation Essay analytics system and methods
EP3832627A1 (en) * 2015-02-06 2021-06-09 Sense Education Israel., Ltd. Semi-automated system and method for assessment of responses
CN105183712A (en) * 2015-08-27 2015-12-23 北京时代焦点国际教育咨询有限责任公司 Method and apparatus for scoring English composition
RU2660305C2 (en) * 2016-06-01 2018-07-05 Федеральное государственное бюджетное образовательное учреждение высшего образования "Воронежский государственный технический университет" Device for demonstration the bernoulli equation with respect to the closed flows
CN112106056A (en) 2018-05-09 2020-12-18 甲骨文国际公司 Constructing fictitious utterance trees to improve the ability to answer convergent questions
US20200020243A1 (en) * 2018-07-10 2020-01-16 International Business Machines Corporation No-ground truth short answer scoring
CN109189926B (en) * 2018-08-28 2022-04-12 中山大学 Construction method of scientific and technological paper corpus
US11562135B2 (en) 2018-10-16 2023-01-24 Oracle International Corporation Constructing conclusive answers for autonomous agents
CN109408829B (en) * 2018-11-09 2022-06-24 北京百度网讯科技有限公司 Method, device, equipment and medium for determining readability of article
US11593561B2 (en) * 2018-11-29 2023-02-28 International Business Machines Corporation Contextual span framework
CN111325001B (en) * 2018-12-13 2022-06-17 北大方正集团有限公司 Thesis identification method, thesis identification model training method, thesis identification device, thesis identification model training device, equipment and storage medium
US11321536B2 (en) * 2019-02-13 2022-05-03 Oracle International Corporation Chatbot conducting a virtual social dialogue
CN110096710B (en) * 2019-05-09 2022-12-30 董云鹏 Article analysis and self-demonstration method
US11599731B2 (en) * 2019-10-02 2023-03-07 Oracle International Corporation Generating recommendations by using communicative discourse trees of conversations
US12046156B2 (en) 2020-05-01 2024-07-23 Suffolk University Unsupervised machine scoring of free-response answers
US11972209B2 (en) * 2021-11-03 2024-04-30 iSchoolConnect Inc. Machine learning system for analyzing the quality and efficacy of essays for higher education admissions

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0743728B2 (en) 1990-08-02 1995-05-15 工業技術院長 Summary sentence generation method
JP3202381B2 (en) 1993-01-28 2001-08-27 株式会社東芝 Document search device and document search method
JP2957875B2 (en) 1993-03-17 1999-10-06 株式会社東芝 Document information search device and document search result display method
US5778397A (en) 1995-06-28 1998-07-07 Xerox Corporation Automatic method of generating feature probabilities for automatic extracting summarization
US5918240A (en) 1995-06-28 1999-06-29 Xerox Corporation Automatic method of extracting summarization using feature probabilities
JP2000029894A (en) * 1998-07-13 2000-01-28 Ntt Data Corp Subject sentence extraction system
GB0006721D0 (en) 2000-03-20 2000-05-10 Mitchell Thomas A Assessment methods and systems
US6461166B1 (en) 2000-10-17 2002-10-08 Dennis Ray Berman Learning system with learner-constructed response based testing methodology
US6866510B2 (en) 2000-12-22 2005-03-15 Fuji Xerox Co., Ltd. System and method for teaching second language writing skills using the linguistic discourse model
US20030031996A1 (en) 2001-08-08 2003-02-13 Adam Robinson Method and system for evaluating documents
US7088949B2 (en) 2002-06-24 2006-08-08 Educational Testing Service Automated essay scoring
US20040073510A1 (en) 2002-06-27 2004-04-15 Logan Thomas D. Automated method and exchange for facilitating settlement of transactions
JP2004090055A (en) 2002-08-30 2004-03-25 Sanwa Packing Kogyo Co Ltd Layered resin die for press working and method for manufacturing it
JP2006231178A (en) 2005-02-24 2006-09-07 Canon Electronics Inc Wastes treatment apparatus
US7427325B2 (en) 2005-12-30 2008-09-23 Siltron, Inc. Method for producing high quality silicon single crystal ingot and silicon single crystal wafer made thereby
JP2009016630A (en) 2007-07-06 2009-01-22 Mitsui Chemicals Inc Organic transistor
JP2009016631A (en) 2007-07-06 2009-01-22 Ricoh Printing Systems Ltd Semiconductor laser device and image forming device using it

Patent Citations (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4958284A (en) * 1988-12-06 1990-09-18 Npd Group, Inc. Open ended question analysis system and method
US4978305A (en) * 1989-06-06 1990-12-18 Educational Testing Service Free response test grading method
US5672060A (en) * 1992-07-08 1997-09-30 Meadowbrook Industries, Ltd. Apparatus and method for scoring nonobjective assessment materials through the application and use of captured images
US5827070A (en) * 1992-10-09 1998-10-27 Educational Testing Service System and methods for computer based testing
US5718591A (en) * 1993-02-05 1998-02-17 National Computer Systems, Inc. Method for providing performance feedback to test resolvers
US5458493A (en) * 1993-02-05 1995-10-17 National Computer Systems, Inc. Dynamic on-line scoring guide
US5466159A (en) * 1993-02-05 1995-11-14 National Computer Systems, Inc. Collaborative and quality control scoring system
US5690497A (en) * 1993-02-05 1997-11-25 National Computer Systems, Inc. Dynamic on-line scoring method
US5709551A (en) * 1993-02-05 1998-01-20 National Computer Systems, Inc. Multiple test item scoring method
US5716213A (en) * 1993-02-05 1998-02-10 National Computer Systems, Inc. Method for preventing bias in test answer scoring
US6193521B1 (en) * 1993-02-05 2001-02-27 National Computer Systems, Inc. System for providing feedback to test resolvers
US5735694A (en) * 1993-02-05 1998-04-07 National Computer Systems, Inc. Collaborative and quality control scoring method
US5752836A (en) * 1993-02-05 1998-05-19 National Computer Systems, Inc. Categorized test item reporting method
US5558521A (en) * 1993-02-05 1996-09-24 National Computer Systems, Inc. System for preventing bias in test answer scoring
US6183261B1 (en) * 1993-02-05 2001-02-06 National Computer Systems, Inc. Collaborative and quality control scoring system and method
US6183260B1 (en) * 1993-02-05 2001-02-06 National Computer Systems, Inc. Method and system for preventing bias in test answer scoring
US5437554A (en) * 1993-02-05 1995-08-01 National Computer Systems, Inc. System for providing performance feedback to test resolvers
US6155839A (en) * 1993-02-05 2000-12-05 National Computer Systems, Inc. Dynamic on-line scoring guide and method
US6159018A (en) * 1993-02-05 2000-12-12 National Computer Systems, Inc. Categorized test reporting system and method
US6168440B1 (en) * 1993-02-05 2001-01-02 National Computer Systems, Inc. Multiple test item scoring system and method
US6558166B1 (en) * 1993-02-05 2003-05-06 Ncs Pearson, Inc. Multiple data item scoring system and method
US5878386A (en) * 1996-06-28 1999-03-02 Microsoft Corporation Natural language parser with dictionary-based part-of-speech probabilities
US5987302A (en) * 1997-03-21 1999-11-16 Educational Testing Service On-line essay evaluation system
US6115683A (en) * 1997-03-31 2000-09-05 Educational Testing Service Automatic essay scoring system using content-based techniques
US6181909B1 (en) * 1997-07-22 2001-01-30 Educational Testing Service System and method for computer-based automatic essay scoring
US6356864B1 (en) * 1997-07-25 2002-03-12 University Technology Corporation Methods for analysis and evaluation of the semantic content of a writing based on vector length
US6267601B1 (en) * 1997-12-05 2001-07-31 The Psychological Corporation Computerized system and method for teaching and assessing the holistic scoring of open-ended questions
US6332143B1 (en) * 1999-08-11 2001-12-18 Roedy Black Publishing Inc. System for connotative analysis of discourse
US6386759B2 (en) * 1999-12-17 2002-05-14 Siemens Aktiengesellschaft X-ray diagnostic device having a supporting apparatus
US6796800B2 (en) * 2001-01-23 2004-09-28 Educational Testing Service Methods for automated essay analysis
US7127208B2 (en) * 2002-01-23 2006-10-24 Educational Testing Service Automated annotation
US20070077542A1 (en) * 2002-01-23 2007-04-05 Jill Burstein Automated annotation

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8452225B2 (en) 2001-01-23 2013-05-28 Educational Testing Service Methods for automated essay analysis
US20100233666A1 (en) * 2001-01-23 2010-09-16 Jill Burstein Methods for Automated Essay Analysis
US7796937B2 (en) 2002-01-23 2010-09-14 Educational Testing Service Automated annotation
US8626054B2 (en) 2002-01-23 2014-01-07 Educational Testing Service Automated annotation
US20070077542A1 (en) * 2002-01-23 2007-04-05 Jill Burstein Automated annotation
US20100285434A1 (en) * 2002-01-23 2010-11-11 Jill Burstein Automated Annotation
US8467716B2 (en) 2002-06-24 2013-06-18 Educational Testing Service Automated essay scoring
US7769339B2 (en) 2002-06-24 2010-08-03 Educational Testing Service Automated essay scoring
US20060286540A1 (en) * 2002-06-24 2006-12-21 Jill Burstein Automated essay scoring
US20040091847A1 (en) * 2002-11-06 2004-05-13 Ctb/Mcgraw-Hill Paper-based adaptive testing
US20050187772A1 (en) * 2004-02-25 2005-08-25 Fuji Xerox Co., Ltd. Systems and methods for synthesizing speech using discourse function level prosodic features
US20090311659A1 (en) * 2008-06-11 2009-12-17 Pacific Metrics Corporation System and Method For Scoring Constructed Responses
US8882512B2 (en) * 2008-06-11 2014-11-11 Pacific Metrics Corporation System and method for scoring constructed responses
US10755589B2 (en) 2008-06-11 2020-08-25 Act, Inc. System and method for scoring constructed responses
US20110300520A1 (en) * 2010-06-04 2011-12-08 Meadwestvaco Corporation Systems and methods for assisting a user in organizing and writing a research paper
US20120196253A1 (en) * 2011-01-31 2012-08-02 Audra Duvall Interactive communication design system

Also Published As

Publication number Publication date
US20100233666A1 (en) 2010-09-16
GB2388699A (en) 2003-11-19
MXPA03006566A (en) 2004-10-15
US8452225B2 (en) 2013-05-28
GB0318627D0 (en) 2003-09-10
US6796800B2 (en) 2004-09-28
CA2436740A1 (en) 2002-08-01
WO2002059857A1 (en) 2002-08-01
JP2004524559A (en) 2004-08-12
US20020142277A1 (en) 2002-10-03
US7729655B2 (en) 2010-06-01

Similar Documents

Publication Publication Date Title
US7729655B2 (en) Methods for automated essay analysis
Maamuujav et al. Syntactic and lexical features of adolescent L2 students’ academic writing
Crossley et al. What is successful writing? An investigation into the multiple ways writers can write successful essays
Burstein et al. Towards automatic classification of discourse elements in essays
Goodwin et al. A meta-analysis of morphological interventions in English: Effects on literacy outcomes for school-age children
Rosé et al. Analyzing collaborative learning processes automatically: Exploiting the advances of computational linguistics in computer-supported collaborative learning
Heilman et al. Classroom success of an intelligent tutoring system for lexical practice and reading comprehension.
US8380491B2 (en) System for rating constructed responses based on concepts and a model answer
Burstein Opportunities for natural language processing research in education
Nguyen et al. Iterative design and classroom evaluation of automated formative feedback for improving peer feedback localization
Pratiwi et al. Rhetorical move and genre knowledge development of English and Indonesian abstracts: A comparative analysis
Glover et al. Detecting stylistic inconsistencies in collaborative writing
Das et al. Automatic question generation and answer assessment for subjective examination
Chinkina et al. Crowdsourcing evaluation of the quality of automatically generated questions for supporting computer-assisted language teaching
Klebanov et al. Automated essay scoring
JP2002091276A (en) Method and system for teaching explanatory writing to student
Kiefer et al. Improving students' revising and editing: The Writer's Workbench system
Roesler A computer science academic vocabulary list
Goulart Communicative text types in university writing
Brock Three disk-based text analyzers and the ESL writer
Ayuningsih et al. Faulty parallel structure in students’ argumentative writing
de Lima et al. Automatic Punctuation Verification of School Students’ Essay in Portuguese
Tschichold et al. Intelligent CALL and written language
Adesiji et al. Development of an automated descriptive text-based scoring system
McNamara IIS: A Marriage of Computational Linguistics, Psychology, and Educational Technologies.

Legal Events

Date Code Title Description
AS Assignment

Owner name: EDUCATIONAL TESTIG SERVICE, NEW JERSEY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BURSTEIN, JILL;MARCU, DANIEL;ANDREYEV, VYACHESLAV;AND OTHERS;REEL/FRAME:015830/0918;SIGNING DATES FROM 20020118 TO 20020122

Owner name: EDUCATIONAL TESTIG SERVICE,NEW JERSEY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BURSTEIN, JILL;MARCU, DANIEL;ANDREYEV, VYACHESLAV;AND OTHERS;SIGNING DATES FROM 20020118 TO 20020122;REEL/FRAME:015830/0918

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2552)

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20220601