US20040072131A1 - Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing - Google Patents

Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing

Info

Publication number
US20040072131A1
US20040072131A1
Authority
US
United States
Prior art keywords
test
sound
tests
generating
stimulus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/713,755
Inventor
Janet Wasowicz
Feng-Qi Lai
Andrew Morrison
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cognitive Concepts Inc
Original Assignee
Cognitive Concepts Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Cognitive Concepts Inc
Priority to US10/713,755
Publication of US20040072131A1
Status: Abandoned

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 - Electrically-operated educational appliances
    • G09B5/04 - Electrically-operated educational appliances with audible presentation of the material to be studied
    • G09B17/00 - Teaching reading
    • G09B17/003 - Teaching reading electrically operated apparatus or devices
    • G09B17/006 - Teaching reading electrically operated apparatus or devices with audible presentation of the material to be studied

Definitions

  • Appendix A is 24 pages and discloses details of the data graphing and reporting functionality of the diagnostic system and method for phonological awareness, phonological processing and reading skill testing.
  • This invention relates generally to a diagnostic system and method for testing one or more different areas of phonological awareness, phonological processing, verbal short term memory, rapid access naming, phonemic decoding and reading fluency in order to determine if the individual being tested is at risk of having reading problems and the areas in which the individual may need further training.
  • the English language has words that are comprised of sounds in some predetermined order. From the vast number of possible sequences of sounds, words in the English language actually use a relatively small number of sequences and the majority of these sequences are common to many words. A child who becomes aware of these common sound sequences is typically more adept at mastering these sequences when the words are presented in their printed form (i.e., when the child is reading the words) than a child who lacks this awareness of sounds.
  • the word “mat” has three distinct phonemes /m/, /ae/ and /t/.
  • the words “sat” and “bat” have different initial phonemes, /s/ and /b/ respectively, but share the middle and final phonemes (/ae/ and /t/, respectively) that form the common spelling pattern “at”.
  • our alphabetic orthography appears to be a sensible system for representing speech in writing.
  • a child may employ the strategy of sounding out unknown words or letter sequences by analogy to known words with identical letter sequences. For example, the child may pronounce the unknown word “bat” by rhyming it with the known word “cat”.
  • Phonological awareness skills are grouped into two categories including synthesis and analysis.
  • Phonological synthesis refers to the awareness that separate sound units may be blended together to form whole words.
  • Phonological analysis refers to the awareness that whole words may be segmented into a set of sound units, including syllables, onset-rimes and phonemes. Both analysis and synthesis skills have been identified as important prerequisites for achieving the goal of early reading skill proficiency and deficits of either and/or both of these skills are typically present in children with reading disabilities.
  • phonetic coding refers to the child's ability to use a speech-sound representation system for efficient storage of verbal information in working memory.
  • the ability to efficiently use phonetic codes to represent verbal information in working memory may be measured by performance on memory span tasks for items with verbal labels. Children with reading problems have been found to perform poorly on memory span tasks for items with verbal labels.
  • phonetic coding is an important skill for a reader, such as a beginning reader.
  • a number of assessment tools are presently available to professionals to measure phonological processing and related skills. These include the Test of Phonological Awareness (TOPA), the Lindamood Auditory Conceptualization Test (LAC), The Phonological Awareness Test (PAT), the Comprehensive Test of Phonological Processing (CTOPP) and a screening measure published in an educational textbook, Phonemic Awareness in Young Children: A Classroom Curriculum. None of these conventional assessment tools are software based and therefore have limitations. For example, these conventional assessment tools must be manually administered so that the testing is not necessarily standardized since each test giver may give the test in a slightly different manner, which reduces the reliability of the resulting assessment. These manually administered assessment tools also make the scoring, charting and comparison of the test results more difficult.
  • the diagnostic system and method for evaluating phonological awareness and processing skills and related pre-reading and reading skills in accordance with the invention provides a system for identifying individuals, such as children in kindergarten through second grade, who are likely to experience academic failure due to phonological processing deficits and a lack of phonological awareness.
  • the system may also determine the relative weaknesses and strengths of the individual or group of individuals in different phonological awareness and processing areas or related reading skills in order to help develop appropriate intervention and curriculum activities to improve the weak skills and areas.
  • the system may also track, over time, an individual's development or a group's development of various phonological awareness and processing skills and related reading skills and establish a baseline so that the effectiveness of instructional methods may be evaluated.
  • the system may identify an individual with weak phonological awareness and processing skills and correct those skills before the individual develops a reading problem.
  • the diagnostic tool may be one or more software applications being executed on a Web server so that the diagnostic tool may be an Internet or World Wide Web (the Web) based tool that provides an easily accessible and affordable screening tool to help parents determine, in the comfort of their own home, if their child is at-risk for academic failure due to phonological awareness and processing deficits.
  • the system may also suggest solutions (training modules that train a particular phonological awareness, phonological processing skill or a related reading or pre-reading skill) for a parent to consider in correcting the phonological awareness and processing deficits.
  • the diagnostic system in accordance with a preferred embodiment of the invention may include one or more software applications that may be stored on a portable media, such as a CD or a zip disk or may be stored on a server.
  • the diagnostic system provides various advantages over conventional diagnostic tools.
  • the system permits more standardized administration of the tests that leads to more reliable assessments.
  • the system also permits more efficient, accurate and reliable scoring and tracking of an individual's phonological awareness and processing abilities so that the individual's progress may be determined by comparing the various test results to one another and comparing the results of tests given at different times to each other.
  • the system may be administered by people who do not necessarily understand the intricacies of phonological awareness and processing skills.
  • the system may be administered simultaneously to a large number of individuals since each child may use a separate computer to complete the tests.
  • the engaging graphical game format of the tests within the diagnostic system may reduce an individual's test anxiety so that a more accurate test may be conducted.
  • the diagnostic system may include one or more interactive computer activities that permit the diagnostic system to measure one or more different types of phonological awareness and processing skills, knowledge of sound-symbol correspondences and fluency of decoding and reading.
  • the system in accordance with the invention may also collect risk factor and other relevant data about each individual, assess performance on activities that measure phonological awareness and processing skill, analyze risk factor data and performance data for individuals or groups of individuals, and report those results.
  • the system may be used for diagnosing phonological awareness and processing skill deficits in a young child.
  • a system and method for testing one or more skills associated with the reading skills of an individual comprises presenting one or more stimuli to the individual, each stimulus associated with a test for testing a particular reading or pre-reading skill of the individual, the skills indicating the risk that the individual develops a language-based learning disability.
  • the method further comprises receiving a response from the individual to each stimulus, scoring the user's responses to each test, and recommending, based on the scores of the one or more tests, one or more training modules for improving a reading or pre-reading skill of the individual as indicated by the score of the tests.
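  • For illustration, a minimal sketch of the test-and-recommend loop just described is shown below; it assumes hypothetical module names, a simple per-skill accuracy score and an arbitrary 70% cutoff, none of which are taken from the patent.

```python
# Hypothetical sketch of the assessment loop: present each stimulus, record the
# response, score each skill test, and recommend training modules for weak skills.
# Module names and the 70% threshold are assumptions, not taken from the patent.

def administer_tests(tests, get_response):
    """tests: {skill: [(stimulus, correct_answer), ...]}; returns per-skill accuracy."""
    scores = {}
    for skill, items in tests.items():
        correct = sum(1 for stimulus, answer in items if get_response(stimulus) == answer)
        scores[skill] = correct / len(items)
    return scores

def recommend_training(scores, threshold=0.7):
    """Recommend a training module for each skill scored below the threshold."""
    return [f"{skill} training module" for skill, score in scores.items() if score < threshold]

if __name__ == "__main__":
    demo = {"rhyme recognition": [("cat ~ hat?", "yes"), ("cat ~ dog?", "no")]}
    scores = administer_tests(demo, get_response=lambda stimulus: "yes")
    print(scores, recommend_training(scores))
```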
  • FIG. 1A is a block diagram illustrating a first embodiment of a computer-based phonological skills diagnostic system in accordance with the invention
  • FIG. 1B illustrates a second embodiment of a computer-based phonological skills diagnostic system in accordance with the invention
  • FIG. 1C illustrates more details of the second embodiment of the computer-based phonological skills diagnostic system in accordance with the invention as shown in FIG. 1B;
  • FIG. 2 is a diagram illustrating a Web-based server computer that may be a part of the diagnostic system of FIG. 1;
  • FIG. 2A graphically illustrates a method for determining a particular error of a user of the diagnostic system
  • FIG. 2B is a flowchart illustrating a preferred method for identifying a particular deficiency of a user of the diagnostic system
  • FIG. 2C illustrates the IF-THEN rule bases used to determine a user's deficient skill areas based on the incorrect answers in particular subtests
  • FIG. 2D illustrates an example of one or more subtests of the diagnostic system and the error measure associated with the particular subtest
  • FIG. 3 is a diagram illustrating a preferred embodiment of the diagnostic tool of FIG. 2 in accordance with the invention including one or more tests that are used to diagnose a reading problem of a child;
  • FIG. 4 is a flowchart illustrating filling out a questionnaire in accordance with the invention.
  • FIG. 5 is a flowchart illustrating a method for testing a child's recognition of rhymes
  • FIG. 6 is a diagram illustrating an example of how the child's rhyme recognition ability may be tested in accordance with the invention.
  • FIG. 7 is a flowchart illustrating a method for testing a child's ability to generate a rhyme
  • FIG. 8 is a diagram illustrating an example of how the child's rhyme generation ability may be tested in accordance with the invention.
  • FIG. 9 is a flowchart illustrating a method for testing the child's ability to distinguish the beginning and ending sounds of a word
  • FIG. 10 is a diagram illustrating an example of how the child's ability to discern the beginning and ending of words may be tested in accordance with the invention
  • FIG. 11 is a flowchart illustrating a method for testing a child's ability to blend sounds
  • FIG. 12 is a diagram illustrating an example of how the child's ability to blend sounds may be tested in accordance with the invention.
  • FIG. 13 is a flowchart illustrating a method for testing a child's ability to segment sounds
  • FIG. 14 is a diagram illustrating an example of how the child's ability to segment sounds may be tested in accordance with the invention.
  • FIG. 15 is a flowchart illustrating a method for testing a child's ability to manipulate sounds
  • FIG. 16 is a diagram illustrating an example of how the child's ability to manipulate sounds may be tested in accordance with the invention.
  • FIG. 17 is a flowchart illustrating a method for testing a child's ability to recall spoken items in sequential order
  • FIG. 18 is a diagram illustrating an example of how the child's ability to recall spoken items in sequential order may be tested in accordance with the invention
  • FIG. 19 is a flowchart illustrating a method for testing a child's ability to rapidly name visually-presented items
  • FIG. 20 is a diagram illustrating an example of how the child's ability to rapidly name visually-presented items may be tested in accordance with the invention
  • FIG. 21 is a flowchart illustrating a method for testing a child's ability to name letters and associate sounds with symbols
  • FIG. 22 is a diagram illustrating an example of how a child's ability to name letters and sound/symbol associations may be tested in accordance with the invention
  • FIG. 23 is a flowchart illustrating a method for testing a child's ability to decode words
  • FIG. 24 is a diagram illustrating an example of how a child's ability to decode words may be tested in accordance with the invention.
  • FIG. 25 is a flowchart illustrating a method for testing a child's ability for fluent reading
  • FIG. 26 is a diagram illustrating an example of how a child's ability for fluent reading may be tested in accordance with the invention.
  • FIG. 27 is a flowchart illustrating the operation of the training module recommender in accordance with the invention.
  • FIG. 28 illustrates an example of a report that is generated by the computer-based phonological skills diagnostic system in accordance with the invention
  • FIG. 29 illustrates an example of a test section selection drop down menu in accordance with the invention.
  • FIG. 30 illustrates an example of a data graph selection drop down menu in accordance with the invention.
  • the invention is particularly applicable to a World Wide Web (Web) based diagnostic system for determining a child's phonological awareness and processing skills and reading skills and it is in this context that the invention will be described. It will be appreciated, however, that the system and method in accordance with the invention has greater utility since it may be implemented on other types of computer systems, such as the Internet, a local area network, a wide area network or any other type of computer network.
  • the system may also be used to test a variety of other individuals, such as illiterate and mentally disabled people, individuals whose native language is not English who are learning to read, and adolescents and adults who read poorly and wish to improve their reading skills.
  • FIG. 1A is a block diagram illustrating a first embodiment of a computer-based phonological skills diagnostic system 50 in accordance with the invention.
  • the diagnostic system 50 may include a server 52 and one or more client computers 54 (Client # 1 -Client #N) connected together by a communications network 56 , that may be the Internet, the World Wide Web (the Web), a local area network, a wide area network or any other type of communications network.
  • the communications network is the Web and a typical Web communications protocol, such as the hypertext transfer protocol (HTTP), may be used for communications between the server and the client computer.
  • the server may download one or more Web pages to each client computer and each client computer may send responses back to the server.
  • the server may further comprise a central processing unit (CPU) 58 , a memory 60 , a database (DB) 62 , a persistent storage device 64 and a diagnostic tool 66 .
  • the diagnostic tool may be one or more software applications (testing different phonological awareness and processing skills or reading skills) stored in the persistent storage of the server that may be downloaded into the memory 60 (as shown in FIG. 1A) so that the diagnostic tool may be executed by the CPU 58 of the server.
  • the DB 62 or persistent storage device 64 may store one or more Web pages associated with the diagnostic tool 66 . The Web pages may be downloaded to each client computer when the client computer requests the particular Web page.
  • the server may also include the necessary hardware and software to accept requests from one or more client computers.
  • the Web pages may be communicated to the one or more client computers using the HTTP protocol and the client computers may send data back to the server, such as test responses, using the same protocol.
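  • As a rough sketch only, the HTTP exchange described above might look like the following, with Python's standard http.server standing in for the Web server; the paths, port and payloads are invented for illustration.

```python
# Rough sketch of the HTTP exchange between the server and a client computer:
# a GET returns a test page and a POST carries the child's responses back.
# Paths, port and payloads are invented for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer

class DiagnosticHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve a placeholder test page to the requesting client computer.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>Rhyme Recognition, item 1</body></html>")

    def do_POST(self):
        # Receive a test response posted back by the client computer.
        length = int(self.headers.get("Content-Length", 0))
        print("received response:", self.rfile.read(length).decode())
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), DiagnosticHandler).serve_forever()
```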
  • Each client computer 54 (Client #N will be described herein, but it should be realized that each client computer is substantially similar) may be used by an individual user, such as a parent of a child or a test administrator, to access the diagnostic tool stored on the server.
  • Each client computer 54 may include a central processing unit (CPU) 70 , a memory 72 , a persistent storage device 74 such as a hard disk drive, a tape drive, an optical drive or the like, an input device 76 such as a keyboard, a mouse, a joystick, a speech recognition microphone or the like, and an output device 78 such as a typical cathode ray tube, a flat panel display, a printer for generating a printed report or the like.
  • Each client computer may also include a browser application 80 that may be stored in the persistent storage device and downloaded to the memory 72 as shown in the figure.
  • the browser application may be executed by the CPU 70 and may permit the user of the client computer to interact with the Web pages being downloaded from the server 52 .
  • multiple client computers may establish simultaneous communications sessions with the server and each client computer may be downloading Web pages from the server.
  • the system 50 thus permits multiple client computers to access the diagnostic tool 66 stored on the server so that the user of each client computer may take advantage of the benefits of the diagnostic tool.
  • the diagnostic tool may include one or more different tools that test various phonological awareness or processing skills as well as reading skills so that a child's proficiency at phonological awareness and processing skills and reading skills may be determined.
  • the diagnostic tool 66 may also use a child's scores on the one or more tools in order to recommend to the user of the client computer (e.g., the parent of the child) which training tools the parent may consider downloading to help the child with any deficiencies.
  • These training tools may also be stored in the persistent storage device 64 connected to the server so that the user may then download the training tool from the server as well.
  • the training tools are described in more detail in co-pending U.S. patent application Ser. Nos. 09/039,194 and 60/103,354, filed Mar.
  • an assessment tool software application such as a Windows .exe file for example, may be downloaded from the server to the client computer.
  • the assessment tool software application may then be executed by the CPU 70 of the client computer.
  • the assessment tool may then generate the graphical screens that test the different user's skills and may store the information/scores about the tests locally in the client computer. Then, during the assessment testing or after the assessment tool execution has been completed, the scores for the user may be uploaded back to the server computer.
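  • A minimal sketch of the locally executed assessment tool's score handling described above, assuming a hypothetical JSON score file and upload endpoint.

```python
# Sketch of the downloaded assessment tool's score handling: scores are kept in a
# local file during testing and uploaded to the server afterwards. The file name,
# URL and JSON format are assumptions.
import json
import urllib.request

LOCAL_SCORE_FILE = "scores.json"
UPLOAD_URL = "http://example.com/upload-scores"   # hypothetical endpoint

def store_score_locally(student_id, subtest, score):
    try:
        with open(LOCAL_SCORE_FILE) as f:
            data = json.load(f)
    except FileNotFoundError:
        data = []
    data.append({"student_id": student_id, "subtest": subtest, "score": score})
    with open(LOCAL_SCORE_FILE, "w") as f:
        json.dump(data, f)

def upload_scores():
    with open(LOCAL_SCORE_FILE, "rb") as f:
        payload = f.read()
    request = urllib.request.Request(UPLOAD_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return response.status
```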
  • FIG. 1B illustrates an example of a second embodiment of a computer-based phonological skills diagnostic system 50 in accordance with the invention.
  • the server 52 (whose elements and functions are described above and will not be described herein) that is connected via the communications network 56 to one or more clients as above.
  • each client may be a teacher computer system 84 (Teacher Station 1 , Teacher Station 2 , . . . , Teacher Station N), such as a server computer, a local area network server computer or a personal computer connected to a network, that is connected to the server 52 over the communications network 56 .
  • the teacher station may have similar elements to the clients shown in FIG. 1A.
  • the CPU 70 of the teacher station may execute a diagnostic tool module 85 (that may be one or more pieces of software or one or more software applications) wherein the diagnostic tool module 85 resides in the memory 72 as shown.
  • the teacher station 84 may be connected to and control a computer network 86 , such as an internal computer network within a school or a computer network within a school district.
  • the computer network 86 may be connected to one or more student computers 87 (Student 1 , Student 2 , . . . , Student N) wherein each student computer may be a computing device with sufficient resources to implement the diagnostic testing in accordance with the invention.
  • each student computer 87 may be a typical personal computer and may have the elements of the clients 54 shown in FIG. 1 or it may be a personal digital assistant.
  • the diagnostic tool may be downloaded to the teacher station 84 from the server 52 when the particular school or school district purchases a license to the diagnostic tool.
  • the teacher station may execute the diagnostic tool and control the operation of the student computers 87 to implement the diagnostic testing.
  • This embodiment of the invention may be used, for example, to permit the teacher station (a LAN server) to monitor and control the diagnostic testing when the diagnostic tool is being used by multiple users in a school or other setting. More details of this embodiment of the invention will now be described.
  • FIG. 1C illustrates more details of an example of the second embodiment of the computer-based phonological skills diagnostic system in accordance with the invention as shown in FIG. 1B.
  • the teacher station 84 , the computer network 86 , such as a local area network, and the one or more student computers 87 are shown and described in more detail.
  • a school purchases the program (or the school district purchases the program and assigns the program to a school) and the school is given a User ID and password for access.
  • the school may then download the program from the server 52 onto the school's LAN Server 84 (teacher station).
  • the teacher station performs the function of communicating with the server 52 (not shown) in order to, for example, download the program and send back students' test results.
  • the teacher station may also communicate with the one or more student computers 87 in order to, for example, monitor students' test progress, control the start, volume, pause, resume and exit functions for all of the students and/or any individual student, and collect students' testing data, as sketched below.
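```python
# Sketch of the teacher station's control path: start/volume/pause/resume/exit
# commands sent to one student computer or broadcast to all of them. The
# JSON-over-TCP message format is an assumption; the patent itself relies on the
# Xtranet Xtra for messaging between networked machines.
import json
import socket

CONTROL_COMMANDS = {"start", "volume", "pause", "resume", "exit"}

def send_command(student_host, command, value=None, port=9000):
    if command not in CONTROL_COMMANDS:
        raise ValueError(f"unknown command: {command}")
    message = json.dumps({"command": command, "value": value}).encode()
    with socket.create_connection((student_host, port), timeout=5) as sock:
        sock.sendall(message)

def broadcast_command(student_hosts, command, value=None):
    # Control all student computers at the same time (e.g. pause the whole class).
    for host in student_hosts:
        send_command(host, command, value=value)
```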
  • the testing environment presumes a networked environment with Internet access and the Xtranet Xtra installed.
  • the Xtranet Xtra facilitates messaging between networked machines.
  • the teacher/administrator would have an administrative version of the Testing Module.
  • the classroom teachers/test administrator may register each student who will take the test and generate a classroom layout to assign students to particular student computers 87 .
  • the teacher station may also permit the classroom teacher/test administrator to generate a layout for multiple different classes.
  • the teacher station may display one or more icons 88 wherein each student's computer is numbered.
  • the icons are shown in a seating chart arrangement so that the teacher can easily determine which student is represented by which icon.
  • Each icon may be one or more predetermined colors wherein each color indicates a particular status of the testing for the student using that computer.
  • a green colored icon may indicate an ongoing test
  • a yellow colored icon may indicate a paused test
  • a red colored flashing icon may indicate that help is needed.
  • the administrator may click on the icon that represents the student's computer and be presented in the student information area 89 with additional information about the particular student, such as the student's name, age, grade, type of test he/she is taking, and the progress of the test (e.g., “Rhyme Recognition 8 ” which is test item 8 of the Rhyme Recognition test section).
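  • A small sketch of the status-to-color mapping described above; statuses beyond the three examples given (ongoing, paused, help needed) are assumptions.

```python
# Sketch of the seating-chart icon colors: each status maps to a color and a
# flashing flag. Statuses other than the three examples in the text are assumptions.
STATUS_COLORS = {
    "ongoing": ("green", False),      # test in progress
    "paused": ("yellow", False),      # test paused
    "help_needed": ("red", True),     # flashing icon: student needs help
}

def icon_appearance(status):
    """Return (color, flashing) for a student computer's icon."""
    return STATUS_COLORS.get(status, ("gray", False))   # gray: no test running (assumed)
```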
  • an interface may be displayed that shows 1) how many tests are currently available and what type of tests can be assigned to each student (since the school may purchase a license to a particular number of tests at any one time); and 2) how many tests are currently in process and what kind of tests have already been assigned.
  • a student can be assigned to more than one test.
  • the teacher station user interface may further include an activated student information area 89 wherein the information for a particular student is shown that has been selected by the administrator/teacher by clicking on the student's icon as described above.
  • This area 89 may further include one or more buttons 90 that permit the administrator to control the testing of the individual selected student.
  • the user interface may further include a second area 91 wherein the testing status is shown. For example, the area may indicate a failed connection with the student computer or server 52 , a completed test and data being sent (or data is sent) to the server 52 .
  • This area 91 may also include one or more buttons 92 that permit the administrator/teacher to control the testing of all of the students' computers at the same time. Now, the process of registration and access using this embodiment of the invention will be described.
  • An “Individual” is defined as an online client wanting to purchase one or a number of Single Test packages for immediate use.
  • an individual registers by completing an Individual Registration form wherein the individual assigns to herself a username and password (as well as a hint, should she forget her password).
  • a record is created in the Account table on the server 52 and the individual is assigned a unique account_id.
  • the individual who creates the account is known as the Account Manager, and has responsibilities and access for the account.
  • a record is created in the Pswd table on the server 52 and stamped with the account_id and assigned the default access level of “Individual”.
  • the individual may now purchase one or more test packages.
  • the individual selects a Single Test package appropriate for a child (e.g., Package “ 1 A”) and a record is created in the Order table and assigned a unique order_id and stamped with the account_id.
  • a record is also created in the Order_Item table.
  • the order item record is assigned a unique order_item_id and stamped with the account_id, order_id and package_id.
  • Each order item is assigned a unique order_item_id and stamped with the account_id and order_id.
  • the individual must complete and submit a Student Registration form for each child, which assigns the test to the particular child.
  • the order is then validated by a third party, such as CyberCash or RediCash. If validation succeeds, the validated field in the Order table on the server 52 is marked TRUE and records are created in the Usage table with one record for each test. In particular, each Usage record is assigned a unique usage_id and stamped with the account_id, order_id and order_item_id. If validation fails, the individual is notified and all records bearing both the account_id and order_id in the Order, Order_item and Student tables are deleted. Now, the institutional registration process will be described.
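  • The registration, ordering and validation flow just described might be sketched as follows, with sqlite3 standing in for the server database; the table and column names follow the text, but the exact schema is an assumption.

```python
# Sketch of the individual registration and ordering flow described above, with
# sqlite3 standing in for the server database. Table and column names follow the
# text (Account, Pswd, Order, Order_Item, Usage); the exact schema is an assumption.
import sqlite3

def setup(db):
    db.executescript("""
        CREATE TABLE Account (account_id INTEGER PRIMARY KEY);
        CREATE TABLE Pswd (account_id INTEGER, username TEXT, pswd TEXT, access TEXT);
        CREATE TABLE "Order" (order_id INTEGER PRIMARY KEY, account_id INTEGER, validated INTEGER);
        CREATE TABLE Order_Item (order_item_id INTEGER PRIMARY KEY, account_id INTEGER,
                                 order_id INTEGER, package_id TEXT);
        CREATE TABLE Usage (usage_id INTEGER PRIMARY KEY, account_id INTEGER,
                            order_id INTEGER, order_item_id INTEGER, completed INTEGER);
    """)

def register_individual(db, username, pswd):
    account_id = db.execute("INSERT INTO Account DEFAULT VALUES").lastrowid
    db.execute("INSERT INTO Pswd (account_id, username, pswd, access) VALUES (?, ?, ?, 'Individual')",
               (account_id, username, pswd))
    return account_id                      # the registrant becomes the Account Manager

def order_tests(db, account_id, package_id, test_count):
    order_id = db.execute('INSERT INTO "Order" (account_id, validated) VALUES (?, 0)',
                          (account_id,)).lastrowid
    order_item_id = db.execute("INSERT INTO Order_Item (account_id, order_id, package_id) VALUES (?, ?, ?)",
                               (account_id, order_id, package_id)).lastrowid
    return order_id, order_item_id, test_count

def validate_order(db, account_id, order_id, order_item_id, test_count, payment_ok):
    if payment_ok:                          # third-party validation succeeded
        db.execute('UPDATE "Order" SET validated = 1 WHERE order_id = ?', (order_id,))
        for _ in range(test_count):         # one Usage record per purchased test
            db.execute("INSERT INTO Usage (account_id, order_id, order_item_id, completed) VALUES (?, ?, ?, 0)",
                       (account_id, order_id, order_item_id))
    else:                                   # validation failed: delete the order's records
        db.execute("DELETE FROM Order_Item WHERE account_id = ? AND order_id = ?", (account_id, order_id))
        db.execute('DELETE FROM "Order" WHERE account_id = ? AND order_id = ?', (account_id, order_id))

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    setup(db)
    acct = register_individual(db, "parent1", "password-hash")
    oid, oiid, n = order_tests(db, acct, "1A", 1)
    validate_order(db, acct, oid, oiid, n, payment_ok=True)
```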
  • An “Institution” is defined as a public or private school or other educational or child care institution wanting to purchase Single Test or 35-Test packages for use by a school district or school.
  • a “School” is defined as any school within a school district, or any single institutional element such as a parochial or private school, a day care center, a commercial learning center.
  • a public “School District” is any school district listed by the National Center for Education Statistics.
  • a public school is any school listed by the National Center for Education Statistics and associated with a school district.
  • An “Account Manager” is any individual who registers the account, orders and accepts responsibility for payment. The account manager has access to school-district level data if he/she purchases packages for a school district.
  • the account manager is responsible for assigning test packages to schools and lead teachers within the school district.
  • the account manager may assign himself as a lead teacher and the institution of record as the School (as is the case of a single school).
  • a “Lead Teacher” is responsible for school packages and assigns packages to classroom teachers.
  • a classroom teacher is a test administrator and monitors the actual testing.
  • the classroom teacher is given access by the lead teacher to register students so that they may take the test.
  • the lead teacher has access to school level data and the classroom teacher has access to class level data.
  • the system may impose certain restraints on the diagnostic tool, such as 1) test packages purchased by a school district may only be distributed within the district; and 2) one test package must be assigned to only one school; i.e., Students at different schools may not share one test package.
  • the account manager, the person who purchases the packages for the school district, is responsible for assigning packages to schools and a lead teacher for each school.
  • the lead teacher assigned to a school is responsible for assigning packages to classroom teachers.
  • the classroom teachers are responsible for registering students and administering the test. Later, after the test, the classroom teacher has access only to view his/her own classes' students' test results, although it is possible for two teachers to share one package. For example, for a package for 35 students, Mr. L (class 1 teacher) was assigned 20 and Ms. D (class 2 teacher) was assigned 15.
  • Mr. L can only assign his own 20 students and view his own 20 students' test results
  • Ms. D can only assign her own 15 students and view her own 15 students' test results.
  • the lead teacher who was assigned to a school has school level access to view his/her own school's students' test results, and the account manager, who represents the school district, has school district level access to view his/her own school district's students' test results.
  • an institution registers by completing an Institution Registration form.
  • a record is created in the Account table in the server 52 and the account manager is assigned a unique account_id.
  • the account manager has responsibilities and access for the account.
  • the Institutional Registration form requires that an institution specify a public school district if it wishes to distribute its packages among schools within the district. Or, conversely, the institution may register as a single school, in which case all the packages it purchases must be used within that school.
  • the account manager who submits the registration assigns to herself a username and password (as well as a hint, should she forget her password).
  • a record is then created in the Pswd table on the server 52 and stamped with the account_id and assigned the default access level “Institution”.
  • the “Institution” level allows access to data as described above.
  • if the form identifies the account as a “School District” account, a record is created in the School_District table in the server 52 with a unique school_district_id and the record is stamped with the account_id.
  • the account manager may create records in the Region table, with unique region_ids. These records are stamped with the account_id and school district_id.
  • if the form identifies the account as a “School” (i.e., a single institution), a record is created in the School table with a unique school_id and the record is stamped with the account_id.
  • the account manager may create a record in the School_District table to which the school belongs, with a unique school_district_id and the record is stamped with the account_id.
  • the account manager may create a record in the Region table, with unique region_ids. These records are stamped with the account_id and school_district_id.
  • the institution may purchase the tests.
  • the account manager may now purchase test packages.
  • the account manager selects a test package, enters the package quantity and adds the selection to her “shopping cart”.
  • the account manager may select additional items, specify the quantity and add them to the “shopping cart.”
  • the account manager may then submit the order.
  • the order is then validated by a third party, such as CyberCash or RediCash. If validation succeeds, the validated field in the Order table is marked TRUE and records are created in the Usage table with one record for each test.
  • each Usage record is assigned a unique usage_id and stamped with the account_id, order_id and order_item_id.
  • a record is also created in the Order_Item table.
  • the order item record is assigned a unique order_item_id and stamped with the account_id, order_id and package_id.
  • Each order item is assigned a unique order_item_id and stamped with the account_id and order_id. If validation fails, the individual is notified. All records bearing both the account_id and order_id in the School, School_District and Region tables are deleted if validation fails.
  • After validation, the account manager must now assign packages to schools and lead teachers. In particular, if the account is identified as type “School District”, the account manager completes and submits a School Registration form for each school. (The system may have NCES databases on the server for the account manager to select school districts and/or schools). A record is created in the School table and assigned a unique school_id. The record is stamped with the account_id and school_district_id. Optionally, the school may further be identified as part of a “Region”. The account manager may now assign packages to a school or schools. An interface will inform the account manager of packages that are available to assign, which packages have been assigned and to what school.
  • the account manager may now assign lead teachers to school level access.
  • the account manager may assign access to more than one lead teacher at each school, or assign access to one lead teacher at more than one school.
  • the lead teacher has school level access to test data.
  • the account manager is responsible for communicating Username and Password to assigned lead teachers.
  • the lead teacher may assign classroom teachers to class level access.
  • the lead teacher is responsible for communicating Username and Password to assigned classroom teachers.
  • Teachers or account managers acting as “Teachers” may assign classroom teachers and classroom teachers may register students at any time after an order is validated.
  • a “Class” is any arbitrary group designation for students taking a test (e.g., “Mr. Busy's Kindergarten”).
  • a teacher may first define a class wherein a “Class” is defined by a class name unique to the school and given a unique class_id. The class record is stamped with the teacher_id and school_id
  • classroom teachers must complete a Student Registration form for each student.
  • An interface will show how many and of what kind of tests are available to assign, how many and of what kind of tests have been assigned.
  • the form will allow more than one test to be assigned to a student.
  • the student is assigned to a class.
  • a record is created in the Student table and assigned a unique student_id. The record is stamped with the account_id, school_district_id, school_id, package_id and class_id.
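  • A minimal sketch of the class definition and student registration records described above, with uuid4 standing in for the system's unique id generator (an assumption).

```python
# Sketch of the class and student registration records described above; the field
# names follow the ids in the text, and uuid4 stands in for the unique id generator.
import uuid

def define_class(teacher_id, school_id, class_name):
    # A "Class" is any arbitrary group designation, e.g. "Mr. Busy's Kindergarten".
    return {"class_id": str(uuid.uuid4()), "class_name": class_name,
            "teacher_id": teacher_id, "school_id": school_id}

def register_student(account_id, school_district_id, school_id, package_id, class_id, name):
    # The Student record is stamped with the account, district, school, package and class ids.
    return {"student_id": str(uuid.uuid4()), "name": name,
            "account_id": account_id, "school_district_id": school_district_id,
            "school_id": school_id, "package_id": package_id, "class_id": class_id}
```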
  • the testing environment presumes a networked environment with Internet access and the Xtranet Xtra installed.
  • the Xtranet Xtra facilitates messaging between networked machines.
  • the classroom teacher/test administrator would have an administrative version of the Testing Module.
  • the classroom teacher must log in to access the module.
  • when the classroom teacher accesses the Test administration area, he is presented with a Seating Chart of student computers that are in communication with the administrative computer via Xtranet.
  • the classroom teacher is also presented with a list of registered students.
  • the classroom teacher begins a testing session by assigning students to a computer. Each “Desk” on the seating chart, when clicked, displays the student's name, age, grade, type of test, and the progress of the test in the student information area 89 .
  • the classroom teacher will have control over start, volume, pause, resume, and exit functions for all the students or at each Desk.
  • the testing status information indicated in the area 91 includes whether 1) the diagnostic tool application is open; 2) a connection to the server 52 is tested and/or active; 3) the student diagnostic test on each student computer has started, paused, or completed; and 4) the test data for a particular diagnostic student test from a particular student computer is sent to the server 52 .
  • the Test Application on the Student's machine messages the server 52 via the Teacher's machine, and the server 52 returns data to the Application via HTTP. (This happens transparently within the Application).
  • the Application, before it reaches the Access screen, will test its connection to the Server 52 . If the connection fails, the Application will not proceed.
  • the classroom teacher is notified of the result of the connection test.
  • once the connection is made, the testing record is marked as “completed”: the record is retrieved from the Usage table by student_id and order_item_id and “completed” is marked TRUE. This, in effect, debits the test holdings of the respective account.
  • the Student's Test Application will request a list of test stimuli and their resources and commence to download those resources from the server 52 . After a student has taken a test, most resources will already be cached locally, and the test may proceed with minimal downloads. The test will proceed even in the event of student timeouts due to inactivity. As the student answers the test, data is collected. At the conclusion of the test, that data is written to a temporary HTML page, which is then sent as a form to the server 52 . The Score table at the server 52 is updated with this form data. A test is concluded when the student answers the final test question OR when the classroom teacher clicks the EXIT button for the Student. In the preferred embodiment, no student or student score data will be held locally. The Teacher's machine will look for unsent files on student machines and attempt to resend them at a later time in the instance where a test is completed but the HTTP transmission fails.
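  • A rough sketch of the resource caching and score upload behaviour described above; the URLs, form fields and retry queue are assumptions.

```python
# Sketch of the student Test Application's resource and score handling: stimulus
# resources are downloaded once and cached locally, and answer data is posted to
# the server as form data, with failed transmissions queued for a later resend.
# URLs, field names and the queue are assumptions.
import os
import urllib.parse
import urllib.request

CACHE_DIR = "resource_cache"

def fetch_resource(url):
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, os.path.basename(urllib.parse.urlparse(url).path))
    if not os.path.exists(path):                  # already cached from an earlier test
        with urllib.request.urlopen(url) as response, open(path, "wb") as f:
            f.write(response.read())
    return path

def send_scores(server_url, form_data, unsent_queue):
    body = urllib.parse.urlencode(form_data).encode()
    try:
        urllib.request.urlopen(urllib.request.Request(server_url, data=body))
    except OSError:
        # Transmission failed: keep the data so it can be resent at a later time.
        unsent_queue.append(form_data)
```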
  • Test performance data (graphs and tables) will be displayed by an applet embedded within a Web page.
  • the test performance data is username/password protected.
  • An HTML page will send a find request in the form of a Transact-SQL statement to the test result database which returns a record set.
  • the record set will be formatted for display by the embedded applet.
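  • The find-request/record-set step might be sketched as follows, with sqlite3 standing in for the test result database (the patent names Transact-SQL) and an assumed Score table layout.

```python
# Sketch of the find-request/record-set step: a query against the test result
# database returns a record set that is then formatted for display. sqlite3 stands
# in for the actual database and the Score table columns are assumptions.
import sqlite3

def find_scores(db, class_id):
    cursor = db.execute(
        "SELECT student_id, subtest, score FROM Score WHERE class_id = ? ORDER BY student_id",
        (class_id,))
    return cursor.fetchall()            # the record set handed to the display applet

def format_for_display(record_set):
    return "\n".join(f"{sid}\t{subtest}\t{score}" for sid, subtest, score in record_set)
```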
  • Account managers may view data by using their username/password.
  • Account managers may view and print data at the highest level of their access, typically at the School District Level. This entitles them to view individual and summary data by District, Region, School, Class and Student as set forth in more detail in Appendix A.
  • Lead teachers may view data by using their username/password.
  • Lead teachers may view and print data at the highest level of their access: the School Access Level. This entitles them to view individual and summary data by school, class and student.
  • classroom teachers may view and print data at a Class Access Level using their username/password. This entitles them to view individual and summary data by class and student.
  • the details of the data reporting feature of the diagnostic system in accordance with the invention will be described in more detail below with reference to FIG. 28 and Appendix A. Now, more details of the Web-based diagnostic system will be described.
  • FIG. 2 is a diagram illustrating the Web-based server computer 52 that may be a part of the diagnostic system of FIGS. 1A, 1B and 1C.
  • the server 52 may include the CPU 58 , the memory 60 , the DB 62 , the persistent storage device 64 and the diagnostic tool 66 .
  • the diagnostic tool may further comprise a user interface (UI) 100 , a test section 102 , a scorer 104 , an administrator 106 , a recommender 108 and a motivator module 109 .
  • the user interface may download the Web pages to each client computer as the Web pages are requested and receive the responses back from the client computers.
  • the test section 102 may contain links to one or more different diagnostic tests (stored in the persistent storage or the DB) that may be used to determine a child's proficiency at a particular phonological awareness skill or reading skill as described in more detail below.
  • Each test may have the child play a graphical game in which some skill of the child is being tested without the child knowing that a test is being performed. This type of game-based testing may reduce the child's anxiety about taking a test.
  • the child may interact with each test and respond to the test with responses.
  • the user/student taking the tests in the assessment tool does not see the scores of the tests since those scores are only provided to the teacher or parent of the user.
  • the scorer 104 may accumulate the total score for each test and then store the score in the DB 62 . Since the scores from the tests are automatically gathered and stored by the scorer into the DB, the system helps to generate accurate scores, permits the scores from different children to be compared to each other and permits a child's progress to be tracked based on the changing scores of a child over time. An example of the report generated by the scorer in accordance with the invention is described below with reference to FIG. 28 and Appendix A.
  • the scorer 104 may also include statistical analysis mechanisms for determining various statistics about the scores of one or more children using the diagnostic tool.
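  • As a sketch, the statistical analysis might include simple group summaries like the following; the particular statistics are assumptions rather than a list from the patent.

```python
# Sketch of a group-level statistical summary the scorer might produce for one
# subtest; the chosen statistics (mean, median, standard deviation) are assumptions.
from statistics import mean, median, stdev

def summarize_scores(scores):
    """scores: list of per-child scores on one subtest."""
    return {
        "count": len(scores),
        "mean": mean(scores),
        "median": median(scores),
        "stdev": stdev(scores) if len(scores) > 1 else 0.0,
    }
```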
  • the administrator 106 may perform various administrative actions such as monitoring the users of the diagnostic tool, billing the users (if appropriate) and the like.
  • the recommender 108 may use the scores and statistical information generated by the scorer, if requested by the user of the client computer, to recommend one or more training tools that may be used by the child taking the tests on the particular client computer in order to improve the child's ability in any deficient areas.
  • the scores may indicate that the child has weak/below average rhyme recognizing skills and the recommender may recommend that the child play the rhyme recognizer training tool in order to boost the child's rhyme recognition abilities.
  • the parent may then download the training tool from the system.
  • the recommender permits a parent of the child, who has no experience or knowledge about reading disorders or phonological awareness and processing deficits, to have their child tested for these deficits at home and then have the system automatically recommend a training tool that may help the child improve in any deficient areas.
  • the recommender may be one or more pieces of code in a preferred embodiment that analyze the incorrect responses to one or more different subtests in order to determine the skill areas of a particular user that are deficient so that a training module that trains that particular deficient skill area can be recommended to the user of the diagnostic system.
  • the recommendation module in accordance with the invention will now be described in more detail with reference to FIGS. 2 A- 2 D.
  • FIG. 2A graphically illustrates a method 800 for determining a particular phonological error of a user that is using the diagnostic system.
  • the diagnostic system stores the incorrect responses to each question. For example, as shown for the Rhyme Recognition subtest, there may be three incorrect responses for test items 2 , 3 , and 6 wherein each test item tests a different aspect of the rhyme recognition skills.
  • the incorrect responses are sorted by the type of error that is likely occurring based on the particular incorrect response wherein those differences are shown graphically in FIG. 2A, but are stored digitally in a database in the preferred embodiment.
  • two of the incorrect responses indicate the same type of error (for example, an open syllable rime error) and one indicates a different type of error (for example, an r-controlled vowel rime).
  • the data about the particular incorrect responses by the user stored in the database are mapped into the types of errors that are shown by the particular incorrect answer.
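  • A minimal sketch of this mapping step is shown below; the response-to-error-type table is an assumption that uses the Rhyme Recognition example above.

```python
# Sketch of the mapping from stored incorrect responses to error types, using the
# Rhyme Recognition example above (items 2, 3 and 6). The mapping table itself is
# an assumption; in practice it would depend on the specific wrong answer chosen.
ERROR_TYPE_BY_RESPONSE = {
    ("rhyme_recognition", 2): "open syllable rime",
    ("rhyme_recognition", 3): "open syllable rime",
    ("rhyme_recognition", 6): "r-controlled vowel rime",
}

def classify_errors(incorrect_responses):
    """incorrect_responses: list of (subtest, item_number) pairs stored for a user."""
    errors = {}
    for key in incorrect_responses:
        errors.setdefault(ERROR_TYPE_BY_RESPONSE.get(key, "unclassified"), []).append(key)
    return errors
```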
  • the particular preferred software based method for determining the particular type of error based on the answers from a user to all of the subtests will now be described with reference to FIG. 2B.
  • FIG. 2B is a flowchart illustrating a preferred method 810 for determining a particular deficiency of a user of the diagnostic system.
  • indexes are set to one to begin the analysis process. These indexes are then incremented as described below to analyze each incorrect response for each subtest wherein each incorrect response is compared to each error measure to determine the type of error.
  • step 818 the first incorrect response, IR 11 , for the first subtest, ST 1 , is compared to the first error measure, EM 11 , to determine if the incorrect response is consistent with the first error measure.
  • Each error measure is intended to compare a particular incorrect answer with a particular type of error as described in more detail below with reference to FIG. 2D.
  • the method determines if a type of error is identified (e.g., does the incorrect response indicate that the particular type of problem identified by the particular error measure is present for the particular user). If an error is identified based on the error measure, the error is labeled in step 822 and then stored in the database in step 824 for the particular user. Since there is only one error measure that matches each incorrect answer, the method will drop down to step 830 to analyze the next incorrect response against all of the error measures.
  • in step 826 , the method determines whether index l is at its maximum value (e.g., whether all of the error measures have been analyzed). If l is not at its maximum value (e.g., there are other error measures that need to be compared to the first incorrect answer for the first subtest), then l is incremented in step 828 (to compare the next error measure to the first incorrect answer for the first subtest) and the method loops back to step 818 to compare the next error measure to the first incorrect answer for the first subtest.
  • each error measure is compared to the first incorrect answer for the first subtest.
  • the input (IR) 11 (incorrect response 1 of subtest 1 ) is provided and compared to (EM) 11 (error measure 1 of subtest 1 , for example, open syllable rime). If the error is identified, the error is labeled and stored in the database: Error Storage. If the error is not identified, the method continues comparing this incorrect response with the remaining error measures until the error is identified.
  • next, (IR) 12 (incorrect response 2 of subtest 1 ) is input and steps 2 and 3 are repeated to identify the error. When all the incorrect responses from subtest 1 have been compared and the errors identified, labeled, and stored, the incorrect responses of subtest 2 are input one by one and compared with the error measures for subtest 2 , as was done for subtest 1 .
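  • The comparison loop of FIG. 2B can be sketched as follows, assuming each error measure is represented as a predicate over an incorrect response (an implementation assumption).

```python
# Sketch of the comparison loop of FIG. 2B: every incorrect response of every
# subtest is checked against that subtest's error measures; identified errors are
# labeled and stored. Representing an error measure as a predicate is an assumption.
def identify_errors(incorrect_responses, error_measures):
    """
    incorrect_responses: {subtest: [response, ...]}
    error_measures: {subtest: [(label, predicate), ...]} where predicate(response)
                    is True when the response is consistent with that error measure.
    """
    error_storage = []                                 # stands in for the error database
    for subtest, responses in incorrect_responses.items():
        for response in responses:                     # each incorrect response in turn
            for label, matches in error_measures.get(subtest, []):
                if matches(response):
                    error_storage.append((subtest, label))
                    break                              # only one error measure matches each answer
    return error_storage
```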
  • the method in accordance with the invention compares each incorrect response for each subtest to each error measure to generate a database containing all of the errors that are identified for a particular user. Now, more details of the error measure and the comparison of the error measures to the incorrect responses will be described.
  • FIG. 2C visually illustrates an example of the IF-THEN rules used to determine a user's deficient skill areas based on the incorrect answers in particular subtests and FIG. 2D illustrates an example of one or more subtests of the diagnostic system and the error measure associated with the particular subtest.
  • the circled numbers illustrate the code of an error measure for a particular subtest (shown in more detail in FIG. 2D) and the lines illustrate the connections of all elements for a particular rule that indicates a particular skill deficiency.
  • the table illustrates one or more subtests, its associated error measure identification number (ID) and the actual error measure described.
  • the second error measure identification is “2” and the actual error measure is that the user does not recognize /f/ when it is at the end following an /i/ sound.
  • Other examples of the error measures for different subtests are also shown.
  • each subtest may have one or more different error measures wherein the error measures are described in more detail in FIG. 2D.
  • the database may include one or more rules that identify different skill deficiencies. Each rule may reach a conclusion about a particular skill deficiency based on one or more error measures.
  • a single error measure (based on a single incorrect answer) may indicate a particular skill deficiency or a combination of error measures (based on more than one incorrect answer) may indicate a skill deficiency.
  • the recommender is capable of diagnosing skill deficiencies in a user in this manner.
  • FIG. 2C graphically illustrates three examples of rules in the recommendation module that indicate three different skill deficiencies. These examples, however, are merely illustrative and there may be a very large number of actual skill deficiency rules.
  • FIG. 2D illustrates the error measures that are being used in the rule examples shown in FIG. 2C.
  • the first rule (Rule 1) is indicated by a dashed line (- - - -)
  • the second rule (Rule 2) is indicated by a solid line (------)
  • the third rule (Rule 3) is indicated by a broken dashed line (--- - - ---).
  • FIG. 2C illustrates the combination of error measures that must be true for a particular user (indicating particular incorrect answers of the user) that in turn indicate a particular skill deficiency.
  • Each example of a rule will now be provided in text below (and shown graphically in FIG. 2C) and then a more in-depth explanation of the first rule only is provided since it is assumed that the second and third rules will be understood once the first rule is explained.
  • the first rule generally determines if the user has a problem understanding the /f/ sound in a word while the second and third rules determine if a particular location in a word of the /f/ sound is a problem.
  • the database has stored the incorrect answers of the user along with the error measures that correspond to the incorrect responses. Then, each rule is compared to the error measures that are stored in the database which are true (indicating a particular incorrect response to a particular subtest) for the particular user to diagnose any skill deficiency areas. Thus, a deficiency in understanding the /f/ sound is diagnosed if the above identified error measures (indicated in FIG. 2D) are true.
  • once a specific deficiency is identified (for example, a deficiency of the /f/ sound at the end following an /e/ sound or another consonant vs. a general deficiency of the /f/ sound), relevant training modules are recommended.
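  • A sketch of the rule evaluation and recommendation step described above; the rule contents and training module names here are illustrative assumptions, not the actual rule base.

```python
# Sketch of the IF-THEN rule base of FIG. 2C: a rule fires when all of its error
# measures are true for the user, and a firing rule names a deficient skill area
# and a training module. Rule contents and module names are illustrative assumptions.
RULES = [
    {"name": "Rule 1", "requires": {"EM1", "EM2", "EM3"},
     "deficiency": "/f/ sound in words", "module": "sound recognition training"},
    {"name": "Rule 2", "requires": {"EM2"},
     "deficiency": "/f/ sound at the end of words", "module": "final sound training"},
]

def diagnose_and_recommend(user_error_measures, rules=RULES):
    """user_error_measures: set of error-measure ids that are true for this user."""
    recommendations = []
    for rule in rules:
        if rule["requires"] <= user_error_measures:    # all required error measures present
            recommendations.append((rule["deficiency"], rule["module"]))
    return recommendations
```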
  • the motivator module 109 may generate motivation images and sounds to encourage the user/student to complete the tests associated with the assessment tool so that the user is less aware that he/she is being tested by the system. The motivator module may also maintain the user/student's interest in the testing.
  • the diagnostic system may show one or more animals, such as monkeys, eating bananas as the user is completing the tests so that the user is rewarded and incentivized by the monkey's actions.
  • there may be eleven different skills tests and the monkeys may be shown to the user after the first three tests are completed by the user, and then after the first six tests have been completed by the user, and finally after the first nine tests have been completed by the user.
  • the user is given a break between tests, given a chance to relax, and informed of the test portions completed and to be completed.
  • the monkey may be eating three bananas representing the three completed test sections and may say “I want more bananas. Help me get some more bananas” to encourage the student to complete the other tests in the diagnostic tool which are represented by the eight bananas on the tree.
  • the motivation module encourages the user to complete all of the tests in the diagnostic tool.
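  • A small sketch of the motivator checkpoints described above, assuming eleven skill tests with the reward animation shown after the third, sixth and ninth completed test.

```python
# Sketch of the motivator checkpoints: with eleven skill tests, the reward
# animation is shown after the third, sixth and ninth completed tests.
MOTIVATOR_CHECKPOINTS = {3, 6, 9}
TOTAL_TESTS = 11

def should_show_motivator(tests_completed):
    return tests_completed in MOTIVATOR_CHECKPOINTS

def motivator_message(tests_completed):
    remaining = TOTAL_TESTS - tests_completed
    return f"{tests_completed} sections done; {remaining} bananas left on the tree!"
```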
  • the diagnostic tool may also include speech recognition software that permits the various tests described below, to be used in conjunction with speech recognition technology (a microphone and speech recognition software) on the client computer to enhance the value of the diagnostic tests.
  • the child may see one or more items on the computer screen in rapid succession and speak the name of each item into a microphone; the spoken response is interpreted by the speech recognition software in the client computer, transmitted to the server and compared to a correct response by the speech recognition software in the server so that the scorer may determine whether or not the child correctly identified each item.
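  • A rough sketch of this rapid-naming check, with a stub in place of the speech recognition software (the patent does not name a particular recognition engine) and an assumed item list.

```python
# Sketch of the rapid-naming check: each spoken item name is converted to text
# (here by a stub recognizer) and compared to the expected name so the scorer can
# mark it right or wrong; the recognizer and timing detail are assumptions.
import time

def recognize_speech(audio):
    # Stand-in for the speech recognition software on the client computer.
    return audio.strip().lower()

def rapid_naming_score(items, spoken_audio_for):
    correct, start = 0, time.monotonic()
    for item in items:                               # items shown in rapid succession
        if recognize_speech(spoken_audio_for(item)) == item:
            correct += 1
    elapsed = time.monotonic() - start               # naming speed matters for this task
    return correct, elapsed
```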
  • FIG. 3 is a diagram illustrating a preferred embodiment of the diagnostic tool 66 including one or more tests 102 that are used to diagnose a reading problem of a child by testing various phonological awareness and processing skills and pre-reading skills of the child.
  • the one or more tests 102 may each be a separate software application module that may include a user interface portion 111 containing one or more Web pages.
  • Each test 102 may display images on the display of the client computer that test a particular phonological awareness skill of the child and receive responses from the child that are used to determine a score for the child.
  • the diagnostic tool may include, for example, a questionnaire module 110 , a rhyme recognizer module 112 , a rhyme generator module 114 , a beginning and ending sound or sound unit recognizer module 116 , a sound blender module 120 , a sound segmenter module 122 , a sound manipulator module 124 , a sequential verbal recall module 126 , a rapid item naming module 128 , a letter naming and sound/symbol association module 130 , a word decoder module 132 and a fluent reader module 134 .
  • each module may embody a test that tests a particular phonological or reading skill of the child that may affect the child's ability to read.
  • the questionnaire 110 is a fill-in form that permits the system to look for particular risk factors that may lead to reading deficiencies as described below with reference to FIG. 4.
  • the rhyme recognizer module 112 determines the child's ability to recognize a rhyme as described below with reference to FIGS. 5 and 6.
  • the rhyme generator module 114 determines the child's ability to make rhymes as described below with reference to FIGS. 7 and 8.
  • the beginning and ending sound or sound unit recognizer module 116 determines the child's ability to recognize the beginning and ending sounds in one or more words as described below with reference to FIGS. 9 and 10.
  • the sound blender module 120 determines the child's ability to blend known sounds or sound units together to form new words as described below with reference to FIGS. 11 and 12.
  • the sound segmenter module 122 determines the child's ability to segment a word into one or more sounds as described below with reference to FIGS. 13 and 14.
  • the sound manipulator module 124 determines a child's ability to manipulate the sounds in a word as described below with reference to FIGS. 15 and 16.
  • the sequential verbal recall module 126 determines the child's ability to recall a series of sequential items shown to the child as described below with reference to FIGS. 17 and 18.
  • the rapid naming module 128 determines a child's ability to rapidly name one or more items as described below with reference to FIGS. 19 and 20.
  • the letter naming and sound/symbol association module 130 determines the child's ability to name the letters of the alphabet and associate sounds with symbols as described below with reference to FIGS. 21 and 22.
  • the word decoding module 132 determines a child's ability to determine words based on one or more sounds as described below with reference to FIGS. 23 and 24.
  • the fluent reader module 134 determines the child's fluent reading ability as described below with reference to FIGS. 25 and 26. As described above and below, each module may use the speech recognition technology to enhance the testing process. Now, each of these modules will be described in more detail starting with the questionnaire.
  • FIG. 4 is a flowchart illustrating a questionnaire process 140 in accordance with the invention.
  • the questionnaire permits the diagnostic system to gather information about an individual to be tested for the purpose of calculating the individual's risk for reading and academic failure.
  • a variety of historical, environmental, familial and behavioral factors that have been closely linked with and are predictive of language-based reading and learning disorders may be determined.
  • the frequency of middle ear infections, a family history of dyslexia, socioeconomic status, exposure to literacy in the home, and competencies in speech sound awareness, word retrieval, verbal memory, speech sound perception and production, language comprehension and expressive language may provide information about an individual's risk for language-based reading and learning problems.
  • the questionnaire may display a first question to the user of the client computer, such as the parent of the child being tested.
  • the user may respond to the question using the user input devices and the user's response may be recorded by the questionnaire module in step 144 .
  • the questionnaire module determines if all of the questions have been answered and goes to step 142 to present the next question to the user if there are additional questions. As long as there are remaining questions, the method will loop through steps 142 - 146 .
  • the questionnaire module may analyze the responses in step 148 to calculate a score and a risk factor value and then display the results of the analysis (including the responses and the recommendations of the system) to the user in step 150 .
  • the score may be calculated as the number of items checked as being applicable to the user. Although a single factor does not indicate a risk, the more factors that exist for an individual, the more likely it is that the individual may experience difficulties.
  • the module may generate a category of the risk (high, medium or low) and then provide recommendations based on the category of risk.
  • the questionnaire may ask if the child has a history of middle ear infections, if anyone in the family has reading or other learning disabilities and if the child mispronounces multi-syllabic words. The responses to these questions may be used to determine the category of risk of the person being tested. The category of risk determined based on the questionnaire may then be used during the recommendation of training tools.
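  • One way to picture the scoring and categorization step is a simple count of the risk factors marked as applicable, mapped onto a risk category. The factor names and cutoff values in the sketch below are illustrative assumptions, not values specified by the diagnostic system.

```python
# Hypothetical sketch of the questionnaire scoring step: count the factors
# marked as applicable and map the count onto a risk category.
responses = {
    "history of middle ear infections": True,
    "family history of reading or learning disability": True,
    "mispronounces multi-syllabic words": False,
    "limited exposure to literacy in the home": False,
}

score = sum(1 for applicable in responses.values() if applicable)

# The cutoffs below are invented for illustration; the actual thresholds
# would be chosen by the test designers.
if score >= 3:
    risk_category = "high"
elif score >= 1:
    risk_category = "medium"
else:
    risk_category = "low"

print(f"Risk factors present: {score}, risk category: {risk_category}")
```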
  • the rhyme recognition module will be described in more detail.
  • FIG. 5 is a flowchart illustrating a method 160 for testing a child's recognition of rhymes in accordance with the invention.
  • the rhyme recognizer module tests the child's ability to recognize rhyming words and, in order to determine if two words rhyme, the child must focus on the sounds of the words rather than the meaning. In addition, the child must focus on one part of the word rather than the word as a whole.
  • a sensitivity to rhyming is typically a child's first experience shifting their attention and focus from the content of the speech to the form of the words. Typically, this skill for recognizing rhymes should emerge by 3-4 years of age.
  • the module may show the child one or more different types of rhymes (using different sound units, for example) in order to assess the child's ability with different types of rhymes.
  • the rhyme recognizing module may display two words along with their pictures on the user's display screen as shown in FIG. 6. For example, the module may display the picture of a sun and a picture of a gun.
  • the module may display text below the pictures asking the user if the two words rhyme.
  • the module may present a verbal prompt asking the user if the two words rhyme since the users of the system may not be able to read.
  • the user may use the user input device, such as the keyboard, the mouse or the microphone of the speech recognition hardware, to respond to the question and the module may receive the response.
  • the module may determine if the response is correct.
  • the module may determine if there are other rhyme types to test in step 170 . If there are more rhyme types to test, the module may display the word pair for the next type of rhyme in step 172 and loops back to step 164 to display the question about whether the two words rhyme. If there are no more rhyme types to test, the module may calculate the child's score in step 174 . The score may be calculated based on the percentage of pairs of items correctly identified as rhyming or not. In step 176 , the module may display the score to the user and the recommender, based on the score, may recommend one or more training tools to help the child improve his rhyme identification skills.
  • the module may determine the number of consecutive errors of the particular rhyme type in step 178 .
  • the module may compare the number calculated above to a predetermined number and, if the number of consecutive errors is more than the predetermined number, the module may go to step 170 to determine if there are other rhyme types to be tested (assuming that more tests of the current rhyme type are not productive since the user has already missed more than the predetermined number). If the number of consecutive errors is less than the predetermined number, then the module may display the next word pair for the same rhyme type in step 182 in order to continue testing the child's ability with that particular type of rhyme.
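  • The discontinue rule used here (and in the other subtests) can be sketched as follows; the item list, the canned responder and the cutoff of three consecutive errors are assumptions for illustration only.

```python
# Sketch of the consecutive-error discontinue rule within one rhyme type.
# Items, the answer function and the cutoff are hypothetical.

MAX_CONSECUTIVE_ERRORS = 3  # assumed predetermined number

def run_rhyme_type(item_pairs, get_user_answer):
    """Present word pairs of one rhyme type until the items run out or the user
    misses MAX_CONSECUTIVE_ERRORS items in a row.  Returns (correct, attempted)."""
    consecutive_errors = 0
    correct = attempted = 0
    for word_a, word_b, do_rhyme in item_pairs:
        attempted += 1
        if get_user_answer(word_a, word_b) == do_rhyme:
            correct += 1
            consecutive_errors = 0
        else:
            consecutive_errors += 1
            if consecutive_errors >= MAX_CONSECUTIVE_ERRORS:
                break  # stop this rhyme type and move on to the next one
    return correct, attempted

# Example run with a canned responder that always answers "yes, they rhyme".
items = [("sun", "gun", True), ("cat", "dog", False), ("pup", "cup", True)]
print(run_rhyme_type(items, lambda a, b: True))  # (2, 3)
```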
  • the rhyme recognizer module may test the child's abilities with respect to a variety of rhyme types to gain a better understanding of the child's deficiencies or abilities to recognize rhymes. For example, the module may determine that the child only has deficiencies with respect to certain types of rhymes. Now, an example of the user interface for the rhyme recognition module will be described.
  • FIG. 6 is a diagram illustrating an example of how the child's rhyme recognition may be tested in accordance with the invention.
  • an image 190 that may be displayed on the user's display screen is shown.
  • the image may include a picture of a first item 192 and a picture of a second item 194 and the child must determine if the names of the two items rhyme with each other.
  • the items are a sun and a gun that do in fact rhyme.
  • the image may also include displayed instructions 196 from the module and one or more response buttons 198 , 200 , such as the “Yes” button and the “No” button in this example.
  • the user may also respond to the query by using the keyboard or by speaking into a speech recognition microphone.
  • the rhyme recognition module may present the rhyme recognition test as a series of colorful images that reduces the child's test anxiety since the child may not even realize that he/she is being tested. Now, the rhyme generation module will be described in more detail.
  • FIG. 7 is a flowchart illustrating a method 210 for testing a child's ability to generate a rhyme.
  • the rhyme generation module assesses a child's ability to focus on one part of a word rather than the entire word.
  • the ability to rhyme indicates the emergence of phonological awareness and processing skills and is a good early indicator of later reading ability. Typically, this skill begins to show as the child is 3-4 years old.
  • the module may generate a word sound on the speaker of the user's computer and may display an image of the word being spoken.
  • the module may also display a series of other pictures of items in step 214 and the user must determine which item in the series rhymes with the spoken word.
  • the module may then ask the user to select the rhyming item in step 216, and the user may provide a response using one of the input devices (keyboard, mouse or microphone). Instead of a series of images being displayed to the user, the module may provide a verbal prompt asking the user to generate a rhyming word and the user may speak the rhyming word into the microphone of the speech recognition device.
  • the module may then determine if the user's response is correct in step 218 . If the user's response is not correct, then the module may determine the number of consecutive incorrect responses in step 220 and compare the calculated number to a predetermined number, n, in step 222 . If the number of errors is less than the predetermined number (e.g., the user should be tested more on that rhyme type), the module may display the next image in step 224 and return to step 214 . If the number of consecutive errors is greater than the predetermined number (e.g., it is no longer useful to continue testing this rhyme pair because the user does not understand it) or the user's response was correct, the module may determine if there are more rhyme types to test in step 226 .
  • the module may display the items for the next rhyme type in step 228 and return to step 214 to elicit the user's response. If there are no other rhyme types (i.e., the user has completed the module), the module may calculate a score in step 230 (the score is equal to the percentage of items correctly identified as rhyming) and may display the results of the test and any recommendations from the recommender in step 232 .
  • the recommendations from the recommender are similar to those described above and therefore will not be described here. Now, an example of the rhyme generation test is described.
  • FIG. 8 is a diagram illustrating an example of how the child's rhyme generation may be tested in accordance with the invention using an image 240 .
  • the image may include an image 242 of the spoken word that may be a “pup” in this example.
  • the image 240 may also include one or more images of other items 244 - 248 (a horn, a bed and a cup in this example) and displayed instructions 250 as shown.
  • the user may hear the word “pup”, see the picture of the “pup” and select the item below it that rhymes with the pup. In this example, the user is supposed to select the picture of the cup.
  • the module may provide a verbal prompt asking the user to generate a rhyming word and the user may speak the rhyming word into the microphone of the speech recognition device.
  • the use of images to test the child's ability reduces the child's test anxiety since the child may not even realize that a test is being conducted.
  • FIG. 9 is a flowchart illustrating a method 260 performed by the beginning and ending sound recognizer module for testing the child's ability to distinguish the beginning and ending sounds of a word.
  • the module tests a child's ability to recognize sounds in words. Once the child establishes the skill to recognize the beginning and ending sounds of a word, the child may more readily learn to isolate the sounds in a word and hear them separately.
  • a normal kindergarten child is typically able to identify which word in a group of three words begins with the same first sound as the target word. Most normal first grade students can perform the harder task of identifying the word in a group with the same last sound.
  • the module may present a spoken word naming an item and display an image of the item to the user.
  • the module may query the user about which item in a sequence of items has the same beginning sound as the item. The module may then receive the user's response, entered using one of the input devices as described above, in step 266.
  • the module determines if the response is correct. If the response is not correct, the module may determine the number of consecutive errors for the particular beginning sound in step 270 and compare the calculated value with a predetermined value, n, in step 272 .
  • if the calculated value is less than the predetermined value, the module may present the user with another spoken word and picture in step 274 and return to step 264 to gather the user's response.
  • if the calculated value is not less than the predetermined value or the user's response was correct, the module determines if all of the beginning sounds in the test are completed in step 276 and either presents the next beginning sound in step 278 and returns to step 264 if there are other beginning sounds to test or begins testing the ending sounds.
  • the module may present a spoken word and a picture of the item in step 280 and query the user about which item in a sequence of items has a similar ending sound in step 282 .
  • the module may gather the user's response and determine if the response is correct in step 286 .
  • the module may determine the number of consecutive errors for the particular ending sound in step 288, compare the calculated number to a predetermined number in step 290 and, if the calculated number is less than the predetermined number, display a next word in step 292 and return to step 282. If the calculated number is not less than the predetermined number or the user's response is correct, the module may determine if the ending sound testing has been completed in step 294. If the testing of the ending sounds has not been completed, then the module may present the next word in step 296 and return to step 282. If the ending sounds are completed, the module may calculate a score based on the percentage of correct responses in step 298.
  • the module and the recommender may generate a display of the score and any recommendations about training tools that the user may use to improve his recognition of the beginning and ending sounds of a word.
  • the user interface for testing the ability to discern the beginning and endings of words will be described.
  • FIG. 10 is a diagram illustrating an example of a user interface 310 of how the child's ability to discern the beginning and ending of words may be tested in accordance with the invention.
  • the user interface may include a picture of the current word 312 that is a leg in this example, and a series of pictures 314 showing other items. The user must recognize the beginning sound of the leg and then determine which picture of an item shows an item with the same beginning sound. The user may then select an item by clicking on the item. In this example, the correct response is the lamp. Now, a method for testing a child's ability to blend sounds will be described.
  • FIG. 11 is a flowchart illustrating a method 360 for testing a child's ability to blend sounds.
  • the game tests the user's ability to blend units of sound such as syllables or phonemes together.
  • the blending of these units of sound together requires a knowledge that individual sounds may be combined to form a word, but does not require letter recognition.
  • the blending of sounds is an important reading skill since, when children sound out a word, they must be able to then blend all of the sounds together to form the whole word. Typical children normally develop the blending skill during the early kindergarten years.
  • the module may display one or more graphical representations of items and present a spoken word, with its sound units separated by equal intervals of time, to the user, such as "k-ey".
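  • The equal-interval presentation of a word's sound units (for example, "k-ey") can be sketched as shown below; the play_sound_unit function and the half-second interval are hypothetical placeholders for whatever audio playback the client computer actually provides.

```python
import time

# Hypothetical sketch: present the sound units of "key" ("k" and "ey")
# separated by an equal interval of silence.

INTER_UNIT_INTERVAL_SECONDS = 0.5  # assumed equal spacing between units

def play_sound_unit(unit_name):
    # Placeholder: a real client would play a prerecorded audio clip here.
    print(f"playing sound unit: {unit_name}")

def present_segmented_word(sound_units, interval=INTER_UNIT_INTERVAL_SECONDS):
    for index, unit in enumerate(sound_units):
        play_sound_unit(unit)
        if index < len(sound_units) - 1:
            time.sleep(interval)  # equal pause before the next unit

present_segmented_word(["k", "ey"])
```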
  • the module may then ask the user to identify the graphical item referred to by the spoken word in step 364 and receive the response from the user using one of the input devices, such as the keyboard, mouse or microphone of the speech recognizer.
  • the module may determine if the response received is correct. If the response was not correct, the module may determine the number of consecutive errors for the current sound unit in step 368 .
  • the module may determine if the number of consecutive errors is less than a predetermined threshold and, if so, present the next word with a similar sound unit type in step 372 and loop back to step 364. If the number of consecutive errors is not less than the predetermined threshold or if the prior response was correct, the module may determine if there are other sound unit types to test in step 374. If there are other sound unit types, the module may present a word with sound units of the new type in step 376 and loop back to step 364 to test the child using the new sound unit type. If there are no more sound unit types to test, the module may determine the user's score in step 378 based on the percentage of correctly answered items.
  • the module may display the score to the user and the recommender may recommend one or more training tools that may help the user improve the blending sound ability and that may be downloaded from the diagnostic system.
  • FIG. 12 is a diagram illustrating an example of a user interface for testing a child's ability to blend sounds 380 in accordance with the invention.
  • the user interface 380 may include graphical representations 382 - 386 of one or more items, such as a key, a doll and a bell in this example, that the user may select in response to the spoken word's separated sound units.
  • the user may respond to the questions by clicking on the image, pressing a key on the keyboard or speaking a name into the microphone of the speech recognizer. In this example, the correct response is to select the key 382 .
  • a method for testing the sound segmenting ability of a user will be described.
  • FIG. 13 is a flowchart illustrating a method 390 for testing a child's ability to segment sounds in which the user's ability to segment a unit of sound, such as a word, into its constituent sound units, such as syllables and phonemes, is tested.
  • the ability to segment phonemes is a reliable predictor of reading success and usually is developed prior to and during kindergarten.
  • a sequence of sound units, such as a sentence, is spoken to the user.
  • in step 394, the user is queried about how many words were heard and the response from the user may be shown graphically as shown in FIG. 14. In the example shown in FIG. 14, the sentence "I have two brothers" was presented to the user, the user activated an input device (clicked the mouse button, hit a key or spoke into the microphone) four times to indicate that four words were heard, and four items 395 are shown on the display.
  • in step 396, the accuracy of the user's response is checked. If the response is not correct, the number of consecutive errors is determined in step 398 and compared to a threshold value in step 400. If the number of errors is less than the threshold, the next sequence of sound units is presented to the user in step 402 and the method loops back to step 394. If the number of errors is not less than the threshold or the prior response of the user was correct, it is determined if there are more tests with a different sequence of sound units in step 404. If there are more tests, a new sequence of sound units is presented in step 406 and the method loops back to step 394.
  • if there are no more tests, the user's score is determined (as a percentage of correct responses) in step 408 and the score and any recommendations based on the score are displayed in step 410.
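  • Scoring a single sound-segmenting item reduces to comparing the number of input-device activations with the number of words in the spoken sentence; the sketch below illustrates this with invented sentences and simulated tap counts.

```python
# Hypothetical sketch of scoring sound-segmenting items: the sentence is
# spoken aloud, the child taps once per word heard, and the number of taps
# is compared with the number of words in the sentence.

def count_words(sentence):
    return len(sentence.split())

def score_segmenting_item(sentence, tap_count):
    """Return True if the child's tap count matches the word count."""
    return tap_count == count_words(sentence)

items = ["I have two brothers", "The dog ran"]
taps = [4, 3]  # simulated responses; a real client would count input events
correct = sum(score_segmenting_item(s, t) for s, t in zip(items, taps))
print(f"score: {correct / len(items):.0%}")  # percentage of correct responses
```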
  • FIG. 15 is a flowchart illustrating a method 420 for testing a child's ability to manipulate sounds.
  • the user's ability to manipulate phonemes is tested since that ability is highly correlated with reading ability through the 12th grade.
  • the user is presented with a spoken word.
  • the spoken word is “cake”.
  • a graphical representation of constituent sound units is displayed for the user.
  • the graphical representations may be one or more blocks 426 (three for the word “cake” with the first and last blocks being the same color since the first and last sound units of “cake” have the same sound).
  • in step 428, the user is asked to rearrange the blocks shown or use the other available blocks (as shown in FIG. 16) to form a new word and the user rearranges the blocks with an input device.
  • the user is asked to change “cake” to “cape”.
  • a correct response would be to have three blocks wherein a third block 429 has a color that does not match the other two blocks, indicating that the third sound unit is different from both the first and second sound units.
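  • The colored-block display can be pictured as a mapping from each sound unit in the word to a color, where identical sound units share a color. The phoneme spellings and color names in the sketch below are assumptions for illustration.

```python
# Hypothetical sketch of the colored-block display: identical sound units get
# identical colors, so "cake" (/k/ /ai/ /k/) shows its first and last blocks
# in the same color, while "cape" (/k/ /ai/ /p/) does not.

PALETTE = ["red", "blue", "green", "yellow", "purple"]

def blocks_for(phonemes):
    """Assign one color per distinct phoneme, reusing colors for repeats."""
    color_of = {}
    blocks = []
    for phoneme in phonemes:
        if phoneme not in color_of:
            color_of[phoneme] = PALETTE[len(color_of)]
        blocks.append(color_of[phoneme])
    return blocks

print(blocks_for(["k", "ai", "k"]))   # ['red', 'blue', 'red']
print(blocks_for(["k", "ai", "p"]))   # ['red', 'blue', 'green']
```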
  • in step 430, the accuracy of the response is determined. If the response is not correct, the number of consecutive errors is determined in step 432 and compared to a threshold value in step 434.
  • if the threshold value is not exceeded (indicating that the same type of manipulation should continue to be tested), the next manipulation of the same type is presented in step 436 and the method loops back to step 424. If the number of errors exceeds the threshold (indicating that the child is having too much trouble with the current type of manipulation) or if the prior response was correct, it is determined if there are more types of manipulations to test in step 438. If there are more types to test, the next type of manipulation is presented in step 440 and the method loops back to step 424. If there are no more types to test, the score of the user is determined in step 442 (based on the percentage of correct answers) and the score and any recommendations are displayed to the user in step 444. Now, a method for testing the ability to recall spoken words will be described.
  • FIG. 17 is a flowchart illustrating a method 450 for testing a child's ability to recall spoken items in sequential order.
  • the ability to recall a sequence of verbal material depends on the ability to accurately represent the essential phonological features of each item in working memory, and phonological coding efficiency is a primary determinant of performance on this task.
  • the ability to recall a list of spoken items increases with age from about 1 digit and 2 words at 4 years old to 8 digits and 6 words at 12 years old.
  • in step 452, a sequence of words and/or digits is spoken, with equal intervals between each word or digit, through the speaker of the computer to the user. The user then repeats the sequence back using an input device, such as the microphone of the speech recognizer, in step 454.
  • FIG. 18 illustrates an example of a sequence of digits that are presented to the user.
  • the response is checked for accuracy.
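  • Checking a recall response amounts to an exact, order-sensitive comparison between the presented sequence and the sequence the child repeats back. The sketch below also determines the longest sequence recalled correctly; the digit sequences and the stop-at-first-error rule are simplifying assumptions rather than the exact flowchart logic.

```python
# Hypothetical sketch of scoring sequential verbal recall: a repetition is
# correct only if it matches the presented sequence exactly and in order.

def longest_span_recalled(sequences, get_repeated):
    """Return the length of the longest sequence repeated back exactly.
    Presentation stops after the first incorrect repetition (a simplification)."""
    best = 0
    for seq in sequences:
        if list(get_repeated(seq)) == list(seq):
            best = len(seq)
        else:
            break
    return best

# Simulated run: the child recalls sequences of up to four digits correctly.
digit_sequences = [[7, 2], [4, 9, 1], [3, 8, 5, 1], [6, 2, 9, 4, 7]]
responses = {2: [7, 2], 3: [4, 9, 1], 4: [3, 8, 5, 1], 5: [6, 2, 4, 9, 7]}
print(longest_span_recalled(digit_sequences, lambda s: responses[len(s)]))  # 4
```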
  • if the response is not correct, then the number of consecutive errors is determined in step 458 and the number of consecutive errors is compared to a threshold in step 460. If the threshold is not exceeded, then the next sequence of words and/or digits is presented in step 462 and the method loops back to step 454. If the threshold is exceeded or if the last response was correct, it is determined if there are more types of sequences of words to test in step 464 and the method presents a new type of sequence in step 466 and loops back to step 454 if there are more types. If all of the types of sequences have been completed, then the user's score is determined in step 468 (as a percentage of correct responses) and the score and any recommendations for training modules are displayed in step 470. Now, a method for testing rapid naming ability will be described.
  • FIG. 19 is a flowchart illustrating a method 480 for testing a child's ability to rapidly name visually-presented items.
  • an inability to name visual objects typically underlies a reading disorder.
  • an array 484 (an example of which is shown in FIG. 20 as a first row of a 4×6 array) is displayed to the user.
  • a timer is started and the user is asked to name all of the items in the array as fast as possible in step 488 using an input device such as a microphone of a speech recognizer. The timer may actually be started when the user makes his/her first response. After each response, the accuracy of the response is determined in step 490 .
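  • A minimal sketch of this timing loop, including the abort-on-consecutive-errors rule described in the next step, is shown below; the item names, the perfect responder and the error cutoff are invented for illustration.

```python
import time

# Hypothetical sketch of the rapid-naming timing loop: start timing at the
# first response, stop when the array is finished, and abort after too many
# consecutive errors.

MAX_CONSECUTIVE_ERRORS = 3  # assumed threshold

def run_rapid_naming(items, get_named):
    start = None
    consecutive_errors = 0
    for item in items:
        named = get_named(item)          # e.g., from the speech recognizer
        if start is None:
            start = time.monotonic()     # timer starts at the first response
        if named == item:
            consecutive_errors = 0
        else:
            consecutive_errors += 1
            if consecutive_errors >= MAX_CONSECUTIVE_ERRORS:
                return None              # test aborted
    return time.monotonic() - start      # total naming time is the score

array = ["dog", "sun", "key", "bell", "cup", "hat"]
elapsed = run_rapid_naming(array, lambda item: item)  # simulated perfect responder
print(f"total naming time: {elapsed:.3f} s")
```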
  • if the response is not correct, then the number of consecutive errors is determined in step 492 and compared to a threshold in step 494. If the threshold is exceeded, the test is aborted. If the threshold is not exceeded, then the user continues to identify the items in the array. If the prior response was correct, then it is determined if there are more items to name in step 496 and the method loops back to step 488 if there are more items. If all of the items have been named, then the timer is stopped in step 498 and the score is determined in step 500 based on the total time of the responses. In step 502, the score and any recommendations for training modules are displayed. Now, a method for testing the ability to name letters and associate sounds with symbols will be described.
  • FIG. 21 is a flowchart illustrating a method 510 for testing a child's ability to name letters and associate a phoneme sound with a letter.
  • the inability to name letters may indicate a reading problem at the kindergarten level while an inability to associate a phoneme sound with a letter may indicate a reading problem at the first and second grade level.
  • a letter's name is spoken to the user by the computer.
  • the user may identify the letter in an array of letters 516 (an example of which is shown in FIG. 22) and select the appropriate letter using an input device.
  • the response accuracy is determined and it is determined if there are more letters. If there are more letters, the method loops back to step 512 .
  • if all of the letters have been completed, then a phoneme sound is generated by the computer and heard by the user in step 520. The user may then indicate the corresponding letter for the phoneme sound in step 522 and the accuracy of the response is checked. In step 524, it is determined if there are more phonemes to test and the method loops back to step 520 if there are more phonemes. If the phonemes have been completed, then the user's score is determined in step 526 and the score and any recommendations about training modules are displayed in step 528. Now, a method for testing a child's ability to decode words will be described.
  • FIG. 23 is a flowchart illustrating another method 530 for testing a child's ability to decode words.
  • the method tests a child's ability to decode (i.e., read by sounding out) nonsense and real words since research has shown that the best measure of the ability to apply knowledge about grapheme-phoneme correspondences to reading words is a test of non-word phonemic decoding fluency.
  • the module may display a set of words 533 on the screen (an example of which is shown in FIG. 24) and then present a spoken word.
  • the module asks the user to identify the written word that was just spoken to the user.
  • the user's response may be provided using one of the input devices, such as the keyboard, mouse or microphone of the speech recognizer. Instead of speaking the word to the user, the module may present the word to the user in a visual manner.
  • the module determines if the correct response was received.
  • the module may determine the number of consecutive errors for the particular syllable type in step 538 and compare that calculated value to a predetermined threshold value in step 540 to determine if the calculated value is less than the threshold value. If the calculated value is less than the threshold, then the next spoken word for the same syllable type is presented in step 542 and the method loops back to step 534 to determine the user's response. If the number of consecutive errors is greater than the threshold or the prior response was correct, the module may determine if there are more syllable types to be tested in step 544.
  • if there are more syllable types, the module presents the next word for the next syllable type in step 546 and loops back to step 532 where a new spoken word is presented to the user. If there are no more syllable types to test, the module may repeat the above testing process (not shown in the flowchart for clarity) for one or more nonsense words in step 548. Once the testing process has been repeated for the nonsense words (by testing if it is completed in step 550 and looping back to step 548 if it is not), the module may determine the score of the child in step 552, wherein the score is calculated as the percentage of items that have been correctly answered.
  • the module may display the score and the recommender may recommend one or more training tools to improve the child's decoding skills if the score reveals a decoding deficiency.
  • FIG. 25 is a flowchart illustrating a method 560 for testing a child's ability for fluent reading. Slow or inaccurate decoding interferes with the ability of the child or user to extract meaning from the text.
  • a typical child may read and respond to 30 sentences of the nature presented in this diagnostic tool in two minutes. The sentences may be questions (“Is the dog red?”) or statements (“The dog has fur.”) to which the user responds.
  • a question 564 is displayed to the user along with two answers 566 (an example of which is shown in FIG. 26).
  • a timer is started in step 568 as the user makes his first response in step 570 .
  • the accuracy of the response is determined.
  • the number of errors made is compared to a threshold in step 574. If the number of errors is less than the threshold, then the method loops back to step 562 to continue testing. If the number of errors is more than the threshold or the prior response was correct, it is determined if the time exceeded two minutes in step 576. If the time is less than two minutes, then the method loops back to step 562. If the time exceeds two minutes, the total number of correct responses is tallied, the entire test is repeated in step 577 and the score of the user is determined in step 578. The total score of the user is calculated by determining the user's score for each two-minute test and then averaging the scores from the two tests to arrive at a final score.
  • a user may score 30 on the first test and 28 on the second test so that the final score is 29.
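  • The final fluency score is simply the mean of the two two-minute administrations; the sketch below restates that arithmetic using the example scores above.

```python
# Sketch of the fluent-reading scoring: each two-minute test yields a count
# of correct responses, and the final score is the average of the two tests.

def final_fluency_score(first_test_correct, second_test_correct):
    return (first_test_correct + second_test_correct) / 2

print(final_fluency_score(30, 28))  # 29.0, matching the example above
```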
  • the score and any recommendations of training modules are displayed to the user. Now, the training module recommender in accordance with the invention will be described.
  • FIG. 27 is a flowchart illustrating the training recommender method 590 in accordance with the invention.
  • the method identifies, recommends and makes available specific training modules based on an individual's or a group's assessment profile, which is built from the results of the various tests performed by the diagnostic tool in accordance with the invention.
  • the recommender may automatically recommend one or more training modules based on the test results.
  • the recommender gathers the data for the individual or group and analyzes it.
  • the recommender determines the individual or group's skill in each skill area tested by the diagnostic tool.
  • the recommender matches the skill level of the individual or group in a particular skill area with an appropriate training module.
  • a particular score of a user, such as a score close to normal, on a particular test, such as rhyme recognition, may cause the recommender to recommend the lowest level (least amount of training) of the rhyme recognition training tool to help the child.
  • for a lower score, the recommender may recommend a higher level training tool with more rhyme recognition training.
  • the particular scores of a user on the various syllable types in the rhyme recognition test may cause the recommender to recommend no training for open rime syllable types but to recommend training for closed rime syllable types.
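  • The matching step can be pictured as a lookup from each tested skill's score onto a training-module level. The skill names, score bands and module levels in the sketch below are illustrative assumptions, not the actual mapping used by the recommender.

```python
# Hypothetical sketch of the recommender: map each skill score onto a
# training-module level.  Score bands and module names are invented.

def module_level_for(score_percent):
    if score_percent >= 90:
        return None            # no training recommended
    if score_percent >= 70:
        return "level 1 (least training)"
    if score_percent >= 50:
        return "level 2"
    return "level 3 (most training)"

skill_scores = {"rhyme recognition": 85, "sound blending": 45, "sound segmenting": 95}

for skill, score in skill_scores.items():
    level = module_level_for(score)
    if level is not None:
        print(f"{skill}: recommend {skill} training module, {level}")
```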
  • the recommender may display the recommended training modules to the user.
  • the user may then select the recommended training modules in step 600 and the training modules may be downloaded to the user's computer so that the user may use the training modules to improve the skill areas that require it.
  • the diagnostic system in accordance with the invention not only diagnoses reading problems using the various skill tests but also recommends training modules that may help improve a deficient skill.
  • the diagnostic system makes it easy for a parent to have the child tested for deficiencies and then to receive the tools that help correct any deficiencies. Now, an example of a report that is generated by the diagnostic system in accordance with the invention will be described.
  • FIG. 28 illustrates an example of a user interface 700 displaying a data report that is generated by the computer-based phonological skills diagnostic system in accordance with the invention. Further details of the data reporting in accordance with the invention are contained in Appendix A which is incorporated herein by reference.
  • Appendix A which is incorporated herein by reference.
  • a data graph (as shown in the example shown in FIG. 28) or a data table displays data to show individual students' test results of all the subtests and compare the test results (total score comparison or individual subtest score comparison) among students, across classes or schools.
  • the data tables can be sorted by scores and provide normative data for comparison. Now, the example of the data graph in accordance with the invention will be described.
  • the data graph shown in FIG. 28 may provide various information to a user of the system.
  • the data graph illustrates the percentage correct for a particular test (Rhyming Recognition in this example) for a particular school class (Ms. Davis' Class 1A at Central School in this example).
  • this graph illustrates the percentage correct of one or more students (Melissa, Robert, Ken, Beth, etc. in this example) at different points in time.
  • the full names of the students are shown on the data report.
  • each student's score (a percentage of correct answers in the Rhyming Recognition test) prior to any training (pre-test), after a first round of training and testing (post-test 1 ) and after a second round of training and testing (post-test 2 ) are shown.
  • the different scores are color coded for easier viewing so that, for example, the pre-test score is a green bar, the post-test 1 score is a blue bar and the post-test 2 score is a red bar.
  • each bar lists the actual percentage that is represented by the bar, but that percentage can be suppressed by clicking on a button 702 on the user interface. In particular, when the button is clicked, all the percentages on the screen are hidden and the button will change to Show Percentages. When the Show Percentages button is clicked, all the percentages will be shown on the screen and the button changes back to Suppress Percentages.
  • the user interface may include an “Other Test Section” button 704. If the user clicks on the “Other Test Section” button, a drop-down menu 740 (an example of which is shown in FIG. 29) appears that shows all of the subtest titles that can be selected by the administrator/teacher. Using the menu, a teacher can select a subtest to see the students' test results on that particular subtest. Thus, the data graph is dynamic in that the teacher/administrator can change the data shown in the graph at any time.
  • the user interface may further include a zoom button 706 that permits the teacher to change the number of students whose data is displayed on the graph. For example, there may be 18 students in the class shown in FIG. 28, but the graph shown in FIG. 28 only shows the test results for six students.
  • by clicking on the Zoom Out button, the teacher can see all of the students' test results, but details for each student will be omitted so that the data can fit into the graph, and the button will change to Zoom In.
  • when the Zoom In button is clicked, the default six students' test results will be shown on the screen and the button will change back to Zoom Out.
  • the purpose of having the zoom out function is to provide a general picture of the class performance.
  • the user interface may further include a back button 708 that permits other student scores to be displayed in the graph while retaining all of the data about each student.
  • the graph defaults to showing a predetermined number of students, such as six, and the back button permits the teacher to browse through the detailed scores of the entire class by viewing a predetermined number of students at a time. For example, if the six students shown on the screen are the first six in the class, then this button will be inactive since there are no prior students. However, when the six students on the screen are not the first six, clicking on this button will show the previous six students' full test results.
  • the user interface may also include a forward button 710 that permits the teacher to see the full scores for the next predetermined number of students.
  • if the six students shown on the screen are the last six in the class, this button will be inactive.
  • otherwise, clicking on this button will show the next six students' test results.
  • the teacher is able to browse through the full test results for the entire class.
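  • Browsing the class a fixed number of students at a time is ordinary pagination; the sketch below assumes six students per page and shows when the Back and Forward buttons would be inactive.

```python
# Sketch of the Back/Forward browsing behavior, assuming six students per page.
STUDENTS_PER_PAGE = 6

def page_of_students(all_students, page_index):
    start = page_index * STUDENTS_PER_PAGE
    return all_students[start:start + STUDENTS_PER_PAGE]

def button_states(all_students, page_index):
    last_page = (len(all_students) - 1) // STUDENTS_PER_PAGE
    return {"back_enabled": page_index > 0, "forward_enabled": page_index < last_page}

students = [f"Student {i}" for i in range(1, 19)]  # e.g., 18 students in the class
print(page_of_students(students, 0))   # first six; Back is inactive here
print(button_states(students, 0))
print(button_states(students, 2))      # last page of 18: Forward is inactive
```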
  • the user interface may further include a graph display button 712 .
  • when the graph display button is clicked, a drop-down menu 750 with small graphs will be shown (as shown in FIG. 30) for the teacher to choose the data display she prefers.
  • the menu permits the teacher to choose the data display of one test (pre, or post 1, or post 2), or two tests (pre and post 1, or pre and post 2, or post 1 and post 2), or three tests together (pre, post 1, and post 2).
  • this menu is dynamic in that it changes depending on the actual test data. For example, if students take the same test more than three times, there will be more combinations.
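  • The graph-display menu offers every non-empty combination of the test administrations that actually exist in the data, which is why the menu grows when students take the same test more than three times. A sketch of generating those combinations follows; the administration labels are taken from the example above.

```python
from itertools import combinations

# Sketch of building the dynamic graph-display menu: one entry for every
# non-empty combination of the test administrations present in the data.

def display_choices(administrations):
    choices = []
    for size in range(1, len(administrations) + 1):
        for combo in combinations(administrations, size):
            choices.append(" and ".join(combo))
    return choices

print(display_choices(["pre-test", "post-test 1", "post-test 2"]))
# 7 choices; with a third post-test the menu would grow to 15 choices.
```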
  • the user interface may further permit the user to select a data report print choice.
  • there may be three different print choices.
  • the first print choice is a “Print All” choice in which reports for all the students or subtests of the graph or table on the screen are printed.
  • a second print choice is the “Print Current Student/Subtest” choice in which the report for the student/subtest currently on the screen is printed.
  • the third print choice is a “Print . . . ” choice in which the user is allowed to select the reports for certain students or subtests that the user would like to print out of the graph or table on the screen.


Abstract

A diagnostic system and method for evaluating one or more phonological awareness, phonological processing and reading skills of an individual to detect phonological awareness, phonological processing and reading skill deficiencies in the individual so that the risk of developing a reading deficiency is reduced and existing reading deficiencies are remediated. The system may use graphical games to test the individual's ability in a plurality of different phonological awareness, phonological processing and reading skills. The system may use speech recognition technology to interact with the games. The system may include a module for providing motivation to a user of the system being tested.

Description

    RELATED APPLICATION
  • This application is a continuation-in-part application of U.S. patent application Ser. No. 09/350,791, filed Jul. 9, 1999, entitled “Diagnostic System and Method for Phonological Awareness, Phonological Processing, and Reading Skill Testing” and owned by the same assignee as the present invention.[0001]
  • APPENDIX
  • This disclosure includes and incorporates Appendix A which is attached. Appendix A is 24 pages and discloses details of the data graphing and reporting functionality of the diagnostic system and method for phonological awareness, phonological processing and reading skill testing. [0002]
  • BACKGROUND OF THE INVENTION
  • This invention relates generally to a diagnostic system and method for testing one or more different areas of phonological awareness, phonological processing, verbal short term memory, rapid access naming, phonemic decoding and reading fluency in order to determine if the individual being tested is at risk to having reading problems and the areas in which the individual may need further training. [0003]
  • It is well known that a relationship exists between phonological processing abilities of an individual and the normal acquisition of beginning reading skills. For inefficient and disabled readers, the reading impasse exists in the perceptual and conceptual elusiveness of phonemes. Phonemes are the smallest units of speech that correspond to the sounds of our spoken language. Our phonologically based language requires that students have a sensitivity to and an explicit understanding of the phonological structure of words. This explicit understanding of the phonological structure of words is known as phonological awareness. Phonological awareness skills are displayed by an individual when the individual is able to isolate and identify individual sounds within words and to manipulate those identified sounds. Phonological processing refers to the use of information about the sound structure of oral language to process oral and written information. These include verbal short term memory and rapid access naming. [0004]
  • The English language has words that are comprised of sounds in some predetermined order. From the vast number of possible sequences of sounds, words in the English language actually use a relatively small number of sequences and the majority of these sequences are common to many words. A child who becomes aware of these common sound sequences is typically more adept at mastering these sequences when the words are presented in their printed form (i.e., when the child is reading the words) than a child who lacks this awareness of sounds. [0005]
  • For example, the word “mat” has three distinct phonemes /m/, /ae/ and /t/. The words “sat” and “bat” have different initial phonemes, /s/ and /b/ respectively, but share the middle and final phonemes (/ae/ and /t/, respectively) that form the common spelling pattern “at”. To a child with normal phonological awareness, our alphabetic orthography appears to be a sensible system for representing speech in writing. Thus, a child may employ the strategy of sounding out unknown words or letter sequences by analogy to known words with identical letter sequences. For example, the child may pronounce the unknown word “bat” by rhyming it with the known word “cat”. [0006]
  • Phonological awareness skills are grouped into two categories including synthesis and analysis. Phonological synthesis refers to the awareness that separate sound units may be blended together to form whole words. Phonological analysis refers to the awareness that whole words may be segmented into a set of sound units, including syllables, onset-rimes and phonemes. Both analysis and synthesis skills have been identified as important prerequisites for achieving the goal of early reading skill proficiency and deficits of either and/or both of these skills are typically present in children with reading disabilities. [0007]
  • In addition to these phonological awareness skills, there are two other phonological skills that have been linked to efficient reading ability. These skills are phonetic coding in verbal short term memory and rapid, automatic access to phonological information. Phonetic coding refers to the child's ability to use a speech-sound representation system for efficient storage of verbal information in working memory. The ability to efficiently use phonetic codes to represent verbal information in working memory may be measured by performance on memory span tasks for items with verbal labels. Children with reading problems have been found to perform poorly on memory span tasks for items with verbal labels. Thus, phonetic coding is an important skill for a reader, such as a beginning reader. For a beginning reader, he/she must 1) first decode each sound in the pattern by voicing the appropriate sound for the appropriate symbol; 2) store the appropriate sounds in short term memory while the remainder of the symbols are being sounded out; and 3) blend all of the sounds from memory together to form a word. The efficient phonetic representation in verbal short term memory permits beginning readers to devote less cognitive energy to the storage of sound symbol correspondence thus leaving adequate cognitive resources to blend the sounds together to form the word. [0008]
  • Strong performance by a child on rapid naming tasks, which require rapid and automatic access to phonological information stored in long term memory, is highly predictive of how well a child will learn fluent word identification skills. A reading-disabled child may normally perform much more slowly on these rapid naming tasks than a child with a normal reading skill. The rapid access of phonological information in memory may make the task of assembling word parts together much easier so that reading is easier. [0009]
  • In addition to assessing phonological processing skills that do not require knowledge of print, three other measures of pre-reading and reading skills prove helpful in monitoring a child's growth once reading instruction begins. In particular, the child's knowledge about letters, the child's phonemic decoding skill and the child's fluency of reading should be monitored during the first three grades in order to identify the need for early intervention that will prevent reading problems later on. It is desirable to be able to test these pre-reading and reading skills in order to further determine if a child is at risk. [0010]
  • Returning to the relationship between phonological processing and reading, an individual with good phonological processing skills and good phonological awareness tends to be better able to learn to read. In addition, phonological processing deficits have been identified by researchers as the most probable cause of reading-related learning disabilities. Due to this link, many states have started to mandate phonological awareness training as part of regular classroom reading curricula. At the same time, school personnel are being required to be accountable and take responsibility for the classroom curriculum and the remedial reading services they provide. The problem is that there is no diagnostic tool currently available to help professionals and the school personnel to identify children who are at-risk due to phonological awareness deficit and to help plan, evaluate and document the effectiveness of intervention and instructional methods. [0011]
  • A number of assessment tools are presently available to professionals to measure phonological processing and related skills. These include the Test of Phonological Awareness (TOPA), the Lindamood Auditory Conceptualization Test (LAC), The Phonological Awareness Test (PAT), the Comprehensive Test of Phonological Processing (CTOPP) and a screening measure published in an educational textbook, Phonemic Awareness in Young Children: A Classroom Curriculum. None of these conventional assessment tools are software based and they therefore have limitations. For example, these conventional assessment tools must be manually administered so that the testing is not necessarily standardized since each test giver may give the test in a slightly different manner, which reduces the reliability of the resulting assessment. These manually administered assessment tools also make the scoring, charting and comparison of the test results more difficult. These conventional assessment tests require that a skilled person administer the assessment test. In addition, the number of children who may be tested at any one time is limited to one child for each test administrator. These conventional assessment tests may also cause test anxiety that may cause the test results to inaccurately reflect the child's abilities. Thus, it is desirable to provide a diagnostic system and method for phonological awareness testing that overcomes the above problems and limitations of conventional assessment tests and it is to this end that the present invention is directed. [0012]
  • SUMMARY OF THE INVENTION
  • The diagnostic system and method for evaluating phonological awareness and processing skills and related pre-reading and reading skills in accordance with the invention provides a system for identifying individuals, such as children in kindergarten through second grade, who are likely to experience academic failure due to phonological processing deficits and a lack of phonological awareness. The system may also determine the relative weaknesses and strengths of the individual or group of individuals in different phonological awareness and processing areas or related reading skills in order to help develop appropriate intervention and curriculum activities to improve the weak skills and areas. The system may also track, over time, an individual's development or a group's development of various phonological awareness and processing skills and related reading skills and establish a baseline so that the effectiveness of instructional methods may be evaluated. The system may identify individuals with weak phonological awareness and processing skills and correct those skills before the individual develops a reading problem. In a preferred embodiment, the diagnostic tool may be one or more software applications being executed on a Web server so that the diagnostic tool may be an Internet or World Wide Web (the Web) based tool that provides an easily accessible and affordable screening tool to help parents determine, in the comfort of their own home, if their child is at-risk for academic failure due to phonological awareness and processing deficits. The system may also suggest solutions (training modules that train a particular phonological awareness, phonological processing skill or a related reading or pre-reading skill) for a parent to consider in correcting the phonological awareness and processing deficits. [0013]
  • In more detail, the diagnostic system in accordance with a preferred embodiment of the invention may include one or more software applications that may be stored on a portable medium, such as a CD or a zip disk, or may be stored on a server. The diagnostic system provides various advantages over conventional diagnostic tools. The system permits more standardized administration of the tests that leads to more reliable assessments. The system also permits more efficient, accurate and reliable scoring and tracking of an individual's phonological awareness and processing abilities so that the individual's progress may be determined by comparing the various test results to one another and comparing the results of tests given at different times to each other. The system may be administered by people who do not necessarily understand the intricacies of phonological awareness and processing skills. In addition, the system may be administered simultaneously to a large number of individuals since each child may use a separate computer to complete the tests. Finally, the engaging graphical game format of the tests within the diagnostic system may reduce an individual's test anxiety so that a more accurate test may be conducted. [0014]
  • The diagnostic system may include one or more interactive computer activities that permit the diagnostic system to measure one or more different types of phonological awareness and processing skills, knowledge of sound-symbol correspondences and fluency of decoding and reading. The system in accordance with the invention may also collect risk factor and other relevant data about each individual, assess performance on activities that measure phonological awareness and processing skill, analyze risk factor data and performance data for individuals or groups of individuals, and report those results. In a preferred embodiment, the system may be used for diagnosing phonological awareness and processing skill deficits in a young child. [0015]
  • Thus, in accordance with the invention, a system and method for testing one or more skills associated with the reading skills of an individual is provided. The method comprises presenting one or more stimuli to the individual, each stimulus associated with a test for testing a particular reading or pre-reading skill of the individual, the skills indicating the risk that the individual develops a language-based learning disability. The method further comprises receiving a response from the individual to each stimulus, scoring the user's responses to each test, and recommending, based on the scores of the one or more tests, one or more training modules for improving a reading or pre-reading skill of the individual as indicated by the score of the tests.[0016]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1A is a block diagram illustrating a first embodiment of a computer-based phonological skills diagnostic system in accordance with the invention; [0017]
  • FIG. 1B illustrates a second embodiment of a computer-based phonological skills diagnostic system in accordance with the invention; [0018]
• FIG. 1C illustrates more details of the second embodiment of the computer-based phonological skills diagnostic system in accordance with the invention as shown in FIG. 1B; [0019]
  • FIG. 2 is a diagram illustrating a Web-based server computer that may be a part of the diagnostic system of FIG. 1; [0020]
  • FIG. 2A graphically illustrates a method for determining a particular error of a user of the diagnostic system; [0021]
  • FIG. 2B is a flowchart illustrating a preferred method for identifying a particular deficiency of a user of the diagnostic system; [0022]
  • FIG. 2C illustrates the IF-THEN rule bases used to determine a user's deficient skill areas based on the incorrect answers in particular subtests; [0023]
  • FIG. 2D illustrates an example of one or more subtests of the diagnostic system and the error measure associated with the particular subtest; [0024]
  • FIG. 3 is a diagram illustrating a preferred embodiment of the diagnostic tool of FIG. 2 in accordance with the invention including one or more tests that are used to diagnose a reading problem of a child; [0025]
  • FIG. 4 is a flowchart illustrating filling out a questionnaire in accordance with the invention; [0026]
  • FIG. 5 is a flowchart illustrating a method for testing a child's recognition of rhymes; [0027]
  • FIG. 6 is a diagram illustrating an example of how the child's rhyme recognition ability may be tested in accordance with the invention; [0028]
  • FIG. 7 is a flowchart illustrating a method for testing a child's ability to generate a rhyme; [0029]
  • FIG. 8 is a diagram illustrating an example of how the child's rhyme generation ability may be tested in accordance with the invention; [0030]
  • FIG. 9 is a flowchart illustrating a method for testing the child's ability to distinguish the beginning and ending sounds of a word; [0031]
  • FIG. 10 is a diagram illustrating an example of how the child's ability to discern the beginning and ending of words may be tested in accordance with the invention; [0032]
  • FIG. 11 is a flowchart illustrating a method for testing a child's ability to blend sounds; [0033]
  • FIG. 12 is a diagram illustrating an example of how the child's ability to blend sounds may be tested in accordance with the invention; [0034]
  • FIG. 13 is a flowchart illustrating a method for testing a child's ability to segment sounds; [0035]
  • FIG. 14 is a diagram illustrating an example of how the child's ability to segment sounds may be tested in accordance with the invention; [0036]
  • FIG. 15 is a flowchart illustrating a method for testing a child's ability to manipulate sounds; [0037]
  • FIG. 16 is a diagram illustrating an example of how the child's ability to manipulate sounds may be tested in accordance with the invention; [0038]
  • FIG. 17 is a flowchart illustrating a method for testing a child's ability to recall spoken items in sequential order; [0039]
  • FIG. 18 is a diagram illustrating an example of how the child's ability to recall spoken items in sequential order may be tested in accordance with the invention; [0040]
  • FIG. 19 is a flowchart illustrating a method for testing a child's ability to rapidly name visually-presented items; [0041]
  • FIG. 20 is a diagram illustrating an example of how the child's ability to rapidly name visually-presented items may be tested in accordance with the invention; [0042]
  • FIG. 21 is a flowchart illustrating a method for testing a child's ability to name letters and associate sounds with symbols; [0043]
  • FIG. 22 is a diagram illustrating an example of how a child's ability to name letters and sound/symbol associations may be tested in accordance with the invention; [0044]
  • FIG. 23 is a flowchart illustrating a method for testing a child's ability to decode words; [0045]
  • FIG. 24 is a diagram illustrating an example of how a child's ability to decode words may be tested in accordance with the invention; [0046]
  • FIG. 25 is a flowchart illustrating a method for testing a child's ability for fluent reading; [0047]
  • FIG. 26 is a diagram illustrating an example of how a child's ability for fluent reading may be tested in accordance with the invention; [0048]
• FIG. 27 is a flowchart illustrating the operation of the training module recommender in accordance with the invention; [0049]
  • FIG. 28 illustrates an example of a report that is generated by the computer-based phonological skills diagnostic system in accordance with the invention; [0050]
  • FIG. 29 illustrates an example of a test section selection drop down menu in accordance with the invention; and [0051]
  • FIG. 30 illustrates an example of a data graph selection drop down menu in accordance with the invention.[0052]
  • DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT
  • The invention is particularly applicable to a World Wide Web (Web) based diagnostic system for determining a child's phonological awareness and processing skills and reading skills and it is in this context that the invention will be described. It will be appreciated, however, that the system and method in accordance with the invention has greater utility since it may be implemented on other types of computer systems, such as the Internet, a local area network, a wide area network or any other type of computer network. The system may also be used to test a variety of other individuals, such as illiterate and mentally disabled people, individuals whose native language is not English who are learning to read, and adolescents and adults who read poorly and wish to improve their reading skills. [0053]
  • FIG. 1A is a block diagram illustrating a first embodiment of a computer-based phonological skills [0054] diagnostic system 50 in accordance with the invention. In this embodiment, the diagnostic system 50 may include a server 52 and one or more client computers 54 (Client #1-Client #N) connected together by a communications network 56, that may be the Internet, the World Wide Web (the Web), a local area network, a wide area network or any other type of communications network. In the embodiment shown, the communications network is the Web and a typical Web communications protocol, such as the hypertext transfer protocol (HTTP), may be used for communications between the server and the client computer. In particular, the server may download one or more Web pages to each client computer and each client computer may send responses back to the server.
  • The server may further comprise a central processing unit (CPU) [0055] 58, a memory 60, a database (DB) 62, a persistent storage device 64 and a diagnostic tool 66. In a preferred embodiment, the diagnostic tool may be one or more software applications (testing different phonological awareness and processing skills or reading skills) stored in the persistent storage of the server that may be downloaded into the memory 60 (as shown in FIG. 1A) so that the diagnostic tool may be executed by the CPU 58 of the server. In the preferred Web-based embodiment, the DB 62 or persistent storage device 64 may store one or more Web pages associated with the diagnostic tool 66. The Web pages may be downloaded to each client computer when the client computer requests the particular Web page. The server may also include the necessary hardware and software to accept requests from one or more client computers. In the preferred embodiment, the Web pages may be communicated to the one or more client computers using the HTTP protocol and the client computers may send data back to the server, such as test responses, using the same protocol.
  • Each client computer [0056] 54 (Client #N will be described herein, but it should be realized that each client computer is substantially similar) may be used by an individual user, such as a parent of a child or a test administrator, to access the diagnostic tool stored on the server. Each client computer 54 may include a central processing unit (CPU) 70, a memory 72, a persistent storage device 74 such as a hard disk drive, a tape drive, an optical drive or the like, an input device 76 such as a keyboard, a mouse, a joystick, a speech recognition microphone or the like, and an output device 78 such as a typical cathode ray tube, a flat panel display, a printer for generating a printed report or the like. Each client computer may also include a browser application 80 that may be stored in the persistent storage device and downloaded to the memory 72 as shown in the figure. The browser application may be executed by the CPU 70 and may permit the user of the client computer to interact with the Web pages being downloaded from the server 52. In this system, multiple client computers may establish simultaneous communications sessions with the server and each client computer may be downloading Web pages from the server. The system 50 thus permits multiple client computers to access the diagnostic tool 66 stored on the server so that the user of each client computer may take advantage of the benefits of the diagnostic tool.
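As an illustration of the request/response pattern described above, and not the patented implementation, a minimal HTTP exchange between a client computer and the server might look like the following sketch; the handler name, port and page content are placeholders invented for the example.

```python
# Minimal sketch (illustrative only): the server returns a test page over HTTP
# and accepts a posted response from the client browser for later scoring.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

class DiagnosticHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve a placeholder test page to the client browser.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>Rhyme Recognition item 1</body></html>")

    def do_POST(self):
        # Receive the child's response (e.g., item id and answer) sent back to the server.
        length = int(self.headers.get("Content-Length", 0))
        fields = parse_qs(self.rfile.read(length).decode())
        print("received response:", fields)
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), DiagnosticHandler).serve_forever()
```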
• As described below in more detail, the diagnostic tool may include one or more different tools that test various phonological awareness or processing skills as well as reading skills so that a child's proficiency at phonological awareness and processing skills and reading skills may be determined. The [0057] diagnostic tool 66 may also use a child's scores on the one or more tools in order to recommend to the user of the client computer (e.g., the parent of the child) which training tools the parent may consider downloading to help the child with any deficiencies. These training tools may also be stored in the persistent storage device 64 connected to the server so that the user may then download the training tool from the server as well. The training tools are described in more detail in co-pending U.S. patent application Ser. Nos. 09/039,194 and 60/103,354, filed Mar. 13, 1998 and Oct. 7, 1998, respectively, that are incorporated herein by reference and owned by the same assignee as the present application. The incorporated applications also describe the different sound unit types, syllable types and phoneme types that may be tested using the diagnostic system since these types of sound units, syllables and phonemes are similar to the types of sound units, syllables and phonemes used in the training tools.
• In another embodiment of the invention, an assessment tool software application, such as a Windows .exe file for example, may be downloaded from the server to the client computer. The assessment tool software application may then be executed by the [0058] CPU 70 of the client computer. The assessment tool may then generate the graphical screens that test the user's different skills and may store the information/scores about the tests locally in the client computer. Then, during the assessment testing or after the assessment tool execution has been completed, the scores for the user may be uploaded back to the server computer. Now, a second embodiment of the computer-based phonological skills diagnostic system in accordance with the invention will be described.
• FIG. 1B illustrates an example of a second embodiment of a computer-based phonological skills [0059] diagnostic system 50 in accordance with the invention. In this embodiment of the invention, there may be the server 52 (whose elements and functions are described above and will not be described herein) that is connected via the communications network 56 to one or more clients as above. In this embodiment, each client may be a teacher computer system 84 (Teacher Station 1, Teacher Station 2, . . . , Teacher Station N), such as a server computer, a local area network server computer or a personal computer that is connected to a network, that is connected to the server 52 over the communications network 56. The teacher station may have similar elements to the clients shown in FIG. 1A and like elements have like reference numerals and will not be described here. To implement the diagnostic system in accordance with the invention, the CPU 70 of the teacher station may execute a diagnostic tool module 85 (that may be one or more pieces of software or one or more software applications) wherein the diagnostic tool module 85 resides in the memory 72 as shown. The teacher station 84 may be connected to and may control a computer network 86, such as an internal computer network within a school or a computer network within a school district, etc. The computer network 86 may be connected to one or more student computers 87 (Student 1, Student 2, . . . , Student N) wherein each student computer may be a computing device with sufficient resources to implement the diagnostic testing in accordance with the invention. For example, each student computer 87 may be a typical personal computer and may have the elements of the clients 54 shown in FIG. 1A or it may be a personal digital assistant.
  • In this embodiment, the diagnostic tool may be downloaded to the [0060] teacher station 84 from the server 52 when the particular school or school district purchases a license to the diagnostic tool. The teacher station may execute the diagnostic tool and control the operation of the student computers 87 to implement the diagnostic testing. This embodiment of the invention may be used, for example, to permit the teacher station (a LAN server) to monitor and control the diagnostic testing when the diagnostic tool is being used by multiple users in a school or other setting. More details of this embodiment of the invention will now be described.
• FIG. 1C illustrates more details of an example of the second embodiment of the computer-based phonological skills diagnostic system in accordance with the invention as shown in FIG. 1B. In particular, the [0061] teacher station 84, the computer network 86, such as a local area network, and the one or more student computers 87 are shown and described in more detail. In operation, a school purchases the program (or the school district purchases the program and assigns the program to a school) and the school is given a User ID and password for access. The school may then download the program from the server 52 onto the school's LAN server 84 (teacher station). The teacher station performs the function of communicating with the server 52 (not shown) in order to, for example, download the program and send back students' test results. The teacher station may also communicate with the one or more student computers 87 in order to, for example, monitor students' test progress, control the start, volume, pause, resume and exit functions for all of the students and/or any individual student, and collect students' testing data. In this example of the embodiment, the testing environment presumes a networked environment with Internet access and the Xtranet Xtra installed. The Xtranet Xtra facilitates messaging between networked machines. The teacher/administrator would have an administrative version of the Testing Module.
• On the [0062] teacher station 84, the classroom teachers/test administrator may register each student who will take the test and generate a classroom layout to assign students to particular student computers 87. The teacher station may also permit the classroom teacher/test administrator to generate a layout for multiple different classes. As shown in FIG. 1C, the teacher station may display one or more icons 88 wherein each student's computer is numbered. In a preferred embodiment, the icons are shown in a seating chart arrangement so that the teacher can easily determine which student is represented by which icon. Each icon may be one or more predetermined colors wherein each color indicates a particular status of the testing for the student using that computer. For example, a green colored icon may indicate an ongoing test, a yellow colored icon may indicate a paused test and a red colored flashing icon may indicate that help is needed. To view additional information about a particular student, the administrator may click on the icon that represents the student's computer and be presented in the student information area 89 with additional information about the particular student, such as the student's name, age, grade, type of test he/she is taking, and the progress of the test (e.g., "Rhyme Recognition 8" which is test item 8 of the Rhyme Recognition test section).
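A minimal sketch of the icon status coloring described above; the status names and the flashing flag are assumptions made for the example.

```python
# Illustrative only: map a student computer's testing status to the icon color
# described above (green = ongoing test, yellow = paused, red flashing = help needed).
STATUS_COLORS = {
    "in_progress": ("green", False),   # (color, flashing)
    "paused":      ("yellow", False),
    "help_needed": ("red", True),
}

def icon_for(status):
    """Return the (color, flashing) pair for a student computer's status."""
    return STATUS_COLORS.get(status, ("gray", False))

print(icon_for("paused"))  # ('yellow', False)
```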
  • In addition, an interface may be displayed that shows 1) how many tests are currently available and what type of tests can be assigned to each student (since the school may purchase a license to a particular number of tests at any one time); and 2) how many tests are currently in process and what kind of tests have already been assigned. In a preferred embodiment, a student can be assigned to more than one test. [0063]
• The teacher station user interface may further include an activated [0064] student information area 89 wherein the information for a particular student, selected by the administrator/teacher by clicking on the student's icon as described above, is shown. This area 89 may further include one or more buttons 90 that permit the administrator to control the testing of the individually selected student. The user interface may further include a second area 91 wherein the testing status is shown. For example, the area may indicate a failed connection with the student computer or server 52, a completed test and data being sent (or data already sent) to the server 52. This area 91 may also include one or more buttons 92 that permit the administrator/teacher to control the testing of all of the students' computers at the same time. Now, the process of registration and access using this embodiment of the invention will be described.
  • To use the above embodiment of the system with a group of students, such as in a school or school district, it is necessary to register. In particular, there may be two kinds of registration including individual registration and institutional registration. A preferred embodiment of each type of registration will now be described. [0065]
  • Individual Registration [0066]
  • An “Individual” is defined as an online client wanting to purchase one or a number of Single Test packages for immediate use. First, an individual registers by completing an Individual Registration form wherein the individual assigns to herself a username and password (as well as a hint, should she forget her password). Upon submission of a valid Individual Registration form, a record is created in the Account table on the [0067] server 52 and the individual is assigned a unique account_id. The individual who creates the account is known as the Account Manager, and has responsibilities and access for the account. Next, a record is created in the Pswd table on the server 52 and stamped with the account_id and assigned the default access level of “Individual”.
• The individual may now purchase one or more test packages. Next, the individual selects a Single Test package appropriate for a child (e.g., Package "[0068] 1A") and a record is created in the Order table and assigned a unique order_id and stamped with the account_id. A record is also created in the Order_Item table. The order item record is assigned a unique order_item_id and stamped with the account_id, order_id and package_id. Each order item is assigned a unique order_item_id and stamped with the account_id and order_id. Now, the individual must complete and submit a Student Registration form for each child, which assigns the test to the particular child. Then, a record is created in the Student table on the server 52 and the child is assigned a unique student_id. The child's record is stamped with the account_id and order_item_id. The individual may now repeat the process, selecting additional Single Test packages and assigning one child to each package. The ordering process has now been completed.
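As an illustration only, the record creation and id "stamping" just described might be modeled as follows; the table and field names follow the text, while the in-memory tables and the id counter are assumptions made for the example.

```python
# Illustrative sketch, not the disclosed database layer: each new record gets a
# unique id named after its table and is "stamped" with the ids of the records
# it belongs to (account_id, order_id, order_item_id).
import itertools

_ids = itertools.count(1)
db = {"Account": [], "Pswd": [], "Order": [], "Order_Item": [], "Student": []}

def create(table, **fields):
    # e.g., a record in the Order_Item table is given a unique order_item_id.
    record = {f"{table.lower()}_id": next(_ids), **fields}
    db[table].append(record)
    return record

account = create("Account", username="parent01")                        # the Account Manager
create("Pswd", account_id=account["account_id"], access="Individual")   # default access level
order = create("Order", account_id=account["account_id"], validated=False)
item = create("Order_Item", account_id=account["account_id"],
              order_id=order["order_id"], package_id="1A")
create("Student", account_id=account["account_id"],
       order_item_id=item["order_item_id"], name="Child A")
```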
• The order is then validated by a third party, such as CyberCash or RediCash. If validation succeeds, the validated field in the Order table on the [0069] server 52 is marked TRUE and records are created in the Usage table with one record for each test. In particular, each Usage record is assigned a unique usage_id and stamped with the account_id, order_id and order_item_id. If validation fails, the individual is notified and all records bearing both the account_id and order_id in the Order, Order_Item and Student tables are deleted. Now, the institutional registration process will be described.
  • Institutional Registration [0070]
• An "Institution" is defined as a public or private school or other educational or child care institution wanting to purchase Single Test or 35-Test packages for use by a school district or school. A "School" is defined as any school within a school district, or any single institutional element such as a parochial or private school, a day care center, or a commercial learning center. A public "School District" is any school district listed by the National Center for Education Statistics. A public school is any school listed by the National Center for Education Statistics and associated with a school district. An "Account Manager" is any individual who registers the account, orders and accepts responsibility for payment. The account manager has access to school-district level data if he/she purchases packages for a school district. The account manager is responsible for assigning test packages to schools and lead teachers within the school district. The account manager may assign himself as a lead teacher and the institution of record as the School (as is the case of a single school). A "Lead Teacher" is responsible for school packages and assigns packages to classroom teachers. A classroom teacher is a test administrator and monitors the actual testing. The classroom teacher is given access by the lead teacher to register students so that they may take the test. The lead teacher has access to school level data and the classroom teacher has access to class level data. The system may impose certain constraints on the diagnostic tool, such as 1) test packages purchased by a school district may only be distributed within the district; and 2) one test package must be assigned to only one school; i.e., students at different schools may not share one test package. [0071]
• To better understand the differences between the account manager, the lead teacher and the classroom teacher, an example of the diagnostic tool registration and testing process that involves all three different people will now be described. The account manager, the person who purchases the packages for the school district, is responsible for assigning packages to schools and a lead teacher for each school. The lead teacher assigned to a school is responsible for assigning packages to classroom teachers. The classroom teachers are responsible for registering students and administering the test. Later, after the test, the classroom teacher has access only to view his/her own classes' students' test results, although it is possible for two teachers to share one package. For example, for a package for 35 students, Mr. L ([0072] class 1 teacher) was assigned 20 and Ms. D (class 2 teacher) was assigned 15. They can each test students at the same time using this same package, but Mr. L can only assign his own 20 students and view his own 20 students' test results; and Ms. D can only assign her own 15 students and view her own 15 students' test results. The lead teacher who was assigned to a school has access at the school level to view his/her own school's students' test results; and the account manager, who represents the school district, has access at the school district level to view his/her own school district's students' test results. Now, the registration process will be described in more detail.
  • To start the registration process, an institution registers by completing an Institution Registration form. Upon submission of a valid Institution Registration form, a record is created in the Account table in the [0073] server 52 and the account manager is assigned a unique account_id. The account manager has responsibilities and access for the account. The Institutional Registration form requires that an institution specify a public school district if it wishes to distribute its packages among schools within the district. Or, conversely, the institution may register as a single school, in which case all the packages it purchases must be used within that school. The account manager who submits the registration assigns to herself a username and password (as well as a hint, should she forget her password). A record is then created in the Pswd table on the server 52 and stamped with the account_id and assigned the default access level “Institution”. The “Institution” level allows access to data as described above.
  • School District Registration [0074]
• If the form identifies the account as a "School District" account, a record is created in the School_District table in the [0075] server 52 with a unique school_district_id and the record is stamped with the account_id. Optionally, the account manager may create records in the Region table, with unique region_ids. These records are stamped with the account_id and school_district_id.
  • Single School Registration [0076]
  • If the form identifies the account as a “School” (i.e., a single institution), a record is created in the School table with a unique school_id and the record is stamped with the account_id. Optionally, the account manager may create a record in the School_District table to which the school belongs, with a unique school_district_id and the record is stamped with the account_id. Optionally, the account manager may create a record in the Region table, with unique region_ids. These records are stamped with the account_id and school_district_id. [0077]
• Once the above registration is completed, the institution may purchase the tests. In particular, the account manager may now purchase test packages. The account manager selects a test package, enters the package quantity and adds the selection to her "shopping cart". The account manager may select additional items, specify the quantity and add them to the "shopping cart." The account manager may then submit the order. [0078]
• The order is then validated by a third party, such as CyberCash or RediCash. If validation succeeds, the validated field in the Order table is marked TRUE and records are created in the Usage table with one record for each test. In more detail, each Usage record is assigned a unique usage_id and stamped with the account_id, order_id and order_item_id. A record is also created in the Order_Item table for each order item; each order item record is assigned a unique order_item_id and stamped with the account_id, order_id and package_id. If validation fails, the individual is notified. All records bearing both the account_id and order_id in the School, School_District and Region tables are deleted if validation fails. [0079]
• After validation, the account manager must now assign packages to schools and lead teachers. In particular, if the account is identified as type "School District", the account manager completes and submits a School Registration form for each school. (The system may have NCES databases on the server for the account manager to select school districts and/or schools). A record is created in the School table and assigned a unique school_id. The record is stamped with the account_id and school_district_id. Optionally, the school may further be identified as part of a "Region". The account manager may now assign packages to a school or schools. An interface will inform the account manager of packages that are available to assign, which packages have been assigned and to what school. When a package is assigned, records are created in the Usage table, the number of records corresponding with the number of tests in the package. Each record is given a unique usage_id and stamped with the account_id, order_id, order_item_id, school_district_id and school_id. [0080]
• The account manager may now assign lead teachers to school level access. The account manager may assign access to more than one lead teacher at each school, or assign access to one lead teacher at more than one school. The lead teacher has school level access to test data. The account manager is responsible for communicating the Username and Password to assigned lead teachers. The lead teacher may assign classroom teachers to class level access. The lead teacher is responsible for communicating the Username and Password to assigned classroom teachers. Now, the process for testing students in accordance with the invention will be described in more detail. [0081]
  • Registering Students [0082]
• Teachers, or account managers acting as "Teachers", may assign classroom teachers, and classroom teachers may register students at any time after an order is validated. A "Class" is any arbitrary group designation for students taking a test (e.g., "Mr. Busy's Kindergarten"). A teacher may first define a class wherein a "Class" is defined by a class name unique to the school and given a unique class_id. The class record is stamped with the teacher_id and school_id. [0083]
• Classroom teachers must complete a Student Registration form for each student. An interface will show how many and what kind of tests are available to assign, and how many and what kind of tests have been assigned. The form will allow more than one test to be assigned to a student. The student is assigned to a class. When the form is submitted, a record is created in the Student table and assigned a unique student_id. The record is stamped with the account_id, school_district_id, school_id, package_id and class_id. [0084]
  • Administrating Tests [0085]
• The testing environment presumes a networked environment with Internet access and the Xtranet Xtra installed. The Xtranet Xtra facilitates messaging between networked machines. The classroom teacher/test administrator would have an administrative version of the Testing Module. The classroom teacher must log in to access the module. When the classroom teacher accesses the Test administration area, he is presented with a Seating Chart of student computers that are in communication with the administrative computer via Xtranet. The classroom teacher is also presented with a list of registered students. The classroom teacher begins a testing session by assigning students to a computer. Each "Desk" on the seating chart, when clicked, displays the student's name, age, grade, type of test, and the progress of the test in the [0086] student information area 89. The classroom teacher will have control over start, volume, pause, resume, and exit functions for all the students or at each Desk. The testing status information indicated in the area 91 includes whether 1) the diagnostic tool application is open; 2) a connection to the server 52 is tested and/or active; 3) the student diagnostic test on each student computer has started, paused, or completed; and 4) the test data for a particular diagnostic student test from a particular student computer has been sent to the server 52.
• The Test Application on the Student's machine messages the [0087] server 52 via the Teacher's machine, and the server 52 returns data to the Application via HTTP. (This happens transparently within the Application). The Application, before it reaches the Access screen, will test its connection to the server 52. If the connection fails, the Application will not proceed. The classroom teacher is notified of the result of the connection test. When the student begins the test (e.g., the student presses the "Yes" button on the test module access screen), the testing record is marked as "completed". The testing record is retrieved from the Usage table by student_id and order_item_id and "completed" is marked TRUE. This, in effect, debits the test holdings of the respective account.
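A short sketch, under assumed data structures, of the "debit" step just described: the matching Usage record is located by student_id and order_item_id and its completed flag is marked TRUE, which reduces the account's remaining test holdings by one. The helper name and record fields are assumptions made for the example.

```python
# Illustrative only: mark one unused Usage record "completed" when the student
# presses the "Yes" button on the access screen.
def begin_test(usage_table, student_id, order_item_id):
    for record in usage_table:
        if (record.get("student_id") == student_id
                and record["order_item_id"] == order_item_id
                and not record["completed"]):
            record["completed"] = True   # debits one test from the account
            return record
    raise LookupError("no unused test is available for this student")

usage_table = [{"usage_id": 1, "student_id": 7, "order_item_id": 3, "completed": False}]
begin_test(usage_table, student_id=7, order_item_id=3)
```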
  • Taking the Test [0088]
• When the test starts, the Student's Test Application will request a list of test stimuli and their resources and commence downloading those resources from the [0089] server 52. After a student has taken a test, most resources will already be cached locally, and the test may proceed with minimal downloads. The test will proceed even in the event of student timeouts due to inactivity. As the student answers the test items, data is collected. At the conclusion of the test, that data is written to a temporary HTML page, which is then sent as a form to the server 52. The Score table at the server 52 is updated with this form data. A test is concluded when the student answers the final test question OR when the classroom teacher clicks the EXIT button for the Student. In the preferred embodiment, no student or student score data will be held locally. The Teacher's machine will look for unsent files on student machines and attempt to resend them at a later time in the instance where a test is completed but the HTTP transmission fails.
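The upload-and-retry behavior described above might be sketched as follows; the URL, form field names and retry queue are assumptions, not part of the original disclosure.

```python
# Illustrative sketch: post collected test data to the server as an HTTP form
# and queue anything that could not be sent so it can be resent at a later time.
from urllib import request, parse, error

unsent = []  # responses the Teacher's machine will try to resend later

def send_scores(server_url, student_id, answers):
    form = parse.urlencode({"student_id": student_id, "answers": ",".join(answers)})
    try:
        request.urlopen(request.Request(server_url, data=form.encode()), timeout=10)
    except error.URLError:
        unsent.append((server_url, student_id, answers))  # resend at a later time
```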
  • Viewing Data [0090]
  • Test performance data (graphs and tables) will be displayed by an applet embedded within a Web page. The test performance data is username/password protected. An HTML page will send a find request in the form of a Transact-SQL statement to the test result database which returns a record set. The record set will be formatted for display by the embedded applet. [0091]
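As an illustration, the "find request" described above can be expressed as a SQL query against a stand-in database; the Score table columns and the class_id filter are assumptions (the disclosed system sends a Transact-SQL statement to the test result database and formats the returned record set for display).

```python
# Illustrative only: sqlite3 stands in for the actual test result database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Score (student_id, class_id, subtest, score)")
conn.execute("INSERT INTO Score VALUES (7, 'K-1', 'Rhyme Recognition', 80)")

def find_class_scores(conn, class_id):
    sql = "SELECT student_id, subtest, score FROM Score WHERE class_id = ?"
    return conn.execute(sql, (class_id,)).fetchall()   # the returned record set

print(find_class_scores(conn, "K-1"))  # [(7, 'Rhyme Recognition', 80)]
```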
  • Individual Accounts [0092]
  • Individuals may view data by entering their username/password. Individuals will be able to view data for students who they have registered as set forth in more detail in Appendix A. [0093]
  • Institutional Accounts [0094]
• Account managers may view data by using their username/password. Account managers may view and print data at the highest level of their access, typically at the School District Level. This entitles them to view individual and summary data by District, Region, School, Class and Student as set forth in more detail in Appendix A. Lead teachers may view data by using their username/password. Lead teachers may view and print data at the highest level of their access: the School Access Level. This entitles them to view individual and summary data by school, class and student. Classroom teachers may view and print data at a Class Access Level using their username/password. This entitles them to view individual and summary data by class and student. The details of the data reporting feature of the diagnostic system in accordance with the invention will be described in more detail below with reference to FIG. 28 and Appendix A. Now, more details of the Web-based diagnostic system will be described. [0095]
• FIG. 2 is a diagram illustrating the Web-based [0096] server computer 52 that may be a part of the diagnostic system of FIGS. 1A, 1B and 1C. The server 52 may include the CPU 58, the memory 60, the DB 62, the persistent storage device 64 and the diagnostic tool 66. The diagnostic tool may further comprise a user interface (UI) 100, a test section 102, a scorer 104, an administrator 106, a recommender 108 and a motivator module 109. The user interface may download the Web pages to each client computer as the Web pages are requested and receive the responses back from the client computers. The test section 102 may contain links to one or more different diagnostic tests (stored in the persistent storage or the DB) that may be used to determine a child's proficiency at a particular phonological awareness skill or reading skill as described in more detail below. Each test may have the child play a graphical game in which some skill of the child is being tested without the child knowing that a test is being performed. This type of game-based testing may reduce the child's anxiety about taking a test. The child may interact with each test and respond to the test with responses. In accordance with a preferred embodiment of the invention, the user/student taking the tests in the assessment tool does not see the scores of the tests since those scores are only provided to the teacher or parent of the user. Those responses are uploaded to the server and gathered by the scorer 104. The scorer may accumulate the total score for each test and then store the score in the DB 62. Since the scores from the tests are automatically gathered and stored by the scorer into the DB, the system helps to generate accurate scores, permits the scores from different children to be compared to each other and permits a child's progress to be tracked based on the changing scores of the child over time. An example of the report generated by the scorer in accordance with the invention is described below with reference to FIG. 28 and Appendix A. The scorer 104 may also include statistical analysis mechanisms for determining various statistics about the scores of one or more children using the diagnostic tool.
• The [0097] administrator 106 may perform various administrative actions such as monitoring the use of the diagnostic tool, billing the users (if appropriate) and the like. The recommender 108 may use the scores and statistical information generated by the scorer, if requested by the user of the client computer, to recommend one or more training tools that may be used by the child taking the tests on the particular client computer in order to improve the child's ability in any deficient areas. For example, the scores may indicate that the child has weak/below-average rhyme recognition skills and the recommender may recommend that the child play the rhyme recognizer training tool in order to boost the child's rhyme recognition abilities. The parent may then download the training tool from the system. The recommender permits a parent of the child, who has no experience or knowledge about reading disorders or phonological awareness and processing deficits, to have their child tested for these deficits at home and then have the system automatically recommend a training tool that may help the child improve in any deficient areas. In particular, the recommender may be, in a preferred embodiment, one or more pieces of code that analyze the incorrect responses to one or more different subtests in order to determine the skill areas of a particular user that are deficient so that a training module that trains that particular deficient skill area can be recommended to the user of the diagnostic system. The recommendation module in accordance with the invention will now be described in more detail with reference to FIGS. 2A-2D.
• FIG. 2A graphically illustrates a [0098] method 800 for determining a particular phonological error of a user that is using the diagnostic system. For each subtest, such as the Rhyme Recognition subtest, the Rhyme Generation subtest, the Beginning and Ending Sound subtest, the Blending subtest, the Segmentation subtest, the Manipulating subtest, etc., shown in FIG. 2A, the diagnostic system stores the incorrect responses to each question. For example, as shown for the Rhyme Recognition subtest, there may be three incorrect responses for test items 2, 3, and 6 wherein each test item tests a different aspect of the rhyme recognition skills. As shown, the incorrect responses are sorted by the type of error that is likely occurring based on the particular incorrect response wherein those differences are shown graphically in FIG. 2A, but are stored digitally in a database in the preferred embodiment. In this example, two of the incorrect responses indicate the same type of error (for example, an open syllable rime error) and one indicates a different type of error (for example, an r-controlled vowel rime error). In this manner, the data about the particular incorrect responses by the user stored in the database are mapped into the types of errors that are shown by the particular incorrect answer. The particular preferred software-based method for determining the particular type of error based on the answers from a user to all of the subtests will now be described with reference to FIG. 2B.
• FIG. 2B is a flowchart illustrating a [0099] preferred method 810 for determining a particular deficiency of a user of the diagnostic system. To understand the flowchart shown in FIG. 2B, there may be one or more indexes (i, j, l) that are used to indicate each subtest (ST)i, wherein i = 1, . . . , k; the incorrect responses for each subtest item (IR)ij, wherein ij = 11, 12, . . . , 1max, . . . , k1, k2, . . . , kmax (j = 1, . . . , max); and the error measures (EM)il, wherein il = 11, 12, . . . , 1max, . . . , k1, k2, . . . , kmax (l = 1, . . . , max). In particular, in steps 812, 814 and 816, the indexes are set to one to begin the analysis process. These indexes are then incremented as described below to analyze each incorrect response for each subtest wherein each incorrect response is compared to each error measure to determine the type of error.
  • In [0100] step 818, the first incorrect response, IR11, for the first subtest, ST1, is compared to the first error measure, EM11, to determine if the incorrect response is consistent with the first error measure. Each error measure is intended to compare a particular incorrect answer with a particular type of error as described in more detail below with reference to FIG. 2D. In step 820, the method determines if a type of error is identified (e.g., does the incorrect response indicate that the particular type of problem identified by the particular error measure is present for the particular user). If an error is identified based on the error measure, the error is labeled in step 822 and then stored in the database in step 824 for the particular user. Since there is only one error measure that matches each incorrect answer, the method will drop down to step 830 to analyze the next incorrect response against all of the error measures.
• If an error is not identified, then the method determines if index l is at its maximum (e.g., if all of the error measures have been analyzed) in [0101] step 826. If l is not at its maximum value (e.g., there are other error measures that need to be compared to the first incorrect answer for the first subtest), then l is incremented in step 828 (to compare the next error measure to the first incorrect answer to the first subtest) and the method loops back to step 818 to compare the next error measure to the first incorrect answer for the first subtest. Thus, using the loop containing steps 818, 820, 826 and 828, each error measure is compared to the first incorrect answer for the first subtest.
• Once each error measure is compared to the first incorrect response for the first subtest (e.g., l=max), the method determines if all of the incorrect responses (j=max) have been analyzed in [0102] step 830. If all of the incorrect responses have not been analyzed, then the method loops in step 832 to increment j (to analyze the next incorrect response) and loops back to step 816 to reset l=1 so that the next incorrect response is compared to all of the error measures. Again, the loop 816, 818, 820, 826, 828, 830, 832 compares each incorrect response for a particular subtest to each error measure and identifies any matching error measures. Once all of the incorrect responses (e.g., j=max) for a particular subtest have been analyzed, the method proceeds to step 834 in which the method determines if all of the subtests (e.g., i=k) have been analyzed. If all of the subtests have not been analyzed, then the index i is incremented in step 836 to analyze the next subtest and the method loops back to step 814 to then analyze each incorrect response for the particular subtest by comparing each incorrect response to each error measure. Again, the loop 814, 816, 818, 820, 826, 828, 830, 832, 834 and 836 compares each incorrect response for each subtest to each error measure.
• In summary, the input [0103] (IR)11 (incorrect response 1 of subtest 1) is provided and compared to (EM)11 (error measure 1 of subtest 1, for example, open syllable rime). If the error is identified, label the error and store it in the database: Error Storage. If the error is not identified, continue comparing this incorrect response with the remaining error measures until the error is identified. Next, input (IR)12 (incorrect response 2 of subtest 1) and repeat the comparison and labeling steps to identify the error. When all the incorrect responses from subtest 1 are compared and errors are identified, labeled, and stored, input the incorrect responses of subtest 2 one by one and compare them with the error measures for subtest 2 as was done for subtest 1. Continue doing this until all the incorrect responses from all the subtests are compared and labeled, and the errors are stored in the database. Thus, the method in accordance with the invention compares each incorrect response for each subtest to each error measure to generate a database containing all of the errors that are identified for a particular user. Now, more details of the error measures and the comparison of the error measures to the incorrect responses will be described.
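Before turning to those details, the comparison loop just described can be captured in a short illustrative sketch; the dictionary-based data structures and the matches() predicate below are assumptions made for the example, not the disclosed implementation.

```python
# Illustrative only: every incorrect response of every subtest is compared
# against that subtest's error measures, and the first matching error measure
# is labeled and stored (the "Error Storage" database).
def identify_errors(subtests, error_measures, matches):
    """subtests: {subtest_id: [incorrect responses]};
    error_measures: {subtest_id: [error measures]};
    matches(response, measure) -> bool stands in for the per-item comparison."""
    error_storage = []
    for subtest_id, responses in subtests.items():      # index i over subtests
        for response in responses:                      # index j over incorrect responses
            for measure in error_measures[subtest_id]:  # index l over error measures
                if matches(response, measure):
                    error_storage.append((subtest_id, response, measure))  # label and store
                    break                               # only one measure matches a response
    return error_storage

# Hypothetical usage: one incorrect response in subtest 3 matching measure "3.2".
found = identify_errors({3: ["item 6 answer"]}, {3: ["3.1", "3.2"]},
                        matches=lambda response, measure: measure == "3.2")
```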
  • FIG. 2C visually illustrates an example of the IF-THEN rules used to determine a user's deficient skill areas based on the incorrect answers in particular subtests and FIG. 2D illustrates an example of one or more subtests of the diagnostic system and the error measure associated with the particular subtest. With reference to FIG. 2C, the circled numbers illustrate the code of an error measure for a particular subtest (shown in more detail in FIG. 2D) and the lines illustrate the connections of all elements for a particular rule that indicates a particular skill deficiency. With reference to FIG. 2D, the table illustrates one or more subtests, its associated error measure identification number (ID) and the actual error measure described. Thus, for example, for the Beginning and Ending Sounds subtest, the second error measure identification is “2” and the actual error measure is that the user does not recognize /f/ when it is at the end following an /i/ sound. Other examples of the error measures for different subtests are also shown. [0104]
• In accordance with the invention, there may be a plurality of error measures that are compared to each incorrect response by the user for each subtest to determine the type of user error that is indicated by the particular incorrect answer. For example, as shown in FIG. 2C, each subtest may have one or more different error measures wherein the error measures are described in more detail in FIG. 2D. Then, once the error measure for each subtest is identified, it is stored in the database. Then, one or more skill deficiencies of the user are determined based on the stored error measures. In particular, the database may include one or more rules that identify different skill deficiencies. Each rule may reach a conclusion about a particular skill deficiency based on one or more error measures. For example, a single error measure (based on a single incorrect answer) may indicate a particular skill deficiency or a combination of error measures (based on more than one incorrect answer) may indicate a skill deficiency. Thus, the recommender is capable of diagnosing skill deficiencies in a user in this manner. Several examples that illustrate the rules set forth in FIGS. 2C (using graphics) and [0105] 2D (using text) will now be described.
• FIG. 2C graphically illustrates three examples of rules in the recommendation module that indicate three different skill deficiencies. These examples, however, are merely illustrative and there may be a very large number of actual skill deficiency rules. FIG. 2D illustrates the error measures that are being used in the rule examples shown in FIG. 2C. In FIG. 2C, the first rule (Rule 1) is indicated by a dashed line (- - - -), the second rule (Rule 2) is indicated by a solid line (------) and the third rule (Rule 3) is indicated by a broken dashed line (--- - - ---). Thus, FIG. 2C illustrates the combination of error measures that must be true for a particular user (indicating particular incorrect answers of the user) that in turn indicate a particular skill deficiency. Each example of a rule will now be provided in text below (and shown graphically in FIG. 2C) and then a more in-depth explanation of the first rule only is provided since it is assumed that the second and third rules will be understood once the first rule is explained. [0106]
  • Rule 1: If error measure [0107] 3.2 is true (e.g., an incorrect response in the Beginning and Ending Sounds subtest (subtest 3) that matches error measure 2 in FIG. 2D) and error measure 4.3 is true, and error measures 5.4, 6.4, 7.3, 9.4 and 10.2 are true, then the skill deficiency is the /f/ sound.
  • Rule 2: If error measures [0108] 3.2, 4.3, 5.4, 6.4, 7.3 and 10.2 are true, then the deficiency is the /f/ sound at the end of a word.
• Rule 3: If error measures [0109] 4.3, 6.4, 7.3 and 10.2 are true, then the deficiency is the /f/ sound at the end of a word following an /e/ sound or another consonant.
• In more detail, the first rule generally determines if the user has a problem understanding the /f/ sound in a word while the second and third rules determine if a particular location of the /f/ sound in a word is a problem. To analyze these rules, the database has stored the incorrect answers of the user along with the error measures that correspond to the incorrect responses. Then, each rule is compared to the error measures stored in the database that are true for the particular user (each indicating a particular incorrect response to a particular subtest) in order to diagnose any skill deficiency areas. Thus, a deficiency in understanding the /f/ sound is diagnosed if the above identified error measures (indicated in FIG. 2D) are true. Thus, based on a plurality of these rules, a specific deficiency (for example, a deficiency of the /f/ sound at the end of a word following an /e/ sound or another consonant vs. a deficiency of the /f/ sound generally) is identified and relevant training modules are recommended. Now, the [0110] motivator module 109 will be described in more detail.
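Before turning to the motivator module, the rule evaluation just described can be sketched as follows; the rule contents mirror Rules 1-3 above, while the set-based representation and the diagnose() helper are assumptions made for the example.

```python
# Illustrative only: a rule fires when all of its error measures are present in
# the errors stored for the user, and the corresponding deficiency can then be
# used to recommend relevant training modules.
RULES = [
    ({"3.2", "4.3", "5.4", "6.4", "7.3", "9.4", "10.2"},
     "deficiency: the /f/ sound"),
    ({"3.2", "4.3", "5.4", "6.4", "7.3", "10.2"},
     "deficiency: the /f/ sound at the end of a word"),
    ({"4.3", "6.4", "7.3", "10.2"},
     "deficiency: the /f/ sound at the end of a word following /e/ or another consonant"),
]

def diagnose(stored_error_measures):
    """Return the deficiencies whose rules are fully satisfied by the stored errors."""
    return [label for measures, label in RULES
            if measures <= set(stored_error_measures)]

print(diagnose({"4.3", "6.4", "7.3", "10.2"}))  # only Rule 3 fires for this user
```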
  • The [0111] motivator module 109 may generate motivation images and sounds to encourage the user/student to complete the tests associated with the assessment tool so that the user is less aware that he/she is being tested by the system. The motivation may also maintain the user/student's interest in the testing. In one embodiment, the diagnostic system may show one or more animals, such as monkeys, eating bananas as the user is completing the tests so that the user is rewarded and incentivized by the monkey's actions. In a preferred embodiment, there may be eleven different skills tests and the monkeys may be shown to the user after the first three tests are completed by the user, and then after the first six tests have been completed by the user, and finally after the first nine tests have been completed by the user. In this manner, the user is given a break between tests, given a chance to relax, and informed of the test portions completed and to be completed. For example, after the first three tests, the monkey may be eating three bananas representing the three completed test sections and may say “I want more bananas. Help me get some more bananas” to encourage the student to complete the other tests in the diagnostic tool which are represented by the eight bananas on the tree. Thus, the motivation module encourages the user to complete all of the tests in the diagnostic tool.
• The diagnostic tool may also include speech recognition software that permits the various tests, described below, to be used in conjunction with speech recognition technology (a microphone and speech recognition software) on the client computer to enhance the value of the diagnostic tests. For example, the child may see one or more items on the computer screen in rapid succession and speak the name of each item into a microphone; the spoken response is interpreted by the speech recognition software in the client computer, transmitted to the server and compared to a correct response by the speech recognition software in the server so that the scorer may determine whether or not the child correctly identified each item. The tests that may benefit from the speech recognition technology will be described below. Now, a preferred embodiment of the diagnostic tool in accordance with the invention will be described in more detail. [0112]
  • FIG. 3 is a diagram illustrating a preferred embodiment of the [0113] diagnostic tool 66 including one or more tests 102 that are used to diagnose a reading problem of a child by testing various phonological awareness and processing skills and pre-reading skills of the child. In a preferred embodiment, the one or more tests 102 may each be a separate software application module that may include a user interface portion 111 containing one or more Web pages. Each test 102 may display images on the display of the client computer that test a particular phonological awareness skill of the child and receive responses from the child that are used to determine a score for the child. In the preferred embodiment, the diagnostic tool may include, for example, a questionnaire module 110, a rhyme recognizer module 112, a rhyme generator module 114, a beginning and ending sound or sound unit recognizer module 116, a sound blender module 120, a sound segmenter module 122, a sound manipulator module 124, a sequential verbal recall module 126, a rapid item naming module 128, a letter naming and sound/symbol association module 130, a word decoder module 132 and a fluent reader module 134. As described above, each module may embody a test that tests a particular phonological or reading skill of the child that may affect the child's ability to read.
• The [0114] questionnaire 110 is a fill-in form that permits the system to look for particular risk factors that may lead to reading deficiencies as described below with reference to FIG. 4. The rhyme recognizer module 112 determines the child's ability to recognize a rhyme as described below with reference to FIGS. 5 and 6. The rhyme generator module 114 determines the child's ability to make rhymes as described below with reference to FIGS. 7 and 8. The beginning and ending sound or sound unit recognizer module 116 determines the child's ability to recognize the beginning and ending sounds in one or more words as described below with reference to FIGS. 9 and 10. The sound blender module 120 determines the child's ability to blend known sounds or sound units together to form new words as described below with reference to FIGS. 11 and 12.
  • The [0115] sound segmenter module 122 determines the child's ability to segment a word into one or more sounds as described below with reference to FIGS. 13 and 14. The sound manipulator module 124 determines a child's ability to manipulate the sounds in a word as described below with reference to FIGS. 15 and 16. The sequential verbal recall module 126 determines the child's ability to recall a series of sequential items shown to the child as described below with reference to FIGS. 17 and 18. The rapid naming module 128 determines a child's ability to rapidly name one or more items as described below with reference to FIGS. 19 and 20. The letter naming and sound/symbol association module 130 determines the child's ability to name the letters of the alphabet and associate sounds with symbols as described below with reference to FIGS. 21 and 22. The word decoding module 132 determines a child's ability to determine words based on one or more sounds as described below with reference to FIGS. 23 and 24. The fluent reader module 134 determines the child's fluent reading ability as described below with reference to FIGS. 25 and 26. As described above and below, each module may use the speech recognition technology to enhance the testing process. Now, each of these modules will be described in more detail starting with the questionnaire.
• FIG. 4 is a flowchart illustrating a [0116] questionnaire process 140 in accordance with the invention. The questionnaire permits the diagnostic system to gather information about an individual to be tested for the purpose of calculating the individual's risk for reading and academic failure. In particular, a variety of historical, environmental, familial and behavioral factors that have been closely linked with and are predictive of language-based reading and learning disorders may be determined. For example, the frequency of middle ear infections, a family history of dyslexia, socioeconomic status, exposure to literacy in the home, and competencies in speech sound awareness, word retrieval, verbal memory, speech sound perception and production, language comprehension and expressive language may provide information about an individual's risk for language-based reading and learning problems.
  • In [0117] step 142, the questionnaire may display a first question to the user of the client computer, such as the parent of the child being tested. Next, the user may respond to the question using the user input devices and the user's response may be recorded by the questionnaire module in step 144. In step 146, the questionnaire module determines if all of the questions have been answered and goes to step 142 to present the next question to the user if there are additional questions. As long as there are remaining questions, the method will loop through steps 142-146. When the user has answered all of the questions, the questionnaire module may analyze the responses in step 148 to calculate a score and a risk factor value and then display the results of the analysis (including the responses and the recommendations of the system) to the user in step 150. The score may be calculated as the number of items checked as being applicable to the user. Although a single factor does not indicate a risk, the more factors that exist for an individual, the more likely it is that the individual may experience difficulties.
  • In analyzing the results of the questionnaire, the module may generate a category of the risk (high, medium or low) and then provide recommendations based on the category of risk. As an example, the questionnaire may ask if the child has a history of middle ear infections, if anyone in the family has reading or other learning disabilities and if the child mispronounces multi-syllabic words. The responses to these questions may be used to determine the category of risk of the person being tested. The category of risk determined based on the questionnaire may then be used during the recommendation of training tools. Now, the rhyme recognition module will be described in more detail. [0118]
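A minimal sketch of this scoring and categorization step is shown below. It is illustrative only: the list of risk items, the 0.6/0.3 category cut-offs and the function names are assumptions rather than details taken from the description above, which states only that the score is the number of applicable items and that more factors imply higher risk.

    # Sketch of questionnaire scoring: the score is the number of risk items
    # checked as applicable, mapped to a high/medium/low risk category.
    # The items and the cut-off values below are hypothetical examples.
    RISK_ITEMS = [
        "history of middle ear infections",
        "family history of dyslexia or other learning disability",
        "mispronounces multi-syllabic words",
        "limited exposure to literacy in the home",
    ]

    def questionnaire_score(responses):
        """Count the items the respondent checked as applicable (True)."""
        return sum(1 for item in RISK_ITEMS if responses.get(item, False))

    def risk_category(score, total=len(RISK_ITEMS)):
        """Map the count of applicable factors to a risk category.

        A single factor does not by itself indicate risk; the more factors
        present, the higher the category. The boundaries here are assumed.
        """
        ratio = score / total
        if ratio >= 0.6:
            return "high"
        if ratio >= 0.3:
            return "medium"
        return "low"

    answers = {"history of middle ear infections": True,
               "family history of dyslexia or other learning disability": True}
    print(questionnaire_score(answers), risk_category(questionnaire_score(answers)))
    # -> 2 medium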
  • FIG. 5 is a flowchart illustrating a [0119] method 160 for testing a child's recognition of rhymes in accordance with the invention. The rhyme recognizer module tests the child's ability to recognize rhyming words and, in order to determine if two words rhyme, the child must focus on the sounds of the words rather than the meaning. In addition, the child must focus on one part of the word rather than the word as a whole. A sensitivity to rhyming is typically a child's first experience shifting their attention and focus from the content of the speech to the form of the words. Typically, this skill for recognizing rhymes should emerge by 3-4 years of age. The module may show the child one or more different types of rhymes (using different sound units, for example) in order to assess the child's ability with different types of rhymes.
  • At [0120] step 162, the rhyme recognizing module may display two words along with their pictures on the user's display screen as shown in FIG. 6. For example, the module may display the picture of a sun and a picture of a gun. In step 164, the module may display text below the pictures asking the user if the two words rhyme. In a preferred embodiment, the module may present a verbal prompt asking the user if the two words rhyme since the users of the system may not be able to read. In step 166, the user may use the user input device, such as the keyboard, the mouse or the microphone of the speech recognition hardware, to respond to the question and the module may receive the response. In step 168, the module may determine if the response is correct. If the response is correct, the module may determine if there are other rhyme types to test in step 170. If there are more rhyme types to test, the module may display the word pair for the next type of rhyme in step 172 and loops back to step 164 to display the question about whether the two words rhyme. If there are no more rhyme types to test, the module may calculate the child's score in step 174. The score may be calculated based on the percentage of pairs of items correctly identified as rhyming or not. In step 176, the module may display the score to the user and the recommender, based on the score, may recommend one or more training tools to help the child improve his rhyme identification skills.
  • Returning to step [0121] 168, if the response given by the user is not correct, then the module may determine the number of consecutive errors of the particular rhyme type in step 178. In step 180, the module may compare the number calculated above to a predetermined number and if the number of consecutive errors is more than the predetermined number, the module may go to step 170 to determine if there are other rhyme types to be tested (assuming that more tests for the current rhyme type are not productive since the user has already missed more than the predetermined number). If the number of consecutive errors is less than the predetermined number, then the module may display the next word pair for the same rhyme type in step 182 in order to continue testing the child's ability with that particular type of rhyme. In this manner, the rhyme recognizer module may test the child's abilities with respect to a variety of rhyme types to gain a better understanding of the child's deficiencies or abilities to recognize rhymes. For example, the module may determine that the child only has deficiencies with respect to certain types of rhymes. Now, an example of the user interface for the rhyme recognition module will be described.
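The control flow just described (present a word pair, advance to the next rhyme type after a correct answer, retry the same type after an incorrect answer until a consecutive-error limit is reached, then score as the percentage of items answered correctly) also governs most of the later subtests with different stimuli. The following is a hedged sketch of that loop; the item layout and the ask callback are assumptions made for illustration, not the actual implementation.

    # Sketch of the FIG. 5 control flow (illustrative only).
    # item_sets maps each rhyme type to a list of (word_pair, rhymes) items,
    # where rhymes is True when the pair actually rhymes; ask is a placeholder
    # callback that presents the pair and returns the child's yes/no answer.
    def run_rhyme_recognition(item_sets, max_consecutive_errors, ask):
        attempted = 0
        correct = 0
        for rhyme_type, items in item_sets.items():
            consecutive_errors = 0
            for word_pair, rhymes in items:
                attempted += 1
                if ask(word_pair) == rhymes:
                    correct += 1
                    break  # correct answer: move on to the next rhyme type
                consecutive_errors += 1
                if consecutive_errors >= max_consecutive_errors:
                    break  # too many misses: give up on this rhyme type
        # Score is the percentage of presented pairs answered correctly.
        return 100.0 * correct / attempted if attempted else 0.0

With a consecutive-error limit of, say, three, the child is never shown more than three failed items of the same rhyme type before the test moves on, which keeps the session short for children who do not yet have the skill.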
  • FIG. 6 is a diagram illustrating an example of how the child's rhyme recognition may be tested in accordance with the invention. In particular, an [0122] image 190 that may be displayed on the user's display screen is shown. The image may include a picture of a first item 192 and a picture of a second item 194 and the child must determine if the names of the two items rhyme with each other. In this example, the items are a sun and a gun that do in fact rhyme. The image may also include displayed instructions 196 from the module and one or more response buttons 198, 200, such as the “Yes” button and the “No” button in this example. As described above, the user may also respond to the query by using the keyboard or by speaking into a speech recognition microphone. In accordance with the invention, the rhyme recognition module may present the rhyme recognition test as a series of colorful images that reduces the child's test anxiety since the child may not even realize that he/she is being tested. Now, the rhyme generation module will be described in more detail.
  • FIG. 7 is a flowchart illustrating a [0123] method 210 for testing a child's ability to generate a rhyme. The rhyme generation module assesses a child's ability to focus on one part of a word rather than the entire word. The ability to rhyme indicates the emergence of phonological awareness and processing skills and is a good early indicator of later reading ability. Typically, this skill begins to emerge when the child is 3-4 years old.
  • In [0124] step 212, the module may generate a word sound on the speaker of the user's computer and may display an image of the word being spoken. The module may also display a series of other pictures of items in step 214 and the user must determine which item in the series rhymes with the spoken word. The module may then ask the user to select the rhyming item in step 216, and the user may provide a response using one of the input devices (keyboard, mouse or microphone). Instead of a series of images being displayed to the user, the module may provide a verbal prompt asking the user to generate a rhyming word and the user may speak the rhyming word into the microphone of the speech recognition device. The module may then determine if the user's response is correct in step 218. If the user's response is not correct, then the module may determine the number of consecutive incorrect responses in step 220 and compare the calculated number to a predetermined number, n, in step 222. If the number of errors is less than the predetermined number (i.e., the user should be tested more on that rhyme type), the module may display the next image in step 224 and return to step 214. If the number of consecutive errors is greater than the predetermined number (i.e., it is no longer useful to continue testing this rhyme pair because the user does not understand it) or the user's response was correct, the module may determine if there are more rhyme types to test in step 226. If there are more rhyme types to test, then the module may display the items for the next rhyme type in step 228 and return to step 214 to elicit the user's response. If there are no other rhyme types (i.e., the user has completed the module), the module may calculate a score in step 230 (the score is equal to the percentage of items correctly identified as rhyming) and may display the results of the test and any recommendations from the recommender in step 232. The recommendations from the recommender are similar to those described above and therefore will not be described here. Now, an example of the rhyme generation test is described.
  • FIG. 8 is a diagram illustrating an example of how the child's rhyme generation may be tested in accordance with the invention using an [0125] image 240. The image may include an image 242 of the spoken word that may be a “pup” in this example. The image 240 may also include one or more images of other items 244-248 (a horn, a bed and a cup in this example) and displayed instructions 250 as shown. During the test, the user may hear the word “pup”, see the picture of the “pup” and select the item below it that rhymes with the pup. In this example, the user is supposed to select the picture of the cup. As above, instead of a series of images being displayed to the user, the module may provide a verbal prompt asking the user to generate a rhyming word and the user may speak the rhyming word into the microphone of the speech recognition device. As above, the use of images to test the child's ability reduces the child's test anxiety since the child may not even realize that a test is being conducted. Now, more details of the beginning and ending sound recognizer module will be described.
  • FIG. 9 is a flowchart illustrating a [0126] method 260 performed by the beginning and ending sound recognizer module for testing the child's ability to distinguish the beginning and ending sounds of a word. In particular, the module tests a child's ability to recognize sounds in words. Once the child establishes the skill to recognize the beginning and ending sounds of a word, the child may more readily learn to isolate the sounds in a word and hear them separately. A normal kindergarten child is typically able to identify which word in a group of three words begins with the same first sound as the target word. Most normal first grade students can perform the harder task of identifying the word in a group with the same last sound.
  • In [0127] step 262, the module may present a spoken word naming an item and display an image of the item to the user. In step 264, the module may query the user about which item in a sequence of items has the same beginning sound as the item. The module may then receive the user's response, entered using the input devices as described above, in step 266. In step 268, the module determines if the response is correct. If the response is not correct, the module may determine the number of consecutive errors for the particular beginning sound in step 270 and compare the calculated value with a predetermined value, n, in step 272. If the calculated value is less than the predetermined value (i.e., the user should be asked more questions about that particular type of beginning sound), then the module may present the user with another spoken word and picture in step 274 and return to step 264 to gather the user's response.
  • Returning to step [0128] 268, if the response of the user is correct, the module determines if all of the beginning sounds in the test are completed in step 276 and either presents the next beginning sound in step 278 and returns to step 264 if there are other beginning sounds to test or begins testing the ending sounds. In particular, the module may present a spoken word and a picture of the item in step 280 and query the user about which item in a sequence of items has a similar ending sound in step 282. In step 284, the module may gather the user's response and determine if the response is correct in step 286. If the response is incorrect, the module may determine the number of consecutive errors for the particular ending sound in step 288, compare the calculated number to a predetermined number in step 290 and, if the calculated number is less than the predetermined number, display a next word in step 292 and return to step 282. If the calculated number is not less than the predetermined number or the user's response is correct, the module may determine if the testing of the ending sounds has been completed in step 294. If the testing of the ending sounds has not been completed then the module may present the next word in step 296 and return to step 282. If the ending sounds are completed, the module may calculate a score based on the percentage of correct responses in step 298. In step 300, the module and the recommender, respectively, may generate a display of the score and any recommendations about training tools that the user may use to improve his recognition of the beginning and ending sounds of a word. Now, an example of the user interface for testing the ability to discern the beginnings and endings of words will be described.
  • FIG. 10 is a diagram illustrating an example of a [0129] user interface 310 of how the child's ability to discern the beginning and ending of words may be tested in accordance with the invention. In particular, the user interface may include a picture of the current word 312 that is a leg in this example, and a series of pictures 314 showing other items. The user must recognize the beginning sound of the leg and then determine which picture of an item shows an item with the same beginning sound. The user may then select an item by clicking on the item. In this example, the correct response is the lamp. Now, a method for testing a child's ability to blend sounds will be described.
  • FIG. 11 is a flowchart illustrating a [0130] method 360 for testing a child's ability to blend sounds. In particular, the game tests the user's ability to blend units of sound such as syllables or phonemes together. The blending of these units of sound together requires a knowledge that individual sounds may be combined to form a word, but does not require letter recognition. The blending of sounds is an important reading skill since, when children sound out a word, they must be able to then blend all of the sounds together to form the whole word. Typical children normally develop the blending skill during the early kindergarten years.
  • In [0131] step 362, the module may display one or more graphical representations of items and present a spoken word, with its sound units separated by equal intervals of time, to the user, such as "k-ey". The module may then ask the user to identify the graphical item referred to by the spoken word in step 364 and receive the response from the user using one of the input devices, such as the keyboard, mouse or microphone of the speech recognizer. In step 366, the module may determine if the response received is correct. If the response was not correct, the module may determine the number of consecutive errors for the current sound unit in step 368. In step 370, the module may determine if the number of consecutive errors is less than a predetermined threshold and, if it is, present the next word with a similar sound unit type in step 372 and loop back to step 364. If the number of consecutive errors is not less than the predetermined threshold or if the prior response was correct, the module may determine if there are other sound unit types to test in step 374. If there are other sound unit types, the module may present a word with sound units of the new type in step 376 and loop back to step 364 to test the child using the new sound unit type. If there are no more sound unit types to test, the module may determine the user's score in step 378 based on the percentage of correctly answered items. In step 379, the module may display the score to the user and the recommender may recommend one or more training tools that may help the user improve the sound blending ability and that may be downloaded from the diagnostic system. An example of the user interface for testing the blending of sounds will now be described.
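One way to present a spoken word with its sound units separated by equal intervals of time, as described for step 362, is sketched below. The play callback and the half-second gap are placeholders, since the description does not specify the audio interface or the interval length.

    import time

    # Sketch of presenting a word (e.g. "key" as "k-ey") one sound unit at a
    # time with equal pauses between units. The play argument stands in for
    # whatever audio playback routine the client computer actually provides.
    def present_segmented_word(sound_units, play, gap_seconds=0.5):
        for unit in sound_units:
            play(unit)
            time.sleep(gap_seconds)

    # Example with a stand-in playback function that simply prints the unit.
    present_segmented_word(["k", "ey"], play=lambda unit: print("playing", unit))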
  • FIG. 12 is a diagram illustrating an example of a user interface for testing a child's ability to blend [0132] sounds 380 in accordance with the invention. As shown, the user interface 380 may include graphical representations 382-386 of one or more items, such as a key, a doll and a bell in this example, that the user may select in response to the spoken word's separated sound units. As described above, the user may respond to the questions by clicking on the image, pressing a key on the keyboard or speaking a name into the microphone of the speech recognizer. In this example, the correct response is to select the key 382. Now, a method for testing the sound segmenting ability of a user will be described.
  • FIG. 13 is a flowchart illustrating a [0133] method 390 for testing a child's ability to segment sounds in which the user's ability to segment a unit of sound, such as a word, into its constituent sound units, such as syllables and phonemes, is tested. The ability to segment phonemes is a reliable predictor of reading success and usually is developed prior to and during kindergarten. In step 392, a sequence of sound units, such as a sentence, is spoken to the user. In step 394, the user is queried about how many words the user heard and the response from the user may be shown graphically as shown in FIG. 14. In the example shown in FIG. 14, the sentence "I have two brothers" was presented to the user, the user activated an input device (clicked the mouse button, hit a key or spoke into the microphone) four times to indicate that four words were heard, and four items 395 are shown on the display.
  • Returning to FIG. 13, the accuracy of the user's response is checked in [0134] step 396. If the response is not correct, the number of consecutive errors is determined in step 398 and compared to a threshold value in step 400. If the number of errors is less than the threshold, the next sequence of sound units is presented to the user in step 402 and the method loops back to step 394. If the number of errors is not less than the threshold or the prior response of the user was correct, it is determined if there are more tests with a different sequence of sound units in step 404. If there are more tests, a new sequence of sound units is presented in step 406 and the method loops back to step 394. If all of the tests have been completed, then the user's score is determined (as a percentage of correct responses) in step 408 and the score and any recommendations based on the score are displayed in step 410. Now, a method for testing a child's ability to manipulate sounds is described.
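The accuracy check in step 396 reduces to comparing the number of input-device activations with the number of words (or other sound units) in the spoken sequence. A small sketch is shown below; deriving the word count by splitting on spaces is an assumption made for illustration.

    # Sketch of the segmentation accuracy check: the child clicks, presses a
    # key or speaks once per word heard, and the response is correct when the
    # count matches the number of words in the spoken sentence.
    def segmentation_is_correct(sentence, activation_count):
        expected = len(sentence.split())
        return activation_count == expected

    print(segmentation_is_correct("I have two brothers", 4))  # True
    print(segmentation_is_correct("I have two brothers", 3))  # False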
  • FIG. 15 is a flowchart illustrating a [0135] method 420 for testing a child's ability to manipulate sounds. In particular, the user's ability to manipulate phonemes is tested since that ability is highly correlated with reading ability through the 12th grade. In step 422, the user is presented with a spoken word. In the example shown in FIG. 16, the spoken word is "cake". In step 424, a graphical representation of constituent sound units is displayed for the user. In the example shown in FIG. 16, the graphical representations may be one or more blocks 426 (three for the word "cake" with the first and last blocks being the same color since the first and last sound units of "cake" have the same sound). In step 428, the user is asked to rearrange the blocks shown or use the other available blocks (as shown in FIG. 16) to form a new word and the user rearranges the blocks with an input device. In the example, the user is asked to change "cake" to "cape". A correct response would be to have three blocks wherein a third block 429 has a color that does not match the other two blocks indicating that the third sound unit is different from both the first and second sound units. In step 430, the accuracy of the response is determined. If the response is not correct, the number of consecutive errors is determined in step 432 and compared to a threshold value in step 434. If the threshold value is not exceeded (indicating that the same type of manipulation should continue to be tested), the next manipulation of the same type is presented in step 436 and the method loops back to step 424. If the number of errors exceeds the threshold (indicating that the child is having too much trouble with the current type of manipulation) or if the prior response was correct, it is determined if there are more types of manipulations to test in step 438. If there are more types to test, the next type of manipulation is presented in step 440 and the method loops back to step 424. If there are no more types to test, the score of the user is determined in step 442 (based on the percentage of correct answers) and the score and any recommendations are displayed to the user in step 444. Now, a method for testing the ability to recall spoken words will be described.
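The colored-block display described above can be derived directly from a word's sound units, reusing a color whenever a sound recurs. The sketch below illustrates one way to do this; the color palette is an arbitrary assumption.

    from itertools import cycle

    # Sketch of assigning one colored block per sound unit, with repeated
    # sounds sharing a color (so the first and last blocks of "cake" match,
    # while the last block of "cape" does not). The palette is an example.
    def blocks_for_word(sound_units, palette=("red", "blue", "yellow", "green")):
        colors = {}
        next_color = cycle(palette)
        blocks = []
        for unit in sound_units:
            if unit not in colors:
                colors[unit] = next(next_color)
            blocks.append(colors[unit])
        return blocks

    print(blocks_for_word(["k", "ay", "k"]))   # ['red', 'blue', 'red']     ("cake")
    print(blocks_for_word(["k", "ay", "p"]))   # ['red', 'blue', 'yellow']  ("cape")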
  • FIG. 17 is a flowchart illustrating a [0136] method 450 for testing a child's ability to recall spoken items in sequential order. The ability to recall a sequence of verbal material depends on the ability to accurately represent the essential phonological features of each item in working memory and phonological coding efficiency is a primary determinant of performance of this task. Typically, the ability to recall a list of spoken items increases with age from about 1 digit and 2 words at 4 years old to 8 digits and 6 words at 12 years old. In step 452, a sequence of words and/or digits is spoken with equal intervals between each word or digit through the speaker of the computer to the user. The user then repeats the sequence back using an input device such as a microphone of the speech recognizer in step 454. FIG. 18 illustrates an example of a sequence of digits that are presented to the user. In step 456, the response is checked for accuracy.
  • If the response is not correct, then the number of consecutive errors is determined in [0137] step 458 and the number of consecutive errors is compared to a threshold in step 460. If the threshold is not exceeded, then the next sequence of words and/or digits is presented in step 462 and the method loops back to step 454. If the threshold is exceeded or if the last response was correct, it is determined if there are more types of sequences of words to test in step 464 and the method presents a new type of sequence in step 466 and loops back to step 454 if there are more types. If all of the types of sequences have been completed, then the user's score is determined in step 468 (as a percentage of correct responses) and the score and any recommendations for training modules are displayed in step 470. Now, a method for testing rapid naming ability will be described.
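Because a recalled sequence counts as correct only when every item is repeated back in its original order, the scoring step for this subtest can be sketched as follows; the trial data format is an assumption for illustration.

    # Sketch of scoring the sequential verbal recall test: a trial is correct
    # only when the recalled items match the presented items in the same
    # order, and the score is the percentage of correct trials.
    def recall_score(trials):
        if not trials:
            return 0.0
        correct = sum(1 for presented, recalled in trials
                      if list(presented) == list(recalled))
        return 100.0 * correct / len(trials)

    trials = [(("3", "8", "5"), ("3", "8", "5")),              # exact repetition
              (("cat", "sun", "bed"), ("cat", "bed", "sun"))]  # right items, wrong order
    print(recall_score(trials))  # 50.0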
  • FIG. 19 is a flowchart illustrating a [0138] method 480 for testing a child's ability to rapidly name visually-presented items. In particular, an inability to name visual objects typically underlies a reading disorder. In step 482, an array 484 (an example of which is shown in FIG. 20 as a first row of a 4×6 array) is displayed to the user. In step 486, a timer is started and the user is asked to name all of the items in the array as fast as possible in step 488 using an input device such as a microphone of a speech recognizer. The timer may actually be started when the user makes his/her first response. After each response, the accuracy of the response is determined in step 490. If the response is not correct, then the number of consecutive errors is determined in step 492 and compared to a threshold in step 494. If the threshold is exceeded, the test is aborted. If the threshold is not exceeded, then the user continues to identify the items in the array. If the prior response was correct, then it is determined if there are more items to name in step 496 and the method loops back to step 488 if there are more items. If all of the items have been named, then the timer is stopped in step 498 and the score is determined in step 500 based on the total time of the responses. In step 502, the score and any recommendations for training modules are displayed. Now, a method for testing the ability to name letters and associate sounds with symbols will be described.
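A sketch of the timed loop of FIG. 19 appears below. The name_item callback stands in for capturing the child's spoken answer (for example through the speech recognizer), the consecutive-error limit of four is an arbitrary placeholder, and the function simply reports the elapsed time, since the description bases the score on the total response time.

    import time

    # Sketch of the rapid naming loop: start a timer, ask the child to name
    # each item in the displayed array, and score on the total time taken.
    # The name_item argument is a placeholder for speech-recognition capture.
    def rapid_naming(items, name_item, max_consecutive_errors=4):
        start = time.monotonic()
        consecutive_errors = 0
        for item in items:
            if name_item(item) == item:
                consecutive_errors = 0
            else:
                consecutive_errors += 1
                if consecutive_errors > max_consecutive_errors:
                    return None  # too many consecutive errors: abort the test
        return time.monotonic() - start  # total naming time in seconds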
  • FIG. 21 is a flowchart illustrating a [0139] method 510 for testing a child's ability to name letters and associate a phoneme sound with a letter. The inability to name letters may indicate a reading problem at the kindergarten level while an inability to associate a phoneme sound with a letter may indicate a reading problem at the first and second grade level. In step 512, a letter's name is spoken to the user by the computer. In step 514, the user may identify the letter in an array of letters 516 (an example of which is shown in FIG. 22) and select the appropriate letter using an input device. In step 518, the response accuracy is determined and it is determined if there are more letters. If there are more letters, the method loops back to step 512. If all of the letters have been completed, then a phoneme sound is generated by the computer and heard by the user in step 520. The user may then indicate the corresponding letter for the phoneme sound in step 522 and the accuracy of the response is checked. In step 524, it is determined if there are more phonemes to test and the method loops back to step 520 if there are more phonemes. If the phonemes have been completed, then the user's score is determined in step 526 and the score and any recommendations about training modules are displayed in step 528. Now, a method for testing a child's ability to decode words will be described.
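The two phases of FIG. 21 (spoken letter names, then phoneme sounds, each answered by selecting a letter from the displayed array) can be sketched as a single scoring pass; the choose callback and the item format are assumptions made for illustration.

    # Sketch of the letter-naming and sound/symbol association test: first the
    # spoken letter names, then the phoneme sounds, each answered by picking a
    # letter from the on-screen array. choose is a placeholder that plays the
    # prompt and returns the letter the child selected.
    def letter_and_sound_score(letter_items, phoneme_items, choose):
        items = list(letter_items) + list(phoneme_items)
        if not items:
            return 0.0
        correct = sum(1 for prompt, letter in items if choose(prompt) == letter)
        return 100.0 * correct / len(items)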
  • FIG. 23 is a flowchart illustrating another [0140] method 530 for testing a child's ability to decode words. In particular, the method tests a child's ability to decode (i.e., read by sounding out) nonsense and real words since research has shown that the best measure of the ability to apply knowledge about grapheme-phoneme correspondences to reading words is a test of non-word phonemic decoding fluency.
  • At [0141] step 532, the module may display a set of words 533 on the screen (an example of which is shown in FIG. 24) and then present a spoken word. In step 534, the module asks the user to identify the written word that was just spoken to the user. As above, the user's response may be provided using one of the input devices, such as the keyboard, mouse or microphone of the speech recognizer. Instead of speaking the word to the user, the module may present the word to the user in a visual manner. In step 536, the module determines if the correct response was received. If the response was not correct, then the module may determine the number of consecutive errors for the particular syllable type in step 538 and compare that calculated value to a predetermined threshold value in step 540 to determine if the calculated value is less than the threshold value. If the calculated value is less than the threshold, then the next spoken word for the same syllable type is presented in step 542 and the method loops back to step 534 to determine the user's response. If the number of consecutive errors is greater than the threshold or the prior response was correct, the module may determine if there are more syllable types to be tested in step 544. If there are more syllable types to test, the module presents the next word for the next syllable type in step 546 and loops back to step 532 where a new spoken word is presented to the user. If there are no more syllable types to test, the module may repeat the above testing process (not shown in the flowchart for clarity reasons) for one or more nonsense words in step 548. Once the testing process has been repeated for the nonsense words (checking for completion in step 550 and looping back to step 548), the module may determine the score of the child in step 552 wherein the score is calculated as a percentage of items that have been correctly answered. In step 554, based on the score, the module may display the score and the recommender may recommend one or more training tools to improve the child's decoding skills if the score reveals a decoding deficiency. Now, a method for testing fluent reading will be described.
  • FIG. 25 is a flowchart illustrating a [0142] method 560 for testing a child's ability for fluent reading. Slow or inaccurate decoding interferes with the ability of the child or user to extract meaning from the text. A typical child may read and respond to 30 sentences of the nature presented in this diagnostic tool in two minutes. The sentences may be questions ("Is the dog red?") or statements ("The dog has fur.") to which the user responds. In step 562, a question 564 is displayed to the user along with two answers 566 (an example of which is shown in FIG. 26). A timer is started in step 568 as the user makes his first response in step 570. In step 572, the accuracy of the response is determined. If the response is not accurate, then the number of errors made is compared to a threshold in step 574. If the number of errors is less than the threshold, then the method loops back to step 562 to continue testing. If the number of errors is more than the threshold or the prior response was correct, it is determined if the time exceeded two minutes in step 576. If the time is less than two minutes, then the method loops back to step 562. If the time exceeds two minutes, the total number of correct responses is tallied and then the entire test is repeated in step 577 and the score of the user is determined in step 578. The total score of the user is calculated by determining the user's score for each two minute test and then averaging the scores from the two tests to arrive at a final score. For example, a user may score 30 on the first test and 28 on the second test so that the final score is 29. In step 580, the score and any recommendations of training modules are displayed to the user. Now, the training module recommender in accordance with the invention will be described.
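The final fluent-reading score is simply the mean of the correct-response counts from the two two-minute runs, as in the 30-and-28 example above; a one-line sketch follows.

    # Sketch of the fluent reading score: average the number of correct
    # responses from the two two-minute test runs.
    def fluency_score(first_run_correct, second_run_correct):
        return (first_run_correct + second_run_correct) / 2

    print(fluency_score(30, 28))  # 29.0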
  • FIG. 27 is a flowchart illustrating the [0143] training recommender method 590 in accordance with the invention. The method identifies, recommends and makes available specific training modules based on an individual's or a group's assessment profile derived from the results of the various tests performed by the diagnostic tool in accordance with the invention. In particular, the recommender may automatically recommend one or more training modules based on the test results. In step 592, the recommender gathers the data for the individual or group and analyzes it. In step 594, the recommender determines the individual's or group's skill in each skill area tested by the diagnostic tool. In step 596, the recommender matches the skill level of the individual or group in a particular skill area with an appropriate training module. For example, a particular score of a user, such as close to normal, on a particular test, such as rhyme recognition, may cause the recommender to recommend the lowest level (least amount of training) of the rhyme recognition training tool to help the child. For a child with more rhyme recognition deficiencies, the recommender may recommend a higher level training tool with more rhyme recognition training. As another example, the particular scores of a user on the various syllable types in the rhyme recognition test may cause the recommender to recommend no training for open rime syllable types but to recommend training for closed rime syllable types.
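A hedged sketch of the matching step is shown below. The percentage cut-offs and the training-level labels are illustrative assumptions, since the description states only that scores closer to normal map to lower levels of training and weaker scores to higher levels.

    # Sketch of matching subtest scores to training-module levels. Scores near
    # normal get no training or the lowest level; weaker scores get a higher
    # level. The 90/70 cut-offs and the level names are assumed.
    def recommend_training(scores, near_normal=90, moderate=70):
        recommendations = {}
        for skill, percent_correct in scores.items():
            if percent_correct >= near_normal:
                recommendations[skill] = None  # no training recommended
            elif percent_correct >= moderate:
                recommendations[skill] = skill + " training, level 1"
            else:
                recommendations[skill] = skill + " training, level 2"
        return recommendations

    print(recommend_training({"rhyme recognition": 95, "sound blending": 60}))
    # {'rhyme recognition': None, 'sound blending': 'sound blending training, level 2'}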
  • In [0144] step 598, the recommender may display the recommended training modules to the user. The user may then select the recommended training modules in step 600 and the training modules may be downloaded to the user's computer so that the user may use the training modules to improve the skill areas that require it. In this manner, the diagnostic system in accordance with the invention not only diagnoses reading problems using the various skill tests but also recommends training modules that may help improve a deficient skill. Thus, the diagnostic system makes it easy for a parent to have the child tested for deficiencies and then to receive the tools that help correct any deficiencies. Now, an example of a report that is generated by the diagnostic system in accordance with the invention will be described.
  • FIG. 28 illustrates an example of a [0145] user interface 700 displaying a data report that is generated by the computer-based phonological skills diagnostic system in accordance with the invention. Further details of the data reporting in accordance with the invention are contained in Appendix A which is incorporated herein by reference. In accordance with the invention, there may be two types of data reports including a graph and a table. A data graph (an example of which is shown in FIG. 28) or a data table displays data to show individual students' test results for all of the subtests and to compare the test results (total score comparison or individual subtest score comparison) among students, across classes or schools. The data tables can be sorted by scores and provide normative data for comparison. Now, the example of the data graph in accordance with the invention will be described.
  • The data graph shown in FIG. 28 may provide various information to a user of the system. In this example, the data graph illustrates the percentage correct for a particular test (Rhyming Recognition in this example) for a particular school class (Ms. Davis' [0146] Class 1A at Central School in this example). As shown, this graph illustrates the percentage correct of one or more students (Melissa, Robert, Ken, Beth, etc. in this example) at different points in time. In a preferred embodiment, the full names of the students are shown on the data report. In this example, each student's score (a percentage of correct answers in the Rhyming Recognition test) prior to any training (pre-test), after a first round of training and testing (post-test 1) and after a second round of training and testing (post-test 2) are shown. In a preferred embodiment, the different scores are color coded for easier viewing so that, for example, the pre-test score is a green bar, the post-test 1 score is a blue bar and the post-test 2 score is a red bar. As shown, each bar lists the actual percentage that is represented by the bar, but that percentage can be suppressed by clicking on a button 702 on the user interface. In particular, when the button is clicked, all the percentages on the screen are hidden and the button will change to Show Percentages. When the Show Percentages button is clicked, all the percentages will be shown on the screen and the button changes back to Suppress Percentages.
  • The user interface may include an "Other Test Section" [0147] button 704. If the user clicks on the "Other Test Section" button, a drop-down menu 740 (an example of which is shown in FIG. 29) appears that shows all of the subtest titles that can be selected by the administrator/teacher. Using the menu, a teacher can select a subtest to see the students' test results on this particular subtest. Thus, the data graph is dynamic in that the teacher/administrator can change the data shown in the graph at any time. The user interface may further include a zoom button 706 that permits the teacher to change the number of students whose data is displayed on the graph. For example, there may be 18 students in the class shown in FIG. 28, but the graph shown in FIG. 28 only shows the test results for six students. By clicking on the Zoom Out button, the teacher can see all the students' test results but details for each student will be missing so that the data can fit into the graph and the button will change to Zoom In. When the Zoom In button is clicked, the default six students' test results will be shown on the screen and the button will change back to Zoom Out. The purpose of having the zoom out function is to provide a general picture of the class performance.
  • The user interface may further include a [0148] back button 708 that permits other student scores to be displayed in the graph while retaining all of the data about each student. In other words, the graph defaults to showing a predetermined number of students, such as six, and the back button permits the teacher to browse through the detailed scores of the entire class by viewing a predetermined number of students at a time. For example, if the six students shown on the screen are the first six in the class, then this button will be inactive since there are no prior students. However, when the six students on the screen are not the first six, clicking on this button will show the previous six students' full test results. The user interface may also include a forward button 710 that permits the teacher to see the full scores for the next predetermined number of students. Thus, if the students shown on the screen are the last six in the class, then this button will be inactive. When the six students on the screen are not the last six, clicking on this button will show the next six students' test results. Using the back and forward buttons, the teacher is able to browse through the full test results for the entire class.
  • The user interface may further include a [0149] graph display button 712. When the teacher/administrator clicks on the button, a drop-down menu 750 with small graphs will be shown (as shown in FIG. 30) for the teacher to choose a data display she prefers. For example, as shown in FIG. 30, the menu permits the teacher to choose the data display of one test (pre, or post 1, or post 2), or two tests (pre and post 1, or pre and post 2, or post 1 and post 2), or three tests together (pre, post 1, and post 2). As with the other elements of the user interface, this menu is dynamic in that it changes depending on the actual test data. For example, if students take the same test more than three times, there will be more combinations. For example, if there are four tests (one pre-test and three post-tests), then there will be 15 different data graph choices including: pre, post 1, post 2, post 3, pre & post 1, pre & post 2, pre & post 3, post 1 & post 2, post 1 & post 3, post 2 & post 3, pre & post 1 & post 2, pre & post 1 & post 3, pre & post 2 & post 3, post 1 & post 2 & post 3, and pre & post 1 & post 2 & post 3. Using this data view module, a teacher or administrator can view any test data in the system in a variety of formats. Since the data view module is dynamic and tracks the information in the system, it is able to provide the user with constantly updated information.
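The number of display choices grows as the non-empty combinations of the administered tests, so one pre-test and n post-tests yield 2^(n+1) - 1 choices (15 for four tests). The menu contents can therefore be generated rather than hard-coded, as sketched below; the function name and list layout are illustrative assumptions.

    from itertools import combinations

    # Sketch of generating the data-graph display choices: every non-empty
    # combination of the tests that have actually been administered.
    def display_choices(test_names):
        choices = []
        for size in range(1, len(test_names) + 1):
            choices.extend(combinations(test_names, size))
        return choices

    menu = display_choices(["pre", "post 1", "post 2", "post 3"])
    print(len(menu))  # 15 choices for one pre-test and three post-tests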
  • The user interface may further permit the user to select a data report print choice. In a preferred embodiment, there may be three different print choices. The first print choice is a "Print All" choice in which reports for all the students or subtests of the graph or table on the screen are printed. A second print choice is the "Print Current Student/Subtest" choice in which the report for the student/subtest currently on the screen is printed. The third print choice is a "Print . . . " choice in which the user is allowed to select the reports for certain students or subtests of the graph or table on the screen that the user would like to print. [0150]
  • While the foregoing has been with reference to a particular embodiment of the invention, it will be appreciated by those skilled in the art that changes in this embodiment may be made without departing from the principles and spirit of the invention, the scope of which is defined by the appended claims. [0151]

Claims (481)

1. A computer implemented system for testing one or more skills associated with the reading skills of an individual, comprising:
a portable media disk storing instructions for one or more tests for determining deficiencies in one or more reading and pre-reading skills and for a scorer for determining a score for each test;
a teacher station into which the portable media is inserted wherein the teacher station executes the instructions on the portable media to test one or more skills; and
a student computer comprising means for displaying at least one of a graphical image and audio associated with each test based on the instructions on the portable media, means for receiving a user response to one of the graphical images and audio presented by each test and means for communicating the responses for each test back to the teacher station.
2. The system of claim 1, wherein the teacher station further comprises a recommender for recommending, based on the scores of the one or more tests, one or more training modules for improving a reading or pre-reading skill of the individual as indicated by the score of the tests.
3. The system of claim 1, wherein the teacher station further comprises a questionnaire having one or more questions for eliciting information about risk factors associated with language-based learning disabilities.
4. The system of claim 3, wherein the information comprises historical data about reading-related risk factors including one or more of medical conditions including chronic otitis media, family history data including history of dyslexia, environmental data including socioeconomic status and exposure to literacy at home and observational data about an individual's behaviors reflecting competencies in speech sound awareness.
5. The system of claim 1, wherein the user input device of the one or more client computers comprise a speech recognition device for receiving a verbal response from the user to the one or more tests.
6. The system of claim 1, wherein the one or more tests comprise a rhyme recognition test for testing the ability to recognize rhymes, a rhyme generation test for testing the ability to generate rhymes, a beginning and ending sound recognizer for testing the ability to recognize the beginning and ending sounds of a word, a word decoder test for testing the ability to read by sounding out a written word, a sound blender test for testing the ability to blend sound units together to form words, a sound segmenting test for testing the ability to segment a sound unit into smaller sound units, a sound manipulator test for testing the ability to manipulate sound units to form a new unit, a sequential verbal recall test for testing the ability to recall a sequence of spoken items, a rapid naming test for testing the ability to rapidly name one or more items, a letter naming and symbol/sound association test for testing the ability to name letters and identify the association between a symbol and an associated sound, and a fluent reader test for testing the ability to read fluently.
7. The system of claim 1, wherein the tests further comprise a rhyme recognition test further comprising means for providing at least two stimuli to the user and means for receiving user input in response to the at least two stimuli to determine the user's ability to recognize rhyming words.
8. The system of claim 1, wherein the tests further comprise a test for recognizing the beginning sound of a stimulus, the test comprising means for generating at least one stimulus having at least an initial phoneme and means for receiving a response to the stimulus that indicates an ability of the test taker to recognize the initial phoneme of the stimulus.
9. The system of claim 1, wherein the tests further comprise a test for recognizing the ending sound of a stimulus, the test comprising means for generating at least one stimulus having at least an ending phoneme and means for receiving a response to the stimulus that indicates an ability of the test taker to recognize the ending phoneme of the stimulus.
10. The system of claim 1, wherein the tests further comprise a rhyme generation test comprising means for generating a stimulus and means for receiving a response from the user identifying a sound unit that rhymes with the stimulus.
11. The system of claim 1, wherein the tests further comprise a sound blender test comprising means for generating at least two sound stimuli and means for receiving a user response to the at least two sound stimuli, the response indicating an ability to blend the at least two sound stimuli into a larger sound unit.
12. The system of claim 1, wherein the tests further comprise a sound segmentation test comprising means for generating at least one stimulus and means for receiving a response to the stimulus comprising means for segmenting the stimulus into smaller units in order to test the ability to segment the stimulus into smaller units.
13. The system of claim 1, wherein the tests comprise a sound manipulation test comprising means for generating a sound stimulus having one or more sound units and means, in response to the sound stimulus, for manipulating the sound units of the sound stimulus to test the ability to manipulate sound units.
14. The system of claim 1, wherein the tests further comprise a verbal recall test comprising means for generating at least one sound stimulus and means, in response to the at least one sound stimulus, for receiving a user response indicating the recalling of the at least one sound stimulus.
15. The system of claim 5 further comprising means for speaking a verbal response into the speech recognition device for receiving a verbal response from the user.
16. The system of claim 1, wherein the tests further comprise a naming test comprising means for generating at least one visual stimulus and means, in response to the display of the visual stimulus, for speaking the name of or the sound associated with the visual stimulus using the speech recognition device.
17. The system of claim 1, wherein the tests further comprise a word decoder test comprising means for displaying a visual stimulus to the user and means, in response to the visual stimulus, for receiving a response from the user to determine the ability to read the visual stimulus.
18. The system of claim 1, wherein the tests further comprise a fluency test comprising means for generating a plurality of visual stimuli and means for receiving a user's response to the visual stimuli within a predetermined time interval to determine the user's ability to read and understand the visual stimuli.
19. The system of claim 1 wherein the instructions on the portable media further comprises means for motivating the user to complete the tests.
20. The system of claim 19, wherein the motivation means further comprises means for generating a graphical image and an associated sound to motivate the user to complete the tests.
21. The system of claim 20, wherein the motivation means further comprises means for generating the graphical image and associated sound after a first predetermined number of tests are completed and means for generating another graphical image and associated sound after a second predetermined number of tests are completed.
22. The system of claim 21, wherein the generating means further comprises means for generating a graphical image indicating the number of tests remaining to be completed.
23. The system of claim 21, wherein the motivation means further comprises means for generating the graphical image and associated sound after a third predetermined number of tests.
24. The system of claim 2, wherein the recommender further comprises means for downloading the recommended training module from the teacher station to the student computer.
25. The system of claim 2, wherein the recommender further comprises means for storing the incorrect responses to the one or more tests and means for generating a training module recommendation based on the incorrect responses.
26. The system of claim 25, wherein the recommender further comprises means for comparing each incorrect response to one or more error measures to identify an error associated with each incorrect response and means for generating a training module recommendation based on the identified error.
27. The system of claim 26, wherein the comparing means further comprises means for identifying one or more errors for each incorrect response.
28. The system of claim 26, wherein the recommender further comprises means for identifying a deficient skill by comparing the identified error to a deficient skill rule and means for generating a training module recommendation based on the identified deficient skill.
29. The system of claim 1, wherein the teacher station further comprises means for dynamically generating one or more data reports that illustrate the data associated with the one or more tests.
30. The system of claim 29, wherein the data reports further comprises means for displaying the test results simultaneously for one or more students.
31. The system of claim 30, wherein the displaying means further comprises means for displaying the percentage of correct responses for a test.
32. The system of claim 30, wherein the displaying means further comprises means for displaying the results for one or more different tests for each user wherein the results for each test are displayed in a different color.
33. The system of claim 29, wherein the data report generator further comprises a user interface for browsing other test data for a user.
34. The system of claim 29, wherein the data report generator further comprises means for determining the number of user test results shown.
35. The system of claim 29, wherein the data report generator further comprises means for permitting the user to select a data report print format.
36. The system of claim 29, wherein the data report generator further comprises means for permitting the user to select a data report display format.
37. The system of claim 29, wherein the data report generator further comprises means for generating a data report for one or more students in a class, means for generating a data report for one or more classes each having one or more students and means for generating a data report for a school having one or more classes.
38. The system of claim 1, wherein the teacher station further comprises means for communicating the response for each test for each student back to the server computer.
39. The system of claim 38, wherein the teacher station further comprises means for detecting a break in the communication between the teacher station and the server computer and means for resending any test data that was not sent due to the communications break.
40. The system of claim 1, wherein each student computer further comprises means for connecting to the teacher station and means for downloading the resources necessary to execute the current test when the test is started.
41. The system of claim 1, wherein the teacher station further comprises means for generating a classroom layout showing an icon for each student computer.
42. The system of claim 41, wherein the teacher station further comprises means for monitoring each student's test progress and controlling each student computer.
43. The system of claim 41, wherein the teacher station further comprises means for collecting student test data.
44. The system of claim 41, wherein generating the layout further comprises means for coloring each icon depending on the state of testing for the particular student computer.
45. The system of claim 1, wherein the teacher station further comprises means for generating one or more separate accounts for the diagnostic system, wherein the accounts include a lead teacher for managing the use of the diagnostic system by one or more classroom teachers in a particular school and one or more classroom teachers who each administer the diagnostic testing for a particular class of students.
46. The system of claim 45, wherein the teacher station further comprises means for each lead teacher to register one or more classroom teachers who administer the test and means for each classroom teacher to register one or more students who are taking the test.
47. The system of claim 46, wherein the lead teacher has access to testing data for the entire school and each classroom teacher has access to testing data for the students in the class of the classroom teacher.
48. A portable media product having one or more instructions for implementing a computer implemented system for testing one or more skills associated with the reading skills of an individual, the portable media product comprising:
instructions for one or more tests for determining deficiencies in one or more reading and pre-reading skills;
instructions for determining a score for each test;
instructions that cause a student computer to display at least one of a graphical image and audio associated with each test;
instructions that receive a user response to one of the graphical images and audio presented by each test; and
instructions that communicate the test data to a computer that is executing the instructions of the portable media product.
49. The portable media product of claim 48 further comprising instructions for recommending, based on the scores of the one or more tests, one or more training modules for improving a reading or pre-reading skill of the individual as indicated by the score of the tests.
50. The portable media product of claim 48 further comprising instructions for generating a questionnaire having one or more questions for eliciting information about risk factors associated with language-based learning disabilities.
51. The portable media product of claim 50, wherein the information comprises historical data about reading-related risk factors including one or more of medical conditions including chronic otitis media, family history data including history of dyslexia, environmental data including socioeconomic status and exposure to literacy at home and observational data about an individual's behaviors reflecting competencies in speech sound awareness.
52. The portable media product of claim 48 further comprising instructions for receiving a verbal response from the user to the one or more tests using speech recognition.
53. The portable media product of claim 48 further comprising instructions to generate one or more tests comprising a rhyme recognition test for testing the ability to recognize rhymes, a rhyme generation test for testing the ability to generate rhymes, a beginning and ending sound recognizer for testing the ability to recognize the beginning and ending sounds of a word, a word decoder test for testing the ability to read by sounding out a written word, a sound blender test for testing the ability to blend sound units together to form words, a sound segmenting test for testing the ability to segment a sound unit into smaller sound units, a sound manipulator test for testing the ability to manipulate sound units to form a new unit, a sequential verbal recall test for testing the ability to recall a sequence of spoken items, a rapid naming test for testing the ability to rapidly name one or more items, a letter naming and symbol/sound association test for testing the ability to name letters and identify the association between a symbol and an associated sound, and a fluent reader test for testing the ability to read fluently.
54. The portable media product of claim 48, wherein the tests further comprise a rhyme recognition test further comprising instructions for providing at least two stimuli to the user and instructions for receiving user input in response to the at least two stimuli to determine the user's ability to recognize rhyming words.
55. The portable media product of claim 48, wherein the tests further comprise a test for recognizing the beginning sound of a stimulus, the test comprising instructions for generating at least one stimulus having at least an initial phoneme and instructions for receiving a response to the stimulus that indicates an ability of the test taker to recognize the initial phoneme of the stimulus.
56. The portable media product of claim 48, wherein the tests further comprise a test for recognizing the ending sound of a stimulus, the test comprising instructions for generating at least one stimulus having at least an ending phoneme and instructions for receiving a response to the stimulus that indicates an ability of the test taker to recognize the ending phoneme of the stimulus.
57. The portable media product of claim 48, wherein the tests further comprise a rhyme generation test comprising instructions for generating a stimulus and instructions for receiving a response from the user identifying a sound unit that rhymes with the stimulus.
58. The portable media product of claim 48, wherein the tests further comprise a sound blender test comprising instructions for generating at least two sound stimuli and instructions for receiving a user response to the at least two sound stimuli, the response indicating an ability to blend the at least two sound stimuli into a larger sound unit.
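A minimal sketch of how a sound-blender item of the kind recited in claim 58 could be represented and scored is shown below; the item content, field names, and scoring rule are hypothetical illustrations, not the claimed implementation.

```python
from dataclasses import dataclass

@dataclass
class BlendingItem:
    """One sound-blending item: segments played aloud plus answer choices."""
    segments: list   # e.g. ["c", "a", "t"], presented with pauses between them
    choices: list    # labels of the pictures shown to the test taker
    target: str      # the word formed when the segments are blended

def score_blending_response(item: BlendingItem, selected: str) -> bool:
    """A response is correct when the selected choice is the blended target word."""
    return selected == item.target

# Hypothetical usage:
item = BlendingItem(segments=["c", "a", "t"], choices=["cat", "cap", "can"], target="cat")
assert score_blending_response(item, "cat")
```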
59. The portable media product of claim 48, wherein the tests further comprise a sound segmentation test comprising instructions for generating at least one stimulus and instructions for receiving a response to the stimulus comprising instructions for segmenting the stimulus into smaller units in order to test the ability to segment the stimulus into smaller units.
60. The portable media product of claim 48, wherein the tests comprise a sound manipulation test comprising instructions for generating a sound stimulus having one or more sound units and instructions, in response to the sound stimulus, for manipulating the sound units of the sound stimulus to test the ability to manipulate sound units.
61. The portable media product of claim 48, wherein the tests further comprise a verbal recall test comprising instructions for generating at least one sound stimulus and instructions, in response to the at least one sound stimulus, for receiving a user response indicating the recalling of the at least one sound stimulus.
62. The portable media product of claim 52 further comprising instructions for receiving a verbal response spoken by the user into the speech recognition device.
63. The portable media product of claim 48, wherein the tests further comprise a naming test comprising instructions for generating at least one visual stimulus and instructions, in response to the display of the visual stimulus, for speaking the name of or the sound associated with the visual stimulus using the speech recognition device.
64. The portable media product of claim 48, wherein the tests further comprise a word decoder test comprising instructions for displaying a visual stimulus to the user and instructions, in response to the visual stimulus, for receiving a response from the user to determine the ability to read the visual stimulus.
65. The portable media product of claim 48, wherein the tests further comprise a fluency test comprising instructions for generating a plurality of visual stimuli and instructions for receiving a user's response to the visual stimuli within a predetermined time interval to determine the user's ability to read and understand the visual stimuli.
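Purely as an illustrative sketch, and not the claimed implementation, accepting a response only within a predetermined time interval, as in the fluency test of claim 65, might be handled as follows; the callables and time limit are supplied by the caller and are hypothetical.

```python
import time

def timed_response(show_stimulus, read_response, time_limit_s: float):
    """Present a visual stimulus and accept a response only within the limit.

    show_stimulus() displays the stimulus; read_response() blocks until the
    user answers.  Returns (response, elapsed) or (None, elapsed) when the
    predetermined time interval is exceeded.
    """
    show_stimulus()
    start = time.monotonic()
    response = read_response()
    elapsed = time.monotonic() - start
    if elapsed > time_limit_s:
        return None, elapsed   # outside the interval: not counted as fluent
    return response, elapsed
```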
66. The portable media product of claim 48 further comprising instructions for motivating the user to complete the tests.
67. The portable media product of claim 66, wherein the motivation instructions further comprise instructions for generating a graphical image and an associated sound to motivate the user to complete the tests.
68. The portable media product of claim 67, wherein the motivation instructions further comprise instructions for generating the graphical image and associated sound after a first predetermined number of tests are completed and instructions for generating another graphical image and associated sound after a second predetermined number of tests are completed.
69. The portable media product of claim 68, wherein the generating instructions further comprise instructions for generating a graphical image indicating the number of tests remaining to be completed.
70. The portable media product of claim 68, wherein the motivation instructions further comprise instructions for generating the graphical image and associated sound after a third predetermined number of tests.
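One possible, purely hypothetical, way to trigger the motivating image and sound after the first, second, and third predetermined numbers of completed tests, as recited in claims 67-70, is a simple milestone table; the counts and file names below are placeholders.

```python
# Placeholder milestone schedule: completed-test counts mapped to reward assets.
REWARD_MILESTONES = {3: "badge_bronze.png", 6: "badge_silver.png", 9: "badge_gold.png"}

def maybe_reward(tests_completed: int, total_tests: int, show_image, play_sound):
    """Show a reward image and play its sound at each milestone; return tests left."""
    if tests_completed in REWARD_MILESTONES:
        image = REWARD_MILESTONES[tests_completed]
        show_image(image)                          # graphical image
        play_sound(image.replace(".png", ".wav"))  # associated sound
    return total_tests - tests_completed           # tests remaining to be completed
```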
71. The portable media product of claim 49, wherein the recommending instructions further comprise instructions for downloading the recommended training module from the teacher station to the student computer.
72. The portable media product of claim 49, wherein the recommender further comprises instructions for storing the incorrect responses to the one or more tests and instructions for generating a training module recommendation based on the incorrect responses.
73. The portable media product of claim 72, wherein the recommender further comprises instructions for comparing each incorrect response to one or more error measures to identify an error associated with each incorrect response and instructions for generating a training module recommendation based on the identified error.
74. The portable media product of claim 73, wherein the comparing instructions further comprise instructions for identifying one or more errors for each incorrect response.
75. The portable media product of claim 73, wherein the recommender further comprises instructions for identifying a deficient skill by comparing the identified error to a deficient skill rule and instructions for generating a training module recommendation based on the identified deficient skill.
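As a sketch of the recommendation logic recited in claims 72-75 (stored incorrect responses compared against error measures, then against deficient-skill rules, yielding a training-module recommendation), the tables, rule names, and module names below are invented for illustration and are not part of the claims.

```python
# Hypothetical error measures: each maps (expected, given) to True when the
# incorrect response exhibits that error pattern.
ERROR_MEASURES = {
    "final_consonant_dropped": lambda expected, given: given == expected[:-1],
    "vowel_substituted": lambda expected, given: (
        len(given) == len(expected)
        and given[0] == expected[0]
        and given[-1] == expected[-1]
        and given != expected
    ),
}

# Hypothetical deficient-skill rules and training-module catalogue.
DEFICIENT_SKILL_RULES = {
    "final_consonant_dropped": "ending-sound awareness",
    "vowel_substituted": "vowel discrimination",
}
TRAINING_MODULES = {
    "ending-sound awareness": "Ending Sounds module",
    "vowel discrimination": "Vowel Sounds module",
}

def recommend(incorrect_responses):
    """Map stored incorrect responses -> identified errors -> deficient skills
    -> recommended training modules."""
    modules = set()
    for expected, given in incorrect_responses:
        for error, matches in ERROR_MEASURES.items():
            if matches(expected, given):
                modules.add(TRAINING_MODULES[DEFICIENT_SKILL_RULES[error]])
    return sorted(modules)

# Hypothetical usage: the student answered "ca" and "cot" when "cat" was expected.
print(recommend([("cat", "ca"), ("cat", "cot")]))
```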
76. The portable media product of claim 48, wherein the teacher station further comprises instructions for dynamically generating one or more data reports that illustrate the data associated with the one or more tests.
77. The portable media product of claim 76, wherein the data reports further comprise instructions for displaying the test results simultaneously for one or more students.
78. The portable media product of claim 77, wherein the displaying instructions further comprise instructions for displaying the percentage of correct responses for a test.
79. The portable media product of claim 77, wherein the displaying instructions further comprise instructions for displaying the results for one or more different tests for each user, wherein the results for each test are displayed in a different color.
80. The portable media product of claim 76, wherein the data report generator further comprises instructions that generate a user interface for browsing other test data for a user.
81. The portable media product of claim 76, wherein the data report generator further comprises instructions for determining the number of user test results shown.
82. The portable media product of claim 76, wherein the data report generator further comprises instructions for permitting the user to select a data report print format.
83. The portable media product of claim 76, wherein the data report generator further comprises instructions for permitting the user to select a data report display format.
84. The portable media product of claim 76, wherein the data report generator further comprises instructions for generating a data report for one or more students in a class, instructions for generating a data report for one or more classes each having one or more students and instructions for generating a data report for a school having one or more classes.
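A minimal sketch, under hypothetical data shapes, of the student/class/school report roll-up described in claim 84 is given below; the dictionary layouts are assumptions made only for illustration.

```python
from statistics import mean

def percent_correct(responses):
    """Percentage of correct responses for one student on one test."""
    return 100.0 * sum(1 for r in responses if r["correct"]) / len(responses)

def class_report(class_results):
    """class_results: {student: {test: [response dicts]}} -> per-test class averages."""
    per_test = {}
    for by_test in class_results.values():
        for test, responses in by_test.items():
            per_test.setdefault(test, []).append(percent_correct(responses))
    return {test: mean(scores) for test, scores in per_test.items()}

def school_report(school_results):
    """school_results: {class name: class_results} -> per-class report."""
    return {name: class_report(results) for name, results in school_results.items()}
```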
85. The portable media product of claim 48 further comprising instructions for communicating the response for each test for each student back to the server computer.
86. The portable media product of claim 85 further comprising instructions for detecting a break in the communication between the teacher station and the server computer and instructions for resending any test data that was not sent due to the communications break.
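Detecting a communications break and resending unsent test data, as recited in claim 86, could be approximated by a store-and-forward queue such as the following sketch; the endpoint URL and record format are hypothetical, and only standard-library calls are used.

```python
import json
import urllib.request

class ResultUploader:
    """Queue test results locally and resend anything not yet acknowledged."""

    def __init__(self, server_url: str):
        self.server_url = server_url   # hypothetical results endpoint
        self.pending = []              # results not yet received by the server

    def submit(self, result: dict) -> None:
        self.pending.append(result)
        self.flush()

    def flush(self) -> None:
        still_pending = []
        for result in self.pending:
            try:
                request = urllib.request.Request(
                    self.server_url,
                    data=json.dumps(result).encode("utf-8"),
                    headers={"Content-Type": "application/json"},
                )
                urllib.request.urlopen(request, timeout=5)
            except OSError:
                # Communications break detected: keep the result for a later resend.
                still_pending.append(result)
        self.pending = still_pending
```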
87. The portable media product of claim 48, wherein each student computer further comprises instructions for connecting to the teacher station and instructions for downloading the resources necessary to execute the current test when the test is started.
88. The portable media product of claim 48, wherein the teacher station further comprises instructions for generating a classroom layout showing an icon for each student computer.
89. The portable media product of claim 88, wherein the teacher station further comprises instructions for monitoring each student's test progress and controlling each student computer.
90. The portable media product of claim 88, wherein the teacher station further comprises instructions for collecting student test data.
91. The portable media product of claim 88, wherein generating the layout further comprises instructions for coloring each icon depending on the state of testing for the particular student computer.
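Coloring each student-computer icon according to its testing state, as in claim 91, reduces to a small lookup table in practice; the states and colors below are placeholders chosen only for illustration.

```python
# Placeholder testing states mapped to icon colors for the classroom layout.
STATE_COLORS = {
    "not_started": "gray",
    "in_progress": "yellow",
    "completed": "green",
    "needs_attention": "red",
}

def layout_icons(stations):
    """stations: {computer id: testing state} -> {computer id: icon color}."""
    return {cid: STATE_COLORS.get(state, "gray") for cid, state in stations.items()}
```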
92. The portable media product of claim 48, wherein the teacher station further comprises instructions for generating one or more separate accounts for the diagnostic system, wherein the accounts include a lead teacher for managing the use of the diagnostic system by one or more classroom teachers in a particular school and one or more classroom teachers who each administer the diagnostic testing for a particular class of students.
93. The portable media product of claim 92, wherein the teacher station further comprises instructions for each lead teacher to register one or more classroom teachers who administer the test and instructions for each classroom teacher to register one or more students who are taking the test.
94. The portable media product of claim 93, wherein the lead teacher has access to testing data for the entire school and each classroom teacher has access to testing data for the students in the class of the classroom teacher.
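The access scoping recited in claims 92-94 (a lead teacher sees testing data for the entire school, a classroom teacher sees only the data for that teacher's class) could be sketched as a filter over test records; the record and account shapes below are assumptions made for illustration.

```python
def visible_results(account, all_results):
    """Return only the test records the account is permitted to see.

    all_results: list of dicts like
      {"school": ..., "class": ..., "student": ..., "score": ...}
    account: {"role": "lead_teacher" or "classroom_teacher",
              "school": ..., "class": ...}
    """
    if account["role"] == "lead_teacher":
        return [r for r in all_results if r["school"] == account["school"]]
    if account["role"] == "classroom_teacher":
        return [r for r in all_results
                if r["school"] == account["school"] and r["class"] == account["class"]]
    return []
```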
95. A system for testing one or more skills associated with the reading skills of an individual, comprising:
a server computer comprising one or more tests for determining deficiencies in one or more reading and pre-reading skills, a scorer for determining a score for each test; and
one or more client computers that establish a communications session with the server computer to download the one or more tests from the server computer, each client computer comprising means for displaying at least one of a graphical image and audio associated with each test located on the server, means for receiving a user response to one of the graphical images and audio presented by each test, means for communicating the responses for each test back to the server computer, and means for motivating the user to complete the tests.
96. The system of claim 95, wherein the server computer further comprises a recommender for recommending, based on the scores of the one or more tests, one or more training modules for improving a reading or pre-reading skill of the individual as indicated by the score of the tests.
97. The system of claim 95, wherein the server further comprises a questionnaire having one or more questions for eliciting information about risk factors associated with language-based learning disabilities.
98. The system of claim 97, wherein the information comprises historical data about reading-related risk factors including one or more of medical conditions including chronic otitis media, family history data including history of dyslexia, environmental data including socioeconomic status and exposure to literacy at home, and observational data about an individual's behaviors reflecting competencies in speech sound awareness.
99. The system of claim 95, wherein the user input device of the one or more client computers comprises a speech recognition device for receiving a verbal response from the user to the one or more tests.
100. The system of claim 95, wherein the one or more tests comprise a rhyme recognition test for testing the ability to recognize rhymes, a rhyme generation test for testing the ability to generate rhymes, a beginning and ending sound recognizer for testing the ability to recognize the beginning and ending sounds of a word, a word decoder test for testing the ability to read by sounding out a written word, a sound blender test for testing the ability to blend sound units together to form words, a sound segmenting test for testing the ability to segment a sound unit into smaller sound units, a sound manipulator test for testing the ability to manipulate sound units to form a new unit, a sequential verbal recall test for testing the ability to recall a sequence of spoken items, a rapid naming test for testing the ability to rapidly name one or more items, a letter naming and symbol/sound association test for testing the ability to name letters and identify the association between a symbol and an associated sound, and a fluent reader test for testing the ability to read fluently.
101. The system of claim 95, wherein the tests further comprise a rhyme recognition test further comprising means for providing at least two stimuli to the user and means for receiving user input in response to the at least two stimuli to determine the user's ability to recognize rhyming words.
102. The system of claim 95, wherein the tests further comprise a test for recognizing the beginning sound of a stimulus, the test comprising means for generating at least one stimulus having at least an initial phoneme and means for receiving a response to the stimulus that indicates an ability of the test taker to recognize the initial phoneme of the stimulus.
103. The system of claim 95, wherein the tests further comprise a test for recognizing the ending sound of a stimulus, the test comprising means for generating at least one stimulus having at least an ending phoneme and means for receiving a response to the stimulus that indicates an ability of the test taker to recognize the ending phoneme of the stimulus.
104. The system of claim 95, wherein the tests further comprise a rhyme generation test comprising means for generating a stimulus and means for receiving a response from the user identifying a sound unit that rhymes with the stimulus.
105. The system of claim 95, wherein the tests further comprise a sound blender test comprising means for generating at least two sound stimuli and means for receiving a user response to the at least two sound stimuli, the response indicating an ability to blend the at least two sound stimuli into a larger sound unit.
106. The system of claim 95, wherein the tests further comprise a sound segmentation test comprising means for generating at least one stimulus and means for receiving a response to the stimulus comprising means for segmenting the stimulus into smaller units in order to test the ability to segment the stimulus into smaller units.
107. The system of claim 95, wherein the tests comprise a sound manipulation test comprising means for generating a sound stimulus having one or more sound units and means, in response to the sound stimulus, for manipulating the sound units of the sound stimulus to test the ability to manipulate sound units.
108. The system of claim 95, wherein the tests further comprise a verbal recall test comprising means for generating at least one sound stimulus and means, in response to the at least one sound stimulus, for receiving a user response indicating the recalling of the at least one sound stimulus.
109. The system of claim 99 further comprising means for speaking a verbal response into the speech recognition device for receiving a verbal response from the user.
110. The system of claim 95, wherein the tests further comprise a naming test comprising means for generating at least one visual stimulus and means, in response to the display of the visual stimulus, for speaking the name of or the sound associated with the visual stimulus using the speech recognition device.
111. The system of claim 95, wherein the tests further comprise a word decoder test comprising means for displaying a visual stimulus to the user and means, in response to the visual stimulus, for receiving a response from the user to determine the ability to read the visual stimulus.
112. The system of claim 95, wherein the tests further comprise a fluency test comprising means for generating a plurality of visual stimuli and means for receiving a user's response to the visual stimuli within a predetermined time interval to determine the user's ability to read and understand the visual stimuli.
113. The system of claim 95, wherein the motivation means further comprises means for generating a graphical image and an associated sound to motivate the user to complete the tests.
114. The system of claim 113, wherein the motivation means further comprises means for generating the graphical image and associated sound after a first predetermined number of tests are completed and means for generating another graphical image and associated sound after a second predetermined number of tests are completed.
115. The system of claim 114, wherein the generating means further comprises means for generating a graphical image indicating the number of tests remaining to be completed.
116. The system of claim 114, wherein the motivation means further comprises means for generating the graphical image and associated sound after a third predetermined number of tests.
117. The system of claim 96, wherein the recommender further comprises means for downloading the recommended training module to the client computer.
118. The system of claim 96, wherein the recommender further comprises means for storing the incorrect responses to the one or more tests and means for generating a training module recommendation based on the incorrect responses.
119. The system of claim 118, wherein the recommender further comprises means for comparing each incorrect response to one or more error measures to identify an error associated with each incorrect response and means for generating a training module recommendation based on the identified error.
120. The system of claim 119, wherein the comparing means further comprises means for identifying one or more errors for each incorrect response.
121. The system of claim 119, wherein the recommender further comprises means for identifying a deficient skill by comparing the identified error to a deficient skill rule and means for generating a training module recommendation based on the identified deficient skill.
122. The system of claim 95, wherein the server further comprises means for dynamically generating one or more data reports that illustrate the data associated with the one or more tests.
123. The system of claim 122, wherein the data reports further comprise means for displaying the test results simultaneously for one or more students.
124. The system of claim 123, wherein the displaying means further comprises means for displaying the percentage of correct responses for a test.
125. The system of claim 123, wherein the displaying means further comprises means for displaying the results for one or more different tests for each user wherein the results for each test are displayed in a different color.
126. The system of claim 122, wherein the data report generator further comprises a user interface for browsing other test data for a user.
127. The system of claim 122, wherein the data report generator further comprises means for determining the number of user test results shown.
128. The system of claim 122, wherein the data report generator further comprises means for permitting the user to select a data report print format.
129. The system of claim 122, wherein the data report generator further comprises means for permitting the user to select a data report display format.
130. The system of claim 122, wherein the data report generator further comprises means for generating a data report for one or more students in a class, means for generating a data report for one or more classes each having one or more students and means for generating a data report for a school having one or more classes.
131. The system of claim 95, wherein the client computer further comprises a teacher station that downloads the tests from the server and one or more student computers connected to the teacher station by a network, each student computer further comprising means for displaying at least one of a graphical image and audio associated with each test located on the server, means for receiving a user response to one of the graphical images and audio presented by each test and means for communicating the responses for each test back to the teacher station.
132. The system of claim 131, wherein the teacher station further comprises means for communicating the response for each test for each student back to the server computer.
133. The system of claim 132, wherein the teacher station further comprises means for detecting a break in the communication between the teacher station and the server computer and means for resending any test data that was not sent due to the communications break.
134. The system of claim 131, wherein each student computer further comprises means for connecting to the server computer and means for downloading the resources necessary to execute the current test when the test is started.
135. The system of claim 131, wherein the teacher station further comprises means for generating a classroom layout showing an icon for each student computer.
136. The system of claim 135, wherein the teacher station further comprises means for monitoring each student's test progress and controlling each student computer.
137. The system of claim 135, wherein the teacher station further comprises means for collecting student test data.
138. The system of claim 135, wherein generating the layout further comprises means for coloring each icon depending on the state of testing for the particular student computer.
139. The system of claim 131, wherein the teacher station further comprises means for generating one or more separate accounts for the diagnostic system, wherein the accounts include an account manager for managing an entire school or district of users of the diagnostic system, a lead teacher for managing the use of the diagnostic system by one or more classroom teachers in a particular school and one or more classroom teachers who each administer the diagnostic testing for a particular class of students.
140. The system of claim 139, wherein the teacher station further comprises means for the account manager to register one or more lead teachers and means for each lead teacher to register one or more classroom teachers who administer the test.
141. The system of claim 140, wherein the account manager has access to testing data for the entire district, wherein the lead teacher has access to testing data for the entire school and each classroom teacher has access to testing data for the class of the classroom teacher.
142. A method for testing one or more skills associated with the reading skills of an individual, comprising:
storing instructions corresponding to one or more tests on a server for determining deficiencies in one or more reading and pre-reading skills and a scorer for determining a score for each test;
downloading the one or more tests from the server computer by a client computer;
displaying at least one of a graphical image and audio associated with each test located on the server at the client computer;
receiving a user response to one of the graphical images and audio presented by each test at the client computer;
communicating the responses for each test back to the server computer; and
motivating the user to complete the tests.
143. The method of claim 142 further comprising recommending at the server, based on the scores of the one or more tests, one or more training modules for improving a reading or pre-reading skill of the individual as indicated by the score of the tests.
144. The method of claim 142 further comprising generating a questionnaire having one or more questions for eliciting information about risk factors associated with language-based learning disabilities.
145. The method of claim 144, wherein the information comprises historical data about reading-related risk factors including one or more of medical conditions including chronic otitis media, family history data including history of dyslexia, environmental data including socioeconomic status and exposure to literacy at home, and observational data about an individual's behaviors reflecting competencies in speech sound awareness.
146. The method of claim 142 further comprising receiving a verbal response from a user to one or more tests using a speech recognition device.
147. The method of claim 142, wherein the one or more tests comprise a rhyme recognition test for testing the ability to recognize rhymes, a rhyme generation test for testing the ability to generate rhymes, a beginning and ending sound recognizer for testing the ability to recognize the beginning and ending sounds of a word, a word decoder test for testing the ability to read by sounding out a written word, a sound blender test for testing the ability to blend sound units together to form words, a sound segmenting test for testing the ability to segment a sound unit into smaller sound units, a sound manipulator test for testing the ability to manipulate sound units to form a new unit, a sequential verbal recall test for testing the ability to recall a sequence of spoken items, a rapid naming test for testing the ability to rapidly name one or more items, a letter naming and symbol/sound association test for testing the ability to name letters and identify the association between a symbol and an associated sound, and a fluent reader test for testing the ability to read fluently.
148. The method of claim 142, wherein the tests further comprise a rhyme recognition test further comprising providing at least two stimuli to the user and receiving user input in response to the at least two stimuli to determine the user's ability to recognize rhyming words.
149. The method of claim 142, wherein the tests further comprise a test for recognizing the beginning sound of a stimulus, the test comprising generating at least one stimulus having at least an initial phoneme and receiving a response to the stimulus that indicates an ability of the test taker to recognize the initial phoneme of the stimulus.
150. The method of claim 142, wherein the tests further comprise a test for recognizing the ending sound of a stimulus, the test comprising generating at least one stimulus having at least an ending phoneme and receiving a response to the stimulus that indicates an ability of the test taker to recognize the ending phoneme of the stimulus.
151. The method of claim 142, wherein the tests further comprise a rhyme generation test comprising generating a stimulus and receiving a response from the user identifying a sound unit that rhymes with the stimulus.
152. The method of claim 142, wherein the tests further comprise a sound blender test comprising generating at least two sound stimuli and receiving a user response to the at least two sound stimuli, the response indicating an ability to blend the at least two sound stimuli into a larger sound unit.
153. The method of claim 142, wherein the tests further comprise a sound segmentation test comprising generating at least one stimulus and receiving a response to the stimulus that segments the stimulus into smaller units in order to test the ability to segment the stimulus into smaller units.
154. The method of claim 142, wherein the tests comprise a sound manipulation test comprising generating a sound stimulus having one or more sound units and, in response to the sound stimulus, manipulating the sound units of the sound stimulus to test the ability to manipulate sound units.
155. The method of claim 142, wherein the tests further comprise a verbal recall test comprising generating at least one sound stimulus and, in response to the at least one sound stimulus, receiving a user response indicating the recalling of the at least one sound stimulus.
156. The method of claim 146 further comprising speaking a verbal response into the speech recognition device for receiving a verbal response from the user.
157. The method of claim 142, wherein the tests further comprise a naming test comprising generating at least one visual stimulus and, in response to the display of the visual stimulus, speaking the name of or the sound associated with the visual stimulus using the speech recognition device.
158. The method of claim 142, wherein the tests further comprise a word decoder test comprising displaying a visual stimulus to the user and, in response to the visual stimulus, receiving a response from the user to determine the ability to read the visual stimulus.
159. The method of claim 142, wherein the tests further comprise a fluency test comprising generating a plurality of visual stimuli and receiving a user's response to the visual stimuli within a predetermined time interval to determine the user's ability to read and understand the visual stimuli.
160. The method of claim 142, wherein the motivation further comprises generating a graphical image and an associated sound to motivate the user to complete the tests.
161. The method of claim 160, wherein the motivation further comprises generating the graphical image and associated sound after a first predetermined number of tests are completed and generating another graphical image and associated sound after a second predetermined number of tests are completed.
162. The method of claim 161, wherein the generating further comprises generating a graphical image indicating the number of tests remaining to be completed.
163. The method of claim 161, wherein the motivation further comprises generating the graphical image and associated sound after a third predetermined number of tests.
164. The method of claim 143, wherein the recommender further comprises downloading the recommended training module to the client computer.
165. The method of claim 143, wherein the recommender further comprises storing the incorrect responses to the one or more tests and generating a training module recommendation based on the incorrect responses.
166. The method of claim 165, wherein the recommender further comprises comparing each incorrect response to one or more error measures to identify an error associated with each incorrect response and generating a training module recommendation based on the identified error.
167. The method of claim 166, wherein the comparing further comprises identifying one or more errors for each incorrect response.
168. The method of claim 166, wherein the recommender further comprises identifying a deficient skill by comparing the identified error to a deficient skill rule and generating a training module recommendation based on the identified deficient skill.
169. The method of claim 142 further comprising dynamically generating one or more data reports that illustrate the data associated with the one or more tests.
170. The method of claim 169, wherein the data reports further comprise displaying the test results simultaneously for one or more students.
171. The method of claim 170, wherein the displaying further comprises displaying the percentage of correct responses for a test.
172. The method of claim 170, wherein the displaying further comprises displaying the results for one or more different tests for each user wherein the results for each test are displayed in a different color.
173. The method of claim 169, wherein the data report generator further comprises a user interface for browsing other test data for a user.
174. The method of claim 169, wherein the data report generator further comprises determining the number of user test results shown.
175. The method of claim 169, wherein the data report generator further comprises permitting the user to select a data report print format.
176. The method of claim 169, wherein the data report generator further comprises permitting the user to select a data report display format.
177. The method of claim 169, wherein the data report generator further comprises generating a data report for one or more students in a class, generating a data report for one or more classes each having one or more students, and generating a data report for a school having one or more classes.
178. The method of claim 142, wherein the client computer further comprises a teacher station that downloads the tests from the server and one or more student computers connected to the teacher station by a network, each student computer further comprising displaying at least one of a graphical image and audio associated with each test located on the server, receiving a user response to one of the graphical images and audio presented by each test and communicating the responses for each test back to the teacher station.
179. The method of claim 178, wherein the teacher station further comprises communicating the response for each test for each student back to the server computer.
180. The method of claim 179, wherein the teacher station further comprises detecting a break in the communication between the teacher station and the server computer and resending any test data that was not sent due to the communications break.
181. The method of claim 178, wherein each student computer further comprises connecting to the server computer and downloading the resources necessary to execute the current test when the test is started.
182. The method of claim 178, wherein the teacher station further comprises generating a classroom layout showing an icon for each student computer.
183. The method of claim 182, wherein the teacher station further comprises monitoring each student's test progress and controlling each student computer.
184. The method of claim 182, wherein the teacher station further comprises collecting student test data.
185. The method of claim 182, wherein generating the layout further comprises coloring each icon depending on the state of testing for the particular student computer.
186. The method of claim 178, wherein the teacher station further comprises generating one or more separate accounts for the diagnostic system, wherein the accounts include an account manager for managing an entire school or district of users of the diagnostic system, a lead teacher for managing the use of the diagnostic system by one or more classroom teachers in a particular school and one or more classroom teachers who each administer the diagnostic testing for a particular class of students.
187. The method of claim 186, wherein the teacher station further comprises a user interface for the account manager to register one or more lead teachers and a user interface for each lead teacher to register one or more classroom teachers who administer the test.
188. The method of claim 187, wherein the account manager has access to testing data for the entire district, wherein the lead teacher has access to testing data for the entire school and each classroom teacher has access to testing data for the class of the classroom teacher.
189. A system for testing one or more skills associated with the reading skills of an individual, comprising:
a server computer comprising one or more tests for determining deficiencies in one or more reading and pre-reading skills, a scorer for determining a score for each test; and
one or more client computers that establish a communications session with the server computer to download the one or more tests from the server computer, each client computer comprising means for displaying at least one of a graphical image and audio associated with each test located on the server, means for receiving a user response to one of the graphical images and audio presented by each test, means for communicating the responses for each test back to the server computer, and a recommender for recommending, based on the scores of the one or more tests, one or more training modules for improving a reading or pre-reading skill of the individual as indicated by the score of the tests.
190. The system of claim 189, wherein each client computer further comprises means for motivating the user to complete the tests.
191. The system of claim 189, wherein the server further comprises a questionnaire having one or more questions for eliciting information about risk factors associated with language-based learning disabilities.
192. The system of claim 191, wherein the information comprises historical data about reading-related risk factors including one or more of medical conditions including chronic otitis media, family history data including history of dyslexia, environmental data including socioeconomic status and exposure to literacy at home, and observational data about an individual's behaviors reflecting competencies in speech sound awareness.
193. The system of claim 189, wherein the user input device of the one or more client computers comprises a speech recognition device for receiving a verbal response from the user to the one or more tests.
194. The system of claim 189, wherein the one or more tests comprise a rhyme recognition test for testing the ability to recognize rhymes, a rhyme generation test for testing the ability to generate rhymes, a beginning and ending sound recognizer for testing the ability to recognize the beginning and ending sounds of a word, a word decoder test for testing the ability to read by sounding out a written word, a sound blender test for testing the ability to blend sound units together to form words, a sound segmenting test for testing the ability to segment a sound unit into smaller sound units, a sound manipulator test for testing the ability to manipulate sound units to form a new unit, a sequential verbal recall test for testing the ability to recall a sequence of spoken items, a rapid naming test for testing the ability to rapidly name one or more items, a letter naming and symbol/sound association test for testing the ability to name letters and identify the association between a symbol and an associated sound, and a fluent reader test for testing the ability to read fluently.
195. The system of claim 189, wherein the tests further comprise a rhyme recognition test further comprising means for providing at least two stimuli to the user and means for receiving user input in response to the at least two stimuli to determine the user's ability to recognize rhyming words.
196. The system of claim 189, wherein the tests further comprise a test for recognizing the beginning sound of a stimulus, the test comprising means for generating at least one stimulus having at least an initial phoneme and means for receiving a response to the stimulus that indicates an ability of the test taker to recognize the initial phoneme of the stimulus.
197. The system of claim 189, wherein the tests further comprise a test for recognizing the ending sound of a stimulus, the test comprising means for generating at least one stimulus having at least an ending phoneme and means for receiving a response to the stimulus that indicates an ability of the test taker to recognize the ending phoneme of the stimulus.
198. The system of claim 189, wherein the tests further comprise a rhyme generation test comprising means for generating a stimulus and means for receiving a response from the user identifying a sound unit that rhymes with the stimulus.
199. The system of claim 189, wherein the tests further comprise a sound blender test comprising means for generating at least two sound stimuli and means for receiving a user response to the at least two sound stimuli, the response indicating an ability to blend the at least two sound stimuli into a larger sound unit.
200. The system of claim 189, wherein the tests further comprise a sound segmentation test comprising means for generating at least one stimulus and means for receiving a response to the stimulus comprising means for segmenting the stimulus into smaller units in order to test the ability to segment the stimulus into smaller units.
201. The system of claim 189, wherein the tests comprise a sound manipulation test comprising means for generating a sound stimulus having one or more sound units and means, in response to the sound stimulus, for manipulating the sound units of the sound stimulus to test the ability to manipulate sound units.
202. The system of claim 189, wherein the tests further comprise a verbal recall test comprising means for generating at least one sound stimulus and means, in response to the at least one sound stimulus, for receiving a user response indicating the recalling of the at least one sound stimulus.
203. The system of claim 193 further comprising means for speaking a verbal response into the speech recognition device for receiving a verbal response from the user.
204. The system of claim 189, wherein the tests further comprise a naming test comprising means for generating at least one visual stimulus and means, in response to the display of the visual stimulus, for speaking the name of or the sound associated with the visual stimulus using the speech recognition device.
205. The system of claim 189, wherein the tests further comprise a word decoder test comprising means for displaying a visual stimulus to the user and means, in response to the visual stimulus, for receiving a response from the user to determine the ability to read the visual stimulus.
206. The system of claim 189, wherein the tests further comprise a fluency test comprising means for generating a plurality of visual stimuli and means for receiving a user's response to the visual stimuli within a predetermined time interval to determine the user's ability to read and understand the visual stimuli.
207. The system of claim 190, wherein the motivation means further comprises means for generating a graphical image and an associated sound to motivate the user to complete the tests.
208. The system of claim 207, wherein the motivation means further comprises means for generating the graphical image and associated sound after a first predetermined number of tests are completed and means for generating another graphical image and associated sound after a second predetermined number of tests are completed.
209. The system of claim 208, wherein the generating means further comprises means for generating a graphical image indicating the number of tests remaining to be completed.
210. The system of claim 208, wherein the motivation means further comprises means for generating the graphical image and associated sound after a third predetermined number of tests.
211. The system of claim 189, wherein the recommender further comprises means for downloading the recommended training module to the client computer.
212. The system of claim 189, wherein the recommender further comprises means for storing the incorrect responses to the one or more tests and means for generating a training module recommendation based on the incorrect responses.
213. The system of claim 212, wherein the recommender further comprises means for comparing each incorrect response to one or more error measures to identify an error associated with each incorrect response and means for generating a training module recommendation based on the identified error.
214. The system of claim 213, wherein the comparing means further comprises means for identifying one or more errors for each incorrect response.
215. The system of claim 213, wherein the recommender further comprises means for identifying a deficient skill by comparing the identified error to a deficient skill rule and means for generating a training module recommendation based on the identified deficient skill.
216. The system of claim 189, wherein the server further comprises means for dynamically generating one or more data reports that illustrate the data associated with the one or more tests.
217. The system of claim 216, wherein the data reports further comprise means for displaying the test results simultaneously for one or more students.
218. The system of claim 217, wherein the displaying means further comprises means for displaying the percentage of correct responses for a test.
219. The system of claim 217, wherein the displaying means further comprises means for displaying the results for one or more different tests for each user wherein the results for each test are displayed in a different color.
220. The system of claim 216, wherein the data report generator further comprises a user interface for browsing other test data for a user.
221. The system of claim 216, wherein the data report generator further comprises means for determining the number of user test results shown.
222. The system of claim 216, wherein the data report generator further comprises means for permitting the user to select a data report print format.
223. The system of claim 216, wherein the data report generator further comprises means for permitting the user to select a data report display format.
224. The system of claim 216, wherein the data report generator further comprises means for generating a data report for one or more students in a class, means for generating a data report for one or more classes each having one or more students and means for generating a data report for a school having one or more classes.
225. The system of claim 189, wherein the client computer further comprises a teacher station that downloads the tests from the server and one or more student computers connected to the teacher station by a network, each student computer further comprising means for displaying at least one of a graphical image and audio associated with each test located on the server, means for receiving a user response to one of the graphical images and audio presented by each test and means for communicating the responses for each test back to the teacher station.
226. The system of claim 225, wherein the teacher station further comprises means for communicating the response for each test for each student back to the server computer.
227. The system of claim 226, wherein the teacher station further comprises means for detecting a break in the communication between the teacher station and the server computer and means for resending any test data that was not sent due to the communications break.
228. The system of claim 225, wherein each student computer further comprises means for connecting to the server computer and means for downloading the resources necessary to execute the current test when the test is started.
229. The system of claim 225, wherein the teacher station further comprises means for controlling each student computer.
230. The system of claim 225, wherein the teacher station further comprises means for generating a classroom layout showing an icon for each student computer.
231. The system of claim 230, wherein the teacher station further comprises means for monitoring each student's test progress and controlling each student computer.
232. The system of claim 230, wherein the teacher station further comprises means for collecting student test data.
233. The system of claim 230, wherein generating the layout further comprises means for coloring each icon depending on the state of testing for the particular student computer.
234. The system of claim 189, wherein the teacher station further comprises means for generating one or more separate accounts for the diagnostic system, wherein the accounts include an account manager for managing an entire school or district of users of the diagnostic system, a lead teacher for managing the use of the diagnostic system by one or more classroom teachers in a particular school and one or more classroom teachers who each administer the diagnostic testing for a particular class of students.
235. The system of claim 234, wherein the teacher station further comprises means for the account manager to register one or more lead teachers and means for each lead teacher to register one or more classroom teachers who administer the test.
236. The system of claim 235, wherein the account manager has access to testing data for the entire district, wherein the lead teacher has access to testing data for the entire school and each classroom teacher has access to testing data for the class of the classroom teacher.
237. A method for testing one or more skills associated with the reading skills of an individual, comprising:
storing instructions corresponding to one or more tests on a server for determining deficiencies in one or more reading and pre-reading skills and a scorer for determining a score for each test;
downloading the one or more tests from the server computer by a client computer;
displaying at least one of a graphical image and audio associated with each test located on the server at the client computer;
receiving a user response to one of the graphical images and audio presented by each test at the client computer;
communicating the responses for each test back to the server computer; and
recommending at the server, based on the scores of the one or more tests, one or more training modules for improving a reading or pre-reading skill of the individual as indicated by the score of the tests.
238. The method of claim 237 further comprising motivating the user to complete the tests.
239. The method of claim 237 further comprising generating a questionnaire having one or more questions for eliciting information about risk factors associated with language-based learning disabilities.
240. The method of claim 239, wherein the information comprises historical data about reading-related risk factors including one or more of medical conditions including chronic otitis media, family history data including history of dyslexia, environmental data including socioeconomic status and exposure to literacy at home, and observational data about an individual's behaviors reflecting competencies in speech sound awareness.
241. The method of claim 237 further comprising receiving a verbal response from a user to one or more tests using a speech recognition device.
242. The method of claim 237, wherein the one or more tests comprise a rhyme recognition test for testing the ability to recognize rhymes, a rhyme generation test for testing the ability to generate rhymes, a beginning and ending sound recognizer for testing the ability to recognize the beginning and ending sounds of a word, a word decoder test for testing the ability to read by sounding out a written word, a sound blender test for testing the ability to blend sound units together to form words, a sound segmenting test for testing the ability to segment a sound unit into smaller sound units, a sound manipulator test for testing the ability to manipulate sound units to form a new unit, a sequential verbal recall test for testing the ability to recall a sequence of spoken items, a rapid naming test for testing the ability to rapidly name one or more items, a letter naming and symbol/sound association test for testing the ability to name letters and identify the association between a symbol and an associated sound, and a fluent reader test for testing the ability to read fluently.
243. The method of claim 237, wherein the tests further comprise a rhyme recognition test further comprising providing at least two stimuli to the user and receiving user input in response to the at least two stimuli to determine the user's ability to recognize rhyming words.
244. The method of claim 237, wherein the tests further comprise a test for recognizing the beginning sound of a stimulus, the test comprising generating at least one stimulus having at least an initial phoneme and receiving a response to the stimulus that indicates an ability of the test taker to recognize the initial phoneme of the stimulus.
245. The method of claim 237, wherein the tests further comprise a test for recognizing the ending sound of a stimulus, the test comprising generating at least one stimulus having at least an ending phoneme and receiving a response to the stimulus that indicates an ability of the test taker to recognize the ending phoneme of the stimulus.
246. The method of claim 237, wherein the tests further comprise a rhyme generation test comprising generating a stimulus and receiving a response from the user identifying a sound unit that rhymes with the stimulus.
247. The method of claim 237, wherein the tests further comprise a sound blender test comprising generating at least two sound stimuli and receiving a user response to the at least two sound stimuli, the response indicating an ability to blend the at least two sound stimuli into a larger sound unit.
248. The method of claim 237, wherein the tests further comprise a sound segmentation test comprising generating at least one stimulus and receiving a response to the stimulus that segments the stimulus into smaller units in order to test the ability to segment the stimulus into smaller units.
249. The method of claim 237, wherein the tests comprise a sound manipulation test comprising generating a sound stimulus having one or more sound units and, in response to the sound stimulus, manipulating the sound units of the sound stimulus to test the ability to manipulate sound units.
250. The method of claim 237, wherein the tests further comprise a verbal recall test comprising generating at least one sound stimulus and, in response to the at least one sound stimulus, receiving a user response indicating the recalling of the at least one sound stimulus.
251. The method of claim 241 further comprising speaking a verbal response into the speech recognition device for receiving a verbal response from the user.
252. The method of claim 237, wherein the tests further comprise a naming test comprising generating at least one visual stimulus and, in response to the display of the visual stimulus, speaking the name of or the sound associated with the visual stimulus using the speech recognition device.
253. The method of claim 237, wherein the tests further comprise a word decoder test comprising displaying a visual stimulus to the user and, in response to the visual stimulus, receiving a response from the user to determine the ability to read the visual stimulus.
254. The method of claim 237, wherein the tests further comprise a fluency test comprising generating a plurality of visual stimuli and receiving a user's response to the visual stimuli within a predetermined time interval to determine the user's ability to read and understand the visual stimuli.
255. The method of claim 237, wherein the motivation further comprises generating a graphical image and an associated sound to motivate the user to complete the tests.
256. The method of claim 255, wherein the motivation further comprises generating the graphical image and associated sound after a first predetermined number of tests are completed and generating another graphical image and associated sound after a second predetermined number of tests are completed.
257. The method of claim 256, wherein the generating further comprises generating a graphical image indicating the number of tests remaining to be completed.
258. The method of claim 256, wherein the motivation further comprises generating the graphical image and associated sound after a third predetermined number of tests.
259. The method of claim 238, wherein the recommender further comprises downloading the recommended training module to the client computer.
260. The method of claim 238, wherein the recommender further comprises storing the incorrect responses to the one or more tests and generating a training module recommendation based on the incorrect responses.
261. The method of claim 260, wherein the recommender further comprises comparing each incorrect response to one or more error measures to identify an error associated with each incorrect response and generating a training module recommendation based on the identified error.
262. The method of claim 261, wherein the comparing further comprises identifying one or more errors for each incorrect response.
263. The method of claim 261, wherein the recommender further comprises identifying a deficient skill by comparing the identified error to a deficient skill rule and generating a training module recommendation based on the identified deficient skill.
264. The method of claim 237 further comprising dynamically generating one or more data reports that illustrate the data associated with the one or more tests.
265. The method of claim 264, wherein the data reports further comprise displaying the test results simultaneously for one or more students.
266. The method of claim 265, wherein the displaying further comprises displaying the percentage of correct responses for a test.
267. The method of claim 265, wherein the displaying further comprises displaying the results for one or more different tests for each user wherein the results for each test are displayed in a different color.
268. The method of claim 264, wherein the data report generator further comprises a user interface for browsing other test data for a user.
269. The method of claim 264, wherein the data report generator further comprises determining the number of user test results shown.
270. The method of claim 264, wherein the data report generator further comprises permitting the user to select a data report print format.
271. The method of claim 264, wherein the data report generator further comprises permitting the user to select a data report display format.
272. The method of claim 264, wherein the data report generator further comprises generating a data report for one or more students in a class, generating a data report for one or more classes each having one or more students and generating a data report for a school having one or more classes.
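As an illustrative sketch only, report data for the student, class and school levels recited in claim 272 might be aggregated as nested dictionaries; the percent-correct inputs and helper names are assumptions.

```python
from statistics import mean

def class_report(scores_by_student):
    """Per-class report: scores_by_student maps student -> list of percent-correct values."""
    return {student: mean(scores) for student, scores in scores_by_student.items() if scores}

def school_report(classes):
    """Per-school report: classes maps class name -> scores_by_student dict."""
    return {name: class_report(students) for name, students in classes.items()}
```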
273. The method of claim 237, wherein the client computer further comprises a teacher station that downloads the tests from the server and one or more student computers connected to the teacher station by a network, each student computer further comprising displaying at least one of a graphical image and audio associated with each test located on the server, receiving a user response to one of the graphical images and audio presented by each test and communicating the responses for each test back to the teacher station.
274. The method of claim 273, wherein the teacher station further comprises communicating the response for each test for each student back to the server computer.
275. The method of claim 274, wherein the teacher station further comprises detecting a break in the communication between the teacher station and the server computer and resending any test data that was not sent due to the communications break.
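One plausible, non-authoritative way to realize the break-detection-and-resend behavior of claim 275 is for the teacher station to queue unsent records and retry them on the next flush; the class below and its send callable are hypothetical.

```python
import queue

class TestDataUploader:
    """Buffers per-student test records and resends anything that failed to upload."""

    def __init__(self, send):
        self._send = send              # callable that raises ConnectionError on a break
        self._pending = queue.Queue()

    def submit(self, record):
        self._pending.put(record)
        self.flush()

    def flush(self):
        """Try to upload everything queued; keep unsent records for the next attempt."""
        unsent = []
        while not self._pending.empty():
            record = self._pending.get()
            try:
                self._send(record)
            except ConnectionError:
                unsent.append(record)  # connection broke; retry on the next flush
        for record in unsent:
            self._pending.put(record)
```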
276. The method of claim 273, wherein each student computer further comprises connecting to the server computer and downloading the resources necessary to execute the current test when the test is started.
277. The method of claim 273, wherein the teacher station further comprises generating a classroom layout showing an icon for each student computer.
278. The method of claim 277, wherein the teacher station further comprises monitoring each student's test progress and controlling each student computer.
279. The method of claim 277, wherein the teacher station further comprises collecting student test data.
280. The method of claim 277, wherein generating the layout further comprises coloring each icon depending on the state of testing for the particular student computer.
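A trivial sketch of the state-to-color mapping suggested by claim 280; the specific states and palette are invented for illustration.

```python
STATE_COLORS = {
    "idle": "gray",
    "testing": "green",
    "needs_help": "yellow",
    "finished": "blue",
    "disconnected": "red",
}

def icon_color(state):
    """Map a student computer's testing state to an icon color (hypothetical palette)."""
    return STATE_COLORS.get(state, "gray")
```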
281. The method of claim 273, wherein the teacher station further comprises generating one or more separate accounts for the diagnostic system, wherein the accounts include an account manager for managing an entire school or district of users of the diagnostic system, a lead teacher for managing the use of the diagnostic system by one or more classroom teachers in a particular school and one or more classroom teachers who each administer the diagnostic testing for a particular class of students.
282. The method of claim 281, wherein the teacher station further comprises a user interface for the account manager to register one or more lead teachers and a user interface for each lead teacher to register one or more classroom teachers who administer the test.
283. The method of claim 282, wherein the account manager has access to testing data for the entire district, wherein the lead teacher has access to testing data for the entire school and each classroom teacher has access to testing data for the class of the classroom teacher.
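The three-tier access model of claims 281-283 could, for example, be enforced by filtering test records on the field that matches the account's scope; the role names, record keys and filtering approach below are assumptions, not the claimed implementation.

```python
def visible_test_data(role, scope, records):
    """Filter test records by account type.

    role    -- "account_manager", "lead_teacher" or "classroom_teacher"
    scope   -- district, school or class identifier tied to that account
    records -- iterable of dicts with "district", "school" and "class" keys
    """
    key = {
        "account_manager": "district",   # sees the entire district
        "lead_teacher": "school",        # sees the entire school
        "classroom_teacher": "class",    # sees only their own class
    }[role]
    return [record for record in records if record[key] == scope]
```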
284. A server for testing one or more skills associated with the reading skills of an individual, comprising:
one or more tests for determining deficiencies in one or more reading and pre-reading skills;
a scorer for determining a score for each test;
means for establishing a communications session with a client computer to download the one or more tests to the client computer, wherein each client computer displays at least one of a graphical image and audio associated with each test located on the server, receives a user response to one of the graphical images and audio presented by each test and communicates the responses for each test back to the server computer; and
a recommender for recommending, based on the scores of the one or more tests, one or more training modules for improving a reading or pre-reading skill of the individual as indicated by the score of the tests.
285. The server of claim 284 further comprising a client computer, wherein each client computer further comprises means for motivating the user to complete the tests.
286. The server of claim 284 further comprising a questionnaire having one or more questions for eliciting information about risk factors associated with language-based learning disabilities.
287. The server of claim 286, wherein the information comprises historical data about reading-related risk factors including one or more of medical conditions including chronic otitis media, family history data including history of dyslexia, environmental data including socioeconomic status and exposure to literacy at home and observational data about an individual's behaviors reflecting competencies in speech sound awareness.
288. The server of claim 285, wherein the user input device of the one or more client computers comprises a speech recognition device for receiving a verbal response from the user to the one or more tests.
289. The server of claim 284, wherein the one or more tests comprise a rhyme recognition test for testing the ability to recognize rhymes, a rhyme generation test for testing the ability to generate rhymes, a beginning and ending sound recognizer for testing the ability to recognize the beginning and ending sounds of a word, a word decoder test for testing the ability to read by sounding out a written word, a sound blender test for testing the ability to blend sound units together to form words, a sound segmenting test for testing the ability to segment a sound unit into smaller sound units, a sound manipulator test for testing the ability to manipulate sound units to form a new unit, a sequential verbal recall test for testing the ability to recall a sequence of spoken items, a rapid naming test for testing the ability to rapidly name one or more items, a letter naming and symbol/sound association test for testing the ability to name letters and identify the association between a symbol and an associated sound, and a fluent reader test for testing the ability to read fluently.
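For orientation, the test battery enumerated in claim 289 can be summarized as a mapping from each test to the skill it probes; the identifiers below are informal labels for this sketch, not names used by the system.

```python
TEST_BATTERY = {
    "rhyme_recognition":  "recognizing rhymes",
    "rhyme_generation":   "generating rhymes",
    "sound_recognition":  "recognizing the beginning and ending sounds of a word",
    "word_decoding":      "reading by sounding out a written word",
    "sound_blending":     "blending sound units together to form words",
    "sound_segmenting":   "segmenting a sound unit into smaller sound units",
    "sound_manipulation": "manipulating sound units to form a new unit",
    "verbal_recall":      "recalling a sequence of spoken items",
    "rapid_naming":       "rapidly naming one or more items",
    "letter_naming":      "naming letters and symbol/sound associations",
    "fluent_reading":     "reading fluently",
}
```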
290. The server of claim 284, wherein the tests further comprise a rhyme recognition test further comprising means for providing at least two stimuli to the user and means for receiving user input in response to the at least two stimuli to determine the user's ability to recognize rhyming words.
291. The server of claim 284, wherein the tests further comprise a test for recognizing the beginning sound of a stimulus, the test comprising means for generating at least one stimulus having at least an initial phoneme and means for receiving a response to the stimulus that indicates an ability of the test taker to recognize the initial phoneme of the stimulus.
292. The server of claim 284, wherein the tests further comprise a test for recognizing the ending sound of a stimulus, the test comprising means for generating at least one stimulus having at least an ending phoneme and means for receiving a response to the stimulus that indicates an ability of the test taker to recognize the ending phoneme of the stimulus.
293. The server of claim 284, wherein the tests further comprise a rhyme generation test comprising means for generating a stimulus and means for receiving a response from the user identifying a sound unit that rhymes with the stimulus.
294. The server of claim 284, wherein the tests further comprise a sound blender test comprising means for generating at least two sound stimuli and means for receiving a user response to the at least two sound stimuli, the response indicating an ability to blend the at least two sound stimuli into a larger sound unit.
295. The server of claim 284, wherein the tests further comprise a sound segmentation test comprising means for generating at least one stimulus and means for receiving a response to the stimulus comprising means for segmenting the stimulus into smaller units in order to test the ability to segment the stimulus into smaller units.
296. The server of claim 284, wherein the tests comprise a sound manipulation test comprising means for generating a sound stimulus having one or more sound units and means, in response to the sound stimulus, for manipulating the sound units of the sound stimulus to test the ability to manipulate sound units.
297. The server of claim 284, wherein the tests further comprise a verbal recall test comprising means for generating at least one sound stimulus and means, in response to the at least one sound stimulus, for receiving a user response indicating the recalling of the at least one sound stimulus.
298. The server of claim 288 further comprising means for speaking a verbal response into the speech recognition device for receiving a verbal response from the user.
299. The server of claim 284, wherein the tests further comprise a naming test comprising means for generating at least one visual stimulus and means, in response to the display of the visual stimulus, for speaking the name of or the sound associated with the visual stimulus using the speech recognition device.
300. The server of claim 284, wherein the tests further comprise a word decoder test comprising means for displaying a visual stimulus to the user and means, in response to the visual stimulus, for receiving a response from the user to determine the ability to read the visual stimulus.
301. The server of claim 284, wherein the tests further comprise a fluency test comprising means for generating a plurality of visual stimuli and means for receiving a user's response to the visual stimuli within a predetermined time interval to determine the user's ability to read and understand the visual stimuli.
302. The server of claim 285, wherein the motivation means further comprises means for generating a graphical image and an associated sound to motivate the user to complete the tests.
303. The server of claim 302, wherein the motivation means further comprises means for generating the graphical image and associated sound after a first predetermined number of tests are completed and means for generating another graphical image and associated sound after a second predetermined number of tests are completed.
304. The server of claim 303, wherein the generating means further comprises means for generating a graphical image indicating the number of tests remaining to be completed.
305. The server of claim 303, wherein the motivation means further comprises means for generating the graphical image and associated sound after a third predetermined number of tests.
306. The server of claim 284, wherein the recommender further comprises means for downloading the recommended training module to the client computer.
307. The server of claim 284, wherein the recommender further comprises means for storing the incorrect responses to the one or more tests and means for generating a training module recommendation based on the incorrect responses.
308. The server of claim 307, wherein the recommender further comprises means for comparing each incorrect response to one or more error measures to identify an error associated with each incorrect response and means for generating a training module recommendation based on the identified error.
309. The server of claim 308, wherein the comparing means further comprises means for identifying one or more errors for each incorrect response.
310. The server of claim 308, wherein the recommender further comprises means for identifying a deficient skill by comparing the identified error to a deficient skill rule and means for generating a training module recommendation based on the identified deficient skill.
311. The server of claim 284 further comprising means for dynamically generating one or more data reports that illustrate the data associated with the one or more tests.
312. The server of claim 311, wherein the data reports further comprises means for displaying the test results simultaneously for one or more students.
313. The server of claim 312, wherein the displaying means further comprises means for displaying the percentage of correct responses for a test.
314. The server of claim 312, wherein the displaying means further comprises means for displaying the results for one or more different tests for each user wherein the results for each test are displayed in a different color.
315. The server of claim 311, wherein the data report generator further comprises a user interface for browsing other test data for a user.
316. The server of claim 311, wherein the data report generator further comprises means for determining the number of user test results shown.
317. The server of claim 311, wherein the data report generator further comprises means for permitting the user to select a data report print format.
318. The server of claim 311, wherein the data report generator further comprises means for permitting the user to select a data report display format.
319. The server of claim 311, wherein the data report generator further comprises means for generating a data report for one or more students in a class, means for generating a data report for one or more classes each having one or more students and means for generating a data report for a school having one or more classes.
320. The server of claim 285 further comprising a teacher station that downloads the tests from the server and one or more student computers connected to the teacher station by a network, each student computer further comprising means for displaying at least one of a graphical image and audio associated with each test located on the server, means for receiving a user response to one of the graphical images and audio presented by each test and means for communicating the responses for each test back to the teacher station.
321. The server of claim 320, wherein the teacher station further comprises means for communicating the response for each test for each student back to the server computer.
322. The server of claim 321, wherein the teacher station further comprises means for detecting a break in the communication between the teacher station and the server computer and means for resending any test data that was not sent due to the communications break.
323. The server of claim 320, wherein each student computer further comprises means for connecting to the server computer and means for downloading the resources necessary to execute the current test when the test is started.
324. The server of claim 320, wherein the teacher station further comprises means for controlling each student computer.
325. The server of claim 320, wherein the teacher station further comprises means for generating a classroom layout showing an icon for each student computer.
326. The server of claim 325, wherein the teacher station further comprises means for monitoring each student's test progress and controlling each student computer.
327. The server of claim 325, wherein the teacher station further comprises means for collecting student test data.
328. The server of claim 325, wherein generating the layout further comprises means for coloring each icon depending on the state of testing for the particular student computer.
329. The server of claim 284, wherein the teacher station further comprises means for generating one or more separate accounts for the diagnostic system, wherein the accounts include an account manager for managing an entire school or district of users of the diagnostic system, a lead teacher for managing the use of the diagnostic system by one or more classroom teachers in a particular school and one or more classroom teachers who each administer the diagnostic testing for a particular class of students.
330. The server of claim 329, wherein the teacher station further comprises means for the account manager to register one or more lead teachers and means for each lead teacher to register one or more classroom teachers who administer the test.
331. The server of claim 330, wherein the account manager has access to testing data for the entire district, wherein the lead teacher has access to testing data for the entire school and each classroom teacher has access to testing data for the class of the classroom teacher.
332. A system for recommending a training module based on one or more tests, comprising:
means for determining the incorrect responses to one or more tests wherein the incorrect responses indicate a reading skill deficiency; and
means for recommending a training module that improves a particular reading skill based on the incorrect responses.
333. The system of claim 332, wherein the recommender further comprises means for comparing each incorrect response to one or more error measures to generate an error measure associated with each incorrect response and means for generating a training module recommendation based on the error measures.
334. The system of claim 333, wherein the comparing means further comprises means for generating one or more error measures for each incorrect response.
335. The system of claim 333, wherein the recommender further comprises means for identifying a deficient skill by comparing the error measure to a deficient skill rule and means for generating a training module recommendation based on the identified deficient skill.
336. A method for recommending a training module based on one or more tests, comprising:
determining the incorrect responses to one or more tests wherein the incorrect responses indicate a reading skill deficiency; and
recommending a training module that improves a particular reading skill based on the incorrect responses.
337. The method of claim 336, wherein the recommender further comprises comparing each incorrect response to one or more error measures to generate an error measure associated with each incorrect response and generating a training module recommendation based on the error measures.
338. The method of claim 337, wherein the comparing further comprises generating one or more error measures for each incorrect response. The method of claim 337, wherein the recommender further comprises identifying a deficient skill by comparing the error measure to a deficient skill rule and generating a training module recommendation based on the identified deficient skill.
339. The system of claim F, wherein each client computer further comprises means for motivating the user to complete the tests.
340. A system for testing one or more skills associated with the reading skills of an individual, comprising:
a server computer comprising one or more tests for determining deficiencies in one or more reading and pre-reading skills, a scorer for determining a score for each test;
a teacher station for downloading the tests from the server computer; and
one or more student computers that establish a communications session with the teacher station over a computer network, each student computer comprising means for displaying at least one of a graphical image and audio associated with each test located on the server, means for receiving a user response to one of the graphical images and audio presented by each test, and means for communicating the responses for each test back to the server computer.
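Purely as a sketch under assumed transport details (JSON over a TCP socket on a made-up port), one way a student computer might hand a test response back across the network is shown below; the claims do not specify any wire format.

```python
import json
import socket

def relay_response(student_response, teacher_host, teacher_port=9000):
    """Send one student's test response to the teacher station as a JSON line.

    The socket protocol, host and port are illustrative assumptions only.
    """
    payload = json.dumps(student_response).encode("utf-8") + b"\n"
    with socket.create_connection((teacher_host, teacher_port), timeout=5) as sock:
        sock.sendall(payload)
```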
341. The system of claim 340, wherein each client computer further comprises means for motivating the user to complete the tests.
342. The system of claim 340, wherein the server further comprises a questionnaire having one or more questions for eliciting information about risk factors associated with language-based learning disabilities.
343. The system of claim 342, wherein the information comprises historical data about reading-related risk factors including one or more of medical conditions including chronic otitis media, family history data including history of dyslexia, environmental data including socioeconomic status and exposure to literacy at home and observational data about an individual's behaviors reflecting competencies in speech sound awareness.
344. The system of claim 340, wherein the user input device of the one or more client computers comprises a speech recognition device for receiving a verbal response from the user to the one or more tests.
345. The system of claim 340, wherein the one or more tests comprise a rhyme recognition test for testing the ability to recognize rhymes, a rhyme generation test for testing the ability to generate rhymes, a beginning and ending sound recognizer for testing the ability to recognize the beginning and ending sounds of a word, a word decoder test for testing the ability to read by sounding out a written word, a sound blender test for testing the ability to blend sound units together to form words, a sound segmenting test for testing the ability to segment a sound unit into smaller sound units, a sound manipulator test for testing the ability to manipulate sound units to form a new unit, a sequential verbal recall test for testing the ability to recall a sequence of spoken items, a rapid naming test for testing the ability to rapidly name one or more items, a letter naming and symbol/sound association test for testing the ability to name letters and identify the association between a symbol and an associated sound, and a fluent reader test for testing the ability to read fluently.
346. The system of claim 340, wherein the tests further comprise a rhyme recognition test further comprising means for providing at least two stimuli to the user and means for receiving user input in response to the at least two stimuli to determine the user's ability to recognize rhyming words.
347. The system of claim 340, wherein the tests further comprise a test for recognizing the beginning sound of a stimulus, the test comprising means for generating at least one stimulus having at least an initial phoneme and means for receiving a response to the stimulus that indicates an ability of the test taker to recognize the initial phoneme of the stimulus.
348. The system of claim 340, wherein the tests further comprise a test for recognizing the ending sound of a stimulus, the test comprising means for generating at least one stimulus having at least an ending phoneme and means for receiving a response to the stimulus that indicates an ability of the test taker to recognize the ending phoneme of the stimulus.
349. The system of claim 340, wherein the tests further comprise a rhyme generation test comprising means for generating a stimulus and means for receiving a response from the user identifying a sound unit that rhymes with the stimulus.
350. The system of claim 340, wherein the tests further comprise a sound blender test comprising means for generating at least two sound stimuli and means for receiving a user response to the at least two sound stimuli, the response indicating an ability to blend the at least two sound stimuli into a larger sound unit.
351. The system of claim 340, wherein the tests further comprise a sound segmentation test comprising means for generating at least one stimulus and means for receiving a response to the stimulus comprising means for segmenting the stimulus into smaller units in order to test the ability to segment the stimulus into smaller units.
352. The system of claim 340, wherein the tests comprise a sound manipulation test comprising means for generating a sound stimulus having one or more sound units and means, in response to the sound stimulus, for manipulating the sound units of the sound stimulus to test the ability to manipulate sound units.
353. The system of claim 340, wherein the tests further comprise a verbal recall test comprising means for generating at least one sound stimulus and means, in response to the at least one sound stimulus, for receiving a user response indicating the recalling of the at least one sound stimulus.
354. The system of claim 344 further comprising means for speaking a verbal response into the speech recognition device for receiving a verbal response from the user.
355. The system of claim 340, wherein the tests further comprise a naming test comprising means for generating at least one visual stimulus and means, in response to the display of the visual stimulus, for speaking the name of or the sound associated with the visual stimulus using the speech recognition device.
356. The system of claim 340, wherein the tests further comprise a word decoder test comprising means for displaying a visual stimulus to the user and means, in response to the visual stimulus, for receiving a response from the user to determine the ability to read the visual stimulus.
357. The system of claim 340, wherein the tests further comprise a fluency test comprising means for generating a plurality of visual stimuli and means for receiving a user's response to the visual stimuli within a predetermined time interval to determine the user's ability to read and understand the visual stimuli.
358. The system of claim 341, wherein the motivation means further comprises means for generating a graphical image and an associated sound to motivate the user to complete the tests.
359. The system of claim 358, wherein the motivation means further comprises means for generating the graphical image and associated sound after a first predetermined number of tests are completed and means for generating another graphical image and associated sound after a second predetermined number of tests are completed.
360. The system of claim 359, wherein the generating means further comprises means for generating a graphical image indicating the number of tests remaining to be completed.
361. The system of claim 359, wherein the motivation means further comprises means for generating the graphical image and associated sound after a third predetermined number of tests.
362. The system of claim 340 further comprising a recommender for recommending, based on the scores of the one or more tests, one or more training modules for improving a reading or pre-reading skill of the individual as indicated by the score of the tests.
363. The system of claim 362, wherein the recommender further comprises means for downloading the recommended training module to the student computer.
364. The system of claim 362, wherein the recommender further comprises means for storing the incorrect responses to the one or more tests and means for generating a training module recommendation based on the incorrect responses.
365. The system of claim 364, wherein the recommender further comprises means for comparing each incorrect response to one or more error measures to identify an error associated with each incorrect response and means for generating a training module recommendation based on the identified error.
366. The system of claim 365, wherein the comparing means further comprises means for identifying one or more errors for each incorrect response.
367. The system of claim 365, wherein the recommender further comprises means for identifying a deficient skill by comparing the identified error to a deficient skill rule and means for generating a training module recommendation based on the identified deficient skill.
368. The system of claim 340, wherein the server further comprises means for dynamically generating one or more data reports that illustrate the data associated with the one or more tests.
369. The system of claim 368, wherein the data reports further comprises means for displaying the test results simultaneously for one or more students.
370. The system of claim 369, wherein the displaying means further comprises means for displaying the percentage of correct responses for a test.
371. The system of claim 369, wherein the displaying means further comprises means for displaying the results for one or more different tests for each user wherein the results for each test are displayed in a different color.
372. The system of claim 368, wherein the data report generator further comprises a user interface for browsing other test data for a user.
373. The system of claim 368, wherein the data report generator further comprises means for determining the number of user test results shown.
374. The system of claim 368, wherein the data report generator further comprises means for permitting the user to select a data report print format.
375. The system of claim 368, wherein the data report generator further comprises means for permitting the user to select a data report display format.
376. The system of claim 368, wherein the data report generator further comprises means for generating a data report for one or more students in a class, means for generating a data report for one or more classes each having one or more students and means for generating a data report for a school having one or more classes.
377. The system of claim 340, wherein the teacher station further comprises means for communicating the response for each test for each student back to the server computer.
378. The system of claim 377, wherein the teacher station further comprises means for detecting a break in the communication between the teacher station and the server computer and means for resending any test data that was not sent due to the communications break.
379. The system of claim 340, wherein each student computer further comprises means for connecting to the server computer and means for downloading the resources necessary to execute the current test when the test is started.
380. The system of claim 340, wherein the teacher station further comprises means for generating a classroom layout showing an icon for each student computer.
381. The system of claim 380, wherein the teacher station further comprises means for monitoring each student's test progress and controlling each student computer.
382. The system of claim 380, wherein the teacher station further comprises means for collecting student test data.
383. The system of claim 380, wherein generating the layout further comprises means for coloring each icon depending on the state of testing for the particular student computer.
384. The system of claim F, wherein the teacher station further comprises means for generating one or more separate accounts for the diagnostic system, wherein the accounts include an account manager for managing an entire school or district of users of the diagnostic system, a lead teacher for managing the use of the diagnostic system by one or more classroom teachers in a particular school and one or more classroom teachers who each administer the diagnostic testing for a particular class of students.
385. The system of claim F+40, wherein the teacher station further comprises means for the account manager to register one or more lead teachers and means for each lead teacher to register one or more classroom teachers who administer the test.
386. The system of claim F+41, wherein the account manager has access to testing data for the entire district, wherein the lead teacher has access to testing data for the entire school and each classroom teacher has access to testing data for the class of the classroom teacher.
387. A method for testing one or more skills associated with the reading skills of an individual, comprising:
storing instructions corresponding to one or more tests on a server for determining deficiencies in one or more reading and pre-reading skills and a scorer for determining a score for each test;
downloading the one or more tests from the server computer to a teacher station;
displaying at least one of a graphical image and audio associated with each test located on the server at the client computer connected to the teacher station;
receiving a user response to one of the graphical images and audio presented by each test at the client computer connected to the teacher station;
communicating the responses for each test back to the teacher station.
388. The method of claim F further comprising motivating the user to complete the tests.
389. The method of claim F further comprising generating a questionnaire having one or more questions for eliciting information about risk factors associated with language-based learning disabilities.
390. The method of claim F+2, wherein the information comprises historical data about reading-related risk factors including one or more of medical conditions including chronic otitis media, family history data including history of dyslexia, environmental data including socioeconomic status and exposure to literacy at home and observational data about an individual's behaviors reflecting competencies in speech sound awareness.
391. The method of claim F further comprising receiving a verbal response from a user to one or more tests using a speech recognition device.
392. The method of claim F, wherein the one or more tests comprise a rhyme recognition test for testing the ability to recognize rhymes, a rhyme generation test for testing the ability to generate rhymes, a beginning and ending sound recognizer for testing the ability to recognize the beginning and ending sounds of a word, a word decoder test for testing the ability to read by sounding out a written word, a sound blender test for testing the ability to blend sound units together to form words, a sound segmenting test for testing the ability to segment a sound unit into smaller sound units, a sound manipulator test for testing the ability to manipulate sound units to form a new unit, a sequential verbal recall test for testing the ability to recall a sequence of spoken items, a rapid naming test for testing the ability to rapidly name one or more items, a letter naming and symbol/sound association test for testing the ability to name letters and identify the association between a symbol and an associated sound, and a fluent reader test for testing the ability to read fluently.
393. The method of claim F, wherein the tests further comprise a rhyme recognition test further comprising providing at least two stimuli to the user and receiving user input in response to the at least two stimuli to determine the user's ability to recognize rhyming words.
394. The method of claim F, wherein the tests further comprise a test for recognizing the beginning sound of a stimulus, the test comprising generating at least one stimulus having at least an initial phoneme and receiving a response to the stimulus that indicates an ability of the test taker to recognize the initial phoneme of the stimulus.
395. The method of claim F, wherein the tests further comprise a test for recognizing the ending sound of a stimulus, the test comprising generating at least one stimulus having at least an ending phoneme and receiving a response to the stimulus that indicates an ability of the test taker to recognize the ending phoneme of the stimulus.
396. The method of claim F, wherein the tests further comprise a rhyme generation test comprising generating a stimulus and receiving a response from the user identifying a sound unit that rhymes with the stimulus.
397. The method of claim F, wherein the tests further comprise a sound blender test comprising generating at least two sound stimuli and receiving a user response to the at least two sound stimuli, the response indicating an ability to blend the at least two sound stimuli into a larger sound unit.
398. The method of claim F, wherein the tests further comprise a sound segmentation test comprising generating at least one stimulus and receiving a response to the stimulus comprising segmenting the stimulus into smaller units in order to test the ability to segment the stimulus into smaller units.
399. The method of claim F, wherein the tests comprise a sound manipulation test comprising generating a sound stimulus having one or more sound units and, in response to the sound stimulus, manipulating the sound units of the sound stimulus to test the ability to manipulate sound units.
400. The method of claim F, wherein the tests further comprise a verbal recall test comprising generating at least one sound stimulus and, in response to the at least one sound stimulus, receiving a user response indicating the recalling of the at least one sound stimulus.
401. The method of claim F+4 further comprising speaking a verbal response into the speech recognition device for receiving a verbal response from the user.
402. The method of claim F, wherein the tests further comprise a naming test comprising generating at least one visual stimulus and, in response to the display of the visual stimulus, speaking the name of or the sound associated with the visual stimulus using the speech recognition device.
403. The method of claim F, wherein the tests further comprise a word decoder test comprising displaying a visual stimulus to the user and, in response to the visual stimulus, receiving a response from the user to determine the ability to read the visual stimulus.
404. The method of claim F, wherein the tests further comprise a fluency test comprising generating a plurality of visual stimuli and receiving a user's response to the visual stimuli within a predetermined time interval to determine the user's ability to read and understand the visual stimuli.
405. The method of claim F, wherein the motivation further comprises generating a graphical image and an associated sound to motivate the user to complete the tests.
406. The method of claim F+19, wherein the motivation further comprises generating the graphical image and associated sound after a first predetermined number of tests are completed and generating another graphical image and associated sound after a second predetermined number of tests are completed.
407. The method of claim F+20, wherein the generating further comprises generating a graphical image indicating the number of tests remaining to be completed.
408. The method of claim F+20, wherein the motivation further comprises generating the graphical image and associated sound after a third predetermined number of tests.
409. The method of claim F further comprising, based on the test scores, recommending a training module to improve the skill of a user.
410. The method of claim F+1, wherein the recommender further comprises downloading the recommended training module to the client computer.
411. The method of claim F+1, wherein the recommender further comprises storing the incorrect responses to the one or more tests and generating a training module recommendation based on the incorrect responses.
412. The method of claim F+23, wherein the recommender further comprises comparing each incorrect response to one or more error measures to identify an error associated with each incorrect response and generating a training module recommendation based on the identified error.
413. The method of claim F+24, wherein the comparing further comprises identifying one or more errors for each incorrect response.
414. The method of claim F+24, wherein the recommender further comprises identifying a deficient skill by comparing the identified error to a deficient skill rule and generating a training module recommendation based on the identified deficient skill.
415. The method of claim F further comprising dynamically generating one or more data reports that illustrate the data associated with the one or more tests.
416. The method of claim F+27, wherein the data reports further comprises displaying the test results simultaneously for one or more students.
417. The method of claim F+28, wherein the displaying further comprises displaying the percentage of correct responses for a test.
418. The method of claim F+28, wherein the displaying further comprises displaying the results for one or more different tests for each user wherein the results for each test are displayed in a different color.
419. The method of claim F+27, wherein the data report generator further comprises a user interface for browsing other test data for a user.
420. The method of claim F+27, wherein the data report generator further comprises determining the number of user test results shown.
421. The method of claim F+27, wherein the data report generator further comprises permitting the user to select a data report print format.
422. The method of claim F+27, wherein the data report generator further comprises permitting the user to select a data report display format.
423. The method of claim F+27, wherein the data report generator further comprises generating a data report for one or more students in a class, generating a data report for one or more classes each having one or more students and generating a data report for a school having one or more classes.
424. The method of claim F, wherein the client computer further comprises a teacher station that downloads the tests from the server and one or more student computers connected to the teacher station by a network, each student computer further comprising displaying at least one of a graphical image and audio associated with each test located on the server, receiving a user response to one of the graphical images and audio presented by each test and communicating the responses for each test back to the teacher station.
425. The method of claim F+33, wherein the teacher station further comprises communicating the response for each test for each student back to the server computer.
426. The method of claim F+34, wherein the teacher station further comprises detecting a break in the communication between the teacher station and the server computer and resending any test data that was not sent due to the communications break.
427. The method of claim F+33, wherein each student computer further comprises connecting to the server computer and downloading the resources necessary to execute the current test when the test is started.
428. The method of claim F+33, wherein the teacher station further comprises generating a classroom layout showing an icon for each student computer.
429. The method of claim 428, wherein the teacher station further comprises monitoring each student's test progress and controlling each student computer.
430. The method of claim 428, wherein the teacher station further comprises collecting student test data.
431. The method of claim 428, wherein generating the layout further comprises coloring each icon depending on the state of testing for the particular student computer.
432. The method of claim 424, wherein the teacher station further comprises generating one or more separate accounts for the diagnostic system, wherein the accounts include an account manager for managing an entire school or district of users of the diagnostic system, a lead teacher for managing the use of the diagnostic system by one or more classroom teachers in a particular school and one or more classroom teachers who each administer the diagnostic testing for a particular class of students.
433. The method of claim 432, wherein the teacher station further comprises a user interface for the account manager to register one or more lead teachers and a user interface for each lead teacher to register one or more classroom teachers who administer the test.
434. The method of claim 433, wherein the account manager has access to testing data for the entire district, wherein the lead teacher has access to testing data for the entire school and each classroom teacher has access to testing data for the class of the classroom teacher.
435. A dual server system for testing one or more skills associated with the reading skills of an individual, comprising:
a server computer comprising one or more tests for determining deficiencies in one or more reading and pre-reading skills, a scorer for determining a score for each test;
a teacher station for downloading the tests from the server computer; and
wherein the teacher station is connected to one or more student computers over a computer network, the teacher station further comprising means for establishing a communications session with each student computer so that each student computer displays at least one of a graphical image and audio associated with each test located on the server, receives a user response to one of the graphical images and audio presented by each test, and communicates the responses for each test back to the teacher station.
436. The system of claim 435, wherein each client computer further comprises means for motivating the user to complete the tests.
437. The system of claim 435, wherein the server further comprises a questionnaire having one or more questions for eliciting information about risk factors associated with language-based learning disabilities.
438. The system of claim 437, wherein the information comprises historical data about reading-related risk factors including one or more of medical conditions including chronic otitis media, family history data including history of dyslexia, environmental data including socioeconomic status and exposure to literacy at home and observational data about an individual's behaviors reflecting competencies in speech sound awareness.
439. The system of claim 435, wherein the user input device of the one or more client computers comprises a speech recognition device for receiving a verbal response from the user to the one or more tests.
440. The system of claim 435, wherein the one or more tests comprise a rhyme recognition test for testing the ability to recognize rhymes, a rhyme generation test for testing the ability to generate rhymes, a beginning and ending sound recognizer for testing the ability to recognize the beginning and ending sounds of a word, a word decoder test for testing the ability to read by sounding out a written word, a sound blender test for testing the ability to blend sound units together to form words, a sound segmenting test for testing the ability to segment a sound unit into smaller sound units, a sound manipulator test for testing the ability to manipulate sound units to form a new unit, a sequential verbal recall test for testing the ability to recall a sequence of spoken items, a rapid naming test for testing the ability to rapidly name one or more items, a letter naming and symbol/sound association test for testing the ability to name letters and identify the association between a symbol and an associated sound, and a fluent reader test for testing the ability to read fluently.
441. The system of claim 435, wherein the tests further comprise a rhyme recognition test further comprising means for providing at least two stimuli to the user and means for receiving user input in response to the at least two stimuli to determine the user's ability to recognize rhyming words.
442. The system of claim 435, wherein the tests further comprise a test for recognizing the beginning sound of a stimulus, the test comprising means for generating at least one stimulus having at least an initial phoneme and means for receiving a response to the stimulus that indicates an ability of the test taker to recognize the initial phoneme of the stimulus.
443. The system of claim 435, wherein the tests further comprise a test for recognizing the ending sound of a stimulus, the test comprising means for generating at least one stimulus having at least an ending phoneme and means for receiving a response to the stimulus that indicates an ability of the test taker to recognize the ending phoneme of the stimulus.
444. The system of claim 435, wherein the tests further comprise a rhyme generation test comprising means for generating a stimulus and means for receiving a response from the user identifying a sound unit that rhymes with the stimulus.
445. The system of claim 435, wherein the tests further comprise a sound blender test comprising means for generating at least two sound stimuli and means for receiving a user response to the at least two sound stimuli, the response indicating an ability to blend the at least two sound stimuli into a larger sound unit.
446. The system of claim 435, wherein the tests further comprise a sound segmentation test comprising means for generating at least one stimulus and means for receiving a response to the stimulus comprising means for segmenting the stimulus into smaller units in order to test the ability to segment the stimulus into smaller units.
447. The system of claim 435, wherein the tests comprise a sound manipulation test comprising means for generating a sound stimulus having one or more sound units and means, in response to the sound stimulus, for manipulating the sound units of the sound stimulus to test the ability to manipulate sound units.
448. The system of claim 435, wherein the tests further comprise a verbal recall test comprising means for generating at least one sound stimulus and means, in response to the at least one sound stimulus, for receiving a user response indicating the recalling of the at least one sound stimulus.
449. The system of claim 439 further comprising means for speaking a verbal response into the speech recognition device for receiving a verbal response from the user.
450. The system of claim 435, wherein the tests further comprise a naming test comprising means for generating at least one visual stimulus and means, in response to the display of the visual stimulus, for speaking the name of or the sound associated with the visual stimulus using the speech recognition device.
451. The system of claim 435, wherein the tests further comprise a word decoder test comprising means for displaying a visual stimulus to the user and means, in response to the visual stimulus, for receiving a response from the user to determine the ability to read the visual stimulus.
452. The system of claim 435, wherein the tests further comprise a fluency test comprising means for generating a plurality of visual stimuli and means for receiving a user's response to the visual stimuli within a predetermined time interval to determine the user's ability to read and understand the visual stimuli.
453. The system of claim 436, wherein the motivation means further comprises means for generating a graphical image and an associated sound to motivate the user to complete the tests.
454. The system of claim 453, wherein the motivation means further comprises means for generating the graphical image and associated sound after a first predetermined number of tests are completed and means for generating another graphical image and associated sound after a second predetermined number of tests are completed.
455. The system of claim 454, wherein the generating means further comprises means for generating a graphical image indicating the number of tests remaining to be completed.
456. The system of claim 454, wherein the motivation means further comprises means for generating the graphical image and associated sound after a third predetermined number of tests.
457. The system of claim 435 further comprising a recommender for recommending, based on the scores of the one or more tests, one or more training modules for improving a reading or pre-reading skill of the individual as indicated by the score of the tests.
458. The system of claim 457, wherein the recommender further comprises means for downloading the recommended training module to the student computer.
459. The system of claim 457, wherein the recommender further comprises means for storing the incorrect responses to the one or more tests and means for generating a training module recommendation based on the incorrect responses.
460. The system of claim 459, wherein the recommender further comprises means for comparing each incorrect response to one or more error measures to identify an error associated with each incorrect response and means for generating a training module recommendation based on the identified error.
461. The system of claim 460, wherein the comparing means further comprises means for identifying one or more errors for each incorrect response.
462. The system of claim 460 wherein the recommender further comprises means for identifying a deficient skill by comparing the identified error to a deficient skill rule and means for generating a training module recommendation based on the identified deficient skill.
463. The system of claim 435, wherein the server further comprises means for dynamically generating one or more data reports that illustrate the data associated with the one or more tests.
464. The system of claim 463, wherein the data reports further comprises means for displaying the test results simultaneously for one or more students.
465. The system of claim 464, wherein the displaying means further comprises means for displaying the percentage of correct responses for a test.
466. The system of claim 464, wherein the displaying means further comprises means for displaying the results for one or more different tests for each user wherein the results for each test are displayed in a different color.
467. The system of claim 463, wherein the data report generator further comprises a user interface for browsing other test data for a user.
468. The system of claim 463, wherein the data report generator further comprises means for determining the number of user test results shown.
469. The system of claim 463, wherein the data report generator further comprises means for permitting the user to select a data report print format.
470. The system of claim 463, wherein the data report generator further comprises means for permitting the user to select a data report display format.
471. The system of claim 463, wherein the data report generator further comprises means for generating a data report for one or more students in a class, means for generating a data report for one or more classes each having one or more students and means for generating a data report for a school having one or more classes.
472. The system of claim 435, wherein the teacher station further comprises means for communicating the response for each test for each student back to the server computer.
473. The system of claim 472, wherein the teacher station further comprises means for detecting a break in the communication between the teacher station and the server computer and means for resending any test data that was not sent due to the communications break.
474. The system of claim 435, wherein each student computer further comprises means for connecting to the server computer and means for downloading the resources necessary to execute the current test when the test is started.
475. The system of claim 435, wherein the teacher station further comprises means for generating a classroom layout showing an icon for each student computer.
476. The system of claim 475, wherein the teacher station further comprises means for monitoring each student's test progress and controlling each student computer.
477. The system of claim 475, wherein the teacher station further comprises means for collecting student test data.
478. The system of claim 475, wherein generating the layout further comprises means for coloring each icon depending on the state of testing for the particular student computer.
479. The system of claim 435, wherein the teacher station further comprises means for generating one or more separate accounts for the diagnostic system, wherein the accounts include an account manager for managing an entire school or district of users of the diagnostic system, a lead teacher for managing the use of the diagnostic system by one or more classroom teachers in a particular school and one or more classroom teachers who each administer the diagnostic testing for a particular class of students.
480. The system of claim 479, wherein the teacher station further comprises means for the account manager to register one or more lead teachers and means for each lead teacher to register one or more classroom teachers who administer the test.
481. The system of claim 480, wherein the account manager has access to testing data for the entire district, wherein the lead teacher has access to testing data for the entire school and each classroom teacher has access to testing data for the class of the classroom teacher.
US10/713,755 1999-07-09 2003-11-14 Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing Abandoned US20040072131A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/713,755 US20040072131A1 (en) 1999-07-09 2003-11-14 Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US09/350,791 US6299452B1 (en) 1999-07-09 1999-07-09 Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing
US09/912,681 US20020164563A1 (en) 1999-07-09 2001-07-24 Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing
US10/713,755 US20040072131A1 (en) 1999-07-09 2003-11-14 Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
US09/350,791 Continuation-In-Part US6299452B1 (en) 1999-07-09 1999-07-09 Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing
US09/912,681 Division US20020164563A1 (en) 1999-07-09 2001-07-24 Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing

Publications (1)

Publication Number Publication Date
US20040072131A1 true US20040072131A1 (en) 2004-04-15

Family ID: 23378198

Family Applications (9)

Application Number Title Priority Date Filing Date
US09/350,791 Expired - Lifetime US6299452B1 (en) 1999-07-09 1999-07-09 Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing
US09/912,681 Abandoned US20020164563A1 (en) 1999-07-09 2001-07-24 Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing
US09/939,014 Abandoned US20020001791A1 (en) 1999-07-09 2001-08-24 Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing
US09/973,481 Abandoned US20020076677A1 (en) 1999-07-09 2001-10-08 Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing
US10/713,755 Abandoned US20040072131A1 (en) 1999-07-09 2003-11-14 Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing
US10/713,676 Abandoned US20040115600A1 (en) 1999-07-09 2003-11-14 Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing
US10/713,745 Abandoned US20050106540A1 (en) 1999-07-09 2003-11-14 Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing
US10/713,695 Abandoned US20040175679A1 (en) 1999-07-09 2003-11-14 Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing
US10/740,862 Abandoned US20040137412A1 (en) 1999-07-09 2003-12-18 Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing

Family Applications Before (4)

Application Number Title Priority Date Filing Date
US09/350,791 Expired - Lifetime US6299452B1 (en) 1999-07-09 1999-07-09 Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing
US09/912,681 Abandoned US20020164563A1 (en) 1999-07-09 2001-07-24 Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing
US09/939,014 Abandoned US20020001791A1 (en) 1999-07-09 2001-08-24 Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing
US09/973,481 Abandoned US20020076677A1 (en) 1999-07-09 2001-10-08 Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing

Family Applications After (4)

Application Number Title Priority Date Filing Date
US10/713,676 Abandoned US20040115600A1 (en) 1999-07-09 2003-11-14 Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing
US10/713,745 Abandoned US20050106540A1 (en) 1999-07-09 2003-11-14 Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing
US10/713,695 Abandoned US20040175679A1 (en) 1999-07-09 2003-11-14 Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing
US10/740,862 Abandoned US20040137412A1 (en) 1999-07-09 2003-12-18 Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing

Country Status (3)

Country Link
US (9) US6299452B1 (en)
AU (1) AU6076800A (en)
WO (1) WO2001004863A1 (en)

Families Citing this family (158)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6146147A (en) * 1998-03-13 2000-11-14 Cognitive Concepts, Inc. Interactive sound awareness skills improvement system and method
GB2338333B (en) * 1998-06-09 2003-02-26 Aubrey Nunes Computer assisted learning system
US6801751B1 (en) * 1999-11-30 2004-10-05 Leapfrog Enterprises, Inc. Interactive learning appliance
US6882824B2 (en) 1998-06-10 2005-04-19 Leapfrog Enterprises, Inc. Interactive teaching toy
US6676412B1 (en) 1999-10-08 2004-01-13 Learning By Design, Inc. Assessment of spelling and related skills
US6755657B1 (en) 1999-11-09 2004-06-29 Cognitive Concepts, Inc. Reading and spelling skill diagnosis and training system and method
US9520069B2 (en) 1999-11-30 2016-12-13 Leapfrog Enterprises, Inc. Method and system for providing content for learning appliances over an electronic communication medium
US9640083B1 (en) 2002-02-26 2017-05-02 Leapfrog Enterprises, Inc. Method and system for providing content for learning appliances over an electronic communication medium
US6681098B2 (en) * 2000-01-11 2004-01-20 Performance Assessment Network, Inc. Test administration system using the internet
CA2396509A1 (en) * 2000-01-12 2001-07-19 Avis Gustason Methods and systems for multimedia education
JP4004218B2 (en) * 2000-09-20 2007-11-07 株式会社リコー Education support system and target presentation method
US6726486B2 (en) * 2000-09-28 2004-04-27 Scientific Learning Corp. Method and apparatus for automated training of language learning skills
JP2002108185A (en) * 2000-09-29 2002-04-10 Akihiro Kawamura Information providing device, information providing system, and information providing method
WO2002033946A1 (en) * 2000-10-16 2002-04-25 Eliza Corporation Method of and system for providing adaptive respondent training in a speech recognition application
US6544039B2 (en) * 2000-12-01 2003-04-08 Autoskill International Inc. Method of teaching reading
US6523007B2 (en) * 2001-01-31 2003-02-18 Headsprout, Inc. Teaching method and system
US6789047B1 (en) 2001-04-17 2004-09-07 Unext.Com Llc Method and system for evaluating the performance of an instructor of an electronic course
US6730041B2 (en) * 2001-04-18 2004-05-04 Diane Dietrich Learning disabilities diagnostic system
US7286793B1 (en) * 2001-05-07 2007-10-23 Miele Frank R Method and apparatus for evaluating educational performance
US6953344B2 (en) * 2001-05-30 2005-10-11 Uri Shafrir Meaning equivalence instructional methodology (MEIM)
US6790045B1 (en) * 2001-06-18 2004-09-14 Unext.Com Llc Method and system for analyzing student performance in an electronic course
US7416488B2 (en) * 2001-07-18 2008-08-26 Duplicate (2007) Inc. System and method for playing a game of skill
US8956164B2 (en) 2001-08-02 2015-02-17 Interethnic, Llc Method of teaching reading and spelling with a progressive interlocking correlative system
US7101185B2 (en) * 2001-09-26 2006-09-05 Scientific Learning Corporation Method and apparatus for automated training of language learning skills
US10347145B1 (en) 2001-10-05 2019-07-09 Vision Works Ip Corporation Method and apparatus for periodically questioning a user using a computer system or other device to facilitate memorization and learning of information
US7632101B2 (en) * 2001-10-05 2009-12-15 Vision Works Ip Corporation Method and apparatus for periodically questioning a user using a computer system or other device to facilitate memorization and learning of information
US20030104344A1 (en) * 2001-12-03 2003-06-05 Sable Paula H. Structured observation system for early literacy assessment
US7311524B2 (en) * 2002-01-17 2007-12-25 Harcourt Assessment, Inc. System and method assessing student achievement
US6953343B2 (en) * 2002-02-06 2005-10-11 Ordinate Corporation Automatic reading system and methods
US20030170596A1 (en) * 2002-03-07 2003-09-11 Blank Marion S. Literacy system
US8210850B2 (en) 2002-03-07 2012-07-03 Blank Marion S Literacy education system for students with autistic spectrum disorders (ASD)
US8128406B2 (en) * 2002-03-15 2012-03-06 Wake Forest University Predictive assessment of reading
US7016842B2 (en) * 2002-03-26 2006-03-21 Sbc Technology Resources, Inc. Method and system for evaluating automatic speech recognition telephone services
US6676413B1 (en) * 2002-04-17 2004-01-13 Voyager Expanded Learning, Inc. Method and system for preventing illiteracy in substantially all members of a predetermined set
US20050100875A1 (en) * 2002-04-17 2005-05-12 Best Emery R. Method and system for preventing illiteracy in struggling members of a predetermined set of students
US20030232317A1 (en) * 2002-04-22 2003-12-18 Patz Richard J. Method of presenting an assessment
US20030235806A1 (en) * 2002-06-19 2003-12-25 Wen Say Ling Conversation practice system with dynamically adjustable play speed and the method thereof
US7305336B2 (en) * 2002-08-30 2007-12-04 Fuji Xerox Co., Ltd. System and method for summarization combining natural language generation with structural analysis
US20040049391A1 (en) * 2002-09-09 2004-03-11 Fuji Xerox Co., Ltd. Systems and methods for dynamic reading fluency proficiency assessment
US7455522B2 (en) * 2002-10-04 2008-11-25 Fuji Xerox Co., Ltd. Systems and methods for dynamic reading fluency instruction and improvement
US6808392B1 (en) 2002-11-27 2004-10-26 Donna L. Walton System and method of developing a curriculum for stimulating cognitive processing
US7369985B2 (en) * 2003-02-11 2008-05-06 Fuji Xerox Co., Ltd. System and method for dynamically determining the attitude of an author of a natural language document
US7424420B2 (en) * 2003-02-11 2008-09-09 Fuji Xerox Co., Ltd. System and method for dynamically determining the function of a lexical item based on context
US7363213B2 (en) * 2003-02-11 2008-04-22 Fuji Xerox Co., Ltd. System and method for dynamically determining the function of a lexical item based on discourse hierarchy structure
US7260519B2 (en) * 2003-03-13 2007-08-21 Fuji Xerox Co., Ltd. Systems and methods for dynamically determining the attitude of a natural language speaker
CA2470588A1 (en) * 2003-06-09 2004-12-09 Blue Diamond International Capital Inc. Pull-tab skill tournament poker
CN100585662C (en) * 2003-06-20 2010-01-27 汤姆森普罗梅特里克公司 System and method for computer based testing using cache and cacheable objects to expand functionality of a test driver application
US7695284B1 (en) * 2003-07-11 2010-04-13 Vernon Mears System and method for educating using multimedia interface
US20050153263A1 (en) * 2003-10-03 2005-07-14 Scientific Learning Corporation Method for developing cognitive skills in reading
US20060051727A1 (en) * 2004-01-13 2006-03-09 Posit Science Corporation Method for enhancing memory and cognition in aging adults
US20060177805A1 (en) * 2004-01-13 2006-08-10 Posit Science Corporation Method for enhancing memory and cognition in aging adults
US20070111173A1 (en) * 2004-01-13 2007-05-17 Posit Science Corporation Method for modulating listener attention toward synthetic formant transition cues in speech stimuli for training
US20070065789A1 (en) * 2004-01-13 2007-03-22 Posit Science Corporation Method for enhancing memory and cognition in aging adults
US8210851B2 (en) * 2004-01-13 2012-07-03 Posit Science Corporation Method for modulating listener attention toward synthetic formant transition cues in speech stimuli for training
US20060073452A1 (en) * 2004-01-13 2006-04-06 Posit Science Corporation Method for enhancing memory and cognition in aging adults
US20060105307A1 (en) * 2004-01-13 2006-05-18 Posit Science Corporation Method for enhancing memory and cognition in aging adults
US20050191603A1 (en) * 2004-02-26 2005-09-01 Scientific Learning Corporation Method and apparatus for automated training of language learning skills
US20050208459A1 (en) * 2004-03-16 2005-09-22 Zechary Chang Computer game combined progressive language learning system and method thereof
US8498567B2 (en) * 2004-04-23 2013-07-30 Alchemy Training Systems, Inc. Multimedia training system and apparatus
WO2006009727A2 (en) * 2004-06-16 2006-01-26 Harcourt Assessment, Inc. Language disorder assessment and associated methods
US20060008781A1 (en) * 2004-07-06 2006-01-12 Ordinate Corporation System and method for measuring reading skills
WO2006007632A1 (en) * 2004-07-16 2006-01-26 Era Centre Pty Ltd A method for diagnostic home testing of hearing impairment, and related developmental problems in infants, toddlers, and children
US20060046232A1 (en) * 2004-09-02 2006-03-02 Eran Peter Methods for acquiring language skills by mimicking natural environment learning
US8033831B2 (en) * 2004-11-22 2011-10-11 Bravobrava L.L.C. System and method for programmatically evaluating and aiding a person learning a new language
US8272874B2 (en) * 2004-11-22 2012-09-25 Bravobrava L.L.C. System and method for assisting language learning
WO2006057896A2 (en) * 2004-11-22 2006-06-01 Bravobrava, L.L.C. System and method for assisting language learning
US8221126B2 (en) * 2004-11-22 2012-07-17 Bravobrava L.L.C. System and method for performing programmatic language learning tests and evaluations
US8764455B1 (en) * 2005-05-09 2014-07-01 Altis Avante Corp. Comprehension instruction system and method
WO2006125347A1 (en) * 2005-05-27 2006-11-30 Intel Corporation A homework assignment and assessment system for spoken language education and testing
US8439684B2 (en) * 2005-08-31 2013-05-14 School Specialty, Inc. Method of teaching reading
US20070134635A1 (en) * 2005-12-13 2007-06-14 Posit Science Corporation Cognitive training using formant frequency sweeps
US20070172810A1 (en) * 2006-01-26 2007-07-26 Let's Go Learn, Inc. Systems and methods for generating reading diagnostic assessments
US20100092931A1 (en) * 2006-01-26 2010-04-15 Mccallum Richard Douglas Systems and methods for generating reading diagnostic assessments
US7933852B2 (en) * 2006-06-09 2011-04-26 Scientific Learning Corporation Method and apparatus for developing cognitive skills
US20070298383A1 (en) * 2006-06-09 2007-12-27 Scientific Learning Corporation Method and apparatus for building accuracy and fluency in phonemic analysis, decoding, and spelling skills
US20070298384A1 (en) * 2006-06-09 2007-12-27 Scientific Learning Corporation Method and apparatus for building accuracy and fluency in recognizing and constructing sentence structures
US20070298385A1 (en) * 2006-06-09 2007-12-27 Scientific Learning Corporation Method and apparatus for building skills in constructing and organizing multiple-paragraph stories and expository passages
US7984003B2 (en) * 2006-07-21 2011-07-19 Nathaniel Williams Method and system for automated learning through repetition
JP4904971B2 (en) * 2006-08-01 2012-03-28 ヤマハ株式会社 Performance learning setting device and program
US20080096172A1 (en) * 2006-08-03 2008-04-24 Sara Carlstead Brumfield Infant Language Acquisition Using Voice Recognition Software
US20080046232A1 (en) * 2006-08-18 2008-02-21 Jan Groppe Method and System for E-tol English language test online
US20080070202A1 (en) * 2006-08-31 2008-03-20 Fisher Jason B Reading Comprehension System and Associated Methods
US9230445B2 (en) * 2006-09-11 2016-01-05 Houghton Mifflin Harcourt Publishing Company Systems and methods of a test taker virtual waiting room
US20080102430A1 (en) * 2006-09-11 2008-05-01 Rogers Timothy A Remote student assessment using dynamic animation
US9390629B2 (en) 2006-09-11 2016-07-12 Houghton Mifflin Harcourt Publishing Company Systems and methods of data visualization in an online proctoring interface
US10861343B2 (en) * 2006-09-11 2020-12-08 Houghton Mifflin Harcourt Publishing Company Polling for tracking online test taker status
US9111455B2 (en) * 2006-09-11 2015-08-18 Houghton Mifflin Harcourt Publishing Company Dynamic online test content generation
US7886029B2 (en) * 2006-09-11 2011-02-08 Houghton Mifflin Harcourt Publishing Company Remote test station configuration
US20080102432A1 (en) * 2006-09-11 2008-05-01 Rogers Timothy A Dynamic content and polling for online test taker accomodations
US9892650B2 (en) 2006-09-11 2018-02-13 Houghton Mifflin Harcourt Publishing Company Recovery of polled data after an online test platform failure
US9142136B2 (en) 2006-09-11 2015-09-22 Houghton Mifflin Harcourt Publishing Company Systems and methods for a logging and printing function of an online proctoring interface
US9111456B2 (en) * 2006-09-11 2015-08-18 Houghton Mifflin Harcourt Publishing Company Dynamically presenting practice screens to determine student preparedness for online testing
US8672682B2 (en) * 2006-09-28 2014-03-18 Howard A. Engelsen Conversion of alphabetic words into a plurality of independent spellings
US9355568B2 (en) * 2006-11-13 2016-05-31 Joyce S. Stone Systems and methods for providing an electronic reader having interactive and educational features
US8113842B2 (en) * 2006-11-13 2012-02-14 Stone Joyce S Systems and methods for providing educational structures and tools
US20080133816A1 (en) * 2006-12-05 2008-06-05 Conopco Inc, D/B/A Unilever Goal shielding interface
US8000955B2 (en) * 2006-12-20 2011-08-16 Microsoft Corporation Generating Chinese language banners
US20080160487A1 (en) * 2006-12-29 2008-07-03 Fairfield Language Technologies Modularized computer-aided language learning method and system
US8433576B2 (en) * 2007-01-19 2013-04-30 Microsoft Corporation Automatic reading tutoring with parallel polarized language modeling
US20100159437A1 (en) * 2008-12-19 2010-06-24 Xerox Corporation System and method for recommending educational resources
US8457544B2 (en) * 2008-12-19 2013-06-04 Xerox Corporation System and method for recommending educational resources
US8699939B2 (en) * 2008-12-19 2014-04-15 Xerox Corporation System and method for recommending educational resources
US8725059B2 (en) * 2007-05-16 2014-05-13 Xerox Corporation System and method for recommending educational resources
US20080311547A1 (en) * 2007-06-18 2008-12-18 Jay Samuels System and methods for a reading fluency measure
NZ582858A (en) * 2007-06-26 2012-11-30 Learningscience Pty Ltd Teaching method using visual and auditory presentation of words, clues and scoring
EP2019383A1 (en) 2007-07-25 2009-01-28 Dybuster AG Device and method for computer-assisted learning
US8306822B2 (en) * 2007-09-11 2012-11-06 Microsoft Corporation Automatic reading tutoring using dynamically built language model
US20090226872A1 (en) * 2008-01-16 2009-09-10 Nicholas Langdon Gunther Electronic grading system
US20090197233A1 (en) * 2008-02-06 2009-08-06 Ordinate Corporation Method and System for Test Administration and Management
US8639177B2 (en) * 2008-05-08 2014-01-28 Microsoft Corporation Learning assessment and programmatic remediation
US20100075289A1 (en) * 2008-09-19 2010-03-25 International Business Machines Corporation Method and system for automated content customization and delivery
US20100075290A1 (en) * 2008-09-25 2010-03-25 Xerox Corporation Automatic Educational Assessment Service
US20100075291A1 (en) * 2008-09-25 2010-03-25 Deyoung Dennis C Automatic educational assessment service
US20100092933A1 (en) * 2008-10-15 2010-04-15 William Kuchera System and method for an interactive phoneme video game
US20100092930A1 (en) * 2008-10-15 2010-04-15 Martin Fletcher System and method for an interactive storytelling game
US20100157345A1 (en) * 2008-12-22 2010-06-24 Xerox Corporation System for authoring educational assessments
US8494857B2 (en) 2009-01-06 2013-07-23 Regents Of The University Of Minnesota Automatic measurement of speech fluency
US20100190143A1 (en) * 2009-01-28 2010-07-29 Time To Know Ltd. Adaptive teaching and learning utilizing smart digital learning objects
US8702428B2 (en) * 2009-04-13 2014-04-22 Sonya Davey Age and the human ability to decode words
CA2773476A1 (en) * 2009-09-08 2011-03-17 Wireless Generation, Inc. Associating diverse content
US20110081640A1 (en) * 2009-10-07 2011-04-07 Hsia-Yen Tseng Systems and Methods for Protecting Websites from Automated Processes Using Visually-Based Children's Cognitive Tests
US20110123967A1 (en) * 2009-11-24 2011-05-26 Xerox Corporation Dialog system for comprehension evaluation
US8768241B2 (en) * 2009-12-17 2014-07-01 Xerox Corporation System and method for representing digital assessments
US8356068B2 (en) 2010-01-06 2013-01-15 Alchemy Systems, L.P. Multimedia training system and apparatus
US20110195389A1 (en) * 2010-02-08 2011-08-11 Xerox Corporation System and method for tracking progression through an educational curriculum
US8521077B2 (en) 2010-07-21 2013-08-27 Xerox Corporation System and method for detecting unauthorized collaboration on educational assessments
US8727781B2 (en) * 2010-11-15 2014-05-20 Age Of Learning, Inc. Online educational system with multiple navigational modes
US9324240B2 (en) * 2010-12-08 2016-04-26 Age Of Learning, Inc. Vertically integrated mobile educational system
US8825642B2 (en) 2011-01-27 2014-09-02 Electronic Entertainment Design And Research Game recommendation engine for mapping games to disabilities
WO2013035097A2 (en) * 2011-09-07 2013-03-14 Carmel-Haifa University Economic System and method for evaluating and training academic skills
US8731454B2 (en) 2011-11-21 2014-05-20 Age Of Learning, Inc. E-learning lesson delivery platform
US20130157245A1 (en) * 2011-12-15 2013-06-20 Microsoft Corporation Adaptively presenting content based on user knowledge
US9576593B2 (en) 2012-03-15 2017-02-21 Regents Of The University Of Minnesota Automated verbal fluency assessment
US9254437B2 (en) 2012-04-25 2016-02-09 Electronic Entertainment Design And Research Interactive gaming analysis systems and methods
US20150216414A1 (en) * 2012-09-12 2015-08-06 The Schepens Eye Research Institute, Inc. Measuring Information Acquisition Using Free Recall
US8755737B1 (en) 2012-12-24 2014-06-17 Pearson Education, Inc. Fractal-based decision engine for intervention
US20140234809A1 (en) * 2013-02-15 2014-08-21 Matthew Colvard Interactive learning system
US9308445B1 (en) 2013-03-07 2016-04-12 Posit Science Corporation Neuroplasticity games
CN112992316A (en) * 2013-10-31 2021-06-18 P-S·哈鲁塔 Computing techniques for diagnosing and treating language-related disorders
US20150134399A1 (en) * 2013-11-11 2015-05-14 International Business Machines Corporation Information model for supply chain risk decision making
US20150294580A1 (en) * 2014-04-11 2015-10-15 Aspen Performance Technologies System and method for promoting fluid intellegence abilities in a subject
US9875348B2 (en) 2014-07-21 2018-01-23 Green Grade Solutions Ltd. E-learning utilizing remote proctoring and analytical metrics captured during training and testing
US9386950B1 (en) * 2014-12-30 2016-07-12 Online Reading Tutor Services Inc. Systems and methods for detecting dyslexia
US20160232805A1 (en) * 2015-02-10 2016-08-11 Xerox Corporation Method and apparatus for determining patient preferences to promote medication adherence
US10431112B2 (en) * 2016-10-03 2019-10-01 Arthur Ward Computerized systems and methods for categorizing student responses and using them to update a student model during linguistic education
US10065118B1 (en) 2017-07-07 2018-09-04 ExQ, LLC Data processing systems for processing and analyzing data regarding self-awareness and executive function
US10870058B2 (en) 2017-07-07 2020-12-22 ExQ, LLC Data processing systems for processing and analyzing data regarding self-awareness and executive function
US11373546B2 (en) 2017-07-07 2022-06-28 ExQ, LLC Data processing systems for processing and analyzing data regarding self-awareness and executive function
US10191830B1 (en) 2017-07-07 2019-01-29 ExQ, LLC Data processing systems for processing and analyzing data regarding self-awareness and executive function
US10600018B2 (en) 2017-07-07 2020-03-24 ExQ, LLC Data processing systems for processing and analyzing data regarding self-awareness and executive function
US10872538B2 (en) 2017-07-07 2020-12-22 ExQ, LLC Data processing systems for processing and analyzing data regarding self-awareness and executive function
US20210142685A1 (en) * 2017-08-23 2021-05-13 Aparna Nalinkumar RAMANATHAN Literacy awareness skills tools implemented via smart speakers and conversational assistants on smart devices
US20200043357A1 (en) * 2017-09-28 2020-02-06 Jamie Lynn Juarez System and method of using interactive games and typing for education with an integrated applied neuroscience and applied behavior analysis approach
CN110136543A (en) * 2019-04-26 2019-08-16 北京大米科技有限公司 Online teaching interactive approach, relevant device, storage medium and system
CN110211438A (en) * 2019-05-20 2019-09-06 广州市吉星信息科技有限公司 A kind of wrong answer list generation system based on wireless video terminal
US20200381126A1 (en) * 2019-06-03 2020-12-03 Pearson Education, Inc. Diagnostic probability calculator
US20230282130A1 (en) * 2021-10-14 2023-09-07 PlaBook Reading level determination and feedback
US11545043B1 (en) 2022-02-18 2023-01-03 Marlyn Andrew Morgan Interface for educational tool

Family Cites Families (69)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US586308A (en) * 1897-07-13 Skiving-machine
US633813A (en) * 1898-11-01 1899-09-26 Thomas Croston Rotary engine.
US2514289A (en) * 1946-06-11 1950-07-04 Us Navy Recognition trainer
US3799146A (en) * 1971-10-06 1974-03-26 Neuro Data Inc Hearing testing method and audiometer
US4122452A (en) * 1974-03-12 1978-10-24 Sanders Associates, Inc. Jamming signal cancellation system
US4166452A (en) 1976-05-03 1979-09-04 Generales Constantine D J Jr Apparatus for testing human responses to stimuli
US4285517A (en) 1979-02-09 1981-08-25 Marvin Glass & Associates Adaptive microcomputer controlled game
US4363482A (en) 1981-02-11 1982-12-14 Goldfarb Adolph E Sound-responsive electronic game
US4457719A (en) * 1982-05-25 1984-07-03 Texas Instruments Incorporated Electronic learning aid for providing sequencing and spelling exercises
US4483681A (en) * 1983-02-07 1984-11-20 Weinblatt Lee S Method and apparatus for determining viewer response to visual stimuli
JPS62281986A (en) 1986-05-30 1987-12-07 株式会社トミー Sound game apparatus
US4884972A (en) * 1986-11-26 1989-12-05 Bright Star Technology, Inc. Speech synchronized animation
US5456607A (en) * 1989-12-13 1995-10-10 Antoniak; Peter R. Knowledge testing computer game method employing the repositioning of screen objects to represent data relationships
US5149084A (en) 1990-02-20 1992-09-22 Proform Fitness Products, Inc. Exercise machine with motivational display
US5122952A (en) * 1990-10-22 1992-06-16 Minkus Leslie S Method and apparatus for automated learning tool selection for child development
CA2069355C (en) 1991-06-07 1998-10-06 Robert C. Pike Global user interface
US5692906A (en) 1992-04-01 1997-12-02 Corder; Paul R. Method of diagnosing and remediating a deficiency in communications skills
US5302132A (en) * 1992-04-01 1994-04-12 Corder Paul R Instructional system and method for improving communication skills
WO1994015272A1 (en) * 1992-12-22 1994-07-07 Morgan Michael W Pen-based electronic teaching system
US5562453A (en) * 1993-02-02 1996-10-08 Wen; Sheree H.-R. Adaptive biofeedback speech tutor toy
US6186794B1 (en) * 1993-04-02 2001-02-13 Breakthrough To Literacy, Inc. Apparatus for interactive adaptive learning by an individual through at least one of a stimuli presentation device and a user perceivable display
US5421731A (en) * 1993-05-26 1995-06-06 Walker; Susan M. Method for teaching reading and spelling
US5513126A (en) 1993-10-04 1996-04-30 Xerox Corporation Network having selectively accessible recipient prioritized communication channel profiles
US5475826A (en) * 1993-11-19 1995-12-12 Fischer; Addison M. Method for protecting a volatile file using a single hash
US6334779B1 (en) * 1994-03-24 2002-01-01 Ncr Corporation Computer-assisted curriculum
US6336813B1 (en) * 1994-03-24 2002-01-08 Ncr Corporation Computer-assisted education using video conferencing
US5694546A (en) 1994-05-31 1997-12-02 Reisman; Richard R. System for automatic unattended electronic information transport between a server and a client by a vendor provided transport software with a manifest list
US6009397A (en) 1994-07-22 1999-12-28 Siegel; Steven H. Phonic engine
WO1996018184A1 (en) * 1994-12-08 1996-06-13 The Regents Of The University Of California Method and device for enhancing the recognition of speech among speech-impaired individuals
US5584698A (en) * 1995-05-15 1996-12-17 Rowland; Linda C. Method and apparatus for improving the reading efficiency of a dyslexic
GB9517808D0 (en) 1995-08-31 1995-11-01 Philips Electronics Uk Ltd Interactive entertainment personalisation
WO1997021201A1 (en) * 1995-12-04 1997-06-12 Bernstein Jared C Method and apparatus for combined information from speech signals for adaptive interaction in teaching and testing
US5779486A (en) * 1996-03-19 1998-07-14 Ho; Chi Fai Methods and apparatus to assess and enhance a student's understanding in a subject
US5863208A (en) 1996-07-02 1999-01-26 Ho; Chi Fai Learning system and method based on review
US5649826A (en) 1996-03-19 1997-07-22 Sum Total, Inc. Method and device for teaching language
US5727951A (en) 1996-05-28 1998-03-17 Ho; Chi Fai Relationship-based computer-aided-educational system
US5743746A (en) 1996-04-17 1998-04-28 Ho; Chi Fai Reward enriched learning system and method
US5743743A (en) 1996-09-03 1998-04-28 Ho; Chi Fai Learning method and system that restricts entertainment
IL120622A (en) * 1996-04-09 2000-02-17 Raytheon Co System and method for multimodal interactive speech and language training
US5727950A (en) 1996-05-22 1998-03-17 Netsage Corporation Agent based instruction system and method
US5762503A (en) 1996-06-13 1998-06-09 Smart Productivity System for use as a team building exercise
US5823781A (en) * 1996-07-29 1998-10-20 Electronic Data Systems Corporation Electronic mentor training system and method
US5944530A (en) 1996-08-13 1999-08-31 Ho; Chi Fai Learning method and system that consider a student's concentration level
US5820838A (en) * 1996-09-27 1998-10-13 Foster Wheeler Energia Oy Method and an apparatus for injection of NOx reducing agent
US5836771A (en) 1996-12-02 1998-11-17 Ho; Chi Fai Learning method and system based on questioning
US5907831A (en) * 1997-04-04 1999-05-25 Lotvin; Mikhail Computer apparatus and methods supporting different categories of users
US5920838A (en) 1997-06-02 1999-07-06 Carnegie Mellon University Reading and pronunciation tutor
US6017219A (en) * 1997-06-18 2000-01-25 International Business Machines Corporation System and method for interactive reading and language instruction
US6098033A (en) * 1997-07-31 2000-08-01 Microsoft Corporation Determining similarity between words
US6056551A (en) * 1997-10-03 2000-05-02 Marasco; Bernie Methods and apparatus for computer aided reading training
US6113393A (en) * 1997-10-29 2000-09-05 Neuhaus; Graham Rapid automatized naming method and apparatus
US6422869B1 (en) * 1997-11-14 2002-07-23 The Regents Of The University Of California Methods and apparatus for assessing and improving processing of temporal information in human
US5927988A (en) * 1997-12-17 1999-07-27 Jenkins; William M. Method and apparatus for training of sensory and perceptual systems in LLI subjects
US6159014A (en) * 1997-12-17 2000-12-12 Scientific Learning Corp. Method and apparatus for training of cognitive and memory systems in humans
US6019607A (en) * 1997-12-17 2000-02-01 Jenkins; William M. Method and apparatus for training of sensory and perceptual systems in LLI systems
US5957699A (en) 1997-12-22 1999-09-28 Scientific Learning Corporation Remote computer-assisted professionally supervised teaching system
US6134529A (en) * 1998-02-09 2000-10-17 Syracuse Language Systems, Inc. Speech recognition apparatus and method for learning
US6074212A (en) * 1998-02-11 2000-06-13 Cogliano; Mary Ann Sequence learning toy
US6227863B1 (en) * 1998-02-18 2001-05-08 Donald Spector Phonics training computer system for teaching spelling and reading
US6146147A (en) * 1998-03-13 2000-11-14 Cognitive Concepts, Inc. Interactive sound awareness skills improvement system and method
US6077085A (en) * 1998-05-19 2000-06-20 Intellectual Reserve, Inc. Technology assisted learning
US6336089B1 (en) * 1998-09-22 2002-01-01 Michael Everding Interactive digital phonetic captioning program
US6511324B1 (en) * 1998-10-07 2003-01-28 Cognitive Concepts, Inc. Phonological awareness, phonological processing, and reading skill training system and method
US6036496A (en) * 1998-10-07 2000-03-14 Scientific Learning Corporation Universal screen for language learning impaired subjects
JP2000152593A (en) * 1998-11-06 2000-05-30 Mitsumi Electric Co Ltd Stepping motor
US6149441A (en) * 1998-11-06 2000-11-21 Technology For Connecticut, Inc. Computer-based educational system
US6305942B1 (en) * 1998-11-12 2001-10-23 Metalearning Systems, Inc. Method and apparatus for increased language fluency through interactive comprehension, recognition and generation of sounds, words and sentences
US6296489B1 (en) * 1999-06-23 2001-10-02 Heuristix System for sound file recording, analysis, and archiving via the internet for language training and other applications
US6299452B1 (en) * 1999-07-09 2001-10-09 Cognitive Concepts, Inc. Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6112049A (en) * 1997-10-21 2000-08-29 The Riverside Publishing Company Computer network based testing system
US5868683A (en) * 1997-10-24 1999-02-09 Scientific Learning Corporation Techniques for predicting reading deficit based on acoustical measurements
US6219669B1 (en) * 1997-11-13 2001-04-17 Hyperspace Communications, Inc. File transfer system using dynamically assigned ports
US6411796B1 (en) * 1997-11-14 2002-06-25 Sony Corporation Computer assisted learning system
US6353447B1 (en) * 1999-01-26 2002-03-05 Microsoft Corporation Study planner system and method
US6704541B1 (en) * 2000-12-06 2004-03-09 Unext.Com, L.L.C. Method and system for tracking the progress of students in a class

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020164563A1 (en) * 1999-07-09 2002-11-07 Janet Wasowicz Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing
US20060168134A1 (en) * 2001-07-18 2006-07-27 Wireless Generation, Inc. Method and System for Real-Time Observation Assessment
US8667400B2 (en) 2001-07-18 2014-03-04 Amplify Education, Inc. System and method for real-time observation assessment
US8997004B2 (en) 2001-07-18 2015-03-31 Amplify Education, Inc. System and method for real-time observation assessment
US8231389B1 (en) 2004-04-29 2012-07-31 Wireless Generation, Inc. Real-time observation assessment with phoneme segment capturing and scoring
US20080096171A1 (en) * 2006-10-13 2008-04-24 Deborah Movahhedi System and method for improving reading skills
US10332417B1 (en) * 2014-09-22 2019-06-25 Foundations in Learning, Inc. System and method for assessments of student deficiencies relative to rules-based systems, including but not limited to, ortho-phonemic difficulties to assist reading and literacy skills
US20220383895A1 (en) * 2021-05-28 2022-12-01 Metametrics, Inc. Assessing Reading Ability Through Grapheme-Phoneme Correspondence Analysis
WO2022250828A1 (en) * 2021-05-28 2022-12-01 Metametrics, Inc. Assessing reading ability through grapheme-phoneme correspondence analysis
US11908488B2 (en) * 2021-05-28 2024-02-20 Metametrics, Inc. Assessing reading ability through grapheme-phoneme correspondence analysis

Also Published As

Publication number Publication date
US6299452B1 (en) 2001-10-09
WO2001004863A1 (en) 2001-01-18
US20050106540A1 (en) 2005-05-19
US20040175679A1 (en) 2004-09-09
US20020164563A1 (en) 2002-11-07
US20040137412A1 (en) 2004-07-15
US20020001791A1 (en) 2002-01-03
AU6076800A (en) 2001-01-30
US20020076677A1 (en) 2002-06-20
US20040115600A1 (en) 2004-06-17

Similar Documents

Publication Publication Date Title
US20040072131A1 (en) Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing
Hosp et al. The ABCs of CBM: A practical guide to curriculum-based measurement
US6688889B2 (en) Computerized test preparation system employing individually tailored diagnostics and remediation
Stiggins Design and development of performance assessments
Chapelle et al. Assessing language through computer technology
US20070172810A1 (en) Systems and methods for generating reading diagnostic assessments
Dixon-Krauss et al. Development of the dialogic reading inventory of parent-child book reading
US20140335499A1 (en) Method and apparatus for evaluating educational performance
US20120058458A1 (en) Interactive method and system for teaching decision making
WO2006041622A2 (en) Test item development system and method
US20100092931A1 (en) Systems and methods for generating reading diagnostic assessments
Kamei-Hannan et al. Investigating the efficacy of Reading Adventure Time! for improving reading skills in children with visual impairments
US20040224291A1 (en) Predictive assessment of reading
Clark et al. Constructing and evaluating a validation argument for a next-generation alternate assessment
Zhang et al. Developing a listening comprehension problem scale for university students’ metacognitive awareness
Huerta Fourth-grade biliteracy: Searching for instructional footholds
Kinsey The relationship between prosocial behaviors and academic achievement in the primary multiage classroom
Jones Validation of a simulation to evaluate instructional consultation problem identification skill competence
Mansen et al. Evaluation of health assessment skills using a computer videodisc interactive program
Woodbury Computer assisted evaluation of problem solving skills of primary health care providers
Medlin The Use of Equivalence-Based Instruction to Teach Graduate Students Simplified Definitions of Behavior Analytic Terminology
Foorman et al. Texas primary reading inventory (1998 edition)
Cavuto Teacher feedback to students' miscues as a reflection of teacher theoretical orientation
Singleton et al. Computerised identification of dyslexia
Callis The development and validation of the oral/aural tests for the senior primary phase

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION