MULTIPLICITY INTERACTIVE TOY SYSTEM IN COMPUTER NETWORK
FIELD OF THE INVENTION The present invention relates to networked electronic devices.
BACKGROUND OF THE INVENTION Networked electronic devices are known. Wireless toys are also known.
The disclosures of all publications mentioned in the specification and of the publications cited therein are hereby incorporated by reference.
SUMMARY OF THE INVENTION
The present invention seeks to provide toy apparatus for electronic shopping.
There is thus provided in accordance with a preferred embodiment of the present invention, a multiplicity of interactive toys, each of which is connected to a computer network and adaptive toy operation software which is supplied to the multiplicity of interactive toys via the computer network, the adaptive toy operation software being operative to provide feedback, based on play experience with at least some of the multiplicity of interactive toys, via the computer network and to employ the feedback in adapting itself so as to change the play experience provided thereby.
There is also provided in accordance with another preferred embodiment of the present invention, a multiplicity of interactive entertainment units, each of which is connected to a computer network and adaptive entertainment software which is supplied to the multiplicity of interactive entertainment units via the computer network, the adaptive entertainment software being operative to provide feedback, based on user experience with at least some of the multiplicity of interactive entertainment units, via the computer network and to employ the feedback in adapting itself so as to change the entertainment experience provided thereby.
Also provided, in accordance with another preferred embodiment of the present invention, is a method for establishing a network of toys, the method including providing a plurality of scripts for at least some of the network of toys and sending at least one of the plurality of scripts to at least one of the toys in the network, over the
network.
Also provided, in accordance with another preferred embodiment of the present invention, is a toy system including an electronic toy content shop providing an option to pre-purchase accounts for users and a plurality of networked toys directly connected via a network to the electronic toy content shop, the toys being operative to load themselves with at least one script sold at the electronic toy content shop, wherein only a subset of the scripts sold at the electronic toy content shop are displayed to a user, depending on at least one personal characteristic of the user.
Also provided, in accordance with another preferred embodiment of the present invention, is a toy system providing multi-level interaction between a population of users and a population of toys, the system including at least one script operative to pose at least one question to at least one user about a topic other than the user's characteristics, each of the scripts being operative to analyze the user's answer and act upon its content and to derive knowledge about the user's characteristics from his answer.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention will be understood and appreciated from the following detailed description, taken in conjunction with the drawings in which:
Fig. 1 is a simplified semi-pictorial semi-block-diagram illustration of an interactive toy web system comprising a learning machine in accordance with a preferred embodiment of the present invention;
Fig. 2 is a more detailed, semi-pictorial semi-block-diagram illustration of an interactive toy web system comprising a learning machine in accordance with a preferred embodiment of the present invention;
Fig. 3 is a semi-pictorial semi-block-diagram illustration of an adaptive pattern learning system in accordance with a preferred embodiment of the present invention;
Fig. 4 is a flowchart that describes an example of a sample group procedure of an adaptive pattern learning system in accordance with a preferred embodiment of the present invention;
Fig. 5 is a simplified flowchart illustration of a suitable procedure of building a task for a user based on results of a sample group as provided by an adaptive pattern learning system in accordance with a preferred embodiment of the present invention;
Fig. 6A is a simplified script diagram illustration of a main artificial life script in accordance with a preferred embodiment of the present invention;
Fig. 6B is a simplified flowchart illustration corresponding to the script diagram illustration of Fig. 6A which is provided in order to explain the script diagram notation of Fig. 6A;
Fig. 7A is a simplified script diagram illustration of an example of an artificial life script that provides a level 1 (i.e. simple) game in accordance with a preferred embodiment of the present invention;
Figs. 7B - 7C, taken together, form a simplified flowchart illustration corresponding to the script diagram illustration of Fig. 7A;
Fig. 8 is a simplified script diagram illustration of an example of an artificial life script that provides a level 2 (i.e. normal) game in accordance with a preferred embodiment of the present invention;
Fig. 9 is a simplified script diagram illustration of an example of an artificial life script that provides a level 3 (i.e. harder) game in accordance with a preferred embodiment of the present invention;
Fig. 10 is an example of a simplified screen display for a "play" function in the player;
Fig. 11 is a simplified illustration of a "textbox" screen display of the programming function of the player, constructed and operative in accordance with a preferred embodiment of the present invention;
Fig. 12 is a table that describes the programming feature of the player in accordance with a preferred embodiment of the present invention;
Fig. 13 is a semi-block diagram semi-flowchart illustration of the "personal" function of a player in accordance with a preferred embodiment of the present invention;
Fig. 14 is a semi-block diagram semi-flowchart illustration of the "club" function of a player in accordance with a preferred embodiment of the present invention;
Fig. 15 is a semi-block diagram semi-flowchart illustration of the "shop" function of a player in accordance with a preferred embodiment of the present invention;
Fig. 16 is a simplified flowchart illustration of a registration procedure provided in accordance with a preferred embodiment of the present invention;
Fig. 17 is a simplified flowchart illustration of an example procedure of sending a request message from a user to a server in accordance with a preferred embodiment of the present invention;
Fig. 18 is a simplified flowchart illustration of a suitable procedure of creating a new group for the users' club in accordance with a preferred embodiment of the present invention;
Fig. 19 is a simplified flowchart illustration of a suitable procedure of leaving a group of the users' club in accordance with a preferred embodiment of the present invention;
Fig. 20 is a simplified flowchart illustration of a suitable procedure of viewing group members provided by the users' club in accordance with a preferred embodiment of the present invention;
Figs. 21 to 24 are simplified flowchart illustrations which, taken together, describe an example of a search procedure provided by the club in accordance with a preferred embodiment of the present invention;
Fig. 25 is a simplified flowchart illustration of a suitable procedure for sending a message to another user in accordance with a preferred embodiment of the present invention;
Fig. 26 is a simplified flowchart illustration of a suitable procedure for sending a script to another user in accordance with a preferred embodiment of the present invention;
Fig. 27 is a simplified flowchart illustration of a suitable implementation of send procedure D in Figs. 25 and 26, constructed and operative in accordance with a preferred embodiment of the present invention;
Fig. 28A is a simplified flowchart illustration of a suitable procedure for adding a user to a contact list in accordance with a preferred embodiment of the present invention;
Fig. 28B is a simplified flowchart illustration of a suitable procedure for removing a user from a contact list in accordance with a preferred embodiment of the present invention;
Fig. 29 is a simplified flowchart illustration of a suitable account update procedure provided in accordance with a preferred embodiment of the present invention;
Fig. 30 is a simplified flowchart illustration of a suitable subject update procedure provided in accordance with a preferred embodiment of the present invention;
Fig. 31 is a simplified flowchart illustration of a suitable procedure for ignoring a message and/or a user in accordance with a preferred embodiment of the present invention;
Fig. 32 is a simplified flowchart illustration of a suitable procedure for adding fields to a user's contact list in accordance with a preferred embodiment of the present invention;
Figs. 33 to 35 are simplified flowchart illustrations of three respective procedures for "adding to basket", which procedures are preferably provided by the electronic shop of the present invention;
Fig. 36 is a simplified flowchart illustration of a suitable implementation of a "remove from basket" procedure preferably provided by the electronic shop of the present invention;
Fig. 37 is a simplified flowchart illustration of a suitable search procedure preferably provided by the electronic shop of the present invention;
Fig. 38 is a simplified flowchart illustration of a suitable procedure for "winning credit points" preferably provided by the electronic shop of the present invention;
Fig. 39 is a simplified flowchart illustration of a suitable procedure of paying with credit points preferably provided by the electronic shop of the present invention;
Fig. 40 is a simplified illustration of a screen display of the "My account" function of the club provided in accordance with a preferred embodiment of the present invention;
Fig. 41 is a simplified illustration of a screen display of the "Contact list" function of the club in accordance with a preferred embodiment of the present invention;
Fig. 42 is a simplified pictorial illustration of a screen display of the "Group" function of the club provided in accordance with a preferred embodiment of the present
invention;
Fig. 43 is a simplified pictorial illustration of a screen display of the "Search user" function of the club in accordance with a preferred embodiment of the present invention;
Fig. 44 is a simplified pictorial illustration of a screen display of the "Send message/scripts" function of the club in accordance with a preferred embodiment of the present invention;
Fig. 45 is a simplified pictorial illustration of a screen display of the "Interests" form provided in accordance with a preferred embodiment of the present invention;
Fig. 46 is a simplified pictorial illustration of a screen display of the "Registration" form provided in accordance with a preferred embodiment of the present invention;
Fig. 47 is a simplified pictorial illustration of a screen display of the "Select content" function preferably provided by the electronic shop of the present invention;
Fig. 48 is a simplified pictorial illustration of a screen display of the "Select packages" function preferably provided by the electronic shop of the present invention;
Fig. 49 is a simplified pictorial illustration of a screen display of the "View Account" function preferably provided by the electronic shop of the present invention;
Fig. 50 is an illustration of a Living Object base station;
Fig. 51 is an illustration of a Living Object Toy;
Fig. 52 is a screen display of a Scriptwriter icon on desktop;
Fig. 53 is a screen display of a Living Object Scriptwriter main screen;
Fig. 54 is a "select tools - options" screen window display;
Fig. 55 is a "toy" screen window display;
Fig. 56 is a "hardware" screen window display;
Fig. 57 is a "talk icon" screen display;
Fig. 58 is a Scriptwriter system's main screen display with the added talk object;
Fig. 59 is an illustration of the Scriptwriter main screen display with the added talk object connected by a line to the start object;
Fig. 60 is an illustration of the screen display of the action toolbar with the save icon;
Fig. 61 illustrates the screen display for naming and saving the script;
Fig. 62 illustrates a screen window display of a combo box for typing the toy's speech;
Fig. 63 is a screen window display for recording sound to be played by the toy, wherein the toy's speech can be recorded through the toy or through the computer's microphone;
Fig. 64 is a screen window display for saving a recording;
Fig. 65 is a screen window display for selecting a "wave" file to be played by the toy;
Fig. 66 illustrates a Listen icon;
Fig. 67 is a screen display of a part of the Scriptwriter main window with the Listen object added;
Fig. 68 is an example of the "Listen and Sense" screen window display;
Fig. 69 illustrates the "Keyword link box" in the "Choose Link" screen display;
Fig. 70 shows the Scriptwriter main screen display with a Listen object links to corresponding Talk objects;
Fig. 71 shows a Run-Run screen window display;
Fig. 72 shows the Sample error message screen window display;
Figs. 73A and 73B show a table of the functions provided by the Scriptwriter with their icons as presented on the Scriptwriter main screen display;
Figs. 74 - 99 are simplified illustrations of examples of screen displays which may be generated by the scriptwriter system shown and described herein;
Fig. 100 is a dependence table useful in building an artificial life toy and environment constructed and operative in accordance with a preferred embodiment of the present invention;
Fig. 101 is a formula table useful in building an artificial life toy and environment constructed and operative in accordance with a preferred embodiment of the present invention;
Figs. 102 - 124 are simplified illustrations of examples of screen displays which may be generated by the scriptwriter system shown and described herein;
Fig. 125 is a table that presents an interactive script between a toy and a player where the toy determines the characteristics of the player (namely, age range) to suggest the appropriate level of game content;
Fig. 126 is a table that stores an example of a list of multilevel questions;
Fig. 127 is a script diagram of an interactive script where a toy determines the characteristics of a player (e.g. age range) to suggest the appropriate level of game content;
Fig. 128 is a simplified flowchart illustration of a learning process for a toy system constructed and operative in accordance with a preferred embodiment of the present invention;
Fig. 129 is a simplified flowchart illustration of a process for analyzing a user's game results constructed and operative in accordance with a preferred embodiment of the present invention;
Fig. 130 is a simplified flowchart illustration of a method for analyzing game results of a group of users, constructed and operative in accordance with a preferred embodiment of the present invention; and
Fig. 131 is a simplified flowchart illustration of a process for customizing toy content to a user of a toy system, based on results of a learning procedure.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS According to a preferred embodiment of the present invention, some or all of the following electronic elements are provided in combination: a. an interactive development environment (IDE) which allows a user to "play god" by generating scripts, b. an IDE player to run the scripts, c. an electronic shop on the Internet, and d. an electronic on-line club.
The electronic shop typically provides a credit account for a child user which the parent opens for him. Generally, the software enables persons, typically adults, to buy other persons, typically children, a present in a store; the present may comprise a fixed-sum account in that store, and the software includes book-keeping capability.
Typically, an electronic voucher is provided. The child can be granted, by the parent, 100 credit points which serve as money (tender) issued by a particular electronic store. The child may be granted a gift certificate. A particular advantage of a preferred embodiment of the present invention is that an individual, such as a child, who does not
own a credit card can have an independent electronic shopping experience.
Preferably, there is parametrization or filtering of child purchases as part of the conditions of the voucher or gift certificate. For example, the child may be entitled to buy only toys for his age-bracket, not computer toys, or only computer toys.
Preferably, the child is prompted to earn credit points e.g. by agreeing to hear/view, and actually hearing and/or viewing, advertising. Preferably, filtering parameters imposed by the parents are also applied to the advertising which is presented to the child. Preferably, the child's response to advertising is monitored to verify exposure to the advertising message and the monitoring information is provided to the advertisement provider.
Preferably, certain operations on the part of the child require parental approval which is given e.g. by the parent supplying his credit card number at the appropriate juncture in response to a prompt. Other than at these junctures, the child can operate as an independent consumer within the limitations imposed by the voucher or gift certificate.
Preferably, the vehicle with which the child interacts comprises inter alia or exclusively a toy figure such as a teddy bear. The teddy bear is typically purchased in conjunction with a CD-ROM or other software vehicle which the parent installs on the computer. Once the software is installed, the software preferably is operable even by a very small child, either by means of a script which actuates the toy figure to interact with the child, and/or by means of suitable simple on-screen input devices, such as buttons, with which the child can perform operations such as opening an Internet shop.
The Internet shop preferably allows a child or parent to enter a site and buy a desired number of credit points e.g. with a credit card. The shop preferably has an option whereby the user can check how many credit points s/he has, and what filters or restrictions, if any, apply to these credit points.
Filters typically include content filters. For example, a child is entitled to purchase only educational toys, or only games which pertain to history. Examples of subfilters are "only USA history", "only the Napoleonic period", and so on.
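Purely by way of illustration, the purchase filtering described above may be sketched as follows; the field names (remaining_points, age_bracket, allowed_topics) and the function purchase_allowed are hypothetical and show only one possible way of checking the conditions of a voucher or gift certificate:

    # Hypothetical sketch of voucher/gift-certificate purchase filtering.
    # All field names are illustrative, not part of any specific implementation.
    def purchase_allowed(item, voucher):
        """Return True only if the item satisfies every filter attached to the voucher."""
        if item["price"] > voucher["remaining_points"]:
            return False                      # not enough credit points left
        if voucher.get("age_bracket") and item["age_bracket"] != voucher["age_bracket"]:
            return False                      # e.g. only toys for the child's age bracket
        allowed = voucher.get("allowed_topics")
        if allowed and not set(item["topics"]) & set(allowed):
            return False                      # e.g. "only USA history" content filter
        return True

    voucher = {"remaining_points": 100, "age_bracket": "6-8",
               "allowed_topics": ["history/usa", "history/napoleonic"]}
    item = {"price": 5, "age_bracket": "6-8", "topics": ["history/usa"]}
    print(purchase_allowed(item, voucher))    # True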
Sold goods typically comprise products, typically paid for by a lump sum, or content services, including games and books which are periodically updated, typically paid for by a monthly charge. For example, a parent may purchase, for his child, 5 credit
points' worth of U.S. history games per month. Alternatively or in addition, learning motivation may be generated by allowing a child to earn up to 5 credit points per month toward toy purchases by playing 5 credit points' worth of mathematical games per month, at his level of skill.
A toy such as a teddy bear may serve as a "tutor" or "governess" and teach the child the games which the child selects. The games may be initiated at the child's request or the toy may initiate these sessions, at times or in response to prompts which are system determined or selected by a user such as an adult. In effect, the brain of the toy is updatable via Internet and therefore an adult can buy pieces of the toy's brain for a child.
Preferably, the controlling computer comprises a scheduler such that each of a plurality of content elements may be set to be executed at different child- or adult-selected times or in response to different prompts. The toy typically initiates these sessions. For example, the toy may call for the child at 4 pm and if the child answers the toy initiates a learning session on the topic of Napoleon. At 5:30 pm the toy tries to initiate a learning session on multiplication.
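A minimal sketch of such a scheduler, assuming each content element is given a child- or adult-selected start time, is shown below; it uses Python's standard sched module purely for illustration and is not the actual implementation:

    # Minimal scheduler sketch, assuming each content element has a selected
    # start time; uses the standard-library sched module for illustration.
    import sched
    import time

    def start_session(topic):
        print("Toy initiates a learning session on: " + topic)

    scheduler = sched.scheduler(time.time, time.sleep)

    def schedule_at(hour, minute, topic):
        """Schedule a session at hour:minute today (local time)."""
        now = time.localtime()
        target = time.mktime(now[:3] + (hour, minute, 0) + now[6:])
        scheduler.enterabs(target, 1, start_session, argument=(topic,))

    schedule_at(16, 0, "Napoleon")           # 4 pm: history session
    schedule_at(17, 30, "multiplication")    # 5:30 pm: arithmetic session
    # scheduler.run()  # blocks until every scheduled session has been initiated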
A particular advantage of a preferred embodiment of the present invention is that it enables efficient processing of "nickel and dime" purchases and/or micropayments, typically via the Internet and without resorting to use of smart cards.
Preferably, a user club or affinity group is provided which may be similar to conventional ICQ systems. However, unlike conventional ICQ systems, messages are preferably delivered orally by a toy such as a teddy bear, the teddy bear therefore acting like a secretary/messenger. Through the affinity group, a child can write a joke or generate a script or game, using a suitable development environment, and can post the product of his efforts for a friend in the same user group.
Preferably a portal is provided which sells gift certificates from any of a multiplicity of sites. Preferably, the value of a gift certificate which the user can buy exceeds the purchase price.
Preferably, a post-to-web feature is provided, allowing vendor access to the Internet shop of the present invention. The post-to-web is a tool allowing vendors to post toy and game items for sale in an Internet toy and game shop. Preferably, filtering parametrization can also be provided by the vendor such that customers, typically
adults, can filter purchases for the intended recipients, typically children.
Preferably, a personification feature is provided. Personification is a feature of a toy, a doll or another interactive entertainment unit having a defined persona, such as a known comic figure, action figure or human celebrity. The personified unit presents the user with voice, intonation and facial expressions typical of the personified figure. The personification mechanism enables a content developer to develop generic content such as a song or a story or educational material (e.g. biology) or an information item (e.g. news) and post it to the web. The personification mechanism also enables each personified unit to download a personified version of the generic content to present to the user with the voice, intonation, gestures and other characteristics typical of the personified figure.
Preferably, an artificial life feature is provided. The system of the present invention comprises a script actuation database operative to receive output from at least one operating script and to actuate at least one additional script when certain conditions are fulfilled by the combined output of the operating scripts. An impression of artificial life is generated due to the accumulation of conditions from various scripts, imbuing the activation with a lifelike quality which is substantially not anticipatable.
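The following is a hedged sketch, in Python, of one possible form such a script actuation database could take; the class, rule and script names are assumptions made for illustration and are not drawn from the figures:

    # Illustrative sketch (not the actual implementation) of a script actuation
    # database: operating scripts report their outputs, and an additional script
    # is actuated once a combined condition over those outputs is fulfilled.
    class ScriptActuationDB:
        def __init__(self):
            self.outputs = {}     # latest output reported by each operating script
            self.rules = []       # (condition over outputs, script to actuate)

        def add_rule(self, condition, script):
            self.rules.append((condition, script))

        def report(self, script_name, value):
            self.outputs[script_name] = value
            for condition, script in self.rules:
                if condition(self.outputs):
                    script()      # actuate the additional script

    def birthday_script():
        print("Toy: happy birthday! Let's play a special game.")

    db = ScriptActuationDB()
    db.add_rule(lambda o: o.get("calendar") == "birthday" and o.get("mood") == "happy",
                birthday_script)
    db.report("calendar", "birthday")   # combined condition not yet fulfilled
    db.report("mood", "happy")          # condition now holds -> additional script is actuated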
Preferably, scripts may be scheduled. Preferably, the system also provides time-based selection of single actions.
Preferably, a script for at least one doll includes in it activation of another script for another doll when certain conditions are fulfilled.
Preferably, the system of the present invention resides on a CD-ROM storing an IDE player, scripts therefor and, optionally, a shop for buying more scripts.
Preferably, a "cyberbrain" feature is provided whereby each toy grows up on the basis of its own unique experiences, much as a child does. The toy preferably adapts its contents to a user thereof as the toy learns more about that user.
Optionally, the education of each toy is provided not only by the child owner of that particular toy but by a total population of toy owners linked over the Internet. The toy uses its experiences or impressions of its own child-owner and optionally of a group of children to which his child-owner belongs in order to become a better teacher or companion. Therefore, a particular toy, when bought by a first child, is not identical to the same toy bought subsequently by a second child because the toy is preferably constantly learning, as each child's experience is transmitted to a server which shares that experience with the entire virtual community of toys. Each day, or periodically, each toy in the community of toys develops, becomes smarter, and becomes a better companion.
For example, a toy's content software may comprise 3 jokes. The toy may learn, for example, that children prefer one joke over the remaining two, in which case the toy is fed new parameters and begins using that joke in preference over the other jokes. The toy may also learn that none of the 3 jokes available pass a satisfaction threshold of, say, 20%, in which case a content developer develops other jokes and downloads these replacement jokes to all toys in the virtual community.
When the toy calls its server, it may receive new parameters and/or new contents.
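For illustration only, the joke-preference learning described above might be sketched as follows; the 20% satisfaction threshold is the example figure mentioned above, while the feedback format and the function name are assumptions:

    # Illustrative sketch of the joke-preference learning described above.
    SATISFACTION_THRESHOLD = 0.20       # the example threshold mentioned in the text

    def analyze_joke_feedback(feedback):
        """feedback maps joke id -> list of per-child satisfaction scores (0..1)."""
        averages = {joke: sum(scores) / len(scores) for joke, scores in feedback.items()}
        best_joke, best_score = max(averages.items(), key=lambda kv: kv[1])
        if best_score < SATISFACTION_THRESHOLD:
            # no joke passes the threshold: new content is needed from the developer
            return {"action": "request_new_content"}
        # otherwise new parameters are fed to the toys so they prefer the best joke
        return {"action": "update_parameters", "preferred_joke": best_joke}

    feedback = {"joke_1": [0.1, 0.2], "joke_2": [0.6, 0.7], "joke_3": [0.3, 0.2]}
    print(analyze_joke_feedback(feedback))   # prefers joke_2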
Preferably, a micro marketing feature is provided. The micro marketing feature enables the online service provider to identify communities, or affinity groups, of users that have something in common, such as sharing a similar interest or preference or need. The online service can provide, or suggest the provisioning of, selected appropriate content, such as educational, informational or promotional content, to the appropriate community or affinity group.
Fig. 1 is a simplified semi-pictorial semi-block-diagram illustration of an interactive toy web system comprising a learning machine in accordance with a preferred embodiment of the present invention.
Fig. 2 is a more detailed, semi-pictorial semi-block-diagram illustration of an interactive toy web system comprising a learning machine in accordance with a preferred embodiment of the present invention.
Fig. 3 is a semi-pictorial semi-block-diagram illustration of an adaptive pattern learning system in accordance with a preferred embodiment of the present invention.
Fig. 4 is a flowchart that describes an example of a sample group procedure of an adaptive pattern learning system in accordance with a preferred embodiment of the present invention.
Fig. 5 is a simplified flowchart illustration of a suitable procedure of building a task for a user based on results of a sample group as provided by an adaptive pattern learning system in accordance with a preferred embodiment of the present invention.
According to a preferred embodiment of the present invention, there is provided a web system, comprising a plurality of networked entertainment units such as networked toys, and a learning machine allowing the web system to learn from the interaction/s between at least one user and at least one entertainment unit. Preferably, the learning machine is implemented by a system of scripts, termed herein "artificial life" scripts, which breathe artificial life into at least one of the networked entertainment units.
The term "artificial life" is intended to refer to scripts which cause at least one entertainment unit to respond not only to a current user interaction situation but also to take into account the past. These scripts typically operate in the background during interaction of a toy with a user. Each "artificial life" script is parametrized, each parameter not being fixed but rather being determined as a function of learned situational characteristics or user characteristic. Therefore, each "artificial life" script typically comprises an endless family of scripts having potentially endless variation contained therewithin. Typically, a system of artificial life scripts builds on itself by learning from the interaction of users with the scripts such that the toy's functioning develops over time as a result of the variation between users in their interactions with their toys, which in turn causes differential parametrization of scripts depending on who has been playing with them.
In accordance with a preferred embodiment of the present invention, an interactive toy web system with a learning machine comprises a feature of artificial life. Typically, individual users on a system interact with artificial life scripts, e.g. in the form of games. Results from interaction of all or some of the users on a system are sent to a server of an interactive toy web system. The results are processed and used in order to modify the scripts sent to individual users.
Following is a description of a learning machine web system based on an
artificial life feature of interactive toys in accordance with a preferred embodiment of the present invention. The system described herein has a client side and a server side.
The Client side preferably is operative to perform at least the following functions: a. Collect game result information. b. Send information and AL behavior (formulas or parameters) to server. c. Update AL behavior from server.
The Server side preferably is operative to perform at least the following functions: a. Get client information results and AL behavior (formulas or parameters). b. Analyze the information, e.g. the server may update a formula in a server database every 1000 clients. c. Send updated AL behavior to the client. Received and sent data are saved in a history database, e.g. the last 20 client results may be saved to the history database.
For example, a Server database may include the following tables, inter alia: a. Personal table which includes the following fields: ID, Password, Name, Gender, Birthday, City, Country and Address. b. History table which includes the following fields: ID, Company, Product, Script, LastRun(Date), ParamList. c. Script Table which includes the following fields: Script, Company, Product, ParamList, Formula, ScheduleData.
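For concreteness, the tables listed above could be created as in the following sketch, which uses SQLite purely as an example back end; the column types are assumptions, while the table and field names follow the text:

    # Illustrative table definitions for the server database described above,
    # using SQLite for concreteness; column types are assumptions.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE Personal (ID TEXT PRIMARY KEY, Password TEXT, Name TEXT,
                               Gender TEXT, Birthday TEXT, City TEXT,
                               Country TEXT, Address TEXT);
        CREATE TABLE History  (ID TEXT, Company TEXT, Product TEXT, Script TEXT,
                               LastRun DATE, ParamList TEXT);
        CREATE TABLE Script   (Script TEXT, Company TEXT, Product TEXT,
                               ParamList TEXT, Formula TEXT, ScheduleData TEXT);
    """)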
A preferred Session Process, which is presented to a client at a desired time set by the client, typically comprises the following stages: a. Client plays a game. b. Client computer sends to the server the game results (parameters) and the associated AL formula. c. Server gets all client results and saves them to a database, finds the associated formula in the database and sends it to the client. d. Client gets the new formula and parameters from the server and updates them if necessary.
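A minimal sketch of this session process is given below; the JSON message format, the transport and the helper names (send_to_server, server_handle) are assumptions introduced for illustration only:

    # Minimal sketch of the session process; message format and helper names
    # are assumptions made for illustration only.
    import json

    def client_session(game_results, al_formula, send_to_server):
        """Client: send game results (parameters) and the associated AL formula,
        then apply whatever updated formula and parameters the server returns."""
        reply = json.loads(send_to_server(json.dumps(
            {"results": game_results, "formula": al_formula})))
        return reply.get("formula", al_formula), reply.get("params", game_results)

    def server_handle(message, history, formulas):
        """Server: save the client's results to the history database and send
        back the associated formula found in the server database."""
        data = json.loads(message)
        history.append(data["results"])          # e.g. keep only the last 20 results
        formula = formulas.get(data["formula"].get("name"), data["formula"])
        return json.dumps({"formula": formula, "params": data["results"]})

    history, formulas = [], {"trivia": {"name": "trivia", "expr": "S1Level + 1"}}
    new_formula, new_params = client_session(
        {"S1L1": 7, "S1Boring": 0}, {"name": "trivia"},
        lambda msg: server_handle(msg, history, formulas))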
A Trivia game, comprising a script which can call any of 3 other scripts, is now described. The game described herein includes 9 questions, which are illustrated, three at each difficulty level, in Figs. 7 - 9 respectively. The questions have the following 3 difficulty levels, each associated with 3 questions in the illustrated example: level 1: simple, level 2: normal, level 3: harder.
The trivia game therefore typically comprises the following parameter structure:
A level parameter is defined as: S1Level {values: 1 or 2 or 3}. Results parameters are S1L1, S1L2, S1L3 {S-script, L-level}, S1Boring.
Typical values of the results parameters at the end of a game comprise the following: 0-no answer, 1-only answered question #1 from among the three (in the illustrated embodiment) questions within the current level, 2-only answered #2, 4-only answered #3, 3-answered #1 and #2, 5-answered #1 and #3, 6-answered #2 and #3, 7-answered all 3 questions within the current level.
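In other words, each of the three questions in a level contributes one bit (1, 2 or 4) to the result value, so that the values 0 - 7 enumerate every combination of correctly answered questions. A short illustrative sketch of this encoding follows; the helper names are assumptions:

    # Sketch of the results-parameter encoding: question #1 contributes 1,
    # question #2 contributes 2 and question #3 contributes 4.
    QUESTION_BITS = {1: 1, 2: 2, 3: 4}

    def encode_results(answered):
        """answered: iterable of question numbers (1..3) answered correctly."""
        return sum(QUESTION_BITS[q] for q in answered)

    def decode_results(value):
        return [q for q, bit in QUESTION_BITS.items() if value & bit]

    print(encode_results([1, 3]))    # 5 - answered questions #1 and #3
    print(decode_results(7))         # [1, 2, 3] - answered all 3 questions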
Figs. 6A to 9 illustrate a set of artificial life scripts in accordance with a preferred embodiment of the present invention.
Fig. 6A is a simplified script diagram illustration of a main artificial life script in accordance with a preferred embodiment of the present invention. The script can call any of the three levels described in Figs. 7 to 9 respectively.
Fig. 6B is a simplified flowchart illustration corresponding to the script diagram illustration of Fig. 6A which is provided in order to explain the script diagram notation of Fig. 6A.
Fig. 7A is a simplified script diagram illustration of an example of an artificial life script that provides a level 1 (i.e. simple) game in accordance with a preferred embodiment of the present invention.
Figs. 7B - 7C, taken together, form a simplified flowchart illustration corresponding to the script diagram illustration of Fig. 7A. Figs. 7B - 7C are provided to explain the script diagram notation of Fig. 7A.
Fig. 8 is a simplified script diagram illustration of an example of an artificial life script that provides a level 2 (i.e. normal) game in accordance with a preferred embodiment of the present invention.
Fig. 9 is a simplified script diagram illustration of an example of an artificial life script that provides a level 3 (i.e. harder) game in accordance with a preferred embodiment of the present invention.
The script diagram notation of Figs. 8 and 9 is similar to the script diagram
notation of Figs. 6A and 7A.
A "Main" Artificial Life Script, illustrated in Fig. 6, which sends the toy to one of the script portions of Figs. 7 - 9, each script portion representing a level, is now described.
A preferred Objects Description for the "main" artificial life script of Figs. 6A - 6B is as follows:
Start1(Start): Starting point for execution.
Memory1(Memory): Sets memory cell <S1L1> to "0".
Memory2(Memory): Sets memory cell <S1L2> to "0".
Memory3(Memory): Sets memory cell <S1L3> to "0".
Memory4(Memory): Sets memory cell <S1Boring> to "0".
Condition1(Condition): Follow the true branch if the value of memory cell <S1Level> is equal to "1", or the false branch, otherwise.
Condition2(Condition): Follow the true branch if the value of memory cell <S1Level> is equal to "2", or the false branch, otherwise.
Script1(Script): Runs the "C:\CreatorIDE\S1_Level3.script" script of Fig. 9.
Script2(Script): Runs the "C:\CreatorIDE\S1_Level2.script" script of Fig. 8.
Script3(Script): Runs the "C:\CreatorIDE\S1_Level1.script" script of Fig. 7.
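For readers who prefer code to script diagrams, the main script above may be flattened as in the following sketch; the exact branch wiring is inferred from the condition and script descriptions of Figs. 6A - 6B, and the helper names (memory, run_script) are assumptions:

    # Illustrative flattening of the "main" artificial life script: clear the
    # result cells, then dispatch to the level script selected by <S1Level>.
    # The branch wiring and the helper names (memory, run_script) are assumptions.
    def run_main_script(memory, run_script):
        memory["S1L1"] = 0        # Memory1
        memory["S1L2"] = 0        # Memory2
        memory["S1L3"] = 0        # Memory3
        memory["S1Boring"] = 0    # Memory4
        if memory.get("S1Level") == 1:                       # Condition1
            run_script(r"C:\CreatorIDE\S1_Level1.script")    # Script3 (Fig. 7)
        elif memory.get("S1Level") == 2:                     # Condition2
            run_script(r"C:\CreatorIDE\S1_Level2.script")    # Script2 (Fig. 8)
        else:
            run_script(r"C:\CreatorIDE\S1_Level3.script")    # Script1 (Fig. 9)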
A preferred Objects Description for the Level 1 Artificial Life Script of Fig. 7 is now described:
Start1(Start): Starting point for execution.
Talk1(Talk): Say "table or cup?" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
ListenAndSense1(ListenAndSense): Listens for one of the keywords (table, cup) for 5 seconds.
Talk2(Talk): Say "good answer" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Talk3(Talk): Say "wrong answer" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Talk4(Talk): Say "not understood" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Calculation1(Calculation): Set the value of memory cell <S1Boring> to the sum of the value of memory cell <S1Boring> and "1" (if the operation is invalid, the cell is cleared).
Calculation2(Calculation): Set the value of memory cell <S1L1> to the sum of the value of memory cell <S1L1> and "1" (if the operation is invalid, the cell is cleared).
Talk5(Talk): Say "TV or car?" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
ListenAndSense2(ListenAndSense): Listens for one of the keywords (TV, car) for 5 seconds.
Talk6(Talk): Say "good answer" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Talk7(Talk): Say "wrong answer" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Talk8(Talk): Say "not understood" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Calculation3(Calculation): Set the value of memory cell <S1Boring> to the sum of the value of memory cell <S1Boring> and "1" (if the operation is invalid, the cell is cleared).
Calculation4(Calculation): Set the value of memory cell <S1L1> to the sum of the value of memory cell <S1L1> and "2" (if the operation is invalid, the cell is cleared).
Talk9(Talk): Say "piano or book?" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
ListenAndSense3(ListenAndSense): Listens for one of the keywords (piano, book) for 5 seconds.
Talk10(Talk): Say "good answer" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Talk11(Talk): Say "wrong answer" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Talk12(Talk): Say "not understood" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Calculation5(Calculation): Set the value of memory cell <S1Boring> to the sum of the value of memory cell <S1Boring> and "1" (if the operation is invalid, the cell is cleared).
Calculation6(Calculation): Set the value of memory cell <S1L1> to the sum of the value of memory cell <S1L1> and "4" (if the operation is invalid, the cell is cleared).
End1(End): Execution ends here.
Text1(Text)
A preferred Objects Description for the Level 2 Artificial Life Script of Fig. 8 is now described:
Start1(Start): Starting point for execution.
Talk1(Talk): Say "turtle or dog?" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
ListenAndSense1(ListenAndSense): Listens for one of the keywords (turtle, dog) for 5 seconds.
Talk2(Talk): Say "good answer" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Talk3(Talk): Say "wrong answer" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Talk4(Talk): Say "not understood" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Calculation1(Calculation): Set the value of memory cell <S1Boring> to the sum of the value of memory cell <S1Boring> and "1" (if the operation is invalid, the cell is cleared).
Calculation2(Calculation): Set the value of memory cell <S1L2> to the sum of the value of memory cell <S1L2> and "1" (if the operation is invalid, the cell is cleared).
Talk5(Talk): Say "car or bike?" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
ListenAndSense2(ListenAndSense): Listens for one of the keywords (car, bike)
for 5 seconds.
Talk6(Talk): Say "good answer" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Talk7(Talk): Say "wrong answer" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Talk8(Talk): Say "not understood" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Calculation3(Calculation): Set the value of memory cell <S1Boring> to the sum of the value of memory cell <S1Boring> and "1" (if the operation is invalid, the cell is cleared).
Calculation4(Calculation): Set the value of memory cell <S1L2> to the sum of the value of memory cell <S1L2> and "2" (if the operation is invalid, the cell is cleared).
Talk9(Talk): Say "space ship or car?" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
ListenAndSense3(ListenAndSense): Listens for one of the keywords (spaceship, car) for 5 seconds.
Talk10(Talk): Say "good answer" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Talk11(Talk): Say "wrong answer" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Talk12(Talk): Say "not understood" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Calculation5(Calculation): Set the value of memory cell <S1Boring> to the sum of the value of memory cell <S1Boring> and "1" (if the operation is invalid, the cell is cleared).
Calculation6(Calculation): Set the value of memory cell <S1L2> to the sum of the value of memory cell <S1L2> and "4" (if the operation is invalid, the cell is cleared).
End1(End): Execution ends here.
Text1(Text):
A preferred Objects Description for the Level 3 Artificial Life Script of Fig. 9 is now described:
Start1(Start): Starting point for execution.
Talk1(Talk): Say "elephant or mouse?" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
ListenAndSense1(ListenAndSense): Listens for one of the keywords (elephant, mouse) for 5 seconds.
Talk2(Talk): Say "good answer" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Talk3(Talk): Say "wrong answer" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Talk4(Talk): Say "not understood" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Calculation1(Calculation): Set the value of memory cell <S1Boring> to the sum of the value of memory cell <S1Boring> and "1" (if the operation is invalid, the cell is cleared).
Calculation2(Calculation): Set the value of memory cell <S1L3> to the sum of the value of memory cell <S1L3> and "1" (if the operation is invalid, the cell is cleared).
Talk5(Talk): Say "big TV or pencil?" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
ListenAndSense2(ListenAndSense): Listens for one of the keywords (big TV, pencil) for 5 seconds.
Talk6(Talk): Say "good answer" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Talk7(Talk): Say "wrong answer" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Talk8(Talk): Say "not understood" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Calculation3(Calculation): Set the value of memory cell <S1Boring> to the sum of the value of memory cell <S1Boring> and "1" (if the operation is invalid, the cell is cleared).
Calculation4(Calculation): Set the value of memory cell <S1L3> to the sum of the value of memory cell <S1L3> and "2" (if the operation is invalid, the cell is cleared).
Talk9(Talk): Say "bike or car?" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
ListenAndSense3(ListenAndSense): Listens for one of the keywords (bike, car) for 5 seconds.
Talk10(Talk): Say "good answer" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Talk11(Talk): Say "wrong answer" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Talk12(Talk): Say "not understood" in a Man's voice (duration: 1277.637 seconds) while performing the "Talk" move using the toy named Storyteller.
Calculation5(Calculation): Set the value of memory cell <S1Boring> to the sum of the value of memory cell <S1Boring> and "1" (if the operation is invalid, the cell is cleared).
Calculation6(Calculation): Set the value of memory cell <S1L3> to the sum of the value of memory cell <S1L3> and "4" (if the operation is invalid, the cell is cleared).
End1(End): Execution ends here.
Text1(Text)
An interactive toy system constructed and operative in accordance with a preferred embodiment of the present invention is now described. The system described herein includes an electronic shop, a club of networked toy users, a user registration system and a user interests' registration system registering each user's interests.
Following is an example of a list of web services provided by the interactive toy system described above in accordance with a preferred embodiment of the present invention: a. a shop b. a members club c. automatic download of charge-free new scripts
d. updating parts of the player, e. messages of different types, and f. new services, which evolve on-line.
In order to adapt the system's services to the unique demands of each user (e.g. the type of new scripts), a profile of the user may be used. This profile may comprise parameters such as age, gender, and subjects of interest.
To get any of the services above, the user typically has to register and to fill in a list of personal interests.
Preferably, while registering, the user has to fill in the following personal data: full name, gender, birthday, phone number, full address, E-mail, and the language the user speaks.
In the user interests' registration system, the user typically selects his/her own subject(s) of interest from a given list. The list may for example include the following main subjects: Education, Information, Services, TV guide, News, Entertainment, Freaky stuff, Free time, Famous people, Reading and writing, Sports, Pets.
The system shown and described herein preferably includes a shop in which the user buys different packages according to his/her personal preferences and relevant toy(s). The user is exposed first to a sorted list of packages, which include different categories of content adapted to user characteristics such as the user's age, personal interest, and language.
Examples of package content categories, as shown in Fig. 40, may include: Education, Information, Services, TV guide, News, Entertainment, Freaky stuff, Free time, Famous people, Reading and writing, Sports, Pets.
These categories may be chosen using the "select content" screen of Fig. 47.
The screen display typically also presents the number of packages in each category, as well as information about packages on sale.
For each selected package content category, the user finds different packages for the chosen category, e.g. using the "select packages" screen display of Fig. 48. In the category of education, for example, the user finds a variety of subjects such as: Arts, Biology, History, Languages, Logic, Arithmetic, Geometry, Medicine, Physics.
For each package the user finds the package price, the name(s) of the toy the package is aimed for, and a comment, if the user has bought this package already.
Here, the user digitally signs the package choice he has made, indicating that he wishes this to be his purchase.
The user may browse around the shop, looking for other packages from other categories or for different toys. While browsing, the function of "search", e.g. as illustrated in Fig. 37, may be employed.
When the "basket is full up" with packages - it is time to pay and the user is typically prompted to pay. The user can see the number of credit points he has accumulated. The user can go from where he is to a browser to buy more credit points. The user can go from where he is to a playing station where he can EARN/WIN more credit points.
When the user enters his or her account, he typically views his account e.g. using the "view account" display screen of Fig. 49. The user finds here all the information about each of the packages s/he has chosen according to name, description, price, section (commercial and other), category (e.g. art or news) and the relevant toy(s).
Here the user confirms his/her shopping.
Examples of screen displays suitable for implementing a preferred club feature of the system of the present invention are illustrated in Figs. 40 - 44. When in the club the user can preferably do one or all of the following: a. Join any of the group(s), b. Meet old or new colleagues and list their data, and c. Send messages to the club members
In the club the user typically has several options, which may for example include the following options or functions illustrated in Fig. 40: My account, Group, Search, Send message/script.
In the My Account option, the user defines his own personal information. For example, the user may fill in the following personal data to log-in to the club: nickname, full name, full address, phone number, E-mail, comments.
For each item the user typically decides if the information is public or confidential. The public information is available to other users while they are looking for new colleagues via the above-mentioned SEARCH function.
Under the My Account option, the user typically also marks his/her own interests from among a given list. The list contains subjects such as: Education, Information,
Services, TV guide, News, Entertainment, Freaky stuff, Free time, Famous people, Reading and writing, Sports, Pets.
Under the Group option, the user may join any of the existing group(s), which were created and defined by the administrator. The user finds here a list of all the groups with a full description of each. Typically, groups that fit the users' interests, as listed by him, appear first.
Under the Search option, the user can search for new friends characterized according to parameters such as age, interests, country, gender, and toy type. He can also locate old friends according to a nickname or country for example. New friends approved by the user are added to the contact list.
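A hypothetical sketch of such a search, filtering a user population by parameters of this kind, is given below; the field names and the search_users helper are illustrative only:

    # Hypothetical sketch of the club's "Search user" function: filter the user
    # population by parameters such as age, interests, country, gender and toy type.
    def search_users(users, **criteria):
        def matches(user):
            for key, wanted in criteria.items():
                value = user.get(key)
                if isinstance(value, (list, set)):
                    if wanted not in value:
                        return False
                elif value != wanted:
                    return False
            return True
        return [u for u in users if matches(u)]

    users = [{"nickname": "dana", "age": 9, "country": "USA",
              "interests": ["Pets", "Sports"], "toy": "Storyteller"},
             {"nickname": "omer", "age": 11, "country": "UK",
              "interests": ["Reading and writing"], "toy": "Storyteller"}]
    print(search_users(users, country="USA", interests="Pets"))   # dana's record only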
The Contact List lists the user's colleagues with their personal data, derived from their public personal information. A colleague's data may for example comprise: nickname, full name, full address, phone number, e-mail and comments.
Under the Send Messages/Scripts option, the user can send either messages or scripts to his/her group's members or colleagues listed in his contact list, which is typically accessible from the screen display of the send messages/scripts option.
Scripts to be sent may originate as free downloads from the web or as home-made scripts composed by the user or by his or her colleagues. Delivery of messages and scripts can be immediate or at any future date.
Typically, the system provides an incoming messages/scripts icon using which the user can elect to keep all new messages or to delete incoming messages.
Typically, within the My Account option there is defined a Message Filtration suboption using which the user can list authors whose messages should be ignored. From here the user can also access his full contact list.
Typically, the system of the present invention includes a "player" which plays scripts selected by a user in accordance with a schedule also typically selected by the user.
Fig. 10 is an example of a simplified screen display for a "play" function in the player. Using the play function, the user programs schedule and content. Following is a description of the "play" function in accordance with a preferred embodiment of the present invention.
The user can get to the play function either from an existing PLAY icon in the
player, or via a new icon that the user adds.
When the user clicks on "PROGRAMMING" at the window of Fig. 10, s/he typically gets the interface of Fig. 11.
Fig. 11 is a simplified illustration of a "textbox" screen display of the programming function of the player, constructed and operative in accordance with a preferred embodiment of the present invention.
Fig. 12 is a table that describes the programming feature of the player in accordance with a preferred embodiment of the present invention.
Fig. 13 is a semi-block diagram semi-flowchart illustration of the "personal" function of a player in accordance with a preferred embodiment of the present invention.
Fig. 14 is a semi-block diagram semi-flowchart illustration of the "club" function of a player in accordance with a preferred embodiment of the present invention.
Fig. 15 is a semi-block diagram semi-flowchart illustration of the "shop" function of a player in accordance with a preferred embodiment of the present invention.
Fig. 16 is a simplified flowchart illustration of a registration procedure provided in accordance with a preferred embodiment of the present invention.
Fig. 17 is a simplified flowchart illustration of an example procedure of sending a request message from a user to a server in accordance with a preferred embodiment of the present invention.
Fig. 18 is a simplified flowchart illustration of a suitable procedure of creating a new group for the users' club in accordance with a preferred embodiment of the present invention.
Fig. 19 is a simplified flowchart illustration of a suitable procedure of leaving a group of the users' club in accordance with a preferred embodiment of the present invention.
Fig. 20 is a simplified flowchart illustration of a suitable procedure of viewing group members provided by the users' club in accordance with a preferred embodiment of the present invention.
Figs. 21 to 24 are simplified flowchart illustrations which, taken together,
describe an example of a search procedure provided by the club in accordance with a preferred embodiment of the present invention.
Fig. 25 is a simplified flowchart illustration of a suitable procedure for sending a message to another user in accordance with a preferred embodiment of the present invention.
Fig. 26 is a simplified flowchart illustration of a suitable procedure for sending a script to another user in accordance with a preferred embodiment of the present invention.
Fig. 27 is a simplified flowchart illustration of a suitable implementation of send procedure D in Figs. 25 and 26, constructed and operative in accordance with a preferred embodiment of the present invention.
Fig. 28A is a simplified flowchart illustration of a suitable procedure for adding a user to a contact list in accordance with a preferred embodiment of the present invention.
Fig. 28B is a simplified flowchart illustration of a suitable procedure for removing a user from a contact list in accordance with a preferred embodiment of the present invention.
Fig. 29 is a simplified flowchart illustration of a suitable account update procedure provided in accordance with a preferred embodiment of the present invention.
Fig. 30 is a simplified flowchart illustration of a suitable subject update procedure provided in accordance with a preferred embodiment of the present invention.
Fig. 31 is a simplified flowchart illustration of a suitable procedure for ignoring a message and/or a user in accordance with a preferred embodiment of the present invention.
Fig. 32 is a simplified flowchart illustration of a suitable procedure for adding fields to a user's contact list in accordance with a preferred embodiment of the present invention.
Figs. 33 to 35 are simplified flowchart illustrations of three respective procedures for "adding to basket", which procedures are preferably provided by the electronic shop of the present invention.
Fig. 36 is a simplified flowchart illustration of a suitable implementation of a "remove from basket" procedure preferably provided by the electronic shop of the
present invention.
Fig. 37 is a simplified flowchart illustration of a suitable search procedure preferably provided by the electronic shop of the present invention.
Fig. 38 is a simplified flowchart illustration of a suitable procedure for "winning credit points" preferably provided by the electronic shop of the present invention.
Fig. 39 is a simplified flowchart illustration of a suitable procedure of paying with credit points preferably provided by the electronic shop of the present invention.
Fig. 40 is a simplified illustration of a screen display of the "My account" function of the club provided in accordance with a preferred embodiment of the present invention.
Fig. 41 is a simplified illustration of a screen display of the "Contact list" function of the club in accordance with a preferred embodiment of the present invention.
Fig. 42 is a simplified pictorial illustration of a screen display of the "Group" function of the club provided in accordance with a preferred embodiment of the present invention.
Fig. 43 is a simplified pictorial illustration of a screen display of the "Search user" function of the club in accordance with a preferred embodiment of the present invention.
Fig. 44 is a simplified pictorial illustration of a screen display of the "Send message/scripts" function of the club in accordance with a preferred embodiment of the present invention.
Fig. 45 is a simplified pictorial illustration of a screen display of the "Interests" form provided in accordance with a preferred embodiment of the present invention.
Fig. 46 is a simplified pictorial illustration of a screen display of the "Registration" form provided in accordance with a preferred embodiment of the present invention.
Fig. 47 is a simplified pictorial illustration of a screen display of the "Select content" function preferably provided by the electronic shop of the present invention.
Fig. 48 is a simplified pictorial illustration of a screen display of the "Select packages" function preferably provided by the electronic shop of the present invention.
Fig. 49 is a simplified pictorial illustration of a screen display of the "View Account" function preferably provided by the electronic shop of the present invention.
Described herein is a software tool for generating verbal content and for controlling toys and other manipulable objects, particularly suited for toys operated by a personal computer (PC) that communicates wirelessly, by means of a wireless (e.g. radio) base station connected to the PC, with a toy controller embedded inside the toy.
The present specification uses the following terminology:
Living Object: Hardware and software technology for building computer controlled toys and other manipulable objects, and for the generation of verbal content for their control.
Scriptwriter: A software program for the generation of verbal content for the control of toys based on Living Object technology.
Base Station: A radio or other wireless transceiver connected to the PC providing wireless communication between the PC and the toy controller embedded in the computer controlled toy.
Toys and other objects based on Living Object technology use the computer, wireless communications, and voice recognition software to speak with their users in a human voice, with human-like personality and intelligence. The toys hold entertaining, personalized dialogs with the child, demonstrating knowledge of the child and his/her likes and recalling past interactive sessions.
The Living Object Scriptwriter is a tool useful in creating interactive scripts that give the toys speech and personality. These scripts typically feature content that includes: a. Interactive dialog focused on a variety of content and activities, b. Personalized data, c. Historical data with same user, and d. Time-linked content.
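By way of a non-limiting illustration, script content of this kind may be represented by a simple data model such as the following Python sketch; the class and field names are hypothetical and do not reproduce the Scriptwriter's actual file format.

    from dataclasses import dataclass, field
    from datetime import date
    from typing import Dict, List, Optional

    @dataclass
    class DialogStep:
        prompt: str                      # a. interactive dialog: text the toy speaks
        keywords: List[str]              # phrases the toy listens for
        responses: Dict[str, str]        # keyword -> spoken response

    @dataclass
    class InteractiveScript:
        dialog: List[DialogStep]
        personal_data: Dict[str, str] = field(default_factory=dict)  # b. e.g. the child's name and likes
        history: List[str] = field(default_factory=list)             # c. notes on past sessions with the same user
        play_on: Optional[date] = None                                # d. time-linked content, e.g. a birthday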
The following description explains how to use the Living Object Scriptwriter to write interactive scripts for a Living Object toy.
To start working with the Living Object Scriptwriter (hereafter, Scriptwriter), the scriptwriter's computer is turned on, the Living Toy is turned on and nearby, and the Living Object base station is plugged into the computer.
A preferred method for setting up the Base Station and Toy is now described
with reference to Fig. 50 showing the Living Object base station and to Fig. 51 showing the Living Object Toy: a. Plug the base station's computer cable into the serial port in the back of the computer. b. Plug the base station's electrical transformer into a nearby electrical socket. c. Turn the computer on and wait until the computer is fully operational and the desktop is displayed. d. Turn the toy's on/off switch to ON. The toy emits a few beeps and any moving facial parts move briefly. The system is now ready for preparation of a script in order to provide dialog between user and toy. When the toy is on but not active for a while, it automatically switches to Sleep mode. A description of how to wake the toy up is provided herein.
A preferred method for opening the Living Object Scriptwriter software is now described with reference to Fig. 52 which shows a screen display of the Scriptwriter icon on desktop and to Fig. 53 showing a screen display of Living Object Scriptwriter main screen: a. Click on the Scriptwriter icon on the desktop. b. The Living Object Scriptwriter program opens to its main screen. The screen on the computer may look like the screen display illustrated in Fig. 53.
A preferred method for telling the system which toy is being used is now described with reference to Fig. 54 showing the "select tools - options" screen window display and to Fig. 55 showing the "toy" screen window display: a. Click on Tools-Options. The Options window opens. This window typically has "tabs" such as the following: Toys, Hardware, Environment, Volume Settings, Smart Toy, Scripts, and Reports. b. Click on the Toys tab to open the Toy window. This is done only if the Toy window is not already displayed. c. Check that, in the Toy List, a check mark appears next to the name of the toy. For example, if the toy is "Monster", check that a check mark appears next to Monster. Information related to the Toy Description for Monster appears to the left. d. Click on Insert/Edit to update the system with the details of a particular toy.
A preferred method for telling the system to recognize the base station and the
toy is now described with reference to Fig. 56 showing the "hardware" screen window display: a. Click on the Hardware tab to display the Hardware window. b. Click on the Check button in the "Check base station" section. The system begins looking for the base station, which is plugged into the computer and the wall socket. When it finds the base station, the phrase "Base connected." appears in the Report window. c. Click on the Check button in the "Check toys" section. The system begins looking for the toy defined as above. When it finds the toy, the phrase "Toy connected." appears in the Report window and details about the toy appear in the Search for toy section of the window.
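By way of illustration only, a comparable connectivity check could be written with the pyserial library as sketched below; the serial port name, probe byte and expected reply are assumptions made for the sketch and are not the Living Object base station's actual protocol.

    import serial  # pyserial is assumed to be installed

    def check_base_station(port="COM1", probe=b"\x55", expected=b"\xAA"):
        """Send a hypothetical probe byte to the base station and wait for a reply."""
        try:
            with serial.Serial(port, baudrate=9600, timeout=2) as link:
                link.write(probe)
                return link.read(1) == expected          # True corresponds to "Base connected."
        except serial.SerialException:
            return False                                 # no base station found on this port

    print("Base connected." if check_base_station() else "Base not found.")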
Making Sure the Toy is Awake: When turned on but not actively engaged in talking, listening, or another function, the user's Living Object is programmed to slip into "Sleep" mode. Upon doing so, the toy emits a single beep. Even with the beep, during scriptwriting a user may not notice the toy has switched to Sleep mode until the user tries to make the toy talk or record and gets no response.
To switch a sleeping toy back to Alert mode, press one of its sensors, by squeezing its hand, foot, or other body part. The toy emits a beep and is back in Alert Mode, ready for action.
Typically, only a few minutes are required to write a simple interactive script.
Reference is now made to Fig. 57 showing the "talk icon" screen display, Fig. 58 showing the Scriptwriter main screen display with the added talk object and to Fig. 59 showing the Scriptwriter main screen display with the added talk object connected by a line to the start object.
A preferred method for adding a Talk Object is now described: a. Click and hold down the Talk icon which is the first icon in the group to the left of the Scriptwriter desktop. b. Drag the Talk icon onto the Scriptwriter desktop, underneath the Start object. The Talk icon now appears in a white box. When on the desktop, the icon is called a Talk object. c. Move the cursor so that it is on top of the Start triangle. The cursor changes to a pencil. Drag the pencil from the Start triangle to the Talk object. A line appears that
connects both objects.
A preferred method for saving a script is now described with reference to Fig. 60 showing the screen display of the action toolbar with the save icon and to Fig. 61 showing the screen display for naming and saving the script: a. Save the work done so far. Click on the Save icon on the Actions toolbar or select Save from the File menu. The Save window appears. b. For a user's Living Object script to run correctly, the user typically must save all related script and wave files in a suitable directory. If, for example, the user creates a script comprising 3 script files and 26 wave files, the user typically must keep all 29 files in the same directory. The directory does not have to be inside the Scriptwriter directory: it can be in any directory on the hard drive of the computer. The user may click on the down (browse) arrow to get to the directory in which s/he wants to save the file. If appropriate, create a new directory in which to save all files related to the particular script being worked on. It is advisable to name the directory after the toy, such as "Monster script." c. If not done yet, double-click on the desired directory (whether new or old) so that the file is saved to it. d. In the File name field, enter a name for the script, such as "script1." The software automatically adds the extension .script. e. Click Save.
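Since all script and wave files belonging to one script typically must reside in the same directory, a small helper of the following kind (hypothetical, not part of the Scriptwriter) may be used to confirm that nothing was saved elsewhere:

    import os

    def list_script_files(directory):
        """List the .script and .wav files found in a single script directory."""
        names = os.listdir(directory)
        scripts = sorted(n for n in names if n.lower().endswith(".script"))
        waves = sorted(n for n in names if n.lower().endswith(".wav"))
        print(f"{directory}: {len(scripts)} script file(s), {len(waves)} wave file(s)")
        return scripts, waves

    # Example call with an assumed path: list_script_files("C:/Monster script")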
A preferred method for adding speech is now described with reference to Fig. 62 which illustrates a screen window display of a combo box for typing the toy's speech: a. Double click on the Talk object. The Talk window opens. b. In the first field, marked Toy Name, the name of the toy should appear. If it does not, click on the down arrow to the right of the field and choose the name of the toy from the list. c. Click in the TTS box and type the words that the toy is to say, e.g. a question which creates a script sequence that demonstrates the toy's voice recognition ability. Type the question: "What do you feel like doing now, wise guy? When my eyes light up, say: A joke, a song, or a game." d. The toy now has a line of speech that the user has created. To hear the voice segment right now, through the toy, click the "Play" button on the screen. The toy vocalizes the typed text, speaking in computer synthesized speech called
"text-to-speech". Note that the user can select text-to-speech that sounds like a man, woman, boy, or girl. e. To hear the toy vocalize the text in a human voice, the user still typically needs to record the line. Preferred methods for recording speech are described below.
A preferred method for recording speech is now described with reference to Fig. 63 which is a screen window display for recording sound to be played by the toy. The toy's speech can be recorded through the toy or through the computer's microphone. Logistically, it may be easier to use a conventional microphone. If recording through the toy, make sure it is on and awake. If using the computer's microphone, make sure it is plugged into the microphone jack in the back of the computer and that the speakers and sound software are on. In the Talk window, click on the Record button. The Sound Recorder window opens.
As soon as the user clicks on the microphone icon, s/he is recording. For example, the user may record the line, "What do you feel like doing now, wise guy? When my eyes light up, say: A joke, a song, or a game." When done, the user clicks again on the microphone icon.
The user plays back the recording, by clicking on the speaker icon, to make sure it recorded well. If the user is not satisfied with the recording, s/he records it again. If it is desired to increase or decrease the volume of the recording, the Volume dial is adjusted by twisting it to the left or right with the mouse, and the line is then recorded again.
A preferred method for saving a recording is now described with reference to Fig. 64 which is a screen window display for saving a recording.
The recording is preferably saved. Click on the Save icon. The Record into window appears. The recording is saved in the same directory as the Scriptwriter script.
Click on the down arrow to get to the appropriate directory, which is the directory in which the script file was saved earlier, in the above-referenced description of how to save a script. Then save the recorded file under any suitable name, such as "wav1." The software automatically adds the extension .wav. Once saved, the recording becomes a sound file, also known in the art as a "wave" file.
A preferred method for playing back a recording through the toy is now described with reference to Fig. 65 which is a screen window display for selecting a
"wave" file to be played by the toy. Now a line that the user created and recorded is played through the user's toy.
If the Talk window is not already open, double-click on the Talk object on the desktop.
In the Talk window, click on the circle next to WAV. This tells the system that the user wants to play a pre-recorded wave file rather than the text-to-speech version of the text that appears in the TTS box.
Click on the open file icon (to the right of the WAV field) and browse until the wave file just recorded and saved is found. Click on the Open button to select the file. The file and its directory path now appear in the WAV field.
Click on the Play button. The system plays the wave file through the toy.
If no sound comes out of the toy, the toy may have gone to sleep while the user was occupied with scriptwriting and recording. To wake the toy up, squeeze its hand or another body part that contains a sensor. If the toy responds with movement and/or a beep, then the user has switched it back to Alert mode. At this point, the user clicks on the play button and the system plays the wave file through the toy.
If problems occur, perform one or more of the following checks:
Make sure that the name in the Toy Name field at the top of the window is that of the toy. If "Computer" appears, then change the name to the toy's name, as explained above in the description of how to add speech.
Make sure the WAV circle is selected, rather than the TTS circle. This time the toy is to vocalize the wave file rather than the computer synthesized (TTS) version of the text.
The system set-up steps for the base station and toy, as described above, are now followed.
Reference is now made to Fig. 66 that illustrates a Listen icon and to Fig. 67 that is a screen display of a part of the Scriptwriter main window with the Listen object added. To add a Listen object to the script: a. Click and hold down the Listen icon. Drag the Listen icon onto the Scriptwriter desktop, underneath the Talk object. The Listen icon now appears inside a white box. When in this form, on the Scriptwriter desktop, the icon is a Listen object.
b. Move the cursor over the Talk object until it changes to a pencil. Then drag a line from the Talk object to the Listen object. The script now flows from the Start object to the Talk object to the Listen object. Now the toy is told what to listen for.
A preferred method for defining keywords is now described with reference to Fig. 68 which is the "Listen and Sense" screen window display. a. Previously, an object was added that tells the toy to listen. Now the user tells the toy what words to listen for. In defining the Talk object, the toy was told to tell the user: "...say: a joke, a song, or a game." Each of these phrases is a keyword phrase which is now defined. b. Double-click on the Listen object. The Listen and Sense window opens. In the Listen and Sense window, the user defines what words the toy listens for or what sensors are in input mode during the current listen and sense segment. c. Double-check that the correct name appears in the Toy Name field. Click in the Keywords field. d. Type the keywords, following the same spacing and punctuation pattern seen in parentheses. Type: a joke, a song, or a game. e. If it is desired to make one or more of the toy's sensors active at this point, the user should click the sensor number that corresponds to each of the sensors. f. Click OK. Part of the list of keywords appears on the listen icon, as a point of reference.
To improve the accuracy of the keyword recognition, try to use keywords that have at least two syllables and make sure that the keywords in a particular group of keywords sound different from each other. Keyword phrases that may be used typically comprise exactly two words.
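These two guidelines may be approximated automatically, for example by the following sketch, in which the syllable estimate and the similarity threshold are rough illustrative assumptions rather than part of the Scriptwriter.

    from difflib import SequenceMatcher

    def rough_syllables(word):
        """Very rough syllable estimate: count groups of consecutive vowels."""
        vowels = "aeiouy"
        count, in_group = 0, False
        for ch in word.lower():
            if ch in vowels and not in_group:
                count, in_group = count + 1, True
            elif ch not in vowels:
                in_group = False
        return max(count, 1)

    def review_keywords(keywords, max_similarity=0.75):
        """Warn about short keywords and about keyword pairs that look too alike."""
        for kw in keywords:
            if sum(rough_syllables(w) for w in kw.split()) < 2:
                print(f"'{kw}' may be too short for reliable recognition")
        for i, first in enumerate(keywords):
            for second in keywords[i + 1:]:
                if SequenceMatcher(None, first, second).ratio() > max_similarity:
                    print(f"'{first}' and '{second}' may be confused with each other")

    review_keywords(["a joke", "a song", "a game"])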
Sometimes the system does not know how to pronounce a keyword. This typically happens when the user uses special names or made up words. Click on the play button to hear how the system reads out the word. Then adjust the spelling of the word and play the word again, and repeat this process as necessary, until the system pronounces the word correctly.
A preferred method for creating a response for each keyword is now described with reference to Fig. 69 which illustrates the "Keyword link box" in the "Choose Link" screen display and to Fig. 70 showing the Scriptwriter main screen display with a Listen object linked to corresponding Talk objects.
The toy gives a different answer to each keyword it hears. This process of building questions, keywords, and responses to keywords gives the toy its intelligent conversational ability—at the most basic level. The system offers many different features that enable the user to give the dialog a highly intelligent aspect, such as random answers, answers based on memory, and answers based on collected personal data.
To create a response, simply add a Talk object for each keyword as described herein with reference to Adding Speech.
To add a response to each of the keywords that have already been created: a. Drag the Talk icon over to the Scriptwriter desktop and under the Listen object. Connect the Listen object to the Talk object. The keyword link box appears, with the first keyword in the list that was entered in the Listen window. b. Click on OK. If this is not the right keyword, click the down arrow and scroll down until the correct keyword appears. Then click OK. c. Drag the Talk icon over to the desktop four separate times, until there are four Talk objects beneath the Listen object. Connect a keyword link to each Talk object, as in the previous step. d. Type a verbal answer for each Talk object: double-click on the first Talk object, which links to the "a joke" keyword. In the TTS box, type: "You want to hear a joke? You must be a very funny person."
It is advisable to repeat the keyword at the very beginning of the toy's response. This tells the user that the toy indeed understood the spoken keyword. e. For each keyword, type an appropriate response in the TTS box of the corresponding Talk object.
The fourth keyword may automatically display a link called "Not-Found." This link allows the user to create a verbal response to a situation in which the toy did not hear or understand any of the keywords it was listening for (or, if the toy was awaiting sensor input, did not feel input to the sensor that was waiting for input). A description of how to create a "Not-Found" reaction by the toy is provided below.
Creating an Answer to Not-Found: Sometimes the system does not understand the user's response (or the user did not provide a response at all). The fourth Talk object created typically needs to contain speech that tells the user what to do if the toy did not
understand or hear the keyword spoken by the user. Typically, the user should repeat the keyword or make a comment to the effect that the toy did not get one of the expected answers and is therefore moving on to the next point in the script. If the user is asked to repeat the keyword, the user should be reminded what the keywords are, in case she or he has forgotten them.
In the fourth Talk object, which is linked to a "not found" situation, do as follows: a. Type text that tells the user to repeat the keyword. Double-click on the Talk object. In the TTS box, the user might type: "I'm sorry, but I didn't quite hear what you said. Please tell me again. Say: a joke, a song, or a game."
Alternatively, type text that tells the user that the toy did not hear the response, but is moving on to the next point in the script. In the TTS box, type: "Hmmm, you want me to choose? Ok, I'm in the mood for a joke!"
The objects are now linked accordingly. If the user typed the text in Step 1, then the user typically needs to draw a link from the fourth Talk object back to the Listen object. If the user typed the text in Step 2, then the user typically needs to draw a link from the fourth Talk object to the Talk object that provides a response to the keyword "joke."
A preferred method of running a script is now described with reference to Fig. 71 showing the Run-Run screen window display and to Fig. 72 showing the Sample error message screen window display.
At this point, there is enough of a script to run a talk-listen-respond sequence through the toy, which may be performed as follows:
Make sure the toy is awake by squeezing one of its sensors. Select Run-Run: To run the script from a certain point rather than the beginning, simply click on the object from which it is desired to run the script and select Run-Run from Selected. The Start icon on the desktop is highlighted and the Living Object software runs through the script, highlighting each icon as that part of the script is activated. If there are any problems with the script, a window with error messages appears, like that illustrated in Fig. 72.
The errors listed indicate a problem with Talk1 and Talk5. These errors were generated when the Run-Run option was selected and the toy was still in Sleep mode. The system found an error with Talk1 because it was the first segment of the script that the system could not execute. The error in Talk5 reflects the inability of the sleeping toy to listen at all.
Ideally, as the user runs the script, the user's toy voices the text defined in Talk1, listens for one of the three keywords defined in Listen1, and responds accordingly by voicing the text from Talk2, Talk3, or Talk4.
Talk1 is a wave file, whereas the other Talk objects are played through the toy as synthesized speech. To run the entire script as wave files, i.e. in natural voice, the Talk window of each of the Talk objects is opened and the text recorded, as described above.
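For purposes of illustration only, the Talk1-Listen1-Talk2/3/4 sequence may be modeled as a keyword-to-response mapping with a Not-Found fallback; the speak() and listen() functions below are placeholders for the toy's speech output and speech recognition, not the actual Living Object interfaces.

    def speak(text):
        print("TOY:", text)                          # placeholder for TTS or a recorded wave file

    def listen(keywords):
        heard = input("CHILD: ").strip().lower()     # placeholder for speech recognition
        return heard if heard in keywords else None  # None corresponds to the Not-Found link

    def run_segment():
        responses = {                                # Talk2, Talk3, Talk4
            "a joke": "You want to hear a joke? You must be a very funny person.",
            "a song": "A song! Let me clear my throat.",
            "a game": "A game it is. Let's play!",
        }
        speak("What do you feel like doing now, wise guy? "
              "When my eyes light up, say: A joke, a song, or a game.")   # Talk1
        keyword = listen(responses)                                       # Listen1
        if keyword is None:                                               # Not-Found branch
            speak("Hmmm, you want me to choose? Ok, I'm in the mood for a joke!")
        else:
            speak(responses[keyword])

    if __name__ == "__main__":
        run_segment()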
An explanation of each of the functions that appears on the Scriptwriter main screen is now provided with reference to Figs. 73A and 73B showing a table of the functions provided by the Scriptwriter with their icons as presented on the Scriptwriter main screen display.
Reference is now made to Fig. 74 showing the Talk object screen window display. To enter into the talk options window double click on the icon on the script. The user clicks on the Advanced button for the following additional options: a. Toy Name: Determined according to toy available (appears in all of the motion group options) b. Name: Name of object (appears in all of the motion group options) The toy name and name options appear in all of the motion group objects.
TTS (Text to Speech) field: enter in text to be spoken by toy.
A change can be made in the type of voice used by clicking on the different options available to the right of the TTS field, e.g. man, woman, boy, or girl.
A wav file can be inserted by choosing the wav option field and allocating a wav file either from the computer or from a recorded wav file.
A message can be recorded by selecting the record button. This brings the user to the Sound recorder window, as described herein in the description of how to record speech.
The wav file can be played back from this window by clicking on the play button.
Movement Options allows the user to select the type of movement for the talk segment.
The Mood and Stage field are used for additional comment information.
Listen & Sense Object: Reference is now made to Fig. 75 showing the Listen and Sense screen window display.
Toy Name: Determined according to toy available
The keywords field is where the user defines the options available for speech recognition. With the say keywords button which is located at the end of the keywords field the user can hear the words chosen.
The Sensors field allows the user to define the sensor areas located on the toy for non-verbal response.
Listen time allows the user to define the maximum time given to listen or wait for activation of sensors.
The Memory field allows the user to save the results of the recognition process.
In order to change the accuracy level the user clicks on the "Active" field and then "ok". This brings the user to the Script Properties window. Here the user can change the speech recognition accuracy level. The lower the level of accuracy, the more sensitive the recognition is.
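The effect of the accuracy level may be pictured by the following sketch, in which recognition candidates carry a confidence value; the 0-100 scale and the function below are illustrative assumptions rather than the recognizer's actual interface.

    def accept_recognition(candidates, accuracy_level):
        """Return the best keyword whose confidence reaches the accuracy level.

        candidates: list of (keyword, confidence) pairs with confidence in 0..100.
        A lower accuracy level accepts weaker matches, i.e. recognition is more
        sensitive; a higher level rejects them, leading to the Not-Found branch."""
        best = max(candidates, key=lambda pair: pair[1], default=None)
        if best is not None and best[1] >= accuracy_level:
            return best[0]
        return None

    print(accept_recognition([("a joke", 62), ("a song", 40)], accuracy_level=50))  # a joke
    print(accept_recognition([("a joke", 62), ("a song", 40)], accuracy_level=70))  # None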
Move Object: Reference is now made to Fig. 76 showing the Move screen window display. The Movement field allows the user to pick the type of activity the Toy is to make. The Moving Time field defines the length of time the movement is to take place. When the user chooses the "Run in the Background" option, the user is instructing the toy to continue with the script once receiving the movement command.
Record Options: Reference is now made to Fig. 77 showing the Record Options screen window display. The Record option allows the user to record his/her voice, or anyone else's voice. Wav File Name - Insert name of file that is to be recorded in the script.
Memory Object: Reference is now made to Fig. 78 showing the Memory screen window display, which allows the user to put a value into a certain compartment of the computer's memory and give the compartment a name.
Condition Object: Reference is now made to Fig. 79 showing the Condition screen window display.
Compare two different values or check if one value is greater than, less than, equal to, or not equal to a certain value.
Calculation Object: Reference is now made to Fig. 80 showing the Calculation
screen window display.
Do some math on the values that are stored in the computer's memory compartments. The computer can add, subtract, multiply and divide.
Random Object: Reference is now made to Fig. 81 showing the Random screen window display which allows a user to create a list of values from which the computer chooses on a random basis, and to tell the computer in which memory compartment to put the chosen value.
Time Marker Object: Reference is now made to Fig. 82 showing the Date and Time screen window display which allows a user to put a certain time or date in a compartment in the computer's memory.
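It is appreciated that the Memory, Condition, Calculation, Random and Time Marker objects described above may be pictured as operations on a dictionary of named memory compartments; the following sketch is illustrative only and does not reflect the Scriptwriter's internal implementation.

    import random
    from datetime import datetime

    memory = {}                                    # named memory compartments

    def store(name, value):                        # Memory object
        memory[name] = value

    def condition(a, op, b):                       # Condition object
        return {">": a > b, "<": a < b, "==": a == b, "!=": a != b}[op]

    def calculate(target, a, op, b):               # Calculation object
        memory[target] = {"+": a + b, "-": a - b, "*": a * b, "/": a / b}[op]

    def pick_random(target, values):               # Random object
        memory[target] = random.choice(values)

    def time_marker(target):                       # Time Marker object
        memory[target] = datetime.now()

    store("score", 3)
    calculate("score", memory["score"], "+", 2)    # score is now 5
    pick_random("activity", ["a joke", "a song", "a game"])
    time_marker("last_played")
    print(memory, condition(memory["score"], ">", 4))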
Wait Object: Reference is now made to Fig. 83 showing the Wait screen window display. This display instructs the toy to wait for a certain amount of time before proceeding with the script.
Jump Object: Reference is now made to Fig. 84 showing the Jump screen window display, allowing a user to skip to a different point in the script.
Execute Object: Reference is now made to Fig. 85 showing the Execute screen window display which allows the user to run any software program on the computer.
Script Object: Reference is now made to Fig. 86 showing the Run Script screen window display which enables the user to run any other Scriptwriter Script.
Internet Object: Reference is now made to Fig. 87 showing the Internet screen window display which opens a defined web page.
Graphics Object: Reference is now made to Fig. 88 showing the Graphics screen window display which shows a picture or video file on the computer's screen.
Preferably, some or all of the following options are provided: a. Display time is the length of time the image/video is to be shown. b. Size field allows the user to determine the height and width of the image chosen. c. Choose Display is a function used for limiting and controlling the display panels.
Video (Advanced Options): When choosing the "Wait until finish" command this instructs the toy to wait until the video is completed before continuing with the script.
Reference is now made to Figs. 89, 90, 91, 92 and 93 showing "End", "Script Properties", "Choose Link", "Pop-up Menu" and "Options" screen window displays, respectively.
End Object: The end object stops the script and allows the users to define the exit names. When opening the script from a different window and the single output mode is not defined, the user is able to view all the available script exits.
Once a script has been written by the user, the IDE lets the user activate it in a way that gives life-like behavior to the toy. The IDE comprises algorithms and a strong compiler that integrate time, pattern, and minimal interval and apply them to the script or a collection of scripts. The resulting artificially created life for the toy is typically authentic in the sense that users can easily forget they are speaking and interacting with a toy. A description of how to use the IDE to create artificial life is now provided.
Artificial life is divided into three main screens, the Editor, the Manager, and the Viewer. Each of these screens is described herein.
Artificial Life (AL) Editor: There are two kinds of AL editors, professional and non-professional.
Reference is now made to Fig. 94 showing the Artificial Life Algorithm Editor screen window display.
The Artificial Life Professional Editor allows the user to define formulas and assign values to local and system parameters that later act on a given script. The Editor is used to write a user's own formulas or edit pre-written formulas provided by the function library. The Editor then allows the user to create an algorithm from the formula the user has defined, and associate the algorithm with the current script.
In the current example, a formula and parameters are being defined to determine how often the script entitled Games.script is executed.
In the Behavior parameter box, four parameters must be assigned values: memory, initial Priority, threshold Priority, and minimum Interval.
To do a test run on the algorithm, the user typically needs to assign the formula a temporary value in the Formula parameter box. For example, the formula on the sample screen has been assigned a value of 1. This value could represent successful completion of, say, a lesson in science. If the script has never been completed successfully, it could have formula parameter value of 2.
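The interplay of these parameters with the formula value may be pictured by the following sketch; the update rule (the formula value is added to the priority on every tick, and the priority is reset after execution) is an assumption made for illustration and is not the Artificial Life engine's actual algorithm.

    def run_artificial_life(scripts, tick_seconds=1.0, ticks=120):
        """Simulated scheduler: scripts is a list of dicts with the keys
        'name', 'priority' (initial Priority), 'threshold' (threshold Priority),
        'min_interval' (minimum Interval, in seconds) and 'formula' (a callable)."""
        now = 0.0
        for s in scripts:
            s["last_run"] = -1e9                         # never executed yet
        for _ in range(ticks):
            now += tick_seconds
            for s in scripts:
                s["priority"] += s["formula"]()          # the formula raises the priority
                if (s["priority"] >= s["threshold"]
                        and now - s["last_run"] >= s["min_interval"]):
                    print(f"t={now:4.0f}s: executing {s['name']}")
                    s["priority"], s["last_run"] = 0, now

    run_artificial_life([{"name": "Games.script", "priority": 0,
                          "threshold": 10, "min_interval": 30,
                          "formula": lambda: 1}])        # formula value of 1, as in the example above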
Reference is now made to Fig. 95 showing the Artificial Life Editor screen window display. The editor enables the user to build a formula for a specific script. The steps for adding an AL formula to a script are as follows: first, the user chooses the script by pressing the Load button; then the user fills in the formula by double-clicking on a cell, where at least one cell must typically be filled in; finally, the user saves the AL formula by pressing the Save button.
Reference is now made to Fig. 96 showing the Artificial Life Editor screen window display with the Cell Management pop-up window. By right-clicking with the mouse, the user gets a pop-up with cell functions.
Reference is now made to Fig. 97 showing the Artificial Life Editor screen window display with the Function Library pop-up window.
To add a specific function, select the function and then fill in its properties. Press OK in the properties section or double-click on the selected function to add the function, or press the Esc button to cancel.
Reference is now made to Fig. 98 showing the Artificial Life Manager screen window display. The Artificial Life Manager gives the user an overview of all scripts that the Artificial Life engine is to check and the formulas, parameters, and values assigned to them. It is possible to work from the Manager to make changes to the definitions. The Manager contains functions for adding, removing, and viewing the history of the last 10 executions of each script. Highlighting a script name with the highlight bar displays all the relevant details of that script.
Reference is now made to Fig. 99 showing the Artificial Life Editor Viewer window display. The Artificial Life Viewer presents a real-time and historical graphical depiction of the status of up to five scripts at any one time. The Viewer can be used to track the behavior of different scripts, as determined by the value stored for each script in "Memory." The "Show activation level" item can be selected to view the threshold of the selected scripts, and thereby determine when the last time was that each script executed. The Viewer displays the last 10 minutes of Artificial Life activity on an ongoing basis, scrolling to the right as additional charted activity takes place.
Building Artificial Life Environment: A preferred method for building an AL toy typically comprises the following steps: a. Make list of scripts. b. Make list of parameters.
c. Make Tables, e.g. Dependence table and formula table. d. Fill in formula table. Fig. 101 is an example of a Formula table. e. Fill in dependence table. Fig. 100 is an example of a Dependence table. f. Build the scripts. g. Register/ Add the scripts.
Reference is now made to Figs. 102 and 103 showing the Scriptwriter main screen display with corresponding AL scripts, specifically a game script and a laugh script respectively.
Preferably the system provides a variety of optional commands, which may be implemented as Function Bar Commands. An example of a set of optional commands which may be provided is now described.
File Menu: Reference is now made to Fig. 104 showing the Scriptwriter main screen display with the File menu open. The file menu allows the user to create a new script, open an existing script, and perform other operations that are found in a normal file menu. The file menu preferably includes menu options such as the following: a. New Script: In order to begin writing a script, click on new script in the file menu and a new window appears on the screen. Now the user can begin working on his/her script. b. Open Script: To open an already saved script, click on open script in the file menu. A window opens up containing a list of the existing scripts which the user can search from. When the user finds the script s/he is looking for, click on its name, for example script1.script, and the script file opens. c. Download Script: The download script command in the File Menu opens a collection of existing scripts typically residing on an internet site such as www.creator.co.il. An existing script can be downloaded from the web to the IDE Scriptwriter program. d. Save Script: To save a script created on the program, click on the Save Script command in the File Menu. e. Save Script As: To save the new script under a certain file name, click the Save Script As command on the File Menu. A window opens up asking for the script's name; name the script, press the Save command, and the file is saved in the directory assigned to it.
f. Save Script As Image: The Save Script As Image command saves the script in the format of a picture image. The script is saved as a Metafile Image (WMF). WMF is especially compatible with the Word program. When the user saves the script in the form of WMF, the user can make corrections and changes, outside the IDE Scriptwriter program, in Word itself. g. Create Report: Reference is now made to Fig. 105 showing the Scriptwriter main screen display with the Create Report Window. The Create Report command creates a chart in the Excel program which documents which objects appear in the script created. In the window that opens, when the user clicks on Create Report, the user can choose to chart all properties of all existing objects by pressing Print All. The user can limit the chart to a specific object, for example talk, by selecting Create Selected in the window that opened when Create Report was clicked on. h. Print Description: When clicking on the Print Description command, a detailed text file and NOT a chart appears. The same information, which appears in the Create Report chart, appears in Print Description in textual form. i. Print Preview: When clicking on the Print Preview command, the user receives a print preview of the script s/he has just created. j. Print: The Print command prints a visual picture of the script as well as a verbal description of the stages of the script. Below the print command in the file menu, appear the Last Opened Scripts. It can display a maximum of the last three files that have been worked on. The last command in the File Menu is the Exit command. When clicking on the exit command, the user exits the IDE Scriptwriter Program.
Reference is now made to Fig. 106 showing the Scriptwriter main screen display with the Edit menu open. The Edit Menu allows the user to amend and change the script s/he has already created. It includes commands such as the following: a. Undo: This function allows the user to undo the last operation that was made on the script that s/he has been working on. b. Redo: Allows the user to redo an undo operation that s/he has made. c. Cut: Allows the user to cut a part of his script and to paste it in another place, or to cut a part of the script in order to remove it. d. Copy: Allows the user to copy a part of his script and to place the same action copied into another part of the script, thus having two operations repeat themselves in
two separate parts of the script. e. Paste: The paste command works together with the cut and copy commands: after cutting or copying a part of the script, the user clicks on the paste command in order to place that content in another part of the script. f. Select All: Allows the user to select all parts of the script so that changes and corrections the user wishes to make can be applied to the whole script.
Reference is now made to Fig. 107 showing the Scriptwriter main screen display with the Find Window. Using the Find command the user can search for a specific word or object in his whole script, making his search easier.
Reference is now made to Fig. 108 showing the Scriptwriter main screen display with the Replace Window. When clicking on Replace, a window appears. This window is split into two sections: Target and Object.
Target- the target defines where the desired replacement should take place. It can take place in a selected part, or in the whole script.
Object- the object defines in which objects the replacement should take place. It can take place in All objects that have shared properties, or the Replace command can be executed according to Object Type. A replacement is made in a specific object according to its unique properties.
Clipboard: Copy the image or description (Copy Image to Clipboard, Copy Description to Clipboard) of the script onto Windows' clipboard. All Windows applications can now use the image or description.
View Menu: Reference is now made to Fig. 109 showing the Scriptwriter main screen display with the View menu open. The View Menu offers different forms of viewing the script created, such as zoom in/out and volume.
Zoom in: The Zoom in lets the user view his script in magnified size.
Zoom out: The Zoom out lets the user view his script in decreased size.
Normal Size: The normal Size lets the user view his script in its original size.
Volume: Reference is now made to Fig. 110 showing the Scriptwriter main screen display with the Volume and Speech Recognition Windows. Clicking on the Volume shows the volume of all that is spoken or heard in the script. This can help the user understand why, for example, words are not being recognized by the program
because the microphone level is too low.
SR Result: The system uses a Speech Recognition (SR) window to show the speech recognition results while running the script. The accuracy helps the user determine if the sensitivity in identifying certain parts in the program should be lowered. The higher the accuracy, the closer the enunciation is to the computer's expected pronunciation.
The Rec. Wav button allows the user to hear the last saved recordings during the listen process.
Reference is now made to Fig. 111 showing the Scriptwriter main screen display with the Watch List and the Add Memory windows.
Watches: Using the Watches command, the user can follow the different values of Memory that have been saved, during or after running the script.
Execute log: Reference is now made to Fig. 112 showing the Scriptwriter main screen display with the Execute Log and the Messages windows. The Execute Log is a logger of all operations that have been identified and executed. This can be extremely helpful in identifying errors that have been made.
Messages: When clicking on the Messages, a box comes up on screen identifying any errors that might have been made or any hints the program has for the user. If nothing appears in the box no error was found and no hint offered.
Sensor Simulation: Reference is now made to Fig. 113 showing the Scriptwriter main screen display with the Sensor Selection window. This is a simulation for the sensors of the specific object in the user's script. The sensors that are active in different parts during the script are identified by name during this Sensor Simulation.
Link Style: This refers to the different styles of links that can be made between two objects in the script (e.g. between Talk and Move). There are six different styles of links, e.g. vertical-horizontal and horizontal-vertical. These different styles help the user to better organize his script writing form.
The user can also change link style by double clicking on the link line itself, in his script.
Simulator: When clicking on the Simulator, a window opens up on the user's screen. A simulator doll is displayed that actually acts out the script, but only if it is running in simulation mode.
Scheduler: Reference is now made to Fig. 114 showing the Scheduler screen
window display. The Scheduler can determine at what set time the user's script is executed. A user can schedule his script to run once an hour, once a day, on an event like a birthday, or every time a doll's hand is pressed. Not only scripts can be scheduled; the user can also schedule a Message to be delivered at a set time or event. A user can also receive a List of the last scripts to be run and the dates on which they were run.
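A time-based portion of such a scheduler may be sketched with Python's standard sched module as follows; the script names and delays are examples only, and event-based triggers (such as pressing a doll's hand) would be handled separately by the sensor mechanism.

    import sched
    import time

    scheduler = sched.scheduler(time.time, time.sleep)

    def run_script(name):
        print(f"{time.strftime('%H:%M:%S')} running {name}")

    # Hypothetical tasks: a script in one hour, and a scheduled message two hours from now.
    scheduler.enter(3600, 1, run_script, argument=("birthday.script",))
    scheduler.enter(7200, 1, run_script, argument=("message_for_grandma.script",))

    # scheduler.run()   # uncomment to block until all scheduled tasks have executed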
Scheduler - adding a task: Reference is now made to Figs. 115 and 116 showing the Scheduler screen window display with the Add Task pop-up window and the Scheduler List pop-up window, respectively.
Find Toys: Reference is now made to Fig. 117 showing the Scriptwriter main screen display with the Find Toys and Assign Channels window. This command searches for the marked toys; the toys that are defined are identified. It can also tell the user which toys are awake, which are sleeping, and which do not exist.
Run Menu: Reference is now made to Fig. 118 showing the Scriptwriter main screen display with the Run menu open. This menu allows the user to Run his finished script, pause and so on. The menu typically offers options such as the following:
Run: Play the script.
Run from Selected: Allows the user to begin playing his script from a specific chosen point.
Pause: Pause the script midway at a chosen place.
Stop: Bring the running script to a complete stop.
Check Script: Check the script for any errors or hints (if there are any errors or hints in the script the Message window appears).
Tools Menu: The Tools Menu controls the entire environment of the IDE Scriptwriter program. A user can control the toys, volume, and sensors.
Options: The Options commands, in the illustrated embodiment, are split into the pages or screen windows illustrated in Figs. 119 - 124, including Toys, Hardware, Environment, Volume Setting, Living Toys, Script and Report.
Toys Page: Reference is now made to Fig. 119 showing the Scriptwriter main screen display with the Option window at the Toys Page. In the Toys page the user can define the toys shown in the list. A toy is typically defined by a name, a type (which defines the toy according to the operations it can perform), a number, and a channel. This page also allows a user to remove toys from the list.
Reference is now made to Fig. 120 showing the Scriptwriter main screen display with the Option window at the Hardware Page. The Hardware page is split into the following three subsections: a. Check Base Station- checks the communication between the base station and the program, and resets the base station. b. Check Toys- the toys which the user has chosen to work with are checked. c. Search for Toys- searches for a toy according to its number or channel; when the toy is found, the program activates the toy, which in turn makes a sound.
There is also a Report box that reports what has happened, which toy was identified and which was not.
Environment Page: Reference is now made to Fig. 121 showing the Scriptwriter main screen display with the Option window at the Environment Page. Simulation through the PC- simulation of the script run in the computer. View simulator- awakens the simulator doll inside the program.
Advanced Properties- shows every object (e.g. talking, moving) with advanced properties.
Toy Identity- Changes the illustration in the script itself to an illustration of the chosen toy and not the generic illustration. This helps to clarify which toy is talking or operating at different points in a multi-toy script. Default Toy- the toy in the script is the default toy.
Volume Setting Page: Reference is now made to Fig. 122 showing the Scriptwriter main screen display with the Option window at the Volume Setting Page. These Volume Settings are for the speaker as well as the microphone in the doll. A doll is selected, and the reload button is clicked on. This asks the doll to send the program its internal volume settings. An update can be made to these settings, saving the update and changing the original settings. After the update, another check of the volume settings is made.
Living Toy Page: Reference is now made to Fig. 123 showing the Scriptwriter main screen display with the Option window at the Living Toy Page.
Using the Living Toy page the user can activate all toys that are programmed for
artificial life. The toy, like a live person, is able to "sleep", "eat", and "wake up" at a set time. A user can choose to activate only certain toys for artificial life; once the user has chosen them, they "wake up" if they are sleeping.
Script page: Reference is now made to Fig. 124 showing the Scriptwriter main screen display with the Option window at the Script page.
When selecting "activate automatic downloading" the scripts from the internet are directly downloaded into a chosen place in the user's hard disk. This option is only available to those who register. A user can choose to receive only scripts that match criteria selected.
Report Page: The Report page typically comprises the following elements:
Save Logger- every script that has run can be saved along with the date of the running. This can help to keep better track of the scripts.
Save Listen File- can save any listen made in a script (it always saves the last listen heard).
Memory- this allows the user to add a memory to, or remove a memory from, the collection of memories.
A suitable Error list for the system described herein is the following:
1 - SR-Catch all error. Probably an internal error or a subtle corruption of the database.
2 - SR-User not found in database.
3 - SR-Language not found in database.
4 - SR-Syntax not found in database.
5 - SR-Context not found in database.
6 - SR-Database not found.
7 - SR-Dictionary not found in database.
8 - SR-Context with this name already exists in database.
9 - SR-Language with this name already exists in database.
10 - SR-Syntax with this name already exists in database.
11 - SR-User with this name already exists in database.
12 - SR-Database with this name already exists.
13 - SR-Error occurred while trying to activate context on recogniser.
14 - SR-Error occurred while trying to activate language on recogniser.
- SR-Error occurred while trying to activate syntax on recogniser.
- SR-Error occurred while trying to load user.
- SR-Grammar load failure.
- SR-No context defined.
- SR-No database defined.
- SR-No algorithm running.
- SR-No active context.
- SR-Invalid pointer (in a parameter).
- SR-Wrong inifile.
- SR-Access denied.
- SR-Buffer too small (in a parameter).
- SR-You cannot perform this action in this state.
- SR-Could not activate.
- SR-Out of heap memory.
- SR-No word recognized.
- SR-Invalid syntax.
- SR-Cannot merge given contexts.
- SR-WORDNOTFOUND-Cannot find or delete word.
- SR-Word already exists.
- SR-Class not found in context.
- SR-Cannot convert BNF file to context.
- SR-Cannot merge active words.
- SR-The active context is closed.
- SR-Cannot open file.
- SR-Cannot load library.
- SR-Cannot merge.
- SR-Wrong type (in a parameter).
- SR-Unsupported wave format.
- SR-Already active.
- SR-Context is still installed.
- SR-Cannot load context.
- SR-Context is not active.
49 - SR-Cannot load language.
50 - SR-Cannot load user.
51 - SR-Different languages cannot be active at the same time or trying to compile to a context with a different language.
52 - SR-Different users cannot be active at the same time.
53 - SR-No wave format specified.
54 - SR-Context is active.
55 - SR-Language is in use.
56 - SR-Language is in use.
57 - SR-Cannot create directory.
58 - SR-No valid database.
59 - SR-Database is opened.
60 - SR-Language is already registered.
61 - SR-Language is not registered.
62 - SR-Context is already registered.
63 - SR-Context is not registered.
64 - SR-Environment already exists.
65 - SR-Environment not found.
66 - SR-Cannot delete directory.
67 - SR-No dictionary specified.
68 - SR-Dictionary already exists.
69 - SR-DLL not found.
70 - SR-Corrupt DLL.
71 - SR-Database is corrupted.
72 - SR-Feature is not yet implemented.
73 - SR-Invalid input (of a parameter, or input signal).
74 - SR-Conversion failed.
75 - SR-Unable to copy a file.
76 - SR-Unable to delete a file.
77 - SR-Context is opened.
78 - SR-Bad name.
79 - SR-Incompatibility problem.
80 - SR-Disk full.
81 - SR-Dictionary is opened.
82 - SR-Format not found.
83 - SR-Symbol already exists in library.
84 - SR-Symbol not found in library.
85 - SR-Database is in use by a recogniser.
86 - SR-Dictionary is in use.
87 - SR-Syntax is in use.
88 - SR-Error creating file.
89 - SR-License Number in asrapi is invalid.
90 - SR-No training set found.
91 - SR-Property not found.
92 - SR-Export not found.
93 - SR- Value out of range.
94 - SR-No context library defined.
95 - SR-Different database used.
96 - SR-Error when generating transcription of a word.
97 - SR- Age can not be active during user word training.
-1 - TTS-File not found.
-2 - TTS-File creation error.
-3 - TTS-File writing error.
-4 - TTS-Memory allocation error.
-5 - TTS-Memory locking error.
-6 - TTS-Memory unlocking error.
-7 - TTS-Memory free error.
-8 - TTS-Wave Device open error.
-9 - TTS-Wave device closing error.
-10 - TTS-Specified waveformat not supported.
-11 - TTS-No wave devices available.
-12 - TTS-TTS has not been initialized.
-13 - TTS-Specified frequency not available.
-14 - TTS-Specified parameter is out of range.
-15 - TTS-Specified output PCM format not available.
-16 - TTS-TTS system is busy.
-17 - TTS-Not authorized TTS DLL is used.
-18 - TTS-Dictionary loading error.
-19 - TTS-wrong dictionary handle.
-20 - TTS-Wave device writing error.
-21 - TTS-No input text.
-22 - TTS-Bad command for current state.
-23 - TTS-Grapheme to phoneme conversion fail.
-24 - TTS-Unknown dictionary format has been found.
-25 - TTS-Creating instance error.
-26 - TTS-No more TTS instance available.
-27 - TTS-Invalid TTS instance has been specified.
-28 - TTS-Invalid TTS engine has been specified.
-29 - TTS-TTS instance is busy.
-30 - TTS-TTS engine loading error.
-31 - TTS-No engine has been selected.
-32 - TTS-Internal system error.
-33 - TTS-Specified wave device is busy.
-34 - TTS-Invalid dictionary entry has been specified.
-35 - TTS-Too long source or destination text has been used.
-36 - TTS-Max. dictionary entries are reached.
-37 - TTS-Specified entry exists already.
-38 - TTS-Not enough space.
-39 - TTS-Invalid argument.
-40 - TTS-Invalid voice id.
-41 - TTS-No engine has been specified.
x01 - Invalid Handle.
x02 - Device already opened.
x03 - Device can't setup.
x04 - Memory allocation.
x05 - No communication.
x06 - System.
x07 - Base not connected.
x08 - Timeout.
x09 - Invalid register number.
x10 - Invalid channel.
x11 - Invalid DeviceID.
x12 - Wrong state.
x13 - Invalid parameter.
x14 - Sound card IN opened.
x15 - Sound card OUT opened.
x16 - File open.
x17 - File create.
x18 - File read.
x19 - File write.
x20 - Format not supported.
x21 - TTS speech generation.
x22 - SR engine not active.
x23 - Buffer is too small.
x24 - SR no active context.
x25 - TTS engine not active.
In the above error list, SR is an abbreviation for "speech recognition" and TTS is an abbreviation for "text to speech".
It is appreciated that the apparatus of the present invention is useful for generating scripts not only for toys but also for any computer-controllable object.
A preferred method for multilevel interaction between a user and a toy or other entertainment element is now described. Effective interaction is based on a continuous effort to cultivate the interest of the other party. To achieve this goal, each party struggles to identify the possible interests of the other party. This is done in daily small talk as well as in business negotiations.
In some situations, a direct question can be used to identify a possible interest or even a detailed form. However, in many situations an indirect approach is more useful. The indirect approach is more appropriate when conversing with kids and especially for very young children.
The present invention relates to machines' interaction with humans, where the machines use the indirect approach to identify the characteristics, preferences and interests of the person. The proper identification of the characteristics, preferences and interests enables the machine to suggest appropriate content, information, entertainment, games, stories, riddles and jokes as well as TV programs, movies and theater shows, and various articles on sale.
A multilevel question is a question that has two or more correct answers. The answer selected by the questioned person provides information about him or her. Such information may be: the age range, gender, culture, breadth of knowledge, preferences and inclinations, possible areas of interest and associations.
Typically the question is a trivia question with one "obvious" or "superfluous" answer and another "concealed" or "deep" answer that requires a somewhat "increased" level of knowledge. Typically the question contains a clue that orients the "knowledgeable" person to select the "concealed" answer. The "unknowledgeable" person disregards the clue and selects the "obvious" answer.
There can be several clues for the "concealed answer" and there can be more than two "concealed answers" identifying, respectively, more than two "layers of knowledge."
Any suitable clues can be provided such as: a. Complex syntax, even somewhat "wrong" syntax. b. Soundex, using words that sound alike but have different meanings. c. Additional information that the "knowledgeable person" can identify as redundant.
The "closed" type of "multi layered questions" provides the answers while the "open" type does not. For example ("open" version): Question: Who were the beetles? Answer 1 : Insects. Answer 1 : A pop music group.
"Concealment": the soundex beetles and Beatles
"Clue": the use of "were" instead of "are." The knowledge of the Beatles reveals the age group of the person. The "closed" version of this question may be:
Question: What did the beetles do?
Answer 1: Crawl
Answer 2: Sing.
It is possible to identify the knowledge of the Beatles (to identify the age group) by asking, for example, "Who sailed the yellow submarine?" The disadvantage of this method is that the "unknowledgeable person" does not know the answer and is therefore discouraged. If a "closed form" question is used, such as "Who sailed the yellow submarine, the beetles or the crocodiles?", the "unknowledgeable person" can still guess and with a 50% probability select the correct answer. The multilevel question provides two (or more) correct answers and therefore, even the "unknowledgeable person" can knowingly select a correct answer.
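By way of a non-limiting illustration, the closed form of this question and the inference drawn from the selected answer may be sketched as follows; the answer-to-inference mapping and the function names below are illustrative assumptions only.

    QUESTION = "What did the beetles do? Say: crawl, or sing."
    INFERENCES = {
        "crawl": None,                                             # the "obvious" answer gives no extra information
        "sing": "knows the Beatles; likely an older age group",    # the "concealed" answer
    }

    def ask_beetles_question(get_answer):
        """Both answers are correct, so the player is never told s/he is wrong."""
        print(QUESTION)
        reply = get_answer().strip().lower()
        if reply not in INFERENCES:
            return "no recognizable answer"
        return INFERENCES[reply] or "no additional information gained"

    # Example: a player who answers "sing" reveals knowledge of the Beatles.
    print(ask_beetles_question(lambda: "sing"))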
The preferred embodiment of the current invention relates to computer games played by kids and, more specifically, to games played with computer controlled toys. The present invention employs multilevel questions to identify the player's age and level of knowledge to select the level of content (game or story) to play with the player. The interrogation is performed by verbally asking the questions and performing speech recognition on the recorded response.
It is clear that other forms of interaction can also be employed. Such forms can be visual and tactile, employing screen display, keyboard, pointing device such as mouse, switches and sensors. Concealed information can also be provided by moving or lighting limbs of a doll or a graphic character on screen, or other parts of the scene.
Fig. 125 is a table that presents an interactive script between a toy and a player where the toy determines the characteristics of the player (namely, age range) to suggest the appropriate level of game content.
A list of multilevel questions is maintained with a set of characteristics associated with each question. The characteristics define the types of persons that the question can help identify according to characteristics such as gender and age group, as shown in the table of Fig. 126. When the machine is required to characterize the person
who uses it, the machine selects the appropriate question according to its associated characteristics. For example, if the player selects a male avatar, the program may ask a multilevel question with an obvious answer of general knowledge and a concealed answer regarding, for example, football. If the program is about to present an advertisement, it may first identify the age group by presenting a multilevel question that is associated with age group identification.
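A minimal sketch of maintaining such a list and selecting a question by the characteristic to be identified might look as follows; the field names and sample entries are hypothetical and do not reproduce the table of Fig. 126.

    QUESTIONS = [
        {"text": "What did the beetles do? Crawl or sing?",
         "identifies": {"age group"}},
        {"text": "(a hypothetical question whose concealed answer concerns football)",
         "identifies": {"gender", "age group"}},
    ]

    def select_question(target_characteristic):
        """Return a multilevel question whose associated characteristics include the target."""
        for question in QUESTIONS:
            if target_characteristic in question["identifies"]:
                return question["text"]
        return None

    # Before presenting an advertisement, the program may first identify the age group:
    print(select_question("age group"))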
Fig. 126 is a table that stores an example of a list of multilevel questions. Fig. 127 is a script diagram of an interactive script where a toy determines the characteristics of a player (e.g. age range) to suggest the appropriate level of game content.
In accordance with a preferred embodiment of the present invention, the adaptive toy system of Figs. 1 - 5 comprises a learning machine that uses trivia game scripts. An example of such trivia game scripts is described in Figs. 6-9. A detailed example of an adaptive toy system that uses trivia game scripts in accordance with a preferred embodiment of the present invention is now described.
Reference is now made to Fig. 128 which describes by means of a flowchart the basic stages of a learning procedure for a toy system in accordance with a preferred embodiment of the present invention. As shown in Fig. 128, trivia game scripts are sent to users of a toy system. For example, a sample-group of users of a given age group from among the entire community of users on a toy system receive the said scripts over a network. It is appreciated that the age of a user is typically registered by a toy system. An example of a registration procedure in accordance with a preferred embodiment of the present invention is described in Fig. 16.
The results of game scripts played by individual users are retrieved by a toy system. For example, the results of each user are initially processed on a personal computer controlling the toy of the said user, and the results are then sent over a network to a central computer on a server of a toy system.
The learning process of Fig. 128 typically comprises a stage of checking whether game results are "meaningful". For example, game results are typically considered not meaningful if the game scripts turned out to be too difficult for users participating in a sample group. In such a case, one or more parts of a learning process are typically modified before the whole process is repeated. If game results are
considered meaningful, such results are typically used in order to change the content provided by a toy system, possibly for the entire community of users on such a system. Examples of this as well as the previous stages of the procedure of Fig. 128 are detailed below. Thus, the procedure of Fig. 128 preferably provides the toy system with one or both of the following two adaptation capabilities: 1) change of toy content; and 2) modification of the learning procedure itself.
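The stages of Fig. 128 may be summarized by the following simplified Python sketch; the stub functions merely stand in for the networked stages described above and are hypothetical.

# Hypothetical, highly simplified outline of the learning procedure of Fig. 128.
def send_scripts(sample_group, scripts):
    print("sending %d scripts to %d users" % (len(scripts), len(sample_group)))

def collect_results(sample_group):
    # In the described system, each user's results are processed on the personal
    # computer controlling that user's toy and then sent to the server.
    return [{"user": user, "too_difficult": False} for user in sample_group]

def results_are_meaningful(results):
    # Example criterion: results are not meaningful if the scripts turned out
    # to be too difficult for the users participating in the sample group.
    return not all(result["too_difficult"] for result in results)

def learning_procedure(sample_group, scripts):
    send_scripts(sample_group, scripts)
    results = collect_results(sample_group)
    if not results_are_meaningful(results):
        return "modify the learning process and repeat"
    return "use results to change toy content"

print(learning_procedure(["user-1", "user-2"], ["trivia-script-1"]))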
Reference is now made to Fig. 129, which describes by means of a flowchart an example of a procedure of analyzing game results for a single user in accordance with a preferred embodiment of the present invention. The game in the present example includes three trivia game scripts that are described in Figs. 7-9. The following four parameters are used in the course of the present procedure (an illustrative sketch of their computation follows the list below):
1. S1Boring: shows the number of questions to which a user did not give a recognizable answer. Since a game in the present example comprises 9 questions, the value of this parameter ranges between 0 and 9.
2. S1L1: shows which and how many questions from among the 3 questions of level 1 were answered correctly by a user. Typically, S1L1 is incremented by 2^(n-1) if the n'th question is answered correctly, so that for each value of S1L1 it is possible to discern which questions were answered correctly. For example, if S1L1 equals 1, 2 or 4, that means that only a single question (questions 1, 2 and 3, respectively) was answered correctly by a user.
3. S1L2: shows which and how many questions of level 2 were answered correctly by a user.
4. S1L3: shows which and how many questions of level 3 were answered correctly by a user.
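The computation of these four parameters for one user may be sketched as follows; the code is purely illustrative and the dictionary keys and the analyze_single_user function are hypothetical.

# Hypothetical computation of the four parameters for one user.
# 'answers' maps (level, question_number 1..3) to one of:
#   "correct", "incorrect", or None (no recognizable answer).
def analyze_single_user(answers):
    s1_boring = sum(1 for a in answers.values() if a is None)
    levels = {1: 0, 2: 0, 3: 0}
    for (level, n), a in answers.items():
        if a == "correct":
            levels[level] += 2 ** (n - 1)   # bit n-1 marks the n'th question
    return {"S1Boring": s1_boring,
            "S1L1": levels[1], "S1L2": levels[2], "S1L3": levels[3]}

# Example: only question 2 of level 1 answered correctly, so S1L1 equals 2.
example = {(1, 1): "incorrect", (1, 2): "correct",   (1, 3): None,
           (2, 1): None,        (2, 2): "incorrect", (2, 3): "incorrect",
           (3, 1): "correct",   (3, 2): "correct",   (3, 3): "correct"}
print(analyze_single_user(example))   # S1Boring=2, S1L1=2, S1L2=0, S1L3=7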
The following are possible results of the procedure described in Fig. 129 (a classification sketch follows the list below):
1. User bored: is defined in the present example as a case where a user has ignored or not given a recognizable answer to more than three of the nine questions of the whole trivia game.
2. Game too easy: is defined in the present example as a case where a user has answered correctly two or more of the questions in each of the three game scripts.
3. Game too difficult: is defined in the present example as a case where a user has correctly answered at most a single question in each of the three game scripts.
4. The user has succeeded in some yet not all of the three trivia game scripts. As shown in Fig. 129, there are six cases in the present example that correspond to this result. For example, a user has answered correctly at least two questions in each of the first two game scripts ("what is bigger?" and "what is faster?") but has correctly answered no more than a single question in the third game script ("what is heavier?").
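A sketch of how these four categories might be derived from the parameters defined above is given below; the classify function and its thresholds follow the definitions of the present example but are otherwise hypothetical.

# Hypothetical classification of one user's results, following Fig. 129.
def classify(params):
    def correct_count(bits):              # number of questions answered correctly
        return bin(bits).count("1")
    per_script = [correct_count(params["S1L1"]),
                  correct_count(params["S1L2"]),
                  correct_count(params["S1L3"])]
    if params["S1Boring"] > 3:
        return "user bored"
    if all(count >= 2 for count in per_script):
        return "game too easy"
    if all(count <= 1 for count in per_script):
        return "game too difficult"
    succeeded = [i + 1 for i, count in enumerate(per_script) if count >= 2]
    return "succeeded in script(s) %s only" % succeeded

# Example: at least two correct answers in scripts 1 ("what is bigger?") and
# 2 ("what is faster?"), but only one correct answer in script 3 ("what is heavier?").
print(classify({"S1Boring": 0, "S1L1": 7, "S1L2": 6, "S1L3": 1}))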
It is appreciated that such game scripts, possibly comprising new sets of questions for each game script, may be repeatedly used for a single user, for example, at different times of day, until reliable results are accumulated. Preferably, results of such game scripts are processed on a personal computer that controls the toy of the user in question, and then sent over a network to a computer on a server of a toy system. This allows utilization of the multiplicity of computing power resources available on a networked system. Results sent to a computer on a server may or may not comprise information regarding a user's identity. For example, game results of a multiplicity of users are used in order to derive statistical data related to a sample group as a whole as described below. In such a case, no information related to users' identities is typically required.
Reference is now made to Fig. 130, which describes by means of a flowchart an example of a procedure of handling game results of a plurality of users, for example, members of a sample group, in accordance with a preferred embodiment of the present invention. In this example, the same categories of game results that are described in Fig. 129 are used. The previously described results of individual users are now treated at the sample group level.
As shown in Fig. 130, if a large number of users from among the sample group members (30% or more, in the present example) were bored with the game scripts, the scripts are modified. This is performed either by automatic selection of different scripts from a database or, in case this has already occurred previously, by generation of new scripts, possibly by means of direct human intervention.
Also as shown in Fig. 130, a learning procedure is modified in case the game scripts turn out to be either too easy or too difficult for a large number of users from among a sample group - 30% or more in the present example. In such a case, the procedure of Fig. 128 is repeated for a sample group of users of either a higher or a lower age group, depending on whether the game scripts were too difficult or too easy
as shown in Fig. 130. Thus, this example also illustrates a method of handling the case, described in Fig. 128, where sample group results are initially not sufficiently meaningful to serve as learning data for the entire community of users on a toy system.
The final stage of the procedure described in Fig. 130 comprises using sample group results in order to change the content that is sent to toys on a toy system, possibly affecting the entire community of users on such a system. For example, a possible result of the procedure of Fig. 130 is that a large number of users from among the sample group members (e.g. 50% or more) are found to have succeeded in game script 1 but not in game scripts 2 and 3. In the context of the illustrated example, such a result possibly implies that many users of the age group in question have a well developed understanding of the concept of "bigger", yet have not yet acquired sufficient understanding of the concepts of "faster" and "heavier". In such a case, a toy system preferably generates a toy content package where knowledge of "what is bigger" is assumed, and which is intended to facilitate acquisition of the concepts of "faster" and "heavier". Such a content package is then made available for sending to users of the age group concerned from among the entire community of users on a toy system or any subgroup thereof.
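A minimal sketch of the sample-group handling of Fig. 130 is given below, assuming each user's result has already been classified as described above; the 30% and 50% thresholds follow the present example, while the function and category names are hypothetical.

# Hypothetical aggregation of sample-group results, following Fig. 130.
from collections import Counter

def handle_group_results(classifications, bored_threshold=0.3,
                         level_threshold=0.3, success_threshold=0.5):
    n = float(len(classifications))
    counts = Counter(classifications)
    if counts["user bored"] / n >= bored_threshold:
        return "modify scripts (select different scripts or generate new ones)"
    if counts["game too difficult"] / n >= level_threshold:
        return "repeat the procedure with a sample group of a higher age group"
    if counts["game too easy"] / n >= level_threshold:
        return "repeat the procedure with a sample group of a lower age group"
    if counts["succeeded in script(s) [1] only"] / n >= success_threshold:
        return "generate content assuming 'bigger', teaching 'faster' and 'heavier'"
    return "use sample group results to change toy content"

sample = (["succeeded in script(s) [1] only"] * 6 +
          ["game too easy"] * 2 + ["user bored"] * 2)
print(handle_group_results(sample))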
Fig. 131 is a simplified flowchart illustration of a procedure whereby results of a previously performed learning procedure are used in order to customize toy content to a user of a toy system. A user receives a game script that is intended to allow determination of one or more characteristics of that user. For example, a trivia game script such as that described in Figs. 6 - 9 is used. A process of analyzing game results for a single user is provided, such as, for example, the procedure described in Fig. 129. Game results of that user are then matched with results of a previously performed learning procedure at a sample group level, such as, for example, the procedure described in Fig. 130. Finally, content is customized to the said user depending on the results of the said matching step. For example, a user has succeeded in game script 1 but not in game scripts 2 and 3. Then, a previously generated content package that is especially intended for users with such game results as described above is customized to the user in the present example.
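The matching and customization steps of Fig. 131 may be sketched as follows; the content_packages mapping and the customize_content function are hypothetical placeholders for content packages generated by the earlier learning procedure.

# Hypothetical customization step following Fig. 131: match one user's game
# result against content packages prepared by the earlier learning procedure.
content_packages = {
    "succeeded in script(s) [1] only":
        "package assuming 'bigger'; exercises 'faster' and 'heavier'",
    "game too easy":
        "package of the next, more advanced content level",
    "game too difficult":
        "package of a simpler content level",
}

def customize_content(user_classification, default="standard package"):
    return content_packages.get(user_classification, default)

print(customize_content("succeeded in script(s) [1] only"))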
It is appreciated that the software components of the present invention may, if desired, be implemented in ROM (read-only memory) form. The software components
may, generally, be implemented in hardware, if desired, using conventional techniques.
It is appreciated that various features of the invention which are, for clarity, described in the contexts of separate embodiments may also be provided in combination in a single embodiment. Conversely, various features of the invention which are, for brevity, described in the context of a single embodiment may also be provided separately or in any suitable subcombination.
It will be appreciated by persons skilled in the art that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention is defined only by the claims that follow: