
CN109739469B - Context-aware service providing method and apparatus for user device - Google Patents

Context-aware service providing method and apparatus for user device

Info

Publication number
CN109739469B
Authority
CN
China
Prior art keywords
rule
user
user device
input
condition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910012868.6A
Other languages
Chinese (zh)
Other versions
CN109739469A (en)
Inventor
裴婤允
高旼廷
金成洙
金震晟
金桦庆
全珍河
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020130048755A (KR102070196B1)
Application filed by Samsung Electronics Co Ltd
Publication of CN109739469A
Application granted
Publication of CN109739469B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2250/00 Details of telephonic subscriber devices
    • H04M2250/74 Details of telephonic subscriber devices with voice recognition means

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Environmental & Geological Engineering (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephone Function (AREA)

Abstract

A context-aware service providing method and apparatus for a user device are provided, which recognize a user's context according to a user-defined rule, perform an action corresponding to the context, and interactively feed the execution result back to the user. The method for providing a context-aware service includes: receiving a user input, the user input being at least one of a text input and a speech input; identifying, based on the received user input, a rule including a condition and an action corresponding to the condition; activating the rule to detect a scenario corresponding to the condition of the rule; and executing the action corresponding to the condition when the scenario is detected.

Description

Context-aware service providing method and apparatus for user device
This application is a divisional application of the invention patent application filed on September 22, 2013 under application number 201310432058.9 and entitled "Method and apparatus for providing context awareness service for a user device".
Technical Field
The present invention relates to a method and apparatus for providing a Context Aware Service (CAS). More particularly, the present invention relates to a context-aware service providing method and apparatus that recognize a user's context, perform an action corresponding to the context according to a user-defined rule, and interactively feed the execution result back to the user.
Background
With the advancement of digital technology, various types of user devices capable of communicating and processing data have emerged, such as cellular communication terminals, Personal Digital Assistants (PDAs), electronic notebooks, smartphones, and tablet Personal Computers (PCs). Recently, with the trend of mobile convergence, user devices have gradually evolved into multi-function devices integrating various functions. For example, recent user devices integrate various functions including voice and video telephony functions, messaging functions including Short Message Service/Multimedia Message Service (SMS/MMS) and email, a navigation function, a picture capture function, a broadcast play function, multimedia (e.g., audio and video) play functions, an Internet access function, an instant messaging function, a Social Network Service (SNS) function, and the like.
Meanwhile, there is increasing interest in the Context Awareness Service (CAS), which makes use of various life-logging technologies developed to record an individual's daily life in the form of digital information. The CAS is characterized in that the determination of whether to provide a service and the determination of the content of the service to be provided are made according to a change in the scenario defined by the service object. The term "scenario" refers to the information used in determining the service behavior defined by the service object, including the service provision timing, whether to provide the service, the target of the service, the service provision location, and the like. Such technology can record various types of information describing personal behavior and provide the CAS based on the recorded information.
However, the CAS method according to the related art assumes the cumbersome installation of various field-based sensor devices for collecting information about individuals. A CAS system according to the related art is composed of a user device that collects data by means of sensors and a server that analyzes the data acquired from the user device to establish a scenario and performs a service based on the scenario. For example, since the user device must be equipped with various sensors and must interoperate with the server to provide a context-based service to the user, implementing a CAS system according to the related art faces the obstacles of high system implementation cost and design complexity.
CAS systems according to the related art also have the disadvantage that it is difficult to effectively provide context-based services due to limitations in the information collected via the user device and the lack of an effective learning process. For example, a CAS system according to the related art may provide the user with a context-based service using only rules defined by the device manufacturer, thereby failing to satisfy the needs of all users. A CAS system according to the related art also suffers from low user accessibility because the user needs to run an additional program and/or perform complicated operations to use the scenario-based service. Furthermore, a CAS system according to the related art is limited to a single context-awareness scheme and thus lacks flexibility in setting conditions that take various contexts into account.
Therefore, there is a need for a CAS method and apparatus that is capable of supporting a CAS having one or more rules defined by a user.
The above information is presented merely as background information to aid in the understanding of the present disclosure. No determination is made as to whether any of the above is available as prior art to the present invention and no assertion is made.
Disclosure of Invention
Aspects of the present invention will address at least the above problems and/or disadvantages and will provide at least the advantages described below. Accordingly, an aspect of the present invention provides a Context Aware Service (CAS) method and apparatus capable of supporting a CAS having one or more rules defined by a user.
Another aspect of the present invention provides a CAS method and apparatus capable of feeding back, to the user, scenario information collected based on one or more rules, and of performing an action corresponding to the user's situation in such a manner that the situation of the user, determined according to the rules defined by the user, is perceived at the terminal.
According to another aspect of the present invention, there is provided a CAS method and apparatus capable of allowing a user to define a rule (or situation), a command for executing the rule, and an action to be executed based on the rule by inputting a natural language-based text and/or voice to a user device.
Another aspect of the present invention provides a CAS method and apparatus that can extend CAS support by defining rules, commands, and actions on the user device using natural language based text or speech, recognizing the natural language based text or speech, and executing the rule selected according to the motion of the user device.
Another aspect of the present invention provides a CAS method and apparatus capable of configuring a plurality of conditions of a rule, sensing a plurality of scenarios corresponding to the respective conditions, and performing a plurality of actions corresponding to the respective scenarios.
Another aspect of the present invention provides a CAS method and apparatus capable of configuring one or more conditions according to a user's preference when defining a rule.
Another aspect of the present invention provides a CAS method and apparatus capable of improving convenience of users and usability of the apparatus by implementing an optimized CAS environment.
According to an aspect of the present invention, there is provided a method for providing a context awareness service of a user device. The context awareness service providing method comprises the following steps: receiving a user input, the user input being at least one of a text input and a speech input; identifying a rule including a condition and an action corresponding to the condition based on a received user input, the user input being one of a text input and a speech input; activating a rule to detect a scenario corresponding to a condition of the rule; and executing an action corresponding to the condition when the scene is detected.
According to another aspect of the present invention, there is provided a method for providing a context aware service of a user device. The context awareness service providing method comprises the following steps: providing a user interface for configuring the rules; receiving at least one of a natural language based speech input and a natural language based text input through a user interface; configuring a rule using conditions and actions identified from user input; activating a rule to detect an event corresponding to a condition of the rule; and when the event is detected, performing an action corresponding to the condition.
According to another aspect of the present invention, there is provided a method for providing a context aware service of a user device. The method comprises the following steps: receiving user input for configuring rules using natural language based speech or text; configuring rules according to user input; receiving a command to activate a rule, the command being one of a natural language based voice, a natural language based text, a motion detection event of a user device, a receipt of an incoming sound, and a receipt of an incoming message; executing a rule corresponding to the command; checking at least one condition specified in the rule that occurs internally and externally; when at least one condition specified in the rule is reached, performing at least one action corresponding to at least one of the at least one reached condition.
According to another aspect of the present invention, there is provided a method for providing a context aware service of a user device. The context-aware service providing method includes: defining a rule; receiving a command input to execute the rule; executing a rule in response to the command; checking a condition corresponding to the rule; at least one action is performed when a condition corresponding to the rule is detected.
According to another aspect of the present invention, a method for providing a context awareness service of a user device is provided. The context awareness service providing method comprises the following steps: monitoring to detect whether an event occurs in a state where a rule is executed; extracting a function designated for performing an action when an event is detected; performing the action in accordance with the function; feeding back information related to the performance of the action; determining whether the current situation reaches a rule release condition when an event is not detected; and releasing the rule when the current situation reaches the rule release condition.
According to another aspect of the present invention, a non-transitory computer readable storage medium stores a program that, when executed by a processor, performs the above method.
According to another aspect of the present invention, a user equipment is provided. The user device includes: a storage unit that stores a rule including a condition and an action corresponding to the condition; a display unit displaying a user interface for receiving user input and execution information in a state where the rule is activated and an execution result of the action; a control unit which controls recognition of a rule including a condition and an action based on a user input, the user input being at least one of a text input and a voice input, controls activation of the rule to detect a scenario corresponding to the condition of the rule, and performs the action corresponding to the condition when the scenario is detected.
According to another aspect of the present invention, a user equipment is provided. The user device includes: a rule configuration module, implemented by a computer, for receiving a user input and for identifying a rule including a condition and an action corresponding to the condition based on the user input, the user input being at least one of a natural language based speech input and a natural language based text input; a rule execution module, implemented by a computer, for receiving a command to activate a rule and for executing the rule corresponding to the command, wherein the command is one of a natural language based speech, a natural language based text, a motion detection event of a user device, a receipt of an incoming sound, and a receipt of an incoming message; a condition checking module, implemented by a computer, for detecting a scenario corresponding to a condition specified in a rule; an action execution module, computer-implemented, to execute an action corresponding to the condition when the scenario is detected.
According to another aspect of the present invention, a non-transitory computer readable storage medium is provided. A non-transitory computer readable storage medium includes a program that, when executed, causes at least one processor to perform a method comprising: defining rules for context aware services according to user input; executing a rule corresponding to a command when the command for executing the rule is received; when a condition specified in a rule is reached, an action corresponding to the condition is performed.
According to yet another aspect of the present invention, a non-transitory computer readable storage medium is provided. The computer readable storage medium includes a program that, when executed, causes at least one processor to perform a method comprising: receiving a user input, the user input being at least one of a text input and a speech input; identifying a rule including a condition and an action corresponding to the condition based on the received user input; activating a rule to detect a scenario corresponding to a condition of the rule; and executing an action corresponding to the condition when the scene is detected.
Other aspects, advantages and salient features of the invention will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, discloses exemplary embodiments of the invention.
Drawings
The above and other aspects, features and advantages of certain exemplary embodiments of the present invention will become more apparent from the following description when taken in conjunction with the accompanying drawings, in which:
fig. 1 is a block diagram illustrating a configuration of a user device according to an exemplary embodiment of the present invention;
fig. 2 is a flowchart illustrating a Context Awareness Service (CAS) providing method of a user device according to an exemplary embodiment of the present invention;
fig. 3A to 3K are diagrams illustrating exemplary screens for explaining an operation of generating a rule in a user device according to an exemplary embodiment of the present invention;
fig. 4A to 4J are diagrams illustrating exemplary screens for explaining an operation of generating a rule in a user device according to an exemplary embodiment of the present invention;
fig. 5A to 5E are diagrams illustrating exemplary screens for explaining an operation of executing a predefined rule in a user device according to an exemplary embodiment of the present invention;
fig. 6 is a diagram illustrating an exemplary case of providing a CAS using a user device according to an exemplary embodiment of the present invention;
fig. 7 is a diagram illustrating an exemplary case of providing a CAS using a user device according to an exemplary embodiment of the present invention;
fig. 8 is a diagram illustrating an exemplary case of providing a CAS using a user device according to an exemplary embodiment of the present invention;
fig. 9 is a diagram illustrating an exemplary case of providing a CAS using a user device according to an exemplary embodiment of the present invention;
fig. 10 is a diagram illustrating an exemplary case of providing a CAS using a user device according to an exemplary embodiment of the present invention;
fig. 11 is a diagram illustrating an exemplary case of providing a CAS using a user device according to an exemplary embodiment of the present invention;
fig. 12 is a diagram illustrating an exemplary case of providing a CAS using a user device according to an exemplary embodiment of the present invention;
fig. 13 is a diagram illustrating an exemplary case of providing a CAS using a user device according to an exemplary embodiment of the present invention;
fig. 14A and 14B are diagrams illustrating exemplary screens for explaining an operation of temporarily stopping a currently running rule in a user device according to an exemplary embodiment of the present invention;
fig. 15A and 15B are diagrams illustrating exemplary screens for explaining an operation of temporarily stopping a currently running rule in a user device according to an exemplary embodiment of the present invention;
fig. 16A to 16C are diagrams illustrating exemplary screens for explaining an operation of temporarily stopping a currently running rule in a user device according to an exemplary embodiment of the present invention;
fig. 17 is a diagram illustrating an exemplary screen with an indication of executed rules in a user device according to an exemplary embodiment of the present invention;
fig. 18A and 18B are diagrams illustrating exemplary screens having items notifying a rule executed in a user device according to an exemplary embodiment of the present invention;
fig. 19A and 19B are diagrams illustrating exemplary screens having items notifying a rule executed in a user device according to an exemplary embodiment of the present invention;
fig. 20A to 20C are diagrams illustrating exemplary screens having an item notifying a rule executed in a user device according to an exemplary embodiment of the present invention;
fig. 21A and 21B are diagrams illustrating exemplary screens associated with an operation of notifying an execution rule in a user device according to an exemplary embodiment of the present invention;
fig. 22A to 22C are diagrams illustrating exemplary screens associated with an operation of terminating a currently running rule in a user device according to an exemplary embodiment of the present invention;
fig. 23A and 23B are diagrams illustrating exemplary screens associated with an operation of terminating a currently running rule in a user device according to an exemplary embodiment of the present invention;
fig. 24 and 25 are diagrams illustrating cases where the CAS is terminated in a user device according to an exemplary embodiment of the present invention;
fig. 26A and 26B are diagrams illustrating exemplary screens associated with an operation of deleting a rule in a user device according to an exemplary embodiment of the present invention;
fig. 27 is a flowchart illustrating a procedure of generating a rule in a user device according to an exemplary embodiment of the present invention;
fig. 28 is a flowchart illustrating a process of providing a CAS in a user device according to an exemplary embodiment of the present invention;
fig. 29 is a flowchart illustrating a process of providing a CAS in a user device according to an exemplary embodiment of the present invention;
fig. 30 is a flowchart illustrating a process of providing a CAS in a user device according to an exemplary embodiment of the present invention;
fig. 31A to 31N are diagrams illustrating exemplary screens associated with an operation of generating a rule in a user device according to an exemplary embodiment of the present invention;
fig. 32A to 32E are diagrams illustrating exemplary screens associated with an operation of executing a rule in a user device according to an exemplary embodiment of the present invention;
fig. 33A to 33D are diagrams illustrating exemplary screens associated with an operation of pausing a currently running rule in a user device according to an exemplary embodiment of the present invention;
fig. 34A to 34D are diagrams illustrating exemplary screens associated with an operation of terminating a currently running rule in a user device according to an exemplary embodiment of the present invention.
Throughout the drawings, it should be noted that the same reference numerals are used to describe the same or similar elements, features and structures.
Detailed Description
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of exemplary embodiments of the invention defined by the claims and equivalents thereof. It includes various specific details to assist in understanding, but these are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. In addition, descriptions of well-known functions and constructions may be omitted for clarity and conciseness.
The terms and words used in the following description and claims are not limited to the written meaning, but are used only by the inventor to enable a clear and concise understanding of the invention. Therefore, it should be apparent to those skilled in the art that the following descriptions of the exemplary embodiments of the present invention are provided for illustration only and not for the purpose of limiting the invention as defined by the appended claims and their equivalents.
It will be understood that the singular forms include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to "a component surface" includes reference to one or more of such surfaces.
Exemplary embodiments of the present invention relate to a Context Aware Service (CAS) providing method and apparatus of a user device.
According to an exemplary embodiment of the present invention, a user device is able to perceive different scenarios of a user according to one or more rules defined by the user.
According to an exemplary embodiment of the present invention, a user device is capable of performing one or more actions according to context awareness, and feeds back context information to a user or a predetermined person as a result of performing the actions.
According to exemplary embodiments of the present invention, the CAS providing method and apparatus can feed back scenario information to a user by means of an external device (e.g., a television, a lamp, etc.) and/or transmit the scenario information to another user through a message.
In various exemplary embodiments of the present invention, the rules may be defined by text (e.g., handwriting) or speech entered using natural language. In various exemplary embodiments of the present invention, a natural language corresponds to a language used by humans, as compared to an artificial language (or a machine language) invented for a specific purpose.
In various embodiments of the invention, a rule may be activated in response to the input (or receipt) of a command associated with the rule.
In various embodiments of the invention, the rules may be identified or selected based on received user input. The identified rule may be activated to detect a scenario corresponding to a condition of the rule. When the scenario is detected, an action corresponding to the condition may be performed.
In various exemplary embodiments of the present invention, when a rule is activated, the user device may monitor or detect the scenario in which the user device is operating. Based on the monitored or detected context in which the user device is operating, the user device may determine or identify that it is operating in a context corresponding to the activated rule.
In various exemplary embodiments of the present invention, a rule may be composed of at least one condition and at least one action, and a method or process of generating the rule is described below.
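For illustration only, the condition/action pairing just described could be modeled roughly as follows in Kotlin; the class and field names (Rule, Condition, Action, instruction) are not part of the disclosure and are purely hypothetical:

```kotlin
// Illustrative sketch only; the patent does not prescribe any particular data model.
data class Condition(val description: String)                 // e.g. "device location is 'home'"
data class Action(val description: String, val run: () -> Unit)

data class Rule(
    val name: String,                                          // e.g. "Driving"
    val instruction: String,                                   // natural-language command that triggers the rule
    val actionsByCondition: Map<Condition, List<Action>>       // at least one condition, each with at least one action
)
```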
In various exemplary embodiments of the present invention, the predetermined rule may be executed in response to receipt of an instruction corresponding to the rule.
In various exemplary embodiments of the present invention, the instructions may include voice commands or command statements or natural language based text entered via various input devices (e.g., a touch screen, a keyboard, a microphone, etc.). The instructions may also include changes in the user device (e.g., changes in gestures, orientation, etc.) that are detected by various sensors of the user device (e.g., proximity sensors, illuminance sensors, acceleration sensors, gyroscopic sensors, voice sensors, etc.) according to predetermined rules. The instructions may also include the receipt of an incoming call message or an incoming call sound corresponding to the predetermined rule. The instructions may also include a change in the geographic location of the user (or user device) corresponding to the predetermined rule.
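A hedged sketch of how the instruction types enumerated above might be represented is shown below; the sealed hierarchy and its member names are assumptions made for the sketch, not part of the disclosure:

```kotlin
// Hypothetical taxonomy mirroring the instruction types listed above.
sealed interface RuleInstruction {
    data class VoiceCommand(val utterance: String) : RuleInstruction          // natural-language speech
    data class TextCommand(val text: String) : RuleInstruction                // natural-language text
    data class DeviceChange(val kind: String) : RuleInstruction               // e.g. gesture or orientation change from a sensor
    data class IncomingMessage(val sender: String, val body: String) : RuleInstruction
    data class IncomingSound(val label: String) : RuleInstruction
    data class LocationChange(val latitude: Double, val longitude: Double) : RuleInstruction
}
```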
In various exemplary embodiments of the present invention, instructions for executing a rule (e.g., definitions regarding commands, command statements, sensible behavior of a user device, sensors for sensing the behavior, etc.) may be configured in such a way that natural language based speech or text is input.
In various exemplary embodiments of the present invention, a command or command sentence serving as an instruction to execute a rule may be input in the form of a part (e.g., a word), a partial sentence, or a complete sentence of the natural language contained in the rule definition.
In various exemplary embodiments of the present invention, a sentence may be the minimum expression unit expressing the entire content of an idea or an emotion; although a sentence essentially includes a subject and a predicate, either the subject or the predicate may be omitted.
In various exemplary embodiments of the present invention, a detected behavior of the user device serving as an instruction may be input through the operation of one or more sensors configured according to the defined rule.
In various exemplary embodiments of the invention, the action can include an operation performed by the user device when the situation specified in the currently running rule is perceived.
In various exemplary embodiments of the present invention, the action can include an operation control (e.g., an internal operation control) for feeding back information on a situation specified in the corresponding rule by controlling an internal component (e.g., a display unit, a communication module, a speaker), an operation control (e.g., an external operation control) for feeding back information on a situation specified in the corresponding rule by controlling an external component (e.g., a television, a lamp, an external speaker), and an operation control for controlling both the internal component of the user device and the external component of the user device.
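The internal/external operation controls described above could be sketched as two implementations of a single feedback interface; the names and the println-based bodies below are placeholders, not the patented control path:

```kotlin
// Placeholder sketch of the two control paths described above.
interface OperationControl {
    fun feedBack(contextInfo: String)
}

// Internal operation control: drives a component of the user device itself
// (e.g. the display unit, communication module, or speaker).
class InternalControl(private val component: String) : OperationControl {
    override fun feedBack(contextInfo: String) =
        println("[$component] presenting context info: $contextInfo")
}

// External operation control: sends a command to an external component
// (e.g. a television, a lamp, or an external speaker).
class ExternalControl(private val appliance: String) : OperationControl {
    override fun feedBack(contextInfo: String) =
        println("forwarding '$contextInfo' to external appliance: $appliance")
}
```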
In various exemplary embodiments of the present invention, the CAS represents a service in which a user device senses a situation specified in a rule defined by a user, performs an action corresponding to the situation, and provides information about the situation to the user (or a predetermined person) as a result of the performance of the action. The situation information includes all information available at the time of user interaction, such as user (or user device) location, identifier, activity, status, and application of the user device.
A configuration and operation control method of a user device according to an exemplary embodiment of the present invention is described hereinafter with reference to the accompanying drawings. It is to be noted that the exemplary embodiments of the present invention are not limited to the configuration of the user device and the operation control method according to the following description, but the exemplary embodiments of the present invention may be implemented with various changes and modifications without departing from the scope of the present invention.
Fig. 1 is a block diagram illustrating a configuration of a user device according to an exemplary embodiment of the present invention.
Referring to fig. 1, the user device 100 includes a radio communication unit 110, an input unit 120, a touch screen 130, an audio processing unit 140, a storage unit 150, an interface unit 160, a control unit 170, and a power supply 180. The user device 100 according to an exemplary embodiment of the present invention may be implemented with more components than those shown in fig. 1, or with some of the shown components omitted. For example, if the user device 100 according to an exemplary embodiment of the present invention supports a picture capturing function, a camera module (not shown) may be included. Similarly, if the user device 100 does not have the broadcast receiving and playing function, some functional modules (e.g., the broadcast receiving module 119 of the radio communication unit 110) may be omitted.
The radio communication unit 110 may include at least one module that enables the user device 100 to perform radio communication with another device. For example, the radio communication unit 110 may include a cellular communication module 111, a Wireless Local Area Network (WLAN) module 113, a short-range communication module 115, a position-location module 117, and a broadcast-receiving module 119.
The cellular communication module 111 is capable of communicating radio signals with at least one of a base station of a cellular communication network, an external device, and various servers (e.g., an integration server, a provider server, a content server, an internet server, a cloud server). The radio signals may carry voice telephony data, video telephony data, text/multimedia message data, and the like. Under the control of the control unit 170, the cellular communication module 111 can connect to a provider server or a content server to download various rules regarding the CAS. Under the control of the control unit 170, the cellular communication module 111 can transmit the action execution result (e.g., situation information) to at least one target user device 100. Under the control of the control unit 170, the cellular communication module 111 is also able to receive a message generated when a condition defined in the currently running rule or a further condition associated with the currently running rule is reached (e.g. satisfied).
The WLAN module 113 is responsible for establishing a WLAN link with an Access Point (AP) or another user device 100, and can be embedded in the user device 100 or implemented as an external device. There are various available radio internet access technologies such as Wi-Fi, wireless broadband (WiBro), Worldwide Interoperability for Microwave Access (WiMAX), High Speed Downlink Packet Access (HSDPA), and the like. The WLAN module 113 can receive various types of data (e.g., including rules) about the CAS in a state of being connected to the server. In a state where the WLAN link has been established with another user device, the WLAN module 113 can transmit and receive various data (e.g., including a rule) to and from the other user device according to the user's intention. The WLAN module 113 can also transmit and receive various data (e.g., including rules) on the CAS to and from the cloud server through the WLAN link. The WLAN module 113 can also transmit an action execution result (e.g., situation information) to at least one target user device under the control of the control unit 170. The WLAN module 113 can also receive a message generated when a condition specified in a currently running rule is reached under the control of the control unit 170.
The short-range communication module 115 is responsible for short-range communication of the user device 100. There are various short-range communication technologies available, such as bluetooth, Bluetooth Low Energy (BLE), Radio Frequency Identification (RFID), infrared data association (IrDA), Ultra Wideband (UWB), ZigBee, Near Field Communication (NFC), and the like. When the user device 100 is connected to another user device, the short-range communication module 115 can transmit and receive various data (including rules) about the CAS to and from the other user device according to the user's intention.
The position location module 117 is responsible for locating the position of the user device 100. The position-location module 117 may include a Global Positioning System (GPS) module or the like. The position location module 117 collects accurate distance information and time information from at least three base stations and performs triangulation based on the acquired information to acquire 3-dimensional (3D) position information having latitude, longitude, and altitude. The position-location module 117 is also capable of calculating position information in real time based on signals from three or more satellites. Various methods may be used to obtain location information for user device 100.
The broadcast receiving module 119 receives broadcast signals (e.g., TV broadcast signals, radio broadcast signals, and data broadcast signals) and/or information on broadcasting (e.g., broadcast channel information, broadcast program information, and broadcast service provider information) from an external broadcast management server through broadcast channels (e.g., satellite broadcast channels and terrestrial broadcast channels).
The input unit 120 generates an input signal for controlling the operation of the user device 100 in response to a user input. The input unit 120 may include a keyboard, a dome switch, a touch pad (e.g., using capacitive technology, resistive technology, etc.), a jog wheel, a jog switch, sensors (e.g., a voice sensor, a proximity sensor, an illuminance sensor, an acceleration sensor, and a gyro sensor), and the like. The input unit 120 may be implemented as external buttons and/or virtual buttons on the touch panel. The input unit 120 is capable of generating input signals in response to user inputs (e.g., text input, voice input, user device motion input, etc.) for defining or executing rules (e.g., instructions).
The touch screen 130 is an input/output device that is responsible for both input and output functions, and includes a display panel 131 and a touch panel 133. According to an exemplary embodiment of the present invention, if a touch gesture (e.g., one or more touches, a tap, a drag, a swipe, a flick, etc.) of a user is detected by the touch panel in a state where an execution screen (e.g., a rule (condition and action) configuration screen, an outgoing call dialing screen, a messaging screen, a game screen, a gallery screen, etc.) of the user device 100 is displayed on the display panel 131, the touch screen 130 generates an input signal corresponding to the touch gesture to the control unit 170. The control unit 170 recognizes the touch gesture and performs an operation according to the touch gesture. For example, if a touch gesture based on text writing in natural language is detected on the touch panel 133 with a rule configuration screen displayed on the display panel 131, the control unit 170 generates a rule in response to the touch gesture.
The display panel 131 displays (outputs) information processed by the user device 100. For example, if the user device is operating in a phone mode, the display panel 131 displays a phone User Interface (UI) or a graphic UI (GUI). If the user device 100 operates in a video telephony mode or a picture capturing mode, the display panel 131 displays a UI or GUI, which displays a picture captured by a camera or a picture received through a communication channel. According to an exemplary embodiment of the present invention, the display panel 131 can display a UI or GUI related to CAS operation. For example, the display panel 131 can provide various UIs or GUIs displaying a rule configuration and a rule execution state in response to a user input, an action execution state, and an action execution result (e.g., situation information). The display panel 131 can also support a display mode switching function for switching between a portrait mode and a landscape mode according to a rotation direction (or orientation) of the user device 100. The operation of the display panel 131 is described later with reference to an exemplary screen.
The display panel 131 may be implemented as any one of a Liquid Crystal Display (LCD), a Thin Film Transistor LCD (TFT-LCD), a Light Emitting Diode (LED) display, an Organic LED (OLED) display, an Active Matrix OLED (AMOLED) display, a flexible display, a bended display, a 3-dimensional (3D) display, and the like. The display panel 131 may be implemented as a transparent display panel or a semi-transparent display panel through which light passes.
The touch panel 133 can be placed on the display panel 131 to detect a user touch gesture (e.g., a single touch gesture and a multi-touch gesture) made on the surface of the touch screen 130. If a user touch gesture is detected on the surface of the touch screen 130, the touch panel 133 extracts coordinates at the position of the touch gesture and transmits the coordinates to the control unit 170. The touch panel 133 detects a touch gesture made by the user and generates a signal corresponding to the touch gesture to the control unit 170. The control unit 170 can perform a function according to a signal associated with a position where the touch gesture is detected, which is transmitted by the touch panel 133.
The touch panel 133 may be configured to convert a pressure applied at a specific position of the display panel 131 or a capacitance change at a specific position of the display panel 131 into an electrical input signal. The touch panel 133 can measure the pressure of the touch input and the position and size of the touch. If a touch input is detected, the touch panel 133 generates a corresponding signal to a touch controller (not shown). The touch controller (not shown) can process the signals and transmit corresponding data to the control unit 170. In this way, the control unit 170 can determine the touch area on the display panel 131.
The audio processing unit 140 transmits an audio signal received from the control unit 170 to the speaker (SPK) 141 and transmits an audio signal, such as voice input through the microphone (MIC) 143, to the control unit 170. The audio processing unit 140 can process voice/sound data to output audible sound waves through the speaker 141, and can process audio signals including voice to generate digital signals to the control unit 170.
The speaker 141 can output audio received by the radio communication unit 110 or stored in the storage unit 150 in a phone mode, an audio (video) recording mode, a voice recognition mode, a broadcast reception mode, a photographing mode, and a CAS mode. The speaker 141 can also output sound effects associated with functions performed in the user device 100, such as rule execution, action execution, context information feedback, incoming call reception, outgoing call dialing, photographing, and media content (e.g., audio and video) playing.
The microphone 143 can process an inputted acoustic signal to generate voice data in a phone mode, an audio (video) recording mode, a voice recognition mode, a broadcast reception mode, a photographing mode, a CAS mode, and the like. The processed voice data may be processed into a signal to be transmitted to a base station through the cellular communication module 111 in a phone mode. The microphone 143 may be implemented with various noise removal algorithms to remove noise occurring during the input of the audio signal. The microphone 143 is capable of processing user input (e.g., natural language based text or voice input) for rule definition and execution (instructions) to generate corresponding input signals to the control unit 170.
The storage unit 150 can store programs associated with processing and control operations of the control unit 170 and temporarily save input/output data (e.g., rule information, instruction information, action information, scenario information, contact information, messages, media content (e.g., audio, video, and electronic book), and the like). The storage unit 150 can also store usage frequencies of user device functions (e.g., rule usage frequencies, instruction usage frequencies, application usage frequencies, data (e.g., phone numbers, messages, and media content), importance rates, priorities, etc.). The storage unit 150 can also store data related to vibration patterns and sound effects output in response to user input made through the input unit 120 and/or the touch screen 130. According to an exemplary embodiment of the present invention, the storage unit 150 can store a mapping table including a mapping between instructions of each user-defined rule and actions (e.g., functions and applications) of each rule and rule termination conditions.
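As an illustration of the kind of mapping table mentioned above (the instruction of a user-defined rule mapped to its actions and its termination condition), a minimal in-memory form might look like the following; the entry shown is a made-up example, not one from the patent:

```kotlin
// Hypothetical in-memory mapping table: instruction -> (actions, termination condition).
data class RuleEntry(
    val actions: List<String>,           // e.g. functions or applications to execute
    val terminationCondition: String     // condition that releases the rule
)

val ruleTable: MutableMap<String, RuleEntry> = mutableMapOf(
    "subway" to RuleEntry(
        actions = listOf("switch to vibration mode", "turn on Wi-Fi"),
        terminationCondition = "instruction 'I got off the subway' is received"
    )
)
```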
The storage unit 150 stores the following: an Operating System (OS) of the user device 100; programs associated with input and display control operations of the touch screen 130, context-aware CAS control operations (e.g., rules including conditions and actions) depending on rules (e.g., conditions), action execution according to rules, and context information feedback; data generated semi-persistently or temporarily by a program. According to an exemplary embodiment of the present invention, the storage unit 150 can also store setting information for supporting the CAS. The setting information can include information about whether the voice-based CAS is supported or the text-based CAS is supported. The setting information can further include at least one condition of each rule and a rule specifying an action corresponding to the condition.
The storage unit 150 may be implemented with a storage medium of at least one of a flash memory type, a hard disk type, a micro type, a card type (e.g., Secure Digital (SD) card and eXtreme Digital (XD) card) memory, a Random Access Memory (RAM), a Dynamic RAM (DRAM), a Static RAM (SRAM), a Read Only Memory (ROM), a Programmable ROM (PROM), an Electrically Erasable PROM (EEPROM), a Magnetic RAM (MRAM), a magnetic disk and optical disk type memory, and the like. The user device 100 can also interoperate with web storage on the Internet that performs the storage function of the storage unit 150.
The interface unit 160 provides an interface for external devices that can be connected to the user device 100. The interface unit 160 can transmit data or power from an external device to internal components of the user device 100, and can transmit internal data to the external device. For example, the interface unit 160 may be provided as a wired/wireless headset port, an external charging port, a wired/wireless data port, a memory card slot, an identification module slot, an audio input/output port, a video input/output port, an earphone jack, and the like.
The control unit 170 controls the overall operation of the user device 100. For example, the control unit 170 can control voice telephony, data telephony, and video telephony functions. In various exemplary embodiments of the present invention, the control unit 170 may also be capable of controlling operations associated with the CAS. In various exemplary embodiments of the present invention, the control unit 170 may include a data processing module 171, wherein the data processing module 171 has a rule configuration module 173, a rule execution module 175, a condition check module 177, and an action execution module 179. The operations of the rule configuration module 173, the rule execution module 175, the condition check module 177, and the action execution module 179 are described later with reference to the drawings. The control unit 170 may include a CAS framework (not shown) for supporting the CAS and a multimedia module (not shown) for multimedia playback. In an exemplary embodiment of the present invention, the CAS framework (not shown) and the multimedia module (not shown) may be embedded in the control unit 170 or implemented as separate modules.
According to an exemplary embodiment of the present invention, the control unit 170 can control operations related to the CAS, such as user-defined rule configuration, rule-based context awareness, rule-based action execution, and context information feedback as a result of the action execution. The control unit 170 (e.g., the rule configuration module 173) can define a rule for providing the CAS according to a user input (e.g., a natural language-based voice or text input). The control unit 170 is operable to receive a user input and to identify, based on the received user input, a rule including a condition and an action corresponding to the condition. The control unit 170 can activate a rule to detect a scenario corresponding to a condition of the rule. The control unit 170 (e.g., the rule execution module 175) can execute one or more rules if instructions for executing the rules specified in the configured rules are detected. The control unit 170 (e.g., the condition checking module 177) can check (e.g., determine) and identify a condition (or scenario) as a result of the rule execution. If the condition specified in the respective rule is identified, the control unit 170 (e.g., the action execution module 179) executes the action triggered when the condition is reached. For example, when the scenario is detected, the control unit 170 performs the action corresponding to the condition. The control unit 170 (e.g., the action execution module 179) executes at least one function (or application) to perform an operation corresponding to the function (or application).
If the condition checking module 177 detects an event (e.g., a situation in which a condition specified in a rule is reached) in a state where the rule execution module 175 is executing at least one rule in response to a user request, the control unit 170 extracts, through the action execution module 179, the function defined for executing the action corresponding to the event in the currently running rule. The control unit 170 can control execution of the action corresponding to the function extracted by the action execution module 179. If such an event is not detected, the control unit 170 determines whether the current situation meets a condition for terminating at least one currently running rule. If the current situation meets the condition, the control unit 170 controls the rule execution module 175 to terminate the currently running rule.
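The monitoring behavior just described (run the mapped function when the event occurs, otherwise test the termination condition) can be summarized in a short control-loop sketch; every callback name below is an assumption made for illustration, and a real device would rely on sensor/event callbacks rather than polling:

```kotlin
// Minimal polling sketch of the monitoring behavior described above; names are illustrative.
fun monitorRunningRule(
    eventDetected: () -> Boolean,          // condition checking
    extractFunction: () -> () -> Unit,     // look up the function mapped to the event
    feedBack: (String) -> Unit,            // report the execution result to the user
    terminationReached: () -> Boolean      // condition for terminating the running rule
) {
    while (true) {
        if (eventDetected()) {
            val action = extractFunction() // function designated for performing the action
            action()                       // perform the action
            feedBack("action executed")    // feed back the execution result
        } else if (terminationReached()) {
            feedBack("rule terminated")    // release the currently running rule
            return
        }
        Thread.sleep(1_000)                // poll once per second in this sketch
    }
}
```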
When an action corresponding to the condition checked (e.g., determined) by the condition checking module 177 is performed in a state where the rule execution module 175 executes at least one rule, the control unit 170 can also control an operation of feedback of the context information as a result of the action execution by the action execution module 179. When the rule executed according to the condition checked (e.g., determined) by the condition checking module 177 is terminated in a state where the rule execution module 175 executes at least one rule, the control unit can also control a feedback operation corresponding to the termination of the action by the action execution module 179.
In various exemplary embodiments of the present invention, the feedback operation performed according to the action may include presenting the action execution result (e.g., context information) to the user through the display panel 131 and transmitting the action execution result (e.g., context information) to another user through the radio communication unit 110. The feedback operation performed according to the action may also include transmitting, to the corresponding external device, a control signal for controlling an operation (e.g., turning on/off) of the external device (e.g., a lamp, a television, etc.) in correspondence with the action execution.
In various exemplary embodiments of the present invention, the feedback operation may include providing at least one of an audio effect (e.g., a predetermined sound effect through the speaker 141), a visual effect (e.g., a predetermined screen through the display panel 131), and a tactile effect (e.g., a predetermined vibration pattern through the vibration module (not shown)) to the device user.
Detailed control operations of the control unit 170 will become more apparent in the following description about the operation and control method of the user device 100 with reference to the accompanying drawings.
In various exemplary embodiments of the present invention, the control unit 170 can control operations related to the normal functions of the user device 100 and the above-described operations. For example, the control unit 170 can control execution of an application program and display of an execution screen. The control unit 170 can also control operations of receiving an input signal generated by a touch-based input interface (e.g., the touch screen 130) in response to a touch gesture and performing a function according to the input signal. The control unit 170 can also communicate various data through a wired or wireless channel.
The power supply 180 provides power to the internal components of the user device 100 from an external power source or an internal power source.
As described above, according to an exemplary embodiment of the present invention, the user device 100 includes: a rule configuration module 173 that configures computer-executable rules in response to natural language-based user speech or text input for configuring the rules; a rule execution module 175 that executes the computer-executable rules in response to instructions for executing the rules, in the form of natural language-based voice or text instructions, motion-responsive instructions of the user device, or message instructions from the outside; a condition checking module 177 that checks (e.g., identifies and/or determines) whether at least one condition (e.g., situation) specified in a rule is reached; and an action execution module 179 that performs at least one action based on whether the condition specified in the rule is met.
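Read as interfaces, the four modules enumerated above might expose signatures roughly like these; the method names and the String-based rule identifier are assumptions made for the sketch, not the actual module APIs:

```kotlin
// Assumed interface-level view of the four modules; not the actual module APIs.
interface RuleConfigurationModule { fun configure(naturalLanguageInput: String): String /* rule id */ }
interface RuleExecutionModule     { fun execute(ruleId: String) }                        // run on instruction
interface ConditionCheckModule    { fun isConditionReached(ruleId: String): Boolean }    // situation check
interface ActionExecutionModule   { fun performActions(ruleId: String) }                 // run mapped actions
```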
In various exemplary embodiments of the present invention, the rule configuration module 173 is operable to perceive a natural language based speech or text input made by a user in a rule configuration mode. In various exemplary embodiments of the present invention, the rule execution module 175 can configure a plurality of conditions for each rule and map a plurality of actions with the conditions. In various exemplary embodiments of the present invention, the condition checking module 177 can perform a multiple context awareness function for checking a plurality of contexts corresponding to the conditions configured for each rule. In various exemplary embodiments of the invention, the action execution module 179 is capable of executing a plurality of actions simultaneously or sequentially in response to the perception of a plurality of scenarios of a rule.
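One way to realize "simultaneously or sequentially" for the multiple condition/action pairs described above is sketched below with Kotlin coroutines; the use of coroutines and all parameter names are choices made for this illustration only:

```kotlin
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.joinAll
import kotlinx.coroutines.launch

// Sketch: run the action mapped to every satisfied condition, concurrently or one by one.
suspend fun executeSatisfiedActions(
    conditions: Map<String, () -> Boolean>,      // condition id -> context check
    actions: Map<String, suspend () -> Unit>,    // condition id -> mapped action
    concurrently: Boolean
) = coroutineScope {
    val satisfied = conditions.filterValues { check -> check() }.keys
    if (concurrently) {
        satisfied.map { id -> launch { actions[id]?.invoke() } }.joinAll()   // simultaneous
    } else {
        for (id in satisfied) actions[id]?.invoke()                          // sequential
    }
}
```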
The CAS providing method according to one of the various exemplary embodiments of the present invention may be implemented in software, hardware, or a combination of both, or stored in a non-transitory computer readable storage medium. In case of hardware implementation, the CAS providing method according to an exemplary embodiment of the present invention may be implemented using at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, and other electrical units performing specific tasks.
The exemplary embodiment of the present invention may be implemented by the control unit 170 itself. In the case of being implemented in software, the procedures and functions described in the exemplary embodiments of the present invention may be implemented with software modules (e.g., the rule configuration module 173, the rule execution module 175, the condition check module 177, the action execution module 179, and the like). The software modules are capable of performing at least one of the functions and operations described above.
The storage medium may be any non-transitory computer readable storage medium storing the following program instructions: program instructions for defining a rule regarding the CAS in response to a user input; program instructions for executing at least one rule in response to a rule execution instruction; and program instructions for performing, when a condition (situation) specified in the executed rule is reached, at least one action corresponding to that condition. The storage medium may also be a non-transitory computer readable storage medium storing program instructions for: configuring a rule including a condition and an action corresponding to the condition in response to a natural language based speech or text input by a user; activating the rule in response to an instruction indicating the rule; determining whether a condition specified in the executed rule is met; and executing the action corresponding to the reached condition.
In various exemplary embodiments of the present invention, the user device 100 may be any type of information communication device or multimedia device, or an equivalent thereof (having any one of an Application Processor (AP), a Graphic Processing Unit (GPU), and a Central Processing Unit (CPU)). For example, the user device 100 may be any one of a cellular communication terminal, a tablet Personal Computer (PC), a smart phone, a digital camera, a Portable Multimedia Player (PMP), a media player (e.g., an MP3 player), a portable game console, a Personal Digital Assistant (PDA), and the like, which operate using various communication protocols corresponding to respective communication systems. The CAS providing method according to any one of the various exemplary embodiments of the present invention may also be applied to various display devices such as a digital Television (TV), Digital Signage (DS), a Large Format Display (LFD), a laptop computer, a desktop computer, and the like.
Fig. 2 is a flowchart illustrating a CAS providing method of a user device according to an exemplary embodiment of the present invention.
Referring to fig. 2, in step 201, the control unit 170 (e.g., the rule configuration module 173) defines (e.g., configures and generates) a rule in response to a user input for defining the rule with respect to the CAS by means of one of the input unit 120, the microphone 143, and the touch screen 130.
For example, a user can input a natural language-based voice for configuring a rule through the microphone 143 in the rule configuration mode. The user can also input natural language-based text for configuring the rule through the touch screen 130 in the rule configuration mode. The control unit 170 (e.g., the rule configuration module 173) recognizes and parses user input (e.g., speech recognition and text recognition) to define (e.g., recognize) rules to be executed. The control unit 170 (e.g., the rule execution module 175) may control the user device 100 to enter an active state and wait for execution of the configured rule in response to a user input (e.g., an instruction to execute the rule). Rule configuration and generation operations according to various exemplary embodiments of the present invention are described with reference to the drawings (e.g., fig. 3A to 3K and fig. 31A to 31N).
If an instruction for executing a specific rule is received in a state where at least one rule is defined in response to a user input, the control unit 170 (e.g., the rule execution module 175) controls the corresponding rule to be executed (step 203).
For example, the user can input a natural language-based command or command sentence for executing a predefined rule through one of the input unit 120, the microphone 143, and the touch screen 130. The user can use a function key input, a voice input, a touch input (e.g., writing text or selecting a widget), or a gesture-based input (e.g., changing the posture of the user device 100, such as tilting or accelerating it) to input a specific instruction for at least one rule for activating the CAS. According to an exemplary embodiment of the present invention, an instruction for executing a corresponding rule may be generated when such a user input reaches a condition specified in the rule. According to an exemplary embodiment of the present invention, the instruction for executing the rule may also be generated in the form of receiving a specific message or sound that meets the condition specified in the rule. The control unit 170 (e.g., the rule execution module 175) can recognize an instruction that reaches a condition for executing a rule and execute the corresponding rule in response to the recognized instruction to activate the CAS.
In step 205, the control unit 170 (e.g., the condition checking module 177) detects whether a condition (situation) corresponding to the currently running rule is triggered.
If a condition corresponding to a currently running rule is triggered, the control unit 170 (e.g., the action execution module 179) can control the execution of at least one action corresponding to the condition (step 207).
For example, if at least one rule is executed, the control unit 170 (e.g., the condition checking module 177) can monitor to detect whether a condition specified in the rule for triggering the action is reached. If the condition or situation triggering the action is reached, the control unit 170 (e.g., action execution module 179) can control internal and/or external operations for executing the corresponding action. The action execution can include operations of executing a function (or an application) according to a predefined rule (e.g., a condition and an action), generating an execution result (e.g., context information), and feeding back the execution result to a user or others.
According to an exemplary embodiment of the present invention, the operation of defining a rule at step 201 may have been performed in advance, or may be performed by the user right before the target rule is executed. In the former case, the user can immediately input an instruction for executing the rule at step 203; in the latter case, the user performs steps 201 and 203 in sequence, defining the rule and then inputting an instruction for executing it.
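As a rough illustration of the flow of steps 201 to 207 of fig. 2 (again an assumption rather than the disclosed implementation), the short driver below reuses the hypothetical classes from the previous sketch, assumed here to be importable from a module named cas_sketch.

```python
from typing import Dict
# Hypothetical module containing the classes sketched earlier in this document.
from cas_sketch import (Rule, RuleConfigurationModule, RuleExecutionModule,
                        ConditionCheckingModule, ActionExecutionModule)

store: Dict[str, Rule] = {}
configurator = RuleConfigurationModule(store)
executor = RuleExecutionModule(store)
checker = ConditionCheckingModule()
runner = ActionExecutionModule()

# Step 201: define a rule from already-recognized natural-language input.
configurator.configure(
    name="subway",
    condition="subway",
    actions=[lambda: print("turn Wi-Fi on"), lambda: print("switch to vibrate")],
)

# Step 203: execute the rule in response to an instruction (e.g. the spoken word "subway").
rule = executor.execute("subway")

# Step 205: a detected context is checked against the conditions of the running rule.
detected_context = "subway"
if rule is not None and checker.check(rule, detected_context):
    # Step 207: run every action mapped to the reached condition.
    runner.run(rule, detected_context)
```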
Fig. 3A to 3K are diagrams illustrating exemplary screens for explaining an operation of generating a rule in a user device according to an exemplary embodiment of the present invention.
Referring to fig. 3A through 3K, exemplary operations of the control unit 170 (e.g., the rule configuration module 173) for receiving a natural language based voice input made by a user and for defining and/or recognizing rules (e.g., conditions and actions) according to the voice input are illustrated.
Fig. 3A illustrates an exemplary screen of the user device 100 when the device user executes an application for CAS according to an exemplary embodiment of the present invention.
Referring to fig. 3A, the CAS application provides a User Interface (UI) or Graphical User Interface (GUI) (hereinafter, referred to as a "screen interface") including the following menus: a first menu 310 (e.g., a My menu, or "my rules" in fig. 3A) for displaying a list of rules defined by the user; a second menu 320 (e.g., an action menu, or "running rules" in fig. 3A) for displaying a list of currently running rules among the defined rules; and a third menu 350 (e.g., an add menu, or "add rule" in fig. 3A) for additionally defining new rules.
The screen interface can provide a list corresponding to menu items selected in the first menu 310 and the second menu 320. As shown in fig. 3A, if the first menu 310 is selected, a list of items (e.g., a "home" item 330 and a "taxi" item 340) corresponding to the user-defined rule is displayed.
The items 330 and 340 may each be provided with a drop-down menu item 335 for displaying the detailed information configured for the respective item. For example, if the user selects the drop-down (e.g., pull-down) menu item 335, a full drop-down window appears below the corresponding item.
In the state of fig. 3A, the user can select (tap) a third menu ("add rule") 350 for defining new rules. Then, the control unit 170 (e.g., the rule configuration module 173) of the user device 100 determines whether to start an operation for defining a rule, and switches to the rule configuration mode in conjunction with displaying a corresponding screen interface. Fig. 3B shows an exemplary screen interface displayed in this case.
FIG. 3B illustrates an exemplary screen of the user device 100 when the device user executes a rule configuration mode for defining a rule. In an exemplary embodiment of the present invention, the operation described in fig. 3B is to provide a tutorial on a method of defining a rule to a user, and such a tutorial providing step may be omitted according to the user's intention.
Referring to fig. 3B, a tutorial may be provided in the form of a pop-up window 351. The control unit 170 can control the tutorial to be displayed in the form of the pop-up window 351, wherein the pop-up window 351 presents a guide (e.g., pictures and text) for guiding how rules are defined. For example, the guide may include an image 351c indicating activation of a voice recognition mode and text 351d guiding how to define a rule (e.g., how to make a rule like "play music if 'subway' is spoken" or "if 'subway' is spoken, proceed with the following operation").
The pop-up window 351 providing the tutorial may include a menu item 351a (e.g., a "start" button) for confirming the definition of the rule and a menu item 351b (e.g., an "end" button) for canceling the definition of the rule. The user can continue or cancel the definition of the new rule by selecting one of the menu items 351a and 351B of the pop-up window 351 providing the tutorial as shown in fig. 3B.
Fig. 3C to 3K illustrate an operation of defining a new rule in a rule configuration mode according to an exemplary embodiment of the present invention. Fig. 3C through 3K illustrate exemplary screens displayed when receiving a natural language based voice of a user and configuring conditions and actions of a corresponding rule in response to the natural language based voice of the user.
As shown in fig. 3C, the control unit 170 displays a pop-up window 353 prompting the user for voice input (e.g., "speak a rule") in the rule configuration mode and waits for the user's voice input. In the state of fig. 3C, the user can perform a natural language-based voice input for at least one condition and at least one action corresponding to the condition, based on the type of rule to be defined (e.g., single structure or multiple structure).
For example, in the state of fig. 3D, the user can perform a voice input of "if 'subway' is spoken, perform the following operations". Then, the control unit 170 recognizes the user's voice input and displays a pop-up window 355 presenting the voice recognition result. According to an exemplary embodiment of the present invention, the control unit 170 may display the pop-up window 355, which shows the recognition result for the condition "subway" together with a notification message (e.g., "Command [subway]. What can I do?") prompting the user to make a voice input for the action to be performed when the condition "subway" is reached, and then wait for the user's voice input.
In the state of FIG. 3E, the user may say "turn on Wi-Fi". As shown in fig. 3E, the control unit 170 recognizes the user's voice input and displays a pop-up window 357, wherein the pop-up window 357 indicates the progress of the voice recognition and of the operation of mapping the condition (e.g., "subway") to the action (e.g., turning Wi-Fi on) (e.g., "recognition in progress"). In various exemplary embodiments of the present invention, the screen display related to the recognition operation may be omitted.
Once the recognition and mapping operations are completed, the control unit 170 may provide the recognition and mapping results in the form of a pop-up window 359, as shown in fig. 3F. For example, the control unit 170 can display the pop-up window 359 to notify the user of information about the newly defined rule and the action associated with it. According to an exemplary embodiment of the present invention, the control unit 170 may notify that a new rule has been generated with the condition "subway" and the action of turning Wi-Fi on when the condition is reached, together with menu items (e.g., "confirm" and "cancel") prompting the user to apply or cancel it. In the state of FIG. 3F, the user may select the "confirm" menu item to apply the configured rule or the "cancel" menu item to cancel the configured rule.
In the state of fig. 3F, if the user selects the "confirm" menu item (or makes a corresponding voice input), the control unit 170 may display a pop-up window prompting the user to make the next voice input (e.g., "speak the next command") and wait for the user's voice input, as shown in fig. 3G. In the state of fig. 3G, the user may make a voice input of "change to vibration". As shown in fig. 3H, the control unit 170 may then recognize the voice input and display a pop-up window 363, wherein the pop-up window 363 indicates the progress of the voice recognition and of the operation of mapping the condition (e.g., "subway") to the additional action (e.g., configuring vibration) (e.g., "recognition in progress"). If the recognition and mapping operations are completed, the control unit 170 may provide the result in the form of a pop-up window 365, as shown in FIG. 3I. For example, the control unit 170 can display the pop-up window 365 to notify the rule configured in response to the user's voice input and the action corresponding to the rule. According to an exemplary embodiment of the present invention, the control unit 170 can notify, through the pop-up window 365, that a new rule has been generated with the condition "subway" and the action of switching to the vibration mode when the condition is reached, together with menu items (e.g., "confirm" and "cancel"). In the state of FIG. 3I, the user may select the "confirm" menu item to apply the configured rule or the "cancel" menu item to cancel the configured rule.
In the state of fig. 3I, if the user selects the "confirm" menu item, the control unit 170 may display a pop-up window 367 prompting the user for voice input (e.g., "speak the next command") and wait for the voice input, as shown in fig. 3J. In the state of fig. 3J, the user can make a voice input of "end (or stop)". Then, the control unit 170 recognizes the voice input and provides information on the condition specified in the rule defined through the steps of fig. 3B to 3J and the at least one action corresponding to the condition, as shown in fig. 3K.
For example, the control unit 170 may display the condition "subway" of the rule defined by the above-described operation together with the actions "Wi-Fi on configuration" and "vibration mode switching configuration" mapped to the condition, as shown in fig. 3K. In this manner, newly defined rules may be added to the rule list as shown in FIG. 3A. For example, the newly added rule may be displayed in a list as the item "subway" 360 along with previously defined items (e.g., "home" item 330 and "taxi" item 340), and the "subway" item 360 may be provided with detailed information (e.g., conditions and actions). The screen interface may also display various settings of the device. For example, the screen interface may display Wi-Fi settings 371, sound settings 373, and the like. The settings of the apparatus may be associated with an item (e.g., item 330, item 340, and/or item 360).
As described above with reference to fig. 3A through 3K, according to various exemplary embodiments of the present invention, at least one action may be mapped to one condition. In an exemplary embodiment of the present invention, the CAS method can support both a single rule definition operation and a multiple rule definition operation. This can be summarized as follows.
The single structure rule definition operations may be summarized as shown in table 1.
TABLE 1
[Table 1 is provided as an image in the original document.]
The multi-structure rule definition operations may be summarized as shown in table 2.
TABLE 2
[Table 2 is provided as an image in the original document.]
As shown in table 1 and table 2, a simple if-statement such as "if 'home' is spoken, switch to the ringtone mode" or a complex if-statement such as "if 'home' is spoken, mute the TV sound when an incoming call is received" may be used. According to an exemplary embodiment of the present invention, a plurality of actions corresponding to at least one condition (e.g., terminal functions, interoperation of a plurality of applications and accessories (App + Accessory) for an adaptive scenario, and use of a cloud service) may be configured based on a simple or complex if-statement. Among the plurality of actions, the terminal functions may include Wi-Fi mode configuration, ringtone/vibrate/mute mode switching, text message transmission (with the recipient and content configured by voice), camera flash blinking, etc.; the use of a cloud service may include checking (e.g., determining) the user's location (using GPS) and then sending a text message, etc.
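The grammar actually handled by the user device is not detailed here, so the following toy sketch only suggests how a simple if-statement of the kind shown above could be split into one condition and a list of actions; the sentence shape and the regular expression are assumptions made for illustration.

```python
import re
from typing import List, Tuple

def parse_simple_if_statement(sentence: str) -> Tuple[str, List[str]]:
    """Toy parser for sentences shaped like: if 'X' is spoken, do A and do B."""
    match = re.match(r"if '(?P<condition>[^']+)' is spoken, (?P<actions>.+)", sentence)
    if match is None:
        raise ValueError("unsupported sentence shape")
    condition = match.group("condition")
    actions = [a.strip() for a in match.group("actions").split(" and ")]
    return condition, actions

condition, actions = parse_simple_if_statement(
    "if 'home' is spoken, switch to the ringtone mode and mute the TV sound"
)
print(condition)  # home
print(actions)    # ['switch to the ringtone mode', 'mute the TV sound']
```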
The types of conditions (or instructions) that may be specified in a rule and the types of actions that each condition may configure may be summarized as shown in table 3.
TABLE 3
[Table 3 is provided as an image in the original document.]
According to various exemplary embodiments of the present invention, the user device 100 can perform an interaction (e.g., query and answer) by providing the user with voice or text feedback about the information required for the action the user intends to perform when a specified condition is reached. According to an exemplary embodiment of the present invention, the information to be received from the user for each of the actions supported by the user device 100 may be provided in the form of a database (DB). According to an exemplary embodiment of the present invention, in the case of the text message transmission function, the user device 100 may recognize the necessity of additional information about the recipient and the message content, prompt the user to input the additional information in the form of voice or text, alert the user to an erroneous input, and request re-input. Such an operation is exemplarily described below with reference to fig. 4A to 4J.
Fig. 4A to 4J are diagrams illustrating exemplary screens for explaining an operation of generating a rule in a user device according to an exemplary embodiment of the present invention.
Referring to fig. 4A to 4J, in the state of fig. 3A (or fig. 3B), the control unit 170 can display a pop-up window 451 prompting the user for voice input in response to entering the rule configuration mode, and wait for voice input. In the state of fig. 4A, the user may perform a natural language based voice input for configuring at least one condition and at least one action per condition according to the type of rule to be defined (e.g., a single structure and a multi-structure).
For example, in the state of fig. 4A, the user may perform a voice input of "if 'taxi' is spoken, perform the following operations". Then, the control unit 170 may recognize the user's voice input and provide the recognition result in the form of a pop-up window 453, as shown in fig. 4B. According to an exemplary embodiment of the present invention, the control unit 170 may display the pop-up window 453, which presents the voice recognition result (e.g., "Command [taxi]. What can I do?") and prompts the user to make a voice input for the action to be performed when the condition "taxi" is reached, and then wait for the voice input.
In the state of fig. 4B, the user can make a voice input of "send message". Then, the control unit 170 may recognize the voice input and check (e.g., determine) which information needs to be received from the user in order to perform the action (e.g., "send message"), and provide corresponding feedback, as shown in fig. 4F.
For example, as shown in fig. 4C, the control unit 170 may display a pop-up window 455 (e.g., "Command [taxi]. Please say the recipient.") prompting the user to input the "recipient" as information required for the action "send message", and wait for the user's voice input. In the state of fig. 4C, the user may voice-input a specific recipient of the message to be sent (e.g., "boyfriend"). The control unit 170 may recognize the user's voice input, prompt the user to input the "message content" as further information for the action "send message" in the form of a pop-up window 457 (e.g., "Speak the content to send to boyfriend"), as shown in fig. 4D, and wait for the user's voice input.
In the state of fig. 4D, the user may input the message content to be transmitted to the recipient (e.g., "boyfriend"). According to an exemplary embodiment of the present invention, the user may make a voice input of "send my location" as the message content to be sent to "boyfriend". Then, the control unit 170 may recognize the user's voice input, check the necessity of additional information for performing the action "send my location", display a pop-up window 459 (e.g., "How often to send to [boyfriend]?") prompting the user to input a time interval for performing the action "send my location", and wait to receive the voice input.
In the state of fig. 4E, the user may input a transmission interval for performing an action of "transmit my location". According to an exemplary embodiment of the present invention, the user may make a voice input of "every 5 minutes" as a transmission interval for performing an action of "transmitting my location". Then, the control unit 170 may recognize the voice input and display the recognition result, as shown in fig. 4F.
For example, as shown in fig. 4F, the condition information 410 configured for the rule to be defined by the user and the action information 411, 413, 415, and 417 corresponding to the condition may be provided in the form of a pop-up window 461. Specifically, in the case of the action information 411, 413, 415, and 417, auxiliary actions configured according to an initially configured action may be provided. According to an exemplary embodiment of the present invention, a first action may be set to "send message", indicated by reference numeral 411; a second action may be set to "boyfriend", indicated by reference numeral 413, as the target to which a message is to be sent according to the first action; a third action may be set to "send my location", indicated by reference numeral 415, as the content of the message to be sent to "boyfriend" according to the first action and/or the second action; and a fourth action may be set to "5 minutes", indicated by reference numeral 417, as the transmission interval at which the message including "my location" is to be sent to "boyfriend" according to the first action, the second action, and/or the third action. For example, according to an exemplary embodiment of the present invention, the control unit 170 may request the information required to perform the previous action from the user in a voice or text interaction. When no further action requiring additional information is recognized, the control unit 170 may provide the information gathered for the requested action (e.g., "send message" 411, "boyfriend" 413, "send my location" 415, and "5 minutes" 417 configured for the condition (e.g., "taxi" 410)) in the form of the pop-up window 461, as shown in fig. 4F. In this way, the user can define a rule for the action of sending a message including "my location" to "boyfriend" every 5 minutes when the condition "taxi" is reached.
Meanwhile, in the state of fig. 4F, the user may select a "confirm" menu item to apply the definition of an action (e.g., "boyfriend", "send my location", and "5 minutes") corresponding to the condition of the rule configured through the above procedure (e.g., "taxi"), or select a "cancel" menu item to cancel or reconfigure the rule.
In the state of fig. 4F, if the user selects the "confirm" menu item (or performs a voice input by speaking "confirm"), the control unit 170 may display a pop-up window 463, as shown in fig. 4G, prompting the user to perform a voice input (e.g., "Please speak the next command"), and wait for the voice input. In the state of fig. 4G, the user may perform a voice input of "switch to vibration" for switching to the vibration mode as an additional action corresponding to the condition (e.g., "taxi"). Then, the control unit 170 may recognize the user's voice input and display a voice recognition mode indicator, the previously input condition (e.g., "taxi"), and an additional action prompt in the form of a pop-up window 465, as shown in fig. 4H. According to an exemplary embodiment of the present invention, as shown in the pop-up window 465, the control unit 170 may notify that a new rule has been generated with the condition "taxi" and the action of switching to the vibration mode when the condition is reached, together with menu items (e.g., "confirm" and "cancel").
In the state of fig. 4H, if the user selects the "confirm" menu item, the control unit 170 may display a pop-up window 465 prompting the user for the next voice input (e.g., "please say the next command") and wait for the user's voice input, as shown in fig. 4I. In the state of fig. 4I, the user may make an "end (or stop)" voice input to end the additional rule configuration. Then, the control unit 170 may recognize the voice input of the user and provide information on the condition specified in the rule configured through the steps of fig. 4A to 4I and at least one action corresponding to the condition, as shown in fig. 4J.
For example, as shown in fig. 4J, the control unit 170 may provide a screen notifying that the rule has the condition "taxi" with the actions "send message" and "sound setting", along with the detailed information "send a message including my location to boyfriend every 5 minutes" and "set the sound to vibrate".
Meanwhile, according to various exemplary embodiments of the present invention, as shown in fig. 4A to 4F, the control unit 170 may request additional required information from the user in the form of voice or text, according to the action input by the user. According to various exemplary embodiments of the present invention, as shown in fig. 4G to 4I, the control unit 170 may recognize an action (e.g., sound setting) that does not require additional information and skip requesting additional information from the user, jumping to the next step.
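A minimal sketch of this query-and-answer interaction is given below, assuming a simple table of the additional fields each action requires; the action names, field names, and prompts are illustrative only and do not reflect the actual database of the user device 100.

```python
from typing import Callable, Dict, List

# Assumed, simplified catalogue of the extra information each action needs
# (the document describes such information as being kept in a database).
REQUIRED_INFO: Dict[str, List[str]] = {
    "send message": ["recipient", "message content", "transmission interval"],
    "sound setting": [],                     # needs no additional information
}

def collect_action_details(action: str,
                           prompt: Callable[[str], str] = input) -> Dict[str, str]:
    """Ask the user, field by field, for whatever the chosen action still needs."""
    details: Dict[str, str] = {}
    for field in REQUIRED_INFO.get(action, []):
        answer = prompt(f"Please say the {field}: ").strip()
        while not answer:                    # re-prompt on empty (erroneous) input
            answer = prompt(f"Input not recognized. Please say the {field} again: ").strip()
        details[field] = answer
    return details

# Example with canned answers standing in for real voice input:
canned = iter(["boyfriend", "send my location", "every 5 minutes"])
print(collect_action_details("send message", prompt=lambda _: next(canned)))
print(collect_action_details("sound setting"))   # {} - nothing further to ask
```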
The operation of defining rules according to various exemplary embodiments of the present invention has been described above. Exemplary operations for executing the rules defined above are described below. According to various exemplary embodiments of the present invention, a predefined rule may be executed immediately in response to the user's voice or text input, as described above. In addition, a widget may be generated in the user device according to the user's definition, and the corresponding rules may be executed through the widget, as will be described below. For example, according to various exemplary embodiments of the present invention, the instructions for the rules may be executed through the widget.
In various exemplary embodiments of the present invention, if a certain operation, such as receiving an incoming call, interrupts the process of generating a rule, the rule being generated is saved (or temporarily stored) so that the operation causing the interruption can be processed.
Fig. 5A to 5E are diagrams illustrating an operation for explaining the execution of a predefined rule in a user device according to an exemplary embodiment of the present invention.
Fig. 5A to 5E illustrate exemplary operations of the following processes: receiving a natural language based voice input made by a user and executing a rule in response to the voice input at a rule execution module 175 of the control unit 170; the conditions specified in the rules are checked at the condition checking module 177 of the control unit 170; at the action execution module 179 of the control unit 170, when the condition is reached (e.g., when a scenario is detected), at least one action corresponding to the condition is executed.
Referring to fig. 5A to 5E, fig. 5A illustrates exemplary screens of the user device 100 when a widget is provided with respect to a CAS according to an exemplary embodiment of the present invention.
As shown in fig. 5A, the CAS widget 500 may be displayed on a main screen (or a menu screen) of the user device 100. The widget 500 may be provided with an instruction input area (or rule execution button) 510, in which the user makes an input (e.g., a tap or touch) for executing a rule, and an execution information area 520 showing information on the currently running rules among the user-defined rules. The widget 500 may also be provided with a refresh function item 530 for updating the information about the currently running rules. According to an exemplary embodiment of the present invention, the instruction input area 510 may be provided with an image or text chosen for intuitiveness to the user. Fig. 5A shows an exemplary case in which no rule is currently running, as indicated in the execution information area 520.
In the state of fig. 5A, the user may select (e.g., touch gesture or tap) the instruction input region 510 for executing the rule. As shown in fig. 5B, the control unit 170 (e.g., the rule execution module 175) may then execute the rule and display a pop-up window 551 (e.g., "speak a command") prompting the user to input (or speak) information about the rule to be executed, and wait for a voice input by the user.
In the state of fig. 5B, the user may make a voice input (e.g., "subway") on the rule to be executed. As shown in fig. 5C, the control unit 170 may then recognize the voice input of the user and provide a pop-up window 553 informing that the rule corresponding to "subway" is being loaded (or recognized). In various exemplary embodiments of the present invention, the recognition progress screen display may be omitted.
If the recognition and loading operations are completed, the control unit 170 may provide the recognition and loading results in the form of a pop-up window 555, as shown in fig. 5D. For example, the control unit 170 may provide, through the pop-up window 555, information on the rule to be executed according to the user's voice input, the condition specified in the rule, and the actions corresponding to the condition. According to an exemplary embodiment of the present invention, the control unit 170 may provide the following notification: the rule to be executed is "subway", configured with the condition "subway" (e.g., "execute [subway]") and the actions "Wi-Fi on" and "switch to vibrate" (e.g., "Wi-Fi on", "configured to vibrate"). In various exemplary embodiments of the present invention, the rule information screen display may be skipped and the process may jump to the operation corresponding to fig. 5E. The screen of fig. 5D may be displayed for a predetermined duration, after which the corresponding operation is performed.
When a predetermined duration elapses in the state of fig. 5D, the control unit 170 may provide information (e.g., "subway") about the rule executed by the user (or the currently running rule) in the execution information area 520 of the widget in the form of an image or text. For example, as shown in fig. 5E, the execution information area 520 of the widget 500 may present an icon or text indicating "subway" instead of the message indicating that no rule is currently running. In addition, the control unit 170 may provide a notification item 550 for indicating the currently running rule in the indicator region presenting the various operation states of the user device 100, in response to the running of the rule, as shown in fig. 5E. The notification item is described later.
Although fig. 5E shows an exemplary case in which only one rule is running, a plurality of rules may be run, in which case the execution information area 520 may provide information on the plurality of currently running rules. Although the description is directed to an exemplary case where the user is prompted to make a voice input upon selection of the instruction input area 510, a list of the rules defined by the user may instead be provided upon selection of the instruction input area 510, allowing the user to select at least one rule.
The control unit 170 (e.g., the condition checking module 177) is further operable to determine whether a condition specified in a currently running rule among the various rules defined through the above-described procedure is reached. The control unit 170 (e.g., the action execution module 179) is operable to execute at least one action mapped to a condition of the rule reached (e.g., at least one action mapped to a scenario corresponding to the condition of the rule).
As described above, according to various exemplary embodiments of the present invention, a rule may be executed through the widget 500. According to an exemplary embodiment of the present invention, the user may select the instruction input area (or rule execution button) 510 and speak a configured rule (or command). Then, the control unit 170 may provide pop-up text or voice feedback to the user with information about the rule start time and the actions to be performed. The control unit 170 may also add the corresponding rule to the execution information area 520 and display a notification item 550 in the indicator region indicating that there is a currently running rule. The notification item 550 is described later.
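Purely for illustration, the following sketch models the widget behavior described above (the instruction input area, the execution information area 520, and the notification item 550); the class and method names are hypothetical, and the voice recognition step is assumed to have already produced a rule name.

```python
from typing import List, Set

class CasWidget:
    """Toy model of the widget 500: a rule-execution button, an execution
    information area, and a notification indicator (names are illustrative)."""

    def __init__(self, defined_rules: Set[str]) -> None:
        self.defined_rules = defined_rules
        self.running_rules: List[str] = []

    def on_execute_button(self, recognized_instruction: str) -> str:
        # Triggered after the user taps the instruction input area and speaks.
        if recognized_instruction not in self.defined_rules:
            return f'No rule defined for "{recognized_instruction}".'
        if recognized_instruction not in self.running_rules:
            self.running_rules.append(recognized_instruction)
        return f'Rule "{recognized_instruction}" is now running.'

    def execution_info_area(self) -> str:
        # Mirrors the execution information area 520.
        if not self.running_rules:
            return "No rule is currently running"
        return ", ".join(self.running_rules)

    def indicator_visible(self) -> bool:
        # Mirrors the notification item 550 in the indicator region.
        return bool(self.running_rules)

widget = CasWidget({"subway", "home", "taxi"})
print(widget.execution_info_area())        # No rule is currently running
print(widget.on_execute_button("subway"))  # Rule "subway" is now running.
print(widget.execution_info_area())        # subway
print(widget.indicator_visible())          # True
```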
Hereinafter, detailed operations of the CAS control method according to various exemplary embodiments of the present invention are described with reference to the accompanying drawings (e.g., fig. 6 to 13).
Fig. 6 is a diagram illustrating an exemplary case of providing a CAS using a user device according to an exemplary embodiment of the present invention.
Fig. 6 shows the following exemplary scenario: the user configures and executes a rule through voice interaction, and the user device 100 checks the attainment of the conditions specified in the rule and executes the actions triggered when the conditions are reached. Specifically, in fig. 6, the user device 100 performs action 1 (e.g., sound setting) corresponding to condition 1 (e.g., home), and performs action 2 (e.g., blinking the lamp) and action 3 (e.g., muting the TV sound) under a sub-condition (condition 2) (e.g., when a phone call is received).
Referring to fig. 6, a user may define a rule using the user device 100. For example, the user may activate the function of generating the rule by manipulating the user device 100 and define the rule for the user device 100 to change the ring tone at a specific location and to blink the lamp and mute the TV sound when an incoming call is detected at the specific location. According to an exemplary embodiment of the present invention, a user may define a rule "switch an indication mode to a ringtone mode at home and blink a lamp and mute a TV sound when an incoming call is received at home" in a stepwise interaction with the user device 100. The operation of defining the rule may be performed by the rule generation process described with reference to fig. 3A to 3K and fig. 4A to 4J. The operation of defining the rules may be performed through natural language based interactions.
The user may command the execution of rules defined through natural language-based voice or text interaction. The situation (e.g., condition) to be detected may be "the user says 'home', or a phone call is received at home", and the action to be taken when the condition is reached may be "set the lamp blinking and mute the TV sound". Although not separately defined in the case of fig. 6, the user device 100 may control the execution of additional actions depending on the executed actions. For example, after blinking the lamp and muting the TV sound, another action, "restore the lamp to its previous state and unmute the TV sound when the call session is ended", may be performed.
In the state where the rule has been defined, the user can execute the defined rule when necessary. For example, when entering home from outdoors, the user speaks "home" as a voice input through the process described in fig. 5A to 5E. Then, the user device 100 may check (e.g., determine) whether a rule corresponding to "home" exists among the currently running rules. If a rule corresponding to "home" is running, the user device 100 can check (e.g., determine) at least one condition specified in the rule "home" and the action corresponding to the condition.
The user device 100 recognizes "home" as a first condition defined in the rule "home" and recognizes "ringtone mode switching" as a first action corresponding to the first condition. Accordingly, as indicated by reference numeral 610, the user device 100 switches the indication mode of the user device 100 to the ringtone mode in response to the user's voice input "home".
The user device 100 may also recognize a second condition "when a call is received" specified in the rule "home" and check for interruption of the condition (e.g., reception of an incoming call). Thereafter, if an incoming call is received, as indicated by reference numeral 620, the user device 100 recognizes a second condition "when a call is received" specified in the rule "home" as a reaction to the interruption. The user device 100 is operable to control a second action "flash the lights" (as indicated by reference numeral 630) and a third action "mute the TV sound" (as indicated by reference numeral 640).
If the user accepts the incoming call (e.g., a call session is set up) in the state in which a bell sound indicating the reception of the voice call (e.g., the telephone ringtone) is played, the user device 100 may restore the lamp to its previous state and unmute the TV sound when the call session is released.
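A minimal sketch of the fig. 6 scenario is given below, assuming placeholder device operations (printed messages stand in for the actual ringtone, lamp, and TV controls).

```python
class HomeRuleScenario:
    """Illustrative state machine for the 'home' rule of fig. 6; the device
    operations here are placeholders, not real platform calls."""

    def __init__(self) -> None:
        self.rule_running = False
        self.lamp_blinking = False
        self.tv_muted = False

    def on_instruction(self, spoken: str) -> None:
        # Condition 1: the user says "home" -> action 1: switch to the ringtone mode.
        if spoken == "home":
            self.rule_running = True
            print("indication mode -> ringtone")

    def on_incoming_call(self) -> None:
        # Condition 2: a call is received at home -> actions 2 and 3.
        if self.rule_running:
            self.lamp_blinking = True
            self.tv_muted = True
            print("lamp blinking, TV sound muted")

    def on_call_ended(self) -> None:
        # Follow-up: restore the lamp and unmute the TV when the call session ends.
        if self.lamp_blinking or self.tv_muted:
            self.lamp_blinking = False
            self.tv_muted = False
            print("lamp restored, TV sound unmuted")

scenario = HomeRuleScenario()
scenario.on_instruction("home")
scenario.on_incoming_call()
scenario.on_call_ended()
```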
Fig. 7 is a diagram illustrating an exemplary case of providing a CAS using a user device according to an exemplary embodiment of the present invention.
Fig. 7 shows the following exemplary scenario: the user configures and executes the rules through voice interaction, and the user device 100 recognizes the conditions specified in the rules and performs the actions triggered when the conditions are reached. Specifically, in fig. 7, the user may define a rule for automatically transmitting the user location (or the location of the user device 100) to at least one target user device at predetermined time intervals. User device 100 may execute the rule in response to user input and perform an action of sending location information to at least one target user device at time intervals specified in the execution rule.
Referring to fig. 7, a user may define a rule using the user device 100. For example, the user may activate a rule generation function (or application) by manipulating the user device 100 and define a rule that transmits location information to at least one target user device 200 at predetermined time intervals. According to an exemplary embodiment of the present invention, the user may generate a rule such as "if I take a taxi, send my location information to my father and brother (or sister) every 5 minutes". At this time, the rule may be generated by voice interaction through the microphone 143 or text interaction through the input unit 120 or the touch screen 130, as described later. Preferably, the voice and text interaction is based on natural language, as described above. The situation (e.g., condition) to be detected, as specified in the defined rule, may be "when the user is moving (e.g., riding a taxi)", and the action to be taken when it is reached may be "send location information to father and brother (or sister) every 5 minutes".
The user device 100 may provide an interface for designating the at least one target user device 200 to which location information is to be transmitted and for mapping information (e.g., phone numbers, names, and nicknames) about the at least one target user device 200 to the rule. The rule can be predefined and redefined by the user, if necessary, anytime and anywhere.
The user may enter instructions for executing the rule in the form of speech, text, or gestures through an interface provided when defining the rule. The user device 100 may then map the instructions to the defined rules and store the mapping information. In the case of using voice input, the user device 100 may store a waveform of voice, convert voice into text and store the text, or store both the waveform of voice and the converted text.
In a state where a rule has been defined, the user may recognize, activate, and/or execute the rule using the predefined instruction (e.g., voice, text, or gesture), if necessary. For example, as in the exemplary cases of fig. 5A to 5E, the user may input the corresponding instruction as a voice input (e.g., "taxi", "taxi mode", or "in a taxi") just before or while riding the taxi. Although the description is directed to the case where the instruction is a voice instruction input through the microphone 143, the instruction may be input in the form of text or a gesture through the input unit 120 or the touch screen 130. The voice and text instructions may be natural language-based instructions, as described above.
When the user intends to make a voice input for executing a rule (or for condition or situation recognition), the user may take a preparatory action in advance for notifying the user device 100 that a voice input for executing a rule is about to be made. In the exemplary case where a rule (or condition or situation recognition) is executed by speech, the microphone 143 may need to be activated. This is because, if the microphone 143 is always in an on state, an unintended sound input may cause an unnecessary operation or an error. Accordingly, it is preferable to define a specific action (e.g., a widget, gesture, or function key manipulation) for activating the microphone 143 in the voice input mode, so that the user takes that action to turn on the microphone 143 before the voice input. According to an exemplary embodiment of the present invention, the user may speak the instruction after a predetermined gesture, such as pressing a predetermined function key or selecting the rule execution button of the widget.
User device 100 may recognize and parse the voice input to execute the rules indicated by the voice input. For example, the user device 100 may search for a voice waveform corresponding to a voice input among predetermined rules (e.g., voice waveforms mapped to respective rules). The user device 100 may also convert the input voice into text to acquire the text in a predetermined rule (e.g., text mapped to each rule). The user device 100 may search for both the speech waveform and the text in predetermined rules (e.g., waveforms and texts mapped to respective rules).
The user device 100 can perform condition (situation) recognition according to the rule execution. For example, the user device 100 may detect the reaching of condition 1, such as "riding a taxi", based on the predefined rule, and check condition 2, such as the lapse of a predetermined time (e.g., 5 minutes), when condition 1 is reached. In this case, the user device 100 is operable to perform action 1 of checking the location of the user device 100 at every interval specified in condition 2. The user device 100 may also be operable to perform action 2 of transmitting the location information to the at least one target user device 200 according to action 1.
Meanwhile, upon receiving the location information from the user device 100, the at least one target user device 200 may perform feedback of the location information of the user device 100 through a predetermined interface. For example, as shown in fig. 7, the target user device 200 may display the location information about the user device 100 on a map image. At this time, the map image and the location information may be transmitted as processed by the user device 100, or the location information transmitted by the user device 100 may be presented on a map image provided by the target user device 200.
As described above, according to the exemplary embodiment of fig. 7, the user can notify at least one other specified user of the user's location at predetermined time intervals. The at least one other user can thus obtain the user's location and movement path without any additional action.
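For illustration, the sketch below shows one way the periodic location-sending action of fig. 7 could be scheduled, assuming stand-in functions for location acquisition and message transmission; a real device would use its GPS and messaging facilities instead.

```python
import threading
import time
from typing import Callable

def start_periodic_location_sharing(
    get_location: Callable[[], str],
    send_to_targets: Callable[[str], None],
    interval_seconds: float = 300.0,           # "every 5 minutes"
) -> threading.Event:
    """Send the current location repeatedly until the returned event is set."""
    stop = threading.Event()

    def worker() -> None:
        # Event.wait() returns False on timeout, so the loop runs once per interval.
        while not stop.wait(interval_seconds):
            send_to_targets(get_location())

    threading.Thread(target=worker, daemon=True).start()
    return stop

# Example with stand-in functions and a shortened interval for demonstration.
stop_event = start_periodic_location_sharing(
    get_location=lambda: "37.5665,126.9780",
    send_to_targets=lambda loc: print(f"sending location {loc} to father and brother"),
    interval_seconds=1.0,
)
time.sleep(3.5)    # let the worker fire a few times
stop_event.set()   # stop sharing, e.g. when the 'taxi' rule is ended
```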
Fig. 8 is a diagram illustrating an exemplary case of providing a CAS using a user device according to an exemplary embodiment of the present invention.
Fig. 8 shows the following exemplary scenario: the user configures and executes a rule using a composite application, and the user device 100 identifies the conditions specified in the rule and executes the actions that are triggered when the conditions are reached. Specifically, in fig. 8, the user may define a rule for transmitting the user location (or the location of the user device 100) to at least one target user device in response to an external event. The user device 100 executes the rule according to the user input, and performs an action of transmitting the location information to the target user device 200 when an event is received from (or occurs at) the target user device 200 specified in the executed rule. Fig. 8 shows the following exemplary case: the user device 100 detects an input (event) from an external device (e.g., the target user device 200) and performs a predetermined action.
In various exemplary embodiments of the present invention, the composite application may be an application program in which the screen is modularized to provide the end user with different information received from various sources in the most preferred way (e.g., as required by the user), and in which the screen modes and screen configurations are designed to be switched according to the user's permissions and roles in order to optimize the user experience.
Referring to fig. 8, a user may define a rule using the user device 100. For example, the user may activate a rule generation function (or application) and define a rule that sends location information to at least one target user device 200 whenever an event (e.g., receipt of a message) occurs. According to an exemplary embodiment of the present invention, the user may generate a rule such as "if a phone call from my wife is received while driving, send my current location information". At this time, the rule may be generated by voice interaction through the microphone 143 or text interaction through the input unit 120 or the touch screen 130, as will be described later. Preferably, the voice interaction and the text interaction are based on natural language, as described above. The situation (e.g., condition) to be detected, as specified in the rule shown in fig. 8, may be "if a call from my wife is received while driving", and the action to be taken when the condition is reached may be "send my current location information". In addition, another condition, such as "if a message including a phrase asking for a location (such as "where") is received from my wife while driving", may be further configured.
The user device 100 may provide an interface for designating at least one target user device 200 generating an event and mapping information (e.g., a phone number, a name, and a nickname) about the at least one target user device 200 to a rule. If necessary, the user can predefine and input or redefine the rules anytime and anywhere, and can input instructions for executing the rules through a given interface in the form of voice, text, or gestures. The user device 100 may then map the instructions to the defined rules and store the mapping information.
In a state where the rule is defined, the user may execute the rule using predefined instructions (e.g., voice, text, and gestures) as necessary. For example, as in the exemplary cases of fig. 5A-5E, the user may input the respective instructions as voice inputs (e.g., "drive", "drive mode", and "i will drive") just before or just while boarding. Although the description is directed to the case where the instruction is a voice instruction input through the microphone 143, the instruction may be input in the form of text through the input unit 120 or the touch screen 130, or may be input in the form of a gesture. Preferably, the voice instructions and text instructions are natural language based instructions, as described above. When the user intends to make a voice input, the user may take a preparatory action (e.g., turn on the microphone 143) in advance for notifying the user that the device 100 uses a voice input for executing a rule.
User device 100 may recognize and parse the voice input to execute the rules indicated by the voice input. The user device 100 may also detect the reaching of a condition (situation). For example, the user device 100 may detect the attainment of condition 1 (such as "i will drive") specified in the defined rule, and check condition 3 such as receiving a text message (a text message including a specific condition (such as "where") from condition 2 (such as the specified target device 200) according to condition 1. For example, if a text message that reaches condition 3 is received (e.g., from the target user device 200) according to the reaching of condition 2, the user device 100 may perform act 1 for acquiring location information about the user device 100. The user device 100 may also perform act 2 of transmitting the location information acquired according to act 1 to the target user device 200.
If the location information transmitted by the user device 100 is received, the target user device 200 may perform feedback on the location information of the user device 100 through a predetermined interface. For example, as shown in fig. 8, target user device 200 may display the user device location on a map image.
As described above, according to the exemplary embodiment of fig. 8, if an event is received from a designated target user, the user can inform the target user of the user location through text information without additional action.
Fig. 9 is a diagram illustrating an exemplary case of providing a CAS using a user device according to an exemplary embodiment of the present invention.
FIG. 9 illustrates an exemplary scenario where one of the similar rules is selected by a user or programmatically recommended, or where similar rules are executed concurrently. Specifically, in the exemplary scenario of fig. 9, a user may define rules for the user device 100 to perform feedback of an alarm or control a particular function. User device 100 may execute a rule in response to user input and control a particular function of user device 100 or perform an alert action when a change in the situation specified in the rule is detected.
Referring to fig. 9, a user may define a rule using the user device 100. For example, a user may generate the following rules by manipulating the user device 100 to activate a function (or application) capable of generating a rule: a rule (hereinafter, a first rule) that outputs an alarm when an event caused by a change in the environment is detected, and a rule (a second rule) that controls a function of the user device 100 when an event caused by a change in the environment is detected. According to an exemplary embodiment, it is possible to generate a first rule such as "output an alarm when the driving speed is equal to or greater than 80 Km/h" and a second rule such as "increase the volume of the car or the user device 100 when the driving speed is equal to or greater than 60 Km/h". At this time, as described above, the rules may be defined by voice input through the microphone 143 or text input through the input unit 120 or the touch screen 130. Both voice input and text input may be implemented using natural language. The situation (e.g., condition) to be detected may be "when a change in the environment (e.g., driving speed) is equal to or greater than a predetermined threshold", and the action to be taken when the condition is reached may be "output an alarm or control the volume".
As described above, the user may predefine or generate and redefine the first rule and the second rule in real time whenever necessary, anytime, anywhere; the instructions for executing the rules may be entered in the form of speech, text, or gestures through a given interface. The user device 100 may then map the defined rules and instructions and store the mapping as described above. Fig. 9 shows an exemplary case where a plurality of rules such as a first rule and a second rule are mapped to the same instruction.
In a state where the rules (the first rule and the second rule) are defined, the user can execute the rules using defined instructions (e.g., voice, text, and gesture) as necessary. For example, as in the exemplary case of fig. 5A-5E, a user may make instructions (e.g., voice inputs such as "drive," "drive mode," and "i will drive") while boarding or getting on the vehicle. Although the description is directed to the case where the instruction is a voice instruction input through the microphone 143, the instruction may be input in the form of text through the input unit 120 or the touch screen 130, or in the form of a gesture. As described above, when the user intends to make a voice input, the user may take a preparatory action (e.g., turn on the microphone 143) in advance for notifying the user that the device 100 uses a voice input for executing a rule.
The user device 100 may recognize and parse the voice input to execute the rules indicated by the voice input. The user device 100 may also detect the attainment of a condition (situation). For example, the user device 100 detects the reaching of condition 1 (such as "I will drive") specified in the defined rules, and then checks whether a speed condition is reached (for example, whether the driving speed is equal to or greater than 60 Km/h or equal to or greater than 80 Km/h). If such a condition is reached, the user device 100 may take the corresponding action. For example, if the driving speed is equal to or greater than 60 Km/h, the user device 100 may take the action of increasing its volume or the car volume according to the second rule. In addition, if the driving speed is equal to or greater than 80 Km/h, the user device 100 may take the action of outputting an alarm sound according to the first rule.
Meanwhile, in a case where there are a plurality of conditions (e.g., a first rule and a second rule) matching the instruction, the user device 100 may recommend one of the conditions for the selection of the user. As shown in the exemplary scenario of FIG. 9, user device 100 may display a pop-up window presenting a first condition (≧ 80Km/h) and a second condition (≧ 60Km/h) to prompt the user for selection.
The previous exemplary embodiment is directed to an exemplary case where the rule is executed before the start of driving. Accordingly, when a predetermined rule having a plurality of conditions is run, the user device 100 may monitor the driving speed to determine whether the first and second conditions are reached, and sequentially perform actions corresponding to the two reached conditions or perform actions corresponding to the most recently reached conditions.
Meanwhile, the user may execute the predefined rule while driving (e.g., in a state of driving at 110Km/h as shown in fig. 9).
In this case, the user device 100 may recognize and parse the voice input to execute the rule indicated by the voice input. User device 100 may also detect the attainment of a condition (situation). For example, the user device 100 detects the reaching of condition 1 (such as "i will drive") specified in the defined rule, and then checks (e.g., determines) whether the second condition is reached (e.g., whether the driving speed is equal to or greater than 60Km/h or equal to or greater than 80 Km/h). Since the current driving speed of 110Km/h reaches both the first condition and the second condition, the user device 100 may simultaneously perform actions corresponding to the first condition and the second condition. According to an exemplary embodiment of the present invention, when the first condition and the second condition are reached (e.g., the current driving speed 110Km/h is equal to or greater than 60Km/h and 80Km/h), the user device 100 may increase its volume or the volume of the car and simultaneously output an alarm sound.
In the event that the instruction matches a rule that specifies multiple conditions, the user device 100 may recommend one of the conditions for selection by the user. As shown in the exemplary scenario of FIG. 9, when the current situation (110 Km/h) reaches both the first condition and the second condition, the user device 100 may display a pop-up window presenting the first condition (≧ 80 Km/h) and the second condition (≧ 60 Km/h) to prompt the user for a selection. Considering that the user is driving, it is preferable that the condition selection be made by voice input. Depending on the user settings, the condition selection may also be performed in the form of text input or gesture input as well as voice input.
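A minimal sketch of the speed-threshold handling of fig. 9 is given below; the thresholds, action bodies, and the optional user-selection callback are assumptions for illustration.

```python
from typing import Callable, List, Optional, Tuple

Action = Callable[[], None]

# Assumed thresholds and actions for the driving rules of fig. 9.
SPEED_CONDITIONS: List[Tuple[str, float, Action]] = [
    (">= 60 km/h", 60.0, lambda: print("increase the volume")),
    (">= 80 km/h", 80.0, lambda: print("output an alarm sound")),
]

def reached_conditions(speed_kmh: float) -> List[Tuple[str, float, Action]]:
    """Return every speed condition reached at the current driving speed."""
    return [c for c in SPEED_CONDITIONS if speed_kmh >= c[1]]

def handle_speed(speed_kmh: float,
                 ask_user: Optional[Callable[[List[str]], str]] = None) -> None:
    reached = reached_conditions(speed_kmh)
    if len(reached) > 1 and ask_user is not None:
        chosen = ask_user([name for name, _, _ in reached])   # recommend; let the user pick one
        reached = [c for c in reached if c[0] == chosen]
    for _, _, action in reached:                              # otherwise run the actions in order
        action()

handle_speed(70)                                         # increase the volume
handle_speed(110)                                        # both actions, sequentially
handle_speed(110, ask_user=lambda names: names[-1])      # user selects ">= 80 km/h"
```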
Fig. 10 is a diagram illustrating an exemplary case of providing a CAS using a user device according to an exemplary embodiment of the present invention.
Fig. 10 illustrates an exemplary case of providing the CAS using the user device 100 and an external device (or an object capable of communicating with the user device 100 or an object to which a device capable of communicating with the user device 100 is attached). In the exemplary case of fig. 10, the rule may be defined to check (e.g., determine) a change in the external environment at a predetermined time interval set by a user and to feed back an alarm according to the check result. The user device 100 may execute predefined rules in response to user instructions and take an alert action when a change in the environment specified in the rules is detected.
Referring to fig. 10, a user may define a rule using the user device 100. For example, a user may activate a function (or application) capable of generating a rule by manipulating the user device 100 and define the following rule: an event occurring according to a change in the external environment is checked at predetermined time intervals and an alarm triggered by the event is output. According to an exemplary embodiment of the present invention, the user may generate a rule such as "except between 10:00 p.m. and 7:00 a.m., if no medicine is taken for 4 hours, display a reminder message and turn on the bathroom light". The rule may be generated by voice input through the microphone 143 or text input through the input unit 120 or the touch screen 130. Both voice input and text input may be accomplished using natural language. In the case of fig. 10, the situations (e.g., conditions) to be detected may be "if no medicine is taken for 4 hours" and "except between 10:00 p.m. and 7:00 a.m.", and the action to be taken when the condition is reached may be "display a reminder message and turn on the bathroom light".
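As a rough, non-limiting illustration, such a medication-reminder rule could be represented by a data structure like the following sketch; the field names, the quiet-period helper, and the instruction list are assumptions introduced here for clarity only.

```python
# Illustrative sketch only: a medication-reminder rule expressed as a condition
# with a repeat interval and a quiet period, plus the actions to run when the
# condition is reached.
from datetime import time

medication_rule = {
    "name": "medicine",
    "condition": {
        "no_medicine_taken_for_hours": 4,
        "quiet_period": (time(22, 0), time(7, 0)),  # skip between 10:00 p.m. and 7:00 a.m.
    },
    "actions": ["display_reminder_message", "turn_on_bathroom_light"],
    "instructions": ["medicine", "check vial", "medication time"],
}

def in_quiet_period(now, quiet):
    start, end = quiet
    # The quiet period spans midnight, so it is the union of two ranges.
    return now >= start or now < end

print(in_quiet_period(time(23, 30), medication_rule["condition"]["quiet_period"]))  # True
print(in_quiet_period(time(12, 0), medication_rule["condition"]["quiet_period"]))   # False
```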
As described above, the user may predefine or generate and redefine rules in real-time whenever and wherever necessary; the instructions for executing the rules may be entered in the form of speech, text, or gestures through a given interface. The user device 100 may then map the defined rules and instructions and store the mapping as described above.
In the state where the rule is defined, the user may execute the rule using the defined instructions (e.g., voice, text, and gestures) as necessary. For example, as in the exemplary cases of fig. 5A-5E, the user may make an instruction (e.g., a voice input such as "medicine", "check vial", or "medication time") to execute the predefined rule. Although the description is directed to the case where the instruction is a voice instruction input through the microphone 143, the instruction may be input in the form of text through the input unit 120 or the touch screen 130, or in the form of a gesture. As described above, when the user intends to make a voice input, the user may take a preparatory action (e.g., turning on the microphone 143) in advance for notifying the user device 100 that a voice input for executing a rule is to be made.
User device 100 may recognize and parse the voice input to execute the rules indicated by the voice input. The user device 100 may also monitor to detect the attainment of a condition (situation) specified in the executed rule. For example, the user device 100 may monitor to detect a scenario corresponding to an activated rule. As another example, the user device 100 may detect the attainment of condition 1 (such as "medication time") specified in the defined rule, and then check condition 2 (e.g., 4 hours), followed by condition 3 (such as whether the external device (e.g., vial) is moved (shaken)). In various exemplary embodiments of the present invention, the user device 100 may check (e.g., determine) the movement of the external device through radio communication. To accomplish this, the external device (e.g., a vial) may have a communication module (e.g., a Bluetooth Low Energy (BLE) widget, RF widget, NFC widget, etc.) capable of communicating with the user device 100.
If no movement of the external device (e.g., the vial) is detected for a predetermined duration (e.g., 4 hours), the user device 100 may perform action 1 of outputting a reminder message such as "Take your medicine!". The user device 100 may also perform action 2 of controlling a target apparatus (e.g., a lamp, a refrigerator, an electric kettle, etc.) in addition to the output of the reminder message as action 1.
In an exemplary embodiment of the present invention, the target apparatus may be any object used in daily life that requires some action on a specific occasion, such as a lamp, a refrigerator, or an electric kettle, as well as an intelligent device. The target apparatus may be controlled directly if it is capable of communicating with the user device 100, as in fig. 10, and otherwise indirectly via an auxiliary module (e.g., a communication-capable power control device). According to an exemplary embodiment of the present invention, the user device 100 may communicate with a power control device having a communication function to supply power to a bathroom lamp as the target apparatus.
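The direct-versus-indirect control idea can be sketched as follows; all class and method names are hypothetical stand-ins, since the embodiment does not define a concrete API.

```python
# Hedged sketch of direct vs. indirect control of a target apparatus.

class TargetDevice:
    def __init__(self, name, can_communicate):
        self.name = name
        self.can_communicate = can_communicate

    def send_command(self, command):
        print(f"{self.name}: executing '{command}' via its own radio")

class PowerControlDevice:
    """Auxiliary module that switches power for devices without a radio."""
    def switch(self, target, on):
        state = "on" if on else "off"
        print(f"Power controller: turning {target.name} {state}")

def control_target(target, command, power_controller):
    if target.can_communicate:
        target.send_command(command)              # direct control, as in FIG. 10
    else:
        power_controller.switch(target, on=True)  # indirect control via auxiliary module

control_target(TargetDevice("bathroom light", False), "turn_on", PowerControlDevice())
control_target(TargetDevice("smart lamp", True), "turn_on", PowerControlDevice())
```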
Fig. 11 is a diagram illustrating an exemplary case of providing a CAS using a user device according to an exemplary embodiment of the present invention.
Fig. 11 shows, according to an exemplary embodiment of the present invention, an exemplary scenario in which the owner of the user device enters an instruction for executing a rule, as well as the case of a child user who cannot skillfully execute the rule.
FIG. 11 shows the following exemplary scenario for providing CAS: the rule is configured and executed using the composite application, and the user device 100 detects the reaching of the condition specified in the rule and executes the action corresponding to the condition. In particular, in the exemplary case of fig. 11, the user may define a rule that sends a location (e.g., the location of the user device) to a particular target user device 200 upon detection of an external event. The user device 100 may execute the rule according to the user input, and when an event is received from the target user device 200 specified in the executed rule, the user device 100 performs an action of transmitting the photograph information and the location information. In fig. 11, the user device 100 detects an input (event) from an external device (e.g., the target user device 200) and performs a specified action.
Referring to fig. 11, a user may define a rule using the user device 100. For example, the user (or the user's parent) may activate a rule generation function (or application) by manipulating the user device 100 and define a rule that transmits photo information and location information when an event (e.g., a message) is received from at least one target user device 200. According to an exemplary embodiment of the present invention, the user (or the user's parent) may generate a rule such as "if text is received from mom in protected mode, take a photograph and send the photograph and my location". The rule is generated by voice interaction through the microphone 143 or text interaction through the input unit 120 or the touch screen 130, which will be described later. Preferably, the voice and text interaction is based on natural language, as described above. In the rule defined as shown in fig. 11, the situation to be detected (e.g., condition) specified in the defined rule may be "if text is received from mom in protected mode", and the action to be taken when the condition is reached may be "take a photo and send the photo and my location". In addition, an additional condition such as "if a message including a phrase asking for a location (such as "where") is received from the mother" may be further configured.
The user device 100 may provide an interface for designating at least one target user device 200 generating an event and for mapping information (e.g., a phone number, a name, or a nickname) about the at least one target user device 200 to the rule. If necessary, the user can predefine or redefine the rule anytime and anywhere, and can input an instruction for executing the rule through a given interface in the form of voice, text, or gesture. The user device 100 may then map the instruction to the defined rule and store the mapping information.
In a state where the rule is defined, the user may execute the rule using the predefined instructions (e.g., voice, text, and gestures) as necessary. For example, as in the exemplary cases of fig. 5A-5E, the user may enter the respective instruction as a voice input on the way home from school (college) (e.g., "protected mode", "after school", or "I will go home from school"). Although the description is directed to the case where the instruction is a voice instruction input through the microphone 143, the instruction may be input in the form of text through the input unit 120 or the touch screen 130, or may be input in the form of a gesture. Preferably, the voice instructions and text instructions are natural-language-based instructions, as described above. When the user intends to make a voice input, the user may take a preparatory action (e.g., turning on the microphone 143) in advance for notifying the user device 100 that a voice input for executing a rule is to be made.
User device 100 may recognize and parse the voice input to execute the rules indicated by the voice input. The user device 100 may also detect the attainment of a condition (situation).
For example, the user device 100 may detect the attainment of condition 1 (such as "protected mode") specified in the defined rule, and then check condition 2 (such as whether the sender is the specified target user device 200) and condition 3 (such as whether a text message including a specific phrase (such as "where") is received). The user device 100 may also perform act 2 of acquiring photo information by automatic shooting according to act 1. The user device 100 may also perform act 3 of activating the position-location module 117. The user device 100 may also perform act 4 of obtaining location information about the user device 100. The user device 100 may also perform act 5 of transmitting the photo information and the location information acquired through acts 1 to 4 to the target user device 200.
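The chain of acts described above might be sketched as follows; the functions, the coordinates, and the "mom" target are illustrative assumptions only, and no real camera, GPS, or messaging API is invoked.

```python
# Rough sketch of the protected-mode action chain (stand-in functions only).

def take_photo():
    return "photo.jpg"                      # acts 1-2: acquire photo information

def activate_positioning():
    print("position-location module on")    # act 3

def get_location():
    return (37.5665, 126.9780)              # act 4: current coordinates (example values)

def send_to_target(target, photo, location):
    print(f"sending {photo} and {location} to {target}")  # act 5

def on_text_from_target(rule_active, sender, text, target="mom"):
    """Condition check: protected mode active and a 'where' text from the target."""
    if rule_active and sender == target and "where" in text:
        photo = take_photo()
        activate_positioning()
        send_to_target(sender, photo, get_location())

on_text_from_target(True, "mom", "where are you?")
```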
If the location information transmitted by the user device 100 is received, the target user device 200 may output the environmental photo information and the location information about the user device 100 through a predetermined interface. For example, the target user device 200 may display the location of the user device on a map image, as shown in fig. 11.
Although not shown in fig. 11, the photo information around the location of the user device 100 may be acquired from an external server. For example, if the user device 100 captures a photograph while in the user's pocket, the photograph may be a dark photograph rather than the intended scene photograph (e.g., an environmental photograph). Thus, the user device 100 must analyze the captured photograph and check (e.g., determine) whether the user device 100 is located in a pocket based on the environmental conditions of the user device (e.g., using an illumination sensor) to determine whether the photograph was taken in a normal state. The user device 100 may be configured with a condition and an action that operate in the following manner: if the photograph is taken in an abnormal state, an environmental photograph around the current location of the user device is acquired from the external server.
Fig. 11 shows an exemplary case where the owner of the user device 100 inputs an instruction for executing a rule. However, a child user may not be able to execute the rules skillfully. In view of this situation, it is preferable to allow a parent to remotely execute the rule so as to continuously monitor the child's location. For example, it is possible to configure a rule such as "execute the protection mode and notify my location when a text message such as "where" is received from mom". If a message including "where" is received from the target user device 200, the user device 100 checks (e.g., determines) whether the user device 100 is operating in the protection mode, and if so, performs the above-described operation. Otherwise, if the user device 100 is not in the protection mode, the user device 100 executes the protection mode and then performs the above-described operation.
As described above, the CAS according to the exemplary embodiment of fig. 11 can effectively and promptly notify one user (a parent) of an event triggered by the location and surroundings of another user (a child). The user (parent) may also obtain information about the location, movement path, and surroundings of the other user (child).
Fig. 12 is a diagram illustrating an exemplary case of providing a CAS using a user device according to an exemplary embodiment of the present invention.
Fig. 12 shows the following exemplary scenario: the user configures and executes the rules through voice input, such that the user device 100 monitors to detect the attainment of the condition specified in the rules, and performs a corresponding action when the condition is reached. As shown in fig. 12, a user may define rules that control particular functions of the user device 100 in a particular environment. The user device 100 executes the rules according to user input and performs actions that handle specific functions of the user device as defined in the execution rules.
Referring to fig. 12, a user may define a rule using the user device 100. For example, the user may activate a function (or application) capable of generating a rule by manipulating the user device 100 and define a rule for executing a plurality of functions (applications) in a specific environment. According to an exemplary embodiment of the present invention, the user may generate a rule such as "if I get on a subway train, turn on Wi-Fi and execute the music APP". The rule may be generated by voice input through the microphone 143 or text input through the input unit 120 or the touch screen 130. Both voice input and text input may be implemented using natural language. In the case of fig. 12, the situation (e.g., condition) to be detected, explicitly specified by the user, may be "if I get on a subway train", and the action to be taken when the condition is reached may be "turn on Wi-Fi and execute the music APP".
The user device 100 may provide an interface for selecting a plurality of actions (e.g., functions and applications) while implementing the corresponding function, so that the user may select a plurality of actions for the rule being defined. For example, if the user inputs "subway" as the condition of the rule, the user device 100 may display a list (action list) of actions (e.g., functions and applications) that can be performed in association with the condition "subway" to receive a user input for selecting the actions to be performed (e.g., turning on Wi-Fi and executing the music application) from the action list. As described above, the user may predefine or generate and redefine rules in real time whenever and wherever necessary.
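One simple way to picture the condition-to-action-list mapping is the sketch below; the catalog contents and function names are assumptions made purely for illustration.

```python
# Sketch of offering an action list for a condition keyword so the user can
# pick the actions to attach to the rule being defined.

ACTION_CATALOG = {
    "subway": ["turn on Wi-Fi", "execute music app", "set sound to vibrate"],
    "driving": ["increase volume", "send current location", "reject calls with text"],
}

def candidate_actions(condition_keyword):
    return ACTION_CATALOG.get(condition_keyword, [])

def build_rule(condition_keyword, selected_indices):
    actions = [candidate_actions(condition_keyword)[i] for i in selected_indices]
    return {"condition": condition_keyword, "actions": actions}

# The user enters "subway" as the condition and selects the first two actions.
print(candidate_actions("subway"))
print(build_rule("subway", [0, 1]))
```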
In the state where the rule is defined, the user may activate and/or execute the rule using the defined instructions (e.g., voice, text, and gestures) as necessary. For example, the user may make an instruction (e.g., a voice input such as "on subway train", "subway", or "subway mode") to execute the predefined rule before or while getting on the subway train, as in the exemplary cases of fig. 5A-5E. Although the description is directed to the case where the instruction is a voice instruction input through the microphone 143, the instruction may be input in the form of text through the input unit 120 or the touch screen 130, or in the form of a gesture. Preferably, the voice instructions and text instructions are natural-language-based instructions, as described above. As described above, when the user intends to make a voice input, the user may take a preparatory action (e.g., turning on the microphone 143) in advance for notifying the user device 100 that a voice input for executing a rule is to be made.
The user device 100 may recognize and parse the voice input to execute the rule indicated by the voice input. The user device 100 may also detect the attainment of a condition (situation). For example, the user device 100 may detect the attainment of condition 1 (such as getting on a subway train) specified in the defined rule, and perform act 1 of turning on Wi-Fi and act 2 of executing the music application. The user device 100 may process the signals exchanged for establishing a Wi-Fi connection and playing a music file as acts 1 and 2.
If the rule is executed and the condition specified in the rule is reached (e.g., getting on a subway train), the user device 100 turns on the Wi-Fi function and executes the music application, and feeds back the execution result. For example, as shown in FIG. 12, user device 100 may display an indication of Wi-Fi on state and output a sound as a result of music playing.
According to the exemplary embodiment of FIG. 12, by making a single voice or behavior input, the user can have a number of actions associated with a condition performed when the condition specified in the currently running rule is reached.
Fig. 13 is a diagram illustrating an exemplary case of providing a CAS using a user device according to an exemplary embodiment of the present invention.
Fig. 13 shows the following exemplary scenario: the user configures and executes a rule through natural-language-based voice input or text input, such that the user device 100 monitors to detect the attainment of the condition specified in the rule and performs the corresponding action when the condition is reached. For example, when a rule is activated, the user device 100 monitors to detect a scenario corresponding to the condition of the activated rule. If the scenario is detected, the user device performs the job corresponding to the condition of the rule. As shown in fig. 13, the user may define an abstract condition, so that the user device 100 executes the rule according to the user input, controls a specific function specified in the rule, and communicates with an external device (or an object capable of communicating with the user device 100 or an object to which a device capable of communicating with the user device 100 is attached).
Referring to fig. 13, a user may define a rule using the user device 100. For example, the user may activate a function (or application) capable of generating a rule by manipulating the user device 100 and define a rule for executing a plurality of functions (a function of the user device and a function of an external device) designated to be executed in a specific environment.
According to an exemplary embodiment of the present invention, the user may generate a rule such as "if dim, adjust the lamp brightness to level 2 and play classical music". At this time, the rule may be generated by voice input through the microphone 143 or text input through the input unit 120 or the touch screen 130. Both voice input and text input may be implemented using natural language. In the case of fig. 13, the situation to be detected (e.g., condition) may be "if dim", and the action to be taken when the condition is reached may be "adjust the lamp brightness to level 2 and play classical music".
At this time, the user device 100 may provide an interface for the user to select a plurality of actions (e.g., function control of the user device and function control of the external device) to be performed in the process of determining the rule according to the user request. For example, if the user inputs "dim" as the condition specified in the rule, user device 100 may display a list of executable actions associated with the condition "dim" to prompt the user to select an action from the list of actions (e.g., light brightness control, music application execution, and classical music play). The user may predefine or generate and redefine rules in real time whenever and wherever necessary.
Where the rule is defined, the user may execute the rule using the defined instructions (e.g., voice, text, and gestures) as necessary. For example, the user may make an instruction (e.g., a voice input such as "dim", "tired", or "dim mode") to execute the predefined rule, as in the exemplary cases of fig. 5A-5E. Although the description is directed to the case where the instruction is a voice instruction input through the microphone 143, the instruction may be input in the form of text through the input unit 120 or the touch screen 130, or in the form of a gesture. Preferably, the voice instructions and text instructions are natural-language-based instructions, as described above. As described above, when the user intends to make a voice input, the user may take a preparatory action in advance (e.g., turning on the microphone 143 using a function key or an execution button of a widget) for notifying the user device 100 that a voice input for executing a rule is to be made.
The user device 100 may recognize and parse the voice input to execute the rule indicated by the voice input. The user device 100 may also detect the attainment of a condition (situation). For example, the user device 100 may detect the attainment of the condition (such as "dim") specified in the defined rule, and perform act 1 of adjusting the lamp brightness to level 2 by controlling the external device and act 2 of executing the music application, followed by act 3 of playing classical music.
In the exemplary case of fig. 13, if an external device (e.g., a living room light or another external device that the user device 100 is authorized to communicate with and/or control) is able to communicate with the user device 100, the external device may be controlled directly by the user device 100; otherwise, it may be controlled indirectly using an auxiliary device (e.g., a communication-capable power control device). According to an exemplary embodiment of the present invention, the user device 100 may communicate with a communication-capable power control device to adjust the brightness of the living room light.
If there is an instruction to execute the rule, the user device 100 may adjust the external light brightness and simultaneously execute the music application, and feed back the execution result. For example, as shown in fig. 13, the user device 100 may adjust the lamp brightness to level 2 and output a sound as a result of classical music playing and a corresponding execution screen.
As described above, according to the exemplary embodiment of FIG. 13, a user may define an abstract condition and configure a rule to perform actions according to the situation. According to an exemplary embodiment of the present invention, the user may define his or her own condition (such as "dim (or tired)") based on natural speech and receive feedback after the action is performed when the condition is reached.
In the above, fig. 6 to 13 are for the case where the rules and instructions are defined and configured separately. However, according to various exemplary embodiments of the present invention, the instruction for executing the rule may be extracted from the predefined rule without additional definition.
Assume that a rule such as "if i am taxi, send my location to parent affinity siblings (sisters) every 5 minutes" is defined, as in the exemplary case of fig. 7. If user device 100 identifies "if i am taxi" in the rule, user device 100 may extract associated instructions (or associated words or commands having a high association with words in the rule defined for execution of the rule) such as "take taxi now", "taxi", and "taxi mode". For example, in a situation where a rule is defined without specifying a specific command, the user may execute the rule by merely inputting associated commands such as "take taxi now", "taxi", and "taxi mode".
Assume that a rule such as "if text from wife is received while driving, my current location information is transmitted" is defined, as in the exemplary case in fig. 8. The user device 100 identifies "driving" in the rule and extracts the associated commands "i will drive", "drive", and "driving mode" to execute the rule. For example, in a situation where a rule has been defined without specifying a specific command, the user may execute the rule with an input associated with any one of the commands "i will drive", "drive", and "driving mode".
If any associated command is entered, the user device 100 may search the predefined rules for any one that matches the command to execute the corresponding rule.
The operations described with reference to fig. 7 and 8 may be applied to the exemplary cases of fig. 6 and 9 to 13. For example, associated commands such as "I will drive", "drive", and "driving mode" may be input in association with the rule defined in the case of fig. 9; associated commands such as "medicine", "check the medicine bottle", and "medication time" may be input in association with the rule defined in the case of fig. 10; associated commands such as "protection mode", "after school", and "I will go home" may be input in association with the rule defined in the scenario of FIG. 11; associated commands such as "on subway train", "subway", and "subway mode" may be input in association with the rule defined in the case of fig. 12; associated commands such as "dim" and "tired" may be input in association with the rule defined in the scenario of FIG. 13. Accordingly, the user device 100 may execute the respective rule based on an associated command corresponding to the predefined rule without the need to define an additional command.
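To make the keyword-association idea concrete, the sketch below matches a spoken command against keywords attached to each rule; the rule names, keyword sets, and matching strategy are assumptions for illustration, not the actual recognition logic.

```python
# Hedged sketch of matching a spoken command to a predefined rule without an
# explicitly registered command: each rule keeps keywords derived from its own
# condition text, and the input only has to hit one of them.

RULES = [
    {"name": "taxi rule", "keywords": {"taxi", "take taxi now", "taxi mode"}},
    {"name": "driving rule", "keywords": {"drive", "driving", "driving mode"}},
]

def find_rule_for(command):
    normalized = command.lower()
    for rule in RULES:
        if any(keyword in normalized for keyword in rule["keywords"]):
            return rule
    return None  # no match: the device could show a guide instead

print(find_rule_for("I will drive")["name"])   # driving rule
print(find_rule_for("taxi mode")["name"])      # taxi rule
```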
Fig. 14A and 14B are diagrams illustrating exemplary screens for explaining an operation of temporarily stopping a currently running rule in a user device according to an exemplary embodiment of the present invention.
Fig. 14A illustrates an exemplary screen of the user device 100 in which the CAS widget 500 is provided with an execution information area 520 listing the currently running rule.
Fig. 14A shows an exemplary situation in which the "taxi" and "subway" rules are running. The user may stop a particular rule by selecting the rule from the execution information area 520 of the widget 500. For example, in the state of fig. 14A, the user may select (e.g., make a tap touch gesture on) the "taxi" rule. Then, the user device 100 temporarily stops the selected rule (e.g., "taxi") among the currently running rules (e.g., "taxi" and "subway"), as shown in fig. 14B.
In this case, the user device 100 may change the flag of the temporarily stopped rule. According to an exemplary embodiment of the present invention, the user device 100 may change the status indication flag of the "taxi" rule from the enabled status flag to the disabled status flag, as shown in fig. 14B. For example, each of the rules listed in the execution information area 520 of the widget 500 is provided with a status indication button, where the status indication button indicates whether the corresponding rule is enabled or disabled.
According to an exemplary embodiment of the present invention, the widget 500 may indicate the currently running rules, and each rule may be temporarily stopped according to a user input. Therefore, the operation of a rule employing a repetitive action (for example, an action of periodically checking a condition or sending text information) can be stopped as necessary according to the user's intention, thereby improving usability.
Fig. 15A and 15B are diagrams illustrating exemplary screens for explaining an operation of temporarily stopping a currently running rule in a user device according to an exemplary embodiment of the present invention.
Fig. 15A and 15B illustrate screens of the user device 100 when a CAS application (e.g., a rule list) is executed according to an exemplary embodiment of the present invention.
Referring to fig. 15A and 15B, a rule "subway" is currently running, and detailed information specified in association with the selected rule is presented in a drop-down window. For example, the user may select the rule item 1510 to check the detailed information (e.g., conditions and actions) 1520 and 1530 of the respective rules, e.g., "Wi-Fi settings: turn on "and" sound settings: vibration ".
In the state of fig. 15A, the user may select one of a plurality of action items 1520 and 1530 specified in one rule to temporarily stop the operation. For example, the user may select "Wi-Fi settings" item 1520 among the actions (e.g., Wi-Fi settings item 1520 and sound settings item 1530) of rule "subway" 1510. Then, the user device 100 stops an action (e.g., Wi-Fi setting) corresponding to an item selected by the user among actions of the currently running rule (e.g., "subway") 1510.
The user device 100 may change the status indicator of the stopped action. For example, user device 100 may change the status indication flag of the action "Wi-Fi setup" from the enabled status flag to the disabled status flag, as shown in fig. 15B. For example, each of regular actions 1520 and 1530 is provided with a status indication button, wherein the status indication button indicates whether the respective action is enabled or disabled.
According to an exemplary embodiment of the present invention, it is possible to selectively stop each action listed in the current running action list when a plurality of actions run in association with one rule.
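The enable/disable flags kept per rule and per action could be modeled as in the sketch below; the dictionary layout and function names are hypothetical and only illustrate the toggling behavior described for the widget.

```python
# Minimal sketch of the status indication flags: a whole rule or a single
# action within it can be paused (disabled) and resumed (enabled).

running_rules = {
    "subway": {"enabled": True,
               "actions": {"Wi-Fi setting": True, "sound setting": True}},
    "taxi":   {"enabled": True, "actions": {"send location": True}},
}

def toggle_rule(name):
    running_rules[name]["enabled"] = not running_rules[name]["enabled"]

def toggle_action(rule_name, action_name):
    actions = running_rules[rule_name]["actions"]
    actions[action_name] = not actions[action_name]

toggle_action("subway", "Wi-Fi setting")   # pause one action of the "subway" rule
toggle_rule("taxi")                        # temporarily stop the whole "taxi" rule
print(running_rules)
```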
Fig. 16A to 16C are diagrams illustrating exemplary screens for explaining an operation of stopping a currently running rule in a user device according to an exemplary embodiment of the present invention.
Fig. 16A illustrates an exemplary screen of the user device 100 displayed in a state where the CAS widget 500 is executed with the execution information area 520 presenting information on a currently running rule.
Fig. 16A shows an exemplary scenario in which the "taxi" and "subway" rules are currently running. The user may select (e.g., make a flicking touch gesture on) the instruction input area (or rule execution button) 510 of the widget 500. Then, the control unit 170 determines whether the selection of the instruction input area 510 initiates execution of a rule, temporary stop of a currently running rule, or termination of a currently running rule. Fig. 16A to 16C show an exemplary case of temporarily stopping a currently running rule. As shown in fig. 16B, the user device 100 may display a pop-up window 1651 that prompts the user to input an instruction (e.g., a voice input) for temporarily stopping a rule using a message (e.g., "please speak a command"), and wait for the voice input of the user.
In the state of fig. 16B, the user may make a voice input (e.g., "temporarily stop taxi") for stopping a target rule (e.g., taxi). Then, the control unit 170 may recognize the voice input of the user and stop a rule (e.g., "taxi") corresponding to the voice among the rules (e.g., "taxi" and "subway") currently running. As shown in fig. 16C, the control unit 170 may change a status indication flag of a rule (e.g., "taxi") selected in the execution information region 520 of the widget 500 by voice input, thereby temporarily stopping the rule.
According to an exemplary embodiment of the present invention, the control unit 170 may change the status indication flag of the "taxi" rule in the execution information region 520 from the enabled status flag to the disabled status flag, as shown in fig. 16C. For example, fig. 16C illustrates an exemplary screen of the user device 100 in a state in which a specified rule (e.g., taxi) is temporarily stopped in the execution information area 520 of the widget 500.
According to an exemplary embodiment of the present invention, a currently running rule is presented through widget 500 and temporarily stopped according to a user's voice input (e.g., "temporarily stop OOO", where OOO may be a rule or condition).
Fig. 17 is a diagram illustrating an exemplary screen having an indication to execute a rule in a user device according to an exemplary embodiment of the present invention.
Referring to fig. 17, when a rule is executed in response to a user request, the user device 100 may display a notification item 700 on the screen of the display panel 131. The notification item 700 may be provided in the form of an icon or text representing the rule being executed. For example, if a rule related to driving is executed, a notification item 700 in the form of an image (or text) of a car may be set on a portion of the screen. In addition, if a rule related to a medicine is executed, a notification item 700 in the form of a medicine or a medicine bottle image (or text) is provided on a portion of the screen. According to various exemplary embodiments of the present invention, a user is notified of a currently running rule through the notification item 700, so that the user intuitively perceives the currently running rule.
Fig. 17 is an exemplary case for a notification item 700 provided on a part of the display panel 131. However, the exemplary embodiments of the present invention are not limited thereto. The notification may be presented in an indicator region for providing various operational status information of the user device 100 as described with reference to fig. 5E.
Fig. 18A and 18B are diagrams illustrating exemplary screens having items notifying a rule executed in a user device according to an exemplary embodiment of the present invention.
Referring to fig. 18A and 18B, the user device 100 may set the notification item 550 at a portion on an indicator area 1850 on the screen of the display panel 131. According to an exemplary embodiment of the present invention, notification item 550 is presented in the left portion of indicator area 1850 to notify of the currently running rule.
Referring to fig. 18A and 18B, in various exemplary embodiments of the invention, a notification item 550 may be presented when at least one rule is currently running.
Referring to the exemplary scenario of FIG. 18A, as shown in execution information area 520 of widget 500, a number of rules (e.g., two rules, "taxi" and "subway") are running. Referring to fig. 18B, one of the rules listed in the execution information area 520 (e.g., the rules "taxi" and "subway") (e.g., "subway") is enabled, and the other rule (e.g., "taxi") is disabled (e.g., temporarily stopped as described in the exemplary embodiment).
According to various exemplary embodiments of the present invention, the user device 100 may notify the user of the existence of at least one currently running rule using the notification item 550. However, the exemplary embodiments of the present invention are not limited thereto. Notification items may be set per rule. This means that a number of notification items matching the number of currently running rules may be presented in the indicator area 1850.
Fig. 19A and 19B are diagrams illustrating exemplary screens having items notifying a rule executed in a user device according to an exemplary embodiment of the present invention.
Referring to fig. 19A and 19B, when a rule is executed in response to a user request, the user device 100 may display a notification item 550 notifying the existence of the currently running rule at an indicator area 1850 on the screen of the display panel 131. According to an exemplary embodiment of the present invention, if a currently running rule is temporarily stopped, the user device 100 may notify the existence of the temporarily stopped rule using the corresponding notification item 550 of the indicator area 1850.
As shown in the exemplary case of fig. 19A, if a currently running rule (e.g., "taxi") is temporarily stopped by the user's selection, the user device 100 switches the state of the corresponding notification item 550 to the disabled state. The execution information area 520 of the widget 500 indicates that the taxi rule is not currently running.
According to an exemplary embodiment of the present invention, as shown in fig. 19B, the notification item 550 may be provided in the form of an icon representing a corresponding rule or an application execution icon. For visual representation, the icon may be presented in one of an enabled status icon and a disabled status icon.
Fig. 20A to 20C are diagrams illustrating exemplary screens having an item notifying a rule executed in a user device according to an exemplary embodiment of the present invention.
Referring to fig. 20A through 20C, the user device 100 may set at least one notification item 550 of an execution program in an indicator area 1850. According to an example embodiment of the invention, the user may select (e.g., tap, make a touch gesture, etc.) the notification item 550 of the indicator area 1850 or touch and drag the indicator area 1850 downward to display the quick panel 2010.
The quick panel 2010 may be configured to display settings for various functions (e.g., Wi-Fi, GPS, sound, screen rotation, power saving mode, etc.) and to quickly configure the settings, in the form of a translucent window that slides down to cover the screen of the display, in whole or in part, in response to a user input. In various exemplary embodiments of the present invention, the quick panel 2010 may be provided with information items 2050 representing the respective rules that are currently being executed, as shown in FIG. 20B. Thus, the user can check, through the quick panel 2010, the conditions and actions of at least one rule represented by the notification item 550 visually presented in the indicator area 1850.
In the state of fig. 20B, the user may select (e.g., tap, make touch gesture, etc.) the information item 2050 of the quick panel 2010 to check detailed information about the corresponding rule, as shown in fig. 20C. In response to selection of information item 2050, user device 100 may display detailed information corresponding to the currently running rule (represented by the information item selected by the user). In various exemplary embodiments of the present invention, in a state where the quick panel 2010 is maintained, detailed information may be provided in the form of a text pop-up window presented on the quick panel 2010, a rule list switched by a screen as described above, or a voice output.
According to various exemplary embodiments of the present invention, if the notification item 550 of the indicator area 1850 is touched, the user device 100 may feed back the condition and action specified in the rule in the form of voice or text output.
According to various exemplary embodiments, if the notification item 550 of the indicator area 1850 is touched, or touched and dragged, the quick panel 2010 is displayed to show the information items 2050 representing the corresponding rules. If an information item 2050 is touched, detailed information (e.g., conditions and actions) on the corresponding rule may be fed back in the form of a voice output. According to an exemplary embodiment of the present invention, if the information item 2050 is selected, as shown in fig. 20C, a voice output of "[home] running" may be provided. Of course, various notifications, such as "the bell sound is set to sound" and "Wi-Fi on", may be output in the form of at least one of voice and text.
Fig. 21A and 21B are diagrams illustrating exemplary screens associated with an operation of notifying an execution rule in a user device according to an exemplary embodiment of the present invention.
Fig. 21A and 21B illustrate exemplary operations of notifying a user of a currently running rule using the widget 500 according to an exemplary embodiment of the present invention.
Referring to fig. 21A, in a state in which a currently running rule (e.g., home) is displayed in the execution information area 520 of the widget 500, the user may select the corresponding rule. Then, the user device 100 may feed back detailed information about the rule selected in the execution information area 520 in the form of text or voice feedback.
For example, as shown in fig. 21B, the user device 100 may output the conditions and actions configured in association with the rule (e.g., home) selected by the user, such as "[home] running, bell sound, Wi-Fi on", in the form of a text pop-up window 2150. The user device 100 may also output the detailed information in the form of voice feedback, or in the form of both voice and text feedback, according to the settings configured by the user.
Fig. 22A to 22C are diagrams illustrating exemplary screens associated with an operation of terminating a currently running rule in a user device according to an exemplary embodiment of the present invention.
Fig. 22A illustrates an exemplary screen of the user device 100 in a state in which the CAS widget 500 is executed, according to an exemplary embodiment of the present invention, wherein the CAS widget 500 shows information on a currently running rule in the execution information area 520 of the widget 500.
Fig. 22A shows a case where the currently running rule is "subway". In this case, information about "subway" is displayed in the execution information area of the widget 500, and a notification item 550 indicating that the rule "subway" is currently running is displayed in the indicator area 1850.
The user may select (e.g., make a flicking touch gesture on) the instruction input region (rule execution button) 510. Then, the control unit 170 determines whether the selection of the instruction input area 510 initiates execution of a rule, temporary stop of a currently running rule, or termination of a currently running rule. Fig. 22A to 22C show an exemplary case of terminating a currently running rule. As shown in fig. 22B, the user device 100 may display a pop-up window 2251 that prompts the user to input an instruction (e.g., a voice input) for terminating the rule using a guide message (e.g., "please speak a command"), and wait for the user's voice input.
In the state of fig. 22B, the user may make a voice input (e.g., "end subway") for terminating the execution of the rule (e.g., "subway"). Then, the control unit 170 may recognize the voice input and terminate the rule (e.g., "subway") indicated by the voice input. The control unit 170 may change the display of the execution information region 520 of the widget 500 to reflect the termination of the rule (e.g., "subway"), as shown in fig. 22C. The control unit 170 may also control such that the notification item 550 disappears from the indicator area 1850 to reflect the termination of the rule "subway".
According to an exemplary embodiment of the present invention, the control unit 170 may change the display of the state of the rule "subway" from the enabled state to the disabled state in the execution information area 520, as shown in fig. 22C. Fig. 22C illustrates an exemplary screen of the user device 100 in which a rule (e.g., subway) disappears in the execution information area 520 of the widget 500 according to the user's voice input for termination of the corresponding rule. When the rule "subway" is terminated, the information item related to the rule "subway" is replaced with a notification stating "no rule is running". In the case where one of the currently running rules is terminated, only the information items corresponding to the terminated rule disappear, and the information items corresponding to the other currently running rules remain in the execution information area 520.
According to various exemplary embodiments of the present invention, the current rule may be terminated in response to a user's voice input that commands termination of the rule in the form of "terminate 000". In various exemplary embodiments of the present invention, if a particular rule is terminated in response to a user input, at least one setting configured in association with a condition and action of the rule may be automatically restored to a state prior to execution of the corresponding rule.
Fig. 23A and 23B are diagrams illustrating exemplary screens associated with an operation of terminating a currently running rule in a user device according to an exemplary embodiment of the present invention.
Fig. 23A and 23B illustrate exemplary operations of providing an end button for terminating a rule represented by an item in the widget 500 or a rule list and terminating the corresponding rule using the end button.
As shown in FIG. 23A, widget 500 shows that the rule "subway" is currently running. As shown in FIG. 23A, a notification item 550 is provided on an indicator area 1850 to indicate the existence of any currently running rules.
The user may terminate the operation of a corresponding rule using an end button 525 mapped to the currently running rule. For example, to terminate the rule "subway", the user may select (e.g., tap or make a touch gesture on) the end button of the information item representing the rule (e.g., "subway") in the execution information area 520 of the widget 500, as shown in fig. 23A.
The control unit 170 recognizes a user input made to the end button 525 and terminates a rule (e.g., "subway") corresponding to the end button 525. As shown in fig. 23B, the control unit 170 may control such that an item representing a rule (e.g., "subway") terminated by a user disappears in the execution information area 520. The control unit 170 may also control such that the notification item 550 disappears in the indicator area 1850 due to the termination of the rule.
According to an exemplary embodiment, the control unit 170 may change the execution state of the rule "subway" from the enabled state to the disabled state in the execution information area 520, as shown in fig. 23B. Fig. 23B illustrates an exemplary screen of the user device 100 in a state in which an item indicating an enabled state of a rule (e.g., subway) disappears from the execution information area of the widget in response to selection of the end button 525. In response to a request for terminating the currently running rule "subway", the information item indicating the enabled state of the rule "subway" is replaced with a notification item "no running rule". Assuming that one of the plurality of run rules is terminated, only the information items of the terminated rule disappear, and the information items corresponding to the other currently run rules are retained in the execution information area 520.
Fig. 24 and 25 are diagrams illustrating cases where the CAS is terminated in a user device according to an exemplary embodiment of the present invention.
Referring to fig. 24, the user may make an instruction (e.g., voice, text, selection of a termination button of a widget, and a gesture) predefined for terminating a rule to end one of the currently running rules. The rule termination instruction may be a voice instruction or a text instruction. The rule may be terminated by a pre-designed function key, a per-rule-designed button in widget 500, a text-writing (text-script) input, or a predefined gesture.
For example, as shown in fig. 24, the user can make voice inputs such as "end OOO (a command corresponding to a rule)", "to home", and "to go home" based on natural language. The user device may then recognize and parse the voice input and end the rule if the voice input matches the predetermined instruction. In various exemplary embodiments of the present invention, a rule termination instruction may be set for each rule. The rule termination instruction may also be set as a general instruction for all rules. In the case of a general instruction, it is possible to end all currently running rules at once by making the termination instruction.
Referring to fig. 25, although the user does not input any instruction for terminating the currently running rule, the user device 100 terminates the rule or prompts the user to terminate the rule when a specific condition is reached.
For example, as shown in fig. 25, the user device 100 may monitor the situation of the currently running rule and determine whether the situation reaches a rule termination condition registered using the mapping table. In various exemplary embodiments of the present invention, the rule termination condition may be reached when no situation change is detected for a predetermined duration or a user-specified rule termination condition (e.g., voice input, rule termination function key selection, and holding a specific condition for terminating a rule) is satisfied.
When the rule termination condition is reached, the user device 100 may display a pop-up window 900 presenting a message prompting the user to terminate the corresponding rule (e.g., "Terminate the driving mode?").
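A rough sketch of this idle-based termination check is given below; the class, the timer values, and the prompt text are illustrative assumptions rather than the device's actual monitoring logic.

```python
# Sketch of the termination-condition check: prompt the user when no situation
# change has been detected for a predetermined duration.
import time

class RuleMonitor:
    def __init__(self, rule_name, idle_limit_s):
        self.rule_name = rule_name
        self.idle_limit_s = idle_limit_s
        self.last_change = time.monotonic()

    def report_situation_change(self):
        self.last_change = time.monotonic()

    def should_prompt_termination(self):
        # Reached when no situation change is detected for the configured duration.
        return time.monotonic() - self.last_change >= self.idle_limit_s

monitor = RuleMonitor("driving mode", idle_limit_s=0.1)  # short limit for the demo
time.sleep(0.2)
if monitor.should_prompt_termination():
    print("Terminate the driving mode? (Y/N)")  # corresponds to pop-up window 900
```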
Fig. 26A and 26B are diagrams illustrating exemplary screens associated with an operation of deleting a rule in a user device according to an exemplary embodiment of the present invention.
Fig. 26A and 26B illustrate exemplary screens of the user device 100 when a CAS application (e.g., a rule list) is executed.
Fig. 26A and 26B are exemplary cases in which three rules "home", "taxi", and "subway" defined by the user are stored in the user device 100. In a state where the three rules are presented in the rule list, the user may make a deletion instruction through menu manipulation, voice input, or gesture input.
For example, in the state of fig. 26A, the user may request execution of the delete function by operating a menu of the user device 100, by making a voice or text input of "delete rule", or by making a gesture input. The control unit 170 may then activate the rule deletion function in response to the user input and display a screen interface capable of deleting rules, as shown in fig. 26B. According to an exemplary embodiment of the present invention, the list of predetermined rules may be displayed together with a selection item 2600 (e.g., a check box) for each rule, as shown in fig. 26B.
The user may select at least one rule by checking (check) the corresponding selection item. At this time, the control unit 170 may check the selection item 2600 using a flag to indicate that the corresponding rule is selected. According to an exemplary embodiment, unchecked selections 2600 are presented as empty boxes (e.g., selection boxes with no check mark therein), while checked selections 2600 are presented as boxes with check marks therein. The user may delete at least one selected rule through menu manipulation or using a delete button.
Although fig. 26A and 26B are directed to an exemplary case of deleting a rule using a menu and the selection items 2600, the user may also instruct deletion of a rule by a voice or text input. For example, in the state of fig. 26A and 26B, the user may make a voice input of "delete home" or a writing input to delete the rule "home".
Fig. 27 is a flowchart illustrating a procedure of generating a rule in a user device according to an exemplary embodiment of the present invention.
Referring to fig. 27, the control unit 170 (e.g., the rule configuration module 173) receives a user input requesting the CAS in step 1001. For example, the user may execute a configuration mode for defining a rule through one of the input unit 120, the microphone 143, and the touch screen 130. In the configuration mode, the user may make an input for defining the rule in the form of a voice input through the microphone or a text input through the input unit 120 or the touch screen 130. Then, the control unit 170 may recognize the user input for the CAS in the configuration mode. The user may enter instructions associated with rules, conditions (or situations), and actions through one of the input manners described above.
For example, the user may define a rule "send a text if I get on a taxi", specify at least one target user device (e.g., "father") to which the text is to be sent, and define a rule execution instruction "I'm taking a taxi". As described in the above exemplary embodiments of the present invention, it is possible to define detailed rules such as "if I take a taxi, send my location to my family (sister) every 5 minutes" or "if a text message including "where" is received from the parent, send a text message to the parent".
In step 1003, the control unit 170 recognizes a user input. For example, the control unit 170 may recognize voice or text input through a corresponding input means in the regular configuration mode. For example, the control unit 170 performs a voice recognition function for recognizing voice input through the microphone 143 or a text recognition function for recognizing text input through the input unit 120 or the touch screen 130. It may be preferred that the voice and text instructions are made based on natural language, as described above.
In step 1005, the control unit 170 parses the recognized user input (e.g., natural-language-based speech or text). For example, the control unit 170 parses a natural-language-based voice instruction to extract and recognize the rule, condition, and rule execution command planned by the user. The rule, condition (situation), action, and rule execution instruction may be input continuously according to a wizard. The control unit 170 may also search the parsed user input for the items needed to perform an action (e.g., a situation to be detected, a target, and a command) according to the recognized situation, in order to check for a missing part. According to an exemplary embodiment of the present invention, the control unit 170 may generate the rule through interaction with the user by providing a guide for generating the rule and receiving information according to the guide.
For example, when the user defines a rule for situation recognition, such as "send a text if I get on a taxi", without specifying a target, the control unit 170 may recognize that the text message lacks a destination (e.g., a target). When the action of the defined rule is to be performed without a specified target, the control unit 170 may guide the user to specify the target. For example, the control unit 170 may carry out voice or text interaction with the user until the additional information required for the action is acquired. According to an exemplary embodiment of the present invention, the control unit 170 may display pop-up text (or provide voice guidance) such as "please say the recipient" or "where should it be sent". Then, the user can make a voice input such as "specify later", "boyfriend", or "send to father". The target may be specified by natural-language-based speech or text input, as described above. The rule may be supplemented with the additional information; according to the above-described process, the control unit 170 may perform recognition and parsing of the voice input and match the corresponding items to the rule.
In step 1007, the control unit 170 manages the rule defined for the CAS based on the parsed user input. For example, the control unit 170 may map rules, conditions (situations), actions, and rule execution instructions acquired by parsing user input with each other, and store the mapping in a mapping table for management.
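Steps 1001 to 1007 can be pictured very loosely as the sketch below: parse a natural-language definition into a condition, an action, and a target, prompt for anything that is missing, and store the result in a mapping table. The keyword matching, the prompt callback, and the table layout are simplifications assumed here; the real parser is speech-based and far richer.

```python
# Very loose sketch of rule definition: parse, fill missing parts, store mapping.

MAPPING_TABLE = {}

def parse_definition(text):
    rule = {"condition": None, "action": None, "target": None}
    if "taxi" in text:
        rule["condition"] = "riding a taxi"
    if "send" in text and "text" in text:
        rule["action"] = "send text message"
    if "to dad" in text:
        rule["target"] = "dad"
    return rule

def define_rule(text, instruction, ask_user=lambda prompt: "dad"):
    # ask_user stands in for the "Please say the recipient" voice/text guide.
    rule = parse_definition(text)
    if rule["action"] == "send text message" and rule["target"] is None:
        rule["target"] = ask_user("Please say the recipient")
    MAPPING_TABLE[instruction] = rule   # map instruction -> rule for later execution
    return rule

print(define_rule("send a text if I get on a taxi", "taxi"))
print(MAPPING_TABLE)
```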
Although not depicted in FIG. 27, according to various exemplary embodiments of the present invention, a user guide may be provided in defining the course of a rule. For example, a user may enter instructions for generating rules by typing with a virtual keyboard or writing on a touch screen with a particular input tool (e.g., an electronic pen or a user's finger). The control unit 170 may then provide a screen interface capable of recognizing text entered by writing or keyboard typing and defining rules based on the recognized text.
The screen interface may provide a list of conditions and actions (or functions or applications) (e.g., a condition list and an action list), and the user may selectively turn on/off the conditions and actions (or functions or applications). For example, if the user intends to define a rule for turning off the GPS function in the office, the user may input "office" on the screen of the user device 100 through an electronic pen (write input or character selection input based on a touch keyboard).
According to various exemplary embodiments of the invention, user device 100 may provide a screen interface (e.g., a command pad or touchpad) capable of receiving write input or typing input. The user device 100 can recognize the text "office" entered by writing or typing with an input tool (electronic pen or user's finger) on a screen interface such as a command pad and a touch pad. Then, the control unit 170 controls to display the configuration list screen so that the user turns on/off at least one of the conditions and actions (functions or applications) associated with the "office" on the configuration list screen.
According to an exemplary embodiment of the present invention, the rules may be defined by one of voice or text input functions. Text entry may be made by writing/typing natural language instructions as described above or by entering keywords (e.g., "office") through a command pad or touchpad and then closing/opening at least one of the actions presented in the list.
Fig. 28 is a flowchart illustrating a process of providing a CAS in a user device according to an exemplary embodiment of the present invention.
FIG. 28 illustrates an operational procedure for executing a rule on a user device 100 in response to a user interaction and taking an action when a condition (situation) specified in the rule is reached.
Referring to fig. 28, the control unit 170 receives a user instruction for CAS in step 1101. For example, a user may make a voice input for executing a rule defined for CAS through the microphone 143. Then, the control unit 170 may receive a voice instruction of the user through the microphone 143. According to various exemplary embodiments of the present invention, an instruction for executing a rule may be input in the form of voice input through the microphone 143, text input through the input unit 120 or the touch screen 130, or gesture input.
According to various exemplary embodiments of the present invention, one of the function keys of the user device 100 may be designated as an instruction key (e.g., a rule execution button or a shortcut key of the widget 500) for awaiting a voice input of the user. In this case, when the instruction key is selected, the control unit 170 waits for the user's voice input for the CAS and attempts voice recognition on the voice input made in the standby state. The standby state may be configured to be maintained after a standby mode command or only while the instruction key is pressed.
In step 1103, the control unit 170 recognizes an instruction input by the user. For example, the control unit 170 may extract a command commanding the execution of the rule from a voice input of the user.
In step 1105, the control unit 170 parses the identified instruction. For example, the control unit 170 may parse the recognized user speech (e.g., "I'm in a taxi") to extract a rule execution command (e.g., "taxi").
In step 1107, the control unit 170 determines whether there is any rule matching the extracted execution command among the predetermined rules.
If there is no rule matching the execution command in step 1107, the control unit 170 controls to display a guide in step 1109. For example, the control unit 170 may display a pop-up guide notifying the user that there is no rule matching the command. The control unit 170 may also display a guide pop-up inquiring whether to define a rule associated with the corresponding command. The control unit 170 may also provide a list of rules defined by the user.
After the guide is displayed, the control unit 170 controls to perform an operation according to the user's request in step 1111. For example, the control unit 170 may terminate the operation according to the user's selection, define a new rule associated with the command, or process an operation of selecting a specific rule from the rule list.
If there is a rule matching the execution command in step 1107, the control unit 170 determines whether the number of rules matching the execution command is greater than 1 in step 1113. For example, a user may define one or more rules that match an instruction. According to an exemplary embodiment, a user may define a plurality of rules (e.g., a first rule "transmit the current location if an event is received from a designated external device while driving", a second rule "output an alarm if the driving speed is equal to or greater than 100 Km/h", and a third rule "increase the radio volume if the driving speed is equal to or greater than 60 Km/h"). As an example, the user may define the first rule with the command "drive 1", the second rule with the command "drive 2", and the third rule with the command "drive 3".
If the number of rules matching the execution command is not greater than 1 (e.g., if only one rule matches the execution command), the control unit 170 may control to perform an action according to a single rule in step 1115. For example, the control unit 170 monitors to detect whether a condition (situation) specified in the corresponding rule is reached, and performs one or more actions when the condition (situation) is reached.
If the number of rules matching the execution command is greater than 1 in step 1113, the control unit 170 controls to perform an action corresponding to the plurality of rules matching the execution command in step 1117. For example, the control unit 170 may monitor conditions (situations) specified in a plurality of rules matching the execution command, and execute an action of each rule for which a condition is reached whenever at least one condition is reached.
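A minimal sketch of the branch in steps 1107 to 1117 is given below (Kotlin; the function and the map of commands to rule names are assumptions introduced only for illustration):

```kotlin
// Illustrative sketch of steps 1107-1117; not the patent's actual implementation.
fun handleExecutionCommand(command: String, rulesByCommand: Map<String, List<String>>) {
    // rulesByCommand maps an execution command (e.g., "drive 1") to the matching rule names.
    val matched = rulesByCommand[command].orEmpty()
    when {
        matched.isEmpty() ->
            println("No rule matches \"$command\": display a guide pop-up (step 1109).")
        matched.size == 1 ->
            println("Run the single rule \"${matched.first()}\" and monitor its condition (step 1115).")
        else ->
            println("Run all ${matched.size} matching rules; act whenever any of their conditions is reached (step 1117).")
    }
}
```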
Although not depicted in fig. 28, the control unit 170 may provide an interface that prompts (recommends) one of a plurality of rules that match one instruction. The control unit 170 may control to perform a plurality of actions according to a plurality of rules selected by the user (selection input through voice, text, or gesture instructions), or perform a single action according to a single selected rule.
Although not described in fig. 28, the control unit 170 may check the items required for execution of an action (e.g., a condition (situation) to be recognized or a target) in the rule according to the user input, in order to identify any missing part. The control unit 170 may interact with the user in the course of executing the rule to supplement the missing part.
For example, when the user defines a rule for situation recognition such as "send a text if I am in a taxi" without specifying a target, the control unit 170 may recognize that the text message lacks a destination (e.g., a target). In this case, when the action of sending the text message is about to be performed upon detecting that the corresponding condition is reached, the control unit 170 may identify the lack of a destination. When the action of the defined rule is performed without the target specified, the control unit 170 may guide the user to specify the target. According to an exemplary embodiment of the present invention, the control unit 170 may display a message such as "To whom should the text be sent?" or output it using voice guidance. The user may then specify the target to which the text message is addressed (such as "send to father") by natural language based voice or text input.
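The interaction for supplementing a missing target could be sketched as follows (Kotlin; resolveTarget and askUser are hypothetical helpers, not elements of the specification):

```kotlin
// Hypothetical sketch: prompt the user until the missing recipient is supplied.
fun resolveTarget(currentTarget: String?, askUser: (String) -> String): String {
    var target = currentTarget
    while (target.isNullOrBlank()) {
        // The query may be shown as pop-up text or output as voice guidance.
        target = askUser("To whom should the text be sent?")
    }
    return target   // e.g., "father"
}
```

A caller would pass in whatever voice- or text-input routine the device uses to collect the answer.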
Fig. 29 is a flowchart illustrating a process of providing a CAS in a user device according to an exemplary embodiment of the present invention.
In particular, FIG. 29 illustrates an exemplary process of executing a rule, checking one or more conditions (situations) specified in the rule to perform an action, or releasing the currently running rule.
Referring to fig. 29, the control unit 170 executes a rule according to a user request in step 1201. Next, in step 1203, the control unit 170 feeds back execution information (e.g., notification items) as a result of the execution of the rule. For example, the control unit 170 may control to display an item (icon or text) associated with the executed rule on the screen of the display panel 131, as described above.
In step 1205, the control unit 170 checks the conditions specified in the rule. For example, the control unit 170 continuously or periodically monitors to detect whether a condition (situation) specified in the currently running rule is reached.
In step 1207, the control unit 170 determines whether the action execution condition is reached based on the check result. For example, the control unit 170 monitors at least one condition (situation) specified in the currently running rule to determine whether the current situation matches a specific condition of action execution specified in the rule by referring to the mapping table.
If the condition (situation) of the user device 100 matches the action execution condition in step 1207, the control unit 170 controls the execution of the action triggered by the fulfillment of the condition in step 1209. For example, the control unit 170 monitors the condition or situation and, if it matches the action execution condition, executes the corresponding action. The action may be to execute a function (or application) specified in the rule, generate an execution result (e.g., context information), and output the execution result to the user or to others.
If the condition (situation) of the user device 100 does not match the action execution condition in step 1207, the control unit 170 determines whether the condition (situation) of the user device 100 matches the rule release condition in step 1211. For example, the control unit 170 monitors a certain condition specified in the currently running rule, and determines whether the current situation matches the rule release condition specified in the rule by referring to the mapping table. The rule release condition may be reached when there is no change in the situation for a user-configured duration or when a user-specified rule release condition (e.g., a rule release voice instruction, a function key input, a text input, a gesture input, and a duration of a particular condition) is satisfied.
If the condition (situation) of the user device 100 does not match the rule release condition in step 1211, the control unit 170 returns the process to step 1205 to continue checking the condition of the user device.
If the condition (situation) of the user device 100 matches the rule release condition in step 1211, the control unit 170 releases the currently running rule in step 1213, and feeds back the rule release information in step 1215. For example, the control unit 170 may control to output at least one of audio, video, and tactile feedback. According to an exemplary embodiment of the present invention, the control unit 170 may control to output the rule release feedback in the form of at least one of an audio alarm (e.g., voice and sound effect), a pop-up window, and a vibration.
Although not described in fig. 29, when the rule release condition is reached, the control unit 170 may present a pop-up message prompting the user to confirm termination of the executed rule, so that the currently running rule is maintained or released according to the user's selection.
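The overall monitoring loop of FIG. 29 can be summarized by the following sketch (Kotlin; the predicates and callbacks are placeholders supplied by the caller, and the one-second polling interval is an arbitrary assumption):

```kotlin
// Simplified sketch of the FIG. 29 loop: check the condition, perform the action when it
// is reached, and release the rule when the release condition is met.
fun monitorRule(
    conditionReached: () -> Boolean,     // steps 1205/1207
    releaseConditionMet: () -> Boolean,  // step 1211
    performAction: () -> Unit,           // step 1209
    feedBackRelease: () -> Unit          // step 1215
) {
    while (true) {
        when {
            conditionReached() -> performAction()
            releaseConditionMet() -> {
                feedBackRelease()        // step 1213: the rule is released
                return
            }
        }
        Thread.sleep(1_000)              // periodic check; interval is illustrative
    }
}
```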
Fig. 30 is a flowchart illustrating a process of providing a CAS in a user device according to an exemplary embodiment of the present invention.
Referring to fig. 30, the control unit 170 monitors to detect an event in a state where a rule executed in response to a user request is running in step 1301, and determines whether an event is detected in step 1303. For example, the control unit 170 may monitor to detect internal or external events (conditions) specified in association with the currently running rule.
The event (condition) may include an internal event occurring in the user device 100 as a result of a change in the internal condition and an external event received from the outside. According to an exemplary embodiment of the present invention, the internal event may include an event occurring when the moving speed of the user device 100 is faster than a predetermined threshold, an event occurring periodically at a predetermined time interval, an event occurring in response to a voice or text input of the user, an event occurring as a result of a change in operation (e.g., motion and illumination), and the like. The external event may include an event of receiving a message from the outside, and in particular, receiving a message from a target user device specified in a currently running rule.
If an event is detected in step 1303, the control unit 170 checks a function to be executed, which is specified in the execution rule, in step 1305. For example, the control unit 170 may check a function (or application) specified in a currently running rule as an action to be performed when a corresponding event (condition) is reached.
In step 1307, the control unit 170 activates the checked function, and performs an action by the activated function in step 1309. For example, if the action corresponding to the event (condition) specified in the execution rule is to control the volume of the user device 100, the control unit 170 controls to activate the volume control function. If the action corresponding to the event (condition) specified in the execution rule is to transmit the current location of the user device 100, the control unit 170 activates a location information transmitting function (or application) such as a GPS function (navigation function) and a messaging function, so that the user device 100 transmits location information about the user device 100 to a target device. For example, the control unit 170 may perform a function (application) or at least two interoperable functions (applications) according to the type of action to be performed.
In step 1311, the control unit 170 controls to feed back the relevant information as a result of the execution of the action (e.g., information resulting from the execution of the action). For example, the control unit 170 may control to display a screen interface presenting the volume level to the user while the volume is being adjusted. The control unit 170 may also control to output a screen interface and/or a sound (sound effect) informing that the position information of the user device 100 has been transmitted to the designated target device.
If no event (condition) is detected in step 1303, the control unit 170 determines whether the current situation matches the rule release condition in step 1315. For example, the control unit 170 may monitor the condition specified in the currently running rule and determine whether the current situation matches the rule release condition specified in the rule by referring to the mapping table.
If the current situation does not match the rule release condition at step 1315, the control unit 170 returns the process to continue monitoring the event at step 1301.
If the current situation matches the rule release condition in step 1315, the control unit 170 releases the currently running rule in step 1317, and feeds back release information as a result of the rule release (e.g., information resulting from the release of the rule) in step 1319. For example, if the rule release condition is reached, the control unit 170 may display a pop-up message prompting the user to terminate the currently running rule, thereby maintaining or releasing the currently running rule according to the user's selection. The control unit 170 may inform the user of the release of the currently running rule in the form of at least one of audio, video, and tactile feedback.
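The event-driven branch of FIG. 30 could be sketched as follows (Kotlin; the event types and the mapping of events to functions are assumptions chosen to mirror the driving examples above):

```kotlin
// Illustrative sketch of steps 1303-1311; not the patent's actual event model.
sealed class DeviceEvent
object DrivingSpeedAbove60 : DeviceEvent()                        // internal event
data class MessageFromTarget(val sender: String) : DeviceEvent()  // external event

fun onEvent(event: DeviceEvent) {
    when (event) {
        is DrivingSpeedAbove60 -> {
            // Step 1305/1307: the rule assigns the volume control function to this event.
            println("Activating volume control and increasing the radio volume (step 1309).")
            println("Feedback: current volume level shown on screen (step 1311).")
        }
        is MessageFromTarget -> {
            // Step 1305/1307: the rule assigns GPS + messaging functions to this event.
            println("Sending the current location to ${event.sender} (step 1309).")
            println("Feedback: 'location sent' notification displayed (step 1311).")
        }
    }
}
```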
Fig. 31A to 31N are diagrams illustrating exemplary screens associated with an operation of generating a rule in a user device according to an exemplary embodiment of the present invention.
Fig. 31A to 31N show a process of defining a rule in the following manner: the control unit 170 (e.g., the rule configuration module 173) recognizes a natural language-based text input written by a user and defines rules (conditions and actions) in response to the recognized text input.
Fig. 31A to 31N may correspond to the rule generation process described above with reference to fig. 3A to 3K or fig. 4A to 4J. For example, fig. 31A may correspond to fig. 4A, fig. 31C to fig. 4B, fig. 31E to fig. 4C, fig. 31G to fig. 4D, fig. 31I to fig. 4E, fig. 31K to fig. 4F, fig. 31L to fig. 4I, and fig. 31N to fig. 4J. In the following description with reference to fig. 31A to 31N, operations that are the same as or correspond to those described with reference to fig. 4A to 4J are omitted or briefly mentioned.
For example, fig. 31A to 31N illustrate an operation of generating a rule corresponding to the operations of fig. 4A to 4J according to an exemplary embodiment of the present invention. However, unlike the voice input based rule generation process of fig. 4A to 4J, fig. 31A to 31N show the operation of generating a rule based on the user's text writing input.
In the exemplary case of fig. 31A to 31N, the user input for generating the rules is text-based, and thus the user device 100 may provide pop-up messages prompting "text input" instead of "voice input" for interaction with the user. According to an exemplary embodiment of the present invention, the pop-up window 451 of fig. 4A prompts the user for voice input with a pictogram of a person speaking and text such as "please say 000". In fig. 31A, 31E, 31G, and 31L, however, pop-up windows 3151, 3155, 3157, and 3167 prompt the user for text input with a pictogram of a pen used for handwriting and text such as "please write 000".
In the exemplary case of fig. 31A to 31N, the user input is made by writing text instead of speaking. As shown in fig. 31B, 31D, 31F, 31H, 31J, and 31M, the user device 100 may provide a write window to receive the user's response (e.g., text input) to a request for information. According to an exemplary embodiment of the invention, user device 100 may display a guide (query) such as "please write the command" and then display the write window 3100 to receive the text written by the user (e.g., "taxi").
According to various exemplary embodiments of the present invention, the rule generation process may be performed through interaction between the user device 100 and the user, and the user may configure the rule by making natural language-based text input of the actions and conditions of the rule according to the guide of the user device 100. For example, the user device 100 may receive a natural language-based text input made by the user for configuring the conditions and actions constituting a rule, and configure the corresponding rule according to the user instructions input through the steps of fig. 31A to 31N.
Fig. 32A to 32E are diagrams illustrating exemplary screens associated with an operation of executing a rule in a user device according to an exemplary embodiment of the present invention.
Fig. 32A to 32E illustrate exemplary screens of the user device 100 when the rule execution module 175 of the control unit 170 receives a natural language-based text writing input of the user and executes a rule in response to the user input, the condition check module of the control unit 170 monitors to detect the reaching of a condition specified in the rule, and the action execution module 179 executes at least one action corresponding to the reached condition.
Fig. 32A to 32E may correspond to the rule execution process described above with reference to fig. 5A to 5E. For example, fig. 32A may correspond to fig. 5A, fig. 32B to fig. 5B, fig. 32D to fig. 5C, and fig. 32E to fig. 5D. In the following description with reference to fig. 32A to 32E, operations that are the same as or correspond to those described with reference to fig. 5A to 5E are omitted or briefly mentioned.
For example, fig. 32A to 32E illustrate an operation of executing a rule corresponding to the operations of fig. 5A to 5E according to an exemplary embodiment of the present invention. However, unlike the voice input based rule execution process of fig. 5A to 5E, fig. 32A to 32E show the operation of executing a rule based on the user's text writing input.
In the exemplary case of fig. 32A to 32E, the user input for executing the rule is text-based, and thus the user device 100 may provide a pop-up message prompting "text input" instead of "voice input" for interaction with the user. According to an exemplary embodiment of the present invention, the pop-up window 551 in fig. 5B prompts the user for voice input using a pictogram of a person speaking and text such as "please say 000". In fig. 32B, however, a pop-up window 3251 prompts the user for text input with a pictogram of a pen in handwriting and text such as "please write 000".
Referring to fig. 32A to 32E, the user input is made by writing text instead of speaking. As shown in fig. 32C, the user device 100 may provide a write window to receive the user's response (e.g., text input) to the request for information. According to an exemplary embodiment of the present invention, the user device 100 may display a guide (query) such as "please write a command" and then display the write window 3200 to receive the text written by the user (e.g., "subway").
According to various exemplary embodiments of the present invention, the rule execution process may be performed through interaction between the user device 100 and the user, and the user may execute the rule by making a natural language-based text input according to the guide of the user device 100. For example, the user device 100 may receive a natural language based text input made by the user for executing a rule according to the user instructions input through the steps of fig. 32A to 32E.
According to various exemplary embodiments of the present invention, a user may execute a rule using the widget 500. According to an exemplary embodiment of the present invention, the user may select the instruction input area (or rule execution button) 510 of the widget 500 to input the text of the configured rule (or command). Then, the control unit 170 feeds back information about the action to be performed to the user in the form of a text pop-up or a voice announcement (e.g., text-to-speech (TTS)) together with a notification of the start of the rule. The control unit 170 adds the executed rule to the execution information region 520 and displays a notification item notifying the existence of the currently running rule in the indicator region.
As shown in FIG. 32A, the text input-based widget 500 may be provided separately from the speech input-based widget 500. According to an exemplary embodiment, the widget 500 of fig. 5A may present a pictogram of a person speaking in the instruction input area (or rule execution button) 510 to indicate that the voice recognition function is running. The widget 500 of fig. 32A may present a pictogram (e.g., icon) of a handwriting pen in the instruction input area (or rule execution button) 510 to indicate that the text recognition function is running.
Fig. 33A to 33D are diagrams illustrating exemplary screens associated with an operation of pausing a currently running rule in a user device according to an exemplary embodiment of the present invention.
Referring to fig. 33A to 33D, the control unit 170 recognizes a text writing input based on a natural language made by a user, and temporarily stops a currently running rule in response to the text input.
Fig. 33A to 33D may correspond to the rule pause process described above with reference to fig. 16A to 16C. For example, fig. 33A may correspond to fig. 16A, fig. 33B to fig. 16B, and fig. 33D to fig. 16C. In the following description with reference to fig. 33A to 33D, operations that are the same as or correspond to those described with reference to fig. 16A to 16C are omitted or briefly mentioned.
For example, fig. 33A to 33D illustrate an operation of pausing a rule corresponding to the operations of fig. 16A to 16C according to an exemplary embodiment of the present invention. However, unlike the voice input based rule pause process of fig. 16A to 16C, fig. 33A to 33D show the operation of pausing a rule based on the user's text writing input.
In the exemplary case of fig. 33A to 33D, the user input for pausing the rule is text-based, and thus the user device 100 may provide a pop-up message prompting "text input" instead of "voice input" for interaction with the user. According to an exemplary embodiment of the present invention, the pop-up window 3351 in fig. 33B prompts the user for text entry using a pictogram (e.g., icon) of a handwriting pen and text such as "please write 000".
In the exemplary case of fig. 33A to 33D, the user input is made by writing text instead of speaking. As shown in fig. 33C, the user device 100 may provide a write window 3300 to receive the user's response (e.g., write input) to the request for information. According to an exemplary embodiment of the invention, user device 100 may display a guide (query) such as "please write a command" and then display the write window 3300 to receive the text written by the user (e.g., "pause taxi").
According to various exemplary embodiments of the present invention, the rule pausing process may be performed through interaction between the user device 100 and the user, and the user may pause the rule by making a natural language based text input according to the guide of the user device 100. For example, the user device 100 may receive a natural language based text input made by the user for pausing the rule according to the user instructions input through the steps of fig. 33A to 33D.
According to various exemplary embodiments of the invention, a user may execute a rule using the widget 500. According to an exemplary embodiment of the present invention, the user may select the instruction input area (or rule execution button) 510 of the widget 500 to input a text instruction for pausing the currently running rule. In this manner, the user may temporarily stop the operation of at least one running rule. For example, the control unit 170 may pause the corresponding rule in response to a text input (such as "pause 000") by the user.
Fig. 34A to 34D are diagrams illustrating exemplary screens associated with an operation of terminating a currently running rule in a user device according to an exemplary embodiment of the present invention.
Fig. 34A to 34D may correspond to the rule termination process described above with reference to fig. 22A to 22C. For example, fig. 34A may correspond to fig. 22A, fig. 34B to fig. 22B, and fig. 34D to fig. 22C. In the following description with reference to fig. 34A to 34D, operations that are the same as or correspond to those described with reference to fig. 22A to 22C are omitted or briefly mentioned.
For example, fig. 34A to 34D illustrate an operation of terminating a rule corresponding to the operations of fig. 22A to 22C according to an exemplary embodiment of the present invention. However, unlike the voice input based rule termination process of fig. 22A to 22C, fig. 34A to 34D illustrate the operation of terminating a rule based on the user's text writing input.
In the exemplary case of fig. 34A to 34D, the user input for terminating the rule is text-based, and thus the user device 100 may provide a pop-up message prompting "text input" instead of "voice input" for interaction with the user. According to an exemplary embodiment of the present invention, the pop-up window 3451 in fig. 34B prompts the user for text input with a pictogram of a handwriting pen and text such as "please write 000".
In the exemplary case of fig. 34A to 34D, the user input is made by writing text instead of speaking. As shown in fig. 34C, user device 100 can provide a write window 3400 to receive the user's response (e.g., text input) to the request for information. According to an exemplary embodiment of the present invention, the user device 100 may display a guide (query) such as "please write a command" and then display the write window 3400 to receive the text written by the user (e.g., "terminate subway"), as shown in fig. 34C.
According to various exemplary embodiments of the present invention, the rule termination process may be performed through an interaction between the user device 100 and the user, and the user may terminate the rule by performing a natural language-based text input according to a guide of the user device 100. For example, the user device 100 may receive a natural language based text input made by the user for terminating the rule according to the user instruction input through the steps of fig. 34A to 34D.
According to various exemplary embodiments of the invention, a user may execute a rule using the widget 500. According to an exemplary embodiment of the present invention, the user may select the instruction input area (or rule execution button) 510 of the widget 500 to input a text instruction for terminating the currently running rule. In addition, according to various exemplary embodiments of the present invention, the control unit 170 may control such that, when the corresponding rule is terminated according to the user's text input, the user device 100 automatically restores its configuration to the state before the execution of the corresponding rule.
The CAS providing method and apparatus according to an exemplary embodiment of the present invention can configure the rule (or condition) of the CAS differently according to the definition of the user. The user device 100 identifies a condition specified in at least one rule defined by the user and performs at least one action when the corresponding condition is reached. The CAS providing method and apparatus according to the exemplary embodiments of the present invention can feed back internal and/or external context information to a user as a result of action execution.
The CAS providing method and apparatus according to an exemplary embodiment of the present invention can define a rule (or situation), an instruction for executing the corresponding rule, and an action to be performed according to the rule, by natural language based text or voice input using the user device 100. Accordingly, the CAS providing method and apparatus of the present invention allow a user to define various rules having user-specified conditions and actions, rather than relying only on the rules defined at the manufacturing stage of the user device 100. The CAS providing method and apparatus according to exemplary embodiments of the present invention can define rules and instructions by natural language-based text or voice input and execute the rules in response to natural language-based text or voice instructions or the detection of a movement of the user device 100. Thus, exemplary embodiments of the present invention can extend the scope of the CAS and provide user-specific availability.
The CAS providing method and apparatus according to exemplary embodiments of the present invention can configure a plurality of conditions for each rule and support a multi-situation awareness scenario corresponding to the plurality of conditions. Accordingly, the CAS providing method and apparatus according to the exemplary embodiments of the present invention can configure various conditions according to the user's preference and simultaneously perform a plurality of actions corresponding to the multi-situation awareness scenario. Compared with statistics-based context awareness techniques according to the related art, the CAS providing method and apparatus according to exemplary embodiments of the present invention can improve the context awareness functionality by employing a recommendation function together with context awareness, thereby improving context recognition accuracy.
The CAS providing method and apparatus according to the exemplary embodiments of the present invention can optimize a CAS environment, thereby improving user convenience as well as device availability, utility, and competitiveness. The CAS providing method and apparatus according to the exemplary embodiment of the present invention may be applied to various types of CAS-enabled apparatuses including a cellular communication terminal, a smart phone, a tablet PC, a PDA, and the like.
According to exemplary embodiments of the invention, a module may be implemented as any one of, or a combination of, software, firmware, and hardware. Some or all of the modules may be implemented as an entity capable of equivalently performing the functions of the individual modules. According to various exemplary embodiments of the present invention, a plurality of operations may be performed sequentially, repeatedly, or in parallel. Some of the operations may be omitted or replaced with other operations.
The above-described exemplary embodiments of the present invention may be implemented in the form of computer-executable program commands and may be stored in a non-transitory computer-readable storage medium. The non-transitory computer readable storage medium may store the program command, the data file, and the data structure in a single form or in a combined form. The program commands recorded in the storage medium may be designed and implemented for various exemplary embodiments of the present invention or used by those skilled in the computer software art.
Non-transitory computer-readable storage media include magnetic media such as floppy disks and magnetic tapes, optical media including Compact Disc (CD) ROMs and Digital Video Disc (DVD) ROMs, magneto-optical media such as floptical disks, and hardware devices designed to store and execute program commands, such as ROMs, RAMs, and flash memories. The program commands include machine language code created by a compiler and high-level language code executable by a computer using an interpreter. The hardware devices described above may be implemented using software modules for performing the operations of various exemplary embodiments of the present invention.
While the present invention has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims and their equivalents.

Claims (21)

1. A method of an electronic device to provide context aware services, the method comprising:
receiving a user input;
identifying a condition, an action corresponding to the condition, and a command based on the received user input;
defining a rule including the condition and the action corresponding to the condition;
mapping the command to the rule in response to the rule being defined;
storing the rules and the mapped commands in at least one of a user device or a server;
recognizing, when a voice is received through a user device, whether the voice corresponds to the command;
if the speech corresponds to the command and if a context corresponding to the condition of the rule is detected at a user device, then performing the action corresponding to the condition.
2. The method of claim 1, wherein the user input corresponds to one of a natural language based text input and a natural language based speech input.
3. The method of claim 1, wherein the context comprises a change with respect to a user device, and
wherein the change with respect to the user device corresponds to at least one of a posture change and a position change of the user device.
4. The method of claim 1, wherein the scenario includes receiving at least one of an incoming call message and an incoming call sound.
5. The method of claim 1, wherein the action comprises an operation performed by a user device when the context is detected.
6. The method of claim 5, wherein the action comprises at least one of feeding back context information corresponding to the rule via the user device by controlling an internal component of the user device and feeding back the context information via an external device by controlling the external device.
7. The method of claim 1, wherein the step of defining the rule comprises: defining one or more rules based on the received user input.
8. The method of claim 1, wherein the rule is configured in one of a single structure comprising one action and a multi-structure comprising a plurality of actions according to an interaction between a user and a user device for receiving the user input.
9. The method of claim 1, wherein receiving the user input comprises: at least one of a user interface and guidance regarding actions supported by the user device is provided that allows a user to specify at least one action supported by the user device.
10. The method of claim 1, wherein identifying the condition, the action corresponding to the condition, and the command comprises:
when the user input is analyzed to find that the supplementary information required for defining the rule is absent, feeding back a query for prompting the user to provide the supplementary information required for the rule;
receiving a second user input in response to the query;
skipping the request for the supplementary information when the user input does not require supplementary information.
11. The method of claim 1, wherein receiving the user input comprises:
providing a user interface for defining the rule;
receiving the user input through the user interface, wherein the user input is one of a natural language based speech input and a natural language based text input.
12. The method of claim 11,
wherein at least one of the condition and the command corresponds to at least one of: natural language based speech, natural language based text, motion detection events of the user device, receipt of incoming sounds, and receipt of incoming messages.
13. The method of claim 12, wherein if the voice corresponds to the command, performing the following:
determining whether a rule matching the command exists;
outputting a wizard when no rule matches the command;
when there are rules matching the command, determining whether the number of matched rules is greater than 1;
at least one rule is activated based on the number of matched rules.
14. The method of claim 13, wherein activating the at least one rule according to the number of matched rules comprises:
when the number of matched rules is greater than 1, prompting selection of one of the rules.
15. The method of claim 13, further comprising:
deactivating the activated rule when a rule deactivation condition is satisfied in a state in which the rule is activated; and
feeding back release information notifying release of the rule.
16. The method of claim 15, wherein the step of feeding back the release information comprises outputting at least one of audio feedback, video feedback, and tactile feedback.
17. A user device, comprising:
a storage unit;
a display unit; and
a control unit configured to:
receiving an indication associated with a first user input from an input device, wherein the first user input comprises at least one of a text input or a speech input,
identifying, based on a first user input, a condition, an action corresponding to the condition, and a command,
defining a rule including the condition and the action corresponding to the condition,
mapping the command to the rule in response to defining the rule,
storing the rules and the mapped commands in a storage unit,
receiving an indication associated with the second user input from the input device,
determining whether the second user input corresponds to the command,
activating the rule to detect a scenario corresponding to the condition of the rule if a second user input corresponds to the command,
controlling a display unit to display execution information if the rule is activated, an
Performing the action corresponding to the condition if the scenario is detected,
wherein the second user input is one of a natural language based text input or a natural language based speech input.
18. The user equipment of claim 17, wherein the control unit comprises:
a rule configuration module configured to: receiving user input to configure the rule by identifying at least one condition and at least one action according to at least one of a natural language based speech input and a natural language based text input;
a rule execution module configured to receive a command for activating the rule and execute the rule corresponding to the command, wherein the command is one of a natural language based voice, a natural language based text, a motion detection event of a user device, a reception of an incoming sound, and a reception of an incoming message;
a condition checking module configured to detect at least one scenario corresponding to the at least one condition specified in the rule; and
an action execution module configured to, when the at least one scenario is detected, execute at least one action corresponding to the detected scenario.
19. The user equipment of claim 18, wherein the control unit is further configured to: when an event detected in the user device satisfies the condition specified in the rule that is activated, controlling to perform at least one action corresponding to the condition and controlling to feed back a result of the performance.
20. The user equipment of claim 18, wherein the control unit is further configured to: controlling to deactivate the rule when a rule deactivation condition is satisfied in a state in which the rule is activated,
and controls feedback of release information notifying release of the rule.
21. A computer readable storage medium storing a program that, when executed, causes at least one processor to perform the method of claim 1.
CN201910012868.6A 2012-09-20 2013-09-22 Context-aware service providing method and apparatus for user device Active CN109739469B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
KR20120104357 2012-09-20
KR10-2012-0104357 2012-09-20
KR10-2013-0048755 2013-04-30
KR1020130048755A KR102070196B1 (en) 2012-09-20 2013-04-30 Method and apparatus for providing context aware service in a user device
CN201310432058.9A CN103677261B (en) 2012-09-20 2013-09-22 The context aware service provision method and equipment of user apparatus

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201310432058.9A Division CN103677261B (en) 2012-09-20 2013-09-22 The context aware service provision method and equipment of user apparatus

Publications (2)

Publication Number Publication Date
CN109739469A CN109739469A (en) 2019-05-10
CN109739469B true CN109739469B (en) 2022-07-01

Family

ID=49231281

Family Applications (3)

Application Number Title Priority Date Filing Date
CN201910012868.6A Active CN109739469B (en) 2012-09-20 2013-09-22 Context-aware service providing method and apparatus for user device
CN201310432058.9A Active CN103677261B (en) 2012-09-20 2013-09-22 The context aware service provision method and equipment of user apparatus
CN201910012570.5A Active CN109683848B (en) 2012-09-20 2013-09-22 Context awareness service providing method and apparatus of user device

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN201310432058.9A Active CN103677261B (en) 2012-09-20 2013-09-22 The context aware service provision method and equipment of user apparatus
CN201910012570.5A Active CN109683848B (en) 2012-09-20 2013-09-22 Context awareness service providing method and apparatus of user device

Country Status (6)

Country Link
US (2) US10042603B2 (en)
EP (2) EP2723049B1 (en)
JP (1) JP6475908B2 (en)
CN (3) CN109739469B (en)
AU (2) AU2013231030B2 (en)
WO (1) WO2014046475A1 (en)

Families Citing this family (227)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8645137B2 (en) 2000-03-16 2014-02-04 Apple Inc. Fast, language-independent method for user authentication by voice
US8677377B2 (en) 2005-09-08 2014-03-18 Apple Inc. Method and apparatus for building an intelligent automated assistant
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US8977255B2 (en) 2007-04-03 2015-03-10 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US9954996B2 (en) 2007-06-28 2018-04-24 Apple Inc. Portable electronic device with conversation management for incoming instant messages
US10002189B2 (en) 2007-12-20 2018-06-19 Apple Inc. Method and apparatus for searching using an active ontology
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US8996376B2 (en) 2008-04-05 2015-03-31 Apple Inc. Intelligent text-to-speech conversion
US20100030549A1 (en) 2008-07-31 2010-02-04 Lee Michael M Mobile device having human language translation capability with positional feedback
US8676904B2 (en) 2008-10-02 2014-03-18 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US10255566B2 (en) 2011-06-03 2019-04-09 Apple Inc. Generating and processing task items that represent tasks to perform
US9431006B2 (en) 2009-07-02 2016-08-30 Apple Inc. Methods and apparatuses for automatic speech recognition
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
US8682667B2 (en) 2010-02-25 2014-03-25 Apple Inc. User profiling for selecting user specific voice input processing information
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
US8994660B2 (en) 2011-08-29 2015-03-31 Apple Inc. Text correction processing
US10404615B2 (en) 2012-02-14 2019-09-03 Airwatch, Llc Controlling distribution of resources on a network
US9680763B2 (en) 2012-02-14 2017-06-13 Airwatch, Llc Controlling distribution of resources in a network
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US9280610B2 (en) 2012-05-14 2016-03-08 Apple Inc. Crowd sourcing information to fulfill user requests
US10417037B2 (en) 2012-05-15 2019-09-17 Apple Inc. Systems and methods for integrating third party services with a digital assistant
US9721563B2 (en) 2012-06-08 2017-08-01 Apple Inc. Name recognition system
US9547647B2 (en) 2012-09-19 2017-01-17 Apple Inc. Voice-based media searching
KR102070196B1 (en) 2012-09-20 2020-01-30 삼성전자 주식회사 Method and apparatus for providing context aware service in a user device
US10042603B2 (en) 2012-09-20 2018-08-07 Samsung Electronics Co., Ltd. Context aware service provision method and apparatus of user device
US20150019229A1 (en) * 2012-10-10 2015-01-15 Robert D. Fish Using Voice Commands To Execute Contingent Instructions
ITTO20121070A1 (en) * 2012-12-13 2014-06-14 Istituto Superiore Mario Boella Sul Le Tecnologie WIRELESS COMMUNICATION SYSTEM WITH SHORT RADIUS INCLUDING A SHORT-COMMUNICATION SENSOR AND A MOBILE TERMINAL WITH IMPROVED FUNCTIONALITY AND RELATIVE METHOD
GB2526948A (en) 2012-12-27 2015-12-09 Kaleo Inc Systems for locating and interacting with medicament delivery devices
AU2014214676A1 (en) 2013-02-07 2015-08-27 Apple Inc. Voice trigger for a digital assistant
US10652394B2 (en) 2013-03-14 2020-05-12 Apple Inc. System and method for processing voicemail
US20140280955A1 (en) 2013-03-14 2014-09-18 Sky Socket, Llc Controlling Electronically Communicated Resources
US10748529B1 (en) 2013-03-15 2020-08-18 Apple Inc. Voice activated device for use with a voice-based digital assistant
US9401915B2 (en) 2013-03-15 2016-07-26 Airwatch Llc Secondary device as key for authorizing access to resources
US9219741B2 (en) 2013-05-02 2015-12-22 Airwatch, Llc Time-based configuration policy toggling
WO2014197336A1 (en) 2013-06-07 2014-12-11 Apple Inc. System and method for detecting errors in interactions with a voice-based digital assistant
US9582608B2 (en) 2013-06-07 2017-02-28 Apple Inc. Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
WO2014197334A2 (en) 2013-06-07 2014-12-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
WO2014197335A1 (en) 2013-06-08 2014-12-11 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
CN110442699A (en) 2013-06-09 2019-11-12 苹果公司 Operate method, computer-readable medium, electronic equipment and the system of digital assistants
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
US10223156B2 (en) 2013-06-09 2019-03-05 Apple Inc. Initiating background updates based on user activity
AU2014306221B2 (en) * 2013-08-06 2017-04-06 Apple Inc. Auto-activating smart responses based on activities from remote devices
US10296160B2 (en) 2013-12-06 2019-05-21 Apple Inc. Method for extracting salient dialog usage from live data
US9432796B2 (en) 2014-05-30 2016-08-30 Apple Inc. Dynamic adjustment of mobile device based on peer event data
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
EP3480811A1 (en) 2014-05-30 2019-05-08 Apple Inc. Multi-command single utterance input method
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US10170123B2 (en) * 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US20150350141A1 (en) 2014-05-31 2015-12-03 Apple Inc. Message user interfaces for capture and transmittal of media and location content
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
CN114115459B (en) 2014-08-06 2024-04-12 苹果公司 Reduced size user interface for battery management
CN115756154A (en) 2014-09-02 2023-03-07 苹果公司 Semantic framework for variable haptic output
CN115695632B (en) 2014-09-02 2024-10-01 苹果公司 Electronic device, computer storage medium, and method of operating electronic device
DE202015006066U1 (en) 2014-09-02 2015-12-14 Apple Inc. Smaller interfaces for handling notifications
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
CN104346451A (en) * 2014-10-29 2015-02-11 山东大学 Situation awareness system based on user feedback, as well as operating method and application thereof
CN104407702A (en) * 2014-11-26 2015-03-11 三星电子(中国)研发中心 Method, device and system for performing actions based on context awareness
US10147421B2 (en) 2014-12-16 2018-12-04 Microsoft Technology Licensing, Llc Digital assistant voice input integration
CN104468814B (en) * 2014-12-22 2018-04-13 齐玉田 Wireless control system for Internet of things and method
US9584964B2 (en) * 2014-12-22 2017-02-28 Airwatch Llc Enforcement of proximity based policies
US9413754B2 (en) 2014-12-23 2016-08-09 Airwatch Llc Authenticator device facilitating file security
CN104505091B (en) * 2014-12-26 2018-08-21 湖南华凯文化创意股份有限公司 Man machine language's exchange method and system
KR20160084663A (en) 2015-01-06 2016-07-14 삼성전자주식회사 Device and method for transmitting message
US9389928B1 (en) 2015-02-11 2016-07-12 Microsoft Technology Licensing, Llc Platform for extension interaction with applications
US10152299B2 (en) 2015-03-06 2018-12-11 Apple Inc. Reducing response latency of intelligent automated assistants
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
CN106161054A (en) * 2015-03-31 2016-11-23 腾讯科技(深圳)有限公司 Equipment configuration method, device and system
CN104902072B (en) * 2015-04-14 2017-11-07 广东欧珀移动通信有限公司 A kind of terminal based reminding method and device
US10133613B2 (en) * 2015-05-14 2018-11-20 Microsoft Technology Licensing, Llc Digital assistant extensibility to third party applications
US10460227B2 (en) 2015-05-15 2019-10-29 Apple Inc. Virtual assistant in a communication session
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US10200824B2 (en) 2015-05-27 2019-02-05 Apple Inc. Systems and methods for proactively identifying and surfacing relevant content on a touch-sensitive device
US9578173B2 (en) 2015-06-05 2017-02-21 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10594835B2 (en) * 2015-06-05 2020-03-17 Apple Inc. Efficient context monitoring
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US20160378747A1 (en) 2015-06-29 2016-12-29 Apple Inc. Virtual assistant for media playback
US10003938B2 (en) 2015-08-14 2018-06-19 Apple Inc. Easy location sharing
US10600296B2 (en) * 2015-08-19 2020-03-24 Google Llc Physical knowledge action triggers
CN105224278B (en) * 2015-08-21 2019-02-22 百度在线网络技术(北京)有限公司 Interactive voice service processing method and device
US10331312B2 (en) 2015-09-08 2019-06-25 Apple Inc. Intelligent automated assistant in a media environment
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US10740384B2 (en) 2015-09-08 2020-08-11 Apple Inc. Intelligent automated assistant for media search and playback
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US9697820B2 (en) 2015-09-24 2017-07-04 Apple Inc. Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10956666B2 (en) 2015-11-09 2021-03-23 Apple Inc. Unconventional virtual assistant interactions
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US9922648B2 (en) 2016-03-01 2018-03-20 Google Llc Developer voice actions system
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US9918006B2 (en) * 2016-05-20 2018-03-13 International Business Machines Corporation Device, system and method for cognitive image capture
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US11227589B2 (en) 2016-06-06 2022-01-18 Apple Inc. Intelligent list reading
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
DK179588B1 (en) 2016-06-09 2019-02-22 Apple Inc. Intelligent automated assistant in a home environment
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10586535B2 (en) 2016-06-10 2020-03-10 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
DK179343B1 (en) 2016-06-11 2018-05-14 Apple Inc Intelligent task discovery
DK179049B1 (en) 2016-06-11 2017-09-18 Apple Inc Data driven natural language event detection and classification
DK179415B1 (en) 2016-06-11 2018-06-14 Apple Inc Intelligent device arbitration and control
DK201670540A1 (en) 2016-06-11 2018-01-08 Apple Inc Application integration with a digital assistant
DK179823B1 (en) 2016-06-12 2019-07-12 Apple Inc. Devices, methods, and graphical user interfaces for providing haptic feedback
DK179925B1 (en) * 2016-06-12 2019-10-09 Apple Inc. User interface for managing controllable external devices
DK179657B1 (en) 2016-06-12 2019-03-13 Apple Inc. Devices, methods and graphical user interfaces for providing haptic feedback
DE102016212681A1 (en) * 2016-07-12 2018-01-18 Audi Ag Control device and method for voice-based operation of a motor vehicle
DK201670720A1 (en) 2016-09-06 2018-03-26 Apple Inc Devices, Methods, and Graphical User Interfaces for Generating Tactile Outputs
DK179278B1 (en) 2016-09-06 2018-03-26 Apple Inc Devices, methods and graphical user interfaces for haptic mixing
US10474753B2 (en) 2016-09-07 2019-11-12 Apple Inc. Language identification using recurrent neural networks
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US11281993B2 (en) 2016-12-05 2022-03-22 Apple Inc. Model and ensemble compression for metric learning
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
US11204787B2 (en) 2017-01-09 2021-12-21 Apple Inc. Application integration with a digital assistant
CN108289110B (en) 2017-01-09 2021-10-08 斑马智行网络(香港)有限公司 Device association method and device, terminal device and operating system
EP3570906A4 (en) * 2017-01-17 2020-10-21 Kaleo, Inc. Medicament delivery devices with wireless connectivity and event detection
KR20180102871A (en) * 2017-03-08 2018-09-18 엘지전자 주식회사 Mobile terminal and vehicle control method of mobile terminal
KR102369309B1 (en) * 2017-03-24 2022-03-03 삼성전자주식회사 Electronic device for performing an operation for an user input after parital landing
US10417266B2 (en) 2017-05-09 2019-09-17 Apple Inc. Context-aware ranking of intelligent response suggestions
DK201770383A1 (en) 2017-05-09 2018-12-14 Apple Inc. User interface for correcting recognition errors
DK201770439A1 (en) 2017-05-11 2018-12-13 Apple Inc. Offline personal assistant
US10395654B2 (en) 2017-05-11 2019-08-27 Apple Inc. Text normalization based on a data-driven learning network
US10726832B2 (en) 2017-05-11 2020-07-28 Apple Inc. Maintaining privacy of personal information
DK180048B1 (en) 2017-05-11 2020-02-04 Apple Inc. MAINTAINING THE DATA PROTECTION OF PERSONAL INFORMATION
DK179496B1 (en) 2017-05-12 2019-01-15 Apple Inc. USER-SPECIFIC Acoustic Models
US11301477B2 (en) 2017-05-12 2022-04-12 Apple Inc. Feedback analysis of a digital assistant
DK179745B1 (en) 2017-05-12 2019-05-01 Apple Inc. SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT
DK201770427A1 (en) 2017-05-12 2018-12-20 Apple Inc. Low-latency intelligent automated assistant
DK201770431A1 (en) 2017-05-15 2018-12-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
DK201770432A1 (en) 2017-05-15 2018-12-21 Apple Inc. Hierarchical belief states for digital assistants
DK201770411A1 (en) 2017-05-15 2018-12-20 Apple Inc. Multi-modal interfaces
DK201770372A1 (en) 2017-05-16 2019-01-08 Apple Inc. Tactile feedback for locked device user interfaces
US10303715B2 (en) 2017-05-16 2019-05-28 Apple Inc. Intelligent automated assistant for media exploration
US10403278B2 (en) 2017-05-16 2019-09-03 Apple Inc. Methods and systems for phonetic matching in digital assistant services
US10311144B2 (en) 2017-05-16 2019-06-04 Apple Inc. Emoji word sense disambiguation
US20180336892A1 (en) 2017-05-16 2018-11-22 Apple Inc. Detecting a trigger of a digital assistant
DK179549B1 (en) 2017-05-16 2019-02-12 Apple Inc. Far-field extension for digital assistant services
EP3635578A4 (en) * 2017-05-18 2021-08-25 Aiqudo, Inc. Systems and methods for crowdsourced actions and commands
US10657328B2 (en) 2017-06-02 2020-05-19 Apple Inc. Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling
US10503467B2 (en) * 2017-07-13 2019-12-10 International Business Machines Corporation User interface sound emanation activity classification
US10445429B2 (en) 2017-09-21 2019-10-15 Apple Inc. Natural language understanding using vocabularies with compressed serialized tries
US10755051B2 (en) 2017-09-29 2020-08-25 Apple Inc. Rule-based natural language processing
EP3474095A1 (en) * 2017-10-23 2019-04-24 Mastercard International Incorporated System and method for specifying rules for operational systems
US11168882B2 (en) * 2017-11-01 2021-11-09 Panasonic Intellectual Property Management Co., Ltd. Behavior inducement system, behavior inducement method and recording medium
US10636424B2 (en) 2017-11-30 2020-04-28 Apple Inc. Multi-turn canned dialog
KR102532300B1 (en) * 2017-12-22 2023-05-15 삼성전자주식회사 Method for executing an application and apparatus thereof
US10733982B2 (en) 2018-01-08 2020-08-04 Apple Inc. Multi-directional dialog
US10733375B2 (en) 2018-01-31 2020-08-04 Apple Inc. Knowledge-based framework for improving natural language understanding
JP2019144684A (en) * 2018-02-16 2019-08-29 富士ゼロックス株式会社 Information processing system and program
US10789959B2 (en) 2018-03-02 2020-09-29 Apple Inc. Training speaker recognition models for digital assistants
US10592604B2 (en) 2018-03-12 2020-03-17 Apple Inc. Inverse text normalization for automatic speech recognition
US10818288B2 (en) 2018-03-26 2020-10-27 Apple Inc. Natural assistant interaction
US10909331B2 (en) 2018-03-30 2021-02-02 Apple Inc. Implicit identification of translation payload with neural machine translation
US10928918B2 (en) 2018-05-07 2021-02-23 Apple Inc. Raise to speak
EP3849177A1 (en) 2018-05-07 2021-07-14 Apple Inc. User interfaces for viewing live video feeds and recorded video
US11145294B2 (en) 2018-05-07 2021-10-12 Apple Inc. Intelligent automated assistant for delivering content from user experiences
US10984780B2 (en) 2018-05-21 2021-04-20 Apple Inc. Global semantic word embeddings using bi-directional recurrent neural networks
CN108762712B (en) * 2018-05-30 2021-10-08 Oppo广东移动通信有限公司 Electronic device control method, electronic device control device, storage medium and electronic device
DK179822B1 (en) 2018-06-01 2019-07-12 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
US10892996B2 (en) 2018-06-01 2021-01-12 Apple Inc. Variable latency device coordination
DK201870355A1 (en) 2018-06-01 2019-12-16 Apple Inc. Virtual assistant operation in multi-device environments
US11386266B2 (en) 2018-06-01 2022-07-12 Apple Inc. Text correction
DK180639B1 (en) 2018-06-01 2021-11-04 Apple Inc DISABILITY OF ATTENTION-ATTENTIVE VIRTUAL ASSISTANT
US10944859B2 (en) 2018-06-03 2021-03-09 Apple Inc. Accelerated task performance
WO2020018433A1 (en) 2018-07-16 2020-01-23 Kaleo, Inc. Medicament delivery devices with wireless connectivity and compliance detection
KR102607666B1 (en) * 2018-08-08 2023-11-29 삼성전자 주식회사 Apparatus and method for providing feedback for confirming intent of a user in an electronic device
KR102527107B1 (en) * 2018-08-08 2023-05-02 삼성전자주식회사 Method for executing function based on voice and electronic device for supporting the same
CN116386677A (en) 2018-08-27 2023-07-04 谷歌有限责任公司 Algorithm determination of interruption of reading by story readers
CN110874202B (en) * 2018-08-29 2024-04-19 斑马智行网络(香港)有限公司 Interaction method, device, medium and operating system
US11501769B2 (en) * 2018-08-31 2022-11-15 Google Llc Dynamic adjustment of story time special effects based on contextual data
WO2020050822A1 (en) 2018-09-04 2020-03-12 Google Llc Detection of story reader progress for pre-caching special effects
EP3837681A1 (en) 2018-09-04 2021-06-23 Google LLC Reading progress estimation based on phonetic fuzzy matching and confidence interval
US11010561B2 (en) 2018-09-27 2021-05-18 Apple Inc. Sentiment prediction from textual data
KR20200035887A (en) * 2018-09-27 2020-04-06 삼성전자주식회사 Method and system for providing an interactive interface
US11170166B2 (en) 2018-09-28 2021-11-09 Apple Inc. Neural typographical error modeling via generative adversarial networks
US11462215B2 (en) 2018-09-28 2022-10-04 Apple Inc. Multi-modal inputs for voice commands
US10839159B2 (en) 2018-09-28 2020-11-17 Apple Inc. Named entity normalization in a spoken dialog system
US11475898B2 (en) 2018-10-26 2022-10-18 Apple Inc. Low-latency multi-speaker speech recognition
US11638059B2 (en) 2019-01-04 2023-04-25 Apple Inc. Content playback on multiple devices
CN109829108B (en) * 2019-01-28 2020-12-04 北京三快在线科技有限公司 Information recommendation method and device, electronic equipment and readable storage medium
US11348573B2 (en) 2019-03-18 2022-05-31 Apple Inc. Multimodality in digital assistant systems
GB2582910A (en) * 2019-04-02 2020-10-14 Nokia Technologies Oy Audio codec extension
US11580970B2 (en) * 2019-04-05 2023-02-14 Samsung Electronics Co., Ltd. System and method for context-enriched attentive memory network with global and local encoding for dialogue breakdown detection
US11475884B2 (en) 2019-05-06 2022-10-18 Apple Inc. Reducing digital assistant latency when a language is incorrectly determined
DK201970509A1 (en) 2019-05-06 2021-01-15 Apple Inc Spoken notifications
US11307752B2 (en) * 2019-05-06 2022-04-19 Apple Inc. User configurable task triggers
US11423908B2 (en) 2019-05-06 2022-08-23 Apple Inc. Interpreting spoken requests
US11140099B2 (en) 2019-05-21 2021-10-05 Apple Inc. Providing message response suggestions
DK201970511A1 (en) 2019-05-31 2021-02-15 Apple Inc Voice identification in digital assistant systems
US10904029B2 (en) 2019-05-31 2021-01-26 Apple Inc. User interfaces for managing controllable external devices
US11496600B2 (en) 2019-05-31 2022-11-08 Apple Inc. Remote execution of machine-learned models
US11289073B2 (en) 2019-05-31 2022-03-29 Apple Inc. Device text to speech
DK180129B1 (en) 2019-05-31 2020-06-02 Apple Inc. User activity shortcut suggestions
US11363071B2 (en) 2019-05-31 2022-06-14 Apple Inc. User interfaces for managing a local network
US11468890B2 (en) 2019-06-01 2022-10-11 Apple Inc. Methods and user interfaces for voice-based control of electronic devices
US11360641B2 (en) 2019-06-01 2022-06-14 Apple Inc. Increasing the relevance of new available information
WO2021056255A1 (en) 2019-09-25 2021-04-01 Apple Inc. Text detection using global geometry estimators
KR20210072471A (en) * 2019-12-09 2021-06-17 현대자동차주식회사 Apparatus for recognizing voice command, system having the same and method thereof
US11966964B2 (en) * 2020-01-31 2024-04-23 Walmart Apollo, Llc Voice-enabled recipe selection
US20230003535A1 (en) * 2020-02-27 2023-01-05 Mitsubishi Electric Corporation Rendezvous assistance system and rendezvous assistance method
JP7248615B2 (en) * 2020-03-19 2023-03-29 ヤフー株式会社 Output device, output method and output program
US11513667B2 (en) 2020-05-11 2022-11-29 Apple Inc. User interface for audio message
US11043220B1 (en) 2020-05-11 2021-06-22 Apple Inc. Digital assistant hardware abstraction
US11061543B1 (en) 2020-05-11 2021-07-13 Apple Inc. Providing relevant data items based on context
US11755276B2 (en) 2020-05-12 2023-09-12 Apple Inc. Reducing description length based on confidence
CN111857457A (en) * 2020-06-22 2020-10-30 北京百度网讯科技有限公司 Cloud mobile phone control method and device, electronic equipment and readable storage medium
US11490204B2 (en) 2020-07-20 2022-11-01 Apple Inc. Multi-device audio adjustment coordination
US11438683B2 (en) 2020-07-21 2022-09-06 Apple Inc. User identification using headphones
IT202000021253A1 (en) * 2020-09-14 2022-03-14 Sistem Evo S R L IT platform based on artificial intelligence systems to support IT security
USD1016082S1 (en) * 2021-06-04 2024-02-27 Samsung Electronics Co., Ltd. Display screen or portion thereof with graphical user interface
CN113377426B (en) * 2021-07-01 2024-08-20 中煤航测遥感集团有限公司 Vehicle supervision rule configuration method and device, computer equipment and storage medium
CN114285930B (en) * 2021-12-10 2024-02-23 杭州逗酷软件科技有限公司 Interaction method, device, electronic equipment and storage medium

Family Cites Families (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6996402B2 (en) * 2000-08-29 2006-02-07 Logan James D Rules based methods and apparatus for generating notification messages based on the proximity of electronic devices to one another
US5917489A (en) * 1997-01-31 1999-06-29 Microsoft Corporation System and method for creating, editing, and distributing rules for processing electronic messages
DE69942663D1 (en) * 1999-04-13 2010-09-23 Sony Deutschland Gmbh Merging of speech interfaces for the simultaneous use of devices and applications
US6622119B1 (en) * 1999-10-30 2003-09-16 International Business Machines Corporation Adaptive command predictor and method for a natural language dialog system
US6775658B1 (en) * 1999-12-21 2004-08-10 Mci, Inc. Notification by business rule trigger control
US8938256B2 (en) * 2000-08-29 2015-01-20 Intel Corporation Communication and control system using location aware devices for producing notification messages operating under rule-based control
JP2002283259A (en) 2001-03-27 2002-10-03 Sony Corp Operation teaching device and operation teaching method for robot device and storage medium
US20020144259A1 (en) * 2001-03-29 2002-10-03 Philips Electronics North America Corp. Method and apparatus for controlling a media player based on user activity
US7324947B2 (en) * 2001-10-03 2008-01-29 Promptu Systems Corporation Global speech user interface
US20030147624A1 (en) * 2002-02-06 2003-08-07 Koninklijke Philips Electronics N.V. Method and apparatus for controlling a media player based on a non-user event
KR100434545B1 (en) 2002-03-15 2004-06-05 삼성전자주식회사 Method and apparatus for controlling devices connected with home network
KR100651729B1 (en) 2003-11-14 2006-12-06 한국전자통신연구원 System and method for multi-modal context-sensitive applications in home network environment
JP2006221270A (en) 2005-02-08 2006-08-24 Nec Saitama Ltd Multitask system and method of mobile terminal device with voice recognition function
US7352279B2 (en) * 2005-03-02 2008-04-01 Matsushita Electric Industrial Co., Ltd. Rule based intelligent alarm management system for digital surveillance system
US8554599B2 (en) 2005-03-25 2013-10-08 Microsoft Corporation Work item rules for a work item tracking system
US7640160B2 (en) 2005-08-05 2009-12-29 Voicebox Technologies, Inc. Systems and methods for responding to natural language speech utterance
KR100695331B1 (en) 2005-09-23 2007-03-16 한국전자통신연구원 User interface apparatus for context-aware environments, and device controlling apparatus and it's operating method
JP2007220045A (en) * 2006-02-20 2007-08-30 Toshiba Corp Communication support device, method, and program
US8311836B2 (en) * 2006-03-13 2012-11-13 Nuance Communications, Inc. Dynamic help including available speech commands from content contained within speech grammars
JP4786384B2 (en) * 2006-03-27 2011-10-05 株式会社東芝 Audio processing apparatus, audio processing method, and audio processing program
JP2007305039A (en) 2006-05-15 2007-11-22 Sony Corp Information processing apparatus and method, and program
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US20080220810A1 (en) 2007-03-07 2008-09-11 Agere Systems, Inc. Communications server for handling parallel voice and data connections and method of using the same
US8620652B2 (en) * 2007-05-17 2013-12-31 Microsoft Corporation Speech recognition macro runtime
US8150699B2 (en) 2007-05-17 2012-04-03 Redstart Systems, Inc. Systems and methods of a structured grammar for a speech recognition command system
KR20090053179A (en) 2007-11-22 2009-05-27 주식회사 케이티 Service controlling apparatus and method for context-aware knowledge service
CA2708375C (en) * 2007-12-14 2015-05-26 Research In Motion Limited Method and system for a context aware mechanism for use in presence and location
US8958848B2 (en) 2008-04-08 2015-02-17 Lg Electronics Inc. Mobile terminal and menu control method thereof
KR20090107364A (en) 2008-04-08 2009-10-13 엘지전자 주식회사 Mobile terminal and its menu control method
KR20100001928A (en) * 2008-06-27 2010-01-06 중앙대학교 산학협력단 Service apparatus and method based on emotional recognition
US8489599B2 (en) 2008-12-02 2013-07-16 Palo Alto Research Center Incorporated Context and activity-driven content delivery and interaction
KR20100100175A (en) * 2009-03-05 2010-09-15 중앙대학교 산학협력단 Context-aware reasoning system for personalized u-city services
KR101566379B1 (en) * 2009-05-07 2015-11-13 삼성전자주식회사 Method For Activating User Function based on a kind of input signal And Portable Device using the same
US10255566B2 (en) 2011-06-03 2019-04-09 Apple Inc. Generating and processing task items that represent tasks to perform
US9100465B2 (en) * 2009-08-11 2015-08-04 Eolas Technologies Incorporated Automated communications response system
KR20110023977A (en) 2009-09-01 2011-03-09 삼성전자주식회사 Method and apparatus for managing widget in mobile terminal
US20110099507A1 (en) 2009-10-28 2011-04-28 Google Inc. Displaying a collection of interactive elements that trigger actions directed to an item
TWI438675B (en) 2010-04-30 2014-05-21 Ibm Method, device and computer program product for providing a context-aware help content
US8359020B2 (en) * 2010-08-06 2013-01-22 Google Inc. Automatically monitoring for voice input based on context
US10042603B2 (en) 2012-09-20 2018-08-07 Samsung Electronics Co., Ltd. Context aware service provision method and apparatus of user device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101002175A (en) * 2004-07-01 2007-07-18 诺基亚公司 Method, apparatus and computer program product to utilize context ontology in mobile device application personalization
CN102640480A (en) * 2009-12-04 2012-08-15 高通股份有限公司 Creating and utilizing a context

Also Published As

Publication number Publication date
CN109739469A (en) 2019-05-10
CN109683848A (en) 2019-04-26
EP2723049B1 (en) 2018-08-15
AU2013231030A1 (en) 2014-04-03
CN109683848B (en) 2023-06-30
WO2014046475A1 (en) 2014-03-27
EP3435645A1 (en) 2019-01-30
CN103677261A (en) 2014-03-26
US20140082501A1 (en) 2014-03-20
US10042603B2 (en) 2018-08-07
JP6475908B2 (en) 2019-02-27
JP2014064278A (en) 2014-04-10
AU2018260953B2 (en) 2020-06-04
AU2013231030B2 (en) 2018-08-09
US10684821B2 (en) 2020-06-16
EP2723049A1 (en) 2014-04-23
AU2018260953A1 (en) 2018-11-29
CN103677261B (en) 2019-02-01
US20180341458A1 (en) 2018-11-29

Similar Documents

Publication Publication Date Title
CN109739469B (en) Context-aware service providing method and apparatus for user device
US11907615B2 (en) Context aware service provision method and apparatus of user device
US10832655B2 (en) Method and user device for providing context awareness service using speech recognition
CN104765447A (en) Limiting notification interruptions
KR20140099589A (en) Method and apparatus for providing short-cut number in a user device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant