
US20160110230A1 - System and Method for Issuing Commands to Applications Based on Contextual Information - Google Patents


Info

Publication number
US20160110230A1
Authority
US
United States
Prior art keywords
application
event
contextual information
text
touch
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/978,655
Inventor
Bradford Allen Moore
Stephen W. Swales
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Original Assignee
Apple Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US12/566,660 (granted as US8285499B2)
Application filed by Apple Inc
Priority to US14/978,655
Assigned to Apple Inc. (assignment of assignors interest; see document for details). Assignors: Moore, Bradford Allen; Swales, Stephen W.
Publication of US20160110230A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/54: Interprogram communication
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/02: Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F 3/023: Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F 3/0233: Character input methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/02: Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F 3/023: Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F 3/0233: Character input methods
    • G06F 3/0237: Character input methods using prediction or retrieval techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, for inputting data by handwriting, e.g. gesture or text
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/455: Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45504: Abstract machines for programme code execution, e.g. Java virtual machine [JVM], interpreters, emulators
    • G06F 9/45508: Runtime interpretation or emulation, e.g. emulator loops, bytecode interpretation
    • G06F 9/45512: Command shells

Definitions

  • This application is related to a system and a method for providing on-screen soft keyboard services to an application executing on a multifunction device, for example a mobile phone that has a touch-sensitive display and is configured to provide enhanced keyboard services to third party applications executed by the device.
  • Some electronic devices provide a user interface that includes an on-screen keyboard (also called a soft keyboard) that allows a user to enter text into the user interface by touching virtual keys displayed on a touch-sensitive display device (sometimes called a touch screen display).
  • the on-screen keyboard is a system keyboard that is provided by the operating system of the electronic device.
  • the operating system of the electronic device handles text manipulation events (e.g., insert, delete, select and replace commands) received from the system keyboard and provides enhanced keyboard functions such as spell checking.
  • When a third-party application requires text input from a user, the third-party application receives information and/or commands from the system keyboard provided by the operating system of the electronic device.
  • the enhanced keyboard functions provided by the operating system often require contextual information, such as text (or other symbols) positioned before and/or after the current text or cursor position, and therefore the enhanced keyboard functions are not available to third-party applications that store contextual information in storage locations that are unknown to the operating system, and/or that store contextual information in a manner (e.g., using data structures, formats, metadata, or the like) unknown to the operating system.
  • some embodiments provide a system, computer readable storage medium including instructions, and a computer-implemented method for issuing commands from a control application to a second application (e.g., a third-party application) based on contextual information received from the second application.
  • the control application provides enhanced keyboard functions (i.e., functions other than simply delivery of single keystroke commands) to the second application.
  • the control application receives an indication that a text manipulation event has occurred in a user interface of the second application.
  • the control application queries the second application to obtain contextual information established by the second application prior to the event, the contextual information providing context to the text manipulation event that occurred in the user interface of the second application.
  • the control application then issues one or more commands to the second application based on the contextual information providing context to the text manipulation event.
  • the second application determines the contextual information providing context to the text manipulation event and responds to the querying by the control application by providing the contextual information providing context to the text manipulation event that occurred in the user interface of the second application.
  • the second application determines the contextual information providing context to the text manipulation event by determining a text direction associated with a logical location of the text manipulation event and determining boundaries of a predetermined text unit that includes the text associated with the logical location of the text manipulation event based on the text direction.
  • the second application, in response to the issuing of the one or more commands by the control application, executes the one or more commands issued by the control application.
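  • The query-and-command round trip described above can be pictured as a small programming interface between the control application and the second application. The following Swift sketch is purely illustrative: the protocol and type names (ContextProvider, TextCommand, TextManipulationEvent, and so on) are hypothetical and do not correspond to any actual platform API or to the claimed implementation.

```swift
import Foundation

// Hypothetical protocol the second (e.g., third-party) application adopts so the
// control application can ask for context and hand back commands.
protocol ContextProvider: AnyObject {
    /// Returns text surrounding the given logical location (e.g., the enclosing word).
    func contextualInformation(around location: Int, unit: TextUnit) -> String
    /// Executes (or declines to execute) a command issued by the control application.
    func execute(_ command: TextCommand)
}

enum TextUnit { case character, word, sentence, paragraph, line, document }

enum TextCommand {
    case replace(range: Range<Int>, with: String)            // e.g., a spelling correction
    case displayCandidates([String], forRange: Range<Int>)   // e.g., completions to offer
}

struct TextManipulationEvent {
    enum Kind { case insertion(String), deletion, selection, deselection }
    let kind: Kind
    let logicalLocation: Int   // a point between two characters, or the start of a range
}

// Hypothetical control application: it receives the event, queries for context,
// and then issues one or more commands based on that context.
final class ControlApplication {
    func handle(_ event: TextManipulationEvent, in app: ContextProvider) {
        let context = app.contextualInformation(around: event.logicalLocation, unit: .word)
        // Real decision logic (spell checking, completion, etc.) would go here; the
        // sketch simply derives a trivial candidate list from the returned context.
        let candidates = [context + "s", context + "ed"]
        app.execute(.displayCandidates(candidates,
                                       forRange: event.logicalLocation..<event.logicalLocation))
    }
}
```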
  • the contextual information relating to the text manipulation event includes a logical location and a predetermined unit of text relating to the text manipulation event.
  • the predetermined unit of text is selected from the group consisting of a character, a word, a sentence, a paragraph, a line of text, a section of a document, and a document.
  • the logical location of the text manipulation event is selected from the group consisting of a point between two characters in the user interface of the second application and a range including one or more characters that is selected in the user interface of the second application.
  • a respective query from the control application requesting the contextual information of the text manipulation event includes a physical location of the text manipulation event in the user interface of the second application, and the logical location of the text manipulation event corresponding to the physical location.
  • the text manipulation event is selected from the group consisting of an insertion of one or more characters, a deletion of one or more characters, a selection of one or more characters, and a deselection of one or more characters.
  • control application determines the one or more commands based on the contextual information, which provides context to the text manipulation event, prior to issuing one or more commands to the second application.
  • control application determines that the contextual information and text manipulation event indicate a sequence of characters that represent a single character. Next, the control application determines one or more single characters from a plurality of possible single characters based on the contextual information and text manipulation event. The control application then generates one or more commands for instructing the second application to display, for user selection, the one or more single characters from the plurality of possible single characters.
  • control application determines that the contextual information and text manipulation event indicate a sequence of characters that represent a potentially misspelled word. Next, the control application determines one or more words from a plurality of possible words that represent a correct spelling of the potentially misspelled word. The control application then generates one or more commands for instructing the second application to display, for user selection, the one or more words from the plurality of possible words.
  • control application determines that the contextual information and text manipulation event indicate a sequence of characters that represent a portion of a word. Next, the control application determines one or more candidate words from a plurality of possible words determined in accordance with the portion of the word. The control application then generates one or more commands for instructing the second application to display, for user selection, the one or more candidate words.
  • control application receives the text manipulation event in the user interface of the second application.
  • the second application notifies the control application that contextual information obtained by the control application from the second application can no longer be relied upon by the control application.
  • the second application notifies the control application that a selection of text in the second application has changed.
  • FIG. 1 is a block diagram illustrating a user interface of a device, according to some embodiments.
  • FIG. 2 is a block diagram illustrating a device, according to some embodiments.
  • FIG. 3A is a block diagram illustrating exemplary components of an event handling system, according to some embodiments.
  • FIG. 3B is a block diagram illustrating an event handler, according to some embodiments.
  • FIG. 4 is a block diagram illustrating an exemplary device, according to some embodiments.
  • FIG. 5A is a block diagram illustrating a control application receiving a text manipulation event, according to some embodiments.
  • FIG. 5B is a block diagram illustrating a control application querying an application for contextual information, according to some embodiments.
  • FIG. 5C is a block diagram illustrating a control application issuing commands to an application, according to some embodiments.
  • FIG. 6 is a flowchart of a method for issuing commands to an application based on contextual information, according to some embodiments.
  • FIG. 7 is a flowchart of a method for determining contextual information that provides context to a text manipulation event, according to some embodiments.
  • FIG. 8 is a flowchart of a method for determining commands to be sent to an application when the contextual information and text manipulation event indicate that a sequence of characters represent a single character, according to some embodiments.
  • FIG. 9 is a flowchart of a method for determining commands to be sent to an application when the contextual information and text manipulation event indicate that a sequence of characters represent a potentially misspelled word, according to some embodiments.
  • FIG. 10 is a flowchart of a method for determining commands to be sent to an application when the contextual information and text manipulation event indicate that a sequence of characters represent a portion of a word, according to some embodiments.
  • Some embodiments provide a system, computer readable storage medium including instructions, and a computer-implemented method for allowing a third-party application executing on a device to receive enhanced keyboard services.
  • a control application of the device provides the enhanced keyboard services to the third-party application by issuing commands from the control application to the third party application based on contextual information received from the third-party application.
  • first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another.
  • a first contact could be termed a second contact, and, similarly, a second contact could be termed a first contact, without departing from the scope of the present invention.
  • the first contact and the second contact are both contacts, but they are not the same contact.
  • the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context.
  • the phrase “if it is determined” or “if [a stated condition or event] is detected” may be construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context.
  • the computing device is a portable communications device, such as a mobile telephone, that also contains other functions, such as PDA and/or music player functions.
  • portable multifunction devices include, without limitation, the iPhone® and iPod Touch® devices from Apple Inc. of Cupertino, Calif.
  • Other portable devices such as laptops or tablet computers with touch-sensitive surfaces (e.g., touch screen displays and/or touch pads), may also be used.
  • the device is not a portable communications device, but is a desktop computer with a touch-sensitive surface (e.g., a touch screen display and/or a touch pad).
  • a computing device that includes a display and a touch-sensitive surface is described. It should be understood, however, that the computing device may include one or more other physical user-interface devices, such as a physical keyboard, a mouse and/or a joystick.
  • the device supports a variety of applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.
  • the various applications that may be executed on the device may use at least one common physical user-interface device, such as the touch-sensitive surface.
  • One or more functions of the touch-sensitive surface as well as corresponding information displayed on the device may be adjusted and/or varied from one application to the next and/or within a respective application.
  • a common physical architecture (such as the touch-sensitive surface) of the device may support the variety of applications with user interfaces that are intuitive and transparent to the user.
  • the user interfaces may include one or more soft keyboard embodiments.
  • the soft keyboard embodiments may include standard (QWERTY) and/or non-standard configurations of symbols on the displayed icons of the keyboard, such as those described in U.S. patent application Ser. No. 11/459,606, “Keyboards For Portable Electronic Devices,” filed Jul. 24, 2006, and Ser. No. 11/459,615, “Touch Screen Keyboards For Portable Electronic Devices,” filed Jul. 24, 2006, the contents of which are hereby incorporated by reference in their entireties.
  • the keyboard embodiments may include a reduced number of icons (or soft keys) relative to the number of keys in existing physical keyboards, such as that for a typewriter. This may make it easier for users to select one or more icons in the keyboard, and thus, one or more corresponding symbols.
  • the keyboard embodiments may be adaptive. For example, displayed icons may be modified in accordance with user actions, such as selecting one or more icons and/or one or more corresponding symbols.
  • One or more applications on the device may utilize common and/or different keyboard embodiments. Thus, the keyboard embodiment used may be tailored to at least some of the applications.
  • one or more keyboard embodiments may be tailored to a respective user. For example, one or more keyboard embodiments may be tailored to a respective user based on a word usage history (lexicography, slang, individual usage) of the respective user. Some of the keyboard embodiments may be adjusted to reduce a probability of a user error when selecting one or more icons, and thus one or more symbols, when using the soft keyboard embodiments.
  • touch-based gestures include not only gestures, made by one or more fingers or one or more styluses, that make physical contact with a touch-sensitive screen 112 or other touch-sensitive surface, but also gestures that occur, in whole or in part, sufficiently close to touch-sensitive screen 112 or other touch-sensitive surface that the one or more sensors of touch-sensitive screen 112 or other touch-sensitive surface are able to detect those gestures.
  • FIG. 1 is a block diagram illustrating a user interface 104 of a device 102 , according to some embodiments.
  • FIG. 1 illustrates the user interface when the device 102 is executing an application.
  • the user interface 104 includes a soft keyboard 106 and a text view region 108 , both displayed on a touch-sensitive display of the device 102 .
  • the device 102 is a portable multifunction electronic device that includes a touch-sensitive display (sometimes called a touch screen or touch screen display) configured to present the user interface 104 .
  • the device 102 is a consumer electronic device, mobile telephone, smart phone, video game system, electronic music player, tablet PC, electronic book reading system, e-book, personal digital assistant, navigation device, electronic organizer, email device, laptop or other computer, kiosk computer, vending machine, smart appliance, or the like.
  • FIG. 2 is a block diagram illustrating device 102 according to some embodiments.
  • Device 102 includes one or more processing units (CPU's) 202 , one or more network or other communications interfaces 204 , memory 210 , and one or more communication buses 209 for interconnecting these components.
  • Communication buses 209 may include circuitry (sometimes called a chipset) that interconnects and controls communications between system components.
  • Device 102 also includes a user interface 205 having a display device 206 (e.g., a touch-sensitive display having a touch-sensitive surface) and optionally including additional input devices 208 (e.g., one or more of: keyboard, mouse, trackball, touchpad having a touch-sensitive surface, keypad having physical keys or buttons, microphone, etc.).
  • the input devices 208 include a touchpad having a touch-sensitive surface.
  • device 102 includes one or more sensors 203 , such as one or more accelerometers, magnetometer, gyroscope, GPS receiver, microphone, one or more infrared (IR) sensors, one or more biometric sensors, camera, etc. Any input device 208 herein described as an input device may equally well be described as a sensor 203 , and vice versa.
  • signals produced by the one or more sensors 203 are used as input sources for detecting events.
  • Memory 210 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 210 may optionally include one or more storage devices remotely located from the CPU(s) 202 . Memory 210 , or alternately the non-volatile memory device(s) within memory 210 , comprises a computer readable storage medium. In some embodiments, memory 210 stores the following programs, modules and data structures, or a subset thereof:
  • Each of the above identified modules, applications and systems is stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above.
  • the set of instructions are executed by one or more processors (e.g., CPUs 202 ).
  • the above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may be combined or otherwise rearranged in various embodiments.
  • memory 210 stores a subset of the modules and data structures identified above.
  • memory 210 may store additional modules and data structures not described above.
  • FIG. 2 shows a block diagram of a device 102
  • FIG. 2 is intended more as a functional description of the various features which may be present in the device 102 than as a structural schematic of the embodiments described herein.
  • items shown separately could be combined and some items could be separated.
  • the control application 220 is included in the operating system 212 .
  • When performing a touch-based gesture on the touch-sensitive surface of display device 206 or a touchpad, the user generates a sequence of events and/or sub-events that are processed by one or more processing units of the device 102 (e.g., the one or more processors 202 illustrated in FIG. 2 ). In some embodiments, the one or more processing units of the device 102 process the sequence of events and/or sub-events to recognize events.
  • FIG. 3A is a block diagram illustrating exemplary components of the event handling system 270 , according to some embodiments.
  • display device 206 is a touch-sensitive display having a touch-sensitive surface.
  • the event handling system 270 is used in conjunction with a device 102 having a non-touch sensitive display and a touch pad having a touch-sensitive surface.
  • the event handling system 270 includes event sorter 301 , which receives event information and determines an application 240 - 1 and at least one application view 311 to which to deliver the event information.
  • application 240 - 1 has an application internal state 312 .
  • application internal state 312 includes contextual information 314 (e.g., text and metadata) needed to provide enhanced keyboard services to application 240 - 1 .
  • Application internal state 312 is not directly accessible by control application 220 , because the memory location(s) of the application internal state 312 is(are) not known to control application 220 , or the memory location(s) of the application internal state 312 is(are) not directly accessible by the control application, and/or because application 240 - 1 stores information in application internal state 312 in a manner (e.g., using data structures, formats, metadata, or the like) unknown to control application 220 .
  • event sorter 301 includes an event monitor 302 and an event dispatcher module 305 .
  • Event monitor 302 receives event information from operating system 212 .
  • event information includes information about a sub-event (e.g., a portion of a touch-based gesture on touch-sensitive display 206 ).
  • the operating system 212 transmits information it receives from the user interface 205 to event monitor 302 .
  • Information that the operating system 212 receives from the user interface 205 includes event information from the display device 206 (i.e., a touch-sensitive display) or from a touch pad having a touch-sensitive surface.
  • event monitor 302 sends requests to operating system 212 at predetermined intervals. In response, operating system 212 transmits event information to event monitor 302 . In other embodiments, operating system 212 transmits event information only when there is a significant event (e.g., receiving an input beyond a predetermined noise threshold and/or for more than a predetermined duration).
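  • As a rough illustration of the two delivery styles just described (polling at a predetermined interval versus forwarding only significant inputs), the Swift sketch below uses hypothetical names; the thresholds and the RawInput type are assumptions, not values from the disclosure.

```swift
import Foundation

// Hypothetical raw input sample delivered by the operating system.
struct RawInput {
    let magnitude: Double        // e.g., touch pressure or movement distance
    let duration: TimeInterval
}

final class EventMonitor {
    private let noiseThreshold = 0.05               // assumed noise floor
    private let minimumDuration: TimeInterval = 0.01
    private var pollTimer: Timer?
    var onEvent: ((RawInput) -> Void)?

    // Push path: the source calls this for every raw input; only "significant"
    // inputs (beyond the noise threshold or minimum duration) are forwarded.
    func receive(_ input: RawInput) {
        guard input.magnitude > noiseThreshold || input.duration > minimumDuration else { return }
        onEvent?(input)
    }

    // Poll path: ask the source for pending input at a predetermined interval.
    func startPolling(every interval: TimeInterval, from source: @escaping () -> RawInput?) {
        pollTimer = Timer.scheduledTimer(withTimeInterval: interval, repeats: true) { [weak self] _ in
            if let input = source() { self?.receive(input) }
        }
    }
}
```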
  • event sorter 301 also includes a hit view determination module 303 and/or an active event recognizer determination module 304 .
  • the hit view determination module 303 includes software procedures for determining where a sub-event has taken place within one or more views (e.g., application views 311 ), when the display device 206 displays more than one view.
  • a spatial aspect of the user interface associated with an application is a set of views, sometimes herein called application views or user interface windows, in which information is displayed and touch-based gestures occur.
  • the application views (of a respective application) in which a touch is detected may correspond to programmatic levels within a programmatic or view hierarchy of the application. For example, the lowest level view in which a touch is detected may be called the hit view, and the set of events that are recognized as proper inputs may be determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture.
  • the hit view determination module 303 receives information related to sub-events of a touch-based gesture.
  • the hit view determination module 303 identifies a hit view as the lowest view in the hierarchy which should handle the sub-event. In most circumstances, the hit view is the lowest level view in which an initiating sub-event occurs (i.e., the first sub-event in the sequence of sub-events that form an event or potential event).
  • the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view.
  • active event recognizer determination module 304 determines which view or views within a view hierarchy should receive a particular sequence of sub-events. In some circumstances, active event recognizer determination module 304 determines that only the hit view should receive a particular sequence of sub-events. In other circumstances, active event recognizer determination module 304 determines that all views that include the physical location of a sub-event are actively involved views, and therefore determines that all actively involved views should receive a particular sequence of subevents. In some embodiments, even when touch sub-events are entirely confined to the area associated with one particular view, views higher in the view hierarchy continue to be actively involved views.
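  • One plausible way to realize hit-view determination and the collection of actively involved views is sketched below in Swift. The ViewNode type is a stand-in for whatever view hierarchy an application actually uses; the traversal order and coordinate handling are simplifying assumptions.

```swift
import CoreGraphics

// Hypothetical view node used only to illustrate hit-view determination.
final class ViewNode {
    let frame: CGRect                  // expressed in one shared coordinate space, for simplicity
    var subviews: [ViewNode] = []
    init(frame: CGRect) { self.frame = frame }

    // The hit view is the lowest (deepest) view whose frame contains the touch point.
    func hitView(for point: CGPoint) -> ViewNode? {
        guard frame.contains(point) else { return nil }
        for child in subviews.reversed() {             // front-most children first
            if let hit = child.hitView(for: point) { return hit }
        }
        return self
    }

    // Actively involved views: every view on the path from this view down to the hit
    // view, so that views higher in the hierarchy can keep receiving the sub-events.
    func activelyInvolvedViews(for point: CGPoint) -> [ViewNode] {
        guard frame.contains(point) else { return [] }
        for child in subviews.reversed() {
            let path = child.activelyInvolvedViews(for: point)
            if !path.isEmpty { return [self] + path }
        }
        return [self]
    }
}
```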
  • event dispatcher module 305 dispatches the event information to an event recognizer (e.g., event recognizer 320 ). In embodiments including active event recognizer determination module 304 , event dispatcher module 305 delivers the event information to an event recognizer determined by active event recognizer determination module 304 . In some embodiments, event dispatcher module 305 stores event information, which is retrieved by a respective event receiver module 331 , in an event queue.
  • operating system 212 includes event sorter 301
  • application 240 - 1 includes event sorter 301
  • event sorter 301 is a stand-alone module, or a part of another module stored in memory 210 .
  • application 240 - 1 includes one or more application views 311 , each of which includes instructions for handling touch events that occur with a respective view of the application's user interface.
  • Each application view 311 of the application 240 - 1 includes one or more event recognizers 320 and one or more event handlers 322 .
  • a respective application view 311 includes a plurality of event recognizers 320 and a plurality of event handlers 322 .
  • one or more of the event recognizers 320 are part of a separate module, such as a user interface kit (not shown) or a higher level object from which application 240 - 1 inherits methods and other properties.
  • a respective application view 311 also includes event data 321 received from event sorter 301 .
  • a respective event recognizer 320 receives event information (e.g., event data 321 ) from the event sorter 301 and identifies an event from the event information.
  • Event recognizer 320 includes event receiver 331 and event comparator 332 .
  • event recognizer 320 also includes at least a subset of: metadata 335 , event delivery instructions 336 , and sub-event delivery instructions 337 .
  • Event receiver 331 receives event information from event sorter 301 .
  • the event information includes information about a sub-event, for example, a touch or a movement.
  • the event information also includes additional information, such as a location (e.g., a physical location) of the sub-event.
  • the event information may also include speed and direction of the sub-event.
  • a respective event includes rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about the current orientation (also called device attitude) of the device.
  • Event comparator 332 compares the event information to predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event.
  • event comparator 332 includes event definitions 333 .
  • Event definitions 333 contain definitions of events (e.g., predefined sequences of sub-events), for example, event 1 ( 334 - 1 ), event 2 ( 334 - 2 ), and others.
  • sub-events in an event 334 include, for example, touch begin, touch end, touch movement, touch cancellation, and multiple touching.
  • the definition for event 1 ( 334 - 1 ) is a double tap on a displayed object.
  • the double tap for example, comprises a first touch (touch begin) on the displayed object for a predetermined phase, a first lift-off (touch end) for a predetermined phase, a second touch (touch begin) on the displayed object for a predetermined phase, and a second lift-off (touch end) for a predetermined phase.
  • the definition for event 2 ( 334 - 2 ) is a dragging on a displayed object.
  • the dragging for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across the display device 206 , and lift-off of the touch (touch end).
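  • The double-tap and drag definitions above can be modeled as short sequences of sub-events that an event comparator matches prefix by prefix. The Swift sketch below is a deliberately simplified illustration (for example, a real drag allows any number of touch-movement sub-events); the type names are hypothetical.

```swift
import Foundation

enum SubEvent: Equatable { case touchBegin, touchEnd, touchMove, touchCancel }

struct EventDefinition {
    let name: String
    let sequence: [SubEvent]
}

// Sketches of the two definitions discussed above.
let doubleTap = EventDefinition(name: "double tap",
                                sequence: [.touchBegin, .touchEnd, .touchBegin, .touchEnd])
let drag = EventDefinition(name: "drag",
                           sequence: [.touchBegin, .touchMove, .touchEnd])

enum RecognitionState { case possible, recognized, failed }

// Compare the observed sub-events against a definition: a full match recognizes the
// event, a matching prefix keeps it possible, anything else fails (event impossible).
func state(of observed: [SubEvent], against definition: EventDefinition) -> RecognitionState {
    if observed == definition.sequence { return .recognized }
    if observed.count < definition.sequence.count,
       Array(definition.sequence.prefix(observed.count)) == observed {
        return .possible
    }
    return .failed
}
```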
  • the event also includes information for the event's associated event handlers 322 .
  • event definitions 333 includes a definition of an event for a respective user-interface object.
  • event comparator 332 performs a hit test to determine which user-interface object is associated with a sub-event. For example, in an application view in which three user-interface objects are displayed on a touch-sensitive display such as the display device 206 , when a touch is detected on the display device 206 , the event comparator 332 performs a hit test to determine which of the three user-interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler 322 , the event comparator uses the result of the hit test to determine which event handler 322 should be activated. For example, event comparator 332 selects an event handler associated with the sub-event and the object triggering the hit test.
  • the definition for a respective event 334 also includes delayed actions that delay delivery of the event information until after it has been determined whether the sequence of sub-events does or does not correspond to the event recognizer's event type.
  • When a respective event recognizer 320 determines that the series of sub-events does not match any of the events in the event definitions 333 , the event recognizer 320 enters an event impossible or event cancel state, after which it disregards subsequent sub-events of the touch-based gesture. In this situation, other event recognizers, if any, that remain active for the hit view continue to track and process sub-events of an ongoing touch-based gesture.
  • a respective event recognizer 320 includes metadata 335 with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively involved event recognizers.
  • metadata 335 includes configurable properties, flags, and/or lists that indicate how event recognizers may interact with one another.
  • metadata 335 includes configurable properties, flags, and/or lists that indicate whether sub-events are delivered to varying levels in the view or programmatic hierarchy.
  • a respective event recognizer 320 activates event handler 322 associated with an event when one or more particular sub-events of an event are recognized. In some embodiments, a respective event recognizer 320 delivers event information associated with the event to event handler 322 . Activating an event handler 322 is distinct from sending (and deferring the sending of) sub-events to a respective hit view. In some embodiments, event recognizer 320 throws a flag associated with the recognized event, and event handler 322 associated with the flag catches the flag and performs a predefined process.
  • event delivery instructions 336 include sub-event delivery instructions 337 that deliver event information about a sub-event without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the series of sub-events or to actively involved views. Event handlers associated with the series of sub-events or with actively involved views receive the event information and perform a predetermined process.
  • FIG. 3B is a block diagram illustrating a respective event handler 322 - 1 , according to some embodiments.
  • Event handler 322 - 1 includes a control application event handler 340 and an application event handler 350 .
  • control application event handler 340 is included in control application 220 and application event handler 350 is included in a respective application 240 (e.g., a third party application).
  • event handler 322 - 1 is implemented partially in control application 220 and partially in application 240 .
  • application event handler 350 utilizes and updates application internal state 312 of application 240
  • control application event handler 340 utilizes and updates retained contextual information 344 , which is typically a subset of contextual information 314 in application internal state 312 .
  • control application event handler 340 includes a query module 341 , a command module 342 , and a listener module 343 .
  • the modules of control application event handler 340 form an application programming interface (API) for providing enhanced keyboard services to applications 240 executed by device 102 .
  • Control application 220 stores retained contextual information 344 that has been obtained from or for application 240 . It is noted that the retained contextual information 344 includes both contextual information obtained from application 240 as well as contextual information updates corresponding to commands issued by command module 342 (of a respective control application event handler 340 ) to application 240 . If two or more applications 240 are currently active, control application 220 separately retains contextual information 344 for each currently active application 240 .
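  • A minimal sketch of how such per-application retained context might be kept is shown below; the ContextStore type, the string-keyed application identifier, and the invalidation flag are assumptions made for illustration only.

```swift
import Foundation

// Hypothetical per-application cache of retained contextual information. The control
// application keeps a separate entry for each currently active application and updates
// it both from query results and from the effects of commands it issues.
struct RetainedContext {
    var text: String             // text around the current entry location
    var isValid: Bool = true     // cleared when the application invalidates the context
}

final class ContextStore {
    private var contexts: [String: RetainedContext] = [:]   // keyed by an application identifier

    func update(_ text: String, for appID: String) {
        contexts[appID] = RetainedContext(text: text)
    }

    func invalidate(appID: String) {                         // e.g., the selection changed
        contexts[appID]?.isValid = false
    }

    func context(for appID: String) -> RetainedContext? {
        guard let ctx = contexts[appID], ctx.isValid else { return nil }
        return ctx
    }
}
```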
  • Query module 341 queries the application 240 for contextual information relevant to a text manipulation event (e.g., adding text, deleting text, editing text, selecting text, deselecting text, etc.). For example, if the user of the device 102 entered text, the contextual information may include a range of text including the entered text (e.g., one or more characters, one or more words, one or more sentences, one or more paragraphs, one or more lines of text, one or more sections of a document, etc.). In some embodiments, queries from the query module 341 are received and processed by a responder module 351 of application event handler 350 , as discussed below. Information obtained by query module 341 is used by the control application 220 to update the contextual information 344 retained by the control application for application 240 . The retained contextual information 344 typically includes one or more characters, words, sentences, lines of text, or sections of a document preceding the location associated with a current text manipulation event.
  • the retained contextual information 344 is used by event handler 322 to provide enhanced keyboard services, such as one or more of: spelling correction; auto completion of incomplete words; grammar checking; adjusting the hit zone of one or more keys in a soft keyboard based on the context of the current text entry (or cursor) location, so as to enlarge the hit zone of one or more keys representing letters or symbols having a statistically high likelihood of being the next key to be selected by the user and/or decrease the size of the hit zone of one or more keys representing letters or symbols having a statistically low likelihood of being the next key to be selected by the user; and the like.
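  • One way the hit-zone adjustment mentioned above could work is to scale each key's hit zone by an estimate of how likely its symbol is to follow the retained context. The Swift sketch below uses a hard-coded bigram table as stand-in probability data; a real implementation would draw on the stored language data, and the scaling constants are arbitrary assumptions.

```swift
import Foundation

// Assumed sample bigram counts; not real language data.
let bigramCounts: [String: Int] = ["th": 90, "te": 40, "tz": 1]

// Returns a multiplier for a key's hit zone given the character context preceding it.
func hitZoneScale(forKey key: Character, afterContext context: String) -> Double {
    guard let last = context.last else { return 1.0 }
    let bigram = String([last, key])
    let count = bigramCounts[bigram, default: 1]
    let total = bigramCounts.values.reduce(0, +)
    let probability = Double(count) / Double(max(total, 1))
    // Enlarge likely keys up to roughly 1.5x and shrink unlikely ones toward 0.8x.
    return 0.8 + min(probability * 3.0, 0.7)
}

// Example: after typing "t", the "h" key gets a larger hit zone than the "z" key.
let hScale = hitZoneScale(forKey: "h", afterContext: "t")   // about 1.5
let zScale = hitZoneScale(forKey: "z", afterContext: "t")   // about 0.8
```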
  • the command module 342 issues commands to application 240 based on the text manipulation event and the obtained/retained contextual information 344 .
  • command module 342 may instruct application 240 to replace a potentially misspelled word with a correct spelling of the potentially misspelled word.
  • commands issued by command module 342 are received and processed by command execution module 352 of application event handler 350 , as discussed below.
  • Listener module 343 listens to notifications by application 240 (e.g., notifications issued via notification module 353 of application 240 ) that contextual information 344 retained by control application 220 for application 240 can no longer be relied upon by control application 220 .
  • application event handler 350 includes responder module 351 , command execution module 352 , and notification module 353 .
  • Responder module 351 responds to queries by control application 220 (e.g., queries from query module 341 of control application 220 ) for contextual information that provides context to a text manipulation event.
  • Responder module 351 obtains the requested contextual information from the contextual information 314 stored by application 240 in application internal state 312 .
  • Command execution module 352 executes commands issued by control application 220 (e.g., issued by command module 342 of control application 220 ). Execution of those commands updates the contextual information 314 (e.g., text and/or metadata for text) stored by application 240 in application internal state 312 .
  • Notification module 353 notifies control application 220 that retained contextual information 344 for application 240 can no longer be relied upon by control application 220 .
  • command execution module can ignore a command issued by control application 220 .
  • command execution module 352 ignores a command issued by control application 220 when command execution module 352 and/or application 240 determines that the command is contrary to a predefined policy, fails to meet predefined criteria, or implements a feature not supported by application 240 .
  • application 240 will typically invoke notification module 353 to notify control application 220 that retained contextual information 344 for application 240 can no longer be relied upon by control application 220 .
  • event handling for touch-based gestures on touch-sensitive displays also applies to other forms of user inputs from various input devices, which may be utilized as inputs corresponding to sub-events which define an event to be recognized.
  • user inputs include one or more of: mouse movements; mouse button presses, with or without single or multiple keyboard presses or holds; user movements, taps, drags, scrolls, etc., on a touch pad; pen stylus inputs; movement or rotation of the device; oral instructions; detected eye movements; biometric inputs; and/or any combination thereof.
  • FIG. 4 presents one particular implementation of device 102 , according to some embodiments. It is noted that FIG. 4 is the same as FIG. 2 except with respect to control application 220 , application 240 and event handling 270 . Aspects of FIG. 4 that are the same as FIG. 2 are already described above, and thus not repeated here.
  • Control application 220 includes event recognizers 320 , and control application event handlers 340 .
  • Each control application event handler 340 includes a query module 341 (e.g., the query module 341 in FIG. 3B ) that queries a respective application 240 for contextual information 314 that provides context to text manipulation events; a command module 342 (e.g., the command module 342 in FIG. 3B ) that issues commands 426 to the application 240 ; and a listener module 343 (e.g., the listener module 343 in FIG. 3B ) that listens for notifications from a notification module 353 of application 240 , as described herein.
  • control application event handler 340 utilizes and updates retained contextual information 344 , which is typically a subset of contextual information 314 in application internal state 312 of application 240 .
  • Application 240 (e.g., an email application, a web browser application, a text messaging application, or a third party application) has an application internal state 312 , which includes contextual information 314 .
  • contextual information 314 is typically text and metadata (color, font, size, selection status, etc.) concerning the text, but may include other information as well.
  • Application 240 also includes a plurality of application event handlers 350 , one of which is shown in FIG. 4 .
  • Application event handler 350 includes responder module 351 that responds to queries by the query module 341 , command execution module 352 that executes commands 447 issued by the control application 220 , and notification module 353 that notifies control application 220 when contextual information previously provided to control application 220 can no longer be relied upon, as described above with reference to FIG. 3B .
  • Memory 210 of device 102 also stores language data 460 for one or more languages.
  • Language data 460 provides information used to provide the aforementioned enhanced keyboard services.
  • language data 460 includes data structures that represent valid words 461 , characters 462 , and/or phrases 463 for the one or more languages.
  • Each of the above identified systems, modules and applications is stored in one or more of the previously mentioned memory devices of device 102 , and corresponds to a set of instructions for performing a function described above.
  • the set of instructions can be executed by one or more processors (e.g., CPUs 202 ).
  • the above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may be combined or otherwise rearranged in various embodiments.
  • memory 210 may store a subset of the modules and data structures identified above.
  • memory 210 may store additional modules and data structures not described above.
  • control application 220 of device 102 provides the functionality of a system keyboard with enhanced keyboard services to third-party applications executing on device 102 .
  • FIGS. 5A, 5B and 5C are block diagrams 500 , 510 , and 520 , respectively, illustrating an exemplary sequence of events between control application 502 of a device (e.g., the device 102 ) and an application 504 (e.g., a third-party application) also executed by the device so as to provide enhanced keyboard services to the application 504 , according to some embodiments.
  • the control application 502 receives an indication of a text manipulation event 506 .
  • the control application 502 queries 512 the application 504 for contextual information that provides context for the text manipulation event 506 .
  • the application 504 responds to the query 512 by providing contextual information 514 to the control application 502 .
  • the control application 502 issues commands 522 to the application 504 based on the contextual information 514 (and/or based on other retained contextual information 344 obtained in response to prior text manipulation events) and the text manipulation event 506 .
  • commands 522 , when executed by application 504 , may instruct application 504 to replace one or more characters in the user interface of the application 504 .
  • the queries, contextual information, and commands described in FIGS. 5A to 5C allow a third-party application to not only receive keystrokes from a keyboard, but to also receive enhanced keyboard services even though the internal state ( 312 , FIG. 4 ) of the third-party application is not directly accessible to the operating system or control application of the device.
  • FIGS. 5A to 5C are described in more detail with respect to FIGS. 6 to 10 below.
  • FIG. 6 is a flowchart of a method 600 for issuing commands to an application based on contextual information, according to some embodiments.
  • a control application (e.g., control application 220 or 502 ) receives ( 602 ) an indication that a text manipulation event has occurred in a user interface of a second application (e.g., a third-party application such as application 240 or 504 ).
  • the control application may be notified by an event handler that a text manipulation event occurred within a particular application view, or set of application views, of the second application.
  • the control application receives the text manipulation event prior to the second application.
  • control application may receive event information for the text manipulation event (e.g., the characters inserted, deleted, selected, etc.) directly from an event handler prior to that information being provided to the second application.
  • the text manipulation event is selected from the group consisting of an insertion of one or more characters, a deletion of one or more characters, a selection of one or more characters, and a deselection of one or more characters.
  • the control application queries ( 604 ) the second application to obtain contextual information established by the second application prior to the event, wherein the contextual information provides context to the text manipulation event that occurred in the user interface of the second application.
  • the contextual information relating to the text manipulation event includes a logical location and a predetermined unit of text relating to the text manipulation event.
  • the logical location of the text manipulation event is selected from the group consisting of a point between two characters in the user interface of the second application and a range including one or more characters that is selected in the user interface of the second application.
  • the predetermined unit of text is selected from the group consisting of a character, a word, a sentence, a paragraph, a line of text, a section of a document, and a document.
  • a respective query from the control application requesting the contextual information of the text manipulation event includes a physical location of the text manipulation event in the user interface of the second application, and the logical location of the text manipulation event corresponding to the physical location.
  • a physical location may include a set of coordinates relative to a coordinate system for a display device.
  • the coordinate system may be defined relative to a display area of the display device.
  • the coordinate system may be defined relative to an input area of the display device (e.g., for a touch screen or a touch-sensitive display device).
  • operation 604 is performed only when the control application does not already have sufficient contextual information to determine the one or more commands to be sent to the second application.
  • the control application obtains and retains contextual information for the second application. Therefore, contextual information for the second application may already be known (i.e., retained) by the control application prior to the text manipulation event. If the information received for the current text manipulation event is at a location (in the user interface of the second application) for which the control application already has contextual information (i.e., sufficient contextual information to provide enhanced keyboard services), and the second application has not sent a command to the control application invalidating the retained contextual information, the control application skips operation 604 .
  • operation 604 is a conditional operation.
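  • The conditional nature of operation 604 can be sketched as a simple cache check: query the second application only when no valid retained context exists for it, and drop the cached context when the application invalidates it. The names below (ConditionalQuerier, appID, and so on) are illustrative assumptions.

```swift
import Foundation

final class ConditionalQuerier {
    private var retained: [String: String] = [:]   // application identifier -> retained context text

    // Called when the second application reports that its context can no longer be
    // relied upon (e.g., the text selection changed).
    func invalidate(appID: String) { retained[appID] = nil }

    // Returns retained context when it is still valid; otherwise performs operation 604
    // by querying the second application and retaining the result.
    func context(for appID: String, queryApplication: () -> String) -> String {
        if let cached = retained[appID] { return cached }   // sufficient context already retained
        let fresh = queryApplication()                       // operation 604
        retained[appID] = fresh
        return fresh
    }
}

// Usage (hypothetical call): querier.context(for: "com.example.notes") { app.contextAroundCursor() }
```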
  • FIG. 7 is a flowchart of a method 700 for determining, by the second application, contextual information that provides context to a text manipulation event, according to some embodiments.
  • the second application determines ( 702 ) contextual information providing context to the text manipulation event.
  • the second application determines ( 702 ) the contextual information providing context to the text manipulation event by determining ( 704 ) a text direction associated with a logical location of the text manipulation event and determining ( 706 ) boundaries of a predetermined text unit that includes the text associated with the logical location of the text manipulation event based on the text direction.
  • the contextual information depends on a direction of the text and the boundaries of the predetermined text unit.
  • the contextual information may include characters to the left and above the last character inserted (i.e., the preceding characters entered).
  • the contextual information may include characters above and to the right of the last character inserted (i.e., the preceding characters entered).
  • the second application responds ( 708 ) to the querying by the control application by providing the contextual information providing context to the text manipulation event that occurred in the user interface of the second application.
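  • A simplified Swift sketch of the boundary determination in FIG. 7 follows: given the logical location of the event, find the boundaries of the enclosing word. Treating whitespace as the only word boundary and the handling of text direction are simplifying assumptions made for illustration.

```swift
import Foundation

enum TextDirection { case leftToRight, rightToLeft }

// Finds the boundaries of the word that contains the given logical location.
func wordBoundaries(in text: [Character], at location: Int, direction: TextDirection) -> Range<Int> {
    let isBoundary: (Character) -> Bool = { $0 == " " || $0 == "\n" }
    var start = location, end = location
    while start > 0, !isBoundary(text[start - 1]) { start -= 1 }
    while end < text.count, !isBoundary(text[end]) { end += 1 }
    // The logical (storage-order) boundaries are the same for both directions; the
    // direction mainly determines which side of the location counts as "preceding" text.
    return start..<end
}

let chars = Array("send the ltter now")
let range = wordBoundaries(in: chars, at: 11, direction: .leftToRight)   // encloses "ltter"
let unit = String(chars[range])                                          // "ltter"
```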
  • control application determines ( 606 ) the one or more commands based on the contextual information providing context to the text manipulation event.
  • the text manipulation event and the contextual information indicate that a user is entering a sequence of characters (or strokes) that together form a single character. For example, when entering characters in some Asian languages, multiple characters (or strokes) are required to build up a single character.
  • FIG. 8 is a flowchart of a method for determining ( 606 ) commands sent to an application when the contextual information and text manipulation event indicate that a sequence of characters represents a single character, according to some embodiments.
  • the control application determines ( 802 ) that the contextual information and text manipulation event indicate a sequence of characters that represent a single character.
  • the control application determines ( 804 ) one or more candidate single characters from a plurality of possible single characters based on the contextual information and text manipulation event.
  • the candidate single characters may be selected so as to be not only consistent with the text manipulation event, but also to be consistent with the contextual information.
  • statistical information concerning historical usage of character sequences that include the contextual information (usage by a community of users, by the user of the device, or both) may be used to select either a predefined number or a context-dependent number of candidate characters.
  • any of a variety of auto-completion methodologies may be used to identify the candidate single characters.
  • the control application then generates ( 806 ) one or more commands for instructing the second application to display, for user selection, the one or more candidate single characters.
  • the text manipulation event and the contextual information indicate that a word is potentially misspelled.
  • a user may enter a sequence of characters (e.g., the text manipulation event) that forms a word that is potentially misspelled.
  • FIG. 9 is a flowchart of a method for determining ( 606 ) commands sent to an application when the contextual information and text manipulation event indicate that a sequence of characters represents a potentially misspelled word, according to some embodiments.
  • the control application determines ( 902 ) that the contextual information and text manipulation event indicate a sequence of characters that represent a potentially misspelled word.
  • a potentially misspelled word may be a word that is not in a dictionary for a respective language or that is included in a database of historically misspelled words.
  • the control application determines ( 904 ) one or more candidate words from a plurality of possible words that represent a correct spelling of the potentially misspelled word.
  • the control application then generates ( 906 ) one or more commands for instructing the second application to display, for user selection, the one or more candidate words.
  • the text manipulation event and the contextual information indicate a user is entering characters that represent a portion of a word.
  • the user may have typed the characters “a”, “u”, “t”, “o”, and “m” representing a portion of a word (e.g., “automatic”, “automobile”, etc.).
  • the control application may attempt to predict one or more words (sometimes called candidate words) that the user intends to type.
  • FIG. 10 is a flowchart of a method for determining ( 606 ) commands sent to an application when the contextual information and text manipulation event indicate that a sequence of characters represents a portion of a word, according to some embodiments.
  • the control application determines ( 1002 ) that the contextual information and text manipulation event indicate a sequence of characters that represent a portion of a word. Next, the control application determines ( 1004 ) one or more candidate words in accordance with the portion of the word. For example, the candidate words are typically selected from a set of complete words that include the portion of the word. The control application then generates ( 1006 ) one or more commands for instructing the second application to display, for user selection, the one or more candidate words. An illustrative sketch of such candidate generation appears immediately after this list.
  • the control application issues ( 608 ) the one or more commands to the second application.
  • the second application typically executes the one or more commands issued by the control application.
  • the second application need not execute the command or commands issued by the control application, or may execute some but not all of the commands issued by the control application.
  • the contextual information for the second application that has been obtained by the control application may no longer be relied upon by the control application.
  • the second application may have modified text independent of user input (e.g., regardless of whether a text manipulation event has occurred or not).
  • the second application notifies the control application that contextual information retained by the control application for the second application can no longer be relied upon by the control application.
  • the second application may modify a selection of text independent of user input.
  • the second application notifies the control application that a selection of text in the second application has changed.
  • when the second application does not execute one or more commands issued by the control application, which would typically render the contextual information retained by the control application invalid, the second application notifies the control application that the contextual information retained by the control application can no longer be relied upon by the control application.
  • the methods and systems described above for responding to and processing text manipulation events in the user interface of an application are applied to content manipulation events, which manipulate content (e.g., text, images, objects, etc.) in the user interface of an application, while providing enhanced content services (including, for example, one or more of the aforementioned enhanced keyboard services) to the application.
  • Content manipulation events are a superset of text manipulation events.
  • The operations described above with reference to FIGS. 6-10 may be governed by instructions that are stored in a computer readable storage medium and that are executed by one or more processors of a respective multifunction device. Each of the operations shown in FIGS. 6-10 may correspond to instructions stored in a computer memory or computer readable storage medium.
  • the computer readable storage medium may include a magnetic or optical disk storage device, solid state storage devices such as Flash memory, or other non-volatile memory device or devices.
  • the computer readable instructions stored on the computer readable storage medium are in source code, assembly language code, object code, or other instruction format that is interpreted and/or executable by the device's one or more processors.
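
As an informal illustration of the boundary determination described for FIG. 7 above, the first sketch below computes word boundaries around a logical location. The TextDirection and LogicalLocation types and the wordBoundaries function are hypothetical names introduced for this sketch, not the disclosed implementation; a real implementation would use language-aware tokenization rather than simple whitespace scanning.

```swift
// Hypothetical types introduced for this sketch; not part of the disclosed system.
enum TextDirection { case leftToRight, rightToLeft }

// A logical location is either a caret between characters or a selected range.
enum LogicalLocation {
    case caret(offset: Int)        // insertion point between two characters
    case range(Range<Int>)         // one or more selected characters
}

// Returns the boundaries of the whitespace-delimited word containing the location.
func wordBoundaries(in text: [Character],
                    at location: LogicalLocation,
                    direction: TextDirection) -> Range<Int> {
    let anchor: Int
    switch location {
    case .caret(let offset): anchor = max(0, min(offset, text.count))
    case .range(let r):      anchor = r.lowerBound
    }
    func isWordChar(_ i: Int) -> Bool {
        i >= 0 && i < text.count && !text[i].isWhitespace
    }
    var start = anchor
    while isWordChar(start - 1) { start -= 1 }
    var end = anchor
    while isWordChar(end) { end += 1 }
    // The boundaries themselves are direction-independent here; the text direction
    // would matter when deciding which side of the caret counts as "preceding" context.
    _ = direction
    return start..<end
}

// Example: a caret at offset 6 of "the quick brown fox" yields the range of "quick".
let sample = Array("the quick brown fox")
let word = wordBoundaries(in: sample, at: .caret(offset: 6), direction: .leftToRight)  // 4..<9
```

Likewise, the candidate determination of FIGS. 9 and 10 can be illustrated as a simple lookup against a word list. The CandidateGenerator type and its crude correction heuristic are assumptions made only for this sketch; they stand in for the language data and usage statistics discussed elsewhere in the disclosure.

```swift
// A minimal sketch of candidate-word generation, assuming a flat word list.
// `lexicon` stands in for the language data (valid words) described in the text.
struct CandidateGenerator {
    let lexicon: [String]

    // FIG. 10 style: complete a partial word typed by the user.
    func completions(forPrefix prefix: String, limit: Int = 3) -> [String] {
        guard !prefix.isEmpty else { return [] }
        let p = prefix.lowercased()
        return lexicon
            .filter { $0.hasPrefix(p) }
            .sorted { $0.count < $1.count }      // shorter completions first
            .prefix(limit)
            .map { $0 }
    }

    // FIG. 9 style: propose corrections for a potentially misspelled word,
    // using a crude same-first-letter, similar-length heuristic as a stand-in
    // for a real spelling-correction model.
    func corrections(for word: String, limit: Int = 3) -> [String] {
        let w = word.lowercased()
        guard !lexicon.contains(w) else { return [] }   // already spelled correctly
        return lexicon
            .filter { abs($0.count - w.count) <= 1 && $0.first == w.first }
            .prefix(limit)
            .map { $0 }
    }
}

// Usage:
let generator = CandidateGenerator(lexicon: ["automatic", "automobile", "autumn", "auto"])
let candidates = generator.completions(forPrefix: "autom")   // ["automatic", "automobile"]
```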

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A system and method for issuing commands to an application based on contextual information. A control application receives an indication that a text manipulation event has occurred in a user interface of a second application. Next, the control application queries the second application to obtain contextual information established by the second application prior to the event, the contextual information providing context to the text manipulation event that occurred in the user interface of the second application. The control application then issues one or more commands to the second application based on the contextual information providing context to the text manipulation event.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of co-pending U.S. application Ser. No. 12/789,684, filed May 28, 2010, which claims the benefit of U.S. Provisional Application No. 61/292,818 filed Jan. 6, 2010, which applications are incorporated by reference in their entirety.
  • This application is related to U.S. application Ser. No. 12/566,660, filed Sep. 24, 2009, which application is incorporated by reference herein in its entirety.
  • TECHNICAL FIELD
  • This application is related to a system and a method for providing on-screen soft keyboard services to an application executing on a multifunction device, for example a mobile phone that has a touch-sensitive display and is configured to provide enhanced keyboard services to third party applications executed by the device.
  • Some electronic devices (e.g., a mobile phone, a portable game console, etc.) provide a user interface that includes an on-screen keyboard (also called a soft keyboard) that allows a user to enter text into the user interface by touching virtual keys displayed on a touch-sensitive display device (sometimes called a touch screen display). Typically, the on-screen keyboard is a system keyboard that is provided by the operating system of the electronic device. In addition to providing the system keyboard, the operating system of the electronic device handles text manipulation events (e.g., insert, delete, select and replace commands) received from the system keyboard and provides enhanced keyboard functions such as spell checking. When a third-party application requires text input from a user, the third-party application receives information and/or commands from the system keyboard provided by the operating system of the electronic device. Unfortunately, the enhanced keyboard functions provided by the operating system often require contextual information, such as text (or other symbols) positioned before and/or after the current text or cursor position, and therefore the enhanced keyboard functions are not available to third-party applications that store contextual information in storage locations that are unknown to the operating system, and/or that store contextual information in a manner (e.g., using data structures, formats, metadata, or the like) unknown to the operating system.
  • SUMMARY OF DISCLOSED EMBODIMENTS
  • To address the aforementioned deficiencies, some embodiments provide a system, computer readable storage medium including instructions, and a computer-implemented method for issuing commands from a control application to a second application (e.g., a third-party application) based on contextual information received from the second application. In these embodiments, the control application provides enhanced keyboard functions (i.e., functions other than simply delivery of single keystroke commands) to the second application. In some embodiments, the control application receives an indication that a text manipulation event has occurred in a user interface of the second application. Next, the control application queries the second application to obtain contextual information established by the second application prior to the event, the contextual information providing context to the text manipulation event that occurred in the user interface of the second application. The control application then issues one or more commands to the second application based on the contextual information providing context to the text manipulation event.
  • In some embodiments, in response to the querying by the control application, the second application determines the contextual information providing context to the text manipulation event and responds to the querying by the control application by providing the contextual information providing context to the text manipulation event that occurred in the user interface of the second application.
  • In some embodiments, the second application determines the contextual information providing context to the text manipulation event by determining a text direction associated with a logical location of the text manipulation event and determining boundaries of a predetermined text unit that includes the text associated with the logical location of the text manipulation event based on the text direction.
  • In some embodiments, in response to the issuing of the one or more commands by the control application, the second application executes the one or more commands issued by the control application.
  • In some embodiments, the contextual information relating to the text manipulation event includes a logical location and a predetermined unit of text relating to the text manipulation event.
  • In some embodiments, the predetermined unit of text is selected from the group consisting of a character, a word, a sentence, a paragraph, a line of text, a section of a document, and a document.
  • In some embodiments, the logical location of the text manipulation event is selected from the group consisting of a point between two characters in the user interface of the second application and a range including one or more characters that is selected in the user interface of the second application.
  • In some embodiments, a respective query from the control application requesting the contextual information of the text manipulation event includes a physical location of the text manipulation event in the user interface of the second application, and the logical location of the text manipulation event corresponding to the physical location.
  • In some embodiments, the text manipulation event is selected from the group consisting of an insertion of one or more characters, a deletion of one or more characters, a selection of one or more characters, and a deselection of one or more characters.
  • In some embodiments, the control application determines the one or more commands based on the contextual information, which provides context to the text manipulation event, prior to issuing one or more commands to the second application.
  • In some embodiments, the control application determines that the contextual information and text manipulation event indicate a sequence of characters that represent a single character. Next, the control application determines one or more single characters from a plurality of possible single characters based on the contextual information and text manipulation event. The control application then generates one or more commands for instructing the second application to display, for user selection, the one or more single characters from the plurality of possible single characters.
  • In some embodiments, the control application determines that the contextual information and text manipulation event indicate a sequence of characters that represent a potentially misspelled word. Next, the control application determines one or more words from a plurality of possible words that represent a correct spelling of the potentially misspelled word. The control application then generates one or more commands for instructing the second application to display, for user selection, the one or more words from the plurality of possible words.
  • In some embodiments, the control application determines that the contextual information and text manipulation event indicate a sequence of characters that represent a portion of a word. Next, the control application determines one or more candidate words from a plurality of possible words determined in accordance with the portion of the word. The control application then generates one or more commands for instructing the second application to display, for user selection, the one or more candidate words.
  • In some embodiments, the control application receives the text manipulation event in the user interface of the second application.
  • In some embodiments, the second application notifies the control application that contextual information obtained by the control application from the second application can no longer be relied upon by the control application.
  • In some embodiments, the second application notifies the control application that a selection of text in the second application has changed.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram illustrating a user interface of a device, according to some embodiments.
  • FIG. 2 is a block diagram illustrating a device, according to some embodiments.
  • FIG. 3A is a block diagram illustrating exemplary components of an event handling system, according to some embodiments.
  • FIG. 3B is a block diagram illustrating an event handler, according to some embodiments.
  • FIG. 4 is a block diagram illustrating an exemplary device, according to some embodiments.
  • FIG. 5A is a block diagram illustrating a control application receiving a text manipulation event, according to some embodiments.
  • FIG. 5B is a block diagram illustrating a control application querying an application for contextual information, according to some embodiments.
  • FIG. 5C is a block diagram illustrating a control application issuing commands to an application, according to some embodiments.
  • FIG. 6 is a flowchart of a method for issuing commands to an application based on contextual information, according to some embodiments.
  • FIG. 7 is a flowchart of a method for determining contextual information that provides context to a text manipulation event, according to some embodiments.
  • FIG. 8 is a flowchart of a method for determining commands to be sent to an application when the contextual information and text manipulation event indicate that a sequence of characters represents a single character, according to some embodiments.
  • FIG. 9 is a flowchart of a method for determining commands to be sent to an application when the contextual information and text manipulation event indicate that a sequence of characters represents a potentially misspelled word, according to some embodiments.
  • FIG. 10 is a flowchart of a method for determining commands to be sent to an application when the contextual information and text manipulation event indicate that a sequence of characters represents a portion of a word, according to some embodiments.
  • Like reference numerals refer to corresponding parts throughout the drawings.
  • DETAILED DESCRIPTION
  • Some embodiments provide a system, computer readable storage medium including instructions, and a computer-implemented method for allowing a third-party application executing on a device to receive enhanced keyboard services. In these embodiments, a control application of the device provides the enhanced keyboard services to the third-party application by issuing commands from the control application to the third party application based on contextual information received from the third-party application. These embodiments are described in detail below.
  • Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
  • It will also be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact could be termed a second contact, and, similarly, a second contact could be termed a first contact, without departing from the scope of the present invention. The first contact and the second contact are both contacts, but they are not the same contact.
  • The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the description of the invention and the appended claims, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
  • As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” may be construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context.
  • Embodiments of computing devices, user interfaces for such devices, and associated processes for using such devices are described. In some embodiments, the computing device is a portable communications device, such as a mobile telephone, that also contains other functions, such as PDA and/or music player functions. Exemplary embodiments of portable multifunction devices include, without limitation, the iPhone® and iPod Touch® devices from Apple Inc. of Cupertino, Calif. Other portable devices, such as laptops or tablet computers with touch-sensitive surfaces (e.g., touch screen displays and/or touch pads), may also be used. It should also be understood that, in some embodiments, the device is not a portable communications device, but is a desktop computer with a touch-sensitive surface (e.g., a touch screen display and/or a touch pad).
  • In the discussion that follows, a computing device that includes a display and a touch-sensitive surface is described. It should be understood, however, that the computing device may include one or more other physical user-interface devices, such as a physical keyboard, a mouse and/or a joystick.
  • The device supports a variety of applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.
  • The various applications that may be executed on the device may use at least one common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface as well as corresponding information displayed on the device may be adjusted and/or varied from one application to the next and/or within a respective application. In this way, a common physical architecture (such as the touch-sensitive surface) of the device may support the variety of applications with user interfaces that are intuitive and transparent to the user.
  • The user interfaces may include one or more soft keyboard embodiments. The soft keyboard embodiments may include standard (QWERTY) and/or non-standard configurations of symbols on the displayed icons of the keyboard, such as those described in U.S. patent application Ser. No. 11/459,606, “Keyboards For Portable Electronic Devices,” filed Jul. 24, 2006, and 11/459,615, “Touch Screen Keyboards For Portable Electronic Devices,” filed Jul. 24, 2006, the contents of which are hereby incorporated by reference in their entireties. The keyboard embodiments may include a reduced number of icons (or soft keys) relative to the number of keys in existing physical keyboards, such as that for a typewriter. This may make it easier for users to select one or more icons in the keyboard, and thus, one or more corresponding symbols. The keyboard embodiments may be adaptive. For example, displayed icons may be modified in accordance with user actions, such as selecting one or more icons and/or one or more corresponding symbols. One or more applications on the device may utilize common and/or different keyboard embodiments. Thus, the keyboard embodiment used may be tailored to at least some of the applications. In some embodiments, one or more keyboard embodiments may be tailored to a respective user. For example, one or more keyboard embodiments may be tailored to a respective user based on a word usage history (lexicography, slang, individual usage) of the respective user. Some of the keyboard embodiments may be adjusted to reduce a probability of a user error when selecting one or more icons, and thus one or more symbols, when using the soft keyboard embodiments.
  • In some of the embodiments of the systems and methods described below, “touch-based gestures” (sometimes called “touch gestures”) include not only gestures, made by one or more fingers or one or more styluses, that make physical contact with a touch-sensitive screen 112 or other touch-sensitive surface, but also gestures that occur, in whole or in part, sufficiently close to touch-sensitive screen 112 or other touch-sensitive surface that the one or more sensors of touch-sensitive screen 112 or other touch-sensitive surface are able to detect those gestures.
  • FIG. 1 is a block diagram illustrating a user interface 104 of a device 102, according to some embodiments. FIG. 1 illustrates the user interface when the device 102 is executing an application. In this example the user interface 104 includes a soft keyboard 106 and a text view region 108, both displayed on a touch-sensitive display of the device 102.
  • In some embodiments, the device 102 is a portable multifunction electronic device that includes a touch-sensitive display (sometimes called a touch screen or touch screen display) configured to present the user interface 104. In various embodiments, the device 102 is a consumer electronic device, mobile telephone, smart phone, video game system, electronic music player, tablet PC, electronic book reading system, e-book, personal digital assistant, navigation device, electronic organizer, email device, laptop or other computer, kiosk computer, vending machine, smart appliance, or the like.
  • FIG. 2 is a block diagram illustrating device 102 according to some embodiments. Device 102 includes one or more processing units (CPU's) 202, one or more network or other communications interfaces 204, memory 210, and one or more communication buses 209 for interconnecting these components. Communication buses 209 may include circuitry (sometimes called a chipset) that interconnects and controls communications between system components. Device 102 also includes a user interface 205 having a display device 206 (e.g., a touch-sensitive display having a touch-sensitive surface) and optionally including additional input devices 208 (e.g., one or more of: keyboard, mouse, trackball, touchpad having a touch-sensitive surface, keypad having physical keys or buttons, microphone, etc.). In embodiments in which the display device 206 is not a touch-sensitive display, the input devices 208 include a touchpad having a touch-sensitive surface. In some embodiments, device 102 includes one or more sensors 203, such as one or more accelerometers, magnetometer, gyroscope, GPS receiver, microphone, one or more infrared (IR) sensors, one or more biometric sensors, camera, etc. Any input device 208 herein described as an input device may equally well be described as a sensor 203, and vice versa. In some embodiments, signals produced by the one or more sensors 203 are used as input sources for detecting events.
  • Memory 210 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 210 may optionally include one or more storage devices remotely located from the CPU(s) 202. Memory 210, or alternately the non-volatile memory device(s) within memory 210, comprises a computer readable storage medium. In some embodiments, memory 210 stores the following programs, modules and data structures, or a subset thereof:
      • operating system 212 that includes procedures for handling various basic system services and for performing hardware dependent tasks;
      • communication module 214 that is used for connecting the device 102 to other computers via the one or more communication interfaces 204 (wired or wireless) and one or more communication networks, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and so on;
      • user interface module 216 that receives commands input by the user via the display 206 (if the display is a touch-sensitive display), input devices 208, and/or sensor 203, and generates user interface objects for display by display device 206;
      • control application 220 that receives text manipulation events, queries applications 240 for contextual information that provides context to the text manipulation events, and issues commands to applications 240 based on the contextual information, as described herein;
      • one or more applications 240 (e.g., an email application, a web browser application, a text messaging application, third party applications, etc.), as described herein; and
      • event handling system 270 (in the device 102) that may be implemented in various embodiments within control application 220 and/or applications 240, as described herein; in some embodiments, however, some aspects of event handling system 270 are implemented in control application 220 while other aspects are implemented in applications 240; and
      • device/global internal state 242.
  • Each of the above identified modules, applications and systems is stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above. The set of instructions is executed by one or more processors (e.g., CPUs 202). The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may be combined or otherwise rearranged in various embodiments. In some embodiments, memory 210 stores a subset of the modules and data structures identified above. Furthermore, memory 210 may store additional modules and data structures not described above.
  • Although FIG. 2 shows a block diagram of a device 102, FIG. 2 is intended more as functional description of the various features which may be present in the device 102 than as a structural schematic of the embodiments described herein. In practice, and as recognized by those of ordinary skill in the art, items shown separately could be combined and some items could be separated. For example, in some embodiments, the control application 220 is included in the operating system 212.
  • When performing a touch-based gesture on the touch-sensitive surface of display device 206 or a touchpad, the user generates a sequence of events and/or sub-events that are processed by one or more processing units of the device 102 (e.g., the one or more processors 202 illustrated in FIG. 2). In some embodiments, the one or more processing units of the device 102 process the sequence of events and/or sub-events to recognize events.
  • FIG. 3A is a block diagram illustrating exemplary components of the event handling system 270, according to some embodiments. In the following discussion of FIGS. 3A and 3B, display device 206 is a touch-sensitive display having a touch-sensitive surface. However, in other embodiments the event handling system 270 is used in conjunction with a device 102 having a non-touch sensitive display and a touch pad having a touch-sensitive surface. The event handling system 270 includes event sorter 301, which receives event information and determines an application 240-1 and at least one application view 311 to which to deliver the event information.
  • As shown in FIG. 3A, application 240-1 has an application internal state 312. As discussed in more detail below, application internal state 312 includes contextual information 314 (e.g., text and metadata) needed to provide enhanced keyboard services to application 240-1. Application internal state 312, however, is not directly accessible by control application 220, because the memory location(s) of the application internal state 312 is(are) not known to control application 220, or the memory location(s) of the application internal state 312 is(are) not directly accessible by the control application, and/or because application 240-1 stores information in application internal state 312 in a manner (e.g., using data structures, formats, metadata, or the like) unknown to control application 220.
  • In some embodiments, event sorter 301 includes an event monitor 302 and an event dispatcher module 305. Event monitor 302 receives event information from operating system 212. In some embodiments, event information includes information about a sub-event (e.g., a portion of a touch-based gesture on touch-sensitive display 206). The operating system 212 transmits information it receives from the user interface 205 to event monitor 302. Information that the operating system 212 receives from the user interface 205 includes event information from the display device 206 (i.e., a touch-sensitive display) or a touchpad having a touch-sensitive surface.
  • In some embodiments, event monitor 302 sends requests to operating system 212 at predetermined intervals. In response, operating system 212 transmits event information to event monitor 302. In other embodiments, operating system 212 transmits event information only when there is a significant event (e.g., receiving an input beyond a predetermined noise threshold and/or for more than a predetermined duration).
  • In some embodiments, event sorter 301 also includes a hit view determination module 303 and/or an active event recognizer determination module 304. The hit view determination module 303 includes software procedures for determining where a sub-event has taken place within one or more views (e.g., application views 311), when the display device 206 displays more than one view. A spatial aspect of the user interface associated with an application is a set of views, sometimes herein called application views or user interface windows, in which information is displayed and touch-based gestures occur. The application views (of a respective application) in which a touch is detected may correspond to programmatic levels within a programmatic or view hierarchy of the application. For example, the lowest level view in which a touch is detected may be called the hit view, and the set of events that are recognized as proper inputs may be determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture.
  • In some embodiments, the hit view determination module 303 receives information related to sub-events of a touch-based gesture. When an application has multiple views organized in a hierarchy, the hit view determination module 303 identifies a hit view as the lowest view in the hierarchy which should handle the sub-event. In most circumstances, the hit view is the lowest level view in which an initiating sub-event occurs (i.e., the first sub-event in the sequence of sub-events that form an event or potential event). Once the hit-view is identified by the hit view determination module, the identified hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view.
  • In some embodiments, active event recognizer determination module 304 determines which view or views within a view hierarchy should receive a particular sequence of sub-events. In some circumstances, active event recognizer determination module 304 determines that only the hit view should receive a particular sequence of sub-events. In other circumstances, active event recognizer determination module 304 determines that all views that include the physical location of a sub-event are actively involved views, and therefore determines that all actively involved views should receive a particular sequence of subevents. In some embodiments, even when touch sub-events are entirely confined to the area associated with one particular view, views higher in the view hierarchy continue to be actively involved views.
  • In some embodiments, event dispatcher module 305 dispatches the event information to an event recognizer (e.g., event recognizer 320). In embodiments including active event recognizer determination module 304, event dispatcher module 305 delivers the event information to an event recognizer determined by active event recognizer determination module 304. In some embodiments, event dispatcher module 305 stores event information, which is retrieved by a respective event receiver module 331, in an event queue.
  • In some embodiments, operating system 212 includes event sorter 301, while in some other embodiments application 240-1 includes event sorter 301. In yet other embodiments, event sorter 301 is a stand-alone module, or a part of another module stored in memory 210.
  • In some embodiments, application 240-1 includes one or more application views 311, each of which includes instructions for handling touch events that occur with a respective view of the application's user interface. Each application view 311 of the application 240-1 includes one or more event recognizers 320 and one or more event handlers 322. Typically, a respective application view 311 includes a plurality of event recognizers 320 and a plurality of event handlers 322. In other embodiments, one or more of the event recognizers 320 are part of a separate module, such as a user interface kit (not shown) or a higher level object from which application 240-1 inherits methods and other properties. In some embodiments, a respective application view 311 also includes event data 321 received from event sorter 301.
  • A respective event recognizer 320 receives event information (e.g., event data 321) from the event sorter 301 and identifies an event from the event information. Event recognizer 320 includes event receiver 331 and event comparator 332. In some embodiments, event recognizer 320 also includes at least a subset of: metadata 335, event delivery instructions 336, and sub-event delivery instructions 337.
  • Event receiver 331 receives event information from event sorter 301. The event information includes information about a sub-event, for example, a touch or a movement. Depending on the sub-event, the event information also includes additional information, such as a location (e.g., a physical location) of the sub-event. When the subevent concerns motion of a touch, the event information may also include speed and direction of the sub-event. In some embodiments, a respective event includes rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about the current orientation (also called device attitude) of the device.
  • Event comparator 332 compares the event information to predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event. In some embodiments, event comparator 332 includes event definitions 333. Event definitions 333 contain definitions of events (e.g., predefined sequences of sub-events), for example, event 1 (334-1), event 2 (334-2), and others. In some embodiments, sub-events in an event 334 include, for example, touch begin, touch end, touch movement, touch cancellation, and multiple touching. In one example, the definition for event 1 (334-1) is a double tap on a displayed object. The double tap, for example, comprises a first touch (touch begin) on the displayed object for a predetermined phase, a first lift-off (touch end) for a predetermined phase, a second touch (touch begin) on the displayed object for a predetermined phase, and a second lift-off (touch end) for a predetermined phase. In another example, the definition for event 2 (334-2) is a dragging on a displayed object. The dragging, for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across the display device 206, and lift-off of the touch (touch end). In some embodiments, the event also includes information for the event's associated event handlers 322.
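  • Purely as an illustration of matching a received sub-event sequence against such definitions, consider the sketch below. The enum cases and the two definitions mirror the double-tap and drag examples above, but the names are hypothetical; a real event comparator also tracks touch phases, timing, and hit views.

```swift
// Hypothetical sub-event and event-definition types for illustration only.
enum SubEvent: Equatable {
    case touchBegin, touchEnd, touchMove, touchCancel
}

struct EventDefinition {
    let name: String
    let sequence: [SubEvent]
}

// Definitions mirroring the double-tap and drag examples in the text.
let doubleTap = EventDefinition(name: "double tap",
                                sequence: [.touchBegin, .touchEnd, .touchBegin, .touchEnd])
let drag = EventDefinition(name: "drag",
                           sequence: [.touchBegin, .touchMove, .touchEnd])

// A simplified event comparator: returns the first definition whose
// sub-event sequence matches the observed sequence exactly.
func recognizeEvent(_ observed: [SubEvent],
                    among definitions: [EventDefinition]) -> EventDefinition? {
    definitions.first { $0.sequence == observed }
}

// Usage:
let observed: [SubEvent] = [.touchBegin, .touchEnd, .touchBegin, .touchEnd]
let match = recognizeEvent(observed, among: [doubleTap, drag])?.name   // "double tap"
```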
  • In some embodiments, event definitions 333 includes a definition of an event for a respective user-interface object. In some embodiments, event comparator 332 performs a hit test to determine which user-interface object is associated with a sub-event. For example, in an application view in which three user-interface objects are displayed on a touch-sensitive display such as the display device 206, when a touch is detected on the display device 206, the event comparator 332 performs a hit test to determine which of the three user-interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler 322, the event comparator uses the result of the hit test to determine which event handler 322 should be activated. For example, event comparator 332 selects an event handler associated with the sub-event and the object triggering the hit test.
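  • A highly simplified hit test of this kind is sketched below; the Rect and ViewNode types are placeholders introduced for the sketch, and frames are given in a single shared coordinate space for simplicity. A real implementation would work in view-local coordinate systems and honor z-order and clipping.

```swift
// Hypothetical view record for illustrating a hit test; not an actual UI framework API.
struct Rect {
    var x, y, width, height: Double
    func contains(_ px: Double, _ py: Double) -> Bool {
        px >= x && px < x + width && py >= y && py < y + height
    }
}

struct ViewNode {
    let identifier: String
    let frame: Rect
    var subviews: [ViewNode] = []
}

// Returns the deepest (hit) view whose frame contains the touch point,
// mirroring the "lowest level view in which the touch occurs" rule above.
func hitTest(_ view: ViewNode, x: Double, y: Double) -> ViewNode? {
    guard view.frame.contains(x, y) else { return nil }
    // Check subviews front-to-back; the last added subview is treated as frontmost.
    for sub in view.subviews.reversed() {
        if let hit = hitTest(sub, x: x, y: y) { return hit }
    }
    return view
}

// Usage:
let screen = ViewNode(identifier: "root",
                      frame: Rect(x: 0, y: 0, width: 320, height: 480),
                      subviews: [ViewNode(identifier: "textView",
                                          frame: Rect(x: 0, y: 100, width: 320, height: 200))])
let hit = hitTest(screen, x: 50, y: 150)?.identifier   // "textView"
```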
  • In some embodiments, the definition for a respective event 334 also includes delayed actions that delay delivery of the event information until after it has been determined whether the sequence of sub-events does or does not correspond to the event recognizer's event type.
  • When a respective event recognizer 320 determines that the series of sub-events do not match any of the events in the event definitions 333, the event recognizer 320 enters an event impossible or event cancel state, after which it disregards subsequent sub-events of the touch-based gesture. In this situation, other event recognizers, if any, that remain active for the hit view continue to track and process sub-events of an ongoing touch-based gesture.
  • In some embodiments, a respective event recognizer 320 includes metadata 335 with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively involved event recognizers. In some embodiments, metadata 335 includes configurable properties, flags, and/or lists that indicate how event recognizers may interact with one another. In some embodiments, metadata 335 includes configurable properties, flags, and/or lists that indicate whether sub-events are delivered to varying levels in the view or programmatic hierarchy.
  • In some embodiments, a respective event recognizer 320 activates event handler 322 associated with an event when one or more particular sub-events of an event are recognized. In some embodiments, a respective event recognizer 320 delivers event information associated with the event to event handler 322. Activating an event handler 322 is distinct from sending (and deferred sending) sub-events to a respective hit view. In some embodiments, event recognizer 320 throws a flag associated with the recognized event, and event handler 322 associated with the flag catches the flag and performs a predefined process.
  • In some embodiments, event delivery instructions 336 include sub-event delivery instructions 337 that deliver event information about a sub-event without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the series of sub-events or to actively involved views. Event handlers associated with the series of sub-events or with actively involved views receive the event information and perform a predetermined process.
  • FIG. 3B is a block diagram illustrating a respective event handler 322-1, according to some embodiments. Event handler 322-1 includes a control application event handler 340 and an application event handler 350. In some embodiments, control application event handler 340 is included in control application 220 and application event handler 350 is included in a respective application 240 (e.g., a third party application). Stated another way, event handler 322-1 is implemented partially in control application 220 and partially in application 240. Furthermore, while application event handler 350 utilizes and updates application internal state 312 of application 240, control application event handler 340 utilizes and updates retained contextual information 344, which is typically a subset of contextual information 314 in application internal state 312.
  • In some embodiments, control application event handler 340 includes a query module 341, a command module 342, and a listener module 343. The modules of control application event handler 340 form an application programming interface (API) for providing enhanced keyboard services to applications 240 executed by device 102.
  • Control application 220 stores retained contextual information 344 that has been obtained from or for application 240. It is noted that the retained contextual information 344 includes both contextual information obtained from application 240 as well as contextual information updates corresponding to commands issued by command module 342 (of a respective control application event handler 340) to application 240. If two or more applications 240 are currently active, control application 220 separately retains contextual information 344 for each currently active application 240.
  • Query module 341 queries the application 240 for contextual information relevant to a text manipulation event (e.g., adding text, deleting text, editing text, selecting text, deselecting text, etc.). For example, if the user of the device 102 entered text, the contextual information may include a range of text including the entered text (e.g., one or more characters, one or more words, one or more sentences, one or more paragraphs, one or more lines of text, one or more sections of a document, etc.). In some embodiments, queries from the query module 341 are received and processed by a responder module 351 of application event handler 350, as discussed below. Information obtained by query module 341 is used by the control application 220 to update the contextual information 344 retained by the control application for application 240. The retained contextual information 344 typically includes one or more characters, words, sentences, lines of text, or sections of a document preceding the location associated with a current text manipulation event.
  • The retained contextual information 344 is used by event handler 322 to provide enhanced keyboard services, such as one or more of: spelling correction; auto completion of incomplete words; grammar checking; adjusting the hit zone of one or more keys in a soft keyboard based on the context of the current text entry (or cursor) location, so as to enlarge the hit zone of one or more keys representing letters or symbols having a statistically high likelihood of being the next key to be selected by the user and/or to decrease the size of the hit zone of one or more keys representing letters or symbols having a statistically low likelihood of being the next key to be selected by the user; and the like.
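  • For illustration, one way such hit-zone adjustment might be computed is sketched below. The KeyZone type, the nextKeyProbability model, and the ±30% scaling bound are assumptions introduced for this sketch; they are not taken from the disclosure.

```swift
// A minimal sketch of hit-zone adjustment, assuming a one-dimensional row of keys.
// `nextKeyProbability` stands in for a statistical next-key model.
struct KeyZone {
    let key: Character
    var width: Double          // hit-zone width in points
}

func adjustHitZones(_ zones: [KeyZone],
                    nextKeyProbability: [Character: Double]) -> [KeyZone] {
    let baseline = 1.0 / Double(max(zones.count, 1))   // uniform expectation
    return zones.map { zone in
        let p = nextKeyProbability[zone.key] ?? baseline
        // Widen keys that are more likely than average, narrow less likely ones,
        // bounded to keep the keyboard layout recognizable.
        let scale = min(1.3, max(0.7, p / baseline))
        return KeyZone(key: zone.key, width: zone.width * scale)
    }
}

// Usage: after the user has typed "th", "e" is far more likely than "q".
let row = [KeyZone(key: "q", width: 26), KeyZone(key: "e", width: 26)]
let adjusted = adjustHitZones(row, nextKeyProbability: ["e": 0.9, "q": 0.01])
```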
  • The command module 342 issues commands to application 240 based on the text manipulation event and the obtained/retained contextual information 344. For example, command module 342 may instruct application 240 to replace a potentially misspelled word with a correct spelling of the potentially misspelled word. In some embodiments, commands issued by command module 342 are received and processed by command execution module 352 of application event handler 350, as discussed below.
  • Listener module 343 listens to notifications by application 240 (e.g., notifications issued via notification module 353 of application 240) that contextual information 344 retained by control application 220 for application 240 can no longer be relied upon by control application 220.
  • In some embodiments, application event handler 350 includes responder module 351, command execution module 352, and notification module 353. Responder module 351 responds to queries by control application 220 (e.g., queries from query module 341 of control application 220) for contextual information that provides context to a text manipulation event. Responder module 351 obtains the requested contextual information from the contextual information 314 stored by application 240 in application internal state 312. Command execution module 352 executes commands issued by control application 220 (e.g., issued by command module 342 of control application 220). Execution of those commands updates the contextual information 314 (e.g., text and/or metadata for text) stored by application 240 in application internal state 312. Notification module 353 notifies control application 220 that retained contextual information 344 for application 240 can no longer be relied upon by control application 220.
  • In some situations, the command execution module can ignore a command issued by control application 220. For example, in some embodiments command execution module 352 ignores a command issued by control application 220 when command execution module 352 and/or application 240 determines that the command is contrary to a predefined policy, fails to meet predefined criteria, or implements a feature not supported by application 240. In such situations, application 240 will typically invoke notification module 353 to notify control application 220 that retained contextual information 344 for application 240 can no longer be relied upon by control application 220.
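  • By way of illustration only, the division of labor described above between the control application event handler and the application event handler can be sketched as a pair of interfaces. The protocol and type names below (KeyboardServiceClient, KeyboardServiceProvider, EditCommand, ContextSnapshot) are hypothetical and do not correspond to any shipping API; this is a minimal sketch of the roles, not the disclosed implementation. In this sketch, the responder and command execution modules map to the client methods, while the listener module corresponds to the provider receiving the invalidation notification.

```swift
// Illustrative-only protocols sketching the division of labor described above.

// Contextual information the second application hands back in response to a query.
struct ContextSnapshot {
    var text: String                  // the predetermined unit of text
    var selectedRange: Range<Int>?    // logical location, if a range is selected
}

// Commands the control application may issue back to the client application.
enum EditCommand {
    case replace(range: Range<Int>, with: String)
    case presentCandidates([String])
}

// Implemented by the second (e.g., third-party) application:
// responder and command-execution roles.
protocol KeyboardServiceClient: AnyObject {
    func contextualInformation(around logicalOffset: Int) -> ContextSnapshot   // responder module
    func execute(_ command: EditCommand)                                       // command execution module
}

// Implemented by the control application: query, command, and listener roles.
protocol KeyboardServiceProvider: AnyObject {
    func textManipulationEventOccurred(in client: KeyboardServiceClient, at logicalOffset: Int)
    func contextDidBecomeInvalid(for client: KeyboardServiceClient)            // listener module
}
```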
  • The foregoing discussion regarding event handling for touch-based gestures on touch-sensitive displays also applies to other forms of user inputs from various input devices, which may be utilized as inputs corresponding to sub-events which define an event to be recognized. In some embodiments, such user inputs include one or more of: mouse movements; mouse button presses, with or without single or multiple keyboard presses or holds; user movements, taps, drags, scrolls, etc., on a touch pad; pen stylus inputs; movement or rotation of the device; oral instructions; detected eye movements; biometric inputs; and/or any combination thereof.
  • FIG. 4 presents one particular implementation of device 102, according to some embodiments. It is noted that FIG. 4 is the same as FIG. 2 except with respect to control application 220, application 240 and event handling system 270. Aspects of FIG. 4 that are the same as FIG. 2 are already described above, and thus not repeated here.
  • Control application 220 includes event recognizers 320, and control application event handlers 340. Each control application event handler 340 includes a query module 341 (e.g., the query module 341 in FIG. 3B) that queries a respective application 240 for contextual information 314 that provides context to text manipulation events; a command module 342 (e.g., the command module 342 in FIG. 3B) that issues commands 426 to the application 240; and a listener module 343 (e.g., the listener module 343 in FIG. 3B) that listens for notifications from a notification module 353 of application 240, as described herein. As noted above, and shown in FIG. 4, control application event handler 340 utilizes and updates retained contextual information 344, which is typically a subset of contextual information 314 in application internal state 312 of application 240.
  • Application 240 (e.g., an email application, a web browser application, a text messaging application, or a third party application) has an application internal state 312, which includes contextual information 314. As noted above, contextual information 314 is typically text and metadata (color, font, size, selection status, etc.) concerning the text, but may include other information as well. Application 240 also includes a plurality of application event handlers 350, one of which is shown in FIG. 4. Application event handler 350 includes responder module 351 that responds to queries by the query module 341, command execution module 352 that executes commands 447 issued by the control application 220, and notification module 353 that notifies control application 220 when contextual information previously provided to control application 220 can no longer be relied upon, as described above with reference to FIG. 3B.
  • Memory 210 of device 102 also stores language data 460 for one or more languages. Language data 460 provides information used to provide the aforementioned enhanced keyboard services. In some embodiments, language data 460 includes data structures that represent valid words 461, characters 462, and/or phrases 463 for the one or more languages.
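  • A minimal stand-in for such per-language data might look like the sketch below; the field names mirror the words 461, characters 462, and phrases 463 breakdown, but the structure itself is an assumption made for illustration.

```swift
// Hypothetical per-language data record; the real language data 460 may be organized differently.
struct LanguageData {
    let languageCode: String             // e.g., "en", "ja"
    let validWords: Set<String>          // words 461
    let validCharacters: Set<Character>  // characters 462
    let commonPhrases: [String]          // phrases 463

    func isKnownWord(_ word: String) -> Bool {
        validWords.contains(word.lowercased())
    }
}

// Usage:
let english = LanguageData(languageCode: "en",
                           validWords: ["the", "quick", "brown"],
                           validCharacters: Set("abcdefghijklmnopqrstuvwxyz"),
                           commonPhrases: ["thank you"])
let known = english.isKnownWord("Quick")   // true
```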
  • Each of the above identified systems, modules and applications is stored in one or more of the previously mentioned memory devices of device 102, and corresponds to a set of instructions for performing a function described above. The set of instructions can be executed by one or more processors (e.g., CPUs 202). The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may be combined or otherwise re-arranged in various embodiments. In some embodiments, memory 210 may store a subset of the modules and data structures identified above. Furthermore, memory 210 may store additional modules and data structures not described above.
  • As discussed above, control application 220 of device 102 provides the functionality of a system keyboard with enhanced keyboard services to third-party applications executing on device 102. FIGS. 5A, 5B and 5C are block diagrams 500, 510, and 520, respectively, illustrating an exemplary sequence of events between control application 502 of a device (e.g., the device 102) and an application 504 (e.g., a third-party application) also executed by the device so as to provide enhanced keyboard services to the application 504, according to some embodiments. In FIG. 5A, the control application 502 receives an indication of a text manipulation event 506. In FIG. 5B, the control application 502 queries 512 the application 504 for contextual information that provides context for the text manipulation event 506. The application 504 responds to the query 512 by providing contextual information 514 to the control application 502. In FIG. 5C, the control application 502 issues commands 522 to the application 504 based on the contextual information 514 (and/or based on other retained contextual information 344 obtained in response to prior text manipulation events) and the text manipulation event 506. For example, commands 522, when executed by application 504, may instruct application 504 to replace one or more characters in the user interface of the application 504.
  • The queries, contextual information, and commands described in FIGS. 5A to 5C allow a third-party application not only to receive keystrokes from a keyboard, but also to receive enhanced keyboard services, even though the internal state (242, FIG. 4) of the third-party application is not directly accessible to the operating system or control application of the device.
  • The events illustrated in FIGS. 5A to 5C are described in more detail with respect to FIGS. 6 to 10 below.
  • FIG. 6 is a flowchart of a method 600 for issuing commands to an application based on contextual information, according to some embodiments. A control application (e.g., control application 220 or 502) receives (602) an indication that a text manipulation event has occurred in a user interface of a second application (e.g., a third-party application such as application 240 or 504). For example, the control application may be notified by an event handler that a text manipulation event occurred within a particular application view, or set of application views, of the second application. In some embodiments, the control application receives the text manipulation event prior to the second application. For example, the control application may receive event information for the text manipulation event (e.g., the characters inserted, deleted, selected, etc.) directly from an event handler before that information is provided to the second application. In some embodiments, the text manipulation event is selected from the group consisting of an insertion of one or more characters, a deletion of one or more characters, a selection of one or more characters, and a deselection of one or more characters.
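  • The four kinds of text manipulation events named above could, for illustration only, be represented by a simple enumeration such as the hypothetical one below.
    // Illustrative enumeration of the text manipulation events listed above.
    enum TextManipulation {
        case insertion(characters: String)
        case deletion(characters: String)
        case selection(range: Range<Int>)
        case deselection(range: Range<Int>)
    }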
  • Next, the control application queries (604) the second application to obtain contextual information established by the second application prior to the event, wherein the contextual information provides context to the text manipulation event that occurred in the user interface of the second application. In some embodiments, the contextual information relating to the text manipulation event includes a logical location and a predetermined unit of text relating to the text manipulation event. In some embodiments, the logical location of the text manipulation event is selected from the group consisting of a point between two characters in the user interface of the second application and a range including one or more characters that is selected in the user interface of the second application. In some embodiments, the predetermined unit of text is selected from the group consisting of a character, a word, a sentence, a paragraph, a line of text, a section of a document, and a document.
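  • As an illustrative sketch only, the contextual information described above (a logical location plus a predetermined unit of text) might be modeled as follows; all names are hypothetical.
    // Hypothetical model of the contextual information returned for a query.
    enum LogicalLocation {
        case insertionPoint(betweenCharactersAt: Int)   // a point between two characters
        case selectedRange(Range<Int>)                  // one or more selected characters
    }

    enum TextUnit {
        case character, word, sentence, paragraph, lineOfText, documentSection, document
    }

    struct EventContext {
        let location: LogicalLocation
        let unit: TextUnit
        let text: String    // the text of the predetermined unit surrounding the event
    }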
  • In some embodiments, a respective query from the control application requesting the contextual information of the text manipulation event includes a physical location of the text manipulation event in the user interface of the second application, and the logical location of the text manipulation event corresponding to the physical location. In general, a physical location may include a set of coordinates relative to a coordinate system for a display device. For example, the coordinate system may be defined relative to a display area of the display device. Alternatively, the coordinate system may be defined relative to an input area of the display device (e.g., for a touch screen or a touch-sensitive display device).
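  • A minimal, purely illustrative sketch of mapping a physical location to a logical character offset appears below; the resolver closure stands in for the application's own layout machinery, and the names are hypothetical.
    import CoreGraphics

    // The query carries a physical location (display or touch coordinates);
    // the application maps it to a logical character offset.
    struct PhysicalLocation {
        let point: CGPoint      // relative to the display area or the input area
    }

    func logicalOffset(for physical: PhysicalLocation,
                       characterOffsetAt: (CGPoint) -> Int) -> Int {
        characterOffsetAt(physical.point)
    }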
  • In some embodiments, operation 604 is performed only when the control application does not already have sufficient contextual information to determine the one or more commands to be sent to the second application. In particular, while processing prior text manipulation events, the control application obtains and retains contextual information for the second application. Therefore, contextual information for the second application may already be known (i.e., retained) by the control application prior to the text manipulation event. If the information received for the current text manipulation event is at a location (in the user interface of the second application) for which the control application already has sufficient contextual information to provide enhanced keyboard services, and the second application has not notified the control application that the retained contextual information can no longer be relied upon, the control application skips operation 604. Thus, in these embodiments operation 604 is a conditional operation.
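  • The conditional behavior of operation 604 could, for illustration only, be captured by a small cache of retained contextual information; the sketch below uses hypothetical names.
    // The control application re-queries only when it has no valid retained
    // context for the event's location.
    final class RetainedContextStore {
        private var retained: [Int: String] = [:]    // character offset -> context text

        func context(atOffset offset: Int,
                     queryApplication: (Int) -> String) -> String {
            if let cached = retained[offset] {
                return cached                        // sufficient context already retained
            }
            let fresh = queryApplication(offset)     // perform operation 604
            retained[offset] = fresh
            return fresh
        }

        // Called when the second application reports that retained context
        // can no longer be relied upon.
        func invalidateAll() { retained.removeAll() }
    }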
  • Attention is now directed to FIG. 7, which is a flowchart of a method 700 for determining, by the second application, contextual information that provides context to a text manipulation event, according to some embodiments. In response to the querying by the control application, the second application determines (702) contextual information providing context to the text manipulation event. In some embodiments, the second application determines (702) the contextual information by determining (704) a text direction associated with a logical location of the text manipulation event and determining (706) boundaries of a predetermined text unit that includes the text associated with the logical location of the text manipulation event, based on the text direction. The contextual information depends on the direction of the text and the boundaries of the predetermined text unit. For example, English is written from the left of a page to the right and then from the top of the page to the bottom of the page. In contrast, Chinese is traditionally written from the top of the page to the bottom of the page and then from the right of the page to the left of the page. In the case of English, when the text manipulation event is the insertion of a sequence of characters, the contextual information may include characters to the left of and above the last character inserted (i.e., the preceding characters entered). In the case of Chinese, when the text manipulation event is the insertion of a sequence of characters, the contextual information may include characters above and to the right of the last character inserted (i.e., the preceding characters entered). The second application then responds (708) to the querying by the control application by providing the contextual information providing context to the text manipulation event that occurred in the user interface of the second application.
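  • A deliberately simplified, hypothetical sketch of method 700 follows; it takes the context to be the characters entered before the insertion point, bounded by a maximum length, while the writing direction only affects where those characters appear visually.
    enum WritingDirection {
        case leftToRightThenTopToBottom   // e.g., English
        case topToBottomThenRightToLeft   // e.g., traditional Chinese layout
    }

    // Returns up to maxCharacters of text preceding the insertion point in
    // logical (storage) order; the preceding characters render to the left and
    // above for English, or above and to the right for traditional Chinese.
    func precedingContext(in text: String,
                          insertionOffset: Int,
                          direction: WritingDirection,
                          maxCharacters: Int = 20) -> String {
        let characters = Array(text)
        let end = min(max(insertionOffset, 0), characters.count)
        let start = max(0, end - maxCharacters)
        guard start < end else { return "" }
        return String(characters[start..<end])
    }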
  • Returning to FIG. 6, the control application then determines (606) the one or more commands based on the contextual information providing context to the text manipulation event.
  • In some embodiments, the text manipulation event and the contextual information indicate that a user is entering one or more characters that form a single character. For example, when entering characters in some Asian languages, multiple characters (or strokes) are required to build up a single character. These embodiments are described with respect to FIG. 8, which is a flowchart of a method for determining (606) commands sent to an application when the contextual information and text manipulation event indicate that a sequence of characters represents a single character, according to some embodiments. The control application determines (802) that the contextual information and text manipulation event indicate a sequence of characters that represents a single character. Next, the control application determines (804) one or more candidate single characters from a plurality of possible single characters based on the contextual information and text manipulation event. The candidate single characters may be selected so as to be consistent not only with the text manipulation event, but also with the contextual information. Optionally, statistical information concerning historical usage of character sequences (by a community of users, by the user of the device, or both) that include the contextual information may be used to select either a predefined number or a context-dependent number of candidate characters. Optionally, any of a variety of auto-completion methodologies may be used to identify the candidate single characters. The control application then generates (806) one or more commands for instructing the second application to display, for user selection, the one or more candidate single characters.
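  • One possible, purely illustrative way to rank candidate single characters is sketched below; the frequency table stands in for the statistical usage data mentioned above, and all names are hypothetical.
    // Given the component characters (or strokes) entered so far, look up and
    // rank candidate single characters (FIG. 8, operation 804).
    func candidateSingleCharacters(forComponents components: String,
                                   lookup: [String: [(character: String, frequency: Int)]],
                                   limit: Int = 5) -> [String] {
        guard let matches = lookup[components] else { return [] }
        return matches
            .sorted { $0.frequency > $1.frequency }   // most commonly used first
            .prefix(limit)
            .map { $0.character }
    }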
  • In some embodiments, the text manipulation event and the contextual information indicate that a word is potentially misspelled. For example, a user may enter a sequence of characters (e.g., the text manipulation event) that forms a word that is potentially misspelled. These embodiments are described with respect to FIG. 9, which is a flowchart of a method for determining (606) commands sent to an application when the contextual information and text manipulation event indicate that a sequence of characters represents a potentially misspelled word, according to some embodiments. The control application determines (902) that the contextual information and text manipulation event indicate a sequence of characters that represents a potentially misspelled word. Note that a potentially misspelled word may be a word that is not in a dictionary for a respective language, or a word that is included in a database of historically misspelled words. Next, the control application determines (904) one or more candidate words from a plurality of possible words that represent a correct spelling of the potentially misspelled word. The control application then generates (906) one or more commands for instructing the second application to display, for user selection, the one or more candidate words.
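  • For illustration only, one common way to realize operations 902 and 904 is to flag a word absent from the dictionary and rank corrections by edit distance; the Levenshtein implementation below is a generic technique, not taken from the specification, and all names are hypothetical.
    // Classic Levenshtein edit distance between two strings.
    func editDistance(_ a: String, _ b: String) -> Int {
        let s = Array(a), t = Array(b)
        if s.isEmpty { return t.count }
        if t.isEmpty { return s.count }
        var previousRow = Array(0...t.count)
        for i in 1...s.count {
            var currentRow = [i] + Array(repeating: 0, count: t.count)
            for j in 1...t.count {
                let substitutionCost = (s[i - 1] == t[j - 1]) ? 0 : 1
                currentRow[j] = min(previousRow[j] + 1,        // deletion
                                    currentRow[j - 1] + 1,     // insertion
                                    previousRow[j - 1] + substitutionCost)
            }
            previousRow = currentRow
        }
        return previousRow[t.count]
    }

    // Candidate corrections for a word not found in the dictionary (FIG. 9).
    func spellingCandidates(for word: String,
                            dictionary: Set<String>,
                            limit: Int = 3) -> [String] {
        let lowered = word.lowercased()
        guard !dictionary.contains(lowered) else { return [] }   // not misspelled
        return dictionary
            .map { (candidate: $0, distance: editDistance(lowered, $0)) }
            .filter { $0.distance <= 2 }                          // only close corrections
            .sorted { $0.distance < $1.distance }
            .prefix(limit)
            .map { $0.candidate }
    }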
  • In some embodiments, the text manipulation event and the contextual information indicate that a user is entering characters that represent a portion of a word. For example, the user may have typed the characters "a", "u", "t", "o", and "m", representing a portion of a word (e.g., "automatic", "automobile", etc.). The control application may attempt to predict one or more words (sometimes called candidate words) that the user intends to type. These embodiments are described with respect to FIG. 10, which is a flowchart of a method for determining (606) commands sent to an application when the contextual information and text manipulation event indicate that a sequence of characters represents a portion of a word, according to some embodiments. The control application determines (1002) that the contextual information and text manipulation event indicate a sequence of characters that represent a portion of a word. Next, the control application determines (1004) one or more candidate words in accordance with the portion of the word. For example, the candidate words are typically selected from a set of complete words that include the portion of the word. The control application then generates (1006) one or more commands for instructing the second application to display, for user selection, the one or more candidate words.
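  • A minimal illustrative sketch of operation 1004 appears below; the word list stands in for language data 460, a real implementation might rank candidates by historical usage instead, and all names are hypothetical.
    // Propose complete words that begin with the partial word typed so far (FIG. 10).
    func completionCandidates(forPrefix prefix: String,
                              words: [String],
                              limit: Int = 3) -> [String] {
        let lowered = prefix.lowercased()
        guard !lowered.isEmpty else { return [] }
        return words
            .filter { $0.hasPrefix(lowered) && $0 != lowered }
            .sorted()                          // deterministic order for this sketch
            .prefix(limit)
            .map { $0 }
    }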
  • Returning to FIG. 6, after the one or more commands have been determined based on the contextual information that provides context to the text manipulation event (606), the control application issues (608) the one or more commands to the second application. In response to the issuing of the one or more commands by the control application, the second application typically executes the one or more commands issued by the control application. However, in some situations, the second application need not execute the command or commands issued by the control application, or may execute some but not all of the commands issued by the control application.
  • In some embodiments, the contextual information for the second application that has been obtained by the control application may no longer be reliable. For example, the second application may have modified text independent of user input (i.e., regardless of whether a text manipulation event has occurred). Thus, in some embodiments, the second application notifies the control application that contextual information retained by the control application for the second application can no longer be relied upon. Similarly, the second application may modify a selection of text independent of user input. Thus, in some embodiments, the second application notifies the control application that a selection of text in the second application has changed. In addition, if the second application does not execute one or more commands issued by the control application, which would typically render the contextual information retained by the control application invalid, the second application notifies the control application that the retained contextual information can no longer be relied upon.
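  • Purely as an illustrative sketch, the invalidation notifications described above might take the following form; the protocol and method names are hypothetical.
    // The second application tells the control application when retained
    // context or the current selection can no longer be relied upon.
    protocol RetainedContextObserver: AnyObject {
        func contextualInformationDidBecomeInvalid()
        func selectionDidChange(to newRange: Range<Int>)
    }

    final class SecondApplication {
        weak var controlApplication: RetainedContextObserver?

        // Text changed independent of user input (no text manipulation event).
        func replaceTextProgrammatically(with newText: String) {
            // ... update the application's internal text storage ...
            controlApplication?.contextualInformationDidBecomeInvalid()
        }

        // Selection changed independent of user input.
        func setSelectionProgrammatically(_ range: Range<Int>) {
            // ... update the application's internal selection state ...
            controlApplication?.selectionDidChange(to: range)
        }

        // Commands issued by the control application were not executed, so the
        // control application's retained context is no longer valid.
        func declinedToExecuteIssuedCommands() {
            controlApplication?.contextualInformationDidBecomeInvalid()
        }
    }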
  • In some embodiments, the methods and systems described above for responding to and processing text manipulation events in the user interface of an application are applied to content manipulation events, which manipulate content (e.g., text, images, objects, etc.) in the user interface of an application, while providing enhanced content services (including, for example, one or more of the aforementioned enhanced keyboard services) to the application. Content manipulation events are a superset of text manipulation events.
  • The methods illustrated in FIGS. 6-10 may be governed by instructions that are stored in a computer readable storage medium and that are executed by one or more processors of a respective multifunction device. Each of the operations shown in FIGS. 6-10 may correspond to instructions stored in a computer memory or computer readable storage medium. The computer readable storage medium may include a magnetic or optical disk storage device, solid state storage devices such as Flash memory, or other non-volatile memory device or devices. The computer readable instructions stored on the computer readable storage medium are in source code, assembly language code, object code, or other instruction format that is interpreted and/or executable by the device's one or more processors.
  • The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (3)

What is claimed is:
1. A method comprising:
at a device with a touch-sensitive display:
determining context at a first application;
obtaining, at a second application that is different from the first application, the context of the first application, wherein the first application is a third-party application that is restricted from accessing information available to the second application;
receiving at the second application, text input information based on interactions with a soft keyboard displayed on the touch-sensitive display; and
in response to receiving the text information, issuing one or more commands from the second application to the first application based on the context at the first application and the text input information.
2. The method of claim 1, wherein the text information includes location information that describes a physical location at which the text information was received by the device.
3. The method of claim 1, wherein the first application is a third-party application.
US14/978,655 2009-09-24 2015-12-22 System and Method for Issuing Commands to Applications Based on Contextual Information Abandoned US20160110230A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/978,655 US20160110230A1 (en) 2009-09-24 2015-12-22 System and Method for Issuing Commands to Applications Based on Contextual Information

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US12/566,660 US8285499B2 (en) 2009-03-16 2009-09-24 Event recognition
US29281810P 2010-01-06 2010-01-06
US12/789,684 US9223590B2 (en) 2010-01-06 2010-05-28 System and method for issuing commands to applications based on contextual information
US14/978,655 US20160110230A1 (en) 2009-09-24 2015-12-22 System and Method for Issuing Commands to Applications Based on Contextual Information

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/789,684 Continuation US9223590B2 (en) 2009-09-24 2010-05-28 System and method for issuing commands to applications based on contextual information

Publications (1)

Publication Number Publication Date
US20160110230A1 true US20160110230A1 (en) 2016-04-21

Family

ID=44225431

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/789,684 Active 2033-03-02 US9223590B2 (en) 2009-09-24 2010-05-28 System and method for issuing commands to applications based on contextual information
US14/978,655 Abandoned US20160110230A1 (en) 2009-09-24 2015-12-22 System and Method for Issuing Commands to Applications Based on Contextual Information

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/789,684 Active 2033-03-02 US9223590B2 (en) 2009-09-24 2010-05-28 System and method for issuing commands to applications based on contextual information

Country Status (2)

Country Link
US (2) US9223590B2 (en)
WO (1) WO2011085118A1 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9798459B2 (en) 2008-03-04 2017-10-24 Apple Inc. Touch event model for web pages
US9965177B2 (en) 2009-03-16 2018-05-08 Apple Inc. Event recognition
US9971502B2 (en) 2008-03-04 2018-05-15 Apple Inc. Touch event model
US10175876B2 (en) 2007-01-07 2019-01-08 Apple Inc. Application programming interfaces for gesture operations
US10216408B2 (en) 2010-06-14 2019-02-26 Apple Inc. Devices and methods for identifying user interface objects based on view hierarchy
US10318562B2 (en) 2016-07-27 2019-06-11 Google Llc Triggering application information
US10719225B2 (en) 2009-03-16 2020-07-21 Apple Inc. Event recognition
US10732997B2 (en) 2010-01-26 2020-08-04 Apple Inc. Gesture recognizers with delegates for controlling and modifying gesture recognition
US10963142B2 (en) 2007-01-07 2021-03-30 Apple Inc. Application programming interfaces for scrolling
US11429190B2 (en) 2013-06-09 2022-08-30 Apple Inc. Proxy gesture recognizer

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110320974A1 (en) * 2010-06-29 2011-12-29 Kun Bai Method and system having a virtual keyboard on devices with size limited touch screen
US20120159341A1 (en) 2010-12-21 2012-06-21 Microsoft Corporation Interactions with contextual and task-based computing environments
US20120166522A1 (en) * 2010-12-27 2012-06-28 Microsoft Corporation Supporting intelligent user interface interactions
CN103136285A (en) * 2011-12-05 2013-06-05 英顺源(上海)科技有限公司 Translation query and operation system used for handheld device and method thereof
US9530120B2 (en) * 2012-05-31 2016-12-27 Apple Inc. Automatically updating a display of text based on context
CN103116408A (en) * 2013-01-30 2013-05-22 北京网秦天下科技有限公司 Intelligent input method and equipment
CN105518657B (en) * 2013-10-24 2019-09-24 索尼公司 Information processing equipment, information processing method and computer readable recording medium
CN104461049A (en) * 2014-04-23 2015-03-25 刘通 Chinese character input coding technology combining letters and components
CN104331393A (en) * 2014-05-06 2015-02-04 广州三星通信技术研究有限公司 Equipment and method for providing option by aiming at input operation of user
US9811352B1 (en) 2014-07-11 2017-11-07 Google Inc. Replaying user input actions using screen capture images
US9760560B2 (en) * 2015-03-19 2017-09-12 Nuance Communications, Inc. Correction of previous words and other user text input errors
CN106293114B (en) * 2015-06-02 2019-03-29 阿里巴巴集团控股有限公司 Predict the method and device of user's word to be entered
US10582011B2 (en) 2015-08-06 2020-03-03 Samsung Electronics Co., Ltd. Application cards based on contextual data
US9929863B2 (en) * 2015-10-30 2018-03-27 Palo Alto Research Center Incorporated System and method for efficient and semantically secure symmetric encryption over channels with limited bandwidth
US10078482B2 (en) * 2015-11-25 2018-09-18 Samsung Electronics Co., Ltd. Managing display of information on multiple devices based on context for a user task
US10409487B2 (en) * 2016-08-23 2019-09-10 Microsoft Technology Licensing, Llc Application processing based on gesture input
US10535005B1 (en) * 2016-10-26 2020-01-14 Google Llc Providing contextual actions for mobile onscreen content

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4674066A (en) * 1983-02-18 1987-06-16 Houghton Mifflin Company Textual database system using skeletonization and phonetic replacement to retrieve words matching or similar to query words
JPH05197573A (en) * 1991-08-26 1993-08-06 Hewlett Packard Co <Hp> Task controlling system with task oriented paradigm
US5588072A (en) * 1993-12-22 1996-12-24 Canon Kabushiki Kaisha Method and apparatus for selecting blocks of image data from image data having both horizontally- and vertically-oriented blocks
JP3889466B2 (en) * 1996-11-25 2007-03-07 ソニー株式会社 Text input device and method
US6377965B1 (en) * 1997-11-07 2002-04-23 Microsoft Corporation Automatic word completion system for partially entered data
IL136465A0 (en) * 1997-12-01 2001-06-14 Cedara Software Corp Architecture for an application framework
CA2244431C (en) * 1998-07-30 2002-02-19 Ibm Canada Limited-Ibm Canada Limitee Touchscreen keyboard support for multi-byte character languages
US8938688B2 (en) * 1998-12-04 2015-01-20 Nuance Communications, Inc. Contextual prediction of user words and user actions
US7030863B2 (en) * 2000-05-26 2006-04-18 America Online, Incorporated Virtual keyboard system with automatic correction
US6631501B1 (en) * 1999-06-30 2003-10-07 Microsoft Corporation Method and system for automatic type and replace of characters in a sequence of characters
US6922810B1 (en) * 2000-03-07 2005-07-26 Microsoft Corporation Grammar-based automatic data completion and suggestion for user input
US6976172B2 (en) * 2000-12-28 2005-12-13 Intel Corporation System and method for protected messaging
US7414616B2 (en) * 2002-01-03 2008-08-19 Mahesh Jayachandra User-friendly Brahmi-derived Hindi keyboard
US7490296B2 (en) 2003-01-31 2009-02-10 Microsoft Corporation Utility object for specialized data entry
US20040225965A1 (en) * 2003-05-06 2004-11-11 Microsoft Corporation Insertion location tracking for controlling a user interface
US8117540B2 (en) * 2005-05-18 2012-02-14 Neuer Wall Treuhand Gmbh Method and device incorporating improved text input mechanism
US20060271520A1 (en) * 2005-05-27 2006-11-30 Ragan Gene Z Content-based implicit search query
US20070152980A1 (en) * 2006-01-05 2007-07-05 Kenneth Kocienda Touch Screen Keyboards for Portable Electronic Devices
US7694231B2 (en) * 2006-01-05 2010-04-06 Apple Inc. Keyboards for portable electronic devices
US7941760B2 (en) * 2006-09-06 2011-05-10 Apple Inc. Soft keyboard display for a portable multifunction device
US8056007B2 (en) * 2006-11-15 2011-11-08 Yahoo! Inc. System and method for recognizing and storing information and associated context
US7912700B2 (en) * 2007-02-08 2011-03-22 Microsoft Corporation Context based word prediction
US7949516B2 (en) * 2007-08-31 2011-05-24 Research In Motion Limited Handheld electronic device and method employing logical proximity of characters in spell checking
US8289283B2 (en) * 2008-03-04 2012-10-16 Apple Inc. Language input interface on a device
US8285499B2 (en) * 2009-03-16 2012-10-09 Apple Inc. Event recognition

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10613741B2 (en) 2007-01-07 2020-04-07 Apple Inc. Application programming interface for gesture operations
US11954322B2 (en) 2007-01-07 2024-04-09 Apple Inc. Application programming interface for gesture operations
US11449217B2 (en) 2007-01-07 2022-09-20 Apple Inc. Application programming interfaces for gesture operations
US10175876B2 (en) 2007-01-07 2019-01-08 Apple Inc. Application programming interfaces for gesture operations
US10963142B2 (en) 2007-01-07 2021-03-30 Apple Inc. Application programming interfaces for scrolling
US10936190B2 (en) 2008-03-04 2021-03-02 Apple Inc. Devices, methods, and user interfaces for processing touch events
US9971502B2 (en) 2008-03-04 2018-05-15 Apple Inc. Touch event model
US10521109B2 (en) 2008-03-04 2019-12-31 Apple Inc. Touch event model
US9798459B2 (en) 2008-03-04 2017-10-24 Apple Inc. Touch event model for web pages
US11740725B2 (en) 2008-03-04 2023-08-29 Apple Inc. Devices, methods, and user interfaces for processing touch events
US9965177B2 (en) 2009-03-16 2018-05-08 Apple Inc. Event recognition
US10719225B2 (en) 2009-03-16 2020-07-21 Apple Inc. Event recognition
US11755196B2 (en) 2009-03-16 2023-09-12 Apple Inc. Event recognition
US11163440B2 (en) 2009-03-16 2021-11-02 Apple Inc. Event recognition
US10732997B2 (en) 2010-01-26 2020-08-04 Apple Inc. Gesture recognizers with delegates for controlling and modifying gesture recognition
US12061915B2 (en) 2010-01-26 2024-08-13 Apple Inc. Gesture recognizers with delegates for controlling and modifying gesture recognition
US10216408B2 (en) 2010-06-14 2019-02-26 Apple Inc. Devices and methods for identifying user interface objects based on view hierarchy
US11429190B2 (en) 2013-06-09 2022-08-30 Apple Inc. Proxy gesture recognizer
US11106707B2 (en) 2016-07-27 2021-08-31 Google Llc Triggering application information
US10318562B2 (en) 2016-07-27 2019-06-11 Google Llc Triggering application information

Also Published As

Publication number Publication date
US9223590B2 (en) 2015-12-29
US20110167340A1 (en) 2011-07-07
WO2011085118A1 (en) 2011-07-14

Similar Documents

Publication Publication Date Title
US9223590B2 (en) System and method for issuing commands to applications based on contextual information
US11893230B2 (en) Semantic zoom animations
JP6404267B2 (en) Correction of language input
US9557909B2 (en) Semantic zoom linguistic helpers
US9141200B2 (en) Device, method, and graphical user interface for entering characters
US9442654B2 (en) Apparatus and method for conditionally enabling or disabling soft buttons
AU2011376310B2 (en) Programming interface for semantic zoom
US8806362B2 (en) Device, method, and graphical user interface for accessing alternate keys
US9052894B2 (en) API to replace a keyboard with custom controls
US20130067420A1 (en) Semantic Zoom Gestures
US20130067398A1 (en) Semantic Zoom
US20110175826A1 (en) Automatically Displaying and Hiding an On-screen Keyboard
US20110242138A1 (en) Device, Method, and Graphical User Interface with Concurrent Virtual Keyboards
US10048771B2 (en) Methods and devices for chinese language input to a touch screen

Legal Events

Date Code Title Description
AS Assignment

Owner name: APPLE INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MOORE, BRADFORD ALLEN;SWALES, STEPHEN W.;REEL/FRAME:037791/0785

Effective date: 20100526

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION