
CN110109730A - Device, Method, and Graphical User Interface for Providing Audiovisual Feedback - Google Patents

Device, Method, and Graphical User Interface for Providing Audiovisual Feedback

Info

Publication number
CN110109730A
CN110109730A (application CN201910417641.XA; granted publication CN110109730B)
Authority
CN
China
Prior art keywords
user interface
display
interface object
input
sound
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910417641.XA
Other languages
Chinese (zh)
Other versions
CN110109730B (en)
Inventor
M. I. Brown
A. E. Cieplinski
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Original Assignee
Apple Computer Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 14/866,570 (external priority; US9928029B2)
Application filed by Apple Computer Inc
Priority to CN201910417641.XA (patent CN110109730B)
Publication of CN110109730A
Application granted
Publication of CN110109730B
Legal status: Active
Anticipated expiration

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 - Sound input; Sound output
    • G06F3/167 - Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482 - Interaction with lists of selectable items, e.g. menus
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 - Sound input; Sound output
    • G06F3/165 - Management of the audio stream, e.g. setting of volume, audio stream path
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 - Arrangements for executing specific programs
    • G06F9/451 - Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This application relates to devices, methods, and graphical user interfaces for providing audiovisual feedback. An electronic device provides data for presenting a user interface with a plurality of user interface objects. A current focus is on a first user interface object of the plurality of user interface objects. The device receives an input. In response, based on a direction and/or magnitude of the input, the device provides data for moving the current focus from the first user interface object to a second user interface object, and provides sound information to provide a sound output concurrently with the movement of the current focus from the first user interface object to the second user interface object. A pitch of the sound output is based on a size of the first user interface object, a type of the first user interface object, a size of the second user interface object, and/or a type of the second user interface object.

Description

Device, Method, and Graphical User Interface for Providing Audiovisual Feedback
This application is a divisional application of the invention patent application No. 201610670699.1, filed August 15, 2016, entitled "Device, Method, and Graphical User Interface for Providing Audiovisual Feedback."
Technical Field
This disclosure relates generally to electronic devices that provide sound outputs, and more particularly to electronic devices that provide sound outputs in conjunction with graphical user interfaces.
Background
Many electronic devices use audiovisual interfaces as a way to provide feedback about user interactions with the devices. But conventional methods for providing audiovisual feedback are limited. For example, simple audiovisual feedback provides only limited information to a user. If an unintended operation is performed based on the simple audiovisual feedback, the user needs to provide additional inputs to cancel the operation. These methods therefore take longer than necessary, thereby wasting energy.
Summary
Accordingly, there is a need for electronic devices with more efficient methods and interfaces for providing audiovisual feedback. Such methods and interfaces optionally complement or replace conventional methods for providing audiovisual feedback. Such methods and interfaces reduce the number, extent, and/or nature of the inputs from a user and produce a more efficient human-machine interface. Further, such methods reduce the processing power consumed to process touch inputs, conserve power, reduce unnecessary/extraneous/repetitive inputs, and potentially reduce memory usage.
The above deficiencies and other problems associated with user interfaces for electronic devices with touch-sensitive surfaces are reduced or eliminated by the disclosed devices. In some embodiments, the device is a digital media player, such as Apple TV® from Apple Inc. of Cupertino, California. In some embodiments, the device is a desktop computer. In some embodiments, the device is portable (e.g., a notebook computer, tablet computer, or handheld device). In some embodiments, the device is a personal electronic device (e.g., a wearable electronic device, such as a watch). In some embodiments, the device has a touchpad. In some embodiments, the device has a touch-sensitive display (also known as a "touch screen" or "touch-screen display"). In some embodiments, the device has a graphical user interface (GUI), one or more processors, memory, and one or more modules, programs, or sets of instructions stored in the memory for performing multiple functions. In some embodiments, the user interacts with the GUI primarily through a remote control (e.g., one or more buttons of the remote control and/or a touch-sensitive surface of the remote control). Executable instructions for performing these functions are, optionally, included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors. Alternatively or additionally, executable instructions for performing these functions are, optionally, included in a transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
In accordance with some embodiments, a method is performed at an electronic device with one or more processors and memory. The device is in communication with a display and an audio system. The method includes providing, to the display, data to present a user interface generated by the device. The user interface includes a first user interface object with a first visual characteristic. The user interface further includes a second user interface object with a second visual characteristic that is distinct from that of the first user interface object. The device provides, to the audio system, sound information to provide a sound output. The sound output includes a first audio component that corresponds to the first user interface object. The sound output further includes a second audio component that corresponds to the second user interface object and is different from the first audio component. While the user interface is presented on the display and the sound output is provided, the device provides, to the display, data to update the user interface and provides, to the audio system, sound information to update the sound output. Updating the user interface and updating the sound output include: changing at least one visual characteristic of the first visual characteristic of the first user interface object in conjunction with changing the first audio component that corresponds to the first user interface object; and changing at least one visual characteristic of the second visual characteristic of the second user interface object in conjunction with changing the second audio component that corresponds to the second user interface object. Providing the data to update the user interface occurs independently of user input.
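A hedged sketch of the coupling described above: each object carries both a visual characteristic and its own audio component, and an ambient update (driven by a timer, not by user input) changes the two together. All class and attribute names, and the specific brightness-to-volume mapping, are assumptions invented for this illustration, not the patented implementation.

```python
# Hedged sketch: update each object's visual characteristic (brightness)
# in conjunction with the gain of its audio component, without user input.

class AmbientObject:
    def __init__(self, name, brightness, volume):
        self.name = name
        self.brightness = brightness  # visual characteristic, 0..1
        self.volume = volume          # audio component gain, 0..1

    def ambient_update(self, delta):
        """Change the visual characteristic and its audio component
        together, keeping the two in lockstep."""
        self.brightness = min(max(self.brightness + delta, 0.0), 1.0)
        self.volume = self.brightness  # coupled: louder when brighter

scene = [AmbientObject("bubble_a", 0.2, 0.2),
         AmbientObject("bubble_b", 0.8, 0.8)]
for obj in scene:
    obj.ambient_update(+0.1)  # fired by a timer tick, not a user input
```

The point of the sketch is only the "in conjunction with" relationship: a single update mutates a visual property and its corresponding audio component in one step.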
In accordance with some embodiments, a method is performed at an electronic device with one or more processors and memory. The device is in communication with a display and an audio system. The method includes: providing, to the display, data to present a user interface with a plurality of user interface objects, the plurality of user interface objects including a control user interface object at a first location on the display. The control user interface object is configured to control a respective parameter. The method further includes: receiving a first input that corresponds to a first interaction with the control user interface object on the display. The method further includes, while receiving the first input that corresponds to the first interaction with the control user interface object on the display: providing, to the display, data to move the control user interface object, in accordance with the first input, from the first location on the display to a second location on the display that is distinct from the first location on the display; and providing, to the audio system, first sound information to provide a first sound output, the first sound output having one or more characteristics that are distinct from the respective parameter controlled by the control user interface object and that change in accordance with the movement of the control user interface object from the first location on the display to the second location on the display.
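One concrete, purely illustrative reading of the above: dragging a slider thumb plays a feedback sound whose pitch and loudness track the thumb's on-screen position, characteristics distinct from the parameter (say, volume) that the slider actually controls. The function name, track bounds, and linear mapping below are all assumptions for the sketch.

```python
# Hedged sketch: a control (slider thumb) moves across a track; the
# drag sound varies with the thumb's on-screen position, not with the
# parameter the slider controls.

def drag_feedback(x, track_left=0.0, track_right=100.0,
                  min_pitch=200.0, max_pitch=800.0):
    """Map a thumb position to (pitch_hz, gain) for the drag sound.
    The linear mapping is an illustrative choice."""
    t = (x - track_left) / (track_right - track_left)
    t = min(max(t, 0.0), 1.0)            # clamp to the track
    pitch = min_pitch + t * (max_pitch - min_pitch)
    gain = 0.2 + 0.6 * t                 # louder toward the right edge
    return pitch, gain

# Simulate a drag from position 10 to position 90:
samples = [drag_feedback(x) for x in (10, 50, 90)]
```

As the claim requires, the sound's characteristics change with the object's movement between the first and second locations, while the controlled parameter plays no part in the mapping.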
In accordance with some embodiments, a method is performed at an electronic device with one or more processors and memory. The device is in communication with a display and an audio system. The method includes: providing, to the display, data to present a first user interface with a plurality of user interface objects, where a current focus is on a first user interface object of the plurality of user interface objects. The method further includes: while the display is presenting the first user interface, receiving an input that corresponds to a request to change a location of the current focus in the first user interface, the input having a direction and a magnitude. The method further includes, in response to receiving the input that corresponds to the request to change the location of the current focus in the first user interface: providing, to the display, data to move the current focus from the first user interface object to a second user interface object, where the second user interface object is selected for the current focus in accordance with the direction and/or the magnitude of the input; and providing, to the audio system, first sound information to provide a first sound output that corresponds to the movement of the current focus from the first user interface object to the second user interface object, where the first sound output is provided concurrently with the display of the current focus moving from the first user interface object to the second user interface object, and a pitch of the first sound output is determined based at least in part on a size of the first user interface object, a type of the first user interface object, a size of the second user interface object, and/or a type of the second user interface object.
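One way to read this claim is that the feedback pitch is a function of the geometry and kind of the objects the focus leaves and lands on. The sketch below is an illustrative interpretation, not the patented implementation: the object types, base frequencies, and size-scaling rule are all invented for the example.

```python
# Hedged sketch: derive the pitch of a focus-movement sound from the
# sizes and types of the source and destination user interface objects.
# The base frequencies and the fourth-root size scaling are
# illustrative choices, not values from the patent.

BASE_FREQ_BY_TYPE = {
    "icon": 880.0,    # smaller object kinds get a higher base pitch
    "button": 660.0,
    "poster": 440.0,
}

def focus_move_pitch(src_type, src_area, dst_type, dst_area):
    """Return a pitch (Hz) for the sound played while the current
    focus moves from the source object to the destination object."""
    src_base = BASE_FREQ_BY_TYPE[src_type]
    dst_base = BASE_FREQ_BY_TYPE[dst_type]
    # Blend the two base pitches, then lower the result as the
    # destination object grows relative to the source.
    blended = (src_base + dst_base) / 2.0
    size_ratio = dst_area / src_area
    return blended / (size_ratio ** 0.25)

# Focus jumps from a small icon to a large poster: pitch drops.
pitch = focus_move_pitch("icon", 100.0, "poster", 1600.0)
```

The sketch satisfies the claim's shape, the pitch depends on both objects' sizes and types, while leaving open how a real implementation would weight them.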
In accordance with some embodiments, a method is performed at an electronic device with one or more processors and memory. The device is in communication with a display and an audio system. The method includes: providing, to the display, data to present a first video information user interface that includes descriptive information about a first video. The method further includes: providing, to the audio system, sound information to provide a first sound output that corresponds to the first video during presentation of the first video information user interface by the display. The method further includes: while the display is presenting the first video information user interface that includes the descriptive information about the first video, receiving an input that corresponds to a request to play back the first video. The method further includes: in response to receiving the input that corresponds to the request to play back the first video, providing, to the display, data to replace the presentation of the first video information user interface with playback of the first video. The method further includes: during playback of the first video, receiving an input that corresponds to a request to display a second video information user interface about the first video. The method further includes, in response to receiving the input that corresponds to the request to display the second video information user interface about the first video: providing, to the display, data to replace the playback of the first video with the second video information user interface about the first video, and providing, to the audio system, sound information to provide, during presentation of the second video information user interface by the display, a second sound output that is different from the first sound output corresponding to the first video.
In accordance with some embodiments, a method is performed at an electronic device with one or more processors and memory. The device is in communication with a display. The method includes: providing, to the display, data to present a first video. The method further includes: while the display is presenting the first video, receiving an input that corresponds to a user request to pause the first video; and, in response to receiving the input that corresponds to the user request to pause the first video, pausing the presentation of the first video at a first playback position in a timeline of the first video. The method further includes: after pausing the presentation of the first video at the first playback position in the timeline of the first video, and while the presentation of the first video is paused, providing, to the display, data to present a plurality of selected still images from the first video. The plurality of selected still images is selected based on the first playback position at which the first video was paused.
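The selection rule is not spelled out at this point in the text; a natural reading is that the stills come from the timeline neighborhood of the pause position. The sketch below picks evenly spaced timestamps in a window centered on that position, one invented interpretation, with the window size and count as illustrative parameters.

```python
# Hedged sketch: choose still-image timestamps from a video based on
# the playback position at which it was paused. The 60-second window
# and count of 5 are illustrative, not values from the patent.

def select_still_positions(pause_pos, duration, count=5, window=60.0):
    """Return `count` timestamps (seconds) spaced evenly across a
    window around `pause_pos`, clipped to the [0, duration] timeline."""
    start = max(0.0, pause_pos - window / 2)
    end = min(duration, pause_pos + window / 2)
    if count == 1:
        return [pause_pos]
    step = (end - start) / (count - 1)
    return [start + i * step for i in range(count)]

# Pause a two-hour video at the 5-minute mark:
stills = select_still_positions(pause_pos=300.0, duration=7200.0)
```

A player would then decode one frame at each returned timestamp and present those frames while playback stays paused.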
In accordance with some embodiments, an electronic device is in communication with a display unit and an audio unit, the display unit configured to display a user interface, and the audio unit configured to provide sound outputs. The device includes a processing unit configured to provide, to the display unit, data to present a user interface generated by the device. The user interface includes a first user interface object with a first visual characteristic. The user interface further includes a second user interface object with a second visual characteristic that is distinct from that of the first user interface object. The device is configured to provide, to the audio unit, sound information to provide a sound output. The sound output includes a first audio component that corresponds to the first user interface object. The sound output further includes a second audio component that corresponds to the second user interface object and is different from the first audio component. While the user interface is presented on the display unit and the sound output is provided by the audio unit, the device provides, to the display unit, data to update the user interface and provides, to the audio unit, sound information to update the sound output. Updating the user interface and updating the sound output include: changing at least one visual characteristic of the first visual characteristic of the first user interface object in conjunction with changing the first audio component that corresponds to the first user interface object; and changing at least one visual characteristic of the second visual characteristic of the second user interface object in conjunction with changing the second audio component that corresponds to the second user interface object. Providing the data to update the user interface occurs independently of user input.
In accordance with some embodiments, an electronic device is in communication with a display unit, an audio unit, and, optionally, a remote controller unit, the display unit configured to display user interfaces, the audio unit configured to provide sound outputs, and the remote controller unit (which optionally includes a touch-sensitive surface unit) configured to detect user inputs and send them to the electronic device. The device includes a processing unit configured to provide, to the display unit, data to present a user interface with a plurality of user interface objects, the plurality of user interface objects including a control user interface object at a first location on the display unit. The control user interface object is configured to control a respective parameter. The processing unit is further configured to receive a first input that corresponds to a first interaction with the control user interface object on the display unit. The processing unit is further configured to, while receiving the first input that corresponds to the first interaction with the control user interface object on the display unit: provide, to the display unit, data to move the control user interface object, in accordance with the first input, from the first location on the display unit to a second location on the display unit that is distinct from the first location on the display unit; and provide, to the audio unit, first sound information to provide a first sound output, the first sound output having one or more characteristics that are distinct from the respective parameter controlled by the control user interface object and that change in accordance with the movement of the control user interface object from the first location on the display unit to the second location on the display unit.
In accordance with some embodiments, an electronic device is in communication with a display unit, an audio unit, and, optionally, a remote controller unit, the display unit configured to display user interfaces, the audio unit configured to provide sound outputs, and the remote controller unit (which optionally includes a touch-sensitive surface unit) configured to detect user inputs and send them to the electronic device. The device includes a processing unit configured to provide, to the display unit, data to present a first user interface with a plurality of user interface objects, where a current focus is on a first user interface object of the plurality of user interface objects. The processing unit is further configured to, while the display unit is presenting the first user interface, receive an input that corresponds to a request to change a location of the current focus in the first user interface, the input having a direction and a magnitude. The processing unit is further configured to, in response to receiving the input that corresponds to the request to change the location of the current focus in the first user interface: provide, to the display unit, data to move the current focus from the first user interface object to a second user interface object, where the second user interface object is selected for the current focus in accordance with the direction and/or the magnitude of the input; and provide, to the audio unit, first sound information to provide a first sound output that corresponds to the movement of the current focus from the first user interface object to the second user interface object, where the first sound output is provided concurrently with the display of the current focus moving from the first user interface object to the second user interface object, and a pitch of the first sound output is determined based at least in part on a size of the first user interface object, a type of the first user interface object, a size of the second user interface object, and/or a type of the second user interface object.
In accordance with some embodiments, an electronic device is in communication with a display unit, an audio unit, and, optionally, a remote controller unit, the display unit configured to display user interfaces, the audio unit configured to provide sound outputs, and the remote controller unit (which optionally includes a touch-sensitive surface unit) configured to detect user inputs and send them to the electronic device. The device includes a processing unit configured to provide, to the display unit, data to present a first video information user interface that includes descriptive information about a first video. The processing unit is also configured to provide, to the audio unit, sound information to provide a first sound output that corresponds to the first video during presentation of the first video information user interface by the display unit. The processing unit is further configured to, while the display unit is presenting the first video information user interface that includes the descriptive information about the first video, receive an input that corresponds to a request to play back the first video. The processing unit is further configured to, in response to receiving the input that corresponds to the request to play back the first video, provide, to the display unit, data to replace the presentation of the first video information user interface with playback of the first video. The processing unit is further configured to, during playback of the first video, receive an input that corresponds to a request to display a second video information user interface about the first video. The processing unit is further configured to, in response to receiving the input that corresponds to the request to display the second video information user interface about the first video: provide, to the display unit, data to replace the playback of the first video with the second video information user interface about the first video; and provide, to the audio unit, sound information to provide, during presentation of the second video information user interface by the display unit, a second sound output that is different from the first sound output corresponding to the first video.
In accordance with some embodiments, an electronic device includes a processing unit. The electronic device is in communication with a display unit. The display unit is configured to display video playback information. The processing unit is configured to: provide, to the display unit, data to present a first video; while the display unit is presenting the first video, receive an input that corresponds to a user request to pause the first video; in response to receiving the input that corresponds to the user request to pause the first video, pause the presentation of the first video at a first playback position in a timeline of the first video; and, after pausing the presentation of the first video at the first playback position in the timeline of the first video, and while the presentation of the first video is paused, provide, to the display unit, data to present a plurality of selected still images from the first video, where the plurality of selected still images is selected based on the first playback position at which the first video was paused.
In accordance with some embodiments, an electronic device is in communication with a display, an audio system, and, optionally, a remote controller (which optionally includes a touch-sensitive surface). The electronic device includes one or more processors, memory, and one or more programs; the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs include instructions for performing, or causing performance of, the operations of any of the methods described herein. In accordance with some embodiments, a computer-readable storage medium (e.g., a non-transitory computer-readable storage medium, or alternatively, a transitory computer-readable storage medium) has instructions stored therein, which, when executed by an electronic device in communication with a display and an audio system, cause the device to perform, or cause performance of, the operations of any of the methods described herein. In accordance with some embodiments, a graphical user interface on an electronic device with a display, a touch-sensitive surface, memory, and one or more processors to execute one or more programs stored in the memory includes one or more of the elements displayed in any of the methods described above, which are updated in response to inputs, as described in any of the methods described herein. In accordance with some embodiments, an electronic device is in communication with a display and an audio system, and includes means for performing, or causing performance of, the operations of any of the methods described herein. In accordance with some embodiments, an information processing apparatus, for use in an electronic device that is in communication with a display and an audio system, includes means for performing, or causing performance of, the operations of any of the methods described herein.
Thus, electronic devices in communication with displays and audio systems are provided with improved methods and interfaces for providing audiovisual feedback, thereby increasing the effectiveness, efficiency, and user satisfaction with such devices. Such methods and interfaces may complement or replace conventional methods for providing audiovisual feedback.
Brief Description of the Drawings
For a better understanding of the various described embodiments, reference should be made to the description of embodiments below, in conjunction with the following drawings, in which like reference numerals refer to corresponding parts throughout the figures.
Figure 1A is a block diagram illustrating a portable multifunction device with a touch-sensitive display, in accordance with some embodiments.

Figure 1B is a block diagram illustrating example components for event handling, in accordance with some embodiments.

Figure 2 illustrates a portable multifunction device having a touch screen, in accordance with some embodiments.

Figure 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface, in accordance with some embodiments.

Figure 4A illustrates an exemplary user interface for a menu of applications on a portable multifunction device, in accordance with some embodiments.

Figure 4B illustrates an exemplary user interface for a multifunction device with a touch-sensitive surface that is separate from the display, in accordance with some embodiments.

Figure 4C illustrates exemplary electronic devices that are in communication with a display and a touch-sensitive surface, in accordance with some embodiments, where, for at least a subset of the electronic devices, the display and/or the touch-sensitive surface is integrated into the electronic device.

Figures 5A-5SS illustrate exemplary user interfaces for providing audiovisual feedback, in accordance with some embodiments.
Figures 6A-6C are flow diagrams illustrating a method of changing visual characteristics of a user interface object in conjunction with changing an audio component that corresponds to the user interface object, in accordance with some embodiments.

Figures 7A-7D are flow diagrams illustrating a method of providing sound information that corresponds to a user's interaction with a user interface object, in accordance with some embodiments.

Figures 8A-8C are flow diagrams illustrating a method of providing sound information that corresponds to a user's interaction with a user interface object, in accordance with some embodiments.

Figures 9A-9C are flow diagrams illustrating a method of providing sound information for a video information user interface, in accordance with some embodiments.

Figures 10A-10B are flow diagrams illustrating a method of providing audiovisual information when a video is in a paused state, in accordance with some embodiments.

Figure 11 is a functional block diagram of an electronic device, in accordance with some embodiments.

Figure 12 is a functional block diagram of an electronic device, in accordance with some embodiments.

Figure 13 is a functional block diagram of an electronic device, in accordance with some embodiments.
Detailed Description
Many electronic devices update graphical user interfaces and provide audible feedback in response to user inputs. Conventional methods include providing simple audible feedback in response to the same user input. For example, in response to each user input that corresponds to a request to move a current focus, the same audible feedback is provided. Such simple audible feedback does not provide the context of the device's response. If the context of the interaction is not fully understood by the user, the user may perform unintended operations. Unintended operations may be frustrating for users. In addition, such unintended operations require cancelling the unintended operations and providing user inputs again, until the desired operation is performed, which can be cumbersome and inefficient.
In some embodiments described below, an improved method for providing audible feedback includes providing data for presenting a user interface with a control user interface object (e.g., a thumb of a slider bar). When an input that corresponds to a request to move the control user interface object is received, data for moving the control user interface object is provided, and sound information is provided for a sound output with characteristics that change in accordance with the movement of the control user interface object. Thus, the characteristics of the sound output indicate the movement of the control user interface object.
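As an illustration of the idea above, the following Python sketch maps a slider thumb's state to sound-output characteristics. The function name, parameter ranges, and the specific pitch/volume mapping are illustrative assumptions, not the patented implementation: pitch rises with the thumb's position along the slider bar, and volume rises with how fast the thumb is moving, so the sound output alone indicates the movement of the control user interface object.

```python
def slider_sound(position, velocity, base_pitch_hz=440.0, max_velocity=1.0):
    """Map a slider thumb's state to sound-output characteristics.

    position: thumb position in [0.0, 1.0] along the slider bar.
    velocity: current speed of the thumb, in track-lengths per second.
    Returns a (pitch_hz, volume) pair: pitch rises with position
    (440 Hz at the left end, 880 Hz at the right end), and volume
    rises with how fast the thumb is moving.
    """
    pitch_hz = base_pitch_hz * (1.0 + position)      # higher pitch further right
    volume = min(abs(velocity) / max_velocity, 1.0)  # louder when moving faster
    return pitch_hz, volume
```

A real implementation would feed these values to an audio engine each frame; the point of the sketch is only that the sound characteristics are a function of the control object's movement.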
Furthermore, in some other embodiments described below, an improved method for providing audible feedback includes providing data for presenting a user interface with a plurality of icons, where a current focus is on a first icon. In response to receiving an input, data for moving the current focus to a second icon is provided, and sound information is provided for a sound output, where the pitch of the sound output is determined based on the size or type of the first icon and/or the size or type of the second icon.
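One way a size-dependent pitch could be realized is sketched below in Python; the function name, the reference values, and the particular mapping (larger icons producing a lower-pitched sound output, by scaling a reference pitch by the average size of the source and destination icons) are hypothetical assumptions made for illustration.

```python
def focus_move_pitch(source_icon_size, destination_icon_size,
                     reference_size=100.0, reference_pitch_hz=880.0):
    """Determine the pitch of the focus-move sound output from the sizes
    of the icons involved: the larger the icons, the lower the pitch.
    Sizes are in arbitrary units (e.g., points of icon width)."""
    average_size = (source_icon_size + destination_icon_size) / 2.0
    return reference_pitch_hz * reference_size / average_size
```

Under this mapping, moving the current focus among large movie posters would sound deeper than moving it among small menu icons, so the pitch itself conveys what kind of icon the focus landed on.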
In addition, conventional methods for pausing a video include presenting a single image of the video at the position where the playback of the video was paused. A user who pauses the playback of a video and resumes the playback at a later time returns with only limited information about where the video was playing. Thus, after the playback of the video is resumed, the user may need to spend time understanding the context of the video.
In some embodiments described below, an improved method for pausing the playback of a video includes providing data for presenting multiple still images from the video when the playback of the video is paused. The multiple still images from the video help the user understand the context of the video around where the playback was paused, even before the playback of the video is resumed. Thus, the user can understand the context of the video soon after the playback of the video is resumed.
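A minimal sketch of how such still images might be chosen, assuming evenly spaced frames centered on the pause position (the function, its parameters, and the centering policy are illustrative assumptions rather than the patented behavior):

```python
def still_image_timestamps(pause_time, count=5, span=10.0, duration=None):
    """Return `count` evenly spaced timestamps (in seconds) for still images
    drawn from a window of `span` seconds centered on the pause position.
    The window is shifted so that it never starts before 0.0 and, when the
    video duration is known, never extends past the end of the video."""
    start = max(pause_time - span / 2.0, 0.0)
    if duration is not None:
        start = min(start, max(duration - span, 0.0))
    step = span / (count - 1)
    return [start + i * step for i in range(count)]
```

For example, pausing at 60 seconds would yield frames at 55, 57.5, 60, 62.5, and 65 seconds, giving the user a visual neighborhood of the pause point rather than a single frame.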
Moreover, conventional methods for presenting a video information user interface include providing a single sound output regardless of whether playback of the video has been initiated (e.g., regardless of whether the user has returned to the video information user interface after watching at least a portion of the video). Thus, the sound output provides only limited, fixed information about the video.
In some embodiments described below, an improved method of presenting a video information user interface includes providing, after playback of the video has been initiated, a sound output that is different from a stock sound output, so that the sound output can be used to convey additional information, such as the mood of the video where its playback was interrupted.
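The distinction between the stock sound and the post-playback sound could be modeled as follows. This is a Python sketch; the sound identifiers and the notion of an "interruption mood" label are assumptions made for illustration, not names from the specification.

```python
def video_info_sound(playback_initiated, interruption_mood=None,
                     stock_sound="stock_chime"):
    """Select the sound output for a video information user interface.
    Before playback has ever been initiated, the stock sound is used;
    afterwards, a sound reflecting the mood at the interruption point
    (e.g., 'tense', 'calm') is used instead, conveying extra information
    beyond the fixed stock output."""
    if not playback_initiated or interruption_mood is None:
        return stock_sound
    return "mood_" + interruption_mood
```

A user returning to the information page mid-thriller would thus hear a different output than one who has never started the video, and the difference itself carries information.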
Moreover, conventional methods for presenting a screen saver include presenting a video. However, the screen saver includes no sound output, or includes only a limited sound output.
In some embodiments described below, an improved method of presenting a screen saver includes providing a sound output that includes audio components corresponding to user interface objects displayed in the screen saver. Thus, the sound output can be used to audibly indicate additional information, such as visual characteristics of the displayed user interface objects and changes in the state of the screen saver.
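As a sketch of the screen-saver case, each displayed user interface object could contribute one audio component whose relative volume tracks the object's visual prominence. The Python below is illustrative only; the object fields, the "_tone" naming, and the normalization scheme are all assumptions.

```python
def screensaver_audio_mix(displayed_objects):
    """Build a sound output from per-object audio components.
    `displayed_objects` is a list of dicts, each with a 'name' and a
    'scale' (visual prominence). Each object contributes one audio
    component whose volume is its share of the total prominence, so the
    mix audibly reflects the visual state of the screen saver."""
    total = sum(obj["scale"] for obj in displayed_objects)
    if total == 0:
        return []
    return [(obj["name"] + "_tone", obj["scale"] / total)
            for obj in displayed_objects]
```

As an object grows or shrinks on screen, its component's share of the mix changes with it, so a listener can follow the screen saver's state changes without looking at the display.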
Below, Figures 1A-1B, 2, and 3 provide a description of example devices. Figures 4A-4C and 5A-5SS illustrate user interfaces for providing audible feedback. Figures 6A-6C illustrate a flow diagram of a method of changing visual characteristics of a user interface in conjunction with changing an audio component that corresponds to a user interface object, in accordance with some embodiments. Figures 7A-7D illustrate a flow diagram of a method of providing sound output information that corresponds to a user's interaction with a user interface object, in accordance with some embodiments. Figures 8A-8C illustrate a flow diagram of a method of providing sound output information that corresponds to a user's interaction with a user interface object, in accordance with some embodiments. Figures 9A-9C illustrate a flow diagram of a method of providing sound output for a video information user interface. Figures 10A-10B illustrate a flow diagram of a method of providing audiovisual information when a video is in a paused state. The user interfaces in Figures 5A-5SS are used to illustrate the processes in Figures 6A-6C, 7A-7D, 8A-8C, 9A-9C, and 10A-10B.
Exemplary Devices
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the various described embodiments. However, it will be apparent to one of ordinary skill in the art that the various described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
It will also be understood that, although the terms "first", "second", etc. are, in some instances, used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first user interface object could be termed a second user interface object, and, similarly, a second user interface object could be termed a first user interface object, without departing from the scope of the various described embodiments. The first user interface object and the second user interface object are both user interface objects, but they are not the same user interface object, unless the context clearly indicates otherwise.
The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or", as used herein, refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "includes", "including", "comprises", and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term "if" is, optionally, construed to mean "when" or "upon" or "in response to determining" or "in response to detecting", depending on the context. Similarly, the phrase "if it is determined" or "if [a stated condition or event] is detected" is, optionally, construed to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]", depending on the context.
Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described. In some embodiments, the device is a digital media player, such as Apple TV from Apple Inc. of Cupertino, California. In some embodiments, the device is a portable communications device, such as a mobile telephone, that also contains other functions, such as PDA and/or music player functions. Exemplary embodiments of portable multifunction devices include, without limitation, the iPhone, iPod Touch, and iPad devices from Apple Inc. of Cupertino, California. Other portable electronic devices, such as laptop or tablet computers with touch-sensitive surfaces (e.g., touch-screen displays and/or touchpads), are, optionally, used. It should also be understood that, in some embodiments, the device is not a portable communications device, but is a desktop computer. In some embodiments, the desktop computer has a touch-sensitive surface (e.g., a touch-screen display and/or a touchpad).
In the discussion that follows, an electronic device that communicates with and/or includes a display and a touch-sensitive surface is described. It should be understood, however, that the electronic device optionally includes one or more other physical user-interface devices, such as a physical keyboard, a mouse, and/or a joystick.
The device typically supports a variety of applications, such as one or more of the following: a note taking application, a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video recorder application, a web browsing application, a digital music player application, and/or a digital video player application.
The various applications that are executed on the device optionally use at least one common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface, as well as corresponding information displayed on the device, are, optionally, adjusted and/or varied from one application to the next and/or within a respective application. In this way, a common physical architecture of the device (such as the touch-sensitive surface) optionally supports the variety of applications with user interfaces that are intuitive and transparent to the user.
Attention is now directed toward embodiments of portable devices with touch-sensitive displays. Figure 1A is a block diagram illustrating portable multifunction device 100 with touch-sensitive display system 112, in accordance with some embodiments. Touch-sensitive display system 112 is sometimes called a "touch screen" for convenience, and is sometimes simply called a touch-sensitive display. Device 100 includes memory 102 (which optionally includes one or more non-transitory computer-readable storage media), memory controller 122, one or more processing units (CPUs) 120, peripherals interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, input/output (I/O) subsystem 106, other input or control devices 116, and external port 124. Device 100 optionally includes one or more optical sensors 164. Device 100 optionally includes one or more intensity sensors 165 for detecting intensity of contacts on device 100 (e.g., on a touch-sensitive surface, such as touch-sensitive display system 112 of device 100). Device 100 optionally includes one or more tactile output generators 167 for generating tactile outputs on device 100 (e.g., generating tactile outputs on a touch-sensitive surface, such as touch-sensitive display system 112 of device 100 or touchpad 335 of device 300). These components optionally communicate over one or more communication buses or signal lines 103.
As used in the specification and claims, the term "tactile output" refers to physical displacement of a device relative to a previous position of the device, physical displacement of a component (e.g., a touch-sensitive surface) of a device relative to another component (e.g., the housing) of the device, or displacement of the component relative to a center of mass of the device, that will be detected by a user with the user's sense of touch. For example, in situations where the device or the component of the device is in contact with a surface of a user that is sensitive to touch (e.g., a finger, palm, or other part of a user's hand), the tactile output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in physical characteristics of the device or the component of the device. For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or trackpad) is, optionally, interpreted by the user as a "down click" or "up click" of a physical actuator button. In some cases, a user will feel a tactile sensation, such as a "down click" or "up click", even when there is no movement of a physical actuator button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movements. As another example, movement of the touch-sensitive surface is, optionally, interpreted or sensed by the user as "roughness" of the touch-sensitive surface, even when there is no change in the smoothness of the touch-sensitive surface. While such interpretations of touch by a user will be subject to the individualized sensory perceptions of the user, there are many sensory perceptions of touch that are common to a large majority of users. Thus, when a tactile output is described as corresponding to a particular sensory perception of a user (e.g., an "up click", a "down click", "roughness"), unless otherwise stated, the generated tactile output corresponds to physical displacement of the device, or a component thereof, that will generate the described sensory perception for a typical (or average) user.
It should be appreciated that device 100 is only one example of a portable multifunction device, and that device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of the components. The various components shown in Figure 1A are implemented in hardware, software, firmware, or a combination thereof, including one or more signal processing and/or application-specific integrated circuits.
Memory 102 optionally includes high-speed random access memory and optionally also includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to memory 102 by other components of device 100, such as CPU(s) 120 and peripherals interface 118, is, optionally, controlled by memory controller 122.
Peripherals interface 118 can be used to couple input and output peripherals of the device to CPU(s) 120 and memory 102. The one or more processors 120 run or execute various software programs and/or sets of instructions stored in memory 102, to perform various functions for device 100 and to process data.
In some embodiments, peripherals interface 118, CPU(s) 120, and memory controller 122 are, optionally, implemented on a single chip, such as chip 104. In some other embodiments, they are, optionally, implemented on separate chips.
RF (radio frequency) circuitry 108 receives and sends RF signals, also called electromagnetic signals. RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communications networks and other communications devices via the electromagnetic signals. RF circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chipset, a subscriber identity module (SIM) card, memory, and so forth. RF circuitry 108 optionally communicates by wireless communication with networks, such as the Internet, also referred to as the World Wide Web (WWW), an intranet, and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN), and/or a metropolitan area network (MAN), and with other devices. The wireless communication optionally uses any of a plurality of communications standards, protocols, and technologies, including but not limited to: Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), Evolution, Data-Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPDA), long term evolution (LTE), near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Bluetooth Low Energy (BTLE), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11ac, IEEE 802.11ax, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), voice over Internet Protocol (VoIP), Wi-MAX, a protocol for e-mail (e.g., Internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between a user and device 100. Audio circuitry 110 receives audio data from peripherals interface 118, converts the audio data to an electrical signal, and transmits the electrical signal to speaker 111. Speaker 111 converts the electrical signal to human-audible sound waves. Audio circuitry 110 also receives electrical signals converted by microphone 113 from sound waves. Audio circuitry 110 converts the electrical signal to audio data and transmits the audio data to peripherals interface 118 for processing. Audio data is, optionally, retrieved from and/or transmitted to memory 102 and/or RF circuitry 108 by peripherals interface 118. In some embodiments, audio circuitry 110 also includes a headset jack (e.g., 212, Figure 2). The headset jack provides an interface between audio circuitry 110 and removable audio input/output peripherals, such as output-only headphones or a headset with both output (e.g., a headphone for one or both ears) and input (e.g., a microphone).
I/O subsystem 106 couples input/output peripherals on device 100, such as touch screen 112 and other input or control devices 116, with peripherals interface 118. I/O subsystem 106 optionally includes display controller 156, optical sensor controller 158, intensity sensor controller 159, haptic feedback controller 161, and one or more input controllers 160 for other input or control devices. The one or more input controllers 160 receive/send electrical signals from/to other input or control devices 116. The other input or control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and so forth. In some alternative embodiments, input controller(s) 160 are, optionally, coupled with any (or none) of the following: a keyboard, an infrared port, a USB port, a stylus, and a pointer device such as a mouse. The one or more buttons (e.g., 208, Figure 2) optionally include an up/down button for volume control of speaker 111 and/or microphone 113. The one or more buttons optionally include a push button (e.g., 206, Figure 2).
Touch-sensitive display system 112 provides an input interface and an output interface between the device and a user. Display controller 156 receives electrical signals from and/or sends electrical signals to touch-sensitive display system 112. Touch-sensitive display system 112 displays visual output to the user. The visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively termed "graphics"). In some embodiments, some or all of the visual output corresponds to user-interface objects.
Touch-sensitive display system 112 has a touch-sensitive surface, sensor, or set of sensors that accepts input from the user based on haptic and/or tactile contact. Touch-sensitive display system 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or breaking of the contact) on touch-sensitive display system 112, and convert the detected contact into interaction with user-interface objects (e.g., one or more soft keys, icons, web pages, or images) that are displayed on touch-sensitive display system 112. In some embodiments, a point of contact between touch-sensitive display system 112 and the user corresponds to a finger of the user or a stylus.
Touch-sensitive display system 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. Touch-sensitive display system 112 and display controller 156 optionally detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch-sensitive display system 112. In some embodiments, projected mutual capacitance sensing technology is used, such as that found in the iPhone, iPod Touch, and iPad from Apple Inc. of Cupertino, California.
Touch-sensitive display system 112 optionally has a video resolution in excess of 100 dpi. In some embodiments, the touch-screen video resolution is in excess of 400 dpi (e.g., 500 dpi, 800 dpi, or greater). The user optionally makes contact with touch-sensitive display system 112 using any suitable object or appendage, such as a stylus, a finger, and so forth. In some embodiments, the user interface is designed to work with finger-based contacts and gestures, which can be less precise than stylus-based input due to the larger area of contact of a finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the actions desired by the user.
In some embodiments, in addition to the touch screen, device 100 optionally includes a touchpad (not shown) for activating or deactivating particular functions. In some embodiments, the touchpad is a touch-sensitive area of the device that, unlike the touch screen, does not display visual output. The touchpad is, optionally, a touch-sensitive surface that is separate from touch-sensitive display system 112, or an extension of the touch-sensitive surface formed by the touch screen.
Device 100 also includes power system 162 for powering the various components. Power system 162 optionally includes a power management system, one or more power sources (e.g., battery, alternating current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light-emitting diode (LED)), and any other components associated with the generation, management, and distribution of power in portable devices.
Device 100 optionally also includes one or more optical sensors 164. Figure 1A shows an optical sensor coupled with optical sensor controller 158 in I/O subsystem 106. Optical sensor(s) 164 optionally include charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) phototransistors. Optical sensor(s) 164 receive light from the environment, projected through one or more lenses, and convert the light to data representing an image. In conjunction with imaging module 143 (also called a camera module), optical sensor(s) 164 optionally capture still images and/or video. In some embodiments, an optical sensor is located on the back of device 100, opposite touch-sensitive display system 112 on the front of the device, so that the touch-screen display can be used as a viewfinder for still and/or video image acquisition. In some embodiments, another optical sensor is located on the front of the device, so that an image of the user is obtained (e.g., for selfies, for videoconferencing while the user views the other video conference participants on the touch-screen display, etc.).
Device 100 optionally also includes one or more contact intensity sensors 165. Figure 1A shows a contact intensity sensor coupled with intensity sensor controller 159 in I/O subsystem 106. Contact intensity sensor(s) 165 optionally include one or more piezoresistive strain gauges, capacitive force sensors, electric force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors used to measure the force (or pressure) of a contact on a touch-sensitive surface). Contact intensity sensor(s) 165 receive contact intensity information (e.g., pressure information or a proxy for pressure information) from the environment. In some embodiments, at least one contact intensity sensor is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112). In some embodiments, at least one contact intensity sensor is located on the back of device 100, opposite touch-screen display system 112, which is located on the front of device 100.
Device 100 optionally also includes one or more proximity sensors 166. Figure 1A shows proximity sensor 166 coupled with peripherals interface 118. Alternately, proximity sensor 166 is coupled with input controller 160 in I/O subsystem 106. In some embodiments, the proximity sensor turns off and disables touch-sensitive display system 112 when the multifunction device is placed near the user's ear (e.g., when the user is making a phone call).
Device 100 optionally also includes one or more tactile output generators 167. Figure 1A shows a tactile output generator coupled with haptic feedback controller 161 in I/O subsystem 106. Tactile output generator(s) 167 optionally include one or more electroacoustic devices, such as speakers or other audio components, and/or electromechanical devices that convert energy into linear motion, such as a motor, solenoid, electroactive polymer, piezoelectric actuator, electrostatic actuator, or other tactile-output-generating component (e.g., a component that converts electrical signals into tactile outputs on the device). In some embodiments, tactile output generator(s) 167 receive tactile feedback generation instructions from haptic feedback module 133, and generate tactile outputs on device 100 that are capable of being sensed by a user of device 100. In some embodiments, at least one tactile output generator is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112), and, optionally, generates a tactile output by moving the touch-sensitive surface vertically (e.g., in/out of a surface of device 100) or laterally (e.g., back and forth in the same plane as a surface of device 100). In some embodiments, at least one tactile output generator sensor is located on the back of device 100, opposite touch-screen display system 112, which is located on the front of device 100.
Device 100 optionally also includes one or more accelerometers 168. Figure 1A shows accelerometer 168 coupled with peripherals interface 118. Alternately, accelerometer 168 is, optionally, coupled with an input controller 160 in I/O subsystem 106. In some embodiments, information is displayed on the touch-screen display in a portrait view or a landscape view based on an analysis of data received from the one or more accelerometers. Device 100 optionally includes, in addition to accelerometer(s) 168, a magnetometer (not shown) and a GPS (or GLONASS or other global navigation system) receiver (not shown) for obtaining information concerning the location and orientation (e.g., portrait or landscape) of device 100.
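As an illustration only, the portrait/landscape decision described above can be sketched from accelerometer data. The axis convention, function name, and threshold below are assumptions for the sketch, not taken from this specification.

```python
# Hypothetical sketch: choosing portrait vs. landscape from accelerometer data.
# Axis convention assumed: x along the device's short edge, y along its long
# edge, values in units of g (gravity).

def orientation_from_accel(ax: float, ay: float) -> str:
    """Return 'portrait' or 'landscape' from gravity components.

    When gravity dominates the long edge, the device is held upright
    (portrait); when it dominates the short edge, it is on its side.
    """
    return "portrait" if abs(ay) >= abs(ax) else "landscape"

print(orientation_from_accel(0.05, -0.99))  # device upright -> portrait
print(orientation_from_accel(-0.98, 0.10))  # device on its side -> landscape
```

A real implementation would also low-pass filter the samples and apply hysteresis so the view does not flip at a 45-degree tilt.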
In some embodiments, the software components stored in memory 102 include operating system 126, communication module (or set of instructions) 128, contact/motion module (or set of instructions) 130, graphics module (or set of instructions) 132, haptic feedback module (or set of instructions) 133, text input module (or set of instructions) 134, Global Positioning System (GPS) module (or set of instructions) 135, and applications (or sets of instructions) 136. Furthermore, in some embodiments, memory 102 (Figure 1A) or memory 370 (Figure 3) stores device/global internal state 157, as shown in Figures 1A and 3. Device/global internal state 157 includes one or more of: active application state, indicating which applications, if any, are currently active; display state, indicating what applications, views, or other information occupy various regions of touch-sensitive display system 112; sensor state, including information obtained from the device's various sensors and other input or control devices 116; and location and/or positional information concerning the device's location and/or attitude.
Operating system 126 (e.g., iOS, Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components.
Communication module 128 facilitates communication with other devices over one or more external ports 124 and also includes various software components for handling data received by RF circuitry 108 and/or external port 124. External port 124 (e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted for coupling directly to other devices or indirectly over a network (e.g., the Internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used on iPod devices from Apple Inc. of Cupertino, California. In some embodiments, the external port is a Lightning connector that is the same as, or similar to and/or compatible with, the Lightning connector used on iPod devices from Apple Inc. of Cupertino, California.
Contact/motion module 130 optionally detects contact with touch-sensitive display system 112 (in conjunction with display controller 156) and other touch-sensitive devices (e.g., a touchpad or physical click wheel). Contact/motion module 130 includes various software components for performing various operations related to detection of contact (e.g., by a finger or by a stylus), such as determining if contact has occurred (e.g., detecting a finger-down event), determining an intensity of the contact (e.g., the force or pressure of the contact, or a substitute for the force or pressure of the contact), determining if there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining if the contact has ceased (e.g., detecting a finger-up event or a break in contact). Contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or acceleration (a change in magnitude and/or direction) of the point of contact. These operations are, optionally, applied to single contacts (e.g., one-finger contacts or stylus contacts) or to multiple simultaneous contacts (e.g., "multi-touch"/multiple-finger contacts). In some embodiments, contact/motion module 130 and display controller 156 detect contact on a touchpad.
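The derivation of speed and direction from a series of contact data can be sketched as follows. This is an illustrative sketch only; the function name and the `(t, x, y)` sample layout are assumptions, not part of this specification.

```python
# Illustrative sketch: deriving speed (magnitude) and direction from a series
# of contact samples, in the spirit of contact/motion module 130.
import math

def contact_motion(samples):
    """samples: list of (t, x, y) tuples in seconds and pixels.

    Returns (speed, direction_radians) over the most recent interval,
    or (0.0, 0.0) when fewer than two samples are available.
    """
    if len(samples) < 2:
        return 0.0, 0.0
    (t0, x0, y0), (t1, x1, y1) = samples[-2], samples[-1]
    dx, dy = x1 - x0, y1 - y0
    speed = math.hypot(dx, dy) / (t1 - t0)  # magnitude of velocity
    direction = math.atan2(dy, dx)          # direction of motion
    return speed, direction

speed, direction = contact_motion([(0.00, 10, 10), (0.01, 13, 14)])
print(round(speed), round(math.degrees(direction)))  # 500 53
```

Acceleration (a change in magnitude and/or direction) would follow by differencing successive velocity estimates in the same way.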
Contact/motion module 130 optionally detects a gesture input by a user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different motions, timings, and/or intensities of detected contacts). Thus, a gesture is, optionally, detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger-down event followed by detecting a finger-up (lift-off) event at the same position (or substantially the same position) as the finger-down event (e.g., at the position of an icon). As another example, detecting a finger swipe gesture on the touch surface includes detecting a finger-down event followed by detecting one or more finger-dragging events, and subsequently followed by detecting a finger-up (lift-off) event. Similarly, tap, swipe, drag, and other gestures are optionally detected for a stylus by detecting a particular contact pattern for the stylus.
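The tap-versus-swipe patterns above can be sketched as a small classifier over a sub-event sequence. This is a minimal sketch under stated assumptions (the event-tuple format and the 10-pixel "substantially the same position" tolerance are invented for illustration), not the patented implementation.

```python
# Minimal sketch: classifying a tap vs. a swipe from a sequence of
# sub-events, mirroring the contact patterns described above.

def classify_gesture(events, slop=10.0):
    """events: list of ('down'|'move'|'up', x, y) tuples.

    Returns 'tap' (down then up at substantially the same position),
    'swipe' (movement beyond `slop` pixels before lift-off), or None
    when the sequence does not start with 'down' and end with 'up'.
    """
    if not events or events[0][0] != "down" or events[-1][0] != "up":
        return None
    _, x0, y0 = events[0]
    moved = any(abs(x - x0) > slop or abs(y - y0) > slop
                for kind, x, y in events if kind in ("move", "up"))
    return "swipe" if moved else "tap"

print(classify_gesture([("down", 5, 5), ("up", 6, 5)]))                     # tap
print(classify_gesture([("down", 5, 5), ("move", 60, 5), ("up", 120, 5)]))  # swipe
```

A production recognizer would also consult timing (e.g., a maximum tap duration) and, per the specification, optionally contact intensity.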
Graphics module 132 includes various known software components for rendering and displaying graphics on touch-sensitive display system 112 or other displays, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual property) of graphics that are displayed. As used herein, the term "graphics" includes any object that can be displayed to a user, including, without limitation, text, web pages, icons (such as user-interface objects including soft keys), digital images, videos, animations, and the like.
In some embodiments, graphics module 132 stores data representing graphics to be used. Each graphic is, optionally, assigned a corresponding code. Graphics module 132 receives, from applications etc., one or more codes specifying graphics to be displayed along with, if necessary, coordinate data and other graphic property data, and then generates screen image data to output to display controller 156.
Haptic feedback module 133 includes various software components for generating instructions (e.g., instructions used by haptic feedback controller 161) to produce tactile outputs, using tactile output generator(s) 167, at one or more locations on device 100 in response to user interactions with device 100.
Text input module 134, which is, optionally, a component of graphics module 132, provides soft keyboards for entering text in various applications (e.g., contacts 137, e-mail 140, IM 141, browser 147, and any other application that needs text input).
GPS module 135 determines the location of the device and provides this information for use in various applications (e.g., to telephone 138 for use in location-based dialing, to camera 143 as picture/video metadata, and to applications that provide location-based services such as weather widgets, local yellow page widgets, and map/navigation widgets).
Applications 136 optionally include the following modules (or sets of instructions), or a subset or superset thereof:
contacts module 137 (sometimes called an address book or contact list);
telephone module 138;
videoconferencing module 139;
e-mail client module 140;
instant messaging (IM) module 141;
workout support module 142;
camera module 143 for still and/or video images;
image management module 144;
browser module 147;
calendar module 148;
widget modules 149, which optionally include one or more of: weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, dictionary widget 149-5, and other widgets obtained by the user, as well as user-created widgets 149-6;
widget creator module 150 for making user-created widgets 149-6;
search module 151;
video and music player module 152, which is, optionally, made up of a video player module and a music player module;
notes module 153;
map module 154; and/or
online video module 155.
Examples of other applications 136 that are, optionally, stored in memory 102 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
In conjunction with touch-sensitive display system 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, contacts module 137 includes executable instructions to manage an address book or contact list (e.g., stored in application internal state 192 of contacts module 137 in memory 102 or memory 370), including: adding name(s) to the address book; deleting name(s) from the address book; associating telephone number(s), e-mail address(es), physical address(es), or other information with a name; associating an image with a name; categorizing and sorting names; and providing telephone numbers and/or e-mail addresses to initiate and/or facilitate communications by telephone 138, video conference 139, e-mail 140, IM 141, and so forth.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch-sensitive display system 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, telephone module 138 includes executable instructions to enter a sequence of characters corresponding to a telephone number, access one or more telephone numbers in address book 137, modify a telephone number that has been entered, dial a respective telephone number, conduct a conversation, and disconnect or hang up when the conversation is completed. As noted above, the wireless communication optionally uses any of a plurality of communication standards, protocols, and technologies.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch-sensitive display system 112, display controller 156, optical sensor(s) 164, optical sensor controller 158, contact/motion module 130, graphics module 132, text input module 134, contact list 137, and telephone module 138, videoconferencing module 139 includes executable instructions to initiate, conduct, and terminate a video conference between a user and one or more other participants in accordance with user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, e-mail client module 140 includes executable instructions to create, send, receive, and manage e-mail in response to user instructions. In conjunction with image management module 144, e-mail client module 140 makes it very easy to create and send e-mails with still or video images taken with camera module 143.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, instant messaging module 141 includes executable instructions to enter a sequence of characters corresponding to an instant message, to modify previously entered characters, to transmit a respective instant message (for example, using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for telephony-based instant messages, or using XMPP, SIMPLE, Apple Push Notification Service (APNs), or IMPS for Internet-based instant messages), to receive instant messages, and to view received instant messages. In some embodiments, transmitted and/or received instant messages optionally include graphics, photos, audio files, video files, and/or other attachments as are supported in an MMS and/or an Enhanced Messaging Service (EMS). As used herein, "instant messaging" refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and Internet-based messages (e.g., messages sent using XMPP, SIMPLE, APNs, or IMPS).
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, GPS module 135, map module 154, and music player module 146, workout support module 142 includes executable instructions to create workouts (e.g., with time, distance, and/or calorie burning goals); communicate with workout sensors (in sports devices and smart watches); receive workout sensor data; calibrate sensors used to monitor a workout; select and play music for a workout; and display, store, and transmit workout data.
In conjunction with touch-sensitive display system 112, display controller 156, optical sensor(s) 164, optical sensor controller 158, contact/motion module 130, graphics module 132, and image management module 144, camera module 143 includes executable instructions to capture still images or video (including a video stream) and store them in memory 102, modify characteristics of a still image or video, and/or delete a still image or video from memory 102.
In conjunction with touch-sensitive display system 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and camera module 143, image management module 144 includes executable instructions to arrange, modify (e.g., edit), or otherwise manipulate, label, delete, present (e.g., in a digital slide show or album), and store still and/or video images.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact/motion module 130, graphics module 132, and text input module 134, browser module 147 includes executable instructions to browse the Internet in accordance with user instructions, including searching, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact/motion module 130, graphics module 132, text input module 134, e-mail client module 140, and browser module 147, calendar module 148 includes executable instructions to create, display, modify, and store calendars and data associated with calendars (e.g., calendar entries, to-do lists, etc.) in accordance with user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact/motion module 130, graphics module 132, text input module 134, and browser module 147, widget modules 149 are mini-applications that are, optionally, downloaded and used by a user (e.g., weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, and dictionary widget 149-5) or created by the user (e.g., user-created widget 149-6). In some embodiments, a widget includes an HTML (Hypertext Markup Language) file, a CSS (Cascading Style Sheets) file, and a JavaScript file. In some embodiments, a widget includes an XML (Extensible Markup Language) file and a JavaScript file (e.g., Yahoo! Widgets).
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact/motion module 130, graphics module 132, text input module 134, and browser module 147, widget creator module 150 includes executable instructions to create widgets (e.g., turning a user-specified portion of a web page into a widget).
In conjunction with touch-sensitive display system 112, display system controller 156, contact/motion module 130, graphics module 132, and text input module 134, search module 151 includes executable instructions to search for text, music, sound, image, video, and/or other files in memory 102 that match one or more search criteria (e.g., one or more user-specified search terms) in accordance with user instructions.
In conjunction with touch-sensitive display system 112, display system controller 156, contact/motion module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, and browser module 147, video and music player module 152 includes executable instructions that allow the user to download and play back recorded music and other sound files stored in one or more file formats (such as MP3 or AAC files), and executable instructions to display, present, or otherwise play back videos (e.g., on touch-sensitive display system 112, or on an external display connected wirelessly or via external port 124). In some embodiments, device 100 optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple Inc.).
In conjunction with touch-sensitive display system 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, notes module 153 includes executable instructions to create and manage notes, to-do lists, and the like in accordance with user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, GPS module 135, and browser module 147, map module 154 includes executable instructions to receive, display, modify, and store maps and data associated with maps (e.g., driving directions; data on stores and other points of interest at or near a particular location; and other location-based data) in accordance with user instructions.
In conjunction with touch-sensitive display system 112, display system controller 156, contact/motion module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, text input module 134, e-mail client module 140, and browser module 147, online video module 155 includes executable instructions that allow the user to access, browse, receive (e.g., by streaming and/or download), play back (e.g., on the touch screen, or on an external display connected wirelessly or via external port 124), send an e-mail with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as H.264. In some embodiments, instant messaging module 141, rather than e-mail client module 140, is used to send a link to a particular online video.
Each of above-mentioned module and application both correspond to for execute one or more functions described above and The instruction of the method (for example, computer implemented method as described herein and other information processing method) described in this application Collection.These modules (that is, instruction set) are without being implemented as individual software program, process, or module, and therefore these modules Each subset be optionally combined or rearrange in various embodiments.In some embodiments, memory 102 is optional Ground stores the subset of above-mentioned module and data structure.In addition, the optionally stored add-on module being not described above of memory 102 and Data structure.
In some embodiments, device 100 is a device where operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or a touchpad. By using a touch screen and/or a touchpad as the primary input control device for operation of device 100, the number of physical input control devices (such as push buttons, dials, and the like) on device 100 is, optionally, reduced.
The predefined set of functions that are performed exclusively through a touch screen and/or a touchpad optionally includes navigation between user interfaces. In some embodiments, the touchpad, when touched by the user, navigates device 100 to a main, home, or root menu from any user interface that is displayed on device 100. In such embodiments, a "menu button" is implemented using a touchpad. In some other embodiments, the menu button is a physical push button or other physical input control device instead of a touchpad.
Figure 1B is a block diagram illustrating example components for event handling in accordance with some embodiments. In some embodiments, memory 102 (in Figure 1A) or memory 370 (Figure 3) includes event sorter 170 (e.g., in operating system 126) and a respective application 136-1 (e.g., any of the aforementioned applications 136, 137-155, 380-390).
Event sorter 170 receives event information and determines the application 136-1, and the application view 191 of application 136-1, to which to deliver the event information. Event sorter 170 includes event monitor 171 and event dispatcher module 174. In some embodiments, application 136-1 includes application internal state 192, which indicates the current application view(s) displayed on touch-sensitive display system 112 when the application is active or executing. In some embodiments, device/global internal state 157 is used by event sorter 170 to determine which application(s) is (are) currently active, and application internal state 192 is used by event sorter 170 to determine application views 191 to which to deliver event information.
In some embodiments, application internal state 192 includes additional information, such as one or more of: resume information to be used when application 136-1 resumes execution, user interface state information that indicates information being displayed or that is ready for display by application 136-1, a state queue for enabling the user to go back to a prior state or view of application 136-1, and a redo/undo queue of previous actions taken by the user.
Event monitor 171 receives event information from peripherals interface 118. Event information includes information about a sub-event (e.g., a user touch on touch-sensitive display system 112, as part of a multi-touch gesture). Peripherals interface 118 transmits information it receives from I/O subsystem 106 or a sensor, such as proximity sensor 166, accelerometer(s) 168, and/or microphone 113 (through audio circuitry 110). Information that peripherals interface 118 receives from I/O subsystem 106 includes information from touch-sensitive display system 112 or a touch-sensitive surface.
In some embodiments, event monitor 171 sends requests to peripherals interface 118 at predetermined intervals. In response, peripherals interface 118 transmits event information. In other embodiments, peripherals interface 118 transmits event information only when there is a significant event (e.g., receiving an input above a predetermined noise threshold and/or for more than a predetermined duration).
In some embodiments, event sorter 170 also includes a hit view determination module 172 and/or an active event recognizer determination module 173.
Hit view determination module 172 provides software procedures for determining where a sub-event has taken place within one or more views, when touch-sensitive display system 112 displays more than one view. Views are made up of controls and other elements that a user can see on the display.
Another aspect of the user interface associated with an application is a set of views, sometimes herein called application views or user interface windows, in which information is displayed and touch-based gestures occur. The application views (of a respective application) in which a touch is detected optionally correspond to programmatic levels within a programmatic or view hierarchy of the application. For example, the lowest level view in which a touch is detected is, optionally, called the hit view, and the set of events that are recognized as proper inputs is, optionally, determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture.
Hit view determination module 172 receives information related to sub-events of a touch-based gesture. When an application has multiple views organized in a hierarchy, hit view determination module 172 identifies a hit view as the lowest view in the hierarchy which should handle the sub-event. In most circumstances, the hit view is the lowest level view in which an initiating sub-event occurs (i.e., the first sub-event in the sequence of sub-events that form an event or potential event). Once the hit view is identified by the hit view determination module, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view.
Active event recognizer determination module 173 determines which view or views within a view hierarchy should receive a particular sequence of sub-events. In some embodiments, active event recognizer determination module 173 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module 173 determines that all views that include the physical location of a sub-event are actively involved views, and therefore determines that all actively involved views should receive the particular sequence of sub-events. In other embodiments, even if touch sub-events are entirely confined to the area associated with one particular view, views higher in the hierarchy still remain as actively involved views.
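Hit-view determination and the "actively involved" view set can be sketched together as a walk of the view hierarchy. This is a hypothetical sketch under simplifying assumptions (rectangular, axis-aligned, non-overlapping views; the class and function names are invented), not the patented implementation.

```python
# Hypothetical sketch: finding the hit view (the lowest view containing the
# touch point) and the actively involved views (that view plus its ancestors,
# all of which include the point's physical location).

class View:
    def __init__(self, name, rect, children=()):
        self.name, self.rect, self.children = name, rect, list(children)

    def contains(self, x, y):
        left, top, w, h = self.rect
        return left <= x < left + w and top <= y < top + h

def hit_view(view, x, y):
    """Deepest (lowest-level) view in the hierarchy containing the point."""
    if not view.contains(x, y):
        return None
    for child in view.children:
        hit = hit_view(child, x, y)
        if hit is not None:
            return hit
    return view

def actively_involved(view, x, y, path=()):
    """Names of all views whose area includes the point, root to hit view."""
    if not view.contains(x, y):
        return []
    path = path + (view,)
    for child in view.children:
        result = actively_involved(child, x, y, path)
        if result:
            return result
    return [v.name for v in path]

button = View("button", (10, 10, 40, 20))
panel = View("panel", (0, 0, 100, 100), [button])
root = View("root", (0, 0, 320, 480), [panel])
print(hit_view(root, 20, 15).name)      # button
print(actively_involved(root, 20, 15))  # ['root', 'panel', 'button']
```

Delivering the sub-event sequence only to `hit_view(...)` corresponds to the first embodiment above; delivering it to every view in `actively_involved(...)` corresponds to the second.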
Event dispatcher module 174 dispatches the event information to an event recognizer (e.g., event recognizer 180). In embodiments including active event recognizer determination module 173, event dispatcher module 174 delivers the event information to an event recognizer determined by active event recognizer determination module 173. In some embodiments, event dispatcher module 174 stores the event information in an event queue, which is retrieved by a respective event receiver module 182.
In some embodiments, operating system 126 includes event sorter 170. Alternatively, application 136-1 includes event sorter 170. In yet other embodiments, event sorter 170 is a stand-alone module, or a part of another module stored in memory 102, such as contact/motion module 130.
In some embodiments, application 136-1 includes a plurality of event handlers 190 and one or more application views 191, each of which includes instructions for handling touch events that occur within a respective view of the application's user interface. Each application view 191 of application 136-1 includes one or more event recognizers 180. Typically, a respective application view 191 includes a plurality of event recognizers 180. In other embodiments, one or more of event recognizers 180 are part of a separate module, such as a user interface kit (not shown), or a higher-level object from which application 136-1 inherits methods and other properties. In some embodiments, a respective event handler 190 includes one or more of: data updater 176, object updater 177, GUI updater 178, and/or event data 179 received from event sorter 170. Event handler 190 optionally utilizes or calls data updater 176, object updater 177, or GUI updater 178 to update application internal state 192. Alternatively, one or more of the application views 191 include one or more respective event handlers 190. Also, in some embodiments, one or more of data updater 176, object updater 177, and GUI updater 178 are included in a respective application view 191.
A respective event recognizer 180 receives event information (e.g., event data 179) from event sorter 170 and identifies an event from the event information. Event recognizer 180 includes event receiver 182 and event comparator 184. In some embodiments, event recognizer 180 also includes at least a subset of: metadata 183 and event delivery instructions 188 (which optionally include sub-event delivery instructions).
Event receiver 182 receives event information from event sorter 170. The event information includes information about a sub-event, for example, a touch or a touch movement. Depending on the sub-event, the event information also includes additional information, such as the location of the sub-event. When the sub-event concerns motion of a touch, the event information optionally also includes the speed and direction of the sub-event. In some embodiments, events include rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about the current orientation (also called device attitude) of the device.
Event comparator 184 compares the event information to predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event. In some embodiments, event comparator 184 includes event definitions 186. Event definitions 186 contain definitions of events (e.g., predefined sequences of sub-events), for example, event 1 (187-1), event 2 (187-2), and others. In some embodiments, sub-events in an event 187 include, for example, touch begin, touch end, touch movement, touch cancellation, and multiple touching. In one example, the definition for event 1 (187-1) is a double tap on a displayed object. The double tap, for example, comprises a first touch (touch begin) on the displayed object for a predetermined phase, a first lift-off (touch end) for a predetermined phase, a second touch (touch begin) on the displayed object for a predetermined phase, and a second lift-off (touch end) for a predetermined phase. In another example, the definition for event 2 (187-2) is a dragging on a displayed object. The dragging, for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across touch-sensitive display system 112, and lift-off of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 190.
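The matching of a sub-event sequence against event definitions can be sketched as follows. This is a sketch only: the timing phases are collapsed into sub-event names, and the definition table is invented for illustration, in the spirit of event comparator 184 and event definitions 186.

```python
# Sketch only: matching a recorded sub-event sequence against predefined
# event definitions, as event comparator 184 is described as doing.

EVENT_DEFINITIONS = {
    # event 1 (187-1): double tap = two touch-begin/touch-end pairs
    "event 1 (double tap)": ["touch-begin", "touch-end",
                             "touch-begin", "touch-end"],
    # event 2 (187-2): drag = touch, movement, lift-off
    "event 2 (drag)": ["touch-begin", "touch-move", "touch-end"],
}

def match_event(subevents):
    """Return the name of the matching event definition, else None."""
    for name, definition in EVENT_DEFINITIONS.items():
        if subevents == definition:
            return name
    return None

print(match_event(["touch-begin", "touch-end", "touch-begin", "touch-end"]))
print(match_event(["touch-begin", "touch-move", "touch-end"]))
```

A fuller recognizer would run this as a state machine, updating the state of each candidate event on every sub-event rather than comparing complete sequences after the fact.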
In some embodiments, event definitions 187 include a definition of an event for a respective user-interface object. In some embodiments, event comparator 184 performs a hit test to determine which user-interface object is associated with a sub-event. For example, in an application view in which three user-interface objects are displayed on touch-sensitive display system 112, when a touch is detected on touch-sensitive display system 112, event comparator 184 performs a hit test to determine which of the three user-interface objects, if any, is associated with the touch (sub-event). If each displayed object is associated with a respective event handler 190, the event comparator uses the result of the hit test to determine which event handler 190 should be activated. For example, event comparator 184 selects the event handler associated with the sub-event and the object triggering the hit test.
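The three-object hit test above can be sketched concretely. The object names, rectangles, and handler results below are invented for illustration; this is a minimal sketch, not the patented implementation.

```python
# Minimal sketch: with three user-interface objects displayed, determine
# which one, if any, is associated with a touch, then "activate" the
# event handler associated with that object.

objects = [  # (name, (left, top, width, height), handler result)
    ("icon-a", (0,   0, 50, 50), "opened A"),
    ("icon-b", (60,  0, 50, 50), "opened B"),
    ("icon-c", (120, 0, 50, 50), "opened C"),
]

def dispatch_touch(x, y):
    """Hit-test the touch and return its object's handler result, or None."""
    for name, (left, top, w, h), result in objects:
        if left <= x < left + w and top <= y < top + h:
            return result  # activate this object's event handler 190
    return None            # the touch missed every displayed object

print(dispatch_touch(70, 10))   # opened B
print(dispatch_touch(200, 10))  # None
```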
In some embodiments, the definition for a respective event 187 also includes delayed actions that delay delivery of the event information until after it has been determined whether the sequence of sub-events does or does not correspond to the event recognizer's event type.
When a respective event recognizer 180 determines that the series of sub-events does not match any of the events in event definitions 186, the respective event recognizer 180 enters an event impossible, event failed, or event ended state, after which it disregards subsequent sub-events of the touch-based gesture. In this situation, other event recognizers, if any, that remain active for the hit view continue to track and process sub-events of the ongoing touch-based gesture.
In some embodiments, a respective event recognizer 180 includes metadata 183 with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively involved event recognizers. In some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate how event recognizers interact, or are enabled to interact, with one another. In some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate whether events are delivered to varying levels in the view or programmatic hierarchy.
In some embodiments, a respective event recognizer 180 activates event handler 190 associated with an event when one or more particular sub-events of the event are recognized. In some embodiments, a respective event recognizer 180 delivers event information associated with the event to event handler 190. Activating an event handler 190 is distinct from sending (or deferring sending of) sub-events to a respective hit view. In some embodiments, event recognizer 180 throws a flag associated with the recognized event, and event handler 190 associated with the flag catches the flag and performs a predefined process.
In some embodiments, event delivery instructions 188 include sub-event delivery instructions that deliver event information about a sub-event without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the series of sub-events or to actively involved views. Event handlers associated with the series of sub-events or with actively involved views receive the event information and perform a predetermined process.
In some embodiments, data updater 176 creates and updates data used in application 136-1. For example, data updater 176 updates the telephone number used in contacts module 137, or stores a video file used in video player module 145. In some embodiments, object updater 177 creates and updates objects used in application 136-1. For example, object updater 177 creates a new user interface object or updates the position of a user interface object. GUI updater 178 updates the GUI. For example, GUI updater 178 prepares display information and sends it to graphics module 132 for display on a touch-sensitive display.
In some embodiments, event handler(s) 190 include or have access to data updater 176, object updater 177, and GUI updater 178. In some embodiments, data updater 176, object updater 177, and GUI updater 178 are included in a single module of a respective application 136-1 or application view 191. In other embodiments, they are included in two or more software modules.
It should be understood that the foregoing discussion regarding event handling of user touches on touch-sensitive displays also applies to other forms of user inputs to operate multifunction devices 100 with input devices, not all of which are initiated on touch screens. For example, mouse movement and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; contact movements such as taps, drags, scrolls, etc. on touchpads; stylus inputs; movement of the device; verbal instructions; detected eye movements; biometric inputs; and/or any combination thereof are optionally utilized as inputs corresponding to sub-events that define an event to be recognized.
Figure 2 illustrates a portable multifunction device 100 having a touch screen 112 (e.g., touch-sensitive display system 112, Figure 1A) in accordance with some embodiments. The touch screen optionally displays one or more graphics within user interface (UI) 200. In these embodiments, as well as in others described below, a user is enabled to select one or more of the graphics by making a gesture on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figure) or one or more styluses (not drawn to scale in the figure). In some embodiments, selection of one or more graphics occurs when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (from left to right, from right to left, upward and/or downward), and/or a rolling of a finger (from right to left, from left to right, upward and/or downward) that has made contact with device 100. In some implementations or circumstances, inadvertent contact with a graphic does not select the graphic. For example, a swipe gesture that sweeps over an application icon optionally does not select the corresponding application when the gesture corresponding to selection is a tap.
Device 100 optionally also includes one or more physical buttons, such as a "home" or menu button 204. As described previously, menu button 204 is optionally used to navigate to any application 136 in a set of applications that are optionally executed on device 100. Alternatively, in some embodiments, the menu button is implemented as a soft key in a GUI displayed on the touch-screen display.
In some embodiments, device 100 includes the touch-screen display, menu button 204, push button 206 for powering the device on/off and locking the device, volume adjustment button(s) 208, Subscriber Identity Module (SIM) card slot 210, headset jack 212, and docking/charging external port 124. Push button 206 is optionally used to turn the power on/off on the device by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlock process. In some embodiments, device 100 also accepts verbal input for activation or deactivation of some functions through microphone 113. Device 100 also optionally includes one or more contact intensity sensors 165 for detecting intensity of contacts on touch-sensitive display system 112 and/or one or more tactile output generators 167 for generating tactile outputs for a user of device 100.
Figure 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. Device 300 need not be portable. In some embodiments, device 300 is a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child's learning toy), a gaming system, or a control device (e.g., a home or industrial controller). Device 300 typically includes one or more processing units (CPUs) 310, one or more network or other communications interfaces 360, memory 370, and one or more communication buses 320 for interconnecting these components. Communication buses 320 optionally include circuitry (sometimes called a chipset) that interconnects and controls communications between system components. Device 300 includes input/output (I/O) interface 330 comprising display 340, which is typically a touch-screen display. I/O interface 330 also optionally includes a keyboard and/or mouse (or other pointing device) 350 and touchpad 355, tactile output generator 357 for generating tactile outputs on device 300 (e.g., similar to the tactile output generator(s) 167 described above with reference to Figure 1A), and sensors 359 (e.g., optical, acceleration, proximity, touch-sensitive, and/or contact intensity sensors similar to the contact intensity sensor(s) 165 described above with reference to Figure 1A). Memory 370 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 370 optionally includes one or more storage devices remotely located from CPU(s) 310. In some embodiments, memory 370 stores programs, modules, and data structures analogous to the programs, modules, and data structures stored in memory 102 of portable multifunction device 100 (Figure 1), or a subset thereof. Furthermore, memory 370 optionally stores additional programs, modules, and data structures not present in memory 102 of portable multifunction device 100. For example, memory 370 of device 300 optionally stores drawing module 380, presentation module 382, word processing module 384, website creation module 386, disk authoring module 388, and/or spreadsheet module 390, while memory 102 of portable multifunction device 100 (Figure 1A) optionally does not store these modules.
Each of the above identified elements in Figure 3 is optionally stored in one or more of the previously mentioned memory devices. Each of the above identified modules corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. In some embodiments, memory 370 optionally stores a subset of the modules and data structures identified above. Furthermore, memory 370 optionally stores additional modules and data structures not described above.
Attention is now directed towards embodiments of user interfaces ("UI") that are optionally implemented on portable multifunction device 100.
Figure 4A illustrates an exemplary user interface for a menu of applications on portable multifunction device 100 in accordance with some embodiments. Similar user interfaces are optionally implemented on device 300. In some embodiments, user interface 400 includes the following elements, or a subset or superset thereof:
Signal strength indicator(s) 402 for wireless communication(s), such as cellular and Wi-Fi signals;
Time 404;
Bluetooth indicator;
Battery status indicator 406;
Tray 408 with icons for frequently used applications, such as:
Icon 416 for telephone module 138, labeled "Phone," which optionally includes an indicator 414 of the number of missed calls or voicemail messages;
Icon 418 for e-mail client module 140, labeled "Mail," which optionally includes an indicator 410 of the number of unread e-mails;
Icon 420 for browser module 147, labeled "Browser;" and
Icon 422 for video and music player module 152, also referred to as iPod (trademark of Apple Inc.) module 152, labeled "iPod;" and
Icons for other applications, such as:
Icon 424 for IM module 141, labeled "Messages;"
Icon 426 for calendar module 148, labeled "Calendar;"
Icon 428 for image management module 144, labeled "Photos;"
Icon 430 for camera module 143, labeled "Camera;"
Icon 432 for online video module 155, labeled "Online Video;"
Icon 434 for stocks widget 149-2, labeled "Stocks;"
Icon 436 for map module 154, labeled "Maps;"
Icon 438 for weather widget 149-1, labeled "Weather;"
Icon 440 for alarm clock widget 149-4, labeled "Clock;"
Icon 442 for workout support module 142, labeled "Workout Support;"
Icon 444 for notes module 153, labeled "Notes;" and
Icon 446 for a settings application or module, which provides access to settings for device 100 and its various applications 136.
It should be noted that the icon labels illustrated in Figure 4A are merely exemplary. For example, in some embodiments, icon 422 for video and music player module 152 is labeled "Music" or "Music Player." Other labels are optionally used for various application icons. In some embodiments, a label for a respective application icon includes a name of an application corresponding to the respective application icon. In some embodiments, a label for a particular application icon is distinct from a name of an application corresponding to the particular application icon.
Figure 4B illustrates an exemplary user interface on a device (e.g., device 300, Figure 3) with a touch-sensitive surface 451 (e.g., a tablet or touchpad 355, Figure 3) that is separate from the display 450. Device 300 also optionally includes one or more contact intensity sensors (e.g., one or more of sensors 357) for detecting intensity of contacts on touch-sensitive surface 451 and/or one or more tactile output generators 359 for generating tactile outputs for a user of device 300.
Many of the examples that follow will be given with reference to a device that detects inputs on a touch-sensitive surface that is separate from the display, as shown in Figure 4B. In some embodiments, the touch-sensitive surface (e.g., 451 in Figure 4B) has a primary axis (e.g., 452 in Figure 4B) that corresponds to a primary axis (e.g., 453 in Figure 4B) on the display (e.g., 450). In accordance with these embodiments, the device detects contacts with touch-sensitive surface 451 (e.g., 460 and 462 in Figure 4B) at locations that correspond to respective locations on the display (e.g., in Figure 4B, 460 corresponds to 468 and 462 corresponds to 470). In this way, when the touch-sensitive surface is separate from the display, user inputs (e.g., contacts 460 and 462, and movements thereof) detected by the device on the touch-sensitive surface (e.g., 451 in Figure 4B) are used by the device to manipulate the user interface on the display (e.g., 450 in Figure 4B) of the multifunction device. It should be understood that similar methods are optionally used for other user interfaces described herein.
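One simple way to realize the correspondence between locations on a separate touch-sensitive surface and locations on the display is to scale each axis proportionally. This is an illustrative sketch under that assumption; an actual device may apply a different mapping.

```python
# Map a contact position on the touch-sensitive surface (e.g., 451) to
# the corresponding position on the display (e.g., 450) by scaling each
# axis by the ratio of display size to surface size.
def surface_to_display(pos, surface_size, display_size):
    sx, sy = pos
    sw, sh = surface_size
    dw, dh = display_size
    return (sx * dw / sw, sy * dh / sh)

# A contact at the center of a 600x400 surface lands at the center of a
# 1920x1080 display.
assert surface_to_display((300, 200), (600, 400), (1920, 1080)) == (960.0, 540.0)
# A contact at the surface's top-left corner stays at the display's
# top-left corner.
assert surface_to_display((0, 0), (600, 400), (1920, 1080)) == (0.0, 0.0)
```

Because the mapping preserves relative position along each primary axis, a movement of the contact on the surface produces a proportionally scaled movement of the cursor or focus on the display.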
Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contacts, finger tap gestures, finger swipe gestures, etc.), it should be understood that, in some embodiments, one or more of the finger inputs are replaced with input from another input device (e.g., a mouse-based input or stylus input) or input of another type from the same device (e.g., a button press). For example, a swipe gesture is optionally replaced with a mouse click (e.g., instead of a contact) followed by movement of the cursor along the path of the swipe (e.g., instead of movement of the contact). As another example, a tap gesture is optionally replaced with a mouse click while the cursor is located over the location of the tap gesture (e.g., instead of detection of the contact followed by ceasing to detect the contact). Similarly, when multiple user inputs are simultaneously detected, it should be understood that multiple computer mice are optionally used simultaneously, or a mouse and finger contacts are optionally used simultaneously.
Figure 4C illustrates exemplary electronic devices that are in communication with display 450 and touch-sensitive surface 451. In accordance with some embodiments, for at least a subset of the electronic devices, display 450 and/or touch-sensitive surface 451 is integrated into the electronic device. While the examples described in greater detail below are described with reference to a touch-sensitive surface 451 and a display 450 that are in communication with an electronic device (e.g., portable multifunction device 100 in Figures 1A-1B or device 300 in Figure 3), it should be understood that, in accordance with some embodiments, the touch-sensitive surface and/or the display are integrated with the electronic device, while in other embodiments one or more of the touch-sensitive surface and the display are separate from the electronic device. Additionally, in some embodiments, the electronic device has an integrated display and/or an integrated touch-sensitive surface and is in communication with one or more additional displays and/or touch-sensitive surfaces that are separate from the electronic device.
In some embodiments, all of the operations described below with reference to Figures 5A-5SS, 6A-6C, 7A-7D, 8A-8C, 9A-9C, and 10A-10B are performed on a single electronic device with user interface navigation logic 480 (e.g., computing device A described below with reference to Figure 4C). However, it should be understood that frequently multiple different electronic devices are linked together to perform the operations described below with reference to Figures 5A-5SS, 6A-6C, 7A-7D, 8A-8C, 9A-9C, and 10A-10B (e.g., an electronic device with user interface navigation logic 480 communicates with a separate electronic device with display 450 and/or a separate electronic device with touch-sensitive surface 451). In any of these embodiments, the electronic device described below with reference to Figures 5A-5SS, 6A-6C, 7A-7D, 8A-8C, 9A-9C, and 10A-10B is the electronic device (or devices) that contain(s) the user interface navigation logic 480. Additionally, it should be understood that the user interface navigation logic 480 could be divided between a plurality of distinct modules or electronic devices in various embodiments; however, for the purposes of the description herein, the user interface navigation logic 480 will be primarily referred to as residing in a single electronic device so as not to unnecessarily obscure other aspects of the embodiments.
In some embodiments, the user interface navigation logic 480 includes one or more modules (e.g., one or more event handlers 190, including one or more object updaters 177 and one or more GUI updaters 178, as described in greater detail above with reference to Figure 1C) that receive interpreted inputs and, in response to these interpreted inputs, generate instructions for updating a graphical user interface in accordance with the interpreted inputs, which are subsequently used to update the graphical user interface on a display. In some embodiments, an interpreted input is an input that has been detected (e.g., by contact/motion module 130 in Figures 1A-1B and 3), recognized (e.g., by an event recognizer 180 in Figure 1C), and/or prioritized (e.g., by event sorter 170 in Figure 1C). In some embodiments, the interpreted inputs are generated by modules at the electronic device (e.g., the electronic device receives raw contact input data so as to identify gestures from the raw contact input data). In some embodiments, some or all of the interpreted inputs are received by the electronic device as interpreted inputs (e.g., an electronic device that includes the touch-sensitive surface 451 processes raw contact input data so as to identify gestures from the raw contact input data and sends information indicative of the gestures to the electronic device that includes the user interface navigation logic 480).
In some embodiments, both the display 450 and the touch-sensitive surface 451 are integrated with the electronic device (e.g., computing device A in Figure 4C) that contains the user interface navigation logic 480. For example, the electronic device may be a desktop computer or laptop computer with an integrated display (e.g., 340 in Figure 3) and touchpad (e.g., 355 in Figure 3). As another example, the electronic device may be a portable multifunction device 100 (e.g., a smartphone, PDA, tablet computer, etc.) with a touch screen (e.g., 112 in Figure 2).
In some embodiments, the touch-sensitive surface 451 is integrated with the electronic device, while the display 450 is not integrated with the electronic device (e.g., computing device B in Figure 4C) that contains the user interface navigation logic 480. For example, the electronic device may be a device 300 (e.g., a desktop computer or laptop computer) with an integrated touchpad (e.g., 355 in Figure 3) connected (via a wired or wireless connection) to a separate display (e.g., a computer monitor, television, etc.). As another example, the electronic device may be a portable multifunction device 100 (e.g., a smartphone, PDA, tablet computer, etc.) with a touch screen (e.g., 112 in Figure 2) connected (via a wired or wireless connection) to a separate display (e.g., a computer monitor, television, etc.).
In some embodiments, the display 450 is integrated with the electronic device, while the touch-sensitive surface 451 is not integrated with the electronic device (e.g., computing device C in Figure 4C) that contains the user interface navigation logic 480. For example, the electronic device may be a device 300 (e.g., a desktop computer, a laptop computer, a television with an integrated set-top box) with an integrated display (e.g., 340 in Figure 3) connected (via a wired or wireless connection) to a separate touch-sensitive surface (e.g., a remote touchpad, a portable multifunction device, etc.). As another example, the electronic device may be a portable multifunction device 100 (e.g., a smartphone, PDA, tablet computer, etc.) with a touch screen (e.g., 112 in Figure 2) connected (via a wired or wireless connection) to a separate touch-sensitive surface (e.g., a remote touchpad, another portable multifunction device with a touch screen serving as a remote touchpad, etc.).
In some embodiments, neither the display 450 nor the touch-sensitive surface 451 is integrated with the electronic device (e.g., computing device D in Figure 4C) that contains the user interface navigation logic 480. For example, the electronic device may be a stand-alone electronic device 300 (e.g., a desktop computer, laptop computer, console, set-top box, etc.) connected (via a wired or wireless connection) to a separate touch-sensitive surface (e.g., a remote touchpad, a portable multifunction device, etc.) and a separate display (e.g., a computer monitor, television, etc.). As another example, the electronic device may be a portable multifunction device 100 (e.g., a smartphone, PDA, tablet computer, etc.) with a touch screen (e.g., 112 in Figure 2) connected (via a wired or wireless connection) to a separate touch-sensitive surface (e.g., a remote touchpad, another portable multifunction device with a touch screen serving as a remote touchpad, etc.).
In some embodiments, the computing device has an integrated audio system. In some embodiments, the computing device is in communication with an audio system that is separate from the computing device. In some embodiments, the audio system (e.g., an audio system integrated in a television unit) is integrated with a separate display 450. In some embodiments, the audio system (e.g., a stereo system) is a stand-alone system that is separate from the computing device and the display 450.
User Interfaces and Associated Processes
Attention is now directed towards embodiments of user interfaces ("UI") and associated processes that may be implemented with an electronic device that communicates with and/or includes a display and a touch-sensitive surface, such as one of computing devices A-D in Figure 4C.
Figures 5A-5SS illustrate exemplary user interfaces for providing audio feedback in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in Figures 6A-6C, 7A-7D, 8A-8C, 9A-9C, and 10A-10B. Although some of the examples that follow will be given with reference to inputs on a touch-sensitive surface 451 that is separate from the display 450, in some embodiments, the device detects inputs on a touch-screen display (where the touch-sensitive surface and the display are combined), as shown in Figure 4A.
Attention is now directed towards embodiments of user interfaces ("UI") and associated processes that may be implemented on an electronic device that communicates with a display and an audio system (such as portable multifunction device 100 or device 300, as shown in Figure 4C). In some embodiments, the electronic device includes the display. In some embodiments, the electronic device includes the audio system. In some embodiments, the electronic device includes neither the display nor the audio system. In some embodiments, the display includes the audio system (e.g., the display and the audio system are components of a television). In some embodiments, the audio system is separate from certain components of the display (e.g., the display is a component of a television, and the audio system includes a sound bar that is separate from the television). In some embodiments, the electronic device communicates with a separate remote controller through which it receives user inputs (e.g., the remote controller includes a touch-sensitive surface or touch screen through which the user interacts with the electronic device). In some embodiments, the remote controller includes a motion sensor (e.g., an accelerometer and/or a gyroscope) that detects movement of the remote controller (e.g., the user picking up the remote controller).
Figures 5A-5G illustrate exemplary user interfaces for changing visual characteristics of a user interface in conjunction with changing audio components that correspond to user interface objects, in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in Figures 6A-6C.
Figure 5A illustrates user interface 517, generated by the device, displayed on display 450. In some embodiments, the visual characteristics of the various user interface objects described with reference to Figures 5A-5G are determined independently of user input (e.g., the visual characteristics of the user interface objects are determined in the absence of user input). In some embodiments, user interface 517 is a screensaver user interface.
User interface 517 includes a first user interface object 501-a (e.g., a first bubble). First user interface object 501-a has various visual characteristics, including a shape (e.g., circular), a size, and a position on display 450. The device also provides to an audio system (e.g., a speaker system on display 450 or a separate audio system), concurrently with providing data to display 450, a first audio component 503 of a sound output that corresponds to first user interface object 501-a.
In some embodiments, one or more characteristics of the first audio component 503 associated with first user interface object 501-a correspond to visual characteristics of first user interface object 501-a. For example, as shown in sound graph 516, the pitch of first audio component 503 corresponds to the initial size of first user interface object 501-a (the vertical position of the circle representing first audio component 503 in sound graph 516 indicates the pitch of first audio component 503). As another example, the stereo balance of first audio component 503 (e.g., its left/right distribution in sound graph 516) corresponds to the horizontal position of first user interface object 501-a on display 450. In some embodiments, one or more characteristics of the first audio component 503 corresponding to first user interface object 501-a are determined in accordance with one or more visual characteristics of first user interface object 501-a. Alternatively, in some embodiments, one or more visual characteristics of first user interface object 501-a are determined in accordance with one or more characteristics of first audio component 503.
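The correspondence described above can be sketched as a simple mapping from an object's visual characteristics to audio characteristics. The specific formulas, ranges, and frequencies below are invented for illustration; the source only specifies the qualitative relationships (smaller object, higher pitch; horizontal position sets stereo balance; larger object, lower volume).

```python
# Illustrative mapping from visual characteristics of a user interface
# object (size, horizontal position) to characteristics of its audio
# component (pitch, stereo balance, volume). All constants are assumed.
def audio_for_object(size, x, display_width, max_size=200.0):
    pitch = 220.0 * (2.0 - size / max_size)    # Hz; smaller size -> higher pitch
    balance = (x / display_width) * 2.0 - 1.0  # -1.0 = full left, +1.0 = full right
    volume = 1.0 - size / max_size             # larger size -> lower volume
    return {"pitch_hz": pitch, "balance": balance, "volume": volume}

small = audio_for_object(size=50, x=480, display_width=1920)
large = audio_for_object(size=150, x=1440, display_width=1920)
assert small["pitch_hz"] > large["pitch_hz"]    # smaller bubble, higher pitch
assert small["balance"] < 0 < large["balance"]  # left of center vs right of center
assert small["volume"] > large["volume"]        # bigger bubble, quieter component
```

The same function can be reapplied as an object moves and grows, so the audio component's balance and volume track the visual changes frame by frame.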
While user interface 517 is presented on display 450 and the sound output is provided by the audio system, the device provides to display 450 data for updating user interface 517 (e.g., first user interface object 501-a moves across display 450 and increases in size, as shown in Figure 5B). Providing the data for updating user interface 517 occurs independently of user input (e.g., no user input is detected on remote control 5001 in Figure 5A). The device also provides to the audio system sound information for updating the sound output, as illustrated in sound graph 516 in Figure 5B (e.g., the stereo balance of audio component 503 shifts to the right, as represented in sound graph 516 in Figure 5B by the rightward movement of the graphical representation of audio component 503, and the volume of audio component 503 decreases, as represented in sound graph 516 in Figure 5B by the reduced size of the graphical representation of audio component 503).
Figure 5B shows user interface 517 at a time shortly after Figure 5A. In Figure 5B, user interface 517 includes a second user interface object 501-b (e.g., a second bubble) with visual characteristics that are optionally distinct from the visual characteristics of first user interface object 501-a (e.g., the position and size of second user interface object 501-b are distinct from the position and size of first user interface object 501-a). The device also provides to the audio system (e.g., concurrently with providing data to display 450) a second audio component 505 of the sound output that corresponds to second user interface object 501-b. For example, because the initial size of second user interface object 501-b (Figure 5B) is larger than the initial size of first user interface object 501-a (Figure 5A), the pitch of second audio component 505 is lower than the pitch of first audio component 503 (indicated by the lower position of second audio component 505 in sound graph 516 in Figure 5B). In some embodiments, second audio component 505 is selected based at least in part on first audio component 503. For example, in some embodiments, first audio component 503 and second audio component 505 have respective pitches that correspond to two pitches (e.g., notes) of a chord (e.g., an A minor chord).
As shown in Figure 5B, updating user interface 517 and updating the sound output include changing at least one of the visual characteristics of first user interface object 501-a in conjunction with changing first audio component 503 in a manner that corresponds to the change in the visual characteristics of first user interface object 501-a. For example, compared with first user interface object 501-a in Figure 5A, first user interface object 501-a in Figure 5B has expanded, and the volume of first audio component 503 in Figure 5B has been correspondingly reduced.
Figure 5C shows user interface 517 at a time shortly after Figure 5B. In Figure 5C, user interface 517 includes third user interface object 501-c (e.g., a third bubble) having visual characteristics that are optionally distinct from the visual characteristics of first user interface object 501-a and second user interface object 501-b (e.g., the position and size of third user interface object 501-c differ from the position and size of first user interface object 501-a and from the position and size of second user interface object 501-b). The device also provides to the audio system (e.g., concurrently with providing the data to display 450) third audio component 507 of the sound output corresponding to third user interface object 501-c. In some embodiments, because the initial size of third user interface object 501-c (Figure 5C) is smaller than the initial size of second user interface object 501-b (shown in Figure 5B) or of first user interface object 501-a (shown in Figure 5A), the pitch of third audio component 507 is higher than the pitch of first audio component 503 or of second audio component 505 (represented by the higher vertical position of audio component 507 in sonagram 516 in Figure 5C), as depicted in Figure 5C. In some embodiments, third audio component 507 is selected based at least in part on first audio component 503. For example, in some embodiments, first audio component 503, second audio component 505, and third audio component 507 have respective pitches that correspond to three pitches (e.g., notes) of a chord (e.g., an A minor chord).
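The chord-based pitch selection described above can be sketched numerically. This is a hypothetical illustration only: the MIDI note values and the rank-objects-by-size rule are assumptions for the sketch, not the patent's implementation.

```python
# Assign each bubble a note of an A minor chord, larger bubbles getting lower notes,
# so that audio components 503/505/507 together sound as a chord.
A_MINOR_MIDI = [57, 60, 64]  # A3, C4, E4: three notes of an A minor chord

def pitch_for_object(size: float, all_sizes: list) -> int:
    """Return a MIDI note for a bubble: the larger the bubble, the lower the note."""
    rank = sorted(all_sizes, reverse=True).index(size)  # 0 = largest displayed object
    return A_MINOR_MIDI[rank % len(A_MINOR_MIDI)]
```

For example, with three bubbles of sizes 50, 30, and 20, the largest gets the chord's lowest note and the smallest its highest, consistent with the pitch relationships described for Figures 5A through 5C.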
As shown in Figure 5C, updating user interface 517 and updating the sound output include changing at least one of the visual characteristics of second user interface object 501-b in conjunction with changing second audio component 505 in a manner that corresponds to the change in the visual characteristics of second user interface object 501-b. For example, Figure 5C shows that, compared with Figure 5B, second user interface object 501-b has expanded, and the volume of second audio component 505 in Figure 5C has been correspondingly reduced (e.g., as represented by the reduced size of the graphical representation of audio component 505 in sonagram 516 in Figure 5C). In addition, the visual characteristics of first user interface object 501-a and the corresponding first audio component 503 are similarly updated between Figure 5B and Figure 5C.
Figure 5D illustrates another update to the sound output and user interface 517. In this example, second user interface object 501-b becomes larger and the volume of the corresponding second audio component 505 decreases, and third user interface object 501-c becomes larger and the volume of the corresponding third audio component 507 decreases. In addition, first user interface object 501-a becomes larger and moves to the right, so the volume of the corresponding first audio component 503 decreases and the balance of first audio component 503 shifts to the right.
Figure 5E illustrates another update to the sound output and user interface 517. In this example, second user interface object 501-b becomes larger and the volume of the corresponding second audio component 505 decreases, and third user interface object 501-c becomes larger and the volume of the corresponding third audio component 507 decreases. However, the device provides data to display 450 to update user interface 517, including ceasing to display first user interface object 501-a (e.g., by moving/sliding first user interface object 501-a off display 450 and/or fading it out). In conjunction, the device provides data to the audio system to update the sound output, including ceasing to provide first audio component 503 corresponding to first user interface object 501-a.
Figure 5F illustrates user interface 517 at a later time. In Figure 5F, fourth user interface object 501-d and fifth user interface object 501-e are moving. In conjunction, audio component 509 and audio component 511 are shifted in their respective directions in accordance with the movement of fourth user interface object 501-d and fifth user interface object 501-e. In Figure 5F, the device also detects user input 513 on a respective button of remote control 5001 (e.g., on menu button 5002). In response to detecting user input 513, the device provides to the audio system sound information for changing audio component 509 and audio component 511 (e.g., by ceasing to provide audio component 509 and audio component 511), as shown in Figure 5G. The device also provides to display 450 data for updating user interface 517 and displaying one or more control user interface objects (e.g., application icons 532-a through 532-e and movie icons 534-a through 534-c). In some embodiments, fourth user interface object 501-d and fifth user interface object 501-e continue to be displayed together with the control user interface objects. For example, compared with the control user interface objects, fourth user interface object 501-d and fifth user interface object 501-e are displayed lower in the z-direction, so that fourth user interface object 501-d and fifth user interface object 501-e are overlapped by the control user interface objects, as shown in Figure 5G.
Figure 5H illustrates audio envelope 515 in accordance with some embodiments. The vertical axis of audio envelope 515 represents amplitude (volume), and the horizontal axis represents time from time t0, at which a user input begins. Audio envelope 515 includes an attack phase A between t0 and t1 (in which amplitude increases over time), a decay phase D between t1 and t2 (in which amplitude decreases over time), a sustain phase S between t2 and t3 (in which amplitude remains constant over time), and a release phase R between t3 and t4 (in which amplitude decreases exponentially/gradually over time). After time t4, the sound output corresponding to the user input ceases. In some embodiments, audio envelope 515 does not include decay phase D and/or sustain phase S.
In some embodiments, a respective audio component provided by the audio system has an audio envelope similar to audio envelope 515 shown in Figure 5H. In response to detecting a user input (e.g., user input 513 in Figure 5F), the electronic device provides to the audio system sound information for changing the respective audio component. In some embodiments, one or more aspects of the audio envelope are modified in response to detecting the user input (e.g., the volume of the respective audio component is increased).
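A minimal numeric sketch of an envelope like audio envelope 515 follows. The timing and level values are made up for illustration, and linear ramps are used throughout even though the release phase is described above as exponential/gradual.

```python
def envelope_amplitude(t, t1=0.1, t2=0.2, t3=0.5, t4=0.8, peak=1.0, sustain=0.6):
    """Amplitude of the envelope at time t (seconds after the user input begins at t0 = 0)."""
    if t < 0.0 or t >= t4:
        return 0.0                                             # silent outside t0..t4
    if t < t1:
        return peak * t / t1                                   # attack A: ramp up to peak
    if t < t2:
        return peak + (sustain - peak) * (t - t1) / (t2 - t1)  # decay D: down to sustain level
    if t < t3:
        return sustain                                         # sustain S: constant amplitude
    return sustain * (1.0 - (t - t3) / (t4 - t3))              # release R: ramp down to zero
```

Dropping decay phase D and/or sustain phase S, as some embodiments do, corresponds here to setting t2 = t1 and/or t3 = t2.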
Figures 5I through 5S illustrate user interfaces in accordance with some embodiments that provide audible feedback when a user manipulates a control object in a user interface (e.g., a slider on a slider bar, or a knob). The user interfaces in these figures are used to illustrate the processes described below, including the processes in Figures 7A through 7D.
Figure 5I illustrates display 450 and remote control 5001, both of which are in communication with an electronic device that performs certain operations disclosed below. In some embodiments, remote control 5001 has touch-sensitive surface 451. In some embodiments, remote control 5001 also has one or more buttons or affordances, such as menu button 5002, microphone button 5003, play/pause button 5004, watch list button 5005, volume up button 5006, and/or volume down button 5007. In some embodiments, menu button 5002 or a similar affordance allows a home screen user interface to be displayed on display 450. In some embodiments, microphone button 5003 or a similar affordance allows a user to provide verbal commands or voice entries to the electronic device. In some embodiments, play/pause button 5004 is used to play or pause audio or video media depicted on display 450. In some embodiments, watch list button 5005 allows a watch list user interface to be displayed on display 450. In some embodiments, the watch list user interface provides the user with multiple audio/video media items to play using the electronic device.
Figure 5I illustrates video playback view 500 displayed on display 450. Video playback view 500 is a user interface that provides display of a media item (e.g., a movie or TV show). In some cases, the display of the media item is in a paused or playing state. In some embodiments, video playback view 500 provides display of video information associated with navigation of the media item. Figure 5I illustrates the opening credits of a movie displayed during normal playback. In some embodiments, while the media item is in the paused or playing state, user input 502 (e.g., a tap contact) is detected on touch-sensitive surface 451.
Figure 5J illustrates that, in some embodiments, in response to receiving user input 502, the electronic device provides to display 450 data for displaying multiple user interface objects on video playback view 500 (e.g., a video playback view user interface). The multiple user interface objects include slider 504 (sometimes also called a playhead) on navigation slider bar 506 (sometimes called a scrubber bar). Slider 504 is an example of a control user interface object that is configured to control a parameter (e.g., the current position/time within navigation slider bar 506, which represents a timeline of the total duration of the displayed media item). The multiple user interface objects also include volume control user interface object 508 (e.g., an audio control user interface object that represents the volume of the sound output by the audio system).
In Figure 5J, slider 504 is represented as a square, which does not provide the user with a visual indication that a current focus of the interaction with video playback view 500 is on slider 504. For comparison, in Figure 5K, slider 504 is illustrated as a circle with video preview 510, which provides the user with a visual indication that the current focus of the interaction with video playback view 500 is on slider 504. In some embodiments, video preview 510 displays a preview image of the media item at a position corresponding to the position of slider 504 in slider bar 506. As shown in subsequent figures (e.g., Figure 5L), in some embodiments, slider 504 (having a circular shape) deforms as the user interacts with slider 504.
Figure 5K illustrates that remote control 5001 detects user input 512, which starts at position 512-1 and ends at position 512-2 (Figure 5L), and which is an interaction for dragging the position of slider 504 within slider bar 506.
In some embodiments, remote control 5001 detects the user inputs described herein and conveys information about the user inputs to the electronic device. The electronic device receives a user input when the information about the user input is conveyed to the electronic device. In some embodiments, the electronic device receives the user input directly (e.g., detecting the user input on a touch-sensitive surface that is integrated with the electronic device).
In some embodiments, the electronic device determines that user input 512 is an interaction for adjusting the position of slider 504 within slider bar 506 when user input 512 satisfies predefined criteria (e.g., remote control 5001 detects an increase in the contact intensity of the user input while the current focus is on slider 504). For example, in Figure 5K, user input 512 is a drag gesture, detected while the current focus is on slider 504, with an intensity greater than a light press intensity threshold IT_L.
User input 512 drags slider 504 from position 504-1 (Figure 5K) to position 504-2 (Figure 5L) on display 450. Thus, while the electronic device receives user input 512 (e.g., concurrently with user input 512, continuously with user input 512, and/or in response to user input 512), the electronic device provides data to display 450 to move slider 504 so that slider 504 appears to the user to be dragged in real time.
While receiving user input 512 (e.g., concurrently with user input 512, continuously with user input 512, and/or in response to user input 512), the electronic device also provides sound information for providing sound output 514 (represented in sonagram 516 of Figures 5K-5L). In some embodiments, sound output 514 corresponds to audible feedback of the dragging of slider 504 (e.g., sound output 514 has one or more characteristics that change in accordance with the dragging of slider 504 from position 504-1 to position 504-2). For example, the arrow drawn from sound output 514 in sonagram 516 corresponds to the dragging of slider 504 from position 504-1 to position 504-2 and indicates that sound output 514 is provided concurrently and/or continuously with user input 512. In addition, the arrow drawn from sound output 514 indicates the manner in which sound output 514 changes in accordance with the movement of slider 504 (e.g., the stereo balance of sound output 514 shifts to the right), as described below.
In this example, the audio system includes two or more speakers, including a left speaker and a right speaker. The one or more characteristics of sound output 514 include the balance between the left speaker and the right speaker (e.g., the ratio of sound output intensity), represented on the horizontal axis of sonagram 516. In some embodiments, the one or more characteristics also include the pitch of sound output 514, represented by the vertical position of sound output 514 in sonagram 516. In some embodiments, sound output 514 has only a single characteristic (e.g., pitch or balance) that is based on the position or movement of user input 512. In this example, the direction and magnitude of the arrow drawn from sound output 514 in sonagram 516 indicate how the pitch and balance change in accordance with the dragging of slider 504 from position 504-1 to position 504-2. Thus, as slider 504 moves to the right from position 504-1 to position 504-2, the balance of sound output 514 shifts to the right, which gives the user an audio impression of rightward movement. The pitch of sound output 514 also shifts higher during the rightward movement of slider 504, which intuitively gives the user the impression of moving "higher" in the timeline represented by slider bar 506. Alternatively, in some embodiments, the pitch shifts lower during the rightward movement of slider 504.
Figures 5M through 5N are similar to Figures 5K through 5L. However, in Figures 5M through 5N, remote control 5001 detects user input 518, which is otherwise similar to user input 512 but has a greater speed. Like user input 512, user input 518 begins dragging slider 504 at position 504-1. But because user input 518 has a greater speed, user input 518 drags slider 504 farther, to position 504-3, than user input 512 does. While receiving user input 518 (e.g., concurrently with user input 518, continuously with user input 518, and/or in response to user input 518), the electronic device provides sound information for providing sound output 520 (represented in sonagram 516 of Figures 5M through 5N).
In some embodiments, the electronic device provides the user with various audio and visual cues that indicate the speed of the respective user input. For example, the volume of sound output 514 is based on the speed of movement of slider 504 from position 504-1 to position 504-2 (or the speed of user input 512 from position 512-1 to position 512-2), as shown in Figures 5K through 5L; and the volume of sound output 520 is based on the speed of movement of slider 504 from position 504-1 to position 504-3 (or the speed of user input 518 from position 518-1 to position 518-2). In sonagram 516 (Figures 5K through 5N), the volume of each respective sound output is depicted by the size of the circle that represents that sound output. As can be seen from a comparison of sound output 514 (Figures 5K through 5L) and sound output 520 (Figures 5M through 5N), compared with the quieter sound output 514 that accompanies the slower user input 512 (Figures 5K through 5L), the faster user input 518 (Figures 5M-5N) results in the louder sound output 520.
In some embodiments, the electronic device visually distinguishes slider 504 based on the movement (e.g., speed or position) of slider 504 or based on the movement (e.g., speed and/or position) of user input 512/518. For example, as shown in Figures 5L and 5N, slider 504 displays a tail based on the speed and/or direction of the user input (e.g., slider 504 is extended/stretched). Because slider 504 is dragged to the right in both user input 512 and user input 518, slider 504 stretches to the left in both examples (e.g., similar to a comet moving to the right). But because user input 518 is faster than user input 512, slider 504 stretches more as a result of user input 518 (Figure 5N) than as a result of user input 512 (Figure 5L).
Figures 5O through 5P illustrate a continuation of user input 518 from position 518-2 (Figure 5O) to position 518-3 (Figure 5P), which drags slider 504 from position 504-3 (Figure 5O), near the middle of slider bar 506, to position 504-4 (Figure 5P), corresponding to the end of slider bar 506. As described above, while receiving the continuation of user input 518 (e.g., concurrently with user input 518, continuously with user input 518, and/or in response to user input 518), the electronic device provides sound information for a continuation of sound output 520 (represented in sonagram 516 in Figures 5O through 5P) until slider 504 reaches position 504-4 (or until shortly after slider 504 reaches position 504-4). In some embodiments, the electronic device provides sound information to the audio system to provide sound output 522, which indicates that slider 504 is located at the end of slider bar 506 (e.g., sound output 522 is a reverberating "BOING" sound indicating that slider 504 has "collided" with the end of slider bar 506). Sound output 522 is distinct (e.g., in time or acoustically) from sound outputs 514 and 520. In some embodiments, sound output 522 does not have one or more characteristics that are based on user input 518 (e.g., whenever slider 504 collides with the end of slider bar 506, the audio system provides the same sound regardless of the characteristics, such as speed, of the user input that causes slider 504 to collide with the end of slider bar 506). Alternatively, in some embodiments, the volume of sound output 522 is based on the speed of user input 518 when user input 518 reaches the end of slider bar 506 (e.g., a faster collision with the end of slider bar 506 results in a louder reverberating "BOING" sound). In some embodiments, once the end of slider bar 506 is reached, an animation is displayed of slider 504 squishing against the end of slider bar 506. Thus, in some embodiments, the electronic device provides discrete (e.g., not continuous) audio and visual feedback about certain user interface navigation events (e.g., a control user interface object, such as a slider, reaching an end of its control range, such as an end of a slider bar).
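The discrete end-of-range feedback described above could be modeled as follows. This is an illustrative sketch only: the function name, the fixed collision volume, and the speed scaling constant are all assumptions, and the two branches correspond to the two alternative embodiments (speed-independent vs. speed-dependent collision sound).

```python
def boundary_feedback(slider_x, bar_end, speed, velocity_sensitive=False):
    """Volume of the "BOING" collision sound, or None while the slider is short of the end."""
    if slider_x < bar_end:
        return None                           # no collision yet: no discrete sound
    if not velocity_sensitive:
        return 1.0                            # same sound regardless of input speed
    return min(1.0, 0.3 + speed / 3000.0)     # faster collisions sound louder, capped at 1.0
```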
Figure 5Q shows graph 524, which illustrates how the electronic device dynamically and fluidly provides audible feedback to the user to assist with the manipulation of a control user interface object (e.g., a slider on a slider bar). In some embodiments, one or more characteristics (e.g., balance, pitch, and/or volume) of sound outputs 514/520 are updated multiple times per second (e.g., 10, 20, 30, or 60 times per second). For example, in some embodiments, the speed of the user input is calculated 60 times per second based on the difference between the current position of the user input and a previous position of the user input (e.g., as measured 1/60 of a second earlier), and the volume of the respective sound output is determined 60 times per second based on that speed. Thus, graph 524 illustrates that sound outputs 514/520 are provided continuously and concurrently with user inputs 512/518. The pitch and balance of sound outputs 514/520 are determined, perceptibly instantaneously, based on the position of user input 512/518 (or the position of slider 504, as described above). The volume of sound outputs 514/520 is determined, perceptibly instantaneously, from changes in the position (e.g., the speed) of user input 512/518.
In some embodiments, the pitch and balance of sound outputs 514/520 are determined based on changes in the position (e.g., the speed) of user input 512/518. In some embodiments, the volume of sound outputs 514/520 is based on the position of user input 512/518 (or the position of slider 504, as described above).
Similarly, in some embodiments, the visual characteristics of slider 504 (e.g., the extension/stretching) are updated multiple times per second (e.g., 10, 20, 30, or 60 times per second). Thus, for example, the length of the tail of slider 504 is updated 60 times per second based on the speed of the user input, as described above.
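The per-frame update described above can be sketched as a single function that is called at the update rate (e.g., 60 times per second). The specific scaling constants and value ranges below are assumptions for illustration; only the structure (position determines balance and pitch, position delta over one frame determines speed and hence volume) comes from the description above.

```python
FRAME_DT = 1.0 / 60.0  # one update per frame at 60 updates per second

def update_sound(prev_x, curr_x, bar_width):
    """One frame of slider feedback: returns (balance, pitch_hz, volume)."""
    fraction = curr_x / bar_width            # 0.0 at the left end, 1.0 at the right end
    balance = 2.0 * fraction - 1.0           # -1.0 = full left, +1.0 = full right
    pitch_hz = 220.0 + 440.0 * fraction      # pitch rises as the slider moves right
    speed = abs(curr_x - prev_x) / FRAME_DT  # points/second from the 1/60 s position delta
    volume = min(1.0, speed / 2000.0)        # faster dragging is louder, capped at 1.0
    return balance, pitch_hz, volume
```

Swapping which quantity (position vs. change in position) drives which characteristic yields the alternative embodiments described above.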
Figures 5R through 5S are largely similar to Figures 5K through 5L, but illustrate an embodiment in which the pitch of the continuously provided sound output is independent of the change or movement of the user input. Figures 5R through 5S illustrate that remote control 5001 detects user input 526, which starts at position 526-1 and ends at position 526-2 (Figure 5S), and which is an interaction for dragging the position of slider 504 within slider bar 506 from position 504-1 to position 504-2. Thus, while remote control 5001 detects user input 526 (e.g., concurrently with user input 526, continuously with user input 526, and/or in response to user input 526), the electronic device provides data to display 450 to move slider 504 so that slider 504 appears to the user to be dragged in real time. While receiving user input 526 (e.g., concurrently with user input 526, continuously with user input 526, and/or in response to user input 526), the electronic device also provides to the audio system sound information for providing sound output 528 (depicted in sonagram 516 in Figures 5R through 5S). The difference between sound output 528 (Figures 5R through 5S) and sound output 514 (Figures 5K through 5L) is that the pitch of sound output 528 is independent of the movement (e.g., changes in speed or position) of user input 526, whereas the pitch of sound output 514 changes with the position of user input 512. In some embodiments, a respective sound output has a balance that is based on the direction of movement of the user input (or the direction of movement of slider 504) (e.g., leftward movement has a left balance and rightward movement has a right balance, regardless of the position of slider 504).
Figures 5T through 5HH illustrate user interfaces in accordance with some embodiments that provide audible feedback when a user navigates over discrete user interface objects (e.g., icons) in a user interface. The user interfaces in these figures are used to illustrate the processes described below, including the processes in Figures 8A through 8C.
Figure 5T illustrates home screen user interface 530 displayed on display 450. Home screen user interface 530 includes multiple user interface objects, which in this example include application icons 532 (e.g., application icons 532-a through 532-e, each of which is a user interface object of a first type) and movie icons 534 (e.g., movie icons 534-a through 534-c, each of which is a user interface object of a second type). Moreover, in Figure 5T, the current focus of home screen user interface 530 is on application icon 532-e, and application icon 532-e is visually distinguished from the other user interface objects of the multiple user interface objects (e.g., application icon 532-e is slightly larger than the other application icons 532 and has a highlighted boundary) to indicate that the current focus is on application icon 532-e.
In Figure 5T, while home screen user interface 530 is displayed, the electronic device receives user input 536 on remote control 5001. User input 536 (e.g., a swipe gesture input) has a magnitude (e.g., a speed and/or distance, represented by the length of the arrow extending from user input 536 in Figure 5T) and a direction (e.g., the direction in which the user drags a finger on touch-sensitive surface 451, represented by the direction of the arrow extending from user input 536 in Figure 5T). User input 536 is a request to move the current focus of home screen user interface 530 from application icon 532-e to application icon 532-d.
Figure 5U shows that the current focus has moved from application icon 532-e to application icon 532-d in response to user input 536 (Figure 5T). In Figure 5U, application icon 532-d is visually distinguished from the other user interface objects of the multiple user interface objects to indicate that the current focus is on application icon 532-d.
Figure 5U also illustrates sonagram 516, which shows representations of the sound output provided by the audio system corresponding to the movement of the current focus from application icon 532-e to application icon 532-d (e.g., sound output 538-1 and optional sound output 540-1). The horizontal axis of sonagram 516 represents the stereo balance of the audio components (e.g., the left/right distribution in sonagram 516). Sound output 538-1 indicates that the current focus has moved to application icon 532-d. Optionally, the audio system provides sound output 540-1, which indicates that the current focus has moved away from application icon 532-e. In some embodiments, the audio system provides sound output 540-1 before providing sound output 538-1. In some embodiments, the audio system provides sound output 538-1 without providing sound output 540-1.
The vertical axis of sonagram 516 represents the pitch of sound outputs 538 and 540. In some embodiments, the pitch of a respective sound output (e.g., sound output 538-1 and/or sound output 540-1) is based on the size of the user interface object associated with the respective sound output (e.g., the user interface object on which the current focus is located). For example, sound output 538-1 has a pitch based on the size of application icon 532-d. As discussed below, in some embodiments, sound outputs associated with large user interface objects (e.g., movie icons 534) have a lower pitch than sound outputs associated with small user interface objects (e.g., application icons 532).
In some embodiments, the pitch of a respective sound output is based on the type of the user interface object on which the current focus is located. For example, sound outputs associated with movie icons 534 have a low pitch and sound outputs associated with application icons 532 have a high pitch, regardless of the respective sizes of application icons 532 and movie icons 534.
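The focus-movement feedback described above can be sketched as a mapping from the focused icon to a stereo balance and a pitch. This is a hypothetical illustration: the constants, the screen-position-to-balance mapping, and the size-to-pitch formula are assumptions, not values from the figures.

```python
def focus_feedback(icon_center_x, screen_width, icon_area):
    """Return (balance, pitch_hz) for moving the current focus onto an icon."""
    balance = 2.0 * icon_center_x / screen_width - 1.0  # balance follows the icon's position
    pitch_hz = 880.0 / (1.0 + icon_area / 10000.0)      # larger icon -> lower pitch
    return balance, pitch_hz
```

Under this sketch, a small application icon 532 yields a higher pitch than a large movie icon 534, consistent with the size-based embodiment; the type-based embodiment would instead look up a fixed pitch per icon type.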
In Figure 5U, the electronic device receives user input 542 on remote control 5001.
Figure 5V shows that the current focus has moved from application icon 532-d to application icon 532-c in response to user input 542 (Figure 5U). In Figure 5V, application icon 532-c is visually distinguished from the other user interface objects of the multiple user interface objects to indicate that the current focus is on application icon 532-c.
Sonagram 516 in Figure 5V includes representations of the sound output provided by the audio system corresponding to the movement of the current focus from application icon 532-d to application icon 532-c (e.g., sound output 538-2 and optional sound output 540-2). In some embodiments, in addition to sound output 538-2, the audio system provides sound output 540-2, which indicates that the current focus has moved away from application icon 532-d. In some embodiments, the audio system provides sound output 540-2 before providing sound output 538-2. In some embodiments, the audio system provides sound output 538-2 without providing sound output 540-2.
In Figure 5V, the electronic device receives user input 544 on remote control 5001.
Figure 5W shows that the current focus has moved from application icon 532-c to application icon 532-b in response to user input 544 (Figure 5V). In Figure 5W, application icon 532-b is visually distinguished from the other user interface objects of the multiple user interface objects to indicate that the current focus is on application icon 532-b.
Sonagram 516 in Figure 5W includes representations of the sound output provided by the audio system corresponding to the movement of the current focus from application icon 532-c to application icon 532-b (e.g., sound output 538-3 and optional sound output 540-3).
In Figure 5W, the electronic device receives user input 546 on remote control 5001. User input 546 has a greater magnitude (e.g., speed and/or distance) than user inputs 536 (Figure 5T), 542 (Figure 5U), and 544 (Figure 5V).
Figure 5X illustrates that the current focus has moved from application icon 532-b to application icon 532-e (through application icons 532-c and 532-d) in response to user input 546 (Figure 5W).
Sonagram 516 in Figure 5X includes representations of sound outputs 538-4, 538-5, and 538-6, provided by the audio system, corresponding to the movement of the current focus from application icon 532-b through application icons 532-c and 532-d to application icon 532-e (e.g., sound output 538-4 corresponds to application icon 532-c, sound output 538-5 corresponds to application icon 532-d, and sound output 538-6 corresponds to application icon 532-e). Although sound outputs 538-4, 538-5, and 538-6 are shown together in sonagram 516, sound outputs 538-4, 538-5, and 538-6 are provided sequentially (e.g., sound output 538-4 is followed by sound output 538-5, which is followed by sound output 538-6).
Sound outputs 538-4, 538-5, and 538-6 have a reduced volume compared with sound output 538-3 in Figure 5W (e.g., as represented by the smaller size of their representations in sonagram 516), which avoids loud repetitive sounds that would degrade the user experience.
In Figure 5X, the electronic device receives user input 548 on remote control 5001. User input 548 corresponds to a request to move the current focus from application icon 532-e to an icon in a next row (e.g., a row of icons below application icon 532-e).
Figure 5Y illustrates that home screen user interface 530 has scrolled in response to user input 548, which reveals icons 550-a through 550-d. In addition, in response to user input 548, the current focus has moved from application icon 532-e to icon 550-d.
Sonagram 516 in Figure 5Y includes a representation of sound output 538-7 corresponding to the movement of the current focus from application icon 532-e to icon 550-d. Sound output 538-7 has a lower pitch than the sound outputs associated with application icons 532 (e.g., sound outputs 538-1 through 538-6).
In FIG. 5Y, the electronic device receives user input 552 on remote control 5001. User input 552 corresponds to a request to move the current focus from icon 550-d to an icon in the row above icon 550-d (e.g., the row of application icons 532).
FIG. 5Z illustrates that, in response to user input 552, the home screen user interface 530 has scrolled back. In addition, in response to user input 552, the current focus has moved from icon 550-d to application icon 532-e.
The audio graph 516 in FIG. 5Z includes a representation of sound output 538-8, provided by the audio system, that corresponds to the movement of the current focus from icon 550-d to application icon 532-e.
In FIG. 5Z, the electronic device receives user input 554 (e.g., a tap gesture) on remote control 5001. User input 554 corresponds to a request to activate application icon 532-e (or the corresponding application).
FIG. 5AA illustrates the user interface 594 of a game application (e.g., a ping-pong game application) displayed in response to user input 554.
The audio graph 516 in FIG. 5AA includes a representation of sound output 556-1, which indicates that application icon 532-e (FIG. 5Z) has been activated.
FIG. 5AA also illustrates that the electronic device receives user input 558 (e.g., a button press) on the menu button 5002 of remote control 5001.
FIG. 5BB illustrates the home screen user interface 530 displayed in response to user input 558 (FIG. 5AA).
The audio graph 516 in FIG. 5BB includes a representation of sound output 560-1, which indicates that the user interface of the game application has been replaced with the home screen user interface 530.
FIG. 5BB also illustrates that the electronic device receives user input 562 on remote control 5001. User input 562 corresponds to a request to move the current focus from application icon 532-e to an icon in the row above application icon 532-e (e.g., the row of movie icons 534).
FIG. 5CC illustrates that, in response to user input 562 (FIG. 5BB), the current focus has moved from application icon 532-e to movie icon 534-c.
The audio graph 516 in FIG. 5CC includes a representation of sound output 538-9, which corresponds to the movement of the current focus from application icon 532-e to movie icon 534-c.
FIG. 5CC also illustrates that the electronic device receives user input 564 on remote control 5001.
FIG. 5DD illustrates that, in response to user input 564 (FIG. 5CC), the current focus has moved from movie icon 534-c to movie icon 534-b.
The audio graph 516 in FIG. 5DD includes a representation of sound output 538-10, which corresponds to the movement of the current focus from movie icon 534-c to movie icon 534-b.
FIG. 5DD also illustrates that the electronic device receives user input 566 on remote control 5001.
FIG. 5EE illustrates that, in response to user input 566 (FIG. 5DD), the current focus has moved from movie icon 534-b to movie icon 534-a.
The audio graph 516 in FIG. 5EE includes a representation of sound output 538-11, which corresponds to the movement of the current focus from movie icon 534-b to movie icon 534-a.
FIG. 5EE also illustrates that the electronic device receives user input 568 (e.g., a tap gesture) on remote control 5001.
FIG. 5FF illustrates the product page view 572 displayed in response to user input 568 (FIG. 5EE).
The audio graph 516 in FIG. 5FF includes a representation of sound output 556-2, which indicates that movie icon 534-a (FIG. 5EE) has been activated.
FIG. 5FF also illustrates that the electronic device receives user input 570 (e.g., a button press) on the menu button 5002 of remote control 5001.
FIG. 5GG illustrates the home screen user interface 530 displayed in response to user input 570 (FIG. 5FF).
The audio graph 516 in FIG. 5GG includes a representation of sound output 560-2, which indicates that the user interface of the product page view 572 has been replaced with the home screen user interface 530.
FIG. 5GG also illustrates that the electronic device receives user input 574 (e.g., a button press) on the menu button 5002 of remote control 5001.
FIG. 5HH illustrates the screen saver user interface 517 displayed in response to user input 574 (FIG. 5GG).
The audio graph 516 in FIG. 5HH includes a representation of sound output 560-3, which indicates that the home screen user interface 530 has been replaced with the screen saver user interface 517. In some embodiments, in the absence of user input, the screen saver user interface 517 is subsequently updated, as illustrated in FIGS. 5A through 5E.
In some embodiments, while the screen saver user interface 517 is displayed on display 450, a user input (e.g., a button press on a button of remote control 5001 or a tap gesture on touch-sensitive surface 451) initiates replacing the screen saver user interface 517 with the home screen user interface 530.
In some embodiments, as illustrated in FIGS. 5T through 5Z and FIGS. 5BB through 5GG, the home screen user interface 530 is, among other things, a video selection user interface that includes representations of multiple media items (e.g., movie icons 534). In some embodiments, a user input selecting a particular movie item (e.g., user input 568 in FIG. 5EE) causes display of a product page view 572 (FIG. 5II) that includes descriptive information 576 about the corresponding movie. Thus, in some embodiments, FIG. 5II serves as the starting point for the functionality described below with reference to FIGS. 5JJ through 5MM.
FIGS. 5II through 5MM illustrate operations associated with a product page view in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 9A through 9C.
FIG. 5II illustrates the display of product page view 572. Product page view 572 includes descriptive information 576 about a media item (e.g., the video corresponding to movie icon 534-a of FIG. 5AA), such as a title 576-a, a running time 576-b, a plot summary 576-c, a rating 576-d, and an affordance for playing the media item. While product page view 572 is displayed, the electronic device also provides sound information to the audio system so that a first sound output based on the media item is provided. In some embodiments, the first sound output is based on the type of the media item. For example, "The Great Climb" is categorized as a breathtaking documentary, and the first sound output includes triumphant orchestral music. In some embodiments, the first sound output includes a track from the soundtrack of the media item. For example, when the user has not yet begun watching the media item, the first sound output includes a representative track preselected to convey the overall feel of the movie. In some embodiments, when the user has not yet begun watching the media item, the first sound output includes a track that is also used in a trailer for the media item. In some embodiments, when the user has not yet begun watching the media item, the first sound output corresponds to the theme of the first scene or to the soundtrack used for the opening credits.
FIG. 5II also illustrates that remote control 5001 detects an input 578 that corresponds to a request to play back the media item (e.g., a user input, such as a button press on play/pause button 5004).
FIG. 5JJ illustrates that, in response to receiving user input 578 corresponding to the request to play back the media item, the electronic device provides data to the display for playing back the media item (e.g., by displaying the video playback user interface 500 described below).
FIG. 5JJ also illustrates that the electronic device receives user input 580 (e.g., a button press) on the menu button 5002 of remote control 5001.
FIG. 5KK illustrates the product page view 572 displayed in response to user input 580 (FIG. 5JJ).
In some embodiments, the product page view 572 displayed in response to user input 580 (FIG. 5KK) is different from the product page view 572 displayed before the playback of the media item (FIG. 5II). For example, the product page view 572 in FIG. 5KK includes one or more selected still images 582 from the media item. In some embodiments, the one or more selected still images 582 are based on the playback position. In some embodiments, the one or more selected still images 582 are different from the frame or paused image at the playback position. For example, as can be seen in FIG. 5JJ, the user paused the media item just before the goat reached the top of the hill, but as shown in FIG. 5KK, the selected still image 582 is an image of the goat at the hilltop. In some embodiments, a still image 582 is a preselected image for the scene that corresponds to the playback position. In this way, a still image 582 can be selected that is more representative of the scene, and that avoids, for example, an actor's face looking awkward because the media item was paused at an inopportune moment. Alternatively, in some embodiments, the product page view 572 displayed in response to user input 580 is identical to the product page view 572 displayed before the playback of the media item (FIG. 5II).
The electronic device also provides sound information to the audio system so that, during the presentation by the display of a user interface with information about the media item, a sound output corresponding to the media item is provided. For example, when the display shows product page view 572 (FIG. 5KK), the audio system plays a track from the soundtrack of the media item that is different from the track played when product page view 572 was previously displayed (e.g., in FIG. 5II, before the user began watching the media item). In some embodiments, the track corresponds to the playback position (e.g., the position at which the user paused the media item or stopped watching the media item). In some embodiments, the sound output is not part of the soundtrack of the media item, but is instead based on one or more characteristics of the media item at the playback position. For example, when the playback position falls within a dark scene in the movie (e.g., based on an analysis of the colors displayed in the scene), the second sound output is "dark" music.
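The color-analysis idea described above can be sketched as follows. This is a minimal hypothetical illustration, not the disclosure's method: the luminance formula is the standard Rec. 709 luma weighting, and the threshold separating "dark" from "bright" scenes is an assumed value.

```python
def average_luminance(frame):
    """Mean luminance of a frame given as a list of (r, g, b) pixels, 0-255 scale."""
    total = sum(0.2126 * r + 0.7152 * g + 0.0722 * b for r, g, b in frame)
    return total / len(frame)

def select_mood(frames, dark_threshold=60):
    """Pick a sound-output mood from the colors of frames near the playback position."""
    mean = sum(average_luminance(f) for f in frames) / len(frames)
    return "dark" if mean < dark_threshold else "bright"

night_scene = [[(10, 10, 20), (5, 8, 12)], [(20, 18, 25), (0, 0, 0)]]
day_scene = [[(200, 210, 230), (180, 190, 200)]]
print(select_mood(night_scene))  # -> dark
print(select_mood(day_scene))    # -> bright
```

A production system would presumably sample frames around the playback position and map the result to a catalog of mood-appropriate audio, but the mapping from scene characteristics to sound output follows the same shape.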
FIG. 5LL illustrates the end credits of the media item displayed in video playback user interface 500. FIG. 5LL also illustrates that, while the end credits are displayed, the electronic device receives user input 582 (e.g., a button press) on the menu button 5002 of remote control 5001.
FIG. 5MM illustrates the product page view 572 displayed in response to user input 582 (FIG. 5LL). In some embodiments, the product page view 572 in FIG. 5MM includes one or more selected still images 584 from the media item (which, in this example, illustrate the end credits).
FIGS. 5NN through 5SS illustrate operations associated with a paused state of a video in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 10A through 10B.
FIG. 5NN illustrates the video playback view 500 during the playback of a media item. FIG. 5NN also illustrates that user input 586 (e.g., a button press) is detected on play/pause button 5004.
FIG. 5OO illustrates the video playback view 500 during an exemplary pause mode or paused state (e.g., displaying the media item in response to detecting user input 586). In some embodiments, during the exemplary pause mode, a countdown clock 588 is displayed over the media item shown in video playback view 500 (e.g., over a still image or frame representing the point in the video at which the video was paused). In some embodiments, the countdown clock 588 is translucent or partially transparent. In some embodiments, while the media item is displayed during the pause mode, one or more still images 590 are displayed overlaid on the media item. In some embodiments, the still images 590 include representative frames selected from a predetermined time interval before the point at which the media item was paused. For example, the still images 590 include four frames of dramatic or otherwise interesting scenes from the five minutes of the movie preceding the current pause point.
FIG. 5PP illustrates that, in some embodiments, the display of the countdown clock 588 includes an animation corresponding to a predetermined time interval before another exemplary representation of the pause mode or paused state (e.g., a screen saver or slideshow) is displayed. In some embodiments, if a user input is detected before the predetermined time interval represented by the countdown clock 588 has elapsed, playback of the media item resumes; in some embodiments, however, the advance of the countdown clock 588 is instead paused (e.g., for a predetermined time, or indefinitely) until another user input corresponding to a request to resume playback of the media item is detected.
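The countdown behavior described above can be sketched as a small state function. This is a hypothetical model of the two embodiments (resume on input vs. halt the countdown on input); the interval length is an assumed value, not taken from the disclosure.

```python
PAUSE_INTERVAL = 300.0  # assumed: seconds before the paused state transitions

def countdown_state(pause_time, now, input_time=None, resume_on_input=True):
    """State of the paused video at time `now`, given an optional user input."""
    if input_time is not None and input_time <= now:
        # Input before the interval elapses either resumes playback or
        # halts the countdown, depending on the embodiment.
        if input_time - pause_time < PAUSE_INTERVAL:
            return "playing" if resume_on_input else "countdown-paused"
    if now - pause_time >= PAUSE_INTERVAL:
        return "screen-saver"   # interval fully elapsed (e.g., the ring is filled)
    return "counting-down"

print(countdown_state(0.0, 120.0))                     # -> counting-down
print(countdown_state(0.0, 120.0, input_time=100.0))   # -> playing
print(countdown_state(0.0, 400.0))                     # -> screen-saver
```

In the alternative embodiment, calling `countdown_state(0.0, 120.0, input_time=100.0, resume_on_input=False)` yields `"countdown-paused"`, matching the variant in which the clock's advance halts until a second input is detected.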
FIG. 5QQ illustrates an exemplary representation of the countdown clock 588 after its associated predetermined time interval has fully elapsed (e.g., the ring is filled, the countdown clock 588 reaches 100% opacity, the countdown clock 588 has grown to a certain size, etc.). In some embodiments, after the predetermined time interval has elapsed, an animation or transition is displayed before another exemplary representation of the paused state of the media item is displayed.
FIG. 5RR illustrates another exemplary representation of the paused state of the media item. In some embodiments, a slideshow or screen saver of still images is displayed that corresponds to the point in the media item at which the media item was paused. For example, a slideshow or screen saver of ten still images from the three to five minutes of the movie preceding the pause point is displayed (e.g., in random order, or in a circular order). In some embodiments, while the slideshow or screen saver is displayed, one or more paused-state elements are displayed, such as the current time, a status indicator (e.g., a blinking pause symbol), media information, and/or an end-time indicator. In some embodiments, the still images are representative frames preselected for the slideshow or screen saver (e.g., preselected by the movie's director). In some embodiments, the still images are frames automatically extracted from the media item.
FIG. 5SS illustrates that, in some embodiments, one or more paused-state elements are modified or updated as time passes. For example, the current-time display now shows that the time is 8:00 PM (rather than 7:41 PM, as shown in FIG. 5RR), and the end-time indicator has also been updated to indicate that the paused media item will finish playback at 8:53 PM (rather than at 8:34 PM, as shown in FIG. 5RR). FIG. 5SS also illustrates detection of an exemplary user input 592 (e.g., a button press) on menu button 5002.
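The end-time indicator arithmetic above is simple: the projected finish time is the current time plus the unplayed remainder of the media item, so it shifts forward as the clock advances while playback stays paused. A minimal sketch, using the 53-minute remainder implied by the figures (7:41 PM + 53 min = 8:34 PM; 8:00 PM + 53 min = 8:53 PM):

```python
from datetime import datetime, timedelta

def end_time_indicator(now, remaining):
    """Projected finish time for a paused media item: now plus the unplayed remainder."""
    return now + remaining

remaining = timedelta(minutes=53)   # unplayed portion of the paused movie

t1 = datetime(2015, 9, 25, 19, 41)  # 7:41 PM, as in FIG. 5RR (date is arbitrary)
t2 = datetime(2015, 9, 25, 20, 0)   # 8:00 PM, as in FIG. 5SS

print(end_time_indicator(t1, remaining).time())  # -> 20:34:00, i.e., 8:34 PM
print(end_time_indicator(t2, remaining).time())  # -> 20:53:00, i.e., 8:53 PM
```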
In some embodiments, the product page view 572 (e.g., FIG. 5KK) is displayed in response to user input 592.
FIGS. 6A through 6C illustrate a flow diagram of a method 600 of changing visual characteristics of a user interface in conjunction with changing audio components that correspond to user interface objects, in accordance with some embodiments. Method 600 is performed at an electronic device (e.g., device 300 of FIG. 3 or portable multifunction device 100 of FIG. 1A) in communication with a display and an audio system.
In some embodiments, the audio system includes a digital-to-analog converter. In some embodiments, the audio system includes a signal amplifier. In some embodiments, the audio system is coupled with one or more speakers. In some embodiments, the audio system is coupled with multiple speakers. In some embodiments, the audio system includes one or more speakers. In some embodiments, the audio system is integrated with the display (e.g., a television with audio processing circuitry and speakers). In some embodiments, the audio system is distinct and separate from the display (e.g., a display screen and a separate audio system).
Some operations in method 600 are, optionally, combined, and/or the order of some operations is, optionally, changed. In some embodiments, the user interfaces in FIGS. 5A through 5G are used to illustrate the processes described in method 600.
As described below, method 600 includes providing sound output for a screen saver user interface. The method reduces the cognitive burden on a user when interacting with user interface objects (e.g., control user interface objects), thereby creating a more efficient human-machine interface. By providing additional information (e.g., indicating the state of the screen saver), unnecessary operations (e.g., interacting with the device merely to check its state) can be avoided or reduced. Providing sound output helps the user interact with the device more efficiently, and reducing unnecessary operations conserves power.
Equipment provides the data for the user interface that (602) are generated by equipment for rendering to display.In some embodiments In, user interface is automatically generated by equipment.The user interface includes having the first user interface object of First look characteristic.It should User interface further includes the second user interface object with second visual characteristic different from the first user interface object.Example Such as, equipment (for example, the equipment 300 of Fig. 3 or portable multifunction device 100 of Figure 1A) automatically generates graphical user circle Face comprising the first user interface object with First look characteristic and the second user interface pair with the second visual characteristic As.Equipment sends to display (for example, display 450) and is used by display to show, show or otherwise present Graphic user interface is (for example, have the first user interface object 501-a (first bubble) and second user interface object 501-b The user interface 517 of (the second bubble), as shown in Figure 5 B) data.In figure 5B, the first user interface object 501-a and Second user interface object 501-b has different visual characteristics, such as different sizes and different positions over the display.
In some embodiments, First look characteristic includes size and/or the position of (604) first user interface objects. In some embodiments, the second visual characteristic includes size and/or the position of second user interface object.For example, such as institute above It explains, the first user interface object 501-a and second user interface object 501-b in Fig. 5 B have difference over the display Size and different positions.
In some embodiments, the First look characteristic for determining (606) first user interface objects is inputted independently of user With the second visual characteristic of second user interface object.The first user circle is determined for example, can initially input independently of user In face of the First look characteristic of elephant and the second visual characteristic of second user interface object.In some embodiments, pseudorandomly Generate the first user interface object and second user interface object.For example, pseudorandomly determining the shifting of respective user interfaces object Dynamic direction, speed, position and/or size.In some embodiments, it is inputted independently of user and generates owning in user interface User interface object.In some embodiments, it pseudorandomly determines to the first user interface object and second user interface object Change.
The device provides (608), to the audio system, sound information for providing a sound output. The sound output includes a first audio component corresponding to the first user interface object. The sound output further includes a second audio component that corresponds to the second user interface object and is distinct from the first audio component. For example, the first audio component may be a first tone and the second audio component may be a second tone, where each tone has one or more auditory properties, such as pitch, timbre, volume, attack, sustain, decay, etc. In some embodiments, the sound output includes a third audio component that is independent of the first user interface object and the second user interface object. In some embodiments, the third audio component is independent of any user interface object in the user interface.
In some embodiments, the second audio component is selected (610) based at least in part on the first audio component. In some embodiments, the pitch of the second audio component is selected based on the pitch of the first audio component. For example, when the first audio component and any other audio components (if any) output concurrently with the first audio component have pitches (or notes) of a particular chord (e.g., an A minor chord), the second audio component is selected to have a pitch (or note) of that particular chord.
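The chord-constrained selection can be sketched as follows. This is a deliberately simplified hypothetical: it considers only the pitch classes of a single chord (A minor) and picks the first candidate that fits, whereas a real implementation would recognize many chords and choose among octaves.

```python
A_MINOR = {"A", "C", "E"}  # pitch classes of an A minor chord

def select_pitch(active_pitches, candidates):
    """Choose a pitch for a new audio component that fits the chord
    formed by the concurrently output components, if one is implied."""
    chord = A_MINOR if set(active_pitches) <= A_MINOR else None
    for pitch in candidates:
        if chord is None or pitch in chord:
            return pitch
    return candidates[0]  # fall back if no candidate fits the chord

# The first component plays A, so the second is constrained to the A minor chord:
print(select_pitch(["A"], ["B", "D", "E"]))  # -> E
# No chord implied: the first candidate is accepted as-is:
print(select_pitch(["F"], ["B", "D"]))       # -> B
```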
In some embodiments, the user interface includes multiple user interface objects, and the sound output has respective audio components that correspond to respective user interface objects of the multiple user interface objects. In some embodiments, the sound output has at least one audio component (e.g., a reference tone or melody) that is independent of the multiple user interface objects.
While the user interface is presented on the display and the sound output is provided, the device provides (612, FIG. 6B), to the display, data for updating the user interface, and provides, to the audio system, sound information for updating the sound output. Updating the user interface and updating the sound output include: changing at least one visual characteristic (e.g., size and/or position) of the first visual characteristics of the first user interface object in conjunction with (e.g., concurrently with) changing the first audio component corresponding to the first user interface object; and changing at least one visual characteristic (e.g., size and/or position) of the second visual characteristics of the second user interface object in conjunction with (e.g., concurrently with) changing the second audio component corresponding to the second user interface object. For example, the device sends, to the display (e.g., display 450 of FIGS. 5A through 5G), data used by the display to update the graphical user interface (e.g., by moving first user interface object 501-a and second user interface object 501-b between their respective positions in user interface 517, as illustrated in FIGS. 5B and 5C). Note that the size and position of first user interface object 501-a change between FIG. 5B and FIG. 5C. Similarly, the size of second user interface object 501-b changes between FIG. 5B and FIG. 5C. The change to the visual characteristics of first user interface object 501-a occurs in conjunction with the change to the audio component corresponding to first user interface object 501-a (e.g., as indicated by the change to first audio component 503 in FIGS. 5B and 5C). For example, the sound corresponding to first user interface object 501-a changes as the first bubble expands and moves on the display. Similarly, the change to the visual characteristics of second user interface object 501-b occurs in conjunction with the change to the audio component corresponding to second user interface object 501-b. For example, the sound corresponding to second user interface object 501-b changes as the second bubble expands on the display.
The data for updating the user interface is provided independently of user input. In some embodiments, the sound information for updating the sound output is provided independently of user input. For example, in the absence of user input, the displayed user interface and the corresponding sounds are updated automatically. In some embodiments, the displayed user interface and the corresponding sounds are updated as long as no user input is detected (e.g., the user interface generated by the device with the first user interface object and the second user interface object is a screen saver user interface, and the screen saver continues to update as long as no button is pressed on the remote control, no contact is detected on the touch-sensitive surface of the remote control, etc.). In some embodiments, after the displayed user interface and the corresponding sounds have been updated while no user input was detected, a user input is detected, and in response, the device ceases to provide the data for updating the user interface and ceases to provide the sound information for updating the sound output. Instead, the device provides data to the display for presenting a second user interface (e.g., the user interface that was displayed before the user interface generated by the device with the first user interface object and the second user interface object, such as the user interface shown before the screen saver user interface 517 of FIGS. 5A through 5F).
In some embodiments, the first audio component corresponding to the first user interface object is changed (614) in accordance with the change to at least one visual characteristic of the first visual characteristics of the first user interface object. For example, after the change to the first visual characteristics (at least one visual characteristic) of the first user interface object is determined, the change to the first audio component is determined based on the change to the first visual characteristics of the first user interface object. In some embodiments, the second audio component corresponding to the second user interface object is changed in accordance with the change to at least one visual characteristic of the second visual characteristics of the second user interface object. For example, after the change to the second visual characteristics of the second user interface object is determined, the change to the second audio component is determined based on the change to the second visual characteristics of the second user interface object.
In some embodiments, the audio component corresponding to a respective user interface object (e.g., the first user interface object) is changed in accordance with the change to the respective user interface object (e.g., the change to at least one visual characteristic of the first visual characteristics of the first user interface object), independently of changes to other user interface objects (e.g., the change to at least one visual characteristic of the second visual characteristics of the second user interface object). For example, the audio component corresponding to a respective user interface object changes based only on the change to that respective user interface object.
In some embodiments, the audio components corresponding to multiple user interface objects (including a respective user interface object) are changed in accordance with the change to the respective user interface object (e.g., the change to at least one visual characteristic of the first visual characteristics of the first user interface object). For example, when a respective user interface object appears in the user interface, the volume of the audio components corresponding to the multiple user interface objects (other than the respective user interface object) is reduced.
In some embodiments, at least one visual characteristic of the first visual characteristics of the first user interface object is changed (616) in accordance with the change to the first audio component. For example, after the change to the first audio component is determined, the change to the first visual characteristics of the first user interface object is determined. In some embodiments, at least one visual characteristic of the second visual characteristics of the second user interface object is changed in accordance with the change to the second audio component.
In some embodiments, updating the user interface and updating the sound output further include (618): ceasing to display the first user interface object and ceasing to provide a sound output that includes the first audio component corresponding to the first user interface object (e.g., the first user interface object expands, fades out, and disappears from the user interface, as shown in FIG. 5E); ceasing to display the second user interface object and ceasing to provide a sound output that includes the second audio component corresponding to the second user interface object (e.g., the second user interface object expands, fades out, and disappears from the user interface); and/or displaying one or more respective user interface objects and providing a sound output that includes one or more respective audio components corresponding to the one or more respective user interface objects (e.g., displaying user interface objects distinct from the first user interface object and the second user interface object, as shown in FIG. 5C).
In some embodiments, updating the sound output includes (620) determining whether predetermined inactivity criteria are satisfied (e.g., no user input has been received for a predetermined period of time, or the remote control has been set down). In accordance with a determination that the predetermined inactivity criteria are satisfied, the device changes the volume of the sound output. In some embodiments, changing the volume of the sound output includes increasing or decreasing the volume of a respective audio component.
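The inactivity check can be sketched as follows. This is a hypothetical illustration: the timeout, the attenuation factor, and the use of a face-down/set-down signal as an inactivity cue are assumed values, not taken from the disclosure.

```python
INACTIVITY_TIMEOUT = 60.0  # assumed: seconds without input before the criteria are met

def sound_volume(base_volume, last_input_time, now, remote_set_down=False):
    """Lower the screen-saver sound volume once the inactivity criteria are met."""
    inactive = (now - last_input_time) >= INACTIVITY_TIMEOUT or remote_set_down
    # Halving the volume is one of the changes contemplated (the criteria could
    # equally trigger an increase, per the embodiment described above).
    return base_volume * 0.5 if inactive else base_volume

print(sound_volume(1.0, last_input_time=0.0, now=30.0))     # active -> 1.0
print(sound_volume(1.0, last_input_time=0.0, now=90.0))     # inactive -> 0.5
print(sound_volume(1.0, 0.0, 10.0, remote_set_down=True))   # set down -> 0.5
```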
In some embodiments, the pitch of a respective audio component corresponds (622, FIG. 6C) to the initial size of the corresponding user interface object (e.g., the pitch of audio component 503 in FIG. 5A corresponds to the initial size of user interface object 501-a), the stereo balance of the respective audio component corresponds to the position of the corresponding user interface object on the display (e.g., the stereo balance of audio component 503 in FIG. 5A corresponds to the position of user interface object 501-a on display 450), and/or a change in the volume of the respective audio component corresponds to a change in the size of the corresponding user interface object (e.g., the change in the volume of audio component 503 in FIG. 5B corresponds to the change in the size of user interface object 501-a). In some embodiments, the volume of a respective audio component corresponds to the size of the corresponding user interface object (e.g., the volume decreases as the size of the corresponding user interface object increases, as shown in FIGS. 5A through 5F, or, alternatively, the volume increases as the size of the corresponding user interface object increases). In some embodiments, the audio components are generated pseudorandomly. For example, the pitch, volume, and/or stereo balance of a respective audio component are determined pseudorandomly. Thus, the audio components are not part of a predetermined sequence of notes.
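The size-to-pitch, size-to-volume, and position-to-balance mappings can be sketched as follows. This is a hypothetical parameterization: the MIDI-note range, volume curve, and display width are assumed values chosen only to make the correspondences concrete.

```python
def audio_params(obj, display_width=1920, min_size=20, max_size=400):
    """Map a bubble's size and position to pitch, volume, and stereo balance."""
    size, x = obj["size"], obj["x"]
    # Larger bubbles get lower pitches (assumed MIDI-note range 48-84).
    t = (size - min_size) / (max_size - min_size)
    pitch = round(84 - t * (84 - 48))
    # Volume decreases as the bubble grows (as in FIGS. 5A-5F).
    volume = 1.0 - 0.8 * t
    # Stereo balance tracks horizontal position: -1 (left) to +1 (right).
    balance = 2 * x / display_width - 1
    return pitch, volume, balance

small_left = {"size": 20, "x": 0}
large_right = {"size": 400, "x": 1920}
print(audio_params(small_left))    # high pitch, full volume, hard left
print(audio_params(large_right))   # low pitch, quiet, hard right
```

A pseudorandom generator would then jitter these base parameters per object, so that the resulting tones do not form a predetermined sequence of notes.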
In some embodiments, the device detects (624) a user input (e.g., detecting a button press, or detecting that the remote control has been picked up). In response to detecting the user input, the device provides, to the audio system, sound information for changing the respective audio components corresponding to the respective user interface objects (e.g., reducing the volume and/or increasing the attack of the respective audio components). As used herein, attack refers to how sharply a note is struck (e.g., the rate at which the amplitude of a sound increases over time toward its peak loudness, as shown in Figure 5H). In response to detecting the user input, the device further provides, to the display, data for updating the user interface and displaying one or more control user interface objects (e.g., including (additional) control user interface objects, such as buttons, icons, sliders, and menus, in the user interface; or replacing the user interface with a second user interface that includes one or more control user interface objects, as shown in Figure 5G).
In some embodiments, the sound information provided to the audio system includes (626) information for providing a sound output that includes an audio component that is dissonant with the respective audio components corresponding to the respective user interface objects. In some embodiments, the audio component that is dissonant with the respective audio components has a preset (e.g., fixed) pitch.
In some embodiments, the first audio component and the second audio component are consonant with each other. In some embodiments, the respective audio components corresponding to the respective user interface objects are consonant with one another (e.g., the respective audio components have pitches of a particular chord).
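One minimal way to obtain components that are pseudo-random yet mutually consonant is to restrict the random pitch choice to the tones of a single chord, so that any two concurrently sounding components harmonize. This is a sketch under the assumption that "pitches of a particular chord" means drawing only from chord tones; the chord, frequency values, and parameter ranges are illustrative.

```python
import random

# Chord tones of a C-major triad over two octaves (Hz, rounded): any two
# pitches drawn from this set are consonant with one another.
CHORD_TONES_HZ = [261.63, 329.63, 392.00, 523.25, 659.26, 783.99]

def pseudo_random_component(rng):
    """Pick pitch pseudo-randomly, but only from chord tones, so the
    components are not part of a predetermined sequence of notes yet
    remain consonant when they overlap."""
    return {
        "pitch_hz": rng.choice(CHORD_TONES_HZ),
        "volume": rng.uniform(0.2, 0.8),
        "balance": rng.uniform(-1.0, 1.0),
    }

rng = random.Random(7)  # seeded only so the example is reproducible
components = [pseudo_random_component(rng) for _ in range(5)]
```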
In some embodiments, before detecting the user input (e.g., before the user picks up the remote control), the device provides (628), to the display, data for displaying the user interface and updating the user interface without providing, to the audio system, sound information for providing a sound output. After detecting the user input, the device provides, to the display, data for displaying the user interface and updating the user interface, and provides, to the audio system, sound information for providing a sound output and updating the sound output (e.g., ceasing to provide the sound output, as illustrated in Figure 5G, or alternatively reducing the volume or the like of the sound output). In some embodiments, the first user interface object and the second user interface object move more slowly after the user input is detected than before the user input is detected.
It should be understood that the particular order in which the operations in Figures 6A through 6C have been described is merely exemplary and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other methods described herein (e.g., methods 700, 800, 900, and other processes described herein, such as 1000) are also applicable in an analogous manner to method 600 described above with respect to Figures 6A through 6C. For example, the user interface objects, user interfaces, and sound outputs described above with reference to method 600 optionally have one or more of the characteristics of the user interface objects, user interfaces, and sound outputs described herein with reference to other methods described herein (e.g., methods 700, 800, 900, and 1000). For brevity, these details are not repeated here.
Figures 7A through 7D are flow diagrams illustrating a method 700 of providing sound information that corresponds to a user's interactions with user interface objects, in accordance with some embodiments. Method 700 is performed at an electronic device (e.g., device 300 of Figure 3, or portable multifunction device 100 of Figure 1A) in communication with a display and an audio system. In some embodiments, the electronic device is in communication with a user input device (e.g., a remote user input device, such as a remote control) that has a touch-sensitive surface. In some embodiments, the display is a touch-screen display, and the touch-sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. In some embodiments, the user input device is integrated with the electronic device. In some embodiments, the user input device is separate from the electronic device.
In some embodiments, the audio system includes a digital-to-analog converter. In some embodiments, the audio system includes a signal amplifier. In some embodiments, the audio system includes one or more speakers. In some embodiments, the audio system is integrated with the display (e.g., a television with audio processing circuitry and speakers). In some embodiments, the audio system is distinct and separate from the display (e.g., a display screen and a separate audio system). In some embodiments, the device includes a touch-sensitive surface. In some embodiments, the display is a touch-screen display, and the touch-sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface (e.g., the touch-sensitive surface is integrated with a remote control for a television).
Some operations in method 700 are, optionally, combined, and/or the order of some operations is, optionally, changed. In some embodiments, the user interfaces in Figures 5I through 5S are used to illustrate the processes described in method 700.
As described below, method 700 provides sound outputs that correspond to a user's interactions with user interface objects. The method reduces the cognitive burden on a user when interacting with user interface objects (e.g., control user interface objects), thereby creating a more efficient human-machine interface. Providing sound outputs helps the user manipulate user interface objects faster and more efficiently, which conserves power.
The device provides (702), to the display, data for presenting a user interface that has a plurality of user interface objects, including a control user interface object (e.g., the thumb of a slider bar) at a first position on the display. The control user interface object is configured to control a respective parameter (e.g., the current position in a navigation slider). In some embodiments, the control user interface object is not an audio control user interface object. For example, the control user interface object is the thumb (e.g., a playhead) of a slider bar that controls the current position in a video (e.g., a movie) being displayed on the display, as shown in Figures 5J through 5P and Figures 5R through 5S (thumb 504 of slider bar 506).
The device receives (704) a first input (e.g., a drag gesture on a touch-sensitive surface) that corresponds to a first interaction with the control user interface object on the display (e.g., an interaction for adjusting the position of the slider thumb). While receiving (706) the first input that corresponds to the first interaction with the control user interface object on the display (e.g., concurrently with at least a portion of the first input), the device provides (708), to the display, data for moving the control user interface object, in accordance with the first input, from the first position on the display to a second position on the display that is distinct from the first position on the display. For example, as shown in Figures 5K through 5L, drag gesture 512 drags thumb 504 from position 504-1 to position 504-2.
In some embodiments, in response to receiving the first input that corresponds to the first interaction with the control user interface object on the display, the device provides (710), to the display, data for moving the control user interface object, in accordance with the first input, from the first position on the display to the second position on the display that is distinct from the first position on the display, and for visually distinguishing the control user interface object, in accordance with the first input, while the control user interface object moves from the first position on the display to the second position on the display (e.g., the device displays a moving tail on the thumb and/or extends or stretches the thumb in the direction from the first position on the display toward the second position on the display, as shown in Figures 5K through 5L, Figures 5M through 5N, and Figures 5R through 5S).
While receiving the first input that corresponds to the first interaction with the control user interface object on the display, the device provides (712), to the audio system, first sound information for providing a first sound output that is distinct from the respective parameter controlled by the control user interface object and that has one or more characteristics that change in accordance with the movement of the control user interface object from the first position on the display to the second position on the display (e.g., the first sound output is audible feedback that corresponds to the movement of the slider control). In some embodiments, the data is provided to the display and the first sound information is provided to the audio system in response to receiving the first input. In some embodiments, the first sound output is provided by the audio system for the duration of the first interaction with the control user interface object.
In some embodiments, in accordance with a determination that the first input satisfies first input criteria, the first sound output has (714 of Figure 7B) a first set of characteristics (e.g., pitch, volume); and in accordance with a determination that the first input satisfies second input criteria, the first sound output has a second set of characteristics (e.g., pitch, volume) that is distinct from the first set of characteristics. For example, if the first input moves faster than a predetermined speed threshold, the volume of the first sound output increases; and if the first input moves more slowly than the predetermined speed threshold, the volume of the first sound output decreases.
In some embodiments, one or more characteristics include the first sound output in (716) multiple spatial channels The distribution (also referred to as " balancing ") of pitch, the volume of the first sound output and/or the output of the first sound.In some embodiments In, one or more characteristics include the tone color of the first sound output and/or one or more audio envelopes of the first sound output Characteristic (for example, playing sound, maintenance, delay and/or release characteristics).For example, equipment is according to control as illustrated in Fig. 5 I to Fig. 5 R Movement of the user interface object processed from the first position on display to the second position come change sound output pitch and balance. In some embodiments, the only one characteristic (for example, pitch or balance) of sound output is based on control user interface object It is mobile.
In some embodiments, the audio system is coupled (718) with multiple speakers that correspond to multiple spatial channels. In some embodiments, the multiple spatial channels include a left channel and a right channel. In some embodiments, the multiple spatial channels include a left channel, a right channel, a front channel, and a back channel. In some embodiments, the multiple spatial channels include a left channel, a right channel, a top channel, and a bottom channel. In some embodiments, the multiple spatial channels include a left channel, a right channel, a front channel, a back channel, a top channel, and a bottom channel. In some embodiments, providing, to the audio system, the first sound information for providing the first sound output includes determining a distribution (also called a balance) of the first sound output over the multiple spatial channels in accordance with the direction of the movement of the control user interface object from the first position on the display to the second position on the display. In some embodiments, providing, to the audio system, the first sound information for providing the first sound output includes adjusting the distribution of the first sound output over the multiple spatial channels in accordance with the direction of the movement of the control user interface object from the first position on the display to the second position on the display. For example, leftward movement of the control user interface object causes the distribution of the first sound output over the multiple spatial channels to shift to the left, and rightward movement of the control user interface object causes the distribution of the first sound output over the multiple spatial channels to shift to the right. In some embodiments, the first sound information includes information for providing the first sound output in accordance with the determined distribution of the first sound output over the multiple spatial channels.
In some embodiments, the audio system is coupled (720) with multiple speakers that correspond to multiple spatial channels (e.g., as described above). In some embodiments, providing, to the audio system, the first sound information for providing the first sound output includes determining the distribution of the first sound output over the multiple spatial channels (e.g., the ratio of the intensity of the first sound output to be output through the left channel to the intensity of the first sound output to be output through the right channel) in accordance with the position of the control user interface object on the display while the control user interface object moves from the second position on the display to a third position on the display. In some embodiments, providing, to the audio system, the first sound information for providing the first sound output includes adjusting the distribution of the first sound output over the multiple spatial channels in accordance with the position of the control user interface object on the display while the control user interface object moves from the second position on the display to the third position on the display. For example, when the thumb of a horizontal slider is positioned to the left of the midpoint of the slider bar, the distribution of the first sound output over the multiple spatial channels shifts to the left; when the thumb of the horizontal slider is positioned to the right of the midpoint of the slider bar, the distribution of the first sound output over the multiple spatial channels shifts to the right.
In some embodiments, the first sound information includes information for providing the first sound output in accordance with the determined distribution of the first sound output over the multiple spatial channels. For example, the sound output is played with a pan value (e.g., a stereo (left/right) pan or another multichannel pan) determined based on the position of the control user interface object.
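The position-to-pan rule of operations 718-720 can be sketched with an equal-power stereo pan. The equal-power (cosine/sine) law and the normalization are assumptions chosen for the example; the patent only requires that the distribution shift toward the side of the bar where the thumb sits.

```python
import math

def stereo_gains(thumb_x, bar_left, bar_right):
    """Equal-power stereo pan: a thumb left of the slider midpoint shifts
    the distribution toward the left channel, right of it toward the right.
    Returns (left_gain, right_gain) with left^2 + right^2 == 1."""
    # Normalize the thumb position to 0..1 along the slider bar.
    t = (thumb_x - bar_left) / (bar_right - bar_left)
    angle = t * math.pi / 2.0  # 0 -> full left, pi/2 -> full right
    return math.cos(angle), math.sin(angle)

left_g, right_g = stereo_gains(thumb_x=25.0, bar_left=0.0, bar_right=100.0)
```

At the midpoint the two gains are equal, and the total acoustic power stays constant as the thumb moves, which is why equal-power panning is a common choice for this kind of feedback.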
In some embodiments, providing, to the audio system, the first sound information for providing the first sound output includes (722 of Figure 7C) determining the volume of the first sound output in accordance with the speed of the movement of the control user interface object from the first position on the display to the second position on the display. In some embodiments, providing, to the audio system, the first sound information for providing the first sound output includes adjusting the volume of the first sound output in accordance with the speed of the movement of the control user interface object from the first position on the display to the second position on the display. In some embodiments, the first sound information includes information for providing the first sound output in accordance with the determined volume of the first sound output. In some embodiments, the speed of the movement of the control user interface object from the first position on the display to the second position on the display is higher than the speed of the movement of the control user interface object from the second position on the display to the third position on the display (described with reference to operation 728), and the volume of the first sound output is lower than the volume of the second sound output (described with reference to operation 728) (e.g., the volume of the sound output is reduced when the control user interface object moves faster). In some embodiments, the speed of the movement of the control user interface object (e.g., the thumb of a slider bar) from the first position on the display to the second position on the display is higher than the speed of the movement of the control user interface object from the second position on the display to the third position on the display (described with reference to operation 728), and the volume of the first sound output is higher than the volume of the second sound output (described with reference to operation 728) (e.g., the volume of the sound output increases when the control user interface object moves faster).
In some embodiments, the control user interface object is the thumb of a slider bar (724) (Figures 5J through 5S). The pitch of the first sound output changes in accordance with the position of the control user interface object on the slider bar (e.g., the distance of the control user interface object from one end of the slider bar, the distance of the control user interface object from the center of the slider bar, or the distance of the control user interface object from the nearest end of the slider bar). In some embodiments, the first sound output has a first pitch when the thumb is at the first position, and a second pitch, lower than the first pitch, when the thumb is at a second position to the left of the first position. In some embodiments, the pitch is lower the farther the thumb is from the right end.
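The position-to-pitch rule in (724) can be sketched as a monotone mapping from the thumb's position along the bar. A linear mapping and the 220-880 Hz range are assumed choices for illustration; the patent only requires that pitch fall as the thumb moves away from the right end.

```python
def slider_pitch_hz(thumb_x, bar_left, bar_right,
                    low_hz=220.0, high_hz=880.0):
    """Pitch rises as the thumb moves right: lowest at the left terminus,
    highest at the right terminus (linear in position, for illustration)."""
    t = (thumb_x - bar_left) / (bar_right - bar_left)  # 0..1 along the bar
    return low_hz + t * (high_hz - low_hz)
```

A usage example: `slider_pitch_hz(25.0, 0.0, 100.0)` yields a lower pitch than `slider_pitch_hz(75.0, 0.0, 100.0)`, matching the "lower pitch to the left" behavior described above.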
In some embodiments, after responding to the first input, the device receives (726) a second input that corresponds to a second interaction with the control user interface object on the display (e.g., an interaction for further adjusting the position of the slider thumb). In response to, and while, receiving the second input that corresponds to the second interaction with the control user interface object on the display: the device provides (728), to the display, data for moving the control user interface object, in accordance with the second input, from the second position on the display to a third position on the display that is distinct from the second position on the display; and provides, to the audio system, second sound information for providing a second sound output that has one or more characteristics that change in accordance with the movement of the control user interface object from the second position on the display to the third position on the display (e.g., the second sound output is audible feedback that corresponds to the additional movement of the slider control). In some embodiments, the respective sound output has a first pitch and a subsequent sound output has a second pitch that is distinct from the first pitch.
In some embodiments, the control user interface object is the thumb of a slider bar (730 of Figure 7D). The second position on the display is not a terminus of the slider bar. In some embodiments, the third position on the display (or another position that serves as the previous position on the display) is not a terminus of the slider bar. In some embodiments, the device receives (732) an input that corresponds to a respective interaction with the control user interface object on the display. In response to receiving the input that corresponds to the respective interaction with the control user interface object on the display: the device provides (734), to the display, data for moving the control user interface object, in accordance with the input, to a fourth position on the display, where the fourth position on the display is a terminus of the slider bar. In some embodiments, the control user interface object moves from the second position on the display. In some embodiments, the control user interface object moves from the third position on the display (or from another position that serves as the previous position on the display). In some embodiments, the fourth position on the display is distinct from the second position on the display. In some embodiments, the fourth position on the display is distinct from the third position on the display. The device also provides, to the audio system, sound information for providing a third sound output that indicates that the control user interface object is located at the terminus of the slider bar, where the third sound output is distinct from the first sound output. In some embodiments, the third sound output is distinct from the second sound output. In some embodiments, the third sound output is a springy sound (e.g., an echoing "boing") that provides audible feedback corresponding to a rubber-band effect (e.g., as illustrated in Figures 5O through 5P).
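The branch between the ordinary movement sound and the distinct end-of-bar sound can be sketched as a clamp plus a terminus check. The sound names are placeholders invented for the example; the actual audio assets and the clamping behavior are not specified by the text above.

```python
def feedback_sound(requested_x, bar_left, bar_right):
    """Clamp the thumb to the slider bar and report which feedback sound to
    play: the normal movement sound inside the bar, or a distinct 'spring'
    sound when the thumb lands on a terminus (the rubber-band case)."""
    clamped = min(max(requested_x, bar_left), bar_right)
    at_terminus = clamped in (bar_left, bar_right)
    return clamped, ("spring_boing" if at_terminus else "movement_sound")
```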
It should be understood that the particular order in which the operations in Figures 7A through 7D have been described is merely exemplary and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other methods described herein (e.g., methods 600, 800, 900, and other processes described herein, such as 1000) are also applicable in an analogous manner to method 700 described above with respect to Figures 7A through 7D. For example, the user interface objects, user interfaces, and sound outputs described above with reference to method 700 optionally have one or more of the characteristics of the user interface objects, user interfaces, and sound outputs described herein with reference to other methods described herein (e.g., methods 600, 800, 900, and 1000). For brevity, these details are not repeated here.
Figures 8A through 8C are flow diagrams illustrating a method 800 of providing sound information that corresponds to a user's interactions with user interface objects, in accordance with some embodiments. Method 800 is performed at an electronic device (e.g., device 300 of Figure 3, or portable multifunction device 100 of Figure 1A) in communication with a display and an audio system. In some embodiments, the electronic device is in communication with a user input device (e.g., a remote user input device, such as a remote control) that has a touch-sensitive surface. In some embodiments, the display is a touch-screen display, and the touch-sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. In some embodiments, the user input device is integrated with the electronic device. In some embodiments, the user input device is separate from the electronic device. Some operations in method 800 are, optionally, combined, and/or the order of some operations is, optionally, changed.
In some embodiments, the audio system includes a digital-to-analog converter. In some embodiments, the audio system includes a signal amplifier. In some embodiments, the audio system includes one or more speakers. In some embodiments, the audio system is integrated with the display (e.g., a television with audio processing circuitry and speakers). In some embodiments, the audio system is distinct and separate from the display (e.g., a display screen and a separate audio system). In some embodiments, the device includes a touch-sensitive surface. In some embodiments, the display is a touch-screen display, and the touch-sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface (e.g., the touch-sensitive surface is integrated with a remote control for a television).
In some embodiments, the user interfaces in Figures 5T through 5AA are used to illustrate the processes described in method 800.
As described below, method 800 provides sound outputs that correspond to a user's interactions with user interface objects. The method reduces the cognitive burden on a user when interacting with user interface objects (e.g., by moving a current focus), thereby creating a more efficient human-machine interface. Providing sound outputs helps the user manipulate user interface objects faster and more efficiently, which conserves power.
The device provides (802), to the display, data for presenting a first user interface that has a plurality of user interface objects, where a current focus is on a first user interface object of the plurality of user interface objects. In some embodiments, while the current focus is on the first user interface object, the first user interface object is visually distinguished from the other user interface objects of the plurality of user interface objects. For example, as shown in Figure 5T, application icon 532-e is visually distinguished from the other application icons 532 by being slightly larger and having a highlighted boundary.
While the display is presenting the first user interface, the device receives (804) an input (e.g., a drag gesture on a touch-sensitive surface) that corresponds to a request to change the position of the current focus in the first user interface, the input having a direction and a magnitude (e.g., a speed and/or distance of the input). In some embodiments, the electronic device is in communication with a remote control and receives the input from the remote control. For example, as shown in Figure 5T, user input 536 is detected on touch-sensitive surface 451 of remote control 5001.
In response to receiving the input that corresponds to the request to change the position of the current focus in the first user interface, the device provides (806), to the display, data for moving the current focus from the first user interface object to a second user interface object, where the second user interface object is selected for the current focus in accordance with the direction and/or magnitude of the input. For example, as shown in Figures 5T through 5U, in response to user input 536 the device moves the current focus from application icon 532-e (Figure 5T) to application icon 532-d (Figure 5U). In some embodiments, while the current focus is on the second user interface object, the second user interface object is visually distinguished from the other user interface objects of the plurality of user interface objects. In some embodiments, while the current focus is on a respective user interface object, the respective user interface object is visually distinguished from the other user interface objects of the plurality of user interface objects.
Further, in response to receiving the input that corresponds to the request to change the position of the current focus in the first user interface, the device provides, to the audio system, first sound information for providing a first sound output that corresponds to the movement of the current focus from the first user interface object to the second user interface object, where the first sound output is provided concurrently with the display of the current focus moving from the first user interface object to the second user interface object, and the pitch of the first sound output is determined based at least in part on: the size of the first user interface object (e.g., a low pitch if the first user interface object is large, and a high pitch if the first user interface object is small), the type of the first user interface object (e.g., a low pitch if the first user interface object is an application icon, and a high pitch if the first user interface object is a movie poster), the size of the second user interface object (e.g., a low pitch if the second user interface object is large, and a high pitch if the second user interface object is small), and/or the type of the second user interface object (e.g., a low pitch if the second user interface object is an application icon, and a high pitch if the second user interface object is a movie poster). For example, the pitch of sound output 538-1, shown in Figure 5U for application icon 532-d, is higher than the pitch of sound output 538-9, shown in Figure 5CC for movie icon 534-c, which is larger than application icon 532-d.
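A sketch of the size-to-pitch rule in (806): pitch inversely related to object size, with the source and destination objects blended into one effective size. The 50/50 blend, the base frequency, and the reference size are assumptions made for the example, since the text above allows pitch to depend on either object or both.

```python
def focus_move_pitch_hz(from_size, to_size, base_hz=880.0, ref_size=100.0):
    """Lower pitch for larger objects: the effective size blends the object
    the focus leaves and the one it lands on (an assumed 50/50 blend)."""
    effective = 0.5 * (from_size + to_size)
    return base_hz * ref_size / (ref_size + effective)

icon_pitch = focus_move_pitch_hz(from_size=80.0, to_size=80.0)      # small icons
poster_pitch = focus_move_pitch_hz(from_size=300.0, to_size=300.0)  # large posters
```

Moving the focus between small application icons then yields a higher pitch than moving it between large movie posters, matching the Figure 5U versus Figure 5CC comparison above.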
In some embodiments, the first sound output is characterized by its distribution over multiple spatial channels, one or more audio envelope characteristics (e.g., attack, decay, sustain, and/or release), timbre, volume, and/or pitch. In some embodiments, the distribution over the multiple spatial channels, the one or more audio envelope characteristics (e.g., attack, decay, sustain, and/or release), the timbre, the volume, and/or the pitch are determined based on any of: the size of the first user interface object, the type of the first user interface object, the size of the second user interface object, the type of the second user interface object, the magnitude of the input, and/or the direction of the input.
In some embodiments, the pitch of the first sound output is determined based on a characteristic (e.g., the size and/or type) of the second user interface object (e.g., and not based on any characteristic of the first user interface object). In some embodiments, the first sound output is an "entry" sound or a "move-to" sound that provides the user with audible feedback indicating the size and/or type of the user interface object to which she is navigating. In some embodiments, the pitch of the first sound output is determined based on a characteristic (e.g., the size and/or type) of the first user interface object (e.g., and not based on any characteristic of the second user interface object). In some embodiments, the first sound output is an "exit" sound or a "move-away" sound that provides the user with audible feedback indicating the size and/or type of the user interface object from which she is navigating away.
In some embodiments, the volume of the first sound output is determined (808) based on the magnitude of the input (e.g., the speed and/or distance of the input). For example, in accordance with a determination that the speed and/or distance of the input exceeds a predetermined threshold, the volume of the first sound output is reduced.
In some embodiments, one or more user interface objects are located between the first user interface object and the second user interface object on the display, and the current focus moves from the first user interface object to the second user interface object via the one or more user interface objects in accordance with the direction and/or magnitude of the input (e.g., in Figures 5W through 5X, the current focus moves from application icon 532-b to application icon 532-e by passing through application icon 532-c and application icon 532-d).
In some embodiments, the volume of the first sound output is reduced (810) in accordance with a determination that the magnitude of the input satisfies predetermined input criteria (e.g., speed and/or distance criteria). For example, the first sound output is a quieter "move-to" sound (e.g., as described above) when the second user interface object is farther away on the display than when the second user interface object is closer to the first user interface object, as shown in Figures 5W through 5X. In some embodiments, a respective number (e.g., a count) of user interface objects is located between the first user interface object and the second user interface object on the display. The current focus moves from the first user interface object to the second user interface object via the user interface objects between the first user interface object and the second user interface object, and the volume of the first sound output is based on the respective number (e.g., count) of user interface objects located between the first user interface object and the second user interface object on the display (e.g., providing audio feedback to the user that indicates how many user interface objects the focus is moving across).
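One sketch of (810): attenuate the "move-to" sound once per intervening object the focus crosses, so long jumps sound softer than single-step moves. The per-step attenuation factor of 0.7 is an assumption; the text above only requires that volume depend on the count of intervening objects.

```python
def move_to_volume(base_volume, intervening_count, per_step=0.7):
    """Each user interface object between the source and destination quiets
    the feedback multiplicatively; adjacent moves play at full volume."""
    return base_volume * (per_step ** intervening_count)

near = move_to_volume(1.0, 0)  # adjacent object: full volume
far = move_to_volume(1.0, 2)   # two objects crossed: quieter
```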
In some embodiments, the release of the first sound output is reduced (812) in accordance with a determination that the amplitude of the input satisfies the predetermined input criteria. For example, for navigation across discrete objects (for example, representations of multiple videos in a video selection user interface, such as a TV home screen), the first sound output has a shorter release when the amplitude satisfies the predetermined input criteria (for example, with respect to speed and/or distance), and has a longer release when the amplitude does not satisfy the predetermined input criteria (for example, the first sound output has a longer release when the speed of the first input is slower, which provides more gradual audible feedback indicating the user's input).
In some embodiments, the distribution of the first sound output across multiple spatial channels is adjusted (814) in accordance with the position of the second user interface object in the first user interface (for example, when the current focus moves to a user interface object located on the left side of the first user interface, the left audio channel increases and/or the right audio channel decreases, and when the current focus moves to a user interface object located on the right side of the first user interface, the right audio channel increases and/or the left audio channel decreases, as shown in Fig. 5CC to Fig. 5EE). In some embodiments, the distribution of the first sound output across the multiple spatial channels is adjusted in accordance with the position of the second user interface object relative to the first user interface object (for example, up/down, left, or right). In some embodiments, the distribution of the first sound output across the multiple spatial channels is adjusted in accordance with the movement of the current focus from the first user interface object to the second user interface object (for example, up/down, left, or right). In some embodiments, the multiple spatial channels include a left audio channel, a right audio channel, an upper audio channel, and a lower audio channel. For example, when the current focus moves to a user interface object located on the upper side of the first user interface, the upper audio channel increases and/or the lower audio channel decreases, and when the current focus moves to a user interface object located on the lower side of the first user interface, the lower audio channel increases and/or the upper audio channel decreases.
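One common way to realize the left/right channel adjustment just described is a constant-power pan. The sketch below is illustrative only (the function name and the screen-coordinate convention are assumptions); it derives left and right channel gains from the target object's horizontal position:

```python
import math

def pan_gains(object_x, screen_width):
    """Constant-power stereo pan: returns (left_gain, right_gain) for a
    target object at horizontal position object_x (0 = left edge)."""
    position = max(0.0, min(1.0, object_x / screen_width))
    angle = position * math.pi / 2  # 0 -> hard left, pi/2 -> hard right
    return math.cos(angle), math.sin(angle)
```

An object at the left edge yields gains (1, 0), one at the right edge yields (0, 1), and the squared gains always sum to 1, so perceived loudness stays constant as the focus moves across the screen.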
In some embodiments, the pitch of the first sound output is determined (816 of Fig. 8B) based on the size of the second user interface object and/or the type of the second user interface object (for example, and not based on the size of the first user interface object and/or the type of the first user interface object). In response to receiving an input that corresponds to a request to change the position of the current focus in the first user interface, the device provides, to the audio system, second sound information for a second sound output that corresponds to the movement of the current focus from the first user interface object to the second user interface object, where the pitch of the second sound output is determined based at least in part on the size of the first user interface object and/or the type of the first user interface object (for example, and not based on the size of the second user interface object and/or the type of the second user interface object). For example, the first sound output indicates that the current focus "moves to" the second user interface object (for example, an entry sound), and the second sound output indicates that the current focus "moves away" from the first user interface object (for example, an exit sound). As shown in Fig. 5T to Fig. 5U, in conjunction with moving the current focus from application icon 532-e to application icon 532-d, sound output 540-1 (an exemplary exit sound) and sound output 538-1 (an exemplary entry sound) are provided sequentially. In some embodiments, the second sound output starts before the first sound output starts. In some embodiments, the second sound output terminates before the first sound output terminates. In some embodiments, the second sound output is provided at least partly concurrently with the first sound output. In some embodiments, the first sound output starts after the second sound output terminates (for example, the first sound output and the second sound output do not overlap).
In some embodiments, the first user interface includes user interface objects of three or more different sizes, and the three or more user interface objects correspond to sound outputs with one or more different sound characteristics (for example, different pitches).
In some embodiments, in response to receiving one or more inputs that correspond to one or more requests to change the position of the current focus in the first user interface: the device provides (818), to the display, data for moving the current focus from the second user interface object to a third user interface object. The device also provides, to the audio system, third sound information for a third sound output that corresponds to the movement of the current focus from the second user interface object to the third user interface object, where the third sound output is provided concurrently with the display of the current focus moving from the second user interface object to the third user interface object. The device further provides, to the display, data for moving the current focus from the third user interface object to a fourth user interface object, and provides, to the audio system, fourth sound information for a fourth sound output that corresponds to the movement of the current focus from the third user interface object to the fourth user interface object. The fourth sound output is provided concurrently with the display of the current focus moving from the third user interface object to the fourth user interface object. For example, the current focus moves to icon 550-d with sound output 538-7 (Fig. 5Y), subsequently moves to application icon 532-e with sound output 538-8 (Fig. 5Z), and moves to movie icon 534-c with sound output 538-9 (Fig. 5CC).
In some embodiments, the sound output that corresponds to the movement of the current focus to the largest object among the second user interface object, the third user interface object, and the fourth user interface object has a lower pitch than the respective sound outputs that correspond to the movement of the current focus to the remaining two of those objects (for example, when the third user interface object is the largest of the second, third, and fourth user interface objects, the sound output that corresponds to the movement of the current focus to the third user interface object has a pitch lower than the pitch of the sound output that corresponds to the movement of the current focus to the second user interface object and lower than the pitch of the sound output that corresponds to the movement of the current focus to the fourth user interface object). For example, in Fig. 5Y to Fig. 5CC, movie icon 534-c is the largest object among icon 550-d, application icon 532-e, and movie icon 534-c, and the corresponding sound output 538-9 has the lowest pitch among the sound outputs associated with icon 550-d, application icon 532-e, and movie icon 534-c.
In some embodiments, the sound output that corresponds to the movement of the current focus to the smallest object among the second user interface object, the third user interface object, and the fourth user interface object has a higher pitch than the respective sound outputs that correspond to the movement of the current focus to the remaining two of those objects (for example, when the second user interface object is the smallest of the second, third, and fourth user interface objects, the sound output that corresponds to the movement of the current focus to the second user interface object has a pitch higher than the pitch of the sound output that corresponds to the movement of the current focus to the third user interface object and higher than the pitch of the sound output that corresponds to the movement of the current focus to the fourth user interface object). For example, in Fig. 5Y to Fig. 5CC, application icon 532-e is the smallest object among icon 550-d, application icon 532-e, and movie icon 534-c, and the corresponding sound output 538-8 has the highest pitch among the sound outputs associated with icon 550-d, application icon 532-e, and movie icon 534-c.
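The inverse size-to-pitch mapping described in the preceding paragraphs could be sketched as follows (an illustrative Python sketch; the reference area, base frequency, and octave-per-quadrupling rate are assumptions, not values from the patent):

```python
import math

def focus_move_pitch(target_area, reference_area=10_000.0, base_hz=440.0):
    """Map a target object's on-screen area to the pitch of the 'move-to'
    sound: larger objects get a lower pitch, smaller objects a higher pitch
    (here, one octave per quadrupling or quartering of area)."""
    octaves = 0.5 * math.log2(reference_area / target_area)
    return base_hz * 2 ** octaves
```

With these assumed values, a four-times-larger target sounds one octave lower than the reference, and a four-times-smaller target one octave higher.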
When the display presents the first user interface with the multiple user interface objects, where the first user interface with the multiple user interface objects is included in a hierarchy of user interfaces, the device receives (820 of Fig. 8C) an input that corresponds to a request to replace the first user interface with a second user interface in the hierarchy of user interfaces (for example, the input 574 pressing menu button 5002, as shown in Fig. 5GG, or the user input 554 on touch-sensitive surface 451, as shown in Fig. 5Z). To describe these and related features, assume that an exemplary hierarchy of user interfaces includes a screensaver user interface (for example, screensaver user interface 517 in Fig. 5HH), a home screen user interface below the screensaver user interface (for example, home screen user interface 530 in Fig. 5GG), and an application user interface below the home screen user interface (for example, game user interface 594 in Fig. 5AA) (for example, the hierarchy of user interfaces includes, in top-to-bottom order, screensaver user interface 517, home screen user interface 530, and game user interface 594). In response to receiving the input that corresponds to the request to replace the first user interface with the second user interface: the device provides (822), to the display, data for replacing the first user interface with the second user interface (for example, in response to the input 574 pressing menu button 5002, screensaver user interface 517 replaces home screen user interface 530, as shown in Fig. 5GG to Fig. 5HH, and in response to the user input 554 on touch-sensitive surface 451, game user interface 594 replaces home screen user interface 530). In accordance with a determination that the first user interface is located above the second user interface in the hierarchy of user interfaces (for example, navigating from a higher user interface to a lower user interface in the exemplary hierarchy, such as navigating from home screen user interface 530 to game user interface 594), the device provides, to the audio system, fifth sound information for a fifth sound output (for example, a high-pitched sound, such as sound output 556-1 in Fig. 5AA). In some embodiments, the fifth sound output is provided concurrently with replacing the first user interface with the second user interface. In some embodiments, the first user interface is located directly above the second user interface in the hierarchy of user interfaces (for example, in the exemplary hierarchy, home screen user interface 530 is located directly above game user interface 594). In accordance with a determination that the first user interface is located below the second user interface in the hierarchy of user interfaces (for example, navigating from a lower user interface to a higher user interface in the exemplary hierarchy, such as navigating from home screen user interface 530 to screensaver user interface 517), the device provides, to the audio system, sixth sound information for a sixth sound output that is different from the fifth sound output (for example, a low-pitched sound, such as sound output 560-3 in Fig. 5HH). In some embodiments, the sixth sound output is provided concurrently with replacing the first user interface with the second user interface. In some embodiments, the first user interface is located directly below the second user interface in the hierarchy of user interfaces (for example, in the exemplary hierarchy, home screen user interface 530 is located directly below screensaver user interface 517). Thus, the fifth sound output and/or the sixth sound output can be used to indicate whether the user is about to navigate toward the top or the bottom of the hierarchy.
In some embodiments, the fifth sound output is different from the first sound output. In some embodiments, the fifth sound output is different from the second sound output. In some embodiments, the fifth sound output is different from the third sound output. In some embodiments, the fifth sound output is different from the fourth sound output. In some embodiments, the sixth sound output is different from the first sound output. In some embodiments, the sixth sound output is different from the second sound output. In some embodiments, the sixth sound output is different from the third sound output. In some embodiments, the sixth sound output is different from the fourth sound output. In some embodiments, the sixth sound output is different from the fifth sound output.
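The hierarchy-dependent choice between the fifth and sixth sound outputs can be sketched as follows (illustrative Python only; the level numbering, with 0 at the top of the hierarchy, and the sound labels are assumptions):

```python
def navigation_sound(from_level, to_level):
    """Select a transition sound from the relative depth of the outgoing and
    incoming user interfaces in the hierarchy (0 = top, larger = deeper)."""
    if to_level > from_level:
        return "fifth_sound"  # navigating downward (e.g., home screen -> game)
    if to_level < from_level:
        return "sixth_sound"  # navigating upward (e.g., home screen -> screensaver)
    return None               # same level: no hierarchy cue
```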
When the display presents the first user interface, the device receives (824) an input that corresponds to a request to activate the user interface object that has the current focus (for example, the user interface object is overlapped by, surrounded by, or adjacent to the current focus). In response to receiving the input that corresponds to the request to activate the user interface object that has the current focus, in accordance with a determination that the first user interface object has the current focus, the device provides (826), to the audio system, seventh sound information for providing a seventh sound output that corresponds to the activation of the first user interface object. For example, application icon 532-e (Fig. 5Z) is activated in conjunction with providing sound output 556-1 (Fig. 5AA). In accordance with a determination that the second user interface object has the current focus, the device provides, to the audio system, eighth sound information for an eighth sound output that corresponds to the activation of the second user interface object. The eighth sound output is different from the seventh sound output. For example, movie icon 534-a (Fig. 5EE) is activated in conjunction with providing sound output 556-2 (Fig. 5FF). The relationship between one or more characteristics of the sound output that corresponds to the movement of the current focus to the first user interface object and one or more characteristics of the second sound output corresponds to the relationship between one or more characteristics of the seventh sound output and one or more characteristics of the eighth sound output. For example, when the first user interface object is smaller than the second user interface object, the sound output that corresponds to the movement of the current focus to the first user interface object has a pitch higher than the pitch of the sound output that corresponds to the movement of the current focus to the second user interface object, and the sound output that corresponds to the activation of the first user interface object has a pitch higher than the pitch of the sound output that corresponds to the activation of the second user interface object (for example, sound output 538-8, which corresponds to the movement of the current focus to application icon 532-e in Fig. 5Z, has a higher pitch than sound output 538-11, which corresponds to the movement of the current focus to movie icon 534-a in Fig. 5EE, and sound output 556-1, which corresponds to the activation of application icon 532-e in Fig. 5AA, has a higher pitch than sound output 556-2, which corresponds to the activation of movie icon 534-a in Fig. 5FF).
In some embodiments, a respective sound output is a single tone or chord (for example, the sound "DING"). In some embodiments, a respective sound output is a single tone or chord in a melody (for example, the "DING" sound in the short melody "DING DONG", where a melody includes at least two tones and/or chords). In some embodiments, when the single tone or chord in the melody is provided (or determined, modified, etc.) in accordance with a determined characteristic, the sound output is provided (or determined, modified, etc.) in accordance with the determined characteristic (for example, when the "DING" sound in the melody "DING DONG" is provided with a determined pitch, the corresponding sound output is provided with the determined pitch). In some embodiments, when the entire melody is provided (or determined, modified, etc.) in accordance with a determined characteristic, the sound output is provided (or determined, modified, etc.) as a melody in accordance with the determined characteristic (for example, the sound output is a V-I cadence, where I denotes the root chord determined in accordance with the determined pitch and V is the chord a fifth above the root chord I). In some embodiments, the pitch is a perceived pitch.
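The V-I cadence mentioned above can be made concrete with MIDI note numbers (an illustrative Python sketch, assuming major triads in root position; the patent does not specify voicings):

```python
def v_i_cadence(root_midi):
    """Return the two chords of a V-I cadence as MIDI note lists: the
    dominant triad built a perfect fifth above the root, then the root
    triad itself (determined by the identified pitch)."""
    major_triad = lambda root: [root, root + 4, root + 7]
    return major_triad(root_midi + 7), major_triad(root_midi)
```

For a root of middle C (MIDI 60), this yields a G major triad resolving to C major.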
It should be understood that the particular order in which the operations in Figs. 8A to 8C have been described is merely exemplary and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art will recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (for example, methods 600, 700, 900, and 1000) are also applicable in a manner analogous to method 800 described above with respect to Figs. 8A to 8C. For example, the user interface objects, user interfaces, and sound outputs described above with reference to method 800 optionally have one or more of the characteristics of the user interface objects, user interfaces, and sound outputs described herein with reference to other methods described herein (for example, methods 600, 700, 900, and 1000). For brevity, these details are not repeated here.
Figs. 9A to 9C illustrate a flow diagram of a method 900 of providing sound information for a video information user interface in accordance with some embodiments. Method 900 is performed at an electronic device (for example, device 300 of Fig. 3 or portable multifunction device 100 of Fig. 1A) in communication with a display and an audio system. In some embodiments, the electronic device is in communication with a user input device that has a touch-sensitive surface (for example, a remote user input device, such as a remote control). In some embodiments, the display is a touch-screen display, and the touch-sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. In some embodiments, the user input device is integrated with the electronic device. In some embodiments, the user input device is separate from the electronic device. Some operations in method 900 are optionally combined and/or the order of some operations is optionally changed.
In some embodiments, the audio system includes a digital-to-analog converter. In some embodiments, the audio system includes a signal amplifier. In some embodiments, the audio system includes one or more speakers. In some embodiments, the audio system is integrated with the display (for example, a television with audio processing circuitry and speakers). In some embodiments, the audio system is distinct and separate from the display (for example, a display screen and a separate audio system).
Some operations in method 900 are optionally combined and/or the order of some operations is optionally changed. In some embodiments, the user interfaces in Fig. 5II to Fig. 5MM are used to illustrate the processes described in method 900.
As described below, pausing the playback of a video includes providing data for presenting multiple still images from the video while the playback of the video is paused. The multiple still images from the video help the user to understand the context of the video around the point at which playback was paused, even before playback of the video resumes. Thus, the user can quickly understand the context of the video after playback of the video resumes.
The device provides (902), to the display, data for presenting a first video information user interface that includes descriptive information about a first video. For example, the first video information user interface (for example, product page view 572 in Fig. 5II) includes information for playing the first video, a title, a running time, a plot summary, ratings, affordances, and so on.
In some embodiments, before the display presents the first video information user interface: the device provides (904), to the display, data for presenting a video selection user interface that includes representations of multiple videos (for example, posters and/or icons with titles for respective videos corresponding to the multiple videos). The device receives an input that corresponds to a selection of the representation of the first video among the multiple videos, where the first video information user interface for the first video is presented in response to receiving the input that corresponds to the selection of the representation of the first video. For example, the user interface in Fig. 5GG is displayed before the display of the user interface in Fig. 5II, and the user interface in Fig. 5II is presented in response to the user activating movie icon 534-a (Fig. 5GG).
The device provides (906), to the audio system, sound information for providing a first sound output that corresponds to (for example, is based on) the first video during the presentation of the first video information user interface by the display. In some embodiments, the sound information is based on the genre of the first video (for example, a dark ambient sound for a drama, a bright ambient sound for a comedy, etc.). In some embodiments, the genre of the first video is determined using metadata associated with the video (for example, metadata indicating one or more genre classifications for the first video or a scene in the video). In some embodiments, the sound information is audio generated from the sound or music of the first video itself (for example, the audio is audio from the soundtrack of the first video). In some embodiments, the sound information is audio selected to correspond to the tone of a particular scene in the first video. For example, in some embodiments, the device analyzes the color distribution of the first scene of the first video to determine whether the scene is "bright" or "dark" and matches the audio as "bright" or "dark". In some embodiments, the first sound output loops (repeats) while the first video information user interface about the first video is displayed.
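The bright/dark scene classification described above could be sketched as follows (illustrative Python; the Rec. 601 luma weights and the threshold of 128 are assumptions for the example, not values from the patent):

```python
def ambient_sound_for_scene(pixels):
    """Classify a scene as 'bright' or 'dark' from its mean luma and pick a
    matching ambient sound. pixels is an iterable of (r, g, b) tuples, 0-255."""
    pixels = list(pixels)
    if not pixels:
        return "neutral_ambient"
    mean_luma = sum(0.299 * r + 0.587 * g + 0.114 * b
                    for r, g, b in pixels) / len(pixels)
    return "bright_ambient" if mean_luma >= 128 else "dark_ambient"
```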
In some embodiments, the first video information user interface includes (908) multiple user interface objects. For example, as shown in Fig. 5II, the user interface includes a "Watch Now" affordance and a "Trailers/Previews" affordance. A first user interface object of the multiple user interface objects is configured so that, when selected (or activated), it initiates the electronic device providing, to the audio system, sound information for a sound output that corresponds to at least part of a first soundtrack of the first video (for example, activating a play user interface object in the first video information user interface initiates output of a portion of the first video's soundtrack). A second user interface object of the multiple user interface objects is configured so that, when selected (or activated), it initiates the electronic device providing, to the audio system, sound information for a sound output that corresponds to at least part of a second soundtrack of the first video that is different from the first soundtrack of the first video (for example, activating a trailer user interface object in the first video information user interface initiates output of a trailer of the first video).
While the display presents the first video information user interface that includes the descriptive information about the first video, the device receives (910) an input that corresponds to a request to play back the first video (for example, receiving an input that corresponds to activation of a play icon in the video information user interface or activation of a play button on a remote control in communication with the device). In response to receiving the input that corresponds to the request to play back the first video, the device provides (912), to the display, data for replacing the presentation of the first video information user interface with playback of the first video (for example, video playback view 500 in Fig. 5JJ). For example, the user decides to watch the first video and therefore activates playback of the first video.
During the playback of the first video, the device receives (914 of Fig. 9B) an input that corresponds to a request to display a second video information user interface about the first video (for example, receiving an input 580 that corresponds to activation of a pause icon or a return icon, or activation of a pause button or a return button (such as menu button 5002) on a remote control in communication with the device, as shown in Fig. 5JJ). In some embodiments, the second video information user interface about the first video is different from the first video information user interface about the first video. For example, the second video information user interface is a "pause" screen distinct from the product page view. In some embodiments, the second video information user interface about the first video is the same as the first video information user interface about the first video. In some embodiments, when the user pauses the video, the device returns to the first video information user interface.
In response to receiving the input that corresponds to the request to display the second video information user interface about the first video: the device provides (916), to the display, data for replacing the playback of the first video with the second video information user interface about the first video (for example, product page view 572 in Fig. 5KK). The device provides, to the audio system, sound information for providing, during the presentation of the second video information user interface by the display, a second sound output that corresponds to (for example, is based on) the first video and is different from the first sound output. In some embodiments, the second sound output loops (repeats) while the second video information user interface about the first video is displayed.
In some embodiments, the second sound output is (918) a portion of the soundtrack of the first video that corresponds to the position in the first video that was playing when the input that corresponds to the request to display the second video information user interface was received. In some embodiments, the second sound output is selected from the portion of the soundtrack of the first video that corresponds to the chapter of the first video covering the position in the first video that was playing when the input that corresponds to the request to display the second video information user interface was received.
In some embodiments, in accordance with a determination that the input that corresponds to the request to display the second video information user interface is received within a predetermined duration from the end of the first video (for example, input 582 received while the end credits are displayed, as shown in Fig. 5LL), the end-credits soundtrack of the first video is selected (920) for the second sound output. For example, if the first video is close (for example, sufficiently close) to its end, the end-credits soundtrack is played with the video information user interface.
In some embodiments, after initiating playback of the first video, the device receives (922 of Fig. 9C) an input that corresponds to a request to pause the first video. In response to receiving the input that corresponds to the request to pause the first video: the device pauses (924) the playback of the first video at a first playback position in the timeline of the first video and provides, to the display, data for presenting one or more selected still images from the first video, where the one or more selected still images are selected based on the first playback position at which the first video is paused. The device also provides, to the audio system, sound information for a sound output that corresponds to the soundtrack of the first video at the first playback position (for example, if the input that corresponds to the request to pause the first video is received while the audio system is outputting the first soundtrack of the first video, the audio system continues to output the first soundtrack of the first video while the first video is paused).
In some embodiments, after initiating playback of the first video, the device receives (926) an input that corresponds to a request to pause the first video. In response to receiving the input that corresponds to the request to pause the first video: the device pauses (928) the playback of the first video at a first playback position in the timeline of the first video; and provides, to the display, data for presenting one or more selected still images from the first video (for example, Fig. 5OO to Fig. 5SS). The one or more selected still images are selected based on the first playback position at which the first video is paused. The device also provides, to the audio system, sound information for a sound output that corresponds to one or more characteristics of the first video at the first playback position (for example, the beat or chords of the original soundtrack). In some embodiments, the method includes identifying the beat and/or chords of the original soundtrack at or within a time window of a predetermined duration covering the first playback position, and selecting music different from the original soundtrack based on the beat and/or chords of the original soundtrack at the first playback position.
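Selecting replacement music by beat and chord, as described above, could look like this (an illustrative Python sketch; the scoring weights and the track-library format are assumptions):

```python
def matching_music(pause_tempo_bpm, pause_chord, library):
    """Pick the library track whose tempo and chord best match the original
    soundtrack at the pause position. library is a list of dicts with
    'name', 'tempo' (BPM), and 'chord' keys."""
    def mismatch(track):
        tempo_diff = abs(track["tempo"] - pause_tempo_bpm)
        chord_penalty = 0 if track["chord"] == pause_chord else 50
        return tempo_diff + chord_penalty
    return min(library, key=mismatch)["name"]
```

A smaller mismatch score wins; the assumed penalty of 50 makes a chord match dominate moderate tempo differences.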
In some embodiments, the first sound output and/or the second sound output are selected (930) from the soundtrack of the first video. In some embodiments, the first sound output is the theme music of the first video. In some embodiments, the first sound output is independent of the current playback position in the first video. For example, the first sound output is selected even before the first video is played.
In some embodiments, the first sound output and/or the second sound output are selected (932) based on one or more characteristics of the first video (for example, genre, user ratings, critic ratings, etc.) (for example, from soundtracks independent of the first video, such as a collection of soundtracks from various movies). For example, electronic music is selected for a science fiction movie, and western music is selected for a western movie (for example, based on metadata associated with the first video). For example, music with a fast beat and/or starting with a major chord is selected for a movie with user ratings and/or critic ratings above a predetermined criterion, and music with a slow beat and/or starting with a minor chord is selected for a movie with user ratings and/or critic ratings below the predetermined criterion.
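The genre- and rating-driven selection in this paragraph can be sketched as follows (illustrative Python; the genre table, rating threshold, and return format are assumptions, not part of the patent):

```python
def info_screen_music(genre, rating, rating_threshold=3.5):
    """Choose a music style, tempo, and opening chord quality for a video
    information screen from the video's genre and aggregate rating."""
    style = {"sci-fi": "electronic", "western": "western"}.get(genre, "orchestral")
    if rating >= rating_threshold:
        return style, "fast", "major"
    return style, "slow", "minor"
```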
It should be understood that the particular order in which the operations in Figures 9A to 9C have been described is merely exemplary and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 600, 700, 800, and 1000) are also applicable, in a manner analogous to method 900, described above with respect to Figures 9A to 9C. For example, the user interface objects, user interfaces, sound outputs, and still images described above with reference to method 900 optionally have one or more of the characteristics of the user interface objects, user interfaces, sound outputs, and still images described herein with reference to other methods described herein (e.g., methods 600, 700, 800, and 1000). For brevity, these details are not repeated here.
Figures 10A to 10B illustrate a flow diagram of a method 1000 of providing audiovisual information while a video is in a paused state, in accordance with some embodiments. Method 1000 is performed at an electronic device (e.g., device 300 of Figure 3, or portable multifunction device 100 of Figure 1A) in communication with a display (and, in some embodiments, with a touch-sensitive surface). In some embodiments, the display is a touch-screen display and the touch-sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. Some operations in method 1000 are, optionally, combined and/or the order of some operations is, optionally, changed.
As described below, method 1000 provides an intuitive way to provide audiovisual information while a video is in a paused state. The method reduces the cognitive burden on a user when observing audiovisual information while a video is paused, thereby creating a more efficient human-machine interface. Enabling a user to observe audiovisual information while a video is paused also conserves power.
Device 100 provides (1002), to the display, data for presenting a first video, for example data for presenting a film or television program (e.g., video playback view 500 in Figure 5NN). While the display is presenting (e.g., playing back) the first video, the device receives (1004) an input that corresponds to a user request to pause the first video. For example, the device receives an input that corresponds to activation of a pause icon, a pause gesture on a touch-sensitive surface of the device or of a remote control in communication with the device, or activation of a pause button on a remote control in communication with the device (e.g., input 586 on play/pause button 5004 in Figure 5NN).
In response to receiving the input that corresponds to the user request to pause the first video, the device pauses (1006) presentation of the first video at a first playback position in the timeline of the first video. After pausing presentation of the first video at the first playback position in the timeline of the first video, and while presentation of the first video is paused, the device provides (1008), to the display, data for presenting a plurality of selected still images (e.g., automatically selected still images) from the first video, wherein the plurality of selected still images is selected based on the first playback position at which the first video is paused. For example, the device provides to the display data for presenting the plurality of selected still images, as shown in Figures 5OO to 5SS.
In some embodiments, while the first video is paused, the plurality of selected still images is presented sequentially. In some embodiments, while the first video is paused, the selected still images are presented in chronological order. In some embodiments, while the first video is paused, the selected still images are presented in random order. In some embodiments, while the first video is paused, the selected still images are provided to the display sequentially.
In some embodiments, the plurality of selected still images is selected (1010) from a range of playback positions for the first video between the first playback position in the timeline and a second playback position in the timeline that precedes the first playback position. In some embodiments, the second playback position in the timeline precedes the first playback position by a predetermined time interval (1012). For example, the plurality of still images is selected from a 30-second range, with the first playback position at 0:45:00 and the second playback position at 0:44:30. In some embodiments, the images are selected so as to exclude any image that corresponds to playback of the video after the first playback position. For example, the images are selected so as to avoid revealing any content of the story line after the first playback position.
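The range-based selection can be sketched as follows. This is a minimal illustration under assumed parameter names; it only shows the key invariant that every candidate timestamp precedes the pause position, so no still image can spoil content that has not yet been watched.

```python
def candidate_positions(pause_s, interval_s=30.0, count=5):
    """Evenly spaced timestamps in [pause_s - interval_s, pause_s].

    pause_s is the first playback position (seconds); interval_s is
    the predetermined look-back interval; count is how many still
    images to sample. All returned positions are <= pause_s.
    """
    start = max(0.0, pause_s - interval_s)
    step = (pause_s - start) / count
    return [start + step * i for i in range(count)]
```

For the example in the text (pause at 0:45:00, a 30-second range), `candidate_positions(2700.0)` yields five timestamps between 0:44:30 and 0:45:00.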
In some embodiments, the second playback position in the timeline precedes the first playback position by a time interval that is determined (1014) after receiving the input that corresponds to the request to pause the first video. For example, the time interval is determined in response to receiving the input that corresponds to the request to pause the first video, or immediately before providing the data for presenting the plurality of selected still images from the first video. In some embodiments, if the frame change between the first playback position in the timeline and the second playback position in the timeline that precedes the first playback position is less than a first predetermined frame-change criterion, a longer time interval is used. In some embodiments, one of the predetermined frame-change criteria is the amount of movement detected in the frames. For example, if there is very little movement in the 30 or 60 seconds preceding the first playback position, the time interval is increased to 2 minutes preceding the first playback position. In some embodiments, if the frame change between the first playback position in the timeline and the second playback position in the timeline that precedes the first playback position is greater than a second predetermined frame-change criterion, a shorter time interval is used. In some embodiments, one of the predetermined frame-change criteria is the type of video being shown. For example, if the first video is a classical-music performance, a longer time interval is used, and if the first video is an action movie, a shorter time interval is used.
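The adaptive look-back interval can be expressed as a simple threshold rule. The thresholds and the concrete interval lengths below are assumptions chosen to mirror the examples in the text (a static scene widens the window toward 2 minutes, a busy scene shrinks it), not values from the patent.

```python
def lookback_interval(mean_frame_delta, low=0.02, high=0.2):
    """Choose how far back (seconds) to sample stills, from motion.

    mean_frame_delta is an assumed per-frame motion measure in [0, 1].
    Low motion widens the window; high motion narrows it; otherwise a
    default 30-second window is used.
    """
    if mean_frame_delta < low:      # below the first frame-change criterion
        return 120.0
    if mean_frame_delta > high:     # above the second frame-change criterion
        return 15.0
    return 30.0
```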
In some embodiments, the plurality of selected still images of the video includes (1016 of Figure 10B) still images that are non-contiguous in the video with any other still image in the plurality of selected still images. For example, each still image is separated from any other still image by at least one frame of the video (e.g., one or more frames are located in the video between any two selected still images). In some embodiments, the still images are not played at the video rate (e.g., each still image may be displayed for several seconds). In some embodiments, the plurality of selected still images includes (1018) representative frames. In some embodiments, the method includes identifying representative frames based on predetermined representative-frame criteria (e.g., frames with a character and/or object in the central region of the respective frame, frames with object movement less than a predetermined movement criterion, etc.).
In some embodiments, after pausing presentation of the first video at the first playback position in the timeline of the first video, and while presentation of the first video is paused, the device provides (1020), to the display, data for presenting an animation that indicates a transition to a slideshow mode (e.g., countdown clock 588 in Figure 5PP). In some embodiments, while the video is paused, the plurality of selected still images is displayed in the slideshow mode. In some embodiments, the animation that indicates the transition to the slideshow mode includes (1022) a countdown clock. In some embodiments, displaying the plurality of images in the slideshow mode includes displaying a time stamp that indicates the position in the timeline of the video that corresponds to the first playback position (e.g., where the video was paused). In some embodiments, displaying the plurality of images in the slideshow mode includes displaying a clock that indicates the current time (e.g., currently 8:03 pm).
In some embodiments, the device repeats (1024) providing, to the display, the data for presenting the plurality of selected still images from the first video. In some embodiments, the sequential display of the plurality of selected still images is repeated (e.g., looped). In some embodiments, the display of the plurality of selected still images is repeated in a randomized manner. In some embodiments, the device provides (1026), to the display, data for presenting a respective still image of the plurality of selected still images with a panning effect and/or a zooming effect. In some embodiments, the device provides, to the display, data for presenting a respective still image of the plurality of selected still images with transparency (e.g., while the next still image is displayed).
In some embodiments, the device is in communication with an audio system, and the device provides (1028), to the audio system, sound information for providing a first sound output that corresponds to the first video being presented on the display. In some embodiments, the device provides (1030), to the audio system, sound information for providing a sound output that is selected based on the first playback position at which the first video is paused.
It should be understood that the particular order in which the operations in Figures 10A to 10B have been described is merely exemplary and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 600, 700, 800, and 900) are also applicable, in a manner analogous to method 1000, described above with respect to Figures 10A to 10B. For example, the user interface objects, user interfaces, still images, and sound outputs described above with reference to method 1000 optionally have one or more of the characteristics of the user interface objects, user interfaces, still images, and sound outputs described herein with reference to other methods described herein (e.g., methods 600, 700, 800, and 900). For brevity, these details are not repeated here.
In accordance with some embodiments, Figure 11 shows a functional block diagram of an electronic device 1100 configured in accordance with the principles of the various described embodiments. The functional blocks of the device are, optionally, implemented by hardware, software, firmware, or a combination thereof to carry out the principles of the various described embodiments. It is understood by persons of skill in the art that the functional blocks described in Figure 11 are, optionally, combined or separated into sub-blocks to implement the principles of the various described embodiments. Therefore, the description herein optionally supports any possible combination or separation or further definition of the functional blocks described herein.
As shown in Figure 11, electronic device 1100 includes processing unit 1106. In some embodiments, electronic device 1100 is in communication with display unit 1102 (e.g., configured to display a user interface) and audio unit 1104 (e.g., configured to provide sound output). In some embodiments, processing unit 1106 includes: display enabling unit 1108, audio enabling unit 1110, and detection unit 1112.
Processing unit 1106 is configured to provide, to display unit 1102 (e.g., with display enabling unit 1108), data for presenting a user interface generated by the device. The user interface includes a first user interface object with a first visual characteristic. The user interface also includes a second user interface object with a second visual characteristic that is distinct from the first visual characteristic of the first user interface object.
Processing unit 1106 is configured to provide, to audio unit 1104 (e.g., with audio enabling unit 1110), sound information for providing a sound output. The sound output includes a first audio component that corresponds to the first user interface object. The sound output also includes a second audio component that corresponds to the second user interface object and is distinct from the first audio component.
Processing unit 1106 is configured to: while the user interface is presented on display unit 1102 and the sound output is provided, provide, to display unit 1102 (e.g., with display enabling unit 1108), data for updating the user interface, and provide, to audio unit 1104 (e.g., with audio enabling unit 1110), sound information for updating the sound output. Updating the user interface and updating the sound output include: changing at least one visual characteristic of the first visual characteristic of the first user interface object in conjunction with changing the first audio component that corresponds to the first user interface object, and changing at least one visual characteristic of the second visual characteristic of the second user interface object in conjunction with changing the second audio component that corresponds to the second user interface object. The data for updating the user interface is provided independently of the occurrence of a user input.
In some embodiments, the first visual characteristic includes a size and/or a position of the first user interface object.
In some embodiments, updating the user interface and updating the sound output further include: ceasing to display the first user interface object and ceasing to provide a sound output that includes the first audio component corresponding to the first user interface object; ceasing to display the second user interface object and ceasing to provide a sound output that includes the second audio component corresponding to the second user interface object; and/or displaying one or more respective user interface objects and providing a sound output that includes one or more respective audio components corresponding to the one or more respective user interface objects.
In some embodiments, the first audio component that corresponds to the first user interface object is changed in accordance with a change to at least one visual characteristic of the first visual characteristic of the first user interface object.
In some embodiments, at least one visual characteristic of the first visual characteristic of the first user interface object is changed in accordance with a change to the first audio component.
In some embodiments, a pitch of a respective audio component corresponds to an initial size of the corresponding user interface object, a stereo balance of the respective audio component corresponds to a position of the corresponding user interface object on display unit 1102, and/or a change in a volume of the respective audio component corresponds to a change in a size of the corresponding user interface object.
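These visual-to-audio mappings can be sketched concretely. The particular formulas and constants below (an inverse-square-root size-to-pitch curve, linear horizontal pan, volume scaled by growth) are illustrative assumptions; the text only requires that pitch track size, balance track position, and volume track size change.

```python
def audio_component(obj_size, obj_x, display_width, size_scale=1.0):
    """Map a UI object's visual state to its audio component.

    obj_size:    object area (arbitrary units); larger => lower pitch.
    obj_x:       horizontal position in pixels; sets stereo balance
                 (0.0 = full left, 1.0 = full right).
    size_scale:  current size relative to the initial size; growth
                 raises the component's volume.
    """
    pitch_hz = 880.0 / max(obj_size, 1.0) ** 0.5  # bigger object, lower pitch
    balance = obj_x / display_width               # position -> stereo balance
    volume = min(1.0, 0.5 * size_scale)           # size change -> volume change
    return pitch_hz, balance, volume
```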
In some embodiments, the first visual characteristic of the first user interface object and the second visual characteristic of the second user interface object are determined independently of user input.
In some embodiments, the second audio component is selected based at least in part on the first audio component.
In some embodiments, updating the sound output includes determining whether predetermined inactivity criteria are met, and, in accordance with a determination that the predetermined inactivity criteria are met, changing the volume of the sound output.
In some embodiments, processing unit 1106 is configured to detect (e.g., with detection unit 1112) a user input. Processing unit 1106 is configured to, in response to detecting the user input, provide, to audio unit 1104 (e.g., with audio enabling unit 1110), sound information for changing the respective audio component that corresponds to a respective user interface object, and provide, to display unit 1102 (e.g., with display enabling unit 1108), data for updating the user interface and displaying one or more control user interface objects.
It in some embodiments, include for providing including audio component to the acoustic information that audio unit 1104 provides The information of sound output, the audio component and the respective audio component for corresponding to respective user interfaces object are discordant.
In some embodiments, processing unit 1106 is configured to, before detecting the user input, provide, to display unit 1102 (e.g., with display enabling unit 1108), data for displaying the user interface and updating the user interface, without providing, to audio unit 1104, sound information for the sound output. Processing unit 1106 is configured to, after detecting the user input, provide, to display unit 1102 (e.g., with display enabling unit 1108), data for displaying the user interface and updating the user interface, and provide, to audio unit 1104, sound information for providing the sound output and updating the sound output.
In accordance with some embodiments, Figure 12 shows a functional block diagram of an electronic device 1200 configured in accordance with the principles of the various described embodiments. The functional blocks of the device are, optionally, implemented by hardware, software, firmware, or a combination thereof to carry out the principles of the various described embodiments. It is understood by persons of skill in the art that the functional blocks described in Figure 12 are, optionally, combined or separated into sub-blocks to implement the principles of the various described embodiments. Therefore, the description herein optionally supports any possible combination or separation or further definition of the functional blocks described herein.
As shown in Figure 12, electronic device 1200 is in communication with display unit 1202 (e.g., configured to display a user interface) and audio unit 1216 (e.g., configured to provide sound output), and, in some embodiments, with remote control unit 1206, which is configured to detect user inputs and send them to device 1200. In some embodiments, remote control unit 1206 includes touch-sensitive surface unit 1204, configured to receive contacts. In some embodiments, processing unit 1208 includes: display enabling unit 1210, receiving unit 1212, and audio enabling unit 1214.
In accordance with some embodiments, processing unit 1208 is configured to provide, to display unit 1202 (e.g., with display enabling unit 1210), data for presenting a user interface with a plurality of user interface objects, including a control user interface object at a first position on display unit 1202. The control user interface object is configured to control a respective parameter. Processing unit 1208 is configured to receive (e.g., with receiving unit 1212) a first input (e.g., on touch-sensitive surface unit 1204) that corresponds to a first interaction with the control user interface object on display unit 1202. Processing unit 1208 is configured to, while receiving the first input that corresponds to the first interaction with the control user interface object on display unit 1202: provide, to display unit 1202, data for moving the control user interface object, in accordance with the first input, from the first position on display unit 1202 to a second position on display unit 1202 that is distinct from the first position on display unit 1202; and provide, to audio unit 1216 (e.g., with audio enabling unit 1214), first sound information for providing a first sound output that is distinct from the respective parameter controlled by the control user interface object and that has one or more characteristics that change in accordance with the movement of the control user interface object from the first position on display unit 1202 to the second position on display unit 1202.
In some embodiments, in accordance with a determination that the first input satisfies first input criteria, the first sound output has a first set of characteristics, and, in accordance with a determination that the first input satisfies second input criteria, the first sound output has a second set of characteristics that is distinct from the first set of characteristics.
In some embodiments, processing unit 1208 is configured to, after responding to the first input, receive (e.g., with receiving unit 1212) a second input (e.g., on touch-sensitive surface unit 1204) that corresponds to a second interaction with the control user interface object on display unit 1202. Processing unit 1208 is configured to, in response to and while receiving the second input that corresponds to the second interaction with the control user interface object on display unit 1202, provide, to display unit 1202 (e.g., with display enabling unit 1210), data for moving the control user interface object, in accordance with the second input, from the second position on display unit 1202 to a third position on display unit 1202 that is distinct from the second position on display unit 1202. Processing unit 1208 is also configured to, in response to and while receiving the second input, provide, to audio unit 1216 (e.g., with audio enabling unit 1214), second sound information for providing a second sound output that has one or more characteristics that change in accordance with the movement of the control user interface object from the second position on display unit 1202 to the third position on display unit 1202.
In some embodiments, the one or more characteristics include a pitch of the first sound output, a volume of the first sound output, and/or a distribution of the first sound output across a plurality of spatial channels.
In some embodiments, audio unit 1216 is coupled with a plurality of speakers that correspond to a plurality of spatial channels. Providing, to audio unit 1216, the first sound information for providing the first sound output includes: determining (e.g., with audio enabling unit 1214) the distribution of the first sound output across the plurality of spatial channels in accordance with a direction of the movement of the control user interface object from the first position on display unit 1202 to the second position on display unit 1202.
In some embodiments, audio unit 1216 is coupled with a plurality of speakers that correspond to a plurality of spatial channels. Providing, to audio unit 1216, the first sound information for providing the first sound output includes: determining (e.g., with audio enabling unit 1214) the distribution of the first sound output across the plurality of spatial channels in accordance with a position of the control user interface object on display unit 1202 during the movement of the control user interface object from the second position on display unit 1202 to the third position on display unit 1202.
In some embodiments, providing, to audio unit 1216, the first sound information for providing the first sound output includes: determining (e.g., with audio enabling unit 1214) a volume of the first sound output in accordance with a speed of the movement of the control user interface object from the first position on display unit 1202 to the second position on display unit 1202.
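Taken together, the slider-feedback behaviors above amount to deriving three sound parameters from one thumb movement. The mapping below is a hedged sketch: the scaling constants, the linear pan, and the position-to-pitch curve are assumptions, chosen only so that volume tracks speed, spatial distribution tracks position, and pitch rises along the bar as the text describes.

```python
def slider_feedback(x0, x1, dt, width, base_hz=220.0):
    """Sound parameters for a slider thumb moving from x0 to x1 px.

    dt is the movement duration in seconds; width is the slider bar
    length in pixels. Returns (volume, pan, pitch_hz).
    """
    speed = abs(x1 - x0) / dt                # px/s of the drag
    volume = min(1.0, speed / 1000.0)        # faster movement => louder
    pan = x1 / width                         # spatial channel balance, 0..1
    pitch_hz = base_hz * (1.0 + x1 / width)  # pitch rises along the bar
    return volume, pan, pitch_hz
```

A terminus sound (the distinct third sound output) would be triggered separately when `x1` reaches 0 or `width`, rather than derived from this continuous mapping.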
In some embodiments, the control user interface object is a thumb on a slider bar. The pitch of the first sound output changes in accordance with a location (e.g., a position) of the control user interface object on the slider bar.
In some embodiments, the control user interface object is a thumb on a slider bar, and the second position on display unit 1202 is not a terminus of the slider bar. Processing unit 1208 is configured to receive (e.g., with receiving unit 1212) an input (e.g., on touch-sensitive surface unit 1204) that corresponds to a respective interaction with the control user interface object on display unit 1202. Processing unit 1208 is configured to, in response to receiving the input that corresponds to the respective interaction with the control user interface object on display unit 1202: provide, to display unit 1202 (e.g., with display enabling unit 1210), data for moving the control user interface object, in accordance with the input, to a fourth position on display unit 1202, wherein the fourth position on display unit 1202 is a terminus of the slider bar; and provide, to audio unit 1216 (e.g., with audio enabling unit 1214), sound information for providing a third sound output to indicate that the control user interface object is located at the terminus of the slider bar, wherein the third sound output is distinct from the first sound output.
In some embodiments, processing unit 1208 is configured to, in response to receiving the first input that corresponds to the first interaction with the control user interface object on display unit 1202: provide, to display unit 1202 (e.g., with display enabling unit 1210), data for moving the control user interface object, in accordance with the first input, from the first position on display unit 1202 to the second position on display unit 1202 that is distinct from the first position on display unit 1202, and visually distinguish (e.g., with display enabling unit 1210) the control user interface object, in accordance with the first input, during the movement of the control user interface object from the first position on display unit 1202 to the second position on display unit 1202.
In accordance with some embodiments, processing unit 1208 is configured to provide, to display unit 1202 (e.g., with display enabling unit 1210), data for presenting a first user interface with a plurality of user interface objects, wherein a current focus is on a first user interface object of the plurality of user interface objects. Processing unit 1208 is configured to, while display unit 1202 is presenting the first user interface, receive (e.g., with receiving unit 1212) an input (e.g., on touch-sensitive surface unit 1204) that corresponds to a request to change the position of the current focus in the first user interface, the input having a direction and a magnitude. Processing unit 1208 is configured to, in response to receiving the input that corresponds to the request to change the position of the current focus in the first user interface: provide, to display unit 1202 (e.g., with display enabling unit 1210), data for moving the current focus from the first user interface object to a second user interface object, wherein the second user interface object is selected for the current focus in accordance with the direction and/or the magnitude of the input; and provide, to audio unit 1216 (e.g., with audio enabling unit 1214), first sound information for providing a first sound output that corresponds to the movement of the current focus from the first user interface object to the second user interface object, wherein the first sound output is provided concurrently with the display of the current focus moving from the first user interface object to the second user interface object. The pitch of the first sound output is determined (e.g., by audio enabling unit 1214) based at least in part on a size of the first user interface object, a type of the first user interface object, a size of the second user interface object, and/or a type of the second user interface object.
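The size-to-pitch rule for the focus-move sound can be sketched as a single function. The reference area, reference frequency, and square-root curve are illustrative assumptions; the only property the text requires is that moving focus to a larger object produces a lower pitch and to a smaller object a higher pitch.

```python
def focus_move_pitch(dest_area, ref_area=10_000.0, ref_hz=440.0):
    """Pitch (Hz) of the focus-move sound from the destination size.

    dest_area is the destination object's area in square pixels; an
    object of ref_area yields ref_hz, larger objects yield lower
    pitches, smaller objects higher ones.
    """
    return ref_hz * (ref_area / dest_area) ** 0.5
```

A fuller sketch would also factor in the object's type and the originating object's size, as the text allows, but the monotone size relationship is the core of the later pitch-ordering claims.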
In some embodiments, a volume of the first sound output is determined (e.g., by audio enabling unit 1214) based on the magnitude of the input.
In some embodiments, in accordance with a determination that the magnitude of the input satisfies predetermined input criteria, the volume of the first sound output is reduced (e.g., by audio enabling unit 1214).
In some embodiments, the distribution of the first sound output across a plurality of spatial channels is adjusted (e.g., by audio enabling unit 1214) in accordance with the position of the second user interface object in the first user interface.
In some embodiments, the pitch of the first sound output is determined (e.g., by audio enabling unit 1214) based on the size of the second user interface object and/or the type of the second user interface object. In response to receiving the input that corresponds to the request to change the position of the current focus in the first user interface, processing unit 1208 is configured to provide, to audio unit 1216 (e.g., with audio enabling unit 1214), second sound information for providing a second sound output that corresponds to the movement of the current focus from the first user interface object to the second user interface object, wherein the pitch of the second sound output is determined based at least in part on the size of the first user interface object and/or the type of the first user interface object.
In some embodiments, in accordance with a determination that the magnitude of the input satisfies predetermined input criteria, a release of the first sound output is reduced (e.g., by audio enabling unit 1214).
In some embodiments, processing unit 1208 is configured to: in response to receiving one or more inputs that correspond to one or more requests to change the position of the current focus in the first user interface (e.g., with receiving unit 1212), provide to display unit 1202 (e.g., with display enabling unit 1210) data for moving the current focus from the second user interface object to a third user interface object; provide to audio unit 1216 (e.g., with audio enabling unit 1214) third sound information for providing a third sound output that corresponds to the movement of the current focus from the second user interface object to the third user interface object, wherein the third sound output is provided concurrently with the display of the current focus moving from the second user interface object to the third user interface object; provide to display unit 1202 (e.g., with display enabling unit 1210) data for moving the current focus from the third user interface object to a fourth user interface object; and provide to audio unit 1216 (e.g., with audio enabling unit 1214) fourth sound information for providing a fourth sound output that corresponds to the movement of the current focus from the third user interface object to the fourth user interface object, wherein the fourth sound output is provided concurrently with the display of the current focus moving from the third user interface object to the fourth user interface object. The sound output that corresponds to the movement of the current focus to the largest of the second user interface object, the third user interface object, and the fourth user interface object has a lower pitch than the sound outputs that correspond to the movement of the current focus to the remaining two of those objects. The sound output that corresponds to the movement of the current focus to the smallest of the second user interface object, the third user interface object, and the fourth user interface object has a higher pitch than the sound outputs that correspond to the movement of the current focus to the remaining two of those objects.
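The size-to-pitch mapping described above can be sketched in a few lines. This is an illustrative sketch only, not the patented implementation; the function name, the base pitch, and the multiplicative step are assumptions chosen so that the largest object receives the lowest pitch and the smallest object the highest.

```python
# Hypothetical sketch: map the relative size of the user interface object
# that receives the current focus to the pitch of the accompanying sound
# output. Larger objects map to lower pitches.

def pitch_for_focus_target(target_size, all_sizes, base_pitch=440.0, step=1.25):
    """Return a pitch (Hz) for a focus move onto an object of `target_size`.

    Objects are ranked by size; the largest object gets `base_pitch`, and
    each smaller rank is raised by the multiplicative `step`.
    """
    ranked = sorted(all_sizes, reverse=True)  # largest first
    rank = ranked.index(target_size)          # 0 for the largest object
    return base_pitch * (step ** rank)

sizes = [400, 200, 100]  # areas of the second, third, and fourth objects
assert pitch_for_focus_target(400, sizes) < pitch_for_focus_target(200, sizes)
assert pitch_for_focus_target(100, sizes) == max(
    pitch_for_focus_target(s, sizes) for s in sizes)
```

Any monotonically decreasing size-to-pitch mapping would satisfy the relationship described in this embodiment; the exponential step here is just one convenient choice.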
In some embodiments, the first user interface with the multiple user interface objects is included in a hierarchy of user interfaces. Processing unit 1208 is configured to: while display unit 1202 presents the first user interface with the multiple user interface objects, receive (e.g., with receiving unit 1212) an input (e.g., on touch-sensitive surface unit 1204) that corresponds to a request to replace the first user interface with a second user interface in the hierarchy of user interfaces; and, in response to receiving the input that corresponds to the request to replace the first user interface with the second user interface: provide to display unit 1202 (e.g., with display enabling unit 1210) data for replacing the first user interface with the second user interface; in accordance with a determination that the first user interface is located above the second user interface in the hierarchy of user interfaces, provide to audio unit 1216 (e.g., with audio enabling unit 1214) fifth sound information for providing a fifth sound output; and, in accordance with a determination that the first user interface is located below the second user interface in the hierarchy of user interfaces, provide to audio unit 1216 (e.g., with audio enabling unit 1214) sixth sound information for providing a sixth sound output that is different from the fifth sound output.
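The direction-dependent transition sound can be illustrated with a small sketch. The interface names, the hierarchy ordering, and the sound labels below are assumptions for illustration, not part of the claimed embodiment.

```python
# Illustrative sketch: choose one of two distinct sound outputs depending on
# whether the replaced user interface sits above or below its replacement in
# the hierarchy of user interfaces.

HIERARCHY = ["home", "library", "video-info"]  # index 0 is the top level

def transition_sound(old_ui, new_ui, hierarchy=HIERARCHY):
    """Return "fifth-sound" when the old UI is above the new UI in the
    hierarchy (navigating deeper) and "sixth-sound" otherwise."""
    if hierarchy.index(old_ui) < hierarchy.index(new_ui):
        return "fifth-sound"   # moving deeper into the hierarchy
    return "sixth-sound"       # moving back toward the top level

assert transition_sound("home", "library") == "fifth-sound"
assert transition_sound("video-info", "library") == "sixth-sound"
```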
In some embodiments, processing unit 1208 is configured to: while display unit 1202 presents the first user interface, receive (e.g., with receiving unit 1212) an input (e.g., on touch-sensitive surface unit 1204) that corresponds to a request to activate the user interface object with the current focus; and, in response to receiving the input that corresponds to the request to activate the user interface object with the current focus: in accordance with a determination that the first user interface object has the current focus, provide to audio unit 1216 (e.g., with audio enabling unit 1214) seventh sound information for providing a seventh sound output that corresponds to the activation of the first user interface object; and, in accordance with a determination that the second user interface object has the current focus, provide to audio unit 1216 eighth sound information for providing an eighth sound output that corresponds to the activation of the second user interface object. The eighth sound output is different from the seventh sound output. The relationship between one or more characteristics of the sound output that corresponds to the movement of the current focus to the first user interface object and one or more characteristics of the second sound output corresponds to the relationship between one or more characteristics of the seventh sound output and one or more characteristics of the eighth sound output.
In accordance with some embodiments, processing unit 1208 is configured to: provide to display unit 1202 (e.g., with display enabling unit 1210) data for presenting a first video information user interface that includes descriptive information about a first video; provide to audio unit 1216 (e.g., with audio enabling unit 1214) sound information for providing, during the presentation of the first video information user interface by display unit 1202, a first sound output that corresponds to the first video; while display unit 1202 presents the first video information user interface that includes the descriptive information about the first video, receive (e.g., with receiving unit 1212) an input (e.g., on touch-sensitive surface unit 1204) that corresponds to a request to play back the first video; in response to receiving the input that corresponds to the request to play back the first video, provide to display unit 1202 (e.g., with display enabling unit 1210) data for replacing the presentation of the first video information user interface with playback of the first video; during the playback of the first video, receive (e.g., with receiving unit 1212) an input (e.g., on touch-sensitive surface unit 1204) that corresponds to a request to display a second video information user interface about the first video; and, in response to receiving the input that corresponds to the request to display the second video information user interface about the first video, provide to display unit 1202 data for replacing the playback of the first video with the second video information user interface about the first video, and provide to audio unit 1216 sound information for providing, during the presentation of the second video information user interface by display unit 1202, a second sound output that corresponds to the first video and is different from the first sound output.
In some embodiments, the first sound output and/or the second sound output are selected from a soundtrack of the first video.
In some embodiments, the second sound output corresponds to the position in the soundtrack of the first video that was playing when the input that corresponds to the request to display the second video information user interface was received.
In some embodiments, in accordance with a determination that the input that corresponds to the request to display the second video information user interface is received within a predetermined duration from the end of the first video, an end-credits soundtrack of the first video is selected (e.g., by audio enabling unit 1214) for the second sound output.
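The soundtrack-selection rule above can be sketched as a simple branch. The 30-second window, the function name, and the track labels are assumptions for illustration; the patent only specifies "a predetermined duration from the end".

```python
# Hedged sketch: choose which part of the first video's soundtrack backs the
# second video information user interface. If the request arrives within a
# predetermined duration of the end of the video, use the end-credits track;
# otherwise use the track at the current playback position.

END_CREDITS_WINDOW = 30.0  # seconds before the end; assumed value

def soundtrack_for_info_ui(playback_position, video_duration,
                           window=END_CREDITS_WINDOW):
    if video_duration - playback_position <= window:
        return "end-credits-track"
    return ("main-track", playback_position)

assert soundtrack_for_info_ui(80.0, 120.0) == ("main-track", 80.0)
assert soundtrack_for_info_ui(111.0, 120.0) == "end-credits-track"
```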
In some embodiments, processing unit 1208 is configured to: after initiating the playback of the first video, receive (e.g., with receiving unit 1212) an input (e.g., on touch-sensitive surface unit 1204) that corresponds to a request to pause the first video; and, in response to receiving the input that corresponds to the request to pause the first video: pause (e.g., with display enabling unit 1210) the playback of the first video at a first playback position in a timeline of the first video; provide to display unit 1202 (e.g., with display enabling unit 1210) data for presenting one or more selected still images from the first video, wherein the one or more selected still images are selected based on the first playback position at which the first video is paused; and provide to audio unit 1216 (e.g., with audio enabling unit 1214) sound information for providing a sound output that corresponds to the soundtrack of the first video at the first playback position.
In some embodiments, processing unit 1208 is configured to: after initiating the playback of the first video, receive (e.g., with receiving unit 1212) an input (e.g., on touch-sensitive surface unit 1204) that corresponds to a request to pause the first video; and, in response to receiving the input that corresponds to the request to pause the first video: pause (e.g., with display enabling unit 1210) the playback of the first video at a first playback position in the timeline of the first video; provide to display unit 1202 (e.g., with display enabling unit 1210) data for presenting one or more selected still images from the first video, wherein the one or more selected still images are selected based on the first playback position at which the first video is paused; and provide to audio unit 1216 (e.g., with audio enabling unit 1214) sound information for providing a sound output that corresponds to one or more characteristics of the first video at the first playback position.
In some embodiments, the first video information user interface includes multiple user interface objects. A first user interface object of the multiple user interface objects is configured, when selected, to initiate provision by electronic device 1200 to audio unit 1216 (e.g., with audio enabling unit 1214) of sound information for providing a sound output that corresponds to at least a portion of a first soundtrack of the first video. A second user interface object of the multiple user interface objects is configured, when selected, to initiate provision by electronic device 1200 to audio unit 1216 (e.g., with audio enabling unit 1214) of sound information for providing a sound output that corresponds to at least a portion of a second soundtrack of the first video that is different from the first soundtrack.
In some embodiments, the first sound output and/or the second sound output are selected based on one or more characteristics of the first video.
In some embodiments, processing unit 1208 is configured to: before display unit 1202 presents the first video information user interface, provide to display unit 1202 data for presenting a video selection user interface that includes representations of multiple videos; and receive (e.g., with receiving unit 1212) an input (e.g., on touch-sensitive surface unit 1204) that corresponds to a selection of the representation of the first video among the multiple videos, wherein the first video information user interface for the first video is presented in response to receiving the input that corresponds to the selection of the representation of the first video.
In accordance with some embodiments, Figure 13 shows a functional block diagram of an electronic device 1300 configured in accordance with the principles of the various described embodiments. The functional blocks of the device are, optionally, implemented by hardware, software, firmware, or a combination thereof to carry out the principles of the various described embodiments. It is understood by persons of skill in the art that the functional blocks described in Figure 13 are, optionally, combined or separated into sub-blocks to implement the principles of the various described embodiments. Therefore, the description herein optionally supports any possible combination or separation or further definition of the functional blocks described herein.
As shown in Figure 13, electronic device 1300 is in communication with display unit 1302. Display unit 1302 is configured to display video playback information. In some embodiments, electronic device 1300 is in communication with audio unit 1312. Electronic device 1300 includes processing unit 1304, which is in communication with display unit 1302 and, in some embodiments, with audio unit 1312. In some embodiments, processing unit 1304 includes: data providing unit 1306, input receiving unit 1308, pausing unit 1310, and sound providing unit 1314.
Processing unit 1304 is configured to: provide to display unit 1302 (e.g., with data providing unit 1306) data for presenting a first video; while display unit 1302 presents the first video, receive (e.g., with input receiving unit 1308) an input that corresponds to a user request to pause the first video; in response to receiving the input that corresponds to the user request to pause the first video, pause (e.g., with pausing unit 1310) the presentation of the first video at a first playback position in a timeline of the first video; and, after pausing the presentation of the first video at the first playback position in the timeline of the first video and while the presentation of the first video is paused, provide to display unit 1302 (e.g., with data providing unit 1306) data for presenting multiple selected still images from the first video, wherein the multiple selected still images are selected based on the first playback position at which the first video is paused.
In some embodiments, the multiple selected still images are selected from a range of playback positions of the first video between the first playback position in the timeline and a second playback position in the timeline that is ahead of the first playback position.
In some embodiments, the second playback position in the timeline is ahead of the first playback position by a predetermined time interval.
In some embodiments, the second playback position in the timeline is ahead of the first playback position by a time interval that is determined after the input that corresponds to the request to pause the first video is received.
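The still-image selection described in the preceding embodiments can be sketched as a range filter over candidate frame timestamps. The 60-second interval, the frame timestamps, and the assumption that the second playback position precedes the pause position are all illustrative, not taken from the patent.

```python
# Minimal sketch: pick the still images for the paused-state slideshow from
# the range of playback positions between a second position that is offset
# from the pause position by a fixed interval and the pause position itself.

LEAD_INTERVAL = 60.0  # seconds; assumed value for the predetermined interval

def select_still_positions(pause_position, candidates, lead=LEAD_INTERVAL):
    """Return candidate frame timestamps inside [pause - lead, pause]."""
    start = max(0.0, pause_position - lead)
    return [t for t in candidates if start <= t <= pause_position]

frames = [10.0, 50.0, 90.0, 130.0, 170.0]
assert select_still_positions(140.0, frames) == [90.0, 130.0]
```

Note that the selected frames need not be contiguous in the video, consistent with the embodiment in which a selected still image is discontinuous from the other selected still images.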
In some embodiments, the multiple selected still images of the video include a still image that is not contiguous in the video with any other still image of the multiple selected still images.
In some embodiments, the multiple selected still images include representative frames.
In some embodiments, device 1300 is in communication with audio unit 1312, and processing unit 1304 is further configured to provide to audio unit 1312 (e.g., with sound providing unit 1314) sound information for providing a first sound output that corresponds to the first video presented on display unit 1302.
In some embodiments, processing unit 1304 is configured to provide to audio unit 1312 (e.g., with sound providing unit 1314) sound information for providing a sound output that is selected based on the first playback position at which the first video is paused.
In some embodiments, processing unit 1304 is configured to: after pausing the presentation of the first video at the first playback position in the timeline of the first video and while the presentation of the first video is paused, provide to display unit 1302 (e.g., with data providing unit 1306) data for presenting an animation that indicates a transition to a slideshow mode.
In some embodiments, the animation that indicates the transition to the slideshow mode includes a countdown clock.
In some embodiments, processing unit 1304 is configured to provide to display unit 1302 data for repeatedly presenting the multiple selected still images from the first video.
In some embodiments, processing unit 1304 is configured to provide to display unit 1302 (e.g., with data providing unit 1306) data for presenting respective still images of the multiple selected still images with a panning effect and/or a zooming effect.
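A panning and zooming presentation of a single still image can be realized by interpolating a crop rectangle over time. The sketch below is an illustration of one such scheme; the start and end rectangles, the frame geometry, and the linear interpolation are assumptions rather than details from the patent.

```python
# Illustrative sketch: linear interpolation of a crop rectangle that produces
# a panning-and-zooming presentation of one still image.

def pan_zoom_rect(t, start, end):
    """Interpolate crop rectangles (x, y, w, h) for t in [0, 1]."""
    return tuple(s + (e - s) * t for s, e in zip(start, end))

start = (0.0, 0.0, 1920.0, 1080.0)   # full frame
end = (480.0, 270.0, 960.0, 540.0)   # 2x zoom toward the center
assert pan_zoom_rect(0.0, start, end) == start
assert pan_zoom_rect(1.0, start, end) == end
assert pan_zoom_rect(0.5, start, end) == (240.0, 135.0, 1440.0, 810.0)
```

Rendering each selected still image through such a sequence of crop rectangles yields the slideshow-with-motion effect this embodiment describes.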
Optionally, the operations in the information processing methods described above are implemented by running one or more functional modules in an information processing apparatus, such as a general-purpose processor (e.g., as described above with respect to Figures 1A and 3) or an application-specific chip.
Optionally, the operations described above with respect to Figures 6A-6C, 7A-7D, 8A-8C, and 10A-10B are implemented by the components depicted in Figures 1A-1B, Figure 10, or Figure 11. For example, receiving operation 704, receiving operation 804, and receiving operation 910 are, optionally, implemented by event sorter 170, event recognizer 180, and event handler 190. Event monitor 171 in event sorter 170 detects a contact on touch-sensitive display 112, and event dispatcher module 174 delivers the event information to application 136-1. A respective event recognizer 180 of application 136-1 compares the event information with respective event definitions 186 and determines whether a first contact at a first location on the touch-sensitive surface (or whether a rotation of the device) corresponds to a predefined event or sub-event, such as the selection of an object on a user interface or the rotation of the device from one orientation to another. When a respective predefined event or sub-event is detected, event recognizer 180 activates event handler 190, which is associated with the detection of the event or sub-event. Event handler 190 optionally uses or calls data updater 176 or object updater 177 to update the application internal state 192. In some embodiments, event handler 190 accesses a respective GUI updater 178 to update what is displayed by the application. Similarly, it is clear to a person of ordinary skill in the art how other processes can be implemented based on the components depicted in Figures 1A-1B.
For illustrative purposes, the foregoing description has been given with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, so that persons skilled in the art can best use the invention and the various described embodiments, with various modifications as are suited to the particular use contemplated.

Claims (12)

1. A method, comprising:
at an electronic device with one or more processors and memory, wherein the device is in communication with a display and an audio system:
providing, to the display, data for presenting a user interface with a plurality of user interface objects, the plurality of user interface objects including a control user interface object at a first position on the display, wherein the control user interface object is configured to control a respective parameter;
receiving a first input that corresponds to a first interaction with the control user interface object on the display; and
while receiving the first input that corresponds to the first interaction with the control user interface object on the display:
providing, to the display, data for moving the control user interface object, in accordance with the first input, from the first position on the display to a second position on the display, the second position on the display being different from the first position on the display; and
providing, to the audio system, first sound information for providing a first sound output, the first sound output having one or more characteristics that are distinct from the respective parameter controlled by the control user interface object and that change in accordance with the movement of the control user interface object from the first position on the display to the second position on the display.
2. The method of claim 1, wherein:
in accordance with a determination that the first input satisfies first input criteria, the first sound output has a first set of characteristics; and
in accordance with a determination that the first input satisfies second input criteria, the first sound output has a second set of characteristics that is different from the first set of characteristics.
3. The method of any of claims 1-2, comprising:
after responding to the first input, receiving a second input that corresponds to a second interaction with the control user interface object on the display; and
in response to receiving, and while receiving, the second input that corresponds to the second interaction with the control user interface object on the display:
providing, to the display, data for moving the control user interface object, in accordance with the second input, from the second position on the display to a third position on the display, the third position on the display being different from the second position on the display; and
providing, to the audio system, second sound information for providing a second sound output, the second sound output having one or more characteristics that change in accordance with the movement of the control user interface object from the second position on the display to the third position on the display.
4. The method of any of claims 1-3, wherein the one or more characteristics include a pitch of the first sound output, a volume of the first sound output, and/or a distribution of the first sound output among a plurality of spatial channels.
5. The method of any of claims 1-4, wherein:
the audio system is coupled with a plurality of speakers that correspond to a plurality of spatial channels; and
providing, to the audio system, the first sound information for providing the first sound output includes: determining the distribution of the first sound output among the plurality of spatial channels in accordance with a direction of the movement of the control user interface object from the second position on the display to the third position on the display.
6. The method of any of claims 1-4, wherein:
the audio system is coupled with a plurality of speakers that correspond to a plurality of spatial channels; and
providing, to the audio system, the first sound information for providing the first sound output includes: determining the distribution of the first sound output among the plurality of spatial channels in accordance with the position of the control user interface object on the display during the movement of the control user interface object from the second position on the display to the third position on the display.
7. The method of any of claims 1-6, wherein:
providing, to the audio system, the first sound information for providing the first sound output includes: determining the volume of the first sound output in accordance with a speed of the movement of the control user interface object from the first position on the display to the second position on the display.
8. The method of any of claims 1-7, wherein:
the control user interface object is a thumb on a slider bar; and
the pitch of the first sound output changes in accordance with the position of the control user interface object on the slider bar.
9. The method of any of claims 1-8, wherein:
the control user interface object is a thumb on a slider bar;
the second position on the display is not an end of the slider bar; and
the method includes:
receiving an input that corresponds to a respective interaction with the control user interface object on the display; and
in response to receiving the input that corresponds to the respective interaction with the control user interface object on the display:
providing, to the display, data for moving the control user interface object, in accordance with the input, to a fourth position on the display, wherein the fourth position on the display is an end of the slider bar; and
providing, to the audio system, sound information for providing a third sound output to indicate that the control user interface object is located at the end of the slider bar, wherein the third sound output is different from the first sound output.
10. The method of any of claims 1-9, wherein:
in response to receiving the first input that corresponds to the first interaction with the control user interface object on the display:
data is provided to the display for moving the control user interface object, in accordance with the first input, from the first position on the display to the second position on the display, the second position on the display being different from the first position on the display, and for visually distinguishing the control user interface object during the movement of the control user interface object, in accordance with the first input, from the first position on the display to the second position on the display.
11. An electronic device, comprising:
one or more processors; and
memory storing one or more programs to be run by the one or more processors, the one or more programs including instructions for performing the method of any of claims 1-10.
12. A computer-readable storage medium storing one or more programs, the one or more programs including instructions that, when run by an electronic device, cause the device to perform the method of any of claims 1-10.
CN201910417641.XA 2015-09-08 2016-08-15 Apparatus, method and graphical user interface for providing audiovisual feedback Active CN110109730B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910417641.XA CN110109730B (en) 2015-09-08 2016-08-15 Apparatus, method and graphical user interface for providing audiovisual feedback

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201562215244P 2015-09-08 2015-09-08
US62/215,244 2015-09-08
US14/866,570 US9928029B2 (en) 2015-09-08 2015-09-25 Device, method, and graphical user interface for providing audiovisual feedback
US14/866,570 2015-09-25
CN201610670699.1A CN106502638B (en) 2015-09-08 2016-08-15 For providing the equipment, method and graphic user interface of audiovisual feedback
CN201910417641.XA CN110109730B (en) 2015-09-08 2016-08-15 Apparatus, method and graphical user interface for providing audiovisual feedback

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201610670699.1A Division CN106502638B (en) 2015-09-08 2016-08-15 For providing the equipment, method and graphic user interface of audiovisual feedback

Publications (2)

Publication Number Publication Date
CN110109730A true CN110109730A (en) 2019-08-09
CN110109730B CN110109730B (en) 2023-04-28

Family

ID=56799573

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910417641.XA Active CN110109730B (en) 2015-09-08 2016-08-15 Apparatus, method and graphical user interface for providing audiovisual feedback

Country Status (2)

Country Link
CN (1) CN110109730B (en)
AU (2) AU2016101424A4 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023065839A1 (en) * 2021-10-20 2023-04-27 华为技术有限公司 Touch feedback method and electronic device

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10691406B2 (en) 2017-02-16 2020-06-23 Microsoft Technology Licensing, Llc. Audio and visual representation of focus movements
CN110351601B (en) * 2019-07-01 2021-09-17 湖南科大天河通信股份有限公司 Civil air defense propaganda education terminal equipment and method
CN115114475B (en) * 2022-08-29 2022-11-29 成都索贝数码科技股份有限公司 Audio retrieval method for matching short video sounds with live soundtracks of music

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080229206A1 (en) * 2007-03-14 2008-09-18 Apple Inc. Audibly announcing user interface elements
US20090121903A1 (en) * 2007-11-12 2009-05-14 Microsoft Corporation User interface with physics engine for natural gestural control

Patent Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6297818B1 (en) * 1998-05-08 2001-10-02 Apple Computer, Inc. Graphical user interface having sound effects for operating control elements and dragging objects
JP2005322125A (en) * 2004-05-11 2005-11-17 Sony Corp Information processing system, information processing method, and program
CN1956516A (en) * 2005-10-28 2007-05-02 深圳Tcl新技术有限公司 Method for displaying TV function surface by image and combined with voice
CN101473292A (en) * 2006-06-22 2009-07-01 索尼爱立信移动通讯有限公司 Wireless communications devices with three dimensional audio systems
US20080165153A1 (en) * 2007-01-07 2008-07-10 Andrew Emilio Platzer Portable Multifunction Device, Method, and Graphical User Interface Supporting User Navigations of Graphical Objects on a Touch Screen Display
CN101241414A (en) * 2007-02-05 2008-08-13 三星电子株式会社 User interface method for a multimedia playing device having a touch screen
US20090013254A1 (en) * 2007-06-14 2009-01-08 Georgia Tech Research Corporation Methods and Systems for Auditory Display of Menu Items
WO2010018949A2 (en) * 2008-08-14 2010-02-18 (주)펜타비전 Apparatus for relaying an audio game
US20130041648A1 (en) * 2008-10-27 2013-02-14 Sony Computer Entertainment Inc. Sound localization for user in motion
US8190438B1 (en) * 2009-10-14 2012-05-29 Google Inc. Targeted audio in multi-dimensional space
US20120263307A1 (en) * 2011-04-12 2012-10-18 International Business Machines Corporation Translating user interface sounds into 3d audio space
US20120314871A1 (en) * 2011-06-13 2012-12-13 Yasuyuki Koga Information processing apparatus, information processing method, and program
CN102855116A (en) * 2011-06-13 2013-01-02 索尼公司 Information processing apparatus, information processing method, and program
US20130179784A1 (en) * 2012-01-10 2013-07-11 Lg Electronics Inc. Mobile terminal and control method thereof
WO2013151010A1 (en) * 2012-04-02 2013-10-10 シャープ株式会社 Location input device, location input device control method, control program, and computer-readable recording medium
CN103455236A (en) * 2012-05-29 2013-12-18 Lg电子株式会社 Mobile terminal and control method thereof
US20130325154A1 (en) * 2012-05-30 2013-12-05 Samsung Electronics Co. Ltd. Apparatus and method for high speed visualization of audio stream in an electronic device
US20140141879A1 (en) * 2012-11-16 2014-05-22 Nintendo Co., Ltd. Storage medium having stored therein information processing program, information processing apparatus, information processing system, and information processing method
CN103455237A (en) * 2013-08-21 2013-12-18 中兴通讯股份有限公司 Menu processing method and device
US20150193197A1 (en) * 2014-01-03 2015-07-09 Harman International Industries, Inc. In-vehicle gesture interactive spatial audio system
CN103914303A (en) * 2014-04-10 2014-07-09 福建伊时代信息科技股份有限公司 Method and device for presenting progress bars
CN104618788A (en) * 2014-12-29 2015-05-13 北京奇艺世纪科技有限公司 Method and device for displaying video information

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023065839A1 (en) * 2021-10-20 2023-04-27 华为技术有限公司 Touch feedback method and electronic device

Also Published As

Publication number Publication date
AU2016101424A4 (en) 2016-09-15
AU2017100472A4 (en) 2017-05-25
CN110109730B (en) 2023-04-28
AU2017100472B4 (en) 2018-02-08

Similar Documents

Publication Publication Date Title
CN106502638B (en) For providing the equipment, method and graphic user interface of audiovisual feedback Device, method, and graphical user interface for providing audiovisual feedback
JP6825020B2 (en) Column interface for navigating in the user interface
CN108139863B (en) Device, method and graphical user interface for providing feedback during interaction with intensity-sensitive buttons
CN108351750B (en) Device, method, and graphical user interface for processing intensity information associated with touch inputs
CN114942693B (en) Identifying applications on which content is available
CN106415431B (en) Methods, computer-readable media, and electronic devices for sending instructions
CN104487928B (en) Device, method, and graphical user interface for transitioning between display states in response to a gesture
CN105264479B (en) Device, method, and graphical user interface for navigating user interface hierarchies
CN104487929B (en) Device, method, and graphical user interface for displaying additional information in response to a user contact
CN104903835B (en) Device, method, and graphical user interface for forgoing generation of tactile output for a multi-contact gesture
CN104471521B (en) Device, method, and graphical user interface for providing feedback for changing activation states of a user interface object
CN104685470B (en) Device and method for generating user interfaces from templates
CN110209290A (en) Gesture detection, list navigation, and item selection using a crown and sensors
CN108701013A (en) Intelligent digital assistant in a multi-tasking environment
CN108363526A (en) Devices and methods for navigating between user interfaces
CN109219796A (en) Digital touch on live video
CN108292190A (en) User interfaces for navigating and playing channel-based content
CN109313528A (en) Accelerated scrolling
CN110297594A (en) Input devices and user interface interactions
CN107683470A (en) Audio control for web browsers
JP2017525011A (en) User interface during music playback
CN109219781A (en) Displaying and updating groups of application views
CN106233237B (en) A method and device for processing associated new messages
CN108829325A (en) Device, method, and graphical user interface for dynamically adjusting the presentation of audio outputs
US20230164296A1 (en) Systems and methods for managing captions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant