CN115202530A - Gesture interaction method and system of user interface - Google Patents
Gesture interaction method and system of user interface
- Publication number
- CN115202530A (application CN202210587715.6A)
- Authority
- CN
- China
- Prior art keywords
- gesture
- interaction
- recognition area
- gesture recognition
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application relates to a gesture interaction method and system for a user interface. The method includes: dividing a user interface into regions to obtain a gesture recognition area and a non-gesture recognition area; performing gesture interaction through the gesture recognition area according to an acquired user gesture, where the user gesture is acquired through human skeleton recognition; and displaying flow nodes of planned content in the non-gesture recognition area, and determining the display state of each flow node according to the gesture interaction result of the gesture recognition area. The method and system address the problem that typical human-computer interaction depends on a physical medium, reduce the cost of unnecessary physical devices, and realize user-interface interaction based on gesture recognition, making the interaction more convenient and intelligent.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a gesture interaction method and system for a user interface.
Background
At present, with the development of computer vision and its wide application in daily life, algorithm-based behavior detection and motion recognition projects are increasingly deployed in practice and widely researched in related fields. In behavior monitoring, group behavior is monitored through information such as images, temperature, humidity and sound, while a growing number of applications focus on monitoring individual human behavior. Human skeleton recognition, as an important basis for behavior monitoring, is widely applied in fields such as video capture and computer graphics.
In a typical human-computer interaction scenario, a user interacts with an intelligent electronic device through a specific physical medium, for example: a television remote controller for switching channels and adjusting volume, a keyboard and mouse for entering and selecting information, and a game controller for adjusting game parameters and controlling game characters. Human skeleton recognition provides a new way to remove this dependence of human-computer interaction on physical media.
At present, no effective solution has been proposed for the problem that typical human-computer interaction in the related art depends on physical media.
Disclosure of Invention
The embodiments of the present application provide a gesture interaction method and system for a user interface, so as to at least solve the problem that typical human-computer interaction in the related art depends on physical media.
In a first aspect, an embodiment of the present application provides a gesture interaction method for a user interface, where the method includes:
carrying out region division on a user interface to obtain a gesture recognition region and a non-gesture recognition region;
performing gesture interaction through the gesture recognition area according to the acquired user gesture, wherein the user gesture is acquired through human skeleton recognition;
and displaying the flow node of the planned content in the non-gesture recognition area, and determining the display state of the flow node according to the gesture interaction result of the gesture recognition area.
In some of these embodiments, the method comprises:
the first gesture recognition area and the second gesture recognition area have recognition priority;
under the condition that the recognition priorities are set to be the same, simultaneously executing gesture interaction of the first gesture recognition area and gesture interaction of the second gesture recognition area;
and under the condition that the recognition priorities are different, executing the gesture interaction of the second gesture recognition area preferentially, and then executing the gesture interaction of the first gesture recognition area.
In some embodiments, performing gesture interaction through the gesture recognition area according to the acquired user gesture includes:
dividing the gesture recognition area into a first gesture recognition area and a second gesture recognition area;
performing gesture interaction on the branch options corresponding to the flow nodes through the first gesture recognition area according to the acquired user gestures;
and performing gesture interaction based on the function interaction gesture through the second gesture recognition area according to the acquired user gesture.
In some embodiments, performing gesture interaction on the branch option corresponding to the flow node through the first gesture recognition area according to the acquired user gesture includes:
and when the situation that a user touches the branch option of the flow node is detected, amplifying the branch option according to a preset size, displaying a corresponding time progress, and finishing the gesture interaction of the flow node after the time progress is executed.
In some embodiments, performing, according to the acquired user gesture, gesture interaction based on a function interaction gesture through the second gesture recognition area includes:
the function interaction gesture comprises a return gesture, and in the process nodes which are not the first process node, when the return gesture made by the user is detected, the current process node is returned to the previous process node.
In some embodiments, in the case that it is detected that the user makes a return gesture, returning from the current flow node to the previous flow node comprises:
and when the situation that the user makes a return gesture is detected, displaying the time progress of the return gesture, and returning to the previous process node from the current process node after the time progress is executed.
In some embodiments, performing, according to the acquired user gesture, gesture interaction based on a function interaction gesture through the second gesture recognition area includes:
the functional interaction gestures further comprise execution gestures, and after the last process node is completed, the planning courses are executed under the condition that the execution gestures of the user are detected.
In some embodiments, the determining the display state of the flow node according to the gesture interaction result of the gesture recognition area includes:
and displaying the current process node of the plan content in the non-gesture recognition area in a first interaction state, and displaying the current process node in a second interaction state if the current process node finishes gesture interaction.
In some of these embodiments, the first interaction state is a gray icon state and the second interaction state is a highlighted icon state with a confirmation mark.
In a second aspect, an embodiment of the present application provides a gesture interaction system for a user interface, where the system includes a region division module, a gesture interaction module, and a display matching module;
the area division module is used for carrying out area division on the user interface to obtain a gesture recognition area and a non-gesture recognition area;
the gesture interaction module is used for performing gesture interaction through the gesture recognition area according to the acquired user gesture, wherein the user gesture is acquired through human skeleton recognition;
the display matching module is used for displaying the process nodes of the plan content in the non-gesture recognition area and determining the display state of the process nodes according to the gesture interaction result of the gesture recognition area.
Compared with the related art, the gesture interaction method and system for a user interface provided in the embodiments of the present application divide the user interface into regions to obtain a gesture recognition area and a non-gesture recognition area; perform gesture interaction through the gesture recognition area according to the acquired user gesture, where the user gesture is acquired through human skeleton recognition; and display the flow nodes of the planned content in the non-gesture recognition area, determining the display state of each flow node according to the gesture interaction result of the gesture recognition area. This solves the problem that typical human-computer interaction depends on physical media, reduces the cost of unnecessary physical devices, and realizes user-interface interaction based on gesture recognition, making the interaction more convenient and intelligent.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a flow chart of steps of a method of gesture interaction of a user interface according to an embodiment of the application;
FIG. 2 is a schematic illustration of user interface partitioning according to an embodiment of the present application;
FIG. 3 is a schematic diagram of gesture recognition area division according to an embodiment of the present application;
FIG. 4 is a first schematic diagram of a user interface gesture interaction according to an embodiment of the present application;
FIG. 5 is a second schematic diagram of a user interface gesture interaction according to an embodiment of the present application;
FIG. 6 is a third schematic diagram of a user interface gesture interaction according to an embodiment of the present application;
FIG. 7 is a schematic diagram of displaying a matched planned course according to an embodiment of the present application;
FIG. 8 is a block diagram of a gesture interaction system of a user interface according to an embodiment of the present application;
fig. 9 is a schematic diagram of an internal structure of an electronic device according to an embodiment of the present application.
Description of reference numerals: 81. region division module; 82. gesture interaction module; 83. display matching module.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It is obvious that the drawings in the following description are only examples or embodiments of the present application, and that it is also possible for a person skilled in the art to apply the present application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning understood by those of ordinary skill in the art to which this application belongs. The terms "a", "an", "the" and similar referents used herein do not denote a limitation of quantity and may indicate either the singular or the plural. The terms "including", "comprising", "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The terms "connected", "coupled" and the like in this application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein means two or more. "And/or" describes an association relationship of associated objects, meaning that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. The terms "first", "second", "third" and the like are used merely to distinguish similar objects and do not denote a particular order.
An embodiment of the present application provides a gesture interaction method for a user interface. Fig. 1 is a flowchart of the steps of the gesture interaction method for a user interface according to an embodiment of the present application. As shown in fig. 1, the method includes the following steps:
step S102, carrying out region division on a user interface to obtain a gesture recognition region and a non-gesture recognition region;
specifically, fig. 2 is a schematic diagram of user interface division according to an embodiment of the present application, and as shown in fig. 2, the user interface is divided into a gesture recognition area and a non-gesture recognition area. The method comprises the steps that a gesture recognition area and a non-gesture recognition area are displayed on a display interface of a large screen, the large screen can be a television screen or a screen of a projector, the non-gesture recognition area is used for displaying flow nodes or planned courses and the like, and the gesture recognition area is used for displaying an interaction process of a user;
Step S104, performing gesture interaction through the gesture recognition area according to the acquired user gesture, wherein the user gesture is acquired through human skeleton recognition. The interaction process comprises: obtaining a user image or depth image information through an external camera or a camera built into the large-screen terminal; identifying the skeleton movement information or hand posture information of the user from the user image or depth image information; and, according to the skeleton movement information or hand posture information, obtaining the interaction information of the user and displaying it in the gesture recognition area;
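The acquisition loop of step S104 can be summarized as: capture a frame, estimate the skeleton, derive the hand position, and forward the result to the gesture recognition area. The sketch below assumes hypothetical `capture_frame`, `estimate_skeleton` and `on_gesture` callables standing in for whatever camera and pose-estimation backend is used; it is not the claimed implementation.

```python
from typing import Callable, Dict, Tuple

Joints = Dict[str, Tuple[float, float]]  # joint name -> normalized (x, y)

def gesture_acquisition_loop(
    capture_frame: Callable[[], object],            # camera / depth-sensor frame (assumed helper)
    estimate_skeleton: Callable[[object], Joints],  # pose-estimation backend (assumed helper)
    on_gesture: Callable[[dict], None],             # callback into the gesture recognition area
    keep_running: Callable[[], bool],
) -> None:
    """Acquire user gestures from skeleton data and forward them for interaction."""
    while keep_running():
        frame = capture_frame()
        joints = estimate_skeleton(frame)           # e.g. {"right_wrist": (0.62, 0.41), ...}
        if not joints:
            continue                                # nobody in view this frame
        on_gesture({
            "hand": joints.get("right_wrist"),      # position used for touch-area checks
            "joints": joints,                       # full skeleton, used for function gestures
        })
```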
specifically, fig. 3 is a schematic diagram of a gesture recognition area division according to an embodiment of the present application, and as shown in fig. 3, the gesture recognition area is further divided into a first gesture recognition area and a second gesture recognition area; performing gesture interaction on the branch options corresponding to the flow nodes through the first gesture recognition area according to the acquired user gestures; according to the acquired user gestures, performing gesture interaction based on the function interaction gestures through the second gesture recognition area, and further, the first gesture recognition area has the following functions: displaying an interaction target and progress under the condition of interaction inside the flow node; the second gesture recognition area has the following functions: and displaying the interaction target and the progress under the condition of interaction among the flow nodes.
For example, the recognition scopes of the first gesture recognition area and the second gesture recognition area differ: the second gesture recognition area recognizes the whole recognition image and determines whether the user's gesture is a function interaction gesture, thereby deciding whether to proceed to the next step of the process, while the first gesture recognition area recognizes whether the user's skeleton touches a touch area and proceeds to the next step once a touch is recognized. When the recognition priorities of the first gesture recognition area and the second gesture recognition area are the same, the gesture interaction of the first gesture recognition area and that of the second gesture recognition area are executed simultaneously, which improves the efficiency of the recognition process and reduces the time consumed by gesture interaction. For example: the first gesture recognition area recognizes the touch area corresponding to a branch option of the current flow node while the second gesture recognition area recognizes a function interaction gesture made by the user; if the time progress of the branch option in the current flow node completes earlier than the time progress of the function interaction gesture, the function interaction gesture is executed based on the interaction result of the previous flow node, otherwise it is executed based on the current flow node. When the recognition priorities are different, the priority of the second gesture recognition area is higher than that of the first gesture recognition area: the background preferentially recognizes the user gesture in the recognition image, and only after gesture recognition is finished does it recognize whether the user's skeleton touches a touch area in the recognition image, which prevents conflicts in the execution of the recognition process.
Fig. 4 is a first schematic diagram of a user interface gesture interaction according to an embodiment of the present application. As shown in fig. 4, two touch areas, "male" and "female", are set in the first gesture recognition area. After the background detects that the user's skeleton in the recognition image touches the touch area "female", the branch option, which corresponds to the flow node displayed in the non-gesture recognition area, is enlarged to a preset size within the touch area "female", a corresponding time progress is displayed, and the gesture interaction of the flow node is completed after the time progress finishes. For example, if the branch option "female" of the flow node "gender" is touched, the branch option is enlarged and the corresponding time progress is displayed; after the time progress finishes, the gesture interaction for "gender" is completed. This effectively prevents the user from accidentally touching a branch option.
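One possible way to realize the priority rule described above is to evaluate the second (function-gesture) area before the first (touch) area when their priorities differ, and to evaluate both on the same frame when they are equal. The sketch below only illustrates that control flow; the recognizer callables are assumed placeholders.

```python
from typing import Callable, Optional

def recognize_frame(
    detect_function_gesture: Callable[[object], Optional[str]],  # second-area recognizer (assumed)
    detect_touched_option: Callable[[object], Optional[str]],    # first-area recognizer (assumed)
    frame: object,
    same_priority: bool,
) -> dict:
    """Apply both recognition areas to one frame according to their priorities."""
    if same_priority:
        # Equal priority: both recognizers run on the same frame in the same pass.
        return {
            "function_gesture": detect_function_gesture(frame),
            "touched_option": detect_touched_option(frame),
        }
    # Different priority: the second (function-gesture) area is checked first,
    # and the touch area is only evaluated when no function gesture is present.
    gesture = detect_function_gesture(frame)
    if gesture is not None:
        return {"function_gesture": gesture, "touched_option": None}
    return {"function_gesture": None, "touched_option": detect_touched_option(frame)}
```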
Preferably, fig. 5 is a second schematic diagram of a user interface gesture interaction according to an embodiment of the present application. As shown in fig. 5, in the gesture interaction performed in the second gesture recognition area, the function interaction gestures include a return gesture. For any flow node other than the first flow node, when the user's gesture in the recognition image is recognized as the return gesture, the time progress of the return gesture is displayed in the second gesture recognition area, and after the time progress completes, the current flow node returns to the previous flow node. For example, if the user is detected making a return gesture, the time progress of the return gesture is displayed, and after the time progress completes, the current flow node "exercise part" returns to the previous flow node "gender".
Preferably, fig. 6 is a third schematic diagram of a user interface gesture interaction according to an embodiment of the present application. As shown in fig. 6, in the gesture interaction performed in the second gesture recognition area, the function interaction gestures further include an execution gesture. After the last flow node is completed, when it is detected that the user makes the execution gesture, the time progress of the execution gesture is displayed, and after the time progress completes, the planned course is executed. For example, after the last flow node "training immediately" is completed, when the user is detected making the execution gesture, the time progress of the execution gesture is displayed, and after it completes, the planned course is executed.
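The touch, return and execute interactions described above share the same confirmation pattern: an action fires only after its time progress has run to completion while the gesture is held, which is what guards against accidental triggers. A minimal sketch of such a dwell timer follows; the two-second duration is an assumed example value, not specified in the application.

```python
import time
from typing import Callable, Optional

class DwellTimer:
    """Track the time progress of a held gesture (touch, return, or execute)."""

    def __init__(self, duration_s: float = 2.0) -> None:
        self.duration_s = duration_s          # assumed example dwell time
        self._start: Optional[float] = None

    def update(self, gesture_active: bool) -> float:
        """Return progress in [0, 1]; resets whenever the gesture is released."""
        if not gesture_active:
            self._start = None
            return 0.0
        if self._start is None:
            self._start = time.monotonic()
        return min(1.0, (time.monotonic() - self._start) / self.duration_s)

def confirm_with_progress(timer: DwellTimer, gesture_active: bool,
                          action: Callable[[], None]) -> float:
    """Drive the on-screen time progress and fire the action once it completes."""
    progress = timer.update(gesture_active)
    if progress >= 1.0:
        action()      # e.g. select "female", return to the previous node, or start the course
    return progress   # rendered next to the enlarged option / in the second area
```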
Step S106, displaying the flow nodes of the planned content in the non-gesture recognition area, and determining the display state of the flow nodes according to the gesture interaction result of the gesture recognition area.
It should be noted that part or all of the flow nodes of the planned content may be displayed in the non-gesture recognition area. The display state of the current flow node is determined by the gesture interaction result of the gesture recognition area, and a change of display state indicates whether the user's gesture interaction is completed. In addition, the planned content may vary with the application scenario: for example, the flow nodes of fitness plan content may be the setting nodes of a fitness plan, and the flow nodes of learning course plan content may be the setting nodes of course learning. After the flow nodes of the planned content have been through gesture interaction, the background can generate a corresponding planned course according to the content of the determined flow nodes; for example, fitness plan content generates a fitness exercise course, and learning course plan content generates a learning course schedule.
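As a purely illustrative example of the course generation mentioned above, confirmed flow-node selections can be mapped to a planned course with a simple lookup. The node names and course entries in the sketch below are assumptions and not taken from the application.

```python
def generate_planned_course(selections: dict) -> list:
    """Map confirmed flow-node selections to an ordered planned course.

    `selections` might look like {"gender": "female", "exercise part": "legs"};
    both the node names and the course entries are illustrative assumptions.
    """
    course = ["warm-up stretch"]
    if selections.get("exercise part") == "legs":
        course += ["squats", "lunges"]
    else:
        course += ["push-ups", "plank"]
    course.append("cool-down")
    return course
```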
Specifically, as shown in fig. 4, the current flow node of the gesture interaction is displayed in the non-gesture recognition area in a first interaction state, and once the current flow node has completed gesture interaction it is displayed in a second interaction state, where the first interaction state is a gray icon state and the second interaction state is a highlighted icon state with a confirmation mark.
Further, fig. 7 is a schematic diagram of displaying a matched planned course according to an embodiment of the present application. When all flow nodes displayed in the non-gesture recognition area have completed gesture interaction, a floating interface is generated, and the matched planned course is displayed to the user in the floating interface.
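Taken together, the display-state handling of figs. 4 and 7 can be sketched as a small state enum per flow node plus a check that opens the floating interface once every node is confirmed. The names below are illustrative assumptions.

```python
from enum import Enum
from typing import Callable, List

class NodeDisplayState(Enum):
    FIRST = "gray icon"                       # current node, interaction not yet confirmed
    SECOND = "highlighted icon with check"    # gesture interaction completed

def update_node_state(node: dict, interaction_done: bool) -> None:
    """Set the display state of a flow node shown in the non-gesture recognition area."""
    node["state"] = NodeDisplayState.SECOND if interaction_done else NodeDisplayState.FIRST

def maybe_show_planned_course(nodes: List[dict],
                              show_floating_interface: Callable[[], None]) -> bool:
    """Open the floating interface with the matched course once every node is confirmed."""
    if nodes and all(n.get("state") is NodeDisplayState.SECOND for n in nodes):
        show_floating_interface()             # e.g. render the planned course of fig. 7
        return True
    return False
```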
It should be noted that the flow nodes of the gesture interaction are displayed through the non-gesture recognition area, and the user may generate a planned exercise course when only part of the flow nodes are completed (some nodes displayed in the second interaction state and some in the first interaction state). The user may also generate a planned exercise course upon completion of all of the flow nodes (all nodes displayed in the second interaction state).
Through steps S102 to S106 in the embodiments of the present application, the problem that typical human-computer interaction depends on physical media is solved, the cost of unnecessary physical devices is reduced, and user interface interaction based on gesture recognition is realized, making the interaction more convenient and intelligent.
It should be noted that the steps illustrated in the above-described flow diagrams or in the flow diagrams of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flow diagrams, in some cases, the steps illustrated or described may be performed in an order different than here.
An embodiment of the present application provides a gesture interaction system for a user interface, fig. 8 is a block diagram of a structure of the gesture interaction system for a user interface according to the embodiment of the present application, and as shown in fig. 8, the system includes a region dividing module 81, a gesture interaction module 82, and a display matching module 83;
the region dividing module 81 is configured to perform region division on the user interface to obtain a gesture recognition region and a non-gesture recognition region;
the gesture interaction module 82 is used for performing gesture interaction through a gesture recognition area according to the acquired user gesture, wherein the user gesture is acquired through human body skeleton recognition;
and the display matching module 83 is configured to display a process node of the planned content in the non-gesture recognition area, and determine the display state of the process node according to the gesture interaction result of the gesture recognition area.
Through the region division module 81, the gesture interaction module 82, and the display matching module 83 in the embodiments of the present application, the problem that typical human-computer interaction depends on physical media is solved, the cost of unnecessary physical devices is reduced, and user interface interaction based on gesture recognition is realized, making the interaction more convenient and intelligent.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. For a module implemented by hardware, the modules may be located in the same processor; or the modules can be respectively positioned in different processors in any combination.
The present embodiment also provides an electronic device comprising a memory having a computer program stored therein and a processor configured to execute the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
It should be noted that, for specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiment and optional implementation manners, and details of this embodiment are not described herein again.
In addition, in combination with the gesture interaction method of a user interface in the above embodiments, an embodiment of the present application may provide a storage medium for implementation. The storage medium has a computer program stored thereon; when executed by a processor, the computer program implements the gesture interaction method of a user interface described in any one of the above embodiments.
In one embodiment, a computer device is provided, which may be a terminal. The computer device comprises a processor, a memory, a network interface, a display screen and an input device which are connected through a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a gesture interaction method of a user interface. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
In one embodiment, an electronic device is provided, which may be a server. Fig. 9 is a schematic diagram of the internal structure of the electronic device according to an embodiment of the present application. As shown in fig. 9, the electronic device includes a processor, a network interface, an internal memory, and a non-volatile memory connected by an internal bus, where the non-volatile memory stores an operating system, a computer program, and a database. The processor provides computing and control capabilities; the network interface communicates with an external terminal through a network connection; the internal memory provides an environment for running the operating system and the computer program; the computer program is executed by the processor to implement the gesture interaction method of a user interface; and the database is used for storing data.
Those skilled in the art will appreciate that the configuration shown in fig. 9 is a block diagram of only a portion of the configuration relevant to the present application, and does not constitute a limitation on the electronic device to which the present application is applied, and a particular electronic device may include more or less components than those shown in the drawings, or combine certain components, or have a different arrangement of components.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the above method embodiments. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It should be understood by those skilled in the art that the technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction between the combinations of these technical features, they should be considered to be within the scope of this specification.
The above examples only express several embodiments of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that, for a person of ordinary skill in the art, several variations and modifications can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. A method of gesture interaction for a user interface, the method comprising:
carrying out region division on a user interface to obtain a gesture recognition region and a non-gesture recognition region;
performing gesture interaction through the gesture recognition area according to the acquired user gesture, wherein the user gesture is acquired through human skeleton recognition;
and displaying the flow node of the planned content in the non-gesture recognition area, and determining the display state of the flow node according to the gesture interaction result of the gesture recognition area.
2. The method according to claim 1, wherein performing gesture interaction through the gesture recognition area according to the acquired user gesture comprises:
dividing the gesture recognition area into a first gesture recognition area and a second gesture recognition area;
performing gesture interaction on the branch options corresponding to the flow nodes through the first gesture recognition area according to the acquired user gestures;
and performing gesture interaction based on the function interaction gesture through the second gesture recognition area according to the acquired user gesture.
3. The method of claim 2, wherein the method comprises:
the first gesture recognition area and the second gesture recognition area have recognition priority;
under the condition that the recognition priorities are set to be the same, simultaneously executing gesture interaction of the first gesture recognition area and gesture interaction of the second gesture recognition area;
and under the condition that the recognition priorities are different, executing the gesture interaction of the second gesture recognition area preferentially, and then executing the gesture interaction of the first gesture recognition area.
4. The method of claim 2, wherein performing gesture interaction on the branch option corresponding to the flow node through the first gesture recognition area according to the acquired user gesture comprises:
and under the condition that a user is detected to make a touch gesture on the branch option of the flow node, amplifying the branch option according to a preset size, displaying a corresponding time progress, and finishing the gesture interaction of the flow node after the time progress is executed.
5. The method according to claim 2, wherein performing gesture interaction based on a function interaction gesture through the second gesture recognition area according to the acquired user gesture comprises:
the function interaction gesture comprises a return gesture, and in the process nodes which are not the first process node, when the return gesture made by the user is detected, the current process node is returned to the previous process node.
6. The method of claim 5, wherein returning from a current flow node to a previous flow node upon detecting that a user has made a return gesture comprises:
and when the situation that the user makes a return gesture is detected, displaying the time progress of the return gesture, and returning to the previous process node from the current process node after the time progress is executed.
7. The method according to claim 2, wherein performing gesture interaction based on a function interaction gesture through the second gesture recognition area according to the acquired user gesture comprises:
the functional interaction gestures further comprise execution gestures, and the plan course is executed when the execution gestures of the user are detected after the last process node is completed.
8. The method of claim 2, wherein the determining the display state of the flow node according to the gesture interaction result of the gesture recognition area comprises:
and displaying the current process node of the plan content in the non-gesture recognition area in a first interaction state, and displaying the current process node in a second interaction state if the current process node finishes gesture interaction.
9. The method of claim 8, wherein the first interaction state is a gray icon state and the second interaction state is a highlighted icon state with a confirmation mark.
10. The gesture interaction system of the user interface is characterized by comprising a region division module, a gesture interaction module and a display matching module;
the area division module is used for carrying out area division on the user interface to obtain a gesture recognition area and a non-gesture recognition area;
the gesture interaction module is used for performing gesture interaction through the gesture recognition area according to the acquired user gesture, wherein the user gesture is acquired through human skeleton recognition;
the display matching module is used for displaying the process nodes of the plan content in the non-gesture recognition area and determining the display state of the process nodes according to the gesture interaction result of the gesture recognition area.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210587715.6A CN115202530B (en) | 2022-05-26 | 2022-05-26 | Gesture interaction method and system of user interface |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210587715.6A CN115202530B (en) | 2022-05-26 | 2022-05-26 | Gesture interaction method and system of user interface |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115202530A (en) | 2022-10-18
CN115202530B (en) | 2024-04-09
Family
ID=83575423
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210587715.6A Active CN115202530B (en) | 2022-05-26 | 2022-05-26 | Gesture interaction method and system of user interface |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115202530B (en) |
- 2022-05-26: Application CN202210587715.6A filed in CN; granted as CN115202530B (status: Active)
Patent Citations (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090315740A1 (en) * | 2008-06-23 | 2009-12-24 | Gesturetek, Inc. | Enhanced Character Input Using Recognized Gestures |
US20100229125A1 (en) * | 2009-03-09 | 2010-09-09 | Samsung Electronics Co., Ltd. | Display apparatus for providing a user menu, and method for providing user interface (ui) applicable thereto |
CN102184021A (en) * | 2011-05-27 | 2011-09-14 | 华南理工大学 | Television man-machine interaction method based on handwriting input and fingertip mouse |
EP2610722A2 (en) * | 2011-12-29 | 2013-07-03 | Apple Inc. | Device, method and graphical user interface for configuring restricted interaction with a user interface |
US20140089849A1 (en) * | 2012-09-24 | 2014-03-27 | Lg Electronics Inc. | Image display apparatus and method for operating the same |
US20140125598A1 (en) * | 2012-11-05 | 2014-05-08 | Synaptics Incorporated | User interface systems and methods for managing multiple regions |
US20140298273A1 (en) * | 2013-04-02 | 2014-10-02 | Imimtek, Inc. | Systems and Methods for Implementing Three-Dimensional (3D) Gesture Based Graphical User Interfaces (GUI) that Incorporate Gesture Reactive Interface Objects |
US20170153710A1 (en) * | 2014-06-20 | 2017-06-01 | Lg Electronics Inc. | Video display device and operating method thereof |
CN104735544A (en) * | 2015-03-31 | 2015-06-24 | 上海摩软通讯技术有限公司 | Video guidance method for mobile terminal |
CN104869469A (en) * | 2015-05-19 | 2015-08-26 | 乐视致新电子科技(天津)有限公司 | Method and apparatus for displaying program contents |
AU2016100653A4 (en) * | 2015-06-07 | 2016-06-16 | Apple Inc. | Devices and methods for navigating between user interfaces |
CN105915977A (en) * | 2015-06-30 | 2016-08-31 | 乐视致新电子科技(天津)有限公司 | Method for controlling electronic equipment and device thereof |
CN106327934A (en) * | 2015-07-01 | 2017-01-11 | 陆雨竹 | Network terminal-based learning guidance device |
CN108885525A (en) * | 2016-11-04 | 2018-11-23 | 华为技术有限公司 | Menu display method and terminal |
US20200183556A1 (en) * | 2017-08-14 | 2020-06-11 | Guohua Liu | Interaction position determination method and system, storage medium and smart terminal |
CN108235091A (en) * | 2018-01-25 | 2018-06-29 | 青岛海信电器股份有限公司 | Smart television and the method that upper content is applied based on access homepage in display equipment |
CN108429927A (en) * | 2018-02-08 | 2018-08-21 | 聚好看科技股份有限公司 | The method of virtual goods information in smart television and search user interface |
CN108853946A (en) * | 2018-07-10 | 2018-11-23 | 燕山大学 | A kind of exercise guide training system and method based on Kinect |
US20200356221A1 (en) * | 2019-05-06 | 2020-11-12 | Apple Inc. | User interfaces for sharing content with other electronic devices |
CN112487844A (en) * | 2019-09-11 | 2021-03-12 | 华为技术有限公司 | Gesture recognition method, electronic device, computer-readable storage medium, and chip |
CN110780743A (en) * | 2019-11-05 | 2020-02-11 | 聚好看科技股份有限公司 | VR (virtual reality) interaction method and VR equipment |
CN111178348A (en) * | 2019-12-09 | 2020-05-19 | 广东小天才科技有限公司 | Method for tracking target object and sound box equipment |
CN113596590A (en) * | 2020-04-30 | 2021-11-02 | 聚好看科技股份有限公司 | Display device and play control method |
CN112348942A (en) * | 2020-09-18 | 2021-02-09 | 当趣网络科技(杭州)有限公司 | Body-building interaction method and system |
CN112351325A (en) * | 2020-11-06 | 2021-02-09 | 惠州视维新技术有限公司 | Gesture-based display terminal control method, terminal and readable storage medium |
CN112383805A (en) * | 2020-11-16 | 2021-02-19 | 四川长虹电器股份有限公司 | Method for realizing man-machine interaction at television end based on human hand key points |
CN112612393A (en) * | 2021-01-05 | 2021-04-06 | 杭州慧钥医疗器械科技有限公司 | Interaction method and device of interface function |
CN113076836A (en) * | 2021-03-25 | 2021-07-06 | 东风汽车集团股份有限公司 | Automobile gesture interaction method |
CN113760131A (en) * | 2021-08-05 | 2021-12-07 | 当趣网络科技(杭州)有限公司 | Projection touch processing method and device and computer readable storage medium |
CN113794917A (en) * | 2021-09-15 | 2021-12-14 | 海信视像科技股份有限公司 | Display device and display control method |
CN114489331A (en) * | 2021-12-31 | 2022-05-13 | 上海米学人工智能信息科技有限公司 | Method, apparatus, device and medium for interaction of separated gestures distinguished from button clicks |
Non-Patent Citations (1)
Title |
---|
Dai Yikang, "Research on Human-Machine Interface Design of an NUI-Based Intelligent In-Vehicle Assistant System" (基于NUI的智能车载助理系统人机界面设计研究), China Master's Theses Full-text Database, Engineering Science and Technology II, no. 5 *
Also Published As
Publication number | Publication date |
---|---|
CN115202530B (en) | 2024-04-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10101827B2 (en) | Method and apparatus for controlling a touch-screen based application ported in a smart device | |
CN103207670A (en) | Touch free operation of devices by use of depth sensors | |
JP2014502399A (en) | Handwriting input method by superimposed writing | |
CN110942479B (en) | Virtual object control method, storage medium and electronic device | |
CN114397997B (en) | Control method for interactive operation and multi-screen interactive system | |
CN112904994A (en) | Gesture recognition method and device, computer equipment and storage medium | |
CN113209601A (en) | Interface display method and device, electronic equipment and storage medium | |
CN108984089A (en) | touch operation method, device, storage medium and electronic equipment | |
CN112148171B (en) | Interface switching method and device and electronic equipment | |
WO2023051215A1 (en) | Gaze point acquisition method and apparatus, electronic device and readable storage medium | |
US11500453B2 (en) | Information processing apparatus | |
CN111966268B (en) | Interface display method and device and electronic equipment | |
CN115202530A (en) | Gesture interaction method and system of user interface | |
CN114527669A (en) | Equipment control method and device and electronic equipment | |
CN109375851B (en) | Sensor binding method and device, computer equipment and storage medium | |
US20160124603A1 (en) | Electronic Device Including Tactile Sensor, Operating Method Thereof, and System | |
JP2017174144A (en) | Program, computer device, program execution method and system | |
CN115981542A (en) | Intelligent interactive touch control method, system, equipment and medium for touch screen | |
CN115731371A (en) | Method, device and equipment for determining safety area | |
CN114327232A (en) | Full-screen handwriting realization method and electronic equipment | |
CN112269511A (en) | Page display method and device and electronic equipment | |
CN114546103A (en) | Operation method through gestures in augmented reality and head-mounted display system | |
CN117472262B (en) | Interaction method and electronic equipment | |
CN114415929B (en) | Control method and device of electronic equipment, electronic equipment and readable storage medium | |
CN118838566A (en) | Interface processing method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||