
CN118113141A - Interaction method, interaction device, electronic equipment and storage medium - Google Patents

Interaction method, interaction device, electronic equipment and storage medium

Info

Publication number
CN118113141A
CN118113141A (application CN202211528129.0A)
Authority
CN
China
Prior art keywords
user
display interface
image
local area
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211528129.0A
Other languages
Chinese (zh)
Inventor
张丽娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202211528129.0A priority Critical patent/CN118113141A/en
Publication of CN118113141A publication Critical patent/CN118113141A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Neurosurgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurology (AREA)
  • Health & Medical Sciences (AREA)
  • Dermatology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure relates to an interaction method, an interaction apparatus, an electronic device, and a storage medium. The method is applied to a terminal device that is communicatively connected to a brain-computer interface system, and includes: in response to a brain-computer control instruction, acquiring an electroencephalogram signal on the scalp surface of the user, collected by the brain-computer interface system; classifying the electroencephalogram signal to obtain a classification result, where the classification result includes an identifier of a target local area in a display interface, the display interface includes a plurality of local areas, and the target local area is any one of the local areas; and executing a response operation according to the display content in the target local area.

Description

Interaction method, interaction device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the technical field of terminal devices, and in particular to an interaction method, an interaction device, an electronic device, and a storage medium.
Background
In recent years, terminal devices have continuously introduced various human-computer interaction modes, such as screen reading, barrier-free gestures, large mouse pointers, high-contrast text, and text-to-speech. These modes enrich the available interaction means and help disabled people use terminal devices, greatly improving the interaction friendliness of terminal devices for disabled people. However, in the related art, the interaction modes that terminal devices provide for disabled people only meet the needs of specific groups, such as visually impaired and hearing-impaired people, and cannot meet the needs of other groups, such as people with limb disabilities.
Disclosure of Invention
To overcome the problems in the related art, embodiments of the present disclosure provide an interaction method, an interaction device, an electronic device, and a storage medium, which are used to solve the drawbacks in the related art.
According to a first aspect of embodiments of the present disclosure, there is provided an interaction method applied to a terminal device, where the terminal device is communicatively connected to a brain-computer interface system, the method including:
acquiring electroencephalogram signals on the scalp surface of the user, which are collected by the brain-computer interface system;
classifying the electroencephalogram signals, and determining a target local area in a plurality of local areas of a display interface of the terminal equipment according to the classification result;
and generating and executing a response instruction according to the display content in the target local area.
In one embodiment, the classifying the electroencephalogram signal includes:
extracting features from the electroencephalogram signals to obtain feature vectors;
determining the classification result according to the feature vectors, wherein the classification result comprises the identifier of any one of the local areas of the display interface;
the determining a target local area in a plurality of local areas of a display interface of the terminal equipment according to the classification result comprises the following steps:
determining the local area to which the identifier in the classification result belongs as the target local area.
In one embodiment, the generating and executing a response instruction according to the display content in the target local area includes:
Executing a response instruction corresponding to a control element under the condition that the control element exists in the target local area;
And under the condition that a plurality of control elements exist in the target local area, switching the display interface into the target local area.
In one embodiment, the terminal device has an image acquisition element; after the display interface is switched to the target local area, the method further comprises the following steps:
acquiring a user image captured by the image acquisition element, wherein the user image contains the eyes of the user;
determining the sight direction of the user according to the user image;
Determining the gazing position of the user in the display interface according to the sight direction;
and generating and executing a response instruction according to the gazing position.
In one embodiment, the determining the direction of the line of sight of the user according to the user image includes:
performing face detection processing on the user image, and cutting the user image according to a face recognition result to obtain a face image;
performing eye detection processing on the face image, and cutting the face image according to an eye recognition result to obtain an eye image;
and determining the sight direction of the user according to the face image and the eye image.
In one embodiment, the performing an eye detection process on the face image includes:
performing key point detection on the face image, and determining the eye position and the head pose based on a key point detection result and a standard face model;
before said determining the direction of the line of sight of the user from said face image and said eye image, the method further comprises:
and carrying out normalization processing on the face image and the eye image according to the head pose.
In one embodiment, the method further comprises:
Calibrating an external parameter of the image acquisition element, wherein the external parameter is used for representing the relative relation between a world coordinate system in which the sight line direction is positioned and an image coordinate system of a display interface;
and determining the gazing position of the user in the display interface according to the sight line direction, wherein the method comprises the following steps:
and determining the gazing position of the user in the display interface according to the sight line direction and the external parameters of the image acquisition element.
In one embodiment, the generating and executing the response instruction according to the gaze location includes:
Determining the matching degree of each control element in the display interface and the gazing position according to the position of each control element in the display interface;
and executing a response instruction corresponding to the control element with the highest matching degree in the display interface.
In one embodiment, the executing the response operation corresponding to the control element with the highest matching degree in the display interface includes:
and executing a response instruction corresponding to the control element with the highest matching degree in response to the matching degree of the control element with the highest matching degree in the display interface being larger than or equal to a preset threshold value.
In one embodiment, the method further comprises:
and in response to the matching degree of the control element with the highest matching degree in the display interface being smaller than the preset threshold, acquiring the electroencephalogram signal on the scalp surface of the user collected by the brain-computer interface system.
According to a second aspect of embodiments of the present disclosure, there is provided an interaction device applied to a terminal device, the terminal device being communicatively connected to a brain-computer interface system, the device comprising:
the acquisition module is used for acquiring the electroencephalogram signal on the scalp surface of the user collected by the brain-computer interface system;
The classification module is used for classifying the electroencephalogram signals and determining a target local area in a plurality of local areas of a display interface of the terminal equipment according to classification results;
and the first response module is used for generating and executing a response instruction according to the display content in the target local area.
In one embodiment, the classification module is configured to, when performing classification processing on the electroencephalogram signal, specifically:
Extracting characteristics of the electroencephalogram signals to obtain characteristic vectors;
Determining the classification result according to the feature vector, wherein the classification result comprises the identification of any local area of the display interface;
The classification module is used for determining a target local area in a plurality of local areas of a display interface of the terminal equipment according to classification results, and is specifically used for:
And determining the local area to which the mark in the classification result belongs as the target local area.
In one embodiment, the first response module is specifically configured to:
Executing a response instruction corresponding to a control element under the condition that the control element exists in the target local area;
And under the condition that a plurality of control elements exist in the target local area, switching the display interface into the target local area.
In one embodiment, the terminal device has an image acquisition element; the apparatus further comprises a second response module for:
After the display interface is switched to the target local area, acquiring a user image acquired by the image acquisition element, wherein the user image is internally provided with user eyes;
determining the sight direction of the user according to the user image;
Determining the gazing position of the user in the display interface according to the sight direction;
and generating and executing a response instruction according to the gazing position.
In one embodiment, the second response module is configured to, when determining a line of sight direction of the user according to the user image, specifically:
performing face detection processing on the user image, and cutting the user image according to a face recognition result to obtain a face image;
performing eye detection processing on the face image, and cutting the face image according to an eye recognition result to obtain an eye image;
and determining the sight direction of the user according to the face image and the eye image.
In one embodiment, the second response module is configured to, when performing the eye detection processing on the face image, specifically:
performing key point detection on the face image, and determining the eye position and the head gesture based on a key point detection result and a standard face model;
the apparatus further comprises a normalization module for:
And before the sight direction of the user is determined according to the face image and the eye image, carrying out normalization processing on the face image and the eye image according to the head gesture.
In one embodiment, the apparatus further comprises a calibration module for:
Calibrating an external parameter of the image acquisition element, wherein the external parameter is used for representing the relative relation between a world coordinate system in which the sight line direction is positioned and an image coordinate system of a display interface;
the second response module is configured to determine, according to the line of sight direction, a gaze location of the user within the display interface, where the second response module is specifically configured to:
And determining the gazing position of the user in the display interface according to the external parameters of the image acquisition element in the sight line direction.
In one embodiment, the second response module is configured to, when performing a response operation according to the gaze location, specifically:
Determining the matching degree of each control element in the display interface and the gazing position according to the position of each control element in the display interface;
and executing a response instruction corresponding to the control element with the highest matching degree in the display interface.
In one embodiment, the second response module is configured to, when executing a response operation corresponding to the control element with the highest matching degree in the display interface, specifically:
and executing a response instruction corresponding to the control element with the highest matching degree in response to the matching degree of the control element with the highest matching degree in the display interface being larger than or equal to a preset threshold value.
In one embodiment, the apparatus further comprises an instruction module for:
and in response to the matching degree of the control element with the highest matching degree in the display interface being smaller than the preset threshold, acquiring the electroencephalogram signal on the scalp surface of the user collected by the brain-computer interface system.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device comprising a memory for storing computer instructions executable on a processor for implementing the interaction method of the first aspect when the computer instructions are executed.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of the first aspect.
The technical scheme provided by the embodiment of the disclosure can comprise the following beneficial effects:
According to the interaction method provided by the embodiment of the disclosure, the electroencephalogram signals on the scalp surface of the user, which are acquired by the brain-computer interface system, can be classified, target local areas are determined in a plurality of local areas of a display interface of the terminal equipment according to classification results, and finally response instructions can be generated and executed according to display contents in the target local areas. Because the display interface of the terminal equipment comprises a plurality of local areas, and the target local area can be determined in the display interface by utilizing the classification result of the electroencephalogram signals, the response instruction is generated based on a certain local area (namely the target local area) of the display interface, namely the response action equivalent to the action generated by clicking the local area by a user, in other words, the interaction effect equivalent to the click of the display screen by the user can be achieved by collecting and processing the electroencephalogram signals on the scalp surface of the user, so that the operation and interaction of the terminal equipment can be realized by more disabled people.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow chart of an interaction method shown in an exemplary embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a display interface shown in an exemplary embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a switching display interface shown in an exemplary embodiment of the present disclosure;
FIG. 4 is a flow chart of an interaction method shown in another exemplary embodiment of the present disclosure;
FIG. 5 is a flow chart of an interaction method shown in yet another exemplary embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a switching display interface shown in another exemplary embodiment of the present disclosure;
FIG. 7 is a schematic diagram of an interaction device according to an exemplary embodiment of the present disclosure;
fig. 8 is a block diagram of an electronic device shown in an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this disclosure to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "at … …" or "at … …" or "in response to a determination" depending on the context.
Currently, barrier-free assistive interaction functions for disabled people are provided on most terminal devices such as smartphones, tablets and televisions. For example, for visually impaired people, functions such as larger fonts, large mouse pointers, high-contrast text and color correction are provided; hearing-based assistance compensates for the screen-operation limitations caused by visual impairment, and human-computer interaction for visually impaired people is aided by screen reading, text-to-speech and similar means. For hearing-impaired people, a speech-to-text function is provided, so that when the user watches a video, the speech is displayed on the screen as subtitles. However, screen interaction designs for people with limb disabilities who cannot operate the device by hand are still scarce.
People with limb disabilities typically operate devices through an intelligent voice assistant for human-computer interaction. An intelligent voice assistant provides speech recognition, semantic analysis, speech synthesis and similar functions, and obtains the user's operation intention by recognizing the user's speech, thereby completing simple logical interactions. However, intelligent voice assistants have limited ability to analyze complex and varied semantics and have difficulty recognizing users whose speech is unclear, so the effect of intelligent voice technology for terminal-device interaction by people with limb disabilities is limited.
Based on this, in a first aspect, at least one embodiment of the present disclosure provides an interaction method, please refer to fig. 1, which illustrates a flow of the method, including steps S101 to S103.
The method can be applied to a terminal device, and the terminal device can be communicatively connected to a brain-computer interface system. The terminal device may be a user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a computing device, a vehicle-mounted device, a wearable device, or the like.
One paradigm commonly used by brain-computer interface systems is motor imagery: when a person imagines moving a limb, the brain regions that control the corresponding motor functions are activated even though no actual limb movement occurs. During motor imagery, neurons are activated and their metabolism accelerates, and the cerebral cortex produces two rhythm signals with pronounced changes, namely the mu rhythm at 8-15 Hz and the beta rhythm at 18-24 Hz. Various control instructions can therefore be generated by the user actively modulating the amplitude of the mu and beta rhythms of the left and right hemispheres. By analyzing the electroencephalogram signals produced under different operation intentions and judging those intentions from the different activation patterns of specific brain regions, interaction with external devices can be achieved. A motor-imagery-based brain-computer interface system can thus be used to control an external terminal device in place of body movement. For example, the commonly imagined body parts are the left and right hands: when operating the screen, a user who wants to select an application module on the left half of the screen can imagine moving the left hand, and a user who wants to select an application module on the right half of the screen can imagine moving the right hand.
In step S101, an electroencephalogram signal of the scalp surface of the user acquired by the brain-computer interface system is acquired.
The brain-computer interface system can collect electroencephalogram signals from the scalp surface of the user through signal acquisition probes. For example, the probes can be distributed in an interactive helmet (or cap); after the user puts on the helmet, the probes distributed on it collect the electroencephalogram signals from the user's scalp surface. It can be understood that other reasonable electroencephalogram acquisition arrangements can also be used in the brain-computer interface system, which are not described in detail here.
In step S102, the electroencephalogram signals are classified, and a target local area is determined in a plurality of local areas of a display interface of the terminal device according to the classification result.
The display interface is the interface with which the user interacts. It contains content such as images and text, as well as at least one control element; a control element can trigger a response instruction, for example when the user clicks on it. The display screen of the terminal device may be divided into a plurality of local areas in advance, each with its own identifier (for example, a left/right position identifier or a numeric identifier), so that every interface displayed by the terminal device is divided into a plurality of local areas, each containing part of the control elements; for example, all control elements in the display interface may be evenly distributed across the local areas.
The number of local areas and the manner in which they are divided can be preset to match the classification capability.
Referring to Fig. 2, a display interface is shown by way of example. The interface is the desktop of a terminal device containing icons of a plurality of application programs (APPs), where APP(i, j) denotes the icon of the application program in the i-th row and j-th column, i ∈ [1, N], j ∈ [1, M], and N and M are the total numbers of rows and columns of application modules, respectively. Each application icon is a control element, and the response operation corresponding to the control element is to start the corresponding application; in addition, the interface is divided into left and right local areas, and the application icons are distributed (evenly or randomly) between these two local areas.
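To make the partition concrete, the sketch below models such a desktop as a grid of control elements split into a left and a right local area. It is only an illustration of the data involved; the names (ControlElement, LocalArea, split_left_right) and the Python representation are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ControlElement:
    row: int          # i-th row, 1-based
    col: int          # j-th column, 1-based
    x: float          # horizontal position in the image coordinate system
    y: float          # vertical position in the image coordinate system
    action: str       # e.g. the package name of the APP to launch

@dataclass
class LocalArea:
    identifier: str               # e.g. "left" or "right"
    elements: List[ControlElement]

def split_left_right(elements: List[ControlElement], screen_width: float) -> List[LocalArea]:
    """Divide all control elements of the desktop into two local areas by horizontal position."""
    left = [e for e in elements if e.x < screen_width / 2]
    right = [e for e in elements if e.x >= screen_width / 2]
    return [LocalArea("left", left), LocalArea("right", right)]
```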
In one possible embodiment, the electroencephalogram signals may be classified as follows: first, features are extracted from the electroencephalogram signals to obtain feature vectors, for example using a common spatial pattern (CSP) algorithm; then, the classification result is determined from the feature vectors, for example by classifying them with a support vector machine (SVM) algorithm, where the classification result includes the identifier of one of the local areas of the display interface. Alternatively, a pre-trained neural network may be employed to accomplish the classification task of this embodiment. The local area to which the identifier in the classification result belongs is then determined as the target local area.
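A minimal sketch of this classification step, assuming the EEG epochs are available as a NumPy array of shape (n_epochs, n_channels, n_times) and using the common spatial pattern implementation from MNE together with a scikit-learn support vector machine; the disclosure names the algorithms but does not prescribe these libraries, and the two-class left/right mapping is an assumption taken from the Fig. 2 example.

```python
import numpy as np
from mne.decoding import CSP              # common spatial pattern feature extraction
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC               # support vector machine classifier

def train_area_classifier(epochs: np.ndarray, labels: np.ndarray):
    """Train on labelled motor-imagery epochs; labels: 0 = left area, 1 = right area."""
    clf = make_pipeline(
        CSP(n_components=4, log=True),    # spatial filtering -> log band-power feature vectors
        SVC(kernel="rbf", C=1.0),         # map feature vectors to local-area labels
    )
    clf.fit(epochs, labels)
    return clf

def classify_epoch(clf, epoch: np.ndarray) -> str:
    """Map one EEG epoch of shape (n_channels, n_times) to a local-area identifier."""
    label = clf.predict(epoch[np.newaxis, ...])[0]
    return "left" if label == 0 else "right"
```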
The target local area characterizes the local area where the control element the user wants to operate is located. Taking the display interface shown in fig. 2 as an example, the classification result may be left or right (where left is the identifier of the left local area and right is the identifier of the right local area), and the target local area may be the left local area or the right local area.
In step S103, a response instruction is generated and executed according to the display content in the target local area.
For example, if only a single control element exists in the target local area, the response instruction corresponding to that control element is executed; if a plurality of control elements exist in the target local area, the display interface is switched to the target local area, that is, the display content and control elements of the other local areas are hidden, and the display content and control elements of the target local area are re-laid out (enlarged) so as to cover the whole interface.
Referring to fig. 3, if the target local area in step S102 is the left local area, the display interface is switched to the left local area due to the plurality of control elements in the left local area.
It will be appreciated that the display interface obtained by switching to the target local area is itself divided into multiple local areas, each containing control elements. Steps S101 to S103 may therefore be performed again after step S103, that is, steps S101 to S103 may be repeated several times until only one control element remains in the target local area, thereby completing the interactive operation on the control element intended by the user.
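Under those assumptions, the cycle of steps S101 to S103 reduces to the loop sketched below, reusing the illustrative classify_epoch and LocalArea helpers from the earlier sketches; acquire_epoch, switch_display_to and execute stand in for platform-specific calls and are not APIs of the disclosure.

```python
def brain_controlled_selection(clf, areas, acquire_epoch, switch_display_to, execute):
    """Repeat steps S101-S103 until the target local area holds a single control element."""
    current_areas = areas
    while True:
        epoch = acquire_epoch()                         # S101: EEG epoch from the BCI system
        target_id = classify_epoch(clf, epoch)          # S102: classify into an area identifier
        target = next(a for a in current_areas if a.identifier == target_id)
        if len(target.elements) == 1:                   # S103: one element left -> respond
            execute(target.elements[0].action)
            return
        current_areas = switch_display_to(target)       # S103: enlarge the target area and re-split it
```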
In addition, after the response operation corresponding to the control element is executed, if a new interface appears, the operations from step S101 to step S103 may be utilized to perform man-machine interaction with respect to the new interface.
According to the interaction method provided by the embodiment of the disclosure, the electroencephalogram signals on the scalp surface of the user, which are acquired by the brain-computer interface system, can be classified, target local areas are determined in a plurality of local areas of a display interface of the terminal equipment according to classification results, and finally response instructions can be generated and executed according to display contents in the target local areas. Because the display interface of the terminal equipment comprises a plurality of local areas, and the target local area can be determined in the display interface by utilizing the classification result of the electroencephalogram signals, the response instruction is generated based on a certain local area (namely the target local area) of the display interface, namely the response action equivalent to the action generated by clicking the local area by a user, in other words, the interaction effect equivalent to the click of the display screen by the user can be achieved by collecting and processing the electroencephalogram signals on the scalp surface of the user, so that the operation and interaction of the terminal equipment can be realized by more disabled people.
If steps S101 to S103 are performed repeatedly until only one control element is in the target local area, the user' S desired interactive operation for the control element is completed, and more cycles are required, so that the interactive efficiency is low and the interactive experience of the user is general. Therefore, the interaction efficiency may be further improved in combination with viewpoint estimation, that is, in some embodiments of the present disclosure, after the display interface is switched to the target local area in step S103, the interaction operation of the user with respect to a certain control element may be completed in a manner as shown in fig. 4, including steps S401 to S404.
In step S401, a user image captured by the image acquisition element is acquired, wherein the user image contains the eyes of the user.
The terminal device has an image acquisition element, such as a camera or the like, whereby the user image can be acquired using the image acquisition element.
For example, the image acquisition element may be controlled to capture images of the user at a preset frequency, so as to obtain a plurality of user images.
In step S402, a line of sight direction of a user is determined from the user image.
First, face detection processing is performed on the user image, and the user image is cropped according to the face detection result to obtain a face image. For example, a face detection model such as CenterFace may be used to detect the face region.
Then, eye detection processing is performed on the face image, and the face image is cropped according to the eye detection result to obtain eye images. For example, key point detection is performed on the face image, and the eye positions and the head pose are determined based on the key point detection result and a standard face model; that is, a face key point detection model such as PIPNet detects key points on the face image, and the head pose and the positions of the eye images are calculated by fitting the key points to the standard face model.
Finally, the line-of-sight direction of the user is determined from the face image and the eye images. For example, the face image and the eye images are first normalized according to the head pose, i.e. images whose orientation deviates from the canonical direction are aligned; the face image and eye images are then input into a gaze estimation model such as GazeNet to obtain the line-of-sight direction in the standard space (i.e. the world coordinate system).
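The three stages above can be read as the pipeline sketched below. Every callable passed in (face_detector, landmark_model, head_pose_fitter, normalizer, gaze_model) is an assumed wrapper around models such as CenterFace, PIPNet and GazeNet, whose exact interfaces are not specified by the disclosure.

```python
import numpy as np

def estimate_gaze_direction(frame, face_detector, landmark_model,
                            head_pose_fitter, normalizer, gaze_model):
    """Face detection -> cropping -> key points -> normalization -> gaze (step S402 sketch)."""
    boxes = face_detector(frame)                        # detect face regions in the user image
    if not boxes:
        return None                                     # no face, no line-of-sight estimate
    x1, y1, x2, y2 = boxes[0]
    face_img = frame[y1:y2, x1:x2]                      # crop to the face image

    landmarks = landmark_model(face_img)                # facial key point detection
    head_pose, eye_boxes = head_pose_fitter(landmarks)  # fit key points to a standard face model
    eye_imgs = [face_img[t:b, l:r] for (l, t, r, b) in eye_boxes]   # crop to the eye images

    face_n, eyes_n = normalizer(face_img, eye_imgs, head_pose)      # align images by head pose
    gaze = np.asarray(gaze_model(face_n, eyes_n), dtype=float)      # 3-D line-of-sight direction
    return gaze / np.linalg.norm(gaze)                  # unit vector in the normalized/world space
```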
In step S403, a gaze location of the user within the display interface is determined according to the gaze direction.
For example, the gaze position of the user in the display interface may be determined according to the line-of-sight direction and the external parameters of the image acquisition element, where the external parameters characterize the relative relationship between the world coordinate system in which the line-of-sight direction is located and the image coordinate system of the display interface.
The external parameters of the image acquisition element can be calibrated in advance; for example, the calibration can be completed with a mirror-based (specular reflection) camera extrinsic calibration method.
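Given calibrated extrinsics, mapping the line-of-sight direction to a point on the display reduces to a ray-plane intersection. The sketch below assumes the extrinsics have already been converted into the screen's origin and unit axes expressed in the same coordinate system as the gaze ray, and that a pixels-per-millimetre factor is known; both are assumptions about representation, not requirements of the disclosure.

```python
import numpy as np

def gaze_point_on_screen(eye_pos, gaze_dir, screen_origin, screen_x_axis, screen_y_axis,
                         px_per_mm_x, px_per_mm_y):
    """Intersect the gaze ray with the screen plane and return image-coordinate pixels.

    eye_pos, gaze_dir ........ 3-D eye position and unit gaze direction
    screen_origin ............ 3-D position of the screen's top-left corner (from the extrinsics)
    screen_x_axis, y_axis .... unit vectors along the screen's width and height (from the extrinsics)
    """
    normal = np.cross(screen_x_axis, screen_y_axis)     # screen plane normal
    denom = float(np.dot(gaze_dir, normal))
    if abs(denom) < 1e-9:
        return None                                     # gaze ray parallel to the screen plane
    t = float(np.dot(screen_origin - eye_pos, normal)) / denom
    if t <= 0:
        return None                                     # screen lies behind the gaze direction
    hit = eye_pos + t * gaze_dir                        # 3-D intersection with the screen plane
    u_mm = float(np.dot(hit - screen_origin, screen_x_axis))   # millimetres along the width
    v_mm = float(np.dot(hit - screen_origin, screen_y_axis))   # millimetres along the height
    return u_mm * px_per_mm_x, v_mm * px_per_mm_y       # gaze position in image coordinates
```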
In step S404, a response instruction is generated and executed according to the gaze location.
Since there is a certain error in the line-of-sight estimation, a deviation is introduced when converting it to a screen position, and a person's line of sight does not easily stay fixed on a single point, the gaze position obtained in step S403 is not necessarily the position of a control element. The response operation may therefore be determined as follows:
First, according to the position of each control element in the display interface, the matching degree between each control element and the gaze position is determined.
In steps S401 to S403, several frames of user images may be used to obtain a plurality of corresponding gaze positions, and the matching degree σ_ij between the control element in the i-th row and j-th column and the gaze positions is then computed from these quantities: x_f and y_f, the abscissa and ordinate of the f-th gaze position P_f(x_f, y_f) in the image coordinate system; F, the number of gaze positions; x_ij and y_ij, the abscissa and ordinate of the control element in the image coordinate system; and w and h, the width and height of the display interface in the image coordinate system, respectively.
Theoretically, the matching degree lies in the range [0, 1]; the closer it is to 1, the more reliable it is that the user is gazing at the corresponding control element.
Then, the response instruction corresponding to the control element with the highest matching degree in the display interface is executed.
Specifically, the response instruction corresponding to the best-matching control element is executed in response to its matching degree being greater than or equal to a preset threshold; that is, if the control element in the m-th row and n-th column satisfies:
σ_mn ≥ σ_t (where σ_t is the preset threshold),
the response operation corresponding to the control element in the m-th row and n-th column is executed.
It can be understood that, if the matching degree of the best-matching control element in the display interface is smaller than the preset threshold, the electroencephalogram signal on the scalp surface of the user may be acquired again from the brain-computer interface system, that is, the operations of steps S101 to S103 are performed again to reduce the number of control elements in the display interface, after which the response operation is determined again using step S103 or the manner shown in Fig. 4.
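Because the matching-degree formula itself appears only as an image in the original filing, the sketch below substitutes one plausible normalized-distance score that matches the stated properties (range [0, 1], larger when the F gaze positions cluster around a control element); the score, the 0.8 default threshold and the helper names are assumptions, not the disclosed formula. It reuses the illustrative ControlElement fields from the earlier sketch.

```python
import math

def matching_degree(gaze_points, element_xy, width, height):
    """Assumed score in [0, 1]: one minus the mean distance of the gaze points (x_f, y_f)
    to the element at (x_ij, y_ij), normalized by the interface width w and height h."""
    x_ij, y_ij = element_xy
    if not gaze_points:
        return 0.0
    dists = [math.hypot((xf - x_ij) / width, (yf - y_ij) / height) for xf, yf in gaze_points]
    return max(0.0, 1.0 - sum(dists) / len(dists))

def select_element(gaze_points, elements, width, height, threshold=0.8):
    """Return the best-matching control element, or None if it misses the preset threshold."""
    scored = [(matching_degree(gaze_points, (e.x, e.y), width, height), e) for e in elements]
    best_score, best = max(scored, key=lambda item: item[0])
    return best if best_score >= threshold else None    # None -> fall back to steps S101-S103
```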
In this embodiment, the display interface is dynamically updated based on the brain-computer interface system, and control elements are locked based on viewpoint (gaze) estimation; by alternately performing these two operations, the intended control element can be locked step by step to complete the human-computer interaction. This combined eye-brain interaction mode enriches the interaction modes available to disabled people, such as people with limb disabilities, and improves their interaction experience and efficiency.
Referring to Fig. 5, this embodiment more vividly shows how human-computer interaction is achieved by combining dynamic adjustment of the left and right half screens (i.e., switching the display interface to the local area of the left or right half screen) with locking of the screen viewpoint (i.e., the gaze position): the two are performed alternately to lock the control element step by step, and the response operation corresponding to that control element is then executed. The half-screen branch performs, in sequence, signal acquisition, feature extraction and signal classification to update the display interface; the viewpoint branch performs, in sequence, face detection, face key point detection, face model fitting, three-dimensional head pose and eye localization, and normalization, inputs the eye images and face image into an appearance-based gaze estimation model to obtain the line-of-sight direction, and then uses the calibrated external parameters to determine and lock the screen viewpoint.
That is, the display interface is switched through steps S101 to S103; after each update of the display interface, the response instruction is determined through steps S401 to S404; and if the matching degree between the gaze position and each application identifier is smaller than the preset threshold, the display interface continues to be switched through steps S101 to S103.
Referring to Fig. 3, the display interface is first switched through steps S101 to S103, and a response instruction is then determined for the latest display interface through steps S401 to S404. Because the matching degree between the gaze position and each application identifier is smaller than the preset threshold, the display interface continues to be switched through steps S101 to S103, until the situation shown in Fig. 6 is reached: after the display interface is switched through steps S101 to S103 and the response instruction is determined through steps S401 to S404, the matching degree between the gaze position and the application identifier in row 1, column 1 is the largest and exceeds the preset threshold, so the response instruction corresponding to that application identifier is executed, i.e. the application is started.
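Tying the sketches together, the overall alternation of Fig. 5 might look like the loop below; classify_epoch and select_element come from the earlier sketches, and acquire_epoch, switch_display_to, capture_frames, gaze_pipeline and execute are illustrative stand-ins for device-specific functionality.

```python
def eye_brain_interaction(clf, areas, acquire_epoch, switch_display_to,
                          capture_frames, gaze_pipeline, screen_size, execute,
                          threshold=0.8):
    """Alternate left/right-screen narrowing (S101-S103) with gaze locking (S401-S404)."""
    width, height = screen_size
    current_areas = areas
    while True:
        # Steps S101-S103: narrow the display interface using the EEG classification result.
        target_id = classify_epoch(clf, acquire_epoch())
        target = next(a for a in current_areas if a.identifier == target_id)
        if len(target.elements) == 1:
            execute(target.elements[0].action)          # single element -> respond directly
            return
        current_areas = switch_display_to(target)       # enlarge the target area and re-split it

        # Steps S401-S404: try to lock one of the currently displayed control elements by gaze.
        frames = capture_frames()                       # several user images at a preset frequency
        gaze_points = [p for p in (gaze_pipeline(f) for f in frames) if p is not None]
        displayed = [e for a in current_areas for e in a.elements]
        chosen = select_element(gaze_points, displayed, width, height, threshold)
        if chosen is not None:
            execute(chosen.action)                      # matching degree reached the threshold
            return
        # Otherwise fall through and acquire another EEG epoch (back to step S101).
```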
According to a second aspect of the embodiments of the present disclosure, there is provided an interaction device applied to a terminal device, where the terminal device is communicatively connected to a brain-computer interface system, referring to fig. 7, the device includes:
An acquisition module 701, configured to acquire an electroencephalogram signal on the scalp surface of the user acquired by the brain-computer interface system;
The classification module 702 is configured to perform classification processing on the electroencephalogram signal, and determine a target local area in a plurality of local areas of a display interface of the terminal device according to a classification result;
the first response module 703 is configured to generate and execute a response instruction according to the display content in the target local area.
In some embodiments of the present disclosure, the classification module is configured to, when performing classification processing on the electroencephalogram signal, specifically:
Extracting characteristics of the electroencephalogram signals to obtain characteristic vectors;
Determining the classification result according to the feature vector, wherein the classification result comprises the identification of any local area of the display interface;
The classification module is used for determining a target local area in a plurality of local areas of a display interface of the terminal equipment according to classification results, and is specifically used for:
And determining the local area to which the mark in the classification result belongs as the target local area.
In some embodiments of the disclosure, the first response module is specifically configured to:
Executing a response instruction corresponding to a control element under the condition that the control element exists in the target local area;
And under the condition that a plurality of control elements exist in the target local area, switching the display interface into the target local area.
In some embodiments of the present disclosure, the terminal device has an image acquisition element; the apparatus further comprises a second response module for:
After the display interface is switched to the target local area, acquiring a user image acquired by the image acquisition element, wherein the user image is internally provided with user eyes;
determining the sight direction of the user according to the user image;
Determining the gazing position of the user in the display interface according to the sight direction;
and generating and executing a response instruction according to the gazing position.
In some embodiments of the present disclosure, the second response module is configured to, when determining a line of sight direction of the user according to the user image, specifically:
performing face detection processing on the user image, and cutting the user image according to a face recognition result to obtain a face image;
performing eye detection processing on the face image, and cutting the face image according to an eye recognition result to obtain an eye image;
and determining the sight direction of the user according to the face image and the eye image.
In some embodiments of the present disclosure, the second response module is configured to, when performing an eye detection process on the face image, specifically:
performing key point detection on the face image, and determining the eye position and the head gesture based on a key point detection result and a standard face model;
the apparatus further comprises a normalization module for:
And before the sight direction of the user is determined according to the face image and the eye image, carrying out normalization processing on the face image and the eye image according to the head gesture.
In some embodiments of the present disclosure, the apparatus further comprises a calibration module for:
Calibrating an external parameter of the image acquisition element, wherein the external parameter is used for representing the relative relation between a world coordinate system in which the sight line direction is positioned and an image coordinate system of a display interface;
the second response module is configured to determine, according to the line of sight direction, a gaze location of the user within the display interface, where the second response module is specifically configured to:
And determining the gazing position of the user in the display interface according to the external parameters of the image acquisition element in the sight line direction.
In some embodiments of the present disclosure, the second response module is configured to, when generating and executing a response instruction according to the gaze location, specifically:
Determining the matching degree of each control element in the display interface and the gazing position according to the position of each control element in the display interface;
and executing a response instruction corresponding to the control element with the highest matching degree in the display interface.
In some embodiments of the present disclosure, the second response module is configured to execute a response operation corresponding to a control element with the highest matching degree in the display interface, where the response operation is specifically configured to:
and executing a response instruction corresponding to the control element with the highest matching degree in response to the matching degree of the control element with the highest matching degree in the display interface being larger than or equal to a preset threshold value.
In some embodiments of the present disclosure, the apparatus further comprises an instruction module for:
and in response to the matching degree of the control element with the highest matching degree in the display interface being smaller than the preset threshold, acquiring the electroencephalogram signal on the scalp surface of the user collected by the brain-computer interface system.
The specific manner in which the various modules perform the operations in relation to the apparatus of the above embodiments has been described in detail in relation to the embodiments of the method of the first aspect and will not be described in detail here.
In accordance with a third aspect of embodiments of the present disclosure, reference is made to fig. 8, which schematically illustrates a block diagram of an electronic device. For example, apparatus 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 8, apparatus 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the apparatus 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. Processing element 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the device 800. Examples of such data include instructions for any application or method operating on the device 800, contact data, phonebook data, messages, pictures, videos, and the like. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power component 806 provides power to the various components of the device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen between the device 800 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or sliding action, but also the duration and pressure associated with the touch or sliding operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 800 is in an operational mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessment of various aspects of the apparatus 800. For example, the sensor assembly 814 may detect an on/off state of the device 800, a relative positioning of the components, such as a display and keypad of the device 800, the sensor assembly 814 may also detect a change in position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, an orientation or acceleration/deceleration of the device 800, and a change in temperature of the device 800. The sensor assembly 814 may also include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communication between the apparatus 800 and other devices, either in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi,2G or 3G,4G or 5G, or a combination thereof. In one exemplary embodiment, the communication part 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements, for performing the interaction method described above.
In a fourth aspect, the present disclosure also provides, in an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, comprising instructions executable by the processor 820 of the apparatus 800 to perform the interaction method described above. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any adaptations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (22)

1. An interaction method, characterized by being applied to a terminal device, the terminal device being communicatively connected to a brain-computer interface system, the method comprising:
acquiring electroencephalogram signals on the scalp surface of the user, which are collected by the brain-computer interface system;
Classifying the electroencephalogram signals, and determining a target local area in a plurality of local areas of a display interface of the terminal equipment according to classification results;
and generating and executing a response instruction according to the display content in the target local area.
2. The interaction method according to claim 1, wherein the classifying the electroencephalogram signal includes:
extracting features from the electroencephalogram signals to obtain feature vectors;
determining a classification result according to the feature vectors, wherein the classification result comprises the identifier of any one of the local areas of the display interface;
the determining a target local area in a plurality of local areas of a display interface of the terminal equipment according to the classification result comprises the following steps:
determining the local area to which the identifier in the classification result belongs as the target local area.
3. The interaction method according to claim 1, wherein the generating and executing a response instruction according to the display content in the target local area includes:
Executing a response instruction corresponding to a control element under the condition that the control element exists in the target local area;
And under the condition that a plurality of control elements exist in the target local area, switching the display interface into the target local area.
4. The interaction method according to claim 3, wherein the terminal device has an image acquisition element, and after the display interface is switched to the target local area, the method further comprises:
acquiring a user image collected by the image acquisition element, wherein the user image contains the eyes of the user;
determining a sight line direction of the user according to the user image;
determining a gazing position of the user in the display interface according to the sight line direction;
and generating and executing a response instruction according to the gazing position.
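A high-level sketch of the refinement stage of claim 4, run after the display has been switched to the target local area. All four helpers passed in are hypothetical stand-ins for the steps detailed in claims 5 to 8.

```python
# Sketch of claim 4: camera frame -> sight line -> gazing position -> response.
def refine_with_gaze(camera, estimate_sight_direction, sight_to_screen, respond_at):
    frame = camera.read()                        # user image containing the eyes
    direction = estimate_sight_direction(frame)  # 3-D sight line direction
    x, y = sight_to_screen(direction)            # gazing position on the display
    respond_at(x, y)                             # response for that position
```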
5. The interaction method according to claim 4, wherein the determining a sight line direction of the user according to the user image comprises:
performing face detection processing on the user image, and cropping the user image according to a face detection result to obtain a face image;
performing eye detection processing on the face image, and cropping the face image according to an eye detection result to obtain an eye image;
and determining the sight line direction of the user according to the face image and the eye image.
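A sketch of the cropping pipeline of claim 5 using OpenCV Haar cascades. The detector choice is an assumption; the claim does not name a detector.

```python
# Sketch of claim 5: face detection -> face crop -> eye detection -> eye crops.
import cv2

face_det = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_det = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def crop_face_and_eyes(user_image):
    """user_image: BGR frame -> (face crop, list of eye crops)."""
    gray = cv2.cvtColor(user_image, cv2.COLOR_BGR2GRAY)
    faces = face_det.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None, []
    x, y, w, h = faces[0]                          # first detected face
    face_img = user_image[y:y + h, x:x + w]
    eyes = eye_det.detectMultiScale(gray[y:y + h, x:x + w])
    eye_imgs = [face_img[ey:ey + eh, ex:ex + ew] for ex, ey, ew, eh in eyes]
    return face_img, eye_imgs                      # inputs for sight estimation
```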
6. The interaction method according to claim 5, wherein the performing eye detection processing on the face image comprises:
performing key point detection on the face image, and determining an eye position and a head pose based on a key point detection result and a standard face model;
and before the determining the sight line direction of the user according to the face image and the eye image, the method further comprises:
performing normalization processing on the face image and the eye image according to the head pose.
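A sketch of claim 6 using a six-point standard face model and OpenCV's PnP solver for the head pose. The model coordinates and the roll-only normalization are simplifying assumptions of this sketch, not the claimed normalization.

```python
# Sketch of claim 6: key points + standard face model -> head pose; then a
# simplified normalization of the crop.
import cv2
import numpy as np

MODEL_POINTS = np.array([              # generic face model, millimetres (assumed)
    (0.0, 0.0, 0.0),                   # nose tip
    (0.0, -63.6, -12.5),               # chin
    (-43.3, 32.7, -26.0),              # left eye outer corner
    (43.3, 32.7, -26.0),               # right eye outer corner
    (-28.9, -28.9, -24.1),             # left mouth corner
    (28.9, -28.9, -24.1),              # right mouth corner
], dtype=np.float64)

def head_pose(image_points, frame_size):
    """image_points: 6x2 float array of key points in MODEL_POINTS order."""
    h, w = frame_size
    cam = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points, cam, np.zeros((4, 1)))
    return rvec, tvec                  # head pose as rotation + translation

def normalize_roll(face_img, left_eye_pt, right_eye_pt):
    """Simplified normalization: rotate the crop so the eye line is horizontal."""
    dx = right_eye_pt[0] - left_eye_pt[0]
    dy = right_eye_pt[1] - left_eye_pt[1]
    angle = float(np.degrees(np.arctan2(dy, dx)))
    center = (face_img.shape[1] / 2, face_img.shape[0] / 2)
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    return cv2.warpAffine(face_img, M, (face_img.shape[1], face_img.shape[0]))
```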
7. The interaction method according to claim 4, wherein the method further comprises:
calibrating an external parameter of the image acquisition element, wherein the external parameter represents a relative relationship between a world coordinate system in which the sight line direction is located and an image coordinate system of the display interface;
and the determining a gazing position of the user in the display interface according to the sight line direction comprises:
determining the gazing position of the user in the display interface according to the sight line direction and the external parameter of the image acquisition element.
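A sketch of how the calibrated external parameter can be used for claim 7: the gazing position is taken as the intersection of the sight line with the display plane. The coordinate frames, units and pixel scale are assumptions of this sketch.

```python
# Sketch of claim 7: external parameter (R, t) maps camera/world coordinates to
# a display frame whose plane is z = 0; intersect the sight line with it.
import numpy as np

def gaze_point_on_screen(eye_pos_cam, sight_dir_cam, R, t, px_per_mm=(3.78, 3.78)):
    origin = R @ eye_pos_cam + t            # eye position in display coordinates
    direction = R @ sight_dir_cam           # sight line direction, same frame
    if abs(direction[2]) < 1e-9:
        return None                         # sight line parallel to the screen
    s = -origin[2] / direction[2]           # solve origin_z + s * dir_z = 0
    hit = origin + s * direction            # intersection point in millimetres
    return hit[0] * px_per_mm[0], hit[1] * px_per_mm[1]   # pixel coordinates
```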
8. The interaction method according to claim 4, wherein the generating and executing a response instruction according to the gazing position comprises:
determining a matching degree between each control element in the display interface and the gazing position according to a position of the control element in the display interface;
and executing a response instruction corresponding to a control element with the highest matching degree in the display interface.
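A sketch of one possible "matching degree" for claim 8, scoring every control by inverse distance between the gazing position and the control's centre. The score definition and the rectangle layout are assumptions of this sketch.

```python
# Sketch of claim 8: score each control against the gazing position, pick the
# best match.
import math

def matching_degree(gaze_xy, control_rect):
    """control_rect: (x, y, w, h) in pixels; higher score = better match."""
    x, y, w, h = control_rect
    cx, cy = x + w / 2, y + h / 2
    dist = math.hypot(gaze_xy[0] - cx, gaze_xy[1] - cy)
    return 1.0 / (1.0 + dist)               # 1.0 when the gaze hits the centre

def best_control(gaze_xy, controls):
    """controls: dict name -> rect. Returns (name, score) of the best match."""
    scored = {name: matching_degree(gaze_xy, rect) for name, rect in controls.items()}
    name = max(scored, key=scored.get)
    return name, scored[name]
```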
9. The interaction method according to claim 8, wherein the executing a response instruction corresponding to a control element with the highest matching degree in the display interface comprises:
executing the response instruction corresponding to the control element with the highest matching degree in response to the matching degree of the control element with the highest matching degree in the display interface being greater than or equal to a preset threshold.
10. The interaction method according to claim 9, wherein the method further comprises:
acquiring, in response to the matching degree of the control element with the highest matching degree in the display interface being smaller than the preset threshold, electroencephalogram signals collected by the brain-computer interface system from the scalp surface of the user.
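A sketch covering claims 9 and 10: respond only when the best matching degree reaches a preset threshold, otherwise fall back to acquiring new electroencephalogram signals. The threshold value and both callables are placeholders, and best_control refers to the sketch after claim 8.

```python
# Sketch of claims 9-10: threshold check, then either respond or fall back.
MATCH_THRESHOLD = 0.02   # assumed value

def resolve_or_fallback(gaze_xy, controls, execute, acquire_eeg_again):
    name, score = best_control(gaze_xy, controls)
    if score >= MATCH_THRESHOLD:
        execute(name)              # response instruction for that control
    else:
        acquire_eeg_again()        # restart from the brain-computer stage
```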
11. An interaction apparatus, applied to a terminal device, the terminal device being communicatively connected to a brain-computer interface system, the apparatus comprising:
an acquisition module, configured to acquire electroencephalogram signals collected by the brain-computer interface system from the scalp surface of a user;
a classification module, configured to classify the electroencephalogram signals and determine a target local area among a plurality of local areas of a display interface of the terminal device according to a classification result;
and a first response module, configured to generate and execute a response instruction according to the display content in the target local area.
12. The interaction apparatus according to claim 11, wherein, when classifying the electroencephalogram signals, the classification module is specifically configured to:
extract features from the electroencephalogram signals to obtain a feature vector;
determine the classification result according to the feature vector, wherein the classification result comprises an identification of one of the local areas of the display interface;
and, when determining the target local area among the plurality of local areas of the display interface of the terminal device according to the classification result, the classification module is specifically configured to:
determine the local area corresponding to the identification in the classification result as the target local area.
13. The interaction apparatus according to claim 11, wherein the first response module is specifically configured to:
execute, in a case that one control element exists in the target local area, a response instruction corresponding to the control element;
and, in a case that a plurality of control elements exist in the target local area, switch the display interface to the target local area.
14. The interaction apparatus according to claim 13, wherein the terminal device has an image acquisition element, and the apparatus further comprises a second response module configured to:
acquire, after the display interface is switched to the target local area, a user image collected by the image acquisition element, wherein the user image contains the eyes of the user;
determine a sight line direction of the user according to the user image;
determine a gazing position of the user in the display interface according to the sight line direction;
and generate and execute a response instruction according to the gazing position.
15. The interaction apparatus according to claim 14, wherein, when determining the sight line direction of the user according to the user image, the second response module is specifically configured to:
perform face detection processing on the user image, and crop the user image according to a face detection result to obtain a face image;
perform eye detection processing on the face image, and crop the face image according to an eye detection result to obtain an eye image;
and determine the sight line direction of the user according to the face image and the eye image.
16. The interaction apparatus according to claim 15, wherein, when performing the eye detection processing on the face image, the second response module is specifically configured to:
perform key point detection on the face image, and determine an eye position and a head pose based on a key point detection result and a standard face model;
and the apparatus further comprises a normalization module configured to:
perform, before the sight line direction of the user is determined according to the face image and the eye image, normalization processing on the face image and the eye image according to the head pose.
17. The interaction apparatus according to claim 14, further comprising a calibration module configured to:
calibrate an external parameter of the image acquisition element, wherein the external parameter represents a relative relationship between a world coordinate system in which the sight line direction is located and an image coordinate system of the display interface;
and wherein, when determining the gazing position of the user in the display interface according to the sight line direction, the second response module is specifically configured to:
determine the gazing position of the user in the display interface according to the sight line direction and the external parameter of the image acquisition element.
18. The interaction apparatus according to claim 14, wherein, when generating and executing a response instruction according to the gazing position, the second response module is specifically configured to:
determine a matching degree between each control element in the display interface and the gazing position according to a position of the control element in the display interface;
and execute a response instruction corresponding to a control element with the highest matching degree in the display interface.
19. The interaction apparatus according to claim 18, wherein, when executing the response instruction corresponding to the control element with the highest matching degree in the display interface, the second response module is specifically configured to:
execute the response instruction corresponding to the control element with the highest matching degree in response to the matching degree of the control element with the highest matching degree in the display interface being greater than or equal to a preset threshold.
20. The interaction apparatus according to claim 19, further comprising an instruction module configured to:
acquire, in response to the matching degree of the control element with the highest matching degree in the display interface being smaller than the preset threshold, electroencephalogram signals collected by the brain-computer interface system from the scalp surface of the user.
21. An electronic device, comprising a memory and a processor, wherein the memory is configured to store computer instructions executable on the processor, and the processor is configured to implement the interaction method of any one of claims 1 to 10 when executing the computer instructions.
22. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the interaction method of any one of claims 1 to 10.
CN202211528129.0A 2022-11-30 2022-11-30 Interaction method, interaction device, electronic equipment and storage medium Pending CN118113141A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211528129.0A CN118113141A (en) 2022-11-30 2022-11-30 Interaction method, interaction device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN118113141A true CN118113141A (en) 2024-05-31

Family

ID=91207442

Country Status (1)

Country Link
CN (1) CN118113141A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination