
CN112905004A - Gesture control method and device for vehicle-mounted display screen and storage medium - Google Patents


Info

Publication number
CN112905004A
CN112905004A
Authority
CN
China
Prior art keywords
gesture
information
image
vehicle
screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110084512.0A
Other languages
Chinese (zh)
Other versions
CN112905004B (en)
Inventor
杨小辉
常博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Geely Holding Group Co Ltd
Geely Automobile Research Institute Ningbo Co Ltd
Original Assignee
Zhejiang Geely Holding Group Co Ltd
Geely Automobile Research Institute Ningbo Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Geely Holding Group Co Ltd and Geely Automobile Research Institute Ningbo Co Ltd
Priority claimed from CN202110084512.0A
Publication of CN112905004A
Application granted
Publication of CN112905004B
Legal status: Active (Current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models
    • B60W40/08 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models, related to drivers or passengers
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 Interaction between the driver and the control system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 Static hand or arm
    • G06V40/113 Recognition of static hand signs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 Interaction between the driver and the control system
    • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146 Display means
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2540/00 Input parameters relating to occupants
    • B60W2540/223 Posture, e.g. hand, foot, or seat position, turned or inclined

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Automation & Control Theory (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Mechanical Engineering (AREA)
  • Transportation (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to a gesture control method and device for a vehicle-mounted display screen, and a storage medium. The method comprises: acquiring information to be recognized containing gesture actions, wherein the information is obtained by a detection unit whose detection range covers the screen-surface preset spaces corresponding to at least two vehicle-mounted display screens, and the information comprises consecutive, distinct gesture actions; recognizing the information to obtain a recognition result; if it is determined from the recognition result that a transition from a first gesture to a second gesture occurs within a first screen-surface preset space, determining the picture on the vehicle-mounted display screen corresponding to that space to be the picture to be shared; and if it is determined from the recognition result that a transition from the second gesture to the first gesture occurs within a second screen-surface preset space, displaying the picture to be shared on the vehicle-mounted display screen corresponding to that space. Driving safety and the human-computer interaction experience can thereby be improved.

Description

Gesture control method and device for vehicle-mounted display screen and storage medium
Technical Field
The present application relates to the field of automotive technology, and in particular to a gesture control method and device for a vehicle-mounted display screen, and a storage medium.
Background
With the development of automotive intelligence, a variety of striking new technologies have moved from science fiction into consumers' view. Among them, the vehicle-mounted display, as one of the most important configurations inside an automobile, is drawing increasing consumer attention.
Vehicle-mounted displays include the center console display screen, the head-up display (HUD), the instrument cluster display screen, the streaming-media rearview mirror, and the like. The center console display screen can show audio, navigation, vehicle information, reversing images, and similar content; it is located between the driver's seat and the front passenger seat for convenient use by both. The HUD projects important driving information such as speed and navigation onto the windshield in front of the driver. The instrument cluster display screen, located directly in front of the driver's seat, can show speed, navigation, weather, humidity, driving mode, and other information for the driver to observe. The streaming-media rearview mirror captures the scene behind the vehicle in real time through a rear camera and displays it on the central rearview mirror display screen.
However, because the mounting positions of these vehicle-mounted displays are fixed, a driver using the navigation function must frequently turn his head toward the center console display screen to learn the current position and driving route, which distracts his attention. Likewise, a novice driver who relies on the streaming-media rearview mirror to check the road conditions behind the vehicle must frequently raise his head to look at the central rearview mirror display screen, whose screen is small; the driver therefore cannot acquire information quickly, easily suffers visual fatigue, and driving safety is poor.
Disclosure of Invention
The embodiments of the present application provide a gesture control method and device for a vehicle-mounted display screen, and a storage medium, which can improve driving safety and the human-computer interaction experience.
In one aspect, an embodiment of the present application provides a gesture control method for a vehicle-mounted display screen, comprising:
acquiring information to be recognized containing gesture actions, wherein the information is obtained by a detection unit whose detection range covers the screen-surface preset spaces corresponding to at least two vehicle-mounted display screens, and the information comprises consecutive, distinct gesture actions;
recognizing the information to be recognized to obtain a recognition result;
if it is determined from the recognition result that a transition from a first gesture to a second gesture occurs within a first screen-surface preset space, determining the picture on the vehicle-mounted display screen corresponding to that space to be the picture to be shared; and
if it is determined from the recognition result that a transition from the second gesture to the first gesture occurs within a second screen-surface preset space, displaying the picture to be shared on the vehicle-mounted display screen corresponding to that space.
Optionally, the detection unit comprises a camera, and acquiring the information to be recognized containing gesture actions comprises:
acquiring multiple consecutive frames of images to be recognized through the camera; and
detecting the multiple consecutive frames according to an acquired image detection model to obtain a set of images to be recognized containing gesture actions.
Optionally, recognizing the information to be recognized to obtain a recognition result comprises:
recognizing the set of images to be recognized according to an acquired target gesture recognition model to obtain image frame information containing a target gesture and the position of the target gesture in each frame of image.
Optionally, the recognition result comprises first image frame information containing the first gesture, position information of the first gesture in each frame of image, second image frame information containing the second gesture, and position information of the second gesture in each frame of image.
Determining from the recognition result that a transition from the first gesture to the second gesture occurs within the first screen-surface preset space comprises:
if the frame following the current frame in the first image frame information is the first frame in the second image frame information, and the matching degree value between the position of the first gesture in the current frame and the position of the second gesture in that first frame is greater than or equal to a preset value, determining that a transition from the first gesture to the second gesture occurs within the first screen-surface preset space, which corresponds to the position of the first gesture in the current frame or the position of the second gesture in the first frame.
Optionally, determining from the recognition result that a transition from the second gesture to the first gesture occurs within the second screen-surface preset space comprises:
if the current frame in the first image frame information is the frame following the last frame in the second image frame information, and the matching degree value between the position of the second gesture in that last frame and the position of the first gesture in the current frame is greater than or equal to a preset value, determining that a transition from the second gesture to the first gesture occurs within the second screen-surface preset space, which corresponds to the position of the first gesture in the current frame or the position of the second gesture in the last frame.
Optionally, the detection unit comprises distance sensors, one provided for each of the at least two vehicle-mounted display screens, and acquiring the information to be recognized containing gesture actions comprises:
capturing hand motions through the distance sensors to generate the information to be recognized containing gesture actions.
Optionally, the detection unit comprises infrared detectors, one provided for each of the at least two vehicle-mounted display screens, and acquiring the information to be recognized containing gesture actions comprises:
capturing hand motions through the infrared detectors to generate the information to be recognized containing gesture actions.
Optionally, the method further comprises:
capturing the second screen-surface preset space through a camera to obtain an image sequence, wherein the second screen-surface preset space comprises a plurality of subspaces corresponding one-to-one to a plurality of controllable functions of the vehicle-mounted display screen associated with that space;
determining a manipulation gesture based on the image sequence; and
determining a function to be controlled from the plurality of controllable functions according to the manipulation gesture, and controlling the function to be controlled.
In another aspect, an embodiment of the present application provides a gesture control device for a vehicle-mounted display screen, comprising:
an acquisition module for acquiring information to be recognized containing gesture actions, wherein the information is obtained by a detection unit whose detection range covers the screen-surface preset spaces corresponding to at least two vehicle-mounted display screens, and the information comprises consecutive, distinct gesture actions;
a recognition module for recognizing the information to be recognized to obtain a recognition result;
a first determining module for determining, if a transition from a first gesture to a second gesture is found within a first screen-surface preset space according to the recognition result, that the picture on the vehicle-mounted display screen corresponding to that space is the picture to be shared; and
a second determining module for displaying the picture to be shared on the vehicle-mounted display screen corresponding to a second screen-surface preset space if a transition from the second gesture to the first gesture is found within that space according to the recognition result.
In another aspect, an embodiment of the present application provides a computer storage medium storing at least one instruction or at least one program, which is loaded and executed by a processor to implement the above gesture control method for a vehicle-mounted display screen.
The gesture control method and device for a vehicle-mounted display screen and the storage medium provided by the embodiments of the present application have the following beneficial effects:
information to be recognized containing gesture actions is acquired by a detection unit whose detection range covers the screen-surface preset spaces corresponding to at least two vehicle-mounted display screens, and comprises consecutive, distinct gesture actions; the information is recognized to obtain a recognition result; if the recognition result shows a transition from a first gesture to a second gesture within a first screen-surface preset space, the picture on the corresponding vehicle-mounted display screen becomes the picture to be shared; and if it shows a transition from the second gesture back to the first gesture within a second screen-surface preset space, the picture to be shared is displayed on the vehicle-mounted display screen corresponding to that space. Driving safety and the human-computer interaction experience are thereby improved.
Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an application scenario of an automobile cabin provided in an embodiment of the present application;
FIG. 2 is a schematic flowchart of a gesture control method for a vehicle-mounted display screen provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of the recognition process for information to be recognized provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of the image regions corresponding to the screen-surface preset space of each vehicle-mounted display screen provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of the process of determining a manipulation function from a recognition result provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of the controllable function corresponding to each sub-region of an image provided in an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a gesture control device for a vehicle-mounted display screen provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments herein without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description, claims, and drawings of this application are used to distinguish similar elements and not necessarily to describe a particular sequential or chronological order. It is to be understood that data so used are interchangeable under appropriate circumstances, so that the embodiments described herein can be practiced in orders other than those illustrated or described. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover non-exclusive inclusion, so that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those expressly listed, but may include other steps or elements not expressly listed or inherent to it.
Referring to FIG. 1, FIG. 1 is a schematic diagram of an application scenario of an automobile cabin, which includes a plurality of vehicle-mounted display screens, namely a center console display screen 101, an instrument panel display screen 102, a streaming-media rearview mirror display screen 103, and a HUD display screen 104, together with a detection unit for capturing users' gesture actions directed at each vehicle-mounted display screen, so as to realize picture sharing among the screens.
Information to be recognized containing gesture actions is acquired by the detection unit, whose detection range covers the screen-surface preset spaces respectively corresponding to the center console display screen 101, the instrument panel display screen 102, the streaming-media rearview mirror display screen 103, and the HUD display screen 104; the information comprises consecutive, distinct gesture actions. The information is then recognized to obtain a recognition result. If it is determined from the recognition result that a transition from a first gesture to a second gesture occurs within a first screen-surface preset space, the picture on the vehicle-mounted display screen corresponding to that space is determined to be the picture to be shared; if a transition from the second gesture to the first gesture occurs within a second screen-surface preset space, the picture to be shared is displayed on the vehicle-mounted display screen corresponding to that space.
Optionally, the detection unit is mounted according to actual needs: a detection device may be installed near each vehicle-mounted display screen to detect gesture actions for that screen, or a single detection device may detect gesture actions for all vehicle-mounted display screens.
A specific embodiment of the gesture control method for a vehicle-mounted display screen is described below. FIG. 2 is a schematic flowchart of the method provided in an embodiment of the present application. The present specification provides the method operation steps as in the embodiment or flowchart, but more or fewer steps may be included without inventive effort. The order of steps recited in the embodiment is merely one of many possible execution orders and does not represent the only one; in practice, a system or server product may execute the steps sequentially or in parallel (for example, in a parallel-processor or multi-threaded environment) according to the embodiment or the method shown in the figure. Specifically, as shown in FIG. 2, the method may include:
s201: acquiring information to be recognized containing gesture actions; the information to be identified is obtained based on the detection unit; the detection range of the detection unit comprises screen surface preset spaces corresponding to at least two vehicle-mounted display screens; the information to be recognized comprises consecutive and different gesture actions.
In the embodiment of the application, a detection unit is arranged in an automobile cabin, and the detection unit is used for detecting a preset screen surface space corresponding to a vehicle-mounted display screen to acquire information to be recognized containing gesture actions, wherein the information to be recognized comprises consecutive and different gesture actions; the vehicle-mounted display screen comprises the above-mentioned central control display screen, a dashboard display screen, a streaming media rearview mirror display screen, a HUD display screen and the like; the screen surface is preset with a space, namely an area with a certain distance above the vehicle-mounted display screen; the detection unit comprises one or more detection devices, and when only one detection device is provided, the detection range of the detection device can cover all screen surface preset spaces corresponding to the central control display screen, the instrument panel display screen, the streaming media rearview mirror display screen and the HUD display screen; when a plurality of detection device, a plurality of detection device set up respectively near well accuse display screen, panel board display screen, streaming media rear-view mirror display screen and HUD display screen, promptly every detection device among a plurality of detection device and every on-vehicle display screen one-to-one among a plurality of on-vehicle display screens.
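The two deployment options can be captured in a small configuration structure. This is an illustrative sketch only; the screen identifiers and the DetectionDevice type are assumptions, not part of the patent:

```python
from dataclasses import dataclass

# Illustrative identifiers for the four screens named in the application.
SCREENS = ["center_console", "instrument_panel", "streaming_mirror", "hud"]

@dataclass
class DetectionDevice:
    """One detection device and the screen-surface preset spaces it covers."""
    name: str
    covered_screens: list

# Option 1: a single device whose detection range covers all preset spaces.
single_device = [DetectionDevice("cabin_camera", list(SCREENS))]

# Option 2: one device mounted near each screen, in one-to-one correspondence.
per_screen_devices = [DetectionDevice(f"{s}_sensor", [s]) for s in SCREENS]
```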
In the embodiments of the present application, the acquired information to be recognized containing gesture actions may take the form of images, video, or point clouds, and the corresponding detection unit may comprise a camera, an infrared recognition device, a distance sensor, or other sensors.
In an optional embodiment, the detection unit comprises a camera arranged in the area above the center console display screen, positioned so that it can observe the screen-surface preset spaces respectively corresponding to all the vehicle-mounted display screens, including the center console display screen, instrument panel display screen, streaming-media rearview mirror display screen, and HUD display screen. Step S201 may then specifically include: acquiring multiple consecutive frames of images to be recognized through the camera, and detecting these frames according to an acquired image detection model to obtain a set of images to be recognized containing gesture actions. The image detection model can be obtained by training a machine learning model on collected training images containing target gesture actions, where a target gesture action may be a single gesture, or several consecutive, distinct gestures, for controlling a vehicle-mounted display screen.
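As a minimal sketch of this acquisition step, assuming OpenCV for frame capture and a stub standing in for the trained image detection model (cv2, the camera index, and gesture_present are all assumptions, not the patent's implementation):

```python
import cv2  # OpenCV; assumed available for camera capture

def gesture_present(frame) -> bool:
    """Stub for the trained image detection model described above; a real
    implementation would run the model and report whether the frame
    contains a gesture action."""
    return True  # placeholder so the sketch runs end to end

def collect_images_to_recognize(camera_index=0, max_frames=120):
    """Grab consecutive frames from the camera and keep those flagged by
    the detection model, yielding the set of images to be recognized."""
    cap = cv2.VideoCapture(camera_index)
    kept = []
    try:
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            if gesture_present(frame):
                kept.append(frame)
    finally:
        cap.release()
    return kept
```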
In an optional embodiment, the detection unit comprises distance sensors, one arranged in the area near each of the center console display screen, instrument panel display screen, streaming-media rearview mirror display screen, and HUD display screen. Step S201 may then specifically include: capturing hand motions through the distance sensors to generate the information to be recognized containing gesture actions. Specifically, the distance sensors of all the vehicle-mounted display screens are connected to the same single-chip microcomputer, and the separately acquired hand signals are fed into it for computation to obtain the information to be recognized; the distance sensor may be, for example, a Microsoft Kinect.
In an optional embodiment, the detection unit comprises infrared detectors, one arranged in the area near each of the center console display screen, instrument panel display screen, streaming-media rearview mirror display screen, and HUD display screen. Step S201 may then specifically include: capturing hand motions through the infrared detectors to generate the information to be recognized containing gesture actions. Specifically, the infrared detectors of all the vehicle-mounted display screens are connected to the same single-chip microcomputer, and the separately acquired hand signals are fed into it for computation to obtain the information to be recognized.
S203: recognizing the information to be recognized to obtain a recognition result.
In the embodiments of the present application, the information to be recognized is recognized to obtain a recognition result, which comprises either all gestures in the information or only the target gestures remaining after screening. A target gesture may be a single gesture, or several consecutive, distinct gestures, for controlling a vehicle-mounted display screen.
In an optional embodiment, step S203 may include: recognizing the set of images to be recognized according to an acquired target gesture recognition model to obtain image frame information containing a target gesture and the position of the target gesture in each frame of image. In this embodiment the set of images to be recognized already contains target gesture actions; to further improve accuracy, the set is recognized again by the target gesture recognition model, so that each frame has undergone two rounds of detection and the confidence in the target gesture is high.
Specifically, a training image set containing the target gesture is collected, with the gesture appearing at different positions or regions in the images, and a preset machine learning model is trained on this set to obtain the target gesture recognition model. As shown in FIG. 3, which illustrates the recognition process for information to be recognized provided in an embodiment of the present application, after the set of images to be recognized passes through the target gesture recognition model, the model outputs the image frame information t0 to tn containing the target gesture and the position P {(x0, y0) ... (xn, yn)} of the target gesture in each frame of image.
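The output sketched in FIG. 3 can be modeled as one record per frame; the record type and the model stub below are illustrative assumptions, not the patent's data format:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class GestureObservation:
    """One frame of the recognition result: which target gesture appears
    (if any) and its (x, y) position in the image."""
    frame_index: int                         # t0 ... tn
    gesture: Optional[str]                   # e.g. "palm_open" or "pinch"
    position: Optional[Tuple[float, float]]  # (xk, yk) in image coordinates

def recognize_gestures(frames) -> List[GestureObservation]:
    """Stub for the target gesture recognition model: a real implementation
    would run inference on each frame of the image set to be recognized."""
    observations = []
    for i, _frame in enumerate(frames):
        gesture, position = None, None  # replace with model inference
        observations.append(GestureObservation(i, gesture, position))
    return observations
```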
S205: if it is determined from the recognition result that a transition from a first gesture to a second gesture occurs within a first screen-surface preset space, determining the picture on the vehicle-mounted display screen corresponding to that space to be the picture to be shared.
S207: if it is determined from the recognition result that a transition from the second gesture to the first gesture occurs within a second screen-surface preset space, displaying the picture to be shared on the vehicle-mounted display screen corresponding to that space.
In the embodiments of the present application, the region of the image corresponding to each screen-surface preset space is predetermined. For example, as shown in FIG. 4, region a corresponds to the center console display screen, region b to the instrument panel display screen, region c to the streaming-media rearview mirror display screen, and region d to the HUD display screen. The target gestures comprise a first gesture and a second gesture: changing from the first gesture to the second gesture within one region selects the corresponding vehicle-mounted display screen, whose picture is then determined to be the picture to be shared; changing from the second gesture back to the first gesture in another region displays the picture to be shared on the vehicle-mounted display screen corresponding to that region.
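A sketch of the FIG. 4 region lookup; the rectangles below are invented coordinates for illustration (the patent does not specify the region geometry), assuming a 1280x720 camera image:

```python
# Illustrative image regions (x_min, y_min, x_max, y_max) per screen.
REGIONS = {
    "a_center_console":   (400, 400, 880, 720),
    "b_instrument_panel": (80, 300, 380, 600),
    "c_streaming_mirror": (500, 0, 780, 200),
    "d_hud":              (900, 100, 1260, 350),
}

def region_of(position):
    """Return the screen-surface preset space whose image region contains
    the gesture position (x, y), or None if it falls outside all regions."""
    if position is None:
        return None
    x, y = position
    for name, (x0, y0, x1, y1) in REGIONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None
```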
In an optional embodiment, the recognition result comprises first image frame information containing the first gesture, position information of the first gesture in each frame of image, second image frame information containing the second gesture, and position information of the second gesture in each frame of image; when it is determined from the recognition result that a transition from the first gesture to the second gesture occurs within the first screen-surface preset space, the picture on the corresponding vehicle-mounted display screen is determined to be the picture to be shared.
Correspondingly, determining from the recognition result that a transition from the first gesture to the second gesture occurs within the first screen-surface preset space may include: if the frame following the current frame in the first image frame information is the first frame in the second image frame information, and the matching degree value between the position of the first gesture in the current frame and the position of the second gesture in that first frame is greater than or equal to a preset value, determining that the transition occurs within the first screen-surface preset space, which corresponds to the position of the first gesture in the current frame or of the second gesture in the first frame. The preset value may be 1. The matching degree value is determined from the predetermined image regions of the screen-surface preset spaces: it equals 1 when the two positions fall within the image region of the same screen-surface preset space, and 0 when they do not.
Correspondingly, determining from the recognition result that a transition from the second gesture to the first gesture occurs within the second screen-surface preset space may include: if the current frame in the first image frame information is the frame following the last frame in the second image frame information, and the matching degree value between the position of the second gesture in that last frame and the position of the first gesture in the current frame is greater than or equal to the preset value, determining that the transition occurs within the second screen-surface preset space, which corresponds to the position of the first gesture in the current frame or of the second gesture in the last frame.
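Both rules reduce to one check at the boundary between two gesture runs: adjacent frames, the expected gesture change, and both positions inside the same region (matching degree value 1, i.e. at least the preset value). A sketch, reusing the illustrative GestureObservation and region_of from above:

```python
def transition_region(prev_obs, next_obs, expect_from, expect_to):
    """If a transition expect_from -> expect_to occurs between two adjacent
    frames, return the region (screen-surface preset space) in which it
    occurs; otherwise return None."""
    adjacent = next_obs.frame_index == prev_obs.frame_index + 1
    gestures_change = (prev_obs.gesture == expect_from
                       and next_obs.gesture == expect_to)
    if not (adjacent and gestures_change):
        return None
    r_prev = region_of(prev_obs.position)
    r_next = region_of(next_obs.position)
    # Matching degree value: 1 if both positions lie in the image region of
    # the same preset space, 0 otherwise; the preset value is taken to be 1.
    if r_prev is not None and r_prev == r_next:
        return r_prev
    return None
```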
Specifically, as shown in FIG. 5, the first gesture may be an open palm and the second gesture a finger pinch. The recognition result comprises first image frame information containing the first gesture (open palm) for frames t0 to t5 and t21 to tn, with position information P {(x0, y0) ... (x5, y5), (x21, y21) ... (xn, yn)}, and second image frame information containing the second gesture for frames t6 to t20, with position information P {(x6, y6) ... (x20, y20)}. Here the frame (t6) following the current frame (t5) of the first image frame information is the first frame of the second image frame information, and the positions of the first gesture in t5 and of the second gesture in t6 are both within region a, so the matching degree value is 1: a transition from the first gesture to the second gesture is determined to occur within the screen-surface preset space corresponding to region a, and the picture on the center console display screen corresponding to region a becomes the picture to be shared. Meanwhile, the current frame (t21) of the first image frame information is the frame following the last frame (t20) of the second image frame information, and the positions of the second gesture in t20 and of the first gesture in t21 are both within region b, so the matching degree value is 1: a transition from the second gesture to the first gesture is determined to occur within the screen-surface preset space corresponding to region b, and the picture to be shared from the center console display screen is displayed on the instrument panel display screen. The picture to be shared may be a navigation map, so that the driver can learn the current position and driving route without frequently turning his head toward the center console display screen, and driving safety is improved. The method applies equally to picture interaction between other vehicle-mounted display screens: for example, when the positions in t5 and t6 are both within region c, the picture of the streaming-media rearview mirror display screen is selected as the picture to be shared, and when the positions in t20 and t21 are both within region a, that picture is displayed full-screen or in a window on the center console display screen; the driver then no longer needs to raise his head frequently to view the small streaming-media rearview mirror screen, can acquire information quickly and directly from the center console display screen, suffers less visual fatigue, and driving safety is improved.
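Tying the steps together on the worked example above (open palm for t0 to t5 in region a, pinch from t6 to t20 ending in region b, open palm again from t21), a minimal end-to-end controller might look as follows; the display hook, gesture names, and all coordinates are illustrative assumptions:

```python
def run_share_controller(observations, first="palm_open", second="pinch"):
    """Scan adjacent observation pairs: a first -> second transition selects
    the source screen's picture (S205); a later second -> first transition
    displays it on the target screen (S207)."""
    picture_to_share = None
    for prev_obs, next_obs in zip(observations, observations[1:]):
        src = transition_region(prev_obs, next_obs, first, second)
        if src is not None:
            picture_to_share = f"picture_of_{src}"
            continue
        dst = transition_region(prev_obs, next_obs, second, first)
        if dst is not None and picture_to_share is not None:
            print(f"display {picture_to_share} on {dst}")
            picture_to_share = None

# Observations mimicking FIG. 5: select on the center console (region a),
# then share onto the instrument panel (region b).
sample = (
    [GestureObservation(t, "palm_open", (600, 500)) for t in range(6)]
    + [GestureObservation(t, "pinch", (600, 500)) for t in range(6, 20)]
    + [GestureObservation(20, "pinch", (200, 400))]
    + [GestureObservation(t, "palm_open", (200, 400)) for t in range(21, 24)]
)
run_share_controller(sample)
# -> display picture_of_a_center_console on b_instrument_panel
```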
In an optional embodiment, the method may further include: capturing the second screen-surface preset space through a camera to obtain an image sequence, wherein the second screen-surface preset space comprises a plurality of subspaces corresponding one-to-one to a plurality of controllable functions of the associated vehicle-mounted display screen; determining a manipulation gesture based on the image sequence; and determining a function to be controlled from the plurality of controllable functions according to the manipulation gesture, and controlling it. The controllable functions include adjusting the progress, brightness, volume, and the like of the currently playing video.
Specifically, the sub-regions of the image corresponding to the plurality of subspaces are predetermined. As shown in FIG. 6, which illustrates the controllable function corresponding to each sub-region of the image provided in an embodiment of the present application, sub-region e corresponds to the video progress adjustment function, sub-region f to the picture brightness adjustment function, and sub-region g to the volume adjustment function; sub-regions can be added or removed according to actual requirements. The screen-surface preset space corresponding to the center console display screen is captured through the camera to obtain an image sequence, from which a manipulation gesture is analyzed; the function to be controlled is then determined from the plurality of controllable functions according to the manipulation gesture and is controlled. The manipulation gesture may comprise a transition from a first gesture to a second gesture in a first sub-region, a spatial movement while holding the second gesture, and finally a switch from the second gesture to a third gesture; the third gesture may be the same as the first gesture, and the function corresponding to the first sub-region is the function to be controlled. For example, when the center console display screen (or another in-car screen) is playing a video full-screen, the user extends an open palm (first gesture) into the lower part of the screen-surface preset space (sub-region e) and pinches thumb and forefinger together (switching to the second gesture): moving the pinch to the right fast-forwards the video synchronously, moving it to the left rewinds synchronously, and opening the pinch (switching to the third gesture) stops the manipulation. Similarly, an open palm extended into the left part of the space (sub-region f) followed by a pinch controls picture brightness, which increases as the pinch moves up and decreases as it moves down, stopping when the pinch is opened; and an open palm extended into the right part of the space (sub-region g) followed by a pinch controls volume, which increases as the pinch moves up and decreases as it moves down, stopping when the pinch is opened. As another example, when the center console display screen is playing a video but not full-screen, extending a pinch into the preset region of the display screen and then opening all five fingers starts full-screen playback, and the reverse gesture exits full-screen playback. Compared with prior-art control schemes, which mainly recognize static gestures and single dynamic gestures, map simple gestures to single on-vehicle operation commands, leave some gestures rarely used, and do not match human operating intuition, this manipulation scheme is more intuitive. A sketch of the sub-region mapping follows.
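A sketch of the FIG. 6 sub-region manipulation, assuming the pinch-drag displacement (dx, dy) has already been extracted from the image sequence; the sub-region names and the returned control strings are illustrative:

```python
# Illustrative mapping of sub-regions to controllable functions (FIG. 6).
SUBREGION_FUNCTION = {
    "e_lower": "progress",    # pinch-drag right/left: fast forward / rewind
    "f_left":  "brightness",  # pinch-drag up/down: brighter / dimmer
    "g_right": "volume",      # pinch-drag up/down: louder / quieter
}

def apply_manipulation(subregion, dx, dy):
    """Map a pinch-drag inside a sub-region to a control action; opening
    the hand (the third gesture) would end the manipulation."""
    function = SUBREGION_FUNCTION.get(subregion)
    if function == "progress":
        return f"seek video by {dx:+.0f}"          # dx > 0: fast forward
    if function == "brightness":
        return f"adjust brightness by {-dy:+.0f}"  # image y grows downward
    if function == "volume":
        return f"adjust volume by {-dy:+.0f}"      # moving up raises volume
    return "no controllable function here"

print(apply_manipulation("e_lower", 40, 0))   # -> seek video by +40
print(apply_manipulation("g_right", 0, -25))  # -> adjust volume by +25
```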
The method provided by the embodiments of the present application can be executed on a computer terminal, a server, or a similar computing device.
An embodiment of the present application further provides a gesture control device for a vehicle-mounted display screen. FIG. 7 is a schematic structural diagram of the device; as shown in FIG. 7, the device comprises:
an acquisition module 701 for acquiring information to be recognized containing gesture actions, wherein the information is obtained by a detection unit whose detection range covers the screen-surface preset spaces corresponding to at least two vehicle-mounted display screens, and the information comprises consecutive, distinct gesture actions;
a recognition module 702 for recognizing the information to be recognized to obtain a recognition result;
a first determining module 703 for determining, if a transition from a first gesture to a second gesture is found within a first screen-surface preset space according to the recognition result, that the picture on the vehicle-mounted display screen corresponding to that space is the picture to be shared; and
a second determining module 704 for displaying the picture to be shared on the vehicle-mounted display screen corresponding to a second screen-surface preset space if a transition from the second gesture to the first gesture is found within that space according to the recognition result.
In an optional embodiment, the detection unit comprises a camera, and the acquisition module 701 is specifically configured to acquire multiple consecutive frames of images to be recognized through the camera and to detect them according to an acquired image detection model to obtain a set of images to be recognized containing gesture actions.
In an optional embodiment, the recognition module 702 is specifically configured to recognize the set of images to be recognized according to an acquired target gesture recognition model to obtain image frame information containing a target gesture and the position of the target gesture in each frame of image.
In an optional embodiment, the recognition result comprises first image frame information containing the first gesture, position information of the first gesture in each frame of image, second image frame information containing the second gesture, and position information of the second gesture in each frame of image; the first determining module 703 is specifically configured to determine, if the frame following the current frame in the first image frame information is the first frame in the second image frame information and the matching degree value between the position of the first gesture in the current frame and the position of the second gesture in that first frame is greater than or equal to a preset value, that a transition from the first gesture to the second gesture occurs within the first screen-surface preset space, which corresponds to the position of the first gesture in the current frame or of the second gesture in the first frame.
In an optional embodiment, the second determining module 704 is specifically configured to determine, if the current frame in the first image frame information is the frame following the last frame in the second image frame information and the matching degree value between the position of the second gesture in that last frame and the position of the first gesture in the current frame is greater than or equal to the preset value, that a transition from the second gesture to the first gesture occurs within the second screen-surface preset space, which corresponds to the position of the first gesture in the current frame or of the second gesture in the last frame.
In an optional embodiment, the detection unit comprises distance sensors, one provided for each of the at least two vehicle-mounted display screens, and the acquisition module 701 is specifically configured to capture hand motions through the distance sensors to generate the information to be recognized containing gesture actions.
In an optional embodiment, the detection unit comprises infrared detectors, one provided for each of the at least two vehicle-mounted display screens, and the acquisition module 701 is specifically configured to capture hand motions through the infrared detectors to generate the information to be recognized containing gesture actions.
In an optional embodiment, the device further comprises a third determining module configured to capture the second screen-surface preset space through a camera to obtain an image sequence, wherein the second screen-surface preset space comprises a plurality of subspaces corresponding one-to-one to a plurality of controllable functions of the associated vehicle-mounted display screen; to determine a manipulation gesture based on the image sequence; and to determine a function to be controlled from the plurality of controllable functions according to the manipulation gesture, and control it.
The device and method embodiments of the present application are based on the same inventive concept.
An embodiment of the present application further provides a storage medium, which may be disposed in a server and stores at least one instruction, at least one program, a code set, or an instruction set for implementing the gesture control method for a vehicle-mounted display screen of the method embodiments; the at least one instruction, program, code set, or instruction set is loaded and executed by a processor to implement the method.
Optionally, in this embodiment, the storage medium may be located in at least one of a plurality of network servers of a computer network. Optionally, the storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
According to the above embodiments of the gesture control method and device for a vehicle-mounted display screen and the storage medium: information to be recognized containing gesture actions is acquired by a detection unit whose detection range covers the screen-surface preset spaces corresponding to at least two vehicle-mounted display screens, and comprises consecutive, distinct gesture actions; the information is recognized to obtain a recognition result; if a transition from a first gesture to a second gesture occurs within a first screen-surface preset space, the picture on the corresponding vehicle-mounted display screen becomes the picture to be shared; and if a transition from the second gesture to the first gesture occurs within a second screen-surface preset space, the picture to be shared is displayed on the corresponding vehicle-mounted display screen. Driving safety and the human-computer interaction experience are thereby improved.
It should be noted that: the sequence of the embodiments of the present application is only for description, and does not represent the advantages and disadvantages of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in this specification are described in a progressive manner: identical or similar parts may be understood by cross-reference between embodiments, and each embodiment focuses on its differences from the others. In particular, the device embodiment is described relatively briefly because it is substantially similar to the method embodiment; for relevant points, reference may be made to the corresponding description of the method embodiment.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk, or an optical disk.
The above description covers only exemplary embodiments of the present application and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its scope of protection.

Claims (10)

1. A gesture control method for a vehicle-mounted display screen, characterized by comprising the following steps:
acquiring information to be recognized containing gesture actions, wherein the information to be recognized is obtained by a detection unit, a detection range of the detection unit covers screen-surface preset spaces corresponding to at least two vehicle-mounted display screens, and the information to be recognized comprises consecutive and distinct gesture actions;
recognizing the information to be recognized to obtain a recognition result;
if it is determined according to the recognition result that a transition action of transforming a first gesture into a second gesture exists in a first screen-surface preset space, determining that a picture on the vehicle-mounted display screen corresponding to the first screen-surface preset space is a picture to be shared; and
if it is determined according to the recognition result that a transition action of transforming the second gesture into the first gesture exists in a second screen-surface preset space, displaying the picture to be shared on the vehicle-mounted display screen corresponding to the second screen-surface preset space.
2. The method according to claim 1, wherein the detection unit comprises a camera, and
the acquiring information to be recognized containing gesture actions comprises:
acquiring a plurality of consecutive frames of images to be recognized through the camera; and
detecting the plurality of consecutive frames of images to be recognized with an acquired image detection model to obtain a set of images to be recognized containing gesture actions.
3. The method according to claim 2, wherein the recognizing the information to be recognized to obtain a recognition result comprises:
recognizing the set of images to be recognized with an acquired target gesture recognition model to obtain image frame information containing a target gesture and a position of the target gesture in each frame of image.
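Claims 2 and 3 together describe a two-stage camera pipeline. The following Python sketch is one plausible reading of it; detect_hands and classify_gesture are hypothetical stand-ins for the image detection model and the target gesture recognition model.

from dataclasses import dataclass

@dataclass
class GestureObservation:
    frame_index: int
    gesture: str    # e.g. "palm" or "fist" (assumed labels)
    box: tuple      # (x, y, w, h) position of the gesture in the frame

def recognize_stream(frames, detect_hands, classify_gesture):
    # Stage 1 keeps only frames in which a hand is detected;
    # stage 2 labels each kept frame with a gesture class and position.
    observations = []
    for i, frame in enumerate(frames):
        box = detect_hands(frame)
        if box is None:
            continue                      # frame contains no gesture
        observations.append(
            GestureObservation(i, classify_gesture(frame, box), box))
    return observations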
4. The method according to claim 3, wherein the recognition result comprises first image frame information containing the first gesture, position information of the first gesture in each frame of image, second image frame information containing the second gesture, and position information of the second gesture in each frame of image; and
the determining according to the recognition result that a transition action of transforming the first gesture into the second gesture exists in the first screen-surface preset space comprises:
if the frame immediately following a current frame image in the first image frame information is the first frame image in the second image frame information, and a matching degree value between the position of the first gesture in the current frame image and the position of the second gesture in the first frame image is greater than or equal to a preset value, determining that a transition action of transforming the first gesture into the second gesture exists in the first screen-surface preset space, wherein the first screen-surface preset space corresponds to the position of the first gesture in the current frame image or the position of the second gesture in the first frame image.
5. The method according to claim 3, wherein the determining according to the recognition result that a transition action of transforming the second gesture into the first gesture exists in the second screen-surface preset space comprises:
if a current frame image in the first image frame information is the frame immediately following a tail frame image in the second image frame information, and a matching degree value between the position of the second gesture in the tail frame image and the position of the first gesture in the current frame image is greater than or equal to a preset value, determining that a transition action of transforming the second gesture into the first gesture exists in the second screen-surface preset space, wherein the second screen-surface preset space corresponds to the position of the first gesture in the current frame image or the position of the second gesture in the tail frame image.
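Claims 4 and 5 are mirror images of one transition test: the last frame of one gesture must be immediately followed by the first frame of the other, with the two positions sufficiently aligned. The sketch below continues the GestureObservation records above; reading the "matching degree value" as intersection-over-union is an assumption, not a definition from the claims.

def box_match(a, b):
    # Intersection-over-union of two (x, y, w, h) boxes, used here as
    # an illustrative matching degree between gesture positions.
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def find_transition(observations, from_gesture, to_gesture, threshold=0.5):
    # Returns the box of the first adjacent-frame transition from
    # `from_gesture` to `to_gesture`, or None if no such transition exists.
    for prev, cur in zip(observations, observations[1:]):
        if (prev.gesture == from_gesture and cur.gesture == to_gesture
                and cur.frame_index == prev.frame_index + 1
                and box_match(prev.box, cur.box) >= threshold):
            return cur.box    # locates the screen-surface preset space
    return None

Calling find_transition(obs, "palm", "fist") and then the reverse call covers the tests of claim 4 and claim 5 respectively.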
6. The method according to claim 1, wherein the detection unit comprises distance sensors respectively disposed on the at least two vehicle-mounted display screens, and
the acquiring information to be recognized containing gesture actions comprises:
collecting hand motions through the distance sensors to generate the information to be recognized containing gesture actions.
7. The method according to claim 1, wherein the detection unit comprises infrared detectors respectively disposed on the at least two vehicle-mounted display screens, and
the acquiring information to be recognized containing gesture actions comprises:
collecting hand motions through the infrared detectors to generate the information to be recognized containing gesture actions.
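Claims 6 and 7 replace the camera with per-screen proximity sensing. The sketch below shows one way raw distance readings might be turned into the information to be recognized; the thresholds and event names are assumptions, not values from the embodiments.

def readings_to_events(readings, near=0.10, far=0.30):
    # Turns a per-screen stream of (timestamp, distance-in-metres)
    # readings into coarse hand events; thresholds are illustrative.
    events = []
    state = "idle"
    for t, d in readings:
        if state == "idle" and d < near:
            events.append((t, "hand_enter"))
            state = "near"
        elif state == "near" and d > far:
            events.append((t, "hand_leave"))
            state = "idle"
    return events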
8. The method according to claim 2, further comprising:
collecting images of the second screen-surface preset space through the camera to obtain an image sequence, wherein the second screen-surface preset space comprises a plurality of subspaces, and the plurality of subspaces correspond one-to-one to a plurality of controllable functions of the vehicle-mounted display screen corresponding to the second screen-surface preset space;
determining a manipulation gesture based on the image sequence; and
determining a function to be controlled from the plurality of controllable functions according to the manipulation gesture, and controlling the function to be controlled.
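One plausible reading of the subspace-to-function mapping in claim 8, sketched in Python; the equal-width band layout and the function names are assumptions for illustration.

SUBSPACE_FUNCTIONS = ["volume", "brightness", "next_track"]  # assumed functions

def subspace_for(box, space_width):
    # Maps the gesture's horizontal centre to one of the equal-width
    # subspaces of the second screen-surface preset space.
    x, _, w, _ = box
    center = x + w / 2
    band = int(center / (space_width / len(SUBSPACE_FUNCTIONS)))
    return min(band, len(SUBSPACE_FUNCTIONS) - 1)

def dispatch(manipulation_gesture, box, space_width, controls):
    # Invokes the controllable function whose subspace contains the gesture.
    name = SUBSPACE_FUNCTIONS[subspace_for(box, space_width)]
    controls[name](manipulation_gesture)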
9. A gesture control apparatus for a vehicle-mounted display screen, characterized by comprising:
an acquisition module, configured to acquire information to be recognized containing gesture actions, wherein the information to be recognized is obtained by a detection unit, a detection range of the detection unit covers screen-surface preset spaces corresponding to at least two vehicle-mounted display screens, and the information to be recognized comprises consecutive and distinct gesture actions;
a recognition module, configured to recognize the information to be recognized to obtain a recognition result;
a first determining module, configured to determine, if it is determined according to the recognition result that a transition action of transforming a first gesture into a second gesture exists in a first screen-surface preset space, that a picture on the vehicle-mounted display screen corresponding to the first screen-surface preset space is a picture to be shared; and
a second determining module, configured to display, if it is determined according to the recognition result that a transition action of transforming the second gesture into the first gesture exists in a second screen-surface preset space, the picture to be shared on the vehicle-mounted display screen corresponding to the second screen-surface preset space.
10. A computer storage medium, characterized in that at least one instruction or at least one program is stored in the storage medium, and the at least one instruction or the at least one program is loaded and executed by a processor to implement the gesture control method for the vehicle-mounted display screen according to any one of claims 1 to 8.
CN202110084512.0A 2021-01-21 2021-01-21 Gesture control method and device for vehicle-mounted display screen and storage medium Active CN112905004B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110084512.0A CN112905004B (en) 2021-01-21 2021-01-21 Gesture control method and device for vehicle-mounted display screen and storage medium

Publications (2)

Publication Number Publication Date
CN112905004A true CN112905004A (en) 2021-06-04
CN112905004B CN112905004B (en) 2023-05-26

Family

ID=76118230

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110084512.0A Active CN112905004B (en) 2021-01-21 2021-01-21 Gesture control method and device for vehicle-mounted display screen and storage medium

Country Status (1)

Country Link
CN (1) CN112905004B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102855088A (en) * 2012-09-14 2013-01-02 福州瑞芯微电子有限公司 Method for starting and setting connection in same screen as well as corresponding system and equipment thereof
CN104090707A (en) * 2013-07-11 2014-10-08 腾讯科技(北京)有限公司 Method, device and system for sharing content between intelligent terminals
CN104777981A (en) * 2015-04-24 2015-07-15 无锡天脉聚源传媒科技有限公司 Information fast sharing method and device
CN107678664A (en) * 2017-08-28 2018-02-09 中兴通讯股份有限公司 A kind of terminal interface switching, the method, apparatus and terminal of gesture processing
CN110109639A (en) * 2019-05-09 2019-08-09 北京伏羲车联信息科技有限公司 Multi-screen interaction method and onboard system
CN110231866A (en) * 2019-05-29 2019-09-13 中国第一汽车股份有限公司 Vehicular screen control method, system, vehicle and storage medium
US20200406752A1 (en) * 2019-06-25 2020-12-31 Hyundai Mobis Co., Ltd. Control system and method using in-vehicle gesture input
CN111857468A (en) * 2020-07-01 2020-10-30 Oppo广东移动通信有限公司 Content sharing method and device, equipment and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113253849A (en) * 2021-07-01 2021-08-13 湖北亿咖通科技有限公司 Display control method, device and equipment of control bar
CN114527924A (en) * 2022-02-16 2022-05-24 珠海读书郎软件科技有限公司 Control method based on double-screen device, storage medium and device
CN117218716A (en) * 2023-08-10 2023-12-12 中国矿业大学 DVS-based automobile cabin gesture recognition system and method
CN117218716B (en) * 2023-08-10 2024-04-09 中国矿业大学 DVS-based automobile cabin gesture recognition system and method

Similar Documents

Publication Publication Date Title
KR102182667B1 (en) An operating device comprising an eye tracker unit and a method for calibrating the eye tracker unit of the operating device
US10394375B2 (en) Systems and methods for controlling multiple displays of a motor vehicle
CN112905004A (en) Gesture control method and device for vehicle-mounted display screen and storage medium
CN112241204B (en) Gesture interaction method and system of vehicle-mounted AR-HUD
EP3260331A1 (en) Information processing device
MX2011004124A (en) Method and device for displaying information sorted into lists.
US20210055790A1 (en) Information processing apparatus, information processing system, information processing method, and recording medium
CN112959945B (en) Vehicle window control method and device, vehicle and storage medium
CN116529125A (en) Method and apparatus for controlled hand-held steering wheel gesture interaction
JP2018055614A (en) Gesture operation system, and gesture operation method and program
JP5136948B2 (en) Vehicle control device
CN111483406A (en) Vehicle-mounted infotainment device, control method thereof and vehicle comprising same
US20180297471A1 (en) Support to handle an object within a passenger interior of a vehicle
CN1719895A (en) Devices for monitoring vehicle operation
CN112954486A (en) Vehicle-mounted video trace processing method based on sight attention
US12179669B2 (en) In-vehicle display controlling device and display controlling method for improved display output of operations
CN114008684A (en) Positionally correct representation of additional information on a vehicle display unit
US11436772B2 (en) Method for generating an image data set for reproduction by means of an infotainment system of a motor vehicle
CN117215689A (en) Method and device for generating vehicle-machine interface
CN116101205A (en) Intelligent cabin in-vehicle intelligent sensing system based on in-vehicle camera
CN110850975B (en) Electronic systems, vehicles with palm recognition and methods of operating the same
US11734928B2 (en) Vehicle controls and cabin interior devices augmented reality usage guide
US20230249552A1 (en) Control apparatus
EP4220356A1 (en) Vehicle, apparatus, method and computer program for obtaining user input information
CN118270037A (en) Display method, vehicle and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant