CN117095045A - Positioning method, device and equipment of in-vehicle controller and storage medium
- Publication number: CN117095045A
- Application number: CN202210711832.9A
- Authority
- CN
- China
- Prior art keywords
- controller
- hand
- information
- infrared camera
- camera array
- Prior art date
- Legal status: Pending (an assumption, not a legal conclusion)
Classifications
- G06T7/70: Determining position or orientation of objects or cameras (G06T7/00, Image analysis)
- G06T7/10: Segmentation; Edge detection (G06T7/00, Image analysis)
- G06T7/13: Edge detection (G06T7/10)
- G06T2207/10028: Range image; Depth image; 3D point clouds (G06T2207/10, Image acquisition modality)
- G06T2207/10048: Infrared image (G06T2207/10, Image acquisition modality)
(All within G, Physics; G06, Computing; G06T, Image data processing or generation, in general.)
Abstract
The disclosure provides a positioning method, device and equipment of an in-vehicle controller and a storage medium, and relates to the technical field of artificial intelligence. The method comprises the following steps: acquiring a current depth image of a region to be detected in the vehicle from an infrared camera array; determining each target pixel point of the depth image whose depth value lies in a preset range, the preset range being the confidence depth value range corresponding to the hand and the controller; analyzing the depth information corresponding to the target pixel points to determine the current angle information and distance information of the controller relative to the infrared camera array; and determining the current position of the controller according to the setting position of the infrared camera array together with the angle and distance information. Because no device is added to the controller, its battery life is not reduced, and because the depth information is rich and complete, the controller is positioned more accurately and finely.
Description
Technical Field
The disclosure relates to the technical field of artificial intelligence, and in particular relates to a positioning method, a positioning device, computer equipment and a storage medium of an in-vehicle controller.
Background
The positioning detection technology of the controller is key to rendering the controller's position information in a virtual reality picture. In the related art, a position sensor is often built into the controller to position it; however, the position sensor significantly shortens the controller's battery life. How to position the controller without affecting its battery life is therefore a current challenge.
Disclosure of Invention
The present disclosure aims to solve, at least to some extent, one of the technical problems in the related art.
An embodiment of a first aspect of the present disclosure provides a positioning method for an in-vehicle controller, including:
acquiring a current depth image of a region to be detected in a vehicle from an infrared camera array;
determining each target pixel point of which the depth value is in a preset range in the depth image, wherein the preset range is a confidence depth value range corresponding to the hand and the controller;
analyzing the depth information corresponding to the target pixel point to determine the current angle information and the current distance information of the controller relative to the infrared camera array;
and determining the current position of the controller according to the setting position of the infrared camera array, the angle information and the distance information.
An embodiment of a second aspect of the present disclosure provides a positioning device for an in-vehicle controller, including:
the acquisition module is used for acquiring a current depth image of a region to be detected in the vehicle from the infrared camera array;
the first determining module is used for determining each target pixel point of which the depth value is in a preset range in the depth image, wherein the preset range is a confidence depth value range corresponding to the hand and the controller;
the second determining module is used for analyzing the depth information corresponding to the target pixel point to determine the angle information and the distance information of the controller relative to the infrared camera array at present;
and the third determining module is used for determining the current position of the controller according to the setting position of the infrared camera array, the angle information and the distance information.
Embodiments of a third aspect of the present disclosure provide a computer device comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor; the positioning method of the in-vehicle controller according to the first aspect of the present disclosure is implemented when the processor executes the program.
An embodiment of a fourth aspect of the present disclosure proposes a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements a positioning method of an in-vehicle controller as proposed by the first aspect of the present disclosure.
The positioning method, the positioning device, the computer equipment and the storage medium of the in-vehicle controller have the following beneficial effects:
in the embodiment of the disclosure, a current depth image of the region to be detected in the vehicle is first acquired from the infrared camera array; each target pixel point of the depth image whose depth value lies in a preset range is then determined, the preset range being the confidence depth value range corresponding to the hand and the controller; the depth information corresponding to the target pixel points is analyzed to determine the current angle information and distance information of the controller relative to the infrared camera array; and the current position of the controller is determined according to the setting position of the infrared camera array together with the angle and distance information. Because no device is added to the controller, its battery life is not reduced, and because the depth information is rich and complete, the controller is positioned more accurately and finely.
Additional aspects and advantages of the disclosure will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
The foregoing and/or additional aspects and advantages of the present disclosure will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
fig. 1 is a flowchart of a positioning method of an in-vehicle controller according to a first embodiment of the disclosure;
fig. 2 is a flowchart of a positioning method of an in-vehicle controller according to a second embodiment of the disclosure;
fig. 3 is a flowchart illustrating a positioning method of an in-vehicle controller according to a third embodiment of the disclosure;
fig. 4 is a block diagram of a positioning device of an in-vehicle controller according to a fourth embodiment of the present disclosure;
fig. 5 illustrates a block diagram of an exemplary computer device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary and intended for the purpose of explaining the present disclosure and are not to be construed as limiting the present disclosure.
The following describes a positioning method, apparatus, computer device, and storage medium of an in-vehicle controller of an embodiment of the present disclosure with reference to the accompanying drawings.
In a vehicle, performing 6DoF (six degrees of freedom) position tracking on some simple controllers and pairing them with virtual reality equipment can bring a very rich visual experience to users in the vehicle. The position and attitude information of the controller are prerequisites for generating the controller's 6DoF virtual reality picture; a gyroscope built into the controller can measure its attitude, while the positioning detection technology of the controller is key to identifying its position.
It should be noted that the six-degree-of-freedom information of the controller comprises its position information and attitude information in a world coordinate system. The position information consists of the controller's coordinates along the three rectangular coordinate axes X, Y and Z; the attitude information consists of Pitch, Yaw and Roll, where Pitch is the pitch angle of rotation around the X axis, Yaw is the yaw angle of rotation around the Y axis, and Roll is the roll angle of rotation around the Z axis. The position information along X, Y, Z and the attitude information Pitch, Yaw, Roll are collectively referred to as six-degree-of-freedom information.
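Written out, the six-degree-of-freedom state described above is simply the pair of position and attitude triples (a notational restatement of the preceding paragraph, not an addition to the disclosure):

```latex
\mathrm{6DoF} \;=\; \big(\underbrace{X,\;Y,\;Z}_{\text{position}},\;\;
\underbrace{\mathrm{Pitch},\;\mathrm{Yaw},\;\mathrm{Roll}}_{\text{attitude}}\big),
\quad \text{Pitch about } X,\ \text{Yaw about } Y,\ \text{Roll about } Z.
```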
Fig. 1 is a flowchart illustrating a positioning method of an in-vehicle controller according to a first embodiment of the disclosure.
Here, the execution subject of the positioning method of the in-vehicle controller of this embodiment is a positioning device of the in-vehicle controller, which may be implemented in software and/or hardware and may be disposed in the vehicle-end server, i.e., the vehicle head unit. The positioning method of the in-vehicle controller of the present disclosure is described below with the vehicle as the execution subject, without being limited thereto.
As shown in fig. 1, the positioning method of the in-vehicle controller may include the steps of:
Step 101, acquiring a current depth image of the region to be detected in the vehicle from the infrared camera array.
Optionally, in the present disclosure, an infrared camera array in a ToF (time-of-flight) camera device may emit a modulated laser beam to capture the current depth image of the region to be detected in the vehicle.
The depth image may include depth information of pixel points corresponding to each object in the region to be detected, and a stereoscopic 3D model of each object in the region to be detected may be displayed through the depth image.
Specifically, after the infrared camera array is started, the vehicle head unit can control it to emit dense laser beams; the beams are reflected at the positions of the human hand and the controller and travel back into the infrared camera array. The effective information can reach as many as 300,000 points, so the laser beams scan objects more accurately and finely, and the effective information contained in the currently captured depth image is more usable and reliable.
It will be appreciated that since the user typically operates the controller within a particular spatial region of the vehicle, the region to be detected may be determined in advance from the spatial region in which the controller is likely to be located during use.
In order to enable the infrared camera array to accurately acquire depth information containing a hand or a controller in a region to be detected in the vehicle, a laser beam to be emitted can be modulated in advance. Information of the hand and the controller can be captured in a specific spatial range through the modulated laser beam.
Specifically, the infrared camera array can scan the region to be detected point by point to acquire its depth information. Each pixel in the infrared camera array records the phase accumulated by the laser over its round trip between the camera and the object, from which depth data can be derived.
Alternatively, the time difference or phase difference may be determined from the emission time and reception time of the laser beam emitted by the infrared camera array, and the depth data derived from that time difference or phase difference.
A band-pass filter may be disposed at the front end of the infrared camera array to admit only light of the same wavelength as the emitted laser beam, filtering out ambient light, i.e., light of wavelengths different from that of the emitted beam. This reduces the influence of ambient light and improves the accuracy of the depth information.
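The description alludes to recovering depth from the phase of the laser round trip but gives no formula; the sketch below shows the standard continuous-wave ToF relation under the assumption of a single modulation frequency (the frequency value and frame size are illustrative, not from the patent):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def depth_from_phase(phase: np.ndarray, f_mod: float) -> np.ndarray:
    """Per-pixel depth (metres) from the measured phase shift (radians)
    of the modulated laser at modulation frequency f_mod (Hz).
    The light travels to the object and back, hence the factor 2
    folded into the 4*pi denominator."""
    return C * phase / (4.0 * np.pi * f_mod)

# A 20 MHz modulation gives an unambiguous range of c / (2 * f_mod) ~= 7.5 m,
# comfortably covering the in-cabin depth band discussed below.
phase = np.zeros((480, 640))          # stand-in for a measured phase frame
depth_m = depth_from_phase(phase, f_mod=20e6)
```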
Step 102, determining each target pixel point with a depth value in a preset range in the depth image, wherein the preset range is a confidence depth value range corresponding to the hand and the controller.
The preset range may be a confidence depth value range corresponding to the hand and the controller.
For example, if the currently captured depth information spans 0-70 cm and the hand and the controller typically move within the 30-45 cm interval, that interval may be determined as the confidence depth value range corresponding to the hand and the controller; this is not limited herein.
It will be appreciated that the confidence depth value range may be a range of confidence depth values determined from a previous test of the hand and controller, i.e., typically the hand and controller are within that range.
The controller may be an electronic interaction device and may include a communication module such as a Bluetooth module or an NFC module for communicating with the vehicle, over which it can report its battery level, physical address (MAC), form-factor (type) information, model information and the like, which are not limited herein. In addition, an attitude sensor can be installed in the controller to collect its attitude information.
It should be noted that the controller in the present disclosure may be any shape, such as a finger-shaped controller, a watch-shaped controller, a ball-shaped controller, a handle-shaped controller, and the like, and is not limited herein.
A target pixel point is a pixel whose depth value, i.e., its gray value in the depth image, lies within the reasonable interval.
Specifically, by retaining only the target pixel points whose depth values are in the preset range, the background around the hand and the controller can be filtered out; that is, the currently usable pixel points are automatically screened by depth of field.
As a possible implementation manner, the current depth image may be deleted and the infrared camera array may be controlled to capture the current depth image of the region to be detected in the vehicle again when the number of target pixel points is smaller than the preset threshold.
It should be noted that the preset threshold is a threshold on the number of target pixel points; if the number of target pixel points is smaller than the preset threshold, little usable depth information is available in the current depth image. That is, the number of target pixels may be small because the hand and the controller are blurred or otherwise occluded.
Therefore, under the condition that the number of target pixel points of the current depth image cannot meet the requirement, the current depth image is deleted, and the current depth image of the area to be detected in the vehicle is acquired again based on the infrared camera array.
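As a sketch of the screening of step 102 together with the re-capture check just described, assuming the 30-45 cm band given above as an example and a hypothetical pixel-count threshold (the patent fixes neither value):

```python
import numpy as np

CONF_RANGE_M = (0.30, 0.45)   # example confidence band from the description
MIN_TARGET_PIXELS = 5_000     # hypothetical stand-in for the preset threshold

def target_pixel_mask(depth_m: np.ndarray) -> np.ndarray:
    """Keep only pixels whose depth value lies in the confidence range,
    filtering out the in-cabin background (step 102)."""
    return (depth_m >= CONF_RANGE_M[0]) & (depth_m <= CONF_RANGE_M[1])

def frame_is_usable(depth_m: np.ndarray) -> bool:
    """Too few target pixels suggests the hand and controller are blurred
    or occluded, so the frame is deleted and the array re-shoots."""
    return int(target_pixel_mask(depth_m).sum()) >= MIN_TARGET_PIXELS
```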
Step 103, analyzing the depth information corresponding to the target pixel points to determine the current angle information and distance information of the controller relative to the infrared camera array.
It should be noted that, the target depth image formed by the target pixels may be determined according to the target pixels, where the target depth image includes depth information corresponding to each pixel, for example, gray information of the pixel and gradient information of each pixel in each direction, which is not limited herein.
As one possible implementation, the target depth image may be input into a pre-trained neural network model to determine the angle information and distance information of the controller or hand in the target depth image relative to the infrared camera array. Optionally, the vehicle head unit may perform target recognition, image segmentation and image ranging through computer vision algorithms to determine the distance of the controller or hand in the target depth image relative to the current infrared camera array.
Alternatively, the angle of the controller or hand in the target depth image relative to the infrared camera array may also be calculated by an image processing algorithm, such as the Harris corner detection algorithm, which is not limited herein.
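The patent leaves the analysis method open (a pre-trained network, computer-vision ranging, or corner detection); one conventional geometric reading, assuming a calibrated pinhole model whose intrinsics fx, fy, cx, cy are not part of the disclosure, is to back-project the centroid of the target pixels:

```python
import numpy as np

def bearing_and_range(mask, depth_m, fx, fy, cx, cy):
    """Angle off the optical axis and range of the target blob, from the
    centroid of the target pixels and the mean depth over the blob."""
    vs, us = np.nonzero(mask)                 # rows, columns of target pixels
    u, v = us.mean(), vs.mean()               # image-plane centroid
    z = float(depth_m[mask].mean())           # mean z-depth of the blob
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    norm = float(np.linalg.norm(ray))
    angle = float(np.arccos(1.0 / norm))      # bearing off the optical axis
    distance = z * norm                       # range along the viewing ray
    return angle, ray / norm, distance
```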
Step 104, determining the current position of the controller according to the setting position of the infrared camera array and the angle information and distance information.
The setting position of the infrared camera array has unique world coordinates.
In the present disclosure, the current position of the controller may be calculated based on the angle and distance of the controller with respect to the infrared camera array and the coordinate data corresponding to the set position of the infrared camera array.
Or, the angle information and the distance information of the hand relative to the infrared camera array can be converted firstly based on the preset relative position relation between the hand and the controller, so as to obtain the angle information and the distance information of the controller relative to the infrared camera array, and then the position of the controller is determined according to the setting position of the infrared camera array and the angle information and the distance information of the controller relative to the infrared camera array.
It will be appreciated that there is a specific relative positional relationship between the hand and the controller; that is, the controller is used while held by the hand in a specific posture. The relative positional relationship between the hand and the controller can therefore be entered into the vehicle head unit in advance, so that after determining the hand's angle and distance information relative to the infrared camera array, the head unit can convert that angle and distance and thereby calculate the angle and distance of the controller relative to the infrared camera array.
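A minimal sketch of step 104, assuming the extrinsics (a camera-to-world rotation and the array's world-coordinate setting position) are known from installation, which the patent implies but does not spell out:

```python
import numpy as np

def controller_world_position(cam_pos_w, cam_rot_w, ray_cam, distance_m):
    """Place the controller `distance_m` along the unit viewing ray,
    rotate the ray into the world frame, and offset by the setting
    position of the infrared camera array."""
    ray = np.asarray(ray_cam) * distance_m
    return np.asarray(cam_pos_w) + np.asarray(cam_rot_w) @ ray

# Example: an array mounted at (0.2, 1.1, 0.5) m, aligned with the world axes.
pos = controller_world_position([0.2, 1.1, 0.5], np.eye(3),
                                [0.10, -0.20, 0.97], 0.38)
```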
In the embodiment of the disclosure, a current depth image of the region to be detected in the vehicle is first acquired from the infrared camera array; each target pixel point of the depth image whose depth value lies in a preset range is then determined, the preset range being the confidence depth value range corresponding to the hand and the controller; the depth information corresponding to the target pixel points is analyzed to determine the current angle information and distance information of the controller relative to the infrared camera array; and the current position of the controller is determined according to the setting position of the infrared camera array together with the angle and distance information. Because no device is added to the controller, its battery life is not reduced, and because the depth information is rich and complete, the controller is positioned more accurately and finely.
Fig. 2 is a flow chart of a positioning method of an in-vehicle controller according to a second embodiment of the present disclosure.
As shown in fig. 2, the positioning method of the in-vehicle controller may include the steps of:
step 201, acquiring a current depth image of a region to be detected in a vehicle from an infrared camera array.
Step 202, determining each target pixel point with a depth value in a preset range in the depth image, wherein the preset range is a confidence depth value range corresponding to the hand and the controller.
Step 203, analyzing the depth information corresponding to the target pixel points to determine the current angle information and distance information of the controller or the hand relative to the infrared camera array.
It should be noted that, the specific implementation manner of the steps 201, 202, 203 may refer to the above embodiment, and will not be described herein.
Step 204, when the current angle information and distance information of the controller relative to the infrared camera array are respectively the same as the angle information and distance information corresponding to the controller in any historical depth image, determining the controller position corresponding to that historical depth image as the current position of the controller.
The historical depth image may be a depth image obtained by shooting with an infrared camera array in historical time.
For example, if the controller's current angle information relative to the infrared camera array is F and its distance information is P, and in a certain historical depth image the controller's angle information relative to the infrared camera array was also F and its distance information also P, i.e., the same angle and distance the controller currently has, then the controller position calculated for that historical depth image can be determined as the current position of the controller.
Step 205, when the current angle information and distance information of the hand relative to the infrared camera array are respectively the same as the angle information and distance information corresponding to the hand in any historical depth image, determining the controller position corresponding to that historical depth image as the current position of the controller.
For example, if the hand's current angle information relative to the infrared camera array is M and its distance information is N, and in a certain historical depth image the hand's angle information relative to the infrared camera array was also M and its distance information also N, i.e., the same angle and distance the hand currently has, then the controller position calculated for that historical depth image may be determined as the current position of the controller.
This avoids converting the hand's angle and distance information relative to the infrared camera array into the controller's angle and distance information and then calculating the position from that angle and distance, which greatly reduces the computation load on the vehicle head unit, increases the processing speed of the processor, and reduces latency.
Step 206, storing the current position of the controller in association with the current angle information and distance information of the controller or the hand relative to the infrared camera array.
After the current position of the controller is calculated, the vehicle head unit can store the controller's current coordinates in association with the angle information and distance information of the controller or the hand relative to the infrared camera array.
It should be noted that the angle and distance information of the hand and of the controller relative to the infrared camera array can each be associated with the current controller position and stored separately, i.e., in different locations. This avoids query errors, namely mistaking the hand's angle and distance information relative to the infrared camera array for the controller's.
For example, if the controller's position is A and its angle and distance relative to the infrared camera array are x and y, the head unit may record A-x-y and store it in memory; if the hand's angle relative to the infrared camera array is X and its distance is Y, the head unit may record A-X-Y and store it in memory.
In this way, after a captured depth image is analyzed to obtain the angle and distance information of the hand or the controller relative to the infrared camera array, the head unit can query the memory directly to find the position corresponding to that angle and distance, which improves the processing and response speed of the vehicle's processor and reduces latency.
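One plausible realisation of the A-x-y / A-X-Y store of steps 204-206 is a pair of lookup tables keyed by quantised angle and distance, kept separately for the controller and for the hand so the two cannot be confused; the quantisation step is an assumption, since the patent does not define what counts as "the same" angle and distance:

```python
from typing import Dict, Optional, Tuple

Position = Tuple[float, float, float]
# Separate tables ("different locations" in memory) for controller and hand.
_cache: Dict[str, Dict[tuple, Position]] = {"controller": {}, "hand": {}}

def _key(angle_rad: float, dist_m: float) -> tuple:
    return (round(angle_rad, 2), round(dist_m, 2))  # assumed match tolerance

def lookup(source: str, angle_rad: float, dist_m: float) -> Optional[Position]:
    """Return the historically computed controller position, if any."""
    return _cache[source].get(_key(angle_rad, dist_m))

def store(source: str, angle_rad: float, dist_m: float, pos: Position) -> None:
    """Associate the freshly computed position with this angle/distance."""
    _cache[source][_key(angle_rad, dist_m)] = pos
```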
Step 207, generating a virtual reality picture containing the six degrees of freedom of the controller according to the position of the controller and the currently received gesture information of the controller.
The gesture information here is the attitude information of the controller, obtained from measurements by the controller's attitude sensor. In the present disclosure, the attitude sensor may be an inertial measurement unit (IMU), which may include an accelerometer and a gyroscope for measuring acceleration data and angle data respectively. The angle data may be a pitch angle, a heading (yaw) angle, a roll angle and the like, which are not limited herein.
Specifically, the vehicle head unit may determine the controller's displacement information in the world coordinate system based on its current position and its historical spatial position, where the historical spatial position may be the spatial position determined in the previous unit time.
The head unit can then, based on the controller's displacement information and attitude information in the world coordinate system, render the picture to be rendered that was captured by the virtual reality device through a system with rendering capability, generating a rendered virtual reality picture containing the controller's six-degree-of-freedom information.
The picture to be rendered may be the picture currently captured by the virtual reality device, with the controller located within it. By rendering the picture to be rendered in combination with the controller's displacement and attitude information, the position and orientation of the controller in the picture can be rendered.
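Putting the two sources together, a sketch of the state handed to the renderer, with the attitude triple coming from the controller's IMU and the position from this method (the field names are illustrative assumptions):

```python
import numpy as np

def six_dof_state(pos_now, pos_prev, pitch, yaw, roll):
    """Vision position plus IMU attitude; displacement is taken against
    the spatial position determined in the previous unit time."""
    displacement = np.asarray(pos_now) - np.asarray(pos_prev)
    return {"position": tuple(pos_now),
            "displacement": tuple(displacement),
            "attitude": (pitch, yaw, roll)}   # Pitch/Yaw/Roll about X/Y/Z
```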
In the embodiment of the disclosure, a current depth image of the region to be detected in the vehicle is first acquired from the infrared camera array; each target pixel point of the depth image whose depth value lies in the preset range (the confidence depth value range corresponding to the hand and the controller) is determined; the depth information corresponding to the target pixel points is analyzed to determine the current angle and distance information of the controller or the hand relative to the infrared camera array; when that angle and distance information is the same as the angle and distance information of the controller, or of the hand, in any historical depth image, the controller position corresponding to that historical depth image is determined as the current position of the controller; the current position is then stored in association with the angle and distance information of the controller or the hand relative to the infrared camera array; and finally a virtual reality picture containing the controller's six degrees of freedom is generated. Thus, whenever the angle and distance information of the controller or the hand relative to the infrared camera array coincides with that in a historical depth image, the controller position of the historical depth image is reused, which reduces the computation load, increases the processing speed and saves the vehicle head unit's computing power; and once the controller is positioned, a virtual reality picture containing its six degrees of freedom can be generated from the positioning information, improving the user experience.
Fig. 3 is a flow chart of a positioning method of an in-vehicle controller according to a third embodiment of the present disclosure.
As shown in fig. 3, the positioning method of the in-vehicle controller may include the steps of:
step 301, acquiring a current depth image of a region to be detected in a vehicle from an infrared camera array.
It should be noted that, the specific implementation manner of step 301 may refer to the above embodiment, and will not be described herein.
In step 302, the depth image is segmented to extract a hand image in the depth image, wherein the hand image is an image containing hand depth information.
It should be noted that, after obtaining the depth image, the vehicle head unit may segment it with an image segmentation algorithm to obtain the image within the depth image that contains the hand depth information.
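The segmentation algorithm is not named; a crude stand-in that exploits the depth image itself is to threshold on the confidence band and crop to the bounding box of what remains:

```python
import numpy as np

def extract_hand_image(depth_m: np.ndarray, band=(0.30, 0.45)):
    """Depth-band segmentation sketch for step 302: zero out everything
    outside the band, then crop to the retained pixels."""
    mask = (depth_m >= band[0]) & (depth_m <= band[1])
    if not mask.any():
        return None                       # nothing in the confidence band
    rows, cols = np.nonzero(mask)
    crop = np.where(mask, depth_m, 0.0)
    return crop[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
```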
Step 303, comparing the hand image with the reference hand image to determine whether the hand in the hand image is the hand holding the controller.
The reference hand image may be a depth image of the hand without the controller, or may be a depth image of the hand with the controller. The hand in the reference hand image may be the left hand or may be the right hand.
Further, by comparing the hand image with the reference hand image, the vehicle head unit can determine whether the hand in the current depth image is the hand holding the controller.
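The comparison criterion is likewise unspecified; one plausible reading is a nearest-template test against the two kinds of reference image the paragraph above allows (hand with controller, hand without), with the error metric an assumption:

```python
import numpy as np

def is_holding_controller(hand_img, ref_holding, ref_empty) -> bool:
    """Return True if the segmented hand image looks more like the
    'holding' reference than the 'empty hand' reference."""
    def err(ref):
        # naive nearest-neighbour resize of the reference to the crop
        ri = np.linspace(0, ref.shape[0] - 1, hand_img.shape[0]).astype(int)
        ci = np.linspace(0, ref.shape[1] - 1, hand_img.shape[1]).astype(int)
        return float(np.mean(np.abs(ref[np.ix_(ri, ci)] - hand_img)))
    return err(ref_holding) < err(ref_empty)
```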
In step 304, in the case that the hand is the hand holding the controller, the hand image is parsed to determine angle information and distance information of the hand with respect to the infrared camera array.
When the vehicle head unit determines that the hand is the hand holding the controller, it may analyze the hand image and thereby determine the current angle information and distance information of the hand relative to the infrared camera array.
As one possible implementation, the hand image may be input into a pre-trained neural network model to determine the angle information and distance information of the hand in the hand image relative to the infrared camera array. Optionally, the head unit may perform target recognition, image segmentation and image ranging through computer vision algorithms to determine the distance of the hand in the hand image relative to the current infrared camera array.
Alternatively, the angle of the hand in the hand image relative to the infrared camera array may also be calculated by an image processing algorithm, such as the Harris corner detection algorithm, which is not limited herein.
Step 305, converting the angle information and the distance information of the hand relative to the infrared camera array currently based on the preset relative position relationship between the hand and the controller, so as to obtain the angle information and the distance information of the controller relative to the infrared camera array.
It will be appreciated that there is a specific relative positional relationship between the hand and the controller; that is, the controller is used while held by the hand in a specific posture. The relative positional relationship between the hand and the controller can therefore be entered into the vehicle head unit in advance, so that after determining the hand's angle and distance information relative to the infrared camera array, the head unit can convert that angle and distance and thereby calculate the angle and distance of the controller relative to the infrared camera array.
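A minimal sketch of step 305, treating the pre-entered hand-to-controller relationship as a fixed offset vector in the camera frame (the patent only says the relationship is preset; the rigid-offset form is an assumption justified by the specific holding posture):

```python
import numpy as np

def hand_to_controller(hand_ray_cam, hand_dist_m, offset_cam_m):
    """Convert the hand's bearing and range into the controller's by
    applying the preset relative-position offset, then re-deriving the
    controller's angle off the optical axis and its range."""
    hand_point = np.asarray(hand_ray_cam) * hand_dist_m
    ctrl_point = hand_point + np.asarray(offset_cam_m)
    ctrl_dist = float(np.linalg.norm(ctrl_point))
    ctrl_ray = ctrl_point / ctrl_dist
    ctrl_angle = float(np.arccos(ctrl_ray[2]))   # bearing off optical axis
    return ctrl_angle, ctrl_ray, ctrl_dist
```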
Step 306, determining the current position of the controller according to the setting position of the infrared camera array and the angle information and distance information.
It should be noted that, the specific implementation of step 306 may refer to the above embodiments, and will not be described herein.
In the embodiment of the disclosure, a current depth image of the region to be detected in the vehicle is first acquired from the infrared camera array; the depth image is then segmented to extract the hand image, i.e., the image containing hand depth information; the hand image is compared with a reference hand image to judge whether the hand in it is the hand holding the controller; if so, the hand image is analyzed to determine the current angle and distance information of the hand relative to the infrared camera array; that angle and distance information is then converted, based on the preset relative positional relationship between the hand and the controller, into the angle and distance information of the controller relative to the infrared camera array; and finally the current position of the controller is determined from the setting position of the infrared camera array together with the angle and distance information. In this way, the controller's current position relative to the infrared camera array can be accurately judged from a depth image containing the depth information of the controller or the hand, and the controller can thereby be positioned. Because no device is added to the controller, its battery life is not reduced, and because the depth information is rich and complete, the controller is positioned more accurately and finely.
Fig. 4 is a schematic structural diagram of a positioning device of an in-vehicle controller according to a fourth embodiment of the disclosure.
As shown in fig. 4, the positioning device 400 of the in-vehicle controller may include: the acquisition module 410, the first determining module 420, the second determining module 430, and the third determining module 440.
The acquisition module is used for acquiring a current depth image of a region to be detected in the vehicle from the infrared camera array;
the first determining module is used for determining each target pixel point of which the depth value is in a preset range in the depth image, wherein the preset range is a confidence depth value range corresponding to the hand and the controller;
the second determining module is used for analyzing the depth information corresponding to the target pixel point to determine the angle information and the distance information of the controller relative to the infrared camera array at present;
and the third determining module is used for determining the current position of the controller according to the setting position of the infrared camera array, the angle information and the distance information.
Optionally, the first determining module is further configured to:
determining each target pixel point of which the depth value is in a preset range in the depth image, wherein the preset range is a confidence depth value range corresponding to the hand and the controller;
And analyzing the depth information corresponding to the target pixel point to determine the angle information and the distance information of the controller or the hand relative to the infrared camera array at present.
Optionally, the determining unit is further configured to:
and deleting the current depth image under the condition that the number of the target pixel points is smaller than a preset threshold value, and controlling the infrared camera array to shoot the current depth image of the region to be detected in the vehicle again.
Optionally, the third determining module is specifically configured to:
and when the angle information and the distance information of the controller relative to the infrared camera array at present are respectively the same as the angle information and the distance information corresponding to the controller in any one of the historical depth images, determining the position of the controller corresponding to any one of the historical depth images as the current position of the controller.
Optionally, the acquiring module further includes:
the segmentation unit is used for segmenting the depth image to extract a hand image in the depth image, wherein the hand image is an image containing hand depth information;
the comparison unit is used for comparing the hand image with a reference hand image to judge whether the hand in the hand image is the hand holding the controller;
The analysis unit is used for analyzing the hand image to determine the current angle information and distance information of the hand relative to the infrared camera array under the condition that the hand is the hand holding the controller;
the acquisition unit is used for converting the current angle information and the current distance information of the hand relative to the infrared camera array based on the preset relative position relation between the hand and the controller so as to acquire the angle information and the distance information of the controller relative to the infrared camera array.
Optionally, the parsing unit is further configured to:
and when the angle information and the distance information of the hand relative to the infrared camera array are the same as the angle information and the distance information corresponding to the hand in any one of the historical depth images, determining the position of the controller corresponding to any one of the historical depth images as the current position of the controller.
Optionally, the third determining module is further configured to:
and storing the current position of the controller and the current angle information and distance information of the controller or the hand relative to the infrared camera array in a correlated mode.
Optionally, the third determining module is further configured to:
And generating a virtual reality picture containing six degrees of freedom of the controller according to the position of the controller and the currently received gesture information of the controller.
In the embodiment of the disclosure, a current depth image of the region to be detected in the vehicle is obtained from the infrared camera array, the depth image is analyzed to determine the angle and distance information of the controller or the hand relative to the infrared camera array, and the current position of the controller is then determined according to the setting position of the infrared camera array together with the angle and distance information. In this way, the controller's current position relative to the infrared camera array can be accurately judged from a depth image containing the depth information of the controller or the hand, and the controller can thereby be positioned. Because no device is added to the controller, its battery life is not reduced, and because the depth information is rich and complete, the controller is positioned more accurately and finely.
To achieve the above embodiments, the present disclosure further proposes a computer device including: a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the program, the positioning method of the in-vehicle controller proposed by the foregoing embodiments of the present disclosure is implemented.
In order to implement the above-described embodiments, the present disclosure also proposes a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements a positioning method of an in-vehicle controller as proposed in the foregoing embodiments of the present disclosure.
To achieve the above-described embodiments, the present disclosure also proposes a computer program product; when instructions in the computer program product are executed by a processor, the positioning method of the in-vehicle controller proposed by the foregoing embodiments of the present disclosure is performed.
Fig. 5 illustrates a block diagram of an exemplary computer device suitable for use in implementing embodiments of the present disclosure. The computer device 12 shown in fig. 5 is merely an example and should not be construed as limiting the functionality and scope of use of the disclosed embodiments.
As shown in FIG. 5, the computer device 12 is in the form of a general purpose computing device. Components of computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, a bus 18 that connects the various system components, including the system memory 28 and the processing units 16.
Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
Computer device 12 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by computer device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (Random Access Memory; hereinafter: RAM) 30 and/or cache memory 32. The computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, commonly referred to as a "hard disk drive"). Although not shown in fig. 5, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk (e.g., a compact disk read only memory (Compact Disc Read Only Memory; hereinafter CD-ROM), digital versatile read only optical disk (Digital Video Disc Read Only Memory; hereinafter DVD-ROM), or other optical media) may be provided. In such cases, each drive may be coupled to bus 18 through one or more data medium interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of the various embodiments of the disclosure.
A program/utility 40 having a set (at least one) of program modules 42 may be stored in, for example, memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 42 generally perform the functions and/or methods in the embodiments described in this disclosure.
The computer device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), one or more devices that enable a user to interact with the computer device 12, and/or any devices (e.g., network card, modem, etc.) that enable the computer device 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. Moreover, the computer device 12 may also communicate with one or more networks such as a local area network (Local Area Network; hereinafter LAN), a wide area network (Wide Area Network; hereinafter WAN) and/or a public network such as the Internet via the network adapter 20. As shown, network adapter 20 communicates with other modules of computer device 12 via bus 18. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with computer device 12, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, for example, implementing the methods mentioned in the foregoing embodiments.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, the meaning of "a plurality" is at least two, such as two, three, etc., unless explicitly specified otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and additional implementations are included within the scope of the preferred embodiment of the present disclosure in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present disclosure.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., a ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer readable medium may even be paper or other suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. As with the other embodiments, if implemented in hardware, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
Furthermore, each functional unit in the embodiments of the present disclosure may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like. Although embodiments of the present disclosure have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the present disclosure, and that variations, modifications, alternatives, and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the present disclosure.
Claims (10)
1. A method of positioning an in-vehicle controller, comprising:
acquiring a current depth image of a region to be detected in a vehicle from an infrared camera array;
determining each target pixel point of which the depth value is in a preset range in the depth image, wherein the preset range is a confidence depth value range corresponding to the hand and the controller;
analyzing the depth information corresponding to the target pixel point to determine the current angle information and the current distance information of the controller relative to the infrared camera array;
and determining the current position of the controller according to the setting position of the infrared camera array, the angle information and the distance information.
2. The method of claim 1, further comprising, after said determining each target pixel point of which the depth value in the depth image is in a preset range:
And deleting the current depth image under the condition that the number of the target pixel points is smaller than a preset threshold value, and controlling the infrared camera array to shoot the current depth image of the region to be detected in the vehicle again.
3. The method of claim 1, wherein determining the current position of the controller based on the set position of the infrared camera array, and the angle information and distance information, comprises:
and when the angle information and the distance information of the controller relative to the infrared camera array at present are respectively the same as the angle information and the distance information corresponding to the controller in any one of the historical depth images, determining the position of the controller corresponding to any one of the historical depth images as the current position of the controller.
4. The method of claim 1, further comprising, after determining the current position of the controller:
storing, in association with one another, the current position of the controller and the current angle information and distance information of the controller relative to the infrared camera array.
5. The method of claim 1, further comprising, after acquiring the current depth image of the region to be detected in the vehicle from the infrared camera array:
segmenting the depth image to extract a hand image from the depth image, wherein the hand image is an image containing hand depth information;
comparing the hand image with a reference hand image to determine whether the hand in the hand image is a hand holding the controller;
when the hand is a hand holding the controller, analyzing the hand image to determine the current angle information and distance information of the hand relative to the infrared camera array; and
converting the current angle information and distance information of the hand relative to the infrared camera array, based on a preset relative position relationship between the hand and the controller, to obtain the angle information and distance information of the controller relative to the infrared camera array.
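A sketch of the final conversion step, where HAND_TO_CONTROLLER_MM is an assumed fixed offset standing in for the claim's preset relative position relationship, and the hand centroid is taken to be already expressed in the camera frame:

```python
import numpy as np

# assumed offset from the hand centroid to the controller grip, in mm;
# the patent does not specify the relative position relationship numerically
HAND_TO_CONTROLLER_MM = np.array([0.0, -40.0, 60.0])

def controller_from_hand(hand_centroid_mm: np.ndarray) -> tuple[float, float]:
    """Convert a hand position in the camera frame to controller angle/distance."""
    controller = hand_centroid_mm + HAND_TO_CONTROLLER_MM
    distance = float(np.linalg.norm(controller))
    angle = float(np.degrees(np.arccos(controller[2] / distance)))
    return angle, distance
```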
6. The method of claim 5, further comprising, after determining the angle information and distance information of the hand relative to the infrared camera array:
when the current angle information and distance information of the hand relative to the infrared camera array are respectively the same as the angle information and distance information corresponding to the hand in any historical depth image, determining the position of the controller corresponding to that historical depth image as the current position of the controller.
7. The method of claim 1, further comprising, after determining the current position of the controller:
generating a virtual reality picture containing the six degrees of freedom of the controller according to the position of the controller and the currently received attitude information of the controller, wherein the virtual reality picture contains the spatial pointing information of the controller in the current picture environment.
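A sketch of the six-degree-of-freedom pose such a virtual reality picture would be rendered from: the camera-derived position supplies the three translational degrees of freedom, and the attitude reported by the controller (e.g. from an onboard IMU, which the claim leaves unspecified) supplies the three rotational ones. The field names and the choice of -Z as the pointing axis are illustrative:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    position: np.ndarray     # (x, y, z) translation in the cabin frame, mm
    orientation: np.ndarray  # unit quaternion (w, x, y, z) received from the controller

    def forward(self) -> np.ndarray:
        """Spatial pointing direction: the controller's -Z axis after rotation."""
        w, x, y, z = self.orientation
        # third column of the rotation matrix, negated (rotated -Z axis)
        return -np.array([
            2.0 * (x * z + w * y),
            2.0 * (y * z - w * x),
            1.0 - 2.0 * (x * x + y * y),
        ])
```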
8. A positioning device of an in-vehicle controller, comprising:
an acquisition module configured to acquire a current depth image of a region to be detected in the vehicle from the infrared camera array;
a first determining module configured to determine, in the depth image, each target pixel point whose depth value falls within a preset range, wherein the preset range is a confidence depth value range corresponding to the hand and the controller;
a second determining module configured to analyze the depth information corresponding to the target pixel points to determine the current angle information and distance information of the controller relative to the infrared camera array; and
a third determining module configured to determine the current position of the controller according to the installation position of the infrared camera array, the angle information, and the distance information.
9. A computer device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the method of positioning an in-vehicle controller according to any one of claims 1-7.
10. A computer readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method of positioning an in-vehicle controller according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210711832.9A CN117095045A (en) | 2022-06-22 | 2022-06-22 | Positioning method, device and equipment of in-vehicle controller and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117095045A (en) | 2023-11-21 |
Family ID: 88768624
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210711832.9A (Pending; published as CN117095045A) | Positioning method, device and equipment of in-vehicle controller and storage medium | 2022-06-22 | 2022-06-22 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117095045A (en) |
2022-06-22: CN application CN202210711832.9A filed; published as CN117095045A, status Pending.
Similar Documents
Publication | Publication Date | Title
---|---|---
US11002840B2 (en) | 2021-05-11 | Multi-sensor calibration method, multi-sensor calibration device, computer device, medium and vehicle
CN110322500B (en) | | Optimization method and device for instant positioning and map construction, medium and electronic equipment
CN110763251B (en) | | Method and system for optimizing visual inertial odometer
EP2531980B1 (en) | | Depth camera compatibility
EP2671384B1 (en) | | Mobile camera localization using depth maps
US8823779B2 (en) | | Information processing apparatus and control method thereof
CN112384891B (en) | | Method and system for point cloud coloring
KR101880185B1 (en) | | Electronic apparatus for estimating pose of moving object and method thereof
JP5778182B2 (en) | | Depth camera compatibility
US20170132806A1 (en) | | System and method for augmented reality and virtual reality applications
CN109084746A (en) | | Monocular mode for the autonomous platform guidance system with aiding sensors
KR102006291B1 (en) | | Method for estimating pose of moving object of electronic apparatus
CN111094895B (en) | | System and method for robust self-repositioning in pre-constructed visual maps
KR102169309B1 (en) | | Information processing apparatus and method of controlling the same
CN110349212B (en) | | Optimization method and device for instant positioning and map construction, medium and electronic equipment
CN109300143A (en) | | Determination method, apparatus, equipment, storage medium and the vehicle of motion vector field
US20160210761A1 (en) | | 3d reconstruction
WO2022135594A1 (en) | | Method and apparatus for detecting target object, fusion processing unit, and medium
US20220180545A1 (en) | | Image processing apparatus, image processing method, and program
CN112396634B (en) | | Moving object detection method, moving object detection device, vehicle and storage medium
CN110706257B (en) | | Identification method of effective characteristic point pair, and camera state determination method and device
JP5617166B2 (en) | | Rotation estimation apparatus, rotation estimation method and program
CN117095045A (en) | 2023-11-21 | Positioning method, device and equipment of in-vehicle controller and storage medium
CN111161357B (en) | | Information processing method and device, augmented reality device and readable storage medium
CN113011212B (en) | | Image recognition method and device and vehicle
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination