CN108958483A - Rigid body localization method, device, terminal device and storage medium based on interaction pen - Google Patents
- Publication number
- CN108958483A (publication number)
- CN201810696389.6A (application number)
- Authority
- CN
- China
- Prior art keywords
- information
- axis
- degree of freedom
- pose
- position information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Position Input By Displaying (AREA)
Abstract
The invention discloses a rigid-body localization method, apparatus, terminal device and storage medium based on an interaction pen. An orientation sensor is arranged inside the interaction pen and is complemented by a vision sensor mounted on a stereoscopic display device. During interaction, the positions of the luminous marker points arranged on the interaction pen are extracted from the image captured by the vision sensor, and from those positions the first degree-of-freedom pose information of the interaction pen in three-dimensional space, as seen from the viewpoint of the vision sensor, is determined. This first pose information is then used to calibrate the second degree-of-freedom pose information determined by the orientation sensor (i.e., from the orientation sensor's own frame of reference), yielding third degree-of-freedom pose information of higher accuracy. Finally, the actual position of the interaction pen in three-dimensional space is determined from the third pose information, thereby effectively improving the accuracy and stability of localization during interaction.
Description
Technical field
The present invention relates to the field of virtual-reality interaction technology, and more particularly to a rigid-body localization method, apparatus, terminal device and storage medium based on an interaction pen.
Background technique
With the development of computer graphics, computer simulation, human-machine interface, multimedia and sensing technology, virtual technologies such as virtual-reality and mixed-reality human-computer interaction have become research hotspots. This development has greatly promoted spatial-interaction products and services, such as stereo projectors, stereoscopic report controllers, and three-dimensional (3D) displays.
Taking 3D displays as an example, a large number of 3D displays of various sizes are currently on the market, and this class of display is well suited to group viewing of stereoscopic content. To allow 3D displays to reach a wider audience while keeping the user's purchase cost low, most of them do not ship with a dedicated virtual-display interactive device such as a virtual-reality head-mounted display or 3D glasses; instead, they provide a cheaper interaction handle that can also be used with other terminal devices (roughly divided into handles with and without a locating-and-tracking system).
However, interaction handles without a locating-and-tracking system (equivalent to a common air mouse, the handle relying only on a motion sensor) can only obtain rotational degrees of freedom and cannot provide high-precision position localization, so they cannot meet the demands of precise spatial-interaction applications. Interaction handles with a locating-and-tracking system, on the other hand, were originally designed to work with a virtual-reality helmet and are therefore unsuitable for desktop stereoscopic interaction, especially short-distance desktop applications. That is, most existing products cannot be used simultaneously when multiple systems are running in the same space, which fails the needs of novel experimental classrooms. Moreover, most existing products realize localization based on the ultrasonic principle, and therefore suffer from high latency, low precision and poor stability during interaction, failing to meet the requirements of precise interaction.
The above content is provided only to facilitate understanding of the technical solution of the present invention, and does not constitute an admission that the above content is prior art.
Summary of the invention
The main purpose of the present invention is to provide a rigid-body localization method, apparatus, terminal device and storage medium based on an interaction pen, intended to solve the technical problems in the prior art that a desktop product cannot be shared by multiple people simultaneously and that the interaction process has poor stability and low accuracy.
To achieve the above object, the present invention provides a rigid-body localization method based on an interaction pen, the method comprising the following steps:
receiving an image, captured by a vision sensor, of at least one luminous marker point, each luminous marker point being arranged on the interaction pen without overlapping the others;
determining the coordinates of each luminous marker point in the captured image;
determining, from the coordinates of each luminous marker point in the captured image, the first degree-of-freedom pose information of the interaction pen in three-dimensional space;
receiving the second degree-of-freedom pose information of the interaction pen in three-dimensional space collected by an orientation sensor, the orientation sensor being arranged in the interaction pen, the type of the orientation sensor being determined by the number of luminous marker points, and the position of the orientation sensor in the interaction pen being determined by the positions of the luminous marker points on the interaction pen;
calibrating the second degree-of-freedom pose information according to the first degree-of-freedom pose information, to obtain the third degree-of-freedom pose information of the interaction pen in three-dimensional space;
determining the position of the interaction pen in three-dimensional space according to the third degree-of-freedom pose information, so as to realize real-time localization of the interaction pen in a virtual scene and interaction with objects in the virtual scene.
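The calibration step above — using the vision-derived pose to correct the orientation-sensor pose — could be sketched as a simple weighted blend. The patent does not fix a blending formula, so the weight `alpha` and the dictionary layout below are illustrative assumptions, not the claimed method.

```python
def calibrate_pose(first_dof, second_dof, alpha=0.9):
    """Blend the vision-derived pose (first_dof) with the orientation-
    sensor pose (second_dof).  Components present in both are pulled
    toward the vision estimate; components only the sensor provides
    pass through unchanged.  alpha is a hypothetical weight given to
    the vision sensor."""
    third = dict(second_dof)
    for key, vision_value in first_dof.items():
        third[key] = alpha * vision_value + (1.0 - alpha) * second_dof.get(key, vision_value)
    return third

# One-marker configuration: vision observes X/Y/Z displacement only,
# while the nine-axis sensor reports all six quantities.
vision = {"dx": 1.0, "dy": 2.0, "dz": 3.0}
sensor = {"dx": 1.2, "dy": 1.8, "dz": 3.4, "rx": 10.0, "ry": 20.0, "rz": 30.0}
fused = calibrate_pose(vision, sensor)
```

Here the rotations pass through untouched, matching the one-marker case in which only the displacement components are calibrated by the vision path.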
Preferably, there is one luminous marker point, and the orientation sensor is a nine-axis inertial motion sensor comprising a three-axis micro-machined gyroscope, a three-axis accelerometer and a three-axis magnetometer. The luminous marker point and the nine-axis inertial motion sensor are both arranged in the tip region of the interaction pen, with the luminous marker point adjacent to the nine-axis inertial motion sensor.
Correspondingly, the first degree-of-freedom pose information comprises the displacement information of the X-axis, Y-axis and Z-axis directions; the second degree-of-freedom pose information comprises the displacement information of the X-axis, Y-axis and Z-axis directions and the rotation information of the X-axis, Y-axis and Z-axis; and the third degree-of-freedom pose information comprises, after calibration, the displacement information of the X-axis, Y-axis and Z-axis directions and the rotation information of the X-axis, Y-axis and Z-axis.
Correspondingly, calibrating the second degree-of-freedom pose information according to the first degree-of-freedom pose information to obtain the third degree-of-freedom pose information of the interaction pen in three-dimensional space specifically comprises:
calibrating the displacement information of the X-axis, Y-axis and Z-axis directions in the second degree-of-freedom pose information respectively against the corresponding displacement information in the first degree-of-freedom pose information, to obtain the calibrated displacement information of the X-axis, Y-axis and Z-axis directions in the third degree-of-freedom pose information;
compensating the cumulative angular error of the three-axis micro-machined gyroscope according to the three-axis magnetometer and the three-axis accelerometer, to obtain the calibrated rotation information of the X-axis, Y-axis and Z-axis in the third degree-of-freedom pose information.
Preferably, there are two luminous marker points, namely a first luminous marker point and a second luminous marker point, and the orientation sensor is a six-axis inertial motion sensor comprising a three-axis micro-machined gyroscope and a three-axis accelerometer. The first luminous marker point is arranged in the tip region of the interaction pen, the second luminous marker point is arranged in the tail region of the interaction pen, and the six-axis inertial motion sensor is arranged in the central region of the interaction pen.
Correspondingly, the first degree-of-freedom pose information comprises the displacement information of the X-axis, Y-axis and Z-axis directions and the rotation information of the Y-axis and Z-axis; the second degree-of-freedom pose information comprises the displacement information of the X-axis, Y-axis and Z-axis directions and the rotation information of the X-axis, Y-axis and Z-axis; and the third degree-of-freedom pose information comprises, after calibration, the displacement information of the X-axis, Y-axis and Z-axis directions and the rotation information of the X-axis, Y-axis and Z-axis.
Correspondingly, calibrating the second degree-of-freedom pose information according to the first degree-of-freedom pose information to obtain the third degree-of-freedom pose information of the interaction pen in three-dimensional space specifically comprises:
calibrating the displacement information of the X-axis, Y-axis and Z-axis directions and the rotation information of the Y-axis and Z-axis in the second degree-of-freedom pose information respectively against the corresponding information in the first degree-of-freedom pose information, to obtain the calibrated displacement information of the X-axis, Y-axis and Z-axis directions and the calibrated rotation information of the Y-axis and Z-axis in the third degree-of-freedom pose information;
calibrating the rotation information of the X-axis (provided by the six-axis inertial motion sensor) according to the calibrated displacement information of the X-axis, Y-axis and Z-axis directions and the calibrated rotation information of the Y-axis and Z-axis, to obtain the calibrated rotation information of the X-axis in the third degree-of-freedom pose information.
Preferably, there are three luminous marker points, namely a first luminous marker point, a second luminous marker point and a third luminous marker point, and the orientation sensor is a three-axis inertial motion sensor comprising a three-axis micro-machined gyroscope. The first, second and third luminous marker points and the three-axis inertial motion sensor are all arranged in the tip region of the interaction pen, and the lines joining the three luminous marker points form a non-equilateral triangle.
Correspondingly, the first degree-of-freedom pose information comprises the displacement information of the X-axis, Y-axis and Z-axis directions and the rotation information of the X-axis, Y-axis and Z-axis; the second degree-of-freedom pose information comprises the rotation information of the X-axis, Y-axis and Z-axis; and the third degree-of-freedom pose information comprises, after calibration, the displacement information of the X-axis, Y-axis and Z-axis directions and the rotation information of the X-axis, Y-axis and Z-axis.
Correspondingly, calibrating the second degree-of-freedom pose information according to the first degree-of-freedom pose information to obtain the third degree-of-freedom pose information of the interaction pen in three-dimensional space specifically comprises:
calibrating the rotation information of the X-axis, Y-axis and Z-axis in the second degree-of-freedom pose information respectively against the corresponding rotation information in the first degree-of-freedom pose information, to obtain the calibrated rotation information of the X-axis, Y-axis and Z-axis in the third degree-of-freedom pose information;
determining the calibrated displacement information of the X-axis, Y-axis and Z-axis directions in the third degree-of-freedom pose information according to the calibrated rotation information of the X-axis, Y-axis and Z-axis and the displacement information of the X-axis, Y-axis and Z-axis directions in the first degree-of-freedom pose information.
Preferably, the vision sensor is a multi-camera infrared camera module.
Correspondingly, determining the coordinates of each luminous marker point in the captured image specifically comprises:
processing, according to the multi-view ranging principle, the images of each luminous marker point captured by each camera in the multi-camera infrared camera module, to determine the coordinates of each luminous marker point in the captured image.
Preferably, processing the images of each luminous marker point captured by each camera in the multi-camera infrared camera module according to the multi-view ranging principle, to determine the coordinates of each luminous marker point in the captured image, specifically comprises:
calibrating the multi-camera infrared camera module, so as to eliminate the distortion of the images captured by the module and to obtain the intrinsic and extrinsic parameters of each camera in the module;
performing stereo matching on the multi-camera infrared camera module according to the intrinsic and extrinsic parameters of each camera, to obtain the disparity data between any two cameras;
determining the coordinates of each luminous marker point in the captured image according to the disparity data and the similar-triangles theory.
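The disparity-and-similar-triangles step can be illustrated for the rectified two-camera case, where depth follows Z = f·B/d. The pinhole parameters below stand in for the intrinsics and extrinsics obtained from the calibration step described above; the concrete values in the usage are hypothetical.

```python
def triangulate_depth(xl, xr, focal_px, baseline_m):
    """Depth of a marker from a rectified binocular pair via similar
    triangles: Z = f * B / d, where d = xl - xr is the disparity in
    pixels, f the focal length in pixels, B the baseline in metres."""
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("marker must be in front of both cameras")
    return focal_px * baseline_m / disparity

def triangulate_point(xl, xr, y, cx, cy, focal_px, baseline_m):
    """Full 3-D coordinates of a marker from its left-image position
    (xl, y), using the pinhole model with principal point (cx, cy)."""
    z = triangulate_depth(xl, xr, focal_px, baseline_m)
    x3 = (xl - cx) * z / focal_px
    y3 = (y - cy) * z / focal_px
    return x3, y3, z
```

For example, with a 700 px focal length and a 0.1 m baseline, a 70 px disparity places the marker 1 m from the cameras; a module with more than two cameras simply repeats this over every camera pair and averages.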
Preferably, before calibrating the second degree-of-freedom pose information according to the first degree-of-freedom pose information, the method further comprises:
adjusting the first degree-of-freedom pose information and the second degree-of-freedom pose information respectively according to a preset precision adjustment standard, so that the precision values of the first degree-of-freedom pose information and the second degree-of-freedom pose information are unified;
correspondingly, calibrating the second degree-of-freedom pose information according to the first degree-of-freedom pose information specifically comprises:
calibrating the adjusted second degree-of-freedom pose information according to the adjusted first degree-of-freedom pose information.
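The precision-unification step might look like the following, where each source's native readings are mapped into a shared unit and rounded to a common number of decimals before calibration. The patent leaves the adjustment standard unspecified, so the scale factors and decimal count are hypothetical.

```python
def unify_precision(value, scale, decimals=3):
    """Bring one pose component onto a common precision standard:
    scale converts the sensor's native unit into the shared unit
    (e.g. a hypothetical pixels-to-millimetres factor for the vision
    path, or raw sensor counts to millimetres for the orientation
    sensor), and rounding to a fixed number of decimals makes both
    sources equally precise before they are blended."""
    return round(value * scale, decimals)
```

Without this step, blending a millimetre-precise vision displacement with a coarser sensor displacement would let the noisier source dominate the calibrated result.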
In addition, to achieve the above object, the present invention also provides a rigid-body localization apparatus based on an interaction pen, the apparatus comprising:
a first receiving module, configured to receive the image, captured by the vision sensor, of at least one luminous marker point, each luminous marker point being arranged on the interaction pen without overlapping the others;
a first determining module, configured to determine the coordinates of each luminous marker point in the captured image;
a second determining module, configured to determine, from the coordinates of each luminous marker point in the captured image, the first degree-of-freedom pose information of the interaction pen in three-dimensional space;
a second receiving module, configured to receive the second degree-of-freedom pose information of the interaction pen in three-dimensional space collected by the orientation sensor, the orientation sensor being arranged in the interaction pen, its type being determined by the number of luminous marker points, and its position in the interaction pen being determined by the positions of the luminous marker points on the interaction pen;
a calibration module, configured to calibrate the second degree-of-freedom pose information according to the first degree-of-freedom pose information, to obtain the third degree-of-freedom pose information of the interaction pen in three-dimensional space;
a third determining module, configured to determine the position of the interaction pen in three-dimensional space according to the third degree-of-freedom pose information, so as to realize real-time localization of the interaction pen in a virtual scene and interaction with objects in the virtual scene.
In addition, to achieve the above object, the present invention also provides a terminal device comprising a memory, a processor, and a rigid-body localization program based on an interaction pen that is stored on the memory and executable on the processor, the program being configured to carry out the steps of the above rigid-body localization method based on an interaction pen.
In addition, to achieve the above object, the present invention also provides a storage medium, the storage medium being a computer-readable storage medium on which a rigid-body localization program based on an interaction pen is stored, the program, when executed by a processor, carrying out the steps of the above rigid-body localization method based on an interaction pen.
In the present invention, an orientation sensor is arranged in the interaction pen and complemented by a vision sensor mounted on a stereoscopic display device. During interaction, the positions of the luminous marker points of the interaction pen are extracted from the image captured by the vision sensor to determine the first degree-of-freedom pose information of the interaction pen in three-dimensional space from the viewpoint of the vision sensor; this first pose information then calibrates the second degree-of-freedom pose information determined by the orientation sensor (from its own frame of reference), yielding third degree-of-freedom pose information of higher accuracy; finally, the actual position of the interaction pen in three-dimensional space is determined from the third pose information, effectively improving the accuracy and stability of localization during interaction. Moreover, by choosing as the interactive device of the virtual interaction scene an interaction pen of known size and fixed shape that is not prone to deformation, the above localization approach is not only convenient to use, but can also localize multiple interaction pens at the same time, so that a single desktop product can be shared by multiple users simultaneously.
Brief description of the drawings
Fig. 1 is a structural schematic diagram of the terminal device of the hardware operating environment involved in an embodiment of the present invention;
Fig. 2 is a flow diagram of a first embodiment of the rigid-body localization method based on an interaction pen according to the present invention;
Fig. 3 is a schematic diagram of the binocular ranging principle used in the first embodiment of the rigid-body localization method based on an interaction pen according to the present invention;
Fig. 4 is a schematic diagram of the three-axis coordinate system and the rotation directions of the X-axis, Y-axis and Z-axis in the first embodiment of the rigid-body localization method based on an interaction pen according to the present invention;
Fig. 5 is a flow diagram of a second embodiment of the rigid-body localization method based on an interaction pen according to the present invention;
Fig. 6 is a functional block diagram of the rigid-body localization apparatus based on an interaction pen according to the present invention.
The realization, functions and advantages of the object of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
Referring to Fig. 1, Fig. 1 is a structural schematic diagram of the terminal device of the hardware operating environment involved in an embodiment of the present invention. The terminal device may be any device able to access a network, such as a personal computer, tablet computer or smartphone; the possibilities are neither enumerated nor particularly limited here.
As shown in Fig. 1, the terminal device may include: a processor 1001, such as a central processing unit (CPU); a communication bus 1002; a user interface 1003; a network interface 1004; and a memory 1005. The communication bus 1002 realizes the connection and communication between these components. The user interface 1003 may include a touch display screen, a voice recognition unit and the like, and optionally may also include standard wired and wireless interfaces. The network interface 1004 may optionally include standard wired and wireless interfaces (such as a Wireless Fidelity (WI-FI) interface, a Bluetooth interface, etc.). The memory 1005 may be a high-speed RAM memory, or a stable non-volatile memory such as a disk memory; optionally, the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
It will be understood by those skilled in the art that the structure shown in Fig. 1 does not constitute a limitation on the terminal device, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
As shown in Fig. 1, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module and a rigid-body localization program based on an interaction pen.
In the terminal device shown in Fig. 1, the network interface 1004 is mainly used to establish communication connections between the terminal device and the vision sensor and the interaction pen; the user interface 1003 is mainly used to receive input instructions from the user; and the terminal device calls, through the processor 1001, the rigid-body localization program based on an interaction pen stored in the memory 1005, and performs the following operations:
receiving an image, captured by the vision sensor, of at least one luminous marker point, each luminous marker point being arranged on the interaction pen without overlapping the others;
determining the coordinates of each luminous marker point in the captured image;
determining, from the coordinates of each luminous marker point in the captured image, the first degree-of-freedom pose information of the interaction pen in three-dimensional space;
receiving the second degree-of-freedom pose information of the interaction pen in three-dimensional space collected by the orientation sensor, the orientation sensor being arranged in the interaction pen, the type of the orientation sensor being determined by the number of luminous marker points, and the position of the orientation sensor in the interaction pen being determined by the positions of the luminous marker points on the interaction pen;
calibrating the second degree-of-freedom pose information according to the first degree-of-freedom pose information, to obtain the third degree-of-freedom pose information of the interaction pen in three-dimensional space;
determining the position of the interaction pen in three-dimensional space according to the third degree-of-freedom pose information, so as to realize real-time localization of the interaction pen in a virtual scene and interaction with objects in the virtual scene.
Further, there is one luminous marker point, and the orientation sensor is a nine-axis inertial motion sensor comprising a three-axis micro-machined gyroscope, a three-axis accelerometer and a three-axis magnetometer; the luminous marker point and the nine-axis inertial motion sensor are both arranged in the tip region of the interaction pen, with the luminous marker point adjacent to the nine-axis inertial motion sensor. The first degree-of-freedom pose information comprises the displacement information of the X-axis, Y-axis and Z-axis directions; the second degree-of-freedom pose information comprises the displacement information of the X-axis, Y-axis and Z-axis directions and the rotation information of the X-axis, Y-axis and Z-axis; and the third degree-of-freedom pose information comprises, after calibration, the displacement information of the X-axis, Y-axis and Z-axis directions and the rotation information of the X-axis, Y-axis and Z-axis. The processor 1001 may call the rigid-body localization program based on an interaction pen stored in the memory 1005 and further perform the following operations:
calibrating the displacement information of the X-axis, Y-axis and Z-axis directions in the second degree-of-freedom pose information respectively against the corresponding displacement information in the first degree-of-freedom pose information, to obtain the calibrated displacement information of the X-axis, Y-axis and Z-axis directions in the third degree-of-freedom pose information;
compensating the cumulative angular error of the three-axis micro-machined gyroscope according to the three-axis magnetometer and the three-axis accelerometer, to obtain the calibrated rotation information of the X-axis, Y-axis and Z-axis in the third degree-of-freedom pose information.
Further, there are two luminous marker points, namely a first luminous marker point and a second luminous marker point, and the orientation sensor is a six-axis inertial motion sensor comprising a three-axis micro-machined gyroscope and a three-axis accelerometer; the first luminous marker point is arranged in the tip region of the interaction pen, the second luminous marker point is arranged in the tail region of the interaction pen, and the six-axis inertial motion sensor is arranged in the central region of the interaction pen. The first degree-of-freedom pose information comprises the displacement information of the X-axis, Y-axis and Z-axis directions and the rotation information of the Y-axis and Z-axis; the second degree-of-freedom pose information comprises the displacement information of the X-axis, Y-axis and Z-axis directions and the rotation information of the X-axis, Y-axis and Z-axis; and the third degree-of-freedom pose information comprises, after calibration, the displacement information of the X-axis, Y-axis and Z-axis directions and the rotation information of the X-axis, Y-axis and Z-axis. The processor 1001 may call the rigid-body localization program based on an interaction pen stored in the memory 1005 and further perform the following operations:
Believed according to the displacement of the displacement information of the X-direction in the first freedom degree posture position information, Y direction
Breath, the rotation of Y-axis, the displacement information of Z-direction and Z axis rotation information, respectively to the second freedom degree posture position believe
The displacement information of X-direction in breath, the displacement information of Y direction, the rotation of Y-axis, the displacement information of Z-direction and Z axis
Rotation information is calibrated, displacement information, the Y of the X-direction after obtaining the third freedom degree posture position information alignment
The displacement information of axis direction, the rotation of Y-axis, the displacement information of Z-direction and the rotation information of Z axis;
According to the displacement information of the X-direction after the third freedom degree posture position information alignment, the position of Y direction
Move the X in information, the rotation of Y-axis, the displacement information of Z-direction, the rotation information of Z axis and the first freedom degree posture position information
The rotation information of axis calibrates the rotation information of the X-axis, obtains the third freedom degree posture position information alignment
The rotation information of X-axis afterwards.
Further, there are three luminescent marking points, namely a first luminescent marking point, a second luminescent marking point, and a third luminescent marking point; the orientation sensor is a three-axis inertial motion sensor, the three-axis inertial motion sensor including a three-axis MEMS gyroscope; the first luminescent marking point, the second luminescent marking point, the third luminescent marking point, and the three-axis inertial motion sensor are all disposed in the pen-tip region of the interaction pen, and the lines connecting the three luminescent marking points form a non-equilateral triangle. The first degree-of-freedom pose information includes X-axis displacement information, X-axis rotation information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information, and Z-axis rotation information; the second degree-of-freedom pose information includes X-axis rotation information, Y-axis rotation information, and Z-axis rotation information; and the third degree-of-freedom pose information includes calibrated X-axis displacement information, X-axis rotation information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information, and Z-axis rotation information. The processor 1001 may call the interaction-pen-based rigid body positioning program stored in the memory 1005 and further perform the following operations:
calibrating, according to the X-axis, Y-axis, and Z-axis rotation information in the first degree-of-freedom pose information, the X-axis, Y-axis, and Z-axis rotation information in the second degree-of-freedom pose information, respectively, to obtain the calibrated X-axis, Y-axis, and Z-axis rotation information of the third degree-of-freedom pose information;
determining, according to the calibrated X-axis, Y-axis, and Z-axis rotation information of the third degree-of-freedom pose information and the X-axis, Y-axis, and Z-axis displacement information in the first degree-of-freedom pose information, the calibrated X-axis, Y-axis, and Z-axis displacement information of the third degree-of-freedom pose information.
Further, the visual sensor is a multi-view infrared camera module, and the processor 1001 may call the interaction-pen-based rigid body positioning program stored in the memory 1005 and further perform the following operation:
processing, according to the multi-view ranging principle, the imaging pictures of each luminescent marking point collected by each camera in the multi-view infrared camera module, and determining the coordinates of each luminescent marking point in the imaging picture.
Further, the processor 1001 may call the interaction-pen-based rigid body positioning program stored in the memory 1005 and further perform the following operations:
calibrating the multi-view infrared camera module, to eliminate the distortion of the imaging pictures collected by the multi-view infrared camera module and to obtain the intrinsic parameters and extrinsic parameters of each camera in the multi-view infrared camera module;
performing stereo matching on the multi-view infrared camera module according to the intrinsic parameters and the extrinsic parameters of each camera, to obtain the disparity data between any two cameras;
determining, according to the disparity data and the principle of similar triangles, the coordinates of each luminescent marking point in the imaging picture.
Further, the processor 1001 may call the interaction-pen-based rigid body positioning program stored in the memory 1005 and further perform the following operations:
adjusting, according to a preset precision adjustment criterion, the first degree-of-freedom pose information and the second degree-of-freedom pose information respectively, so that the precision values of the first degree-of-freedom pose information and the second degree-of-freedom pose information are unified;
correspondingly, the calibrating of the second degree-of-freedom pose information according to the first degree-of-freedom pose information specifically includes:
calibrating the adjusted second degree-of-freedom pose information according to the adjusted first degree-of-freedom pose information.
Through the above scheme, this embodiment disposes an orientation sensor in the interaction pen, assisted by a visual sensor disposed on the stereoscopic display device. During interaction, the position of each luminescent marking point in the imaging picture of the interaction pen collected by the visual sensor is used to determine, from the viewpoint of the visual sensor, the first degree-of-freedom pose information of the interaction pen in three-dimensional space. The second degree-of-freedom pose information of the interaction pen in three-dimensional space determined by the orientation sensor (from the viewpoint of the orientation sensor) is then calibrated according to the determined first degree-of-freedom pose information, yielding third degree-of-freedom pose information of higher accuracy. Finally, the actual position of the interaction pen in three-dimensional space is determined according to the third degree-of-freedom pose information, thereby effectively improving the accuracy and stability of positioning during interaction.
In addition, by preferring an interaction pen of known size and fixed shape that is not prone to deformation as the interactive device used in virtual interaction scenarios, the present invention, based on the above positioning method, is easy to use and can also meet the requirement of positioning multiple interaction pens simultaneously, thereby supporting simultaneous sharing of a tabletop product by multiple users.
Based on the above hardware configuration, embodiments of the interaction-pen-based rigid body localization method of the present invention are proposed.
Referring to Fig. 2, Fig. 2 is a flow diagram of a first embodiment of the interaction-pen-based rigid body localization method of the present invention.
It should be noted that, in this embodiment, the executing subject of each step of the interaction-pen-based rigid body localization method may specifically be the processor in the 3D display device, or may be a remote server (a physical server or a virtual cloud server); those skilled in the art may configure this as needed, and no restriction is imposed here.
Specifically, in the first embodiment, the interaction-pen-based rigid body localization method includes the following steps:
S10: receiving the imaging picture of the luminescent marking points collected by the visual sensor.
It should be noted that there is at least one luminescent marking point in this embodiment. Specifically, it may be a light-emitting unit disposed on an interaction pen of known size and fixed shape that is not prone to deformation, such as an existing electroluminescent semiconductor chip (e.g., an LED chip), or a phosphor dot formed by coating fluorescent material directly on the interaction pen. Those skilled in the art may select a suitable material for the luminescent marking points as needed; the options are not enumerated one by one, and no restriction is imposed here.
In addition, when multiple luminescent marking points are disposed on the interaction pen, the points must be disposed on the pen without overlapping one another.
For example, when there are two luminescent marking points, the two points are disposed in the pen-tip region and the pen-tail region of the interaction pen, respectively.
As another example, when there are three luminescent marking points, for ease of detection and to obtain data more efficiently, the three points may all be disposed in the pen-tip region of the interaction pen and distributed in the shape of a non-equilateral triangle.
In addition, it should be noted that the interaction pen selected in this example may be wired, i.e., connected to the 3D display device through a USB interface or another communication interface when in use. It may also be wireless, i.e., the interaction pen has a built-in wireless communication module such as Bluetooth, Wi-Fi, or near-field communication (NFC), through which it establishes a communication connection with the wireless communication module or server in the 3D display device. Those skilled in the art may select as needed, and no restriction is imposed here.
S20: determining the coordinates of each luminescent marking point in the imaging picture.
Specifically, to determine the coordinates of each luminescent marking point in the imaging picture conveniently and accurately, the visual sensor in this embodiment is preferably a multi-view infrared camera module, i.e., an infrared camera module including at least two camera units; those skilled in the art may set the specific number as needed, and no restriction is imposed here.
With a multi-view infrared camera module, the operation of determining the coordinates of each luminescent marking point in the imaging picture in step S20 is essentially: processing, according to the multi-view ranging principle, the imaging pictures of each luminescent marking point collected by each camera in the multi-view infrared camera module, and determining the coordinates of each luminescent marking point in the imaging picture. For ease of understanding, this is described in detail below:
First, the multi-view infrared camera module is calibrated, to eliminate the distortion of the imaging pictures collected by the multi-view infrared camera module and to obtain the intrinsic parameters and extrinsic parameters of each camera in the module.
Then, stereo matching is performed on the multi-view infrared camera module according to the intrinsic parameters and the extrinsic parameters of each camera, to obtain the disparity data between any two cameras.
Finally, the coordinates of each luminescent marking point in the imaging picture are determined according to the disparity data and the principle of similar triangles.
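The similar-triangles step generalizes to linear triangulation from any two calibrated views. The following sketch in Python with NumPy is not part of the patent: it assumes the calibration step has already produced a 3x4 projection matrix per camera, and all matrix and point values below are illustrative.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one marker point.

    P1, P2   : 3x4 projection matrices of two calibrated cameras.
    uv1, uv2 : (u, v) pixel coordinates of the same luminescent
               marker in each camera's imaging picture.
    Returns the marker's 3D point in the world frame.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear constraints on the
    # homogeneous 3D point X: (u * P[2] - P[0]) @ X = 0, etc.
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Illustrative setup: two identical cameras with focal length f,
# the second offset by a baseline B along the X axis.
f, B = 500.0, 0.1
K = np.array([[f, 0.0, 320.0], [0.0, f, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-B], [0.0], [0.0]])])

point = np.array([0.02, -0.01, 0.5])  # a marker 0.5 m away

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

recovered = triangulate(P1, P2, project(P1, point), project(P2, point))
print(np.round(recovered, 4))  # recovers the original marker point
```

The same construction extends to more than two cameras by stacking two rows per view into A, which is how a multi-view module can over-determine and stabilize each marker's coordinates.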
For ease of understanding, binocular ranging is taken as an example below, i.e., the multi-view infrared camera module consists of two camera units, described in conjunction with Fig. 3:
As shown in Fig. 3, P is a point on the object under test (i.e., a luminescent marking point on the interaction pen), OR and OT are the optical centers of the two cameras, and the imaging points of point P on the two camera photoreceptors are P' and P'' respectively (the imaging planes have been rotated to the front of the lenses). f is the camera focal length, B is the distance between the two optical centers, and Z is the depth information to be obtained (i.e., the coordinate of point P). Let the distance from point P' to point P'' be dis; then:
dis = B - (XR - XT)
According to the principle of similar triangles:
dis / B = (Z - f) / Z
which yields:
Z = f * B / (XR - XT)
Since the focal length f and the optical-center distance B can be obtained by calibration, it can be seen from the above formula that the depth information can be obtained as long as XR - XT (i.e., the disparity d) is obtained.
Since binocular and multi-view ranging are already relatively mature, their details are not repeated here.
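The depth formula Z = f * B / (XR - XT) can be checked numerically with a few lines of Python; the focal length, baseline, and pixel coordinates in this sketch are made-up illustrative values, not from the patent.

```python
def depth_from_disparity(f, B, x_r, x_t):
    """Binocular depth formula Z = f * B / (XR - XT).

    f        : focal length, in pixels
    B        : baseline between the two optical centers, in meters
    x_r, x_t : horizontal image coordinates of the same marker in
               the two imaging pictures, in pixels
    """
    d = x_r - x_t  # disparity
    if d == 0:
        raise ValueError("zero disparity: point at infinity")
    return f * B / d

# Illustrative numbers: f = 800 px, B = 0.06 m, disparity = 32 px.
z = depth_from_disparity(800.0, 0.06, 352.0, 320.0)
print(z)  # 1.5 (meters)
```

Note how depth resolution degrades as disparity shrinks: a far marker moves only a fraction of a pixel between the two views, which is one reason the baseline B matters.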
S30: determining, according to the coordinates of each luminescent marking point in the imaging picture, the first degree-of-freedom pose information of the interaction pen in three-dimensional space.
S40: receiving the second degree-of-freedom pose information of the interaction pen in three-dimensional space collected by the orientation sensor.
Specifically, the orientation sensor described in this embodiment is disposed in the interaction pen; the type of the orientation sensor is determined according to the number of luminescent marking points, and its location in the interaction pen is determined according to the positions of the luminescent marking points on the interaction pen.
For example, when there is one luminescent marking point, the orientation sensor is a nine-axis inertial motion sensor; the luminescent marking point and the nine-axis inertial motion sensor are both disposed in the pen-tip region of the interaction pen, and the luminescent marking point is adjacent to the nine-axis inertial motion sensor.
Specifically, the nine-axis inertial motion sensor described in this embodiment includes a three-axis MEMS gyroscope, a three-axis accelerometer, and a three-axis magnetometer.
As another example, when there are two luminescent marking points, hereinafter referred to as the first luminescent marking point and the second luminescent marking point for ease of description, the orientation sensor may be a six-axis inertial motion sensor; the first luminescent marking point may be disposed in the pen-tip region of the interaction pen, the second luminescent marking point may be disposed in the pen-tail region, and the six-axis inertial motion sensor may be disposed in the central region of the interaction pen.
Specifically, the six-axis inertial motion sensor described in this embodiment includes a three-axis MEMS gyroscope and a three-axis accelerometer.
As another example, when there are three luminescent marking points, hereinafter referred to as the first, second, and third luminescent marking points for ease of description, the orientation sensor may be a three-axis inertial motion sensor (specifically including a three-axis MEMS gyroscope); the first, second, and third luminescent marking points and the three-axis inertial motion sensor may all be disposed in the pen-tip region of the interaction pen, with the lines connecting the three luminescent marking points forming a non-equilateral triangle.
However, it should be understood that, in a concrete implementation and where cost allows, the more accurate nine-axis inertial motion sensor may be selected no matter how many luminescent marking points are disposed on the interaction pen.
It should also be understood that, in a concrete implementation, if more than one luminescent marking point (two or three) is disposed on the interaction pen, a six-axis inertial motion sensor, priced only slightly above a three-axis one, may be selected to reduce cost as far as possible while balancing accuracy against cost.
In addition, it should be noted that, in a concrete implementation, in order to keep the second degree-of-freedom pose information provided by the orientation sensor disposed in the interaction pen as accurate as possible, the orientation sensor may be corrected based on prestored error models before the interaction pen is used for human-computer interaction in the virtual scene.
For example, based on a noise error model, the random-walk noise caused by Gaussian white noise, the internal structure, and temperature changes is corrected.
As another example, based on a scale error model, the scale error produced when converting digital signals into physical signals is corrected.
As another example, based on an axis deviation error model, the X-axis, Y-axis, and Z-axis of the orientation sensor are corrected.
It should be understood that the noise error model, scale error model, and axis deviation error model described above may be obtained by training, with neural-network-based pattern recognition methods, on experimental data of sensors of various models and types stored in a big data platform.
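A conventional way to apply scale and axis-deviation corrections of this kind is an affine correction of each raw three-axis sample. The sketch below is an assumption about one simple implementation, not the neural-network-trained models described above, and the bias, scale, and misalignment numbers are illustrative.

```python
import numpy as np

# Illustrative error parameters, as a calibration step might estimate:
bias = np.array([0.02, -0.01, 0.005])       # zero-offset per axis
scale = np.diag([1.02, 0.98, 1.01])         # scale error per axis
misalign = np.array([[1.0, 0.001, -0.002],  # axis-deviation terms
                     [-0.001, 1.0, 0.003],
                     [0.002, -0.003, 1.0]])

def correct(raw):
    """Undo bias, scale, and axis-deviation distortion for one
    three-axis sensor sample (accelerometer, gyro, or magnetometer)."""
    return np.linalg.inv(scale @ misalign) @ (raw - bias)

# A true reading distorted by the model, then recovered:
true = np.array([0.0, 0.0, 9.81])           # gravity along the Z axis
raw = scale @ misalign @ true + bias
print(np.round(correct(raw), 6))
```

In practice the inverse matrix would be precomputed once after calibration and applied to every sample at run time.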
It should be noted that the above are only examples and do not constitute any restriction on the technical solution of the present invention; in a concrete implementation, those skilled in the art may configure as needed, and no restriction is imposed here.
S50: calibrating the second degree-of-freedom pose information according to the first degree-of-freedom pose information, to obtain the third degree-of-freedom pose information of the interaction pen in three-dimensional space.
From the foregoing description it can be found that different numbers of luminescent marking points call for different orientation sensors, and as the chosen orientation sensor differs, the specific content included in the determined second degree-of-freedom pose information also differs.
In addition, it should be noted that as the number of luminescent marking points differs, the specific content ultimately included in the first degree-of-freedom pose information determined from the pictures collected by the visual sensor will also differ.
It should be understood that when the first degree-of-freedom pose information and the second degree-of-freedom pose information contain different specific content, the calibration process also differs; however, the third degree-of-freedom pose information obtained after the calibration operation should always include the calibrated X-axis displacement information, X-axis rotation information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information, and Z-axis rotation information. For ease of understanding, detailed descriptions follow:
For example, when there is one luminescent marking point and the orientation sensor is a nine-axis inertial motion sensor composed mainly of a three-axis MEMS gyroscope, a three-axis accelerometer, and a three-axis magnetometer, the first degree-of-freedom pose information specifically includes: X-axis displacement information, Y-axis displacement information, and Z-axis displacement information; and the second degree-of-freedom pose information specifically includes: X-axis displacement information, X-axis rotation information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information, and Z-axis rotation information.
Correspondingly, the operation of step S50 specifically includes:
calibrating, according to the X-axis, Y-axis, and Z-axis displacement information in the first degree-of-freedom pose information, the X-axis, Y-axis, and Z-axis displacement information in the second degree-of-freedom pose information, respectively, to obtain the calibrated X-axis, Y-axis, and Z-axis displacement information of the third degree-of-freedom pose information;
compensating, according to the three-axis magnetometer and the three-axis accelerometer, the accumulated angular error of the three-axis MEMS gyroscope, to obtain the calibrated X-axis, Y-axis, and Z-axis rotation information of the third degree-of-freedom pose information.
That is, the three-axis magnetometer and the three-axis accelerometer in the nine-axis inertial motion sensor can directly compensate the accumulated angular error of the three-axis MEMS gyroscope, so that the X-axis, Y-axis, and Z-axis rotation information in the second degree-of-freedom pose information provided by the nine-axis inertial motion sensor is already calibrated.
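Compensating a gyroscope's accumulated angular error with an absolute reference (the accelerometer for pitch/roll, the magnetometer for yaw) is commonly realized with a complementary filter. The single-angle sketch below is an illustrative assumption about one such scheme, not the patent's specified method; the blending weight and sample values are made up.

```python
def complementary_filter(angle, gyro_rate, ref_angle, dt, alpha=0.98):
    """One update step: integrate the gyro rate, then pull the result
    toward an absolute reference angle so the gyro's accumulated
    error cannot grow without bound.

    angle     : current fused angle estimate (degrees)
    gyro_rate : angular rate reported by the gyroscope (deg/s)
    ref_angle : absolute angle from accelerometer/magnetometer (deg)
    dt        : sample period (s)
    alpha     : trust placed in the gyro within one step
    """
    integrated = angle + gyro_rate * dt
    return alpha * integrated + (1.0 - alpha) * ref_angle

# A gyro with a constant 1 deg/s bias would drift to 10 degrees over
# 10 seconds if integrated alone; with the reference it stays bounded.
angle = 0.0
true_rate, gyro_bias = 0.0, 1.0   # the pen is actually still
for _ in range(1000):             # 10 s of samples at 100 Hz
    angle = complementary_filter(angle, true_rate + gyro_bias, 0.0, 0.01)
print(round(angle, 3))            # settles near 0.49 instead of 10.0
```

The steady-state residual (here 0.49 degrees) shrinks as alpha is lowered, at the cost of admitting more of the reference sensor's short-term noise; full nine-axis fusion applies the same idea per rotation axis.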
As another example, when there are two luminescent marking points and the orientation sensor is a six-axis inertial motion sensor composed mainly of a three-axis MEMS gyroscope and a three-axis accelerometer, the first degree-of-freedom pose information specifically includes: X-axis displacement information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information, and Z-axis rotation information; and the second degree-of-freedom pose information specifically includes: X-axis displacement information, X-axis rotation information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information, and Z-axis rotation information.
Correspondingly, the operation of step S50 specifically includes:
calibrating, according to the X-axis displacement information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information, and Z-axis rotation information in the first degree-of-freedom pose information, the corresponding X-axis displacement, Y-axis displacement, Y-axis rotation, Z-axis displacement, and Z-axis rotation information in the second degree-of-freedom pose information, respectively, to obtain the calibrated X-axis displacement information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information, and Z-axis rotation information of the third degree-of-freedom pose information;
calibrating the X-axis rotation information according to the calibrated X-axis displacement information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information, and Z-axis rotation information of the third degree-of-freedom pose information, together with the X-axis rotation information in the first degree-of-freedom pose information, to obtain the calibrated X-axis rotation information of the third degree-of-freedom pose information.
That is, once the five known parameters are all calibrated, the last parameter composing the degree-of-freedom pose information can be calibrated correspondingly.
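The reason the X-axis rotation needs separate treatment here is that two markers only reveal the pen's direction vector: the rotations about Y and Z are observable from the tip and tail positions, while roll about the pen's own long axis is not. The sketch below recovers the two observable angles, under the assumed convention (not stated in the patent) that the pen's long axis nominally lies along X.

```python
import math

def pen_direction_angles(tip, tail):
    """Yaw (rotation about Z) and pitch (rotation about Y) of the pen
    axis, from the 3D positions of the tip and tail markers.
    Roll about the pen's own axis is unobservable from two points,
    so it must come from the gyroscope."""
    dx, dy, dz = (tip[i] - tail[i] for i in range(3))
    yaw = math.degrees(math.atan2(dy, dx))                      # about Z
    pitch = math.degrees(math.atan2(-dz, math.hypot(dx, dy)))   # about Y
    return yaw, pitch

# A pen lying in the X-Y plane, rotated 30 degrees about the Z axis:
tip = (math.cos(math.radians(30)), math.sin(math.radians(30)), 0.0)
tail = (0.0, 0.0, 0.0)
yaw, pitch = pen_direction_angles(tip, tail)
print(round(yaw, 3), round(abs(pitch), 3))
```

This is why the five vision-observable parameters can calibrate the inertial estimate directly, while the sixth (roll) is only cross-checked against them.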
As another example, when there are three luminescent marking points and the orientation sensor is a three-axis inertial motion sensor composed mainly of a three-axis MEMS gyroscope, the first degree-of-freedom pose information specifically includes: X-axis displacement information, X-axis rotation information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information, and Z-axis rotation information; and the second degree-of-freedom pose information specifically includes: X-axis rotation information, Y-axis rotation information, and Z-axis rotation information.
Correspondingly, the operation of step S50 specifically includes:
calibrating, according to the X-axis, Y-axis, and Z-axis rotation information in the first degree-of-freedom pose information, the X-axis, Y-axis, and Z-axis rotation information in the second degree-of-freedom pose information, respectively, to obtain the calibrated X-axis, Y-axis, and Z-axis rotation information of the third degree-of-freedom pose information;
determining, according to the calibrated X-axis, Y-axis, and Z-axis rotation information of the third degree-of-freedom pose information and the X-axis, Y-axis, and Z-axis displacement information in the first degree-of-freedom pose information, the calibrated X-axis, Y-axis, and Z-axis displacement information of the third degree-of-freedom pose information.
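Three non-collinear markers determine the pen's full rigid-body pose, which is why all six calibrated parameters can be derived in this case. A standard way to recover the rotation and translation mapping the markers' known positions in the pen's own frame onto their triangulated world positions is the Kabsch (SVD) algorithm, sketched below; the marker layout and ground-truth pose are illustrative, and the non-equilateral triangle required by this embodiment is what keeps the point correspondence unambiguous.

```python
import numpy as np

def rigid_pose(model_pts, world_pts):
    """Kabsch algorithm: find rotation R and translation t with
    world_pts ~= R @ model_pts + t, for Nx3 point arrays."""
    cm = model_pts.mean(axis=0)
    cw = world_pts.mean(axis=0)
    H = (model_pts - cm).T @ (world_pts - cw)   # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cw - R @ cm
    return R, t

# Three markers forming a non-equilateral triangle in the pen frame:
model = np.array([[0.0, 0.0, 0.0],
                  [0.03, 0.0, 0.0],
                  [0.0, 0.01, 0.0]])
# Ground-truth pose: 90 degrees about Z plus a translation.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0]])
t_true = np.array([0.1, 0.2, 0.3])
world = (Rz @ model.T).T + t_true

R, t = rigid_pose(model, world)
print(np.allclose(R, Rz), np.allclose(t, t_true))  # True True
```

With noisy triangulated points the same procedure returns the least-squares pose, so the vision-side rotations used to calibrate the gyroscope come out of a single SVD per frame.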
In addition, it should be noted that the displacement directions of the above X-axis, Y-axis, and Z-axis displacement information may follow the arrow directions of the X, Y, and Z axes in Fig. 4, respectively.
Correspondingly, the rotation directions of the above X-axis, Y-axis, and Z-axis rotation information may follow the directions of the rotation arrows around the X, Y, and Z axes in Fig. 4, respectively.
It should be noted that Fig. 4 is only a specific example and does not constitute a restriction on the technical solution of the present invention; in a concrete implementation, those skilled in the art may set the displacement direction and rotation direction of each axis as needed, and no restriction is imposed here.
S60: determining, according to the third degree-of-freedom pose information, the position of the interaction pen in three-dimensional space, so as to realize real-time positioning of the interaction pen in the virtual scene and interaction with objects in the virtual scene.
It should be noted that the above is only illustrative and does not constitute any restriction on the technical solution of the present invention; in a concrete implementation, those skilled in the art may make reasonable settings as needed, and no restriction is imposed here.
From the foregoing description it is not difficult to find that the interaction-pen-based rigid body localization method provided in this embodiment disposes an orientation sensor in the interaction pen, assisted by a visual sensor disposed on the stereoscopic display device. During interaction, the position of each luminescent marking point in the imaging picture of the interaction pen collected by the visual sensor is used to determine, from the viewpoint of the visual sensor, the first degree-of-freedom pose information of the interaction pen in three-dimensional space; the second degree-of-freedom pose information of the interaction pen in three-dimensional space determined by the orientation sensor (from the viewpoint of the orientation sensor) is calibrated according to the determined first degree-of-freedom pose information, yielding third degree-of-freedom pose information of higher accuracy; and finally the actual position of the interaction pen in three-dimensional space is determined according to the third degree-of-freedom pose information, thereby effectively improving the accuracy and stability of positioning during interaction.
In addition, by preferring an interaction pen of known size and fixed shape that is not prone to deformation as the interactive device used in virtual interaction scenarios, the above positioning method is easy to use and can also meet the requirement of positioning multiple interaction pens simultaneously, thereby supporting simultaneous sharing of a tabletop product by multiple users.
Further, as shown in Fig. 5, a second embodiment of the rigid body localization method based on the interaction pen of the present invention is proposed on the basis of the first embodiment. In this embodiment, before the second freedom degree posture position information is calibrated according to the first freedom degree posture position information, the first freedom degree posture position information and the second freedom degree posture position information may each first be adjusted according to a preset precision adjustment criterion, as detailed in step S00 of Fig. 5.
In step S00, the first freedom degree posture position information and the second freedom degree posture position information are each adjusted according to the preset precision adjustment criterion.
Specifically, to guarantee the reasonableness of the precision adjustment criterion, in this embodiment the precision adjustment criterion may be a feasible precision adjustment range determined by analyzing data obtained from repeated tests on multiple visual sensors and direction sensors of the same type and same configuration. For example, the first freedom degree posture position information and the second freedom degree posture position information may be required to reach millimetre-level accuracy, and the units and numeric formats of the per-axis information included in each freedom degree posture position information may be required to be unified, so that after adjustment according to the precision adjustment criterion, the accuracy values of the first freedom degree posture position information and the second freedom degree posture position information are unified.
However, it should be understood that, provided the accuracy values of the first freedom degree posture position information and the second freedom degree posture position information are unified, the above precision adjustment criterion may be determined according to the human-computer interaction effect to be achieved, and is not restricted here.
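For instance, a minimal sketch of such a precision adjustment, assuming the rule "express every axis value in whole millimetres" (the field names and the metre-valued inputs are illustrative assumptions, not values from the patent):

```python
# Hypothetical sketch: bring both posture position records to a common
# unit and precision (whole millimetres) before calibration.

def to_millimetres(pose_metres: dict) -> dict:
    """Convert metre-valued axis entries to rounded millimetre integers."""
    return {axis: round(value * 1000.0) for axis, value in pose_metres.items()}

first = to_millimetres({"x": 0.1234, "y": 0.2001, "z": 0.0500})   # vision estimate
second = to_millimetres({"x": 0.1229, "y": 0.2010, "z": 0.0497})  # IMU estimate
print(first, second)
```

After this step the two records carry the same unit and the same numeric precision, so the subsequent calibration compares like with like.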
Correspondingly, in a concrete implementation, the operation of former step S50 — calibrating the second freedom degree posture position information according to the first freedom degree posture position information — becomes, in step S50': calibrating the adjusted second freedom degree posture position information according to the adjusted first freedom degree posture position information.
It should be noted that the above is only an example and does not constitute any restriction on the technical solution of the present invention. In a specific application, those skilled in the art can configure it as needed, and the present invention is not limited in this respect.
From the foregoing description it is not difficult to see that, in the rigid body localization method based on the interaction pen provided in this embodiment, the first freedom degree posture position information and the second freedom degree posture position information are each adjusted according to the preset precision adjustment criterion before the second freedom degree posture position information is calibrated according to the first freedom degree posture position information. This effectively guarantees the accuracy of the third freedom degree posture position information obtained from the subsequent calibration operation, and in turn ensures high-precision tracking of the interaction pen's operating actions and a good human-computer interaction effect during human-computer interaction.
In addition, an embodiment of the present invention further proposes a rigid body positioning device based on an interaction pen. As shown in Fig. 6, the rigid body positioning device based on the interaction pen includes: a first receiving module 6001, a first determining module 6002, a second determining module 6003, a second receiving module 6004, a calibration module 6005 and a third determining module 6006.
The first receiving module 6001 is configured to receive the imaging picture of the luminescent marking points acquired by the visual sensor, where there is at least one luminescent marking point and the luminescent marking points are arranged on the interaction pen without overlapping. The first determining module 6002 is configured to determine the coordinates of each luminescent marking point in the imaging picture. The second determining module 6003 is configured to determine the first freedom degree posture position information of the interaction pen in three-dimensional space according to the coordinates of each luminescent marking point in the imaging picture. The second receiving module 6004 is configured to receive the second freedom degree posture position information of the interaction pen in three-dimensional space acquired by the direction sensor, where the direction sensor is arranged in the interaction pen, the type of the direction sensor is determined according to the number of luminescent marking points, and the location of the direction sensor in the interaction pen is determined according to the positions of the luminescent marking points on the interaction pen. The calibration module 6005 is configured to calibrate the second freedom degree posture position information according to the first freedom degree posture position information, to obtain the third freedom degree posture position information of the interaction pen in three-dimensional space. The third determining module 6006 is configured to determine the position of the interaction pen in three-dimensional space according to the third freedom degree posture position information, so as to realize real-time positioning of the interaction pen in the virtual scene and interaction with objects in the virtual scene.
From the foregoing description it is not difficult to see that, in the rigid body positioning device based on the interaction pen provided in this embodiment, a direction sensor is arranged in the interaction pen, supplemented by a visual sensor arranged on the stereoscopic display device. During interaction, the positions of the luminescent marking points on the interaction pen within the imaging picture acquired by the visual sensor are used to determine, from the viewpoint of the visual sensor, first freedom degree posture position information of the interaction pen in three-dimensional space. The determined first freedom degree posture position information is then used to calibrate the second freedom degree posture position information of the interaction pen in three-dimensional space determined by the direction sensor (from the viewpoint of the direction sensor), yielding third freedom degree posture position information of higher accuracy. Finally, the actual position of the interaction pen in three-dimensional space is determined from the third freedom degree posture position information, thereby effectively improving the accuracy and stability of positioning during interaction.
In addition, the present invention preferably uses an interaction pen of known size and fixed shape that is not prone to deformation as the interactive device in virtual interaction scenarios. Based on the above positioning method, the pen is convenient for users while also allowing multiple interaction pens to be positioned simultaneously, so that a tabletop surface can be shared by multiple people at the same time.
It should be noted that the workflow described above is only illustrative and does not limit the protection scope of the present invention. In practical applications, those skilled in the art may select part or all of it according to actual needs to achieve the purpose of the solution of this embodiment, which is not restricted here.
In addition, for technical details not exhaustively described in this embodiment, reference may be made to the rigid body localization method based on the interaction pen provided by any embodiment of the present invention, and details are not repeated here.
In addition, an embodiment of the present invention further proposes a storage medium. The storage medium is a computer readable storage medium, on which a rigid body positioning program based on an interaction pen is stored. When executed by a processor, the rigid body positioning program based on the interaction pen realizes the following operations:
receiving an imaging picture of luminescent marking points acquired by a visual sensor, where there is at least one luminescent marking point and the luminescent marking points are arranged on the interaction pen without overlapping;
determining the coordinates of each luminescent marking point in the imaging picture;
determining first freedom degree posture position information of the interaction pen in three-dimensional space according to the coordinates of each luminescent marking point in the imaging picture;
receiving second freedom degree posture position information of the interaction pen in three-dimensional space acquired by a direction sensor, where the direction sensor is arranged in the interaction pen, the type of the direction sensor is determined according to the number of luminescent marking points, and the location of the direction sensor in the interaction pen is determined according to the positions of the luminescent marking points on the interaction pen;
calibrating the second freedom degree posture position information according to the first freedom degree posture position information, to obtain third freedom degree posture position information of the interaction pen in three-dimensional space;
determining the position of the interaction pen in three-dimensional space according to the third freedom degree posture position information, so as to realize real-time positioning of the interaction pen in the virtual scene and interaction with objects in the virtual scene.
Further, there is one luminescent marking point, and the direction sensor is a nine-axis inertial motion sensor. The nine-axis inertial motion sensor includes a three-axis micromachined gyroscope, a three-axis accelerometer and a three-axis magnetometer. The luminescent marking point and the nine-axis inertial motion sensor are both arranged in the pen tip region of the interaction pen, and the luminescent marking point is adjacent to the nine-axis inertial motion sensor. The first freedom degree posture position information includes X-axis displacement information, Y-axis displacement information and Z-axis displacement information. The second freedom degree posture position information includes X-axis displacement information, X-axis rotation information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information and Z-axis rotation information. The third freedom degree posture position information includes the calibrated X-axis displacement information, X-axis rotation information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information and Z-axis rotation information. When executed by the processor, the rigid body positioning program based on the interaction pen further realizes the following operations:
calibrating the X-axis displacement information, Y-axis displacement information and Z-axis displacement information in the second freedom degree posture position information according to the X-axis displacement information, Y-axis displacement information and Z-axis displacement information in the first freedom degree posture position information, respectively, to obtain the calibrated X-axis displacement information, Y-axis displacement information and Z-axis displacement information in the third freedom degree posture position information;
compensating the accumulated angular error of the three-axis micromachined gyroscope according to the three-axis magnetometer and the three-axis accelerometer, to obtain the calibrated X-axis rotation information, Y-axis rotation information and Z-axis rotation information in the third freedom degree posture position information.
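A minimal sketch of such drift compensation, under the standard assumption that the accelerometer senses the gravity direction at rest (giving an absolute roll/pitch reference, with the magnetometer playing the analogous role for heading); the function names and the gain `k` are illustrative assumptions, not the patent's formulas:

```python
# Hypothetical sketch: correct accumulated gyro angle error with an
# absolute reference, complementary-filter style. Gain `k` is assumed.
import math

def accel_roll_pitch(ax: float, ay: float, az: float):
    """Absolute roll and pitch (radians) from the measured gravity direction."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch

def compensate(gyro_angle: float, reference: float, k: float = 0.02) -> float:
    """Pull the drifting integrated gyro angle toward the absolute reference."""
    return (1.0 - k) * gyro_angle + k * reference

# Pen at rest and level: gravity along +Z, so the roll reference is 0.
roll_ref, _pitch_ref = accel_roll_pitch(0.0, 0.0, 9.81)
corrected = compensate(0.05, roll_ref)  # 0.05 rad of accumulated drift
print(corrected)
```

Applied every sample, the small gain steadily bleeds off the gyroscope's accumulated angle error without adding the accelerometer's short-term noise.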
Further, there are two luminescent marking points, namely a first luminescent marking point and a second luminescent marking point, and the direction sensor is a six-axis inertial motion sensor. The six-axis inertial motion sensor includes a three-axis micromachined gyroscope and a three-axis accelerometer. The first luminescent marking point is arranged in the pen tip region of the interaction pen, the second luminescent marking point is arranged in the pen tail region of the interaction pen, and the six-axis inertial motion sensor is arranged in the central region of the interaction pen. The first freedom degree posture position information includes X-axis displacement information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information and Z-axis rotation information. The second freedom degree posture position information includes X-axis displacement information, X-axis rotation information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information and Z-axis rotation information. The third freedom degree posture position information includes the calibrated X-axis displacement information, X-axis rotation information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information and Z-axis rotation information. When executed by the processor, the rigid body positioning program based on the interaction pen further realizes the following operations:
calibrating the X-axis displacement information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information and Z-axis rotation information in the second freedom degree posture position information according to the corresponding items in the first freedom degree posture position information, respectively, to obtain the calibrated X-axis displacement information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information and Z-axis rotation information in the third freedom degree posture position information;
calibrating the X-axis rotation information according to the calibrated X-axis displacement information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information and Z-axis rotation information in the third freedom degree posture position information together with the X-axis rotation information in the second freedom degree posture position information, to obtain the calibrated X-axis rotation information in the third freedom degree posture position information.
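As a hedged illustration of why two marker points fix only five of the six degrees of freedom: the tip-to-tail vector gives the pen's heading and elevation, but not the roll about the pen's own axis, which must come from the inertial sensor. The construction below is a generic assumption for illustration, not the patent's formula:

```python
# Hypothetical sketch: rotations about Z (heading) and Y (elevation) from
# the two marker points; roll about the pen axis stays unobservable here.
import math

def axis_rotations(tip, tail):
    dx, dy, dz = (tip[i] - tail[i] for i in range(3))
    rot_z = math.atan2(dy, dx)                    # heading in the XY plane
    rot_y = math.atan2(-dz, math.hypot(dx, dy))   # elevation of the pen axis
    return rot_z, rot_y

# Pen lying flat along +X: zero heading and zero elevation.
rot_z, rot_y = axis_rotations((1.0, 0.0, 0.0), (0.0, 0.0, 0.0))
print(rot_z, rot_y)
```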
Further, there are three luminescent marking points, namely a first luminescent marking point, a second luminescent marking point and a third luminescent marking point, and the direction sensor is a three-axis inertial motion sensor. The three-axis inertial motion sensor includes a three-axis micromachined gyroscope. The first luminescent marking point, the second luminescent marking point, the third luminescent marking point and the three-axis inertial motion sensor are all arranged in the pen tip region of the interaction pen, and the lines connecting the three luminescent marking points form a non-equilateral triangle. The first freedom degree posture position information includes X-axis displacement information, X-axis rotation information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information and Z-axis rotation information. The second freedom degree posture position information includes X-axis rotation information, Y-axis rotation information and Z-axis rotation information. The third freedom degree posture position information includes the calibrated X-axis displacement information, X-axis rotation information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information and Z-axis rotation information. When executed by the processor, the rigid body positioning program based on the interaction pen further realizes the following operations:
calibrating the X-axis rotation information, Y-axis rotation information and Z-axis rotation information in the second freedom degree posture position information according to the X-axis rotation information, Y-axis rotation information and Z-axis rotation information in the first freedom degree posture position information, respectively, to obtain the calibrated X-axis rotation information, Y-axis rotation information and Z-axis rotation information in the third freedom degree posture position information;
determining the calibrated X-axis displacement information, Y-axis displacement information and Z-axis displacement information in the third freedom degree posture position information according to the calibrated X-axis rotation information, Y-axis rotation information and Z-axis rotation information in the third freedom degree posture position information and the X-axis displacement information, Y-axis displacement information and Z-axis displacement information in the first freedom degree posture position information.
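To illustrate why three non-collinear marker points recover all three rotations from vision alone: they span a plane, so a full orthonormal frame (and hence an orientation) can be built directly from their coordinates. The construction below is a generic sketch under that assumption, not taken from the patent:

```python
# Hypothetical sketch: build an orthonormal frame from three marker points.
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def frame_from_markers(p1, p2, p3):
    """X axis along the first edge, Z along the triangle normal, Y completing
    a right-handed frame; a non-equilateral triangle keeps the labels unambiguous."""
    x = normalize(tuple(p2[i] - p1[i] for i in range(3)))
    z = normalize(cross(x, tuple(p3[i] - p1[i] for i in range(3))))
    y = cross(z, x)
    return x, y, z

x, y, z = frame_from_markers((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0))
print(x, y, z)
```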
Further, the visual sensor is a multi-camera infrared camera module, and when executed by the processor, the rigid body positioning program based on the interaction pen further realizes the following operation:
processing, according to the multi-camera ranging principle, the imaging pictures of each luminescent marking point acquired by each camera in the multi-camera infrared camera module, and determining the coordinates of each luminescent marking point in the imaging picture.
Further, when executed by the processor, the rigid body positioning program based on the interaction pen further realizes the following operations:
calibrating the multi-camera infrared camera module to eliminate distortion in the imaging pictures acquired by the multi-camera infrared camera module, and obtaining the intrinsic parameters and extrinsic parameters of each camera in the multi-camera infrared camera module;
performing stereo matching on the multi-camera infrared camera module according to the intrinsic parameters and extrinsic parameters of each camera, to obtain the parallax (disparity) data between any two cameras;
determining the coordinates of each luminescent marking point in the imaging picture according to the parallax data and the similar-triangles principle.
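The similar-triangles step reduces, for a rectified two-camera pair, to the textbook relation Z = f·B/d; the focal length, baseline and disparity values below are illustrative assumptions, not parameters from the patent:

```python
# Hypothetical sketch: marker depth from the disparity between two
# rectified cameras, by similar triangles.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Z = f * B / d, with f and d in pixels and B and Z in metres."""
    return focal_px * baseline_m / disparity_px

# A marker imaged 20 px apart by two cameras 0.1 m apart, f = 800 px:
z = depth_from_disparity(800.0, 0.1, 20.0)
print(z)
```

Larger disparities correspond to closer markers, which is why calibration and stereo matching precede this step.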
Further, when executed by the processor, the rigid body positioning program based on the interaction pen further realizes the following operations:
adjusting the first freedom degree posture position information and the second freedom degree posture position information respectively according to a preset precision adjustment criterion, so that the accuracy values of the first freedom degree posture position information and the second freedom degree posture position information are unified;
correspondingly, calibrating the second freedom degree posture position information according to the first freedom degree posture position information specifically includes:
calibrating the adjusted second freedom degree posture position information according to the adjusted first freedom degree posture position information.
It should be noted that, in this document, the terms "include", "comprise" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or system that includes a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, article or system. In the absence of further restrictions, an element limited by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or system that includes that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be realized by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc) as described above, and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device or the like) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the patent scope of the present invention. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present invention.
Claims (10)
1. A rigid body localization method based on an interaction pen, characterized in that the method comprises the following steps:
receiving an imaging picture of luminescent marking points acquired by a visual sensor, wherein there is at least one luminescent marking point and the luminescent marking points are arranged on the interaction pen without overlapping;
determining coordinates of each luminescent marking point in the imaging picture;
determining first freedom degree posture position information of the interaction pen in three-dimensional space according to the coordinates of each luminescent marking point in the imaging picture;
receiving second freedom degree posture position information of the interaction pen in three-dimensional space acquired by a direction sensor, wherein the direction sensor is arranged in the interaction pen, a type of the direction sensor is determined according to the number of the luminescent marking points, and a location of the direction sensor in the interaction pen is determined according to positions of the luminescent marking points on the interaction pen;
calibrating the second freedom degree posture position information according to the first freedom degree posture position information, to obtain third freedom degree posture position information of the interaction pen in three-dimensional space;
determining a position of the interaction pen in three-dimensional space according to the third freedom degree posture position information, so as to realize real-time positioning of the interaction pen in a virtual scene and interaction with objects in the virtual scene.
2. The rigid body localization method based on an interaction pen according to claim 1, characterized in that there is one luminescent marking point, the direction sensor is a nine-axis inertial motion sensor, the nine-axis inertial motion sensor includes a three-axis micromachined gyroscope, a three-axis accelerometer and a three-axis magnetometer, the luminescent marking point and the nine-axis inertial motion sensor are both arranged in the pen tip region of the interaction pen, and the luminescent marking point is adjacent to the nine-axis inertial motion sensor;
correspondingly, the first freedom degree posture position information includes X-axis displacement information, Y-axis displacement information and Z-axis displacement information; the second freedom degree posture position information includes X-axis displacement information, X-axis rotation information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information and Z-axis rotation information; and the third freedom degree posture position information includes calibrated X-axis displacement information, X-axis rotation information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information and Z-axis rotation information;
correspondingly, calibrating the second freedom degree posture position information according to the first freedom degree posture position information to obtain the third freedom degree posture position information of the interaction pen in three-dimensional space specifically includes:
calibrating the X-axis displacement information, Y-axis displacement information and Z-axis displacement information in the second freedom degree posture position information according to the X-axis displacement information, Y-axis displacement information and Z-axis displacement information in the first freedom degree posture position information, respectively, to obtain the calibrated X-axis displacement information, Y-axis displacement information and Z-axis displacement information in the third freedom degree posture position information;
compensating the accumulated angular error of the three-axis micromachined gyroscope according to the three-axis magnetometer and the three-axis accelerometer, to obtain the calibrated X-axis rotation information, Y-axis rotation information and Z-axis rotation information in the third freedom degree posture position information.
3. The rigid body localization method based on an interaction pen according to claim 1, characterized in that there are two luminescent marking points, namely a first luminescent marking point and a second luminescent marking point, the direction sensor is a six-axis inertial motion sensor, the six-axis inertial motion sensor includes a three-axis micromachined gyroscope and a three-axis accelerometer, the first luminescent marking point is arranged in the pen tip region of the interaction pen, the second luminescent marking point is arranged in the pen tail region of the interaction pen, and the six-axis inertial motion sensor is arranged in the central region of the interaction pen;
correspondingly, the first freedom degree posture position information includes X-axis displacement information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information and Z-axis rotation information; the second freedom degree posture position information includes X-axis displacement information, X-axis rotation information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information and Z-axis rotation information; and the third freedom degree posture position information includes calibrated X-axis displacement information, X-axis rotation information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information and Z-axis rotation information;
correspondingly, calibrating the second freedom degree posture position information according to the first freedom degree posture position information to obtain the third freedom degree posture position information of the interaction pen in three-dimensional space specifically includes:
calibrating the X-axis displacement information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information and Z-axis rotation information in the second freedom degree posture position information according to the corresponding items in the first freedom degree posture position information, respectively, to obtain the calibrated X-axis displacement information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information and Z-axis rotation information in the third freedom degree posture position information;
calibrating the X-axis rotation information according to the calibrated X-axis displacement information, Y-axis displacement information, Y-axis rotation information, Z-axis displacement information and Z-axis rotation information in the third freedom degree posture position information together with the X-axis rotation information in the second freedom degree posture position information, to obtain the calibrated X-axis rotation information in the third freedom degree posture position information.
4. the rigid body localization method based on interaction pen as described in claim 1, which is characterized in that the luminescent marking point is three
A, respectively the first luminescent marking point, the second luminescent marking point and third luminescent marking point, the direction sensor are used to for three axis
Property motion sensor, three axis inertia motion sensing includes three axis micro machine gyroscopes, the first luminescent marking point, second
Luminescent marking point, third luminescent marking point and the three axis inertia motion sensor are all set in the pen tip area of the interaction pen
Domain, and the line of three luminescent marking points constitutes a non-equilateral triangle;
Correspondingly, the first freedom degree posture position information includes the displacement information of X-direction, the rotation information of X-axis, Y-axis
The displacement information in direction, the rotation information of Y-axis, the displacement information of Z-direction and the rotation information of Z axis, second freedom degree
Posture position information includes the rotation information of the rotation information of X-axis, the rotation information of Y-axis and Z axis, the third freedom degree posture
Location information include calibration after the displacement information of X-direction, the rotation information of X-axis, Y direction displacement information, Y-axis rotation
The rotation information of transfering the letter breath, the displacement information of Z-direction and Z axis;
Correspondingly, the calibrating of the second freedom degree posture position information according to the first freedom degree posture position information, to obtain the third freedom degree posture position information of the interaction pen in three-dimensional space, specifically includes:
calibrating, according to the rotation information about the X-axis, the Y-axis and the Z-axis in the first freedom degree posture position information, the rotation information about the X-axis, the Y-axis and the Z-axis in the second freedom degree posture position information respectively, to obtain the calibrated rotation information about the X-axis, the Y-axis and the Z-axis of the third freedom degree posture position information;
determining, according to the calibrated rotation information about the X-axis, the Y-axis and the Z-axis of the third freedom degree posture position information and the displacement information in the X-axis, Y-axis and Z-axis directions in the first freedom degree posture position information, the calibrated displacement information in the X-axis, Y-axis and Z-axis directions of the third freedom degree posture position information.
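The two-step calibration recited above (first calibrate the gyroscope rotations against the optical rotations, then carry the optical displacements through) could be sketched as follows. This is an illustrative reading only: the claim does not specify the fusion rule, so the complementary-filter weight `alpha` and the dictionary layout are assumptions.

```python
def calibrate_pose(first_pose, second_pose, alpha=0.98):
    """Sketch of the claimed calibration step (assumed fusion rule).

    first_pose  -- 6-DOF optical pose: tx/ty/tz displacements, rx/ry/rz rotations
    second_pose -- 3-DOF gyroscope pose: rx/ry/rz rotations only
    alpha       -- assumed complementary-filter weight given to the gyroscope
    Returns the third (calibrated) 6-DOF pose.
    """
    third_pose = {}
    # Step 1: calibrate each gyroscope rotation axis against the optical rotation.
    for axis in ("rx", "ry", "rz"):
        third_pose[axis] = alpha * second_pose[axis] + (1.0 - alpha) * first_pose[axis]
    # Step 2: the calibrated displacements are determined from the optical
    # displacements (here carried through unchanged).
    for axis in ("tx", "ty", "tz"):
        third_pose[axis] = first_pose[axis]
    return third_pose
```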
5. The rigid body localization method based on an interaction pen according to any one of claims 1 to 4, characterized in that the visual sensor is a multi-view infrared camera module;
Correspondingly, the determining of the coordinates of each luminescent marking point in the imaging picture specifically includes:
processing, according to the multi-view ranging principle, the imaging picture of each luminescent marking point collected by each camera in the multi-view infrared camera module, and determining the coordinates of each luminescent marking point in the imaging picture.
6. The rigid body localization method based on an interaction pen according to claim 5, characterized in that the processing, according to the multi-view ranging principle, of the imaging picture of each luminescent marking point collected by each camera in the multi-view infrared camera module, and determining the coordinates of each luminescent marking point in the imaging picture, specifically includes:
calibrating the multi-view infrared camera module to eliminate the distortion of the imaging picture collected by the multi-view infrared camera module, and obtaining the intrinsic parameters and extrinsic parameters of each camera in the multi-view infrared camera module;
performing stereo matching on the multi-view infrared camera module according to the intrinsic parameters and the extrinsic parameters, to obtain the disparity data between any two cameras;
determining, according to the disparity data and the similar-triangle theory, the coordinates of each luminescent marking point in the imaging picture.
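For a rectified pair of cameras inside the module, the similar-triangle step of claim 6 reduces to the standard disparity-to-depth relation z = f·B/d. A minimal sketch under assumed pinhole parameters (all names illustrative; a real module would use the calibrated intrinsic and extrinsic parameters from the steps above):

```python
def locate_marker(u_left, u_right, v, focal_px, baseline_m, cx, cy):
    """Recover a marker's 3-D position from one rectified camera pair.

    u_left, u_right -- the marker's horizontal pixel coordinate in each image
    v               -- its vertical pixel coordinate (equal rows when rectified)
    focal_px        -- shared focal length in pixels (from intrinsic calibration)
    baseline_m      -- distance between the two camera centres, in metres
    cx, cy          -- principal point of the left camera
    """
    disparity = u_left - u_right            # disparity data for this camera pair
    if disparity <= 0:
        raise ValueError("marker must lie in front of both cameras")
    z = focal_px * baseline_m / disparity   # similar triangles: z = f * B / d
    x = (u_left - cx) * z / focal_px        # back-project through the pinhole
    y = (v - cy) * z / focal_px
    return (x, y, z)
```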
7. The rigid body localization method based on an interaction pen according to any one of claims 1 to 4, characterized in that before the calibrating of the second freedom degree posture position information according to the first freedom degree posture position information, the method further includes:
adjusting, according to a preset precision adjustment criterion, the first freedom degree posture position information and the second freedom degree posture position information respectively, so that the accuracy values of the first freedom degree posture position information and the second freedom degree posture position information are unified;
Correspondingly, the calibrating of the second freedom degree posture position information according to the first freedom degree posture position information specifically includes:
calibrating the adjusted second freedom degree posture position information according to the adjusted first freedom degree posture position information.
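The claim leaves the "preset precision adjustment criterion" unspecified; one plausible reading is simply rounding both poses to a common number of decimals before fusing them. A hypothetical sketch under that assumption:

```python
def unify_precision(first_pose, second_pose, decimals=3):
    """Round both pose dictionaries to a common precision so their accuracy
    values match before calibration (assumed criterion; the patent does not
    state how the unification is performed)."""
    unify = lambda pose: {k: round(v, decimals) for k, v in pose.items()}
    return unify(first_pose), unify(second_pose)
```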
8. A rigid body positioning device based on an interaction pen, characterized in that the device includes:
a first receiving module, configured to receive the imaging picture of the luminescent marking points collected by a visual sensor, there being at least one luminescent marking point, and each luminescent marking point being arranged on the interaction pen without overlapping;
a first determining module, configured to determine the coordinates of each luminescent marking point in the imaging picture;
a second determining module, configured to determine, according to the coordinates of each luminescent marking point in the imaging picture, the first freedom degree posture position information of the interaction pen in three-dimensional space;
a second receiving module, configured to receive the second freedom degree posture position information of the interaction pen in three-dimensional space collected by a direction sensor, the direction sensor being arranged in the interaction pen, the type of the direction sensor being determined according to the number of the luminescent marking points, and the location of the direction sensor in the interaction pen being determined according to the positions of the luminescent marking points on the interaction pen;
a calibration module, configured to calibrate the second freedom degree posture position information according to the first freedom degree posture position information, to obtain the third freedom degree posture position information of the interaction pen in three-dimensional space;
a third determining module, configured to determine, according to the third freedom degree posture position information, the position of the interaction pen in three-dimensional space, so as to realize real-time positioning of the interaction pen in a virtual scene and interaction with objects in the virtual scene.
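Read as software, the claimed modules form a simple pipeline: receive marker images, solve the optical 6-DOF pose, receive the gyroscope 3-DOF pose, calibrate one against the other, and report the pen's position. An illustrative skeleton (the injected callables stand in for sensor and solver interfaces the claim does not define):

```python
class RigidBodyLocator:
    """Skeleton mirroring the claimed modules; all constructor arguments are
    hypothetical stand-ins for the real sensor/solver interfaces."""

    def __init__(self, receive_frames, receive_imu_pose, solve_optical_pose, calibrate):
        self.receive_frames = receive_frames          # first receiving module
        self.receive_imu_pose = receive_imu_pose      # second receiving module
        self.solve_optical_pose = solve_optical_pose  # first + second determining modules
        self.calibrate = calibrate                    # calibration module

    def locate(self):
        frames = self.receive_frames()                # marker imaging pictures
        first_pose = self.solve_optical_pose(frames)  # 6-DOF from marker coordinates
        second_pose = self.receive_imu_pose()         # 3-DOF from the direction sensor
        # Third determining module: the calibrated pose fixes the pen in 3-D space.
        return self.calibrate(first_pose, second_pose)
```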
9. A terminal device, characterized in that the terminal device includes: a memory, a processor, and a rigid body positioning program based on an interaction pen that is stored on the memory and executable on the processor, the rigid body positioning program based on an interaction pen being configured to implement the steps of the rigid body localization method based on an interaction pen according to any one of claims 1 to 7.
10. A storage medium, characterized in that the storage medium is a computer-readable storage medium, the computer-readable storage medium stores a rigid body positioning program based on an interaction pen, and the rigid body positioning program based on an interaction pen, when executed by a processor, implements the steps of the rigid body localization method based on an interaction pen according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810696389.6A CN108958483A (en) | 2018-06-29 | 2018-06-29 | Rigid body localization method, device, terminal device and storage medium based on interaction pen |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108958483A true CN108958483A (en) | 2018-12-07 |
Family
ID=64487968
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810696389.6A Withdrawn CN108958483A (en) | 2018-06-29 | 2018-06-29 | Rigid body localization method, device, terminal device and storage medium based on interaction pen |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108958483A (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100328267A1 (en) * | 2009-06-30 | 2010-12-30 | Hon Hai Precision Industry Co., Ltd. | Optical touch device |
CN103389807A (en) * | 2013-07-16 | 2013-11-13 | 江苏惠通集团有限责任公司 | Data processing method for space mouse and control method for mouse pointer |
CN104007846A (en) * | 2014-05-22 | 2014-08-27 | 深圳市宇恒互动科技开发有限公司 | Three-dimensional figure generating method and electronic whiteboard system |
CN104834917A (en) * | 2015-05-20 | 2015-08-12 | 北京诺亦腾科技有限公司 | Mixed motion capturing system and mixed motion capturing method |
CN106774844A (en) * | 2016-11-23 | 2017-05-31 | 上海创米科技有限公司 | A kind of method and apparatus for virtual positioning |
CN106774880A (en) * | 2010-12-22 | 2017-05-31 | Z空间股份有限公司 | The three-dimensional tracking in space of user control |
US20170220133A1 (en) * | 2014-07-31 | 2017-08-03 | Hewlett-Packard Development Company, L.P. | Accurately positioning instruments |
CN206413625U (en) * | 2016-12-06 | 2017-08-18 | 北京臻迪科技股份有限公司 | A kind of underwater robot |
CN107102749A (en) * | 2017-04-23 | 2017-08-29 | 吉林大学 | A kind of three-dimensional pen type localization method based on ultrasonic wave and inertial sensor |
CN107289931A (en) * | 2017-05-23 | 2017-10-24 | 北京小鸟看看科技有限公司 | A kind of methods, devices and systems for positioning rigid body |
CN107704106A (en) * | 2017-10-17 | 2018-02-16 | 宁波视睿迪光电有限公司 | Attitude positioning method, device and electronic equipment |
CN108196701A (en) * | 2018-01-03 | 2018-06-22 | 青岛海信电器股份有限公司 | Determine the method, apparatus of posture and VR equipment |
- 2018-06-29: CN application CN201810696389.6A filed; published as CN108958483A/en; status: Withdrawn (not active)
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112015269A (en) * | 2020-08-03 | 2020-12-01 | 深圳市瑞立视多媒体科技有限公司 | Display correction method and device for head display device and storage medium |
CN112085848A (en) * | 2020-08-21 | 2020-12-15 | 深圳市瑞立视多媒体科技有限公司 | Method and equipment for optimally selecting rigid body mark points and optical motion capture system |
CN112350458A (en) * | 2020-10-22 | 2021-02-09 | 华为技术有限公司 | Method and device for determining position and posture of terminal and related equipment |
CN112350458B (en) * | 2020-10-22 | 2023-03-10 | 华为技术有限公司 | Method and device for determining position and posture of terminal and related equipment |
CN112433628A (en) * | 2021-01-28 | 2021-03-02 | 深圳市瑞立视多媒体科技有限公司 | Rigid body pose determination method and device of double-light-ball interactive pen and computer equipment |
CN112433629A (en) * | 2021-01-28 | 2021-03-02 | 深圳市瑞立视多媒体科技有限公司 | Rigid body posture determination method and device of double-light-ball interactive pen and computer equipment |
CN112433628B (en) * | 2021-01-28 | 2021-06-08 | 深圳市瑞立视多媒体科技有限公司 | Rigid body pose determination method and device of double-light-ball interactive pen and computer equipment |
CN113268149A (en) * | 2021-01-28 | 2021-08-17 | 深圳市瑞立视多媒体科技有限公司 | Rigid body pose determination method and device of double-light-ball interactive pen and computer equipment |
CN113268149B (en) * | 2021-01-28 | 2024-04-16 | 深圳市瑞立视多媒体科技有限公司 | Rigid body pose determining method and device of double-light ball interactive pen and computer equipment |
CN115937725A (en) * | 2023-03-13 | 2023-04-07 | 江西科骏实业有限公司 | Attitude display method, device and equipment of space interaction device and storage medium thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108958483A (en) | Rigid body localization method, device, terminal device and storage medium based on interaction pen | |
US20210233275A1 (en) | Monocular vision tracking method, apparatus and non-transitory computer-readable storage medium | |
CN104781849B (en) | Monocular vision positions the fast initialization with building figure (SLAM) simultaneously | |
CN110782499B (en) | Calibration method and calibration device for augmented reality equipment and terminal equipment | |
CN106871878B (en) | Hand-held range unit and method, the storage medium that spatial model is created using it | |
TWI505709B (en) | System and method for determining individualized depth information in augmented reality scene | |
CN108537845A (en) | Pose determines method, apparatus and storage medium | |
US20220398767A1 (en) | Pose determining method and apparatus, electronic device, and storage medium | |
CN110276774B (en) | Object drawing method, device, terminal and computer-readable storage medium | |
EP4105766A1 (en) | Image display method and apparatus, and computer device and storage medium | |
KR20230028532A (en) | Creation of ground truth datasets for virtual reality experiences | |
CN108961343A (en) | Construction method, device, terminal device and the readable storage medium storing program for executing of virtual coordinate system | |
TW201101812A (en) | Derivation of 3D information from single camera and movement sensors | |
US10948994B2 (en) | Gesture control method for wearable system and wearable system | |
CN104279960A (en) | Method for measuring size of object by mobile equipment | |
CN109668545A (en) | Localization method, locator and positioning system for head-mounted display apparatus | |
CN110337674A (en) | Three-dimensional rebuilding method, device, equipment and storage medium | |
EP3291535B1 (en) | Method and apparatus for generating data representative of a bokeh associated to light-field data | |
CN111811462A (en) | Large-component portable visual ranging system and method in extreme environment | |
CN108430032A (en) | A kind of method and apparatus for realizing that VR/AR device locations are shared | |
CN107534202B (en) | A kind of method and apparatus measuring antenna attitude | |
CN110268701A (en) | Imaging device | |
CN107864372A (en) | Solid picture-taking method, apparatus and terminal | |
CN110152293A (en) | Manipulate the localization method of object and the localization method and device of device, game object | |
CN108151647A (en) | A kind of image processing method, device and mobile terminal |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | |
Application publication date: 20181207 |