CN106568434A - Method and system for positioning virtual reality space - Google Patents
- Publication number
- CN106568434A (application CN201610988411.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- light
- spherical light-emitting body
- image pickup device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The invention provides a method and system for positioning in a virtual reality space. The system comprises a processing unit and a positioning unit. The processing unit includes an image pickup device and an image recognition device, which are electrically connected. The positioning unit comprises at least two light-emitting devices calibrated in advance. The image pickup device captures images of the light-emitting devices, and the image recognition device recognizes those images. Compared with the prior art, a first spherical light-emitting body and a second spherical light-emitting body are calibrated in advance, and a flash is triggered whenever the number of spherical-light-emitting-body images captured by the image pickup device changes. This distinguishes the image of the first spherical light-emitting body from that of the second, avoiding confusion during positioning. Requiring the image count to be nonzero before flashing eliminates meaningless flashes and further simplifies the judging conditions of the processing unit.
Description
Technical field
The present invention relates to the field of spatial positioning and, more particularly, to a method and system for positioning in a virtual reality space.
Background technology
Spatial positioning is typically performed optically or ultrasonically: a model is built, and the spatial position of the object under measurement is derived from it. In current virtual reality positioning, a common approach is to mount a glowing spherical light-emitting body on top of each part to be positioned, such as a handle, capture images of the spherical body with a scanning device and an image pickup device, and compute the body's position to obtain the position of the part. Because more than one part usually needs to be positioned in the space, the arithmetic processor often cannot correctly match the spherical-body images in the captured frame to the physical hardware, leading to data confusion. Some positioning devices distinguish different hardware by using spherical bodies of different colors, but this significantly increases the cost of the image recognition equipment and slows down processing.
Summary of the invention
To overcome the drawbacks of current virtual reality space positioning, namely data confusion, expensive image recognition equipment, and slow processing, the present invention provides a method and system for virtual reality space positioning that avoid confusion while using low-cost image recognition equipment and achieving faster processing.
The technical solution adopted by the present invention is as follows. A method for positioning in a virtual reality space is provided: when the image recognition device detects that the number of light-emitting-device images in the frame captured by the image pickup device is nonzero and that number changes, one pre-calibrated light-emitting device is flashed; the image recognition device detects the frames captured before and after the flash, and from those frames calibrates the correspondence between the at least two light-emitting devices and their images in the frame.
Preferably, while the image pickup device is not yet opened, the arithmetic processor defaults the number of light-emitting-device images in the captured frame to zero.
Preferably, when the image recognition device detects that the number of light-emitting-device images in the frame captured by the image pickup device is nonzero and that number changes, the arithmetic processor commands the calibrated light-emitting device to flash.
Preferably, from the frames captured by the image pickup device before and after the flash, the processing unit uses the correspondence between the first and second spherical light-emitting bodies and their images to determine the motion paths of the first and second spherical light-emitting bodies.
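The triggering condition just described can be sketched as a small predicate. This is an illustrative sketch only; the function and variable names (`should_flash`, `prev_count`, `new_count`) are assumptions, not taken from the patent.

```python
def should_flash(prev_count: int, new_count: int) -> bool:
    """Flash the pre-calibrated light-emitting device only when the number
    of detected light-emitting-device images changes AND the new count is
    nonzero, which avoids meaningless flashes on an empty frame."""
    return new_count != 0 and new_count != prev_count

# Before the image pickup device opens, the count defaults to zero, so the
# first frame containing any spherical body triggers a calibration flash.
assert should_flash(0, 1)      # "0-1" state: flash
assert should_flash(1, 2)      # "1-2" state: flash
assert should_flash(2, 1)      # "2-1" state: flash
assert not should_flash(1, 0)  # count dropped to zero: no flash
assert not should_flash(2, 2)  # count unchanged: no flash
```

The nonzero requirement is what keeps the processing unit's judging conditions simple: a frame with no bodies in it never causes a flash.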
A virtual reality space positioning system is also provided, comprising a processing unit and a positioning unit. The processing unit includes an image pickup device and an image recognition device, which are electrically connected. The positioning unit includes at least two pre-calibrated light-emitting devices. The image pickup device captures images of the light-emitting devices, and the image recognition device recognizes the frames captured by the image pickup device. The processing unit further includes an arithmetic processor electrically connected to the image recognition device.
Preferably, the processing unit further includes a processing-end wireless transmission module electrically connected to the arithmetic processor, and the positioning unit further includes a mobile-end wireless transmission module; the two modules exchange information wirelessly.
Preferably, the positioning unit further includes a mobile-end embedded control module, which is electrically connected to the light-emitting devices and to the mobile-end wireless transmission module.
Preferably, the positioning unit includes a first handle carrying a first spherical light-emitting body and a second handle carrying a second spherical light-emitting body.
Compared with the prior art, the present invention calibrates the first and second spherical light-emitting bodies in advance and flashes one of them whenever the number of spherical-body images captured by the image pickup device changes, thereby distinguishing the image of the first spherical body from that of the second and preventing confusion during positioning. Requiring the image count to be nonzero before flashing eliminates meaningless flashes and further simplifies the judging conditions of the processing unit. Using the image recognition device to compare the frames captured before and after the flash makes it straightforward to calibrate the images of the two spherical bodies; compared with distinguishing them by different optical frequencies, this greatly reduces cost and has clear economic value. Defining the image count as zero while the image pickup device is not yet opened guarantees that calibration can begin the moment the device opens and a spherical body enters the frame, preserving the integrity of the positioning method. Wireless transmission eases communication between the handheld device and the processing unit, and the mobile-end embedded control module ensures that commands from the processing unit are executed effectively and in order. Calibrating the first and second spherical bodies also enables the processing unit to compute their motion paths.
Description of the drawings
The invention is further described below in conjunction with the drawings and embodiments, in which:
Fig. 1 is a module diagram of the virtual reality space positioning system of the present invention;
Fig. 2 is a first schematic diagram of light-emitting-device images captured by the image pickup device;
Fig. 3 is a second schematic diagram of light-emitting-device images captured by the image pickup device;
Fig. 4 is a schematic diagram of light-emitting-device recognition in the "coincidence" state;
Fig. 5 is a schematic diagram of light-emitting-device recognition in the "0-1" state;
Fig. 6 is a schematic diagram of light-emitting-device recognition in the "0-2" state;
Fig. 7 is a schematic diagram of light-emitting-device recognition in the "1-2" state;
Fig. 8 is a schematic diagram of light-emitting-device recognition in the "2-1" state.
Specific embodiment
To overcome the drawbacks of current virtual reality space positioning, namely data confusion, expensive image recognition equipment, and slow processing, the present invention provides a method and system for virtual reality space positioning that avoid confusion while using low-cost image recognition equipment and achieving faster processing.
For a clearer understanding of the technical features, objects, and effects of the present invention, specific embodiments are now described in detail with reference to the accompanying drawings.
Referring to Fig. 1, the virtual reality space positioning system of the present invention includes a processing unit 10 and a positioning unit 20. The processing unit 10 includes an image pickup device 11, an image recognition device 13, an arithmetic processor 15, and a processing-end wireless transmission module 17. The image recognition device 13 is electrically connected to the image pickup device 11 and to the arithmetic processor 15, and the arithmetic processor 15 is electrically connected to the processing-end wireless transmission module 17. The image pickup device 11 captures frames and transmits them to the image recognition device 13, which processes them and passes the results to the arithmetic processor 15 for further processing. The positioning unit 20 includes at least two independent light-emitting devices 21, a mobile-end embedded control module 22, an attitude measuring device 23, a power module 24, an operation device 25, and a mobile-end wireless transmission module 27. The mobile-end embedded control module 22 is electrically connected to the power module 24, and both are electrically connected to the light-emitting devices 21, the attitude measuring device 23, the operation device 25, and the mobile-end wireless transmission module 27. The processing-end wireless transmission module 17 and the mobile-end wireless transmission module 27 exchange information over the wireless link. The attitude measuring device 23 measures the spatial attitude and angle of the positioning unit 20, while the operation device 25 accepts user operations and issues command information. The measurement data and command information are delivered as electrical signals to the mobile-end embedded control module 22, which forwards them through the mobile-end wireless transmission module 27 to the processing-end wireless transmission module 17; the latter delivers the received data as electrical signals to the arithmetic processor 15 for processing.
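The command path from the processing unit to a handle can be pictured as a chain of hops. The sketch below is purely illustrative; the `Command` type, the hop names, and the string format are assumptions made for this example and do not appear in the patent.

```python
from dataclasses import dataclass


@dataclass
class Command:
    """An illustrative command relayed from the processing unit to the
    handle's embedded control module over the wireless link."""
    target: str  # e.g. "first_body"
    action: str  # e.g. "flash"


def relay(cmd: Command) -> str:
    """Trace the hops a command takes, mirroring the module chain in
    Fig. 1: arithmetic processor 15 -> processing-end wireless module 17
    -> mobile-end wireless module 27 -> embedded control module 22."""
    hops = ["arithmetic processor", "processing-end radio",
            "mobile-end radio", "embedded control module"]
    return " -> ".join(hops + [f"{cmd.target}:{cmd.action}"])


assert relay(Command("first_body", "flash")).endswith("first_body:flash")
```

The same chain is traversed in reverse by attitude and button data flowing back from the handle.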
Referring to Fig. 2, the positioning unit 20 is embodied, by way of example, as a first handle 201 and a second handle 202. In this embodiment the positioning unit carries spherical light-emitting bodies 210, namely a first spherical light-emitting body 211 mounted on the first handle 201 and a second spherical light-emitting body 212 mounted on the second handle 202; both bodies light up during positioning. Calibrating the first spherical body 211 and the second spherical body 212 in advance avoids confusion and makes positioning convenient. The image pickup device 11 captures frames; frame 30 is an image of the positioning unit 20 captured by the image pickup device 11, in which a first image 301 is the image of the first spherical body 211 and a second image 302 is the image of the second spherical body 212. In frame 30 the image of the first spherical body 211 is a complete circle, whereas only part of the second spherical body 212 was captured by the image pickup device 11, so the second image 302 is an incomplete circle.
Referring to Fig. 3, Fig. 3 shows a special case in which the image pickup device 11 captures the positioning unit 20. When the image pickup device 11, the first spherical body 211, and the second spherical body 212 lie on a straight line, frame 30 may contain only a single third image 303, because the first spherical body 211 blocks the light of the second spherical body 212. In this case the loss of the image of the second spherical body 212 may cause positioning errors.
Referring to Fig. 4, we call the image state shown in Fig. 4 the "coincidence" state. A fourth image 304 and a fifth image 305 of the positioning unit 20 appear in frame 30, and as the positioning unit 20 moves, the two images move separately. At this stage the arithmetic processor 15 can track the motion of the fourth image 304 and the fifth image 305 by comparing successive frames 30 captured by the image pickup device 11. When the first spherical body 211 blocks the light of the second spherical body 212, or vice versa, only a sixth image 306 appears in frame 30; one tracking target has disappeared, so the arithmetic processor 15 cannot tell whether the sixth image 306 resulted from the movement of the fourth image 304 or of the fifth image 305. As the positioning unit 20 keeps moving, the first spherical body 211 and the second spherical body 212 separate again, and a seventh image 307 and an eighth image 308 appear in frame 30. The seventh image 307 may be the image of the spherical body 210 corresponding to the fourth image 304 or the one corresponding to the fifth image 305, and likewise for the eighth image 308. For an ordinary positioning device, matching these images to their spherical bodies 210 is therefore ambiguous.
Referring to Figs. 5-8, we resolve this ambiguity through four state transitions. Whenever the number of spherical-body images in frame 30 changes and that number is nonzero, the first spherical body 211 flashes once. The image recognition device 13 counts the spherical-body images 210 in frame 30; when it detects that the count has changed and is nonzero, the arithmetic processor 15 sends a command to the processing-end wireless transmission module 17, which forwards it wirelessly via the mobile-end wireless transmission module 27 to the mobile-end embedded control module 22, and the module 22 makes the first spherical body 211 flash. The image recognition device 13 then locks onto the image corresponding to the first spherical body 211 by comparing frame 30 before and after the flash. To keep the images in frame 30 matched to the first spherical body 211 and the second spherical body 212 as reliably as possible, we define the image count maintained by the arithmetic processor 15 to default to zero while the image pickup device 11 is not yet opened.
Referring to Fig. 5, we call the frame transition shown in Fig. 5 the "0-1" state. In the "0-1" state the number of spherical-body images in frame 30 goes from 0 to 1; the image recognition device 13 detects that the count has changed and is nonzero, so the first spherical body 211 flashes while the image recognition device 13 watches frame 30. If a ninth image 309 disappears during the flash, it is the image of the first spherical body 211; if it does not disappear, it is the image of the second spherical body 212.
Referring to Fig. 6, we call the frame transition shown in Fig. 6 the "0-2" state. In the "0-2" state the number of spherical-body images in frame 30 goes from 0 to 2; the image recognition device 13 detects that the count has changed and is nonzero, so the first spherical body 211 flashes while the image recognition device 13 watches frame 30. If a tenth image 310 disappears during the flash and an eleventh image 311 does not, the tenth image 310 is the image of the first spherical body 211 and the eleventh image 311 is the image of the second spherical body 212; conversely, if the tenth image 310 does not disappear and the eleventh image 311 does, the tenth image 310 belongs to the second spherical body 212 and the eleventh image 311 to the first spherical body 211.
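The disappearance test used in the "0-1" and "0-2" states reduces to a single rule, sketched below. The function name and labels are illustrative assumptions, not terminology from the patent.

```python
def label_image(disappeared_during_flash: bool) -> str:
    """The first spherical body is the one commanded to flash, so an image
    that vanishes from the frame during the flash belongs to the first
    body; an image that stays lit belongs to the second body."""
    return "first body" if disappeared_during_flash else "second body"

# "0-1" state: the single new image vanished during the flash.
assert label_image(True) == "first body"
# "0-2" state: the image that stayed lit belongs to the second body.
assert label_image(False) == "second body"
```

Applying the rule to every image in the frame handles both states with no extra cases.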
Referring to Fig. 7, we call the frame transition shown in Fig. 7 the "1-2" state. In the "1-2" state the number of spherical-body images in frame 30 goes from 1 to 2; the image recognition device 13 detects that the count has changed and is nonzero, so the first spherical body 211 flashes while the image recognition device 13 watches frame 30. If a thirteenth image 313 disappears during the flash and a fourteenth image 314 does not, the thirteenth image 313 belongs to the first spherical body 211 and the fourteenth image 314 to the second spherical body 212; conversely, if the thirteenth image 313 does not disappear and the fourteenth image 314 does, the thirteenth image 313 belongs to the second spherical body 212 and the fourteenth image 314 to the first spherical body 211. Because the correspondence of a twelfth image 312 was determined beforehand, once the thirteenth image 313 and the fourteenth image 314 have been assigned we can also tell whether the movement of the twelfth image 312 produced the thirteenth image 313 or the fourteenth image 314.
Referring to Fig. 8, we call the frame transition shown in Fig. 8 the "2-1" state. In the "2-1" state the number of spherical-body images in frame 30 goes from 2 to 1; the image recognition device 13 detects that the count has changed and is nonzero, so the first spherical body 211 flashes while the image recognition device 13 watches frame 30. If a seventeenth image 317 disappears during the flash, it belongs to the first spherical body 211; if it does not, it belongs to the second spherical body 212. Because the correspondences of a fifteenth image 315 and a sixteenth image 316 were determined beforehand, once the seventeenth image 317 has been assigned we can also tell whether it resulted from the motion of the fifteenth image 315 or of the sixteenth image 316.
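All four state transitions apply the same calibration step, which can be sketched generically as follows. The image identifiers (`img_a`, `img_b`, `img_c`) and the function name are illustrative assumptions for this sketch only.

```python
def calibrate_by_flash(tracked: set[str], during_flash: set[str]) -> dict[str, str]:
    """Attribute each tracked image to a spherical body: the first body
    is the one that flashes, so an image absent from the mid-flash frame
    belongs to the first body, and an image still present belongs to the
    second body."""
    return {img: ("first" if img not in during_flash else "second")
            for img in tracked}

# "1-2" (or "0-2") state: two images tracked; img_a vanished mid-flash.
assert calibrate_by_flash({"img_a", "img_b"}, {"img_b"}) == {
    "img_a": "first", "img_b": "second"}
# "2-1" state: the single merged image stayed lit during the flash.
assert calibrate_by_flash({"img_c"}, {"img_c"}) == {"img_c": "second"}
```

Because only one body ever flashes, a single set difference is enough to resolve every ambiguity the four states can produce.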
Returning to the "coincidence" state: it is simply a "2-1" state followed by a "1-2" state. The assignments of the fourth image 304 and the fifth image 305 were determined beforehand; suppose the fourth image 304 belongs to the first spherical body 211 and the fifth image 305 to the second spherical body 212. When the first spherical body 211 blocks the light of the second spherical body 212, or vice versa, only the sixth image 306 remains in frame 30 and the image count drops from 2 to 1. The image recognition device 13 detects that the count has changed and is nonzero, so the first spherical body 211 flashes while the image recognition device 13 watches frame 30: if the sixth image 306 disappears during the flash it belongs to the first spherical body 211, otherwise to the second spherical body 212. As the positioning unit 20 keeps moving, the two spherical bodies separate, the seventh image 307 and the eighth image 308 appear in frame 30, and the image count rises from 1 to 2. The image recognition device 13 again detects a nonzero change, the first spherical body 211 flashes, and the image recognition device 13 watches frame 30: if the seventh image 307 disappears during the flash and the eighth image 308 does not, the seventh image 307 belongs to the first spherical body 211 and the eighth image 308 to the second spherical body 212; conversely, if the seventh image 307 does not disappear and the eighth image 308 does, the seventh image 307 belongs to the second spherical body 212 and the eighth image 308 to the first spherical body 211. As an example, suppose the seventh image 307 belongs to the second spherical body 212 and the eighth image 308 to the first spherical body 211. Then the image of the first spherical body 211 moved from the position of the fourth image 304 to that of the sixth image 306 and on to that of the eighth image 308, while the image of the second spherical body 212 moved from the position of the fifth image 305 to that of the sixth image 306 and on to that of the seventh image 307. In this way the processing unit 10 can determine the motion paths of the first spherical body 211 and the second spherical body 212.
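The path reconstruction in the example above can be sketched numerically. The coordinates below are invented purely for illustration; only the image-to-body assignments follow the assumed outcome of the flash tests in the text.

```python
# Positions of the numbered images from the "coincidence" walkthrough
# (hypothetical 2-D coordinates; the patent does not give any).
positions = {
    "img4": (0, 0), "img5": (4, 0),  # before occlusion
    "img6": (2, 0),                  # merged image during occlusion
    "img7": (4, 1), "img8": (0, 1),  # after separation
}

# Flash tests resolved: img4, img6, img8 belong to the first body;
# img5, img6, img7 to the second (the assumption made in the example).
path_first = [positions[k] for k in ("img4", "img6", "img8")]
path_second = [positions[k] for k in ("img5", "img6", "img7")]

assert path_first == [(0, 0), (2, 0), (0, 1)]
assert path_second == [(4, 0), (2, 0), (4, 1)]
```

Both paths share the merged position of the sixth image, which is exactly why the flash test is needed to tell them apart afterwards.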
Compared with the prior art, the present invention calibrates the first spherical body 211 and the second spherical body 212 in advance and flashes one of them whenever the number of spherical-body images 210 captured by the image pickup device 11 changes, thereby distinguishing the images of the two spherical bodies and preventing confusion during positioning. Requiring the image count to be nonzero before flashing eliminates meaningless flashes and further simplifies the judging conditions of the processing unit 10. Using the image recognition device 13 to compare the frames captured by the image pickup device 11 before and after the flash makes it straightforward to calibrate the images of the first spherical body 211 and the second spherical body 212; compared with distinguishing them by different optical frequencies, this greatly reduces cost and has clear economic value. Defining the image count held by the arithmetic processor 15 as zero while the image pickup device 11 is not yet opened guarantees that calibration can begin the moment the device opens and a spherical body 210 enters the frame, preserving the integrity of the positioning method. Wireless transmission eases communication between the handheld device and the processing unit 10, and the mobile-end embedded control module 22 ensures that commands from the processing unit 10 are executed effectively and in order. Calibrating the first spherical body 211 and the second spherical body 212 also enables the processing unit 10 to compute their motion paths.
Embodiments of the invention have been described above with reference to the drawings, but the invention is not limited to these specific embodiments, which are illustrative rather than restrictive. Guided by the present invention, those of ordinary skill in the art may devise many further forms without departing from the spirit of the invention and the scope of the claims, and all such forms fall within the protection of the present invention.
Claims (8)
1. A method for positioning in a virtual reality space, characterized in that, when an image recognition device detects that the number of light-emitting-device images in a frame captured by an image pickup device is nonzero and that number changes, one pre-calibrated light-emitting device is flashed; the image recognition device detects the frames captured by the image pickup device before and after the flash, and from those frames calibrates the correspondence between the at least two light-emitting devices and their images in the frame.
2. The virtual reality space positioning method according to claim 1, characterized in that, while the image pickup device is not yet opened, an arithmetic processor defaults the number of light-emitting-device images in the captured frame to zero.
3. The virtual reality space positioning method according to claim 2, characterized in that, when the image recognition device detects that the number of light-emitting-device images in the frame captured by the image pickup device is nonzero and that number changes, the arithmetic processor commands the calibrated light-emitting device to flash.
4. The virtual reality space positioning method according to claim 3, characterized in that, from the frames captured by the image pickup device before and after the flash, the processing unit uses the correspondence between a first spherical light-emitting body and a second spherical light-emitting body and their light-emitting-device images to determine the motion paths of the first and second spherical light-emitting bodies.
5. A virtual reality space positioning system, characterized by comprising a processing unit and a positioning unit, the processing unit including an image pickup device and an image recognition device that are electrically connected, the positioning unit including at least two pre-calibrated light-emitting devices, the image pickup device being able to capture images of the light-emitting devices and the image recognition device being able to recognize the frames captured by the image pickup device, the processing unit further including an arithmetic processor electrically connected to the image recognition device.
6. The virtual reality space positioning system according to claim 5, wherein the processing unit further comprises a processing-end wireless transmission module electrically connected to the arithmetic processor, the positioning unit further comprises a mobile-end wireless transmission module, and the processing-end wireless transmission module and the mobile-end wireless transmission module exchange information by wireless transmission.
7. The virtual reality space positioning system according to claim 6, wherein the positioning unit further comprises a mobile-end embedded control module, the mobile-end embedded control module being electrically connected to the light-emitting devices and the mobile-end wireless transmission module.
8. The virtual reality space positioning system according to claim 7, wherein the positioning unit comprises a first handle and a second handle, a first spherical luminous body being provided on the first handle and a second spherical luminous body on the second handle.
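The control path implied by claims 6 and 7 — arithmetic processor, processing-end wireless module, mobile-end wireless module, embedded control module, emitter — can be sketched as a pair of cooperating objects. The patent specifies no protocol or message format; every name below is an illustrative stand-in, and the direct method call stands in for the radio link.

```python
class MobileEnd:
    """Positioning-unit side: wireless module, embedded control, emitter."""

    def __init__(self):
        self.emitter_on = True
        self.log = []

    def receive(self, command):
        # The embedded control module drives the emitter per the
        # command received over the mobile-end wireless module.
        if command == "flash":
            self.emitter_on = False  # briefly dark...
            self.log.append("dark")
            self.emitter_on = True   # ...then lit again
            self.log.append("lit")


class ProcessingEnd:
    """Processing-unit side: arithmetic processor and wireless module."""

    def __init__(self, mobile_end):
        self.mobile_end = mobile_end  # stands in for the wireless link

    def request_flash(self):
        # The arithmetic processor commands a flash; the processing-end
        # wireless module transmits it to the mobile end.
        self.mobile_end.receive("flash")
```

In this sketch the processing end only ever sends a flash command; per claim 3, it would do so whenever the detected emitter-image count changes to a non-zero value.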
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610988411.5A CN106568434A (en) | 2016-11-08 | 2016-11-08 | Method and system for positioning virtual reality space |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106568434A true CN106568434A (en) | 2017-04-19 |
Family
ID=58540991
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610988411.5A Pending CN106568434A (en) | 2016-11-08 | 2016-11-08 | Method and system for positioning virtual reality space |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106568434A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102441277A (en) * | 2012-01-18 | 2012-05-09 | 深圳市合智创盈电子有限公司 | Multi-purpose game controller, system and method with attitude sensing function |
CN102682589A (en) * | 2012-01-09 | 2012-09-19 | 西安智意能电子科技有限公司 | System for distant control of controlled device |
CN105117016A (en) * | 2015-09-07 | 2015-12-02 | 众景视界(北京)科技有限公司 | Interaction handle used in interaction control of virtual reality and augmented reality |
CN105824004A (en) * | 2016-04-29 | 2016-08-03 | 深圳市虚拟现实科技有限公司 | Method and system for positioning interactive space |
CN105867611A (en) * | 2015-12-29 | 2016-08-17 | 乐视致新电子科技(天津)有限公司 | Space positioning method, device and system in virtual reality system |
CN106020451A (en) * | 2016-05-10 | 2016-10-12 | 罗阳军 | Positioning and movement identification system |
CN106019265A (en) * | 2016-05-27 | 2016-10-12 | 北京小鸟看看科技有限公司 | Multi-target positioning method and system |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107300378A (en) * | 2017-05-23 | 2017-10-27 | 北京小鸟看看科技有限公司 | A kind of personal identification method for positioning object, device and system |
CN107941208A (en) * | 2017-10-09 | 2018-04-20 | 深圳市虚拟现实科技有限公司 | Virtual reality space localization method and system |
CN110378339A (en) * | 2019-07-22 | 2019-10-25 | 青岛海信电器股份有限公司 | The location determining method and device of illuminator |
CN110378339B (en) * | 2019-07-22 | 2021-08-17 | 海信视像科技股份有限公司 | Method and device for determining position of luminous body |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5064902B2 (en) | Generation method of 3D solid data | |
US10516489B1 (en) | Underwater wireless communication apparatus and communication method thereof | |
US11282224B2 (en) | Information processing apparatus and information processing method | |
KR101902283B1 (en) | Camera sensing device for obtaining three-dimensional information of object and virtual golf simulation apparatus using the same | |
CN106568434A (en) | Method and system for positioning virtual reality space | |
CN108830906B (en) | Automatic calibration method for camera parameters based on virtual binocular vision principle | |
US20150085083A1 (en) | Image-capturing system with dual lens camera | |
CN102879404B (en) | System for automatically detecting medical capsule defects in industrial structure scene | |
CN104604221A (en) | Apparatus for generating depth image | |
CN110910460A (en) | Method and device for acquiring position information and calibration equipment | |
CN107077729A (en) | For the method and apparatus for the structural element for recognizing the projection structure pattern in camera image | |
CN104204848A (en) | Surveying apparatus having a range camera | |
CN110383341A (en) | Methods, systems and devices for visual effects | |
CN104667527A (en) | Method and system for recognizing different shooting points on screen by infrared laser | |
Wyeth et al. | Robust adaptive vision for robot soccer | |
US11776251B2 (en) | Information processing apparatus and object detection method | |
WO2009100778A1 (en) | Improved rotation independent face detection. | |
KR101346510B1 (en) | Visual odometry system and method using ground feature | |
CN110247328A (en) | Position judging method based on image recognition in switchgear | |
CN107941208A (en) | Virtual reality space localization method and system | |
CN109341664A (en) | Vision-based swing state measurement method for a two-dimensional overhead crane | |
Soetedjo et al. | Detecting laser spot in shooting simulator using an embedded camera | |
CN108781258B (en) | Environment information determination method, device, robot and storage medium | |
CN105844199A (en) | Method and device for determining aiming positions of game guns on display screen | |
WO2022228321A1 (en) | Method and apparatus for identifying and positioning object within large range in video |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20170419 ||