CN205334369U - Stage performance system based on motion capture - Google Patents
- Publication number: CN205334369U
- Application number: CN201520739518.7U
- Authority
- CN
- China
- Prior art keywords
- action
- server
- motion capture
- system based
- performance system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Processing Or Creating Images (AREA)
Abstract
The utility model discloses a stage performance system based on motion capture, comprising an inertial motion capture device, a data processing server, a three-dimensional rendering server and a composite projection server connected in sequence. The inertial motion capture device comprises action acquisition nodes worn on the human body to capture its motion. The data processing server processes the motion signals collected by the action acquisition nodes, reconstructs the motion of the human body and transmits the reconstructed motion data to the three-dimensional rendering server. The three-dimensional rendering server uses the received motion data to drive a virtual character, so that the virtual character's motion is synchronized with that of the human body. The composite projection server projects according to the video signal from the three-dimensional rendering server, so that the virtual character on the stage moves in synchrony with the performer; the virtual and the real interact, producing an augmented-reality effect.
Description
Technical field
The utility model relates to the technical field of multimedia performance, and in particular to a stage performance system based on motion capture.
Background technology
Traditional stage performance relies mainly on the performers' own movements and costumes, together with lighting, sound and on-stage video images, for the overall presentation. The video images are produced in advance according to the theme of the show, and the performers must rehearse repeatedly to match the image content, so the rehearsal cycle is long and the demands on the performers are high.
With the development of technology, devices have begun to be used to sense the performers' limb movements and generate video content in real time. Somatosensory devices, typified by Microsoft's Kinect, can sense a performer's limb movements quite directly and produce interactive special effects with the video image. However, current somatosensory technology still has shortcomings when applied to stage performance: it is disturbed by stage lighting; it is blocked by obstacles; its effective range is too small to cover the whole stage; its accuracy is insufficient; the number of performers it can identify is limited; and it is affected by occlusion between limbs. These problems cause the final video image to mismatch the performers' movements, degrading the final stage effect and the audience's perception.
Therefore, it is necessary to design a better stage performance system to solve the above problems.
Utility model content
The main purpose of the utility model is to provide a stage performance system based on motion capture, aiming to solve the problems that a somatosensory device has a small effective range in a stage environment, is blocked by obstacles, is disturbed by lighting, and lacks accuracy.
To achieve the above purpose, the stage performance system based on motion capture proposed by the utility model comprises an inertial motion capture device, a data processing server, a three-dimensional rendering server and a composite projection server connected in sequence. The inertial motion capture device comprises action acquisition nodes worn on the human body to capture its motion, and is connected to the data processing server through a wireless receiving antenna. The data processing server processes the motion signals collected by the action acquisition nodes, reconstructs the motion of the human body, and sends the reconstructed motion data to the three-dimensional rendering server. The three-dimensional rendering server uses the received motion data to drive a virtual character, so that the virtual character's motion is synchronized with that of the human body. The composite projection server projects according to the video signal from the three-dimensional rendering server.
Preferably, the inertial motion capture device comprises several action acquisition nodes arranged at different parts of the human body.
Preferably, each action acquisition node is connected to a power supply.
Preferably, the inertial motion capture device comprises a plurality of inertial sensors, and each action acquisition node is provided with an inertial sensor.
Preferably, the inertial motion capture device is connected to the data processing server through a wireless receiving antenna.
Preferably, the data processing server, the three-dimensional rendering server and the composite projection server are each connected to a display.
Preferably, the stage performance system based on motion capture further comprises a screen provided on the stage, and the composite projection server is connected to the screen to project the video signal onto the screen.
Preferably, the data processing server, the three-dimensional rendering server, the composite projection server and the screen are connected to one another by network cable, DVI cable or USB cable.
The utility model captures the motion of multiple parts of the human body through the inertial motion capture device, processes the data and transmits it to the three-dimensional rendering server, where it drives a virtual character to reproduce the human body's motion in real time. The video signal of the virtual character's motion is then projected onto the stage, so that the virtual character on stage moves in synchrony with the performer; the virtual and the real interact, producing an augmented-reality effect. Supplemented by exaggerated, surreal and fantastical visual effects, the stage performance becomes richer.
Compared with existing somatosensory devices, the technical solution of the utility model gives the performer a larger range of activity and is better suited to stage performance: it is little affected by obstacles, is not affected by occlusion between the performer's own limbs, is not disturbed by lighting, and can capture the motions of multiple performers at the same time.
Accompanying drawing explanation
Fig. 1 is a schematic structural view of the stage performance system based on motion capture according to the utility model;
Description of reference numerals:
Label | Title | Label | Title | Label | Title |
---|---|---|---|---|---|
1 | Inertial motion capture device | 4 | Composite projection server | 7 | Screen |
2 | Data processing server | 5 | Wireless receiving antenna | 8 | Human body |
3 | Three-dimensional rendering server | 6 | Virtual character | 9 | Display |
The realization of the purpose, functional characteristics and advantages of the utility model will be further described with reference to the accompanying drawing in conjunction with the embodiments.
Detailed description of the invention
The technical solution of the utility model is further described below in conjunction with the drawing and specific embodiments. It should be understood that the specific embodiments described here are only used to explain the utility model and are not intended to limit it.
Referring to Fig. 1, the utility model proposes a stage performance system based on motion capture for a stage and its backstage area. A screen 7 is provided on the stage. The stage performance system comprises an inertial motion capture device 1, a data processing server 2, a three-dimensional rendering server 3 and a composite projection server 4, connected in sequence. Preferably, the inertial motion capture device 1 is connected to the data processing server 2 through a wireless receiving antenna 5; the data processing server 2 and the three-dimensional rendering server 3 are connected by a network cable; the three-dimensional rendering server 3 and the composite projection server 4 are connected by a DVI cable; and the composite projection server 4 and the screen 7 are also connected by a DVI cable. In other embodiments, the data processing server 2, the three-dimensional rendering server 3, the composite projection server 4 and the screen 7 may alternatively be connected to one another by USB cable, network cable or DVI cable. The data processing server 2, the three-dimensional rendering server 3 and the composite projection server 4 are each connected to a display 9, so that the processing content of the corresponding server can be observed.
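The data path described above can be summarized as a capture, process, render and project pipeline. The following is a minimal, purely illustrative sketch, not part of the utility model; all class and method names are hypothetical, and the wireless link, rendering engine and projection hardware are abstracted away.

```python
# Illustrative sketch of the capture -> data processing -> 3D rendering ->
# composite projection data flow. All names are hypothetical placeholders.

class InertialCaptureDevice:
    """Stands in for the worn acquisition nodes plus the wireless receiver."""
    def read_raw_signals(self) -> dict:
        # A real device would return per-node accelerometer/gyroscope data.
        return {"node_sacrum": (0.0, 0.0, 0.0), "node_head": (0.0, 0.0, 0.0)}

class DataProcessingServer:
    def reconstruct_motion(self, raw: dict) -> dict:
        # Fuse node signals into full-body pose data (e.g. joint rotations).
        return {joint: signal for joint, signal in raw.items()}

class RenderingServer:
    def drive_character(self, pose: dict) -> bytes:
        # Apply the pose to the virtual character and render one video frame.
        return b"frame-bytes"

class CompositeProjectionServer:
    def project(self, frame: bytes) -> None:
        # Composite with other video content and output to the stage screen.
        print(f"projected {len(frame)} bytes")

def run_one_frame(capture, processor, renderer, projector) -> None:
    raw = capture.read_raw_signals()          # nodes -> wireless antenna
    pose = processor.reconstruct_motion(raw)  # restore human motion
    frame = renderer.drive_character(pose)    # synchronize virtual character
    projector.project(frame)                  # project onto the stage screen

if __name__ == "__main__":
    run_one_frame(InertialCaptureDevice(), DataProcessingServer(),
                  RenderingServer(), CompositeProjectionServer())
```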
The inertial motion capture device 1 comprises a plurality of action acquisition nodes worn on the human body to capture its motion. In this embodiment, the complete device has 17 action acquisition nodes, located respectively at the sacrum, left thigh, right thigh, left shank, right shank, left foot, right foot, left shoulder, right shoulder, upper back, left upper arm, right upper arm, left forearm, right forearm, left palm, right palm and the back of the head of the human body 8. Each action acquisition node is connected to a power supply; in this embodiment, each node has a built-in lithium battery, while in other embodiments the nodes may be externally powered. After the human body 8 wears the complete device at the correct positions and the power is switched on, the action acquisition nodes capture the motion of the human body 8. In this embodiment, the inertial motion capture device 1 comprises a plurality of inertial sensors, one arranged at each action acquisition node.
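For illustration only, the 17 node placements listed above can be written out as a simple lookup structure; the identifiers below are hypothetical and merely mirror the enumeration in this embodiment, they are not a device API.

```python
# Hypothetical enumeration of the 17 node placements described in this
# embodiment; one inertial sensor is worn at each position.
NODE_PLACEMENTS = [
    "sacrum",
    "left_thigh", "right_thigh",
    "left_shank", "right_shank",
    "left_foot", "right_foot",
    "left_shoulder", "right_shoulder",
    "upper_back",
    "left_upper_arm", "right_upper_arm",
    "left_forearm", "right_forearm",
    "left_palm", "right_palm",
    "back_of_head",
]

assert len(NODE_PLACEMENTS) == 17
```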
After the inertial motion capture device 1 captures the human motion, it sends the motion signals to the data processing server 2, which processes them. Before processing, the inertial motion capture device 1 must be calibrated: under the guidance of the data processing server 2, the human body 8 performs a T-pose, an A-pose and an S-pose; only after this calibration can the motion data of the human body 8 be accurately reconstructed. The action acquisition nodes transmit the raw motion data of the human body 8 to the data processing server 2 through the wireless receiving antenna 5. After receiving the data, the data processing server 2 processes the raw motion data to reconstruct the motion of the human body 8, and sends the reconstructed motion data to the three-dimensional rendering server 3.
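The utility model does not specify the calibration algorithm itself; as a purely illustrative sketch under that assumption, one could record a reference reading per node in each guided pose and correct incoming data against those references. All names below are hypothetical, and the offset subtraction is a toy simplification of real sensor-fusion correction.

```python
# Illustrative calibration sketch: store one reference reading per node for
# the T-pose, A-pose and S-pose, then remove the T-pose offset from live data.

CALIBRATION_POSES = ("T", "A", "S")

def calibrate(capture, poses=CALIBRATION_POSES) -> dict:
    """Record one raw reading per node for each guided calibration pose."""
    references = {}
    for pose in poses:
        input(f"Hold the {pose}-pose, then press Enter...")  # operator-guided
        references[pose] = capture.read_raw_signals()
    return references

def correct(raw: dict, references: dict) -> dict:
    """Subtract the T-pose reference from each node's live reading."""
    t_ref = references["T"]
    return {
        node: tuple(v - r for v, r in zip(value, t_ref.get(node, (0, 0, 0))))
        for node, value in raw.items()
    }
```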
The three-dimensional rendering server 3 uses the received motion data to drive the virtual character 6, so that the motion of the virtual character 6 is synchronized with that of the human body 8. To achieve a better stage effect, the virtual character 6 can also be placed in a customized performance scene; several different virtual characters 6 can be driven at the same time, or the virtual character 6 can be duplicated into a troupe formation; and by adding particle effects and changes of camera position, angle and focal length, different stage effects can be constructed, giving the screen exaggerated, surreal and fantastical visual effects and making the stage performance richer.
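Duplicating one captured performance across several virtual characters could, for example, be implemented by applying the same pose data to multiple character instances with different stage placements; the sketch below is hypothetical and engine-agnostic, not the rendering server's actual implementation.

```python
# Hypothetical, engine-agnostic sketch: apply one performer's pose data to
# several virtual character instances (a "troupe" formation) each frame.

from dataclasses import dataclass

@dataclass
class VirtualCharacter:
    name: str
    stage_offset: tuple      # (x, y) placement within the performance scene
    pose: dict = None        # latest pose applied to this character

def drive_troupe(characters: list, pose: dict) -> None:
    """Synchronize every character in the formation with the captured pose."""
    for character in characters:
        character.pose = dict(pose)  # same motion, different stage position

troupe = [VirtualCharacter(f"dancer_{i}", stage_offset=(i * 2.0, 0.0))
          for i in range(5)]
drive_troupe(troupe, {"left_arm": (0.1, 0.2, 0.3)})
```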
The three-dimensional rendering server 3 is connected to the composite projection server 4. The composite projection server 4 composites the video signal from the three-dimensional rendering server 3 with video content from other sources and projects it onto the screen 7 of the stage. The human body 8 and the virtual character 6 on the screen 7 are thus combined, virtual and real, to achieve an augmented-reality effect, which is suitable for action-oriented, exaggerated and surreal stage programs such as children's plays, dance, martial arts, variety shows and science fiction.
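The utility model does not specify how the composite projection server blends its inputs; as an illustration only, two video layers could be alpha-blended before projection. The sketch below models frames as flat pixel lists purely for readability and is not the actual compositing method.

```python
# Hypothetical compositing sketch: blend the rendered virtual-character layer
# over other stage video content before sending the result to the projector.

def composite(background: list, character_layer: list, alpha: float = 0.8) -> list:
    """Alpha-blend the character layer over the background, pixel by pixel."""
    return [
        alpha * fg + (1.0 - alpha) * bg
        for fg, bg in zip(character_layer, background)
    ]

stage_video = [0.2, 0.3, 0.4, 0.5]        # other video content
character_frame = [0.9, 0.8, 0.0, 0.1]    # output of the rendering server
projected = composite(stage_video, character_frame)
```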
The above is only a preferred embodiment of the utility model and does not thereby limit its scope of protection. Any equivalent structural transformation made using the description and drawings of the utility model, or any direct or indirect application in other related technical fields, is likewise included in the scope of patent protection of the utility model.
Claims (8)
1. A stage performance system based on motion capture, characterized by comprising an inertial motion capture device, a data processing server, a three-dimensional rendering server and a composite projection server connected in sequence, wherein the inertial motion capture device comprises action acquisition nodes worn on the human body to capture its motion; the data processing server processes the motion signals collected by the action acquisition nodes, reconstructs the motion of the human body and sends the reconstructed motion data to the three-dimensional rendering server; the three-dimensional rendering server uses the received motion data to drive a virtual character so that the virtual character's motion is synchronized with that of the human body; and the composite projection server projects according to the video signal from the three-dimensional rendering server.
2. The stage performance system based on motion capture as claimed in claim 1, characterized in that the inertial motion capture device comprises several action acquisition nodes arranged at different parts of the human body.
3. The stage performance system based on motion capture as claimed in claim 2, characterized in that each action acquisition node is connected to a power supply.
4. The stage performance system based on motion capture as claimed in claim 1, characterized in that the inertial motion capture device comprises a plurality of inertial sensors, and each action acquisition node is provided with an inertial sensor.
5. The stage performance system based on motion capture as claimed in claim 1, characterized in that the inertial motion capture device is connected to the data processing server through a wireless receiving antenna.
6. The stage performance system based on motion capture as claimed in claim 1, characterized in that the data processing server, the three-dimensional rendering server and the composite projection server are each connected to a display.
7. The stage performance system based on motion capture as claimed in claim 1, characterized by further comprising a screen provided on the stage, wherein the composite projection server is connected to the screen to project the video signal onto the screen.
8. The stage performance system based on motion capture as claimed in claim 7, characterized in that the data processing server, the three-dimensional rendering server, the composite projection server and the screen are connected to one another by network cable, DVI cable or USB cable.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201520739518.7U CN205334369U (en) | 2015-09-22 | 2015-09-22 | Stage performance system based on motion capture |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201520739518.7U CN205334369U (en) | 2015-09-22 | 2015-09-22 | Stage performance system based on motion capture |
Publications (1)
Publication Number | Publication Date |
---|---|
CN205334369U true CN205334369U (en) | 2016-06-22 |
Family
ID=56213137
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201520739518.7U Active CN205334369U (en) | 2015-09-22 | 2015-09-22 | Stage performance system based on motion capture |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN205334369U (en) |
- 2015-09-22: Application CN201520739518.7U filed in China; granted as patent CN205334369U (status: Active)
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107957773A (en) * | 2016-10-17 | 2018-04-24 | 北京诺亦腾科技有限公司 | Wearable device, glove, motion capture device and virtual reality system |
CN107957773B (en) * | 2016-10-17 | 2024-02-06 | 北京诺亦腾科技有限公司 | Wearable device, glove, motion capture device and virtual reality system |
CN107124658A (en) * | 2017-05-02 | 2017-09-01 | 北京小米移动软件有限公司 | Live video broadcast method and device |
CN108337413A (en) * | 2018-01-31 | 2018-07-27 | 北京卡德加文化传媒有限公司 | Camera device and photographing method |
CN108307175A (en) * | 2018-02-08 | 2018-07-20 | 华南理工大学 | Dance dynamic image capturing and restoring system based on flexible sensor and control method |
WO2019153783A1 (en) * | 2018-02-08 | 2019-08-15 | 华南理工大学 | Dynamic dance image capture and restoration system based on flexible sensor, and control method |
CN108307175B (en) * | 2018-02-08 | 2020-01-14 | 华南理工大学 | Dance dynamic image capturing and restoring system based on flexible sensor and control method |
CN108665492A (en) * | 2018-03-27 | 2018-10-16 | 北京光年无限科技有限公司 | Dance teaching data processing method and system based on virtual human |
CN108665492B (en) * | 2018-03-27 | 2020-09-18 | 北京光年无限科技有限公司 | Dance teaching data processing method and system based on virtual human |
CN112261422A (en) * | 2020-10-15 | 2021-01-22 | 北京德火科技有限责任公司 | Simulation remote live broadcast stream data processing method suitable for broadcasting and television field |
CN114245101A (en) * | 2021-12-14 | 2022-03-25 | 中科星宇天文科技研究院(北京)有限公司 | Three-dimensional screen display system |
CN115619912A (en) * | 2022-10-27 | 2023-01-17 | 深圳市诸葛瓜科技有限公司 | Cartoon character display system and method based on virtual reality technology |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN205334369U (en) | Stage performance system based on motion capture | |
CN108986189B (en) | Method and system for capturing and live broadcasting of real-time multi-person actions based on three-dimensional animation | |
CN102221887B (en) | Interactive projection system and method | |
CN102947777B (en) | User tracking feedback | |
CN104866101B (en) | The real-time interactive control method and device of virtual objects | |
CN102622774B (en) | Living room movie creation | |
CN106170083A (en) | Image processing for head mounted display device | |
CN112198959A (en) | Virtual reality interaction method, device and system | |
CN106652590B (en) | Teaching method, teaching identifier and tutoring system | |
CN105373224A (en) | Hybrid implementation game system based on pervasive computing, and method thereof | |
WO2012071466A3 (en) | System and method for acquiring virtual and augmented reality scenes by a user | |
CN103207667B (en) | A control method for human-computer interaction and its application | |
CN104731343A (en) | Virtual reality man-machine interaction children education experience system based on mobile terminal | |
CN102576466A (en) | Systems and methods for tracking a model | |
CN205581784U (en) | Mixed reality platform capable of interaction based on a real scene | |
CN106310660A (en) | Mechanics-based visual virtual football control system | |
CN107027014A (en) | A kind of intelligent optical projection system of trend and its method | |
CN103208214A (en) | Novel simulation system of power transformer substation | |
CN206301285U (en) | A kind of roller-coaster virtual reality Interactive Experience system equipment | |
CN111862348A (en) | Video display method, video generation method, video display device, video generation device, video display equipment and storage medium | |
CN105183161A (en) | Synchronized moving method for user in real environment and virtual environment | |
CN107901040A (en) | Robot myoelectric control system based on ROS | |
CN104474710B (en) | Based on large scale scene group of subscribers tracking system and the method for Kinect network | |
CN104765456A (en) | Virtual space system and building method thereof | |
CN109806580A (en) | Mixed reality system and method based on wireless transmission |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |