
WO2021238804A1 - Mixed reality virtual preview photographing system - Google Patents

Mixed reality virtual preview photographing system

Info

Publication number
WO2021238804A1
WO2021238804A1 · PCT/CN2021/095242 · CN2021095242W
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
optical
camera
shooting
motion capture
Prior art date
Application number
PCT/CN2021/095242
Other languages
French (fr)
Chinese (zh)
Inventor
吴迪云
黄秀强
郭胜男
许秋子
Original Assignee
深圳市瑞立视多媒体科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市瑞立视多媒体科技有限公司
Publication of WO2021238804A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/2224: Studio circuitry, studio devices or studio equipment related to virtual studio applications
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 5/2228: Video assist systems used in motion picture production, e.g. video cameras connected to viewfinders of motion picture cameras or related video signal processing
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects

Definitions

  • the invention relates to the technical field of motion capture, in particular to a mixed reality virtual rehearsal shooting system.
  • Mixed reality technology is a further development of virtual reality technology: by presenting virtual scene information in the real scene, an interactive feedback loop is set up between the real world, the virtual world, and the user to enhance the realism of the user experience.
  • In the prior art, starting from obtaining the director's storyboard, shots are produced offline by 3D animators and rehearsed shot by shot, with a long production cycle; the hybrid virtual-real shooting system therefore lacks on-site storyboard rehearsal, resulting in low efficiency of later computer synthesis of 3D video.
  • The main purpose of the present invention is to solve the problem that mixed virtual-real shooting cannot perform on-site storyboard rehearsal, which leads to low efficiency in computer post-production.
  • the first aspect of the present invention provides a mixed reality virtual rehearsal shooting system.
  • the mixed reality virtual rehearsal shooting system includes a real shooting system, a virtual shooting system, an optical positioning and motion capture system, a central control system, and a display system that are communicatively connected, wherein:
  • the real shooting system includes a video camera and a video capture card, and the video camera and the video capture card are in communication connection;
  • the virtual shooting system includes a virtual camera, an inertial measurement device, and a wireless transmission device, the inertial measurement device is installed on the virtual camera, and the inertial measurement device is in communication connection with the wireless transmission device;
  • the optical positioning and motion capture system includes a light capture processing server and a plurality of optical motion capture cameras, and the light capture processing server is respectively communicatively connected with the plurality of optical motion capture cameras and the wireless transmission device;
  • the central control system includes a central management server, and the central management server is respectively communicatively connected with the video capture card, the wireless transmission device, and the optical capture processing server;
  • the display system includes a display device, and the display device is network connected to the central management server.
  • the video camera is used to shoot the object to be captured in the actual shooting scene to obtain real-shot video data; the video capture card is used to send the real-shot video data to the central management server.
  • the object to be captured wears a motion capture suit with a plurality of optical marking points affixed in advance, and the plurality of optical marking points are used to locate each joint position of the object to be captured.
  • a preset number of optical marking points are installed on the video camera and the virtual camera respectively;
  • the inertial measurement device is used to collect the inertial navigation data corresponding to the virtual camera in moving shooting through a nine-axis inertial sensor;
  • the wireless transmission device is used to send the corresponding inertial navigation data to the optical capture processing server.
  • the light capture processing server further includes a three-dimensional motion capture processing module and an optical inertial fusion module; the three-dimensional motion capture processing module is communicatively connected with the plurality of optical motion capture cameras, and the optical inertial fusion module is communicatively connected with the central management server and the wireless transmission device, respectively;
  • the multiple optical motion capture cameras are used for positioning and shooting each optical marking point to obtain respective corresponding two-dimensional image data
  • the three-dimensional motion capture processing module is configured to obtain the corresponding three-dimensional coordinate data and the motion posture information of the object to be captured according to the corresponding two-dimensional image data;
  • the optical inertial fusion module is used to perform coordinate system calibration and posture fusion processing in sequence according to the three-dimensional coordinate data corresponding to each optical marking point on the virtual camera and the corresponding inertial navigation data, to obtain camera posture data, and then to send the respective corresponding three-dimensional coordinate data, the camera posture data, and the motion posture information to the central management server.
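The patent does not disclose the fusion algorithm itself. As a rough illustration of what "coordinate system calibration and posture fusion" could look like in practice, the sketch below first estimates a fixed calibration offset between the IMU frame and the optical world frame, then blends the drift-free optical orientation with the low-latency inertial orientation; all names, the quaternion convention, and the blending weight are assumptions, not the patented method.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    """Conjugate (inverse for unit quaternions)."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions."""
    dot = float(np.dot(q0, q1))
    if dot < 0.0:                     # take the shorter arc
        q1, dot = -q1, -dot
    theta = np.arccos(min(dot, 1.0))
    if theta < 1e-6:
        return q0
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def calibrate_offset(q_optical, q_imu):
    """One-time coordinate system calibration: the fixed rotation that
    re-expresses IMU orientations in the optical world frame."""
    return quat_mul(q_optical, quat_conj(q_imu))

def fuse_orientation(q_optical, q_imu, q_offset, imu_weight=0.7):
    """Posture fusion: blend the optical rigid-body orientation with the
    IMU orientation mapped into the same frame."""
    q_imu_world = quat_mul(q_offset, q_imu)
    q_imu_world /= np.linalg.norm(q_imu_world)
    return slerp(q_optical, q_imu_world, imu_weight)
```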
  • the optical capture processing server further includes an inertial navigation setting module, which is communicatively connected with the optical inertial fusion module and configured to use a preset link port number and a preset rigid body name to bind the nine-axis inertial sensor with the preset number of optical marking points, which are installed on the virtual camera according to a preset positional relationship.
  • the central management server includes a virtual rehearsal shooting synthesis module and a rendering synthesis module; the virtual rehearsal shooting synthesis module is communicatively connected with the video capture card and the optical inertial fusion module, respectively, and the rendering synthesis module is communicatively connected with the virtual rehearsal shooting synthesis module and the wireless transmission device, respectively;
  • the virtual rehearsal shooting synthesis module is used to perform real-time keying of the real-shot video data, so as to generate an avatar in a virtual scene, and adjust the angle of the virtual scene in real time;
  • the rendering synthesis module is configured to convert the keyed image information and the adjusted virtual scene into a three-dimensional virtual reality mixed video stream according to the camera posture data, the respective corresponding three-dimensional coordinate data, and the motion posture information, and to send the three-dimensional virtual reality mixed video stream to the virtual camera and the display device, respectively.
  • the display device is configured to receive and synchronously display the 3D virtual reality mixed video stream, so that the target person previews the 3D virtual reality mixed video stream during the displacement of the video camera and the virtual camera, and adjusts the shooting action and shooting angle of the object to be captured in real time.
  • the optical positioning and motion capture system further includes a calibration device, and the calibration device is used to calibrate the positions of the plurality of optical motion capture cameras in the actual shooting scene through a calibration rod.
  • the mixed reality virtual rehearsal shooting system further includes a camera setting system, which is communicatively connected with the central control system and is used to obtain the site parameters of the actual shooting scene and, according to the site parameters, to determine the number of optical motion capture cameras and the corresponding installation positions, so that each optical marking point is captured by any three optical motion capture cameras; the site parameters include large-space site length and width information.
  • the mixed reality virtual rehearsal shooting system includes a real shooting system, a virtual shooting system, an optical positioning and motion capture system, a central control system, and a display system that are communicatively connected;
  • the real shooting system includes a video camera and a video capture card, and the video camera and the video capture card are in communication connection;
  • the virtual shooting system includes a virtual camera, an inertial measurement device, and a wireless transmission device; the inertial measurement device is installed on the virtual camera and is in communication connection with the wireless transmission device;
  • the optical positioning and motion capture system includes a light capture processing server and a plurality of optical motion capture cameras, and the light capture processing server is respectively communicatively connected with the plurality of optical motion capture cameras and the wireless transmission device;
  • the central control system includes a central management server, and the central management server is respectively in communication connection with the video capture card, the wireless transmission device, and the light capture processing server;
  • the display system includes a display device, and the display device is network connected to the central management server.
  • the real shooting system, the virtual shooting system, the optical positioning and motion capture system, the central control system, and the display system realize motion track recording, data export and fusion, redundant target shielding, and real-time imaging preview.
  • Compared with traditional green-screen keying, the present invention obtains the three-dimensional motion trajectory information of the object to be captured in real time, superimposes the scene and the character in three-dimensional space, and achieves a synthesis effect with three-dimensional depth information; at the same time, through virtual-real interaction, the camera-movement visual effect of the filmed footage can be previewed in real time, which improves the efficiency of mixed reality virtual video shooting and reduces the production cost and production cycle.
  • FIG. 1 is a schematic structural diagram of a mixed reality virtual rehearsal shooting system in an embodiment of the present invention;
  • FIG. 2 is a schematic structural diagram of an optical positioning and motion capture system in an embodiment of the present invention;
  • FIG. 3 is a schematic structural diagram of a central management server in an embodiment of the present invention;
  • FIG. 4 is another schematic structural diagram of the mixed reality virtual rehearsal shooting system in an embodiment of the present invention;
  • FIG. 5 is a schematic diagram of an application scenario of the mixed reality virtual rehearsal shooting system in an embodiment of the present invention.
  • The embodiment of the present invention provides a mixed reality virtual rehearsal shooting system that, through virtual-real interaction, previews the camera-movement visual effect of the filmed footage in real time, improving the efficiency of mixed reality virtual video shooting and reducing production cost and production cycle.
  • the mixed reality virtual rehearsal shooting system includes a real shooting system 1, a virtual shooting system 2, an optical positioning and motion capture system 3, a central control system 4, and a display system 5 that are communicatively connected; wherein,
  • the real shooting system 1 includes a video camera 11 and a video capture card 12, and the video camera 11 and the video capture card 12 are in communication connection;
  • the virtual shooting system 2 includes a virtual camera 21, an inertial measurement device 22, and a wireless transmission device 23.
  • the inertial measurement device 22 is installed on the virtual camera 21, and the inertial measurement device 22 is in communication connection with the wireless transmission device 23;
  • the optical positioning and motion capture system 3 includes a light capturing processing server 31 and a plurality of optical motion capturing cameras 32, and the light capturing processing server 31 is respectively communicatively connected with a plurality of optical motion capturing cameras 32 and a wireless transmission device 23;
  • the central control system 4 includes a central management server 41, and the central management server 41 is in communication connection with the video capture card 12, the wireless transmission device 23, and the light capture processing server 31, respectively;
  • the display system 5 includes a display device 51, and the display device 51 is connected to the central management server 41 in a network.
  • the real shooting system 1 is used to receive preset shooting instructions and control the video camera 11 based on them; the video camera 11 is used to shoot video of the object to be captured in the actual shooting scene to obtain real-shot video data.
  • the video capture card 12 is used to upload the real shot video data to the central management server 41, so that the central management server 41 performs data storage and data processing on the real shot video data.
  • the object to be captured is a moving object, including people and props; the real-shot video data indicates the moving picture information of the object to be captured in the actual shooting scene.
  • the actual shooting scene includes a green screen and a truss. The truss is used to arrange multiple optical motion capture cameras 32.
  • the background of the actual shooting scene may also be a blue screen, which is not specifically limited here.
  • the video camera 11 also supports lens panning, up-and-down movement, and zooming in and out on a fixed gimbal, and supports large-range camera movement on a large jib arm; for example, the video camera 11 can be used in actual shooting scenes of 10 to 2000 square meters.
  • the virtual camera 21 in the virtual shooting system 2 is a handheld camera with six degrees of freedom, and the virtual camera 21 is used to move and shoot a preset virtual scene corresponding to the actual shooting scene.
  • Six degrees of freedom means that an object in three-dimensional space has degrees of freedom of translation along the three rectangular coordinate axes x, y, and z, and degrees of freedom of rotation around those same three axes. Therefore, the virtual camera 21 can be used to adjust the shooting angle of a preset virtual scene, to adjust the focal length and aperture value, and to control the start and end time of shooting for the preset virtual scene.
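For concreteness, a six-degree-of-freedom pose is simply three translational components plus three rotational components; a minimal sketch of how such a handheld-camera pose might be represented (the class and method names are illustrative, not from the patent):

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Pose6DoF:
    """Translation along x, y, z plus rotation about x, y, z (radians)."""
    position: np.ndarray = field(default_factory=lambda: np.zeros(3))
    rotation: np.ndarray = field(default_factory=lambda: np.zeros(3))  # roll, pitch, yaw

    def translate(self, dx: float, dy: float, dz: float) -> None:
        self.position += np.array([dx, dy, dz], dtype=float)

    def rotate(self, droll: float, dpitch: float, dyaw: float) -> None:
        self.rotation += np.array([droll, dpitch, dyaw], dtype=float)

# a handheld virtual camera is free on all six axes:
cam = Pose6DoF()
cam.translate(0.5, 0.0, 1.2)   # walk through the shooting volume
cam.rotate(0.0, -0.1, 0.3)     # tilt down slightly while panning right
```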
  • When the start button on the virtual camera 21 is activated, the virtual camera 21 receives a start-recording instruction and starts shooting the preset virtual scene; when its pause or stop button is activated, it receives a pause- or stop-recording instruction and stops shooting the preset virtual scene. The virtual camera 21 is also used to synchronously display the 3D virtual reality mixed video stream sent by the central management server 41.
  • the inertial measurement device 22 is used to collect the movement posture information of the virtual camera 21, and the wireless transmission device 23 is used to upload the collected movement posture information to the light capture processing server 31 in the optical positioning and motion capture system 3.
  • the wireless transmission device 23 can be wireless Bluetooth with an image transmission function, or wireless WIFI.
  • the multiple optical motion capture cameras 32 in the optical positioning and motion capture system 3 are used to identify the optical marking points bound to different parts of the object to be captured; the light capture processing server 31 is used to obtain the position and orientation of each optical marking point in the actual shooting scene, determine from them the movement trajectory of the object to be captured, and import it into the central management server 41 synchronously in real time.
  • the object to be captured wears a motion capture suit with multiple optical marking points affixed in advance.
  • the multiple optical marking points are used to locate the joint positions of the object to be captured.
  • the optical marking points include reflective marking points and active marking points; reflective marking points can use reflective balls.
  • Active marking points are suitable for situations where ambient lighting conditions make it difficult to locate and track the reflective marking points, so that the optical positioning and motion capture system 3 can be applied both indoors and outdoors.
  • the video camera 11 and the virtual camera 21 each carry a preset number of optical marking points, which, together with the multiple optical motion capture cameras 32, are used to locate the spatial position of the video camera 11 and the virtual camera 21; the preset number is a positive integer, for example 3, 4, or 5, and is not limited here.
  • different reflective marking points are arranged on the head, hands, feet, and back waist of the object to be captured, and the multiple optical motion capture cameras 32 are used to track these reflective marking points, precisely locating their spatial position and orientation and thereby obtaining the extremity positions of the object to be captured.
  • the light-capturing processing server 31 is configured to determine the motion posture information of the object to be captured according to the position of the extremities of the object to be captured.
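The patent does not state how tracked marker positions are turned into posture information. A standard way to recover a rigid segment's rotation and translation from three or more markers is the Kabsch algorithm; a minimal sketch under that assumption:

```python
import numpy as np

def rigid_body_pose(ref_markers: np.ndarray, obs_markers: np.ndarray):
    """Least-squares R, t such that obs ≈ (R @ ref.T).T + t, given N >= 3
    non-collinear markers as (N, 3) arrays (Kabsch algorithm)."""
    ref_c = ref_markers.mean(axis=0)
    obs_c = obs_markers.mean(axis=0)
    H = (ref_markers - ref_c).T @ (obs_markers - obs_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = obs_c - R @ ref_c
    return R, t
```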
  • the central management server 41 in the central control system 4 is first used to perform keying processing on the real-shot video data captured by the video camera 11 to obtain keyed image information, where the keyed image information includes character model information; it is then used to composite the keyed image information into a preset virtual scene (the virtual scene corresponding to the actual shooting scene) through 3D synthesis software or a preset engine, or to obtain the virtual character corresponding to the character model information and merge that virtual character into the corresponding preset virtual scene; the character model or virtual character is then driven through forward kinematics and the motion posture information.
  • For example, the light capture processing server 31 and the multiple optical motion capture cameras 32 are used to obtain the motion posture information of all motion capture actors. The central management server 41 is used to perform overall picture calculations on the real-shot video data from the real shooting system 1 to obtain the keyed image information containing the motion capture actors and the corresponding virtual characters, and uses three-dimensional synthesis software or the preset engine to composite those virtual characters into the preset virtual scenes actually needed, driving them with the motion posture information; for example, the virtual characters are virtual objects dancing to music, thereby obtaining the 3D virtual reality mixed video stream.
  • the central management server 41 is configured to send the three-dimensional virtual reality mixed video stream to the virtual camera 21 and the display device 51.
  • the central management server 41 can be used to superimpose and composite a preset virtual scene with a virtual character, and can also be used to perform three-dimensional depth compositing of a real character with a preset virtual scene, or of the real scene with a virtual character; the synthesis process is not specifically limited here.
  • the central control system 4 also includes input devices and output devices, such as a mouse, a keyboard, and a display.
  • the display system 5 includes a display device 51 for receiving and synchronously displaying the three-dimensional virtual reality mixed video stream sent by the central management server 41, so that the target person can adjust the shooting action and shooting angle of the object to be captured in real time and preview, during the displacement of the video camera 11 and the virtual camera 21, the camera-movement visual effect of the filmed footage; the target personnel include the director and the photographer. The display device 51 thus provides a real-time scheduling preview of shot framing, shot motion, and composite effects, so that the target person can adjust and modify the creative intent and framing in time. In this way, the high production cost and long production cycle of traditional film and television production in the early and late stages are avoided, and a visualized shooting effect is provided in real time through the virtual rehearsal shooting method.
  • In the embodiment of the present invention, a mixed reality virtual rehearsal shooting system is provided, which realizes motion track recording, data export and fusion, redundant target (objects other than the object to be captured) shielding, and real-time imaging preview through a real shooting system, a virtual shooting system, an optical positioning and motion capture system, a central control system, and a display system.
  • Compared with traditional green-screen keying, the present invention obtains the three-dimensional motion trajectory information of the object to be captured in real time, superimposes the scene and the character in three-dimensional space, and achieves a synthesis effect with three-dimensional depth information; at the same time, through virtual-real interaction, the camera-movement visual effect of the filmed footage can be previewed in real time, which improves the efficiency of mixed reality virtual video shooting and reduces the production cost and production cycle.
  • the optical capture processing server 31 also includes a three-dimensional motion capture processing module 311 and an optical inertial fusion module 312; the three-dimensional motion capture processing module 311 is communicatively connected with the plurality of optical motion capture cameras 32, and the optical inertial fusion module 312 is communicatively connected with the central management server 41 and the wireless transmission device 23, respectively;
  • a plurality of optical motion capture cameras 32 are used for positioning and shooting each optical marking point to obtain respective corresponding two-dimensional image data
  • the three-dimensional motion capture processing module 311 is configured to obtain the corresponding three-dimensional coordinate data and the motion posture information of the object to be captured according to the corresponding two-dimensional image data;
  • the optical inertial fusion module 312 is used to perform coordinate system calibration and posture fusion processing in sequence according to the three-dimensional coordinate data corresponding to each optical marking point on the virtual camera 21 and the corresponding inertial navigation data, to obtain camera posture data, and then to send the respective corresponding three-dimensional coordinate data, the camera posture data, and the motion posture information to the central management server 41.
  • the multiple optical motion capture cameras 32 and the light capture processing server 31 may be connected by wire or wirelessly, for example, by a POE switch.
  • the multiple optical motion capture cameras 32 are used to collect the two-dimensional image data corresponding to the video camera 11, the virtual camera 21, and the multiple optical marker points; that is, each optical marking point on the video camera 11, the virtual camera 21, and the object to be captured is captured by at least two optical motion capture cameras 32 at the same time. The three-dimensional motion capture processing module 311 is used to convert the corresponding two-dimensional image data into the corresponding three-dimensional coordinate data (spatial position information) and the movement posture information of the object to be captured, where the three-dimensional coordinate data indicates the position and orientation of each optical marking point in the world coordinate system, realizing positioning and tracking of the moving object. The optical inertial fusion module 312 is used to receive the inertial navigation data corresponding to the virtual camera 21 and to perform coordinate system calibration and posture fusion calculation based on the three-dimensional coordinate data.
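The conversion from per-camera two-dimensional image data to three-dimensional coordinates is not spelled out in the patent; the textbook method is linear (DLT) triangulation from two or more calibrated views, sketched below on the assumption that each camera's 3x4 projection matrix is known from calibration:

```python
import numpy as np

def triangulate(proj_mats, points_2d):
    """Intersect rays from >= 2 calibrated cameras (DLT triangulation).
    proj_mats: iterable of 3x4 projection matrices; points_2d: (u, v) pixels."""
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        rows.append(u * P[2] - P[0])   # each view adds two linear
        rows.append(v * P[2] - P[1])   # constraints on the 3D point
    A = np.stack(rows)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                          # null vector, homogeneous coordinates
    return X[:3] / X[3]
```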
  • the optical positioning and motion capture system 3 further includes a calibration device 33, which is used to calibrate the positions of a plurality of optical motion capture cameras 32 in the actual shooting scene through a calibration rod.
  • the calibration rod may be a T-shaped structure, with a horizontal rod 418 mm long and a vertical rod 578 mm long.
  • the inertial measurement device 22, an inertial measurement unit (IMU), is used to collect, through the nine-axis inertial sensor, the inertial navigation data corresponding to the virtual camera 21 and the multiple optical marker points during moving shooting, and to send the corresponding inertial navigation data to the wireless transmission device 23;
  • the wireless transmission device 23 is used to send respective corresponding inertial navigation data to the optical inertial fusion module 312.
  • the optical capture processing server 31 further includes an inertial navigation setting module 313.
  • the inertial navigation setting module 313 is communicatively connected with the optical inertial fusion module 312, and is used to bind the nine-axis inertial sensor with the preset number of optical marking points, which are installed on the virtual camera 21 according to a preset positional relationship.
  • the nine-axis inertial sensor includes a three-axis angular velocity sensor (gyro), a three-axis acceleration sensor and a three-axis magnetic induction sensor.
  • the nine-axis inertial sensor ensures high-precision calculation of the rigid body's rotation angle, with an error range within 0.05 degrees, so that the synthesized 3D virtual reality mixed video stream is more stable and supports low-latency real-time interaction. For example, the refresh rate of the angle data (in Hertz) is used to update the picture of the three-dimensional virtual reality mixed video stream.
  • at least 3 reflective balls can be used for the preset number of optical marking points.
  • the reflective ball is a rigid body and also a motion tracker, and its motion trajectory represents a change in spatial coordinates.
  • The rigid body is generally used together with a nine-axis inertial sensor (the inertial measurement device 22), and the spatial coordinate systems of the rigid body and the nine-axis inertial sensor are unified to finally obtain precise pose data of the rigid body.
  • the purpose of the calibration of the rigid body and the nine-axis inertial sensor is to unify the space coordinate system.
  • the motion posture information of the rigid body is the position and posture in the custom world coordinate system.
  • the posture given by the nine-axis inertial sensor (corresponding to the inertial navigation data) is the posture change relative to the moment the hardware is started; coordinate system alignment between the rigid body and the nine-axis inertial sensor, together with posture fusion processing, improves the calculation accuracy of the rigid body's pose.
  • the reflective ball (optical marking point) and the nine-axis inertial sensor are bound flexibly, and the preset rigid body name and the preset link port number have a one-to-one mapping relationship. Therefore, the inertial navigation setting module 313 is used to update or clear the existing configuration through the preset rigid body name and the preset link port number, realizing binding and unbinding of the reflective marker (optical marking point) and the nine-axis inertial sensor.
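As an illustration of this one-to-one mapping between preset rigid body names and link port numbers, here is a minimal configuration sketch; the registry class, its methods, and the example name and port are hypothetical, not the patent's actual interface:

```python
class InertialNavBinding:
    """One-to-one map between rigid body names and IMU link port numbers."""

    def __init__(self):
        self._name_to_port: dict[str, int] = {}
        self._port_to_name: dict[int, str] = {}

    def bind(self, name: str, port: int) -> None:
        # rebinding either side first clears the old pairing,
        # keeping the mapping strictly one-to-one
        self.unbind(name)
        stale = self._port_to_name.pop(port, None)
        if stale is not None:
            del self._name_to_port[stale]
        self._name_to_port[name] = port
        self._port_to_name[port] = name

    def unbind(self, name: str) -> None:
        port = self._name_to_port.pop(name, None)
        if port is not None:
            del self._port_to_name[port]

    def clear(self) -> None:
        """Clear the existing configuration entirely."""
        self._name_to_port.clear()
        self._port_to_name.clear()

bindings = InertialNavBinding()
bindings.bind("virtual_camera_rig", 9001)   # hypothetical name and port
```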
  • the nine-axis inertial sensor has a reduced circuit board and overall footprint, so the corresponding hardware is small; it is suitable for portable installation on moving objects whose rotation data must be captured accurately, and improves the accuracy of a single sensor in locating spatial position and movement direction.
  • the inertial navigation setting module 313 is used not only to determine the motion posture data of the virtual camera 21, but also to determine the motion posture data of the object to be captured; the relationship between the nine-axis inertial sensor and the optical marking points is set according to the actual shooting scene, so as to achieve high-precision motion detection.
  • the central management server 41 includes a virtual rehearsal shooting synthesis module 411 and a rendering synthesis module 412.
  • the virtual rehearsal shooting synthesis module 411 is respectively communicatively connected with the video capture card 12 and the optical inertial fusion module 312,
  • the rendering synthesis module 412 is in communication connection with the virtual rehearsal shooting synthesis module 411 and the wireless transmission device 23 respectively;
  • the virtual rehearsal shooting synthesis module 411 is used to perform real-time keying of the real shot video data, so as to generate a virtual image in the virtual scene, and adjust the angle of the virtual scene in real time;
  • the rendering and synthesis module 412 is used to convert the keyed image information and the adjusted virtual scene into a 3D virtual reality mixed video stream according to the camera pose data, the respective corresponding three-dimensional coordinate data, and the motion posture information, and to send the 3D virtual reality mixed video stream to the virtual camera 21 and the display device 51, respectively.
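At its core, this rendering step amounts to per-pixel compositing of the keyed live-action layer over the rendered virtual scene, optionally resolved by depth so that real and virtual objects occlude each other correctly. A minimal sketch of that idea (the array shapes and the depth rule are assumptions, not the module's actual implementation):

```python
import numpy as np

def composite_frame(virtual_rgb, keyed_rgb, alpha):
    """Alpha-over: place the keyed live-action layer above the virtual scene.
    Images are float arrays in [0, 1]; alpha has shape (H, W, 1)."""
    return alpha * keyed_rgb + (1.0 - alpha) * virtual_rgb

def composite_with_depth(virtual_rgb, virtual_depth, keyed_rgb, keyed_depth, alpha):
    """Depth-aware variant: the live-action layer only wins where it is
    both opaque and closer to the camera than the virtual scene."""
    front = (keyed_depth < virtual_depth)[..., None] * alpha
    return front * keyed_rgb + (1.0 - front) * virtual_rgb
```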
  • the rendering synthesis module 412 receives the keyed image information sent by the virtual rehearsal shooting synthesis module 411, where the keyed image information is image information containing the object to be captured; the rendering synthesis module 412 is used to employ Unreal Engine UE4 for 3D animated character setup, sound simulation, and lighting rendering.
  • the virtual rehearsal shooting synthesis module 411 is used to set the virtual character corresponding to the keying image information, and to adjust the angle of the virtual scene in real time;
  • the rendering synthesis module 412 is used to employ Unreal Engine UE4 to set the motion posture of the virtual character synchronously according to the camera pose data, the respective corresponding three-dimensional coordinate data, and the motion posture information; at the same time, preset voice information is added to the preset virtual scene to obtain a three-dimensional virtual reality mixed video stream.
  • the wireless transmission device 23 is used to send the three-dimensional virtual reality mixed video stream to the virtual camera 21 and the display device 51 respectively, so that the target person can preview the shooting effect in real time.
  • the display device 51 may use a large-screen projection method to display the split lens, so as to achieve the visualization of the split lens.
  • the central management server 41 is also used to store the camera posture data, the respective corresponding three-dimensional coordinate data and motion posture information, and the three-dimensional virtual reality mixed video stream to facilitate post-production and synthesis.
  • before performing video synthesis, the central management server 41 is also used to align the adjusted virtual scene with the actual shooting scene in the three-dimensional space coordinate system according to the camera pose data and the respective corresponding three-dimensional coordinate data, so that the output 3D virtual reality mixed video stream has high-frame-rate, stable, delay-free spatial positioning and virtual-real synchronization.
  • the virtual rehearsal shooting synthesis module 411 is used to perform the color keying process, that is, to define transparency according to a specific color value or brightness value of the image in the real-shot video data: when a certain value is keyed out, all pixels with similar color or brightness values become transparent. Further, the virtual rehearsal shooting synthesis module 411 is used to obtain the keyed image information from the real-shot video data.
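The keying rule described here (pixels whose color is close to the keyed-out value become transparent) can be sketched in a few lines; the distance metric, tolerance, and feathering below are illustrative choices, not the module's actual keyer:

```python
import numpy as np

def chroma_key_alpha(frame_rgb, key_color, tolerance=0.2, feather=0.1):
    """Per-pixel matte: 0 (transparent) near the key color, 1 elsewhere,
    with a soft ramp of width 'feather' at the boundary.
    frame_rgb: float array (H, W, 3) in [0, 1]; key_color: length-3 RGB."""
    dist = np.linalg.norm(frame_rgb - np.asarray(key_color, dtype=float), axis=-1)
    alpha = np.clip((dist - tolerance) / feather, 0.0, 1.0)
    return alpha[..., None]

# keying out a green screen, assuming pure green as the key value:
# alpha = chroma_key_alpha(frame, (0.0, 1.0, 0.0))
```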
  • the mixed reality virtual rehearsal shooting system also includes a camera setting system 6.
  • the camera setting system 6 is communicatively connected with the central control system 4 and is used to obtain the site parameters of the actual shooting scene and, according to those parameters, to determine the number of optical motion capture cameras 32 and the corresponding installation positions, so that each optical marking point is captured by any three optical motion capture cameras 32; the site parameters include large-space site length and width information.
  • each optical marking point needs to be captured by any three optical motion capture cameras 32 so that its spatial location can be determined, while configuring too many cameras leads to excessive construction cost; therefore, the camera setting system 6 is used to lay out the optical motion capture cameras 32.
  • the actual shooting scene is an open field 3.5 to 7 meters high, equipped with professional film and television lights, trusses, and a green or blue box, where the truss may be a single-layer, two-layer, or multi-layer truss.
  • the site parameters of the actual shooting scene include, for example, a large-space site length of 20 meters and a width of 10 meters.
  • the camera setting system 6 is used to calculate the cost value of the camera configuration in the actual shooting scene according to the site parameters, and to determine the number of optical motion capture cameras 32 and the corresponding installation positions through that camera-configuration cost value.
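The patent does not disclose the cost formula. As a toy illustration of trading off coverage against camera count, the following sketch estimates how many cameras a rectangular site needs if each camera covers a known floor area and every point must be seen by at least three cameras; every parameter here is hypothetical:

```python
import math

def estimate_camera_count(length_m, width_m, coverage_per_camera_m2=50.0,
                          min_views=3, unit_cost=1.0):
    """Rough lower bound: enough total coverage that every point can be
    seen by 'min_views' cameras, given each camera's usable floor coverage."""
    site_area = length_m * width_m
    count = math.ceil(min_views * site_area / coverage_per_camera_m2)
    return count, count * unit_cost

# the 20 m x 10 m example site from the text:
count, cost = estimate_camera_count(20.0, 10.0)
print(f"approx. {count} cameras, configuration cost {cost:.1f} units")
```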
  • the multiple optical motion capture cameras 32 are arranged at different positions of the truss in the actual shooting scene, and the visual range of each optical motion capture camera 32 covers the entire actual shooting scene.
  • the video camera 11 and the virtual camera 21 are used to shoot the object to be captured.
  • the mixed reality virtual rehearsal shooting system integrates the functions of real-time rehearsal, virtual studio, and motion capture. It uses spatial-positioning motion capture technology to synchronize the positions of the video camera 11, the virtual camera 21, and all virtual objects (for example, props) in the space, enabling real-time interaction and synthesis-effect preview in Unreal Engine.
  • the effect of combining virtual and real is realized, which improves the storytelling and visual expressiveness of film and television shooting techniques.
  • Not only can the target person (director) clarify interaction needs on the spot, obtain the desired composition angle on demand, preview the shooting effect in real time, and determine the effects and elements of each shot through interactive shot visualization; the shooting process is also completed in one pass, with the direction of post-production already determined during the shooting stage.
  • the interaction between characters and CG is no longer simply superimposed back and forth, and the positions of objects all have 3D depth information.
  • In summary, the mixed reality virtual rehearsal shooting system realizes motion track recording, data export and fusion, redundant target shielding, and real-time imaging preview through the real shooting system, the virtual shooting system, the optical positioning and motion capture system, the central control system, and the display system. Compared with traditional green-screen keying, it obtains the three-dimensional motion trajectory information of the object to be captured in real time, superimposes the scene and the character in three-dimensional space with three-dimensional depth information, and, through virtual-real interaction, previews the camera-movement visual effect of the filmed footage in real time, improving the efficiency of mixed reality virtual video shooting and reducing the production cost and production cycle.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Studio Devices (AREA)

Abstract

The present invention relates to the technical field of motion capture. Disclosed is a mixed reality virtual preview photographing system, used for improving the efficiency of mixed reality virtual video photographing and reducing the production cost. The mixed reality virtual preview photographing system comprises a real photographing system, a virtual photographing system, an optical positioning and motion capture system, a central control system, and a display system; the real photographing system comprises a video camera and a video capture card; the virtual photographing system comprises a virtual camera, an inertia measurement apparatus, and a wireless transmission apparatus; the optical positioning and motion capture system comprises an optical capture processing server and a plurality of optical motion capture cameras, and the optical capture processing server is in communication connection with the plurality of optical motion capture cameras and the wireless transmission apparatus; the central control system comprises a central management server which is respectively in communication connection with the video capture card, the wireless transmission apparatus, and the optical capture processing server; and the display system comprises a display device which is in network connection with the central management server.

Description

Mixed reality virtual rehearsal shooting system
Technical field
The invention relates to the technical field of motion capture, in particular to a mixed reality virtual rehearsal shooting system.
Background art
Mixed reality technology is a further development of virtual reality technology: by presenting virtual scene information in the real scene, an interactive feedback loop is set up between the real world, the virtual world, and the user to enhance the realism of the user experience.
In traditional film and television visual design, traditional virtual studios support relatively few cameras and lenses; most lenses are fixed or cannot move freely, the connected rendering engine is single or outdated, and the compositing relationship between characters and CG visual design is mostly a simple superposition of foreground and background layers, with no complex mixed virtual-real interaction. At the same time, to avoid revealing the set, camera positions must follow completely fixed angles that are difficult to adjust on site; the sense of integration is improved purely by visual illusion and cannot respond to light.
In the prior art, starting from obtaining the director's storyboard, shots are produced offline by 3D animators and rehearsed shot by shot, with a long production cycle. The hybrid virtual-real shooting system therefore lacks on-site storyboard rehearsal, resulting in low efficiency of later computer synthesis of 3D video.
Summary of the invention
The main purpose of the present invention is to solve the problem that mixed virtual-real shooting cannot perform on-site storyboard rehearsal, which leads to low efficiency in computer post-production.
The first aspect of the present invention provides a mixed reality virtual rehearsal shooting system, which includes a real shooting system, a virtual shooting system, an optical positioning and motion capture system, a central control system, and a display system that are communicatively connected, wherein:
the real shooting system includes a video camera and a video capture card, and the video camera and the video capture card are in communication connection;
the virtual shooting system includes a virtual camera, an inertial measurement device, and a wireless transmission device; the inertial measurement device is installed on the virtual camera and is in communication connection with the wireless transmission device;
the optical positioning and motion capture system includes a light capture processing server and a plurality of optical motion capture cameras, and the light capture processing server is respectively communicatively connected with the plurality of optical motion capture cameras and the wireless transmission device;
the central control system includes a central management server, and the central management server is respectively communicatively connected with the video capture card, the wireless transmission device, and the light capture processing server;
the display system includes a display device, and the display device is network-connected to the central management server.
Optionally, the video camera is used to shoot the object to be captured in the actual shooting scene to obtain real-shot video data; the video capture card is used to send the real-shot video data to the central management server.
Optionally, the object to be captured wears a motion capture suit with a plurality of optical marking points affixed in advance, and the plurality of optical marking points are used to locate each joint position of the object to be captured.
Optionally, a preset number of optical marking points are installed on the video camera and the virtual camera respectively;
the inertial measurement device is used to collect, through a nine-axis inertial sensor, the inertial navigation data corresponding to the virtual camera during moving shooting;
the wireless transmission device is used to send the corresponding inertial navigation data to the light capture processing server.
Optionally, the light capture processing server further includes a three-dimensional motion capture processing module and an optical inertial fusion module; the three-dimensional motion capture processing module is communicatively connected with the plurality of optical motion capture cameras, and the optical inertial fusion module is communicatively connected with the central management server and the wireless transmission device, respectively;
the multiple optical motion capture cameras are used for positioning and shooting each optical marking point to obtain respective corresponding two-dimensional image data;
the three-dimensional motion capture processing module is configured to obtain the respective corresponding three-dimensional coordinate data and the motion posture information of the object to be captured according to the respective corresponding two-dimensional image data;
the optical inertial fusion module is used to perform coordinate system calibration and posture fusion processing in sequence according to the three-dimensional coordinate data corresponding to each optical marking point on the virtual camera and the corresponding inertial navigation data, to obtain camera posture data, and then to send the respective corresponding three-dimensional coordinate data, the camera posture data, and the motion posture information to the central management server.
Optionally, the light capture processing server further includes an inertial navigation setting module, which is communicatively connected with the optical inertial fusion module and configured to use a preset link port number and a preset rigid body name to bind the nine-axis inertial sensor with the preset number of optical marking points, which are installed on the virtual camera according to a preset positional relationship.
Optionally, the central management server includes a virtual rehearsal shooting synthesis module and a rendering synthesis module; the virtual rehearsal shooting synthesis module is communicatively connected with the video capture card and the optical inertial fusion module, respectively, and the rendering synthesis module is communicatively connected with the virtual rehearsal shooting synthesis module and the wireless transmission device, respectively;
the virtual rehearsal shooting synthesis module is used to perform real-time keying of the real-shot video data, so as to generate an avatar in a virtual scene, and to adjust the angle of the virtual scene in real time;
the rendering synthesis module is configured to convert the keyed image information and the adjusted virtual scene into a three-dimensional virtual reality mixed video stream according to the camera posture data, the respective corresponding three-dimensional coordinate data, and the motion posture information, and to send the three-dimensional virtual reality mixed video stream to the virtual camera and the display device, respectively.
Optionally, the display device is configured to receive and synchronously display the 3D virtual reality mixed video stream, so that the target person previews the 3D virtual reality mixed video stream during the displacement of the video camera and the virtual camera, and adjusts the shooting action and shooting angle of the object to be captured in real time.
Optionally, the optical positioning and motion capture system further includes a calibration device, which is used to calibrate the positions of the plurality of optical motion capture cameras in the actual shooting scene through a calibration rod.
Optionally, the mixed reality virtual rehearsal shooting system further includes a camera setting system, which is communicatively connected with the central control system and is used to obtain the site parameters of the actual shooting scene and, according to the site parameters, to determine the number of optical motion capture cameras and the corresponding installation positions, so that each optical marking point is captured by any three optical motion capture cameras; the site parameters include large-space site length and width information.
In the technical solution provided by the present invention, the mixed reality virtual rehearsal shooting system includes a real shooting system, a virtual shooting system, an optical positioning and motion capture system, a central control system, and a display system that are communicatively connected, wherein: the real shooting system includes a video camera and a video capture card in communication connection; the virtual shooting system includes a virtual camera, an inertial measurement device, and a wireless transmission device, the inertial measurement device being installed on the virtual camera and in communication connection with the wireless transmission device; the optical positioning and motion capture system includes a light capture processing server and a plurality of optical motion capture cameras, the light capture processing server being respectively communicatively connected with the plurality of optical motion capture cameras and the wireless transmission device; the central control system includes a central management server, respectively communicatively connected with the video capture card, the wireless transmission device, and the light capture processing server; and the display system includes a display device network-connected to the central management server. In the embodiment of the present invention, the real shooting system, the virtual shooting system, the optical positioning and motion capture system, the central control system, and the display system realize motion track recording, data export and fusion, redundant target shielding, and real-time imaging preview. Compared with traditional green-screen keying, the present invention obtains the three-dimensional motion trajectory information of the object to be captured in real time, superimposes the scene and the character in three-dimensional space, and achieves a synthesis effect with three-dimensional depth information; at the same time, through virtual-real interaction, the camera-movement visual effect of the filmed footage can be previewed in real time, improving the efficiency of mixed reality virtual video shooting and reducing the production cost and production cycle.
Description of the drawings
FIG. 1 is a schematic structural diagram of a mixed reality virtual rehearsal shooting system in an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an optical positioning and motion capture system in an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a central management server in an embodiment of the present invention;
FIG. 4 is another schematic structural diagram of the mixed reality virtual rehearsal shooting system in an embodiment of the present invention;
FIG. 5 is a schematic diagram of an application scenario of the mixed reality virtual rehearsal shooting system in an embodiment of the present invention.
Detailed description
The embodiment of the present invention provides a mixed reality virtual rehearsal shooting system that, through virtual-real interaction, previews the camera-movement visual effect of the filmed footage in real time, improving the efficiency of mixed reality virtual video shooting and reducing production cost and production cycle.
The terms "first", "second", "third", "fourth", etc. (if any) in the description, claims, and drawings of the present invention are used to distinguish similar objects and need not describe a specific order or sequence. It should be understood that data so used can be interchanged under appropriate circumstances, so that the embodiments described herein can be implemented in an order other than that illustrated or described herein.
For ease of understanding, the embodiments of the present invention are described below. Referring to FIG. 1, in the embodiment of the present invention the mixed reality virtual rehearsal shooting system includes a real shooting system 1, a virtual shooting system 2, an optical positioning and motion capture system 3, a central control system 4, and a display system 5 that are communicatively connected, wherein:
the real shooting system 1 includes a video camera 11 and a video capture card 12, the video camera 11 and the video capture card 12 being communicatively connected;
the virtual shooting system 2 includes a virtual camera 21, an inertial measurement device 22, and a wireless transmission device 23; the inertial measurement device 22 is installed on the virtual camera 21 and is communicatively connected with the wireless transmission device 23;
the optical positioning and motion capture system 3 includes an optical capture processing server 31 and a plurality of optical motion capture cameras 32; the optical capture processing server 31 is communicatively connected with the plurality of optical motion capture cameras 32 and with the wireless transmission device 23;
the central control system 4 includes a central management server 41, which is communicatively connected with the video capture card 12, the wireless transmission device 23, and the optical capture processing server 31;
the display system 5 includes a display device 51, which is connected to the central management server 41 over a network.
Specifically, the real shooting system 1 is configured to receive a preset shooting instruction and, based on the preset shooting instruction, control shooting by the video camera 11; the video camera 11 is configured to shoot video of the object to be captured within the actual shooting scene, obtaining real-shot video data. Further, the video capture card 12 is configured to upload the real-shot video data to the central management server 41 for data storage and data processing by the central management server 41. The object to be captured is a moving object, including persons and props, and the real-shot video data indicates the moving-picture information of the object to be captured within the actual shooting scene. Optionally, the actual shooting scene includes a green screen and trusses, the trusses being used to lay out the plurality of optical motion capture cameras 32; the background of the actual shooting scene may alternatively be a blue screen, which is not specifically limited here. It should be noted that the video camera 11 also supports lens panning, vertical movement, and zooming in and out on a fixed gimbal, as well as camera movement on a large jib over a wide area, for example use of the video camera 11 in an actual shooting scene of 10 to 2,000 square meters.
Specifically, the virtual camera 21 in the virtual shooting system 2 is a handheld six-degree-of-freedom camera used to shoot, while moving, a preset virtual scene corresponding to the actual shooting scene. Six degrees of freedom means that an object in three-dimensional space has six degrees of freedom: translational freedom along the three rectangular coordinate axes x, y, and z, and rotational freedom about those three axes. The virtual camera 21 can therefore be used to adjust the shooting angle of the preset virtual scene, to adjust the focal length and aperture value, and to control the start and stop times of shooting the preset virtual scene. For example, when the start button of the virtual camera 21 is pressed, the virtual camera 21 receives a start-recording instruction and begins shooting the preset virtual scene; when the pause or stop button of the virtual camera 21 is pressed, the virtual camera 21 receives a pause-recording or stop-recording instruction and stops shooting the preset virtual scene. The virtual camera 21 is also used to synchronously display the three-dimensional virtual reality mixed video stream sent by the central management server 41. It should be noted that, while the virtual camera 21 moves and shoots the object to be captured in the preset virtual scene, its shooting angle and moving speed change; therefore the inertial measurement device 22 collects motion-attitude information of the virtual camera 21, and the wireless transmission device 23 uploads the collected motion-attitude information to the optical capture processing server 31 of the optical positioning and motion capture system 3. The wireless transmission device 23 may be wireless Bluetooth, which also provides a Bluetooth image-transmission function, or wireless Wi-Fi.
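For illustration only, the six degrees of freedom described above reduce to three translational and three rotational components, alongside the lens and recording controls mentioned. A minimal Python sketch of such a camera state (all names hypothetical, not part of the disclosed system) could look like this:

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class VirtualCameraState:
    """Hypothetical state of a handheld six-degree-of-freedom virtual camera."""
    position: np.ndarray = field(default_factory=lambda: np.zeros(3))  # x, y, z translation
    rotation: np.ndarray = field(default_factory=lambda: np.zeros(3))  # rotation about x, y, z (rad)
    focal_length_mm: float = 35.0
    aperture_f: float = 2.8
    recording: bool = False

    def start_recording(self) -> None:
        self.recording = True   # corresponds to the start-recording instruction

    def stop_recording(self) -> None:
        self.recording = False  # corresponds to the pause/stop-recording instruction

cam = VirtualCameraState()
cam.start_recording()
cam.position += np.array([0.1, 0.0, 0.0])  # move 10 cm along x while recording
print(cam)
```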
Specifically, the plurality of optical motion capture cameras 32 in the optical positioning and motion capture system 3 identify optical marker points bound to different parts of the object to be captured; the optical capture processing server 31 obtains the position and orientation of each optical marker point within the actual shooting scene, thereby determining the motion trajectory of the object to be captured within the actual shooting scene, and synchronously imports it into the central management server 41 in real time. Optionally, the object to be captured wears in advance a motion capture suit to which a plurality of optical marker points are attached; the plurality of optical marker points locate the joint positions of the object to be captured. The optical marker points include reflective marker points and active marker points; the reflective marker points may be reflective balls, and active marker points suit situations where ambient lighting makes reflective marker points difficult to locate and track, so that the optical positioning and motion capture system 3 can be applied both indoors and outdoors. Further, the video camera 11 and the virtual camera 21 are each fitted in advance with a preset number of optical marker points, which, combined with the plurality of optical motion capture cameras 32, are used to determine the spatial position information of the video camera 11 and the virtual camera 21; the preset number is a positive integer, for example 3, 4, or 5, without specific limitation here.
For example, different reflective marker points (reflective balls) are arranged on the head, hands, feet, and lower back of the object to be captured. The plurality of optical motion capture cameras 32 track the reflective marker points on the object to be captured and precisely locate the spatial position and orientation of each marker point, thereby obtaining the extremity positions of the object to be captured. The optical capture processing server 31 then determines the motion-attitude information of the object to be captured from those extremity positions.
Specifically, the central management server 41 of the central control system 4 first performs keying on the real-shot video data captured by the video camera 11 to obtain keyed image information, the keyed image information including character model information. It then composites the keyed image information into the preset virtual scene (the virtual scene corresponding to the actual shooting scene) through three-dimensional compositing software or a preset engine, or obtains a virtual character corresponding to the character model information and merges that virtual character into the corresponding preset virtual scene, and then drives the character model information or the virtual character through forward kinematics and the motion-attitude information. For example, when a motion-capture actor (the object to be captured) wearing a green motion capture suit performs in the green-screen shooting space, the optical capture processing server 31 and the plurality of optical motion capture cameras 32 acquire all of the actor's motion-attitude information. The central management server 41 performs whole-picture computation on the real-shot video data captured by the real shooting system 1, obtains the keyed image information containing the motion-capture actor and the virtual character corresponding to the keyed image information, composites the corresponding virtual character into the preset virtual scene actually required through three-dimensional compositing software or the preset engine, and simultaneously drives the preset virtual character with the motion-attitude information, the virtual character being, for example, a virtual object dancing to music, thereby obtaining a three-dimensional virtual reality mixed video stream. Further, the central management server 41 sends the three-dimensional virtual reality mixed video stream to the virtual camera 21 and to the display device 51. Optionally, in addition to superimposing and compositing the preset virtual scene with a virtual character, the central management server 41 may also perform three-dimensional depth compositing of a real character with a preset virtual scene, or of a preset real scene with a virtual character, without specific limitation here. It should be noted that the central control system 4 further includes input devices and output devices, for example a mouse, a keyboard, and a display.
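The character model above is driven through forward kinematics. As a purely illustrative aside (the patent does not disclose the kinematic formulation), a minimal planar forward-kinematics chain, with hypothetical segment lengths and joint angles, can be sketched as follows:

```python
import numpy as np

def forward_kinematics_2d(lengths, angles):
    """Toy planar FK chain: each joint angle is relative to the previous
    segment; returns the world position of every joint, root at the origin."""
    pts = [np.zeros(2)]
    heading = 0.0
    for seg_len, ang in zip(lengths, angles):
        heading += ang
        pts.append(pts[-1] + seg_len * np.array([np.cos(heading), np.sin(heading)]))
    return np.stack(pts)

# Three-segment "arm": shoulder bent 45 degrees, elbow 30, wrist -20
print(forward_kinematics_2d([0.3, 0.25, 0.1], np.radians([45, 30, -20])))
```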
Specifically, the display system 5 includes a display device 51 configured to receive and synchronously display the three-dimensional virtual reality mixed video stream sent by the central management server 41, so that target personnel can adjust the shooting action and shooting angle of the object to be captured in real time and can preview the camera-movement visual effect of the footage already shot while the video camera 11 and the virtual camera 21 are being moved; the target personnel include the director and the cinematographer. The display device 51 thus provides a real-time scheduling preview of shot composition, camera movement, and compositing effect, so that the target personnel can promptly adjust and revise the creative intent and frame composition. This avoids the high cost of bridging pre-production and post-production and the overly long production cycles of traditional film and television production, providing a visualized shooting effect in real time through virtual rehearsal shooting.
In the embodiment of the present invention, a mixed reality virtual rehearsal shooting system is provided, in which the real shooting system, the virtual shooting system, the optical positioning and motion capture system, the central control system, and the display system implement motion-trajectory recording, data export and fusion, masking of extraneous targets (objects other than the object to be captured), and real-time imaging preview. Compared with traditional green-screen keying, the present invention acquires the three-dimensional motion trajectory of the object to be captured in real time, superimposes scene and character in three-dimensional space, and produces composites carrying three-dimensional depth information; at the same time, through virtual reality interaction, the camera-movement visual effect of footage already shot is previewed in real time, improving the efficiency of mixed reality virtual video shooting and reducing production cost and production cycle.
Referring to FIG. 1 and FIG. 2, the optical capture processing server 31 further includes a three-dimensional motion capture processing module 311 and an optical-inertial fusion module 312; the three-dimensional motion capture processing module 311 is communicatively connected with the plurality of optical motion capture cameras 32, and the optical-inertial fusion module 312 is communicatively connected with the central management server 41 and with the wireless transmission device 23.
The plurality of optical motion capture cameras 32 are configured to locate and shoot each optical marker point, obtaining respective corresponding two-dimensional image data.
The three-dimensional motion capture processing module 311 is configured to obtain, from the respective corresponding two-dimensional image data, the respective corresponding three-dimensional coordinate data and the motion-attitude information of the object to be captured.
The optical-inertial fusion module 312 is configured to perform coordinate system calibration and attitude fusion in sequence according to the motion-attitude information corresponding to each optical marker point on the virtual camera 21 and the corresponding inertial navigation data, thereby obtaining camera attitude data, and to send the respective corresponding three-dimensional coordinate data, the camera attitude data, and the motion-attitude information to the central management server 41.
The plurality of optical motion capture cameras 32 may be connected to the optical capture processing server 31 by wire or wirelessly, for example through a PoE switch. Specifically, the plurality of optical motion capture cameras 32 collect the two-dimensional image information corresponding to the video camera 11, the virtual camera 21, and the plurality of optical marker points; that is, each optical marker point on the video camera 11, the virtual camera 21, and the object to be captured is shot by at least two optical motion capture cameras 32 simultaneously. The three-dimensional motion capture processing module 311 converts the respective corresponding two-dimensional image data into respective corresponding three-dimensional coordinate data, i.e. spatial position information, together with the motion-attitude information of the object to be captured; the three-dimensional coordinate data indicates the position and orientation of each optical marker point in the world coordinate system, enabling positioning and tracking of moving objects. The optical-inertial fusion module 312 receives the inertial navigation data corresponding to the virtual camera 21, performs coordinate system calibration and attitude-fusion computation based on the three-dimensional coordinate data corresponding to each optical marker point on the virtual camera 21 and the corresponding inertial navigation data, obtains the camera attitude data, and sends the respective corresponding three-dimensional coordinate data, the camera attitude data, and the motion-attitude information to the central management server 41. Optionally, the optical positioning and motion capture system 3 further includes a calibration device 33 for calibrating the positions of the plurality of optical motion capture cameras 32 in the actual shooting scene with a calibration rod; for example, the calibration rod may be T-shaped, with a crossbar 418 mm long and a vertical bar 578 mm long.
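For readers unfamiliar with how two-dimensional detections from at least two calibrated cameras yield a three-dimensional marker position, the standard linear (DLT) triangulation is sketched below; this is a generic textbook method, shown only as background, not a disclosure of the module's actual algorithm:

```python
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Linear (DLT) triangulation of one 3D point from its 2D detections
    in two or more calibrated cameras (3x4 projection matrices)."""
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        rows.append(u * P[2] - P[0])  # each view contributes two linear constraints
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.stack(rows))
    X = vt[-1]
    return X[:3] / X[3]               # de-homogenize

# Two toy cameras observing the point (0.5, 0.2, 3.0)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                   # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])   # camera shifted 1 m on x
X_true = np.array([0.5, 0.2, 3.0, 1.0])
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate_point([P1, P2], [uv1, uv2]))  # ~ [0.5, 0.2, 3.0]
```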
Further, the inertial measurement device 22 collects, through a nine-axis inertial sensor, the inertial navigation data corresponding to the virtual camera 21 and the plurality of optical marker points during moving shooting, and sends the respective corresponding inertial navigation data to the wireless transmission device 23; the inertial measurement device 22 is an inertial measurement unit (IMU). The wireless transmission device 23 sends the respective corresponding inertial navigation data to the optical-inertial fusion module 312. Optionally, the optical capture processing server 31 further includes an inertial navigation setting module 313, communicatively connected with the optical-inertial fusion module 312 and configured to associate, by means of a preset link port number and a preset rigid-body name, the nine-axis inertial sensor with the preset number of optical marker points installed on the virtual camera 21 in a preset positional relationship. The nine-axis inertial sensor comprises a three-axis angular-velocity sensor (gyroscope), a three-axis acceleration sensor, and a three-axis magnetic-induction sensor. It ensures high-precision computation of the rigid body's rotation angle, with an error within 0.05 degrees, so that the composited three-dimensional virtual reality mixed video stream is more stable and supports low-latency real-time interaction, for example updating the picture of the three-dimensional virtual reality mixed video stream at an angle-data refresh rate of 200 Hz. It will be understood that the preset number of optical marker points may be at least three reflective balls; the reflective balls form a rigid body and act as a motion tracker whose trajectory represents changes in spatial coordinates. To obtain a more accurate pose, they are generally used together with the nine-axis inertial sensor (the inertial measurement device 22); only by unifying the spatial coordinates of the rigid body and the nine-axis inertial sensor can the rigid body's precise pose data finally be obtained. The purpose of calibrating the rigid body against the nine-axis inertial sensor is to unify the spatial coordinate systems: the rigid body's motion-attitude information is its position and attitude in a user-defined world coordinate system, while the nine-axis inertial sensor's attitude (the corresponding inertial navigation data) is relative to the attitude of the hardware at start-up. Aligning the coordinate systems of the rigid body and the nine-axis inertial sensor, and performing attitude fusion, both improve the computation accuracy of the rigid body's pose.
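The attitude-fusion algorithm itself is not disclosed. One common way to combine a high-rate gyroscope stream with drift-free optical attitude measurements is a complementary filter; the sketch below is illustrative only, with hypothetical parameter values apart from the 200 Hz rate mentioned above:

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def integrate_gyro(q, omega, dt):
    """Advance orientation q by body-frame gyro rate omega (rad/s) over dt."""
    dq = np.concatenate([[0.0], omega])
    q = q + 0.5 * quat_mul(q, dq) * dt
    return q / np.linalg.norm(q)

def fuse(q_pred, q_optical, alpha=0.02):
    """Pull the gyro prediction toward the drift-free optical attitude (nlerp)."""
    if np.dot(q_pred, q_optical) < 0:   # keep both quaternions in the same hemisphere
        q_optical = -q_optical
    q = (1 - alpha) * q_pred + alpha * q_optical
    return q / np.linalg.norm(q)

# 200 Hz updates: slow yaw from the gyro, corrected toward the optical attitude
q = np.array([1.0, 0.0, 0.0, 0.0])
dt = 1.0 / 200.0
omega = np.array([0.0, 0.0, 0.5])           # rad/s
q_optical = np.array([1.0, 0.0, 0.0, 0.0])  # stand-in for the mocap-reported attitude
for _ in range(200):
    q = fuse(integrate_gyro(q, omega, dt), q_optical)
print(q)
```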
It should be noted that the binding between reflective balls (optical marker points) and nine-axis inertial sensors is flexible, and the preset rigid-body name and the preset link port number are in a one-to-one mapping. The inertial navigation setting module 313 can therefore bind and unbind a reflective ball (optical marker) and a nine-axis inertial sensor by updating the configuration of the preset rigid-body name and preset link port number or by clearing an existing configuration. At the same time, the nine-axis inertial sensor reduces circuit-board and overall volume; the corresponding hardware is small and well suited to portable installation on moving objects whose rotation data must be captured accurately, improving the accuracy with which a single sensor locates spatial position and direction of motion. It will be understood that the inertial navigation setting module 313 is used not only to determine the motion-attitude data of the virtual camera 21 but also to determine the motion-attitude data of the object to be captured, setting the association between nine-axis inertial sensors and optical marker points according to the actual shooting scene, thereby achieving high-precision motion detection.
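A minimal sketch of such a one-to-one binding between preset rigid-body names and preset link port numbers, with binding and unbinding performed by updating or clearing the configuration, might look as follows (class and method names are hypothetical):

```python
class InertialBindingRegistry:
    """Hypothetical registry mapping preset rigid-body names to IMU link ports."""
    def __init__(self):
        self._port_by_rigid_body = {}

    def bind(self, rigid_body_name: str, link_port: int) -> None:
        # Enforce the one-to-one mapping: a port may serve only one rigid body
        if link_port in self._port_by_rigid_body.values():
            raise ValueError(f"port {link_port} is already bound")
        self._port_by_rigid_body[rigid_body_name] = link_port

    def unbind(self, rigid_body_name: str) -> None:
        self._port_by_rigid_body.pop(rigid_body_name, None)

    def port_of(self, rigid_body_name: str):
        return self._port_by_rigid_body.get(rigid_body_name)

registry = InertialBindingRegistry()
registry.bind("virtual_camera_rig", 7)  # bind the reflective-ball rigid body to port 7
registry.unbind("virtual_camera_rig")   # clearing the configuration unbinds them
```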
Referring to FIG. 1 and FIG. 3, optionally, the central management server 41 includes a virtual rehearsal shooting synthesis module 411 and a rendering synthesis module 412; the virtual rehearsal shooting synthesis module 411 is communicatively connected with the video capture card 12 and the optical-inertial fusion module 312, and the rendering synthesis module 412 is communicatively connected with the virtual rehearsal shooting synthesis module 411 and the wireless transmission device 23.
The virtual rehearsal shooting synthesis module 411 is configured to key the real-shot video data in real time, so as to generate an avatar in the virtual scene, and to adjust the virtual scene angle in real time.
The rendering synthesis module 412 is configured to convert the keyed image information and the adjusted virtual scene into a three-dimensional virtual reality mixed video stream according to the camera attitude data, the respective corresponding three-dimensional coordinate data, and the motion-attitude information, and to send the three-dimensional virtual reality mixed video stream to the virtual camera 21 and to the display device 51.
The rendering synthesis module 412 receives the keyed image information sent by the virtual rehearsal shooting synthesis module 411, the keyed image information being image information that contains the object to be captured, and uses the Unreal Engine UE4 for three-dimensional animated-character setup, sound simulation, and lighting rendering. Specifically, the virtual rehearsal shooting synthesis module 411 sets the virtual character corresponding to the keyed image information and adjusts the virtual scene angle in real time; the rendering synthesis module 412 synchronously sets the virtual character's motion attitude through the Unreal Engine UE4 according to the camera attitude data, the respective corresponding three-dimensional coordinate data, and the motion-attitude information, while adding preset voice information into the preset virtual scene, thereby obtaining the three-dimensional virtual reality mixed video stream. Further, the wireless transmission device 23 sends the three-dimensional virtual reality mixed video stream to the virtual camera 21 and to the display device 51 respectively, so that target personnel can preview the shooting effect in real time. The display device 51 may present the storyboard shots on a large projection screen, so as to achieve storyboard visualization.
Optionally, the central management server 41 is further configured to store the camera attitude data, the respective corresponding three-dimensional coordinate data, the motion-attitude information, and the three-dimensional virtual reality mixed video stream, to facilitate post-production and compositing.
Optionally, before video compositing, the central management server 41 is further configured to align the adjusted virtual scene and the actual shooting scene in the three-dimensional spatial coordinate system according to the camera attitude data and the respective corresponding three-dimensional coordinate data, so that the output three-dimensional virtual reality mixed video stream achieves high-frame-rate, stable, delay-free spatial positioning and virtual-real synchronization.
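One generic way to align two three-dimensional coordinate systems, given matched reference points observed in both, is the Kabsch method; the sketch below is background illustration only and does not represent the server's actual procedure:

```python
import numpy as np

def rigid_align(src, dst):
    """Kabsch estimation of rotation R and translation t with R @ src_i + t ~ dst_i,
    from (N, 3) arrays of corresponding points in the two coordinate systems."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Toy check: virtual-scene anchor points vs. their measured on-set positions
rng = np.random.default_rng(0)
src = rng.random((5, 3))
Q, _ = np.linalg.qr(rng.random((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]                              # force a proper rotation
t_true = np.array([1.0, 2.0, 0.5])
dst = src @ Q.T + t_true
R, t = rigid_align(src, dst)
print(np.allclose(R @ src.T + t[:, None], dst.T))   # True
```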
It will be understood that the virtual rehearsal shooting synthesis module 411 performs chroma-key keying; that is, transparency is defined according to a specific color value or brightness value of the image in the real-shot video data. When a value is keyed out, all pixels with similar color or brightness values become transparent; the virtual rehearsal shooting synthesis module 411 thereby obtains the keyed image information from the real-shot video data.
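A minimal numeric illustration of this color-value keying is given below; it uses a hard threshold on color distance, whereas production keyers add soft edges and spill suppression:

```python
import numpy as np

def green_key_alpha(frame_rgb, key=(0, 255, 0), tolerance=80.0):
    """Pixels close to the key color become transparent (alpha 0);
    all other pixels stay opaque (alpha 255). frame_rgb: (H, W, 3) uint8."""
    diff = frame_rgb.astype(np.float32) - np.array(key, dtype=np.float32)
    dist = np.linalg.norm(diff, axis=-1)
    return np.where(dist < tolerance, 0, 255).astype(np.uint8)

# Toy frame: left half pure green screen, right half a "performer" in red
frame = np.zeros((2, 4, 3), dtype=np.uint8)
frame[:, :2] = (0, 255, 0)
frame[:, 2:] = (200, 30, 30)
print(green_key_alpha(frame))  # zeros on the green half, 255 elsewhere
```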
Referring to FIG. 4, optionally, the mixed reality virtual rehearsal shooting system further includes a camera setting system 6, communicatively connected with the central control system 4 and configured to obtain the site parameters of the actual shooting scene and to determine, according to the site parameters, the number of cameras corresponding to the optical motion capture cameras 32 and the corresponding camera installation positions, so that each optical marker point is captured by any three optical motion capture cameras 32; the site parameters include the length information and width information of the large-space site.
In the mixed reality virtual rehearsal shooting system, the motion trajectories of a plurality of optical marker points must be located and tracked. To improve positioning accuracy, each optical marker point must be captured by any three optical motion capture cameras 32 so that its spatial position information can be determined, yet configuring too many cameras makes construction costs excessive. The camera setting system 6 is therefore used to lay out the optical motion capture cameras 32. Referring to FIG. 4 and FIG. 5, the actual shooting scene is an open site 3.5 to 7 meters high, equipped with professional film and television lighting, trusses, and a green or blue box, the trusses including single-layer, two-layer, and multi-layer trusses; the site parameters of the actual shooting scene include, for example, a large-space site length of 20 meters and a width of 10 meters. Further, the camera setting system 6 computes a camera-configuration cost value for the actual shooting scene from its site parameters and determines, from that cost value, the number of cameras corresponding to the optical motion capture cameras 32 and the corresponding camera installation positions. The plurality of optical motion capture cameras 32 are arranged at different positions on the trusses in the actual shooting scene, and the visual range of each optical motion capture camera 32 covers the entire actual shooting scene, while the video camera 11 and the virtual camera 21 shoot the object to be captured.
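As an illustrative sketch only (the patent does not disclose the cost function), one could grid-sample the site and check that every floor point lies within the capture volume, here simplified to a sphere, of at least three cameras, for the 20 m by 10 m example above:

```python
import numpy as np
from itertools import product

def cameras_covering(point, cam_positions, view_radius):
    """Count cameras whose (simplified, spherical) capture volume holds the point."""
    return int((np.linalg.norm(cam_positions - point, axis=1) <= view_radius).sum())

def min_coverage(length, width, cam_positions, view_radius, step=1.0):
    """Worst-case camera coverage over a grid of floor points in an L x W site."""
    worst = np.inf
    for x, y in product(np.arange(0, length + step, step),
                        np.arange(0, width + step, step)):
        pt = np.array([x, y, 0.0])
        worst = min(worst, cameras_covering(pt, cam_positions, view_radius))
    return worst

# 20 m x 10 m site, six cameras on the truss at 5 m height, 15 m nominal range
cams = np.array([[x, y, 5.0] for x in (0.0, 10.0, 20.0) for y in (0.0, 10.0)])
print(min_coverage(20.0, 10.0, cams, view_radius=15.0))  # >= 3: layout acceptable
```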
It will be understood that the mixed reality virtual rehearsal shooting system integrates real-time rehearsal, virtual studio, and motion capture functions. On the basis of traditional keying technology, spatial-positioning motion capture synchronizes the positions of the video camera 11, the virtual camera 21, and all virtual and real objects (for example, props) in the space, enabling real-time interaction and compositing-effect preview inside the Unreal Engine. With smooth picture rendering and realistic three-dimensional fused pictures, a virtual-real combination is achieved, improving the narrative power and visual expressiveness of film and television shooting. Not only can the target person (the director) clarify requirements interactively on set, obtain the picture angle of a preset composition as required, and preview the shooting effect in real time, but the effect and elements of every shot can also be determined in real time through interactive visualized shots, so the shooting process is completed in one pass and the direction of post-production is already fixed at the shooting stage. At the same time, live actors, virtual actors, and virtual characters interact in real time; the interaction between characters and CG is no longer a simple front-back overlay, and the positions of objects carry 3D depth information. For example, a live actor can walk through a virtual ancient city, walk around a CG car, or hug and fight with a virtual character with complex, natural performance effects; the camera positions are recorded directly on set and seamlessly imported into Maya three-dimensional software for rendering and compositing, reducing the cost of repeated back-and-forth communication and revision from pre-production to post-production. Moreover, under any shooting conditions, virtual character scenes can be accurately tracked in the real environment; as the position of the video camera 11 shifts and its lens zooms in and out, the combined virtual-real scene can be completely presented in real time. This enables animation production for film, television, animation, and games, as well as virtual reality rehearsal shooting in studios, sound stages, and large exhibition halls.
In the embodiment of the present invention, a mixed reality virtual rehearsal shooting system is provided, in which the real shooting system, the virtual shooting system, the optical positioning and motion capture system, the central control system, and the display system implement motion-trajectory recording, data export and fusion, masking of extraneous targets, and real-time imaging preview. Compared with traditional green-screen keying, the three-dimensional motion trajectory of the object to be captured is acquired in real time, scene and character are superimposed in three-dimensional space with composites carrying three-dimensional depth information, and, through virtual reality interaction, the camera-movement visual effect of footage already shot is previewed in real time, improving the efficiency of mixed reality virtual video shooting and reducing production cost and production cycle.
The above embodiments are only used to illustrate the technical solutions of the present invention and are not intended to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. A mixed reality virtual rehearsal shooting system, characterized in that the mixed reality virtual rehearsal shooting system comprises a real shooting system, a virtual shooting system, an optical positioning and motion capture system, a central control system, and a display system that are communicatively connected; wherein,
    the real shooting system comprises a video camera and a video capture card, the video camera and the video capture card being communicatively connected;
    the virtual shooting system comprises a virtual camera, an inertial measurement device, and a wireless transmission device, the inertial measurement device being installed on the virtual camera and communicatively connected with the wireless transmission device;
    the optical positioning and motion capture system comprises an optical capture processing server and a plurality of optical motion capture cameras, the optical capture processing server being communicatively connected with the plurality of optical motion capture cameras and with the wireless transmission device;
    the central control system comprises a central management server, the central management server being communicatively connected with the video capture card, the wireless transmission device, and the optical capture processing server;
    the display system comprises a display device, the display device being connected to the central management server over a network.
  2. The mixed reality virtual rehearsal shooting system according to claim 1, wherein
    the video camera is configured to shoot an object to be captured within an actual shooting scene to obtain real-shot video data; and the video capture card is configured to send the real-shot video data to the central management server.
  3. The mixed reality virtual rehearsal shooting system according to claim 2, wherein
    the object to be captured wears in advance a motion capture suit to which a plurality of optical marker points are attached, the plurality of optical marker points being used to locate the joint positions of the object to be captured.
  4. The mixed reality virtual rehearsal shooting system according to claim 2, wherein
    the video camera and the virtual camera are each installed with a preset number of optical marker points;
    the inertial measurement device is configured to collect, through a nine-axis inertial sensor, inertial navigation data corresponding to the virtual camera during moving shooting; and
    the wireless transmission device is configured to send the corresponding inertial navigation data to the optical capture processing server.
  5. The mixed reality virtual rehearsal shooting system according to claim 4, wherein
    the optical capture processing server further comprises a three-dimensional motion capture processing module and an optical-inertial fusion module, the three-dimensional motion capture processing module being communicatively connected with the plurality of optical motion capture cameras, and the optical-inertial fusion module being communicatively connected with the central management server and with the wireless transmission device;
    the plurality of optical motion capture cameras are configured to locate and shoot each optical marker point to obtain respective corresponding two-dimensional image data;
    the three-dimensional motion capture processing module is configured to obtain, from the respective corresponding two-dimensional image data, respective corresponding three-dimensional coordinate data and motion-attitude information of the object to be captured; and
    the optical-inertial fusion module is configured to perform coordinate system calibration and attitude fusion in sequence according to the three-dimensional coordinate data corresponding to each optical marker point on the virtual camera and the corresponding inertial navigation data to obtain camera attitude data, and to send the respective corresponding three-dimensional coordinate data, the camera attitude data, and the motion-attitude information to the central management server.
  6. The mixed reality virtual rehearsal shooting system according to claim 5, wherein
    the optical capture processing server further comprises an inertial navigation setting module, the inertial navigation setting module being communicatively connected with the optical-inertial fusion module and configured to associate, by means of a preset link port number and a preset rigid-body name, the nine-axis inertial sensor with the preset number of optical marker points installed on the virtual camera in a preset positional relationship.
  7. The mixed reality virtual rehearsal shooting system according to claim 5, wherein
    the central management server comprises a virtual rehearsal shooting synthesis module and a rendering synthesis module, the virtual rehearsal shooting synthesis module being communicatively connected with the video capture card and the optical-inertial fusion module, and the rendering synthesis module being communicatively connected with the virtual rehearsal shooting synthesis module and the wireless transmission device;
    the virtual rehearsal shooting synthesis module is configured to key the real-shot video data in real time, so as to generate an avatar in a virtual scene, and to adjust the virtual scene angle in real time; and
    the rendering synthesis module is configured to convert keyed image information and the adjusted virtual scene into a three-dimensional virtual reality mixed video stream according to the camera attitude data, the respective corresponding three-dimensional coordinate data, and the motion-attitude information, and to send the three-dimensional virtual reality mixed video stream to the virtual camera and to the display device respectively.
  8. The mixed reality virtual rehearsal shooting system according to claim 7, wherein
    the display device is configured to receive and synchronously display the three-dimensional virtual reality mixed video stream, so that target personnel preview the three-dimensional virtual reality mixed video stream during displacement of the video camera and the virtual camera and adjust the shooting action and shooting angle of the object to be captured in real time.
  9. The mixed reality virtual rehearsal shooting system according to claim 2, wherein
    the optical positioning and motion capture system further comprises a calibration device, the calibration device being configured to calibrate the positions of the plurality of optical motion capture cameras in the actual shooting scene through a calibration rod.
  10. The mixed reality virtual rehearsal shooting system according to any one of claims 1 to 9, wherein
    the mixed reality virtual rehearsal shooting system further comprises a camera setting system, the camera setting system being communicatively connected with the central control system and configured to obtain site parameters of the actual shooting scene and to determine, according to the site parameters, the number of cameras corresponding to the optical motion capture cameras and the corresponding camera installation positions, so that each optical marker point is captured by any three optical motion capture cameras, the site parameters comprising length information and width information of a large-space site.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010476636.9 2020-05-29
CN202010476636.9A CN111447340A (en) 2020-05-29 2020-05-29 Mixed reality virtual preview shooting system

Publications (1)

Publication Number Publication Date
WO2021238804A1 (en) 2021-12-02

Family ID: 71657625

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/095242 WO2021238804A1 (en) 2020-05-29 2021-05-21 Mixed reality virtual preview photographing system

Country Status (2)

Country Link
CN (1) CN111447340A (en)
WO (1) WO2021238804A1 (en)


Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111447340A (en) * 2020-05-29 2020-07-24 深圳市瑞立视多媒体科技有限公司 Mixed reality virtual preview shooting system
CN111988535A (en) * 2020-08-10 2020-11-24 山东金东数字创意股份有限公司 System and method for optically positioning fusion picture
CN111709970B (en) * 2020-08-19 2020-11-13 北京理工大学 Live emulation preview system of intelligence
CN112040092B (en) * 2020-09-08 2021-05-07 杭州时光坐标影视传媒股份有限公司 Real-time virtual scene LED shooting system and method
CN113849072A (en) * 2021-10-11 2021-12-28 深圳市瑞立视多媒体科技有限公司 Wireless handle and motion capture system
CN113727041A (en) * 2021-10-14 2021-11-30 北京七维视觉科技有限公司 Image matting region determination method and device
CN114598790B (en) * 2022-03-21 2024-02-02 北京迪生数字娱乐科技股份有限公司 Subjective visual angle posture capturing and real-time image system
CN115118880A (en) * 2022-06-24 2022-09-27 中广建融合(北京)科技有限公司 XR virtual shooting system based on immersive video terminal is built
CN114885147B (en) * 2022-07-12 2022-10-21 中央广播电视总台 Fusion production and broadcast system and method
CN116030228B (en) * 2023-02-22 2023-06-27 杭州原数科技有限公司 Method and device for displaying mr virtual picture based on web
CN116051700A (en) * 2023-04-03 2023-05-02 北京墨境天合数字图像科技有限公司 Virtual shooting method and device, electronic equipment and storage medium
CN116320363B (en) * 2023-05-25 2023-07-28 四川中绳矩阵技术发展有限公司 Multi-angle virtual reality shooting method and system
CN117527994A (en) * 2023-11-06 2024-02-06 中影电影数字制作基地有限公司 Visual presentation method and system for space simulation shooting
CN117425076B (en) * 2023-12-18 2024-02-20 湖南快乐阳光互动娱乐传媒有限公司 Shooting method and system for virtual camera
CN117979168B (en) * 2024-04-01 2024-06-11 佳木斯大学 Intelligent camera management system for aerobics competition video shooting


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106097435A (en) * 2016-06-07 2016-11-09 北京圣威特科技有限公司 A kind of augmented reality camera system and method
CN107231531A (en) * 2017-05-23 2017-10-03 青岛大学 A kind of networks VR technology and real scene shooting combination production of film and TV system
US20190102949A1 (en) * 2017-10-03 2019-04-04 Blueprint Reality Inc. Mixed reality cinematography using remote activity stations
CN207460313U (en) * 2017-12-04 2018-06-05 上海幻替信息科技有限公司 Mixed reality studio system
CN109345635A (en) * 2018-11-21 2019-02-15 北京迪生数字娱乐科技股份有限公司 Unmarked virtual reality mixes performance system
CN111447340A (en) * 2020-05-29 2020-07-24 深圳市瑞立视多媒体科技有限公司 Mixed reality virtual preview shooting system

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210102811A1 (en) * 2018-03-30 2021-04-08 Nolo Co., Ltd. Calibration-free positioning method and system
US12031825B2 (en) * 2018-03-30 2024-07-09 Nolo Co., Ltd. Calibration-free positioning method and system
CN114283447A (en) * 2021-12-13 2022-04-05 凌云光技术股份有限公司 Motion capture system and method
CN114283447B (en) * 2021-12-13 2024-03-26 北京元客方舟科技有限公司 Motion capturing system and method
CN114422696A (en) * 2022-01-19 2022-04-29 浙江博采传媒有限公司 Virtual shooting method and device and storage medium
CN114513648A (en) * 2022-02-14 2022-05-17 河南大学 Visualization system and method for virtual shooting and production of film and television
CN114785999A (en) * 2022-04-12 2022-07-22 先壤影视制作(上海)有限公司 Real-time virtual shooting synchronous control method and system
CN114785999B (en) * 2022-04-12 2023-12-15 先壤影视制作(上海)有限公司 Real-time virtual shooting synchronous control method and system
CN115065816B (en) * 2022-05-09 2023-04-07 北京大学 Real geospatial scene real-time construction method and real-time construction device
US11836878B2 (en) 2022-05-09 2023-12-05 Peking University Method and apparatus for constructing real-geographic-space scene in real time
CN115065816A (en) * 2022-05-09 2022-09-16 北京大学 Real geospatial scene real-time construction method based on panoramic video technology
CN114723896A (en) * 2022-06-08 2022-07-08 山东无界智能科技有限公司 AR three-dimensional modeling system through camera multi-angle image capture
CN115955554A (en) * 2022-06-27 2023-04-11 浙江传媒学院 Graphic virtual-real fusion video production method
CN115334235A (en) * 2022-07-01 2022-11-11 西安诺瓦星云科技股份有限公司 Video processing method, device, terminal equipment and storage medium
CN115334235B (en) * 2022-07-01 2024-06-04 西安诺瓦星云科技股份有限公司 Video processing method, device, terminal equipment and storage medium
CN115278364B (en) * 2022-07-29 2024-05-17 苏州创意云网络科技有限公司 Video stream synthesis method and device
CN115278364A (en) * 2022-07-29 2022-11-01 苏州创意云网络科技有限公司 Video stream synthesis method and device
CN115866160A (en) * 2022-11-10 2023-03-28 北京电影学院 Low-cost movie virtualization production system and method
CN116095494A (en) * 2023-01-10 2023-05-09 杭州易现先进科技有限公司 Action detection method and system based on mobile terminal AR
CN117354627A (en) * 2023-09-22 2024-01-05 广州磐碟塔信息科技有限公司 Automatic focus following method and system for virtual scene
CN117516877A (en) * 2023-10-30 2024-02-06 苏州工业园区精泰达自动化有限公司 Center control screen simulation touch control detection equipment
CN118229889A (en) * 2024-05-23 2024-06-21 武汉大学 Video scene previewing auxiliary method and device

Also Published As

Publication number Publication date
CN111447340A (en) 2020-07-24

Similar Documents

Publication Publication Date Title
WO2021238804A1 (en) Mixed reality virtual preview photographing system
CN110650354B (en) Live broadcast method, system, equipment and storage medium for virtual cartoon character
US9729765B2 (en) Mobile virtual cinematography system
US9299184B2 (en) Simulating performance of virtual camera
CN105264436B (en) System and method for controlling equipment related with picture catching
US20190358547A1 (en) Spectator virtual reality system
CN105488457A (en) Virtual simulation method and system of camera motion control system in film shooting
JP2013183249A (en) Moving image display device
CN109032357A (en) More people's holography desktop interactive systems and method
CN110992486B (en) Shooting method of underwater simulation shooting system based on VR technology
CN105872367B (en) Video generation method and video capture device
JPH07184115A (en) Picture display device
CN212231547U (en) Mixed reality virtual preview shooting system
Wang et al. Applied research on real-time film and television animation virtual shooting for multiplayer action capture technology based on optical positioning and inertial attitude sensing technology
WO2018089040A1 (en) Spectator virtual reality system
CN213186216U (en) Virtual movie & TV shooting device
CN110764247A (en) AR telescope
CN117119294B (en) Shooting method, device, equipment, medium and program of virtual scene
CN105872396B (en) Method for imaging and device
JPH10320590A (en) Composite image production device and method therefor
CN109769082A (en) A kind of virtual studio building system and method for recording based on VR tracking
CN105872368A (en) Photographic method and device
US11682175B2 (en) Previsualization devices and systems for the film industry
CN205450530U (en) Mobile lamp light unit
TWI794512B (en) System and apparatus for augmented reality and method for enabling filming using a real-time display

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21812701; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21812701; Country of ref document: EP; Kind code of ref document: A1)