
CN108495106B - Port crane simulator vision system and image processing method and system thereof - Google Patents


Info

Publication number
CN108495106B
Authority
CN
China
Prior art keywords
vision
network switch
projector
spherical screen
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810353441.8A
Other languages
Chinese (zh)
Other versions
CN108495106A (en)
Inventor
陆后军
马乃金
何军良
苌道方
周强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Maritime University
Original Assignee
Shanghai Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Maritime University
Priority to CN201810353441.8A
Publication of CN108495106A
Application granted
Publication of CN108495106B
Legal status: Active
Anticipated expiration

Classifications

    • H04N 9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N 9/3179 Video signal processing therefor
    • H04N 9/3185 Geometric adjustment, e.g. keystone or convergence
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/80 Geometric correction
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Transforming Electric Information Into Light Information (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention discloses a port crane simulator vision system and an image processing method and system for it. The vision system comprises a network switch assembly made up of a first network switch and a second network switch. One end of the first network switch is connected to the vision machines through a synchronous control system, while its other end is connected in sequence to the second network switch, a video monitoring computer, a teacher management computer, a scene monitoring computer and a main simulation computer. There are several vision machines, one for each projector, and each vision machine is connected to its projector through a virtual scene system and a correction-and-fusion system. The projectors are mounted above a spherical screen, and their horizontal layout is symmetric about the center line of the screen. The vision system and the image processing method and system provided by the invention can project the full working conditions of a port crane and improve the fidelity and reliability of the projected image.

Description

Port crane simulator vision system and image processing method and system thereof
Technical Field
The invention relates to the technical field of multi-channel projection, in particular to a port crane simulator vision system and an image processing method and system thereof.
Background
The vision system provides the operator with the same external environment as a real crane: virtual reality technology is used to simulate the results of operation, so that in a human-centered driver operating environment the operator receives feedback from the simulated crane just as from a real port crane. The visual projection system is a key subsystem of the whole operating simulator and directly affects the operator's sense of presence and immersion, so optimizing the projection scheme is the main means of improving simulator fidelity. Because the visual channel of the simulator must provide high resolution, a wide viewing angle and pictures free of deformation, a multi-channel projection scheme is generally adopted to implement it.
A multi-channel projection scheme combines the output of several projectors into one large-screen display system. Compared with an ordinary single-projector system it offers a larger display size, a wider field of view, more display content, higher resolution and a more striking, immersive visual effect. Typically several projectors are arranged at suitable positions and angles, and the computer-generated image is projected onto a screen by front or rear projection. Depending on the arrangement of the projectors and on the size, shape and material of the screen, different types of projection systems can be formed. Classified by screen shape, common projection systems include flat screens, ring screens and arc screens; in a specific application a suitable scheme is chosen according to resolution, projection area, projection distance and so on.
Traditional projection is dominated by flat screens, which greatly restrict the viewer's field of view. Because of the nature of port crane operation, the driver's field of view lies mainly below the cab, and the driver must be able to observe ships and the working state of other cranes; with traditional projection the driver cannot observe the surroundings comprehensively, and the fidelity and credibility of the projected image are low. A dome (spherical screen) projection scheme is therefore usually adopted to strengthen the presence and immersion of the simulator vision system and thereby improve the fidelity and reliability of the whole simulator. However, because dome projection is assembled from several projectors, the pictures collected on the dome suffer from deformation, edge overlap and similar problems, and the accuracy of the projected image on the dome is low.
Disclosure of Invention
The invention aims to provide a port crane simulator vision system and an image processing method and system thereof, which solve the problems that the driver cannot comprehensively observe the working conditions of the port crane and that the fidelity and reliability of the projected image are low.
In order to achieve the purpose, the invention provides the following scheme:
a port crane simulator vision system, comprising: a dome screen, a projector, a vision machine, a network switch, a video monitoring computer, a teacher management computer, a scene monitoring computer and a main simulation computer;
the network switch comprises a first network switch and a second network switch;
one end of the first network switch is connected with one end of the vision machine through a synchronous control system; the other end of the first network switch is connected with the second network switch, the video monitoring computer, the teacher management computer, the scene monitoring computer and the main simulation computer in sequence;
the vision machine comprises a plurality of vision machines; the vision machines correspond to the projectors one by one;
the vision machine is connected with the projector through a virtual scene system and a correction fusion system;
the projector is arranged above the spherical screen, and the horizontal layout mode of the projector is symmetrical about the central line of the spherical screen; the spherical screen adopts a 9-channel spherical screen projection system.
Optionally, the synchronous control system adopts a forced double-buffer swap synchronization method: one of the vision machines is arbitrarily selected as the forcing-command initiator, which sends rendering messages to every vision machine at a fixed period, and a message feedback mechanism synchronizes the rendering progress of the different vision machines, thereby synchronizing the swap from each vision machine's temporary buffer to its display buffer;
the synchronous control system also adopts a dynamic interpolation compensation algorithm to pre-estimate the state of the simulation entity, so as to improve the simulation fidelity.
Optionally, the spherical screen adopts a 9-channel spherical screen projection system;
the radius of the spherical curtain is 3.6 meters; the horizontal visual angle range is-110 to 110 degrees; the vertical visual angle range is-90 degrees to 0 degrees; the screen gain is 0.9.
Optionally, the number of the projectors is 9; the projectors adopt single-chip digital light processing technology, with a resolution of 1920 × 1200 and a brightness of 6000 lumens.
A vision system image processing method is applied to a port crane simulator vision system, characterized in that the port crane simulator vision system comprises: a dome screen, a projector, a vision machine, a network switch, a video monitoring computer, a teacher management computer, a scene monitoring computer and a main simulation computer; the network switch comprises a first network switch and a second network switch; one end of the first network switch is connected with one end of the vision machine through a synchronous control system; the other end of the first network switch is connected with the second network switch, the video monitoring computer, the teacher management computer, the scene monitoring computer and the main simulation computer in sequence; the vision machine comprises a plurality of vision machines; the vision machines correspond to the projectors one by one; the vision machine is connected with the projector through a virtual scene system and a correction fusion system; the projector is arranged above the spherical screen, and the horizontal layout of the projectors is symmetrical about the center line of the spherical screen;
the vision system image processing method comprises the following steps:
acquiring a dome screen projection image;
performing geometric correction processing on the spherical screen projection image to obtain a corrected projection image and recording correction pixel points; the geometric correction processing is used for eliminating image distortion and image deformation;
performing edge fusion processing on the corrected projection image to obtain a fused projection image and recording fusion pixel points; the edge fusion processing is used for adjusting the brightness of the overlapping area;
and processing the spherical screen projection image according to the correction pixel points and the fusion pixel points.
Optionally, the geometric correction processing is performed on the dome screen projection image to obtain a corrected projection image and record correction pixel points, and the method specifically includes:
establishing a spherical screen coordinate system with the center of the spherical screen as the origin, the line-of-sight direction as the Y axis, the direction perpendicular to the ground as the Z axis, and the direction perpendicular to the YOZ plane as the X axis;
selecting grid points on the spherical screen coordinate system according to a grid plate of the projector;
measuring the distance of the grid point by using a total station, and calculating the corresponding coordinate of the grid point;
judging whether the coordinate is within a coordinate threshold range to obtain a first judgment result;
if the first judgment result shows that the coordinate is in the coordinate threshold range, recording a correction pixel point corresponding to the coordinate;
and if the first judgment result shows that the coordinate is not in the coordinate threshold range, adjusting the position of the grid point.
Optionally, the edge fusion processing performed on the corrected projection image to obtain a fused projection image and record fusion pixel points specifically includes:
performing the edge fusion with a nonlinear function f(x), whose expression is given as a formula image in the original publication;
wherein f(x) is the nonlinear function, x is the relative position of the pixel point in the overlapping region, a is a brightness adjustment coefficient and p is an influence factor.
A vision system image processing system, comprising:
a spherical screen projection image acquisition module, used for acquiring a spherical screen projection image;
the geometric correction processing module is used for carrying out geometric correction processing on the spherical screen projection image to obtain a corrected projection image and recording correction pixel points; the geometric correction processing is used for eliminating image distortion and image deformation;
the edge fusion processing module is used for carrying out edge fusion processing on the corrected projection image to obtain a fused projection image and recording fusion pixel points; the edge fusion processing is used for adjusting the brightness of the overlapping area;
and the image processing module is used for processing the spherical screen projection image according to the correction pixel points and the fusion pixel points.
Optionally, the geometric correction processing module specifically includes:
the spherical screen coordinate system establishing unit is used for establishing a spherical screen coordinate system by taking the spherical center of the spherical screen as an origin, the sight line direction as a Y axis, the direction vertical to the ground as a Z axis and the direction vertical to the YOZ plane as an X axis;
the grid point selecting unit is used for selecting grid points on the spherical screen coordinate system according to a grid plate of the projector;
a grid point coordinate calculation unit, configured to perform ranging on the grid point by using a total station, and calculate a coordinate corresponding to the grid point;
the first judgment unit is used for judging whether the coordinates are in a coordinate threshold range or not to obtain a first judgment result;
the recording unit is used for recording a correction pixel point corresponding to the coordinate if the first judgment result shows that the coordinate is in the coordinate threshold range;
and the adjusting unit is used for adjusting the positions of the grid points if the first judgment result shows that the coordinates are not in the coordinate threshold range.
Optionally, the edge fusion processing module specifically includes:
an edge fusion processing unit, configured to perform edge fusion processing on the corrected projection image with a nonlinear function f(x), whose expression is given as a formula image in the original publication, to obtain a fused projection image and record fusion pixel points;
wherein f(x) is the nonlinear function, x is the relative position of the pixel point in the overlapping region, a is a brightness adjustment coefficient and p is an influence factor.
According to the specific embodiments provided by the invention, the invention discloses the following technical effects: the port crane simulator vision system applies dome (spherical screen) projection to a port crane simulator, which not only widens the viewer's field of view but also improves the sense of presence and immersion.
Because the dome projection provided by the invention is composed of several projectors, the projected pictures exhibit deformation, edge overlap and similar problems when combined on the dome; the invention therefore also discloses an image processing method and system for the vision system to address these problems.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings described below are only some embodiments of the present invention; other drawings can be obtained from them by those of ordinary skill in the art without inventive effort.
Fig. 1 is a schematic diagram illustrating a connection relationship between a vision system and other modules according to an embodiment of the present invention;
fig. 2 is a view system structure diagram of a port crane simulator provided in an embodiment of the present invention;
fig. 3 is a schematic view of a virtual scene of a vision system according to an embodiment of the present invention;
FIG. 4 is a three-dimensional effect diagram of a projection screen according to an embodiment of the present invention;
FIG. 5 is a top view of a projector layout according to the present invention;
FIG. 6 is a side view of a projector layout provided by the present invention;
FIG. 7 is a front view of a projector layout provided by the present invention;
FIG. 8 is a schematic diagram of a projector position calibration coordinate system according to an embodiment of the present invention;
FIG. 9 is a flowchart of a method for processing images in a vision system according to the present invention;
FIG. 10 is a flowchart illustrating non-linear calibration of a spherical screen according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a multi-channel edge blending band provided by an embodiment of the present invention;
FIG. 12 is a flowchart of edge blending according to an embodiment of the present invention;
fig. 13 is a diagram of a forced double-buffer exchange synchronization sequence based on message feedback according to an embodiment of the present invention;
FIG. 14 is a block diagram of a dynamic interpolation compensation algorithm provided in accordance with an embodiment of the present invention;
fig. 15 is a block diagram of a view system image processing system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a port crane simulator vision system and an image processing method and system thereof, which achieve brightness fusion and improve the accuracy of the projected image, thereby improving the overall fidelity and reliability of the simulator.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The port crane simulator comprises different components such as a teacher's station, a main simulation, a linkage station, a motion platform, a vision system and the like, wherein the vision system is used for providing a simulation function of a human vision channel. 70-80% of external environment information obtained by human beings comes from a visual channel, so that the performance of a visual system such as a field angle, a rendering speed, a frame refresh rate and the like can have an important influence on the confidence coefficient of human-computer collaborative simulation. The visual system relates to various technologies such as three-dimensional modeling, real-time rendering and projection display, wherein the field angle range directly determines the information amount of a visual channel received by a human. In order to provide the same range of field angle as the real container crane, a multi-channel projection mode is adopted to meet the visual simulation requirements of large field angle and high telepresence. The relationship between the vision system provided by the invention and other modules of the simulator is shown in figure 1.
The port crane simulator adopts visual simulation based on a virtual reality technology, and renders and outputs three-dimensional models such as equipment facilities and the like forming a wharf scene on display equipment in real time, so that pictures of a visual channel virtual scene are provided for human beings.
Structurally, the vision system is divided into a hardware part and a software part. The hardware part mainly comprises the vision machines, the projectors, the spherical screen and other accessories. The software part comprises the synchronous control software, the virtual scene and the correction fusion system.
Fig. 2 is a structural diagram of a view system of a port crane simulator provided in an embodiment of the present invention, and as shown in fig. 2, a view system of a port crane simulator includes: the system comprises a dome screen 2-1, a projector 2-2, a vision machine 2-3, a network switch, a video monitoring computer 2-5, a teacher management computer 2-6, a scene monitoring computer 2-7 and a main simulation computer 2-8.
The network switches include a first network switch 2-4-1 and a second network switch 2-4-2.
One end of the first network switch 2-4-1 is connected with one end of the vision machine 2-3 through a synchronous control system 2-9; the other end of the first network switch 2-4-1 is connected with the second network switch 2-4-2, the video monitoring computer 2-5, the teacher management computer 2-6, the scene monitoring computer 2-7 and the main simulation computer 2-8 in sequence.
The vision machine 2-3 includes a plurality of vision machines. The vision machines 2-3 correspond to the projectors 2-2 one by one.
The vision machine 2-3 is connected with the projector 2-2 through a virtual scene system 2-10 and a correction fusion system 2-11.
The projectors 2-2 are arranged above the spherical screen 2-1, and their horizontal layout is symmetrical about the center line of the spherical screen 2-1. The spherical screen uses a 9-channel dome projection system; it is 7.2 meters in diameter and is approximately a quarter sphere with its opening facing upward.
In practical application, the synchronous control system adopts a forced double-buffer swap synchronization method. One of the vision machines 2-3 is arbitrarily selected as the forcing-command initiator, which sends rendering messages to the vision machines 2-3 at a fixed period; a message feedback mechanism synchronizes the rendering progress of the different vision machines 2-3, thereby synchronizing the swap from each vision machine's temporary buffer to its display buffer.
The synchronous control system also adopts a dynamic interpolation compensation algorithm to pre-estimate the state of the simulation entity, so as to improve the simulation fidelity.
The logistics system of a container terminal comprises many types of equipment and facilities, so the reusability of the three-dimensional models has to be considered. The invention therefore establishes a three-dimensional model library for container terminal visual simulation. The library integrates multiple model formats and allows the three-dimensional models of the terminal to be browsed and inspected through a unified software interface. By building the library, model resources can be stored and assembled on a large scale and models can be reused, which reduces duplicated work in the development of similar vision systems, speeds up development and improves efficiency. The library contains the common three-dimensional models of a container logistics system and can be continuously refined and extended; Table 1 lists the models in the library.
Table 1: model list of the three-dimensional model library (the table content is provided as images in the original publication).
The invention adopts three-dimensional real-time rendering to generate a wharf virtual scene, dynamically loads three-dimensional models of wharf equipment, facilities, ships and the like, and can perform real-time simulation on different loading and unloading process flows and equipment operation based on the wharf loading and unloading process, and an exemplary interface of the method is shown in figure 3.
The invention adopts a 9-channel spherical screen projection system with the following design parameters: spherical screen radius 3.6 m; horizontal viewing angle -110° to 110°; vertical viewing angle -90° to 0°; screen gain 0.9. Fig. 4 is a three-dimensional effect diagram of the projection screen according to an embodiment of the invention.
The dome screen is a quarter sphere, the projectors are arranged above it, and the horizontal layout of the 9 projectors is symmetrical about the center line of the dome. Three views of the projector layout are shown in figs. 5-7.
Nine engineering projectors are used, based on single-chip DLP technology, with a resolution of 1920 × 1200 and a brightness of 6000 lumens. Table 2 gives the field-of-view and deflection angles of the 9 projectors, i.e. their installation attitude data.
Table 2 (units: degrees; the table content is provided as an image in the original publication).
Fig. 8 is a schematic diagram of the projector position calibration coordinate system according to an embodiment of the present invention. As shown in fig. 8, a coordinate system is established with a certain point of the motion platform as the origin (point a in fig. 8), 2.5 m above the ground, the direction facing the spherical screen as the positive y axis, the direction perpendicular to the y axis as the positive x axis, and the direction perpendicular to the xy plane and pointing upward as the positive z axis.
Table 3 is a projector position parameter table provided by the present invention, and the position coordinate parameters of 9 projectors are shown in table 3.
Table 3 (units: meters; the table content is provided as an image in the original publication).
By adopting the port crane simulator vision system provided by the invention, the operation condition of the port crane can be comprehensively presented, and the fidelity and the reliability of a projected image are improved.
Fig. 9 is a flowchart of a method for processing an image of a vision system according to the present invention, and as shown in fig. 9, the method for processing an image of a vision system includes:
step 901: and acquiring a dome screen projection image.
Step 902: performing geometric correction processing on the spherical screen projection image to obtain a corrected projection image and recording correction pixel points; the geometric correction processing is used to eliminate image distortion and image deformation.
Step 902 specifically comprises: establishing a spherical screen coordinate system with the center of the spherical screen as the origin, the line-of-sight direction as the Y axis, the direction perpendicular to the ground as the Z axis and the direction perpendicular to the YOZ plane as the X axis; selecting grid points on the spherical screen coordinate system according to a grid plate of the projector; measuring the grid points with a total station and calculating their coordinates; judging whether each coordinate is within a coordinate threshold range, and if so, recording the correction pixel point corresponding to the coordinate; if not, adjusting the position of the grid point.
The virtual scene picture rendered by the vision machine is planar, while the projection screen is spherical, so projecting the plane onto the sphere produces deformation and distortion. Various methods have been proposed in the prior art for the nonlinear correction of multi-channel projection, such as quadratic transformation with parametric surfaces, linear approximation based on a polar-coordinate grid with texture mapping, and automatic correction with bicubic surfaces and a camera. The correction method of the invention targets the dome vision system of a crane simulator and adopts manual calibration combined with calculation of the correction parameters, in order to reduce development time and complexity; the specific flow is shown in fig. 10.
Because the geometric nonlinear distortion of a shaped screen differs with the observation point, the calculated correction parameters vary greatly with the choice of coordinate system and coordinate origin. In the overall design of the crane simulator, the position of the human eye is placed at the center of the sphere to maximize fidelity and immersion; the correction method therefore selects the center of the spherical screen as the coordinate origin, the line-of-sight direction as the Y axis, the direction perpendicular to the ground as the Z axis, and the direction perpendicular to the YOZ plane as the X axis.
The method uses a common engineering projector that can display a standard projection grid of 9 points in the horizontal direction and 6 points in the vertical direction.
The method of the invention represents the nonlinear distortion surface with a cubic Non-Uniform Rational B-Spline (NURBS) surface given in the prior literature, because the shape of such a surface can be conveniently changed by adjusting its control points.
The method uses a high-precision total station to measure the adjusted grid points, calculates the coordinates of each point on the surface and then checks whether the error meets the requirement. If it does, the control points are recorded as parameters for the vision system implementation; otherwise manual correction continues.
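For illustration, the sketch below (Python) converts a total-station measurement into the dome coordinate system and tests it against the nominal 3.6 m radius. It is only a minimal sketch: the patent does not specify the measurement conventions or the tolerance, so the angle convention, the 0.01 m threshold and all function names are assumptions.

import math

# Hypothetical sketch of the grid-point check described above. Assumptions
# (not specified in the patent text): the total station is levelled at the
# dome center (the coordinate origin), it reports a horizontal angle, a
# vertical angle and a slope distance for each grid point, and the
# acceptance test compares the point's distance from the origin with the
# nominal dome radius of 3.6 m.

DOME_RADIUS_M = 3.6
TOLERANCE_M = 0.01          # assumed coordinate threshold

def polar_to_xyz(h_angle_deg, v_angle_deg, slope_dist_m):
    """Convert a total-station measurement into the dome coordinate system
    (Y = line of sight, Z = up, X completes the right-handed frame)."""
    h = math.radians(h_angle_deg)
    v = math.radians(v_angle_deg)
    horizontal = slope_dist_m * math.cos(v)
    return (horizontal * math.sin(h),       # X
            horizontal * math.cos(h),       # Y
            slope_dist_m * math.sin(v))     # Z

def check_grid_point(h_angle_deg, v_angle_deg, slope_dist_m):
    """Return (coords, ok): ok is True when the measured point lies on the
    dome within the assumed tolerance, so its pixel can be recorded."""
    x, y, z = polar_to_xyz(h_angle_deg, v_angle_deg, slope_dist_m)
    radial_error = abs(math.sqrt(x * x + y * y + z * z) - DOME_RADIUS_M)
    return (x, y, z), radial_error <= TOLERANCE_M

# Example: a point measured straight ahead, 30 degrees below eye level.
coords, ok = check_grid_point(0.0, -30.0, 3.598)
print(coords, "record pixel" if ok else "adjust grid point")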
Step 903: performing edge fusion processing on the corrected projection image to obtain a fused projection image and recording fusion pixel points; the edge blending process is used to adjust the overlap region brightness.
Step 903 specifically includes: performing edge fusion processing on the corrected projection image with a nonlinear function f(x), whose expression is given as a formula image in the original publication, to obtain a fused projection image and record fusion pixel points; wherein f(x) is the nonlinear function, x is the relative position of the pixel point in the overlapping region, a is a brightness adjustment coefficient and p is an influence factor; the parameters a and p are determined by dynamic debugging.
Because several channels are displayed with overlap, the brightness of adjacent projection overlap regions is several times higher than that of the surrounding areas. Edge fusion makes the combined output of the different projectors in an overlap region equal to that of a single projector: by adjusting the brightness of the overlap region, the color and brightness of the image transition smoothly, and brightness fusion is achieved. The key to the edge fusion problem is how to handle the excess brightness of the overlap region. For a flat or cylindrical screen, two-sided fusion is usually adopted and a linear function already gives a good result. The problem solved by the invention is edge fusion for multi-channel dome projection, whose complexity lies in the irregular shape of the fusion bands and in overlaps among several channels that cross on multiple sides; the edge fusion bands are shown in fig. 11.
The invention processes the brightness of the overlap region with a nonlinear function: each pixel of the overlap-region image is multiplied by the nonlinear function, so that after superposition the brightness of the pixels from different channels is essentially consistent with that of the non-overlapping regions. The expression of the nonlinear function f(x) is given as a formula image in the original publication, where x is the relative position of the pixel point in the overlap region, a is a brightness adjustment coefficient and p is an influence factor.
The fusion step is carried out after the nonlinear geometric correction; the parameters of the nonlinear function are determined by manual debugging and then used in the vision system implementation. The specific flow is shown in fig. 12.
Step 904: processing the spherical screen projection image according to the correction pixel points and the fusion pixel points.
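In the running system, step 904 amounts to applying the two sets of recorded pixels to every rendered frame. The sketch below (Python) assumes the correction pixels have been stored as a destination-to-source lookup and the fusion pixels as a per-pixel weight mask; the array names, shapes and the nearest-neighbour remap are illustrative assumptions, not details taken from the patent.

import numpy as np

# Hypothetical sketch of step 904: apply the recorded correction pixels
# (a precomputed warp lookup) and fusion pixels (a precomputed blend mask)
# to each rendered frame before it is sent to the projector.

def process_frame(rendered, warp_x, warp_y, blend_mask):
    """rendered: (H, W, 3) frame from the vision machine.
    warp_x, warp_y: (H, W) integer source coordinates recorded during
    geometric correction (destination pixel -> source pixel).
    blend_mask: (H, W) weights recorded during edge fusion."""
    warped = rendered[warp_y, warp_x]                  # nearest-neighbour remap
    return (warped * blend_mask[..., None]).astype(rendered.dtype)

# Minimal usage with identity maps (no warp, no attenuation):
H, W = 4, 6
frame = np.random.randint(0, 256, (H, W, 3), dtype=np.uint8)
yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
out = process_frame(frame, xx, yy, np.ones((H, W)))
assert np.array_equal(out, frame)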
To overcome the shortcomings of a single-channel projection scheme, building a simulator vision system with higher resolution and a wider field of view from multiple channels is the trend in this field. A multi-channel scheme, however, must solve not only the correction and fusion of the projected pictures but also the synchronization of the multiple vision machines.
The invention provides a forced double-buffer swap synchronization method based on message feedback, whose principle is shown in fig. 13:
First, one vision machine is arbitrarily selected as the forcing-command initiator. At a fixed period it sends a rendering message to every vision machine, and a message feedback mechanism synchronizes the rendering progress of the different vision machines, which in turn synchronizes the swap from each machine's temporary buffer to its display buffer. The initiator first sends the rendering command to all vision machines and then waits for their feedback messages; if one vision machine sends no feedback, the whole system keeps waiting. Only after receiving feedback that every vision machine has finished rendering does the initiator send the output message to all of them, which guarantees that the pictures of the vision system remain consistent.
The invention provides a dynamic interpolation compensation algorithm for estimating the state of the simulation entities. To cope with packet loss and network delay when entity state data are transmitted over the network, a state estimation model is built into each vision machine. Based on interpolation theory, an estimate of the lost data is computed from the historical data already received and supplied to the rendering of the next frame, which compensates for the missing simulation data and reduces the differences between channels. This greatly improves the multi-channel synchronization of the vision system, smooths the changes in the simulation data and improves simulation fidelity; the algorithm framework is shown in fig. 14.
The dynamic interpolation compensation algorithm provided by the invention is essentially a method of interpolating from the data already available, and the key question is how the interpolation is performed. The state estimation model interpolates the crane state parameters to obtain simulation data of a specified accuracy. Interpolation algorithms of different orders exist; in general, the higher the order, the more accurate the computed value, but the greater the computation time and complexity.
The first-order, second-order and third-order interpolation equations for, e.g., the x coordinate of the container crane mechanism position data are given as formula images in the original publication. In them, x_0, v_0 and a_0 are respectively the coordinate position, velocity and acceleration of the mechanism at time t_0; x_-1, v_-1 and a_-1 are the corresponding quantities at time t_-1; h is the simulation step size; and k is the number of steps advanced (recursed) from the last time.
According to multi-body system dynamics, a rigid body in three-dimensional space has 6 degrees of freedom, so the position and attitude of a crane mechanism in the virtual scene can be expressed by the six degrees of freedom x, y, z, h, p and r. Let {x(t), y(t), z(t), h(t), p(t), r(t)} be the position and attitude sent by the main simulation computer to the vision machine at time t, {v_x(t), v_y(t), v_z(t)} the velocity vector, {a_x(t), a_y(t), a_z(t)} the acceleration vector, {ω_h(t), ω_p(t), ω_r(t)} the angular velocity vector and {a_h(t), a_p(t), a_r(t)} the angular acceleration vector sent to the vision machine at time t. The formulas for computing the displacement, velocity and acceleration of the moving object with the first-order interpolation algorithm are given as a formula image in the original publication.
Following the pattern of the first-order interpolation algorithm, the second-order algorithm is easy to understand; the position of the simulated object after one simulation step is computed by a formula likewise given as an image in the original publication.
when the video machine judges that the data packet is lost or the network is delayed, the estimation data is calculated according to the received state data through a dynamic interpolation compensation algorithm and is sent to the video frame circulation until new simulation data is received.
Considering that the dynamics solver typically runs with a computation step of 11 milliseconds while the view frame period only reaches 33 milliseconds, the difference between the two step sizes can be exploited for further optimization: the motion delay compensation algorithm on the vision machine also uses a calculation step of 11 milliseconds and checks at the 3k-th node of the simulation step (k = 0, 1, ...) whether packet loss or delay has occurred; if so, it computes the data from the 3k-1 and 3k-2 nodes (or from more nodes), making full use of the result data already produced by the main simulation.
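A dead-reckoning style sketch of this compensation is given below (Python). The first-order update, in which each coordinate is advanced by its rate times the number of missed 11 ms steps, follows the description above; the data-structure fields, function names and example values are assumptions, since the patent publishes the exact interpolation formulas only as images.

from dataclasses import dataclass
from typing import Optional

# Minimal sketch of the dynamic interpolation compensation: when the packet
# for the current frame is missing or late, extrapolate from the last state
# that was received instead of freezing the picture.

SIM_STEP_S = 0.011   # dynamics solver step (11 ms)

@dataclass
class CraneState:
    pos: tuple        # (x, y, z) position of the mechanism
    vel: tuple        # (vx, vy, vz) velocity sent with the state packet
    att: tuple        # (h, p, r) attitude angles
    ang_vel: tuple    # (wh, wp, wr) angular velocity

def first_order_estimate(last: CraneState, k: int) -> CraneState:
    """Extrapolate the state k simulation steps past the last received
    packet (first order: each coordinate advances by its rate * k * h)."""
    dt = k * SIM_STEP_S
    pos = tuple(p + v * dt for p, v in zip(last.pos, last.vel))
    att = tuple(a + w * dt for a, w in zip(last.att, last.ang_vel))
    return CraneState(pos, last.vel, att, last.ang_vel)

def state_for_frame(received: Optional[CraneState], last_good: CraneState,
                    steps_since_last: int) -> CraneState:
    """Use fresh data when it arrived; otherwise fall back to the estimate."""
    return received if received is not None else first_order_estimate(
        last_good, steps_since_last)

# Example: the packet for the current frame was lost two steps ago.
last = CraneState((10.0, 2.0, 15.0), (0.5, 0.0, -0.2),
                  (0.0, 0.0, 0.0), (0.01, 0.0, 0.0))
print(state_for_frame(None, last, steps_since_last=2).pos)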
The invention uses dynamic interpolation compensation to estimate the simulation entity state parameters on the multiple vision machines. By tuning the parameters of the interpolation calculation, the algorithm's behavior can be adjusted so as to balance frame rate against resource consumption and, at the same time, to solve the view synchronization problems caused by network delay and data loss.
Fig. 15 is a structural diagram of the vision system image processing system according to an embodiment of the present invention. As shown in fig. 15, the vision system image processing system includes:
A dome screen projection image acquisition module 1501 is configured to acquire a dome screen projection image.
A geometric correction processing module 1502 is configured to perform geometric correction processing on the dome screen projection image to obtain a corrected projection image and record correction pixel points; the geometric correction processing is used to eliminate image distortion and image deformation.
The geometric correction processing module 1502 specifically includes: the spherical screen coordinate system establishing unit is used for establishing a spherical screen coordinate system by taking the spherical center of the spherical screen as an origin, the sight line direction as a Y axis, the direction vertical to the ground as a Z axis and the direction vertical to the YOZ plane as an X axis; the grid point selecting unit is used for selecting grid points on the spherical screen coordinate system according to a grid plate of the projector; a grid point coordinate calculation unit, configured to perform ranging on the grid point by using a total station, and calculate a coordinate corresponding to the grid point; the first judgment unit is used for judging whether the coordinates are in a coordinate threshold range or not to obtain a first judgment result; the recording unit is used for recording a correction pixel point corresponding to the coordinate if the first judgment result shows that the coordinate is in the coordinate threshold range; and the adjusting unit is used for adjusting the positions of the grid points if the first judgment result shows that the coordinates are not in the coordinate threshold range.
An edge blending processing module 1503, configured to perform edge blending processing on the corrected projection image to obtain a blended projection image and record a blending pixel point; the edge blending process is used to adjust the overlap region brightness.
The edge fusion processing module 1503 specifically includes: an edge fusion processing unit, configured to perform edge fusion processing on the corrected projection image with a nonlinear function f(x), whose expression is given as a formula image in the original publication, to obtain a fused projection image and record fusion pixel points; wherein f(x) is the nonlinear function, x is the relative position of the pixel point in the overlapping region, a is a brightness adjustment coefficient and p is an influence factor.
And the image processing module 1504 is used for processing the dome screen projection image according to the correction pixel points and the fusion pixel points.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (6)

1. A port crane simulator vision system, comprising: a dome screen, a projector, a vision machine, a network switch, a video monitoring computer, a teacher management computer, a scene monitoring computer and a main simulation computer;
the network switch comprises a first network switch and a second network switch;
one end of the first network switch is connected with one end of the vision machine through a synchronous control system; the other end of the first network switch is connected with the second network switch, the video monitoring computer, the teacher management computer, the scene monitoring computer and the main simulation computer in sequence;
the vision machine comprises a plurality of vision machines; the vision machines correspond to the projectors one by one;
the vision machine is connected with the projector through a virtual scene system and a correction fusion system;
the spherical screen is in a quarter-sphere form, the opening of the spherical screen is upward, the projector is arranged above the spherical screen, and the horizontal layout mode of the projector is symmetrical about the center line of the spherical screen; the spherical screen adopts a 9-channel spherical screen projection system.
2. The vision system as claimed in claim 1, wherein the synchronous control system adopts a forced double-buffer interactive synchronous method, arbitrarily selects one vision machine as a forced command initiator, the forced command initiator is responsible for sending rendering messages to each vision machine according to a fixed period, and a message feedback mechanism is set to realize synchronous control of rendering efficiency between different vision machines, thereby realizing synchronous control of the exchange process from each vision machine temporary buffer area to a display buffer area;
the synchronous control system also adopts a dynamic interpolation compensation algorithm to pre-estimate the state of the simulation entity, so as to improve the simulation fidelity.
3. A vision system according to claim 1, wherein the radius of the dome is 3.6 meters; the horizontal visual angle range is-110 to 110 degrees; the vertical visual angle range is-90 degrees to 0 degrees; the screen gain is 0.9.
4. A vision system as claimed in claim 3, wherein said projectors are 9 projectors, said projectors use single-chip digital light processing technology, said projectors have a resolution of 1920 x 1200 and a brightness of 6000 lumens.
5. A vision system image processing method applied to a port crane simulator vision system, characterized in that the port crane simulator vision system comprises: a dome screen, a projector, a vision machine, a network switch, a video monitoring computer, a teacher management computer, a scene monitoring computer and a main simulation computer; the network switch comprises a first network switch and a second network switch; one end of the first network switch is connected with one end of the vision machine through a synchronous control system; the other end of the first network switch is connected with the second network switch, the video monitoring computer, the teacher management computer, the scene monitoring computer and the main simulation computer in sequence; the vision machine comprises a plurality of vision machines; the vision machines correspond to the projectors one by one; the vision machine is connected with the projector through a virtual scene system and a correction fusion system; the projector is arranged above the spherical screen, and the horizontal layout of the projector is symmetrical about the center line of the spherical screen;
the vision system image processing method comprises the following steps:
acquiring a dome screen projection image;
performing geometric correction processing on the spherical screen projection image to obtain a corrected projection image and recording correction pixel points; the geometric correction processing is used for eliminating image distortion; the geometric correction processing is carried out on the spherical screen projection image, the corrected projection image is obtained, and correction pixel points are recorded, and the method specifically comprises the following steps: setting the position of the human eye as the spherical center of the spherical screen, selecting the spherical center of the spherical screen as the origin of a coordinate system, setting the sight line direction as a Y axis, setting the direction vertical to the ground as a Z axis, and setting the direction vertical to the YOZ plane as an X axis to establish a spherical screen coordinate system;
selecting grid points on the spherical screen coordinate system according to a grid plate of the projector;
measuring the distance of the grid point by using a total station, and calculating the corresponding coordinate of the grid point;
judging whether the coordinate is within a coordinate threshold range to obtain a first judgment result;
if the first judgment result shows that the coordinate is in the coordinate threshold range, recording a correction pixel point corresponding to the coordinate;
if the first judgment result shows that the coordinate is not in the coordinate threshold range, adjusting the position of a grid point;
performing edge fusion processing on the corrected projection image to obtain a fused projection image and recording fusion pixel points; the edge fusion processing is used for adjusting the brightness of the overlapping area;
and processing the spherical screen projection image according to the correction pixel points and the fusion pixel points.
6. The image processing method according to claim 5, wherein the edge blending processing is performed on the corrected projection image to obtain a blended projection image and record a blending pixel point, and specifically includes:
multiplying the brightness of each pixel point of the overlapped region image by a nonlinear function to ensure that the superposed brightness of the pixel points of different channels is consistent with the brightness of the non-overlapped region, determining a fused projection image and recording the fused pixel points;
the expression of the nonlinear function is given as a formula image in the original publication;
wherein, f (x) is a nonlinear function, and x is the position of the pixel point in the overlapping area; a is a brightness adjustment coefficient and p is an influence factor.
CN201810353441.8A 2018-04-19 2018-04-19 Port crane simulator vision system and image processing method and system thereof Active CN108495106B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810353441.8A CN108495106B (en) 2018-04-19 2018-04-19 Port crane simulator vision system and image processing method and system thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810353441.8A CN108495106B (en) 2018-04-19 2018-04-19 Port crane simulator vision system and image processing method and system thereof

Publications (2)

Publication Number Publication Date
CN108495106A (en) 2018-09-04
CN108495106B (en) 2020-09-29

Family

ID=63312692

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810353441.8A Active CN108495106B (en) 2018-04-19 2018-04-19 Port crane simulator vision system and image processing method and system thereof

Country Status (1)

Country Link
CN (1) CN108495106B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111292576A (en) * 2020-04-01 2020-06-16 长春天骄翔宇科技有限公司 Real image spherical screen display system of fighter plane simulator
CN114415460A (en) * 2021-12-30 2022-04-29 南京英科信釜航空技术研究院有限公司 Simulated visual spherical screen system and device with super-large field angle

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100561543C (en) * 2006-12-29 2009-11-18 大连海事大学 High quality marine simulator and development platform system thereof
CN100523993C (en) * 2007-02-13 2009-08-05 上海水晶石信息技术有限公司 Multiscreen playing automatically integrating pretreatment method suitable for irregular screen
CN101344707A (en) * 2008-01-09 2009-01-14 上海海事大学 Non-linear geometry correction and edge amalgamation method of automatic multi-projection apparatus
CN101419756A (en) * 2008-11-24 2009-04-29 常州基腾电气有限公司 Simulated operating system for port crane
CN103794103B (en) * 2014-02-26 2016-08-17 上海海事大学 A kind of portable two passage portal crane simulator construction methods
CN104836970B (en) * 2015-03-27 2018-06-15 北京联合大学 More projection fusion methods and system based on GPU real time video processings
CN206248999U (en) * 2016-12-06 2017-06-13 山东金东数字创意股份有限公司 A kind of hyperbolic spherical balls curtain immersion audio-video system
CN106782022A (en) * 2017-02-27 2017-05-31 武汉科码软件有限公司 Six degree of freedom bank bridge counterweight simulator

Also Published As

Publication number Publication date
CN108495106A (en) 2018-09-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant