CN118298120A - Novel view angle synthesizing method of remote sensing image based on data driving - Google Patents
Abstract
The invention discloses a data-driven novel view synthesis method for remote sensing images, comprising the following steps: acquiring multi-dimensional image data information, wherein the multi-dimensional image data information comprises a plurality of pieces of image information to be processed; preprocessing the multi-dimensional image data information to obtain initial image processing information; and processing the initial image processing information with an image processing model to obtain novel-view synthesized image information of the remote sensing image.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a device for synthesizing a new visual angle of a remote sensing image based on data driving.
Background
Space remote sensing offers a wide observation range, freedom from geographical constraints, rapid discovery of ground features, and access to observation information that is difficult to obtain by other means. The development of earth observation and remote sensing, and especially the development and launch of earth observation satellites, is therefore extremely important to every country. With the continuing advance of remote sensing technology, earth observation has shifted from the single-mode, single-phase observation of the past toward multi-mode, multi-phase, all-weather observation; multi-modal data opens brand-new potential for high-precision earth observation, as data of different modes have distinct characteristics and complementary advantages. The existing massive, multi-modal, multi-phase satellite imagery provides multi-view data for large-scale three-dimensional reconstruction of ground features and makes such reconstruction from multi-view satellite images feasible. At present, however, constrained by factors such as satellite observation angles and revisit periods, the observation of ground objects is often not comprehensive, and no multi-view joint satellite observation system spanning different satellites, different sensors, and different modes has yet been formed, so research on multi-angle three-dimensional modeling of large-scale typical stereoscopic scenes remains deficient and urgently needs to be explored.
Therefore, a data-driven method and device for synthesizing novel views of remote sensing images are provided, improving the efficiency and reliability of data-driven novel view synthesis for multi-dimensional remote sensing images and, in turn, enabling panoramic remote sensing observation of multi-position, multi-angle, three-dimensional stereoscopic scenes.
Disclosure of Invention
The invention aims to solve the technical problem of providing a data-driven remote sensing image new view angle synthesizing method which is beneficial to improving the efficiency and reliability of multi-dimensional data-driven remote sensing image new view angle synthesis, thereby realizing multi-position, multi-angle and three-dimensional panoramic remote sensing image observation of a three-dimensional scene.
In order to solve the above technical problems, a first aspect of the present invention discloses a method for synthesizing a new view angle of a remote sensing image based on data driving, the method comprising:
Acquiring multi-dimensional image data information; the multi-dimensional image data information comprises a plurality of image information to be processed;
preprocessing the multidimensional image data information to obtain initial image processing information;
and processing the initial image processing information by using the image processing model to obtain new view angle synthesized image information of the remote sensing image.
The second aspect of the embodiment of the invention discloses a new view angle synthesizing device of a remote sensing image based on data driving, which comprises the following components:
the acquisition module is used for acquiring the multidimensional image data information; the multi-dimensional image data information comprises a plurality of image information to be processed;
the first processing module is used for preprocessing the multi-dimensional image data information to obtain initial image processing information;
And the second processing module is used for processing the initial image processing information by utilizing the image processing model to obtain new view angle synthesized image information of the remote sensing image.
The third aspect of the invention discloses another data-driven remote sensing image new view angle synthesizing device, which comprises:
A memory storing executable program code;
A processor coupled to the memory;
the processor calls executable program codes stored in the memory to execute part or all of the steps in the data-driven remote sensing image new view angle synthesizing method disclosed in the first aspect of the embodiment of the invention.
The fourth aspect of the present invention discloses a computer readable storage medium, where the computer readable storage medium stores computer instructions, and when the computer instructions are called, the computer instructions are used to execute part or all of the steps in the method for synthesizing a new view angle of a remote sensing image based on data driving disclosed in the first aspect of the present invention.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a new view angle synthesizing method of a remote sensing image based on data driving, which is disclosed by the embodiment of the invention;
Fig. 2 is a schematic structural diagram of a new view angle synthesizing device based on data driving for remote sensing images according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of another apparatus for synthesizing new view angles of remote sensing images based on data driving according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an image processing model according to an embodiment of the present invention.
Detailed Description
In order to make the present invention better understood by those skilled in the art, the following description will clearly and completely describe the technical solutions in the embodiments of the present invention with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms first, second and the like in the description and in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, apparatus, article, or device that comprises a list of steps or elements is not limited to the list of steps or elements but may, in the alternative, include other steps or elements not expressly listed or inherent to such process, method, article, or device.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the invention. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The invention discloses a data-driven remote sensing image new view angle synthesizing method which is beneficial to improving the efficiency and reliability of multi-dimensional data-driven remote sensing image new view angle synthesis, thereby realizing multi-position, multi-angle and three-dimensional panoramic remote sensing image observation of a three-dimensional scene. The following will describe in detail.
Example 1
Referring to fig. 1, fig. 1 is a flow chart of a new view angle synthesizing method of a remote sensing image based on data driving according to an embodiment of the invention. The method for synthesizing the new view angle of the remote sensing image based on the data driving described in fig. 1 is applied to a management system, such as a local server or a cloud server for management, and the embodiment of the invention is not limited. As shown in fig. 1, the method for synthesizing new view angles of remote sensing images based on data driving may include the following operations:
101. and acquiring multi-dimensional image data information.
In the embodiment of the invention, the multi-dimensional image data information comprises a plurality of pieces of image information to be processed.
102. And preprocessing the multidimensional image data information to obtain initial image processing information.
103. And processing the initial image processing information by using the image processing model to obtain new view angle synthesized image information of the remote sensing image.
It should be noted that the image information to be processed includes multi-source urban image data collected by devices such as satellites, radars, and unmanned aerial vehicles. The multi-temporal satellite images include WorldView-3 panchromatic and 8-band visible and near-infrared (VNIR) images. The airborne lidar data provide ground-truth geometry, with an aggregate nominal pulse spacing (ANPS) of about 80 cm; the point cloud data are provided in ASCII text format and include {x, y, z, intensity, number of returns} and other fields. The lidar-derived training data include above-ground-level (AGL) ground-truth height images, paired parallax images (for challenge 2), and digital surface models (DSM), all provided in TIFF format.
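As one illustration of the ASCII point cloud format described above, the following is a minimal parser sketch. The field order and the function name `parse_lidar_ascii` are assumptions for illustration, not part of the patent.

```python
def parse_lidar_ascii(lines):
    """Parse ASCII lidar records assumed to be 'x y z intensity num_returns'."""
    points = []
    for line in lines:
        fields = line.split()
        if len(fields) < 5:
            continue  # skip malformed or truncated records
        x, y, z, intensity = map(float, fields[:4])
        num_returns = int(fields[4])
        points.append({"x": x, "y": y, "z": z,
                       "intensity": intensity, "num_returns": num_returns})
    return points
```

In practice the records may carry additional fields ("and other information" in the text); a real loader would keep any trailing columns rather than discard them.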
Compared with the prior art, the data-driven novel view synthesis method for remote sensing images overcomes the limitations of large-scale, multi-view three-dimensional reconstruction of stereoscopic scenes; it can support multi-position, multi-angle, three-dimensional panoramic observation of such scenes, and is applicable to multi-satellite networked earth observation data and observations at various angles (satellite roll angle from -40 to 40 degrees, pitch angle from -40 to 40 degrees), realizing novel view synthesis of typical stereoscopic scenes. Given multi-source satellite remote sensing data at 1-meter or finer resolution, the peak signal-to-noise ratio of the synthesized novel-view image of a typical earth observation scene is not less than 24.
It should be noted that, after the novel-view synthesized image information of the remote sensing image is obtained, a series of quantitative indexes, including peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and learned perceptual image patch similarity, is used to verify novel view synthesis performance by measuring the similarity between a held-out test-view image and the corresponding synthesized image. The synthesized images show a clear improvement on these evaluation indexes. In particular, the elevation maps generated by the method visually exhibit highly continuous and clear terrain features while also revealing subtle surface changes, such as building contours and terrain relief, further demonstrating the effectiveness of the image synthesis technique in improving the usability of remote sensing images. These quantitative data provide solid support for the image synthesis experiments and lay a foundation for applying the technique in other areas or under different conditions in the future. The method can restore scene details with high fidelity, construct large-scale multi-view three-dimensional reconstructions of stereoscopic scenes, and provide important support for multi-position, multi-angle, three-dimensional panoramic observation.
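The peak signal-to-noise ratio used in this performance verification has a standard definition, sketched below in Python with NumPy. This is not code from the patent; the peak value of 255 assumes 8-bit imagery.

```python
import numpy as np

def psnr(reference, synthesized, max_val=255.0):
    """Peak signal-to-noise ratio between a held-out view and its synthesis."""
    mse = np.mean((reference.astype(np.float64) -
                   synthesized.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Under this definition, the patent's stated target of "not less than 24" corresponds to a mean squared error no larger than roughly 259 on an 8-bit scale.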
Therefore, the implementation of the data-driven remote sensing image new view angle synthesizing method described by the embodiment of the invention is beneficial to improving the efficiency and reliability of multi-dimensional data-driven remote sensing image new view angle synthesis, and further realizes multi-position, multi-angle and three-dimensional panoramic remote sensing image observation of a three-dimensional scene.
In an optional embodiment, the preprocessing the multi-dimensional image data information to obtain initial image processing information includes:
Correcting the multi-dimensional image data information to obtain multi-dimensional corrected image information; the multi-dimensional correction image information comprises a plurality of pieces of target correction image information;
and carrying out registration processing on the multidimensional correction image information to obtain initial image processing information.
It should be noted that, the above-mentioned correction processing for the multidimensional image data information may be a process of correcting distortion, distortion or distortion that may exist in the image. In remote sensing images, the images may be affected by various forms of distortion due to factors such as different photographing devices, angles, and atmospheric conditions. In order to ensure the accuracy of subsequent processing, each image is corrected, and possible geometric deformation is eliminated, so that the image is more true and accurate in geometric structure.
It should be noted that the registration processing of the multi-dimensional corrected image information spatially aligns the plurality of pieces of image information to be processed so that they share the same coordinate system and scale and can be seamlessly joined in subsequent processing. For data from different sources, image registration matches different images by searching for specific corresponding points or feature points, making the images spatially consistent. This not only helps improve the overall accuracy of the data, but also ensures that the model can correctly understand and process data from various sources during training and inference.
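Under the simplifying assumption that corresponding feature points have already been matched, the alignment step can be sketched as a least-squares affine estimation; the function name `estimate_affine` and the affine (rather than projective) model are illustrative choices, not specified by the patent.

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares 2x3 affine transform mapping src control points to dst."""
    src = np.asarray(src_pts, dtype=np.float64)
    dst = np.asarray(dst_pts, dtype=np.float64)
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])        # n x 3 design matrix [x y 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)  # solve A @ M ~= dst
    return M.T                                    # 2 x 3 affine matrix
```

Given the estimated transform, one image can then be resampled into the other's coordinate frame so both share a common grid.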
Therefore, the implementation of the data-driven remote sensing image new view angle synthesizing method described by the embodiment of the invention is beneficial to improving the efficiency and reliability of multi-dimensional data-driven remote sensing image new view angle synthesis, and further realizes multi-position, multi-angle and three-dimensional panoramic remote sensing image observation of a three-dimensional scene.
In another alternative embodiment, the correcting process is performed on the multi-dimensional image data information to obtain multi-dimensional corrected image information, including:
For any image information to be processed in the multi-dimensional image data information, performing image filtering adjustment processing on the image information to be processed to obtain target adjustment image information corresponding to the image information to be processed;
And correcting the target adjustment image information to obtain target correction image information corresponding to the image information to be processed.
Therefore, the implementation of the data-driven remote sensing image new view angle synthesizing method described by the embodiment of the invention is beneficial to improving the efficiency and reliability of multi-dimensional data-driven remote sensing image new view angle synthesis, and further realizes multi-position, multi-angle and three-dimensional panoramic remote sensing image observation of a three-dimensional scene.
In yet another optional embodiment, performing image filtering adjustment processing on the image information to be processed to obtain target adjustment image information corresponding to the image information to be processed, including:
sequentially performing two filtering passes on the image information to be processed, based on the first filtering frequency and the second filtering frequency, to obtain target filtered image information corresponding to the image information to be processed; the first filtering frequency is greater than the second filtering frequency;
performing contrast enhancement processing on the target filtered image information to obtain first enhanced image information corresponding to the image information to be processed;
And carrying out brightness optimization adjustment processing on the first enhanced image information to obtain target adjustment image information corresponding to the image information to be processed.
It should be noted that the first filtering frequency and the second filtering frequency are both center frequencies. Further, the first filtering frequency is not less than 200 Hz, and the second filtering frequency is not greater than 100 Hz.
It should be noted that the contrast enhancement processing of the target filtered image information adjusts the contrast of the image so that details in the image become clearer.
It should be noted that the brightness optimization adjustment processing of the first enhanced image information optimizes the overall brightness of the image to make it softer; the embodiment of the present invention is not limited in this regard.
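The two filtering passes, contrast enhancement, and brightness adjustment described above can be sketched as follows. The use of a mean filter as a stand-in for the two frequency-selective passes, the kernel sizes, and the `contrast`/`brightness` parameter values are all assumptions for illustration.

```python
import numpy as np

def mean_filter(img, k=3):
    """k x k mean filter with edge replication (stand-in for one filtering pass)."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def adjust_image(img, contrast=1.2, brightness=10.0):
    """Two smoothing passes, contrast stretch about the mean, brightness shift."""
    filtered = mean_filter(mean_filter(img, k=5), k=3)  # two sequential passes
    mean = filtered.mean()
    enhanced = (filtered - mean) * contrast + mean       # contrast enhancement
    return np.clip(enhanced + brightness, 0.0, 255.0)    # brightness adjustment
```

A production pipeline would more likely use frequency-domain band-pass filters matching the stated center frequencies, followed by histogram-based contrast enhancement; the sketch only mirrors the order of operations.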
Therefore, the implementation of the data-driven remote sensing image new view angle synthesizing method described by the embodiment of the invention is beneficial to improving the efficiency and reliability of multi-dimensional data-driven remote sensing image new view angle synthesis, and further realizes multi-position, multi-angle and three-dimensional panoramic remote sensing image observation of a three-dimensional scene.
In yet another optional embodiment, performing correction processing on the target adjustment image information to obtain target correction image information corresponding to the image information to be processed, including:
Performing distortion type detection processing on the target adjustment image information to obtain image distortion type information corresponding to the image information to be processed;
performing parameter estimation processing on target adjustment image information based on the image distortion type information to obtain estimated parameter information corresponding to the image information to be processed;
and remapping the pixel points in the target adjustment image information based on the estimated parameter information to obtain target correction image information corresponding to the image information to be processed.
It should be noted that the distortion type detection processing of the target adjustment image information determines the distortion type of the image information to be processed, such as radial distortion or tangential distortion; it may be implemented with functions in the OpenCV library or with a neural network, and the embodiment of the present invention is not limited in this regard.
It should be noted that the parameter estimation processing of the target adjustment image information based on the image distortion type information estimates, according to the distortion type, the distortion parameters at feature points, such as corner points and edges, of the distorted portion of the image. It may be implemented with the Levenberg-Marquardt (L-M) algorithm, with functions in OpenCV, or with a neural network; the embodiment of the present invention is not limited in this regard.
It should be noted that the remapping processing of the pixel points in the target adjustment image information based on the estimated parameter information uses a mapping function to remap each pixel point in the image so as to eliminate the distortion. Further, during remapping, the pixel value of each pixel point is estimated by an interpolation method; this step may also be implemented directly with a deep learning algorithm, and the embodiment of the present invention is not limited in this regard.
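The final remapping step, with pixel values estimated by interpolation, can be sketched as a bilinear resampler. In practice this is what OpenCV's `remap` function does; the NumPy version below is only an illustrative single-channel equivalent.

```python
import numpy as np

def remap_bilinear(img, map_x, map_y):
    """Resample img at coordinates (map_x, map_y) with bilinear interpolation."""
    h, w = img.shape
    x0 = np.clip(np.floor(map_x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(map_y).astype(int), 0, h - 2)
    fx = np.clip(map_x - x0, 0.0, 1.0)  # fractional offsets within the cell
    fy = np.clip(map_y - y0, 0.0, 1.0)
    top = img[y0, x0] * (1 - fx) + img[y0, x0 + 1] * fx
    bot = img[y0 + 1, x0] * (1 - fx) + img[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy
```

The undistortion maps `map_x`/`map_y` would be produced from the estimated distortion parameters; generating them is the part the patent leaves to the chosen estimation method.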
Therefore, the implementation of the data-driven remote sensing image new view angle synthesizing method described by the embodiment of the invention is beneficial to improving the efficiency and reliability of multi-dimensional data-driven remote sensing image new view angle synthesis, and further realizes multi-position, multi-angle and three-dimensional panoramic remote sensing image observation of a three-dimensional scene.
In an alternative embodiment, as shown in fig. 4, the image processing model includes a first connection module, a second connection module, a third connection module, a fourth connection module, a first multi-sensing module, a second multi-sensing module, a third multi-sensing module, a fourth multi-sensing module, a fifth multi-sensing module, a sixth multi-sensing module, a seventh multi-sensing module, an eighth multi-sensing module, a first activation module, a second activation module, a third activation module, a fourth activation module, a fifth activation module, a first single-sensing module, a second single-sensing module, a third single-sensing module, a fourth single-sensing module, a fifth single-sensing module, and a fusion module; wherein,
The input end of the first connection module, the input end of the fifth single-sensing module, the input end of the fourth connection module, the input end of the third connection module, and the input end of the first single-sensing module are all connected with the first model input of the image processing model; the output end of the first connection module is connected with the input end of the first multi-sensing module; the output end of the first multi-sensing module is connected with the input end of the second multi-sensing module; the output end of the second multi-sensing module is connected with the input end of the third multi-sensing module; the output end of the third multi-sensing module is connected with the input end of the fourth multi-sensing module; the output end of the fourth multi-sensing module is connected with the input end of the fifth multi-sensing module; the output end of the fifth multi-sensing module is connected with the input end of the sixth multi-sensing module; the output end of the sixth multi-sensing module is connected with the input end of the seventh multi-sensing module; the output end of the seventh multi-sensing module is connected with the input end of the eighth multi-sensing module and the input end of the second connection module; the output end of the second connection module is connected with the input end of the first activation module, and the output end of the first activation module is connected respectively with the input end of the fusion module, the input end of the first single-sensing module, the input end of the third connection module, the input end of the fourth connection module, and the input end of the fifth single-sensing module; the output end of the first single-sensing module is connected with the input end of the second activation module; the output end of the second activation module is connected with the input end of the fusion module; the output end of the third connection module is connected with the input end of the second single-sensing module; the output end of the second single-sensing module is connected with the input end of the third activation module; the output end of the third activation module is connected with the input end of the fusion module; the output end of the fourth connection module is connected with the input end of the third single-sensing module; the output end of the third single-sensing module is connected with the input end of the fourth single-sensing module; the output end of the fourth single-sensing module is connected with the input end of the fourth activation module; the output end of the fourth activation module is connected with the input end of the fusion module; the output end of the fifth single-sensing module is connected with the input end of the fifth activation module; the output end of the fifth activation module is connected with the input end of the fusion module; and the output end of the fusion module is connected with the first model output of the image processing model.
It should be noted that, the first connection module, the second connection module, the third connection module, and the fourth connection module are configured based on a full connection layer, and the embodiment of the invention is not limited.
It should be noted that the first multi-sensing module, the second multi-sensing module, the third multi-sensing module, the fourth multi-sensing module, the fifth multi-sensing module, the sixth multi-sensing module, the seventh multi-sensing module, and the eighth multi-sensing module are constructed based on multi-layer perceptrons, each with between 256 and 512 neurons; the embodiment of the invention is not limited in this regard.
It should be noted that the first single-sensing module, the second single-sensing module, the third single-sensing module, the fourth single-sensing module, and the fifth single-sensing module are constructed based on single-layer perceptrons; the embodiment of the invention is not limited in this regard.
It should be noted that, the above fusion module is constructed based on matrix summation, and the embodiment of the present invention is not limited.
It should be noted that the first activation module and the second activation module are constructed based on softplus functions, which is not limited by the embodiment of the present invention.
It should be noted that the third activation module, the fourth activation module, and the fifth activation module are constructed based on a sigmoid function, and the embodiment of the present invention is not limited.
Therefore, the implementation of the data-driven remote sensing image new view angle synthesizing method described by the embodiment of the invention is beneficial to improving the efficiency and reliability of multi-dimensional data-driven remote sensing image new view angle synthesis, and further realizes multi-position, multi-angle and three-dimensional panoramic remote sensing image observation of a three-dimensional scene.
In another alternative embodiment, the image processing model is derived based on the steps of:
Acquiring initial training image information; the initial training image information comprises A pieces of initial training sample information;
randomly selecting B pieces of initial training sample information from the initial training image information to obtain target training image information, B being a positive integer not greater than A/2;
training the initial processing model by using the target training image information and the first training evaluation model to obtain an intermediate processing model and first model training information.
The first training evaluation model is as follows:
wherein L1 is the first model evaluation value in the first model training information; N is the number of target training images in the target training image information; a is the first model training coefficient; b is the second model training coefficient; x i is the image label value corresponding to the ith target training image; and x̂ i is the label estimation value corresponding to the ith target training image;
judging whether the first model training information meets the first model training condition or not to obtain a first model training judgment result;
when the first model training judgment result is negative, calculating the first model training information by using a second training evaluation model to obtain second model training information;
Wherein the second training evaluation model is:
wherein L2 is a second model evaluation value corresponding to the second model training information; c is a third model training coefficient; x is the number of positive samples of the target training image in the first model training information; y is the number of negative samples of the target training image in the first model training information;
Judging whether the second model training information meets the second model training condition or not to obtain a second model training judgment result;
When the second model training judgment result is negative, updating the parameters of the initial processing model by using the intermediate processing model, and triggering execution of the step of determining the target training image information based on the initial training image information;
When the second model training judgment result is yes, determining the intermediate processing model as an image processing model;
And when the first model training judgment result is yes, determining the intermediate processing model as an image processing model.
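The two-stage training flow above can be sketched as the following control loop. This is a minimal illustration only: the concrete L1 and L2 formulas are given by the patent's equations (not reproduced here), so the training and evaluation callables are hypothetical placeholders, and the convergence tolerance and round limit are assumptions.

```python
import random

def train_image_processing_model(initial_samples, train_step, eval_l1, eval_l2,
                                 l2_threshold, max_rounds=100, tol=1e-4):
    """Sketch of the two-stage training flow: train on a randomly drawn subset
    of at most half the samples, check convergence of L1 against its history,
    fall back to an L2 threshold check, and re-sample when neither condition
    is met. All callables here are placeholders for the patent's models."""
    model = None
    history_l1 = None
    for _ in range(max_rounds):
        # Randomly select B <= A/2 pieces of target training sample information.
        b = len(initial_samples) // 2
        target_samples = random.sample(initial_samples, b)
        # Train to obtain the intermediate model and first model training info.
        model, l1 = train_step(model, target_samples, eval_l1)
        # First model training condition: L1 converges against its history.
        if history_l1 is not None and abs(l1 - history_l1) < tol:
            return model  # intermediate model accepted as image processing model
        history_l1 = l1
        # Second model training condition: L2 reaches the training threshold.
        if eval_l2(target_samples) >= l2_threshold:
            return model
        # Otherwise update parameters and re-draw target training samples.
    return model
```

A design note: checking the cheap convergence test first and the threshold test second mirrors the patent's claim that the two-stage judgment both safeguards training quality and shortens training time.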
It should be noted that performing the first training condition analysis with the first training evaluation model before performing the second training condition analysis with the second training evaluation model can ensure the training quality of the model while accelerating the model training speed.
It should be noted that the initial processing model is consistent with the model architecture of the image processing model.
The first model training condition is convergence of the first model evaluation value relative to the historical first model evaluation values.
The second model training condition is that the second model evaluation value is equal to or greater than a training threshold value.
It should be noted that, the first model training coefficient, the second model training coefficient, the third model training coefficient and the training threshold may be set by a user, or may be a default value given by a system, which is not limited in the embodiment of the present invention.
The above A is a positive even number not less than 1000.
It should be noted that, the image tag value and the tag estimation value may be set by a user or be given by a system when the initial training sample information is marked, which is not limited by the embodiment of the present invention.
It should be noted that, the number of positive samples in the initial training sample information is greater than the number of negative samples, and the number of the positive samples and the negative samples is determined when the initial training image information is acquired, which is not limited in the embodiment of the present invention.
Therefore, the implementation of the data-driven remote sensing image new view angle synthesizing method described by the embodiment of the invention is beneficial to improving the efficiency and reliability of multi-dimensional data-driven remote sensing image new view angle synthesis, and further realizes multi-position, multi-angle and three-dimensional panoramic remote sensing image observation of a three-dimensional scene.
Example Two
Referring to fig. 2, fig. 2 is a schematic structural diagram of a new view angle synthesizing device for remote sensing images based on data driving according to an embodiment of the present invention. The device described in fig. 2 can be applied to a warehouse management system, such as a local server or a cloud server for new view angle synthesis management of remote sensing images based on data driving in warehouse logistics, and the embodiment of the invention is not limited. As shown in fig. 2, the apparatus may include:
An acquisition module 201, configured to acquire multi-dimensional image data information; the multi-dimensional image data information comprises a plurality of image information to be processed;
a first processing module 202, configured to perform preprocessing on the multi-dimensional image data information to obtain initial image processing information;
The second processing module 203 is configured to process the initial image processing information by using the image processing model, so as to obtain new view angle composite image information of the remote sensing image.
Therefore, implementing the new view angle synthesizing device of the remote sensing image based on data driving described in fig. 2 is beneficial to improving the efficiency and reliability of synthesizing the new view angle of the remote sensing image based on data driving in multiple dimensions, thereby realizing multi-position, multi-angle and three-dimensional panoramic remote sensing image observation of a three-dimensional scene.
In another alternative embodiment, as shown in fig. 2, the first processing module 202 performs preprocessing on the multi-dimensional image data information to obtain initial image processing information, including:
Correcting the multi-dimensional image data information to obtain multi-dimensional corrected image information; the multi-dimensional correction image information comprises a plurality of pieces of target correction image information;
and carrying out registration processing on the multidimensional correction image information to obtain initial image processing information.
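The correction-then-registration preprocessing above can be sketched as follows. The patent does not fix a registration algorithm, so phase correlation is used here purely as one common illustrative choice, and the integer-shift model is an assumption.

```python
import numpy as np

def register_translation(reference, moving):
    """Estimate the integer (dy, dx) shift aligning `moving` to `reference`
    via phase correlation. Illustrative only: the patent does not specify
    which registration method is used."""
    f_ref = np.fft.fft2(reference)
    f_mov = np.fft.fft2(moving)
    cross = f_ref * np.conj(f_mov)
    cross /= np.abs(cross) + 1e-12            # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak coordinates into signed shifts.
    if dy > reference.shape[0] // 2:
        dy -= reference.shape[0]
    if dx > reference.shape[1] // 2:
        dx -= reference.shape[1]
    return dy, dx

def register_all(corrected_images):
    """Register every target correction image to the first one, standing in
    for the registration step that yields the initial image processing info."""
    ref = corrected_images[0]
    out = [ref]
    for img in corrected_images[1:]:
        dy, dx = register_translation(ref, img)
        out.append(np.roll(img, (dy, dx), axis=(0, 1)))
    return out
```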
Therefore, implementing the new view angle synthesizing device of the remote sensing image based on data driving described in fig. 2 is beneficial to improving the efficiency and reliability of synthesizing the new view angle of the remote sensing image based on data driving in multiple dimensions, thereby realizing multi-position, multi-angle and three-dimensional panoramic remote sensing image observation of a three-dimensional scene.
In yet another alternative embodiment, as shown in fig. 2, the first processing module 202 performs correction processing on the multi-dimensional image data information to obtain multi-dimensional corrected image information, including:
For any image information to be processed in the multi-dimensional image data information, performing image filtering adjustment processing on the image information to be processed to obtain target adjustment image information corresponding to the image information to be processed;
And correcting the target adjustment image information to obtain target correction image information corresponding to the image information to be processed.
Therefore, implementing the new view angle synthesizing device of the remote sensing image based on data driving described in fig. 2 is beneficial to improving the efficiency and reliability of synthesizing the new view angle of the remote sensing image based on data driving in multiple dimensions, thereby realizing multi-position, multi-angle and three-dimensional panoramic remote sensing image observation of a three-dimensional scene.
In yet another alternative embodiment, as shown in fig. 2, the first processing module 202 performs an image filtering adjustment process on the image information to be processed to obtain target adjustment image information corresponding to the image information to be processed, including:
sequentially performing two filtering processes on the image information to be processed based on a first filtering frequency and a second filtering frequency to obtain target filtering image information corresponding to the image information to be processed; the first filtering frequency is greater than the second filtering frequency;
performing contrast enhancement processing on the target filtered image information to obtain first enhanced image information corresponding to the image information to be processed;
And carrying out brightness optimization adjustment processing on the first enhanced image information to obtain target adjustment image information corresponding to the image information to be processed.
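The filter-enhance-adjust chain above can be sketched as below. The patent names neither the filter type nor the enhancement method, so a box filter, a min-max contrast stretch, and a gamma brightness adjustment are all illustrative assumptions; "higher filtering frequency" is read here as a smaller smoothing window.

```python
import numpy as np

def box_filter(img, size):
    """Separable mean filter standing in for each unspecified filtering pass."""
    kernel = np.ones(size) / size
    out = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, out)

def adjust_image(img, size_hi=3, size_lo=5, gamma=0.9):
    """Two sequential filtering passes (the first at the higher filtering
    frequency, i.e. a smaller window), then contrast enhancement, then
    brightness optimization. All parameter values are illustrative."""
    x = box_filter(img.astype(float), size_hi)   # first filtering frequency
    x = box_filter(x, size_lo)                   # second filtering frequency
    lo, hi = x.min(), x.max()
    x = (x - lo) / (hi - lo + 1e-12)             # contrast enhancement (stretch)
    return x ** gamma                            # brightness adjustment (gamma)
```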
Therefore, implementing the new view angle synthesizing device of the remote sensing image based on data driving described in fig. 2 is beneficial to improving the efficiency and reliability of synthesizing the new view angle of the remote sensing image based on data driving in multiple dimensions, thereby realizing multi-position, multi-angle and three-dimensional panoramic remote sensing image observation of a three-dimensional scene.
In yet another alternative embodiment, as shown in fig. 2, the first processing module 202 performs correction processing on the target adjustment image information to obtain target correction image information corresponding to the image information to be processed, including:
Performing distortion type detection processing on the target adjustment image information to obtain image distortion type information corresponding to the image information to be processed;
performing parameter estimation processing on target adjustment image information based on the image distortion type information to obtain estimated parameter information corresponding to the image information to be processed;
and remapping the pixel points in the target adjustment image information based on the estimated parameter information to obtain target correction image information corresponding to the image information to be processed.
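The remapping step above can be sketched with a single-coefficient radial distortion model. This is an assumption: the patent's distortion-type detection and parameter estimation steps are not reproduced, so the coefficient `k1` simply plays the role of the estimated parameter information, and nearest-neighbour sampling is used for brevity.

```python
import numpy as np

def undistort_radial(img, k1):
    """Remap pixel points assuming one radial distortion coefficient k1
    (a stand-in for the estimated parameter information)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Normalized coordinates relative to the image centre.
    ny, nx = (yy - cy) / cy, (xx - cx) / cx
    r2 = nx ** 2 + ny ** 2
    # Source coordinates in the distorted image for each target pixel,
    # clipped to the image bounds and sampled nearest-neighbour.
    sy = np.clip((ny * (1 + k1 * r2)) * cy + cy, 0, h - 1).round().astype(int)
    sx = np.clip((nx * (1 + k1 * r2)) * cx + cx, 0, w - 1).round().astype(int)
    return img[sy, sx]
```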
Therefore, implementing the new view angle synthesizing device of the remote sensing image based on data driving described in fig. 2 is beneficial to improving the efficiency and reliability of synthesizing the new view angle of the remote sensing image based on data driving in multiple dimensions, thereby realizing multi-position, multi-angle and three-dimensional panoramic remote sensing image observation of a three-dimensional scene.
In yet another alternative embodiment, as shown in fig. 2, the image processing model includes a first connection module, a second connection module, a third connection module, a fourth connection module, a first multi-sensing module, a second multi-sensing module, a third multi-sensing module, a fourth multi-sensing module, a fifth multi-sensing module, a sixth multi-sensing module, a seventh multi-sensing module, an eighth multi-sensing module, a first activation module, a second activation module, a third activation module, a fourth activation module, a fifth activation module, a first single-sensing module, a second single-sensing module, a third single-sensing module, a fourth single-sensing module, a fifth single-sensing module, and a fusion module; wherein,
The input end of the first connection module, the input end of the fifth single-sensing module, the input end of the fourth connection module, the input end of the third connection module, and the input end of the first single-sensing module are all connected with the first model input of the image processing model; the output end of the first connection module is connected with the input end of the first multi-sensing module; the output end of the first multi-sensing module is connected with the input end of the second multi-sensing module; the output end of the second multi-sensing module is connected with the input end of the third multi-sensing module; the output end of the third multi-sensing module is connected with the input end of the fourth multi-sensing module; the output end of the fourth multi-sensing module is connected with the input end of the fifth multi-sensing module; the output end of the fifth multi-sensing module is connected with the input end of the sixth multi-sensing module; the output end of the sixth multi-sensing module is connected with the input end of the seventh multi-sensing module; the output end of the seventh multi-sensing module is connected with the input end of the eighth multi-sensing module and the input end of the second connection module; the output end of the second connection module is connected with the input end of the first activation module; the output end of the first activation module is respectively connected with the input end of the fusion module, the input end of the first single-sensing module, the input end of the third connection module, the input end of the fourth connection module, and the input end of the fifth single-sensing module; the output end of the first single-sensing module is connected with the input end of the second activation module; the output end of the second activation module is connected with the input end of the fusion module; the output end of the third connection module is connected with the input end of the second single-sensing module; the output end of the second single-sensing module is connected with the input end of the third activation module; the output end of the third activation module is connected with the input end of the fusion module; the output end of the fourth connection module is connected with the input end of the third single-sensing module; the output end of the third single-sensing module is connected with the input end of the fourth single-sensing module; the output end of the fourth single-sensing module is connected with the input end of the fourth activation module; the output end of the fourth activation module is connected with the input end of the fusion module; the output end of the fifth single-sensing module is connected with the input end of the fifth activation module; the output end of the fifth activation module is connected with the input end of the fusion module; the output end of the fusion module is connected with the first model output of the image processing model.
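A forward pass under one possible reading of this wiring can be sketched as follows: the eight multi-sensing modules form a stacked multi-layer-perceptron backbone, the connection modules concatenate the model input back in as skip connections, the five single-layer heads use the stated softplus/sigmoid activations, and the fusion module sums the head outputs. The layer widths, input/output sizes, and the simplification of the skip topology are all illustrative assumptions, not the patent's exact wiring.

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_layer(rng, n_in, n_out):
    """One affine layer with randomly initialised weights (sketch only)."""
    w = rng.normal(scale=0.1, size=(n_in, n_out))
    b = np.zeros(n_out)
    return lambda x: x @ w + b

def build_model(rng, d_in=6, width=256, d_out=4):
    """Simplified image processing model: an 8-block backbone, an input skip
    concatenation, five heads (two softplus, three sigmoid, per the patent's
    activation modules), and summation as the fusion module."""
    backbone = [make_layer(rng, d_in if i == 0 else width, width)
                for i in range(8)]
    heads = [make_layer(rng, width + d_in, d_out) for _ in range(5)]
    acts = [softplus, softplus, sigmoid, sigmoid, sigmoid]

    def forward(x):
        h = x
        for layer in backbone:                     # multi-sensing modules
            h = np.maximum(layer(h), 0.0)
        h_skip = np.concatenate([h, x], axis=-1)   # connection module (concat)
        outs = [act(head(h_skip)) for head, act in zip(heads, acts)]
        return np.sum(outs, axis=0)                # fusion module: matrix summation
    return forward
```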
Therefore, implementing the new view angle synthesizing device of the remote sensing image based on data driving described in fig. 2 is beneficial to improving the efficiency and reliability of synthesizing the new view angle of the remote sensing image based on data driving in multiple dimensions, thereby realizing multi-position, multi-angle and three-dimensional panoramic remote sensing image observation of a three-dimensional scene.
In yet another alternative embodiment, as shown in fig. 2, the image processing model is obtained by the second processing module 203 performing the following steps:
Acquiring initial training image information; the initial training image information comprises A pieces of initial training sample information;
Randomly selecting B pieces of initial training sample information from the initial training image information to obtain target training image information; b is a positive integer not greater than A/2;
Training the initial processing model by utilizing the target training image information and the first training evaluation model to obtain an intermediate processing model and first model training information;
The first training evaluation model is as follows:
wherein L1 is the first model evaluation value in the first model training information; N is the number of target training images in the target training image information; a is the first model training coefficient; b is the second model training coefficient; x i is the image label value corresponding to the ith target training image; and x̂ i is the label estimation value corresponding to the ith target training image;
judging whether the first model training information meets the first model training condition or not to obtain a first model training judgment result;
when the first model training judgment result is negative, calculating the first model training information by using a second training evaluation model to obtain second model training information;
Wherein the second training evaluation model is:
wherein L2 is a second model evaluation value corresponding to the second model training information; c is a third model training coefficient; x is the number of positive samples of the target training image in the first model training information; y is the number of negative samples of the target training image in the first model training information;
Judging whether the second model training information meets the second model training condition or not to obtain a second model training judgment result;
When the second model training judgment result is negative, updating the parameters of the initial processing model by using the intermediate processing model, and triggering execution of the step of determining the target training image information based on the initial training image information;
When the second model training judgment result is yes, determining the intermediate processing model as an image processing model;
And when the first model training judgment result is yes, determining the intermediate processing model as an image processing model.
Therefore, implementing the new view angle synthesizing device of the remote sensing image based on data driving described in fig. 2 is beneficial to improving the efficiency and reliability of synthesizing the new view angle of the remote sensing image based on data driving in multiple dimensions, thereby realizing multi-position, multi-angle and three-dimensional panoramic remote sensing image observation of a three-dimensional scene.
Example Three
Referring to fig. 3, fig. 3 is a schematic structural diagram of another new view angle synthesizing device based on data driving for remote sensing images according to an embodiment of the present invention. The device described in fig. 3 can be applied to a warehouse management system, such as a local server or a cloud server for new view angle synthesis management of remote sensing images based on data driving in warehouse logistics, and the embodiment of the invention is not limited. As shown in fig. 3, the apparatus may include:
a memory 301 storing executable program code;
a processor 302 coupled with the memory 301;
the processor 302 invokes the executable program code stored in the memory 301 for performing the steps in the data-driven remote sensing image new view angle synthesizing method described in the first embodiment.
Example Four
The embodiment of the invention discloses a computer readable storage medium which stores a computer program for electronic data exchange, wherein the computer program enables a computer to execute the steps in the remote sensing image new view angle synthesizing method based on data driving.
Example Five
The embodiment of the invention discloses a computer program product, which comprises a non-transitory computer readable storage medium storing a computer program, and the computer program is operable to make a computer execute the steps in the data-driven remote sensing image new view angle synthesizing method described in the embodiment.
The apparatus embodiments described above are merely illustrative, in which the modules illustrated as separate components may or may not be physically separate, and the components shown as modules may or may not be physical, i.e., may be located in one place, or may be distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
From the above detailed description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course by means of hardware. Based on such understanding, the foregoing technical solutions may be embodied essentially or in part in the form of a software product that may be stored in a computer-readable storage medium including Read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), programmable Read-Only Memory (Programmable Read-Only Memory, PROM), erasable programmable Read-Only Memory (Erasable Programmable Read Only Memory, EPROM), one-time programmable Read-Only Memory (OTPROM), electrically erasable programmable Read-Only Memory (EEPROM), compact disc Read-Only Memory (Compact Disc Read-Only Memory, CD-ROM) or other optical disc Memory, magnetic disc Memory, tape Memory, or any other medium that can be used for computer-readable carrying or storing data.
Finally, it should be noted that the embodiment of the invention discloses a data-driven remote sensing image new view angle synthesizing method, which is disclosed only as a preferred embodiment of the invention and is used only to illustrate the technical scheme of the invention, not to limit it; although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical schemes recorded in the various embodiments can still be modified, or some of the technical features therein can be replaced equivalently; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.
Claims (10)
1. A new view angle synthesizing method of remote sensing images based on data driving is characterized by comprising the following steps:
Acquiring multi-dimensional image data information; the multi-dimensional image data information comprises a plurality of image information to be processed;
preprocessing the multidimensional image data information to obtain initial image processing information;
and processing the initial image processing information by using the image processing model to obtain new view angle synthesized image information of the remote sensing image.
2. The data-driven remote sensing image new view angle synthesizing method according to claim 1, wherein the preprocessing of the multi-dimensional image data information to obtain the initial image processing information comprises the steps of:
Correcting the multi-dimensional image data information to obtain multi-dimensional corrected image information; the multi-dimensional correction image information comprises a plurality of pieces of target correction image information;
and carrying out registration processing on the multidimensional correction image information to obtain initial image processing information.
3. The data-driven remote sensing image new view angle synthesizing method according to claim 2, wherein the correcting process is performed on the multi-dimensional image data information to obtain multi-dimensional corrected image information, comprising:
For any image information to be processed in the multi-dimensional image data information, performing image filtering adjustment processing on the image information to be processed to obtain target adjustment image information corresponding to the image information to be processed;
And correcting the target adjustment image information to obtain target correction image information corresponding to the image information to be processed.
4. The method for synthesizing a new view angle of a remote sensing image based on data driving according to claim 3, wherein performing image filtering adjustment processing on the image information to be processed to obtain target adjustment image information corresponding to the image information to be processed, comprises:
sequentially performing two filtering processes on the image information to be processed based on a first filtering frequency and a second filtering frequency to obtain target filtering image information corresponding to the image information to be processed; the first filtering frequency is greater than the second filtering frequency;
performing contrast enhancement processing on the target filtered image information to obtain first enhanced image information corresponding to the image information to be processed;
And carrying out brightness optimization adjustment processing on the first enhanced image information to obtain target adjustment image information corresponding to the image information to be processed.
5. The data-driven remote sensing image new view angle synthesizing method as set forth in claim 3, wherein the correcting process is performed on the target adjustment image information to obtain target correction image information corresponding to the image information to be processed, comprising:
Performing distortion type detection processing on the target adjustment image information to obtain image distortion type information corresponding to the image information to be processed;
performing parameter estimation processing on target adjustment image information based on the image distortion type information to obtain estimated parameter information corresponding to the image information to be processed;
and remapping the pixel points in the target adjustment image information based on the estimated parameter information to obtain target correction image information corresponding to the image information to be processed.
6. The method for synthesizing the new view angle of the remote sensing image based on data driving according to claim 1, wherein the image processing model comprises a first connecting module, a second connecting module, a third connecting module, a fourth connecting module, a first multi-sensing module, a second multi-sensing module, a third multi-sensing module, a fourth multi-sensing module, a fifth multi-sensing module, a sixth multi-sensing module, a seventh multi-sensing module, an eighth multi-sensing module, a first activating module, a second activating module, a third activating module, a fourth activating module, a fifth activating module, a first single-sensing module, a second single-sensing module, a third single-sensing module, a fourth single-sensing module, a fifth single-sensing module and a fusion module; wherein,
The input end of the first connection module, the input end of the fifth single-sensing module, the input end of the fourth connection module, the input end of the third connection module, and the input end of the first single-sensing module are all connected with the first model input of the image processing model; the output end of the first connection module is connected with the input end of the first multi-sensing module; the output end of the first multi-sensing module is connected with the input end of the second multi-sensing module; the output end of the second multi-sensing module is connected with the input end of the third multi-sensing module; the output end of the third multi-sensing module is connected with the input end of the fourth multi-sensing module; the output end of the fourth multi-sensing module is connected with the input end of the fifth multi-sensing module; the output end of the fifth multi-sensing module is connected with the input end of the sixth multi-sensing module; the output end of the sixth multi-sensing module is connected with the input end of the seventh multi-sensing module; the output end of the seventh multi-sensing module is connected with the input end of the eighth multi-sensing module and the input end of the second connection module; the output end of the second connection module is connected with the input end of the first activation module; the output end of the first activation module is respectively connected with the input end of the fusion module, the input end of the first single-sensing module, the input end of the third connection module, the input end of the fourth connection module, and the input end of the fifth single-sensing module; the output end of the first single-sensing module is connected with the input end of the second activation module; the output end of the second activation module is connected with the input end of the fusion module; the output end of the third connection module is connected with the input end of the second single-sensing module; the output end of the second single-sensing module is connected with the input end of the third activation module; the output end of the third activation module is connected with the input end of the fusion module; the output end of the fourth connection module is connected with the input end of the third single-sensing module; the output end of the third single-sensing module is connected with the input end of the fourth single-sensing module; the output end of the fourth single-sensing module is connected with the input end of the fourth activation module; the output end of the fourth activation module is connected with the input end of the fusion module; the output end of the fifth single-sensing module is connected with the input end of the fifth activation module; the output end of the fifth activation module is connected with the input end of the fusion module; the output end of the fusion module is connected with the first model output of the image processing model.
7. The data-driven remote sensing image new view angle synthesizing method as claimed in claim 1, wherein the image processing model is obtained based on the following steps:
Acquiring initial training image information; the initial training image information comprises A pieces of initial training sample information;
Randomly selecting B pieces of initial training sample information from the initial training image information to obtain target training image information; b is a positive integer not greater than A/2;
Training the initial processing model by utilizing the target training image information and the first training evaluation model to obtain an intermediate processing model and first model training information;
The first training evaluation model is as follows:
wherein L1 is the first model evaluation value in the first model training information; N is the number of target training images in the target training image information; a is the first model training coefficient; b is the second model training coefficient; x i is the image label value corresponding to the ith target training image; and x̂ i is the label estimation value corresponding to the ith target training image;
judging whether the first model training information meets the first model training condition or not to obtain a first model training judgment result;
when the first model training judgment result is negative, calculating the first model training information by using a second training evaluation model to obtain second model training information;
Wherein the second training evaluation model is:
wherein L2 is a second model evaluation value corresponding to the second model training information; c is a third model training coefficient; x is the number of positive samples of the target training image in the first model training information; y is the number of negative samples of the target training image in the first model training information;
Judging whether the second model training information meets the second model training condition or not to obtain a second model training judgment result;
When the second model training judgment result is negative, updating the parameters of the initial processing model by using the intermediate processing model, and triggering execution of the step of determining the target training image information based on the initial training image information;
When the second model training judgment result is yes, determining the intermediate processing model as an image processing model;
And when the first model training judgment result is yes, determining the intermediate processing model as an image processing model.
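The two-stage training loop described above (select a batch of at most A/2 samples, train, check the first condition, evaluate again, check the second condition, otherwise update parameters and re-select) can be sketched as follows. This is a minimal illustration only: the two evaluation formulas are reproduced in the source only as images, so `first_eval`, `second_eval`, and every other name below are hypothetical placeholders, not the patent's actual implementation.

```python
import random

def train_image_processing_model(initial_samples, train_step,
                                 first_eval, second_eval,
                                 first_cond, second_cond,
                                 model, max_rounds=100):
    """Hypothetical sketch of the claimed iterative training procedure.

    initial_samples: the A pieces of initial training sample information.
    train_step:      trains `model` on a batch and returns
                     (intermediate_model, first_model_training_info).
    first_eval / second_eval: stand-ins for the first and second
                     training evaluation models (formulas not in source).
    first_cond / second_cond: predicates for the two training conditions.
    """
    A = len(initial_samples)
    for _ in range(max_rounds):
        # Randomly select B target training samples, B <= A/2.
        B = random.randint(1, max(1, A // 2))
        target = random.sample(initial_samples, B)

        intermediate, info1 = train_step(model, target, first_eval)
        if first_cond(info1):
            # First condition met: intermediate model becomes final.
            return intermediate
        info2 = second_eval(info1)
        if second_cond(info2):
            # Second condition met: intermediate model becomes final.
            return intermediate
        # Neither condition met: update parameters with the
        # intermediate model and re-select target samples.
        model = intermediate
    return model
```

The loop terminates through either of the two judgment results, mirroring the branch structure of the claim.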
8. A data-driven remote sensing image novel view angle synthesizing apparatus, characterized in that the apparatus comprises:
An acquisition module, used for acquiring multi-dimensional image data information; the multi-dimensional image data information comprises a plurality of pieces of image information to be processed;
A first processing module, used for preprocessing the multi-dimensional image data information to obtain initial image processing information;
And a second processing module, used for processing the initial image processing information by using the image processing model to obtain new view angle synthesized image information of the remote sensing image.
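The three-module apparatus of claim 8 can be sketched as a simple pipeline. The claim specifies only each module's responsibility, not its implementation, so the class, method names, and the trivial preprocessing step below are all hypothetical:

```python
class NovelViewSynthesisDevice:
    """Sketch of the claimed apparatus: acquisition module,
    first processing module, second processing module."""

    def __init__(self, image_processing_model):
        # The trained image processing model used by the
        # second processing module.
        self.model = image_processing_model

    def acquire(self, sources):
        # Acquisition module: gather multi-dimensional image data,
        # i.e. a collection of images to be processed.
        return list(sources)

    def preprocess(self, multidim_images):
        # First processing module: preprocess into initial image
        # processing information (placeholder filtering here).
        return [img for img in multidim_images if img is not None]

    def synthesize(self, initial_info):
        # Second processing module: apply the image processing model
        # to obtain the new-view synthesized image information.
        return [self.model(x) for x in initial_info]
```

A typical call chain would be `synthesize(preprocess(acquire(...)))`, matching the order in which the claim lists the modules.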
9. A data-driven remote sensing image novel view angle synthesizing apparatus, characterized in that the apparatus comprises:
A memory storing executable program code;
A processor coupled to the memory;
The processor invokes the executable program code stored in the memory to perform the data-driven remote sensing image novel view angle synthesizing method according to any one of claims 1-7.
10. A computer readable storage medium, wherein the computer readable storage medium stores computer instructions, which when invoked, are adapted to perform the data-driven remote sensing image new view angle synthesizing method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410449709.3A CN118298120A (en) | 2024-04-15 | 2024-04-15 | Novel view angle synthesizing method of remote sensing image based on data driving |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410449709.3A CN118298120A (en) | 2024-04-15 | 2024-04-15 | Novel view angle synthesizing method of remote sensing image based on data driving |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118298120A true CN118298120A (en) | 2024-07-05 |
Family
ID=91682009
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410449709.3A Pending CN118298120A (en) | 2024-04-15 | 2024-04-15 | Novel view angle synthesizing method of remote sensing image based on data driving |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118298120A (en) |
- 2024-04-15: CN application CN202410449709.3A filed (published as CN118298120A/en), status active, Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110570466B (en) | Method and device for generating three-dimensional live-action point cloud model | |
CN109685842B (en) | Sparse depth densification method based on multi-scale network | |
CN110675418B (en) | Target track optimization method based on DS evidence theory | |
Bosch et al. | A multiple view stereo benchmark for satellite imagery | |
Baltsavias et al. | High‐quality image matching and automated generation of 3D tree models | |
Schenk et al. | Fusion of LIDAR data and aerial imagery for a more complete surface description | |
KR102200299B1 (en) | A system implementing management solution of road facility based on 3D-VR multi-sensor system and a method thereof | |
CN107560592B (en) | Precise distance measurement method for photoelectric tracker linkage target | |
KR100529401B1 (en) | Apparatus and method of dem generation using synthetic aperture radar(sar) data | |
CN109840553A (en) | The extracting method and system, storage medium, electronic equipment for agrotype of ploughing | |
CN115236655A (en) | Landslide identification method, system, equipment and medium based on fully-polarized SAR | |
CN111143489B (en) | Image-based positioning method and device, computer equipment and readable storage medium | |
CN115861591B (en) | Unmanned aerial vehicle positioning method based on transformer key texture coding matching | |
CN116245757B (en) | Multi-scene universal remote sensing image cloud restoration method and system for multi-mode data | |
CN117132649A (en) | Ship video positioning method and device for artificial intelligent Beidou satellite navigation fusion | |
Rönnholm | Registration quality-towards integration of laser scanning and photogrammetry | |
CN116817891A (en) | Real-time multi-mode sensing high-precision map construction method | |
CN115049794A (en) | Method and system for generating dense global point cloud picture through deep completion | |
Mohamed et al. | Change detection techniques using optical remote sensing: a survey | |
CN117274375A (en) | Target positioning method and system based on transfer learning network model and image matching | |
CN115825946A (en) | Millimeter wave radar ranging method and device based on unsupervised learning | |
CN118298120A (en) | Novel view angle synthesizing method of remote sensing image based on data driving | |
CN115713548A (en) | Automatic registration method for multi-stage live-action three-dimensional model | |
Al-Durgham | The registration and segmentation of heterogeneous Laser scanning data | |
Wu et al. | Building Facade Reconstruction Using Crowd-Sourced Photos and Two-Dimensional Maps |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||