
CN111383199B - Image processing method, device, computer readable storage medium and electronic equipment - Google Patents


Info

Publication number
CN111383199B
CN111383199B (application CN202010205822.9A)
Authority
CN
China
Prior art keywords
ghost
image
free
images
region
Prior art date
Legal status
Active
Application number
CN202010205822.9A
Other languages
Chinese (zh)
Other versions
CN111383199A (en)
Inventor
顾晓东
饶童
刘程林
Current Assignee
You Can See Beijing Technology Co ltd AS
Original Assignee
You Can See Beijing Technology Co ltd AS
Priority date
Filing date
Publication date
Application filed by You Can See Beijing Technology Co ltd
Priority to CN202010205822.9A
Publication of CN111383199A
Priority to US17/210,100 (US11620730B2)
Application granted
Publication of CN111383199B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the present disclosure disclose an image processing method, an image processing apparatus, a computer readable storage medium, and an electronic device. The method comprises the following steps: acquiring a plurality of ghost-free images; generating a plurality of corresponding ghost images according to the plurality of ghost-free images; training a ghost processing model by using the plurality of ghost-free images and the plurality of ghost images; and inputting a ghost image to be processed into the ghost processing model to obtain the corresponding ghost-free image output by the ghost processing model. In the embodiments of the present disclosure, the ghost in a ghost image can be removed by using the trained ghost processing model, so as to obtain a ghost-free image for depth estimation and avoid the influence of the ghost on the perception of heights, object boundaries, and the like.

Description

Image processing method, device, computer readable storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of image technologies, and in particular, to an image processing method, an image processing device, a computer readable storage medium, and an electronic device.
Background
Currently, images are used in many modeling processes; for example, images may be used in depth-estimation-related processing. In some cases, a ghost may exist in an image, that is, at least two images of the same object appear in the image. For example, in fig. 1 there are two images of the same door, which visually appear as if two doors were superimposed. When depth estimation is performed using an image with a ghost, the ghost affects the perception of heights and object boundaries, resulting in a significant reduction in the accuracy of the depth estimation.
Disclosure of Invention
The present disclosure has been made in order to solve the above technical problems. Embodiments of the present disclosure provide an image processing method, an image processing apparatus, a computer-readable storage medium, and an electronic device.
According to an aspect of the embodiments of the present disclosure, there is provided an image processing method including:
acquiring a plurality of ghost-free images;
generating a plurality of corresponding ghost images according to the plurality of ghost-free images;
training to obtain a ghost processing model by utilizing the ghost-free images and the ghost images;
and inputting the ghost image to be processed into the ghost processing model to obtain a corresponding ghost-free image output by the ghost processing model.
In an alternative example, each ghost-free image of the plurality of ghost-free images is a panoramic image;
the generating a corresponding plurality of ghost images according to the plurality of ghost-free images includes:
acquiring a first spherical image corresponding to the first ghost-free image; wherein the first ghost-free image is any ghost-free image of the plurality of ghost-free images;
generating a ghost image on the first spherical image;
and obtaining a ghost image corresponding to the first ghost-free image according to the first spherical image with the ghost.
In an alternative example, the generating a ghost on the first spherical image includes:
determining a first region on the first spherical image;
applying a pose disturbance to the first region to determine a second region on the first spherical image;
mapping a partial image located in the second region to the first region to generate a ghost image on the first spherical image.
In an alternative example,
before the determining the first region on the first spherical image, the method further includes:
determining a third region on the first spherical image; wherein there is an overlap of the first region and the third region;
The obtaining the ghost image corresponding to the first ghost-free image according to the first spherical image with the ghost, includes:
controlling two partial images positioned in the overlapped area to be displayed according to corresponding transparency respectively according to a preset transparency strategy aiming at the first spherical image with the ghost;
and obtaining a ghost image corresponding to the first ghost-free image according to the first spherical image with the ghost and the transparency controlled.
In an optional example, the obtaining, according to the first spherical image in which the ghost is generated and the transparency is controlled, a ghost image corresponding to the first ghost-free image includes:
performing boundary smoothing processing on the overlapped area in the first spherical image which is generated with ghost and has controlled transparency to obtain a second spherical image;
and taking the panoramic image corresponding to the second spherical image as the ghost image corresponding to the first ghost-free image.
In an alternative example,
after the generating the corresponding multiple ghost images according to the multiple ghost-free images, the method further comprises:
determining a plurality of ghost position indication information corresponding to the plurality of ghost images;
The training to obtain a ghost processing model by using the ghost-free images and the ghost images includes:
training by taking the multiple ghost images as input data and taking the multiple ghost-free images and the multiple ghost position indication information as output data, so as to obtain a ghost processing model;
the inputting the ghost image to be processed into the ghost processing model to obtain the corresponding ghost-free image output by the ghost processing model comprises the following steps:
and inputting the ghost image to be processed into the ghost processing model to obtain a corresponding ghost-free image and corresponding ghost position indication information output by the ghost processing model.
According to another aspect of an embodiment of the present disclosure, there is provided an image processing apparatus including:
the first acquisition module is used for acquiring a plurality of ghost-free images;
the generation module is used for generating a plurality of corresponding ghost images according to the plurality of ghost-free images;
the second acquisition module is used for training to obtain a ghost processing model by utilizing the ghost-free images and the ghost images;
and the third acquisition module is used for inputting the ghost image to be processed into the ghost processing model so as to obtain a corresponding ghost-free image output by the ghost processing model.
In an alternative example, each ghost-free image of the plurality of ghost-free images is a panoramic image;
the generation module includes:
the first acquisition sub-module is used for acquiring a first spherical image corresponding to the first ghost-free image; wherein the first ghost-free image is any ghost-free image of the plurality of ghost-free images;
a generation sub-module for generating a ghost on the first spherical image;
and the second acquisition sub-module is used for obtaining a ghost image corresponding to the first ghost-free image according to the first spherical image with the ghost.
In an alternative example, the generating sub-module includes:
a first determining unit configured to determine a first region on the first spherical image;
a second determining unit configured to apply a pose disturbance to the first region to determine a second region on the first spherical image;
and the mapping unit is used for mapping the local image positioned in the second area to the first area so as to generate double images on the first spherical image.
In an alternative example,
the apparatus further comprises:
a first determining module for determining a third region on the first spherical image before determining the first region on the first spherical image; wherein there is an overlap of the first region and the third region;
The second acquisition sub-module includes:
the control unit is used for controlling the two partial images positioned in the overlapped area to be displayed according to the corresponding transparency respectively according to a preset transparency strategy aiming at the first spherical image with the ghost;
and the acquisition unit is used for acquiring the ghost image corresponding to the first ghost-free image according to the first spherical image which is generated with the ghost and has the transparency controlled.
In an alternative example, the acquiring unit includes:
an acquisition subunit, configured to perform boundary smoothing processing on the overlapping region in the first spherical image, where the overlapping region is generated with ghost and transparency is controlled, so as to obtain a second spherical image;
and the determining subunit is used for taking the panoramic image corresponding to the second spherical image as the ghost image corresponding to the first ghost-free image.
In an alternative example,
the apparatus further comprises:
a second determining module, configured to determine a plurality of ghost position indication information corresponding to the plurality of ghost images after generating a corresponding plurality of ghost images according to the plurality of ghost-free images;
the second obtaining module is specifically configured to:
training by taking the multiple ghost images as input data and taking the multiple ghost-free images and the multiple ghost position indication information as output data, so as to obtain a ghost processing model;
The third obtaining module is specifically configured to:
and inputting the ghost image to be processed into the ghost processing model to obtain a corresponding ghost-free image and corresponding ghost position indication information output by the ghost processing model.
According to still another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing the above-described image processing method.
According to still another aspect of the embodiments of the present disclosure, there is provided an electronic device including:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the image processing method described above.
In the embodiments of the present disclosure, a plurality of ghost-free images can be acquired, a corresponding plurality of ghost images can be generated according to the plurality of ghost-free images, and a ghost processing model can be obtained through training by using the plurality of ghost-free images and the plurality of ghost images. Afterwards, the ghost image to be processed only needs to be input into the ghost processing model to obtain the corresponding ghost-free image output by the ghost processing model. That is, in the embodiments of the present disclosure, by using the trained ghost processing model, the ghost in a ghost image can be removed, so as to obtain a ghost-free image for depth estimation and avoid the influence of the ghost on the perception of heights, object boundaries, and the like.
The technical scheme of the present disclosure is described in further detail below through the accompanying drawings and examples.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing embodiments thereof in more detail with reference to the accompanying drawings. The accompanying drawings are included to provide a further understanding of embodiments of the disclosure, and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure, without limitation to the disclosure. In the drawings, like reference numerals generally refer to like parts or steps.
Fig. 1 is a schematic diagram of an image in which ghost images exist.
Fig. 2 is a flowchart illustrating an image processing method according to an exemplary embodiment of the present disclosure.
Fig. 3 is a partial schematic view of a spherical image.
Fig. 4 is a schematic view of a panoramic image.
Fig. 5 is a flowchart illustrating an image processing method according to another exemplary embodiment of the present disclosure.
Fig. 6 is a schematic diagram of an image processing method in another exemplary embodiment of the present disclosure.
Fig. 7 is a schematic structural view of an image processing apparatus provided in an exemplary embodiment of the present disclosure.
Fig. 8 is a block diagram of an electronic device provided in an exemplary embodiment of the present disclosure.
Detailed Description
Hereinafter, example embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present disclosure and not all of the embodiments of the present disclosure, and that the present disclosure is not limited by the example embodiments described herein.
It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless it is specifically stated otherwise.
It will be appreciated by those of skill in the art that the terms "first," "second," etc. in embodiments of the present disclosure are used merely to distinguish between different steps, devices or modules, etc., and do not represent any particular technical meaning nor necessarily logical order between them.
It should also be understood that in embodiments of the present disclosure, "plurality" may refer to two or more, and "at least one" may refer to one, two or more.
It should also be appreciated that any component, data, or structure referred to in the presently disclosed embodiments may be generally understood as one or more without explicit limitation or the contrary in the context.
In addition, the term "and/or" in this disclosure merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate that A exists alone, A and B exist together, or B exists alone. In addition, the character "/" in this disclosure generally indicates that the associated objects before and after it are in an "or" relationship.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and that the same or similar features may be referred to each other, and for brevity, will not be described in detail.
Meanwhile, it should be understood that the sizes of the respective parts shown in the drawings are not drawn in actual scale for convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
Embodiments of the present disclosure may be applicable to electronic devices such as terminal devices, computer systems, servers, etc., which may operate with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known terminal devices, computing systems, environments, and/or configurations that may be suitable for use with the terminal device, computer system, server, or other electronic device include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network personal computers, minicomputer systems, mainframe computer systems, and distributed cloud computing technology environments that include any of the above systems, and the like.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc., that perform particular tasks or implement particular abstract data types. The computer system/server may be implemented in a distributed cloud computing environment in which tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computing system storage media including memory storage devices.
Exemplary method
Fig. 2 is a flowchart illustrating an image processing method according to an exemplary embodiment of the present disclosure. The method shown in fig. 2 includes step 201, step 202, step 203, and step 204, each of which is described below.
In step 201, a plurality of ghost-free images are acquired.
Here, a database may be preset, in which a large number of captured ghost-free images may be stored, and each ghost-free image in the database may be a panoramic image or a non-panoramic image. In step 201, a plurality of ghost-free images may be acquired from the database, for example, 200,000, 500,000, 800,000, or 1,000,000 ghost-free images.
Step 202, generating a plurality of corresponding ghost images according to the plurality of ghost-free images.
Here, the ghost-free images and the ghost images may be in one-to-one correspondence: each ghost-free image and the ghost image corresponding to it may be images of the same scene and may form an image pair, so that the plurality of ghost-free images and the plurality of ghost images form a plurality of image pairs.
Alternatively, each of the plurality of ghost-free images, and each of the plurality of ghost images may have a size of 640×320 (although other sizes are possible, only the case of 640×320 is illustrated in the embodiment of the present disclosure).
It should be noted that there are various specific implementations of generating the corresponding plurality of ghost images according to the plurality of ghost-free images; for clarity of presentation, examples are given below.
In step 203, a ghost processing model is obtained by training using the plurality of ghost-free images and the plurality of ghost images.
Here, from the plurality of ghost-free images and the plurality of ghost images, a plurality of sample data can be obtained. The model training can be performed by using a plurality of sample data to obtain a ghost processing model, and the ghost processing model can output at least corresponding ghost-free images according to the input ghost images.
Alternatively, the model training process may use an adaptive moment estimation (Adaptive Moment Estimation, Adam) optimizer and train until the model converges.
Alternatively, the ghost processing model may be deployed on an embedded device or cloud server (which may also be referred to as a cloud).
Step 204, inputting the ghost image to be processed into the ghost processing model to obtain the corresponding ghost-free image output by the ghost processing model.
Here, the ghost image to be processed may be an image from which ghost is to be removed, and at least two images of the same object may exist in the ghost image to be processed.
In the case where each of the plurality of ghost-free images and each of the plurality of ghost images has a size of 640×320, the input size accepted by the ghost processing model may also be 640×320. In this way, when a ghost image to be processed is acquired, it can be determined whether its scale is 640×320.
If the judgment result is yes, the ghost image to be processed can be directly provided to the ghost processing model so as to obtain a corresponding ghost-free image output by the ghost processing model.
If the determination result is negative, for example, the scale of the ghost image to be processed is 1280×640, the ghost image to be processed may be subjected to downsampling, the scale of the ghost image to be processed may be 640×320 after downsampling, and then the ghost image to be processed after downsampling is provided to the ghost processing model, so as to obtain a corresponding ghost-free image output by the ghost processing model.
Whether the determination result is yes or no, the scale of the ghost-free image output by the ghost processing model may be 640×320. In addition, after obtaining the ghost-free image output by the ghost processing model, the obtained ghost-free image may be used for depth estimation or other purposes.
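For illustration only, a minimal Python sketch of this inference-time scale check and downsampling is given below; the ghost_model callable and the use of OpenCV are assumptions for the sketch, not part of the disclosed method.

```python
import cv2  # assumed here; any image library with a resize operation would work

TARGET_W, TARGET_H = 640, 320  # the training scale used in this example

def deghost(ghost_model, image):
    """Downsample the to-be-processed ghost image to 640x320 if needed,
    then run the ghost processing model to obtain a ghost-free image."""
    h, w = image.shape[:2]
    if (w, h) != (TARGET_W, TARGET_H):
        # e.g. a 1280x640 image is downsampled to 640x320 before inference
        image = cv2.resize(image, (TARGET_W, TARGET_H), interpolation=cv2.INTER_AREA)
    return ghost_model(image)  # the output is also at the 640x320 scale
```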
In the embodiments of the present disclosure, a plurality of ghost-free images can be acquired, a corresponding plurality of ghost images can be generated according to the plurality of ghost-free images, and a ghost processing model can be obtained through training by using the plurality of ghost-free images and the plurality of ghost images. Afterwards, the ghost image to be processed only needs to be input into the ghost processing model to obtain the corresponding ghost-free image output by the ghost processing model. That is, in the embodiments of the present disclosure, by using the trained ghost processing model, the ghost in a ghost image can be removed, so as to obtain a ghost-free image for depth estimation and avoid the influence of the ghost on the perception of heights, object boundaries, and the like.
In an alternative example, each of the plurality of ghost-free images is a panoramic image;
generating a plurality of corresponding ghost images according to the plurality of ghost-free images, including:
acquiring a first spherical image corresponding to the first ghost-free image; wherein the first ghost-free image is any ghost-free image of the plurality of ghost-free images;
generating a ghost image on the first spherical image;
and obtaining a ghost image corresponding to the first ghost-free image according to the first spherical image with the ghost.
For easy understanding, the generation process of the panoramic image will be briefly described.
In general, when three-dimensional reconstruction of indoor or outdoor scenes is performed, a data acquisition device can be used to acquire relevant data of the actual scene. The relevant data can comprise a plurality of small images, each of which can correspond to a different camera pose. Then, the plurality of small images in the relevant data can be stitched, and a panoramic image is obtained according to the stitching result.
In the actual stitching process, a certain point in space is generally taken as the center, and the surrounding scene is projected and imaged on a spherical surface around that point. Each small image in the relevant data can thus be attached to the spherical surface in turn until the whole spherical surface is covered. In this way, a spherical image can be considered to be formed on the spherical surface, and this spherical image serves as the stitching result; a local part of such a spherical image is shown in fig. 3.
Then, the sphere on which the spherical image (the stitching result) lies may be expanded to obtain a rectangular image: the 360° of longitude of the sphere corresponds to the width of the rectangular image, and the 180° of latitude corresponds to its height. This rectangular image may be regarded as the panoramic image corresponding to the spherical image. Generally, the aspect ratio of the panoramic image may be cropped to 2:1, matching the proportional relationship between 360° of longitude and 180° of latitude; such a panoramic image is shown in fig. 4.
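For illustration only, the correspondence between panorama pixels and spherical longitude/latitude described above can be sketched as a standard equirectangular mapping; the pixel-coordinate convention (origin at the top-left) is an assumption of this sketch.

```python
import numpy as np

def pixel_to_sphere(u, v, width, height):
    """Map a panorama pixel (u, v) to spherical longitude/latitude in radians.
    The panorama width spans 360 deg of longitude and its height spans 180 deg of latitude."""
    lon = (u / width) * 2.0 * np.pi - np.pi      # in [-pi, pi)
    lat = np.pi / 2.0 - (v / height) * np.pi     # in [pi/2, -pi/2]
    return lon, lat

def sphere_to_pixel(lon, lat, width, height):
    """Inverse of the sphere expansion: place a spherical direction back onto the panorama."""
    u = (lon + np.pi) / (2.0 * np.pi) * width
    v = (np.pi / 2.0 - lat) / np.pi * height
    return u, v
```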
As can be seen from the above, there is a correspondence between the spherical image and the panoramic image. Therefore, in the embodiments of the present disclosure, the first spherical image corresponding to the first ghost-free image may be obtained by the inverse of the sphere expansion operation. Then, a ghost may be generated on the first spherical image; specifically, one, two, or more ghosts may be generated on the first spherical image to obtain the first spherical image with the ghost.
Then, a ghost image corresponding to the first ghost-free image can be obtained according to the ghost-generated first spherical image. Specifically, the sphere on which the ghost-generated first spherical image lies may be directly expanded to obtain a rectangular image, and the obtained rectangular image can be used as the ghost image corresponding to the first ghost-free image. Of course, the specific way of obtaining the ghost image corresponding to the first ghost-free image from the ghost-generated first spherical image is not limited to this; examples are given below for clarity of presentation.
As can be seen, in the embodiments of the present disclosure, by generating a ghost on the spherical image corresponding to a ghost-free panoramic image, the ghost images needed for model training can be obtained very conveniently, which avoids the difficulty of obtaining a ghost image and its ghost-free counterpart of the same scene simultaneously by shooting.
In one alternative example, generating a ghost image on a first spherical image includes:
determining a first region on the first spherical image;
applying a pose disturbance to the first region to determine a second region on the first spherical image;
mapping the partial image located in the second region to the first region to generate a ghost image on the first spherical image.
In an embodiment of the present disclosure, the first region may be randomly selected on the first spherical image, or the first region may be selected on the first spherical image according to a certain rule.
Next, a random pose disturbance may be applied to the first region to determine the second region on the first spherical image. Here, the random disturbance may conform to a normal distribution; the second region corresponds to the region obtained after a certain rotation and/or translation of the first region, and the second region has the same size as the first region.
Then, the partial image located in the second region may be mapped to the first region, so that the partial image originally located in the second region covers the partial image originally in the first region. Two images of the same object will then exist on the first spherical image; that is, the ghost is successfully generated on the first spherical image.
It can be seen that in the embodiments of the present disclosure, the generation of the ghost can be very conveniently achieved through the mapping operation.
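For illustration only, a rough Python sketch of this ghost-generation step is given below, assuming the spherical image is stored as an array indexed by a region mask and the pose disturbance is a small random rotation; resample_region is a hypothetical helper, not an API from the disclosure.

```python
import numpy as np

def random_pose_disturbance(mean=0.0, std=0.01):
    """Draw a small random rotation (in radians) about each axis from a normal
    distribution; the mean and variance would come from estimates of the real
    capture process."""
    return np.random.normal(mean, std, size=3)

def generate_ghost(sphere_img, first_region_mask, resample_region):
    """Map the partial image found in the pose-disturbed (second) region back onto
    the first region, so the same content appears twice and a ghost is formed.
    `resample_region` is an illustrative helper that rotates the first region's
    spherical coordinates by the disturbance and samples the image there."""
    disturbance = random_pose_disturbance()
    mapped = resample_region(sphere_img, first_region_mask, disturbance)
    ghosted = sphere_img.copy()
    ghosted[first_region_mask] = mapped[first_region_mask]  # overwrite the first region
    return ghosted
```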
In an alternative example,
before determining the first region on the first spherical image, the method further comprises:
determining a third region on the first spherical image; wherein, the first area and the third area have an overlapping area;
according to the first spherical image with ghost, obtaining a ghost image corresponding to the first ghost-free image, comprising:
aiming at the first spherical image with ghost, according to a preset transparency strategy, controlling two partial images positioned in an overlapped area to be displayed according to corresponding transparency respectively;
and obtaining a ghost image corresponding to the first ghost-free image according to the first spherical image which is generated with the ghost and has the transparency controlled.
In an embodiment of the disclosure, a third region may be randomly selected on the first spherical image, and a first region may be selected around the third region, where an overlapping region is required between the first region and the third region. Next, a second region may be determined by applying a random perturbation of a pose to the first region and mapping a partial image of the second region to the first region.
It can be understood that, before the mapping operation is performed, the overlapping region of the first region and the third region contains only the partial image originally located there. After the mapping operation is performed, two partial images overlap in this region: one is the partial image originally belonging to the local image of the third region, and the other is the partial image that has been mapped into the first region.
In order to ensure that both overlapping partial images can be displayed, after the mapping operation is performed, the two overlapping partial images can be controlled to be displayed at their respective transparencies according to a preset transparency strategy. For example, according to the preset transparency strategy, one of the two overlapping partial images may be displayed at a transparency of 0.3 and the other at a transparency of 0.7; as another example, one may be displayed at a transparency of 0.4 and the other at a transparency of 0.6.
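For illustration only, the transparency control in the overlapping region can be sketched as simple alpha blending; interpreting the transparency values as blending weights is an assumption of this sketch.

```python
import numpy as np

def blend_overlap(original_patch, mapped_patch, alpha=0.7):
    """Blend the two overlapping partial images so that both remain visible.
    Here `alpha` is the display weight of the original partial image and
    1 - alpha that of the mapped one (e.g. the 0.7 / 0.3 policy above)."""
    blended = (alpha * original_patch.astype(np.float32)
               + (1.0 - alpha) * mapped_patch.astype(np.float32))
    return np.clip(blended, 0, 255).astype(np.uint8)
```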
Then, a ghost image corresponding to the first ghost-free image can be obtained according to the first spherical image which is generated with the ghost and has the transparency controlled. In one embodiment, according to a first spherical image in which ghost is generated and transparency is controlled, obtaining a ghost image corresponding to a first ghost-free image includes:
Performing boundary smoothing processing on the overlapped area in the first spherical image which is generated with ghost and the transparency of which is controlled, so as to obtain a second spherical image;
and taking the panoramic image corresponding to the second spherical image as a ghost image corresponding to the first ghost-free image.
Here, the boundary of the overlapping region in the first spherical image (on which the ghost has been generated and the transparency controlled) may be filtered, for example by mean filtering or median filtering, to smooth the boundary and thereby obtain the second spherical image. Then, the sphere on which the second spherical image lies can be expanded to obtain the corresponding panoramic image, and this panoramic image is used as the ghost image corresponding to the first ghost-free image.
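For illustration only, a minimal sketch of the boundary smoothing step is given below, assuming OpenCV is available and the overlap boundary is provided as a binary mask (both assumptions of the sketch).

```python
import cv2

def smooth_overlap_boundary(image, boundary_mask, ksize=5):
    """Mean-filter the image and copy the filtered values back only at pixels
    on the overlap boundary, smoothing the seam; cv2.medianBlur would give
    median filtering instead."""
    blurred = cv2.blur(image, (ksize, ksize))
    smoothed = image.copy()
    smoothed[boundary_mask > 0] = blurred[boundary_mask > 0]
    return smoothed
```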
In this embodiment, the process of obtaining the ghost image incorporates boundary smoothing, so the image quality of the obtained ghost image can be ensured, thereby ensuring the reliability of the ghost processing model subsequently trained on such ghost images.
Of course, the way of obtaining the ghost image corresponding to the first ghost-free image from the first spherical image on which the ghost is generated and the transparency is controlled is not limited to this; for example, the sphere on which that first spherical image lies may be directly expanded, and the panoramic image obtained after the expansion may be used as the ghost image corresponding to the first ghost-free image.
It can be seen that, in the embodiments of the present disclosure, by performing transparency control of two overlapped partial images according to a preset transparency policy, it is able to ensure that both overlapped partial images are displayed, and according to a first spherical image that generates a ghost and controls transparency, a ghost image can be obtained conveniently and reliably, so as to facilitate training of a ghost processing model.
Fig. 5 is a flowchart illustrating an image processing method according to another exemplary embodiment of the present disclosure. The method shown in fig. 5 includes step 501, step 502, step 503, step 504, and step 505, and each step is described below.
In step 501, a plurality of ghost-free images are acquired.
Step 502, generating a plurality of corresponding ghost images according to the plurality of ghost-free images.
It should be noted that, for the specific implementation of steps 501 and 502, reference may be made to the description of steps 201 and 202, which is not repeated here.
In step 503, a plurality of ghost position indication information corresponding to the plurality of ghost images is determined.
Here, the plurality of ghost images and the plurality of ghost position indication information may have a one-to-one correspondence relationship, and any one of the plurality of ghost position indication information is used to indicate a ghost position in the corresponding ghost image.
It should be noted that, the plurality of ghost-free images and the plurality of ghost-free images may form a plurality of image pairs, and each image pair includes a ghost-free image and a ghost image corresponding to the ghost-free image. In implementation, a difference binarization result (which may be a ghost area mask) of the ghost-free image and the ghost image may be calculated for each image pair, and the ghost area mask corresponding to each image pair may be used as ghost position indication information corresponding to the ghost image in each image pair.
Specifically, for any image pair D1, assume that the ghost-free image is I1, the ghost image is I2, and the ghost area mask corresponding to D1 is I3, where the pixel points in any two of I1, I2, and I3 are in one-to-one correspondence. The pixel value of each pixel in I1 and the pixel value of each pixel in I2 may be obtained. Then, for each pixel in I1, it can be judged whether its pixel value is the same as that of the corresponding pixel in I2; if so, the corresponding pixel in I3 is determined to be black, otherwise it is determined to be white. In this way, from the black and white pixels in I3, it is possible to know at which positions in I2 a ghost exists and at which positions it does not, so I3 can be used as the ghost position indication information corresponding to I2.
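For illustration only, the computation of the ghost area mask I3 just described can be sketched as follows (white = 255 where the pixel values differ, black = 0 where they match).

```python
import numpy as np

def ghost_area_mask(ghost_free_img, ghost_img):
    """Binarize the difference between I1 (ghost-free) and I2 (ghost image):
    pixels whose values match become black (0), differing pixels become white (255)."""
    differs = np.any(ghost_free_img != ghost_img, axis=-1)  # compare over the color channels
    return np.where(differs, 255, 0).astype(np.uint8)
```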
In step 504, training is performed using the plurality of ghost images as input data and the plurality of ghost-free images and the plurality of ghost position indication information as output data, thereby obtaining a ghost processing model.
Here, after the ghost position indication information corresponding to the plurality of ghost images is obtained, each ghost image and the ghost-free image and ghost position indication information corresponding thereto may constitute one sample data, and thus, a plurality of sample data may be obtained from the plurality of ghost images, the plurality of ghost-free images, and the plurality of ghost position indication information.
Then, the ghost images in the plurality of sample data can be used as input data, and the ghost-free images and ghost position indication information in the plurality of sample data can be used as output data for training, for example with the Adam optimizer until the model converges, so as to obtain the final ghost processing model. The ghost processing model can output the corresponding ghost-free image and the corresponding ghost position indication information according to the input ghost image.
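For illustration only, a highly simplified training loop is sketched below in PyTorch (the disclosure does not name a framework); the loss functions and hyperparameters are assumptions of the sketch.

```python
import torch
import torch.nn as nn

def train_ghost_model(model, loader, epochs=10, lr=1e-4, device="cuda"):
    """`loader` yields (ghost_img, ghost_free_img, ghost_mask) batches at the 640x320 scale."""
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # Adam optimizer, as in the text
    image_loss = nn.L1Loss()             # loss choices are assumptions;
    mask_loss = nn.BCEWithLogitsLoss()   # the disclosure does not specify them
    for _ in range(epochs):
        for ghost_img, ghost_free_img, ghost_mask in loader:
            pred_img, pred_mask = model(ghost_img.to(device))  # two output heads/channels
            loss = (image_loss(pred_img, ghost_free_img.to(device))
                    + mask_loss(pred_mask, ghost_mask.to(device)))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```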
Step 505, inputting the ghost image to be processed into the ghost processing model to obtain the corresponding ghost-free image and the corresponding ghost position indication information output by the ghost processing model.
Here, the ghost-free image and the ghost position indication information output by the ghost processing model may each have a size of 640×320.
In the embodiments of the present disclosure, a plurality of ghost-free images may be acquired, a corresponding plurality of ghost images may be generated according to the plurality of ghost-free images, a plurality of ghost position indication information corresponding to the plurality of ghost images may be determined, and a ghost processing model may be obtained through training according to the plurality of ghost images, the plurality of ghost-free images, and the plurality of ghost position indication information. Afterwards, the ghost image to be processed only needs to be input into the ghost processing model to obtain the corresponding ghost-free image and the corresponding ghost position indication information output by the ghost processing model. That is, in the embodiments of the present disclosure, by using the trained ghost processing model, the ghost in a ghost image can be removed, so as to obtain a ghost-free image for depth estimation and avoid the influence of the ghost on the perception of heights, object boundaries, and the like. In addition, with the trained ghost processing model, the embodiments of the present disclosure can also conveniently locate the ghost position in the ghost image.
It should be noted that, in the embodiments of the present disclosure, after generating the corresponding plurality of ghost images from the plurality of ghost-free images, the plurality of ghost position indication information corresponding to the plurality of ghost images may also not be determined; instead, the plurality of ghost images may be directly used as input data and the plurality of ghost-free images as output data for training, thereby obtaining a ghost processing model that outputs only the corresponding ghost-free image according to the input ghost image.
In an alternative example, as shown in fig. 6, to ensure the accuracy of depth estimation, data preprocessing may be performed first, and a neural network may then be trained to obtain the ghost processing model.
Specifically, in the data preprocessing process, a database storing a large number of ghost-free images (for example, 500,000 ghost-free images) may be prepared, and each ghost-free image in the database is a panoramic image.
Next, any ghost-free image I1 may be extracted from the database, I1 is downsampled to 640×320, and a ghost image I2 corresponding to I1 and a ghost area mask (mask = the binarized difference between I1 and I2) are generated by a specific algorithm.
Here, the process of generating the ghost image I2 corresponding to I1 by a specific algorithm may be:
(1) Randomly selecting a region D1 (corresponding to the third region above) on the spherical image corresponding to I1, wherein the size of the region is equal to the scale of each small image in FIG. 3;
(2) According to the shooting rules, a neighboring region D2 (corresponding to the first region above) is determined around the region D1; this neighboring region is error-free, and an overlapping region exists between it and the region D1 (the size of the overlapping region is consistent with the actual shooting process);
(3) A random disturbance is applied to the pose of the region D2; a normal distribution is selected for the disturbance, whose mean and variance are provided by actual estimates from the shooting process. The region D3 (corresponding to the second region above) is set to the position of the region D2 after the random disturbance (the size of the region D3 is exactly the same as that of the region D2);
(4) In the spherical image corresponding to I1, the image in the region D3 is attached at the position of D2, and in the overlapping region of D1 and D2 a preset transparency strategy is adopted to control the two overlapping partial images to be displayed at their respective transparencies;
(5) And performing boundary smoothing processing.
After I1, I2, and the ghost area mask are obtained, they may be combined into one piece of sample data. In a similar manner, a plurality of sample data may be obtained. Then, I2 in the plurality of sample data may be used as input data, and I1 and the ghost area mask in the plurality of sample data may be used as output data, to train the ghost processing model.
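For illustration only, assembling one piece of sample data with this pairing (I2 as input; I1 and the ghost area mask as outputs) might look as follows; the class layout is an assumption, and ghost_area_mask refers to the sketch given earlier in this description.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class GhostSample:
    ghost_img: np.ndarray       # I2, 640x320, network input
    ghost_free_img: np.ndarray  # I1, 640x320, supervision target
    ghost_mask: np.ndarray      # binarized difference between I1 and I2

def build_samples(pairs):
    """`pairs` yields (I1, I2) tuples produced by the preprocessing above."""
    return [GhostSample(i2, i1, ghost_area_mask(i1, i2)) for i1, i2 in pairs]
```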
Alternatively, the neural network used in training may adopt an Encoder-Decoder structure. Here, the Encoder part may employ a dense convolutional network (Dense Convolutional Network, DenseNet) to extract features; the feature dimension of DenseNet may be num_channels×20×10 (num_channels may be 512), where 20×10 results from DenseNet downscaling the 640×320 input image five times (each time the width and height are divided by 2). The Decoder part may use five upsampling layers, each smoothed by two convolutions, followed by one convolutional layer to recover the 640×320 scale. Finally, given an input panoramic image with a ghost, the ghost processing model can output two channels through forward inference: one channel is the 640×320 de-ghosted image, and the other channel is a 640×320 predicted ghost mask.
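For illustration only, a PyTorch sketch matching the shape described above (DenseNet encoder features reduced to num_channels×20×10, five upsampling stages back to the 640×320 scale, two output channels) is given below; the DenseNet variant and all layer choices beyond what the text states are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121  # DenseNet-121 is an assumed variant

class GhostNet(nn.Module):
    def __init__(self, num_channels=512):
        super().__init__()
        # 3 x 320 x 640 input -> 1024 x 10 x 20 feature map (five /2 downscalings)
        self.encoder = densenet121(weights=None).features
        self.reduce = nn.Conv2d(1024, num_channels, kernel_size=1)

        def up_block(c_in, c_out):
            # one x2 upsampling stage smoothed by two convolutions
            return nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
            )

        chs = [num_channels, 256, 128, 64, 32, 16]
        self.decoder = nn.Sequential(*[up_block(chs[i], chs[i + 1]) for i in range(5)])
        # two output channels: de-ghosted image channel + ghost mask channel, as in the text
        self.head = nn.Conv2d(chs[-1], 2, kernel_size=3, padding=1)

    def forward(self, x):
        feats = self.reduce(self.encoder(x))
        out = self.head(self.decoder(feats))
        return out[:, :1], out[:, 1:]  # (de-ghosted image, predicted ghost mask)
```

Under these assumptions, calling GhostNet() on a 1×3×320×640 tensor returns a 1×1×320×640 de-ghosted channel and a 1×1×320×640 mask channel.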
In summary, according to the embodiments of the present disclosure, by using the ghost processing model, an image of the same scale with the ghost removed can be conveniently obtained from an input image containing a ghost, and the obtained image can be used for depth estimation, effectively improving the accuracy of the depth estimation.
Any of the image processing methods provided by the embodiments of the present disclosure may be performed by any suitable device having data processing capability, including but not limited to terminal devices, servers, and the like. Alternatively, any of the image processing methods provided by the embodiments of the present disclosure may be executed by a processor, for example by the processor calling corresponding instructions stored in a memory. This will not be repeated below.
Exemplary apparatus
Fig. 7 is a schematic structural view of an image processing apparatus provided in an exemplary embodiment of the present disclosure. The apparatus shown in fig. 7 includes a first acquisition module 701, a generation module 702, a second acquisition module 703, and a third acquisition module 704.
A first acquiring module 701, configured to acquire a plurality of ghost-free images;
a generation module 702, configured to generate a plurality of corresponding ghost images according to the plurality of ghost-free images;
a second obtaining module 703, configured to train to obtain a ghost processing model by using the multiple ghost-free images and the multiple ghost images;
the third obtaining module 704 is configured to input the ghost image to be processed into the ghost processing model, so as to obtain a corresponding ghost-free image output by the ghost processing model.
In an alternative example, each of the plurality of ghost-free images is a panoramic image;
the generation module 702 includes:
the first acquisition sub-module is used for acquiring a first spherical image corresponding to the first ghost-free image; wherein the first ghost-free image is any ghost-free image of the plurality of ghost-free images;
a generation sub-module for generating a ghost on the first spherical image;
and the second acquisition sub-module is used for obtaining a ghost image corresponding to the first ghost-free image according to the first spherical image with the ghost.
In an alternative example, the generation sub-module includes:
a first determination unit configured to determine a first region on a first spherical image;
a second determining unit for applying a pose disturbance to the first region to determine a second region on the first spherical image;
and the mapping unit is used for mapping the local image positioned in the second area to the first area so as to generate double images on the first spherical image.
In an alternative example,
the apparatus further comprises:
a first determining module for determining a third region on the first spherical image before determining the first region on the first spherical image; wherein, the first area and the third area have an overlapping area;
a second acquisition sub-module comprising:
the control unit is used for controlling the two partial images positioned in the overlapped area to be displayed according to the corresponding transparency respectively according to a preset transparency strategy aiming at the first spherical image with the ghost;
and the acquisition unit is used for acquiring the ghost image corresponding to the first ghost-free image according to the first spherical image which is generated with the ghost and has the transparency controlled.
In an alternative example, the obtaining unit includes:
an acquisition subunit, configured to perform boundary smoothing processing on an overlapping region in the first spherical image, where the overlapping region is generated with ghost and transparency is controlled, so as to obtain a second spherical image;
And the determining subunit is used for taking the panoramic image corresponding to the second spherical image as a ghost image corresponding to the first ghost-free image.
In an alternative example,
the apparatus further comprises:
a second determining module, configured to determine a plurality of ghost position indication information corresponding to the plurality of ghost images after generating the corresponding plurality of ghost images according to the plurality of ghost-free images;
the second obtaining module 703 is specifically configured to:
training by taking a plurality of ghost images as input data and taking a plurality of ghost-free images and a plurality of ghost position indication information as output data, so as to obtain a ghost processing model;
the third obtaining module 704 is specifically configured to:
and inputting the ghost image to be processed into a ghost processing model to obtain a corresponding ghost-free image and corresponding ghost position indication information output by the ghost processing model.
Exemplary electronic device
Next, an electronic device according to an embodiment of the present disclosure is described with reference to fig. 8. The electronic device may be either or both of the first device and the second device, or a stand-alone device independent thereof, which may communicate with the first device and the second device to receive the acquired input signals therefrom.
Fig. 8 illustrates a block diagram of an electronic device 80 according to an embodiment of the present disclosure.
As shown in fig. 8, the electronic device 80 includes one or more processors 81 and memory 82.
Processor 81 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities and may control other components in electronic device 80 to perform desired functions.
Memory 82 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random Access Memory (RAM) and/or cache memory (cache), and the like. The non-volatile memory may include, for example, read Only Memory (ROM), hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer readable storage medium that can be executed by the processor 81 to implement the image processing methods and/or other desired functions of the various embodiments of the present disclosure described above. Various contents such as an input signal, a signal component, a noise component, and the like may also be stored in the computer-readable storage medium.
In one example, the electronic device 80 may further include: an input device 83 and an output device 84, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
For example, where the electronic device 80 is a first device or a second device, the input means 83 may be a microphone or an array of microphones. When the electronic device 80 is a stand-alone device, the input means 83 may be a communication network connector for receiving the acquired input signals from the first device and the second device.
In addition, the input device 83 may also include, for example, a keyboard, a mouse, and the like.
The output device 84 can output various information to the outside. The output device 84 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, etc.
Of course, only some of the components of the electronic device 80 relevant to the present disclosure are shown in fig. 8, with components such as buses, input/output interfaces, etc. omitted for simplicity. In addition, the electronic device 80 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer readable storage Medium
In addition to the methods and apparatus described above, embodiments of the present disclosure may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform steps in an image processing method according to various embodiments of the present disclosure described in the "exemplary methods" section of the present description.
The computer program product may write program code for performing the operations of embodiments of the present disclosure in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium, having stored thereon computer program instructions, which when executed by a processor, cause the processor to perform steps in an image processing method according to various embodiments of the present disclosure described in the above "exemplary method" section of the present disclosure.
The computer readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present disclosure have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present disclosure are merely examples and not limiting, and these advantages, benefits, effects, etc. are not to be considered as necessarily possessed by the various embodiments of the present disclosure. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, since the disclosure is not necessarily limited to practice with the specific details described.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different manner from other embodiments, so that the same or similar parts between the embodiments are mutually referred to. For system embodiments, the description is relatively simple as it essentially corresponds to method embodiments, and reference should be made to the description of method embodiments for relevant points.
The block diagrams of the devices, apparatuses, equipment, and systems referred to in this disclosure are merely illustrative examples and are not intended to require or imply that the connections, arrangements, or configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, equipment, and systems may be connected, arranged, or configured in any manner. Words such as "including", "comprising", "having", and the like are open-ended words meaning "including but not limited to" and may be used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or", unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to".
The methods and apparatus of the present disclosure may be implemented in a number of ways, for example by software, hardware, firmware, or any combination thereof. The above-described order of the steps of the methods is for illustration only, and the steps of the methods of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Furthermore, in some embodiments, the present disclosure may also be implemented as programs recorded in a recording medium, the programs comprising machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the methods according to the present disclosure.
It is also noted that, in the apparatus, devices, and methods of the present disclosure, the components or steps may be decomposed and/or recombined. Such decompositions and/or recombinations should be regarded as equivalent solutions of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the disclosure to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (12)

1. An image processing method, comprising:
acquiring a plurality of ghost-free images;
generating a plurality of corresponding ghost images according to the plurality of ghost-free images;
training to obtain a ghost processing model by utilizing the plurality of ghost-free images and the plurality of ghost images;
inputting the ghost image to be processed into the ghost processing model to obtain a corresponding ghost-free image output by the ghost processing model;
each ghost-free image in the plurality of ghost-free images is a panoramic image;
the generating a corresponding plurality of ghost images according to the plurality of ghost-free images includes:
acquiring a first spherical image corresponding to the first ghost-free image; wherein the first ghost-free image is any ghost-free image of the plurality of ghost-free images;
generating a ghost on the first spherical image;
and obtaining a ghost image corresponding to the first ghost-free image according to the first spherical image on which the ghost is generated.
2. The method of claim 1, wherein the generating a ghost on the first spherical image comprises:
determining a first region on the first spherical image;
applying a pose disturbance to the first region to determine a second region on the first spherical image;
mapping a partial image located in the second region to the first region to generate a ghost on the first spherical image.
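As a non-limiting illustration of the region and pose-disturbance steps recited in claims 1 and 2 (this sketch is not part of the claims), the panorama can be treated as samples on the unit sphere: the viewing directions of the first region are rotated slightly, and the rotated directions select the second region whose pixels are read back for the first region. The sketch assumes NumPy, an equirectangular HxWx3 panorama, and a disturbance about the vertical axis; every function and variable name here (pix_to_dir, dir_to_pix, rot_z, disturbed_copy) is hypothetical.

    import numpy as np

    def pix_to_dir(u, v, w, h):
        # Equirectangular pixel coordinates -> unit direction vectors on the sphere.
        lon = (u / w) * 2.0 * np.pi - np.pi
        lat = np.pi / 2.0 - (v / h) * np.pi
        return np.stack([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)], axis=-1)

    def dir_to_pix(d, w, h):
        # Unit direction vectors -> equirectangular pixel coordinates.
        lon = np.arctan2(d[..., 1], d[..., 0])
        lat = np.arcsin(np.clip(d[..., 2], -1.0, 1.0))
        u = (((lon + np.pi) / (2.0 * np.pi)) * w).astype(int) % w
        v = np.clip(((np.pi / 2.0 - lat) / np.pi) * h, 0, h - 1).astype(int)
        return u, v

    def rot_z(deg):
        # Small rotation about the vertical axis, standing in for the pose disturbance.
        a = np.deg2rad(deg)
        return np.array([[np.cos(a), -np.sin(a), 0.0],
                         [np.sin(a),  np.cos(a), 0.0],
                         [0.0,        0.0,       1.0]])

    def disturbed_copy(pano, region, disturb_deg=2.0):
        # region = (v0, v1, u0, u1): pixel bounds of the first region on the panorama.
        h, w = pano.shape[:2]
        v0, v1, u0, u1 = region
        vs, us = np.mgrid[v0:v1, u0:u1]
        dirs = pix_to_dir(us.astype(float), vs.astype(float), w, h)
        su, sv = dir_to_pix(dirs @ rot_z(disturb_deg).T, w, h)   # second region
        return pano[sv, su], (vs, us)   # second-region partial image + first-region indices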
3. The method of claim 2, wherein, before the determining the first region on the first spherical image, the method further comprises:
determining a third region on the first spherical image; wherein the first region and the third region have an overlapping region;
the obtaining a ghost image corresponding to the first ghost-free image according to the first spherical image on which the ghost is generated comprises:
for the first spherical image on which the ghost is generated, controlling, according to a preset transparency strategy, two partial images located in the overlapping region to be displayed with their respective transparencies;
and obtaining the ghost image corresponding to the first ghost-free image according to the first spherical image on which the ghost is generated and whose transparency is controlled.
4. The method of claim 3, wherein the obtaining the ghost image corresponding to the first ghost-free image according to the first spherical image on which the ghost is generated and whose transparency is controlled comprises:
performing boundary smoothing processing on the overlapping region in the first spherical image on which the ghost is generated and whose transparency is controlled, to obtain a second spherical image;
and taking the panoramic image corresponding to the second spherical image as the ghost image corresponding to the first ghost-free image.
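Continuing the same illustrative sketch for the transparency control and boundary smoothing of claims 3 and 4 (again not part of the claims, and building on disturbed_copy from the sketch above): the disturbed partial image is overlaid on the first region with a preset transparency, and a weight that fades out near the region border stands in for the boundary smoothing of the overlapping region. The linear feathering is an assumption; the claims do not prescribe a particular smoothing filter.

    import numpy as np

    def overlay_with_smoothing(pano, region, disturb_deg=2.0, alpha=0.5, feather=10):
        # Relies on disturbed_copy() from the previous sketch.
        src, (vs, us) = disturbed_copy(pano, region, disturb_deg)
        v0, v1, u0, u1 = region
        rows, cols = v1 - v0, u1 - u0
        # Transparency-controlled overlay: weight alpha in the interior of the region,
        # fading to zero within `feather` pixels of the region boundary.
        ry, rx = np.mgrid[0:rows, 0:cols]
        edge = np.minimum.reduce([ry, rows - 1 - ry, rx, cols - 1 - rx])
        weight = alpha * np.clip(edge / float(feather), 0.0, 1.0)[..., None]
        out = pano.astype(float).copy()
        out[vs, us] = (1.0 - weight) * out[vs, us] + weight * src
        return out.astype(np.uint8)

    # Hypothetical usage:
    # pano = np.zeros((1024, 2048, 3), np.uint8)
    # ghost_image = overlay_with_smoothing(pano, (300, 600, 800, 1200))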
5. The method of claim 1, wherein, after the generating a corresponding plurality of ghost images according to the plurality of ghost-free images, the method further comprises:
determining a plurality of ghost position indication information corresponding to the plurality of ghost images;
the training to obtain a ghost processing model by utilizing the plurality of ghost-free images and the plurality of ghost images comprises:
training by taking the plurality of ghost images as input data and taking the plurality of ghost-free images and the plurality of ghost position indication information as output data, so as to obtain the ghost processing model;
the inputting the ghost image to be processed into the ghost processing model to obtain a corresponding ghost-free image output by the ghost processing model comprises:
and inputting the ghost image to be processed into the ghost processing model to obtain a corresponding ghost-free image and corresponding ghost position indication information output by the ghost processing model.
6. An image processing apparatus, comprising:
the first acquisition module is used for acquiring a plurality of ghost-free images;
the generation module is used for generating a plurality of corresponding ghost images according to the plurality of ghost-free images;
the second acquisition module is used for training to obtain a ghost processing model by utilizing the plurality of ghost-free images and the plurality of ghost images;
the third acquisition module is used for inputting the ghost image to be processed into the ghost processing model so as to obtain a corresponding ghost-free image output by the ghost processing model;
each ghost-free image in the plurality of ghost-free images is a panoramic image;
the generation module includes:
the first acquisition sub-module is used for acquiring a first spherical image corresponding to the first ghost-free image; wherein the first ghost-free image is any ghost-free image of the plurality of ghost-free images;
a generation sub-module used for generating a ghost on the first spherical image;
and the second acquisition sub-module is used for obtaining a ghost image corresponding to the first ghost-free image according to the first spherical image on which the ghost is generated.
7. The apparatus of claim 6, wherein the generating sub-module comprises:
a first determining unit configured to determine a first region on the first spherical image;
a second determining unit configured to apply a pose disturbance to the first region to determine a second region on the first spherical image;
and the mapping unit is used for mapping a partial image located in the second region to the first region so as to generate a ghost on the first spherical image.
8. The apparatus of claim 7, wherein the apparatus further comprises:
a first determining module for determining a third region on the first spherical image before determining the first region on the first spherical image; wherein the first region and the third region have an overlapping region;
the second acquisition sub-module includes:
the control unit is used for controlling, for the first spherical image on which the ghost is generated and according to a preset transparency strategy, the two partial images located in the overlapping region to be displayed with their respective transparencies;
and the acquisition unit is used for obtaining the ghost image corresponding to the first ghost-free image according to the first spherical image on which the ghost is generated and whose transparency is controlled.
9. The apparatus of claim 8, wherein the acquisition unit comprises:
an acquisition subunit, configured to perform boundary smoothing processing on the overlapping region in the first spherical image on which the ghost is generated and whose transparency is controlled, so as to obtain a second spherical image;
and the determining subunit is used for taking the panoramic image corresponding to the second spherical image as the ghost image corresponding to the first ghost-free image.
10. The apparatus of claim 6, wherein the apparatus further comprises:
a second determining module, configured to determine a plurality of ghost position indication information corresponding to the plurality of ghost images after generating a corresponding plurality of ghost images according to the plurality of ghost-free images;
the second acquisition module is specifically configured to:
training by taking the plurality of ghost images as input data and taking the plurality of ghost-free images and the plurality of ghost position indication information as output data, so as to obtain the ghost processing model;
the third acquisition module is specifically configured to:
and inputting the ghost image to be processed into the ghost processing model to obtain the corresponding ghost-free image and corresponding ghost position indication information output by the ghost processing model.
11. A computer-readable storage medium storing a computer program for executing the image processing method according to any one of claims 1-5.
12. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the image processing method according to any one of claims 1-5.

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010205822.9A CN111383199B (en) 2020-03-23 2020-03-23 Image processing method, device, computer readable storage medium and electronic equipment
US17/210,100 US11620730B2 (en) 2020-03-23 2021-03-23 Method for merging multiple images and post-processing of panorama

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010205822.9A CN111383199B (en) 2020-03-23 2020-03-23 Image processing method, device, computer readable storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111383199A CN111383199A (en) 2020-07-07
CN111383199B true CN111383199B (en) 2023-05-26

Family

ID=71221748

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010205822.9A Active CN111383199B (en) 2020-03-23 2020-03-23 Image processing method, device, computer readable storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111383199B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112734680B (en) * 2020-12-31 2024-07-05 视涯科技股份有限公司 Ghost measurement method and device, readable storage medium and computer equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103827920A (en) * 2011-09-28 2014-05-28 皇家飞利浦有限公司 Object distance determination from image
CN104869376A (en) * 2015-05-18 2015-08-26 中国科学院自动化研究所 Multi-image and multi-pixel level geometric correction method for video fusion
CN107203965A (en) * 2016-03-18 2017-09-26 中国科学院宁波材料技术与工程研究所 A kind of Panorama Mosaic method merged based on multichannel image
CN107820001A (en) * 2016-09-14 2018-03-20 豪威科技股份有限公司 The array camera image removed using the ghost image of feature based is combined
CN110020578A (en) * 2018-01-10 2019-07-16 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN110062160A (en) * 2019-04-09 2019-07-26 Oppo广东移动通信有限公司 Image processing method and device
CN110378982A (en) * 2019-07-23 2019-10-25 上海联影医疗科技有限公司 Reconstruction image processing method, device, equipment and storage medium
CN110781901A (en) * 2019-10-29 2020-02-11 湖北工业大学 Instrument ghost character recognition method based on BP neural network prediction threshold

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9589210B1 (en) * 2015-08-26 2017-03-07 Digitalglobe, Inc. Broad area geospatial object detection using autogenerated deep learning models

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200921

Address after: 100085 Floor 102-1, Building No. 35, Xierqi West Road, Haidian District, Beijing

Applicant after: Seashell Housing (Beijing) Technology Co.,Ltd.

Address before: 300457 Unit 5, Room 112, Floor 1, Office Building C, Nangang Industrial Zone, Binhai New Area Economic and Technological Development Zone, Tianjin

Applicant before: BEIKE TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20220322

Address after: 100085 8th floor, building 1, Hongyuan Shouzhu building, Shangdi 6th Street, Haidian District, Beijing

Applicant after: As you can see (Beijing) Technology Co.,Ltd.

Address before: 100085 Floor 101, 102-1, Building No. 35, Yard No. 2, Xierqi West Road, Haidian District, Beijing

Applicant before: Seashell Housing (Beijing) Technology Co.,Ltd.

GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20200707

Assignee: Beijing Intellectual Property Management Co.,Ltd.

Assignor: As you can see (Beijing) Technology Co.,Ltd.

Contract record no.: X2023110000092

Denomination of invention: Image processing methods, devices, computer-readable storage media, and electronic devices

Granted publication date: 20230526

License type: Common License

Record date: 20230818
