CN112989092A - Image processing method and related device - Google Patents
Image processing method and related device
- Publication number
- CN112989092A (application CN202011446454.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- definition
- cloud server
- terminal
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/587—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- Library & Information Science (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Studio Devices (AREA)
- Processing Or Creating Images (AREA)
Abstract
The application discloses an image processing method in the field of computer vision. A cloud server divides a low-definition, low-zoom-magnification image sent by a terminal into a plurality of image blocks, with the number of blocks determined by the zoom magnification of the high-definition images in a high-definition image library. For each image block, the cloud server retrieves one or more matching high-definition reference images from the library and then improves the block's quality through an image enhancement network based on those reference images, obtaining a plurality of high-definition image blocks. Finally, the cloud server stitches the high-definition image blocks into a super-resolution image with higher resolution and higher definition and returns it to the terminal. In this way, the processing capability of the cloud server allows a photo or video captured by the terminal to retain a wide field of view while gaining the rich detail, texture, and definition normally available only at high zoom magnification.
Description
The present application claims priority to Chinese patent application No. 201911285717.4, filed with the Chinese Patent Office on December 13, 2019 and entitled "Image processing method and related apparatus", and to Chinese patent application No. 202010357077.X, filed on April 29, 2020 and entitled "Image processing method and apparatus", both of which are incorporated herein by reference in their entirety.
Technical Field
The present application relates to the field of computer vision, and in particular, to an image processing method and related apparatus.
Background
Photography has become one of the most important features of modern smartphones. Both imaging hardware and imaging algorithms have evolved substantially, repeatedly reshaping mobile-phone photography and improving the user's shooting experience. After these rounds of change, however, device cost keeps rising while imaging quality is approaching a bottleneck.
In an existing scheme, to improve the quality of a picture taken by a terminal, the terminal generates image data through an image signal processor (ISP) after the camera captures the picture, and then processes and optimizes the image data with a preset algorithm to improve the image effect. However, because the terminal applies a fixed algorithm, all processed images share the same style, and the limited capability of the terminal keeps the processed image quality low. In practice, different scenes call for different presentation styles: historic sites such as the Forbidden City in Beijing suit a classical style, whereas urban scenes such as the Bund in Shanghai suit a metropolitan style. The same scene also presents differently in different environments; the Forbidden City in summer looks different from the Forbidden City in winter. Each style implies different image parameters, such as sharpness, color richness, contrast, white balance, detail, and texture. In the existing scheme, the means available to the terminal for improving picture quality are very limited, and the imaging quality is not high.
Disclosure of Invention
The application provides an image processing method and a related apparatus that combine a cloud server with a terminal, making full use of high-quality picture data and algorithms to empower the terminal's photographing and imaging system. The terminal can thereby break through its own imaging-quality limit, so that an ordinary camera achieves an effect comparable to a professional single-lens reflex (SLR) camera. This reduces the cost of the mobile phone, greatly improves the quality of pictures taken by the terminal, and improves the user's photographing experience.
In a first aspect, the present application provides an image processing method, including: a cloud server obtains a low-definition image sent by a terminal and shooting information recorded when the low-definition image was shot, where the shooting information includes the zoom magnification of the low-definition image; the cloud server identifies semantic information of the low-definition image; the cloud server determines a number N of image blocks based on the zoom magnification of the high-definition images in a high-definition image library and the zoom magnification of the low-definition image, where N is a positive integer; the cloud server divides the low-definition image into N image blocks based on the number N, and marks the type of image content in each of the N image blocks based on the semantic information of the low-definition image; the cloud server retrieves, from the high-definition image library, T high-definition retrieval images matching the shot content of a first type of image block, where T is a positive integer; the cloud server repairs details in the first type of image block based on the T high-definition retrieval images to obtain a first type of high-definition image block, where the resolution of the first type of high-definition image block is greater than that of the first type of image block; and the cloud server replaces the first type of image block in the low-definition image with the first type of high-definition image block to obtain a high-definition output image, and sends the high-definition output image to the terminal.
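For illustration only, the block-division step above can be sketched in Python as follows. This is a minimal sketch: the rule that derives the block count from the ratio of the two zoom magnifications, and the function and variable names, are assumptions made for the example rather than the formula claimed in this application.

```python
import math
import numpy as np

def split_into_blocks(low_def_img: np.ndarray, lib_zoom: float, shot_zoom: float):
    """Divide a low-definition image into a grid of image blocks.

    Assumption for this sketch: the per-axis block count is the ratio of the
    high-definition library zoom magnification to the capture zoom
    magnification, rounded up; N is the square of that count.
    """
    per_axis = max(1, math.ceil(lib_zoom / shot_zoom))
    h, w = low_def_img.shape[:2]
    tile_h, tile_w = math.ceil(h / per_axis), math.ceil(w / per_axis)
    blocks = []
    for r in range(per_axis):
        for c in range(per_axis):
            tile = low_def_img[r * tile_h:(r + 1) * tile_h,
                               c * tile_w:(c + 1) * tile_w]
            blocks.append(((r, c), tile))   # keep the grid position for later stitching
    return blocks                            # N = per_axis ** 2 image blocks
```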
In one possible implementation, the shooting information of the low-definition image further includes one or more of the following: geographical position information, weather information, season information, time information, shooting parameters, and shooting posture information.
In a possible implementation manner, the retrieving, by the cloud server, T high-definition retrieval images matched with the shot content of the first type of image block from the high-definition image library specifically includes: the cloud server determines at least one first matching image from the high-definition image library based on the shooting information of the low-definition image, wherein the matching degree between the shooting information of the first matching image and the shooting information of the low-definition image is greater than a first threshold value, and the resolution of the first matching image is greater than that of the low-definition image; the cloud server respectively acquires a first feature corresponding to each first matching image in the at least one first matching image and a first feature corresponding to the first type of image block, wherein one first matching image corresponds to one first feature; the cloud server determines T high-definition retrieval images with shooting contents most similar to those of the first type image blocks from the at least one first matching image based on the similarity between the first features corresponding to each of the at least one first matching image and the first features corresponding to the first image blocks.
In a possible implementation manner, the determining, by the cloud server, of T high-definition retrieval images whose shot content is most similar to that of the first type of image block from the at least one first matching image, based on the similarity between the first feature corresponding to each of the at least one first matching image and the first feature corresponding to the first type of image block, specifically includes: the cloud server determines at least one second matching image most similar to the first type of image block in terms of the first feature, based on the similarity between the first feature corresponding to each first matching image and the first feature corresponding to the first type of image block; the cloud server separately acquires a second feature corresponding to each of the at least one second matching image and a second feature corresponding to the first type of image block, where one second matching image corresponds to one second feature; and the cloud server determines T high-definition retrieval images most similar to the first type of image block in terms of the second feature, based on the similarity between the second feature corresponding to each second matching image and the second feature corresponding to the first type of image block; where the second feature is finer than the first feature.
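A minimal sketch of this two-stage retrieval, assuming the first and second features are plain vectors compared by cosine similarity (the feature extractors themselves are not shown, and the default values of M and T are placeholders):

```python
import numpy as np

def top_k(query: np.ndarray, gallery: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k gallery rows most similar to the query (cosine similarity)."""
    q = query / (np.linalg.norm(query) + 1e-8)
    g = gallery / (np.linalg.norm(gallery, axis=1, keepdims=True) + 1e-8)
    return np.argsort(-(g @ q))[:k]

def coarse_to_fine_retrieval(block_first_feat, block_second_feat,
                             first_feats, second_feats, m=15, t=3):
    """Shortlist M candidates on the coarse first features, then re-rank the
    shortlist on the finer second features and keep the T best matches."""
    shortlist = top_k(block_first_feat, first_feats, m)          # second matching images
    best = top_k(block_second_feat, second_feats[shortlist], t)  # re-rank the shortlist
    return shortlist[best]                                       # indices of the T retrieval images
```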
In a possible implementation manner, the repairing, by the cloud server, details in the first type of image block by using the T high-definition search image to obtain a first type of high-definition image block specifically includes: the cloud server acquires image space coordinate transformation parameters of the first type of image blocks and each high-definition retrieval image in the T high-definition retrieval images; the cloud server adjusts the feature points in each high-definition retrieval image, which are matched with the image blocks of the first type, to the same positions as those in the image blocks of the first type based on the image space coordinate transformation parameters of the matched feature point pairs in the image blocks of the first type and each high-definition retrieval image in the T high-definition retrieval images to obtain T high-definition registration images; and the cloud server repairs the details of the first type of image block based on the T high-definition registration images to obtain the first type of high-definition image block.
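The registration step can be illustrated with an ordinary feature-point homography, for example with OpenCV. The use of ORB features, a RANSAC-estimated homography, and BGR input images here is an assumption made for the sketch, not the specific spatial coordinate transformation prescribed by this application:

```python
import cv2
import numpy as np

def register_hd_to_block(hd_image: np.ndarray, lowdef_block: np.ndarray) -> np.ndarray:
    """Warp a retrieved high-definition image onto the pixel grid of the
    low-definition image block using matched feature point pairs."""
    gray_hd = cv2.cvtColor(hd_image, cv2.COLOR_BGR2GRAY)
    gray_lo = cv2.cvtColor(lowdef_block, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    kp_hd, des_hd = orb.detectAndCompute(gray_hd, None)
    kp_lo, des_lo = orb.detectAndCompute(gray_lo, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_hd, des_lo), key=lambda m: m.distance)[:200]
    src = np.float32([kp_hd[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_lo[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # spatial coordinate transformation
    h, w = lowdef_block.shape[:2]
    return cv2.warpPerspective(hd_image, H, (w, h))       # high-definition registration image
```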
In a possible implementation manner, the repairing, by the cloud server, details of the first type of image block based on the T high-definition registration images to obtain a first type of high-definition image block specifically includes: the cloud server determines L high-definition reference images with the most similar image space positions of the matched characteristic point pairs from the T high-definition registration images based on the image space coordinate transformation parameters of the matched characteristic point pairs in the first type image blocks and each high-definition search image in the T high-definition search images, wherein L is a positive integer; and the cloud server repairs the details of the first type of image block based on the L high-definition reference images to obtain the first type of high-definition image block.
In a possible implementation manner, the replacing, by the cloud server, the first type of image block in the low-definition image based on the first type of high-definition image block to obtain a high-definition output image specifically includes: when the low-definition image only comprises a first type of image blocks, the cloud server splices the first type of high-definition image blocks into the high-definition output image; when the low-definition image comprises a first type of image block and a second type of image block, the cloud server splices the first type of high-definition image block and the second type of high-definition image block into the high-definition output image.
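A minimal sketch of the splicing step above, assuming every repaired block has been upscaled to the same output tile size and that the grid positions recorded when the low-definition image was divided are reused (names and the uniform-tile assumption are illustrative only):

```python
import numpy as np

def assemble_output(hd_blocks, per_axis: int, out_tile_h: int, out_tile_w: int,
                    channels: int = 3) -> np.ndarray:
    """Place the repaired high-definition image blocks back on their grid
    positions to form the high-definition output image.

    hd_blocks: iterable of ((row, col), block) pairs, one per image block.
    """
    out = np.zeros((per_axis * out_tile_h, per_axis * out_tile_w, channels), dtype=np.uint8)
    for (r, c), block in hd_blocks:
        h, w = block.shape[:2]
        out[r * out_tile_h:r * out_tile_h + h,
            c * out_tile_w:c * out_tile_w + w] = block
    return out
```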
In one possible implementation, before the cloud server divides the low-definition image into N image blocks based on the number N of image blocks, the method further includes: the cloud server determines, according to the shooting information of the low-definition image, a first image processing model corresponding to the environment information; and the cloud server processes the low-definition image through the first image processing model. The dividing, by the cloud server, of the low-definition image into N image blocks based on the number N of image blocks specifically includes: the cloud server divides the low-definition image processed by the first image processing model into N image blocks based on the number N of image blocks.
In a possible implementation manner, the processing, by the cloud server, of the low-definition image through the first image processing model specifically includes: the cloud server processes the detail definition, dynamic range, contrast, noise, and/or color of the low-definition image through the first image processing model.
The application provides an image processing method in which a low-definition image shot at a low zoom magnification (that is, with a large field of view) on the terminal is sent to a cloud server, so that the high-definition image library on the cloud server can be leveraged. The cloud server divides the low-definition, low-zoom-magnification image into a plurality of image blocks based on the zoom magnification of the high-definition images in the high-definition image library. For each image block, the cloud server retrieves one or more matching high-definition reference images from the library and then improves the block's quality through an image enhancement network based on those reference images, obtaining a plurality of high-definition image blocks. Finally, the cloud server stitches the high-definition image blocks into a super-resolution image with higher resolution and higher definition and returns it to the terminal. In this way, the processing capability of the cloud server allows a photo or video captured by the terminal to retain a wide field of view while gaining the rich detail, texture, and definition normally available only at high zoom magnification.
In a second aspect, the present application provides an image processing method applied to an image processing system, where the image processing system includes a terminal and a cloud server, the method including: the terminal receives a first input from a user. In response to the first input, the terminal acquires a first image and first environment information when the first image is captured. And the terminal sends the first environment information to the cloud server. The cloud server determines a first image processing model corresponding to the environment information according to the first environment information; the first image processing model is used to improve the quality of the image. The cloud server sends the first image processing model to the terminal. And the terminal processes the first image through the first image processing model to obtain a second image.
The image processing method provided in this application combines the processing capabilities of the terminal and the cloud server, making full use of the high-quality image data and image processing algorithms on the cloud server to empower the terminal's photographing and imaging system. From the image uploaded by the terminal and its environment information (such as geographical position, weather, time, shooting parameters, and shooting posture), the cloud server can match an image processing algorithm suited to the image as well as high-quality image data that shows the same shot content under a similar shooting environment. With this image processing algorithm and high-quality image data, the quality and effect of the uploaded image can be enhanced, so pictures shot in different environments show different styles and effects. The terminal can therefore break through its own imaging-quality limit, allowing an ordinary camera on the terminal to achieve the effect of professional single-lens reflex equipment, reducing the cost of the mobile phone and greatly improving the terminal's shooting quality and the user's shooting experience.
In one possible implementation, before the terminal receives the first input of the user, the method further includes: the terminal displays a shooting interface, and the shooting interface is used for displaying a preview image; the capture interface includes a capture control. The first input is the operation of a user for the shooting control, and the first image is the preview image in the shooting interface. Therefore, images shot by the terminal can be optimized and enhanced by means of the processing capacity of the cloud server when the user shoots, and the imaging quality of the terminal when the user shoots is improved.
In one possible implementation, before the terminal receives the first input of the user, the method further includes: the terminal displays a cloud enhancement interface in the gallery application, where the cloud enhancement interface displays a thumbnail corresponding to the first image and an upload control, and the first input is an operation of the user on the upload control. The acquiring, by the terminal, of the first image and the first environment information of the first image specifically includes: the terminal retrieves the first image from local storage, and acquires the first environment information from exchangeable image file format (Exif) data of the first image. In this way, after a user selects a picture to be optimized in the gallery, the picture can be optimized and enhanced by means of the processing capability of the cloud server, improving the quality of pictures stored on the terminal.
In one possible implementation, the method further includes: the cloud server sends the first image to the cloud server. And the cloud server determines a first high-definition image according to the first environment information. The similarity between the second environment information of the first high-definition picture and the first environment information is within a first preset range, and the similarity between the shot content in the first high-definition picture and the shot content in the first picture is within a second preset range. And the cloud server carries out image registration on the first image and the first high-definition image to obtain a second high-definition image. And the cloud server sends the second high-definition image to the terminal. And the terminal performs image fusion on the second high-definition image and the second image to obtain a third image, wherein the definition of the third image is higher than that of the second image. Therefore, the first image can be optimized by using the specific image processing model according to the specific shooting environment, and the quality of the image is improved.
In a possible implementation manner, the determining, by the cloud server, a first high-definition image according to the first environment information specifically includes: and the cloud server retrieves the first high-definition image from a high-definition image library according to the first environment information. The high-definition image library stores a plurality of high-definition images and second environment information corresponding to the plurality of high-definition images.
In a possible implementation manner, the determining, by the cloud server, a first high-definition image according to the first environment information specifically includes: and the cloud server renders the first high-definition image from the three-dimensional world model through a rendering engine according to the first environment information.
In a possible implementation manner, the image fusion between the second high-definition image and the second image by the terminal to obtain a third image specifically includes: the terminal extracts the characteristic information of the second high-definition image and fuses the characteristic information of the second high-definition image into the second image to obtain a third image; wherein the feature information includes high definition detail, high frequency information.
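As an illustration of this fusion step, the sketch below adds the high-frequency detail of the registered high-definition image to the base image. The Gaussian detail-layer extraction and the fixed blending weight are assumptions made for the example, not the fusion procedure actually used by the terminal:

```python
import cv2
import numpy as np

def fuse_high_frequency(registered_hd: np.ndarray, base: np.ndarray,
                        sigma: float = 2.0, weight: float = 0.8) -> np.ndarray:
    """Add the high-frequency detail of the registered high-definition image
    to the base image (both images must already be the same size)."""
    hd = registered_hd.astype(np.float32)
    detail = hd - cv2.GaussianBlur(hd, (0, 0), sigma)     # high-frequency information
    fused = base.astype(np.float32) + weight * detail
    return np.clip(fused, 0, 255).astype(np.uint8)
```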
In a possible implementation manner, the terminal processes the quality of the first image through the first image processing model to obtain a second image, and specifically includes: the terminal processes detail definition, dynamic range, contrast, noise and color of the first image through the first image processing model to obtain the second image.
In a possible implementation manner, after the terminal obtains the second image, the method further includes: the terminal displays the second image.
In a possible implementation manner, after the terminal obtains the third image, the method further includes: the terminal displays the third image.
In one possible implementation, the first environment information includes one or more of: geographical position information, weather information, season information, time information, photographing posture information, photographing parameters, and the like.
In a third aspect, the present application provides an image processing method applied to an image processing system, where the image processing system includes a terminal and a cloud server. The method comprises the following steps: the terminal receives a first input from a user. In response to the first input, the terminal acquires a first image and first environment information of the first image. The terminal sends the first image and the first environment information to the cloud server. And the cloud server determines a first high-definition image according to the first environment information and the first image. The similarity between the environment information of the first high-definition image and the first environment information is within a first preset range, and the similarity between the shot content in the first high-definition image and the shot content in the first image is within a second preset range. And the cloud server performs image registration on the first high-definition image according to the first image to obtain a second high-definition image. And the cloud server sends the second high-definition image to the terminal. And the terminal performs image fusion on the second high-definition image and the first image to obtain a third image, wherein the definition of the third image is higher than that of the first image.
The image processing method provided in this application combines the processing capabilities of the terminal and the cloud server, making full use of the high-quality image data and image processing algorithms on the cloud server to empower the terminal's photographing and imaging system. From the image uploaded by the terminal and its environment information (such as geographical position, weather, time, shooting parameters, and shooting posture), the cloud server can match an image processing algorithm suited to the image as well as high-quality image data that shows the same shot content under a similar shooting environment. With this image processing algorithm and high-quality image data, the quality and effect of the uploaded image can be enhanced, so pictures shot in different environments show different styles and effects. The terminal can therefore break through its own imaging-quality limit, allowing an ordinary camera on the terminal to achieve the effect of professional single-lens reflex equipment, reducing the cost of the mobile phone and greatly improving the terminal's shooting quality and the user's shooting experience.
In one possible implementation, before the terminal receives the first input of the user, the method further includes: the terminal displays a shooting interface, and the shooting interface is used for displaying a preview image; the capture interface includes a capture control. The first input is the operation of a user for the shooting control, and the first image is the preview image in the shooting interface. Therefore, images shot by the terminal can be optimized and enhanced by means of the processing capacity of the cloud server when the user shoots, and the imaging quality of the terminal when the user shoots is improved.
In one possible implementation, before the terminal receives the first input of the user, the method further includes: the terminal displays a cloud enhancement interface in the gallery application, where the cloud enhancement interface displays a thumbnail corresponding to the first image and an upload control, and the first input is an operation of the user on the upload control. The acquiring, by the terminal, of the first image and the first environment information of the first image specifically includes: the terminal retrieves the first image from local storage, and acquires the first environment information from exchangeable image file format (Exif) data of the first image. In this way, after a user selects a picture to be optimized in the gallery, the picture can be optimized and enhanced by means of the processing capability of the cloud server, improving the quality of pictures stored on the terminal.
In one possible implementation, the method further includes: before the terminal performs image fusion on the second high-definition image and the first image to obtain a third image, the method further includes: the cloud server determines a first image processing model corresponding to the first environment information according to the first environment information. The first image processing model is used to improve the quality of the image. And the cloud server sends the first image processing model to the terminal. The terminal processes the first image through the first image processing model. The terminal performs image fusion on the second high-definition image and the first image to obtain a third image, and the method specifically includes: and the terminal performs image fusion on the second high-definition image and the first image processed by the first image processing model to obtain the third image. Therefore, the first image can be optimized through high-definition pictures in the same shooting environment on the cloud server, and the detail definition of the image is improved.
In a possible implementation manner, the determining, by the cloud server, a first high-definition image according to the first environment information specifically includes: and the cloud server retrieves the first high-definition image from a high-definition image library according to the first environment information. The high-definition image library stores a plurality of high-definition images and second environment information corresponding to the plurality of high-definition images.
In a possible implementation manner, the determining, by the cloud server, a first high-definition image according to the first environment information specifically includes: and the cloud server renders the first high-definition image from the three-dimensional world model through a rendering engine according to the first environment information.
In a possible implementation manner, the image fusion between the second high-definition image and the second image by the terminal to obtain a third image specifically includes: and the terminal extracts the characteristic information of the second high-definition image and fuses the characteristic information into the second image to obtain the third image. Wherein the feature information includes high-definition details.
In a possible implementation manner, the processing, by the terminal, the first image through the first image processing model specifically includes: the terminal processes detail definition, dynamic range, contrast and color of the first image through the first image processing model.
In a possible implementation manner, after the terminal obtains the third image, the method further includes: the terminal displays the third image.
In one possible implementation, the first environment information includes one or more of: geographical position information, weather information, season information, time information, photographing posture information, photographing parameters, and the like.
In a fourth aspect, the present application provides an image processing method applied to an image processing system, where the image processing system includes a terminal and a cloud server. The method comprises the following steps: the terminal receives a first input from a user. In response to the first input, the terminal acquires a first image and first environment information of the first image. The terminal sends the first image and the first environment information to the cloud server. The cloud server determines a first image processing model corresponding to the first environment information according to the first environment information. The first image processing model is used to improve the quality of the image. And the cloud server processes the quality of the first image through the first image processing model to obtain a second image. And the cloud server sends the second image to the terminal.
In one possible implementation, the method further includes: the cloud server determines a first high-definition image according to the first environment information; the second environment information of the first high-definition picture and the first environment information are in a second preset range, and the similarity between the shooting content in the first high-definition picture and the shooting content in the first picture is in the second preset range. And the cloud server performs image registration on the first high-definition image according to the first image to obtain a second high-definition image. And the cloud server performs image fusion on the second high-definition image and the second image to obtain a third image. And the cloud server sends the third image to the terminal.
In a fifth aspect, the present application provides an image processing method applied to an image processing system, where the image processing system includes a terminal and a cloud server. The method comprises the following steps: the terminal receives a first input from a user. In response to the first input, the terminal acquires a first image and first environment information of the first image. The terminal sends the first image and the first environment information to the cloud server. And the cloud server determines a first high-definition image according to the first environment information. The similarity between the second environment information of the first high-definition image and the first environment information is within a first preset range, and the similarity between the shot content in the first high-definition image and the shot content in the first image is within a second preset range. And the cloud server performs image registration on the first high-definition image according to the first image to obtain a second high-definition image. And the cloud server performs image fusion on the second high-definition image and the first image to obtain a third image, wherein the definition of the third image is higher than that of the first image. And the cloud server sends the third image to the terminal.
In a possible implementation manner, before the server performs image fusion on the second high-definition image and the first image to obtain a third image, the method further includes: the cloud server determines a first image processing model corresponding to the first environment information according to the first environment information. The first image processing model is used to improve the quality of the image. The cloud server processes the first image through the first image processing model.
In a sixth aspect, an embodiment of the present application provides an image processing method, where the method may be executed by a terminal or a cloud server, and the image processing method may include: an image A shot by a terminal and shooting information of the image A are obtained. Optionally, if the method is executed by a cloud server, the terminal may send the captured image a and the capturing information of the image a to the cloud server, and the cloud server receives the image a and the capturing information of the image a.
Further, at least one image B is determined according to the shooting information of the image a, wherein the matching degree between the shooting information of each image B in the at least one image B and the shooting information of the image a is larger than a first threshold value. Each image B of the at least one image B has a resolution greater than the resolution of image a. For example, the at least one image B is a high definition image, while image a is a low definition image.
Alternatively, the terminal may send the shooting information of the image a to a cloud server, the cloud server selects the at least one image B from a high-definition image library and sends the at least one image B to the terminal, and the terminal determines a reference image 1 with the shooting content most similar to that of the image a from the at least one image B.
Alternatively, the terminal may transmit the image a and the shooting information of the image a to a cloud server, and the cloud server selects the at least one image B from a high-definition image library and determines the reference image 1 with the shooting content most similar to that of the image a from the at least one image B.
By implementing the embodiment of the application, at least one image B is determined according to the shooting information of the image A, and then the reference image 1 with the shooting content most similar to that of the image A is determined from the at least one image B, namely, the searching range can be reduced through the shooting information of the image A, so that the searching efficiency of the image is improved.
In combination with the sixth aspect, in some embodiments, the shooting information may include one or more of the following: shooting magnification, shooting geographical location information, shooting weather condition information, and the like. The shooting magnification may be a zoom magnification. The shooting geographical location information may refer to the geographical location of the photographing apparatus when image A or image B was shot, and may be given by Global Positioning System (GPS) information. The shooting weather condition information may refer to the weather conditions when image A or image B was shot, for example, a sunny day, a cloudy day, or a rainy day.
Optionally, when the shooting information includes the shooting geographical location information, the matching degree between the shooting geographical location information of each image B in the at least one image B and the shooting geographical location information of the image a is greater than a first threshold, and through the matching of the geographical location information, the consistency between the shooting geographical locations of the image a and the at least one image B can be ensured. Finally, a reference image 1 having the shooting content most similar to that of the image a is determined from the at least one image B, and the reference image 1 is highly similar to the image a in not only the shooting content but also the shooting angle (i.e., shooting geographical position information) and the image a.
Optionally, when the shooting information includes the shooting magnification, the matching degree between the shooting magnification of each image B in the at least one image B and the shooting magnification of the image a is greater than the first threshold, so that it can be ensured that the difference between the magnification of the image a and the magnification of the at least one image B is not large.
Optionally, when the shooting information includes the shooting weather condition information, the matching degree between the shooting weather condition information of each image B in the at least one image B and the shooting weather condition information of the image a is greater than a first threshold, for example, all the images are shot on a sunny day or all the images are shot on a rainy day, so that consistency of transparency of the images can be ensured.
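For illustration only, one way to turn these shooting-information comparisons into a single matching degree is sketched below. The equal weighting of the three terms, the 1 km distance scale, and the dictionary keys are assumptions made for the example; the application does not prescribe a particular formula.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS coordinates."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def shooting_info_match(info_a: dict, info_b: dict) -> float:
    """Combine GPS proximity, zoom-magnification closeness and weather agreement
    into a matching degree in [0, 1] that can be compared with the first threshold."""
    geo = max(0.0, 1.0 - haversine_km(*info_a["gps"], *info_b["gps"]) / 1.0)
    zoom = 1.0 - min(1.0, abs(info_a["zoom"] - info_b["zoom"]) /
                     max(info_a["zoom"], info_b["zoom"]))
    weather = 1.0 if info_a["weather"] == info_b["weather"] else 0.0
    return (geo + zoom + weather) / 3.0
```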
With reference to the sixth aspect, in some embodiments, the detail restoration process is performed on the image a according to the reference image 1, and an image C is obtained. Wherein the resolution of the obtained image C is greater than that of the image a, i.e. the definition of the image C is greater than that of the image a.
Optionally, the reference image 1 and the image a may be input to a deep learning neural network, the high-frequency information of the reference image 1 is extracted through the deep learning neural network, and the image a is subjected to detail recovery through the high-frequency information, so as to obtain an image C with higher definition.
With reference to the sixth aspect, in some embodiments, determining the reference image 1 whose shooting content is most similar to that of the image a from the at least one image B may include: a first feature corresponding to each image B of the at least one image B and a first feature corresponding to the image a may be obtained, respectively, where each image B corresponds to one first feature.
The first feature corresponding to each image B in the at least one image B may be extracted through a search network, and the first feature corresponding to the image a may also be extracted through the search network. The first feature includes, but is not limited to, a CNN feature and, accordingly, the retrieval network may be a CNN network.
And calculating the similarity between the first feature corresponding to each image B in the at least one image B and the first feature corresponding to the image A, and determining M images B with the shooting contents most similar to the shooting contents of the image A from the at least one image B, wherein M is an integer greater than or equal to 1. For example, M may be 15, that is, 15 images B most similar to the photographic content of the image a are selected from the at least one image B.
The similarity between the first feature corresponding to each image B and the first feature corresponding to the image a may be sorted, so as to obtain M images B with the greatest similarity with the image a.
Further, a reference image 1 whose shot content is most similar to that of the image a is determined from the M images B, and the reference image 1 can be obtained by matching with more detailed features, for example.
By implementing the embodiment, the reference image 1 most similar to the shot content of the image a is obtained by gradually reducing the search range, so that the search efficiency is improved, and the consistency between the searched reference image 1 and the shot content of the image a can be ensured.
With reference to the sixth aspect, in some embodiments, determining a reference image 1 whose shot content is most similar to that of the image a from among the M images B includes: and respectively acquiring a second feature corresponding to each image B in the M images B and a second feature corresponding to the image A, wherein each image B corresponds to one second feature.
The second feature corresponding to each image B in the M images B may be extracted through a reordering network, and the second feature corresponding to the image a may also be extracted through the reordering network. The reordering network may be a more complex feature extraction network than the search network, and the second feature extracted by the reordering network may be a finer image feature than the first feature extracted by the search network, for example, the second feature represents a more detailed feature of the image than the first feature.
And calculating the similarity between the second feature corresponding to each image B in the M images B and the second feature corresponding to the image A, and determining the image B with the shooting content most similar to that of the image A from the M images B as a reference image 1.
The similarity between the second feature corresponding to each image B and the second feature corresponding to the image a may be sorted, so that the image B with the highest similarity is taken as the reference image 1.
It is understood that the ranking result of ranking the similarity between the first feature corresponding to each image B and the first feature corresponding to image a may be different from the ranking result of ranking the similarity between the second feature corresponding to each image B and the second feature corresponding to image a.
By implementing the embodiment, the retrieval network is simpler than the reordering network, so that the retrieval efficiency is higher, in the embodiment of the application, the first feature extracted by the retrieval network is firstly used for matching, M images B which are approximately similar to the image A are obtained from at least one image B, then the second feature extracted by the reordering network is used for matching, and the most similar reference image 1 is obtained from the M images B, so that the retrieval accuracy can be improved on the premise of ensuring the retrieval efficiency.
With reference to the sixth aspect, in some embodiments, before performing the detail restoration process on the image a according to the reference image 1, the method further includes: the ratio of the shot content of the reference image 1 to the shot content of the image a is determined, wherein the ratio can reflect whether the shot content contained in the retrieved reference image 1 covers the shot content contained in the full image a.
If the proportion of the shot content of the reference image 1 in the shot content of image A is smaller than the second threshold, the shot content matching the shot content of the reference image 1 is removed from the shot content of image A to obtain the removed image A. The second threshold may be set according to actual needs; for example, it may be set to 80%. If the ratio of the shot content of the reference image 1 to the shot content of image A is less than 80%, the shot content contained in the reference image 1 does not cover all of the shot content contained in image A, and a further search may be performed for the part of image A that is not covered by the reference image 1, that is, the removed image A.
Further, a reference image 2 whose shot content is most similar to that of the removed image a is determined from at least one image B.
The method for determining the reference image 2 with the shooting content most similar to the shooting content of the removed image a from the at least one image B may refer to the method for determining the reference image 1 in the foregoing embodiment, and is not described herein again.
After obtaining the reference image 1 and the reference image 2, the detail restoration process may be performed on the image a according to the reference image 1 and the reference image 2 to obtain the image C.
It is understood that, after the reference image 1 and the reference image 2 are obtained, the ratio of the captured content of the reference image 1 and the captured content of the reference image 2 to the captured content of the image a may be further determined, and if the ratio is still smaller than the second threshold, the captured content matching the captured content of the reference image 1 and the captured content of the reference image 2 may be further removed from the captured content of the image a, the removed image a is obtained, and the reference image 3 most similar to the captured content of the removed image a is further retrieved from at least one image B. And circulating until the ratio of the obtained shooting content of the at least one reference image to the shooting content of the image A is greater than or equal to a second threshold, and then performing detail recovery processing on the image A according to the obtained at least one reference image to obtain an image C with higher resolution.
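A minimal sketch of this coverage-driven retrieval loop is shown below. How coverage is measured, how matched content is removed, and how the next reference is retrieved are abstracted behind callback functions, which, together with the default threshold of 0.8 (80%), are assumptions made for the example:

```python
def retrieve_until_covered(image_a, retrieve_reference, coverage_of, remove_matched,
                           second_threshold: float = 0.8):
    """Keep retrieving reference images until their shot content covers at
    least `second_threshold` of the shot content of image A."""
    references = []
    remaining = image_a
    while True:
        reference = retrieve_reference(remaining)         # most similar image B for the leftover content
        if reference is None:
            break
        references.append(reference)
        if coverage_of(references, image_a) >= second_threshold:
            break
        remaining = remove_matched(remaining, reference)   # the "removed" image A
    return references
```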
In a seventh aspect, an embodiment of the present application provides an image processing apparatus, which includes various modules or units for executing the method of the fifth aspect.
In an eighth aspect, an embodiment of the present application provides an image processing system, where the system includes a terminal and a cloud server. The terminal sends a shot image A and shooting information of the image A to the cloud server. The cloud server receives the image A and the shooting information of the image A, and determines at least one image B according to the shooting information of the image A, where the matching degree between the shooting information of each image B and the shooting information of the image A is greater than a first threshold, and the resolution of each image B is greater than that of the image A. The cloud server determines, from the at least one image B, a reference image 1 whose shot content is most similar to that of the image A.
Optionally, the cloud server performs detail restoration processing on the image a according to the reference image 1 to obtain an image C, where the resolution of the image C is greater than that of the image a, and sends the image C to the terminal.
The terminal receives the image C and displays the image C.
In a ninth aspect, an embodiment of the present application provides an image processing system, where the system includes a terminal and a cloud server. The terminal sends a shot image A and shooting information of the image A to the cloud server. The cloud server receives the image A and the shooting information of the image A, and determines at least one image B according to the shooting information of the image A, where the matching degree between the shooting information of each image B and the shooting information of the image A is greater than a first threshold, and the resolution of each image B is greater than that of the image A. The cloud server sends the at least one image B to the terminal device.
The terminal device determines a reference image 1 whose shot content is most similar to that of the image a from the at least one image B.
Optionally, the terminal device performs detail restoration processing on the image a according to the reference image 1 to obtain an image C, and outputs the image C, where a resolution of the image C is greater than a resolution of the image a.
In a tenth aspect, the present application provides a terminal, comprising: a camera, a transceiver, one or more processors, and one or more memories. The one or more processors are coupled with the camera, the one or more memories and the transceiver, the one or more memories are used for storing computer program code, the computer program code comprises computer instructions, when the one or more processors execute the computer instructions, the terminal is caused to execute the image processing method in any possible implementation manner of any one of the aspects.
In an eleventh aspect, the present application provides a cloud server, comprising: a transceiver, one or more processors, and one or more memories. The one or more memories are coupled with the one or more processors, the one or more memories are configured to store computer program code, the computer program code comprising computer instructions that, when executed by the one or more processors, cause the cloud server to perform the image processing method in any of the possible implementations of any of the aspects.
In a twelfth aspect, an embodiment of the present application provides a computer storage medium, which includes computer instructions, and when the computer instructions are executed on a terminal, the terminal is caused to execute the image processing method in any one of the possible implementation manners of the foregoing aspects.
In a thirteenth aspect, an embodiment of the present application provides a computer storage medium, which includes computer instructions, and when the computer instructions are executed on a cloud server, the cloud server is caused to execute an image processing method in any possible implementation manner of any one of the foregoing aspects.
In a fourteenth aspect, the present application provides a computer program product, which, when run on a terminal, causes the terminal to execute the image processing method in any one of the possible implementation manners of the foregoing aspects.
In a fifteenth aspect, an embodiment of the present application provides a computer program product, which, when running on a cloud server, causes the cloud server to execute the image processing method in any one of the possible implementation manners of the foregoing aspects.
Drawings
Fig. 1 is a schematic architecture diagram of a communication system according to an embodiment of the present application;
Fig. 2A is a schematic structural diagram of a terminal according to an embodiment of the present application;
Fig. 2B is a schematic diagram of a software architecture of a terminal according to an embodiment of the present application;
Figs. 3A-3H are a set of schematic user interfaces provided by an embodiment of the present application;
Figs. 4A-4F are schematic diagrams of another set of user interfaces provided by an embodiment of the present application;
Fig. 4G is a schematic diagram illustrating a comparison of processing effects of a group of pictures according to an embodiment of the present application;
Figs. 5A-5G are schematic diagrams of another set of user interfaces provided by an embodiment of the present application;
Figs. 6A-6P are schematic diagrams of another set of user interfaces provided by an embodiment of the present application;
Figs. 7A-7D are schematic diagrams of another set of user interfaces provided by an embodiment of the present application;
Fig. 8 is a schematic flowchart of an image processing method according to an embodiment of the present application;
Fig. 9 is a schematic diagram illustrating an architecture of an image processing system according to an embodiment of the present application;
Fig. 10 is a block diagram of an architecture of another image processing system according to an embodiment of the present application;
Fig. 11 is a block diagram of an architecture of another image processing system according to an embodiment of the present application;
Fig. 12 is a block diagram of an architecture of another image processing system according to an embodiment of the present application;
Fig. 13 is a block diagram of a data construction system according to an embodiment of the present application;
Fig. 14 is a schematic structural diagram of a cloud server according to an embodiment of the present application;
Fig. 15 is a schematic flowchart of an image processing method according to an embodiment of the present application;
Fig. 16 is a schematic flowchart of another image processing method according to an embodiment of the present application;
Fig. 17 is a schematic flowchart of another image processing method according to an embodiment of the present application;
Fig. 18 is a block diagram of an image processing scheme provided by an embodiment of the present application;
Fig. 19 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
Fig. 20 is a schematic flowchart of another image processing method according to an embodiment of the present application;
Fig. 21A is a schematic diagram of a low-definition image provided in an embodiment of the present application;
Fig. 21B is a semantic segmentation mask diagram of a low-definition image according to an embodiment of the present application;
Fig. 21C is a schematic block diagram of a low-definition image according to an embodiment of the present application;
Fig. 21D is a schematic diagram of an image block provided in an embodiment of the present application;
Fig. 22 is a schematic flowchart of image registration provided in an embodiment of the present application;
Fig. 23A is a schematic flowchart of image detail restoration provided in an embodiment of the present application;
Fig. 23B is a schematic diagram of a high-definition reference image provided in an embodiment of the present application;
Fig. 23C is a schematic diagram of a high-definition image block according to an embodiment of the present application;
Fig. 24 is a schematic flowchart of another image processing method according to an embodiment of the present application;
Fig. 25A is a schematic block diagram of a panoramic high-definition image provided in an embodiment of the present application;
Fig. 25B is an exemplary diagram of a high-definition image segmented from a panoramic high-definition image according to an embodiment of the present application;
Fig. 25C is a schematic block diagram of another panoramic high-definition image provided in an embodiment of the present application;
Fig. 25D is an exemplary diagram of a high-definition image segmented from a panoramic high-definition image according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described in detail below with reference to the accompanying drawings. In the description of the embodiments of the present application, unless otherwise specified, "/" means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association relationship between associated objects, and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "a plurality of" means two or more.
In the following, the terms "first" and "second" are used for descriptive purposes only, and shall not be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Therefore, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present application, unless otherwise stated, "a plurality of" means two or more.
A communication system 10 to which embodiments of the present application relate is described below.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating an architecture of a communication system 10 according to an embodiment of the present application. The communication system 10 may include a terminal 100 and a cloud server 200. The terminal 100 may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, and the like. The embodiment of the present application does not specifically limit the specific type of the terminal 100.
The terminal 100 may be connected to the cloud server 200 through a 2G network, a 3G network, a 4G network, a 5G network, a wireless local area network (WLAN), or the like. The terminal 100 may transmit image data and environment information of the image data (e.g., geographical location information, shooting posture information of the photographing device, shooting time information, weather information, and the like) to the cloud server 200. The cloud server 200 may transmit an image processing network model, high-definition picture data, and the like to the terminal 100.
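For illustration only, the data exchanged over this connection can be pictured as two simple messages. The field names and example values below are assumptions chosen for readability; the embodiments do not prescribe any concrete serialization format.

# Hypothetical upload payload from the terminal 100 to the cloud server 200.
upload_request = {
    "image": "<encoded bytes of the captured image>",
    "environment_info": {
        "geo_location": {"lat": 31.14, "lon": 121.66},           # assumed example values
        "shooting_pose": {"pitch_deg": 5.0, "azimuth_deg": 210.0},
        "capture_time": "2019-12-03T08:00:00+08:00",
        "weather": "cloudy",
        "shooting_params": {"zoom": 10.0},
    },
}

# Hypothetical response from the cloud server 200 to the terminal 100.
download_response = {
    "processing_model": "<image processing network model>",
    "high_definition_images": ["<reference image 1>", "<reference image 2>"],
}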
The cloud server 200 may establish connections with a plurality of terminals 100 and may independently process the processing tasks requested by these terminals 100. The cloud server 200 may distinguish the terminals through the account (e.g., a HUAWEI ID) with which the user logs in on each terminal.
Fig. 2A shows a schematic structural diagram of the terminal 100.
The following describes an embodiment specifically by taking the terminal 100 as an example. It should be understood that the terminal 100 shown in fig. 2A is merely an example, and that the terminal 100 may have more or fewer components than shown in fig. 2A, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The terminal 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the terminal 100. In other embodiments of the present application, terminal 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The controller may be, among other things, a neural center and a command center of the terminal 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, and the like.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K through an I2C interface, such that the processor 110 and the touch sensor 180K communicate through an I2C bus interface to implement the touch function of the terminal 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of terminal 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the terminal 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the terminal 100, and may also be used to transmit data between the terminal 100 and peripheral devices. It can also be used to connect an earphone to play audio through the earphone. The interface may further be used to connect other terminals, such as AR devices.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only illustrative, and is not limited to the structure of the terminal 100. In other embodiments of the present application, the terminal 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the terminal 100. The charging management module 140 may also supply power to the terminal through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the terminal 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in terminal 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication and the like applied to the terminal 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication applied to the terminal 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, the antenna 1 of the terminal 100 is coupled to the mobile communication module 150 and the antenna 2 is coupled to the wireless communication module 160 so that the terminal 100 can communicate with a network and other devices through a wireless communication technology. The wireless communication technology may include global system for mobile communications (GSM), General Packet Radio Service (GPRS), code division multiple access (code division multiple access, CDMA), Wideband Code Division Multiple Access (WCDMA), time-division code division multiple access (time-division code division multiple access, TD-SCDMA), Long Term Evolution (LTE), LTE, BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a Satellite Based Augmentation System (SBAS).
The terminal 100 implements a display function through the GPU, the display screen 194, and the application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the terminal 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The terminal 100 may implement a photographing function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, and the application processor, etc.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the terminal 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the terminal 100 selects a frequency bin, the digital signal processor is configured to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The terminal 100 may support one or more video codecs. In this way, the terminal 100 can play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. The NPU can implement applications such as intelligent recognition of the terminal 100, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the terminal 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the terminal 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (e.g., audio data, a phonebook, etc.) created during use of the terminal 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The terminal 100 can implement an audio function through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The terminal 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the terminal 100 receives a call or voice information, it can receive voice by bringing the receiver 170B close to the human ear.
The microphone 170C, also referred to as a "mic", is used to convert a sound signal into an electrical signal. When making a call or sending voice information, the user can input a sound signal into the microphone 170C by speaking with the mouth close to the microphone 170C. The terminal 100 may be provided with at least one microphone 170C. In other embodiments, the terminal 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the terminal 100 may further be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement a directional recording function, and so on.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 180A is used for sensing a pressure signal, and converting the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A can be of a wide variety, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes. The terminal 100 determines the intensity of the pressure according to the change in the capacitance. When a touch operation is applied to the display screen 194, the terminal 100 detects the intensity of the touch operation according to the pressure sensor 180A. The terminal 100 may also calculate the touched position based on the detection signal of the pressure sensor 180A. In some embodiments, the touch operations that are applied to the same touch position but different touch operation intensities may correspond to different operation instructions. For example: and when the touch operation with the touch operation intensity smaller than the first pressure threshold value acts on the short message application icon, executing an instruction for viewing the short message. And when the touch operation with the touch operation intensity larger than or equal to the first pressure threshold value acts on the short message application icon, executing an instruction of newly building the short message.
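The pressure-dependent behavior described above is essentially a threshold dispatch. The short Python sketch below only illustrates that idea; the threshold value and the action names are assumptions and do not correspond to any actual platform API.

FIRST_PRESSURE_THRESHOLD = 0.5  # assumed, normalized touch intensity

def handle_message_icon_press(touch_intensity: float) -> str:
    """Map the touch intensity on the short message icon to an operation instruction."""
    if touch_intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_message"        # lighter press: view the short message
    return "create_new_message"      # firmer press: create a new short message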
The gyro sensor 180B may be used to determine a motion attitude of the terminal 100. In some embodiments, the angular velocity of terminal 100 about three axes (i.e., x, y, and z axes) may be determined by gyroscope sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. Illustratively, when the shutter is pressed, the gyro sensor 180B detects a shake angle of the terminal 100, calculates a distance to be compensated for by the lens module according to the shake angle, and allows the lens to counteract the shake of the terminal 100 by a reverse movement, thereby achieving anti-shake. The gyroscope sensor 180B may also be used for navigation, somatosensory gaming scenes.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, the terminal 100 calculates an altitude from the barometric pressure measured by the barometric pressure sensor 180C to assist in positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The terminal 100 may detect the opening and closing of a flip leather case by using the magnetic sensor 180D. In some embodiments, when the terminal 100 is a flip phone, the terminal 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D. Features such as automatic unlocking upon flipping open can then be set according to the detected opening or closing state of the leather case or the flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the terminal 100 in various directions (generally along three axes). The magnitude and direction of gravity can be detected when the terminal 100 is stationary. The acceleration sensor may also be used to recognize the posture of the terminal, and is applied to landscape/portrait switching, pedometers, and other applications.
The distance sensor 180F is used to measure a distance. The terminal 100 may measure the distance by infrared or laser. In some embodiments, in a shooting scene, the terminal 100 may use the distance sensor 180F to measure the distance, so as to achieve fast focusing.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector, such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The terminal 100 emits infrared light outward through the light-emitting diode. The terminal 100 detects infrared light reflected from a nearby object by using the photodiode. When sufficient reflected light is detected, it can be determined that there is an object near the terminal 100. When insufficient reflected light is detected, the terminal 100 may determine that there is no object near the terminal 100. The terminal 100 can use the proximity light sensor 180G to detect that the user is holding the terminal 100 close to the ear during a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense the ambient light level. The terminal 100 may adaptively adjust the brightness of the display 194 according to the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the terminal 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The terminal 100 can utilize the collected fingerprint characteristics to realize fingerprint unlocking, access to an application lock, fingerprint photographing, fingerprint incoming call answering, and the like.
The temperature sensor 180J is used to detect temperature. In some embodiments, the terminal 100 executes a temperature processing strategy by using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the terminal 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is lower than another threshold, the terminal 100 heats the battery 142 to avoid an abnormal shutdown of the terminal 100 caused by low temperature. In other embodiments, when the temperature is lower than a further threshold, the terminal 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on the surface of the terminal 100 at a different position than the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire a vibration signal of the human vocal part vibrating the bone mass. The bone conduction sensor 180M may also contact the human pulse to receive the blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset, integrated into a bone conduction headset. The audio module 170 may analyze a voice signal based on the vibration signal of the bone mass vibrated by the sound part acquired by the bone conduction sensor 180M, so as to implement a voice function. The application processor can analyze heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so as to realize the heart rate detection function.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be touch keys. The terminal 100 may receive a key input, and generate a key signal input related to user setting and function control of the terminal 100.
The motor 191 may generate a vibration prompt. The motor 191 may be used for incoming call vibration prompts as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing) may correspond to different vibration feedback effects. The motor 191 may also produce different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenarios (e.g., time reminders, receiving information, alarm clocks, games) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. A SIM card can be inserted into the SIM card interface 195 or pulled out of the SIM card interface 195 to come into contact with or be separated from the terminal 100. The terminal 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, and the like. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with external memory cards. The terminal 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the terminal 100 uses an eSIM, that is, an embedded SIM card. The eSIM card can be embedded in the terminal 100 and cannot be separated from the terminal 100.
The software system of the terminal 100 may adopt a hierarchical architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present invention takes an Android system with a layered architecture as an example, and exemplarily illustrates a software structure of the terminal 100.
Fig. 2B is a block diagram of the software configuration of the terminal 100 according to the embodiment of the present invention.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 2B, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2B, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide a communication function of the terminal 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages, which can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify download completion, provide message reminders, and so on. The notification manager may also present notifications in the top status bar of the system in the form of a chart or scroll-bar text, for example a notification of an application running in the background, or present a notification on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is played, the terminal vibrates, or an indicator light blinks.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library includes two parts: one part is the function modules that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The following describes exemplary workflow of the terminal 100 software and hardware in connection with capturing a photo scene.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into an original input event (including information such as the touch coordinates and a time stamp of the touch operation). The original input event is stored at the kernel layer. The application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. For example, if the touch operation is a tap and the control corresponding to the tap is the control of the camera application icon, the camera application calls an interface of the application framework layer to start the camera application, then starts the camera driver by calling the kernel layer, and captures a still image or a video through the camera 193.
The present application provides an image processing method that combines the processing capabilities of a terminal and a cloud server, and makes full use of the high-quality image data and image processing algorithms on the cloud server to empower the photographing and imaging system of the terminal. Based on the image uploaded by the terminal and the environment information of the image (such as geographical location information, weather information, time information, shooting parameters, and shooting posture information), the cloud server can match an image processing algorithm suitable for the image, as well as high-quality image data that shows the same shot content and was captured in a similar shooting environment. With this image processing algorithm and high-quality image data, the quality and effect of the image uploaded by the terminal can be enhanced, so that pictures taken in different environments can show different styles and effects. In this way, the terminal can break through its own imaging quality limits, so that an ordinary camera on the terminal can also achieve the effect of professional single-lens reflex equipment, which reduces the cost of the mobile phone and greatly improves the shooting quality and shooting experience of the user.
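Conceptually, matching suitable reference data by environment information can be thought of as scoring candidates on several cues. The Python sketch below is a simplified illustration under assumed weights; the similarity helpers (location_similarity, time_of_day_similarity, zoom_similarity) are placeholders, and the embodiments only require that the matching degree exceed a first threshold, not this particular formula.

def matching_degree(info_query, info_candidate):
    """Toy score combining location, time-of-day, weather, and zoom similarity."""
    score = 0.0
    score += 0.4 * location_similarity(info_query["geo_location"],
                                       info_candidate["geo_location"])
    score += 0.2 * time_of_day_similarity(info_query["capture_time"],
                                          info_candidate["capture_time"])
    score += 0.2 * (1.0 if info_query["weather"] == info_candidate["weather"] else 0.0)
    score += 0.2 * zoom_similarity(info_query["shooting_params"]["zoom"],
                                   info_candidate["shooting_params"]["zoom"])
    return score

def select_candidates(info_query, gallery, first_threshold):
    # Keep only high-quality images whose shooting information matches the
    # query's shooting information beyond the first threshold.
    return [img for img in gallery
            if matching_degree(info_query, img.info) > first_threshold]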
An image processing method according to an embodiment of the present application is described below with reference to an application scenario.
In some application scenarios, when the user takes a picture with the terminal 100, the user may select different photographing modes in the camera application of the terminal 100. When the user photographs scenery at a place such as a scenic spot or a business district through the terminal 100, the user may select the "cloud enhanced mode" in the camera application. In the "cloud enhanced mode", the terminal 100 can perform enhanced optimization on a picture taken by the terminal 100 with the help of the processing capability of the cloud server. In this way, the terminal 100 can break through its own imaging quality limits, so that an ordinary camera on the terminal can also achieve the effect of professional single-lens reflex equipment. For example, when the terminal 100 photographs a distant view, due to the limits of its own imaging quality, the details of the distant view photographed by the terminal 100 may become blurred when zooming beyond a certain degree. When the terminal 100 starts the "cloud enhanced mode" to shoot the distant view, the terminal 100 can optimize the distant-view image with the help of the processing capability of the cloud server, so that the terminal 100 can obtain a clear picture when shooting the distant view.
As shown in fig. 3A, the terminal 100 may display an interface 310 of a home screen. The interface 310 displays a page on which application icons are placed, and the page includes a plurality of application icons (for example, a weather application icon, a stocks application icon, a calculator application icon, a settings application icon, a mail application icon, an Alipay application icon, a Facebook application icon, a browser application icon, a gallery application icon 312, a music application icon, a video application icon, a WeChat application icon, and the like). A page indicator is also displayed below the application icons to indicate the positional relationship between the currently displayed page and other pages. Below the page indicator are a plurality of tray icons (for example, a dialing application icon, a messages application icon, a contacts application icon, and a camera application icon 313), which remain displayed during page switching. In some embodiments, the page may also include a plurality of application icons and a page indicator; the page indicator may not be a part of the page and may exist alone, and the icons are also optional, which is not limited in the embodiments of the present application. A status bar 311 is displayed in the upper portion of the interface 310, and the status bar 311 may include one or more signal strength indicators for mobile communication signals (also referred to as cellular signals), a battery status indicator, a time indicator, a Wi-Fi signal indicator, and the like.
The terminal 100 may receive an input operation (e.g., a single click) by the user on the camera application icon 313, and in response to the input operation, the terminal 100 may display a photographing interface 320 as shown in fig. 3B.
As shown in fig. 3B, the shooting interface 320 may display a captured image playback control 321, a shooting control 322, a camera conversion control 323, a picture 324 captured by the camera, a setting control 325, a zoom magnification control 326, and one or more shooting mode controls (e.g., a "night mode" control 327A, a "portrait mode" control 327B, a "cloud enhanced mode" control 327C, a "normal shooting mode" control 327D, a "video recording mode" control 327E, a "professional mode" control 327F, and a more-modes control 327G). The captured image playback control 321 can be used to display a captured image. The shooting control 322 is used to trigger saving of the image captured by the camera. The camera conversion control 323 can be used to switch the camera used for shooting. The setting control 325 may be used to set the photographing function. The zoom magnification control 326 may be used to set the zoom magnification of the camera. A shooting mode control can be used to trigger the image processing flow corresponding to that shooting mode. For example, the "night mode" control 327A may be used to trigger an increase in brightness and color richness of the captured image. The "portrait mode" control 327B may be used to trigger blurring of the background behind a person in the captured image. The "cloud enhanced mode" control 327C may be used to trigger enhancement of the effect of the captured image with the help of the processing capability of the cloud server. As shown in fig. 3B, the shooting mode currently selected by the user is the "normal shooting mode".
The terminal 100 may receive an input operation (e.g., a single click) of the user on the "cloud enhanced mode" control 327C, and in response to the input operation, as shown in fig. 3C, the terminal 100 may adjust the shooting mode currently selected by the user to the "cloud enhanced mode" and display a mode detail control 328.
As shown in fig. 3C, the terminal 100 may receive an input operation (e.g., a single click) by the user acting on the mode detail control 328, and in response to the input operation, the terminal 100 may display a "cloud enhanced mode" detail page 330 as shown in fig. 3D.
As shown in fig. 3D, the "cloud enhanced mode" detail page 330 may display a comparison of the effect with the "cloud enhanced mode" turned on and the effect with it turned off. For example, the "cloud enhanced mode" detail page 330 may display an image 331 photographed when the "cloud enhanced mode" is not turned on and a detail view 332 of the image 331, as well as an image 333 photographed when the "cloud enhanced mode" is turned on and a detail view 334 of the image 333. The "cloud enhanced mode" detail page 330 may also display a text description 335 of the "cloud enhanced mode" (for example, "Take high-definition pictures with cloud capability, highlight details, enhance color rendering, and improve picture quality").
The terminal 100 may receive an input operation (e.g., a single click) by the user acting again on the mode detail control 328, and in response to the input operation, as shown in fig. 3E, the terminal 100 may close the "cloud enhanced mode" detail page 330 and return to display the above-described photographing interface 320.
As shown in fig. 3E, the shooting mode currently selected by the user is the "cloud enhanced mode". The terminal 100 may receive an input operation (e.g., a single click) of the photographing control 322 by the user, and in response to the input operation, the terminal 100 may acquire an image photographed by the camera and environment information of the image. Wherein the environment information may include one or more of: geographical position information, weather information, time information, photographing parameters, photographing posture information, and the like. The shooting parameters may include zoom factors, among others. The photographing posture information may include orientation information of a photographing lens of the photographing apparatus when photographing an image. For example, the pitch angle and the azimuth angle of the photographing lens, etc., or the three-axis rotation angle in natural coordinates, etc. The terminal 100 may transmit the photographed image and the environment information of the photographed image to the cloud server 200, and enhance the photographed image by means of the processing capability of the cloud server 200. The process of enhancing the captured image by the terminal 100 through the processing capability of the cloud server 200 will be described in detail in the following embodiments, and will not be described in detail here.
In some embodiments, in the process of enhancing the photographed image by the terminal 100 by means of the processing capability of the cloud server 200, the terminal 100 may display a processing progress prompt or countdown for prompting the user of the current progress of enhancing the photographed image by the terminal 100 by means of the processing capability of the cloud server 200.
For example, as shown in fig. 3F, the terminal 100 may display a progress prompt window 341 on the shooting interface 320 to indicate the progress of enhancing the captured image. For example, the progress prompt window 341 may display the text prompt "The photo is being optimized through the cloud, progress 10%". A cancel control 342 may also be displayed in the progress prompt window 341 and may be used to trigger cancellation of the enhancement of the captured image. For example, when the terminal 100 receives an input operation (e.g., a single click) of the user on the cancel control 342, the terminal 100 may send a cancel instruction to the cloud server 200, instructing the cloud server 200 to stop the enhancement processing of the captured image. In this way, when the progress is slow or the user temporarily does not want to enhance the captured image, the enhancement of the captured image can be cancelled in time.
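As a rough illustration of how the progress prompt and the cancel control could be wired together on the terminal side, the following Python sketch sends a cancel instruction when the user taps the cancel control 342; the message format, the connection object, and the UI helpers (show_progress_prompt, dismiss_progress_prompt) are all assumptions made for this example.

class CloudEnhanceSession:
    """Illustrative terminal-side session for one cloud-enhanced photo."""

    def __init__(self, connection, task_id):
        self.connection = connection   # assumed channel to the cloud server 200
        self.task_id = task_id
        self.cancelled = False

    def on_progress(self, percent):
        if not self.cancelled:
            show_progress_prompt(f"Optimizing the photo through the cloud: {percent}%")

    def on_cancel_clicked(self):
        # Corresponds to tapping the cancel control 342: ask the cloud server
        # to stop the enhancement processing of the captured image.
        self.cancelled = True
        self.connection.send({"type": "cancel_enhancement", "task_id": self.task_id})
        dismiss_progress_prompt()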
In a possible implementation, the terminal 100 may display the enhancement effect in real time. If the user is satisfied with an intermediate enhancement effect, the enhancement may be ended in advance, and the intermediate enhancement result at that moment is saved as the final result.
After the terminal 100 enhances the captured image by the processing capability of the cloud server 200, the terminal 100 may display a completion prompt and store the enhanced image locally. The completion prompt may be used to prompt the user that enhancement of the captured image has currently been completed.
For example, as shown in fig. 3G, the terminal 100 may display a completion prompt 343 after completing the enhancement of the captured image with the help of the processing capability of the cloud server 200. For example, the completion prompt 343 may be the text prompt "Photo enhancement completed; view it in the gallery".
The terminal 100 can receive an input operation (e.g., a single click) by the user with respect to the captured image redisplay control 321, and in response to the input operation, the terminal 100 can display a photo browsing interface 350 as shown in fig. 3H.
As shown in fig. 3H, the photo browsing interface 350 may display the enhanced image 351, a label 352, image-related information 353, a menu 354, and a gallery control 355. The label 352 is used to prompt the user that the image 351 has been enhanced by the processing capability of the cloud server 200. The image-related information 353 may include the shooting time, shooting weather, geographical location information, and the like of the image 351. For example, the shooting time may be "8:00 AM, December 3, 2019", the shooting weather may be "cloudy", and the shooting place may be "Shanghai Disneyland". The menu 354 may include a share button, a favorites button, an edit button, a delete button, and a more button. The share button may be used to trigger sharing of the image 351. The favorites button may be used to trigger adding the image 351 to a picture favorites folder. The edit button may be used to trigger editing functions such as rotating, cropping, adding filters, and blurring for the image 351. The delete button may be used to trigger deletion of the image 351. The more button may be used to trigger the opening of more functions associated with the image 351. The gallery control 355 may be used to trigger the terminal 100 to open the gallery application.
In some embodiments, when the terminal 100 enhances the image by using the processing capability of the cloud server 200, the image and the environment information of the image need to be transmitted to the cloud server 200. Therefore, after the terminal 100 opens the camera application and starts the "cloud enhanced" mode, the terminal 100 may detect a current network state, and if the current network of the terminal 100 is disconnected and cannot be connected to the cloud server, the terminal 100 may output a network disconnection prompt, which may be used to prompt the user to use the cloud enhanced mode to take a picture after connecting to the network.
Illustratively, as shown in fig. 4A, the terminal 100 displays the shooting interface 320. For the text description of the shooting interface 320, reference may be made to the foregoing embodiments, which are not described herein again. When the terminal 100 detects that the network is disconnected and a connection with the cloud server 200 cannot be established, the terminal 100 may display a prompt 411, where the prompt 411 may be used to prompt the user that the network of the terminal 100 is currently disconnected. For example, the prompt 411 may be a text prompt such as "The current network is disconnected; please connect to a network first, and then optimize pictures by means of the cloud", and so on.
When the terminal 100 opens the camera application and starts the cloud enhanced mode, and the network state of the terminal 100 is switched from disconnection to reconnection, the terminal 100 may output a network reconnection prompt, where the network reconnection prompt may be used to prompt a user that the current network is restored to connection and the cloud enhanced mode may be normally used to take a picture.
Illustratively, as shown in fig. 4B, the terminal 100 displays the shooting interface 320. For the text description of the shooting interface 320, reference may be made to the foregoing embodiments, which are not described herein again. When the terminal 100 detects that the network is switched from disconnected to reconnected, the terminal 100 may display a prompt 412, where the prompt 412 may be used to prompt the user that the network of the terminal 100 has been restored and the cloud enhanced mode can be used normally to take pictures. For example, the prompt 412 may be a text prompt such as "The current network is connected; you can take pictures using the cloud enhanced mode".
In some embodiments, since the cloud server 200 may process pictures uploaded by multiple terminals for enhancement, when the cloud server 200 has too many picture enhancement tasks, the cloud server 200 queues the tasks. When the picture enhancement task requested by the terminal 100 is queued, the terminal 100 may output a queuing prompt, maintain communication with the cloud server 200 in the background, and monitor the queuing progress of the picture enhancement task. The queuing prompt may be used to prompt that the current picture enhancement task is queuing and needs to wait.
Illustratively, as shown in fig. 4C, the terminal 100 displays the shooting interface 320. For the text description of the shooting interface 320, reference may be made to the foregoing embodiments, which are not described herein again. After the terminal 100 transmits the photographed image and the environment information of the photographed image to the cloud server 200, the terminal 100 may periodically acquire from the cloud server 200 the processing progress of enhancing the photographed image. When the number of picture enhancement tasks on the cloud server 200 reaches a specified number, the cloud server 200 may queue the task request from the terminal 100 and periodically send the queuing progress to the terminal 100. The terminal 100 may output a prompt 413 for prompting the user that the task requested by the terminal 100 for enhancing the captured image is currently being queued. For example, the prompt 413 may indicate in text "Many users are currently enhancing pictures via the cloud; your request has been queued, and you can view the queuing progress in the notification bar".
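For illustration only, the cloud-side queuing described here — starting to queue once the number of concurrent enhancement tasks reaches a specified number, and reporting each waiting task's position so that the terminal can display the queuing progress — might be organized as in the following Python sketch. The capacity value and data structures are assumptions and are not defined by this application.

```python
# Illustrative sketch: a cloud-side enhancement queue that starts queuing
# requests once a capacity threshold is reached and reports queue positions.
from collections import deque
from dataclasses import dataclass
import itertools

MAX_CONCURRENT_TASKS = 8  # assumed "specified number" of concurrent tasks

@dataclass
class EnhanceTask:
    task_id: int
    image: bytes
    env_info: dict

class EnhanceQueue:
    def __init__(self):
        self._ids = itertools.count(1)
        self.running = {}        # task_id -> EnhanceTask currently being processed
        self.waiting = deque()   # tasks queued beyond the capacity threshold

    def submit(self, image: bytes, env_info: dict):
        """Returns (task_id, queue_position); position 0 means processing now."""
        task = EnhanceTask(next(self._ids), image, env_info)
        if len(self.running) < MAX_CONCURRENT_TASKS:
            self.running[task.task_id] = task
            return task.task_id, 0
        self.waiting.append(task)
        return task.task_id, len(self.waiting)

    def queue_position(self, task_id: int) -> int:
        """Polled periodically by the terminal to update its queuing prompt."""
        for pos, task in enumerate(self.waiting, start=1):
            if task.task_id == task_id:
                return pos
        return 0  # already running or finished
```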
When the terminal 100 requests the task of enhancing the photographed image to be in a queue, the terminal 100 may maintain communication with the cloud server 200 and monitor the queuing progress in the background, and may display the queuing progress in a notification bar of the terminal 100.
Illustratively, as shown in fig. 4D, the terminal 100 is displayed with a shooting interface 320. The terminal 100 can receive an input operation from the user and call up the status bar 311 above the shooting interface 320. The terminal 100 may receive an input operation (e.g., a downward slide) by the user with respect to the status bar 311, and in response to the input operation, the terminal 100 may display a menu window 420 and a notification bar 430 as shown in fig. 4E.
As shown in fig. 4E, switch controls for some functions (e.g., a Wireless Local Area Network (WLAN) switch control, a Bluetooth switch control, a flashlight switch control, a ringer switch control, an auto-rotation switch control, a Huawei Share switch control, etc.) may be displayed in the menu window 420. The notification bar 430 may display a queuing notification (e.g., "Many users are enhancing picture effects via the cloud; your request is currently queued"), a queuing time prompt 432 for the enhancement of the captured image (e.g., "estimated wait time: 1 minute"), and a cancel wait control 431. The cancel wait control 431 may be used to trigger the terminal 100 to send a cancel request to the cloud server 200, the cancel request being used to request the cloud server 200 to cancel the enhancement of the captured image uploaded by the terminal 100.
When the terminal 100 completes the enhancement of the photographed image by means of the processing capability of the cloud server 200, the terminal 100 may display a completion prompt 433 as shown in fig. 4F. The prompt 433 may be used to prompt the user that the terminal 100 has currently completed enhancing the captured image. The terminal 100 can receive an input operation (e.g., a single click) by the user with respect to the prompt 433, and in response to the input operation, the terminal 100 can display the photo browsing interface 350 shown in fig. 3H described above. For the text description of the photo browsing interface 350, reference may be made to the foregoing embodiments, which are not described herein again.
According to the image processing method provided by the embodiment of the application, the terminal 100 can perform enhancement optimization on the picture shot by the terminal 100 by means of the processing capability of the cloud server 200 in the 'cloud enhancement mode'. The terminal 100 can break through the imaging quality limit of the terminal itself, so that a common camera on the terminal can also shoot the effect of professional single-lens reflex equipment.
As shown in fig. 4G, (a) in fig. 4G shows a captured image before the terminal 100 enhances it by means of the processing capability of the cloud server 200, and (b) in fig. 4G shows the image obtained after the terminal 100 enhances the captured image by means of the processing capability of the cloud server 200. Fig. 4G (c) shows a partial view of the captured image in fig. 4G (a), and fig. 4G (d) shows a partial view of the enhanced image in fig. 4G (b). Comparing (a) and (b) in fig. 4G, and (c) and (d) in fig. 4G, it can be seen that after the terminal 100 enhances the captured image by means of the processing capability of the cloud server 200, the definition of both the whole image and its details is improved, and the contrast, dynamic range, color, and the like of the image are also improved.
In some application scenarios, the terminal 100 may locally store many pictures, which may be taken by the terminal 100, sent by other devices, or downloaded from a network. Some of these pictures cannot present a high-quality image effect due to the limitation of the shooting capability of the shooting device. Therefore, the terminal 100 may enhance images stored in the gallery by means of the processing capability of the cloud server 200, so that the image effect of the shot images can be improved, and the user can conveniently enhance the effect of a shot image by means of the processing capability of the cloud server 200 even after the image has been shot.
As shown in fig. 5A, the terminal 100 may display a home screen interface 510, in which a page with application icons is displayed, the page including a plurality of application icons (e.g., a weather application icon, a stocks application icon, a calculator application icon, a settings application icon, a mail application icon, an Alipay application icon, a Facebook application icon, a browser application icon, a gallery application icon 512, a music application icon, a video application icon, a WeChat application icon, etc.). A page indicator is also displayed below the application icons to indicate the positional relationship between the currently displayed page and other pages. Below the page indicator are a plurality of tray icons (e.g., a dialing application icon, a messaging application icon, a contacts application icon, and a camera application icon 513), and a status bar 511 is displayed at the upper portion of the interface 510. Since the interface 510 is the same as the interface 310 shown in fig. 3A, for the detailed text description of the interface 510, reference may be made to the embodiment shown in fig. 3A, which is not repeated herein.
The terminal 100 may receive an input operation by the user on the gallery application icon 512, and in response to the input operation, the terminal 100 may display a gallery album interface 520 as shown in fig. 5B.
As shown in FIG. 5B, the gallery album interface 520 may display one or more albums (e.g., an all-photos album 526, a videos album, a camera album, a cloud enhanced album 527, a WeChat album, a Weibo album, etc.). The terminal 100 may display a gallery menu 521 below the gallery album interface 520. The gallery menu 521 includes a photos control 522, an albums control 523, a time control 524, and a discovery control 525. The photos control 522 is used to trigger the terminal 100 to display all local pictures in the form of picture thumbnails. The albums control 523 is used to trigger the terminal 100 to display the albums to which the local pictures belong. As shown in FIG. 5B, the albums control 523 is currently in the selected state, and the terminal 100 displays the gallery album interface 520. The time control 524 may be used to trigger the terminal 100 to display locally stored featured pictures. The discovery control 525 may be used to trigger the terminal 100 to display categorized picture albums. For example, the terminal 100 may automatically classify pictures by people, locations, sights, and business districts.
The terminal 100 may receive an input operation (e.g., a single click) from a user with respect to the discovery control 525, and in response to the input operation, the terminal 100 may display a gallery discovery interface 530 as shown in fig. 5C.
As shown in FIG. 5C, the gallery discovery interface 530 displays a portrait category 531, a locations category 532, and a sights and business districts category 533. The portrait category 531 includes one or more pictures containing portrait content. The locations category 532 includes one or more location albums (e.g., a "Xiamen" album, a "Shanghai" album, a "Shenzhen" album, etc.). The sights and business districts category 533 includes one or more sight and business district albums (e.g., a "Shanghai Disneyland" album, a "Shanghai · The Bund" album, a "Beijing · The Forbidden City" album, etc.).
The terminal 100 may receive an input operation (e.g., a single click) by the user for the sights and business districts category 533, and in response to the input operation, the terminal 100 may display a sights and business districts classification interface 540 as shown in fig. 5D.
As shown in FIG. 5D, the sights and business districts classification interface 540 displays one or more albums (e.g., an album 541, an album 542, an album 543, an album 544, etc.) and a cloud enhanced control 545. Below each album are the name of the corresponding sight or business district and the number of pictures in the album. For example, the sight or business district corresponding to the album 541 is "Shanghai Disneyland", and the album 541 contains 3 pictures. The sight or business district corresponding to the album 542 is "Shanghai · The Bund", and the album 542 contains 2 pictures. The sight or business district corresponding to the album 543 is "Beijing · The Forbidden City", and the album 543 contains 2 pictures. The sight or business district corresponding to the album 544 is "Xiamen · Gulangyu", and the album 544 contains 2 pictures. The above examples are merely illustrative of the present application and should not be construed as limiting.
The terminal 100 may receive an input operation (e.g., a single click) by the user acting on the cloud enhanced control 545, and in response to the input operation, the terminal 100 may display a cloud enhanced interface 550 as shown in fig. 5E.
As shown in fig. 5E, the cloud enhanced interface 550 includes a picture adding control 551 and a details-understanding control 552. The picture adding control 551 may be used to trigger the terminal 100 to select a picture to be enhanced by the processing capability of the cloud server 200. The details-aware control 552 may be used to trigger the terminal 100 to display the effects of the cloud enhanced functionality and related information.
The terminal 100 may receive an input operation (e.g., a single click) by the user with respect to the know-details control 552, and in response to the input operation, the terminal 100 may display a cloud enhanced details page 560 as shown in fig. 5F.
As shown in fig. 5F, a comparison graph of the effect of the cloud enhancement function is displayed in the cloud enhancement detail page 560. For example, the cloud enhancement details page 560 may display a picture 561 before cloud enhancement processing and a detail view 562 of the picture 561, and a picture 563 after cloud enhancement processing and a detail view 564 of the picture 563, and a comparison result (e.g., "details before enhancement are blurred and details after enhancement are clearer"). The cloud enhancement detail page 560 may also display a text description 565 related to the cloud enhancement function (e.g., "enhance the photo by the cloud capability, so as to highlight the detail, improve the definition and color expression, and enhance the quality of the photo"). The cloud enhancement details page 560 may also display a page indicator 566, and the page indicator 566 may be used to indicate the position relationship between the page where the current effect contrast diagram is located and the page where the other effect contrast diagrams are located.
The terminal 100 may receive an input operation (e.g., a leftward swipe) by the user with respect to the cloud enhanced details page 560, and in response to the input operation, as shown in fig. 5G, the terminal 100 may switch to display another set of effect-contrast maps in the cloud enhanced details page 560.
As shown in fig. 5G, the terminal 100 may display, in the cloud enhancement details page 560, a picture 571 before cloud enhancement processing, a picture 572 after cloud enhancement processing, and a comparison result 573 (e.g., "the enhanced picture has richer, more balanced colors and richer texture").
After the cloud enhancement function is turned on, the terminal 100 may receive an operation of the user and select a picture in the gallery application. The terminal 100 may acquire the picture and the environment information (e.g., geographical location information, weather information, time information, shooting parameters, shooting posture information, etc.) of the picture, upload them to the cloud server 200, and enhance the picture selected by the user by means of the processing capability of the cloud server 200.
As shown in fig. 6A, the terminal 100 is displayed with a cloud enhanced interface 550. The cloud enhanced interface 550 includes a picture adding control 551 and a detail learning control 552. For the detailed text description of the cloud enhanced interface 550, reference may be made to the embodiment shown in fig. 5E, which is not described herein again. The terminal 100 may receive an input operation (e.g., a single click) by the user with respect to the picture addition control 551, and in response to the operation, the terminal 100 may display a picture selection interface 610 as shown in fig. 6B.
As shown in fig. 6B, the picture selection interface 610 displays one or more albums that can be cloud-enhanced (e.g., a "Shanghai Disneyland" album, a "Shanghai · The Bund" album 611, a "Beijing · The Forbidden City" album, a "Xiamen · Gulangyu" album, etc.). The environment information of the pictures in these albums is complete, so they can be enhanced by means of the processing capability of the cloud server 200. A selection box (e.g., the selection box 612 displayed on the "Shanghai · The Bund" album 611) may also be displayed on each album in the picture selection interface 610 and may be used to select all the pictures in that album. The number of pictures in each album is displayed below the album.
The terminal 100 may receive an input operation (e.g., a single click) by the user on the "shanghai/beach" album 611, and in response to the input operation, the terminal 100 may be as shown in the album interface 620 shown in fig. 6C.
As shown in fig. 6C, the album interface 620 is named "Shanghai · The Bund". The album interface 620 displays thumbnails (e.g., a thumbnail 621) of one or more pictures whose environment information is complete, and a determination control 623. A selection box is displayed on the thumbnail of each picture in the album interface 620 and can be used to select the picture. As shown in fig. 6C, the user has selected the picture corresponding to the thumbnail 621.
The terminal 100 may receive an input operation (e.g., a single click) by the user with respect to the determination control 623 after the user selects a picture, and in response to the input operation, the terminal 100 may return to display the picture selection interface 610 as shown in fig. 6D and display the selected number of pictures in the picture selection interface 610, and the determination control 613.
As shown in fig. 6D, the terminal 100 displays the selected number 614 of pictures (e.g., "1 picture selected in total") and a determination control 613 on the picture selection interface 610.
The terminal 100 may receive an input operation (e.g., a single click) by the user with respect to the determination control 613, and in response to the input operation, the terminal 100 may display a thumbnail of the selected picture (e.g., a thumbnail 631) and an upload control 633 on the cloud enhanced interface 550 as shown in fig. 6E. A deletion control 632 is displayed below the thumbnail of the selected picture, and the deletion control 632 can be used to trigger the terminal 100 to remove the picture selected by the user. The upload control 633 may display a text description (e.g., "Upload to enhance the effect via the cloud"). The upload control 633 can be used to trigger the terminal 100 to upload the picture selected by the user and the environment information of the picture to the cloud server 200, so as to enhance the effect of the picture selected by the user by means of the processing capability of the cloud server 200.
In some embodiments, when the terminal 100 enhances the picture selected by the user by using the processing capability of the cloud server 200, the picture selected by the user and the environment information of the picture need to be sent to the cloud server 200 first. Therefore, when the user triggers the terminal 100 to start uploading the selected picture and the environmental information of the picture, the terminal 100 may detect the current network state, and if the network state is disconnected and cannot be connected to the cloud server 200, the terminal 100 may output an upload failure prompt, which may be used to prompt the user to upload after connecting to the network.
For example, as shown in fig. 6E, the terminal 100 receives an input operation (e.g., a single click) from a user with respect to the upload control 633, and in response to the input operation, the terminal 100 may first detect a current network status. If the current terminal 100 is disconnected from the network and cannot connect to the cloud server 200, the terminal 100 displays an upload failure prompt 634 as shown in fig. 6F.
As shown in fig. 6F, the upload failure prompt 634 shows a text prompt "Upload failed. Enhancing the image via the cloud requires a network connection; please connect to a network first and then upload the image for cloud enhancement".
After displaying the upload failure prompt, the terminal 100 may receive an input from a user, connect to a network, and communicate with the cloud server 200. After the terminal 100 is connected to the network, the input operation of the user for the upload control 633 may be received again, and the picture selected by the user and the environment information of the picture may be sent to the cloud server 200.
Illustratively, as shown in fig. 6F, the terminal 100 may receive an input operation (e.g., a downward slide) by the user on the status bar 511, and in response to the input operation, the terminal 100 may display a menu window 640 as shown in fig. 6G.
As shown in fig. 6G, switch controls for some functions (e.g., a Wireless Local Area Network (WLAN) switch control 641, a Bluetooth switch control, a flashlight switch control, a ringer switch control, an auto-rotation switch control, a Huawei Share switch control, etc.) may be displayed in the menu window 640. The WLAN switch control 641 is currently in the off state.
The terminal 100 may receive an input operation (e.g., a single click) from the user with respect to the WLAN switch control 641, and in response to the input operation, the terminal 100 may turn on the WLAN switch and connect to an available Wi-Fi network. As shown in fig. 6H, after the WLAN switch is turned on, the terminal 100 is connected to a Wi-Fi network, displays the name of the connected Wi-Fi network (e.g., "HW-ZL") under the WLAN switch control 641, and displays a Wi-Fi signal indicator 642 in the status bar 511. The Wi-Fi signal indicator 642 is used to indicate that the current network state of the terminal 100 is normal and that the terminal 100 can connect to the cloud server 200.
As shown in fig. 6I, after the terminal 100 is connected to the network, an input operation (e.g., a single click) of the user on the upload control 633 in the cloud enhanced interface 550 may be received, and in response to the input operation, the terminal 100 may upload a picture selected by the user and environment information of the picture to the cloud server 200.
In some embodiments, since the cloud server 200 may process pictures uploaded by multiple terminals for enhancement, when the cloud server 200 has too many picture enhancement tasks, the cloud server 200 queues the tasks. When the picture enhancement task requested by the terminal 100 is queued, the terminal 100 may output a queuing prompt, maintain communication with the cloud server 200 in the background, and monitor the queuing progress of the picture enhancement task. The queuing prompt may be used to prompt that the current picture enhancement task is queuing and needs to wait.
For example, as shown in fig. 6J, after the terminal 100 sends the selected picture and the environment information of the picture to the cloud server 200, the terminal 100 may periodically obtain from the cloud server 200 the processing progress of enhancing the selected picture. When the number of picture enhancement tasks on the cloud server 200 reaches a specified number, the cloud server 200 may queue the task request from the terminal 100 and periodically send the queuing progress to the terminal 100. The terminal 100 may output a progress prompt window 650. The progress prompt window 650 displays the reason why the current task is progressing slowly (for example, "Many users are enhancing picture effects via the cloud; your request is currently queued"), a waiting time 651 (for example, "1 picture in total, estimated wait time 1 minute"), and a cancel wait control 652. The cancel wait control 652 may be used to trigger the terminal 100 to send a cancel request to the cloud server 200, the cancel request being used to request the cloud server 200 to cancel the enhancement of the picture uploaded by the terminal 100.
When the cloud server 200 starts to process the picture enhancement task of the terminal 100, the cloud server 200 may send a start processing response to the terminal 100, and after receiving the start processing response, the terminal 100 may display a processing prompt, which may be used to prompt the user that the cloud server 200 is currently processing the picture enhancement task.
For example, as shown in fig. 6K, the terminal 100 may display a processing prompt 653 while the cloud server 200 is processing the picture enhancement task of the terminal 100. For example, the processing prompt 653 can be a text prompt such as "Processing in the cloud, please wait; 1 picture in total". The examples are merely illustrative of the present application and should not be construed as limiting.
In some embodiments, after the terminal 100 uploads the selected picture and the environmental information of the picture to the cloud server 200, if the terminal 100 disconnects from the cloud server 200 and disconnects from the network, the terminal 100 may output a network disconnection prompt, where the network disconnection prompt may be used to prompt the user that the current network is disconnected and needs to be connected to the network to obtain the picture enhancement result.
For example, as shown in fig. 6L, after the terminal 100 uploads the selected picture and the environment information of the picture to the cloud server 200, the network of the terminal 100 is disconnected and the terminal cannot connect to the cloud server 200. The terminal 100 may display a network disconnection identifier 643 in the status bar 511 for prompting the user that the network is currently disconnected. The terminal 100 may also display a network disconnection prompt 654. For example, the network disconnection prompt 654 may be a text prompt "It is detected that your current network has been disconnected; please reconnect to the network to obtain the cloud-enhanced picture". The examples are merely illustrative of the present application and should not be construed as limiting.
As shown in fig. 6M, after the terminal 100 completes the enhancement of the picture selected by the user by means of the processing capability of the cloud server 200, the terminal 100 may display an enhancement success prompt window 660 on the cloud enhanced interface 550. The enhancement success prompt window 660 may display an effect description 661 of the enhancement of the selected picture. For example, the text corresponding to the effect description 661 may be "The location of the picture you selected is 'Shanghai · The Bund'. Through the cloud, the details of the buildings in the picture are more prominent and clear, and the dynamic range and color of the picture are improved, presenting a better urban landscape for you!". The enhancement success prompt window 660 also displays a thumbnail 662 corresponding to the enhanced picture, a view control 663, a download control 664, and a download-all control 665. The view control 663 may be used to view the enhanced picture. The download control 664 may be used to save the enhanced picture to a locally specified storage path of the terminal 100. The download-all control 665 may be used to save multiple enhanced pictures to the locally specified storage path of the terminal 100 when there are multiple enhanced pictures.
Terminal 100 can receive an input operation (e.g., a single click) from a user with respect to view control 663, and in response to the input operation, terminal 100 can display picture browsing interface 670 as shown in fig. 6N.
As shown in fig. 6N, the picture browsing interface 670 includes an enhanced picture 671, a mark 672, a download control 673, an effect comparison control 674, picture-related information 675, and a menu 676. The mark 672 is used to prompt the user that the picture 671 has been enhanced by the processing capability of the cloud server 200. The picture-related information 675 may include the shooting time, shooting weather, geographical location information, and the like of the picture 671. For example, the shooting time may be "8:00 AM, December 1, 2019", the shooting weather may be "cloudy", and the shooting place may be "Shanghai · The Bund". The menu 676 may include a share button, a favorites button, an edit button, a delete button, and a more button. The share button may be used to trigger sharing of the picture 671. The favorites button may be used to trigger adding the picture 671 to a picture favorites folder. The edit button may be used to trigger editing functions such as rotating, cropping, adding filters, and blurring for the picture 671. The delete button may be used to trigger deletion of the picture 671. The more button may be used to trigger the opening of more functions associated with the picture 671. The effect comparison control 674 can be used to trigger the display of the pre-enhancement picture (i.e., the picture selected by the user) at the location of the picture 671.
As shown in fig. 6P, the terminal 100 may receive an input operation (e.g., a single click) by the user with respect to the download control 673, and in response to the input operation, the terminal 100 may save the enhanced picture 671 to a locally specified storage path of the terminal 100 and, after the save is completed, display a save completion prompt 678 for prompting the user that the enhanced picture 671 has been saved.
In some embodiments, the terminal 100 can store a captured picture locally after taking it. The user can view pictures taken by the terminal 100 and pictures obtained from other devices or networks in the gallery application of the terminal 100. The terminal 100 can mark, in the gallery application, pictures to be enhanced, pictures that have already been enhanced, and the enhanced result pictures. When the environment information (e.g., geographical location information, weather information, time information, shooting parameters, shooting posture information, etc.) of a picture is complete and the picture has not been enhanced by the cloud server 200, the picture can be classified as a picture to be enhanced. In this way, the user can conveniently and quickly select the pictures that need to be enhanced by means of the processing capability of the cloud server 200, which saves the user's time.
Illustratively, as shown in FIG. 7A, the terminal 100 may display the gallery album interface 520. The gallery album interface 520 may display one or more albums (e.g., the all-photos album 526, a videos album, a camera album, the cloud enhanced album 527, a WeChat album, a Weibo album, etc.). For the detailed text description of the gallery album interface 520, reference may be made to the embodiment shown in fig. 5B, which is not described herein again.
The terminal 100 may receive an input operation (e.g., a single click) by the user to act on the all-photos album 526, and in response to the input operation, the terminal 100 may display an all-photos interface 710 as shown in fig. 7B.
As shown in fig. 7B, the all-photos interface 710 displays thumbnails of one or more pictures. The terminal 100 can mark, among the one or more pictures, the pictures to be enhanced, the pictures that have already been enhanced, and the enhanced result pictures. For example, the picture corresponding to the thumbnail 711 has already been enhanced: the terminal 100 enhanced the picture corresponding to the thumbnail 711 by means of the processing capability of the cloud server 200 to obtain the picture corresponding to the thumbnail 713. The picture corresponding to the thumbnail 715 is a picture to be enhanced. The terminal 100 may display a mark 712 on the thumbnail 711, where the mark 712 is used to prompt the user that the picture corresponding to the thumbnail 711 has been enhanced by means of the processing capability of the cloud server 200. The terminal 100 may display a mark 714 on the thumbnail 713, where the mark 714 is used to prompt the user that the picture corresponding to the thumbnail 713 is a picture obtained through enhancement by means of the processing capability of the cloud server 200. The terminal 100 may display a mark 716 on the thumbnail 715, where the mark 716 is used to prompt the user that the picture corresponding to the thumbnail 715 is to be enhanced by means of the processing capability of the cloud server 200. The above examples are merely illustrative of the present application and should not be construed as limiting.
The terminal 100 may receive an input operation (e.g., a single click) by the user on the thumbnail 715, and in response to the input operation, the terminal 100 may display a picture browsing interface 720 as shown in fig. 7C. Since the picture corresponding to the thumbnail 715 is to be enhanced by the processing capability of the cloud server 200, the terminal 100 may display the picture corresponding to the thumbnail 715 and a cloud enhancement control in the picture browsing interface 720, where the cloud enhancement control is used to trigger the terminal 100 to enhance the picture corresponding to the thumbnail 715 by the processing capability of the cloud server 200.
As shown in fig. 7C, since the picture corresponding to the thumbnail 715 is to be enhanced by means of the processing capability of the cloud server 200, the picture browsing interface 720 may display a picture 721 corresponding to the thumbnail 715, a cloud enhancement control 722, picture-related information 723, and a menu 724. The picture-related information 723 may include the shooting time, shooting weather, geographical location information, and the like of the picture 721. For example, the shooting time may be "8:00 AM, December 1, 2019", the shooting weather may be "cloudy", and the shooting place may be "Shanghai · The Bund". The menu 724 may include a share button, a favorites button, an edit button, a delete button, and a more button. The share button may be used to trigger sharing of the picture 721. The favorites button may be used to trigger adding the picture 721 to a picture favorites folder. The edit button may be used to trigger editing functions such as rotating, cropping, adding filters, and blurring for the picture 721. The delete button may be used to trigger deletion of the picture 721. The more button may be used to trigger the opening of more functions associated with the picture 721. The cloud enhancement control 722 can be used to trigger the terminal 100 to enhance the picture 721 by means of the processing capability of the cloud server 200.
The terminal 100 may receive an input operation (e.g., a single click) by the user with respect to the cloud enhanced control 722, and in response to the input operation, the terminal 100 may display a cloud enhanced interface 730 as shown in fig. 7D.
As shown in fig. 7D, the cloud-enhanced interface 730 may display a picture addition control 731, a thumbnail of the selected picture (e.g., thumbnail 732), a delete control 733, a know details control 734, and an upload control 735. Since the cloud enhanced interface 730 is the same as the cloud enhanced interface 550 shown in fig. 6E, for the detailed text description of the cloud enhanced interface 730, reference may be made to the embodiment shown in fig. 6E, and details are not repeated here.
The terminal 100 may receive an input operation of the user on the upload control 735, and in response to the operation, the terminal 100 may upload the picture selected by the user and the environment information of the picture to the cloud server 200, and enhance an effect of the picture selected by the user (i.e., the picture 721 shown in fig. 7C) by means of the processing capability of the cloud server 200.
How the terminal 100 enhances the picture by the processing capability of the cloud server 200 in the embodiment of the present application is described below.
Referring to fig. 8, fig. 8 is a schematic flowchart illustrating an image processing method according to an embodiment of the present disclosure. As shown in fig. 8, the method includes:
S801, the terminal 100 receives a first input from a user.
S802, the terminal 100 responds to the first input, and acquires the first image and first environment information of the first image.
Wherein the first environment information may include one or more of geographical location information, weather information, season information, time information, photographing posture information, photographing parameters, and the like. The shooting parameters may include zoom factors, among others. The photographing posture information may include orientation information of a photographing lens of the photographing apparatus when photographing an image. For example, the pitch angle and the azimuth angle of the photographing lens, etc., or the three-axis rotation angle in natural coordinates, etc.
In a scene that the user opens the camera application of the terminal 100 to take a picture through the cloud enhanced mode, the first input may be an input operation (for example, a click) to the shooting control 322 after the user opens the cloud enhanced mode in the shooting interface 320 in the embodiment shown in fig. 3E. This first image may be the picture 324 (i.e., the captured image) captured by the camera shown in fig. 3E described above. The terminal 100 may obtain the first environment information of the first image from one or more sensors and from a network.
Illustratively, when the geographic location information is GPS location information, the terminal 100 may acquire the GPS location information of the captured image through a GPS positioning module. When the geographical location information is a geographical name, after the terminal 100 may obtain the GPS location information of the captured image through the GPS positioning module, the terminal may query the geographical name corresponding to the GPS location information through an offline map or a network online map. The terminal 100 may also acquire weather information, season information, and the like of the captured image from the network in combination with the capturing time and the geographical location information. The terminal 100 may further acquire shooting attitude information of the shot image through a sensor such as a gyroscope or an acceleration sensor. The shooting parameters may include a zoom magnification and the like, and the terminal 100 may acquire the shooting parameters set by the user through the camera application, and the like.
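For illustration only, the following minimal Python sketch shows how the first environment information might be assembled from such sources. The helper callables (read_gps, read_attitude, query_weather, reverse_geocode) and the field layout are assumptions standing in for the GPS positioning module, the gyroscope and acceleration sensors, the online services, and the camera application mentioned above; they are not defined by this application.

```python
# Illustrative sketch: assembling the first environment information of a
# captured image from sensor readings and network queries (all assumed).
from dataclasses import dataclass
from datetime import datetime

@dataclass
class EnvironmentInfo:
    latitude: float
    longitude: float
    place_name: str          # e.g. resolved via an offline or online map
    weather: str             # e.g. "cloudy", queried by time + location
    season: str
    capture_time: datetime
    pitch_deg: float         # shooting posture: pitch angle of the lens
    azimuth_deg: float       # shooting posture: azimuth angle of the lens
    zoom_factor: float       # shooting parameter set in the camera application

def collect_environment_info(read_gps, read_attitude, query_weather,
                             reverse_geocode, zoom_factor: float) -> EnvironmentInfo:
    lat, lon = read_gps()                      # GPS positioning module
    pitch, azimuth = read_attitude()           # gyroscope / acceleration sensor
    now = datetime.now()
    weather, season = query_weather(lat, lon, now)  # network query
    return EnvironmentInfo(lat, lon, reverse_geocode(lat, lon),
                           weather, season, now, pitch, azimuth, zoom_factor)
```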
In a scenario where the user opens the gallery application of the terminal 100 to select a picture for enhancement, the first input may be an input operation of the upload control 633 after the user selects a picture corresponding to the thumbnail 631 in the cloud enhancement interface 550 in the embodiment shown in fig. 6I. Alternatively, the first input may be an input operation of the upload control 735 after the user selects the picture corresponding to the thumbnail 732 in the cloud enhanced interface 730 in the embodiment shown in fig. 7D. The first image may be a picture corresponding to the thumbnail 631 in the embodiment shown in fig. 6I or a picture corresponding to the thumbnail 732 in the embodiment shown in fig. 7D.
When the picture is saved after being taken, the shooting device (terminal 100 or other device) may save the first environment information (e.g., geographical location information, weather information, time information, season information, shooting parameters, shooting posture information, etc.) of the picture into exchangeable image file format (Exif) data of the picture. Therefore, when the first image is a picture which is selected by the user and is locally stored in the terminal 100, the terminal 100 may acquire the first environment information of the picture selected by the user from the Exif data corresponding to the picture selected by the user.
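As an illustration of reading such metadata back, the sketch below uses the Pillow library (assuming a recent version that provides the ExifTags.IFD, ExifTags.Base, and ExifTags.GPS enumerations). Standard Exif covers the shooting time, GPS location, and zoom ratio; weather, season, and shooting posture information would have to be stored in vendor-specific fields, which are not shown here.

```python
# Illustrative sketch: recovering part of the first environment information
# from a picture's Exif data (assumes a recent Pillow install).
from PIL import Image, ExifTags

def read_exif_environment(path: str) -> dict:
    exif = Image.open(path).getexif()
    gps = exif.get_ifd(ExifTags.IFD.GPSInfo)   # GPS sub-IFD (tag 34853)
    sub = exif.get_ifd(ExifTags.IFD.Exif)      # Exif sub-IFD
    return {
        "capture_time": sub.get(ExifTags.Base.DateTimeOriginal),
        "gps_latitude": gps.get(ExifTags.GPS.GPSLatitude),
        "gps_latitude_ref": gps.get(ExifTags.GPS.GPSLatitudeRef),
        "gps_longitude": gps.get(ExifTags.GPS.GPSLongitude),
        "gps_longitude_ref": gps.get(ExifTags.GPS.GPSLongitudeRef),
        "zoom_factor": sub.get(ExifTags.Base.DigitalZoomRatio),
    }
```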
In one possible implementation, the terminal 100 may mark, in the gallery application, the pictures to be enhanced, the pictures that have already been enhanced, and the enhanced result pictures. When the first environment information of a picture is complete and the picture has not been enhanced by the cloud server 200, the picture can be classified as a picture to be enhanced. When a picture has been selected by the user for optimization by means of the processing capability of the cloud server 200 and a corresponding high-quality picture has been generated, the picture is classified as a picture that has already been enhanced. An enhanced result picture refers to the high-quality picture generated by optimizing the picture selected by the user by means of the processing capability of the cloud server 200. In this way, the user can conveniently and quickly select the pictures that need to be enhanced by means of the processing capability of the cloud server 200, which saves the user's time.
S803, the terminal 100 sends the first image and the first environment information of the first image to the cloud server 200.
When the terminal 100 acquires the first image and the first environment information, the terminal 100 may detect a current network state, and if the network state is disconnected and the terminal cannot be connected to the cloud server 200, the terminal 100 may output a prompt to prompt a user that the first image can be enhanced by the cloud server 200 after the user connects to the network. For details, reference may be made to the embodiment shown in fig. 4A or the embodiment shown in fig. 6F, which is not described herein again.
S804, the terminal 100 determines a first image processing model matched with the first environment information from the image processing model library according to the first environment information.
Because the imaging effect and the style of the image are different under different shooting environments, different image processing models can be trained aiming at different shooting environments. Therefore, the image can be optimized by using the specific image processing model according to the specific shooting environment, and the quality of the image is improved.
The image processing model library stores image processing models corresponding to various trained shooting environments. The photographing environment may be distinguished by one or more of geographical location information, weather information, season information, time information, and the like.
For example, the image processing model library may be divided into a plurality of image processing models according to a shooting location and shooting weather. The image processing models corresponding to the plurality of shooting environments stored in the image processing model library may be as shown in table 1 below:
TABLE 1

Image processing model      Shooting location              Shooting weather
Image processing model 1    Shanghai · The Bund            Sunny
Image processing model 2    Shanghai · The Bund            Cloudy
Image processing model 3    Shanghai · The Bund            Other weather
Image processing model 4    Beijing · The Forbidden City   Sunny
Image processing model 5    Beijing · The Forbidden City   Snow
Image processing model 6    Beijing · The Forbidden City   Other weather
As can be seen from table 1 above, the image processing model 1 corresponds to a shooting environment in which the shooting location is "Shanghai · The Bund" and the shooting weather is "sunny". The image processing model 2 corresponds to a shooting environment in which the shooting location is "Shanghai · The Bund" and the shooting weather is "cloudy". The image processing model 3 corresponds to a shooting environment in which the shooting location is "Shanghai · The Bund" and the shooting weather is "other weather". The image processing model 4 corresponds to a shooting environment in which the shooting location is "Beijing · The Forbidden City" and the shooting weather is "sunny". The image processing model 5 corresponds to a shooting environment in which the shooting location is "Beijing · The Forbidden City" and the shooting weather is "snow". The image processing model 6 corresponds to a shooting environment in which the shooting location is "Beijing · The Forbidden City" and the shooting weather is "other weather", and so on. The examples shown in table 1 above are merely for explaining the present application and should not be construed as limiting.
For example, in combination with the image processing models corresponding to the plurality of shooting environments shown in table 1, when the geographical location information in the first environment information of the first image is "Shanghai · The Bund" and the weather information is "cloudy", the cloud server 200 may determine from the image processing model library that the first image processing model corresponding to the first environment information is "image processing model 2". The examples are merely illustrative of the present application and should not be construed as limiting.
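For illustration, a model library keyed by shooting location and shooting weather, as in table 1, could be looked up as in the following Python sketch. The model names and location strings mirror table 1; the fallback to the "other weather" entry is an assumption about how such a library might handle unlisted weather.

```python
# Illustrative sketch: selecting the first image processing model that matches
# the first environment information, following the organization of table 1.
MODEL_LIBRARY = {
    ("Shanghai · The Bund", "sunny"):  "image processing model 1",
    ("Shanghai · The Bund", "cloudy"): "image processing model 2",
    ("Shanghai · The Bund", "other"):  "image processing model 3",
    ("Beijing · The Forbidden City", "sunny"): "image processing model 4",
    ("Beijing · The Forbidden City", "snow"):  "image processing model 5",
    ("Beijing · The Forbidden City", "other"): "image processing model 6",
}

def select_model(location: str, weather: str):
    """Return the model for (location, weather), falling back to "other weather"."""
    return (MODEL_LIBRARY.get((location, weather))
            or MODEL_LIBRARY.get((location, "other")))

# e.g. select_model("Shanghai · The Bund", "cloudy") -> "image processing model 2"
```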
S805, the cloud server 200 sends the first image processing model to the terminal 100.
After determining the first image processing model matched with the first environment information, the cloud server 200 may send the first image processing model to the terminal 100.
And S806, the terminal 100 optimizes the first image according to the first image processing model to obtain a second image.
After the terminal 100 receives the first image processing model sent by the cloud server 200, the terminal 100 may input the first image into the first image processing model, and may obtain the second image after the first image processing model processing. Compared with the first image, the second image has improved detail definition, dynamic range/contrast, noise, color and the like.
S807, the cloud server 200 retrieves the first high-definition image from the high-definition image library according to the first image and the first environment information. The similarity between the second environment information of the first high-definition image and the first environment information of the first image is within a first preset range, and the similarity between the shot content in the first high-definition image and the shot content in the first image is within a second preset range.
The high-definition image library stores a plurality of high-definition images and second environment information of the high-definition images. After the cloud server 200 receives the first image and the first environment information sent by the terminal 100, the cloud server 200 may first select a high-definition image with similarity between the second environment information and the first environment information in a first preset range from a high-definition image library according to the first environment information. Then, the cloud server 200 may compare the first image with the high-definition images similar to the shooting environment, and screen out a first high-definition image with a similarity to the shooting content in the first image within a second preset range from the high-definition images similar to the shooting environment.
For example, the high definition images stored in the high definition image library and the second environment information of the high definition images may be as shown in table 2 below:
TABLE 2

High-definition image      Shooting location              Weather       Shooting time
High-definition image 1    Shanghai · The Bund            Sunny         Afternoon
High-definition image 2    Shanghai · The Bund            Cloudy        Morning
High-definition image 3    Shanghai · The Bund            Cloudy        Morning
High-definition image 4    Shanghai · The Bund            Cloudy        Night
High-definition image 5    Beijing · The Forbidden City   Sunny         Morning
High-definition image 6    Beijing · The Forbidden City   Light rain    Afternoon
As can be seen from table 2 above, the shooting location of the high-definition image 1 in the high-definition image library is "Shanghai · The Bund", the weather is "sunny", and the shooting time is "afternoon". The shooting location of the high-definition image 2 is "Shanghai · The Bund", the weather is "cloudy", and the shooting time is "morning". The shooting location of the high-definition image 3 is "Shanghai · The Bund", the weather is "cloudy", and the shooting time is "morning". The shooting location of the high-definition image 4 is "Shanghai · The Bund", the weather is "cloudy", and the shooting time is "night". The shooting location of the high-definition image 5 is "Beijing · The Forbidden City", the weather is "sunny", and the shooting time is "morning". The shooting location of the high-definition image 6 is "Beijing · The Forbidden City", the weather is "light rain", and the shooting time is "afternoon". And so on. The examples shown in table 2 above are for illustrative purposes only and should not be construed as limiting.
For example, the shooting location in the first environment information of the first image may be "Shanghai · The Bund", the weather may be "cloudy", and the shooting time may be "morning". As shown in table 2, the shooting location of the high-definition image 2 is "Shanghai · The Bund", the weather is "cloudy", and the shooting time is "morning"; the shooting location of the high-definition image 3 is "Shanghai · The Bund", the weather is "cloudy", and the shooting time is "morning". Therefore, the cloud server 200 can determine that the high-definition images whose shooting environment is the same as that of the first image are the high-definition image 2 and the high-definition image 3. Then, the cloud server 200 may screen out, from the high-definition image 2 and the high-definition image 3, the first high-definition image whose shot content has a similarity to the shot content of the first image within the preset range. For example, if the similarity between the shot content in the high-definition image 2 and the shot content in the first image is 90%, the similarity between the shot content in the high-definition image 3 and the shot content in the first image is 40%, and the preset range is 80% or above, the cloud server 200 may determine the high-definition image 2 as the first high-definition image. The above examples are merely illustrative of the application and should not be construed as limiting.
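A minimal Python sketch of this two-stage retrieval is given below. The environment similarity and content similarity measures, and the two thresholds standing in for the first and second preset ranges, are assumptions; this application does not prescribe particular measures.

```python
# Illustrative sketch: two-stage retrieval of the first high-definition image.
def retrieve_first_hd_image(first_image, first_env, hd_library,
                            env_similarity, content_similarity,
                            env_threshold=0.9, content_threshold=0.8):
    # Stage 1: keep HD images whose shooting environment is close enough
    # (similarity within the first preset range).
    candidates = [
        (hd_img, hd_env) for hd_img, hd_env in hd_library
        if env_similarity(hd_env, first_env) >= env_threshold
    ]
    # Stage 2: among those, pick the HD image whose shot content is most
    # similar to the first image, provided the similarity falls within the
    # second preset range.
    best, best_score = None, 0.0
    for hd_img, _ in candidates:
        score = content_similarity(hd_img, first_image)
        if score >= content_threshold and score > best_score:
            best, best_score = hd_img, score
    return best  # the first high-definition image, or None if nothing matches
```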
And S808, carrying out image registration on the first high-definition image according to the first image by the cloud server 200 to obtain a second high-definition image.
The cloud server 200 may extract matched feature points from the first image and the first high-definition image, and obtain image space coordinate transformation parameters from the matched feature point pairs. The cloud server 200 may then perform image registration according to the image space coordinate transformation parameters to obtain the second high-definition image.
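One common way to realize such feature-point matching and spatial coordinate transformation is sketched below using OpenCV (ORB features, brute-force matching, and a RANSAC-estimated homography). This is only an illustrative implementation choice; the application itself does not limit the registration algorithm.

```python
# Illustrative sketch: registering the first high-definition image to the
# first image via matched feature points and a homography.
import cv2
import numpy as np

def register_hd_image(first_image: np.ndarray, hd_image: np.ndarray) -> np.ndarray:
    g1 = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(hd_image, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(nfeatures=4000)
    kp1, des1 = orb.detectAndCompute(g1, None)
    kp2, des2 = orb.detectAndCompute(g2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:500]

    # Matched feature point pairs: HD image points -> first image points.
    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # transformation parameters

    h, w = first_image.shape[:2]
    return cv2.warpPerspective(hd_image, H, (w, h))  # the second high-definition image
```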
S809, the cloud server 200 may send the second high-definition image to the terminal 100.
And S810, the terminal 100 can perform image fusion on the second image and the second high-definition image to obtain a third image.
After receiving the second high-definition image sent by the cloud server 200, the terminal 100 may compare the feature information in the second image and the second high-definition image through an image fusion algorithm, and fuse the advantageous feature information of the second high-definition image into the second image to obtain a third image (i.e., the image obtained by enhancing the first image). For example, high-definition details, feature points with rich colors, and the like in the second high-definition image may be fused into the second image to obtain the third image. In this way, the detail definition, color, white balance, and the like of the image can be improved.
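This application does not fix a particular fusion algorithm. As one simple stand-in, the sketch below transfers the high-frequency detail layer of the registered second high-definition image into the second image; the blur radius and fusion strength are assumptions chosen only for illustration.

```python
# Illustrative sketch: fusing high-frequency detail of the second
# high-definition image into the second image to obtain the third image.
import cv2
import numpy as np

def fuse_detail(second_image: np.ndarray, second_hd: np.ndarray,
                strength: float = 0.6) -> np.ndarray:
    # Ensure the registered HD image matches the second image's size.
    second_hd = cv2.resize(second_hd, (second_image.shape[1], second_image.shape[0]))
    # High-frequency detail layer of the HD image = image minus its blurred base.
    base = cv2.GaussianBlur(second_hd, (0, 0), sigmaX=3)
    detail = second_hd.astype(np.float32) - base.astype(np.float32)
    fused = second_image.astype(np.float32) + strength * detail
    return np.clip(fused, 0, 255).astype(np.uint8)  # the third image
```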
In one possible implementation, a 3D world model and a rendering engine may be included in the cloud server 200. After receiving the first image and the first environment information sent by the terminal 100, the cloud server 200 may determine a shooting environment of the first image from the 3D world model according to the first environment information, and render a high-definition image in the 3D world model, which is the same as the shooting environment of the first image, through a rendering engine, and use the rendered high-definition image as the first high-definition image for subsequent image registration and image fusion after image registration. In another possible implementation manner, the cloud server 200 may also pre-render a high-definition image of each shooting angle in each geographic area in the 3D world model, and after the cloud server 200 receives the first image and the first environment information sent by the terminal 100, the cloud server 200 may search, according to the first environment information, a high-definition image that is consistent with the first image shooting angle from the multiple high-definition images that are rendered, and determine the high-definition image as the first high-definition image, which is used for subsequent image registration and image fusion after the image registration.
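For the pre-rendered variant, selecting the rendered high-definition image whose shooting position and lens orientation best match the first environment information could be done as in the following sketch. The pose-distance weighting and the data layout of the rendered views are assumptions, not part of this application.

```python
# Illustrative sketch: picking, from pre-rendered high-definition views of the
# 3D world model, the view closest to the first environment information.
import math

def closest_rendered_view(env: dict, rendered_views) -> object:
    """rendered_views: iterable of dicts with lat, lon, pitch_deg, azimuth_deg, image."""
    def pose_distance(view):
        d_pos = math.hypot(view["lat"] - env["lat"], view["lon"] - env["lon"])
        d_az = abs((view["azimuth_deg"] - env["azimuth_deg"] + 180) % 360 - 180)
        d_pitch = abs(view["pitch_deg"] - env["pitch_deg"])
        return d_pos * 1e4 + d_az + d_pitch  # assumed weighting of position vs. angle
    return min(rendered_views, key=pose_distance)["image"]
```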
In some embodiments, after the cloud server 200 determines the first image processing model from the image processing model library according to the first environment information of the first image, the cloud server 200 may directly perform optimization processing on the first image according to the first image processing model to obtain the second image, without sending the first image processing model to the terminal 100. After the cloud server 200 retrieves the first high-definition image from the high-definition image library according to the first image and the first environment information and performs image registration on the first high-definition image to obtain the second high-definition image, it does not need to send the second high-definition image to the terminal 100 either. The cloud server 200 may perform image fusion on the second image and the second high-definition image to obtain the third image (i.e., the image obtained by enhancing the first image), and then send the third image to the terminal 100. In this way, the computing resources of the terminal 100 can be greatly saved, the terminal 100 can break through its own imaging quality limit, and an ordinary camera on the terminal can achieve the effect of professional single-lens reflex equipment.
In some embodiments, the terminal 100 may transmit only the first environment information of the first image to the cloud server 200 after responding to the first input of the user. After receiving the first environment information sent by the terminal 100, the cloud server 200 may determine a first image processing model matching the first environment information from the image processing model library, and send the first image processing model to the terminal 100. After receiving the first image processing model, the terminal 100 may perform optimization processing on the first image through the first image processing model to obtain a second image, and store the second image as an enhanced image to the local.
In some embodiments, the terminal 100 may transmit the first image and the first environment information of the first image to the cloud server 200 in response to a first input by the user. After receiving the first image and the first environment information sent by the terminal 100, the cloud server 200 may determine a first image processing model matched with the first environment information from the image processing model library, and perform optimization processing on the first image through the first image processing model to obtain a second image. The cloud server 200 may send the second image to the terminal 100, and save the second image as an enhanced image to the local terminal 100.
In some embodiments, the terminal 100 may send the first image and the first environment information of the first image to the cloud server 200 in response to a first input by the user. The cloud server 200 may retrieve the first high-definition image from the high-definition image library according to the first environment information, and perform image registration on the first high-definition image according to the first image to obtain a second high-definition image. The cloud server 200 may send the second high-definition image to the terminal 100; the terminal 100 performs image fusion on the second high-definition image and the first image to obtain the enhanced image, and saves the enhanced image locally on the terminal 100. In a possible implementation manner, after obtaining the second high-definition image, the cloud server 200 may also directly perform image fusion with the first image and send the fused image to the terminal 100.
The image processing method provided in the embodiments of the application combines the processing capabilities of the terminal and the cloud server, makes full use of high-quality image data and image processing algorithms, and empowers the terminal's photographing and imaging system. Through the image uploaded by the terminal and its environment information (such as geographical position information, weather information, season information, time information, shooting parameters, shooting posture information, and the like), the cloud server can match an image processing algorithm suitable for the image, as well as high-quality image data with the same shooting content and a similar shooting environment. With this image processing algorithm and high-quality image data, the quality and effect of the image uploaded by the terminal can be enhanced, so that pictures shot in different environments show different styles and effects. In this way, the terminal can break through its own imaging quality limit, an ordinary camera on the terminal can achieve the effect of professional single-lens reflex equipment, the cost of the mobile phone is reduced, and the shooting quality and shooting experience of the user are greatly improved.
An image processing system 900 to which the present application relates is described below.
Referring to fig. 9, fig. 9 is a schematic diagram illustrating an architecture of an image processing system 900 provided in an embodiment of the present application. As shown in fig. 9, the image processing system 900 may include a terminal 100 and a cloud server 200.
The terminal 100 may include a camera 911, an image signal processor 912, a homologous processing module 913, and a heterologous fusion module 914.
The camera 911 is used to capture an optical signal when a user takes a picture, convert the optical signal into an electrical signal, and send the electrical signal to the image signal processor 912.
The image signal processor 912 may convert the electrical signal transmitted from the camera 911 into a digital image signal, also referred to as a first image, wherein the format of the first image may be a Raw image, a YUV image, or an RGB image, etc. The image signal processor 912 may further acquire first environment information (including one or more of geographical location information, weather information, season information, time information, photographing parameters, photographing posture information, etc.) of the first image from one or more sensors (not shown in the drawing) of the terminal 100 and a network to which the terminal 100 is connected.
The homologous processing module 913 may be configured to perform optimization processing on the first image generated by the image signal processor 912 according to the first image processing model sent by the cloud server 200, so as to obtain a second image.
The heterologous fusion module 914 may be configured to perform image fusion on the second image processed by the homologous processing module 913 and the second high-definition image sent by the cloud server 200, so as to obtain a third image.
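As a rough, non-limiting sketch of the kind of operation the heterologous fusion module 914 could perform, the following Python/OpenCV code keeps the low-frequency content of the optimized second image and injects the high-frequency detail of the registered second high-definition image. The application does not fix a particular fusion algorithm, so this simple detail-transfer rule and the blur kernel size are assumptions.

```python
import cv2
import numpy as np

def fuse_with_reference(second_image: np.ndarray,
                        second_hd_image: np.ndarray,
                        blur_ksize: int = 21) -> np.ndarray:
    """Illustrative fusion: keep the low-frequency content (colour, exposure)
    of the optimized second image and add the high-frequency detail of the
    registered second high-definition image to obtain the third image."""
    second_image = second_image.astype(np.float32)
    hd = cv2.resize(second_hd_image, (second_image.shape[1], second_image.shape[0]))
    hd = hd.astype(np.float32)
    hd_low = cv2.GaussianBlur(hd, (blur_ksize, blur_ksize), 0)
    detail = hd - hd_low                  # high-frequency detail of the HD image
    third_image = second_image + detail   # transfer that detail onto the second image
    return np.clip(third_image, 0, 255).astype(np.uint8)
```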
The cloud server 200 may include an image processing model library 921, a model selection module 922, a high-definition image library 923, an image retrieval module 924, and an image registration module 925, among others.
The image processing model library 921 is used to store a plurality of image processing models divided according to the shooting environment.
The model selecting module 922 is configured to determine, according to data 1 in the first environment information sent by the terminal 100, a first image processing model matching the data 1 from the image processing model library 921. Wherein, the data 1 may be all or part of the first environment information. For example, when the first environment information includes geographical location information, weather information, season information, time information, photographing parameters, and photographing posture information, the data 1 may include the geographical location information and the weather information.
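A minimal sketch of how the first image processing model might be selected from a library divided by shooting environment is given below, assuming each model is labelled with environment fields and scored by how many fields of data 1 it matches; the labels and the scoring rule are illustrative assumptions rather than details of the application.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ModelEntry:
    name: str
    environment: Dict[str, str]  # e.g. {"location": "coastal", "weather": "sunny"}

def select_model(library: List[ModelEntry], data1: Dict[str, str]) -> ModelEntry:
    """Pick the image processing model whose shooting-environment labels match
    the most fields of data 1 (a subset of the first environment information)."""
    def score(entry: ModelEntry) -> int:
        return sum(1 for k, v in data1.items() if entry.environment.get(k) == v)
    return max(library, key=score)

# Hypothetical library entries and a data-1 query built from the first environment information.
library = [
    ModelEntry("night_urban", {"location": "urban", "weather": "clear", "time": "night"}),
    ModelEntry("daytime_coastal", {"location": "coastal", "weather": "sunny", "time": "day"}),
]
print(select_model(library, {"location": "coastal", "weather": "sunny"}).name)  # daytime_coastal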
The high-definition image library 923 may be configured to store a plurality of high-definition images and second environment information of the high-definition images.
The image retrieving module 924 is configured to retrieve the first high-definition image from the high-definition image library 923 according to the first image and the data 2 in the first environment information. The similarity between the shooting environment of the first high-definition image and the shooting environment of the first image is within a first preset range, and the similarity between the shooting content in the first high-definition image and the shooting content in the first image is within a second preset range. Wherein, the data 2 may be all or part of the first environment information. For example, when the first environment information includes geographical position information, weather information, season information, time information, photographing parameters, and photographing posture information, the data 2 may include geographical position information, season information, time information, and photographing posture information.
The image registration module 925 may be configured to perform image registration on the first high-definition image according to the first image to obtain a second high-definition image. The cloud server 200 may transmit the second high-definition image to the terminal 100.
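The application does not prescribe a specific registration algorithm; as one common possibility, the image registration module 925 could align the first high-definition image to the viewpoint of the first image using feature matching and a RANSAC homography, as in the following Python/OpenCV sketch.

```python
import cv2
import numpy as np

def register_to_first_image(first_image: np.ndarray,
                            first_hd_image: np.ndarray) -> np.ndarray:
    """Warp the first high-definition image onto the viewpoint of the first
    image (ORB features + RANSAC homography), yielding the second HD image."""
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(cv2.cvtColor(first_hd_image, cv2.COLOR_BGR2GRAY), None)
    k2, d2 = orb.detectAndCompute(cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY), None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:500]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = first_image.shape[:2]
    return cv2.warpPerspective(first_hd_image, H, (w, h))  # second high-definition image
```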
The image processing system 900 provided in the embodiments of the application combines the processing capabilities of the terminal and the cloud server, makes full use of high-quality image data and image processing algorithms, and empowers the terminal's photographing and imaging system. Through the image uploaded by the terminal and its environment information (such as geographical position information, weather information, time information, shooting parameters, shooting posture information, and the like), the cloud server can match an image processing algorithm suitable for the image, as well as high-quality image data with the same shooting content and a similar shooting environment. With this image processing algorithm and high-quality image data, the quality and effect of the image uploaded by the terminal can be enhanced, so that pictures shot in different environments show different styles and effects. In this way, the terminal can break through its own imaging quality limit, an ordinary camera on the terminal can achieve the effect of professional single-lens reflex equipment, the cost of the mobile phone is reduced, and the shooting quality and shooting experience of the user are greatly improved.
An image processing system 1000 to which the present application relates is described below.
Referring to fig. 10, fig. 10 is a schematic diagram illustrating an architecture of an image processing system 1000 according to an embodiment of the present application. As shown in fig. 10, the image processing system 1000 may include a terminal 100 and a cloud server 200.
The terminal 100 may include a camera 1011, an image signal processor 1012, an homologous processing module 1013, and a heterologous fusion module 1014.
The camera 1011 can be used to capture optical signals when a user takes a picture, convert the optical signals into electrical signals, and send the electrical signals to the image signal processor 1012.
The image signal processor 1012 may convert the electric signal transmitted from the camera 1011 into a digital image signal, also referred to as a first image, wherein the format of the first image may be a Raw image, a YUV image, or an RGB image, or the like. The image signal processor 1012 may further acquire first environment information (including one or more of geographical location information, weather information, season information, time information, photographing parameters, photographing posture information, etc.) of the first image from one or more sensors (not shown in the drawing) of the terminal 100 and a network to which the terminal 100 is connected.
The homologous processing module 1013 may be configured to perform optimization processing on the first image generated by the image signal processor 1012 according to the first image processing model sent by the cloud server 200, so as to obtain a second image.
The heterologous fusion module 1014 may be configured to perform image fusion on the second image processed by the homologous processing module 1013 and the second high-definition image sent by the cloud server 200, so as to obtain a third image.
The cloud server 200 may include an image processing model library 1021, a model selection module 1022, a 3D world model 1023, a rendering engine 1024, and an image registration module 1025.
The image processing model library 1021 is operable to store a plurality of image processing models divided according to shooting environments.
The model selecting module 1022 is configured to determine, according to the data 1 of the first environment information sent by the terminal 100, a first image processing model matching the data 1 from the image processing model library 1021. Wherein, the data 1 may be all or part of the first environment information. For example, when the first environment information includes geographical location information, weather information, season information, time information, photographing parameters, and photographing posture information, the data 1 may include the geographical location information and the weather information.
The 3D world model 1023 includes 3D map information of the real world.
The rendering engine 1024 is configured to render a first high definition image from the 3D world model 1023 according to the data 2 in the first environment information of the first image. The similarity between the shooting content in the first high-definition image and the shooting content in the first image is within a preset range, and the shooting environment of the first high-definition image is the same as the shooting environment of the first image. Wherein, the data 2 may be all or part of the first environment information. For example, when the first environment information includes geographical position information, weather information, season information, time information, photographing parameters, and photographing posture information, the data 2 may include geographical position information, season information, time information, and photographing posture information.
The image registration module 1025 can be configured to perform image registration on the first high-definition image according to the first image to obtain a second high-definition image. The cloud server 200 may transmit the second high-definition image to the terminal 100.
The image processing system 1000 provided in the embodiments of the application combines the processing capabilities of the terminal and the cloud server, makes full use of high-quality image data and image processing algorithms, and empowers the terminal's photographing and imaging system. Through the image uploaded by the terminal and its environment information (such as geographical position information, weather information, time information, shooting parameters, shooting posture information, and the like), the cloud server can match an image processing algorithm suitable for the image, as well as high-quality image data with the same shooting content and a similar shooting environment. With this image processing algorithm and high-quality image data, the quality and effect of the image uploaded by the terminal can be enhanced, so that pictures shot in different environments show different styles and effects. In this way, the terminal can break through its own imaging quality limit, an ordinary camera on the terminal can achieve the effect of professional single-lens reflex equipment, the cost of the mobile phone is reduced, and the shooting quality and shooting experience of the user are greatly improved.
An image processing system 1100 to which the present application relates is described below.
Referring to fig. 11, fig. 11 is a schematic diagram illustrating an architecture of an image processing system 1100 provided in an embodiment of the present application. As shown in fig. 11, the image processing system 1100 may include a terminal 100 and a cloud server 200.
The terminal 100 may include a camera 1111 and an image signal processor 1112, among others.
The camera 1111 may be used to capture optical signals when a user takes a picture, convert the optical signals into electrical signals, and send the electrical signals to the image signal processor 1112.
The image signal processor 1112 may convert the electric signal transmitted by the camera 1111 into a digital image signal, also referred to as a first image, wherein the format of the first image may be a Raw image, a YUV image, or an RGB image, etc. The image signal processor 1112 may further acquire first environment information (including one or more of geographical location information, weather information, season information, time information, photographing parameters, photographing posture information, and the like) of the first image from one or more sensors (not shown in the drawing) of the terminal 100 and a network to which the terminal 100 is connected.
The cloud server 200 may include an image processing model library 1121, a model selection module 1122, a high-definition image library 1123, an image retrieval module 1124, an image registration module 1125, a homologous processing module 1126, and a heterologous fusion module 1127.
The image processing model library 1121 may be used to store a plurality of image processing models divided according to shooting environments.
The model selection module 1122 is configured to determine, according to data 1 in the first environment information sent by the terminal 100, a first image processing model matching the data 1 from the image processing model library 1121. Wherein, the data 1 may be all or part of the first environment information. For example, when the first environment information includes geographical location information, weather information, season information, time information, photographing parameters, and photographing posture information, the data 1 may include the geographical location information and the weather information.
The high-definition image library 1123 may be configured to store a plurality of high-definition images and second environment information of the high-definition images.
The image retrieving module 1124 is configured to retrieve the first high-definition image from the high-definition image library 1123 according to the first image and the data 2 in the first environment information of the first image. The similarity between the shooting environment of the first high-definition image and the shooting environment of the first image is within a first preset range, and the similarity between the shooting content in the first high-definition image and the shooting content in the first image is within a second preset range. Wherein, the data 2 may be all or part of the first environment information. For example, when the first environment information includes geographical position information, weather information, season information, time information, photographing parameters, and photographing posture information, the data 2 may include geographical position information, season information, time information, and photographing posture information.
The image registration module 1125 may be configured to perform image registration on the first high-definition image according to the first image to obtain a second high-definition image.
The homologous processing module 1126 is configured to perform optimization processing on the first image sent by the terminal 100 according to the first image processing model selected by the model selecting module 1122, so as to obtain a second image.
The heterologous fusion module 1127 may be configured to perform image fusion on the second image processed by the homologous processing module 1126 and the second high-definition image, so as to obtain a third image. The cloud server 200 may send the third image to the terminal 100.
The image processing system 1100 provided in the embodiments of the application combines the processing capabilities of the terminal and the cloud server, makes full use of high-quality image data and image processing algorithms, and empowers the terminal's photographing and imaging system. Through the image uploaded by the terminal and its environment information (such as geographical position information, weather information, time information, shooting parameters, shooting posture information, and the like), the cloud server can match an image processing algorithm suitable for the image, as well as high-quality image data with the same shooting content and a similar shooting environment. With this image processing algorithm and high-quality image data, the quality and effect of the image uploaded by the terminal can be enhanced, so that pictures shot in different environments show different styles and effects. In this way, the terminal can break through its own imaging quality limit, an ordinary camera on the terminal can achieve the effect of professional single-lens reflex equipment, the cost of the mobile phone is reduced, and the shooting quality and shooting experience of the user are greatly improved.
An image processing system 1200 to which the present application relates is described below.
Referring to fig. 12, fig. 12 is a schematic diagram illustrating an architecture of an image processing system 1200 provided in an embodiment of the present application. As shown in fig. 12, the image processing system 1200 may include the terminal 100 and the cloud server 200.
The terminal 100 may include a camera 1211 and an image signal processor 1212.
The camera 1211 may be used to capture an optical signal when a user takes a picture, convert the optical signal into an electrical signal, and send the electrical signal to the image signal processor 1212.
The image signal processor 1212 may convert the electric signal transmitted from the camera 1211 into a digital image signal, also referred to as a first image, wherein the format of the first image may be a Raw image, a YUV image, or an RGB image, etc. The image signal processor 1212 may further acquire first environment information (including one or more of geographical location information, weather information, season information, time information, photographing parameters, photographing posture information, and the like) of the first image from one or more sensors (not shown in the drawing) of the terminal 100 and a network to which the terminal 100 is connected.
The cloud server 200 may include, among other things, an image processing model library 1221, a model selection module 1222, a 3D world model 1223, a rendering engine 1224, an image registration module 1225, a homologous processing module 1226, and a heterologous fusion module 1227.
The image processing model library 1221 may be used to store a plurality of image processing models divided according to shooting environments.
The model selecting module 1222 is configured to determine, from the image processing model library 1221, a first image processing model matching the data 1 in the first environment information according to the data 1 in the first environment information sent by the terminal 100. Wherein, the data 1 may be all or part of the first environment information. For example, when the first environment information includes geographical location information, weather information, season information, time information, photographing parameters, and photographing posture information, the data 1 may include the geographical location information and the weather information.
The 3D world model 1223 includes 3D map information of the real world.
The rendering engine 1224 may be configured to render a first high definition image from the 3D world model 1223 according to the data 2 of the first environment information of the first image. The similarity between the shooting content in the first high-definition image and the shooting content in the first image is within a preset range, and the shooting environment of the first high-definition image is the same as the shooting environment of the first image. Wherein, the data 2 may be all or part of the first environment information. For example, when the first environment information includes geographical position information, weather information, season information, time information, photographing parameters, and photographing posture information, the data 2 may include geographical position information, season information, time information, and photographing posture information.
The image registration module 1225 may be configured to perform image registration on the first high-definition image according to the first image, so as to obtain a second high-definition image.
The homologous processing module 1226 is configured to perform optimization processing on the first image sent by the terminal 100 according to the first image processing model selected by the model selecting module 1222, so as to obtain a second image.
The heterologous fusion module 1227 may be configured to perform image fusion on the second image processed by the homologous processing module 1226 and the second high-definition image, so as to obtain a third image. The cloud server 200 may send the third image to the terminal 100.
The image processing system 1200 provided in the embodiments of the application combines the processing capabilities of the terminal and the cloud server, makes full use of high-quality image data and image processing algorithms, and empowers the terminal's photographing and imaging system. Through the image uploaded by the terminal and its environment information (such as geographical position information, weather information, time information, shooting parameters, shooting posture information, and the like), the cloud server can match an image processing algorithm suitable for the image, as well as high-quality image data with the same shooting content and a similar shooting environment. With this image processing algorithm and high-quality image data, the quality and effect of the image uploaded by the terminal can be enhanced, so that pictures shot in different environments show different styles and effects. In this way, the terminal can break through its own imaging quality limit, an ordinary camera on the terminal can achieve the effect of professional single-lens reflex equipment, the cost of the mobile phone is reduced, and the shooting quality and shooting experience of the user are greatly improved.
The following description relates to a data construction system in the embodiments of the present application.
Referring to fig. 13, fig. 13 is a schematic diagram illustrating an architecture of a data construction system 1300 provided in an embodiment of the present application. As shown in fig. 13, the data construction system 1300 may include a data quality evaluation module 1301, a high definition image library 1302, an image database 1303 to be enhanced, an image retrieval module 1304, an image registration module 1305, an image pairing module 1306, a verification module 1307, and an image processing model library 1308.
The professional photographer or the photography enthusiast can upload the image and the environmental information thereof to the data quality evaluation module 1301.
The data quality evaluation module 1301 may be used to evaluate the image quality of the images uploaded by professional photographers or photography enthusiasts, remove low-quality images (for example, images with low definition or incomplete environment information), screen out high-definition images, and upload the screened high-definition images and their environment information to the high-definition image library 1302.
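As a hedged illustration of the screening the data quality evaluation module 1301 might apply, the sketch below rejects images whose sharpness (variance of the Laplacian) is low or whose environment information is incomplete; the metadata fields and the sharpness threshold are assumptions, not values specified in the application.

```python
import cv2
import numpy as np

REQUIRED_FIELDS = ("geo", "weather", "season", "time")  # assumed metadata schema
SHARPNESS_THRESHOLD = 100.0                             # assumed variance threshold

def passes_quality_check(image: np.ndarray, env_info: dict) -> bool:
    """Reject images with low definition (variance of the Laplacian) or with
    incomplete environment information; accepted images go to the HD library."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    has_metadata = all(env_info.get(f) for f in REQUIRED_FIELDS)
    return sharpness >= SHARPNESS_THRESHOLD and has_metadata
```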
The image database 1303 to be enhanced may store a plurality of images to be enhanced and environment information thereof.
The image retrieving module 1304 may be configured to retrieve an image to be enhanced and its environment information from the image database 1303 to be enhanced, and to retrieve, from the high-definition image library 1302, a high-definition image whose shooting content and environment information are similar to those of the image to be enhanced.
The image registration module 1305 may be configured to perform image registration on the high-definition image according to the image to be enhanced, so as to obtain a high-definition training image.
An image pairing module 1306 may be configured to bind the image to be enhanced and the high definition training image into a training image pair.
The verification module 1307 may be configured to filter out image pairs with large alignment deviations, and to classify the image pairs with small alignment deviations into the corresponding training set categories according to their environment information, so as to train the image processing models in the image processing model library 1308.
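One possible way for the verification module 1307 to measure alignment deviation is sketched below, using the mean absolute difference between a registered pair as a stand-in for the unspecified deviation criterion; the threshold and the environment-based bucketing key are assumptions.

```python
import numpy as np

DEVIATION_THRESHOLD = 12.0  # assumed mean-absolute-difference threshold

def keep_pair(image_to_enhance: np.ndarray,
              hd_training_image: np.ndarray) -> bool:
    """Keep a training pair only if the registered HD training image and the
    image to be enhanced deviate little on average; a large mean difference
    suggests the pair is misaligned and should be filtered out."""
    a = image_to_enhance.astype(np.float32)
    b = hd_training_image.astype(np.float32)
    if a.shape != b.shape:
        return False
    return float(np.mean(np.abs(a - b))) <= DEVIATION_THRESHOLD

def bucket_by_environment(env_info: dict) -> str:
    """Assign a surviving pair to a training-set category by environment labels."""
    return f"{env_info.get('geo', 'unknown')}/{env_info.get('weather', 'any')}"
```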
The data construction system 1300 provided by the embodiment of the application can optimize and update the image processing model library in the cloud server 200, and improve the training efficiency of the image processing model.
A schematic structural diagram of the cloud server 200 according to an embodiment of the present application is described below. Referring to fig. 14, fig. 14 is a schematic structural diagram of a cloud server 200 according to an embodiment of the present application. The cloud server 200 is applied to the image processing system 900 described in fig. 9, the image processing system 1000 described in fig. 10, the image processing system 1100 described in fig. 11, or the image processing system 1200 described in fig. 12.
As shown in fig. 14, the cloud server 200 includes one or more processors 201A, a communication interface 202A, and a memory 203A, where the processors 201A, the communication interface 202A, and the memory 203A may be connected by a bus or in other manners, and in this embodiment, the connection by the bus 204A is taken as an example. Wherein:
the processor 201A may be constituted by one or more general-purpose processors, such as CPUs. The processor 201A may be used to run the related program code of the image processing method.
The communication interface 202A may be a wired interface (e.g., an ethernet interface) or a wireless interface (e.g., a cellular network interface or a wireless local area network interface) for communicating with other nodes. In this embodiment, the communication interface 202A may be specifically used for communicating with the terminal 100.
It should be noted that the server shown in fig. 14 is only one implementation of the embodiments of the present application. In practical applications, the server may further include more or fewer components, which is not limited herein.
An image processing method according to an embodiment of the present application is exemplarily described below with reference to fig. 15 to 17.
Referring to fig. 15, fig. 15 is a schematic flowchart of an image processing method provided in an embodiment of the present application. In this embodiment, the terminal 100 sends image A and the shooting information of image A to the cloud server 200, and the cloud server 200 determines a reference image 1 from the high-definition image library, where the shooting information of the reference image 1 is similar to the shooting information of image A and its shooting content is consistent with the shooting content of image A. The cloud server 200 performs detail restoration processing on image A according to the reference image 1 to obtain an image C, and returns the image C to the terminal 100. As shown in fig. 15, the image processing method includes, but is not limited to, the following steps:
S1501, the terminal 100 detects a photographing instruction.
To facilitate acquisition of the photographing information of the photographed image A by the terminal 100, the GPS of the terminal 100 may be turned on and the terminal 100 may be connected to a network.
Optionally, the terminal 100 may install an Application (APP) for uploading image A and the photographing information of image A. Optionally, the application program may include, but is not limited to, a photographing application, a retouching application, and the like. The application on the terminal 100 may also be logged in to an account.
The user can initiate the photographing instruction through a touch screen operation, or through a key. The terminal 100 detects the photographing instruction and photographs image A. The terminal 100 can also acquire the shooting geographical position information of image A through the GPS. Optionally, the terminal 100 may further obtain, through a network, the weather condition information at the shooting geographical position as the shooting weather condition information of image A. Optionally, the terminal 100 may further acquire the shooting magnification of image A.
S1502, the terminal 100 captures image A.
Due to the limitation of the imaging device, when image A is captured at a high magnification, its resolution is low and its details are blurred.
S1503, the terminal 100 transmits image A and the photographing information of image A to the cloud server.
In some embodiments, the terminal 100 may send image A and the photographing information of image A to a cloud server through a shooting application or a retouching application, where the cloud server corresponds to the shooting application or the retouching application.
The cloud server may be configured with a high-definition image library. The high-definition image library stores a plurality of high-definition images, shooting information of each of the plurality of high-definition images, and a first feature and a second feature corresponding to each of the plurality of high-definition images. The first feature may be extracted by a search network, and the second feature may be extracted by a reordering network. The reordering network may be a more complex feature extraction network than the search network, so that the second feature extracted by the reordering network is finer than the first feature extracted by the search network, that is, the second feature can express more detailed features of the image. In this embodiment of the application, a rough, fast search may be performed by means of the first features extracted by the search network, and a more precise search may then be performed by means of the second features extracted by the reordering network, so as to determine the reference image 1 with the most similar shooting content.
Illustratively, high-definition photographing devices are used to acquire a plurality of high-definition images in each of a plurality of geographic areas, and the acquired high-definition images form the high-definition image library. The geographic areas may be divided at the granularity of large geographic regions; for example, Shenzhen, Guangzhou, and Chongqing may each be considered a large geographic region.
The first feature corresponding to each of the plurality of high-definition images is extracted by using a search network, for example, each of the plurality of high-definition images may be input to the search network, so as to obtain the first feature corresponding to each of the high-definition images.
The reordering network is adopted to extract the second feature corresponding to each of the plurality of high-definition images, for example, each of the plurality of high-definition images may be input into the reordering network, so as to obtain the second feature corresponding to each of the high-definition images.
The plurality of high-definition images, together with the first feature and the second feature corresponding to each of them, are classified and stored according to the shooting information of each high-definition image (the shooting information includes, but is not limited to, shooting geographical position information, shooting weather condition information, shooting magnification, and the like). The shooting weather condition information may indicate whether the high-definition image was shot on a sunny day, a cloudy day, a foggy day, a rainy day, or the like.
For convenience of retrieval, the plurality of high-definition images can be divided into a plurality of large categories according to the geographic region in which each high-definition image is shot; for example, all the high-definition images whose geographic region is Shenzhen are placed in one large category, all the high-definition images whose geographic region is Guangzhou are placed in another large category, and so on. It is understood that the geographic area may also be divided in other ways, for example according to the names of scenic spots, which is not limited in the embodiments of the present application.
All the high-definition images belonging to the same geographic area are further classified according to the shooting information of each high-definition image. For example, a plurality of high-definition pictures having the same or similar shooting magnifications, the same or similar GPS information, and the same weather condition information at the time of shooting may be classified into the same category. Of course, the reference shooting information used for dividing the categories may be selected according to the actual situation; for example, a plurality of high-definition pictures with the same or similar shooting magnifications and the same or similar GPS information may be classified into the same category without considering the weather condition information at the time of shooting, which is not limited in the embodiments of the present application.
Here, "same or similar shooting magnifications" may mean that the absolute value of the difference between the shooting magnifications is less than or equal to a set magnification threshold, and "same or similar GPS information" may mean that the distance between the geographic locations indicated by the GPS information is less than or equal to a set distance threshold.
It can be understood that the GPS information in the embodiments of the present application refers to GPS information of a geographical location where a photographing device (such as the terminal 100 or a high-definition photographing device deployed in multiple geographical areas) is located when an image is taken.
When the cloud server stores each high-definition image in a classified manner, the first feature and the second feature corresponding to the high-definition image can be stored together, so as to facilitate subsequent retrieval and use. In the embodiments of the present application, the resolution of the high-definition images stored in the high-definition image library is greater than the resolution of image A captured by the terminal 100.
S1504, the cloud server 200 determines at least one image B according to the shooting information of image A.
In some embodiments, the cloud server 200 receives image A and the shooting information of image A sent by the terminal 100, and determines at least one image B from the high-definition image library according to the shooting information of image A. The matching degree between the shooting information of each image B in the at least one image B and the shooting information of image A is greater than a first threshold.
Wherein the shooting information may include one or more of the following information: shooting magnification, shooting geographical position information, shooting weather condition information and the like.
Optionally, when the shooting information includes shooting geographical position information, the matching degree between the shooting geographical position information of each image B in the at least one image B and the shooting geographical position information of image A is greater than the first threshold; for example, the distance between the geographical location where image B was shot and the geographical location where image A was shot is less than or equal to a distance threshold. Matching the geographical position information helps ensure that the shooting angle of the reference image 1 subsequently retrieved as most similar to the shooting content of image A is consistent with the shooting angle of image A.
Optionally, when the shooting information includes a shooting magnification, the matching degree between the shooting magnification of each image B in the at least one image B and the shooting magnification of image A is greater than the first threshold; for example, the absolute value of the difference between the shooting magnification of image B and the shooting magnification of image A is less than or equal to a magnification threshold. Matching the shooting magnifications ensures that the magnification of image A does not differ greatly from that of the at least one image B, and correspondingly that the magnification of the reference image 1 subsequently retrieved as most similar to the shooting content of image A is consistent with the magnification of image A.
Optionally, when the shooting information includes shooting weather condition information, the matching degree between the shooting weather condition information of each image B in the at least one image B and the shooting weather condition information of image A is greater than the first threshold; for example, all the images were shot on a sunny day, all were shot on a rainy day, or all were shot on a foggy day. Matching the weather condition information ensures that image A and the at least one image B have consistent transparency, and correspondingly that the transparency of the reference image 1 subsequently retrieved as most similar to the shooting content of image A is consistent with the transparency of image A.
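Putting the three optional criteria together, a candidate image B could be screened with a predicate like the following Python sketch; the distance and magnification thresholds are assumed values, and the haversine distance is used here only as one straightforward way to compare GPS positions.

```python
import math

DIST_THRESHOLD_M = 200.0  # assumed distance threshold
MAG_THRESHOLD = 0.5       # assumed magnification threshold

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS points."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def shooting_info_matches(info_a: dict, info_b: dict) -> bool:
    """True if image B's shooting info is highly matched with image A's:
    nearby GPS position, similar magnification, and the same weather."""
    close = haversine_m(*info_a["gps"], *info_b["gps"]) <= DIST_THRESHOLD_M
    similar_mag = abs(info_a["magnification"] - info_b["magnification"]) <= MAG_THRESHOLD
    same_weather = info_a["weather"] == info_b["weather"]
    return close and similar_mag and same_weather
```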
Optionally, the high-definition images in the high-definition image library may be classified and stored according to the shooting information; for details, refer to the description of the foregoing embodiments, which is not repeated herein. When determining at least one image B according to the shooting information of image A, the cloud server 200 may determine the corresponding category according to the shooting information of image A, and acquire the at least one image B from the determined category.
For example, if the shooting information of image A includes shooting geographical position information, the geographic area to which the shooting geographical position belongs is determined according to the shooting geographical position information, and at least one image B whose shooting information is highly matched is then determined from the high-definition images belonging to that geographic area. For another example, when a plurality of high-definition pictures with the same or similar shooting magnifications, the same or similar GPS information, and the same weather condition information at the time of shooting are classified into the same category, with the shooting magnification, the GPS information, and the weather condition information serving as classification labels, the cloud server 200 may determine the corresponding category by matching the shooting information of image A against each classification label, thereby determining the at least one image B from that category and acquiring the stored first feature and second feature corresponding to each image B in the at least one image B.
S1505, the cloud server 200 determines, from the at least one image B, a reference image 1 whose shooting content is most similar to that of image A.
In some embodiments, the cloud server 200 may extract the first feature corresponding to image A through the search network, and respectively calculate the similarity between the first feature corresponding to image A and the first feature corresponding to each image B in the at least one image B, so as to obtain the M images B with the largest similarity. It can be understood that the first feature corresponding to each image B may be extracted in advance through the search network and stored in the high-definition image library; refer to the description of the foregoing embodiments, which is not repeated herein.
For example, the cloud server 200 may sort the similarities between the first feature corresponding to image A and the first features corresponding to the images B in descending order, and take the images B corresponding to the top M similarities as the M images B with the largest similarity.
It can be understood that M images B whose shooting content is roughly similar to that of image A can be obtained by measuring similarity on the first features extracted by the search network.
To determine more accurately, from the M images B, the image B whose shooting content is most similar to that of image A, in this embodiment of the application the cloud server 200 may further extract the second feature corresponding to image A through the reordering network, and respectively calculate the similarity between the second feature corresponding to image A and the second feature corresponding to each of the M images B, so as to obtain the image B with the maximum similarity as the reference image 1. It can be understood that the second feature corresponding to each image B may be extracted in advance through the reordering network and stored in the high-definition image library; refer to the description of the foregoing embodiments, which is not repeated herein.
For example, the cloud server 200 may sort the similarities between the second feature corresponding to image A and the second features corresponding to the M images B in descending order, and take the image B with the highest similarity as the reference image 1.
It should be noted that, because the second feature is finer than the first feature, it can express more detailed features of the image. Therefore, the result of ranking similarities by the first feature may differ from the result of ranking similarities by the second feature. Roughly similar images B can be obtained by similarity matching on the first features, and the most similar reference image 1 can be obtained by similarity matching on the second features. The shooting information of the reference image 1 is highly matched with that of image A, and their shooting contents are very close. Therefore, with the method of the embodiments of the application, a high-resolution reference image 1 whose shooting angle, shooting content, picture transparency, and the like are very similar to those of image A can be obtained, so that the low-resolution image A can be better restored in detail.
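The two-stage retrieval can be summarized by the following sketch: a coarse pass keeps the top-M candidates by first-feature similarity, and a rerank pass picks reference image 1 by second-feature similarity. Cosine similarity and the precomputed feature fields are assumptions; the search network and reordering network themselves are not specified here.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def find_reference(first_feat_a: np.ndarray, second_feat_a: np.ndarray,
                   candidates: list, m: int = 10) -> int:
    """candidates: list of dicts holding precomputed 'first_feat' and 'second_feat'
    for each image B. The coarse stage keeps the top-M by first-feature similarity;
    the rerank stage returns the index of reference image 1 by second-feature similarity."""
    coarse = sorted(range(len(candidates)),
                    key=lambda i: cosine(first_feat_a, candidates[i]["first_feat"]),
                    reverse=True)[:m]
    return max(coarse,
               key=lambda i: cosine(second_feat_a, candidates[i]["second_feat"]))
```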
S1506, the cloud server 200 performs detail restoration processing on image A according to the reference image 1, to obtain an image C.
In some embodiments, the resolution of image A is relatively low and, correspondingly, its definition is relatively low. The cloud server 200 performs detail restoration processing on image A according to the reference image 1 to obtain an image C, where the resolution of the image C is greater than that of image A and, correspondingly, the definition of the image C is greater than that of image A.
Optionally, the reference image 1 and image A may be input to a deep learning neural network, the high-frequency information of the reference image 1 is extracted through the deep learning neural network, and detail recovery is performed on image A using this high-frequency information, so as to obtain an image C with higher definition.
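The application does not detail the structure of this deep learning neural network. Purely to illustrate the idea of feeding image A and the registered reference image 1 into a network that restores detail, the following PyTorch sketch concatenates the two inputs and predicts a residual that is added back to image A; the architecture shown is a toy stand-in, not the network of the application.

```python
import torch
import torch.nn as nn

class RefDetailRestorer(nn.Module):
    """Toy stand-in for the deep-learning network: image A and the registered
    reference image 1 are concatenated along the channel axis, and the network
    predicts a residual that is added back to image A to sharpen its details."""
    def __init__(self, channels: int = 3, width: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels * 2, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, image_a: torch.Tensor, ref1: torch.Tensor) -> torch.Tensor:
        residual = self.body(torch.cat([image_a, ref1], dim=1))
        return torch.clamp(image_a + residual, 0.0, 1.0)  # image C

# Usage: tensors shaped (N, 3, H, W), with ref1 already registered and resized to image A.
image_c = RefDetailRestorer()(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
```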
For example, before the detail restoration processing is performed on image A according to the reference image 1, the proportion of the shot content of the reference image 1 within the shot content of image A may be determined; this proportion reflects whether the shot content included in the retrieved reference image 1 covers all of the shot content included in image A.
If the proportion of the shot content of the reference image 1 within the shot content of image A is smaller than a second threshold, the shot content matching the shot content of the reference image 1 is removed from the shot content of image A to obtain the removed image A. The second threshold may be set according to actual needs; for example, the second threshold may be set to 80%. If the proportion of the shot content of the reference image 1 within the shot content of image A is less than 80%, it indicates that the shot content included in the reference image 1 does not cover all of the shot content included in image A, and a further search may be performed for the shot content of image A other than the shot content included in the reference image 1, that is, for the removed image A.
Further, a reference image 2 whose shot content is most similar to that of the removed image A is determined from the at least one image B whose shooting information is highly matched.
The method for determining the reference image 2 whose shot content is most similar to the shot content of the removed image A from the at least one image B may refer to the method for determining the reference image 1 in the foregoing embodiments, and is not repeated herein.
After the reference image 1 and the reference image 2 are obtained, the detail restoration processing may be performed on image A according to the reference image 1 and the reference image 2 to obtain the image C.
It can be understood that, after the reference image 1 and the reference image 2 are obtained, the proportion of the shot content of the reference image 1 and the reference image 2 within the shot content of image A may be further determined. If this proportion is still smaller than the second threshold, the shot content matching the shot content of the reference image 1 and the reference image 2 may be further removed from the shot content of image A to obtain the removed image A, and a reference image 3 most similar to the shot content of the removed image A is further retrieved from the at least one image B. This process repeats until the proportion of the shot content of the obtained at least one reference image within the shot content of image A is greater than or equal to the second threshold, after which detail restoration processing is performed on image A according to the obtained at least one reference image to obtain an image C with higher resolution.
Here, the proportion of the shot content of the at least one reference image within the shot content of image A is to be understood as the ratio of the intersection of the shot content of the at least one reference image and the shot content of image A to the shot content of image A.
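The iterative retrieval described above can be expressed as a loop that keeps adding reference images until the coverage proportion reaches the second threshold, as in the following sketch; the three callables and the 80% threshold are placeholders for the retrieval, coverage-ratio, and content-removal operations, which are only described abstractly here.

```python
SECOND_THRESHOLD = 0.8  # assumed value for the second threshold (80% coverage)

def collect_references(image_a, candidates, retrieve_most_similar,
                       coverage_of, remove_matched, max_refs: int = 5):
    """Iteratively retrieve reference images (reference image 1, 2, 3, ...) until
    their joint shot content covers at least SECOND_THRESHOLD of image A.
    The three callables stand in for the retrieval, coverage-ratio and
    content-removal operations described in the text above."""
    references, remaining = [], image_a
    while len(references) < max_refs:
        ref = retrieve_most_similar(remaining, candidates)
        if ref is None:
            break
        references.append(ref)
        if coverage_of(references, image_a) >= SECOND_THRESHOLD:
            break
        remaining = remove_matched(image_a, references)
    return references
```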
S1507, the cloud server 200 transmits the image C to the terminal 100.
S1508, the terminal 100 displays the image C.
In some embodiments, the terminal 100 receives the image C and displays the image C. Optionally, for the user to compare, the terminal 100 may display both image A and the image C: their shot contents are the same, but the image C has higher definition.
By implementing the embodiment shown in fig. 15, the terminal only needs to send image A, whose shooting resolution is relatively low, to the cloud server; the cloud server retrieves a reference image whose shooting information is highly matched and whose shooting content is highly similar, performs detail recovery processing on image A according to the reference image to obtain the image C, and returns the image C to the terminal.
Referring to fig. 16, fig. 16 is another schematic flowchart of an image processing method provided in an embodiment of the present application. In this embodiment, the terminal 100 sends the shooting information of image A to the cloud server 200, and the cloud server 200 determines at least one image B from the high-definition image library according to the shooting information of image A, where the matching degree between the shooting information of each image B and the shooting information of image A is greater than a first threshold. The cloud server 200 transmits the at least one image B to the terminal 100, and the terminal 100 determines, from the at least one image B, a reference image 1 whose shooting content is most similar to that of image A. As shown in fig. 16, the image processing method includes, but is not limited to, the following steps:
S1601, the terminal 100 detects a photographing instruction.
S1602, the terminal 100 captures image A.
Steps S1601 to S1602 refer to steps S1501 to S1502 of the embodiment in fig. 15, which are not described herein again.
S1603, the terminal 100 transmits the photographing information of image A to the cloud server 200.
In some embodiments, the terminal 100 may send the shooting information of image A to a cloud server through a shooting application or a retouching application, where the cloud server corresponds to the shooting application or the retouching application.
The cloud server may be configured with a high-definition image library, and the high-definition image library stores a plurality of high-definition images and the shooting information of each of the plurality of high-definition images. Illustratively, high-definition photographing devices are used to acquire a plurality of high-definition images in each of a plurality of geographic areas, and the acquired high-definition images form the high-definition image library. The geographic areas may be divided at the granularity of large geographic regions; for example, Shenzhen, Guangzhou, and Chongqing may each be considered a large geographic region.
The plurality of high-definition images are classified and stored according to shooting information of each high-definition image (the shooting information includes but is not limited to shooting geographical position information, shooting weather condition information, shooting magnification and the like). The shooting weather condition information may include a sunny day, a cloudy day, a fog day, a rainy day, or the like when the high-definition image is shot.
For convenience of retrieval, the plurality of high-definition images can be divided into a plurality of large categories according to the geographic region in which each high-definition image is shot; for example, all the high-definition images whose geographic region is Shenzhen are placed in one large category, all the high-definition images whose geographic region is Guangzhou are placed in another large category, and so on. It is understood that the geographic area may also be divided in other ways, for example according to the names of scenic spots, which is not limited in the embodiments of the present application.
All the high-definition images belonging to the same geographic area are further classified according to the shooting information of each high-definition image. For example, a plurality of high-definition pictures having the same or similar shooting magnifications, the same or similar GPS information, and the same weather condition information at the time of shooting may be classified into the same category. Of course, the reference shooting information used for dividing the categories may be selected according to the actual situation; for example, a plurality of high-definition pictures with the same or similar shooting magnifications and the same or similar GPS information may be classified into the same category without considering the weather condition information at the time of shooting, which is not limited in the embodiments of the present application.
Here, "same or similar shooting magnifications" may mean that the absolute value of the difference between the shooting magnifications is less than or equal to a set magnification threshold, and "same or similar GPS information" may mean that the distance between the geographic locations indicated by the GPS information is less than or equal to a set distance threshold.
It can be understood that the GPS information in the embodiments of the present application refers to GPS information of a geographical location where a photographing device (such as the terminal 100 or a high-definition photographing device deployed in multiple geographical areas) is located when an image is taken.
In the embodiments of the present application, the resolution of the high-definition images stored in the high-definition image library is greater than the resolution of image A captured by the terminal 100.
S1604, the cloud server 200 determines at least one image B according to the shooting information of image A.
The cloud server 200 may determine at least one image B whose shooting information is highly matched with the shooting information of image A; for the specific determination method, refer to step S1504 in the embodiment of fig. 15, which is not repeated herein.
S1605, the cloud server 200 transmits the at least one image B to the terminal 100.
S1606, the terminal 100 determines, from the at least one image B, the reference image 1 whose shot content is most similar to the shot content of image A.
In some embodiments, the terminal 100 receives at least one image B transmitted by the cloud server 200.
Further, optionally, the terminal 100 may use a search network to extract the first feature corresponding to each image B in the at least one image B and the first feature corresponding to image A. For example, each image B in the at least one image B may be input to the search network to obtain the first feature corresponding to that image B, and image A may be input to the search network to obtain the first feature corresponding to image A.
The terminal 100 calculates the similarity between the first feature corresponding to image A and the first feature corresponding to each image B in the at least one image B, so as to obtain the M images B with the largest similarity.
For example, the terminal 100 may sort the similarities between the first feature corresponding to image A and the first features corresponding to the images B in descending order, and take the images B corresponding to the top M similarities as the M images B with the largest similarity.
It can be understood that M images B substantially similar to the shot content of the image a can be obtained by performing similarity measurement on the first features extracted by the retrieval network.
In order to more accurately determine the image B with the shooting content most similar to that of the image a from the M images B, in this embodiment of the application, the terminal 100 may respectively extract the second feature corresponding to each image B of the M images B and the second feature corresponding to the image a by using a reordering network, for example, each image B of the M images B may be respectively input to the reordering network, so as to respectively obtain the second feature corresponding to each image B. And inputting the image A into the reordering network, thereby obtaining a second characteristic corresponding to the image A.
The terminal 100 calculates the similarity between the second feature corresponding to the image a and the second feature corresponding to each image B in the M images B, respectively, to obtain an image B with the maximum similarity, which is used as the reference image 1.
For example, the terminal 100 may sort the similarities between the second feature corresponding to the image a and the second feature corresponding to each image B in the M images B in descending order, and use the image B with the highest similarity as the reference image 1.
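The two-stage matching described above (coarse ranking on first features from the retrieval network, then re-ranking the top-M candidates on second features from the reordering network) can be sketched as follows. This is not the patent's implementation: `retrieval_net` and `rerank_net` are assumed stand-ins for the two networks, cosine similarity is an assumed similarity measure, and the default M of 15 merely follows the example used later in this description.

```python
# Minimal sketch (illustrative only) of coarse retrieval followed by re-ranking.
# retrieval_net / rerank_net are assumed callables returning 1-D feature vectors.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def select_reference(image_a, images_b, retrieval_net, rerank_net, m=15):
    # Stage 1: similarity on first features, keep the top-M images B.
    feat_a = retrieval_net(image_a)
    coarse = [(cosine_similarity(feat_a, retrieval_net(b)), b) for b in images_b]
    coarse.sort(key=lambda x: x[0], reverse=True)
    top_m = [b for _, b in coarse[:m]]

    # Stage 2: finer similarity on second features, keep the single best image B.
    feat_a2 = rerank_net(image_a)
    return max(top_m, key=lambda b: cosine_similarity(feat_a2, rerank_net(b)))  # reference image 1
```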
S1607, the terminal 100 performs detail restoration processing on the image a according to the reference image 1, to obtain an image C.
The specific step of the terminal 100 performing the detail recovery processing on the image a may refer to step S1506 in the embodiment of fig. 15, which is not described herein again.
S1608, the terminal 100 displays the image C.
For step S1608, please refer to step S1508 in the embodiment of fig. 15, which is not described herein again.
By implementing the embodiment of fig. 16, the cloud server returns the at least one image B with highly matched shooting information to the terminal, and the terminal extracts the features corresponding to the at least one image B and the features corresponding to the image a according to its own needs, without performing feature extraction on all images in the whole high-definition image library, thereby avoiding waste of processing resources.
Referring to fig. 17, a further schematic flowchart of an image processing method provided in an embodiment of the present application is shown, in which the terminal 100 sends the image a and the shooting information of the image a to the cloud server 200, and the cloud server 200 determines at least one image B from the high-definition image library according to the shooting information of the image a, where the matching degree between the shooting information of the image B and the shooting information of the image a is greater than the first threshold. The cloud server 200 further extracts the first feature and the second feature corresponding to the image a, transmits the first feature and the second feature corresponding to the image a and the first feature and the second feature corresponding to each image B to the terminal 100, and the terminal 100 then determines a reference image 1 whose shooting content is most similar to that of the image a from the at least one image B. As shown, the image processing method includes, but is not limited to, the following steps:
S1701, the terminal 100 detects a photographing instruction.
S1702, the terminal 100 captures an image a.
S1703, the terminal 100 transmits the image a and the photographing information of the image a to the cloud server 200.
S1704, the cloud server 200 determines at least one image B according to the shooting information of the image a.
For steps S1701 to S1704, please refer to steps S1601 to S1604 in the embodiment of fig. 16, which are not described herein again.
S1705, the cloud server sends the first feature and the second feature corresponding to the image a and the first feature and the second feature corresponding to each image B to the terminal 100.
In some embodiments, the cloud server 200 may extract a first feature corresponding to the image a through a search network and extract a second feature corresponding to the image a through a reorder network.
The cloud server 200 sends the first feature and the second feature corresponding to the image a and the first feature and the second feature corresponding to each image B in the at least one image B stored in the high-definition image library to the terminal 100.
S1706, the terminal 100 determines the reference image 1 whose shot content is most similar to the shot content of the image a from among the at least one image B.
In some embodiments, the terminal 100 determines the reference image 1 having the shooting content most similar to the shooting content of the image a according to the received first and second features corresponding to the image a and the first and second features corresponding to each of the at least one image B. For the specific determination method, reference may be made to step S1505 in fig. 15, which is not described herein again.
S1707, the terminal 100 performs detail restoration processing on the image a according to the reference image 1 to obtain an image C.
In some embodiments, the process of the terminal 100 performing the detail recovery processing on the image a according to the reference image 1 may refer to step S1506 in the embodiment of fig. 15, and is not described herein again.
S1708, the terminal 100 displays the image C.
Step S1708 please refer to step S1508 in fig. 15, which is not described herein again.
By implementing the embodiment shown in fig. 17, the cloud server extracts the features corresponding to the captured image and the features corresponding to the stored images, and the terminal performs matching according to the features corresponding to the image a and the features corresponding to the at least one image B returned by the cloud server to obtain the reference image 1, which avoids occupying a large amount of terminal memory to store image features.
Fig. 18 is a block diagram of an image processing scheme according to an embodiment of the present disclosure. As shown in the figure, the first features of the high-definition images in the high-definition image library of the cloud server are extracted offline through a retrieval network.
The cloud server can extract the first feature (namely, a feature vector) corresponding to the image shot by the mobile phone online through the retrieval network, and can also screen out a feature vector set from the high-definition image library according to the shooting information (including real-time GPS information, weather condition information, shooting magnification, and the like) of the image shot by the mobile phone, where the matching degree between the shooting information of the high-definition images in the feature vector set and the shooting information of the image shot by the mobile phone is greater than the first threshold. Online extraction of the first feature means that the cloud server, over a network connection with the mobile phone, obtains the image shot by the mobile phone and extracts the first feature of the image.
Similarity measurement is then performed between the first features in the feature vector set and the first feature corresponding to the image shot by the mobile phone to obtain the M images with the maximum similarity, where M is, for example, 15, so that the 15 most similar high-definition images are obtained.
For the 15 high-definition images with the maximum similarity, the second feature corresponding to each high-definition image is extracted through a reordering network, and the second feature corresponding to the image shot by the mobile phone is also extracted through the reordering network. The reference image with the maximum similarity is then obtained according to the similarity measurement between the second feature corresponding to each of the 15 high-definition images and the second feature corresponding to the image shot by the mobile phone. The reference image can be a high-definition image whose shooting angle is highly consistent with that of the image shot by the mobile phone, whose shooting magnification is similar, and whose shooting weather condition information is consistent.
The details of the image shot by the mobile phone are then restored according to the reference image, so as to obtain an image with higher definition than the image shot by the mobile phone.
Referring to fig. 19, which is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure, as shown in the figure, the image processing apparatus may include an obtaining module 1901, a first determining module 1902, and a second determining module 1903.
An acquiring module 1901, configured to acquire a captured image a and capturing information of the image a;
a first determining module 1902, configured to determine at least one image B according to the shooting information of the image a, where a matching degree between the shooting information of the image B and the shooting information of the image a is greater than a first threshold, and a resolution of the image B is greater than a resolution of the image a;
a second determining module 1903, configured to determine, from the at least one image B, a reference image 1 whose shooting content is most similar to that of the image a.
Optionally, the shooting information includes one or more of the following information: shooting magnification, shooting geographical position information or shooting weather condition information.
Optionally, the image processing apparatus may further include:
a restoring processing module, configured to perform detail restoring processing on the image a according to the reference image 1 to obtain an image C, where a resolution of the image C is greater than a resolution of the image a;
and the output module is used for outputting the image C.
Optionally, the second determining module 1903 may include:
an obtaining unit, configured to obtain a first feature corresponding to each image B in the at least one image B and a first feature corresponding to the image a, respectively, where one image B corresponds to one first feature;
A first determining unit, configured to determine, from the at least one image B, M images B whose captured contents are most similar to the captured contents of the image a according to a similarity between a first feature corresponding to each image B in the at least one image B and a first feature corresponding to the image a, where M is an integer greater than or equal to 1;
a second determination unit configured to determine, from the M images B, a reference image 1 whose shot content is most similar to that of the image a.
Optionally, the second determining unit is specifically configured to:
respectively acquiring a second feature corresponding to each image B in the M images B and a second feature corresponding to the image A, wherein one image B corresponds to one second feature;
determining an image B with the shooting content most similar to that of the image A from the M images B according to the similarity between the second feature corresponding to each image B in the M images B and the second feature corresponding to the image A, and taking the image B as a reference image 1;
wherein the second feature is finer than the first feature.
Optionally, the image processing apparatus may further include:
a third determining module, configured to determine a ratio of the captured content of the reference image 1 to the captured content of the image a;
A removal processing module, configured to remove, if the ratio is smaller than a second threshold, the shot content that matches the shot content of the reference image 1 from the shot content of the image a, and obtain a removed image a;
a fourth determining module, configured to determine, from the at least one image B, a reference image 2 whose shooting content is most similar to that of the removed image a;
the restoration processing module is specifically configured to perform detail restoration processing on the image a according to the reference image 1 and the reference image 2 to obtain an image C.
It is understood that each module and/or unit described above may be software, hardware, or a combination of software and hardware. For the detailed description of each step, reference may be made to the description of the method embodiment, which is not repeated herein.
An image processing method according to another embodiment of the present application is described below.
When taking a photograph or recording a video, the terminal 100 generally uses a lower zoom magnification in order to capture a larger field of view (FOV): the smaller the zoom magnification used, the larger the field of view of the captured picture and the more image content the picture contains. For example, the terminal 100 may photograph with a wide-angle lens or a super wide-angle lens, thereby reducing the zoom magnification and increasing the FOV of the captured picture. However, the field angle of the captured image is inversely related to its image quality. Relying on the image pickup device on the terminal 100 alone, a captured image cannot have both a large field angle and high-definition image quality. For example, when a photograph or a video is taken with a wide-angle camera, although the field angle of the picture is large, the picture is blurred, the detailed texture of the shot object is unclear, and the detailed texture of distant views is especially blurred.
Therefore, an embodiment of the present application provides an image processing method in which a low-definition image captured on the terminal 100 at a low zoom magnification (i.e., with a large field angle) is transmitted to the cloud server 200 and enhanced with the help of the high-definition image library on the cloud server 200. The cloud server 200 cuts the low-definition image with the low zoom magnification into blocks based on the zoom magnification of the high-definition images in the high-definition image library to obtain a plurality of image blocks. For each image block, the cloud server performs retrieval and matching in the high-definition image library to obtain one or more high-definition reference images, and then improves the image quality through an image enhancement network based on the high-definition reference images to obtain a plurality of high-definition image blocks. Finally, the cloud server splices the plurality of high-definition image blocks into a super-definition image with larger resolution and higher definition, and returns the super-definition image to the terminal 100. In this way, with the processing capability of the cloud server 200, the terminal 100 can capture a picture or video that has a large field angle and, at the same time, the rich detail texture and sharpness that are otherwise obtained only with high-power zooming.
Fig. 20 is a flowchart illustrating another image processing method provided in the embodiment of the present application.
As shown in fig. 20, the method may include, but is not limited to, the following steps:
s2001, the terminal 100 detects the photographing instruction.
The terminal 100 may install an Application (APP) for uploading a low-definition image. Optionally, the application program may include, but is not limited to, a photographing application, a cropping application, and the like. The application on the terminal 100 may also log in to the account.
The user can initiate the photographing instruction through touch screen operation, or the user can also initiate the photographing instruction through a key. The terminal 100 detects the photographing instruction and photographs the low-definition image.
S2002, the terminal 100, after detecting the photographing instruction, photographs the low-definition image and acquires photographing information of the low-definition image.
Wherein the resolution of the low-definition image is relatively low and the details are relatively blurred due to the limitations of the imaging device on the terminal 100. The shooting information of the low-definition image includes zoom magnification, and one or more of the following: geographical position information, weather information, season information, time information, photographing parameters (e.g., aperture, exposure time, etc.), photographing posture information, and the like. For example, the terminal 100 may capture the low-definition image at a 1-fold zoom magnification.
The smaller the zoom magnification, the larger the field angle of the low-definition image, the less detail the low-definition image contains, and the more blurred its picture is.
S2003, the terminal 100 transmits the low-definition image and the shooting information of the low-definition image to the cloud server 200.
After the low-definition image is captured, the terminal 100 may automatically transmit the low-definition image and its shooting information to the cloud server 200. Alternatively, after the low-definition image is captured, the terminal 100 may upload it to the cloud server 200 in response to a user operation.
In a possible implementation manner, the terminal 100 may also upload the low-definition images stored in the gallery to the cloud server 200.
S2004, the cloud server 200 recognizes semantic information of the low-definition image.
After receiving the low-definition image sent by the terminal 100, the cloud server 200 identifies areas where image contents of different types are located in the low-definition image through a semantic segmentation model, and obtains a semantic segmentation mask (mask) map of the low-definition image. The semantic segmentation mask (mask) map may mark regions in the low-definition image where different classes of image content are located. The cloud server 200 may mark the category of the image content in the low-definition image.
For example, as shown in fig. 21A, the category of the image content in the low-definition image may include sky, buildings, ground, and the like. The cloud server 200 may identify regions of the sky, the buildings, and the ground in the low-definition image through the semantic segmentation model, so as to obtain a semantic segmentation mask image.
As shown in fig. 21B, the semantic segmentation mask map includes an area 2101, an area 2102, and an area 2103. The area 2101 is used to indicate the area where the sky is located in the low-definition image, the area 2102 is used to indicate the area where the building is located in the low-definition image, and the area 2103 is used to indicate the area where the ground is located in the low-definition image.
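As an illustration of how such semantic information might be obtained, the following sketch uses an off-the-shelf torchvision segmentation model as a stand-in for the semantic segmentation model. The patent does not specify the model; the choice of DeepLabV3, the preprocessing constants, and the mapping of class indices to categories such as sky, building, and ground are all assumptions.

```python
# Minimal sketch (illustrative only): producing a per-pixel class-index mask.
# Assumes a recent torchvision; DeepLabV3 is a stand-in for the patent's model.
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def segmentation_mask(pil_image):
    """Return an [H, W] tensor of class indices for the low-definition image."""
    x = preprocess(pil_image).unsqueeze(0)
    with torch.no_grad():
        out = model(x)["out"]            # [1, num_classes, H, W]
    return out.argmax(dim=1).squeeze(0)  # per-pixel category mask
```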
S2005, the cloud server 200 divides the low-definition image into N image blocks of a specified magnification based on the semantic information and the zoom ratio information of the high-definition image library.
A high-definition image library may be configured in the cloud server 200, and a plurality of high-definition images and shooting information of each of the plurality of high-definition images are stored in the high-definition image library. The shooting information of the high-definition image can comprise one or more of the following information: zoom magnification, geographical position information, weather information, season information, time information, shooting parameters (e.g., aperture, exposure time, etc.), shooting attitude information, and the like. When the cloud server 200 is configured with the high-definition image library, all of the high-definition image libraries may be at a fixed zoom magnification (for example, a zoom magnification of 10 times).
The cloud server 200 may determine the number of image blocks based on the zoom magnification of the high-definition images in the high-definition image library and the zoom magnification of the low-definition image. The zoom magnification of the high-definition image is greater than or equal to that of the low-definition image, and the larger the difference between the two zoom magnifications, the larger the number of image blocks. The zoom magnification of each image block corresponds to or is similar to that of the high-definition image, or satisfies a preset proportional relationship with it.
For example, the zoom magnification of the high-definition image may be 10 times of zoom magnification, and the zoom magnification of the low-definition image may be 1 times of zoom magnification, and the cloud server 200 may divide the low-definition image into N image blocks according to a window of 10 times of zoom magnification, where the size of each image block is the same as the window of 10 times of zoom magnification.
As shown in fig. 21C, the low-definition image may be divided into N image blocks. Fig. 21C shows only some of the N image blocks by way of example. The image block 2131 may be specifically as shown in fig. 21D.
The cloud server 200 may divide the semantic segmentation mask map of the low-definition image with the same window size, so as to obtain a mask block corresponding to each image block. The cloud server 200 may label the category of each image block.
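The block-division step might look like the following sketch, which cuts both the low-definition image and its mask map with the same window. The rule for deriving the window size from the two zoom magnifications is an assumption; the patent only requires that each block's zoom magnification correspond to, resemble, or be proportionally related to that of the high-definition images.

```python
# Minimal sketch (illustrative only): cutting the image and its mask into blocks.
# The window-size rule (linear ratio of the two zoom magnifications) is assumed.
import numpy as np

def split_into_blocks(image, mask, hd_zoom, ld_zoom):
    """image: [H, W, 3] array; mask: [H, W] array; returns a list of (block, mask_block)."""
    h, w = image.shape[:2]
    ratio = max(1, int(round(hd_zoom / ld_zoom)))   # e.g. 10x vs 1x -> 10 blocks per side
    bh, bw = h // ratio, w // ratio                 # assumed block (window) size
    blocks = []
    for i in range(ratio):
        for j in range(ratio):
            ys, xs = slice(i * bh, (i + 1) * bh), slice(j * bw, (j + 1) * bw)
            blocks.append((image[ys, xs], mask[ys, xs]))
    return blocks
```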
And S2006, the cloud server 200 retrieves T high-definition retrieval images of each image block from the high-definition image library based on the semantic information, wherein T is a positive integer.
The semantic information may include a category for each image block in the low-definition image. In some embodiments, the cloud server 200 may employ different retrieval strategies for different categories of image blocks.
For example, the categories of image blocks may include sky, buildings, and ground. Since the texture of the sky region and the texture of the ground region are relatively simple, the cloud server 200 may not perform image retrieval and detail restoration for image blocks of the "sky" and "ground" categories. Since the texture details of the building area are rich, the cloud server 200 may retrieve, for an image block of the category "building", a high-definition retrieval image corresponding to the image block of the category "building".
The cloud server 200 may retrieve T high-definition search images corresponding to image blocks from the high-definition image library according to the shooting information (which may also be referred to as first environment information in other embodiments) of the low-definition images and the image blocks.
In a possible implementation manner, the cloud server 200 determines at least one first matching image from the high-definition image library based on the shooting information of the low-definition image, where a matching degree between the shooting information of the first matching image and the shooting information of the low-definition image is greater than a first threshold, and a resolution of the first matching image is greater than a resolution of the low-definition image; the cloud server respectively obtains a first feature corresponding to each first matching image in at least one first matching image and a first feature corresponding to the image block of the first type, wherein one first matching image corresponds to one first feature. The cloud server determines T high-definition retrieval images with shooting contents most similar to those of the first type of image blocks from the at least one first matching image based on the similarity between the first features corresponding to each first matching image in the at least one first matching image and the first features corresponding to the first image blocks.
In a possible implementation manner, the cloud server determines, based on a similarity between a first feature corresponding to each of the at least one first matching image and a first feature corresponding to the image block of the first type, at least one second matching image that is most similar to the image block of the first type on the first feature. The cloud server respectively obtains a second feature corresponding to each second matching image in at least one second matching image and a second feature corresponding to the image block of the first type, wherein one second matching image corresponds to one second feature. And the cloud server determines the T high-definition retrieval images which are most similar to the image block of the first type on the second characteristics based on the similarity between the second characteristics corresponding to each second matching image in the at least one second matching image and the second characteristics corresponding to the image block of the first type. Wherein the second feature is finer than the first feature.
For details about the process of the cloud server 200 performing image retrieval on the image block, reference may be made to the foregoing embodiment shown in fig. 18, which is not described herein again.
S2007, the cloud server 200 performs matching registration on the T high-definition retrieval images of each image block based on the semantic information and the low-definition images to obtain the T high-definition registration images.
In some embodiments, the cloud server 200 may employ different matching registration strategies for different categories of image blocks based on the semantic information as a priori guidance information.
For example, the categories of image blocks may include sky, buildings, and ground. Since the texture of the sky region and the texture of the ground region are relatively simple, the cloud server 200 may not perform image matching registration nor detail restoration for image blocks of the "sky" and "ground" categories. Because texture details of the building area are rich, for the image block with the category of "building", the cloud server 200 may perform image registration on the high-definition images after retrieving T high-definition retrieval images corresponding to the image block with the category of "building", so as to obtain T high-definition registration images.
And S2008, the cloud server 200 repairs the details of each image block based on the semantic information and the T high-definition registration images of each image block to obtain N high-definition image blocks.
The process of performing image registration on the T high-definition retrieval images corresponding to the image blocks by the cloud server 200 may be as follows:
as shown in fig. 22, the cloud server 200 may determine the category of an image block based on the semantic segmentation mask map of the low-definition image, and further determine whether the image block needs to be registered. If image registration is required, the cloud server 200 may obtain the matched feature points in the image block and each high-definition retrieval image by using a conventional matching algorithm. Then, the cloud server 200 may obtain the image space coordinate transformation parameters between the image block and each high-definition retrieval image through the matched feature point pairs between them. The cloud server 200 may perform image registration on each high-definition retrieval image according to the image space coordinate transformation parameters, so as to obtain T high-definition registration images.
Specifically, the cloud server adjusts the feature points in each high-definition retrieval image, which are matched with the first type of image block, to the same positions as those in the image block based on the image space coordinate transformation parameters of the image block and each high-definition retrieval image in the T high-definition retrieval images, so as to obtain the T high-definition registration images.
The conventional matching algorithm may include, but is not limited to, either of the following two types of algorithms: algorithms based on gray-scale template matching and algorithms based on feature matching. The algorithms based on gray-scale template matching include mean absolute difference (MAD), sum of absolute differences (SAD), sum of squared differences (SSD), mean squared difference (MSD), normalized cross-correlation (NCC), sequential similarity detection algorithm (SSDA), sum of absolute transformed differences (SATD, based on the Hadamard transform), local gray value coding, and the like. The algorithms based on feature matching may include, but are not limited to, one or more of the following: the FAST algorithm, ORB algorithm, SIFT algorithm, SURF algorithm, LBP algorithm, traditional optical flow methods, etc.
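A feature-matching registration of one high-definition retrieval image to one image block could be sketched with OpenCV as follows. This uses only one of the algorithm families listed above (ORB features with RANSAC homography estimation); the feature counts, the match cutoff, and the RANSAC threshold are assumed values, and the patent does not prescribe this particular combination.

```python
# Minimal sketch (illustrative only): ORB matching + homography + warping, so that
# a high-definition retrieval image is registered to the image block's coordinates.
import cv2
import numpy as np

def register_to_block(block, hd_image):
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(block, None)
    kp2, des2 = orb.detectAndCompute(hd_image, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)  # block points
    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)  # hd-image points
    # Image-space coordinate transformation parameters from the matched feature point pairs.
    h_mat, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = block.shape[:2]
    registered = cv2.warpPerspective(hd_image, h_mat, (w, h))  # high-definition registration image
    return registered, int(inlier_mask.sum())
```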
In a possible implementation manner, after the T high-definition registration images are obtained, the cloud server 200 may screen out the optimal L high-definition reference images from the T high-definition registration images based on a preference policy, where L is a positive integer and L is less than or equal to T.
The preference policy may be to select, from the T high-definition registration images, the L high-definition reference images with the highest matching degree with the image block. In some possible implementations, the preference policy may be to select the L high-definition registration images whose matched feature point pairs are closest in image space position to those of the image block.
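One possible reading of the preference policy, reusing `register_to_block()` from the registration sketch above, is to keep the L registration images supported by the most geometrically consistent matched feature point pairs (approximated here by the RANSAC inlier count). This counting rule is an assumption, not the patent's stated criterion.

```python
# Minimal sketch (illustrative only): pick L high-definition reference images by inlier count.
def select_references(block, hd_retrieval_images, l=3):
    registered = [register_to_block(block, hd) for hd in hd_retrieval_images]
    registered.sort(key=lambda r: r[1], reverse=True)   # most consistent matches first
    return [img for img, _ in registered[:l]]           # L high-definition reference images
```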
In some embodiments, the cloud server 200 may employ different detail repair strategies for different classes of image blocks based on the semantic information as a priori guidance information.
For example, the categories of image blocks may include sky, buildings, and ground. Since the texture of the sky region and the texture of the ground region are relatively simple, the cloud server 200 may not perform image matching registration nor detail restoration for image blocks of the "sky" and "ground" categories. Because texture details of the building area are rich, for the image block of the category "building", the cloud server 200 may perform detail restoration on the image block after acquiring the L high-definition reference images of the image block.
The detail repairing process of the image block by the cloud server 200 using the L high-definition registration images may be as follows:
as shown in fig. 23A, the cloud server 200 may determine the category of an image block based on the semantic segmentation mask map of the low-definition image, and further determine whether the image block needs detail restoration. If detail restoration is needed, after obtaining the L high-definition reference images, the cloud server 200 may input the L high-definition reference images and the image block into a hyper-division network algorithm model, and restore details in the image block by using the detail texture information in the L high-definition reference images, so as to improve the resolution of the image block and obtain a high-definition image block. The hyper-division network algorithm model may comprise a matching degree detection network, a depth registration network, and a depth fusion network. The matching degree detection network may be used to detect areas in the high-definition reference images that are inconsistent with the content of the image block. The depth registration network may be used to perform pixel-level registration on areas in the high-definition reference images that are consistent with the content of the image block. The depth fusion network may be used to fuse the high-frequency details of the high-definition reference images into the image block, so that the resolution of the image block is improved and the high-definition image block is obtained.
For example, as shown in fig. 23B, the cloud server 200 may obtain the high-definition reference image 2331 corresponding to the image block 2131 through the image registration process shown in fig. 22. The cloud server 200 may input the image block 2131 and the high-definition reference image 2331 into a hyper-division network algorithm model, and fuse the high-frequency detail information in the high-definition reference image 2331 into the image block 2131 to obtain a high-definition image block 2332 shown in fig. 23C.
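A drastically simplified stand-in for the hyper-division network algorithm model is sketched below: a small convolutional matching-degree branch gates how much residual detail from the registered reference is fused into the image block. The layer sizes, the single-reference input, and the gating-by-confidence design are assumptions; the model described above also contains a separate depth registration network, which is omitted here.

```python
# Minimal sketch (illustrative only): matching-degree gating + detail fusion.
import torch
import torch.nn as nn

class DetailFusionNet(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        # Matching-degree branch: where does the reference agree with the block?
        self.match = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())
        # Fusion branch: residual high-frequency detail predicted from block + reference.
        self.fuse = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, block, reference):
        x = torch.cat([block, reference], dim=1)   # [B, 6, H, W]
        confidence = self.match(x)                 # [B, 1, H, W], in (0, 1)
        detail = self.fuse(x)                      # predicted high-frequency residual
        return block + confidence * detail         # fused high-definition image block
```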
And S2009, splicing the N high-definition image blocks by the cloud server 200 to obtain a high-definition output image.
After the high-definition image blocks corresponding to each image block are obtained, the cloud server 200 may splice the high-definition image blocks corresponding to each image block according to the positions of the image blocks in the low-definition image, so as to obtain a high-definition output image.
In some embodiments, since detail repair is not performed on image blocks of certain categories, the cloud server 200 may directly splice the image blocks that are not subjected to detail repair with the other high-definition image blocks obtained after detail repair, so as to obtain the high-definition output image.
For example, the cloud server 200 does not perform detail restoration on image blocks of the "sky" and "ground" categories in the low-definition image, but performs detail restoration on image blocks of the "building" category. Therefore, the cloud server 200 may splice the image blocks of the "sky" and "ground" categories in the low-definition image with the high-definition image blocks corresponding to the image blocks of the "building" category, so as to obtain the high-definition output image.
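The splicing step might be sketched as pasting each block (repaired or not) back into a canvas at its grid position. The assumption that every block has already been brought to a common output scale before pasting is made purely to keep the example short.

```python
# Minimal sketch (illustrative only): reassembling blocks into the output image.
import numpy as np

def stitch_blocks(blocks, rows, cols, block_h, block_w):
    """blocks: dict {(row, col): HxWx3 array}, all already resized to block_h x block_w."""
    out = np.zeros((rows * block_h, cols * block_w, 3), dtype=np.uint8)
    for (r, c), blk in blocks.items():
        out[r * block_h:(r + 1) * block_h, c * block_w:(c + 1) * block_w] = blk
    return out  # high-definition output image
```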
S2010, the cloud server 200 sends the high definition output map to the terminal 100.
S2011, the terminal 100 displays a high definition output map.
Wherein the high definition output map may also be referred to as a third image or image C in the foregoing embodiments.
The timing and the application interface of the terminal 100 displaying the high definition output map may refer to the aforementioned interface interaction embodiments, and are not described herein again.
In some embodiments, before the step S2004, the cloud server 200 may match a first image processing model with the shooting information (also referred to as first environmental information in the foregoing embodiments) of the low-definition image (also referred to as a first image or image a in the foregoing embodiments), perform optimization processing on the low-definition image, and sequentially perform the subsequent steps S2004 to S2009 on the low-definition image after the optimization processing. For the matching process of the first image processing model and the optimization processing of the low-definition image by using the first image processing model, reference may be made to the embodiments shown in fig. 8 to 11, which are not described herein again.
With the image processing method provided in this embodiment of the application, an image with a smaller zoom magnification is cut into blocks according to a zoom magnification close to that of the cloud high-definition image library, each block is enlarged to a certain scale, retrieved and matched in the cloud high-definition image library, and its image quality is improved through an image enhancement network based on the reference images; the final results are then spliced into a super-definition image with larger resolution and higher definition. In this way, with the help of semantic information, local enhancement is performed first and overall splicing and fusion afterwards, so that a super-definition image with higher resolution is obtained, and false detection and false matching in regions lacking feature information are effectively reduced.
An image processing method according to another embodiment of the present application is described below.
Fig. 24 is a flowchart illustrating an image processing method provided in an embodiment of the present application.
A high-definition image library may be arranged on the cloud server 200, and the high-definition image library may store a plurality of high-definition images and the shooting information of the high-definition images (which may also be referred to as second environment information in the above embodiments), where the shooting information of a high-definition image may include the zoom magnification and one or more of the following: geographical position information, weather information, season information, time information, photographing parameters (e.g., aperture, exposure time, etc.), photographing posture information, and the like. When the high-definition image library is constructed, the construction device can acquire a plurality of high-definition images with high zoom magnification for the same shooting object at the same geographic position, and then splice the plurality of high-zoom-magnification high-definition images into a panoramic high-definition image according to the relevance of the image contents. Then, the construction device can cut the panoramic high-definition image according to the cutting sizes corresponding to various zoom magnifications, so that the panoramic high-definition image is divided into a plurality of high-definition images at each zoom magnification. The larger the zoom magnification, the smaller the cutting size, and the larger the zoom magnification of the high-definition images obtained after cutting.
For example, as shown in fig. 25A, the construction device may splice high-zoom-magnification high-definition images into a panoramic high-definition image 2500. The construction device may divide the panoramic image 2500 at a zoom magnification such as 5 times, 10 times, or 20 times, obtaining a plurality of high-definition images at each zoom magnification. As shown in fig. 25A, when the construction device divides the panoramic high-definition image at a zoom magnification of 10 times, the size of each divided high-definition image may be as shown by the high-definition image 2511 in fig. 25B. As shown in fig. 25C, when the construction device divides the panoramic high-definition image at a zoom magnification of 5 times, the size of each divided high-definition image may be as shown by the high-definition image 2521 in fig. 25D. The higher the zoom magnification used when dividing the panoramic high-definition image 2500, the larger the number of high-definition images obtained, and the less image information contained in each high-definition image. Fig. 25A and fig. 25C above only illustrate the division of the panoramic high-definition image by way of example.
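Library construction might be sketched with OpenCV's stitcher as a stand-in for the construction device: the high-zoom captures are stitched into a panorama, which is then cut into crops at several zoom magnifications. The crop-size rule (window shrinking in proportion to the zoom magnification) and the non-overlapping grid are assumptions.

```python
# Minimal sketch (illustrative only): build a panorama, then cut it per zoom magnification.
import cv2

def build_library(high_zoom_images, zoom_levels=(5, 10, 20)):
    stitcher = cv2.Stitcher_create()
    status, panorama = stitcher.stitch(high_zoom_images)
    if status != 0:  # 0 means the stitcher reported OK
        raise RuntimeError("panorama stitching failed")

    library = {}  # zoom magnification -> list of cropped high-definition images
    ph, pw = panorama.shape[:2]
    base = zoom_levels[0]
    for zoom in zoom_levels:
        # Assumed rule: higher zoom -> smaller crop window -> more, finer library images.
        ch, cw = max(1, ph * base // zoom), max(1, pw * base // zoom)
        library[zoom] = [panorama[y:y + ch, x:x + cw]
                         for y in range(0, ph - ch + 1, ch)
                         for x in range(0, pw - cw + 1, cw)]
    return library
```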
As shown in fig. 24, the image processing method may include, but is not limited to, the following steps:
s2401, the terminal 100 detects a photographing instruction.
For details, reference may be made to step S2001 in the foregoing embodiment shown in fig. 20, which is not described herein again.
S2402, the terminal 100 captures a low-definition image after detecting the photographing instruction.
For specific content, reference may be made to step S2002 in the foregoing embodiment shown in fig. 20, which is not described herein again.
S2403, the terminal 100 sends the low-definition image and the shooting information of the low-definition image to the cloud server 200.
For specific content, reference may be made to step S2003 in the embodiment shown in fig. 20, which is not described herein again.
S2404, the cloud server 200 identifies semantic information of the low-definition image.
For details, reference may be made to step S2004 in the embodiment shown in fig. 20, which is not described herein again.
S2405, the cloud server 200 retrieves T high definition search images of the low definition images from the high definition image library based on the semantic information.
Specifically, the cloud server 200 may retrieve, from the high-definition image library, T high-definition retrieval images having contents similar to those of the low-definition images based on semantic information of the low-definition images.
In some embodiments, the cloud server 200 may retrieve T high-definition retrieval images similar to the content of the low-definition images from the high-definition image library based on semantic information of the low-definition images and shooting information of the low-definition images (which may also be referred to as first environment information in other embodiments). For details about the process of the cloud server 200 performing image retrieval on the image block, reference may be made to the foregoing embodiment shown in fig. 18, which is not described herein again.
S2406, the cloud server 200 registers the T high-definition retrieval images of the low-definition images based on the semantic information and the low-definition images to obtain the T high-definition registration images of the low-definition images.
The registration process of the low-definition images using the T high-definition search images may refer to the embodiment shown in fig. 22, and is not described herein again.
S2407, the cloud server 200 repairs the low-definition image based on the semantic information and the T high-definition registration image of the low-definition image to obtain a high-definition output image.
The process of repairing the low-definition image by using the T high-definition registration images may refer to the embodiment shown in fig. 23A, and is not described herein again.
S2408, the cloud server 200 sends the high definition output image to the terminal 100.
S2409, the terminal 100 displays the high definition output map.
Wherein the high definition output map may also be referred to as a third image or image C in the foregoing embodiments.
The timing and the application interface of the terminal 100 displaying the high definition output map may refer to the aforementioned interface interaction embodiments, and are not described herein again.
In a possible implementation manner, the cloud server 200 may send the T high-definition registration images to the terminal 100 after registering the T high-definition retrieval images of the low-definition images based on the semantic information and the low-definition images to obtain the T high-definition registration images of the low-definition images. The terminal 100 repairs the low-definition image based on the T high-definition registration images of the low-definition image to obtain a high-definition output image.
According to this embodiment of the application, in the high-definition image library on the cloud server 200, a large number of high-zoom-magnification high-definition images can be spliced into a panoramic image, and the panoramic image is then cut according to the image sizes corresponding to different zoom magnifications to obtain high-definition images at different zoom magnifications. The cloud server 200 can therefore retrieve, from the high-definition image library, high-definition reference images whose zoom magnification is the same as or similar to that of the low-definition image shot by the terminal 100, which improves the effect of the detail restoration performed on the low-definition image by using the high-definition reference images.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.
Claims (11)
1. An image processing method, characterized in that the method comprises:
the method comprises the steps that a cloud server obtains a low-definition image sent by a terminal and shooting information when the low-definition image is shot, wherein the shooting information comprises zoom magnification of the low-definition image;
The cloud server identifies semantic information of the low-definition image;
the cloud server determines the number N of image blocks based on the zooming multiplying power of high-definition images and the zooming multiplying power of low-definition images in a high-definition image library, wherein N is a positive integer;
the cloud server divides the low-definition image into N image blocks based on the number N of the image blocks, and marks the type of image content in each image block of the N image blocks based on semantic information of the low-definition image;
the cloud server retrieves T high-definition retrieval images matched with the shooting content of the first type of image blocks from the high-definition image library, wherein T is a positive integer;
the cloud server repairs details in the first type of image blocks based on the T high-definition retrieval images to obtain first type high-definition image blocks, wherein the resolution of the first type high-definition image blocks is greater than that of the first type image blocks;
and the cloud server replaces the first type of image blocks in the low-definition image based on the first type of high-definition image blocks to obtain a high-definition output image, and sends the high-definition output image to the terminal.
2. The method of claim 1, wherein the capture information of the low-definition image further comprises one or more of: geographical position information, weather information, season information, time information, shooting parameters, and shooting posture information.
3. The method according to claim 1 or 2, wherein the cloud server retrieves T high-definition search images from the high-definition image library, the T high-definition search images matching the captured content of the first type of image block, and specifically comprises:
the cloud server determines at least one first matching image from the high-definition image library based on the shooting information of the low-definition image, wherein the matching degree between the shooting information of the first matching image and the shooting information of the low-definition image is greater than a first threshold value, and the resolution of the first matching image is greater than that of the low-definition image;
the cloud server respectively acquires a first feature corresponding to each first matching image in the at least one first matching image and a first feature corresponding to the first type of image block, wherein one first matching image corresponds to one first feature;
the cloud server determines, from the at least one first matching image, T high-definition retrieval images with shooting contents most similar to those of the first type of image block based on the similarity between the first feature corresponding to each of the at least one first matching image and the first feature corresponding to the first type of image block.
4. The method according to claim 3, wherein the cloud server determines, from the at least one first matching image, T high-definition retrieval images having shooting contents most similar to those of the first type image blocks based on a similarity between a first feature corresponding to each of the at least one first matching image and a first feature corresponding to the first type image block, and specifically includes:
the cloud server determines at least one second matching image which is most similar to the image block of the first type on the first feature based on the similarity between the first feature corresponding to each first matching image in the at least one first matching image and the first feature corresponding to the image block of the first type;
the cloud server respectively acquires a second feature corresponding to each second matching image in the at least one second matching image and a second feature corresponding to the image block of the first type, wherein one second matching image corresponds to one second feature;
the cloud server determines T high-definition retrieval images which are most similar to the image blocks of the first type on the second characteristics based on the similarity between the second characteristics corresponding to each second matching image in the at least one second matching image and the second characteristics corresponding to the image blocks of the first type;
Wherein the second feature is finer than the first feature.
5. The method according to claim 1 or 2, wherein the cloud server repairs details in the first type of image block by using the T high definition search images to obtain a first type of high definition image block, and specifically includes:
the cloud server acquires image space coordinate transformation parameters of the first type of image blocks and each high-definition retrieval image in the T high-definition retrieval images;
the cloud server adjusts the feature points in each high-definition retrieval image, which are matched with the image blocks of the first type, to the same positions as those in the image blocks of the first type based on the image space coordinate transformation parameters of the matched feature point pairs in the image blocks of the first type and each high-definition retrieval image in the T high-definition retrieval images to obtain T high-definition registration images;
and the cloud server repairs the details of the first type of image block based on the T high-definition registration images to obtain the first type of high-definition image block.
6. The method according to claim 5, wherein the cloud server repairs the details of the first type of image block based on the T high definition registration images to obtain a first type of high definition image block, specifically comprising:
The cloud server determines L high-definition reference images with the most similar image space positions of the matched characteristic point pairs from the T high-definition registration images based on the image space coordinate transformation parameters of the matched characteristic point pairs in the first type image blocks and each high-definition search image in the T high-definition search images, wherein L is a positive integer;
and the cloud server repairs the details of the first type of image block based on the L high-definition reference images to obtain the first type of high-definition image block.
7. The method according to claim 1, wherein the cloud server replaces a first type of image block in the low-definition image based on a first type of high-definition image block to obtain a high-definition output image, and specifically comprises:
when the low-definition image only comprises a first type of image blocks, the cloud server splices the first type of high-definition image blocks into the high-definition output image;
when the low-definition image comprises a first type of image block and a second type of image block, the cloud server splices the first type of high-definition image block and the second type of high-definition image block into the high-definition output image.
8. The method according to any one of claims 1-7, wherein before the cloud server divides the low-definition image into N image blocks based on the number N of the image blocks, the method further comprises:
the cloud server determines a first image processing model corresponding to the environment information according to the shooting information of the low-definition image;
the cloud server processes the low-definition image through the first image processing model;
the cloud server divides the low-definition image into N image blocks based on the number N of the image blocks, and specifically includes:
the cloud server divides the low-definition image processed by the first image processing model into N image blocks based on the number N of the image blocks.
9. The method according to claim 8, wherein the cloud server processes the low-definition image through the first image processing model, specifically comprising:
and the cloud server processes detail definition, dynamic range, contrast and/or noise and/or color of the low-definition image through the first image processing model.
10. A cloud server, comprising: a transceiver, one or more processors, and one or more memories; the one or more memories coupled with the one or more processors for storing computer program code comprising computer instructions which, when executed by the one or more processors, cause the cloud server to perform the image processing method of any of claims 1-9 above.
11. A readable storage medium storing computer instructions which, when executed on a cloud server, cause the cloud server to perform the image processing method of any of claims 1-9 above.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP20900091.8A EP4060603A4 (en) | 2019-12-13 | 2020-12-14 | Image processing method and related apparatus |
PCT/CN2020/136178 WO2021115483A1 (en) | 2019-12-13 | 2020-12-14 | Image processing method and related apparatus |
US17/784,342 US20230056332A1 (en) | 2019-12-13 | 2020-12-14 | Image Processing Method and Related Apparatus |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2019112857174 | 2019-12-13 | ||
CN201911285717 | 2019-12-13 | ||
CN202010357077X | 2020-04-29 | ||
CN202010357077 | 2020-04-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112989092A true CN112989092A (en) | 2021-06-18 |
Family
ID=76344921
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011446454.3A Pending CN112989092A (en) | 2019-12-13 | 2020-12-11 | Image processing method and related device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112989092A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113870180A (en) * | 2021-08-25 | 2021-12-31 | 广东电网有限责任公司广州供电局 | Image processing method, image processing device, computer equipment and storage medium |
CN115115679A (en) * | 2022-06-02 | 2022-09-27 | 华为技术有限公司 | Image registration method and related equipment |
WO2023142732A1 (en) * | 2022-01-28 | 2023-08-03 | 华为技术有限公司 | Image processing method and apparatus, and electronic device |
CN118264889A (en) * | 2023-08-29 | 2024-06-28 | 华为技术有限公司 | Image processing method and electronic equipment |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108419009A (en) * | 2018-02-02 | 2018-08-17 | 成都西纬科技有限公司 | Image definition enhancing method and device |
CN109089040A (en) * | 2018-08-20 | 2018-12-25 | Oppo广东移动通信有限公司 | Image processing method, image processing apparatus and terminal device |
2020-12-11 CN CN202011446454.3A patent/CN112989092A/en active Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021093793A1 (en) | Capturing method and electronic device | |
CN113132620B (en) | Image shooting method and related device | |
US12096120B2 (en) | Photographing method in telephoto scenario and mobile terminal | |
WO2021115483A1 (en) | Image processing method and related apparatus | |
CN113747048B (en) | Image content removing method and related device | |
WO2021104485A1 (en) | Photographing method and electronic device | |
CN112989092A (en) | Image processing method and related device | |
CN113037995A (en) | Shooting method and terminal in long-focus scene | |
CN114697543B (en) | Image reconstruction method, related device and system | |
CN113497890B (en) | Shooting method and equipment | |
WO2021185296A1 (en) | Photographing method and device | |
CN112529645A (en) | Picture layout method and electronic equipment | |
CN113709354A (en) | Shooting method and electronic equipment | |
US20220215050A1 (en) | Picture Search Method and Device | |
CN115002333B (en) | Image processing method and related device | |
CN115115679A (en) | Image registration method and related equipment | |
CN114979457B (en) | Image processing method and related device | |
CN114842069A (en) | Pose determination method and related equipment | |
CN113542575A (en) | Device pose adjusting method, image shooting method and electronic device | |
CN114764745A (en) | Image reconstruction method and related device | |
WO2024114785A1 (en) | Image processing method, electronic device, and system | |
WO2023036084A1 (en) | Image processing method and related apparatus | |
CN114693511A (en) | Picture completion method and electronic equipment | |
CN116546313A (en) | Shooting restoration method and electronic equipment | |
CN114757955A (en) | Target tracking method and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||