CN111614901A - Image shooting method and device, storage medium and terminal - Google Patents
- Publication number
- CN111614901A CN111614901A CN202010459328.5A CN202010459328A CN111614901A CN 111614901 A CN111614901 A CN 111614901A CN 202010459328 A CN202010459328 A CN 202010459328A CN 111614901 A CN111614901 A CN 111614901A
- Authority
- CN
- China
- Prior art keywords
- image
- color
- pixel point
- pixel
- pixel points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/646—Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
Abstract
Embodiments of this application disclose an image shooting method and apparatus, a storage medium, and a terminal. The image shooting method includes: acquiring a current image when it is detected that an image is shot through the camera; processing the current image according to a sample format to obtain a processed image; acquiring a color parameter of each pixel point in the processed image; determining color difference information of each pixel point based on the color parameters; and deleting corresponding pixel points from the processed image according to the color difference information to obtain and store a target image. In the embodiments of this application, the amount of image data acquired by the camera is detected; when that amount is large, the data format can be changed and identical data portions deleted, reducing the data amount and thereby improving the image shooting efficiency of the terminal.
Description
Technical Field
The application relates to the field of mobile terminal application, in particular to an image shooting method, an image shooting device, a storage medium and a terminal.
Background
With the development of terminal technology, a terminal can be used not only for making calls and sending short messages but also for downloading and installing application programs with various functions, some of which use the camera. Meanwhile, cameras have become widespread in terminals: benefiting from falling memory costs, technologies such as large-capacity storage and network cloud services are used in terminals, and camera functions grow increasingly powerful as these technologies continue to develop.
However, since the processing memory space of the terminal is limited, image shooting becomes slow when the amount of image data captured by the camera is large.
Disclosure of Invention
The embodiment of the application provides an image shooting method, an image shooting device, a storage medium and a terminal, which can improve the image shooting efficiency of the terminal.
The embodiment of the application provides an image shooting method, which comprises the following steps:
when the camera is detected to shoot an image, acquiring a current image;
processing the current image according to a sample format to obtain a processed image;
acquiring a color parameter of each pixel point in the processed image;
determining color difference information of each pixel point based on the color parameters;
and deleting corresponding pixel points in the processed image according to the color difference information to obtain a target image and storing the target image.
Correspondingly, the embodiment of the present application further provides an image capturing apparatus, including:
the acquisition unit is used for acquiring a current image when the image shot by the camera is detected;
the processing unit is used for processing the current image according to a sample format to obtain a processed image;
the first obtaining unit is used for obtaining the color parameter of each pixel point in the processed image;
the determining unit is used for determining the color difference information of each pixel point based on the color parameters;
and the deleting unit is used for deleting corresponding pixel points in the processed image according to the color difference information to obtain and store a target image.
Correspondingly, the embodiment of the application also provides a storage medium, wherein the storage medium stores a plurality of instructions, and the instructions are suitable for being loaded by a processor to execute the image shooting method.
Correspondingly, the embodiment of the application also provides a terminal, which comprises a processor and a memory, wherein the memory stores a plurality of instructions, and the processor loads the instructions to execute the image shooting method.
According to the embodiment of the application, the data size of the image data acquired by the camera is detected, when the data size is large, the data format can be changed, the same data part is deleted, the data size is reduced, and therefore the image shooting efficiency of the terminal can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a first image capturing method according to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram illustrating an arrangement of pixel points of an image according to an embodiment of the present disclosure.
Fig. 3 is a schematic flowchart of a second image capturing method according to an embodiment of the present disclosure.
Fig. 4 is a block diagram of a first image capturing apparatus according to an embodiment of the present application.
Fig. 5 is a block diagram of a second image capturing apparatus according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Based on the above problems, embodiments of the present application provide a first image capturing method, an apparatus, a storage medium, and a terminal, which can improve image capturing efficiency of the terminal. The following are detailed below. It should be noted that the following description of the embodiments is not intended to limit the preferred order of the embodiments.
Referring to fig. 1, fig. 1 is a schematic flow chart of an image capturing method according to an embodiment of the present disclosure. The image photographing method may be applied to mobile terminals such as mobile phones, tablet computers, notebook computers, palmtop computers, Portable Media Players (PMPs), and fixed terminals such as desktop computers. The specific flow of the image shooting method can be as follows:
101. when the image shot through the camera is detected, the current image is collected.
Specifically, shooting an image through the camera is detected, and whether the camera function needs to be started can be determined by monitoring the current user operation. When it is detected that the user opens the camera, the shooting function is started and an image is shot according to the user operation. After it is determined that the user has completed shooting, the current image shot by the user can be acquired.
The terminal can start the camera function according to various trigger instructions of the user, for example, the user can directly start the camera application, the terminal is triggered to start the camera function, or the user can open other applications installed on the terminal, the other applications can support the camera function, the camera is opened through the other applications, and the camera function is started.
For example, the current terminal detects that a user starts a camera, and captures an image through the camera, so that a current image captured by the user can be acquired.
102. And processing the current image according to the sample format to obtain a processed image.
Specifically, after a current image shot by a user is acquired, the current image may be processed, and the current image may be processed according to a sample format. Wherein the sample format may refer to a color space mode of the image.
In particular, the color space, also called a color model, is intended to describe colors in a generally accepted manner under certain standards. There are many kinds of color spaces; the RGB (Red, Green, Blue) color space and the YUV (luminance, chrominance) color space are commonly used. Different color spaces are composed of different parameters and describe color in different ways.
RGB is a color space that describes color by the three primary colors red, green, and blue. It is based on R (red), G (green), and B (blue) superimposed to different degrees to produce a rich, wide range of colors, and is therefore commonly called the three-primary-color model. Red, green, and blue represent the three primary colors of the visible spectrum, and each is divided into 256 levels according to brightness. When the three primary colors of light are superimposed in different proportions, various intermediate colors can be produced.
The YUV color space is the color coding method adopted by European television systems, used by the PAL (Phase Alternating Line) and SECAM (Séquentiel Couleur à Mémoire, sequential color with memory) analog color television standards. YUV describes color by luminance and chrominance differences. The luminance signal is referred to as Y, and the chrominance is composed of two mutually independent signals; depending on the color system and format, these two chrominance signals are often referred to as U and V.
In a modern color television system, a three-tube color camera or a color CCD (charge-coupled device) camera is usually used for image capture. The obtained color image signals are color-separated, amplified, and corrected to obtain RGB; a matrix conversion circuit then produces a luminance signal Y and two color-difference signals B-Y (i.e., U) and R-Y (i.e., V); finally, the transmitting end encodes the luminance and color-difference signals separately and transmits them over the same channel. This color representation is called the YUV color space. Its importance lies in that the luminance signal Y and the chrominance signals U and V are separate.
Specifically, since the luminance signal Y and the chrominance signal U, V in the YUV color space may be separated, the sample format may be a YUV format, and the current image may be processed according to the YUV format, so that color values in the color information in the image may be separated.
In some embodiments, the step of "processing said current image in a sample format" may comprise the following flow:
acquiring a current color format of the current image;
determining the corresponding relation between the current color format and the sample format;
and converting the color format of the current image from the current color format to the sample format based on the corresponding relation.
Specifically, a current color format of the current image is obtained, that is, a data format of a picture taken by the camera is obtained, where the data format may refer to a color space mode of the image, for example, the current color format may be an RGB format, which indicates that when the terminal camera takes the image, color information of the image may be stored according to the RGB format.
After the current color format of the current picture is acquired, the corresponding relationship between the current color format and the sample format may be acquired. The current color format may include a plurality of parameters, the sample format may include a plurality of parameters, and the correspondence between the current color format and the sample format may be a conversion rule between the plurality of parameters of the current color format and the plurality of parameters of the sample format. For example, the conversion rule may be a calculation method or the like.
For example, the current color format may be an RGB format, the sample format may be a YUV format, the RGB format may include parameters R, G, B; the YUV format may include parameters Y, U, and V, and the correspondence between the RGB format and the YUV format may be a conversion rule between the parameter R, G, B and the parameter Y, U, V, and the conversion rule may be
Y=0.299*R+0.587*G+0.114*B;U=-0.169*R-0.331*G+0.5*B;V=0.5*R-0.419*G-0.081*B。
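As a brief illustration, the conversion rule above can be written as a small Python function (the function name is ours, not part of the application; the coefficients are those given in the rule):

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel to YUV using the conversion rule above."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.5 * b
    v = 0.5 * r - 0.419 * g - 0.081 * b
    return y, u, v
```

For a gray pixel such as (200, 200, 200), the chrominance terms cancel and only the luminance remains, matching the worked example that follows.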
Specifically, after determining the corresponding relationship between the current color format and the sample format, the color format of the current image may be converted from the current color format to the sample format according to the corresponding relationship. That is, the values of the parameters in the current color format in the current image are converted into the values of the parameters in the sample format according to the corresponding relationship.
For example, if the values of the parameters R, G, and B included in the current color format are obtained to be 200, 200, and 200, respectively, the values of the parameters Y, U, and V in the sample format can be obtained to be 200, 0, and 0, respectively, by processing the parameters R, G, and B according to the corresponding relationship between the current color format and the sample format.
In some embodiments, in order to avoid unnecessary operation steps, the terminal may be enabled to directly store the pictures acquired by the camera, and before the step "processing the current image according to the sample format", the method may further include the steps of:
acquiring the residual memory space capacity of the current operating memory;
acquiring the image data volume of a current image;
judging whether the image data volume is larger than the residual memory space capacity or not;
and if so, executing the step of processing the current image according to the sample format.
Specifically, the remaining memory space capacity of the current operating memory is acquired. The operating memory is the memory used when the terminal runs programs, also called RAM (Random Access Memory). RAM is the terminal's working memory, equivalent to the memory of a computer, while the built-in storage is equivalent to a hard disk: application programs are installed in the built-in storage and loaded into RAM when they run. The operating memory serves as temporary storage for the operating system and other running programs; the larger its capacity, the more concurrently running tasks the terminal can hold and the faster the system responds. RAM plays this role in the mobile phone.
In the embodiment of the present application, the remaining storage space of the currently running memory is obtained, and the running memory occupied by the currently running application program may be obtained, where the currently running application program may include a foreground running application program, a background running application program, and the like. And determining the residual memory space capacity of the operating memory according to the operating memory occupied by the current operating application program.
For example, the obtained currently running application programs may include an application program a, an application program B, and an application program C, where an operating memory occupied by the application program a may be 100M (megabyte), an operating memory occupied by the application program B may be 150M, an operating memory occupied by the application program C may be 100M, and a total operating memory capacity may be 2000M, and then the remaining storage space capacity of the currently running memory may be determined to be 1650M.
Specifically, after the remaining memory space capacity of the current operating memory is obtained, the image data amount of the current image may be obtained, where the image data amount may be the operating memory capacity required by the current image, and then it may be determined whether the current image data amount is greater than the remaining memory space capacity, and corresponding operation is executed according to the determination result.
For example, the image data amount of the acquired current image may be 20M and the remaining memory space capacity of the operating memory may be 1650M; since the image data amount is smaller than the remaining capacity, the current image may be stored directly. For another example, if the image data amount of the acquired current image is 20M and the remaining memory space capacity is 0M, it is determined that the image data amount is greater than the remaining capacity, and the step of processing the current image according to the sample format is executed. By acquiring the remaining memory space of the operating memory and comparing it against the image data amount, the operating efficiency of the terminal can be improved and its power consumption reduced.
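The decision described above can be sketched as follows (a minimal sketch with hypothetical names; all quantities are assumed to be in the same unit, e.g. megabytes):

```python
def should_convert(image_size, total_ram, used_ram):
    """Return True when the image does not fit in the remaining operating
    memory, i.e. when the sample-format conversion step should run."""
    remaining = total_ram - used_ram
    return image_size > remaining
```

With the figures from the example (20M image, 1650M remaining) the image is stored directly; with 0M remaining, the conversion step runs.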
103. And acquiring the color parameter of each pixel point in the processed image.
Specifically, the color parameter of each pixel point in the processed image is obtained, and the color format of the processed image is the sample format, so that the color parameter in the processed image may be the color parameter included in the sample format. The color of each pixel point of the processed image can be represented by the color parameter value of the sample format.
For example, the processed image may include a plurality of pixel points: pixel point 1, pixel point 2, pixel point 3, pixel point 4, pixel point 5, and so on. If the sample format is the YUV format, the color parameter may include a U value and a V value: pixel point 1 may have U value 10 and V value 10, pixel point 2 may have U value 20 and V value 20, pixel point 3 may have U value 10 and V value 10, pixel point 4 may have U value 20 and V value 20, pixel point 5 may have U value 10 and V value 10, and so on. The color parameter of each pixel point in the processed image can thus be obtained.
In some embodiments, in order to ensure parallel and synchronous operation of the terminals, that is, to simultaneously process different regions of the processed image, after the step "obtaining the color parameter of each pixel point in the processed image", the following steps may be further included:
acquiring the image content of the processed image;
identifying the image content to obtain an identification result;
and dividing the processed image into a plurality of sub-images according to the identification result.
Specifically, the image content of the processed image is obtained, where the image content may refer to various contents included in the image to be processed, for example, the current image to be processed may be a face image, and the image content may include contents of various parts of a face, such as eyes, a nose, and a mouth.
After the image content is acquired, it can be identified through an image recognition technology to determine the different content areas included in the image to be processed. For example, if the processed image is a face image, content recognition yields a plurality of areas with different contents such as eyes, a nose, and a mouth, and the processed image can then be divided into a plurality of sub-images according to the areas corresponding to the different contents.
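The text divides the image into content-based regions via image recognition; as a simplified stand-in for that step, the sketch below tiles a flat row-major pixel list into uniform square sub-images (a simplification we introduce for illustration, not the recognition-based split the application describes):

```python
def split_into_tiles(pixels, width, height, tile):
    """Split a flat row-major pixel list into square tiles.
    A uniform grid stands in for the content-based regions in the text."""
    tiles = []
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            tiles.append([pixels[y * width + x]
                          for y in range(ty, min(ty + tile, height))
                          for x in range(tx, min(tx + tile, width))])
    return tiles
```

Each tile can then be processed independently, which is what enables the parallel, synchronous operation mentioned above.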
104. And determining the color difference information of each pixel point based on the color parameters.
Specifically, the color difference information of each pixel point is determined based on the color parameters, and the color difference between the pixel points in the image can be determined by comparing the color parameters of each pixel point.
In some embodiments, the color parameter may be a color component value, that is, a parameter value corresponding to a color format, and the step "determining color difference information of each pixel point based on the color parameter" may include the following steps:
acquiring a color component difference value between color component values of every two pixel points;
and determining the color difference of the pixel points according to the color component difference value.
Specifically, a color component difference value between the color component values of every two pixel points is acquired. The color component value of each pixel point in the image to be processed can be determined first; here the color component value may be obtained from the values of the color parameters U and V in the YUV format, for example by summing them. If pixel point 1 has a U value of 10 and a V value of 10, its color component value is 20. The color component difference of any two pixel points is then the difference between their color component values.
After the color component value of each pixel point is determined, the color difference between every two pixel points can be determined from the color component differences. For example, the image to be processed may include pixel point 1 with color component value 20, pixel point 2 with color component value 40, pixel point 3 with color component value 20, and pixel point 4 with color component value 40. Taking the difference between the color component values of every two pixel points gives: pixel points 1 and 2 differ by 20; pixel points 1 and 3 by 0; pixel points 1 and 4 by 20; pixel points 2 and 3 by 20; pixel points 2 and 4 by 0; and pixel points 3 and 4 by 20. The color difference between every two pixel points is thus obtained.
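The computation above can be sketched in a few lines (the function names are ours; the color component value is assumed, as in the example, to be the sum of U and V):

```python
from itertools import combinations

def color_component_values(uv_pixels):
    """Sum U and V into one color component value per pixel, as in the text."""
    return [u + v for u, v in uv_pixels]

def pairwise_differences(values):
    """Absolute color component difference for every pair of pixel indices."""
    return {(i, j): abs(values[i] - values[j])
            for i, j in combinations(range(len(values)), 2)}
```

Running this on the four example pixels (U = V = 10 or 20) reproduces the differences of 20 and 0 listed above.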
105. And deleting corresponding pixel points in the processed image according to the color difference information to obtain and store a target image.
Specifically, after the color difference between the pixel points is determined according to the color component values, the pixel points in the image to be processed can be deleted according to the color difference, so that the image data volume is reduced.
Since the image to be processed may be divided into a plurality of sub-images, in some embodiments, the step "deleting corresponding pixel points in the processed image according to the color difference" may include the following steps:
and deleting the corresponding pixel points in the processed image according to the color difference information of the pixel points in the same sub-image.
Specifically, according to the color difference information of the pixel points located in the same sub-image, the pixel point deleting operation is performed on the processed image, whether the pixel points with the same color exist can be judged according to the color difference between the pixel points in each sub-image, and the corresponding pixel points in each sub-image are deleted according to the judgment result.
In some embodiments, the step of deleting the corresponding pixel point in the processed image according to the color difference information of the pixel points located in the same sub-image may include the following steps:
determining a pixel point pair with the color parameter difference smaller than a preset threshold value from each sub-image, wherein the pixel point pair is an adjacent pixel point;
and selecting corresponding pixel points from the processed image to delete based on the determined pixel points.
Specifically, the color difference information may include a color parameter difference of the pixel point. The pixel point pairs with the color parameter difference smaller than the preset threshold value are determined from each sub-image, a plurality of pixel point pairs included in the sub-image can be determined at first, and two adjacent pixel points can be used as the pixel point pairs for comparison.
For example, referring to fig. 2, fig. 2 is a schematic diagram illustrating an arrangement of pixel points of an image according to an embodiment of the present disclosure. Fig. 2 may be any sub-image area, which may include pixel point 1, pixel point 2, pixel point 3, pixel point 4, pixel point 5, pixel point 6, and so on. Pixel point 1 may be adjacent to pixel points 2 and 3; pixel point 2 may also be adjacent to pixel points 4 and 5; pixel point 3 may also be adjacent to pixel point 4; pixel point 4 may also be adjacent to pixel point 6; and pixel point 5 may also be adjacent to pixel point 6. It can thus be determined that the sub-image includes the pixel point pairs (pixel 1, pixel 2), (pixel 1, pixel 3), (pixel 2, pixel 4), (pixel 2, pixel 5), (pixel 3, pixel 4), (pixel 4, pixel 6), and (pixel 5, pixel 6).
After all pixel point pairs in the sub-image are determined, the color parameter difference between two pixel points of each pixel point pair can be obtained, so that the pixel point pair with the color parameter difference smaller than the preset threshold value can be determined and can be used as a target pixel point pair, namely the pixel point pair needing pixel point deletion operation.
For example, the pixel point pairs may include (pixel 1, pixel 2), (pixel 1, pixel 3), (pixel 2, pixel 4), (pixel 2, pixel 5), and so on. The color parameter difference of pixel 1 and pixel 2 may be 10, that of pixel 1 and pixel 3 may be 8, that of pixel 2 and pixel 4 may be 20, and that of pixel 2 and pixel 5 may be 9. If the preset threshold is 10, the pixel point pairs whose color parameter difference is smaller than the preset threshold are (pixel 1, pixel 3) and (pixel 2, pixel 5).
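The threshold filter is a one-liner in Python (a sketch with our own names; pairs and differences are taken as given inputs):

```python
def target_pairs(adjacent_pairs, diffs, threshold):
    """Keep only the adjacent pixel pairs whose color parameter
    difference is strictly below the preset threshold."""
    return [p for p in adjacent_pairs if diffs[p] < threshold]
```

With differences of 10, 8, 20, and 9 and a threshold of 10, only the pairs with differences 8 and 9 survive.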
Specifically, after the target pixel point pair is determined, a corresponding pixel point can be selected from the processed image according to the target pixel point pair and deleted. In some embodiments, the step of "deleting a corresponding pixel point selected from the processed image based on the determined pixel point pair" may include the following steps:
comparing a plurality of pairs of pixel points determined in the same sub-image;
screening out repeated target pixel points from the pairs of pixel points, and deleting the target pixel points from the processed image;
and deleting any pixel point from the pixel point pairs of the remained undeleted pixel points in the processed image.
Specifically, the plurality of target pixel point pairs, that is, the pairs whose color parameter difference is smaller than the preset threshold, are matched against each other to determine whether the same target pixel point exists in at least two pixel point pairs.
For example, the target pixel point pairs may include a first target pixel point pair (pixel point 1, pixel point 2), a second target pixel point pair (pixel point 1, pixel point 3), and a third target pixel point pair (pixel point 2, pixel point 4). By matching the three pairs, it can be determined that the first and second pairs share the same pixel point, namely pixel point 1, and that the first and third pairs share the same pixel point, namely pixel point 2. Pixel point 1 and pixel point 2 can therefore be taken as target pixel points and deleted from the processed image.
For another example, the target pixel point pairs may include a fourth target pixel point pair (pixel point 4, pixel point 5), a fifth target pixel point pair (pixel point 6, pixel point 7), and a sixth target pixel point pair (pixel point 8, pixel point 9). Matching these three pairs shows that no pixel point is shared among them, so one pixel point can be selected from each of the fourth, fifth, and sixth target pixel point pairs as a target pixel point, and the selected target pixel points are deleted from the processed image. By deleting one of each pair of adjacent pixel points whose color parameter difference is small, the data amount of the shot image can be reduced while the operating efficiency of the terminal is maintained.
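The two cases above (shared pixels versus disjoint pairs) can be combined into one selection routine; this is a sketch under our reading of the text, with an arbitrary choice of which member of a disjoint pair to drop:

```python
from collections import Counter

def pixels_to_delete(pairs):
    """Select pixels to drop: every pixel that appears in more than one
    target pair, plus one pixel from each remaining pair whose members
    were not already selected."""
    counts = Counter(p for pair in pairs for p in pair)
    to_delete = {p for p, c in counts.items() if c > 1}
    for a, b in pairs:
        if a not in to_delete and b not in to_delete:
            to_delete.add(a)  # arbitrarily keep b, drop a
    return to_delete
```

On the first example, pairs (1,2), (1,3), (2,4) yield {1, 2}; on the second, three disjoint pairs yield one pixel from each.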
In some embodiments, in order to ensure that the user can ensure the image quality when using the target image after obtaining the target image, after the step "obtaining and storing the target image", the following steps may be further included:
when the target image is detected to be viewed by a user, determining the positions of missing pixel points in the target image;
acquiring color information of adjacent pixel points of the position;
restoring color information of the missing pixel points based on the color information;
and displaying the recovered target image.
Specifically, when it is detected that the user views the target image, the positions of the missing pixel points in the target image are determined. The positions of the pixel points may be represented by two-dimensional coordinates comprising the height (row) and the width (column) of the target image, that is, (height, width). For example, if the height of the target image is 100 and the width is 100, the positions of the pixel points of the target image may include (1,1), (1,2), (1,3), ..., (100,100).
After determining the position of the missing pixel point in the target image, the position of the pixel point adjacent to the position of the missing pixel point may be obtained, for example, the position of the missing pixel point may be (1,1), and then the positions of the adjacent pixel points may be (1,2), (2, 1). After the positions of the adjacent pixel points are obtained, color information corresponding to the positions of the adjacent pixel points can be obtained, the color information can comprise color parameter values, and the color information of the missing pixel points is recovered according to the color parameter values of the adjacent pixel points.
For example, the adjacent pixel positions may be (1,2) and (2,1), corresponding to pixel point 1 and pixel point 2, respectively. The color parameter values of pixel point 1 and pixel point 2 are obtained; the value of pixel point 1 may be 100 and the value of pixel point 2 may be 150. To ensure the uniformity of the image color, the color parameter value of the missing pixel point may be determined as the average of the two values, namely 125. The color of the missing pixel point is then restored based on the color parameter value 125, and the target image is displayed after its color information is recovered, which can ensure image quality and provide a better visual effect for the user.
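The averaging step can be sketched as follows; this is an illustrative sketch, the function name is hypothetical, and averaging the neighbor values 100 and 150 yields 125.

```python
def restore_missing(neighbor_values):
    """Recover a missing pixel's color parameter as the neighbor average."""
    return sum(neighbor_values) / len(neighbor_values)

# Neighbors at (1,2) and (2,1) with color parameter values 100 and 150:
print(restore_missing([100, 150]))  # -> 125.0
```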
The embodiment of the application discloses an image shooting method, which includes: when it is detected that the camera captures an image, acquiring the current image; processing the current image according to a sample format to obtain a processed image; acquiring a color parameter of each pixel point in the processed image; determining color difference information of each pixel point based on the color parameters; and deleting the corresponding pixel points in the processed image according to the color difference information to obtain and store a target image. By detecting the data size of the image data acquired by the camera, changing the data format when the data size is large, and deleting identical data portions, the embodiment reduces the data size and thereby improves the image shooting efficiency of the terminal.
Referring to fig. 3, fig. 3 is a schematic flowchart of a second image capturing method according to an embodiment of the present application. A specific application scenario of the image capturing method may be as follows:
201. when the terminal detects that a user starts a camera to shoot an image, image data of the image is obtained.
Specifically, the user may open the camera to shoot images in multiple ways: for example, by directly starting the terminal's camera function, or by starting the camera function from within a third-party application after opening that application. The third-party application may be an application pre-installed on the terminal that supports a camera function.
When the terminal detects that the camera is turned on, an image shot by a user is acquired, and image data is collected from the image shot by the user, wherein the image data can comprise a plurality of image information, such as image pixels, image colors, image contents, image brightness and the like.
202. And the terminal processes the image data according to a preset data format to obtain processed image data.
Specifically, after the terminal acquires the image data, a current image data format may be acquired, and the image data format may be a color space representing an image.
For example, the current color format of the acquired image data may be an RGB format, i.e., the color of each pixel point in the image data may be represented by R, G, B values to indicate the color.
After the current color format of the image data is determined, the data to be transmitted may be processed according to the preset data format, and the current color format of the image data may be updated to the preset data format. The preset data format may be a YUV format.
Specifically, to update the current color format to the preset data format, a conversion rule between the two may be obtained, where the conversion rule may include a correspondence between each value in the RGB format and each value in the YUV format. For example, the current color format may be the RGB format and the preset data format may be the YUV format; the RGB format includes R, G, and B values, which form the colors of an image, and the YUV format includes Y, U, and V values, which likewise make up the colors of the image. The correspondence between the R, G, B values and the Y, U, V values can be expressed by the following formulas: Y = 0.299R + 0.587G + 0.114B; U = -0.169R - 0.331G + 0.5B; V = 0.5R - 0.419G - 0.081B.
In some embodiments, the terminal may obtain the image pixel points included in the image data, obtain each color value corresponding to each image pixel point in the current color format, and process the image data according to the conversion rule between the current color format and the preset data format, namely
Y = 0.299R + 0.587G + 0.114B; U = -0.169R - 0.331G + 0.5B; V = 0.5R - 0.419G - 0.081B, to obtain the processed image data.
For example, if the color values of an image pixel point in the image data acquired by the terminal are R = 100, G = 100, and B = 100, then according to the conversion rule between the RGB format and the YUV format, Y = 100, U = 0, and V = 0. Based on the conversion rule, the color values of all image pixel points can be updated to color values in the preset data format to obtain the processed image data.
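The conversion rule can be sketched as follows. The coefficients are the standard BT.601-style values quoted above; the rounding behavior is an assumption (real pipelines may clamp or offset the values differently).

```python
def rgb_to_yuv(r, g, b):
    """Convert one pixel's RGB color values to YUV using the quoted rule."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.5 * b
    v = 0.5 * r - 0.419 * g - 0.081 * b
    return round(y), round(u), round(v)

# The worked example: R = G = B = 100 maps to Y = 100, U = 0, V = 0.
print(rgb_to_yuv(100, 100, 100))  # -> (100, 0, 0)
```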
203. The terminal acquires image information of an image and divides the processed image data into a plurality of image subdata according to the image information.
Specifically, the terminal acquires image information of an image, and the image information may include multiple types, such as: image pixels, image sharpness, image color, image brightness, etc. The processed image data may be divided into a plurality of image sub-data according to any image feature in the image information.
For example, the image information obtained by the terminal may be image definition. Specifically, the terminal may obtain the definition of each image pixel point and divide the image into a plurality of regions accordingly: image pixel points whose definitions are equal, or whose difference falls within a preset range, are assigned to the same image region, and the image data corresponding to each image region forms one piece of image sub-data. Dividing the processed image data into a plurality of pieces of image sub-data according to the image information allows the sub-data to be processed simultaneously in subsequent steps, which can improve image shooting efficiency.
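The definition-based division can be sketched as follows; this is a hypothetical illustration in which pixels whose definition values differ by at most a tolerance from a region's first member are grouped together, and the definition values are invented.

```python
def divide_by_definition(definition_by_pixel, tolerance=5):
    """Group pixel IDs into regions of similar definition (sharpness)."""
    regions = []  # each entry: (reference definition, member pixel IDs)
    for pixel, d in sorted(definition_by_pixel.items()):
        for ref, members in regions:
            if abs(d - ref) <= tolerance:  # within the preset range
                members.append(pixel)
                break
        else:
            regions.append((d, [pixel]))  # start a new region
    return [members for _, members in regions]

# Pixels 1 and 2 are sharp, pixels 3 and 4 are blurry:
print(divide_by_definition({1: 90, 2: 92, 3: 60, 4: 58}))  # -> [[1, 2], [3, 4]]
```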
204. The terminal acquires color component information of the image sub-data.
Specifically, after the terminal divides the processed image data into a plurality of pieces of image sub-data, the color component information of each piece of image sub-data may be obtained. The data format of the processed image data is the preset data format, namely the YUV format, comprising Y, U, and V values.
The color component information may be a chrominance value in YUV format, and may include a U value and a V value. Specifically, the terminal obtains the color component information of the image sub-data, may obtain the number of pixel points included in each image sub-data, and then obtains the chromatic value corresponding to each pixel point.
For example, the terminal may divide the image into 4 regions according to image definition, thereby dividing the processed image data into 4 pieces of image sub-data: first, second, third, and fourth image sub-data. The terminal obtains the pixel points of each piece of sub-data; the first may include 200 pixel points, the second 150, the third 300, and the fourth 200. After the pixel points included in each piece of sub-data are determined, the chromatic values, namely the U and V values, corresponding to the pixel points in each piece of sub-data can be obtained.
205. The terminal determines data to be transmitted from the image sub-data based on the color component information.
Specifically, the terminal determines the data to be transmitted from the image sub-data based on the color component information. After the terminal obtains the chromatic values corresponding to the pixel points in each piece of image sub-data, the chromatic values of pixel points within the same piece of sub-data can be compared, for example pairwise, to obtain a comparison result between each two chromatic values. Alternatively, the chromatic value of any pixel point in the sub-data may be selected at random and compared in turn with the other chromatic values.
After the colorimetric values are compared, if the colorimetric values are the same, selecting image data corresponding to pixel points with the same colorimetric values, selecting the image data corresponding to any pixel point as data to be transmitted, and deleting the image data corresponding to other pixel points; and if the chromatic values are different, taking the image data corresponding to the pixel points with different chromatic values as the data to be transmitted.
For example, the first image sub-data may include a first, second, third, and fourth pixel point, whose chromatic values may be 10, 10, 20, and 20, respectively. Comparing the chromatic values pairwise, that is, comparing the first pixel point with the second and the third with the fourth, yields the comparison result: the chromatic value of the first pixel point equals that of the second, the chromatic value of the third equals that of the fourth, and the chromatic value of the first differs from that of the third. The image data corresponding to one of the first and second pixel points can then be deleted, with the other retained as data to be transmitted; likewise, the image data corresponding to one of the third and fourth pixel points is deleted and the other is retained, finally determining the data to be transmitted. By obtaining the chromatic information of the image data and deleting image data accordingly, the amount of image data captured by the camera can be reduced while image shooting efficiency is ensured.
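The pairwise chroma comparison and deletion can be sketched as follows. The sketch keeps one pixel per distinct chromatic value within a piece of sub-data; the function name and values are illustrative.

```python
def select_data_to_transmit(chroma_by_pixel):
    """Keep one pixel per distinct chromatic value; drop duplicates."""
    kept, seen = [], set()
    for pixel, chroma in chroma_by_pixel.items():
        if chroma not in seen:  # first pixel carrying this chromatic value
            seen.add(chroma)
            kept.append(pixel)  # its image data becomes data to be transmitted
    return kept

# First/second pixel points share chroma 10, third/fourth share chroma 20:
print(select_data_to_transmit({1: 10, 2: 10, 3: 20, 4: 20}))  # -> [1, 3]
```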
206. And the terminal transmits the data to be transmitted.
Specifically, after the terminal determines the data to be transmitted from all the image sub-data, it can transmit that data. For example, if the current user needs to send the image acquired by the camera to another terminal, the transmission is performed based on the data to be transmitted, which can effectively improve the transmission efficiency of the image data.
In order to better implement the image capturing method provided by the embodiment of the present application, the embodiment of the present application further provides an apparatus based on the image capturing method. The terms are the same as those in the image capturing method, and details of implementation can be referred to the description in the method embodiment.
Referring to fig. 4, fig. 4 is a block diagram of a first image capturing device according to an embodiment of the present disclosure, which can be applied to a mobile terminal such as a mobile phone, a tablet computer, a notebook computer, a palm computer, a Portable Media Player (PMP), and a fixed terminal such as a desktop computer, and the device includes:
the acquisition unit 301 is used for acquiring a current image when the image shot by the camera is detected;
a processing unit 302, configured to process the current image according to a sample format to obtain a processed image;
a first obtaining unit 303, configured to obtain a color parameter of each pixel in the processed image;
a determining unit 304, configured to determine color difference information of each pixel point based on the color parameter;
a deleting unit 305, configured to delete, according to the color difference information, a corresponding pixel point in the processed image, to obtain a target image, and store the target image.
In some embodiments, please refer to fig. 5, where fig. 5 is a block diagram of a second image capturing apparatus provided in the embodiments of the present application, the image capturing apparatus may further include:
a second obtaining unit 306, configured to obtain the image content of the processed image;
an identifying unit 307, configured to identify the image content to obtain an identification result;
a dividing unit 308, configured to divide the processed image into a plurality of sub-images according to the recognition result.
In some embodiments, the deletion unit 305 may include:
and the deleting subunit is used for deleting the corresponding pixel points in the processed image according to the color difference information of the pixel points in the same sub-image.
In some embodiments, the deletion subunit may specifically be configured to: determining a pixel point pair with the color parameter difference smaller than a preset threshold value from each sub-image, wherein the pixel point pair is an adjacent pixel point; matching pixel point pairs determined in the same sub-image; if the same target pixel point exists in different pixel point pairs, deleting the target pixel point from the processed image; and if the same target pixel point does not exist in different pixel point pairs, deleting any one pixel point in the pixel point pairs from the processed image.
In some embodiments, the image capturing apparatus may further include:
the detection unit is used for determining the position of a missing pixel point in the target image when detecting that a user views the target image;
a third obtaining unit, configured to obtain color information of an adjacent pixel point of the position;
the recovery unit is used for recovering the color information of the missing pixel points based on the color information;
and the display unit is used for displaying the recovered target image.
In some embodiments, the image capturing apparatus may further include:
a fourth obtaining unit, configured to obtain a remaining memory space capacity of a currently operating memory;
a fifth acquiring unit configured to acquire an image data amount of the current image;
the judging unit is used for judging whether the image data volume is larger than the residual memory space capacity or not;
and the execution unit is used for executing the step of processing the current image according to the sample format if the image data volume is larger than the remaining memory space capacity.
The embodiment of the application also provides a terminal. As shown in fig. 6, the terminal may include a Radio Frequency (RF) circuit 601, a memory 602 including one or more storage media, an input unit 603, a display unit 604, a sensor 605, an audio circuit 606, a Wireless Fidelity (WiFi) module 607, a processor 608 including one or more processing cores, and a power supply 609. Those skilled in the art will appreciate that the terminal structure shown in fig. 6 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the RF circuit 601 may be used for receiving and transmitting signals during the process of transmitting and receiving information, and in particular, for processing the received downlink information of the base station by one or more processors 608; in addition, data relating to uplink is transmitted to the base station. In general, the RF circuit 601 includes, but is not limited to, an antenna, at least one Amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 601 may also communicate with networks and other devices via wireless communications.
The memory 602 may be used to store software programs and modules, and the processor 608 executes various functional applications and image capturing by operating the software programs and modules stored in the memory 602. The memory 602 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like. Further, the memory 602 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 602 may also include a memory controller to provide the processor 608 and the input unit 603 access to the memory 602.
The input unit 603 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, in one particular embodiment, input unit 603 may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or a touch pad, may collect touch operations by a user (e.g., operations by a user on or near the touch-sensitive surface using a finger, a stylus, or any other suitable object or attachment) thereon or nearby, and drive the corresponding connection device according to a predetermined program. The input unit 603 may include other input devices in addition to the touch-sensitive surface. In particular, other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 604 may be used to display information input by or provided to the user and various graphical user interfaces of the terminal, which may be made up of graphics, text, icons, video, and any combination thereof. The display unit 604 may include a display panel, and optionally, the display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch-sensitive surface may overlay the display panel, and when a touch operation is detected on or near the touch-sensitive surface, the touch operation is transmitted to the processor 608 to determine the type of touch event, and the processor 608 then provides a corresponding visual output on the display panel according to the type of touch event. Although in FIG. 6 the touch-sensitive surface and the display panel are two separate components to implement input and output functions, in some embodiments the touch-sensitive surface may be integrated with the display panel to implement input and output functions.
The terminal may also include at least one sensor 605, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel according to the brightness of ambient light, and a proximity sensor that turns off the display panel and the backlight when the terminal moves to the ear.
WiFi belongs to short-distance wireless transmission technology, and the terminal can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 607, and provides wireless broadband internet access for the user. Although fig. 6 shows the WiFi module 607, it is understood that it does not belong to the essential constitution of the terminal, and may be omitted entirely as needed within the scope of not changing the essence of the application.
The processor 608 is the control center of the terminal, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal and processes data by running or executing software programs and modules stored in the memory 602 and calling data stored in the memory 602, thereby performing overall monitoring of the terminal. Optionally, processor 608 may include one or more processing cores; preferably, the processor 608 may integrate an application processor, which primarily handles the operating system, user interfaces, and applications, and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor may also not be integrated into the processor 608.
The terminal also includes a power supply 609 (e.g., a battery) for powering the various components, which may preferably be logically connected to the processor 608 via a power management system that may be used to manage charging, discharging, and power consumption. The power supply 609 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
Specifically, in this embodiment, the processor 608 in the terminal loads the executable file corresponding to the process of one or more application programs into the memory 602 according to the following instructions, and the processor 608 runs the application programs stored in the memory 602, thereby implementing various functions:
when the camera is detected to shoot an image, acquiring a current image;
processing the current image according to a sample format to obtain a processed image;
acquiring a color parameter of each pixel point in the processed image;
determining color difference information of each pixel point based on the color parameters;
and deleting corresponding pixel points in the processed image according to the color difference information to obtain a target image and storing the target image.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be performed by instructions or by instructions controlling associated hardware, which may be stored in a storage medium and loaded and executed by a processor.
To this end, the present application provides a storage medium, in which a plurality of instructions are stored, and the instructions can be loaded by a processor to execute the steps in any one of the image capturing methods provided by the embodiments of the present application. For example, the instructions may perform the steps of:
when the camera is detected to shoot an image, acquiring a current image; processing the current image according to a sample format to obtain a processed image; acquiring a color parameter of each pixel point in the processed image; determining color difference information of each pixel point based on the color parameters; and deleting corresponding pixel points in the processed image according to the color difference information to obtain a target image and storing the target image.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in any image capturing method provided in the embodiments of the present application, the beneficial effects that can be achieved by any image capturing method provided in the embodiments of the present application can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
The image capturing method, the image capturing device, the storage medium and the terminal provided by the embodiment of the present application are described in detail above, and a specific example is applied in the description to explain the principle and the embodiment of the present application, and the description of the above embodiment is only used to help understanding the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
Claims (10)
1. An image capturing method, characterized by comprising:
when the camera is detected to shoot an image, acquiring a current image;
processing the current image according to a sample format to obtain a processed image;
acquiring a color parameter of each pixel point in the processed image;
determining color difference information of each pixel point based on the color parameters;
and deleting corresponding pixel points in the processed image according to the color difference information to obtain a target image and storing the target image.
2. The method according to claim 1, wherein after obtaining the color parameter of each pixel point in the processed image, before deleting the corresponding pixel point in the processed image according to the color difference information, further comprising:
acquiring the image content of the processed image;
identifying the image content to obtain an identification result;
dividing the processed image into a plurality of sub-images according to the identification result;
deleting the corresponding pixel points in the processed image according to the color difference information, wherein the deleting comprises the following steps:
and deleting the corresponding pixel points in the processed image according to the color difference information of the pixel points in the same sub-image.
3. The method of claim 2, wherein the color difference information comprises: a difference in color parameters;
the deleting the corresponding pixel points in the processed image according to the color difference information of the pixel points in the same sub-image comprises the following steps:
determining a pixel point pair with the color parameter difference smaller than a preset threshold value from each sub-image, wherein the pixel point pair is an adjacent pixel point;
and selecting corresponding pixel points from the processed image to delete based on the determined pixel points.
4. The method of claim 3, wherein said selecting corresponding pixels from said processed image based on said determined pixel point pairs for deletion comprises:
comparing a plurality of pairs of pixel points determined in the same sub-image;
screening out repeated target pixel points from the pairs of pixel points, and deleting the target pixel points from the processed image;
and for each remaining pixel point pair from which no pixel point has been deleted, deleting any one pixel point of the pair from the processed image.
5. The method according to claim 1, further comprising, after obtaining and storing the target image:
when it is detected that a user views the target image, determining positions of missing pixel points in the target image;
acquiring color information of pixel points adjacent to each position;
restoring color information of the missing pixel points based on the acquired color information; and
displaying the restored target image.
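Claim 5 only says the missing colors are restored "based on" neighbouring color information; averaging the available 4-neighbours is one plausible reading, sketched here under the same single-channel assumption:

```python
def restore_missing(image, missing):
    """Fill each missing pixel point with the mean of its in-bounds,
    non-missing 4-neighbours (an assumed interpolation for claim 5)."""
    h, w = len(image), len(image[0])
    restored = [row[:] for row in image]
    for y, x in missing:
        neighbours = [image[ny][nx]
                      for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                      if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in missing]
        if neighbours:
            restored[y][x] = sum(neighbours) // len(neighbours)
    return restored

print(restore_missing([[10, 0, 20]], {(0, 1)}))  # [[10, 15, 20]]
```

Since claim 3 only deletes pixel points whose neighbours differ by less than the threshold, an average of the surviving neighbours is guaranteed to stay close to the deleted value.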
6. The method according to any one of claims 1 to 5, further comprising, before processing the current image according to the sample format:
acquiring the remaining capacity of the current operating memory;
acquiring the image data volume of the current image;
determining whether the image data volume is larger than the remaining memory capacity; and
if so, performing the step of processing the current image according to the sample format.
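The gating check of claim 6 is a single comparison; how the terminal obtains the two byte counts is left open by the claim, so the inputs here are assumptions:

```python
def needs_sample_format(image_data_bytes: int, free_memory_bytes: int) -> bool:
    """Claim 6: reformat (and deduplicate) the image only when its data
    volume exceeds the remaining operating-memory capacity."""
    return image_data_bytes > free_memory_bytes

# e.g. a 12 MB raw capture against 8 MB of free memory triggers the sample-format path
print(needs_sample_format(12 * 1024 * 1024, 8 * 1024 * 1024))  # True
```

Small captures thus skip the extra processing entirely, which is why the claim frames compression as conditional rather than unconditional.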
7. An image capturing apparatus, comprising:
an acquisition unit configured to acquire a current image when it is detected that a camera captures an image;
a processing unit configured to process the current image according to a sample format to obtain a processed image;
a first acquiring unit configured to acquire a color parameter of each pixel point in the processed image;
a determining unit configured to determine color difference information of each pixel point based on the color parameters; and
a deleting unit configured to delete corresponding pixel points in the processed image according to the color difference information, so as to obtain and store a target image.
8. The apparatus according to claim 7, further comprising:
a second acquiring unit configured to acquire image content of the processed image;
a recognition unit configured to recognize the image content to obtain a recognition result; and
a dividing unit configured to divide the processed image into a plurality of sub-images according to the recognition result.
9. A storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the image capturing method according to any one of claims 1 to 6.
10. A terminal comprising a processor and a memory, the memory storing a plurality of instructions, and the processor loading the instructions to perform the image capturing method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010459328.5A CN111614901A (en) | 2020-05-27 | 2020-05-27 | Image shooting method and device, storage medium and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111614901A true CN111614901A (en) | 2020-09-01 |
Family
ID=72197904
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010459328.5A Pending CN111614901A (en) | 2020-05-27 | 2020-05-27 | Image shooting method and device, storage medium and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111614901A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112869767A (en) * | 2021-01-11 | 2021-06-01 | Qingdao Hisense Medical Equipment Co., Ltd. | Ultrasonic image storage method and apparatus, and ultrasonic device
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1879421A (en) * | 2003-11-05 | 2006-12-13 | Telefonaktiebolaget LM Ericsson | Methods of processing digital image and/or video data including luminance filtering based on chrominance data and related systems and computer program products
CN101083765A (en) * | 2006-07-18 | 2007-12-05 | VIA Technologies, Inc. | System and method for video data compression
CN101494788A (en) * | 2009-01-23 | 2009-07-29 | Jucai Microelectronics (Shenzhen) Co., Ltd. | Method and apparatus for compressing and decompressing video image
CN101616320A (en) * | 2008-06-26 | 2009-12-30 | Spreadtrum Communications (Shanghai) Co., Ltd. | Image compression and decompression method and device
US20110158525A1 (en) * | 2009-12-25 | 2011-06-30 | Canon Kabushiki Kaisha | Image processing apparatus and image processing method
TW201340716A (en) * | 2012-03-30 | 2013-10-01 | National United University | Differential layer stratification image compression method
CN105578035A (en) * | 2015-12-10 | 2016-05-11 | Lenovo (Beijing) Ltd. | Image processing method and electronic device
CN107147914A (en) * | 2017-06-07 | 2017-09-08 | Guangdong University of Technology | Embedded system, monochrome bitmap compression method, and host
CN108111858A (en) * | 2016-11-24 | 2018-06-01 | Tencent Technology (Shenzhen) Co., Ltd. | Picture compression method and device
CN108900843A (en) * | 2018-07-31 | 2018-11-27 | BOE Technology Group Co., Ltd. | Monochrome image compression method and device, medium, and electronic device
Non-Patent Citations (1)
Title |
---|
Liu Hui: "Dissecting the Sony R1", Microcomputer World (《微电脑世界》) *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3410390B1 (en) | Image processing method and device, computer readable storage medium and electronic device | |
US10827140B2 (en) | Photographing method for terminal and terminal | |
CN108900790B (en) | Video image processing method, mobile terminal and computer readable storage medium | |
CN107038715B (en) | Image processing method and device | |
CN107093418B (en) | Screen display method, computer equipment and storage medium | |
CN107395898B (en) | Shooting method and mobile terminal | |
CN107465881B (en) | Dual-camera focusing method, mobile terminal and computer readable storage medium | |
EP3893495A1 (en) | Method for selecting images based on continuous shooting and electronic device | |
CN108198146B (en) | Noise reduction method, equipment and computer readable storage medium | |
CN107846554B (en) | Image processing method, terminal and computer readable storage medium | |
WO2019114724A1 (en) | Camera thumbnail image generating method, mobile terminal, and storage medium | |
CN106844580B (en) | Thumbnail generation method and device and mobile terminal | |
CN108280136B (en) | Multimedia object preview method, equipment and computer readable storage medium | |
CN106993136B (en) | Mobile terminal and multi-camera-based image noise reduction method and device thereof | |
CN107705247B (en) | Image saturation adjusting method, terminal and storage medium | |
CN108459799B (en) | Picture processing method, mobile terminal and computer readable storage medium | |
CN113014803A (en) | Filter adding method and device and electronic equipment | |
WO2020134789A1 (en) | Mobile terminal and method for controlling on and off of screen, and computer storage medium | |
WO2022110687A1 (en) | Image processing method and apparatus, electronic device, and readable storage medium | |
CN111263216B (en) | Video transmission method, device, storage medium and terminal | |
CN110944163A (en) | Image processing method and electronic equipment | |
CN108320265B (en) | Image processing method, terminal and computer readable storage medium | |
CN107743198B (en) | Photographing method, terminal and storage medium | |
CN111614901A (en) | Image shooting method and device, storage medium and terminal | |
CN111182351A (en) | Video playing processing method and device, storage medium and terminal |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200901 |