CN111353133A - Image processing method, device and readable storage medium - Google Patents
Image processing method, device and readable storage medium
- Publication number
- CN111353133A (application number CN201811584188.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- user information
- data
- information data
- frequency domain
- Prior art date
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/10—Protecting distributed programs or content, e.g. vending or licensing of copyrighted material ; Digital rights management [DRM]
- G06F21/16—Program or content traceability, e.g. by watermarking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/0021—Image watermarking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2201/00—General purpose image data processing
- G06T2201/005—Image watermarking
- G06T2201/0052—Embedding of the watermark in the frequency domain
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Technology Law (AREA)
- Computer Hardware Design (AREA)
- Computer Security & Cryptography (AREA)
- General Engineering & Computer Science (AREA)
- Image Processing (AREA)
Abstract
The disclosure provides an image processing method, an image processing device, and a readable storage medium, belonging to the technical field of image processing. In the method, user information data corresponding to the user information of a user who requests a first image is written into the first image, yielding a second image that carries the user information data. If the user leaks the second image, the user information data can be extracted from the leaked image and the user information of the leaker determined from it, so the leak can be traced back to its source. Moreover, because the user information data is carried in the second image in a manner invisible to the naked eye, the user is unlikely to notice it, which reduces the probability that the user removes the user information data before leaking the image.
Description
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image processing method and apparatus, and a readable storage medium.
Background
In some companies and government departments with security requirements, internal information must be kept strictly confidential and must not be leaked. If a user needs to consult images in an internal system, the user logs in to a server through a terminal such as a computer and sends an access request, and the server returns the target image corresponding to the request to the terminal. In practice, a few users obtain images from the internal system and leak them, or even upload them to the Internet, by saving or photographing them. However, because many people can log in to the internal system, it is often difficult to identify the person who leaked a given image.
Disclosure of Invention
The present disclosure provides an image processing method, an image processing apparatus, and a readable storage medium, which make it possible to determine the leaker from a leaked image. The technical scheme is as follows:
In a first aspect, the present disclosure provides an image processing method, including: when a user requests to acquire a first image, acquiring the first image and user information of the user; obtaining at least one group of user information data based on the user information; and writing the at least one group of user information data into the first image to obtain a second image, wherein the at least one group of user information data is carried in the second image in a manner invisible to the naked eye.
In a possible embodiment, the obtaining at least one set of user information data based on the user information includes: generating a user identification image based on the user information; dividing the user identification image into a plurality of first image blocks; and respectively carrying out frequency domain transformation processing on each first image block to obtain frequency domain data of each first image block, wherein the frequency domain data of each first image block is a group of user information data.
In another possible embodiment, the writing the at least one set of user information data into the first image to obtain a second image includes: dividing the first image into a plurality of second image blocks; performing frequency domain transformation on at least part of the second image blocks to obtain frequency domain data of the second image blocks, wherein each group of user information data corresponds to the frequency domain data of one second image block; writing the at least one group of user information data into the frequency domain data of the corresponding second image block; and obtaining the second image based on the frequency domain data of the second image block written with the user information data.
Optionally, the writing the at least one group of user information data into the frequency domain data of the corresponding second image block includes: replacing the data at the intermediate-frequency positions in the frequency domain data of the corresponding second image block with the user information data.
In a second aspect, the present disclosure provides another image processing method, including: acquiring a second image, wherein the second image carries at least one group of user information data in a manner invisible to the naked eye, the at least one group of user information data is used to indicate user information, the at least one group of user information data was written into a first image when the user corresponding to the user information requested the first image, and the second image is the result of writing the at least one group of user information data into the first image; extracting the at least one group of user information data from the second image; and determining the user information based on the at least one group of user information data.
In a possible embodiment, said extracting said at least one set of user information data from said second image comprises: dividing the second image into a plurality of second image blocks; performing frequency domain transformation on at least part of the second image blocks to obtain frequency domain data of a plurality of second image blocks; and respectively extracting a group of user information data from at least part of the frequency domain data of the second image block to obtain a plurality of groups of user information data.
Optionally, the extracting a group of user information data from the frequency domain data of at least part of the second image blocks includes: extracting the data located at the intermediate-frequency positions in the frequency domain data of a second image block as user information data.
In another possible embodiment, the determining the user information based on the at least one set of user information data includes: carrying out time domain transformation on the multiple groups of user information data to obtain a plurality of first image blocks; and combining the plurality of first image blocks to obtain a user identification image, wherein the user identification image is used for displaying the user information.
In a third aspect, the present disclosure provides an image processing apparatus, comprising: an acquisition module, configured to acquire a first image and user information of a user when the user requests to acquire the first image; a user information processing module, configured to obtain at least one group of user information data based on the user information; and an image processing module, configured to write the at least one group of user information data into the first image to obtain a second image, the at least one group of user information data being carried in the second image in a manner invisible to the naked eye.
In one possible implementation manner, the user information processing module includes: the image generation submodule is used for generating a user identification image based on the user information; an image segmentation sub-module, configured to segment the user identification image into a plurality of first image blocks; and the data processing sub-module is used for respectively carrying out frequency domain transformation processing on each first image block to obtain frequency domain data of each first image block, and the frequency domain data of each first image block is a group of user information data.
In another possible implementation, the image processing module includes: an image segmentation sub-module for segmenting the first image into a plurality of second image blocks; the data transformation sub-module is used for performing frequency domain transformation on at least part of the second image blocks to obtain frequency domain data of the second image blocks, and each group of user information data corresponds to the frequency domain data of one second image block; the data writing sub-module is used for writing the at least one group of user information data into the frequency domain data of the corresponding second image block; and the data inverse transformation submodule is used for obtaining the second image based on the frequency domain data of the second image block written with the user information data.
Optionally, the data writing sub-module is configured to replace, with the user information data, data located at an intermediate frequency position in the frequency domain data of the corresponding second image block.
In a fourth aspect, the present disclosure provides another image processing apparatus, comprising: an obtaining module, configured to obtain a second image, where the second image carries at least one group of user information data in a manner invisible to the naked eye, the at least one group of user information data is used to indicate user information, the at least one group of user information data was written into a first image when the user corresponding to the user information requested the first image, and the second image is the result of that writing; an extraction module, configured to extract the at least one group of user information data from the second image; and a determining module, configured to determine the user information based on the at least one group of user information data.
In one possible implementation, the extraction module includes: an image segmentation sub-module for segmenting the second image into a plurality of second image blocks; the data transformation sub-module is used for carrying out frequency domain transformation on at least part of the second image blocks to obtain frequency domain data of a plurality of second image blocks; and the data extraction sub-module is used for respectively extracting a group of user information data from at least part of the frequency domain data of the second image block to obtain a plurality of groups of user information data.
Optionally, the data extraction sub-module is configured to extract data located at an intermediate frequency position in the frequency domain data of the second image block as user information data.
In another possible implementation manner, the determining module includes: the data processing submodule is used for carrying out time domain transformation on the multiple groups of user information data to obtain a plurality of first image blocks; and the synthesis sub-module is used for combining the plurality of first image blocks to obtain a user identification image, and the user identification image is used for displaying the user information.
In a fifth aspect, the present disclosure provides an image processing apparatus, which includes a processor and a memory, where the memory stores at least one instruction, and the instruction is loaded and executed by the processor to implement the image processing method according to the first aspect or the second aspect.
In a sixth aspect, the present disclosure provides a computer-readable storage medium having at least one instruction stored therein, the instruction being loaded and executed by a processor to implement the image processing method of the first or second aspect.
The technical scheme provided by the embodiments of the disclosure has at least the following beneficial effects:
User information data corresponding to the user information of the user who requests a first image is written into the first image, yielding a second image that carries the user information data. If the user leaks the second image, the user information data can be extracted from the leaked image and the user information of the leaker determined from it, so the leak can be traced back to its source. Moreover, because the user information data is carried in the second image in a manner invisible to the naked eye, the user is unlikely to notice it, which reduces the probability that the user removes the user information data before leaking the image.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a schematic diagram of a usage scenario of an embodiment of the present disclosure;
FIG. 2 is a flowchart of an image processing method provided by an embodiment of the present disclosure;
FIG. 3 is a flowchart of an image processing method provided by an embodiment of the present disclosure;
FIG. 4 is a flowchart of an image processing method provided by an embodiment of the present disclosure;
FIG. 5 is a flowchart of an image processing method provided by an embodiment of the present disclosure;
FIG. 6 is a block diagram of an image processing apparatus provided by an embodiment of the present disclosure;
FIG. 7 is a block diagram of a user information processing module provided by an embodiment of the present disclosure;
FIG. 8 is a block diagram of an image processing module provided by an embodiment of the present disclosure;
FIG. 9 is a block diagram of an image processing apparatus provided by an embodiment of the present disclosure;
FIG. 10 is a block diagram of an extraction module provided by an embodiment of the present disclosure;
FIG. 11 is a block diagram of a determination module provided by an embodiment of the present disclosure;
FIG. 12 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present disclosure.
Detailed Description
Before explaining the present disclosure in detail, an application scenario and related technologies related to the present disclosure will be described.
Referring to fig. 1, a schematic diagram of an application scenario of an image processing method according to an embodiment of the present disclosure is shown. The terminal 120 and the server 140 are connected via a network and exchange information over it. The server 140 stores various kinds of information, such as images. When a user needs to access information in the server 140, the user sends account information with login authority to the server 140 through the terminal 120, and can access the information in the server 140 after the server 140 verifies it. Three exemplary scenarios in which the terminal 120 requests a target image from the server 140 are described below.
In one scenario, the terminal 120 sends a user request to the server 140, where the user request carries the user information and an identifier of the target image; the identifier may be, for example, a Uniform Resource Locator (URL). The server 140 writes the user information into the target image and then sends the target image carrying the user information to the terminal 120, so the terminal 120 obtains the requested target image. If that image is leaked, for example photographed or captured and spread through unauthorized channels, the user information can be recovered from it, so the path of the leaked image can be investigated and the responsible party held to account. In this scenario, the execution subject of the image processing method is the server 140.
In another scenario of the present disclosure, the terminal 120 sends a user request to the server 140, where the user request carries the user information and an identifier (e.g., a URL) of the target image; the server 140 sends the target image to the terminal 120, and the terminal 120 writes the user information into the target image before outputting it. If the target image carrying the user information is leaked, for example photographed or captured and spread through unauthorized channels, the user information can be recovered from it, so the path of the leaked image can be investigated and the responsible party held to account. In this scenario, the execution subject of the image processing method is the terminal 120.
In yet another scenario of the present disclosure, a separate device is disposed between the terminal 120 and the server 140 and forwards the data transmitted between them. The terminal 120 sends a user request to the device, where the user request carries the user information and an identifier (e.g., a URL) of the target image; the device forwards the user request to the server 140 and extracts and stores the user information from the request. After receiving the user request, the server 140 sends the requested target image to the device. On receiving the target image returned by the server 140, the device writes the corresponding user information into the target image and forwards it to the terminal 120. If the target image carrying the user information is leaked, for example photographed or captured and spread through unauthorized channels, the user information can be recovered from it, so the path of the leaked image can be investigated and the responsible party held to account. In this scenario, the execution subject of the image processing method is the device between the terminal 120 and the server 140.
In addition, in any of the three exemplary scenarios, the terminal 120, the server 140, or the device between them may also extract the user information from a leaked target image carrying that information, and the user information may be used to investigate and trace the path through which the image was leaked.
The embodiments of the disclosure can be applied to scenarios such as an information inquiry system of a public security department, a marriage registration system of a civil affairs department, or an inquiry system in a bank, and also to other scenarios in which images need to be kept confidential, for example a merchant on a shopping website protecting its product pictures.
To make the objects, technical solutions and advantages of the present disclosure more apparent, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
Referring to fig. 2, which shows a flowchart of an image processing method provided by an embodiment of the present disclosure, the method shown in fig. 2 may be performed by a server, a terminal, or a device between the server and the terminal. The method comprises the following steps:
201. When a user requests to acquire a first image, the first image and the user information of the user are acquired.
The user information is information about the user requesting the first image, including but not limited to the user's account number, Internet Protocol (IP) address, and Media Access Control (MAC) address; the user information may include any one or more of these items. The first image is the image requested by the user corresponding to the user information.
In one possible implementation, the method is executed by a server. When a user requests to acquire a first image, a user request is sent through a terminal to the party storing the first image, such as the server. The user request includes the user information and an identifier of the first image; the identifier may be, for example, a URL. The storing party can obtain the user information of the requesting user from the user request and obtain the first image according to its identifier, for example by looking up the first image in the server according to the URL. That is, when the method is performed by a server, the server may obtain the user information from the user request sent by the terminal and obtain the first image according to the identifier of the first image in the user request.
In another possible implementation manner, when the method is executed by a terminal, the terminal may determine an identifier of the first image according to a user instruction, generate a user request based on the user information and the identifier of the first image, send the user request to a storage party of the first image, and receive the first image returned by the storage party of the first image according to the user request. That is, when the method is executed by a terminal, the terminal may acquire the user information stored locally and receive the first image returned by the server. In addition, the terminal may also obtain the user information from the server, for example, the server may simultaneously carry the user information when returning the first image. Or, the terminal may also acquire user information directly input by the user.
In yet another possible implementation, when the method is performed by a device between the server and the terminal, the device may obtain the user information from a user request sent by the terminal, and obtain the first image sent by the server to the terminal.
202. At least one set of user information data is obtained based on the user information.
The user information data is data indicating user information. All the user information data corresponding to the user information can be taken as one group or can be divided into a plurality of groups, and each group of user information data comprises one or more user information data.
Alternatively, the user information data may be time domain data or frequency domain data.
In one possible implementation, this step 202 may include: generating a user identification image based on the user information; dividing a user identification image into a plurality of first image blocks; each first image block is converted into a set of user information data. Alternatively, the pixel data (time domain data) corresponding to the first image block may be used as the user information data, or the frequency domain data of the first image block may be used as a set of user information data. Illustratively, when the user identification image is a grayscale image or a binary image, the pixel data may be a grayscale value of the pixel. When the user identification image is a monochrome image, the pixel data may be a certain channel value of the pixel (for example, any one of Red, Green, and Blue (RGB) channels).
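The first branch of step 202 — cutting the user identification image into first image blocks whose pixel data form the groups of user information data — can be sketched as follows. This is an illustrative numpy sketch; the toy 4×4 binary image and the block size of 2 are assumptions for demonstration, not values from the disclosure.

```python
import numpy as np

def split_into_blocks(img, q):
    """Split a 2-D image array into q x q first image blocks, row-major order."""
    h, w = img.shape
    assert h % q == 0 and w % q == 0, "image must tile evenly into q x q blocks"
    return [img[r:r + q, c:c + q]
            for r in range(0, h, q)
            for c in range(0, w, q)]

# A toy 4x4 binary "user identification image" split into 2x2 blocks;
# each block's pixel (time-domain) data is one group of user information data.
ident = np.array([[1, 0, 1, 1],
                  [0, 1, 0, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]])
blocks = split_into_blocks(ident, 2)  # 4 blocks
```

Each element of `blocks` may then be used directly as time-domain user information data, or passed through a frequency-domain transform first, as the text describes.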
In another possible implementation, step 202 may include: obtaining binary data corresponding to the user information and dividing it into a plurality of groups. Each group of binary data is one group of user information data (time-domain data); alternatively, the frequency-domain data obtained by applying a frequency-domain transform to each group of binary data is one group of user information data. The groups may be of equal length, that is, they contain the same number of bits.
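The binary-grouping branch can be sketched like this. The UTF-8 encoding, the zero-padding of the last group, and the example account name `'zhang3'` with a group length of 16 bits are all illustrative assumptions; the disclosure only requires equal-length groups.

```python
def to_bit_groups(user_info: str, group_len: int):
    """Encode the user information as binary data and split it into
    equal-length groups, zero-padding the tail group if needed."""
    bits = ''.join(f'{byte:08b}' for byte in user_info.encode('utf-8'))
    if len(bits) % group_len:
        bits += '0' * (group_len - len(bits) % group_len)
    return [bits[i:i + group_len] for i in range(0, len(bits), group_len)]

groups = to_bit_groups('zhang3', 16)  # 6 bytes -> 48 bits -> 3 groups of 16
```

Each returned group is one group of user information data in time-domain form; a frequency-domain transform may be applied per group afterwards.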
203. Write the user information data corresponding to the user information into the first image to obtain a second image.
The user information data corresponding to the user information comprises all the user information data obtained based on the user information, and it is carried in the second image in a manner invisible to the naked eye. Carrying the user information data in this manner means that the second image shows the same image content as the first image, and the user cannot tell from looking at the second image that it carries user information data. Through steps 202 and 203, the user information is steganographically written into the first image, yielding the second image.
In the embodiment of the present disclosure, the user information data may be written in the spatial domain data (i.e., the time domain data) of the first image, or the user information data may be written in the frequency domain data of the first image.
When the user information data is written into the spatial domain data of the first image, it can be written in least-significant-bit (LSB) mode, and the second image is obtained after the writing. In this case, the user information data corresponding to the user information may form a single group written into the spatial domain data of a designated area of the first image, where the designated area may be the whole or part of the first image; alternatively, the user information data may be divided into several groups, with different groups written into the spatial domain data of different regions of the first image. Because LSB writing changes each pixel value by at most its least significant bit, the first and second images look identical to the human eye, and their sharpness is essentially the same.
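A minimal sketch of the LSB path, assuming 8-bit grayscale pixels; the payload bits, the toy region, and the helper names are illustrative, not from the disclosure:

```python
import numpy as np

def lsb_embed(region: np.ndarray, bits: str) -> np.ndarray:
    """Write one payload bit into the least significant bit of each pixel
    of the designated region; each pixel value changes by at most 1."""
    flat = region.flatten().astype(np.uint8)
    assert len(bits) <= flat.size, "payload longer than the designated region"
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(b)
    return flat.reshape(region.shape)

def lsb_extract(region: np.ndarray, n_bits: int) -> str:
    """Read the payload back from the least significant bits."""
    return ''.join(str(int(p) & 1) for p in region.flatten()[:n_bits])

region = np.array([[200, 201], [202, 203]], dtype=np.uint8)
stego = lsb_embed(region, '1010')
recovered = lsb_extract(stego, 4)
```

Since only the lowest bit of each pixel is touched, the embedded region is visually indistinguishable from the original, matching the property the paragraph describes.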
When the user information data is written into the frequency domain data of the first image, the first image must first undergo a frequency-domain transform to obtain its frequency domain data; the user information data is then written into that frequency domain data; finally, the frequency domain data with the user information written in is transformed back into the time domain to obtain the second image. The time-domain transform here is the inverse of the preceding frequency-domain transform.
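The frequency-domain path above can be sketched with a hand-rolled orthonormal 2-D DCT (one common choice of frequency-domain transform, though the disclosure does not mandate a specific transform). The coefficient position `(2, 1)` standing in for "an intermediate-frequency position" and the `strength` value are illustrative assumptions:

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix, so the inverse is simply the transpose."""
    m = np.array([[np.cos(np.pi * (2 * j + 1) * i / (2 * n)) for j in range(n)]
                  for i in range(n)])
    m[0] *= np.sqrt(1.0 / n)
    m[1:] *= np.sqrt(2.0 / n)
    return m

def embed_bit(block: np.ndarray, bit: int, pos=(2, 1), strength=20.0):
    """Frequency-domain transform a block, overwrite one mid-frequency
    coefficient to encode a bit, then apply the inverse (time-domain) transform."""
    D = dct_matrix(block.shape[0])
    coeffs = D @ block @ D.T                  # forward 2-D DCT
    coeffs[pos] = strength if bit else -strength
    return D.T @ coeffs @ D                   # inverse 2-D DCT

def extract_bit(stego: np.ndarray, pos=(2, 1)) -> int:
    D = dct_matrix(stego.shape[0])
    return int((D @ stego @ D.T)[pos] > 0)

flat_block = np.full((4, 4), 128.0)           # a featureless 4x4 image block
bit1 = extract_bit(embed_bit(flat_block, 1))
bit0 = extract_bit(embed_bit(flat_block, 0))
```

Mid-frequency coefficients are a common embedding choice because perturbing them is less visible than low-frequency changes yet more robust to compression than high-frequency ones.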
If the user information data is written into the spatial domain data of the first image, a user can detect that the second image carries hidden data through means such as its histogram, so the hidden information is relatively easy to discover; if the user information data is instead written into the frequency domain data of the first image, it is much harder for the user to discover that the second image carries user information data.
Optionally, the method may further include: and outputting the second image. The manner of outputting the second image may be to transmit the second image to another device (e.g., a terminal, etc.) or to display the second image through a display device (e.g., a display screen).
For example, when the method is performed by a server or a device between the server and the terminal, the manner of outputting the second image may be to transmit the second image to the terminal. When the method is performed by a terminal, the manner of outputting the second image may be displaying the second image.
According to the technical scheme, the user information data corresponding to the user information of the user requesting to acquire the first image is written into the first image, so that the second image carrying the user information data is obtained, when the user leaks the second image, the user information data can be extracted through the leaked second image, the user information of the leaking person is determined according to the user information data, and accordingly the leaking person can be traced. Moreover, since the user information data is carried in the second image in a manner invisible to the naked eye, the user is hard to find, so that the probability that the user removes the user information data and then leaks the image can be reduced.
Referring to fig. 3, which shows a flowchart of an image processing method provided by an embodiment of the present disclosure, the method shown in fig. 3 may be performed by a server, a terminal, or a device between the server and the terminal. The method comprises the following steps:
301. when a user requests to acquire a first image, the first image and user information of the user are acquired.
For an implementation of this step, reference may be made to step 201; a detailed description is omitted here.
302. A user identification image is generated based on the user information.
Optionally, the user identification image contains user information, which may be visually displayed, i.e. the user information may be directly visible on the user identification image. Illustratively, the account name of the user, such as "Zhang San", is shown on the user identification image.
Alternatively, the user identification image may be a monochrome image. It may be a binary image, i.e. the gray value of each pixel is represented by a single bit (0 or 1); or it may be a grayscale image, i.e. the gray value of each pixel is represented by a multi-bit binary number.
Illustratively, the user information is rendered as characters on an image to generate the user identification image. For example, if the user information is the user's account name "Zhang San", the generated user identification image is a binary image bearing the characters "Zhang San".
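A hypothetical sketch of producing a binary user identification image from an account name: here each character's 8-bit code simply becomes one row of 0/1 pixels, as a stand-in for actually rasterizing the text with an image library (which is what a real implementation would do so the information is visually readable):

```python
def account_to_binary_image(account):
    """Map each character's 8-bit code to one row of 0/1 pixels.

    Illustrative stand-in only: a real implementation would rasterize
    the account name as visible text (e.g. with an image library).
    """
    return [[(ord(ch) >> (7 - b)) & 1 for b in range(8)] for ch in account]

img = account_to_binary_image("zhangsan")   # an 8x8 binary user identification image
```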
303. The user identification image is divided into n first image blocks.
Wherein n is a positive integer greater than 1.
Optionally, the user identification image may be divided into n first image blocks, where n = p² and p is a positive integer; the n first image blocks form the user identification image as p rows by p columns. Each first image block is a square image block: for example, each of the n image blocks is a q × q image block, so the size of each first image block is Ln = q², where q is a positive integer greater than 1 (for example, q may be equal to 8). The values of p and q may further be limited to integer multiples of 2 to make the processing of the first image blocks more suitable for computer operations.
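The p × p blocking described above can be sketched as follows (toy dimensions; `split_blocks` is an illustrative helper, not part of the disclosed method):

```python
def split_blocks(image, q):
    """Split a square image (list of rows) into q-by-q blocks, row-major.

    Assumes the side length is a multiple of q; returns p*p blocks,
    where p = side // q.
    """
    side = len(image)
    p = side // q
    return [[[image[i * q + r][j * q + c] for c in range(q)] for r in range(q)]
            for i in range(p) for j in range(p)]

image = [[row * 4 + col for col in range(4)] for row in range(4)]  # 4x4 toy image
blocks = split_blocks(image, 2)   # n = p^2 = 4 blocks, each of size Ln = q^2 = 4
```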
304. And respectively carrying out frequency domain transformation processing on each first image block to obtain frequency domain data of each first image block.
The frequency domain data of each first image block is a group of user information data. In this embodiment, frequency domain transform processing is performed on the n first image blocks, so as to obtain frequency domain data of the n first image blocks, where the frequency domain data of each first image block is a set of user information data, so as to obtain n sets of user information data.
When performing the frequency domain transform processing, the frequency domain transform processing may be performed on each first image block in sequence, or may be performed on all first image blocks at the same time, which is not limited in this disclosure.
Alternatively, one of the pixel data of the first image block may be selected to be subjected to a frequency domain transform process, for example, a gray scale value of a pixel of the first image block, a certain channel value of RGB data of the pixel of the first image block, or the like may be selected.
Alternatively, the frequency domain transform may be a discrete cosine transform, a discrete fourier transform, a discrete wavelet transform, or the like.
Optionally, the security of the user information may be further improved by an interleaving method. The specific rule of interleaving may be predetermined.
As described above, in the present embodiment the user identification image is divided into n first image blocks, each containing Ln pixels. Each first image block is converted into a set of user information data; accordingly, each set may contain Ln user information data, where n and Ln are both integers greater than 1. The interleaving manner may include at least one of the following three manners:
firstly, n first image blocks of the user identification image are interleaved. This step may be performed after step 303 and before step 304.
For example, n interleaving units are interleaved with a length of L1, taking one first image block as one interleaving unit, where L1 is a positive integer greater than or equal to n. The purpose of interleaving is to disorder the sequence of n image blocks of the user identification image, so that a user who does not know the interleaving rule cannot perform reverse interleaving processing on the first image containing the user information, the user information cannot be removed, and the probability of tracing a divulger by obtaining the user information through the leaked image is further increased.
Second, n sets of user information data are interleaved. This step may be performed after step 304.
For example, a set of user information data is interleaved as one interleaved unit with n interleaved units of length L2, where L2 is a positive integer equal to or greater than n. The purpose of interleaving is to break up the sequence of n groups of user information data, so that a user who does not know the interleaving rule cannot perform reverse interleaving processing on a plurality of groups of user information data, so that the user information cannot be removed, and the probability of tracing a divulger by obtaining the user information through the leaked image is further increased.
Thirdly, interleaving Ln user information data in each group of user information data. This step may be performed after step 304.
Illustratively, one user information datum is taken as one interleaving unit, and Ln interleaving units are interleaved with a length of L3, where L3 is a positive integer greater than or equal to Ln. The purpose of interleaving is to scramble the sequence of the Ln user information data, so that a user who does not know the interleaving rule cannot perform reverse interleaving on the user information data and therefore cannot remove the user information, further increasing the probability of tracing a divulger by recovering the user information from the leaked image.
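One possible interleaving rule, sketched as a keyed pseudo-random permutation (the disclosure leaves the specific rule to prior agreement; the seed here plays the role of that agreed secret, and all names are illustrative):

```python
import random

def interleave(units, seed):
    """Scramble units with a keyed permutation known only to the embedder."""
    order = list(range(len(units)))
    random.Random(seed).shuffle(order)          # deterministic for a given seed
    return [units[i] for i in order], order

def deinterleave(shuffled, order):
    """Invert the permutation; impossible without knowing the rule (seed)."""
    out = [None] * len(shuffled)
    for dst, src in enumerate(order):
        out[src] = shuffled[dst]
    return out

groups = ["g0", "g1", "g2", "g3", "g4"]         # e.g. 5 groups of user info data
mixed, order = interleave(groups, seed=1234)
assert deinterleave(mixed, order) == groups     # round trip with the key
```

The same helper applies to any of the three manners: the units may be first image blocks, groups of user information data, or individual user information data within a group.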
Through steps 302-304, at least one group of user information data can be obtained based on the user information.
In step 304, the frequency domain data of each first image block is respectively set as a set of user information data, but in other embodiments, the pixel data of each first image block may be respectively set as a set of user information data. For example, a gray value of a pixel of the first image block, a certain channel value in RGB data of the pixel of the first image block, and the like.
Alternatively, steps 302-304 can be replaced by the following steps to achieve at least one set of user information data based on the user information:
firstly, converting user information into binary data, wherein the binary data is used for representing the user information;
and secondly, dividing binary data corresponding to the user information into multiple groups of binary data with equal length, wherein each group of binary data is a group of user information data, or frequency domain data obtained after each group of binary data is subjected to frequency domain transformation is a group of user information data.
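The two steps above might look like the following sketch (zero-padding when the bit count is not a multiple of the group length is an added assumption, as is UTF-8 as the byte encoding):

```python
def info_to_groups(user_info, group_len):
    """Encode user info as bits, pad with zeros, split into equal-length groups."""
    bits = [(byte >> (7 - b)) & 1
            for byte in user_info.encode("utf-8") for b in range(8)]
    while len(bits) % group_len:
        bits.append(0)                  # zero-pad to a multiple of group_len
    return [bits[i:i + group_len] for i in range(0, len(bits), group_len)]

groups = info_to_groups("user42", 16)   # 6 bytes -> 48 bits -> 3 groups of 16
```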
Alternatively, the steps 302-304 can be replaced by the following steps to obtain at least one group of user information data based on the user information:
carrying out redundant coding on the binary data corresponding to the user information to obtain n × Ln data; and dividing the data obtained by the redundant coding into n groups of user information data, where each group of user information data comprises Ln user information data and each user information datum is one of the n × Ln data.
For example, n is 4, Ln is 4, and data obtained by redundantly encoding data of the user information is {1, 4, 8, 76, 27, 201, 8, 37, 29, 198, 57, 92, 6, 34, 83, 72 }; the obtained 4 groups of user information data are: {1, 4, 8, 76}, {27, 201, 8, 37}, {29, 198, 57, 92} and {6, 34, 83, 72 }.
In the above example, the redundant coding may be repetition coding or error correction coding.
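A sketch using the simplest redundant code, repetition coding with majority-vote decoding (the example payload mirrors the one above only in shape; a real system might prefer an error-correction code for better efficiency):

```python
def repetition_encode(data, k):
    """Repeat every datum k times -- one simple form of redundant coding."""
    return [d for d in data for _ in range(k)]

def repetition_decode(coded, k):
    """Majority vote over each run of k copies; survives up to k//2 errors per run."""
    out = []
    for i in range(0, len(coded), k):
        run = coded[i:i + k]
        out.append(max(set(run), key=run.count))
    return out

payload = [1, 4, 8, 76]                  # data derived from the user information
coded = repetition_encode(payload, 4)    # n * Ln = 16 data
coded[2] = 0                             # corrupt one copy (e.g. lossy processing)
assert repetition_decode(coded, 4) == payload
groups = [coded[i:i + 4] for i in range(0, 16, 4)]   # n = 4 groups, Ln = 4 each
```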
It should be noted that steps 302-304 are used to derive multiple sets of user information data based on the user information. For the same user, after step 202, the user information and the corresponding sets of user information data may be stored on the server, so that when the user initiates an image request again they can be used directly without repeating the conversion. That is, step 202 may include searching for the multiple sets of user information data corresponding to the user information according to the user information.
305. The first image is divided into m second image blocks.
Wherein m is a positive integer greater than 1.
Optionally, the first image may be divided into m second image blocks, where m = s² and s is a positive integer; the m second image blocks form the first image as s rows by s columns. Each second image block is a square image block, and the size of each second image block is Lm = r², where r is a positive integer. The values of s and r may further be limited to integer multiples of 2 to make the subsequent writing of user information data more suitable for computer operations.
306. And selecting n second image blocks from the m second image blocks to perform frequency domain transformation processing to obtain frequency domain data of the n second image blocks.
Wherein m is an integer greater than n. By setting m larger than n, the user information data is written only in a part of the image blocks of the first image, thereby reducing the difference between the second image and the first image, i.e. reducing the influence of writing the user information data in the first image on the quality (e.g. sharpness) of the first image.
Alternatively, the manner of selecting the n second image blocks from the m second image blocks may be preset. Exemplarily, the n second image blocks may be selected consecutively from the m second image blocks, or discretely. One discrete manner is to select second image blocks at a set interval: for example, the 1st, 3rd, ..., (2x+1)-th second image blocks (x being an integer), until n second image blocks are reached. Another is random selection, e.g. determined using a specified function whose input may be a random number and whose output indicates which second image block to select.
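Both selection manners can be sketched as follows (function names are illustrative; the seed stands in for the "random number" input of the specified function, and must be shared with the extraction side so the same blocks are selected):

```python
import random

def select_interval(m, n, step=2):
    """Pick every `step`-th block index (1st, 3rd, 5th, ...) until n are chosen."""
    return list(range(0, m, step))[:n]

def select_random(m, n, seed):
    """Pick n distinct block indices with a keyed pseudo-random function."""
    return sorted(random.Random(seed).sample(range(m), n))

assert select_interval(16, 4) == [0, 2, 4, 6]   # m = 16 blocks, n = 4 selected
chosen = select_random(16, 4, seed=99)          # reproducible for a given seed
```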
The frequency domain transform processing may be performed on one type of pixel data of the second image block, and the pixel data of the second image block may be a luminance value, a gray value, a chrominance value, and the like. For example, when the first image is a grayscale image or a binary image, the pixel data of the second image block may be a grayscale value of the pixel; when the first image is in RGB format, the pixel data of the second image block may be one of R, G and B three color components; when the first image is in YUV format (Y denotes luminance and U, V denotes chrominance of a color), the pixel data of the second image block may be one of the three components Y, U and V.
When performing the frequency domain transform processing, the n second image blocks may be processed in sequence, or all at the same time, which is not limited in this disclosure.
Alternatively, the frequency domain transform may be a discrete cosine transform, a discrete fourier transform, a discrete wavelet transform, or the like. The frequency domain transformation in step 306 may be the same as the frequency domain transformation in step 304.
In a possible implementation manner, the step 306 may be replaced by the following step:
respectively carrying out frequency domain transformation processing on the m second image blocks to obtain frequency domain data of the m second image blocks; and then selecting the frequency domain data of the n second image blocks from the frequency domain data of the m second image blocks.
In this embodiment, each set of user information data corresponds to frequency domain data of one second image block, and is used to be written into the frequency domain data of the corresponding second image block.
It should be noted that steps 302-304 and steps 305-306 have no required execution order: steps 302-304 may be executed before steps 305-306, steps 305-306 may be executed before steps 302-304, or the two may be executed simultaneously.
307. And writing the n groups of user information data into the frequency domain data of the corresponding second image block.
For an image block, in the frequency domain data obtained after the frequency domain transform, the upper-left entries are low frequency data (the low frequency positions), the lower-right entries are high frequency data (the high frequency positions), and the intermediate frequency positions lie between the two. In this embodiment, writing the user information data into the frequency domain data of the corresponding second image block means replacing the data located at the intermediate frequency positions in the frequency domain data of that second image block with the user information data. This makes the user information difficult to destroy by signal processing, while keeping the influence on the image quality of the first image small.
In implementation, the starting point of the intermediate frequency position may be a preset value, and the value of the preset value is obtained through experience or experiment; the starting point of the intermediate frequency position may be variable, and the starting point of the intermediate frequency position may be determined according to statistical characteristics of data in the first image frequency domain coefficient set.
In this embodiment, each set of user information data includes Ln user information data, and the frequency domain data of each second image block includes Lm frequency domain data, where Lm is greater than Ln. Ln equals the number of pixels in a first image block and Lm equals the number of pixels in a second image block, so Lm > Ln means each second image block contains more pixels than each first image block, and the frequency domain data of each second image block contains more entries than a group of user information data. Then, when the Ln user information data are written into the corresponding Lm frequency domain data, only part of the frequency domain data of the second image block is replaced, and the influence on image quality is small. For example, if each second image block includes 512 × 512 pixels and each first image block includes 8 × 8 pixels, the frequency domain data of each second image block includes 512 × 512 entries and a set of user information data obtained by converting one first image block includes 8 × 8 entries, so only 8 × 8 of the 512 × 512 frequency domain data are replaced when writing the user information data, and the influence on the image quality of the first image is small.
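A sketch of the mid-frequency replacement on one block's flattened coefficients. A real implementation would typically index the mid-band along a zigzag scan of the 2-D coefficients; the flat row-major slice and the preset start point used here are simplifying assumptions:

```python
def embed_midband(coeffs, data, start):
    """Replace a mid-frequency slice of a block's coefficients with user data."""
    out = list(coeffs)
    out[start:start + len(data)] = data
    return out

def extract_midband(coeffs, start, length):
    """Read back the replaced slice (used on the extraction side)."""
    return coeffs[start:start + length]

block_coeffs = [float(i) for i in range(64)]   # Lm = 64 coefficients (8x8 block)
user_data = [9.0, 8.0, 7.0, 6.0]               # Ln = 4 user information data
start = 20                                     # preset mid-frequency start point
written = embed_midband(block_coeffs, user_data, start)
```

Only Ln of the Lm coefficients change; the low-frequency entries that dominate image quality are untouched.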
It should be noted that, in this embodiment, the user information data is written into the first image in a manner of replacing part of the frequency domain data in the second image block, and in other embodiments, other writing manners, such as data correlation, data weighted superposition, and the like, may also be used.
308. And obtaining a second image based on the frequency domain data of the second image block written with the user information data.
The user information data corresponding to the user information comprises all the user information data obtained based on the user information, and the user information data is carried in the second image in a manner invisible to naked eyes. Here, the fact that the user information data is carried in the second image in a manner invisible to the naked eye means that the image content shown by the first image is the same as the image content shown by the second image, and the user cannot directly observe that the user information data is carried in the second image from the second image.
If all the second image blocks are subjected to the frequency domain transform processing in the foregoing step, the step 308 may include: respectively carrying out inverse transformation on the frequency domain data of the second image block written with the user information data and the frequency domain data of the second image block not written with the user information data to obtain pixel data corresponding to all the second image blocks; and combining the pixel data corresponding to all the second image blocks to obtain a second image.
If the frequency domain transform processing is performed on only the selected n second image blocks in the foregoing steps, the step 308 may include: performing inverse transformation, namely time domain transformation, on the frequency domain data of the second image block into which the user information data is written to obtain pixel data (i.e. time domain data) of the n second image blocks; and combining the pixel data of the n second image blocks with the pixel data of the second image blocks which are not subjected to the frequency domain transformation processing to obtain a second image.
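In either case, the final combination step reduces to reassembling a grid of pixel blocks into one image, e.g. (illustrative helper; row-major block order is assumed):

```python
def merge_blocks(blocks, p, q):
    """Reassemble p*p blocks of q*q pixels (row-major) into one square image."""
    side = p * q
    image = [[0] * side for _ in range(side)]
    for idx, block in enumerate(blocks):
        i, j = divmod(idx, p)              # block's row and column in the grid
        for r in range(q):
            for c in range(q):
                image[i * q + r][j * q + c] = block[r][c]
    return image

# Pixel data of four 2x2 blocks (some inverse-transformed, some untouched).
blocks = [[[0, 1], [4, 5]], [[2, 3], [6, 7]],
          [[8, 9], [12, 13]], [[10, 11], [14, 15]]]
image = merge_blocks(blocks, p=2, q=2)     # the combined second image
```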
This step 308 is the inverse transformation of step 306, so the way of inverse transformation corresponds to the way of frequency domain transformation in step 306. For example, when a discrete cosine transform is used in step 306, an inverse discrete cosine transform is used in step 308.
Through the steps 305-308, at least one group of user information data can be written into the first image to obtain the second image.
In this embodiment, the user information data corresponding to the user information of the user requesting to acquire the first image is written into the first image to obtain a second image carrying the user information data. When the user leaks the second image, the user information data can be extracted from the leaked second image and the user information of the leaker determined from it, so that the leaker can be traced. Moreover, since the user information data is carried in the second image in a manner invisible to the naked eye, it is difficult for the user to detect, which reduces the probability that the user removes the user information data before leaking the image.
In addition, in this embodiment, the second image is obtained by converting the user identification image into multiple sets of frequency domain data and writing those sets into the first image group by group. Even if the user damages the second image through operations such as mosaic or cropping, as long as the damaged range is limited, the destroyed part only degrades the definition of the user identification image reconstructed from the extracted user information data, and the user information can still be recognized from it. In other words, as long as the range of mosaic or cropping is limited, enough user information data can still be extracted from the second image to restore the user information, so the user information of the leaker can be obtained from the leaked second image and the leaker traced.
The following embodiments of the present disclosure recover user information from a second image containing user information data. They correspond to the methods for writing user information data into a first image provided in the embodiments shown in fig. 2 and fig. 3; for details not described here, reference may be made to those embodiments.
Referring to fig. 4, which shows a flowchart of an image processing method provided by an embodiment of the present disclosure, the method shown in fig. 4 may be executed by a terminal or a server, and the method includes:
401. a second image is acquired.
The second image carries at least one group of user information data in a mode invisible to naked eyes, the at least one group of user information data is used for indicating user information, the at least one group of user information data is written into the first image when a user corresponding to the user information requests the first image, and the second image is obtained after the at least one group of user information data is written into the first image. That is, the second image may be obtained by the method shown in fig. 2 or fig. 3.
The second image in step 401 may be a leaked second image, so that the user information of the leaker can be determined by the method provided in this embodiment.
402. At least one set of user information data is extracted from the second image.
When the user information data was written into the spatial domain data of the first image, the pixel data at the designated pixel positions in the second image may be extracted as the user information data. The designated pixel positions correspond to the pixel positions where the user information data was written in step 203.
When the user information data is written into the frequency domain data of the first image, the frequency domain transformation processing needs to be performed on the second image to obtain the frequency domain data of the second image, and then part of the frequency domain data is selected from the frequency domain data of the second image to be used as the user information data.
403. Based on the extracted user information data, user information is determined.
The user information data extracted in step 403 may be time domain data, such as image pixel data or binary data, or it may be frequency domain data. When the extracted user information data is frequency domain data, step 403 may include: performing a time domain transform on the extracted frequency domain data to obtain the time domain data corresponding to the user information data; and determining the user information based on the obtained time domain data.
The user information may be used to indicate the identity of the user, so that the leaker can be identified and held accountable based on the user information.
In this embodiment, the user information data corresponding to the user information of the user requesting to acquire the first image is written into the first image to obtain a second image carrying the user information data. When the user leaks the second image, the user information data can be extracted from the leaked second image and the user information of the leaker determined from it, so that the leaker can be traced. Moreover, since the user information data is carried in the second image in a manner invisible to the naked eye, it is difficult for the user to detect, which reduces the probability that the user removes the user information data before leaking the image.
Referring to fig. 5, which shows a flowchart of an image processing method provided by an embodiment of the present disclosure, the method shown in fig. 5 may be executed by a terminal or a server, and the method includes:
501. a second image is acquired.
The related description of the second image is referred to in step 401, and the detailed description is omitted here.
502. The second image is divided into m second image blocks.
Wherein m is a positive integer greater than 1.
Optionally, the second image may be divided into m second image blocks, where m = s² and s is a positive integer; the m second image blocks form the second image as s rows by s columns. Each second image block is a square image block, and the size of each second image block is Lm = r², where r is a positive integer. The values of s and r may further be limited to integer multiples of 2 to make the subsequent extraction of the user information data more suitable for computer operations.
503: and selecting n second image blocks from the m second image blocks to perform frequency domain transformation processing to obtain frequency domain data of the n second image blocks.
Wherein m is an integer greater than n.
The frequency domain transform method and the method for selecting n image blocks used in step 503 may be the same as those used in step 306, and a detailed description thereof is omitted here.
Alternatively, this step 503 may be replaced with the following step:
respectively carrying out frequency domain transformation processing on the m second image blocks to obtain frequency domain data of the m second image blocks; and then selecting the frequency domain data of the n second image blocks from the frequency domain data of the m second image blocks.
504. And respectively extracting a group of user information data from the frequency domain data of each second image block to obtain n groups of user information data.
Alternatively, for a second image block, data located at an intermediate frequency position in the frequency domain data of the second image block may be extracted as user information data. The description of the intermediate frequency position can be referred to in step 307, and the detailed description is omitted here.
505. And performing time domain transformation on the extracted n groups of user information data to obtain image data of a plurality of first image blocks.
In this embodiment, the extracted n groups of user information data are all frequency domain data, so that time domain transformation needs to be performed on the n groups of user information data to obtain image data of the n first image blocks.
506. And combining the image data of the n first image blocks to obtain the user identification image.
The user identification image is used for displaying user information. The description of the user identification image obtained in step 506 may be referred to in step 302.
Steps 505 to 506 realize determining the user information based on the extracted user information data.
In another possible implementation manner, n sets of user information data are time-domain transformed to obtain multiple sets of binary data, in which case, the step 506 may be replaced by the following steps:
merging the n groups of user information data; and decoding the merged user information data to obtain the user information.
Optionally, the decoding is the inverse of the redundant encoding mentioned in step 202.
Optionally, the method may further comprise a deinterleaving operation, which is the inverse of the interleaving operation in the implementation shown in fig. 2.
In another possible implementation, a set of user information data is extracted from the frequency domain data of each second image block as time domain data, for example, pixel data or binary data, in which case, the steps 505 to 506 may be replaced by the following steps: combining the extracted user information data; and converting the combined user information data into user information.
In this embodiment, the user information data corresponding to the user information of the user requesting to acquire the first image is written into the first image to obtain a second image carrying the user information data. When the user leaks the second image, the user information data can be extracted from the leaked second image and the user information of the leaker determined from it, so that the leaker can be traced. Moreover, since the user information data is carried in the second image in a manner invisible to the naked eye, it is difficult for the user to detect, which reduces the probability that the user removes the user information data before leaking the image.
In addition, in this embodiment, the second image is obtained by converting the user identification image into multiple sets of frequency domain data and writing those sets into the first image group by group. Even if the user damages the second image through operations such as mosaic or cropping, as long as the damaged range is limited, the destroyed part only degrades the definition of the user identification image reconstructed from the extracted user information data, and the user information can still be recognized from it. In other words, as long as the range of mosaic or cropping is limited, enough user information data can still be extracted from the second image to restore the user information, so the user information of the leaker can be obtained from the leaked second image and the leaker traced.
The following are embodiments of the disclosed apparatus and reference may be made to the above-described method embodiments for details not described in detail in the apparatus embodiments.
Referring to fig. 6, a block diagram of an image processing apparatus 600 according to an embodiment of the present disclosure is shown, where the apparatus 600 is used for steganographically writing user information into a first image. The device includes: an acquisition module 610, a user information processing module 620, and an image processing module 630. The obtaining module 610 is configured to obtain the first image and user information of the user when the user requests to obtain the first image; a user information processing module 620, configured to obtain at least one group of user information data based on the user information; the image processing module 630 is configured to write at least one set of user information data into the first image to obtain a second image, where the at least one set of user information data is carried in the second image in a manner invisible to the naked eye.
In this embodiment, the user information data corresponding to the user information of the user requesting to acquire the first image is written into the first image to obtain a second image carrying the user information data. When the user leaks the second image, the user information data can be extracted from the leaked second image and the user information of the leaker determined from it, so that the leaker can be traced. Moreover, since the user information data is carried in the second image in a manner invisible to the naked eye, it is difficult for the user to detect, which reduces the probability that the user removes the user information data before leaking the image.
Optionally, the apparatus may further comprise an output module for outputting the second image.
In one possible implementation, referring to fig. 7, the user information processing module 620 includes an image generation sub-module 621, an image segmentation sub-module 622, and a data processing sub-module 623. The image generation submodule 621 is configured to generate a user identification image based on the user information; the image segmentation sub-module 622 is configured to segment the user identification image into a plurality of first image blocks; the data processing sub-module 623 is configured to perform frequency domain transformation on each first image block to obtain frequency domain data of each first image block, where the frequency domain data of each first image block is a set of user information data.
Alternatively, the data processing sub-module 623 may be configured to use the pixel data (spatial domain data) of each first image block directly as a group of user information data; that is, the data processing sub-module 623 converts each first image block into one group of user information data without a frequency domain transformation.
Alternatively, the data processing sub-module 623 may be configured to acquire binary data corresponding to the user information and divide it into multiple groups, where each group of binary data serves either directly as one group of user information data (spatial domain data) or, after frequency domain transformation, as frequency domain user information data. The groups may be of equal length, that is, each group of binary data contains the same number of bits.
Optionally, the data processing sub-module 623 may also be used to perform interleaving operations.
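As a concrete illustration of the binary grouping described above, the following sketch converts user information into equal-length groups of binary data. This is a minimal Python example; the UTF-8 encoding, the 8-bit group length, and the zero padding of the final group are assumptions, since the embodiment does not fix these details.

```python
def to_bit_groups(user_info: str, group_len: int = 8) -> list:
    """Encode user information as a bit stream and split it into
    equal-length groups, zero-padding so every group has group_len bits."""
    bits = []
    for byte in user_info.encode("utf-8"):
        # Most-significant bit first for each byte.
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    # Pad so the bit stream divides evenly into groups of group_len.
    while len(bits) % group_len:
        bits.append(0)
    return [bits[i:i + group_len] for i in range(0, len(bits), group_len)]
```

Each returned group can then serve directly as one group of user information data, or be frequency-domain transformed first, as described above.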
In one possible implementation, the image processing module 630 may write the user information data into the frequency domain data of the first image. Referring to fig. 8, the image processing module 630 includes an image segmentation sub-module 631, a data transformation sub-module 632, a data writing sub-module 633, and a data inverse transformation sub-module 634. The image segmentation sub-module 631 is configured to segment the first image into a plurality of second image blocks; the data transformation sub-module 632 is configured to perform frequency domain transformation on at least part of the second image blocks to obtain their frequency domain data, where each group of user information data corresponds to the frequency domain data of one second image block; the data writing sub-module 633 is configured to write each group of user information data into the frequency domain data of its corresponding second image block; and the data inverse transformation sub-module 634 is configured to obtain the second image based on the frequency domain data of the second image blocks into which the user information data has been written.
Optionally, the data writing sub-module 633 is configured to replace, with the user information data, data located at an intermediate frequency position in the frequency domain data of the corresponding second image block.
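The mid-frequency replacement step can be sketched as follows. This is a minimal illustration rather than the embodiment's exact procedure: the 8x8 block size, the orthonormal DCT-II as the frequency domain transform, the anti-diagonal of the coefficient matrix as the "intermediate frequency position", and the sign-based coding with an embedding strength of 25 are all assumptions.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    # Orthonormal DCT-II basis; m @ block @ m.T gives the 2-D transform.
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * k * (2 * i + 1) / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)
    return m

def mid_freq_positions(n: int) -> list:
    # Anti-diagonal of the coefficient matrix: a common mid-frequency band.
    return [(r, n - 1 - r) for r in range(n)]

def embed_bits(block: np.ndarray, bits, strength: float = 25.0) -> np.ndarray:
    """Replace mid-frequency DCT coefficients of one cover block with
    +/-strength according to the bits, then inverse-transform."""
    m = dct_matrix(block.shape[0])
    coeffs = m @ block @ m.T
    for (r, c), b in zip(mid_freq_positions(block.shape[0]), bits):
        coeffs[r, c] = strength if b else -strength
    return m.T @ coeffs @ m  # inverse of an orthonormal transform

def extract_bits(block: np.ndarray, n_bits: int) -> list:
    """Recover bits from the signs of the mid-frequency coefficients."""
    m = dct_matrix(block.shape[0])
    coeffs = m @ block @ m.T
    return [int(coeffs[r, c] > 0)
            for (r, c) in mid_freq_positions(block.shape[0])[:n_bits]]
```

Reading the same coefficient positions back on the extraction side recovers the embedded group, which is how the extraction described later for the second image works in this sketch's terms.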
In another possible implementation, the image processing module 630 may write the user information data into the spatial domain data (pixel data) of the first image. For example, the writing may be performed in an LSB (least significant bit) manner, and the second image is obtained after the writing of the user information data is completed.
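An LSB write of this kind can be sketched as follows. This is a minimal example over a flat list of 8-bit pixel values; placing one bit in the least significant bit of each of the first pixels is an assumption, since the embodiment does not specify which pixel positions are used.

```python
def lsb_embed(pixels: list, bits: list) -> list:
    """Write each bit into the least significant bit of one pixel value;
    each pixel changes by at most 1, so the change is imperceptible."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b  # clear the LSB, then set it to the bit
    return out

def lsb_extract(pixels: list, n_bits: int) -> list:
    """Read the user information data back from the same pixel positions."""
    return [p & 1 for p in pixels[:n_bits]]
```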
Referring to fig. 9, a block diagram of an image processing apparatus 700 according to an embodiment of the present disclosure is shown, where the apparatus 700 is used to obtain user information from a second image carrying user information data. The device includes: an acquisition module 710, an extraction module 720, and a determination module 730.
The acquisition module 710 is configured to acquire a second image, where the second image carries at least one group of user information data in a manner invisible to the naked eye; the at least one group of user information data is used to indicate user information and was written into a first image when the user corresponding to the user information requested the first image, the second image being the result of that writing. The extraction module 720 is configured to extract the at least one group of user information data from the second image, and the determination module 730 is configured to determine the user information based on the at least one group of user information data.
According to this technical solution, the user information data corresponding to the user information of the user who requested the first image is written into the first image, so that the second image carrying the user information data is obtained. When the user leaks the second image, the user information data can be extracted from the leaked second image and the leaker's user information determined from it, so that the leak can be traced to its source. Moreover, because the user information data is carried in the second image in a manner invisible to the naked eye, it is difficult for the user to discover, which reduces the probability that the user removes the user information data before leaking the image.
In one possible implementation, referring to fig. 10, the extraction module 720 may include an image segmentation sub-module 721, a data transformation sub-module 722, and a data extraction sub-module 723. The image segmentation sub-module 721 is configured to segment the second image into a plurality of second image blocks; the data transformation sub-module 722 is configured to perform frequency domain transformation on at least part of the second image blocks to obtain their frequency domain data; and the data extraction sub-module 723 is configured to extract one group of user information data from the frequency domain data of each of these second image blocks, obtaining multiple groups of user information data.
In another possible implementation, the data extraction sub-module 723 is configured to extract data located at an intermediate frequency position in the frequency domain data of the second image block as user information data.
Alternatively, when the user information data is written in the spatial domain data of the first image, the extraction module 720 may be configured to extract the pixel data of the specified pixel position in the second image as the user information data.
In another possible implementation, referring to fig. 11, the determination module 730 includes a data processing sub-module 731 and a synthesis sub-module 732. The data processing sub-module 731 is configured to perform time domain transformation on the multiple groups of user information data to obtain a plurality of first image blocks; the synthesis sub-module 732 is configured to combine the plurality of first image blocks into a user identification image, which is used to display the user information.
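The combination step performed by the synthesis sub-module can be sketched as follows. This is a minimal example; the row-major tiling order and the uniform block size are assumptions, since the embodiment does not fix how the first image blocks are arranged.

```python
import numpy as np

def assemble_id_image(blocks: list, grid_rows: int, grid_cols: int) -> np.ndarray:
    """Tile recovered first image blocks (row-major order) back into
    the user identification image that displays the user information."""
    rows = [np.hstack(blocks[r * grid_cols:(r + 1) * grid_cols])
            for r in range(grid_rows)]
    return np.vstack(rows)
```

For instance, with four 2x2 blocks recovered from four groups of user information data, `assemble_id_image(blocks, 2, 2)` yields a 4x4 user identification image.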
Referring to fig. 12, a schematic structural diagram of an image processing apparatus provided in an embodiment of the present disclosure is shown. The apparatus may be a server or a terminal. Specifically:
the image processing apparatus 800 includes a central processing unit (CPU) 801, a system memory 804 including a random access memory (RAM) 802 and a read-only memory (ROM) 803, and a system bus 805 connecting the system memory 804 and the central processing unit 801. The image processing apparatus 800 also includes a basic input/output system (I/O system) 806, which facilitates the transfer of information between devices within the computer, and a mass storage device 807 for storing an operating system 813, application programs 814, and other program modules 815.
The basic input/output system 806 includes a display 808 for displaying information and an input device 809, such as a mouse or keyboard, through which a user inputs information. The display 808 and the input device 809 are both connected to the central processing unit 801 through an input/output controller 810 that is connected to the system bus 805. The input/output controller 810 may also receive and process input from a number of other devices, such as a keyboard, mouse, or electronic stylus, and similarly provides output to a display screen, a printer, or another type of output device.
The mass storage device 807 is connected to the central processing unit 801 through a mass storage controller (not shown) connected to the system bus 805. The mass storage device 807 and its associated computer-readable media provide non-volatile storage for the image processing apparatus 800. That is, the mass storage device 807 may include a computer-readable medium (not shown) such as a hard disk or a CD-ROM drive.
Without loss of generality, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that computer storage media is not limited to the foregoing. The system memory 804 and mass storage 807 described above may be collectively referred to as memory.
According to an embodiment of the present disclosure, the image processing apparatus 800 may also operate through a remote computer connected via a network such as the Internet. That is, the image processing apparatus 800 may be connected to a network 812 through a network interface unit 811 connected to the system bus 805, or the network interface unit 811 may be used to connect to other types of networks or remote computer systems (not shown).
The memory further includes one or more programs, which are stored in the memory and configured to be executed by the CPU. The one or more programs contain instructions for performing the image processing method provided by any one of fig. 2 and 3, or the image processing method provided by any one of fig. 4 and 5.
The disclosed embodiments also provide a non-transitory computer-readable storage medium having instructions that, when executed by a processor of a computing system, enable the computing system to perform the image processing method provided by any one of fig. 2 and 3, or enable the computing system to perform the image processing method provided by any one of fig. 4 and 5.
The disclosed embodiments also provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the image processing method provided by any one of fig. 2 and 3, or the image processing method provided by any one of fig. 4 and 5.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
Claims (18)
1. An image processing method, characterized in that the method comprises:
when a user requests to acquire a first image, acquiring the first image and user information of the user;
obtaining at least one group of user information data based on the user information;
and writing the at least one group of user information data into the first image to obtain a second image, wherein the at least one group of user information data is carried in the second image in a manner invisible to the naked eye.
2. The method of claim 1, wherein the deriving at least one set of user information data based on the user information comprises:
generating a user identification image based on the user information;
dividing the user identification image into a plurality of first image blocks;
and respectively carrying out frequency domain transformation processing on each first image block to obtain frequency domain data of each first image block, wherein the frequency domain data of each first image block is a group of user information data.
3. The method of claim 1, wherein writing the at least one set of user information data to the first image to obtain a second image comprises:
dividing the first image into a plurality of second image blocks;
performing frequency domain transformation on at least part of the second image blocks to obtain frequency domain data of the second image blocks, wherein each group of user information data corresponds to the frequency domain data of one second image block;
writing the at least one group of user information data into the frequency domain data of the corresponding second image block;
and obtaining the second image based on the frequency domain data of the second image block written with the user information data.
4. The method according to claim 3, wherein said writing the at least one set of user information data into the frequency domain data of the corresponding one of the second image blocks comprises:
and replacing the data at the intermediate frequency position in the frequency domain data of the corresponding second image block by adopting the user information data.
5. An image processing method, characterized in that the method comprises:
acquiring a second image, wherein the second image carries at least one group of user information data in a manner invisible to the naked eye, the at least one group of user information data is used for indicating user information, the at least one group of user information data is written into a first image when a user corresponding to the user information requests the first image, and the second image is obtained after the at least one group of user information data is written into the first image;
extracting the at least one set of user information data from the second image;
determining the user information based on the at least one set of user information data.
6. The method of claim 5, wherein said extracting said at least one set of user information data from said second image comprises:
dividing the second image into a plurality of second image blocks;
performing frequency domain transformation on at least part of the second image blocks to obtain frequency domain data of a plurality of second image blocks;
and respectively extracting a group of user information data from at least part of the frequency domain data of the second image block to obtain a plurality of groups of user information data.
7. The method according to claim 6, wherein the extracting a set of user information data from the frequency domain data of at least part of the second image block respectively comprises:
and extracting data positioned at the intermediate frequency position in the frequency domain data of the second image block as user information data.
8. The method of claim 6, wherein the determining the user information based on the at least one set of user information data comprises:
carrying out time domain transformation on the multiple groups of user information data to obtain a plurality of first image blocks;
and combining the plurality of first image blocks to obtain a user identification image, wherein the user identification image is used for displaying the user information.
9. An image processing apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire a first image and user information of a user when the user requests to acquire the first image;
a user information processing module, configured to obtain at least one group of user information data based on the user information; and
an image processing module, configured to write the at least one group of user information data into the first image to obtain a second image, wherein the at least one group of user information data is carried in the second image in a manner invisible to the naked eye.
10. The apparatus of claim 9, wherein the user information processing module comprises:
the image generation submodule is used for generating a user identification image based on the user information;
an image segmentation sub-module, configured to segment the user identification image into a plurality of first image blocks;
and the data processing sub-module is used for respectively carrying out frequency domain transformation processing on each first image block to obtain frequency domain data of each first image block, and the frequency domain data of each first image block is a group of user information data.
11. The apparatus of claim 9, wherein the image processing module comprises:
an image segmentation sub-module for segmenting the first image into a plurality of second image blocks;
the data transformation sub-module is used for performing frequency domain transformation on at least part of the second image blocks to obtain frequency domain data of the second image blocks, and each group of user information data corresponds to the frequency domain data of one second image block;
the data writing sub-module is used for writing the at least one group of user information data into the frequency domain data of the corresponding second image block;
and the data inverse transformation submodule is used for obtaining the second image based on the frequency domain data of the second image block written with the user information data.
12. The apparatus according to claim 11, wherein the data writing sub-module is configured to replace data located at an intermediate frequency position in the frequency domain data of the corresponding second image block with the user information data.
13. An image processing apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire a second image, wherein the second image carries at least one group of user information data in a manner invisible to the naked eye, the at least one group of user information data is used to indicate user information, the at least one group of user information data is written into a first image when a user corresponding to the user information requests the first image, and the second image is obtained after the at least one group of user information data is written into the first image;
an extraction module for extracting the at least one set of user information data from the second image;
a determining module for determining the user information based on the at least one set of user information data.
14. The apparatus of claim 13, wherein the extraction module comprises:
an image segmentation sub-module for segmenting the second image into a plurality of second image blocks;
the data transformation sub-module is used for carrying out frequency domain transformation on at least part of the second image blocks to obtain frequency domain data of a plurality of second image blocks;
and the data extraction sub-module is used for respectively extracting a group of user information data from at least part of the frequency domain data of the second image block to obtain a plurality of groups of user information data.
15. The apparatus according to claim 14, wherein the data extraction sub-module is configured to extract data located at an intermediate frequency position in the frequency domain data of the second image block as user information data.
16. The apparatus of claim 14, wherein the determining module comprises:
the data processing submodule is used for carrying out time domain transformation on the multiple groups of user information data to obtain a plurality of first image blocks;
and the synthesis sub-module is used for combining the plurality of first image blocks to obtain a user identification image, and the user identification image is used for displaying the user information.
17. An image processing apparatus, comprising a processor and a memory, wherein at least one instruction is stored in the memory, and wherein the instruction is loaded and executed by the processor to implement the image processing method according to any one of claims 1 to 8.
18. A computer-readable storage medium having stored therein at least one instruction, which is loaded and executed by a processor to implement the image processing method of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811584188.3A CN111353133B (en) | 2018-12-24 | 2018-12-24 | Image processing method, device and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111353133A true CN111353133A (en) | 2020-06-30 |
CN111353133B CN111353133B (en) | 2023-02-10 |
Family
ID=71193895
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811584188.3A Active CN111353133B (en) | 2018-12-24 | 2018-12-24 | Image processing method, device and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111353133B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105469353A (en) * | 2015-12-25 | 2016-04-06 | 上海携程商务有限公司 | Embedding method and device of watermark image, and extraction method and device of watermark image |
CN105869104A (en) * | 2016-04-06 | 2016-08-17 | 广州市幸福网络技术有限公司 | JPEG compression stable digital watermarking method and system based on picture content |
CN105898324A (en) * | 2015-12-07 | 2016-08-24 | 乐视云计算有限公司 | Video watermark hidden insertion method and device |
CN107341759A (en) * | 2017-07-17 | 2017-11-10 | 惠州Tcl移动通信有限公司 | A kind of method, storage medium and electronic equipment for adding blind watermatking to image |
CN108229180A (en) * | 2016-12-09 | 2018-06-29 | 阿里巴巴集团控股有限公司 | Sectional drawing data processing method, device and electronic equipment |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112750427A (en) * | 2020-07-31 | 2021-05-04 | 清华大学深圳国际研究生院 | Image processing method, device and storage medium |
CN112750427B (en) * | 2020-07-31 | 2024-02-27 | 清华大学深圳国际研究生院 | Image processing method, device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||