CN105049669A - Method for transmitting multiple images hidden in one image - Google Patents
- Publication number: CN105049669A (application CN201510337092.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- size
- neural network
- input
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/32—Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
- H04N1/32101—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
- H04N1/32144—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
- H04N1/32149—Methods relating to embedding, encoding, decoding, detection or retrieval operations
- H04N1/32267—Methods relating to embedding, encoding, decoding, detection or retrieval operations combined with processing of the image
- H04N1/32272—Encryption or ciphering
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/41—Bandwidth or redundancy reduction
- H04N1/411—Bandwidth or redundancy reduction for the transmission or storage or reproduction of two-tone pictures, e.g. black and white pictures
- H04N1/413—Systems or arrangements allowing the picture to be reproduced without loss or modification of picture-information
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Of Band Width Or Redundancy In Fax (AREA)
- Image Processing (AREA)
Abstract
The invention provides a method for transmitting multiple images hidden in one image. Based on information hiding and nonlinear transformation, the method conceals the images by constructing a nonlinear network that maps multiple inputs to a single output for transmission, hiding the information of multiple images in one image; the receiving terminal maps the single output back to multiple outputs, restoring the one image to the original multiple images. Without increasing the channel resources needed to transmit a single image, the method multiplies the system's transmission capacity by n, which is equivalent to compressing the images by a factor of n, and improves both the transmission efficiency and the transmission security of the data-transmission system.
Description
Technical field
The present invention relates to an image-communication method, and in particular to a method for embedding multiple images in one image for efficient transmission. It belongs to the field of communications (e.g., data-communication technology).
Background technology
Today's society is an information society, in which the transmission of information and its security have become increasingly urgent problems. Information hiding (image steganography) is an important branch of information security: it exploits human visual redundancy to embed secret information in a carrier, thereby achieving secure transmission of the secret information.
With the development of science and technology, efficient and secure transmission of image data is becoming ever more important. Information-hiding (steganography) techniques can embed secret information in an image without changing the image size, so that the secret information is transmitted together with the image. On the other hand, for remote-sensing, medical and forensic images, the carrier image must be recoverable as fully as possible after the secret information is extracted. Information-hiding methods are now widely used; for images there are many, such as spatial-domain hiding and transform-domain hiding. However, almost all of them hide a small amount of data (information) in a large-volume carrier (such as an image); that is, the relative hiding capacity is less than 1, and usually far less than 1. In some applications a large amount of data must be hidden in a small carrier; there, traditional hiding methods fail, either because they cannot solve the problem or because no such method exists.
Summary of the invention
Technical problem solved by the invention: to overcome the deficiencies of the prior art by providing a method for hiding multiple images in one image for transmission. Based on nonlinear transformation and information hiding, the method constructs a multi-input nonlinear neural network with a special output that produces a stego image (of the same size as the originals); the transmitted stego image is then restored to the multiple images through an inverse mapping.
Technical solution of the invention: a method for hiding multiple images in one image for transmission, with the following steps:
1) Decompose the 1st image A1 into M sub-images; arrange each sub-image, by rows or by columns, into a one-dimensional array of size L, forming M data arrays A1(m) of size L, m = 1, 2, ..., M. After gray-scale normalization each one-dimensional array becomes X1. Image A1 is of size W × H with 8-bit quantization.
2) Decompose the 2nd image A2 in the same way into M sub-images and M data arrays A2(m) of size L, m = 1, 2, ..., M; after gray-scale normalization each array becomes X2. Image A2 is of size W × H with 8-bit quantization.
3) Decompose the n-th image An in the same way into M sub-images and M data arrays An(m) of size L, m = 1, 2, ..., M; after gray-scale normalization each array becomes Xn. Image An is of size W × H with 8-bit quantization.
4) Pass the normalized data of the n images through the nonlinear neural network net1, a multilayer feed-forward network comprising an input layer, an intermediate layer and an output layer. The input layer has N1 = n*L nodes and the output layer has N2 = L nodes. The input of net1 is X1, X2, ..., Xn; its output is Xp, where p is some value between 1 and n. Xp follows from the network weights, which are formed in advance by a learning algorithm.
5) Transmit the network output Xp, with the network index p hidden in it.
6) After receiving Xp, extract the hidden network index p and pass Xp through the nonlinear neural network net2, which comprises an input layer, an intermediate layer and an output layer. Its input layer has N2 = L nodes and its output layer has N1 = n*L nodes. The output is Z = (Z1, Z2, ..., Zn), which follows from the network weights, formed in advance by a learning algorithm.
7) Apply the inverse process to the network output Z = (Z1, Z2, ..., Zn) to restore the images A1, A2, ..., An.
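The block decomposition and normalization of steps 1)-3) can be sketched as follows (a minimal NumPy illustration; the function name, the row-major block order and the division-by-255 normalization are our assumptions, not specified in the patent):

```python
import numpy as np

def decompose(image, block=8):
    """Split a W×H 8-bit image into M block×block sub-images, flatten each
    sub-image (row by row) into a one-dimensional array of length L = block^2,
    and normalize the gray levels to [0, 1]."""
    W, H = image.shape
    rows = []
    for i in range(0, W, block):
        for j in range(0, H, block):
            sub = image[i:i + block, j:j + block]
            rows.append(sub.flatten().astype(np.float64) / 255.0)
    return np.array(rows)  # shape (M, L)

# For a 512 × 512 image and 8 × 8 blocks: M = 64 * 64 = 4096, L = 64.
img = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
X1 = decompose(img)
assert X1.shape == (4096, 64)
assert X1.min() >= 0.0 and X1.max() <= 1.0
```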
The weight-learning process of the neural network in step 4) is as follows:
41) Combine n training images B1, B2, ..., Bn, of the same size as A1-An, in order into one large image B of size W × (H*n) or (W*n) × H, with 8-bit quantization.
42) Decompose the large image B into Q sub-images; arrange each sub-image, by rows or by columns, into a one-dimensional array of size L, forming Q data arrays of size L, m = 1, 2, ..., Q; normalize the gray scale of each one-dimensional array.
43) The nonlinear neural network net1 corresponding to the large image B is a multilayer feed-forward network with N1 = n*L input nodes and N1 = n*L output nodes. With the inputs X1, X2, ..., Xn corresponding to B, use X = (X1, 0, ..., 0), X = (0, X2, 0, ..., 0), X = (0, 0, X3, 0, ..., 0), ..., X = (0, 0, ..., Xn) in turn as the network outputs and learn by the BP neural-network learning method. From the n resulting sets of weights, take as net1 the set for which the recovered Xi has the largest peak signal-to-noise ratio (PSNR) with respect to the original Xi, i = 1, 2, ..., n.
The weight-learning process of the neural network in step 6) is as follows:
61) Combine n training images B1, B2, ..., Bn, of the same size as A1-An, in order into one large image B of size W × (H*n) or (W*n) × H, with 8-bit quantization.
62) Decompose the large image B into Q sub-images; arrange each sub-image, by rows or by columns, into a one-dimensional array of size L, forming Q data arrays of size L, m = 1, 2, ..., Q; normalize the gray scale of each one-dimensional array.
63) The nonlinear neural network net2 corresponding to the large image B is a multilayer feed-forward network with N1 = n*L input nodes and N1 = n*L output nodes. Use X = (X1, 0, ..., 0), X = (0, X2, 0, ..., 0), X = (0, 0, X3, 0, ..., 0), ..., X = (0, 0, ..., Xn) in turn as the network inputs, with the outputs X1, X2, ..., Xn corresponding to B as the targets, and learn by the BP neural-network learning method. From the n resulting sets of weights, take as net2 the set for which the recovered Xi has the largest PSNR with respect to the original Xi, i = 1, 2, ..., n.
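The zero-padded vectors X = (X1, 0, ..., 0), ..., X = (0, 0, ..., Xn) used in both learning procedures can be built as in this sketch (the function name and array layout are our assumptions):

```python
import numpy as np

def padded_targets(X_list, L):
    """For n normalized block vectors X1..Xn (each of length L), build the n
    zero-padded length-(n*L) vectors X = (X1, 0, ..., 0), X = (0, X2, 0, ..., 0),
    ..., X = (0, 0, ..., Xn) used as training outputs/inputs in the
    weight-learning steps."""
    n = len(X_list)
    targets = np.zeros((n, n * L))
    for i, Xi in enumerate(X_list):
        targets[i, i * L:(i + 1) * L] = Xi  # place Xi in slot i, zeros elsewhere
    return targets

# Tiny example with n = 3 block vectors of length L = 4.
X = [np.full(4, 0.5), np.full(4, 0.25), np.full(4, 0.75)]
T = padded_targets(X, 4)
assert T.shape == (3, 12)
assert T[1, 4:8].tolist() == [0.25] * 4 and T[1, :4].tolist() == [0.0] * 4
```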
Compared with the prior art, the invention has the following beneficial effects:
(1) Current information-hiding methods mostly hide no more than 100% of the carrier's capacity; this method increases the system's transmission capacity n-fold without increasing channel resources, which is equivalent to compressing the images by a factor of n.
(2) The proposed method performs the hiding with a nonlinear neural network and directly produces a stego image that appears to be one of the original images, so the carrier has good imperceptibility.
(3) In the method of the invention, the recovery quality of the carrier image (e.g., its PSNR) is controlled in advance.
(4) The proposed method is robust: because the main information is contained in the weights of the neural network and only part of the data is transmitted, it has a degree of robustness.
(5) The proposed hiding method provides confidentiality and resistance to interception while hiding multiple images: the network weights are not transmitted and cannot be obtained by others, so the original images cannot be recovered from the transmitted data alone.
(6) The technical solution is unique in that it is implemented directly with a nonlinear network (neural network), which has a parallel distributed-processing structure; this facilitates high-speed ASIC implementation and greatly improves practicality.
(7) The proposed hiding method requires no complicated preprocessing and no compression: the hiding of multiple images is achieved in the transformation itself, whereas traditional methods usually hide before or after a transform.
(8) The technical solution is based on machine learning, with a distinctive learning process: a neural network with equal numbers of input and output nodes is optimized with specially chosen targets, so it is applicable to many image types and the recovery quality is guaranteed.
High-speed data-transmission technology is widely used in spacecraft such as remote-sensing satellites and space probes and in all kinds of satellite data-transmission systems, and will find even broader application. A satellite often carries multiple optical sensors (multiple CCD cameras or CCD chips), so scenes with several channels of image data to transmit arise frequently.
Multiplexing techniques are already applied in spacecraft and satellite data-transmission systems and will find broader application. The invention can hide several (n) images in one image for transmission, which is equivalent to compressing the images n-fold without using traditional data-compression techniques. It offers high transmission efficiency, high and controllable recovery quality, low complexity and low resource usage, and thus has practical value in spacecraft engineering and in image-transmission systems.
Brief description of the drawings
Fig. 1 is a schematic diagram of the principle of the invention.
Embodiment
The specific embodiments of the invention are described in further detail below with reference to the accompanying drawing.
I. Basic techniques
To aid understanding of the invention, the basic techniques it involves are explained first.
1. Information-hiding techniques
Information hiding is usually viewed as a communication process: the input is the secret data to be transmitted, what travels in the channel is public carrier data, and what is received is carrier data with the secret information concealed in it.
Information-hiding embedding algorithms fall into two broad classes: spatial-domain embedding and transform-domain embedding. The amount of data hidden is called the absolute hiding capacity; the ratio of the hidden data to the carrier data is called the relative hiding capacity or embedding rate. In general the relative capacity is far less than 1, whereas the relative embedding rate of the present invention exceeds 1.
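As a worked example of the embedding-rate arithmetic (the 512 × 512, 8-bit figures come from the embodiment later in this document):

```python
# Relative hiding capacity (embedding rate) = hidden data / carrier data.
carrier_bits = 512 * 512 * 8        # one 8-bit 512 × 512 carrier image
hidden_bits = 3 * 512 * 512 * 8     # three additional same-size hidden images
embedding_rate = hidden_bits / carrier_bits
assert embedding_rate == 3.0        # 300%, versus far less than 1 classically
```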
2. BP neural-network techniques
A BP neural network is a multilayer feed-forward network based on the error back-propagation algorithm, currently the most widely used neural-network learning algorithm. Trained on a set of input-output samples, it can realize an arbitrary nonlinear mapping from input to output; its technical essence is steepest gradient descent used to approximate the mapping relation. It generally comprises an input layer, hidden layers and an output layer; every layer has one or more neurons, each connected to the neurons of the adjacent layers, and each connection carries a weight. The weights and each neuron's threshold are adjusted iteratively from the input-output sample set according to the learning algorithm; once certain conditions are met, training stops and the network realizes the functional relation from input to output.
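A minimal single-hidden-layer BP network of this kind can be sketched as follows (an illustrative toy trained by steepest gradient descent, not the patent's net1/net2; the layer sizes, learning rate and toy mapping are our choices):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

X = rng.random((32, 4))   # toy input-output sample set on [0, 1)
Y = X[:, ::-1].copy()     # target mapping: reverse each input vector

W1 = rng.normal(0, 0.5, (4, 16)); b1 = np.zeros(16)   # input -> hidden
W2 = rng.normal(0, 0.5, (16, 4)); b2 = np.zeros(4)    # hidden -> output
lr = 0.5
for _ in range(10000):
    H = sigmoid(X @ W1 + b1)            # forward pass
    O = sigmoid(H @ W2 + b2)
    dO = (O - Y) * O * (1 - O)          # output-layer error signal
    dH = (dO @ W2.T) * H * (1 - H)      # error back-propagated to hidden layer
    W2 -= lr * H.T @ dO / len(X); b2 -= lr * dO.mean(axis=0)
    W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(axis=0)

mse = np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - Y) ** 2)
assert mse < 0.05   # gradient descent has approximated the mapping
```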
II. Embodiment
To verify the performance of the proposed algorithm, the simulation performs hidden transmission and recovery with four 8-bit gray-scale images A1, A2, A3, A4 of size 512 × 512; the relative hiding capacity is 300% (three images hidden in one).
The network net1 used in the simulation has 64*4 input nodes and 64 output nodes, with training error E = 0.001, corresponding to a required PSNR of 40 dB. The number of intermediate nodes is N ≥ 16 and may be taken as 16-64; net1 is obtained after BP learning.
The network net2 used in the simulation has 64 input nodes and 64*4 output nodes, with training error E = 0.001, corresponding to a required PSNR of 40 dB. The number of intermediate nodes is N ≥ 16 and may be taken as 16-64; net2 is obtained after BP learning.
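These node counts follow directly from the block size and the number of images; a quick sanity check of the arithmetic:

```python
# n = 4 images, 8 × 8 blocks: L = 64 values per block vector.
n, L = 4, 8 * 8
N1, N2 = n * L, L
assert (N1, N2) == (256, 64)   # net1: 256 inputs -> 64 outputs
# net2 reverses the mapping (64 inputs -> 256 outputs); its hidden layer
# uses N >= 16 nodes, taken as 16-64 in the simulation.
```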
The weights of the net1 network are held at the transmitting terminal, and the weights of net2 at the receiving terminal.
The weights of net1 and net2 are obtained from training-sample images through the neural-network learning algorithm; once learning ends, the weights converge to fixed values.
It is a kind of that multiple image is hidden in the method concrete steps transmitted in piece image is as follows:
1) to the 1st width image A1, decompose, be divided into M subgraph 8*8, subgraph becomes the one-dimension array that size is 64 by row or by row arrangement, and forming M size is the data A1 (m) of 64, m=1, and 2 ... M.X1=(X11, X12 is become after the process of each one-dimension array gray scale normalization ... X1L).This image A1 is of a size of 512 × 512,8bit and quantizes;
2) decompose the 2nd width image A2, be divided into M subgraph, subgraph becomes the one-dimension array that size is L by row or by row arrangement, and forming M size is the data A2 (m) of 64, m=1, and 2 ... M.X2=(X21, X22 is become after the process of each one-dimension array gray scale normalization ... X2L).This image A2 is of a size of 512 × 512,8bit and quantizes;
3) decompose the n-th width image An, be divided into M subgraph, subgraph becomes the one-dimension array that size is L by row or by row arrangement, and forming M size is the data An (m) of L, m=1, and 2 ... M; Xn is become after the process of each one-dimension array gray scale normalization; This image An is of a size of W × H, and 512 × 512,8bit quantizes;
4) n width image is respectively through nonlinear neural network net1, and type is multilayer feedforward neural network, and input number of nodes is N1, and output node number is N2.Input number of nodes N1=n*L, output node number is N2=L, nonlinear neural network net1 is input as X1, X2 ... Xn, n*L node, nonlinear neural network net1 exports as Xp, p is certain value between 1 to n, and export Xp and preferably draw according to network weight, wherein weights are pre-formed based on learning algorithm; Wherein N1=64*n, N2=64
5) export Xp to network to transmit, network sequence number p is hidden in wherein;
6) receive after Xp, extract the network sequence number p be hidden in wherein, through nonlinear neural network net2, export into Z=(X1, X2 ... Xn); N=64
This network type is multilayer feedforward neural network, and input number of nodes is N2, and output node number is N1.Input number of nodes N2=L, output node number is N2=n*L, export into Z=(Z1, Z2 ... Zn), wherein weights are pre-formed based on learning algorithm;
Nonlinear neural network net2 is input as Xp, and p is certain value between 1 to n, export into Z=(Z1, Z2 ... Zn), export Z and preferably draw according to network weight, wherein weights are pre-formed based on learning algorithm;
7) to network export Z=(X1, X2.。。Xn) inverse process is carried out, the image A1 be restored, A2 ... .An.
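The inverse process of step 7) undoes the decomposition: each recovered block vector is denormalized back to 8-bit gray levels and the blocks are tiled into an image. A sketch (the function name and the row-major block order are our assumptions):

```python
import numpy as np

def reassemble(Z, W, H, block=8):
    """Map each recovered length-L block vector back to its block×block
    sub-image, denormalize [0, 1] values to 8-bit gray levels, and tile the
    M blocks (row-major order assumed) into a W × H image."""
    per_row = H // block
    img = np.zeros((W, H), dtype=np.uint8)
    for m, z in enumerate(Z):
        i, j = divmod(m, per_row)
        sub = np.clip(np.round(np.asarray(z) * 255.0), 0, 255).astype(np.uint8)
        img[i*block:(i+1)*block, j*block:(j+1)*block] = sub.reshape(block, block)
    return img

# Round trip on a small test image: decompose by hand, then reassemble.
src = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
Z = [src[i:i+8, j:j+8].flatten() / 255.0
     for i in range(0, 16, 8) for j in range(0, 16, 8)]
out = reassemble(Z, W=16, H=16)
assert np.array_equal(out, src)
```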
The weight-learning process of the neural network in step 4) is as follows:
1) Combine n training images B1, B2, ..., Bn (n = 4), of the same size as A1-An, in order into one large image B of size W × (H*n) or (W*n) × H, with 8-bit quantization.
Decompose the large image B into Q sub-images; arrange each sub-image, by rows or by columns, into a one-dimensional array of size L, forming Q data arrays of size L, m = 1, 2, ..., Q. After gray-scale normalization each array becomes X.
The nonlinear neural network net1 corresponding to the large image B is a multilayer feed-forward network with N1 = n*L input nodes and N1 = n*L output nodes. With the inputs X1, X2, ..., Xn corresponding to B, use X = (X1, 0, ..., 0), X = (0, X2, 0, ..., 0), X = (0, 0, X3, 0, ..., 0), ..., X = (0, 0, ..., Xn) in turn as the training targets, and learn by the BP neural-network learning method. From the n resulting sets of weights, take as net1 the set for which the recovered Xi has the largest PSNR with respect to the original Xi, i = 1, 2, ..., n.
The weight-learning process of the neural network in step 6) is as follows:
Combine n training images B1, B2, ..., Bn (n = 4), of the same size as A1-An, in order into one large image B of size W × (H*n) or (W*n) × H, with 8-bit quantization.
Decompose the large image B into Q sub-images; arrange each sub-image, by rows or by columns, into a one-dimensional array of size L, forming Q data arrays of size L, m = 1, 2, ..., Q. After gray-scale normalization each array becomes X.
The nonlinear neural network net2 corresponding to the large image B is a multilayer feed-forward network with N1 = n*L input nodes and N1 = n*L output nodes. Use X = (X1, 0, ..., 0), X = (0, X2, 0, ..., 0), X = (0, 0, X3, 0, ..., 0), ..., X = (0, 0, ..., Xn) in turn as the network inputs, with the outputs X1, X2, ..., Xn corresponding to B as the targets, and learn by the BP neural-network learning method. From the n resulting sets of weights, take as net2 the set for which the recovered Xi has the largest PSNR with respect to the original Xi, i = 1, 2, ..., n.
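The PSNR-based selection used at the end of both learning procedures can be sketched as follows (the candidate outputs here are hypothetical; only the selection rule comes from the document):

```python
import numpy as np

def psnr(x, ref, peak=1.0):
    """Peak signal-to-noise ratio between a recovered vector and the
    original, on normalized [0, 1] data."""
    mse = np.mean((np.asarray(x) - np.asarray(ref)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Of the n candidate weight sets, keep the one whose recovered Xi has the
# highest PSNR against the original Xi -- here with made-up recoveries:
original = np.array([0.2, 0.4, 0.6, 0.8])
candidates = {1: original + 0.05, 2: original + 0.01, 3: original + 0.10}
best = max(candidates, key=lambda k: psnr(candidates[k], original))
assert best == 2   # the candidate with the smallest error wins
```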
The invention proposes a new method that hides any number of images in one of them for transmission, with recovery at the receiving terminal as required. It has a massively parallel processing structure, is robust, confidential and resistant to interception, and is suitable for all kinds of image-transmission systems (for example, satellite data-transmission systems).
Content not described in detail in this specification belongs to techniques known to those skilled in the art.
Claims (3)
1. A method for hiding multiple images in one image for transmission, characterized in that the steps are as follows:
1) Decompose the 1st image A1 into M sub-images; arrange each sub-image, by rows or by columns, into a one-dimensional array of size L, forming M data arrays A1(m) of size L, m = 1, 2, ..., M. After gray-scale normalization each one-dimensional array becomes X1. Image A1 is of size W × H with 8-bit quantization.
2) Decompose the 2nd image A2 in the same way into M sub-images and M data arrays A2(m) of size L, m = 1, 2, ..., M; after gray-scale normalization each array becomes X2. Image A2 is of size W × H with 8-bit quantization.
3) Decompose the n-th image An in the same way into M sub-images and M data arrays An(m) of size L, m = 1, 2, ..., M; after gray-scale normalization each array becomes Xn. Image An is of size W × H with 8-bit quantization.
4) Pass the normalized data of the n images through the nonlinear neural network net1, a multilayer feed-forward network comprising an input layer, an intermediate layer and an output layer. The input layer has N1 = n*L nodes and the output layer has N2 = L nodes. The input of net1 is X1, X2, ..., Xn; its output is Xp, where p is some value between 1 and n. Xp follows from the network weights, which are formed in advance by a learning algorithm.
5) Transmit the network output Xp, with the network index p hidden in it.
6) After receiving Xp, extract the hidden network index p and pass Xp through the nonlinear neural network net2, which comprises an input layer, an intermediate layer and an output layer. Its input layer has N2 = L nodes and its output layer has N1 = n*L nodes. The output is Z = (Z1, Z2, ..., Zn), which follows from the network weights, formed in advance by a learning algorithm.
7) Apply the inverse process to the network output Z = (Z1, Z2, ..., Zn) to restore the images A1, A2, ..., An.
2. The method for hiding multiple images in one image for transmission according to claim 1, characterized in that the weight-learning process of the neural network in step 4) is as follows:
41) Combine n training images B1, B2, ..., Bn, of the same size as A1-An, in order into one large image B of size W × (H*n) or (W*n) × H, with 8-bit quantization.
42) Decompose the large image B into Q sub-images; arrange each sub-image, by rows or by columns, into a one-dimensional array of size L, forming Q data arrays of size L, m = 1, 2, ..., Q; normalize the gray scale of each one-dimensional array.
43) The nonlinear neural network net1 corresponding to the large image B is a multilayer feed-forward network with N1 = n*L input nodes and N1 = n*L output nodes. With the inputs X1, X2, ..., Xn corresponding to B, use X = (X1, 0, ..., 0), X = (0, X2, 0, ..., 0), X = (0, 0, X3, 0, ..., 0), ..., X = (0, 0, ..., Xn) in turn as the network outputs and learn by the BP neural-network learning method. From the n resulting sets of weights, take as net1 the set for which the recovered Xi has the largest PSNR with respect to the original Xi, i = 1, 2, ..., n.
3. The method for hiding multiple images in one image for transmission according to claim 1, characterized in that the weight-learning process of the neural network in step 6) is as follows:
61) Combine n training images B1, B2, ..., Bn, of the same size as A1-An, in order into one large image B of size W × (H*n) or (W*n) × H, with 8-bit quantization.
62) Decompose the large image B into Q sub-images; arrange each sub-image, by rows or by columns, into a one-dimensional array of size L, forming Q data arrays of size L, m = 1, 2, ..., Q; normalize the gray scale of each one-dimensional array.
63) The nonlinear neural network net2 corresponding to the large image B is a multilayer feed-forward network with N1 = n*L input nodes and N1 = n*L output nodes. Use X = (X1, 0, ..., 0), X = (0, X2, 0, ..., 0), X = (0, 0, X3, 0, ..., 0), ..., X = (0, 0, ..., Xn) in turn as the network inputs, with the outputs X1, X2, ..., Xn corresponding to B as the targets, and learn by the BP neural-network learning method. From the n resulting sets of weights, take as net2 the set for which the recovered Xi has the largest PSNR with respect to the original Xi, i = 1, 2, ..., n.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title | Status |
|---|---|---|---|---|
| CN201510337092.7A | 2015-06-17 | 2015-06-17 | Method for hiding multiple images in one image for transmission | Active (granted) |

Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN105049669A | 2015-11-11 |
| CN105049669B | 2017-12-22 |

Family ID: 54455847
Cited By (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107547773A (*) | 2017-07-26 | 2018-01-05 | 新华三技术有限公司 | Image processing method, device and equipment |
| CN108197488A (*) | 2017-12-25 | 2018-06-22 | 大国创新智能科技(东莞)有限公司 | Information hiding and extraction method and system based on big data and neural networks |
| CN109729286A (*) | 2019-01-28 | 2019-05-07 | 北京晶品特装科技有限责任公司 | Method for superimposing moving graphics on video |
| CN110312138A (*) | 2019-01-04 | 2019-10-08 | 北京大学 | High-embedding-capacity video steganography method and system based on time-series residual convolution modeling |
Citations (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20030081809A1 (*) | 2001-10-15 | 2003-05-01 | Jessica Fridrich | Lossless embedding of data in digital objects |
| US20030149879A1 (*) | 2001-12-13 | 2003-08-07 | Jun Tian | Reversible watermarking |
| CN102695059A (*) | 2012-05-31 | 2012-09-26 | 西安空间无线电技术研究所 | Method for hiding, compressing and transmitting images |
| CN104144277A (*) | 2014-07-23 | 2014-11-12 | 西安空间无线电技术研究所 | Multi-path image lossless hidden transmission method |
Non-Patent Citations (2)

- Zhou Jijun, Yang Yixian: "Design of a blind detection system for steganographic images based on neural networks" (基于神经网络的隐写图像盲检测系统设计), 《微电子学与计算机》 (Microelectronics & Computer)
- Li Hong'an et al.: "A multi-image information-hiding scheme based on secret sharing" (基于分存的多幅图像信息隐藏方案), 《计算机应用研究》 (Application Research of Computers)
Legal Events

| Code | Title |
|---|---|
| C06 | Publication |
| PB01 | Publication |
| C10 | Entry into substantive examination |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |