
Search Results (5)

Search Parameters:
Keywords = quantum movie

15 pages, 2048 KiB  
Article
Non-Parametric Semi-Supervised Learning in Many-Body Hilbert Space with Rescaled Logarithmic Fidelity
by Wei-Ming Li and Shi-Ju Ran
Mathematics 2022, 10(6), 940; https://doi.org/10.3390/math10060940 - 15 Mar 2022
Cited by 1 | Viewed by 2080
Abstract
In quantum and quantum-inspired machine learning, a key step is to embed the data in the quantum space known as Hilbert space. Studying quantum kernel functions, which define the distances among the samples in the Hilbert space, is one of the fundamental topics in this direction. In this work, we propose a tunable quantum-inspired kernel function (QIKF) named rescaled logarithmic fidelity (RLF) and a non-parametric algorithm for semi-supervised learning in the quantum space. The rescaling takes advantage of the non-linearity of the kernel to tune the mutual distances of samples in the Hilbert space, and meanwhile avoids the exponentially small fidelities between quantum many-qubit states. Being non-parametric excludes possible effects from variational parameters and directly demonstrates the properties of the kernel itself. Our results on hand-written digits (MNIST dataset) and movie reviews (IMDb dataset) support the validity of our method, compared with the standard fidelity as the QIKF as well as several well-known non-parametric algorithms (naive Bayes classifiers, k-nearest neighbors, and spectral clustering). High accuracy is demonstrated, particularly for the unsupervised case with no labeled samples and the few-shot cases with small numbers of labeled samples. With visualizations by t-stochastic neighbor embedding, our results imply that machine learning in the Hilbert space complies with the principles of maximal coding rate reduction, where the low-dimensional data exhibit within-class compressibility, between-class discrimination, and overall diversity. The proposed QIKF and semi-supervised algorithm can be further combined with parametric models such as tensor networks, quantum circuits, and quantum neural networks.
(This article belongs to the Section Mathematical Physics)
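The kernel idea in this abstract can be sketched classically. The snippet below is an illustrative reconstruction, not the paper's code: the single-qubit encoding, the exact RLF form, and the value of β are assumptions. It encodes each feature into a qubit state, sums log-overlaps instead of multiplying exponentially small fidelities, and assigns a sample to the class with the largest mean kernel value.

```python
import numpy as np

def encode(x):
    """Map features in [0, 1] to single-qubit amplitudes (cos, sin)."""
    x = np.asarray(x, dtype=float)
    return np.stack([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)], axis=-1)

def rescaled_log_fidelity(a, b, beta=1.2):
    """Illustrative RLF: re-exponentiate the summed log-overlaps with base beta.

    The product of per-qubit overlaps (the fidelity) is exponentially small
    for many qubits; summing logarithms and rescaling with a tunable base
    keeps kernel values in a usable range. (Assumed form, for sketch only.)
    """
    overlaps = np.abs(np.sum(encode(a) * encode(b), axis=-1))
    overlaps = np.clip(overlaps, 1e-12, 1.0)  # guard against log(0)
    return beta ** np.sum(np.log(overlaps))

def classify(x, labeled, beta=1.2):
    """Non-parametric decision: the class with the largest mean kernel wins."""
    scores = {c: np.mean([rescaled_log_fidelity(x, s, beta) for s in samples])
              for c, samples in labeled.items()}
    return max(scores, key=scores.get)

labeled = {0: [np.full(8, 0.10), np.full(8, 0.15)],
           1: [np.full(8, 0.85), np.full(8, 0.90)]}
print(classify(np.full(8, 0.2), labeled))  # prints 0
```

Larger β sharpens the contrast between near and far samples, which is the tunability the abstract attributes to the rescaling.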
Figure 1
<p>The <math display="inline"><semantics> <mi>β</mi> </semantics></math> dependence of the testing accuracy <math display="inline"><semantics> <mi>γ</mi> </semantics></math> of the testing samples for the ten classes in the MNIST dataset, where <math display="inline"><semantics> <msub> <mi>ε</mi> <mi>t</mi> </msub> </semantics></math> is evaluated by randomly taking N=10 training samples from each class. Note that the RLF becomes the standard fidelity for <math display="inline"><semantics> <mrow> <mi>β</mi> <mo>=</mo> <mn>10</mn> </mrow> </semantics></math>. We take the average of <math display="inline"><semantics> <msub> <mi>ε</mi> <mi>t</mi> </msub> </semantics></math> by repeating the simulations 20 times, and the variances are indicated by the shaded area. By t-SNE, the insets show the visualized distributions of 2000 effective vectors <math display="inline"><semantics> <mrow> <mo>{</mo> <mover accent="true"> <mi mathvariant="bold">y</mi> <mo stretchy="false">˜</mo> </mover> <mo>}</mo> </mrow> </semantics></math> (Equation (<a href="#FD5-mathematics-10-00940" class="html-disp-formula">5</a>)) that are randomly taken from the testing samples.</p>
Figure 2">
Figure 2
<p>Testing accuracy <math display="inline"><semantics> <mi>γ</mi> </semantics></math> of non-parametric supervised learning using rescaled logarithmic fidelity (RLF-NSL) on the MNIST dataset with different numbers of labeled samples <span class="html-italic">N</span> in each class.</p>
Figure 3">
Figure 3
<p>Testing accuracy <math display="inline"><semantics> <mi>γ</mi> </semantics></math> of non-parametric semi-supervised and supervised learning using rescaled logarithmic fidelity (RLF-NSSL and RLF-NSL, respectively) on the MNIST dataset. Our results are compared with <span class="html-italic">k</span>-nearest neighbors with <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> and 10, naive Bayesian classifiers, and a baseline model obtained by simply replacing the RLF with the Euclidean distance. For more results of the KNN with different values of <span class="html-italic">k</span> and those of the p-norm distance with different values of <span class="html-italic">p</span>, please refer to the <a href="#app1-mathematics-10-00940" class="html-app">Appendix A</a>.</p>
Figure 4">
Figure 4
<p>The testing accuracy of the RLF-NSL on the IMDb dataset comparing different kernels (Euclidean, Gaussian, and RLF) and classification strategies (KNN and NSL). The <span class="html-italic">x</span>-axis shows the number of labeled samples in each class. For more results of the KNN with different values of <span class="html-italic">k</span> and those of the Gaussian kernel with different values of <math display="inline"><semantics> <mi>σ</mi> </semantics></math>, please refer to the <a href="#app1-mathematics-10-00940" class="html-app">Appendix A</a>.</p>
Figure 5">
Figure 5
<p>For the RLF-NSSL on the MNIST dataset with <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>6</mn> </mrow> </semantics></math> labeled samples in each class (few-shot case), (<b>a</b>,<b>b</b>) show the confidence <math display="inline"><semantics> <mi>η</mi> </semantics></math> and classification accuracy <math display="inline"><semantics> <msub> <mi>γ</mi> <mi>c</mi> </msub> </semantics></math> of the samples in the clusters, respectively, in different epochs. (<b>c</b>) shows the classification accuracy <math display="inline"><semantics> <mi>γ</mi> </semantics></math> for the testing set. The insets of (<b>c</b>) illustrate the visualizations by applying t-SNE to the testing samples in the low-dimensional space (Equation (<a href="#FD5-mathematics-10-00940" class="html-disp-formula">5</a>)). See the details in the main text.</p>
Figure A1">
Figure A1
<p>The classification accuracy <math display="inline"><semantics> <mi>γ</mi> </semantics></math> on the MNIST dataset for (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>6</mn> </mrow> </semantics></math> and (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>600</mn> </mrow> </semantics></math>. The solid lines with symbols show the results using the p-norm as the kernel in the NSL algorithm. The horizontal dashed lines show the accuracies of the RLF-NSL with <math display="inline"><semantics> <mrow> <mi>β</mi> <mo>=</mo> <mn>1.08</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mn>1.35</mn> </mrow> </semantics></math>, respectively. The shaded areas indicate the standard deviation.</p>
Figure A2">
Figure A2
<p>The classification accuracies <math display="inline"><semantics> <mi>γ</mi> </semantics></math> on the IMDb dataset for (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math> and (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>1200</mn> </mrow> </semantics></math>, obtained by the NSL with the p-norm for different values of <span class="html-italic">p</span> (solid lines with symbols) and with the RLF as the kernel (horizontal dashed lines). The shaded areas show the standard deviation.</p>
">
21 pages, 4085 KiB  
Review
The Development of Ultrafast Electron Microscopy
by Sergei A. Aseyev, Evgeny A. Ryabov, Boris N. Mironov and Anatoly A. Ischenko
Crystals 2020, 10(6), 452; https://doi.org/10.3390/cryst10060452 - 31 May 2020
Cited by 21 | Viewed by 6153
Abstract
Time-resolved electron microscopy is based on the excitation of a sample by pulsed laser radiation and its probing by synchronized photoelectron bunches in the electron microscope column. With femtosecond lasers, if probing pulses with a small number of electrons—in the limit, single-electron wave packets—are used, the stroboscopic regime enables ultrahigh spatiotemporal resolution to be obtained, which is not restricted by the Coulomb repulsion of electrons. This review article presents the current state of the ultrafast electron microscopy (UEM) method for detecting the structural dynamics of matter in the time range from picoseconds to attoseconds. In the imaging mode, the spatial resolution lies, at best, in the subnanometer range, which limits the range of observation of structural changes in the sample. Ultrafast electron diffraction (UED), which created the methodological basis for the development of UEM, has opened the possibility of creating molecular movies that show the behavior of the investigated quantum system in the space-time continuum with details of sub-Å spatial resolution. Therefore, this review on the development of UEM begins with a description of the main achievements of UED, which formed the basis for the creation and further development of the UEM method. A number of recent experiments are presented to illustrate the potential of the UEM method.
(This article belongs to the Section Crystal Engineering)
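The stroboscopic regime described above can be illustrated with a toy numerical model: each pump-probe cycle contributes at most one detected electron, so the signal at every delay is accumulated statistically over many cycles and intra-bunch Coulomb repulsion never enters. The exponential transient, detection probabilities, and shot counts below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def stroboscopic_scan(delays_ps, shots_per_delay=20000, tau_ps=5.0):
    """Toy stroboscopic pump-probe scan with single-electron probe pulses.

    The assumed sample response is an exponential relaxation of the
    per-shot detection probability (hypothetical model, not from the paper).
    Each delay point is built up from `shots_per_delay` one-electron events.
    """
    p_detect = 0.1 * (1.0 + np.exp(-np.asarray(delays_ps) / tau_ps))
    counts = rng.binomial(shots_per_delay, p_detect)  # one Bernoulli trial per shot
    return counts / shots_per_delay

delays = np.linspace(0.0, 25.0, 11)  # pump-probe delays in picoseconds
signal = stroboscopic_scan(delays)   # relaxes from ~0.2 toward ~0.1
```

Averaging many single-electron shots recovers the transient with shot-noise-limited precision, which is the statistical trade-off behind the single-electron stroboscopic mode.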
Figure 1
<p>(Color online) Schematic of an ultrafast transmission electron microscope [<a href="#B38-crystals-10-00452" class="html-bibr">38</a>]. A commercially available transmission electron microscope that was originally designed for continuous operation with a thermionic cathode can be taken as a basis for creating such a setup. In order to adapt the industrial apparatus to UED requirements, it is necessary to ensure pulsed laser excitation of the sample (marked with symbol II), as well as the delivery of the laser radiation to the photocathode (marked with symbol I) to generate an ultrashort pulsed photoelectron beam. (Adapted from work in [<a href="#B38-crystals-10-00452" class="html-bibr">38</a>] with minor changes. Copyright (2006) National Academy of Sciences.)</p>
Figure 2">
Figure 2
<p>(<b>A</b>) Images obtained by the ultrafast electron microscopy (UEM) method before the phase transition in VO<sub>2</sub> films (left) and after the phase transition (right). The magnification is 42,000× (scale 100 nm). It should be noted that these images would not be observed if the femtosecond pulses for the generation of photoelectrons were blocked. (<b>B</b>) Diffraction patterns obtained by the UEM method before the phase transition in VO<sub>2</sub> (right) and after it (left). Diffraction patterns of the two phases (monoclinic phase <b>M</b> and high-temperature tetragonal rutile phase <b>R</b>) observed experimentally (on the left in panel B) and constructed as a result of calculations (on the right in panel B). The analysis was described in [<a href="#B38-crystals-10-00452" class="html-bibr">38</a>]. (Adapted from work in [<a href="#B38-crystals-10-00452" class="html-bibr">38</a>] with minor changes. Copyright (2006) National Academy of Sciences.)</p>
Figure 3">
Figure 3
<p>Schematic diagram of electron microscopy of electromagnetic waves [<a href="#B43-crystals-10-00452" class="html-bibr">43</a>]. Some part of the femtosecond laser radiation irradiated the photocathode to generate a pulsed electron beam, while the rest formed single-cycle terahertz pulses. The terahertz radiation, in turn, compressed the electron pulses and also induced electromagnetic resonance in the sample. As a result, the pulsed electron beam of femtosecond duration collimated by magnetic lenses passed through a metamaterial resonator excited by single-cycle electromagnetic bursts. Because the probe pulses were shorter than the half-period of the electromagnetic field, the Lorentz forces “frozen” in time distorted the image of the excited sample, thereby providing a unique opportunity to visualize the electromagnetic field inside the substance with detection of its phase, amplitude, and polarization. Figure adapted from work in [<a href="#B43-crystals-10-00452" class="html-bibr">43</a>] with permission from Science AAAS.</p>
Figure 4">
Figure 4
<p>Subparticle ultrafast spectrum imaging in UEM. The change in the electron kinetic energy (electron spectrum) was measured as a function of the delay between an exciting femtosecond laser and probing electron pulses for each position of the probe. Energy gain was measured in units of laser radiation quantum, <span class="html-italic">h</span>ν = 2.4 eV [<a href="#B45-crystals-10-00452" class="html-bibr">45</a>]. Figure was adapted from work in [<a href="#B45-crystals-10-00452" class="html-bibr">45</a>] with permission from Science AAAS.</p>
Figure 5">
Figure 5
<p>Spectrally resolved PINEM [<a href="#B52-crystals-10-00452" class="html-bibr">52</a>]. (<b>a</b>) Experimental scheme. (<b>b</b>) Electron energy-loss spectroscopy (EELS) spectrum revealing multiple plasmon resonances in an individual nanowire excited by passing electrons and mapped by raster scanning the electron beam (plasmon map insets). (<b>c</b>) Sketch of an electron energy loss (EEL) spectrum of the sample under ultrafast energy-filtered microscopy conditions. The visibility of electron-induced plasmon resonances is severely reduced. (<b>d</b>) Sketch of an EEL spectrum of the specimen (e.g., long silver nanowire (130 nm diameter, 7.8 μm length) deposited on a Si<sub>3</sub>N<sub>4</sub> membrane) upon photoexcitation by light pulses (e.g., the pump with 100 fs, 1.08 eV, and 5 mJ/cm<sup>2</sup> laser pulses from an optical parametric amplifier) of energy <span class="html-italic">h</span>ν<span class="html-italic"><sub>4</sub></span>, tuned to the <span class="html-italic">n</span> = 4 plasmon-resonance frequency. This specific mode exchanges several times its characteristic energy with electrons. (<b>e</b>) Concept of the experiment in [<a href="#B52-crystals-10-00452" class="html-bibr">52</a>]: the laser excitation wavelength is scanned and the plasmon resonance profile is retrieved via quantitative analysis of the energy-filtered images in UEM. Reprinted (adapted) with permission from work in [<a href="#B52-crystals-10-00452" class="html-bibr">52</a>]. Copyright (2018) American Chemical Society.</p>
Figure 6">
Figure 6
<p>Illustration explaining the conceptual scheme of attosecond electron microscopy. Here, an optical attosecond pulse serves as an instantaneous gate to form an isolated ultrashort electron burst, which should be filtered after gating. As a result, the prepared attosecond free electron pulses can be used to detect ultrafast laser-induced electron dynamics in real-time (adapted from work in [<a href="#B54-crystals-10-00452" class="html-bibr">54</a>] with minor changes).</p>
Figure 7">
Figure 7
<p>Ultrafast mapping of electron–light interference: In the experiment in [<a href="#B60-crystals-10-00452" class="html-bibr">60</a>], light irradiates a metal film that contains an aperture, to produce surface plasmon polaritons (SPPs). A different light–field pattern, which is illustrated by stripes, is produced on the other side of the film. When an electron beam passes through the specimen, it subsequently interacts with the fields on both sides, producing a spiral interference pattern. This pattern encodes the relative phases of the light fields at each position on the film, and therefore contains holographic information (adapted from [<a href="#B61-crystals-10-00452" class="html-bibr">61</a>] with minor changes).</p>
">
3548 KiB  
Article
Metric for Estimating Congruity between Quantum Images
by Abdullah M. Iliyasu, Fei Yan and Kaoru Hirota
Entropy 2016, 18(10), 360; https://doi.org/10.3390/e18100360 - 9 Oct 2016
Cited by 23 | Viewed by 5994
Abstract
An enhanced quantum-based image fidelity metric, the QIFM metric, is proposed as a tool to assess the “congruity” between two or more quantum images. The often confounding contrariety between classical and quantum information processing makes the widely accepted peak-signal-to-noise ratio (PSNR) ill-suited for use in the quantum computing framework, whereas the prohibitive cost of the probability-based similarity score makes it imprudent for use as an effective image quality metric. Unlike the aforementioned image quality measures, the proposed QIFM metric is calibrated as a pixel-difference-based image quality measure that is sensitive to the intricacies inherent to quantum image processing (QIP). As proposed, the QIFM is configured with in-built non-destructive measurement units that preserve the coherence necessary for quantum computation. This design moderates the cost of executing the QIFM in order to estimate congruity between two or more quantum images. A statistical analysis also shows that our proposed QIFM metric has a better correlation with the digital expectation of likeness between images than other available quantum image quality measures. Therefore, the QIFM offers a competent substitute for the PSNR as an image quality measure in the quantum computing framework, thereby providing a tool to effectively assess fidelity between images in quantum watermarking, quantum movie aggregation and other applications in QIP.
(This article belongs to the Collection Quantum Information)
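For contrast with the PSNR the abstract argues against, here is a minimal classical sketch of both kinds of measure: the standard PSNR and a toy pixel-difference similarity score. The `pixel_difference_similarity` function is a hypothetical classical stand-in for intuition only; it is not the QIFM, which operates on quantum states with non-destructive measurements.

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Classical peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def pixel_difference_similarity(a, b, peak=255.0):
    """Toy pixel-difference score in [0, 1]; 1 means identical images."""
    diff = np.abs(np.asarray(a, float) - np.asarray(b, float)) / peak
    return 1.0 - float(diff.mean())

img = np.arange(64, dtype=float).reshape(8, 8)
shifted = img + 10.0  # uniform brightness offset
print(round(psnr(img, shifted), 2))                         # 28.13
print(round(pixel_difference_similarity(img, shifted), 4))  # 0.9608
```

Both scores are pixel-difference based; the point of the QIFM is to obtain such a comparison without destructively measuring (and hence destroying) the quantum image states.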
Figure 1
<p>Circuit structure for comparing similarity between two FRQI quantum images (figure adapted from [<a href="#B15-entropy-18-00360" class="html-bibr">15</a>,<a href="#B23-entropy-18-00360" class="html-bibr">23</a>]).</p>
Figure 2">
Figure 2
<p>Generalised circuit structure for parallel comparison of similarity between FRQI quantum images (figure adapted from [<a href="#B15-entropy-18-00360" class="html-bibr">15</a>,<a href="#B23-entropy-18-00360" class="html-bibr">23</a>]).</p>
Figure 3">
Figure 3
<p>(<b>A</b>) Notation for a single qubit projective measurement operation and (<b>B</b>) description of the ancilla-driven measurement operation (figures and explanations in the text are adapted from [<a href="#B2-entropy-18-00360" class="html-bibr">2</a>,<a href="#B14-entropy-18-00360" class="html-bibr">14</a>,<a href="#B24-entropy-18-00360" class="html-bibr">24</a>]).</p>
Figure 4">
Figure 4
<p>Layout of proposed QIFM framework to assess fidelity between two (or more) quantum images.</p>
Figure 5">
Figure 5
<p>Flowchart for executing the proposed QIFM framework to compare two (or more) quantum images.</p>
Figure 6">
Figure 6
<p>QIFM sub-circuit to execute the Binary check operation (<span class="html-italic">BCO</span>) of the QIFM image metric.</p>
Figure 7">
Figure 7
<p>QIFM sub-circuit to execute the Bit error rate operation (<span class="html-italic">BO</span>) of the QIFM image metric.</p>
Figure 8">
Figure 8
<p>Dataset of images paired for the FPS analysis. (<b>A</b>) Lena; (<b>B</b>) Inverted Lena; (<b>C</b>) Blonde Lady; (<b>D</b>) Peppers; (<b>E</b>) Scarfed Lady; (<b>F</b>) Baboon; (<b>G</b>) Brunette Lady; (<b>H</b>) Cameraman; (<b>I</b>) Man; (<b>J</b>) Couple; (<b>K</b>) Aeroplane; (<b>L</b>) House; (<b>M</b>) Pentagon; (<b>N</b>) Fingerprint; (<b>O</b>) Bridge; (<b>P</b>) Trees.</p>
Figure 9">
Figure 9
<p>Comparison between PSNR and QIFM for watermarked images. (<b>A</b>) PSAU watermark logo; (<b>B</b>) original Lena image; (<b>C</b>) original Blonde Lady image; (<b>D</b>) watermarked version of Lena image; (<b>E</b>) watermarked version of Blonde Lady image.</p>
">
3717 KiB  
Review
Quantum Computation-Based Image Representation, Processing Operations and Their Applications
by Fei Yan, Abdullah M. Iliyasu and Zhengang Jiang
Entropy 2014, 16(10), 5290-5338; https://doi.org/10.3390/e16105290 - 10 Oct 2014
Cited by 51 | Viewed by 11941
Abstract
A flexible representation of quantum images (FRQI) was proposed to facilitate the extension of classical (non-quantum) image processing applications to the quantum computing domain. The representation encodes a quantum image in the form of a normalized state, which captures information about colors and their corresponding positions in the images. Since its conception, a handful of processing transformations have been formulated, among which are the geometric transformations on quantum images (GTQI) and the CTQI that are focused on the color information of the images. In addition, extensions and applications of the FRQI representation, such as multi-channel representation for quantum images (MCQI), quantum image data searching, watermarking strategies for quantum images, a framework to produce movies on quantum computers and a blueprint for quantum video encryption and decryption, have also been suggested. These proposals extend classical-like image and video processing applications to the quantum computing domain and offer a significant speed-up with low computational resources in comparison to performing the same tasks on traditional computing devices. Each of the algorithms and the mathematical foundations for their execution were simulated using classical computing resources, and their results were analyzed alongside other classical computing equivalents. The work presented in this review is intended to serve as a summary of advances made in FRQI quantum image processing over the past five years and to stimulate further interest geared towards the realization of some secure and efficient image and video processing applications on quantum computers.
(This article belongs to the Section Entropy Reviews)
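The FRQI encoding described in the abstract (a normalized state carrying color and position) can be written out classically as a state vector. The sketch below assumes the standard FRQI form, with each pixel value mapped to an angle θ in [0, π/2] and the state (1/2^n) Σᵢ (cos θᵢ|0⟩ + sin θᵢ|1⟩) ⊗ |i⟩; the amplitude layout (all color-|0⟩ components first) is a choice made here for illustration.

```python
import numpy as np

def frqi_amplitudes(gray):
    """State-vector amplitudes of the FRQI encoding of a 2^n x 2^n image.

    Pixel values in [0, 1] map to angles theta_i = value * pi/2. Layout:
    first all cos(theta_i) terms (colour qubit |0>), then all sin(theta_i)
    terms (colour qubit |1>), each ordered by flattened pixel index |i>.
    """
    pixels = np.asarray(gray, dtype=float).ravel()
    theta = pixels * (np.pi / 2)
    norm = 1.0 / np.sqrt(pixels.size)  # equals 1/2^n for a 2^n x 2^n image
    return np.concatenate([np.cos(theta), np.sin(theta)]) * norm

# A 2 x 2 image: black, two mid-grays, white -> an 8-amplitude state
state = frqi_amplitudes([[0.0, 0.5], [0.5, 1.0]])
```

Because cos²θ + sin²θ = 1 for every pixel, the resulting vector is automatically normalized, which is the property the representation relies on.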
<p>Commonly used quantum gates (NOT, Z, Hadamard and CNOT).</p>
">
<p>A single qubit measurement gate.</p>
">
<p>A 2 × 2 FRQI quantum image, its circuit structure and quantum state.</p>
">
<p>Preparation of FRQI state through the unitary transform <span class="html-italic">℘</span>.</p>
">
<p>Generalized circuit design for geometric transformations on quantum images.</p>
">
<p>Single qubit gates applied on the color wire.</p>
">
<p>General circuit of MCQI quantum images.</p>
">
<p>A 2 × 2 MCQI quantum image, its circuit structure and MCQI state.</p>
">
<p>The general quantum circuit of <span class="html-italic">U<sub>X</sub></span> operations, including: (<b>a</b>) <span class="html-italic">U<sub>R</sub></span>; (<b>b</b>) <span class="html-italic">U<sub>G</sub></span>; (<b>c</b>) <span class="html-italic">U<sub>B</sub></span>; and (<b>d</b>) <span class="html-italic">U<sub>α</sub></span>.</p>
">
3725 KiB  
Review
Towards Realising Secure and Efficient Image and Video Processing Applications on Quantum Computers
by Abdullah M. Iliyasu
Entropy 2013, 15(8), 2874-2974; https://doi.org/10.3390/e15082874 - 26 Jul 2013
Cited by 59 | Viewed by 9754
Abstract
Exploiting the promise of security and efficiency that quantum computing offers, the basic foundations leading to commercial applications for quantum image processing are proposed. Two mathematical frameworks and algorithms to accomplish the watermarking of quantum images, authentication of ownership of already watermarked images and recovery of their unmarked versions on quantum computers are proposed. Encoding the images as 2n-sized normalised Flexible Representation of Quantum Images (FRQI) states, with n qubits and 1 qubit dedicated to capturing the respective information about the colour and position of every pixel in the image, the proposed algorithms utilise the flexibility inherent to the FRQI representation in order to confine the transformations on an image to any predetermined chromatic or spatial (or a combination of both) content of the image as dictated by the watermark embedding, authentication or recovery circuits. Furthermore, by adopting an apt generalisation of the criteria required to realise physical quantum computing hardware, three standalone components that make up the framework to prepare, manipulate and recover the various contents required to represent and produce movies on quantum computers are also proposed. Each of the algorithms and the mathematical foundations for their execution were simulated using classical (i.e., conventional or non-quantum) computing resources, and their results were analysed alongside other longstanding classical computing equivalents. The work presented here, combined with the extensions suggested, provides the basic foundations towards effectuating secure and efficient classical-like image and video processing applications on the quantum-computing framework.
(This article belongs to the Special Issue Quantum Information 2012)
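The watermark-then-recover idea in this abstract can be caricatured classically: embed by rotating the colour angles of a masked subset of pixels by a small δ, recover by applying the inverse rotation. This is a hypothetical toy inspired by the small R(π/125) rotations discussed in the figure captions, not the WaQI algorithm itself; the function names and the mask are invented for illustration.

```python
import numpy as np

DELTA = np.pi / 125  # small rotation angle, echoing the R(pi/125) example

def embed(angles, mask, delta=DELTA):
    """Rotate the colour angle of the pixels selected by `mask` (toy embed)."""
    return np.asarray(angles, float) + delta * np.asarray(mask, float)

def recover(marked, mask, delta=DELTA):
    """Apply the inverse rotation to restore the unmarked colour angles."""
    return np.asarray(marked, float) - delta * np.asarray(mask, float)

angles = np.linspace(0.0, np.pi / 2, 16)  # colour angles of a 4 x 4 image
mask = (np.arange(16) % 2).astype(float)  # watermark pattern: every other pixel
marked = embed(angles, mask)
restored = recover(marked, mask)
```

Because the embedding is a known unitary rotation, an owner holding the watermark map can exactly invert it, which mirrors the recovery-of-unmarked-versions claim in the abstract.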
Graphical abstract
Figure 1">
Figure 1
<p>The three stages of the circuit model of quantum computation. The figure was adapted from [<a href="#B35-entropy-15-02874" class="html-bibr">35</a>], where additional explanation can be found.</p>
Figure 2">
Figure 2
<p>A simple FRQI image and its quantum state.</p>
Figure 3">
Figure 3
<p>Generalised circuit showing how information in an FRQI quantum image state is encoded.</p>
Figure 4">
Figure 4
<p>Illustration of the two steps of the PPT theorem to prepare an FRQI image.</p>
Figure 5">
Figure 5
<p>Colour and position transformations on FRQI quantum images. The * in (c) indicates the 0 or 1 control-conditions required to confine <span class="html-italic">U<sub>3</sub></span> to a predetermined sub-block of the image.</p>
Figure 6">
Figure 6
<p>Left: The circuit design for the horizontal flip operation, <span class="html-italic">F<sup>X</sup></span>, and on the right that for the coordinate swap operation, <span class="html-italic">S<sub>I</sub></span>.</p>
Figure 7">
Figure 7
<p>(<b>a</b>) Original 8×8 image, and its resulting output images after applying in (<b>b</b>) the vertical flip <span class="html-italic">F<sup>Y</sup></span>, (<b>c</b>) the horizontal flip <span class="html-italic">F<sup>X</sup></span>, and in (<b>d</b>) the coordinate swap <span class="html-italic">S<sub>I</sub></span> operations, respectively.</p>
Figure 8">
Figure 8
<p>Circuit to rotate the image in <a href="#entropy-15-02874-f007" class="html-fig">Figure 7</a>a through an angle of 90° and (on the left) the resulting image.</p>
Figure 9">
Figure 9
<p>The 8 × 8 synthetic and Lena images before and after the application of the <math display="inline"> <semantics> <mrow> <mi>R</mi> <mo>(</mo> <mstyle scriptlevel="+1"> <mfrac> <mrow> <mn>2</mn> <mi>π</mi> </mrow> <mn>3</mn> </mfrac> </mstyle> <mo>)</mo> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <mi>R</mi> <mo>(</mo> <mstyle scriptlevel="+1"> <mfrac> <mi>π</mi> <mn>3</mn> </mfrac> </mstyle> <mo>)</mo> </mrow> </semantics> </math> on the upper half and lower half of their content.</p>
Figure 10">
Figure 10
<p>Circuit to execute the <math display="inline"> <semantics> <mrow> <mi>R</mi> <mrow> <mo>(</mo> <mrow> <mstyle scriptlevel="+1"> <mfrac> <mrow> <mn>2</mn> <mi>π</mi> </mrow> <mn>3</mn> </mfrac> </mstyle> </mrow> <mo>)</mo> </mrow> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <mi>R</mi> <mrow> <mo>(</mo> <mrow> <mstyle scriptlevel="+1"> <mfrac> <mi>π</mi> <mn>3</mn> </mfrac> </mstyle> </mrow> <mo>)</mo> </mrow> </mrow> </semantics> </math> colour operations on the upper half and lower half of the 8 × 8 synthetic and Lena images.</p>
Figure 11">
Figure 11
<p>General circuit design for transforming the geometric (G) and colour (C) content of FRQI quantum images.</p>
Figure 12">
Figure 12
<p>Demonstrating the use of additional control to target a smaller sub-area in an image.</p>
Figure 13">
Figure 13
<p>The control on the <span class="html-italic">y<sub>n-1</sub></span> qubit in the circuit on the left divides an entire image into its upper and lower halves. Using this control, this circuit shows how the flip operation can be confined to the lower half of an image, while the figure to its right shows the effect of such a transformation on the 8×8 binary image in <a href="#entropy-15-02874-f007" class="html-fig">Figure 7</a>(a). (The image on the right corrects the image for the same example in [<a href="#B18-entropy-15-02874" class="html-bibr">18</a>]).</p>
Figure 14">
Figure 14
<p>Circuit to realise a high-fidelity version of the image in <a href="#entropy-15-02874-f007" class="html-fig">Figure 7</a>(a). On the left is the circuit to confine the flip operation to the predetermined 2 × 2 sub-area, <span class="html-italic">i.e.</span>, the left lower-half, of the image in <a href="#entropy-15-02874-f007" class="html-fig">Figure 7</a>(a); and to its right, the resulting transformed image. (The image on the right corrects the image for the same example in [<a href="#B18-entropy-15-02874" class="html-bibr">18</a>]).</p>
Figure 15">
Figure 15
<p>A 4×4 image showing sub-blocks labelled a–e within which the transformations <span class="html-italic">U<sub>a</sub></span>, <span class="html-italic">U<sub>b</sub></span>, <span class="html-italic">U<sub>c</sub></span>, <span class="html-italic">U<sub>d</sub></span> and <span class="html-italic">U<sub>e</sub></span> should be confined.</p>
Figure 16">
Figure 16
<p>Circuit showing the layers to confine the operations <span class="html-italic">U<sub>a</sub></span>, <span class="html-italic">U<sub>b</sub></span>, <span class="html-italic">U<sub>c</sub></span>, <span class="html-italic">U<sub>d</sub></span> and <span class="html-italic">U<sub>e</sub></span> to the layers labelled “a” to “e” of the image in <a href="#entropy-15-02874-f015" class="html-fig">Figure 15</a>. MSQ and LSQ indicate the most and least significant qubits of the FRQI representation encoding the image.</p>
Figure 17">
Figure 17
<p>Original Lena image with labelled sub-blocks.</p>
Figure 18">
Figure 18
<p>The original Lena image and the two different output images using <math display="inline"> <semantics> <mrow> <mi>θ</mi> <mo>=</mo> <mo>(</mo> <mfrac> <mi>π</mi> <mn>12.5</mn> </mfrac> <mo>)</mo> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <mi>θ</mi> <mo>=</mo> <mo>(</mo> <mfrac> <mi>π</mi> <mn>125</mn> </mfrac> <mo>)</mo> </mrow> </semantics> </math> as discussed in the text.</p>
Full article ">Figure 19
<p>The quantum circuit to realise the output images in <a href="#entropy-15-02874-f018" class="html-fig">Figure 18</a>.</p>
Full article ">Figure 20
<p>Watermark embedding procedure of the WaQI scheme.</p>
Full article ">Figure 21
<p>Merger of 2×2 sub-block entries from the first to the 2nd iteration.</p>
Full article ">Figure 22
<p>Merging the content of 2×2 sub-block entries to realise (<b>i</b>) e = 1and (<b>ii</b>) e = –1values as explained in step 3 of the watermark-embedding algorithm.</p>
Full article ">Figure 23
<p>(<b>a</b>) the a–d Alphabet test image—HTLA text logo watermark pair, and (<b>b</b>) the watermarked version of the a–d Alphabet test image.</p>
Full article ">Figure 24
<p>Watermark map for a–d alphabet–HTLA text watermark pair.</p>
Full article ">Figure 25
<p>Watermark embedding circuit for the a–d alphabet/HTLA text logo pair in <a href="#entropy-15-02874-f023" class="html-fig">Figure 23</a>(a).</p>
Full article ">Figure 26
<p>Decomposing layer 1 of the watermark-embedding circuit in <a href="#entropy-15-02874-f025" class="html-fig">Figure 25</a> into its two sub-layers.</p>
Full article ">Figure 27
<p>Merging flip gates to realise revised <span class="html-italic">F<sup>X</sup></span>, and <span class="html-italic">F<sup>Y</sup></span> operations for (i) <span class="html-italic">R &gt; L</span> and (ii) <span class="html-italic">R &lt; L.</span></p>
Full article ">Figure 28
<p>Merger of watermark map content to realise the revised GTQI operations for <span class="html-italic">R = L</span>. The operation <span class="html-italic">G<sub>I</sub></span> could be any of the operations from our <span class="html-italic">r</span>GTQI library comprising of the flip operations, <span class="html-italic">F<sup>X</sup></span> or <span class="html-italic">F<sup>Y</sup></span>; the coordinate swap operation, <span class="html-italic">S</span>; or the do nothing operation, <span class="html-italic">D</span>.</p>
Full article ">Figure 29
<p>Merging of flip gates to realise the revised <span class="html-italic">F<sup>X</sup></span> and <span class="html-italic">F<sup>Y</sup></span> flip operations for <span class="html-italic">R = L.</span></p>
Full article ">Figure 30
<p>Quantum watermarked image authentication procedure.</p>
Full article ">Figure 31
<p>Dataset comprising of images and watermark signals used for simulation-based experiments on WaQI.</p>
Full article ">Figure 32
<p>Top row shows the watermark maps for the image paired with different watermark signals HTLA text, Baboon, and Noise image. Below is the watermarked version for each pair and their corresponding PSNR values.</p>
Full article ">Figure 33
<p>Variation of watermarked image quality (PSNR) with the size of the Lena–Noise image pair. The size of each point in the watermark maps in the top row varies with the size of the image–watermark pairs. It is 8×8 for the 256×256 and 512×512 pairs; and 16×16 for the 1024×1024 Lena–Noise pair.</p>
Full article ">Figure 34
<p>Variation of watermarked image quality (PSNR) with size of image–watermark pair.</p>
Full article ">Figure 35
<p>Relationship between the colour angle <span class="html-italic">θ<sub>i</sub></span> and greyscale value |<span class="html-italic">G<sub>i</sub></span>〉 in an FRQI image.</p>
Full article ">Figure 36
<p>Greyscale spectrum showing the correlation between the greyscale values and changes in their values that can be perceived by the HVS.</p>
Full article ">Figure 37
<p>General schematic for two-tier watermarking and authentication of greyscale quantum images.</p>
Full article ">Figure 38
<p>Generalised circuit for the two-tier watermarking of greyscale FRQI images. The visible and invisible watermark embedding transformations <span class="html-italic">T<sub>α</sub></span> and <span class="html-italic">T<sub>β</sub></span> are confined to predetermined areas of the cover image using the control-conditions specified by <span class="html-italic">I<sub>Rl</sub></span> and <span class="html-italic">I<sub>S</sub></span> respectively.</p>
Full article ">Figure 39
<p>(<b>a</b>)–(<b>d</b>) Cover images and (<b>e</b>) watermark logo used for experiments on the proposed scheme.</p>
Full article ">Figure 40
<p>Watermark embedding circuit for the Lena-Titech logo pair.</p>
Full article ">Figure 41
<p>(Top row) shows the four watermarked images while (Bottom row) shows the magnified visible watermarked windows and PSNR for each pair.</p>
Full article ">Figure 42
<p>Watermark recovery circuit for the Lena- Titech logo pair.</p>
Full article ">Figure 43
<p>Results for the Lena-Titech logo pair based on the revised watermark embedding circuit for the scheme-designated watermark window on the left and one whose watermark window has been assigned to the extreme lower-right corner by default.</p>
Full article ">Figure 44
<p>Revised watermark-embedding circuit for the Lena-Titech logo pair using the scheme-designated watermark window.</p>
Full article ">Figure 45
<p><span class="html-italic">m</span>-shots from a movie showing the key |<span class="html-italic">F<sub>m</sub></span>〉, makeup <math display="inline"> <semantics> <mrow> <mo>|</mo> <msubsup> <mi>K</mi> <mi>c</mi> <mi>m</mi> </msubsup> <mo>〉</mo> </mrow> </semantics> </math>, and viewing |<span class="html-italic">F<sub>mq</sub></span>〉 frames.</p>
Full article ">Figure 46
<p>Circuit structure to encode the input of a movie strip.</p>
Full article ">Figure 47
<p>Circuit structure to encode the input of a movie strip Circuits for SMO. Depending on the motion axis <span class="html-italic">Z<sub>n</sub></span> = <span class="html-italic">x</span> or <span class="html-italic">y</span>) the circuit on the left is used to accomplish the <math display="inline"> <semantics> <mrow> <msubsup> <mi>M</mi> <mi>F</mi> <mi>c</mi> </msubsup> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <msubsup> <mi>M</mi> <mi>D</mi> <mi>c</mi> </msubsup> </mrow> </semantics> </math> operations when applied along the <span class="html-italic">x</span><sup>−</sup> and <span class="html-italic">y</span><sup>−</sup> axis, respectively. Similarly, the circuit on the right is used to accomplish the <math display="inline"> <semantics> <mrow> <msubsup> <mi>M</mi> <mi>B</mi> <mi>c</mi> </msubsup> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <msubsup> <mi>M</mi> <mi>U</mi> <mi>c</mi> </msubsup> </mrow> </semantics> </math> operations when applied the <span class="html-italic">x</span><sup>−</sup> and <span class="html-italic">y</span><sup>−</sup> axis, respectively.</p>
Full article ">Figure 48
<p>SMOs on the key frame in (<b>a</b>) to mimic the movement of the + shaped ROI on a constant white background and its viewing frames after applying (<b>b</b>) the forward motion operation <math display="inline"> <semantics> <mrow> <msubsup> <mi>M</mi> <mi>F</mi> <mi>c</mi> </msubsup> </mrow> </semantics> </math>, (<b>c</b>) the upward motion operation <math display="inline"> <semantics> <mrow> <msubsup> <mi>M</mi> <mi>U</mi> <mi>c</mi> </msubsup> </mrow> </semantics> </math>, and (<b>d</b>) a somewhat zigzag movement of the + ROI.</p>
Full article ">Figure 49
<p>Movie scenes to demonstrate SMO operations. The panels in (a) and (b) show the transcribed scripts for scene 1 and 2, (c) shows the key frame for scene 1, and (d)-(l) show the resulting viewing frames.</p>
Full article ">Figure 50
<p>Movie sub-circuit to realise the first three viewing frames of scene 1 (of the example in <a href="#entropy-15-02874-f049" class="html-fig">Figure 49</a>). The layers separated by short-dashed lines labelled “a” indicate SMO operations, while the layers grouped and labelled as “b” indicate CTQI transformations on the key frame. Layers labelled <math display="inline"> <semantics> <mrow> <msubsup> <mi>M</mi> <mn>1</mn> <mn>2</mn> </msubsup> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <msubsup> <mi>M</mi> <mn>2</mn> <mn>0</mn> </msubsup> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <msubsup> <mi>M</mi> <mn>3</mn> <mn>0</mn> </msubsup> </mrow> </semantics> </math> indicate sub-circuits of the movie reader to recover the classical readout of frames |<span class="html-italic">f</span><sub>0,1</sub>〉, |<span class="html-italic">f</span><sub>0,2</sub>〉 and |<span class="html-italic">f</span><sub>0,3</sub>〉.</p>
Full article ">Figure 51
<p>Restricting the movie operation <math display="inline"> <semantics> <mrow> <msubsup> <mi>M</mi> <mi>F</mi> <mi>C</mi> </msubsup> </mrow> </semantics> </math> in order to move the ROI <span class="html-italic">R<sub>1</sub></span> from node 0 to node 1 as specified by the movie script.</p>
Full article ">Figure 52
<p>Movie sub-circuit for scene 2 in <a href="#entropy-15-02874-f049" class="html-fig">Figure 49</a>(b). The labels 5 through 7 and 5′ through 7′ for <span class="html-italic">R<sub>1</sub></span> and <span class="html-italic">R<sub>2</sub></span> indicate the circuit layers to perform the operations that yield the viewing frames in <a href="#entropy-15-02874-f049" class="html-fig">Figure 49</a>(j) –(l).</p>
Full article ">Figure 53
<p>Circuit on the <span class="html-italic">m</span>FRQI strip axis to perform the frame-to-frame transition operation.</p>
Full article ">Figure 54
<p>The cyclic shift transformation for the case <span class="html-italic">c</span> = 1 and <span class="html-italic">n</span> = 5.</p>
Full article ">Figure 55
<p>A single qubit measurement gate.</p>
Full article ">Figure 56
<p>Exploiting the position information |<span class="html-italic">i</span>〉 of the FRQI representation to predetermined the 2D grid location of each pixel in a transformed image <span class="html-italic">G<sub>I</sub></span>(|<span class="html-italic">I(θ)</span>〉).</p>
Full article ">Figure 57
<p>Control-conditions to recover the readout of the pixels of a 2<span class="html-italic"><sup>n</sup></span>×2<span class="html-italic"><sup>n</sup></span> FRQI quantum image.</p>
Full article ">Figure 58
<p>Predetermined recovery of the position information of an FRQI quantum image. The * between the colour |<span class="html-italic">c</span>(<span class="html-italic">θ<sub>i</sub></span>)〉 and ancilla |<span class="html-italic">a</span>〉 qubits indicates the dependent ancilla-driven measurement as described in <a href="#entropy-15-02874-f059" class="html-fig">Figure 59</a> and Theorem 4.</p>
Full article ">Figure 59
<p>Circuit to recover the content of the single-qubit colour information of an FRQI quantum image. This circuit represents each of the * between the colour and ancilla qubit in <a href="#entropy-15-02874-f058" class="html-fig">Figure 58</a>.</p>
Full article ">Figure 60
<p>Reader to recover the content of a 2<span class="html-italic"><sup>n</sup></span>×2<span class="html-italic"><sup>n</sup></span> FRQI quantum image.</p>
Full article ">Figure 61
<p>Movie reader sub-circuit to recover pixel <span class="html-italic">p</span><sub>0</sub> and <span class="html-italic">p</span><sub>1</sub> for frame |<span class="html-italic">f</span><sub>0,1</sub>〉 corresponding to <a href="#entropy-15-02874-f049" class="html-fig">Figure 49</a>e.</p>
Full article ">Figure 62
<p>Movie reader to recover pixels <span class="html-italic">p</span><sub>4</sub>, <span class="html-italic">p</span><sub>6</sub>, <span class="html-italic">p</span><sub>8</sub>, <span class="html-italic">p</span><sub>9</sub>, <span class="html-italic">p</span><sub>10</sub> of viewing frame |<span class="html-italic">f</span><sub>0,1</sub>〉.</p>
Full article ">Figure 63
<p>Readout of the new state of pixel <span class="html-italic">p</span><sub>4</sub> as transformed by sub-circuit 1 in <a href="#entropy-15-02874-f052" class="html-fig">Figure 52</a>.</p>
Full article ">Figure 64
<p>Movie reader sub-circuit to recover the content of pixels <span class="html-italic">p</span><sub>2</sub>, <span class="html-italic">p</span><sub>3</sub>, <span class="html-italic">p</span><sub>7</sub>, <span class="html-italic">p</span><sub>11</sub>, <span class="html-italic">p</span><sub>12</sub>, <span class="html-italic">p</span><sub>13</sub> and <span class="html-italic">p</span><sub>15</sub>.</p>
Full article ">Figure 65
<p>Key and makeup frames for the scene “The lonely duck goes swimming”. See text and [<a href="#B19-entropy-15-02874" class="html-bibr">19</a>] for additional explanation.</p>
Full article ">Figure 66
<p>Key and makeup frames for the scene “The cat and mouse chase”. See text and [<a href="#B19-entropy-15-02874" class="html-bibr">19</a>] for additional explanation.</p>
Full article ">Figure 67
<p>Framework for quantum movie representation and manipulation.</p>
Full article ">
Back to TopTop