ARTICLE

https://doi.org/10.1038/s44172-023-00135-7 OPEN

Photonic signal processor based on a Kerr microcomb for real-time video image processing

Mengxi Tan1,2,3, Xingyuan Xu4, Andreas Boes3,5, Bill Corcoran6, Thach G. Nguyen4, Sai T. Chu7, Brent E. Little8, Roberto Morandotti9, Jiayang Wu2, Arnan Mitchell3 & David J. Moss2 ✉

Signal processing has become central to many fields, from coherent optical telecommunications, where it is used to compensate signal impairments, to video image processing. Image processing is particularly important for observational astronomy, medical diagnosis, autonomous driving, big data and artificial intelligence. For these applications, signal processing has traditionally been performed electronically. However, these, as well as new applications, particularly those involving real-time video image processing, are creating unprecedented demand for ultrahigh performance, including high bandwidth and reduced energy consumption. Here, we demonstrate a photonic signal processor operating at 17 Terabits/s and use it to process video image signals in real-time. The system processes 400,000 video signals concurrently, performing 34 functions simultaneously that are key to object edge detection, edge enhancement and motion blur. Compared with the spatial-light devices used for image processing, our system is not only ultra-high speed but also highly reconfigurable and programmable, able to perform many different functions without any change to the physical hardware. Our approach is based on an integrated Kerr soliton crystal microcomb, and opens up new avenues for ultrafast robotic vision and machine learning.

1 School of Electronic and Information Engineering, Beihang University, Beijing 100191, China. 2 Optical Sciences Centre, Swinburne University of Technology,

Hawthorn, VIC 3122, Australia. 3 School of Engineering, RMIT University, Melbourne, VIC 3001, Australia. 4 State Key Laboratory of Information Photonics
and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China. 5 Institute for Photonics and Advanced Sensing
(IPAS) and School of Electrical and Electronic Engineering, University of Adelaide, Adelaide, SA 5005, Australia. 6 Department of Electrical and Computer
System Engineering, Monash University, Clayton, VIC 3168, Australia. 7 Department of Physics and Material Science, City University of Hong Kong, Tat Chee
Avenue, Hong Kong, China. 8 Xi’an Institute of Optics and Precision Mechanics of CAS, Xi’an, China. 9 INRS-Énergie, Matériaux et Télécommunications, 1650
Boulevard Lionel-Boulet, Varennes, Québec J3X 1S2, Canada. ✉email: dmoss@swin.edu.au

COMMUNICATIONS ENGINEERING | (2023)2:94 | https://doi.org/10.1038/s44172-023-00135-7 | www.nature.com/commseng 1



Image processing, the application of signal processing techniques to photographs or videos, is a core part of emerging technologies such as machine learning and robotic vision1, with many applications to LIDAR for self-driving cars2, remote drones3, automated in-vitro cell-growth tracking for virus and cancer analysis4, optical neural networks5, ultrahigh-speed imaging6,7, holographic three-dimensional (3D) displays8,9, and others. Many of these require real-time processing of massive amounts of real-world information, placing extremely high demands on the processing speed (bandwidth) and throughput of image processing systems. While electrical digital signal processing (DSP) technologies10 are well established, they face intrinsic limitations in energy consumption and processing speed, such as the well-known von Neumann bottleneck11.

To overcome these limitations, optical signal processing offers the potential for much higher speeds2, and this has been achieved using a variety of techniques including silicon photonic crystal metasurfaces12, surface plasmonic structures13, and topological interfaces14. These free-space, spatial-light devices offer many attractions such as compact footprint, low power consumption, and compatibility with commercial cameras and optical microscopes. However, they tend to be non-reconfigurable systems designed to perform a single fixed function. On a more advanced level, human action recognition through processing of video image data has been achieved using photonic computers15,16. However, these were realized either in comparatively low speed systems15 or in high bandwidth (multi-TeraOP regime) systems based on bulk optics that is incompatible with integration16. To date, optical systems, especially those compatible with integration17, have still not demonstrated the capability of processing large data sets of high-definition video images at ultrahigh speeds—fast enough for real-time video image processing.

Here, we demonstrate an optical real-time signal processor for video images that is reconfigurable and compatible with integration. It is based on components that are either already integrated or have been demonstrated in integrated form, and operates at an ultrahigh bandwidth of 17 Terabits/s. This is sufficient to process ~400,000 (399,061) video signals both concurrently and in real-time, performing up to 34 functions on each signal simultaneously.

Here, the term “functions” refers to signal processing operations comprised of fundamental mathematical operations performed by the system, which in our case relate to object image edge enhancement, edge detection and motion blur. These functions operate on the input signal to extract or enhance these key characteristics, and include both integral and fractional order differentiation, fractional order Hilbert transforms, and integration. For differentiation and Hilbert transforms we perform both integral order and a continuous range of fractional order transforms. Therefore, while there are 3 basic types of functions that we perform, with the inclusion of a range of integral and fractional orders we achieve 34 functions in total. Importantly, these 34 functions are all achieved without any change in hardware, only by tuning the parameters of the system. Furthermore, beyond these 34 functions, the range of possible functions is in fact unlimited, given that the system can process a continuous range of arbitrary fractional and high-order differentiations and fractional Hilbert transforms.

Our system is comparable to electrical DSP systems but with the important advantage that it operates at multi-terabit/s speeds, enabled by massively parallel processing. It is also very general, flexible, and highly reconfigurable—able to perform a wide range of functions without requiring any change in hardware. We perform multiple image processing functions in real-time, which are essential for machine vision and microscopy for tasks such as object recognition or identification, feature capture, and data compression12,13. We use an integrated Kerr soliton crystal microcomb source that generates 95 discrete taps, or wavelengths, as the basis for massively parallel processing, with single channel rates of 64 GigaBaud (pixels/s). Our experimental results agree well with theory, demonstrating that the processor is a powerful approach for ultrahigh-speed video image processing for robotic vision, machine learning, and many other emerging applications.

Results
Principle of operation. The operational principle of the video image processor is based on the RF photonic transversal filter18–20 approach, as represented by Eq. (1) and illustrated in Fig. 1a–c(i–iii). We employ wavelength division multiplexed (WDM) signals to provide the different taps, or channels—each wavelength representing a single tap/channel. We also use WDM as a central means of accomplishing both single and multiple functions simultaneously. The tap delays required by Eq. (1) are achieved here by an optical delay line in the form of standard single-mode fiber (SMF), which produces the wavelength (i.e., tap or channel) dependent delays. We use a WaveShaper to flatten the comb and implement the channel weightings for the transversal filter for each function, as well as to separate different groups of wavelengths to perform parallel and simultaneous processing of multiple functions. We used a maximum of 95 wavelengths supplied by a soliton crystal Kerr microcomb that produced a comb spacing of ~50 GHz. The transfer function of the system is given by

H(ω) = ∑_{n=0}^{N−1} h(n) e^{−jωnT}    (1)

where ω is the RF angular frequency, T is the time delay between adjacent taps (i.e., wavelength channels), and h(n) is the tap coefficient of the nth tap, or wavelength, which can be calculated by performing the inverse Fourier transform of H(ω)18–20. In Eq. (1), the tap coefficients can be tailored by shaping the power of the comb lines according to the different computing functions (e.g., differentiation, integration, and Hilbert transformation), thus enabling different video image processing functions. For microcombs with multiple equally spaced comb lines transmitted over the dispersive SMF, T in Eq. (1) is given by T = D × L × Δλ, where D is the dispersion coefficient of the SMF, L is the length of the SMF, and Δλ is the spectral separation between adjacent comb lines (in our case 48.9 GHz); the RF bandwidth of the system is given by f = 1/T. The optical delay lines play a crucial role in achieving simultaneous processing by introducing wavelength dependent, controlled time delays to the different channels, enabling the functions to be processed independently and in parallel. These time delays coincide with the requirements of the transversal filter function (Eq. (1)) and ensure that the input signals for each function are properly aligned and synchronized. To change the system bandwidth, one needs to change the time delay between adjacent wavelengths. While this is generally fixed for a given system, it can be changed either by using different lengths of SMF or by adding a length of dispersion compensating fiber (DCF), which effectively reduces the net dispersion D of the fiber, equivalent to decreasing the SMF fiber length. Dynamic tuning of the RF bandwidth would require a tunable delay line, which is beyond the scope of our work.

All 95 wavelengths from the microcomb were passed through a single output port WaveShaper, which flattened the comb and weighted the individual lines according to the required tap weights for the particular function being performed. The weighted wavelengths were then passed through an electro-optic modulator driven by the analog input video signal. The output function was finally generated by summing all of the wavelengths, achieved by photodetection of all wavelengths.

Fig. 1 Operation principle of the RF photonic video image processor. PD photodetector. a Illustration of the flattening method applied to the input video frames, both horizontally and vertically. b Schematic illustration of the experimental setup for video image processing. c The processed video frames after (i) 0.5 order differentiation for edge detection, (ii) integration for motion blur, and (iii) Hilbert transformation for edge enhancement.

The setup for the massively parallel signal processing demonstration is shown in Fig. 2, which uses an approach similar to that used for our ultrahigh-speed optical convolution accelerator5. Figure 2 shows the results for 34 different functions, which are listed in detail along with their individual parameters in Supp. Table S1. In this work, for the ultrahigh-speed demonstration we chose fewer taps for each function—typically 5—in order to increase the number of functions we could perform while maximizing the overall speed or bandwidth. We found that 5 taps was the minimum number able to achieve good performance, striking a balance between complexity and efficiency and providing satisfactory performance for the desired functions while minimizing the number of required elements.

Fig. 2 Massively parallel multi-functional video processing. EDFA erbium doped fiber amplifier, MRR micro-ring resonator, EOM electro-optic Mach-Zehnder modulator, SMF single-mode fiber, WS WaveShaper, PD photodetector, OC optical coupler. Detailed parameters for each function are listed in Supp. Table S1.
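For readers who want to experiment, the transversal filter of Eq. (1) is easy to prototype numerically. The sketch below is our own illustration, not the authors' code: it derives tap weights h(n) by inverse-DFT'ing a target response H(ω) = (jω)^α for an (optionally fractional) order-α differentiator, then applies them to one scan line of pixel intensities. In the experiment the same weights are instead programmed as comb-line powers and phases on the WaveShaper, and the delays come from dispersive fiber rather than array indexing.

```python
import numpy as np

def fir_taps(order, n_taps):
    """Tap weights h(n) obtained by inverse-DFT'ing the target response
    H(w) = (jw)**order, per Eq. (1); `order` may be fractional (e.g. 0.5)."""
    w = 2 * np.pi * np.fft.fftfreq(n_taps)   # sampled omega*T on the DFT grid
    return np.fft.ifft((1j * w) ** order)    # h(n); complex for fractional order

def apply_taps(line, taps):
    """Circular convolution of a 1D pixel line with the taps: the software
    analogue of photodetecting the sum of weighted, delayed WDM channels."""
    return np.fft.ifft(np.fft.fft(line) * np.fft.fft(taps)).real

n = 64
line = np.sin(2 * np.pi * 3 * np.arange(n) / n)   # a smooth intensity profile
edges = apply_taps(line, fir_taps(1.0, n))        # first-order derivative
```

For order 1 and a sinusoidal line this reproduces the analytic derivative to machine precision; orders between 0 and 1 interpolate toward the fractional edge-detection behaviour described above.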

We were able to achieve 34 functions simultaneously overall. For the initial tests, the microcomb lines were fed into a WaveShaper, then weighted and directed to an EO modulator followed by the SMF delay fiber. Lastly, the generated RF signal for each function was resampled and converted back to digital video image frames, which formed the digital output signal of the system.

In our experiments the analog input video image frames were first digitized and then flattened into 1D vectors (X) and encoded as the intensities of temporal symbols in a serial electrical waveform by a high speed analog-to-digital converter with a resolution of 8 bits per symbol at a sampling rate of 64 gigabaud (see Methods). In principle, for analog video signals this A/D and D/A step can be avoided. We added this step since it allowed us to dramatically increase the speed of the video signal over standard video rates in order to fully exploit the ultrahigh speed of our processor.

The use of WDM and WaveShapers allows for very flexible allocation of wavelengths and highly reconfigurable tuning. By employing these components in a carefully designed configuration where each function only requires a limited number of wavelengths, multiple functions can be simultaneously processed. By configuring the WaveShapers appropriately according to the transfer function, different functions can be applied to different groups of wavelength channels, facilitating simultaneous processing of multiple functions across the entire microcomb spectrum.

In the experiments presented here, we demonstrate real-time video image processing, simultaneously executing 34 functions encompassing edge enhancement, edge detection, and motion blur. Edge detection serves as the foundation for object detection, feature capture, and data compression12,13. We achieve this by temporal signal differentiation with either high integral or fractional order derivatives that extract information about object boundaries within images or videos. We also perform a motion blur function based on signal integration that represents the apparent streaking of moving objects in images or videos. Motion blur usually occurs when the image being recorded changes during the recording of a single exposure, and has wide applications in computer animation and graphics21. Edge enhancement, or sharpening, based on signal Hilbert transformation is also a fundamental processing function with wide applications22. It enhances the edge contrast of images or videos, thus improving their acutance. The standard Hilbert transform implements a 90 degree phase shift and is commonly used in signal processing to generate a complex analytic signal from a real-valued signal. We also employ arbitrary, or fractional order, Hilbert transforms, which have been shown to be particularly useful for object image edge enhancement23. These processing functions not only underpin conventional image or video processing24,25 but also facilitate emerging technologies such as robotic vision and machine learning2,4.

To achieve fractional order operations, the system utilizes the concept of optical fractional differentiation and Hilbert transforms26. This is accomplished by carefully designing the WaveShapers in the optical signal processing setup with the appropriate set of weights (phase and amplitude) for shaping the optical signals in the frequency domain; their configurations can be adjusted to achieve different fractional order operations. While we use off-the-shelf commercial WaveShapers, in practice these can be realized using various techniques compatible with integration, such as cascaded Mach-Zehnder interferometers or programmable phase modulators. These components enable precise control over the spectral phase and amplitude profiles of the optical signals, allowing the realization of fractional order operations.

A key feature of our system that was critical in achieving high fidelity and performance was the improvement we obtained in the accuracy of the frequency comb spectral line shaping. To accomplish this we employed a two-stage shaping strategy (see Methods)27 in which a feedback control path was employed to calibrate the system and further improve the comb shaping accuracy. The feedback loop plays a crucial role in optimizing the signal processing quality: it involves monitoring the system's output and feeding it back to adjust the tap weights in order to achieve the best performance (see Methods for more detail). The error signal was generated by directly measuring the impulse response of the system and then comparing it with the ideal tap coefficients. Note that this type of feedback calibration approach is challenging and rarely used for analog optical signal computing, such as systems based on either spatial-light metasurfaces12,28 or waveguide resonators29,30, for example.

Our system is based on a soliton crystal (SC) microcomb source, generated in an integrated MRR18–20 (Fig. 3(a, b), Supplementary Note 1, Supplementary Fig. S1–7). Since their first demonstration in 200731, and subsequently in CMOS compatible integrated form32, optical frequency combs generated by compact micro-scale resonators, or micro-combs32–35, have led to significant breakthroughs in many fields such as metrology36, spectroscopy35,37, telecommunications33,38, quantum optics39,40, and radio-frequency (RF) photonics18–20,41–44. Microcombs offer new possibilities for major advances in emerging applications such as optical neural networks5, frequency synthesis29, and light detection & ranging (LIDAR)2,45,46. With a good balance between gain and cavity loss as well as between dispersion and nonlinearity, soliton microcombs feature high coherence and low phase noise and have been highly successful for many RF photonic applications19,23,47–51.

SC microcombs, comprising multiple self-organized solitons52, have been particularly successful for RF photonic signal processing18–20,26,53–56, ultra-dense optical data transmission33, and optical neuromorphic processing5,57. They feature very high coherence with low phase noise56, are intrinsically stable with only open-loop control (Supplementary Note 1, Supplementary Fig. S2 and Supplementary Movie S1), and can be simply and reliably initiated via manual pump wavelength sweeping. Further, they have intrinsically high conversion efficiency since the intra-cavity energy is much higher than for single-soliton states5,33. Our microcomb has a low free spectral range (FSR) of ~48.9 GHz, closely matching the ITU frequency grid of 50 GHz, and generates over 80 wavelengths in the telecom C-band, which serve as discrete taps for the video image processing system.

Fig. 3 Soliton crystal (SC) microcomb used for video image processing. The SC microcomb is generated in a 4-port integrated micro-ring resonator (MRR) with an FSR of 48.9 GHz. a Optical spectra of the micro-comb when sweeping the pump wavelength: (i) pump; (ii) primary comb with a spacing of 39 FSRs; (iii) primary comb with a spacing of 38 FSRs; (iv) SC micro-comb. b Measured soliton crystal step in the intra-cavity power.

Experimental results—static images. Since each frame of a streaming video signal is essentially a static image, we initially benchmarked the system's single-function performance on static images with varying numbers of taps, to understand the tradeoffs in performance. Figure 4a–i(i–iii) shows the experimental results of static image processing using the above RF photonic system. We conducted initial experiments to investigate the performance of the transfer functions with 15, 45, and 75 taps for single-function operation, where the WaveShaper was set to zero out any unneeded wavelengths. These experiments were performed on single static images—i.e., a single frame of the video signal, as shown in Fig. 4. This was aimed at exploring the influence of the tap number on the signal processing performance and determining the optimal tap number for the subsequent massively parallel signal processing demonstration. This allowed us to assess the tradeoffs between complexity, accuracy, and efficiency in the signal processing operations.

The original (unprocessed) high definition (HD) digital images had a resolution of 1080 × 1620 pixels. The results for edge detection based on signal differentiation with orders of 0.5, 0.75, and 1 are shown in Fig. 4a–c, respectively. In each case, we show (i) the designed and measured spectra of the shaped comb, (ii) the measured and simulated spectral response, and (iii) the measured and simulated images after processing. The measured comb spectra and spectral response were recorded by an optical spectrum analyser (OSA) and a vector network analyser (VNA), respectively. The experimental results agree well with theory, indicating successful edge detection for the original images.

In Fig. 4d–f, we show the results for motion blur based on signal integration with tap numbers of 15, 45, and 75, respectively. These also show good agreement between experiment and theory. The blur intensity increases with the number of taps, reflecting the improved processing performance as the number of taps increases. Compared with discrete laser arrays, whose bulky size limits the number of available taps, microcombs generated by a single MRR can operate as a multi-wavelength source that provides a large number of wavelength channels while greatly reducing the size, power consumption, and complexity. This is very attractive for the RF photonic transversal filter system, which requires a large number of taps for improved processing performance.

Figure 4g–i shows the results of edge enhancement based on signal Hilbert transformation (90° phase shift) with operation bandwidths of 12 GHz, 18 GHz, and 38 GHz, respectively. In our experiment, the operation bandwidth was adjusted by changing the comb spacing (2 FSRs vs 3 FSRs of the MRR) and the fiber length (1.838 km vs 3.96 km). Note that having to change the fiber length can in principle be avoided by using tunable dispersion compensators58,59. The WaveShaper can also accommodate any FSR (channel spacing) as long as it fits roughly within telecom band channel spacings. Thus the system is very flexible and can accommodate any FSR, via the WaveShaper, or any delay, by changing the length of SMF, if a different microcomb device is used. Alternatively, the WaveShaper can be used to filter out certain channels if a larger effective channel spacing is desired compared to the source FSR. The tradeoff in varying the FSR is that smaller FSRs yield lower bandwidths whereas larger FSRs reduce the number of wavelengths within the telecom C band. As can be seen, the edges in the images are enhanced, and the experimental results are consistent with the simulations.

We also demonstrate more specific image processing such as edge detection based on fractional differentiation with orders of 0.1‒0.9, edge enhancement based on fractional Hilbert transformation with phase shifts of 15°‒75°, and edge detection with operation bandwidths of 4.6 GHz‒36.6 GHz (Supplementary Note 2, Supplementary Fig. S3‒S6). Changing the relevant parameters resulted in processed images with different degrees of edge detection, motion blur, and edge enhancement. By simply programming the WaveShaper to shape the comb lines according to the designed tap coefficients, different image processing functions were realized without changing the physical hardware. This reflects the high reconfigurability of our video image processing system, which is challenging to achieve for image processing based on spatial-light devices12–14. In practical image processing, there is no single processing function with one set of parameters that can meet all requirements. Rather, each processing function requires its own unique set of tap weights. Hence, the high degree of reconfigurability and versatility of our image processing system is critical to meeting diverse and practical processing requirements.

Experimental results—real-time video. In addition to static image processing, our microcomb based RF photonic system can also process dynamic videos in real-time. Our results for real-time video processing are provided in Supplementary Movie S2, while Supplementary Fig. S8–10 show samples of these experimental results.
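The reconfigurability claim above, namely that switching functions requires only a new set of tap weights, can be mimicked in software: the filtering core below stays fixed while only the target frequency response, the analogue of the WaveShaper program, is swapped. This is our own sketch using standard DSP definitions, not the authors' code; the -j·sign(ω) response is the textbook 90° Hilbert transformer used for edge enhancement.

```python
import numpy as np

def transversal_filter(line, target_response):
    """Fixed filtering core; 'reprogramming' means passing a different
    target_response, just as the WaveShaper is given new tap weights."""
    w = 2 * np.pi * np.fft.fftfreq(len(line))
    return np.fft.ifft(np.fft.fft(line) * target_response(w)).real

# Three of the function families used above, as frequency-domain targets:
hilbert_90 = lambda w: -1j * np.sign(w)   # 90-deg Hilbert: edge enhancement
differentiator = lambda w: 1j * w         # differentiation: edge detection

def integrator(w):                        # integration: motion blur
    out = np.zeros(len(w), dtype=complex)
    nz = w != 0
    out[nz] = 1.0 / (1j * w[nz])
    return out

n = 64
line = np.cos(2 * np.pi * 5 * np.arange(n) / n)
enhanced = transversal_filter(line, hilbert_90)   # -> sin(2*pi*5*m/n)
```

Swapping `hilbert_90` for `differentiator` or `integrator` changes the function with no change to the filtering core, mirroring the hardware-free reconfiguration described above.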


Fig. 4 Experimental results of image processing. a‒c Results for edge detection based on differentiation with orders of 0.5, 0.75, and 1, respectively. d–f Results for motion blur based on integration with tap numbers of 15, 45, and 75, respectively. g–i Results for edge enhancement based on Hilbert transformation with operation bandwidths of 18 GHz, 12 GHz, and 38 GHz, respectively. For a‒i, (i) shows the designed and measured optical spectra of the shaped microcomb, (ii) shows the measured and simulated spectral response of the video image processing system, and (iii) shows the measured and simulated high definition (HD) video images after processing.
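As a rough consistency check on the operation bandwidths quoted above, f = 1/T with T = D × L × Δλ can be evaluated directly. Note that the dispersion value below is our assumption of a typical SMF figure (~17 ps/(nm·km)); it is not stated in the text:

```python
# Back-of-envelope check of f = 1/T, with T = D * L * delta_lambda.
c = 2.998e8                           # speed of light, m/s
lam = 1550e-9                         # approximate carrier wavelength, m
dfreq = 48.9e9                        # comb spacing (one FSR), Hz
dlam_nm = lam**2 * dfreq / c * 1e9    # FSR in wavelength units: ~0.39 nm
D = 17.0                              # ps/(nm km); assumed typical SMF value
L = 3.96                              # km, one of the fiber lengths quoted
T_ps = D * L * dlam_nm                # inter-tap delay: ~26 ps
f_GHz = 1e3 / T_ps                    # RF bandwidth, GHz
```

With L = 3.96 km and a one-FSR spacing this lands near 38 GHz, consistent with the largest operation bandwidth reported; the smaller bandwidths follow from wider effective comb spacings (2–3 FSRs) and/or the shorter fiber length.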


Supplementary Movie S2 starts off with the first original source video frames, followed by the simulation and experimental results shown side by side for the differentiator, integrator, and Hilbert transformer. This is then followed by the 34 functions (Supplementary Note 2, Supplementary Table S1) performed simultaneously by the massively parallel video processor. Finally, high order derivatives are shown, and the video ends with results based on full two-dimensional derivatives (see below).

The first original video had a resolution of 568 × 320 pixels and a frame rate of 30 frames per second. Supplementary Fig. S8a shows 5 frames of the original video, together with the corresponding electrical waveform after digital-to-analog conversion. Supplementary Fig. S8b–d show the corresponding results for the processed video after edge detection (0.5 order fractional differentiation), motion blur (integration with 75 taps), and edge enhancement based on a Hilbert transformation with an operation bandwidth of 18 GHz, respectively. As for the static image processing, the real-time video processing results show good agreement with theoretical predictions.

To fully exploit the bandwidth advantage of optical processing, we further performed massively parallel real-time multi-functional video processing. The experimental setup and results are shown in Fig. 2 and Supplementary Note 2, Supplementary Fig. S8–10. We used 95 comb lines around the C band in our demonstration. After flattening and splitting the comb lines via the first WaveShaper, we obtained 34 parallel processors, most of which consisted of five taps. We simultaneously performed 34 video image processing functions, including fractional differentiation with fractional orders from 0.05 to 1.1, fractional Hilbert transformation with phase shifts from 65° to 90°, an integrator, and bandpass Hilbert transformation with a 90° phase shift (see Supplementary Table S1 for detailed parameters for each function). The corresponding total processing bandwidth equals 64 GBaud × 34 (functions) × 8 bits = 17.4 Terabit/s, well beyond the processing bandwidth of electrical video image processors10.

Discussion
To analyze the performance of our video image processor, we evaluated the processed images against the ground truth for both quantitative and qualitative comparisons60. We used the respective ground truths for the evaluation of 3 BSD (Berkeley Segmentation Database) images after edge detection and compared the relevant performance parameters with the same images processed using the widely used Sobel algorithm61,62. (In signal processing and data analysis, “ground truth” typically refers to the objectively true or correct values or information that serve as a reference for evaluating the performance or accuracy of a system or algorithm.) Figure 5 shows the images processed using the Sobel algorithm and our video image processor (including differentiation with different orders from 0.2 to 1.0). The comparison of the performance parameters, including performance ratio (PR) and F-Measure, is provided in Table 1, where higher values of these parameters reflect better edge detection performance. As can be seen, our differentiation results for PR and F-Measure are better than those of the Sobel approach, reflecting the high performance of our video image processor.

The maximum input rate we used was 64 GBaud, or Giga- […] equates to ~47,222 video signals in parallel. The processing throughput can be increased even further by using more comb lines in the L-band.

We provide the root mean square errors (RMSEs) in Supplementary Fig. S7 and Table S2 to quantitatively assess the agreement between the measured waveforms and the theoretical results for the different image processing functions. We find that for the Hilbert transformer with tunable phase shift, for example, the RMSE values ranged from 0.0586 to 0.1045, depending on the specific phase shift angle. These RMSE values provide a quantitative measure of the agreement between the experimental measurements and the theoretical predictions, with lower RMSE values generally indicating a better correspondence between the two. Our RMSE results indicate that the measured waveforms closely align with the expected behavior of the respective image processing functions.

The processing accuracy of our system is lower than electrical DSP image processing but higher than analog image processing based on passive optical filters13,22,30 (see Supplementary Fig. S7 and Table S1). This is mainly a result of the hybrid nature of our system, which is equivalent to electrical DSP systems but implemented with photonic hardware. Different lengths of fiber were used to be compatible with the different spacings of the different FSRs used (set by the WaveShaper) and to achieve an optimum RF bandwidth. There are a number of factors that can lead to tap errors during the comb shaping, resulting in a non-ideal frequency response of the system as well as deviations between the experimental results and theory. These mainly include the limited number of available taps, the instability of the optical microcomb, the accuracy of the WaveShapers, the gain variation with wavelength of the optical amplifiers, the chirp induced by the optical modulator, the second-order dispersion (SOD) induced power fading, and the third-order dispersion (TOD) of the dispersive fiber. Chirp-induced errors refer to distortions that arise in the signal processing system due to the presence of chirp in the optical signals. Chirp (frequency modulation or shift in time) can be caused by various factors, such as dispersion or nonlinearity in the optical components.

We encode the image pixels directly onto the optical signal using the intensity modulator. We slice the input image because our AWG performs 1-dimensional signal operations; otherwise this would not be necessary, and there are a variety of ways to slice the input image. The video signal is encoded without any time delay onto the optical wavelengths. For processing the signal, although SMF is used to achieve the incremental delay lines required by the transversal filter transfer function, it does not slow down the speed of the system but only adds to the latency. For the same image/video signal with the same delay line, we only need one modulator to encode the input signal. We pre- and post-process the image using the arbitrary waveform generator to digitize the analog signal and convert it into a high speed analog signal, enabling us to exploit the full capability of our signal processor. In principle this pre- and post-processing is not necessary, since the system can process and output analog signals directly. The AWG does not form a fundamental part of our processor.

In terms of the energy efficiency of the optical signal processor, we use the same approach as reported elsewhere5. The power consumption of the comb source is estimated to be 1500 mW while that of the EDFAs is estimated at 2000 mW (100 mW for
pixels/s. This, combined with the fact that we performed 34 each EDFA) and for the intensity modulator is ~3.4 V × 0.01
channels with a video resolution of 568 × 320 that resulted in A = 34 mW. The overall computing speed of the optical signal
181,760 pixels at a frame rate of 30 Hz, yields 5,452,800 pixels/s, processor is 2 × 34 × 5 × 62.9 = 21.386 TeraOPs/s. As such, the
resulting in simultaneous real-time processing of 64 × 109 × 34/ energy per bit of the optical signal processor is roughly
(181,760 × 30) = 399,061 video signals per second. For HD videos (1500 + 2000 + 34 × 19) mW/21.386 TeraOPs/s = 0.194pJ/
(720 × 1280 = 921,600 pixels) at a frame rate of 50 Hz, this operation.

8 COMMUNICATIONS ENGINEERING | (2023)2:94 | https://doi.org/10.1038/s44172-023-00135-7 | www.nature.com/commseng

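The headline figures in the Discussion are simple arithmetic and can be sanity-checked directly. The short script below recomputes them from the numbers quoted in the text; it is a back-of-envelope sketch only, using exactly the inputs the paper states (64 GBaud, 34 functions, 8 bits, 568 × 320 frames at 30 Hz, and the stated power budget and computing speed).

```python
# Back-of-envelope check of the headline figures quoted in the Discussion.
# All inputs are taken directly from the text; nothing here is new data.
baud = 64e9                      # symbol (pixel) rate per channel
functions = 34                   # parallel processing functions
bits = 8                         # bit depth per pixel

throughput = baud * functions * bits            # total processing bandwidth
videos = baud * functions / (568 * 320 * 30)    # concurrent 568x320 @ 30 Hz streams

ops_per_s = 2 * 34 * 5 * 62.9e9                 # stated computing speed, OPs/s
power_w = (1500 + 2000 + 34 * 19) / 1000.0      # stated power budget, watts
energy_pj = power_w / ops_per_s * 1e12          # energy per operation, pJ

print(f"{throughput / 1e12:.1f} Tb/s, {videos:,.0f} videos, {energy_pj:.3f} pJ/op")
# -> 17.4 Tb/s, 399,061 videos, 0.194 pJ/op
```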


Fig. 5 Comparison of BSD images processed using Sobel's algorithm and our video image processor after edge detection. Differentiation with orders of 0.2, 0.4, 0.6, 0.8, and 1.0 was used for edge detection with our video image processor. The Sobel results were computed electronically.

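As a concrete illustration of how a transversal filter can realize the fractional-order differentiation used for the edge detection above, the sketch below designs 5 tap weights by frequency-sampling the ideal response H(ω) ∝ (jω)^α and verifies that the resulting filter reproduces the target magnitude response at the design frequencies. This is a generic frequency-sampling design for illustration only, not the authors' exact tap-design procedure; α = 0.6 and N = 5 are example values (N = 5 matches the per-function tap count reported in the experiment).

```python
import cmath, math

# Frequency-sampling design of an N-tap transversal-filter fractional
# differentiator with target response H(w) ~ (jw)^alpha.
N, alpha = 5, 0.6  # example tap count and fractional order

def H_ideal(w):
    # (jw)^alpha, principal branch: |w|^alpha * exp(+/- j*alpha*pi/2)
    return abs(w) ** alpha * cmath.exp(1j * alpha * math.copysign(math.pi / 2, w))

# DFT sample frequencies mapped into (-pi, pi]
omegas = [2 * math.pi * (k if k <= N // 2 else k - N) / N for k in range(N)]

# Inverse DFT of the sampled ideal response gives the (real-valued) tap weights
taps = [sum(H_ideal(omegas[k]) * cmath.exp(2j * math.pi * k * n / N)
            for k in range(N)) / N for n in range(N)]

def H(w):
    # Frequency response of the N-tap transversal filter with unit delay T = 1
    return sum(h * cmath.exp(-1j * w * n) for n, h in enumerate(taps))

# By construction the realized filter hits the ideal magnitude |w|^alpha
# exactly at the design frequencies:
w1, w2 = omegas[1], omegas[2]
print(abs(H(w1)) / w1 ** alpha)   # ~1.0
print(abs(H(w2)) / abs(H(w1)))    # ~2**alpha, since w2 = 2*w1
```

Between the design frequencies the 5-tap response only approximates ω^α, which is one reason a limited tap count appears among the error sources discussed above.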
The number of available taps can be increased by using MRRs with smaller FSRs or optical amplifiers with broader operation bandwidths. The accuracy realized by the WaveShapers and the optical amplifiers was significantly improved by using a two-stage comb shaping strategy as well as the feedback loop calibration mentioned previously27. By using low-chirp modulators, the chirp-induced tap errors can be suppressed. The discrepancies induced by the SOD and TOD of the dispersive fibers can also be reduced by using a second WaveShaper to compensate for the group delay ripple of the system (see Supplementary Note 3, Supplementary Fig. S11).

Our massively parallel photonic video image processor, which operates on the principle of time-wavelength-spatial multiplexing,




Table 1 Comparison of performance parameters of images processed using Sobel's algorithm and our video image processor.

BSD image no.               118,035               42,049                35,010
                            PR       F-measure    PR       F-measure    PR       F-measure
Sobel                       12.2488  0.0068996    15.8362  0.0049214    11.3226  0.011988
Differentiation order 0.2   12.7891  0.016586     20.1547  0.021603     18.3944  0.021787
Differentiation order 0.4   13.629   0.014815     20.9594  0.024451     18.6818  0.022378
Differentiation order 0.6   15.249   0.013748     21.0244  0.024771     19.8858  0.02388
Differentiation order 0.8   16.2559  0.013491     21.4386  0.022484     20.0704  0.023491
Differentiation order 1.0   17.4338  0.014391     22.1003  0.020635     20.6862  0.026481
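For reference, the electronic Sobel baseline in Table 1 amounts to convolving the image with two fixed 3 × 3 kernels and taking the gradient magnitude. A minimal pure-Python sketch of that standard operator follows (illustrative only; the tiny synthetic image and the implementation are ours, not the authors' evaluation code):

```python
# Sobel edge detection: convolve with horizontal/vertical gradient kernels,
# then take the gradient magnitude at each interior pixel.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel

def sobel(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: the response peaks on the boundary columns
img = [[0, 0, 0, 9, 9, 9]] * 6
edges = sobel(img)
print(edges[3])  # -> [0.0, 0.0, 36.0, 36.0, 0.0, 0.0]
```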

similar to the optical vector convolutional accelerator in our previous work5, is also capable of performing convolution operations for deep learning neural networks. This opens up new opportunities for image and video processing applications in robotic vision and machine learning. In particular, each parallel function can be trained and operated as one of up to 34 kernels with a size of 5 × 1 for a convolutional neural network, and therefore could ultimately realize a neural network while avoiding the bandwidth limitation imposed by the analog-to-digital converters. Note that, without the use of an AWG or OSC, our processor could directly process analog signals, while with the use of an AWG and OSC it can also process digital signals. Hence it is effectively equivalent to an electrical DSP.

Although the system presented here operated at a speed of ~17 Terabits/s, it is highly scalable in speed. Figure S12 shows a video image processor using the C + L + S bands (with 405 wavelengths distributed over 81 processors with 5 taps each) and 19 spatial paths, all exploiting polarization, yielding a total processing bandwidth of 1.575 Petabits/s (Supplementary Note 4).

Our system is highly compatible with integrated technologies and so there is a strong potential for much higher levels of integration, even reaching full monolithic integration. The core component of our system, the microcomb, is already fully integrated. Further, all of the other components have been demonstrated in integrated form, including integrated InP spectral shapers59, high-speed integrated lithium niobate modulators63,64, integrated dispersive elements59, and photodetectors65. In addition, low power consumption and highly efficient microcombs have been demonstrated with single-soliton states66 and laser cavity-soliton Kerr combs67,68, which would greatly reduce the energy requirements. A key advantage of monolithic integration would be the ability to integrate electronic elements on-chip, such as an FPGA module for feedback control. Finally, being much more compact, the monolithically integrated system should be much less susceptible to the environment, thus reducing the required level of feedback control.

Conclusions
In conclusion, we report the first demonstration of video image processing based on Kerr microcombs. Our RF photonic processing system, with an ultrahigh processing bandwidth of 17.4 Tb/s, can simultaneously process 399,061 video signals in real-time. The system is highly reconfigurable via programmable control, and can perform different processing functions without changing the physical hardware. We experimentally demonstrated different video image processing functions including edge detection, motion blur, and edge enhancement. The experimental results agree well with theory, verifying the effectiveness of using Kerr microcombs for ultrahigh-speed video image processing. Our results represent a significant advancement for fundamental photonic computing, paving the way for practical ultrahigh bandwidth real-time photonic video image processing on a chip.

Methods
Microcomb generation. We use SC microcombs generated by an integrated MRR (Fig. 3 and Supplementary Figs. S1–6) for video image processing. The SC microcombs, which include multiple self-organized solitons confined within the MRR, were also used for our previous demonstrations of RF photonic signal processing26,53–56, ultra-dense optical data transmission33, and optical neuromorphic processing5,57.

The MRR used to generate SC microcombs (Fig. 3b) was fabricated on a complementary metal–oxide–semiconductor (CMOS) compatible doped silica glass platform32,33. It has a radius of ~592 μm, a high quality factor of ~1.5 million, and a free spectral range (FSR) of ~0.393 nm (i.e., ~48.9 GHz). The low FSR results in a large number of wavelength channels, which are used as discrete taps in our RF photonic transversal filter system for video image processing. The cross-section of the waveguide was 3 μm × 2 μm, resulting in anomalous dispersion in the C-band (Supplementary Fig. S1). The input and output ports of the MRR were coupled to a fiber array via specially designed mode converters, yielding a low fiber-chip coupling loss of 0.5 dB/facet.

In our experiment, a continuous-wave (CW) pump light was amplified to 30.5 dBm and the wavelength was swept from blue to red. When the detuning between the pump wavelength and the MRR's cold resonance became small enough, the intra-cavity power reached a threshold, and optical parametric oscillation driven by modulation instability (MI) was initiated. Primary combs (Fig. 3d(ii, iii)) were first generated, with the comb spacing determined by the MI gain peak33,34,69. As the detuning changed further, a second jump in the intra-cavity power was observed, where distinctive 'fingerprint' SC comb spectra (Fig. 3d(iv)) appeared, with a comb spacing equal to the MRR's FSR. The SC microcomb, arising from spectral interference between the tightly packed solitons circulating along the ring cavity, exhibits high coherence and low RF intensity noise (Fig. 3c), which are consistent with our simulations (Supplementary Movie S1). It is also worth mentioning that the SC microcomb is highly stable with only open-loop temperature control (Supplementary Fig. S2). In addition, it can be generated through manual adiabatic pump wavelength sweeping, a simple and reliable initiation process that also results in much higher energy conversion efficiency than single-soliton states5.

Microcomb shaping. To achieve the designed tap weights, the generated SC microcomb was shaped in power using liquid crystal on silicon (LCOS) based spectral WaveShapers. We used two-stage comb shaping in the video image processing experiments. The generated SC microcomb was pre-flattened and split by the first WaveShaper (Finisar 16000S), which yields an improved optical signal-to-noise ratio (OSNR) and a reduced loss control range for the second-stage comb shaping. The pre-flattened and split comb was then accurately shaped by the second WaveShaper (Finisar 4000S) according to the designed tap




coefficients for the different video image processing functions. The positive and negative tap coefficients were achieved by separating the wavelength channels into two spatial outputs of the second WaveShaper, which were then detected by a balanced photodetector (Finisar BPDV2150R).

In order to improve the comb shaping accuracy, a feedback control loop was employed for the second WaveShaper. First, we used RF Gaussian pulses as the system input and measured replicas of the input pulses in different wavelength channels. Next, we extracted the peak intensities of the system impulse response and obtained accurate RF-to-RF tap coefficients. Finally, the extracted tap coefficients were subtracted from the ideal tap coefficients to obtain an error signal, which was used to calibrate the loss of the second WaveShaper. After several iterations of the comb shaping loop, an accurate impulse response that compensated for the non-ideal impulse response of the system was obtained, thus significantly improving the accuracy of the RF photonic video image processing. Directly measuring the system impulse response is more accurate than measuring the optical power of the comb lines, given the slight difference between the two ports into the balanced detector. The shaped impulse responses for the different image processing functions are shown in Supplementary Figs. S3‒6.

Derivative (from fractional to high order). The transfer function of a differentiator is given by

    H(ω) ∝ (jω)^N                                        (2)

where j = √(−1), ω represents the angular frequency, and N is the order of differentiation, which in our case can be fractional70, integral18, or even complex. The experimental results for both fractional and integral order differentiation can be seen in Supplementary Movie S2. The fractional order is tunable from 0.05 to 1.1, with a step of 0.05. We also achieved high order differentiation with orders of 2, 2.5, and 3, which to the best of our knowledge is the highest order of derivative that has been achieved for video image processing.

Two-dimensional video image processing. Normally, processing functions such as differentiation, operating on video signals, only result in a one-dimensional process, since they act on individual lines of the video raster image. However, by appropriately pre-processing the video signal it is possible to obtain a fully two-dimensional derivative71. Let f_z(x, y) represent the zth frame of a video signal with O × P pixels, where x = 0, 1, 2, …, O − 1 and y = 0, 1, 2, …, P − 1. Thus, the two-dimensional derivative result is given by:

    D_z(u, v) = Σ_{u=0}^{M−1} h_x(u) e^{−jω_x uT} Σ_{v=0}^{N−1} h_y(v) e^{−jω_y vT}        (3)

where M and N are the numbers of taps, u = 0, 1, 2, …, M − 1, and v = 0, 1, 2, …, N − 1.

The electrical input data was temporally encoded by an arbitrary waveform generator (Keysight M8195A). The raw input matrices were first sliced horizontally and vertically into multiple rows and columns, respectively, which were flattened into vectors and connected head-to-tail. After that, the generated vectors were multicast onto different wavelength channels via a 40-GHz intensity electro-optic modulator (iXblue). For the video with a resolution of 303 × 262 pixels and a frame rate of 30 frames per second, we used a sampling rate of 64 Giga-samples/s to form the input symbols. A dispersive fiber was employed to provide a progressive delay T. Next, the electrical output waveform was resampled and digitized by a high-speed oscilloscope (Keysight DSOZ504A) to generate the final output. The magnitude and phase responses of the RF photonic video image processing system were characterized by a vector network analyser (Agilent MS4644B, 40 GHz bandwidth) working in the S21 mode. Finally, we restored the processed video into the original size of the matrix, took the average of the horizontally and vertically processed video, and formed this into a two-dimensional processed video (Supplementary Movie S2).

Details of the video image dataset. The high definition (HD) image with a resolution of 1080 × 1620 pixels that we processed is a photo taken with a Nikon D5600 in front of the Exhibition Building in the center of Melbourne, Australia, in 2020. The video of 568 × 320 pixels was captured by a drone quadcopter UAV with an optical zoom camera (DJI Mavic Air2 Zoom) during a short trip over the Easter holiday in 2019, when the author and her friend travelled from Melbourne to Adelaide, passing the pink lake and playing guitar; this was a great memory before the pandemic and the continuous lockdowns in Melbourne. The short video of the skateboard with a resolution of 303 × 262 pixels was taken by the author using an iPhone SE in front of the Victoria Library, Melbourne, Australia, in 2020.

Data availability
All data is available upon reasonable request to the authors.

Received: 5 March 2023; Accepted: 16 November 2023;

References
1. Petrou, M. & Bosdogianni, P. Image Processing: The Fundamentals (John Wiley, 1999).
2. Riemensberger, J. et al. Massively parallel coherent laser ranging using a soliton microcomb. Nature 581, 164–170 (2020).
3. Hodge, V. J., Hawkins, R. & Alexander, R. Deep reinforcement learning for drone navigation using sensor data. Neural Comput. Appl. 33, 2015–2033 (2021).
4. Fusciello, M. et al. Artificially cloaked viral nano-vaccine for cancer immunotherapy. Nat. Commun. 10, 5747 (2019).
5. Xu, X. et al. 11 TOPS photonic convolutional accelerator for optical neural networks. Nature 589, 44–51 (2021).
6. Wang, P., Liang, J. & Wang, L. V. Single-shot ultrafast imaging attaining 70 trillion frames per second. Nat. Commun. https://doi.org/10.1038/s41467-020-15745-4 (2020).
7. Gao, L., Liang, J., Li, C. & Wang, L. V. Single-shot compressed ultrafast photography at one hundred billion frames per second. Nature 516, 74–77 (2014).
8. Wakunami, K. et al. Projection-type see-through holographic three-dimensional display. Nat. Commun. 7, 12954 (2016).
9. Tay, S. et al. An updatable holographic three-dimensional display. Nature 451, 694–698 (2008).
10. Gonzalez, R. Digital Image Processing (Pearson, New York, NY, 2018). ISBN 978-0-13-335672-4.
11. Backus, J. Can programming be liberated from the von Neumann style? A functional style and its algebra of programs. Communications of the ACM 21, 613–641 (1978).
12. Zhou, Y., Zheng, H., Kravchenko, I. I. & Valentine, J. Flat optics for image differentiation. Nat. Photonics 14, 316–323 (2020).
13. Zhu, T. et al. Plasmonic computing of spatial differentiation. Nat. Commun. 8, 15391 (2017).
14. Zhu, T. et al. Topological optical differentiator. Nat. Commun. 12, 680 (2021).
15. Antonik, P., Marsal, N., Brunner, D. & Rontani, D. Human action recognition with a large-scale brain-inspired photonic computer. Nat. Mach. Intell. 1, 530–537 (2019).
16. Zhou, T. et al. Large-scale neuromorphic optoelectronic computing with a reconfigurable diffractive processing unit. Nat. Photonics 15, 367–373 (2021).
17. Ashtiani, F., Geers, A. J. & Aflatouni, F. An on-chip photonic deep neural network for image classification. Nature https://doi.org/10.1038/s41586-022-04714-0 (2022).




18. Xu, X. et al. Reconfigurable broadband microwave photonic intensity differentiator based on an integrated optical frequency comb source. APL Photonics 2, 096104 (2017).
19. Wu, J. et al. RF photonics: an optical micro-combs' perspective. IEEE J. Select. Top. Quantum Electron. 24, 1–20, Article 6101020 (2018).
20. Sun, Y. et al. Applications of optical micro-combs. Adv. Opt. Photonics 15, 86–175 (2023).
21. Ji, H. & Liu, C. Q. Motion blur identification from image gradients. In Proc. CVPR (2008).
22. Davis, J. A., McNamara, D. E. & Cottrell, D. M. Analysis of the fractional Hilbert transform. Appl. Opt. 37, 6911–6913 (1998).
23. Tan, M. et al. Highly versatile broadband RF photonic fractional Hilbert transformer based on a Kerr soliton crystal microcomb. J. Light. Technol. 39, 7581–7587 (2021).
24. Capmany, J. et al. Microwave photonic signal processing. J. Light. Technol. 31, 571–586 (2013).
25. Yang, T. et al. Experimental observation of optical differentiation and optical Hilbert transformation using a single SOI microdisk chip. Sci. Rep. 4, 3960 (2014).
26. Tan, M. et al. Microwave and RF photonic fractional Hilbert transformer based on a 50 GHz Kerr micro-comb. J. Light. Technol. 37, 6097–6104 (2019).
27. Tan, M. et al. Integral order photonic RF signal processors based on a soliton crystal micro-comb source. J. Optics 23, 125701 (2021).
28. Zangeneh-Nejad, F., Sounas, D. L., Alù, A. & Fleury, R. Analogue computing with metamaterials. Nat. Rev. Mater. 6, 207–225 (2020).
29. Spencer, D. T. et al. An optical-frequency synthesizer using integrated photonics. Nature 557, 81–85 (2018).
30. Ferrera, M. et al. On-chip CMOS-compatible all-optical integrator. Nat. Commun. 1, 29 (2010).
31. Del'Haye, P. et al. Optical frequency comb generation from a monolithic microresonator. Nature 450, 1214–1217 (2007).
32. Moss, D. J., Morandotti, R., Gaeta, A. L. & Lipson, M. New CMOS-compatible platforms based on silicon nitride and Hydex for nonlinear optics. Nat. Photonics 7, 597 (2013).
33. Corcoran, B. et al. Ultra-dense optical data transmission over standard fiber with a single chip source. Nat. Commun. 11, 2568 (2020).
34. Pasquazi, A. et al. Micro-combs: a novel generation of optical sources. Phys. Rep. 729, 1–81 (2017).
35. Kippenberg, T. J., Holzwarth, R. & Diddams, S. A. Microresonator-based optical frequency combs. Science 332, 555–559 (2011).
36. Brasch, V. et al. Photonic chip–based optical frequency comb using soliton Cherenkov radiation. Science 351, 357–360 (2016).
37. Suh, M.-G., Yang, Q.-F., Yang, K. Y., Yi, X. & Vahala, K. J. Microresonator soliton dual-comb spectroscopy. Science 354, 600–603 (2016).
38. Marin-Palomo, P. et al. Microresonator-based solitons for massively parallel coherent optical communications. Nature 546, 274–279 (2017).
39. Kues, M. et al. On-chip generation of high-dimensional entangled quantum states and their coherent control. Nature 546, 622–626 (2017).
40. Reimer, C. et al. Generation of multiphoton entangled quantum states by means of integrated frequency combs. Science 351, 1176–1180 (2016).
41. Del'Haye, P. et al. Phase-coherent microwave-to-optical link with a self-referenced microcomb. Nat. Photonics 10, 516–520 (2016).
42. Liang, W. et al. High spectral purity Kerr frequency comb radio frequency photonic oscillator. Nat. Commun. 6, 7957 (2015).
43. Xu, X. et al. Advanced RF and microwave functions based on an integrated optical frequency comb source. Opt. Express 26, 2569–2583 (2018).
44. Xu, X. et al. Broadband RF channelizer based on an integrated optical frequency Kerr comb source. J. Light. Technol. 36, 4519–4526 (2018).
45. Trocha, P. et al. Ultrafast optical ranging using microresonator soliton frequency combs. Science 359, 887–891 (2018).
46. Suh, M.-S. & Vahala, K. J. Soliton microcomb range measurement. Science 359, 884–887 (2018).
47. Kippenberg, T. J., Gaeta, A. L., Lipson, M. & Gorodetsky, M. L. Dissipative Kerr solitons in optical microresonators. Science 361, 567 (2018).
48. Sun, Y. et al. Quantifying the accuracy of microcomb-based photonic RF transversal signal processors. IEEE J. Select. Top. Quantum Electron. 29, 1–17 (2023).
49. Xu, X. et al. Microcomb-based photonic RF signal processing. IEEE Photonics Technol. Lett. 31, 1854–1857 (2019).
50. Tan, M. et al. RF and microwave photonic temporal signal processing with Kerr micro-combs. Adv. Phys. X 6, 1838946 (2021).
51. Tan, M. et al. Photonic RF and microwave filters based on 49 GHz and 200 GHz Kerr microcombs. Opt. Commun. 465, 125563 (2020).
52. Lu, Z. et al. Synthesized soliton crystals. Nat. Commun. 12, 1–7 (2021).
53. Xu, X. et al. Advanced adaptive photonic RF filters with 80 taps based on an integrated optical micro-comb source. J. Light. Technol. 37, 1288–1295 (2019).
54. Tan, M. et al. Photonic RF arbitrary waveform generator based on a soliton crystal micro-comb source. J. Light. Technol. 38, 6221–6226 (2020).
55. Xu, X. et al. Photonic RF and microwave integrator based on a transversal filter with soliton crystal microcombs. IEEE Trans. Circuits Syst. II Express Briefs 67, 3582–3586 (2020).
56. Xu, X. et al. Broadband microwave frequency conversion based on an integrated optical micro-comb source. J. Light. Technol. 38, 332–338 (2020).
57. Xu, X. et al. Photonic perceptron based on a Kerr microcomb for high-speed, scalable, optical neural networks. Laser Photonics Rev. 14, 2000070 (2020).
58. Lunardi, L. M. et al. Tunable dispersion compensators based on multi-cavity all-pass etalons for 40 Gb/s systems. J. Light. Technol. 20, 2136 (2002).
59. Metcalf, A. J. et al. Integrated line-by-line optical pulse shaper for high-fidelity and rapidly reconfigurable RF-filtering. Opt. Express 24, 23925–23940 (2016).
60. Khaire, P. A. & Thakur, N. V. A fuzzy set approach for edge detection. Int. J. Image Process. 6, 403–412 (2012).
61. Shi, T., Kong, J., Wang, X., Liu, Z. & Zheng, G. Improved Sobel algorithm for defect detection of rail surfaces with enhanced efficiency and accuracy. J. Cent. South Univ. 23, 2867–2875 (2016).
62. Liu, F. F. et al. Compact optical temporal differentiator based on silicon microring resonator. Opt. Express 16, 15880–15886 (2008).
63. Wang, C. et al. Integrated lithium niobate electro-optic modulators operating at CMOS-compatible voltages. Nature 562, 101 (2018).
64. Sahin, E., Ooi, K. J. A., Png, C. E. & Tan, D. T. H. Large, scalable dispersion engineering using cladding-modulated Bragg gratings on a silicon chip. Appl. Phys. Lett. 110, 161113 (2017).
65. Liang, D., Roelkens, G., Baets, R. & Bowers, J. E. Hybrid integrated platforms for silicon photonics. Materials 3, 1782–1802 (2010).
66. Stern, B., Ji, X., Okawachi, Y., Gaeta, A. L. & Lipson, M. Battery-operated integrated frequency comb generator. Nature 562, 401 (2018).
67. Bao, H. et al. Laser cavity-soliton micro-combs. Nat. Photonics 13, 384–389 (2019).
68. Rowley, M. et al. Self-emergence of robust solitons in a micro-cavity. Nature 608, 303–309 (2022).
69. Herr, T. et al. Temporal solitons in optical microresonators. Nat. Photonics 8, 145–152 (2013).
70. Tan, M. et al. RF and microwave fractional differentiator based on photonics. IEEE Trans. Circuits Syst. II Express Briefs 67, 2767–2771 (2020).
71. Gonzalez, R. C. & Woods, R. E. Digital Image Processing (Addison-Wesley, New York, 1993).

Acknowledgements
This work was supported by the Australian Research Council Discovery Projects Program (No. DP150104327, No. DP190101576). R.M. acknowledges support by the Natural Sciences and Engineering Research Council of Canada (NSERC) through the Strategic, Discovery and Acceleration Grants Schemes, by the MESI PSR-SIIRI Initiative in Quebec, and by the Canada Research Chair Program. Brent E. Little was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDB24030000.

Author contributions
M.T., X.X., and D.J.M. developed the original concept. B.E.L. and S.T.C. designed and fabricated the integrated devices. M.T. performed the experiments. D.J.M., M.T., J.W., A.B., B.C., T.G.N., R.M., A.M., and X.X. contributed to the development of the experiment, to the data analysis, and to the writing of the manuscript. D.J.M., X.X., J.W., and A.M. supervised the research.

Competing interests
The authors declare no competing interests.

Additional information
Supplementary information The online version contains supplementary material available at https://doi.org/10.1038/s44172-023-00135-7.

Correspondence and requests for materials should be addressed to David J. Moss.

Peer review information Communications Engineering thanks Bert Jan Offrein and Bin Shi for their contribution to the peer review of this work. Primary Handling Editors: Chaoran Huang and Rosamund Daw.

Reprints and permission information is available at http://www.nature.com/reprints

Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.




Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

© The Author(s) 2023
