Article

Studies on 1D Electronic Noise Filtering Using an Autoencoder

by Marcelo Bender Perotoni 1,*,† and Lincoln Ferreira Lucio 2,†
1
Centro de Engenharia, Modelagem e Ciências Sociais Aplicadas (CECS), Universidade Federal do ABC (UFABC), Av. dos Estados 5001, Santo Andre 09280-560, SP, Brazil
2
J.Assy, Av. Gen. Valdomiro de Lima, 441-Jabaquara, São Paulo 04344-070, SP, Brazil
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Knowledge 2024, 4(4), 571-581; https://doi.org/10.3390/knowledge4040030
Submission received: 9 July 2024 / Revised: 11 November 2024 / Accepted: 13 November 2024 / Published: 18 November 2024

Abstract

Autoencoders are neural networks with applications in denoising processes. Their use is widely reported in imaging (2D), though 1D series can also benefit from this function. Here, three canonical waveforms are used to train a neural network and reduce the noise in curves whose noise energy exceeds that of the signals. A real-world test subjects the same autoencoder to a set of time series corrupted by noise generated by a Zener diode biased in the avalanche region. Results showed that, observing some guidelines, the autoencoder can indeed denoise 1D waveforms commonly observed in electronics, particularly the square waves found in digital circuits, with an average improvement of 2.8 dB in the signal-to-noise ratio for the square and triangular waveforms.

1. Introduction

Autoencoders are neural networks trained using unsupervised learning. The input data are compressed to a smaller-scale representation, called the latent variable, which contains the most relevant characteristics of the raw data. This compression is carried out by the encoder; the decoder, which follows in sequence, decompresses the data back into their original dimension. The latent variable is a minimal representation of the features contained in the data, and it operates as a kind of bottleneck in the convolutional neural network (CNN) architecture. The autoencoder's main role can be seen as creating a rough copy of the input data that resembles the training data [1]. Layers named MaxPooling1D condense the inputs from the preceding layer into single neurons, thereby reducing the data dimensionality [2]. When the input data contain complex values, the real and imaginary parts need to be treated separately, since CNNs deal primarily with real numbers [3].
The goal of a denoising autoencoder is to create a model that is robust against noise energy. Recurrent neural networks, in which the output feeds back into previous layers, are sometimes used for arbitrarily long sequences [4]. Although autoencoders are usually applied to images, some studies report improvements in the quality of 1D waveforms. Signals from ultrasonic non-destructive testing have been denoised with an autoencoder and compared against singular value decomposition (SVD), principal component analysis (PCA) and wavelet methods, with the CNN providing better performance, particularly regarding robustness [5]. For signals from seismic sensors, activation functions were added to the neurons to introduce non-linearity into the process, improving the overall performance [6]. Another 1D application reported a denoising network followed by feature extraction for the diagnosis of a mechanical gearbox [7]; real, known defect conditions were used in the training phase. In a similar approach, a denoising stage preceding a 1D CNN was used to diagnose fault conditions in rotating machinery [8]. The preliminary denoising step is of paramount importance because real-world acquired signals are contaminated with noise, which would otherwise demand extra effort in the classification network.
When noisy conditions to be used in the training are not easy to achieve using experimental tests, the corrupted data can be created using several different alternatives, such as using a Gaussian noise distribution and back-and-forth Fourier Transforms for the case of EEG [2]. EKG signals, in turn, had high-frequency artifacts caused by electrode movements cleaned using Discrete Cosine and wavelet transform pre-processing, followed by an autoencoder, which helped improve the signal-to-noise ratio (SNR) by approximately 25 dB [9].
Concerning the field of communications, cognitive radio networks are meant to operate in a crowded electromagnetic environment using the most efficient modulation scheme and frequency range currently available; therefore, they need to learn about the available electromagnetic resources continuously. CNNs have been reported to identify eight kinds of communication signals (Costas and Barker codes, binary phase shift, frequency modulation, etc.) for application in cognitive radio networks [10].
A convolutional autoencoder was integrated into an automated optical inspection (AOI) system, which used images from a charged-coupled device (CCD) camera to identify defects in printed circuit boards (PCBs) [11]. Similarly, vibration signals from permanent-magnet synchronous motors were processed by an autoencoder to decrease the noise levels before being classified in terms of types of fault by a support vector machine (SVM) [12]. Taking advantage of the prediction capabilities an autoencoder provides, a supervised learning network was trained to predict the reliability of digital circuits, whose results were then compared to a Monte-Carlo-based analysis [13].
Radars, in particular, operate with very-low-amplitude signals contaminated with noise and clutter, which often carry more energy than the signals of interest. Hyper-parameters of a CNN have been analyzed to provide radar waveform recognition and classification in real time [14]. Another similar study focused on low-probability-of-intercept (LPI) radar waveforms, identified by a dense CNN after the original signals were converted into time–frequency representations using the Choi–Williams distribution (CWD) [15], achieving a success rate of 93.4% at an SNR of −8 dB. The intra-pulse modulation of radar signals was recognized using a convolutional denoising autoencoder and a deep convolutional neural network, with the raw data converted to images using the Cohen time–frequency distribution [16].
Real-world electronic signals are usually contaminated with noise arising from disturbances such as unwanted electromagnetic coupling among circuit components, shot noise in semiconductors and human-made electromagnetic interference (EMI). Band-pass filtering is one alternative for reducing the noise power within the signal bandwidth, at the expense of the trade-off between the filter's time- and frequency-domain behavior: steep roll-offs in frequency imply large group delays that eventually distort the time response. More sophisticated alternatives for 1D denoising are wavelets [17], empirical mode decomposition [18] and curvelets [19]. In line with this, this paper presents an autoencoder used for denoising signals commonly found in electronics, i.e., square, triangular and sine waves. In particular, square waves are widely employed in digital electronics, and preserving their original shape is directly correlated with data throughput [20]; disturbed digital waveforms imply false readings, since the wave amplitude is used to decide between the binary logic levels. An ensemble of clean and noisy waveforms is generated in Matlab 2022/Octave 3.6.4 and later used for the training and testing of a CNN implemented in Python 3.8.2. For comparison with real-world signals, a noise generator circuit was built and added to a signal source outputting the same three wave shapes, and the denoising process was applied using the previously trained neural networks. The network's fast response enables its use as a post-processing option after analog signals are acquired and digitized. The use of a Zener-based noise generator demonstrated that the CNN operates under real-world conditions, rather than only on signals generated in a purely computational fashion.

2. Materials and Methods

2.1. Waveform Generation

The data used in the training were prepared in Matlab. Each time series contains 1000 samples of real numbers, split among the square, triangular and sine types, with 2000 different waveforms per type to be fed into the CNN. The evaluation parameter, the signal-to-noise ratio SNR, in dB, was computed according to
$$ \mathrm{SNR} = 10 \log_{10} \frac{\sum_{i=1}^{2000} s_i^2}{\sum_{i=1}^{2000} sn_i^2} $$
where s and s n are the clean and noisy waveforms, respectively. Each waveform has the following characteristics:
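As a hedged sketch (a NumPy port of the Matlab calculation; the helper name is illustrative, not from the paper), the SNR above can be computed as:

```python
import numpy as np

def snr_db(clean: np.ndarray, noisy: np.ndarray) -> float:
    """SNR in dB: energy of the clean waveform over the energy of the noisy one."""
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noisy ** 2))

# A trace drowned in noise yields a negative SNR, as in the histograms of Figure 2.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
clean = np.sin(2.0 * np.pi * 5.0 * t)
noisy = clean + np.random.default_rng(0).normal(0.0, 1.5, t.size)
```

With this convention, an SNR below 0 dB means the noisy trace carries more energy than the clean signal, which is the regime studied here.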

2.2. Square Wave

Its duty cycle was set to 50%, with the on and off parts each spanning 20 points. The amplitude varied between 0 and 1, and the phase was randomly varied across samples. Noise was numerically added using the rand function in Matlab. A total of 2000 waveforms were stored without noise and 2000 contaminated with noise. The complete waveform generation and file storage took 4.1 s using Matlab 2018a.
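The paper generates these series in Matlab; a NumPy sketch of the same recipe (50% duty cycle, 20-point half-periods, random phase, zero-mean uniform noise mimicking rand; function name and noise amplitude are illustrative assumptions) might read:

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_square(n_samples=1000, half_period=20, noise_amp=1.5):
    """Return a clean/noisy pair: 50% duty-cycle square wave with a random phase."""
    t = np.arange(n_samples)
    phase = rng.integers(0, 2 * half_period)                   # random phase offset, in samples
    clean = (((t + phase) // half_period) % 2).astype(float)   # alternates between 0 and 1
    noisy = clean + noise_amp * (rng.random(n_samples) - 0.5)  # zero-mean uniform noise
    return clean, noisy

clean, noisy = noisy_square()
```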

2.3. Triangular Wave

Similar to the square wave, there were 2000 series, each with 1000 points. The rising and falling times were set to comprise 20 points, and their amplitude varied between 0 and 1. Generation and storage took about 4.6 s.

2.4. Sine Wave

In contrast to the former two cases, the sine waves were generated with both frequency and phase allowed to vary across samples. Generating the data, storing them and plotting the figures took about 3.9 s.
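A corresponding NumPy sketch for the sine case, with frequency and phase drawn at random per sample (the frequency range and noise amplitude are illustrative assumptions, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(7)

def noisy_sine(n_samples=1000, noise_amp=1.5):
    """Sine with randomly drawn frequency and phase, rescaled into [0, 1]."""
    x = np.linspace(0.0, 2.0 * np.pi, n_samples)
    freq = rng.uniform(3.0, 10.0)                    # cycles over the record
    phase = rng.uniform(0.0, 2.0 * np.pi)
    clean = 0.5 * (1.0 + np.sin(freq * x + phase))   # amplitude between 0 and 1
    noisy = clean + noise_amp * (rng.random(n_samples) - 0.5)
    return clean, noisy

clean, noisy = noisy_sine()
```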
Three different samples of the waveforms, clean and noisy, are shown in Figure 1. The sine waves are shown with the 2 π scale on the x-axis.
The respective SNR histograms of the created input data are shown in Figure 2. They show that the noise energy is larger than that of the signal across all 2000 samples.

2.5. CNN

An autoencoder is a neural network that takes an input and reproduces it at the output, trained using unsupervised learning [21]. The deployed network is shown in Figure 3; it is based on Python TensorFlow and Keras, the latter an application programming interface (API) that runs on top of the former [22]. The aforementioned vectors of length 1000 (dimension 1000 × 1) are supplied as input, and sequential convolutional filters are used, followed by MaxPooling and upsampling blocks, which downsample and upsample the incoming data by a factor of 2, respectively. Convolutional filters extract features from the input data, whereas MaxPooling keeps the maximum within each pooling window of the preceding convolutional output, reducing the number of parameters. This first half of the CNN is named the encoder network. In other words, convolutional layers detect patterns in the incoming data, and MaxPooling reduces their dimensionality. The process is then reversed in the so-called decoder network to retrieve the original input length: instead of MaxPooling, the dimensionality is progressively increased using upsampling layers. Overall, 20% of the total number of samples was allocated to testing, and the rest was used for training. Throughout the CNN, the activation function was the Rectified Linear Unit (ReLU).
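The dimensionality bookkeeping of the encoder–decoder (factor-2 MaxPooling1D and UpSampling1D stages around the bottleneck) can be illustrated in plain NumPy; this sketches only the pooling/upsampling mechanics, not the trained convolutional filters:

```python
import numpy as np

def max_pool_1d(x, pool=2):
    """Keep the maximum of each non-overlapping window: length n -> n // pool."""
    return x[: (x.size // pool) * pool].reshape(-1, pool).max(axis=1)

def upsample_1d(x, factor=2):
    """Repeat each element: length n -> n * factor, as Keras UpSampling1D does."""
    return np.repeat(x, factor)

x = np.arange(1000, dtype=float)
encoded = max_pool_1d(max_pool_1d(x))        # 1000 -> 500 -> 250: the bottleneck
decoded = upsample_1d(upsample_1d(encoded))  # 250 -> 500 -> 1000: back to the input length
```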
A total of 50 epochs were used for the training. Table 1 shows the elapsed time for the three cases; no relevant difference was observed among the waveshapes. The evolution of the loss across the epochs is shown in Figure 4. The square wave required more training effort than the other two waveforms.
Samples of the clean and noisy signals are shown in Figure 5, collected after the training was finished. The output waveforms are visually clearer, showing the success of the denoising network in cleaning severely contaminated signals, independent of the waveshape type.
An SNR histogram analysis is applied to the input and output signals (Figure 6). The sine case had the worst performance, given the smaller distance between the noisy and denoised samples. This difference can be ascribed to the larger variation in the seed signals, i.e., both the frequency and phase were made to vary, in contrast to the other two cases.

3. Real-World Noisy Signal

To evaluate the performance of the denoising autoencoder against real-world signals, a test was performed with a circuit subjected to a controlled level of noise. A Zener diode biased in its avalanche region was employed as the noise source [23]. The test was carried out using the circuit displayed in Figure 7.
An oscilloscope (input impedance 100 MΩ, DC-coupled, with a bandwidth of 200 MHz) stored the noisy waveforms in ASCII format, which were afterward read and processed by the CNN trained on the Matlab-generated signals, i.e., there was no additional training with the Zener-contaminated signals. The instrument's default maximum time-series length is 2000 points, so the acquired data vectors were truncated to 1000 points to be consistent with the training inputs. Individual circuit components were manually optimized based on the noise amplitude observed visually on the oscilloscope; for instance, the 6.6 V diode was chosen among others due to its higher noise levels. The signal generator couples to the Zener through a 1 kΩ resistor, and the capacitor blocks DC levels from entering the signal generator and oscilloscope inputs. Unlike the computer-generated square waveforms, the generator output had finite rise and fall times (set to its minimum default value of 17 ns). The generator was set to output square, triangular and sine waves, displayed in Figure 8 together with the cleaned signals after the autoencoder. The generator outputs zero-offset waveforms, and the capacitor cuts any residual DC offset; however, the CNN was trained with positive amplitudes, so the acquired signals had their amplitudes numerically shifted to positive values.
Alongside the time-domain results, the power spectra are shown for both raw and processed waveforms. The CNN operates as a low-pass filter, attenuating frequencies above the range where the signals concentrate most of their energy. For the square wave (Figure 8a), attenuation started at approximately 4 kHz; the triangular case (Figure 8b) was damped after 5 kHz, and the sine wave (Figure 8c) after 7 kHz. The fundamental for all these waves was 500 Hz. It is interesting to stress that this behavior emerged automatically from the neural network after the training step. In the rectangular-signal case, in particular, harmonics above the third are cut, probably because the CNN training was performed with a perfect shape (i.e., null rise and fall times). Among the three signals, the triangular wave behaved worst: its shape was slightly rounded in the time domain, and in its power spectrum the second harmonic was completely attenuated in the denoised version.
It was found that the acquired signals did not need to be compressed into the original normalized range (0 to 1), but they were very sensitive to the baseline, i.e., a DC offset had to be added, otherwise the lower part of the signals was not correctly denoised. This indicates that the training signals should comprise a more diverse ensemble, particularly with randomly chosen DC offsets. Figure 9 shows an example in which the offset is not added to the acquired square wave; this effect also explains the initial and final discontinuities observed in the denoised waveforms in Figure 8. In the present case, the DC level was added during the test period, after the CNN was trained, and its amplitude was found by trial and error.
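The pre-processing applied to the oscilloscope traces, truncation to 1000 points plus a positive DC shift, can be sketched as below. The paper found its DC level by trial and error; this snippet instead anchors the trace minimum at a small positive baseline, purely to illustrate the truncate-and-offset step (the filename and baseline value are assumptions):

```python
import numpy as np

def prepare_trace(raw, length=1000, baseline=0.1):
    """Truncate an acquired trace and shift it into the positive range used in training."""
    trace = np.asarray(raw, dtype=float)[:length]
    return trace - trace.min() + baseline   # one simple way to apply the DC offset

# raw = np.loadtxt("scope_capture.txt")           # ASCII export from the scope (hypothetical name)
raw = np.sin(np.linspace(0.0, 4.0 * np.pi, 2000))  # stand-in for a 2000-point capture
ready = prepare_trace(raw)
```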

4. Discussion

The CNN comprised a total of seven layers, with both input and output of length 1000, operating on real positive numbers representing a periodic time-varying waveform whose spectral content was preserved according to the Nyquist criterion. All elements of the initial ensemble of curves had negative SNR, meaning the noise energy was larger than that of the signal. Nevertheless, the results showed that the autoencoder indeed reduced the noise levels in typical waveforms found in electronic circuits: an initial set of curves with average SNRs of (−2.8, −2.5 and −0.8) dB was improved to (−0.2, 0 and 0) dB for the square, triangular and sine waves, respectively.
To address real-world signals, the denoising autoencoder was tested using a Zener-diode noise generator, which showed that the training phase must cover a broader set of parameters to generalize: phase, frequency and DC offsets have to span the range expected in the noisy signals to be cleaned by the CNN. The network's operation was found to be similar to low-pass filtering; for instance, the 500 Hz rectangular wave had attenuation taking place around 4000 Hz. Since the rectangular wave has odd harmonics, the CNN converged to attenuate signals up to its fourth harmonic, which corresponds to 1/8 of the fundamental amplitude. Waveform amplitudes were not found to have a relevant impact on the denoising, so the training could be performed with the usual normalized values. That, however, was a nuisance in the laboratory test, since the signals swung towards negative amplitudes, requiring post-processing in which a DC offset was added to make the time series positive-valued.
Although more time-consuming and less convenient, a more robust option would have been to perform the training with real-world instruments, so that the training signals better resembled the future noisy waveforms; it is not easy to predict which real-world parameter might impact the network response.
Table 2 shows a comparison of similar 1D denoising studies, in different application fields.

5. Conclusions

A denoising CNN was tested against numerically synthesized waveforms contaminated by random noise. The designed CNN was able to provide cleaner signals, with SNR measurements on both inputs and outputs proving its effectiveness: an initial set of curves with average SNRs of (−2.8, −2.5 and −0.8) dB was improved to (−0.2, 0 and 0) dB for the square, triangular and sine waves, respectively. Moving beyond shared databases and purely numerical time series, a circuit with a Zener-based noise generator was used to inject noise into a signal generator's output, which was then processed by the previously trained CNN. The results showed good performance, with the laboratory tests pointing out that concrete applications might require a broader set of parameters to be contemplated during the training phase.

Author Contributions

Conceptualization, M.B.P.; methodology, M.B.P. and L.F.L.; software, M.B.P.; validation, M.B.P. and L.F.L.; formal analysis, M.B.P.; investigation, M.B.P. and L.F.L.; resources, M.B.P.; data curation, M.B.P.; writing—original draft preparation, M.B.P.; writing—review and editing, M.B.P. and L.F.L.; visualization, M.B.P.; supervision, M.B.P. and L.F.L.; project administration, M.B.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The main Python code for this study can be found on the GitHub page https://github.com/mperoconsult/CNN/. Further details concerning the electronic circuit, Matlab files, etc., can be provided by email: [email protected].

Conflicts of Interest

The author Lincoln Ferreira Lucio is employed by J.Assy Co. The remaining author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning, 1st ed.; MIT Press: Cambridge, MA, USA, 2016; pp. 499–500. [Google Scholar]
  2. Leite, N.M.N.; Pereira, E.T.; Gurjão, E.C.; Veloso, L.R. Deep Convolutional Autoencoder for EEG Noise Filtering. In Proceedings of the IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Madrid, Spain, 3–6 December 2018. [Google Scholar]
  3. Fuchs, J.; Dubey, A.; Lübke, M.; Weigel, R.; Lurz, F.E.F. The Automotive Radar Interference Mitigation using a Convolutional Autoencoder. In Proceedings of the IEEE International Radar Conference (RADAR), Washington, DC, USA, 28–30 April 2020. [Google Scholar]
  4. Yasenko, L.; Klyatchenko, Y.; Tarasenko-Klyatchenko, O. Image noise reduction by denoising autoencoder. In Proceedings of the 11th IEEE International Conference on Dependable Systems, Services and Technologies, DESSERT’2020, Kyiv, Ukraine, 14–18 May 2020. [Google Scholar]
  5. Gao, F.; Li, B.; Chen, L.; Wei, X.; Shang, Z.; He, C. Ultrasonic signal denoising based on autoencoder. Rev. Sci. Instrum. 2020, 91, 045104. [Google Scholar] [CrossRef] [PubMed]
  6. Saad, O.M.; Chen, Y. Deep denoising autoencoder for seismic random noise attenuation. Geophysics 2020, 85, 367–376. [Google Scholar] [CrossRef]
  7. Yu, J.; Zhou, X. One-Dimensional Residual Convolutional Autoencoder Based Feature Learning for Gearbox Fault Diagnosis. IEEE Trans. Ind. Inform. 2020, 16, 6347–6358. [Google Scholar] [CrossRef]
  8. Liu, X.; Zhou, Q.; Zhao, J.; Shen, H.; Xiong, X. Fault Diagnosis of Rotating Machinery under Noisy Environment Conditions Based on a 1-D Convolutional Autoencoder and 1-D Convolutional Neural Network. Sensors 2019, 19, 972. [Google Scholar] [CrossRef] [PubMed]
  9. Sheu, M.H.; Jhang, Y.S.; Chang, Y.C.; Wang, S.T.; Chang, C.Y.; Lai, S.C. Lightweight Denoising Autoencoder Design for Noise Removal in Electrocardiography. IEEE Access 2022, 10, 98104–98116. [Google Scholar] [CrossRef]
  10. Zhang, M.; Diao, M.; Guo, L. Convolutional Neural Networks for Automatic Cognitive Radio Waveform Recognition. IEEE Access 2017, 5, 11074–11082. [Google Scholar] [CrossRef]
  11. Kim, J.; Ko, J.; Choi, H.; Kim, H. Printed Circuit Board Defect Detection Using Deep Learning via a Skip-Connected Convolutional Autoencoder. Sensors 2021, 21, 4968. [Google Scholar] [CrossRef] [PubMed]
  12. Xu, X.; Feng, J.; Zhan, L.; Li, Z.; Qian, F.; Yan, Y. Fault Diagnosis of Permanent Magnet Synchronous Motor Based on Stacked Denoising Autoencoder. Entropy 2021, 23, 339. [Google Scholar] [CrossRef] [PubMed]
  13. Xiao, J.; Ma, W.; Lou, J.; Jiang, J.; Huang, Y.; Shi, Z.; Shen, Q.; Yang, X. Circuit reliability prediction based on deep autoencoder network. Neurocomputing 2019, 370, 140–154. [Google Scholar] [CrossRef]
  14. Kong, S.H.; Kim, M.; Zhan, L.; Hoang, Z.M. Automatic LPI Radar Waveform Recognition using CNN. IEEE Access 2018, 6, 4207–4219. [Google Scholar] [CrossRef]
  15. Si, W.; Wan, C.; Zhang, C. Towards an accurate radar waveform recognition algorithm based on dense CNN. Multimed. Tools Appl. 2021, 80, 1179–1192. [Google Scholar] [CrossRef]
  16. Qu, Z.; Wang, W.; Hou, C. Radar Signal Intra-Pulse Modulation Recognition Based on Convolutional Denoising Autoencoder and Deep Convolutional Neural Network. IEEE Access 2019, 7, 112339–112347. [Google Scholar] [CrossRef]
  17. Taswell, C. The what, how, and why of wavelet shrinkage denoising. Comput. Sci. Eng. 2000, 2, 12–19. [Google Scholar] [CrossRef]
  18. Gomez, J.L.; Velis, D.R. A simple method inspired by empirical mode decomposition for denoising seismic data. Geophysics 2016, 81, 403–413. [Google Scholar] [CrossRef]
  19. Moore, R.; Ezekiel, S.; Blasch, E. Denoising one-dimensional signals with curvelets and contourlets. In Proceedings of the NAECON 2014—IEEE National Aerospace and Electronics Conference, Dayton, OH, USA, 24–27 June 2014. [Google Scholar]
  20. Green, L. Understanding the importance of signal integrity. IEEE Circuits Devices Mag. 1999, 15, 7–10. [Google Scholar] [CrossRef]
  21. Sewak, M.; Karim, M.R.; Pujari, P. Practical Convolutional Networks, 1st ed.; Packt: Birmingham, UK, 2018; pp. 1–211. [Google Scholar]
  22. Github Support Files. Available online: https://github.com/mperoconsult/CNN/ (accessed on 11 November 2024).
  23. Abdipour, A.; Moradi, G.; Saboktakin, S. Design and implementation of a noise generator. In Proceedings of the IEEE International RF and Microwave Conference, Kuala Lumpur, Malaysia, 2–4 December 2008. [Google Scholar]
  24. Zhang, C.; Yu, J.; Wang, S. Fault detection and recognition of multivariate process based on feature learning of one-dimensional convolutional neural network and stacked denoised autoencoder. Int. J. Prod. Res. 2021, 59, 2426–2449. [Google Scholar] [CrossRef]
Figure 1. Signal waveforms, clean and noisy (a) square, (b) triangular and (c) sine.
Figure 2. Histogram of the SNR relative to the waveforms of (a) square, (b) triangular and (c) sine samples.
Figure 3. Block diagram of the used CNN.
Figure 4. Loss evolution along the 50 epochs for the cases of square, triangular and sine signals.
Figure 5. Noisy (red) and denoised (green) waveforms from the (a) square, (b) triangular and (c) sine samples.
Figure 6. SNR histograms for the (a) square, (b) triangular and (c) sine samples, before and after denoising.
Figure 7. Zener diode used as a noise generator.
Figure 8. Time domain and power spectra of the real-world signals, original and denoised, (a) square, (b) triangular and (c) sine samples.
Figure 9. Example of error in the denoising process.
Table 1. Elapsed time.
Waveform      Time [s]
Rectangular   82.7
Triangular    82.2
Sine          81.5
Table 2. Comparison with other studies.
Ref.        Vector Size   Application            Improvement             Real-World Test
[2]         1 × 7680      EEG                    6 dB                    Yes
[7]         1 × 8192      Gearboxes              96% defective recogn.   Yes
[9]         1 × 1024      EKG                    25 dB                   No
[16]        1 × 1024      Radar                  95% recogn. result      No
[24]        1 × 1024      Manufacturing          95% recogn. result      Yes
This study  1 × 1000      Electronic waveforms   3 dB                    Yes
