1 Introduction

The tabu learning neuron (TLN) was first proposed by Beyer and Ogier [1], based on the Hopfield neural network (HNN), for solving non-convex optimization problems. Tabu learning applies the concept of tabu search [2, 3] to neural networks by continuously increasing the energy surface in a neighborhood of the current state, thus penalizing states already visited. References [4, 5] studied the dynamic behaviors of one- and two-neuron-based tabu learning neural networks to find out what their state trajectories look like. Indeed, unlike other methods, tabu learning does not force the state of the network to converge to an optimal or near-optimal solution; rather, the network conducts a search of the solution space. Their results showed periodic and chaotic solutions as well as the existence of Hopf bifurcation. Reference [6] demonstrated the existence of a pitchfork bifurcation, a flip bifurcation, and a Neimark–Sacker bifurcation in a discrete one-neuron-based TLN model when the control parameter exceeded a critical value. Chen and Li [7] proposed a circuit design for the implementation of a single tabu learning neuron. They also proposed circuits for an autonomous two-neuron-based TLN and exhibited periodic and chaotic solutions, without carrying out in-depth studies of their dynamics. Results similar to [6] were obtained by the investigations of Ref. [8] on a tabu learning model with two delayed neurons. Recently, Ref. [9] revealed complex activities such as chaotic/periodic spiking/bursting firing patterns and coexisting bi-stable firing patterns in an autonomous one-neuron TLN, following numerical investigations with a compound activation function. These results were confirmed by a circuit implementation in which this difficult-to-implement activation function was approximated.
Subsequently, [10] showed the bi-stability in a one-neuron-based tabu learning neuron model that they were able to validate experimentally through an FPGA-based implementation.

Few works in the literature have conducted an in-depth study of the dynamics of the tabu learning neuron. This is reason enough for us to carry out detailed investigations of this model to better understand its behavior. In this article, we conduct a study focused on the analysis of the dynamics of a non-autonomous two-neuron-based tabu learning neural network. The interest of this study is that it involves the compound activation function used by [9] in the study of a single-neuron TLN model.

This choice makes it possible to obtain interesting phenomena, because the activation function plays an important role in the appearance of complex phenomena in neural networks. The results obtained by several authors [11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32] on various models of Hopfield neural networks with two, three, or more neurons testify to the importance of the activation function in the appearance of complex behaviors in neural networks in general. It will therefore be interesting to see, in this work, the contribution of this compound activation function to the non-autonomous two-neuron-based tabu learning model submitted to our analysis.

Nowadays, new research is being developed to propose modern encryption algorithms for confidential data, including medical images [33]. Chaotic systems are a major tool in this prominent research domain due to their ergodicity, deterministic dynamics, unpredictable behaviors, nonlinear transformations, and sensitive dependence on initial conditions [34,35,36,37]. For instance, Gao and collaborators [34] designed an encryption algorithm based on the Chen hyperchaotic system; a simple diffusion–confusion encryption scheme was developed in their work. Zhou and co-workers [37] used a 1-D chaotic map to establish the encryption key for both color and gray images; analysis of the proposed scheme showed a high security level. As strong cryptographic techniques are developed, cryptanalysis is also growing. Another idea in chaos encryption is to use the discrete output of a neural network to increase the security of the process. In this line, Xing and collaborators [36] designed an encryption scheme using both the sequences of the Lorenz attractor and the discrete output of the perceptron model; experimental results show a high-security algorithm. Lakshmi and co-workers [35] designed an encryption algorithm based on an HNN; the technique uses a simple diffusion–confusion algorithm, and security against some existing attacks is achieved.

A variety of encryption methods can be found in the literature, classified either as spatial-domain or frequency-domain encryption algorithms [34,35,36,37]. The first class directly considers the pixels of the original image without any transformation. The second applies a mathematical transformation to the original image to compute coefficients based on the image pixels. Transform-domain algorithms tend to be more efficient and robust than spatial-domain ones. The above-mentioned techniques combining chaos and neural networks in cryptography rely on spatial-domain algorithms. In this paper, we use a modified Julia set and the discrete sequences of the proposed TLN to transform the pixels of the plain image; the sequences of a simple TLN are then applied for encryption. The main characteristics of the model addressed in this work are summarized in Table 1. The main contributions of this work include:

  (a) Introduce a novel two-neuron-based tabu learning model;

  (b) Use nonlinear analysis tools to prove that the newly introduced model exhibits more complex behaviors than the previously presented one (see Table 1);

  (c) Provide PSPICE simulations of the electronic circuit of the proposed model to further support the presented results;

  (d) Design a robust and simple encryption/decryption algorithm using the chaotic sequences of the proposed tabu learning two-neuron model;

  (e) Show, through performance analysis, that the proposed encryption scheme is secure against some well-known attacks;

  (f) Provide an idea of a real-time implementation of the algorithm, namely an application of the proposed cryptosystem in the Internet of Medical Things (IoMT).

Table 1 Summary of some recent work addressed on tabu neural networks

The rest of this article is organized as follows: In Sect. 2, the mathematical model is described, the equilibrium points are illustrated graphically, and the determination of the characteristic equation from the Jacobian matrix is presented. In Sect. 3, the dynamic behaviors associated with a control parameter in the presence of the stimulus are revealed numerically through bifurcation diagrams, dynamic evolution in the parameter space, and phase portraits. In Sect. 4, analog analyses are performed in PSPICE to verify their compliance with the numerical simulations. The IoMT-based cryptosystem is designed and experimented in Sect. 5. A comparative analysis in terms of encryption/decryption performance is presented in Sect. 6. Finally, in Sect. 7 the conclusions of this work are summarized.

2 Preliminary model description

2.1 Mathematical model involving composite activation function

The form of two-neuron-based HNN [38, 39] is given by Eq. (1):

$$\dot{x}_{i} = - a_{i} x_{i} + \sum\limits_{j = 1}^{2} {b_{ij} f(x_{j} )} + I_{i}$$
(1)

where \(a_{i}\) are positive constants, \(x_{i} = \left[ {x_{1} ,x_{2} } \right]^{T}\) is the state vector of the neurons, \(f(x_{j} ) = \left[ {f(x_{1} ),f(x_{2} )} \right]^{T}\) is the vector of activation functions, \(b_{ij}\) is a \(2 \times 2\) synaptic weight matrix describing the strength of the connections between the two neurons of the network, and \(I_{i} = \left[ {I_{1} ,I_{2} } \right]^{T}\) is the stimulus, which can be constant or sinusoidal. The TLN model [1, 10, 40] for two neurons is described by Eq. (2):

$$\left\{ \begin{aligned} \dot{x}_{i} &= - a_{i} x_{i} + \sum\limits_{j = 1}^{2} {b_{ij} f(x_{j} )} + y_{i} + I_{i} \hfill \\ \dot{y}_{i} &= - c_{i} y_{i} - d_{i} f(x_{i} ) \hfill \\ \end{aligned} \right.$$
(2)

With \(y_{i} = \left[ {y_{1} ,y_{2} } \right]^{T}\) being the vector of tabu learning neuron variables, \(c_{i}\) and \(d_{i}\) are positive constants that represent the memory decay rate and the learning rate, respectively. The activation function appearing in Eqs. (1) and (2) is described in Eq. (3). This activation function was used by [9] in the study of the one-neuron tabu learning model. The graphical representation of the compound activation function given in Eq. (3) is shown in Fig. 1a and b for different values of the constants \(\eta\) and \(\sigma\), respectively. In Fig. 1a, for \(\sigma^{2} = 0.2\), the compound activation function curve becomes flat when the value of the constant \(\eta\) decreases, which corresponds to a fast response of the neurons. On the other hand, in Fig. 1b, when \(\sigma^{2}\) increases, the compound activation function curve dilates, which implies an increase in the response range of the neurons. In other words, by acting on the constants \(\eta\) and \(\sigma^{2}\), it is possible to modify the speed and the limits of the response of the neurons, respectively. This makes the compound activation function (3) much more flexible than a hyperbolic-tangent-type activation function, for which only the response speed of the neurons can be adjusted, through its activation gradient, as indicated in [11, 19].

$$f(x_{j} ) = \eta x_{j} \exp \left[ { - \left( {\frac{{\eta x_{j} }}{\sigma }} \right)^{2} } \right]$$
(3)

with \(\eta > 0\) and \(\sigma > 0\), as represented in Fig. 1. From these diagrams, it can be seen that the crossover points of the neuron activation function strongly depend on these two important parameters, namely \(\eta\) and \(\sigma\).
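As a quick numerical check, the compound activation function of Eq. (3) can be evaluated directly. This is a minimal sketch with the parameter values used later in Sect. 2; the grid span is an illustrative choice:

```python
import numpy as np

def f(x, eta=10.0, sigma2=0.2):
    """Compound activation function of Eq. (3): f(x) = eta*x*exp(-(eta*x)^2/sigma^2)."""
    return eta * x * np.exp(-(eta * x) ** 2 / sigma2)

# The function is odd and bounded: its extrema lie at eta*x = +/- sigma/sqrt(2),
# where |f| = sigma/sqrt(2e). Hence sigma^2 sets the response range (Fig. 1b)
# while eta sets how fast the neuron responds (Fig. 1a).
x = np.linspace(-1.0, 1.0, 20001)
peak = f(x).max()          # close to sqrt(0.2 / (2*e)) for sigma^2 = 0.2
```

The boundedness of \(f\) is what keeps the state variables of the models below bounded despite the locally large slope \(\eta\) at the origin.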

Fig. 1
figure 1

Different nonlinearities of the compound activation function, due to the constants \(\eta\) and \(\sigma^{2}\), in (a) and (b), respectively

The mathematical model of the TLTN corresponding to the configuration of the synaptic weight matrix \(2 \times 2\) is given by Eq. (4):

$$\left\{ {\begin{array}{*{20}l} {\dot{x}_{1} = - a_{1} x_{1} + b_{11} f(x_{1} ) + b_{12} f(x_{2} ) + y_{1} + I_{1} } \hfill \\ {\dot{x}_{2} = - a_{2} x_{2} + b_{21} f(x_{1} ) + b_{22} f(x_{2} ) + y_{2} + I_{2} } \hfill \\ {\dot{y}_{1} = - c_{1} y_{1} - d_{1} f(x_{1} )} \hfill \\ {\dot{y}_{2} = - c_{2} y_{2} - d_{2} f(x_{2} )} \hfill \\ \end{array} } \right.$$
(4)
$${\text{where}}\;\left[ {\begin{array}{*{20}c} {b_{11} } & {b_{12} } \\ {b_{21} } & {b_{22} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} { - 0.1} & {2.8} \\ { - 0.1} & 4 \\ \end{array} } \right]$$
(5)

with \(a_{1} = 0.1\), \(a_{2} = 0.1\), \(c_{1} = 0.1\), \(c_{2} = 0.1\), \(I_{1} = I_{m} \sin (2\pi F_{1} \tau )\), \(I_{2} = 0\), \(I_{m} = 0.2\), \(F_{1} = 0.2\), \(\eta = 10\) and

$$\sigma^{2} = 0.2.$$
(6)
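For later reference, the full vector field of Eq. (4), with the synaptic weights of Eq. (5) and the parameter settings above, can be written compactly. A minimal sketch in which the learning rates \(d_{1}\) (tunable) and \(d_{2} = 0.3\) are passed as arguments:

```python
import numpy as np

ETA, SIGMA2 = 10.0, 0.2
A1 = A2 = C1 = C2 = 0.1
B = np.array([[-0.1, 2.8],
              [-0.1, 4.0]])      # synaptic weight matrix of Eq. (5)
IM, F1 = 0.2, 0.2                # sinusoidal stimulus applied to neuron 1 only

def f(x):
    """Compound activation function of Eq. (3)."""
    return ETA * x * np.exp(-(ETA * x) ** 2 / SIGMA2)

def tltn_rhs(tau, state, d1, d2=0.3):
    """Right-hand side of the non-autonomous TLTN model, Eq. (4)."""
    x1, x2, y1, y2 = state
    I1 = IM * np.sin(2 * np.pi * F1 * tau)
    dx1 = -A1 * x1 + B[0, 0] * f(x1) + B[0, 1] * f(x2) + y1 + I1
    dx2 = -A2 * x2 + B[1, 0] * f(x1) + B[1, 1] * f(x2) + y2
    dy1 = -C1 * y1 - d1 * f(x1)
    dy2 = -C2 * y2 - d2 * f(x2)
    return np.array([dx1, dx2, dy1, dy2])
```

At \(\tau = 0\) the origin is a rest point of the unforced part of the field, which is consistent with the trivial equilibrium discussed in Sect. 2.2.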

2.2 Equilibrium points related to parameters

The equilibrium points of system (4) are obtained by setting its left-hand side equal to zero at \(\tau = 0\); this leads to the set of equations given in (7):

$$\left\{ {\begin{array}{*{20}l} {0 = - a_{1} x_{1} + b_{11} f(x_{1} ) + b_{12} f(x_{2} ) + y_{1} } \hfill \\ {0 = - a_{2} x_{2} + b_{21} f(x_{1} ) + b_{22} f(x_{2} ) + y_{2} } \hfill \\ {0 = - c_{1} y_{1} - d_{1} f(x_{1} )} \hfill \\ {0 = - c_{2} y_{2} - d_{2} f(x_{2} )} \hfill \\ \end{array} } \right.$$
(7)

After algebraic development and rearrangement of system (7), Eq. (8) is obtained.

$$\left\{ \begin{gathered} x_{2e}^{n} = \frac{{a_{1} }}{{a_{2} b_{12} }}\left( {b_{22} - \frac{{d_{2} }}{{c_{2} }}} \right)x_{1e}^{n} - \frac{{f(x_{1e}^{n} )}}{{a_{2} b_{12} }}\left[ {\left( {b_{11} - \frac{{d_{1} }}{{c_{1} }}} \right)\left( {b_{22} - \frac{{d_{2} }}{{c_{2} }}} \right) - b_{12} b_{21} } \right] \hfill \\ y_{1e}^{n} = \frac{{ - d_{1} }}{{c_{1} }}f(x_{1e}^{n} ) \hfill \\ y_{2e}^{n} = \frac{{ - d_{2} }}{{c_{2} }}f(x_{2e}^{n} ) \hfill \\ \end{gathered} \right.$$
(8)

The equilibrium points \(P_{n} = \left( {x_{1e}^{n} ,x_{2e}^{n} ,y_{1e}^{n} ,y_{2e}^{n} } \right)\) (trivial and non-trivial) are obtained from expression (9), which is solved graphically using a MATLAB function [12,13,14, 28]. Note that \(n \in {\rm N}\) is the numbering index of the equilibrium points \(x_{1e}^{n}\), which correspond to the intersections of the solution curve represented by Eq. (9) with the abscissa axis:

$$S(x_{1e} ) = - a_{1} x_{1e} + b_{11} f(x_{1e} ) + b_{12} f(x_{2e} ) - \frac{{d_{1} }}{{c_{1} }}f(x_{1e} )$$
(9)
$${\text{where}}\quad x_{2e} = \frac{{a_{1} }}{{a_{2} b_{12} }}\left( {b_{22} - \frac{{d_{2} }}{{c_{2} }}} \right)x_{1e} - \frac{{f(x_{1e} )}}{{a_{2} b_{12} }}\left[ {\left( {b_{11} - \frac{{d_{1} }}{{c_{1} }}} \right)\left( {b_{22} - \frac{{d_{2} }}{{c_{2} }}} \right) - b_{12} b_{21} } \right]$$
(10)

When the learning rate \(d_{2} = 0.3\), the curve of Fig. 2a shows three and seven intersections with the X-axis for \(d_{1} < 0.0283\) and \(d_{1} > 0.0283\), respectively. In Fig. 2b, with the learning rate fixed at \(d_{1} = 0.0283\), we get seven intersections with the X-axis for \(d_{2} < 0.3\) and three intersections for \(d_{2} > 0.3\). The two curves (Fig. 2a, b) have five intersections with the X-axis for the critical values \(\left( {d_{1} ,d_{2} } \right) = \left( {0.0283,0.3} \right)\). Let us specify that the number of intersections of the solution curve with the X-axis coincides with the number of equilibrium points. Thus, three, five, and seven intersections of the solution curve with the X-axis denote the presence of three, five, and seven equilibrium points, respectively, in model (4). From this observation, it is easy to choose which parameter should be modified to obtain a significant effect on the dynamics of model (4).
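The graphical procedure above can be reproduced numerically: bracket the sign changes of \(S(x_{1e})\) from Eqs. (9)–(10) on a fine grid and refine each bracket with a root finder. A sketch, in which the grid span and density are illustrative choices:

```python
import numpy as np
from scipy.optimize import brentq

A1, A2, C1, C2 = 0.1, 0.1, 0.1, 0.1
B11, B12, B21, B22 = -0.1, 2.8, -0.1, 4.0
ETA, SIGMA2 = 10.0, 0.2

def f(x):
    return ETA * x * np.exp(-(ETA * x) ** 2 / SIGMA2)

def S(x1, d1, d2):
    """Solution curve of Eq. (9), with x2e substituted from Eq. (10);
    its zeros are the x1-coordinates of the equilibrium points."""
    k = B22 - d2 / C2
    x2 = (A1 / (A2 * B12)) * k * x1 \
        - (f(x1) / (A2 * B12)) * ((B11 - d1 / C1) * k - B12 * B21)
    return -A1 * x1 + B11 * f(x1) + B12 * f(x2) - (d1 / C1) * f(x1)

def equilibria_x1(d1, d2=0.3, span=2.0, n=40001):
    """Bracket the sign changes of S on a fine grid, then refine with brentq."""
    xs = np.linspace(-span, span, n)
    vals = S(xs, d1, d2)
    roots = []
    for i in range(n - 1):
        if vals[i] == 0.0:
            roots.append(xs[i])
        elif vals[i] * vals[i + 1] < 0:
            roots.append(brentq(S, xs[i], xs[i + 1], args=(d1, d2)))
    return np.array(roots)
```

Since \(f\) is odd, \(S\) is odd as well, so the roots always come in symmetric pairs around the trivial equilibrium at the origin; this is why the intersection counts reported above are odd numbers.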

Fig. 2
figure 2

Function curve described by Eq. (9) and its intersection points with respect to the parameters \(\left( {d_{1} ,d_{2} } \right)\); equilibrium points varying with the respective parameters \(d_{1}\) and \(d_{2}\) in (a) and (b), respectively

The Jacobian matrix derived from Eq. (4) for equilibrium points \(P_{n = 0,1,2,3,4,5,6}\) is given in (11):

$$J = \left[ {\begin{array}{*{20}c} { - a_{1} + b_{11} g_{1} } & {b_{12} g_{2} } & 1 & 0 \\ {b_{21} g_{1} } & { - a_{2} + b_{22} g_{2} } & 0 & 1 \\ { - d_{1} g_{1} } & 0 & { - c_{1} } & 0 \\ 0 & { - d_{2} g_{2} } & 0 & { - c_{2} } \\ \end{array} } \right]$$
(11)

The characteristic equation associated with (11), specified in Eq. (12), is obtained using MATLAB:

$$det(J - \lambda I_{4} ) = \alpha_{4} \lambda^{4} + \alpha_{3} \lambda^{3} + \alpha_{2} \lambda^{2} + \alpha_{1} \lambda^{1} + \alpha_{0} = 0$$
(12)
$$\left\{ \begin{aligned} \alpha_{4} &= 1 \hfill \\ \alpha_{3} &= a_{1} + a_{2} + c_{1} + c_{2} - b_{11} g_{1} - b_{22} g_{2} \hfill \\ \alpha_{2} &= a_{1} a_{2} + a_{1} c_{1} + a_{1} c_{2} + a_{2} c_{1} + a_{2} c_{2} + c_{1} c_{2} + d_{1} g_{1} + d_{2} g_{2} - a_{2} b_{11} g_{1} - \, a_{1} b_{22} g_{2} - b_{11} c_{1} g_{1} \hfill \\ &\quad -b_{11} c_{2} g_{1} - b_{22} c_{1} g_{2} - b_{22} c_{2} g_{2} + b_{11} b_{22} g_{1} g_{2} - b_{12} b_{21} g_{1} g_{2} \hfill \\ \alpha_{1} &= a_{1} a_{2} c_{1} + a_{1} a_{2} c_{2} + a_{1} c_{1} c_{2} + a_{2} c_{1} c_{2} + a_{2} d_{1} g_{1} + a_{1} d_{2} g_{2} + c_{2} d_{1} g_{1} + c_{1} d_{2} g_{2} - a_{2} b_{11} c_{1} g_{1} \hfill \\ &\quad -a_{2} b_{11} c_{2} g_{1} - a_{1} b_{22} c_{1} g_{2} - a_{1} b_{22} c_{2} g_{2} - b_{11} c_{1} c_{2} g_{1} - b_{22} c_{1} c_{2} g_{2} - b_{11} d_{2} g_{1} g_{2} + b_{22} d_{1} g_{1} g_{2} \hfill \\ &\quad+ b_{11} b_{22} c_{1} g_{1} g_{2} - b_{12} b_{21} c_{1} g_{1} g_{2} + b_{11} b_{22} c_{2} g_{1} g_{2} - b_{12} b_{21} c_{2} g_{1} g_{2} \hfill \\ \alpha_{0} &= a_{1} a_{2} c_{1} c_{2} + a_{1} c_{1} d_{2} g_{2} + a_{2} c_{2} d_{1} g_{1} + d_{1} d_{2} g_{1} g_{2} - a_{2} b_{11} c_{1} c_{2} g_{1} - a_{1} b_{22} c_{1} c_{2} g_{2} - b_{11} c_{1} d_{2} g_{1} g_{2} \hfill \\ & \quad- b_{22} c_{2} d_{1} g_{1} g_{2} + b_{11} b_{22} c_{1} c_{2} g_{1} g_{2} - b_{12} b_{21} c_{1} c_{2} g_{1} g_{2} \hfill \\ \end{aligned} \right.$$
(13)
$${\text{where}}\quad g_{j} = f^{\prime}(\overline{x}_{j} ) = \left[ {\eta - 2\eta \left( {\frac{{\eta \overline{x}_{j} }}{\sigma }} \right)^{2} } \right]\exp \left[ { - \left( {\frac{{\eta \overline{x}_{j} }}{\sigma }} \right)^{2} } \right]\quad j = \left[ {1,2} \right]$$
(14)

The coefficients of the characteristic polynomial (12) are all nonzero. In the rest of the paper, unless otherwise indicated, the parameters \(\left( {d_{1} ,d_{2} } \right)\) are set to \(d_{2} = 0.3\) with \(d_{1}\) tunable. For some discrete values of the control parameter \(d_{1}\), the equilibrium points and eigenvalues are determined, and their stabilities are listed in Table 2. Based on the results displayed in that table, the first condition for self-excited dynamics in the TLN model is verified, since all the equilibria are unstable.
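Rather than expanding the coefficients (13) symbolically, the stability reported in Table 2 can be checked by evaluating the Jacobian (11) at an equilibrium and inspecting the eigenvalue real parts. A sketch, with \(f'\) obtained by direct differentiation of Eq. (3):

```python
import numpy as np

A1, A2, C1, C2 = 0.1, 0.1, 0.1, 0.1
B11, B12, B21, B22 = -0.1, 2.8, -0.1, 4.0
ETA, SIGMA2 = 10.0, 0.2

def fprime(x):
    """Derivative of the compound activation function of Eq. (3)."""
    u2 = (ETA * x) ** 2 / SIGMA2
    return ETA * (1.0 - 2.0 * u2) * np.exp(-u2)

def jacobian(x1, x2, d1, d2=0.3):
    """Jacobian matrix (11) evaluated at an equilibrium with coordinates (x1, x2)."""
    g1, g2 = fprime(x1), fprime(x2)
    return np.array([
        [-A1 + B11 * g1, B12 * g2,       1.0, 0.0],
        [B21 * g1,       -A2 + B22 * g2, 0.0, 1.0],
        [-d1 * g1,       0.0,            -C1, 0.0],
        [0.0,            -d2 * g2,       0.0, -C2],
    ])

def is_unstable(x1, x2, d1, d2=0.3):
    """True when at least one eigenvalue has a positive real part."""
    return bool(np.linalg.eigvals(jacobian(x1, x2, d1, d2)).real.max() > 0)

# At the trivial equilibrium, g1 = g2 = eta = 10 and the trace of J is
# 38.6 > 0, so this equilibrium is necessarily unstable, as in Table 2.
```

Since the trace equals the sum of the eigenvalue real parts, a positive trace alone already guarantees instability at the origin for these parameter values.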

Table 2 Equilibrium points states, their eigenvalues and stabilities for some values of the learning rate \(d_{1}\)

3 Numerical results

The numerical approach is based on the standard analysis tools for nonlinear dynamical systems, namely bifurcation diagrams and Lyapunov exponents, phase portraits, basins of attraction, etc. These tools are implemented in the Turbo Pascal environment for the numerical calculations, and the results are plotted in the MATLAB environment. The numerical integrations use the 4th-order Runge–Kutta algorithm, for the precision and speed of convergence it offers, over the integration time interval \(0 \le \tau \le 5500\) with an integration time step \(\Delta \tau = 0.005\). The bifurcation diagrams are obtained by taking the local maxima of a neuron state variable while sweeping the control parameter \(d_{1}\), recorded over the integration time interval \(5000 \le \tau \le 5500\) with time step \(\Delta \tau = 0.005\). The Lyapunov exponent diagrams are computed using the Wolf method over the time interval \(0 \le \tau \le 5000\) with time step \(\Delta \tau = 0.005\) [41].
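The bifurcation-data collection described above can be sketched as follows: integrate with RK4, discard the transient, and record the local maxima of \(x_{1}\). A minimal sketch; the default times match the paper's settings, but shorter windows can be passed for quick experiments:

```python
import numpy as np

ETA, SIGMA2 = 10.0, 0.2
A1 = A2 = C1 = C2 = 0.1
B11, B12, B21, B22 = -0.1, 2.8, -0.1, 4.0
IM, F1 = 0.2, 0.2

def f(x):
    return ETA * x * np.exp(-(ETA * x) ** 2 / SIGMA2)

def rhs(tau, s, d1, d2=0.3):
    x1, x2, y1, y2 = s
    return np.array([
        -A1 * x1 + B11 * f(x1) + B12 * f(x2) + y1 + IM * np.sin(2 * np.pi * F1 * tau),
        -A2 * x2 + B21 * f(x1) + B22 * f(x2) + y2,
        -C1 * y1 - d1 * f(x1),
        -C2 * y2 - d2 * f(x2),
    ])

def rk4_step(tau, s, h, d1):
    # Classical 4th-order Runge-Kutta step, as used for the bifurcation diagrams.
    k1 = rhs(tau, s, d1)
    k2 = rhs(tau + h / 2, s + h / 2 * k1, d1)
    k3 = rhs(tau + h / 2, s + h / 2 * k2, d1)
    k4 = rhs(tau + h, s + h * k3, d1)
    return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def local_maxima(d1, s0=(0.0, 0.0, 0.1, 0.0), h=0.005, t_end=5500.0, t_keep=5000.0):
    """Integrate and return the local maxima of x1 recorded after the transient."""
    s = np.array(s0, dtype=float)
    prev2 = prev1 = s[0]
    peaks = []
    for i in range(int(t_end / h)):
        s = rk4_step(i * h, s, h, d1)
        if i * h > t_keep and prev2 < prev1 > s[0]:
            peaks.append(prev1)
        prev2, prev1 = prev1, s[0]
    return np.array(peaks)
```

Sweeping `d1` and scatter-plotting `local_maxima(d1)` against `d1` reproduces the construction of Fig. 3a; the Wolf-method Lyapunov spectrum of Fig. 3b requires an additional tangent-space integration not sketched here.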

3.1 Bifurcation diagrams and phase portraits

The superimposition of the bifurcation diagrams in Fig. 3a shows the effect induced by the variation of the learning rate \(d_{1}\) on the dynamics of model (4) in the range \(0 \le d_{1} \le 1\). This diagram shows zones of periodic behavior, followed by zones of chaotic behavior and periodic windows separated from the chaotic zones by complex bifurcations. Note also that the road to chaos begins through a period-doubling scenario as the learning rate \(d_{1}\) increases. During this excursion, a crisis phenomenon appears, followed by a new period doubling toward chaos. This crisis is due to the fact that the system suddenly switches from chaotic behavior to regular oscillations beyond a given value of the learning rate \(d_{1}\). Several crisis configurations are specified in [42, 43].

Fig. 3
figure 3

Bifurcation diagrams showing the local maxima of \(x_{1}\) in terms of the learning rate \(d_{1}\) (control parameter) in (a) and the corresponding Lyapunov spectrum in (b)

According to [42, 43], a crisis occurs because an unstable fixed point or an unstable limit cycle “collides” with the chaotic attractor as some system control parameter is changed. The Lyapunov spectrum diagrams (Fig. 3b), computed with the Wolf algorithm [41], confirm the observed bifurcation scenarios. The initial conditions used for these diagrams are summarized in Table 2. For some discrete values of the learning rate \(d_{1}\), phase portraits have been represented, including limit cycles and chaotic attractors in different planes. For example, in Fig. 4, the projections of the symmetric chaotic attractors in the \(\left( {x_{1} ,y_{1} } \right)\), \(\left( {x_{1} ,x_{2} } \right)\) and \(\left( {y_{1} ,x_{2} } \right)\) planes for \(d_{1} = 0.28\) exhibit the complexity of the system. An enlargement of the bifurcation diagram of Fig. 3a is given in Fig. 5 for the control parameter \(d_{1}\) in the range \(0.154 \le d_{1} \le 0.304\). In this figure, we can see the coexistence of three bifurcation diagrams, the presence of parallel branches, and crisis phenomena. The representation of the bifurcation diagram of Fig. 5 in the range \(0.1881 \le d_{1} \le 0.1969\), shown in Fig. 6, clearly shows that these parallel branches are four in number. The enlargement of Fig. 5 in the interval \(0.279 \le d_{1} \le 0.281\), presented in Fig. 7, shows the coexistence of other diagrams in this region. These diagrams argue for the existence of the phenomenon of multistability in these ranges. The methods and initial conditions for obtaining these diagrams are given in Table 3.

Fig. 4
figure 4

Projection of symmetrical chaotic attractors on different planes. These attractors are obtained for \(d_{1} = 0.25\) by changing the sign of the initial conditions \(\left( {0,0, \pm 0.1,0} \right)\)

Fig. 5
figure 5

Enlargement of bifurcation diagram of Fig. 3a in the range \(0.154 \le d_{1} \le 0.304\)

Fig. 6
figure 6

Enlargement of bifurcation diagram of Fig. 5 in the range \(0.1881 \le d_{1} \le 0.1969\)

Fig. 7
figure 7

Enlargement of bifurcation diagram of Fig. 5 in the range \(0.279 \le d_{1} \le 0.281\)

Table 3 Methods used to obtain the coexisting bifurcation diagrams of Fig. 3 and its enlargements (Figs. 5, 6, 7)

3.2 Coexistence of attractors and basins of attraction

The notion of multistability, or coexistence of multiple attractors, is a very important phenomenon in chaotic dynamical systems, among other reasons for the flexibility it offers to the system and its related applications in information engineering [12]. Multistability has caught the attention of many researchers in recent years [44] because it encompasses a diversity of stable states in a system.

The study of the coexistence of attractors in HNNs allows an in-depth understanding of its dynamical effect on aspects of brain information processing and cognitive function [12]. We have exhibited, through the TLTN presented in (4), the coexistence of four (periodic–chaotic), four (chaotic–chaotic), and six (periodic–chaotic) attractors for the control parameter values \(d_{1} = 0.196\), \(d_{1} = 0.2089\), and \(d_{1} = 0.28\), presented in Figs. 8a, b, 9a, b, and 10a–c, respectively. The different types of coexisting attractors and the initial conditions that produce them are specified in these figures. The basins of attraction corresponding to the coexistence of four (periodic–chaotic), four (chaotic–chaotic), and six (periodic–chaotic) symmetric attractors are shown in Figs. 8c, d, 9c, d and 10d, respectively. In these attraction basins, each color corresponds to the set of initial conditions that leads to one of the coexisting attractors. These basins of attraction are those of self-excited attractors because they intersect the open neighborhoods of the equilibrium points; in contrast, hidden attractors have basins of attraction that do not cross the open neighborhood of any equilibrium point. Note that the coexistence of four and six attractors is unprecedented; these numbers have nowhere been reported in the tabu learning neuron literature, where the maximum so far was two [9, 10]. This demonstrates that the considered compound activation function makes model (4) more complex and interesting than the existing models using other activation functions.
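A cross-section of a basin of attraction like those of Figs. 8, 9, 10 can be approximated by sweeping the initial condition \(y_{1}(0)\) (the other states starting at zero, as in the figure captions) and attaching to each trajectory a coarse signature used as a color label. This is a sketch only; the signature (post-transient mean and amplitude of \(x_{1}\)) and the shortened integration times are illustrative assumptions, not the paper's procedure:

```python
import numpy as np

ETA, SIGMA2 = 10.0, 0.2
A1 = A2 = C1 = C2 = 0.1
B11, B12, B21, B22 = -0.1, 2.8, -0.1, 4.0
IM, F1 = 0.2, 0.2

def f(x):
    return ETA * x * np.exp(-(ETA * x) ** 2 / SIGMA2)

def rhs(tau, s, d1, d2=0.3):
    x1, x2, y1, y2 = s
    return np.array([
        -A1 * x1 + B11 * f(x1) + B12 * f(x2) + y1 + IM * np.sin(2 * np.pi * F1 * tau),
        -A2 * x2 + B21 * f(x1) + B22 * f(x2) + y2,
        -C1 * y1 - d1 * f(x1),
        -C2 * y2 - d2 * f(x2),
    ])

def signature(y10, d1, h=0.01, t_end=400.0, t_keep=300.0):
    """Integrate from (0, 0, y10, 0) with RK4 and return a coarse label of
    where the trajectory settles: post-transient mean and amplitude of x1."""
    s = np.array([0.0, 0.0, y10, 0.0])
    xs = []
    for i in range(int(t_end / h)):
        tau = i * h
        k1 = rhs(tau, s, d1)
        k2 = rhs(tau + h / 2, s + h / 2 * k1, d1)
        k3 = rhs(tau + h / 2, s + h / 2 * k2, d1)
        k4 = rhs(tau + h, s + h * k3, d1)
        s = s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        if tau > t_keep:
            xs.append(s[0])
    xs = np.array(xs)
    return float(xs.mean()), float(np.ptp(xs))

# Example: one of the initial conditions of Fig. 10, at d1 = 0.28.
off, amp = signature(+0.11, d1=0.28, t_end=200.0, t_keep=150.0)
```

Scanning `y10` over a grid and coloring each point by its rounded signature gives a one-dimensional slice of the basin pictures; a two-dimensional cross section additionally sweeps a second initial state.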

Fig. 8
figure 8

Representation of the phase portraits of the coexistence of four different attractors in the \(\left( {x_{1} ,x_{2} } \right)\) plane, showing period-4 limit cycles (lower and upper) and chaotic attractors (lower and upper), and the corresponding cross section of the basin of attraction with respective colors in (c, d). These attractors are obtained for \(d_{1} = 0.196\) and initial conditions \(\left( {x_{1} (0),x_{2} (0),y_{1} (0),y_{2} (0)} \right) =\) \(\left( {0,0, \pm 0.28,0} \right)\) and \(\left( {0,0, \pm 1.44,0} \right)\), respectively

Fig. 9
figure 9

Representation of the phase portraits of the coexistence of four different symmetric chaotic attractors (a, b) and the corresponding cross section of the basin of attraction with respective colors in (c, d). These attractors are obtained for \(d_{1} = 0.2089\) and initial conditions \(\left( {x_{1} (0),x_{2} (0),y_{1} (0),y_{2} (0)} \right) =\) \(\left( {0,0, + 0.2 / - 0.12,0} \right)\) and \(\left( {0,0, \pm 0.16,0} \right)\), respectively

Fig. 10
figure 10

Representation of the phase portraits of the coexistence of six different symmetric attractors in the \(\left( {x_{1} ,y_{1} } \right)\) plane, showing period-1 limit cycles (lower and upper) in (a), period-6 limit cycles (lower and upper) and chaotic attractors (lower and upper) in (b) and (c), and the corresponding cross section of the basin of attraction with respective colors in (d). These attractors are obtained for \(d_{1} = 0.28\) and initial conditions \(\left( {x_{1} (0),x_{2} (0),y_{1} (0),y_{2} (0)} \right) =\) \(\left( {0,0, + 0.04 / - 0.072,0} \right)\), \(\left( {0,0, \pm 0.11,0} \right)\) and \(\left( {0,0, \pm 0.216,0} \right)\), respectively

3.3 Dynamics of tabu learning neuron in the parameter space

Figures 11 and 12 show the effect of the variation of two parameters, particularly the learning rates, on the dynamics of model (4), exposed in the parameter space in the \(\left( {d_{1} ,d_{2} } \right)\) and \(\left( {\eta ,d_{1} } \right)\) planes, respectively.

Fig. 11
figure 11

Influence of the variation of the two parameters \(\left( {d_{1} ,d_{2} } \right)\) on the dynamical behavior

Fig. 12
figure 12

Influence of the variation of the two parameters \(\left( {d_{1} ,\eta } \right)\) on the dynamical behavior

We can note from Figs. 11a and 12a that the dynamics of model (4) has two main regions, one periodic (in blue and red) and the other chaotic (in green and magenta), where the red and blue colors distinguish periodic attractors of opposite sign; likewise, the green and magenta colors distinguish chaotic attractors of opposite sign. These representations confirm that model (4) is symmetric and show its dominant behavior for each combination of these parameters. The color coding in Figs. 11b and 12b corresponds to the values of the maximum Lyapunov exponent \((\lambda_{\max } )\), indicated on the graduations of the color bars of the corresponding figures. In these figures, we can appreciate the expansion and reduction of the chaotic regions via transitions from zones of periodic behavior (where \(\lambda_{\max } < 0\)) to zones of chaotic behavior (where \(\lambda_{\max } > 0\)) and vice versa as the parameters vary.

4 Circuit design

In this section, the TLTN model (4) is studied in the form of a circuit, i.e., an analog simulator, in PSPICE. The analog simulator equivalent to the mathematical model (4) is built essentially from electronic components. This rigorous and inexpensive strategy is employed because it has been used for experimental studies of several models of dynamical systems [12, 13] and to emulate other complex systems [22, 45,46,47].

4.1 Analog circuit synthesis for the TLTN

The nonlinear activation function involved in model (4) is an exponential and polynomial mixed nonlinearity, which is rather complicated and difficult to implement in analog circuit form. It is for this reason that [9] proposed an approximation using hyperbolic tangent functions, which are simpler to realize. The equivalent analog circuit of this approximation, used to realize the nonlinear activation function of the TLTN model (4), is given in Fig. 13; it is equivalent to three hyperbolic tangent modules with different offsets. More details on this approximation of the hyperbolic tangent circuit design can be found in [9, 11, 48], and its current–voltage characteristic is given in Fig. 14. From this figure, it is obvious that the nonlinear characteristic is bounded below and above. The analog circuit of the TLTN model (4) is constructed by referring to [7, 9]; it consists of four integrators involving three inverters in feedback loops and two approximation-function blocks, as shown in Fig. 15.

$$\left\{ \begin{gathered} C_{1} \frac{{{\text{d}}X_{1} }}{{{\text{d}}t}} = - \frac{{X_{1} }}{{R_{1} }} - \frac{1}{{R_{2} }}f(X_{1} ) + \frac{1}{{R_{3} }}f(X_{2} ) + \frac{{Y_{1} }}{R} + \frac{{A_{m} }}{R}\sin (2\pi f_{1} t) \hfill \\ C_{2} \frac{{{\text{d}}X_{2} }}{{{\text{d}}t}} = - \frac{{X_{2} }}{{R_{4} }} - \frac{1}{{R_{5} }}f(X_{1} ) + \frac{1}{{R_{6} }}f(X_{2} ) + \frac{{Y_{2} }}{R} \hfill \\ C_{3} \frac{{{\text{d}}Y_{1} }}{{{\text{d}}t}} = - \frac{{Y_{1} }}{{R_{7} }} - \frac{1}{{R_{{d_{1} }} }}f(X_{1} ) \hfill \\ C_{4} \frac{{{\text{d}}Y_{2} }}{{{\text{d}}t}} = - \frac{{Y_{2} }}{{R_{8} }} - \frac{1}{{R_{9} }}f(X_{2} ) \hfill \\ \end{gathered} \right.$$
(15)

where \(X_{1}\), \(X_{2}\), \(Y_{1}\) and \(Y_{2}\) denote the voltages across the capacitors \(C_{1}\), \(C_{2}\), \(C_{3}\) and \(C_{4}\), respectively, with \(C_{1} = C_{2} =\) \(C_{3} = C_{4} = C = 10\;{\text{nF}}\) and \(R = 10\;{\text{k}}\Omega\).
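The flavor of the tanh-based approximation can be illustrated numerically: project the compound activation function onto a basis of three offset hyperbolic tangent modules and solve for the weights by least squares. The slope `G` and offset `MU` below are illustrative assumptions, not the component values of the circuit in [9]:

```python
import numpy as np

ETA, SIGMA2 = 10.0, 0.2

def f(x):
    """Compound activation function of Eq. (3)."""
    return ETA * x * np.exp(-(ETA * x) ** 2 / SIGMA2)

# Basis of three offset hyperbolic tangent modules. G and MU are
# illustrative assumptions; the actual circuit coefficients are in [9].
G, MU = 30.0, 0.08

def tanh_basis(x):
    return np.column_stack([np.tanh(G * x),
                            np.tanh(G * (x - MU)),
                            np.tanh(G * (x + MU))])

x = np.linspace(-0.5, 0.5, 2001)
w, *_ = np.linalg.lstsq(tanh_basis(x), f(x), rcond=None)  # least-squares weights
approx = tanh_basis(x) @ w
max_err = float(np.max(np.abs(approx - f(x))))
```

Because each tanh module is odd, the fitted combination preserves the odd symmetry of \(f\), which is the property the circuit of Fig. 13 must reproduce for the attractor symmetry to survive in PSPICE.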

Fig. 13
figure 13

Synthesized circuit of the approximate activation function using hyperbolic tangent modules

The set of Eq. (15) is equivalent to (4) considering the following equalities:

$$\begin{gathered} t = \tau RC,\;X_{i} ,Y_{i} = x_{i} ,y_{i} \;(i = 1,2),\;R_{1} = \frac{R}{{\left| {a_{1} } \right|}} = 100\;{\text{k}}\Omega ,\;R_{2} = \frac{R}{{\left| {b_{11} } \right|}} = 100\;{\text{k}}\Omega , \hfill \\ R_{3} = \frac{R}{{\left| {b_{12} } \right|}} = 3.571\;{\text{k}}\Omega ,\;R_{4} = \frac{R}{{\left| {a_{2} } \right|}} = 100\;{\text{k}}\Omega ,\;R_{5} = \frac{R}{{\left| {b_{21} } \right|}} = 100\;{\text{k}}\Omega ,\;R_{6} = \frac{R}{{\left| {b_{22} } \right|}} = 2.5\;{\text{k}}\Omega , \hfill \\ R_{7} = \frac{R}{{\left| {c_{1} } \right|}} = 100\;{\text{k}}\Omega ,\;R_{8} = \frac{R}{{\left| {c_{2} } \right|}} = 100\;{\text{k}}\Omega ,\;R_{9} = \frac{R}{{\left| {d_{2} } \right|}} = 33.33\;{\text{k}}\Omega , \hfill \\ A_{{\text{m}}} = 0.2\;{\text{V}},\;{\text{and}}\;f_{1} = \frac{{F_{1} }}{RC} = 2\;{\text{kHz}} \hfill \\ \end{gathered}$$
(16)
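Every resistor in Eq. (16) follows from the single scaling rule \(R_{p} = R/|p|\), and the time scale from \(t = \tau RC\). A quick arithmetic check, assuming \(R = 10\,\text{k}\Omega\) and \(C = 10\,\text{nF}\) as stated above (note that this rule gives \(R/|d_{2}| \approx 33.33\,\text{k}\Omega\) and \(f_{1} = F_{1}/RC = 2\,\text{kHz}\)):

```python
# Each model parameter p maps to a resistor R_p = R / |p| (values in kOhm).
R_kohm = 10.0
params = {"a1": 0.1, "b11": 0.1, "b12": 2.8, "a2": 0.1, "b21": 0.1,
          "b22": 4.0, "c1": 0.1, "c2": 0.1, "d2": 0.3}
resistors_kohm = {name: R_kohm / abs(p) for name, p in params.items()}
# e.g. R3 = 10/2.8 ~ 3.571 kOhm, R6 = 10/4 = 2.5 kOhm, R/|d2| = 10/0.3 ~ 33.33 kOhm

# Time scaling: t = tau*R*C with C = 10 nF, so the dimensionless stimulus
# frequency F1 = 0.2 maps to f1 = F1 / (R*C).
RC = 10e3 * 10e-9            # seconds
f1_hz = 0.2 / RC
```

The tunable resistor \(R_{d_{1}}\) plays the role of the learning rate \(d_{1}\) in the same way, \(R_{d_{1}} = R/d_{1}\), which is why sweeping it in Sect. 4.2 mirrors sweeping \(d_{1}\) numerically.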

4.2 Validation by PSPICE analog simulation

The implementations in PSPICE of the circuit design of Fig. 13 and of the analog circuit of Fig. 15 led to the results of Fig. 14 and Figs. 16, 17, respectively. The curve in Fig. 14 is the trace of the approximation of the compound activation function whose circuit design is given in Fig. 13; it was obtained with \(V_{i} = 5\sin (100\pi t)\). Comparing it with that of Fig. 1a, obtained for \(\left( {\eta ,\sigma^{2} } \right) = \left( {10,0.2} \right)\), we see that they coincide, which validates the circuit design of Fig. 13 proposed by [9]. The symmetrical chaotic attractors shown in Fig. 16 in various planes are obtained from the circuit of Fig. 15 for \(R_{d} = 38\,{\text{k}}\Omega\) and the initial conditions \(V(X_{1} (0),X_{2} (0),Y_{1} (0),Y_{2} (0)) = (0,0, \pm 0.9,0)\). These attractors are similar to those obtained during the numerical investigations of Fig. 4. The coexistence of four symmetrical attractors in Fig. 17 was obtained using the analog circuit of Fig. 15 for \(R_{d} = 51\,{\text{k}}\Omega\) and the initial conditions \(V(X_{1} (0),X_{2} (0),Y_{1} (0),Y_{2} (0)) = (0,0, \pm 0.1,0)\) and \((0,0, \pm 0.9,0)\). These coexisting attractors are close to those of Fig. 8. All these results were obtained with the simulation settings Final step: 500 ms; No-Print Delay: 480 ms; Step Ceiling: 2 µs.

Fig. 14

PSPICE capture of the approximation of the compound activation function given in Fig. 13, validating the numerical results of Fig. 1a

Fig. 15

Analog circuit of non-autonomous tabu learning neuron with two neurons

Fig. 16

Representation in PSPICE of the symmetric chaotic attractors for \(R_{d} = 38\;{\text{k}}\Omega\) in different planes. These attractors were obtained for the initial conditions \(V\left( {X_{1} (0),X_{2} (0),Y_{1} (0),Y_{2} (0)} \right) = \left( {0,0, \pm 0.9,0} \right)\)

Fig. 17

Representation in PSPICE of the coexistence of four symmetric attractors for \(R_{d} = 51\;{\text{k}}\Omega\): a period-4 limit cycles (lower and upper) and b chaotic attractors (lower and upper). These attractors were obtained for the initial conditions \(V\left( {X_{1} (0),X_{2} (0),Y_{1} (0),Y_{2} (0)} \right) = \left( {0,0, \pm 0.1,0} \right)\) and \(\left( {0,0, \pm 0.9,0} \right)\), respectively

5 Application to secure biomedical images in IoMT

5.1 Modified Julia set

The Julia set is a fractal defined over the complex numbers: it consists of the inputs \(z\) whose orbits under the quadratic function \(f(z) = z^{2} + c\) remain bounded [49]. Here c is a complex constant. The function \(f(z)\) is initialized and iterated. Taking the real part of the complex number \(z\) as the x pixel index and the imaginary part as the y pixel index, the Julia set can be visualized for different values of the complex constant c.
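The iteration above can be sketched with a standard escape-time computation. This is a minimal illustration, not the paper's implementation: the viewing window \([-1.5, 1.5]^2\), the iteration count, and the escape bound of 2 are illustrative assumptions.

```python
import numpy as np

def julia_set(c, size=256, max_iter=100, bound=2.0):
    """Escape-time membership mask of the filled Julia set of f(z) = z**2 + c.

    A pixel belongs to the set when its orbit stays bounded after
    max_iter iterations of f.
    """
    # Map pixel indices to the complex plane: real part -> x, imaginary part -> y
    x = np.linspace(-1.5, 1.5, size)
    y = np.linspace(-1.5, 1.5, size)
    re, im = np.meshgrid(x, y)
    z = re + 1j * im
    member = np.ones((size, size), dtype=bool)
    for _ in range(max_iter):
        z = np.where(member, z * z + c, z)  # iterate f only where still bounded
        member &= np.abs(z) <= bound        # mark escaped orbits as non-members
    return member

mask = julia_set(-0.745429 + 0j)  # the constant c used for Fig. 18
```

Rendering `mask` as an image reproduces the familiar two-tone picture of the set for this choice of c.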

5.2 Encryption process

The result of this visualization for the traditional complex Julia set is illustrated in Fig. 18a for c = −0.745429. If \((x,y)\) is the location of a pixel in Fig. 18a, the intensity values of the image representing the modified complex Julia set are computed as \(I(x,y) = [a*(x/y)\,]\,\,\bmod 256\) wherever the pixel value in Fig. 18a is 0 (the red pixels). The result of this modified visualization is illustrated in Fig. 18b for c = −0.745429 and a = 25. The internet of things (IoT) is a prominent research field in which objects, sensors, and the internet are jointly interconnected to form a unique system [50]. The main objective of this interconnection is to solve common life problems. IoT can be used to provide a set of healthcare services, and this is referred to as the internet of healthcare things (IoHT) or the internet of medical things (IoMT) [51]. One of the most important services of the IoHT is the communication of patients' medical data, including medical images. As medical images contain very confidential data, we provide in this paper lightweight encryption/decryption techniques useful for securing medical images in the IoMT. Figure 19 presents the structure of the secure IoHT system, where a medical sensor is used for medical image acquisition. The acquired image is encrypted using the structure of Fig. 20 and transmitted in the IoMT. The receiver uses the decryption scheme to recover the original image for analysis. The encryption algorithm can be detailed in the following five steps; the decryption procedure is the reverse of the encryption scheme:

Fig. 18

Visualization of a the traditional Julia set for c = −0.745429 and b the modified Julia set for c = −0.745429 and a = 25

Fig. 19

Structure of the secure IoMT system

Fig. 20

Structure of the cryptosystem

Step 1: Read the original image I at the output of the acquisition system; this is a 256 × 256 × 3 (color) or 256 × 256 × 1 (grayscale) image. Apply row and column rotation on each red, green and blue component to obtain the R, G and B matrices.

Step 2: Using the output matrix J of the Julia set, substitute the values of each component R, G and B to obtain SR, SG and SB.

Step 3: Select the initial values (x10, x20, y10, y20, \(a_{1} ,\) \(a_{2} ,\) \(c_{1} ,\)\(c_{2} ,\)\(I_{2} ,\)\(I_{m} ,\)\(F_{1} ,\)\(\eta ,\)\(\sigma ,\)\(b_{11} ,\) \(b_{12} ,\)\(b_{21} ,\)\(b_{22}\)) as indicated in Eq. 6 and iterate the presented TLTN model 256 × 256 × 3 times for color images or 256 × 256 × 1 times for gray-scale images to obtain four real sequences {X1}, {X2}, {Y1} and {Y2}. Then convert the real sequence X1 to integers as \(X = {\text{fix}}\left( {X1_{i} \times {10}^{{{16}}} \bmod {256}} \right)\).

Step 4: Perform a bit-XOR-based diffusion on the substituted matrices SR, SG and SB using the sequence X of the TLTN model.

Step 5: Fuse the components SR, SG and SB to obtain the final encrypted image.
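The per-channel pipeline above can be sketched as follows. This is an illustrative Python sketch only: the rotation amount (`shift=3`) and the modular-addition substitution rule are assumptions standing in for the paper's exact confusion and substitution parameters, and the TLTN sequence is taken as a given input array rather than re-derived from Eq. 6.

```python
import numpy as np

def tltn_keystream(x1, shape):
    # Step 3: quantize the real chaotic sequence X1 to bytes,
    # X = fix(X1 * 10^16 mod 256), as in the text
    return np.fix((np.asarray(x1) * 1e16) % 256).astype(np.uint8).reshape(shape)

def encrypt_channel(ch, julia, keystream, shift=3):
    """One color channel through Steps 1, 2 and 4 of the scheme (sketch)."""
    ch = np.roll(np.roll(ch, shift, axis=0), shift, axis=1)     # Step 1: row/column rotation
    ch = (ch.astype(np.int16) + julia.astype(np.int16)) % 256   # Step 2: substitution by J (assumed additive)
    return ch.astype(np.uint8) ^ keystream                      # Step 4: bit-XOR diffusion

def decrypt_channel(enc, julia, keystream, shift=3):
    """Inverse of encrypt_channel: undo the XOR, the substitution, then the rotation."""
    ch = enc ^ keystream
    ch = (ch.astype(np.int16) - julia.astype(np.int16)) % 256
    return np.roll(np.roll(ch.astype(np.uint8), -shift, axis=1), -shift, axis=0)
```

Applying `encrypt_channel` to each of R, G and B and stacking the outputs (Step 5) yields the cipher image; running `decrypt_channel` with the same Julia matrix and keystream recovers the original channel exactly, which is the "reverse of the encryption scheme" mentioned above.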

5.3 Performance study of the proposed cryptosystem

To validate any cryptosystem, a performance analysis is required [52]. In this work, we use well-known metrics such as the histogram, chi-square, correlation coefficients, NPCR, UACI, local and global entropy, occlusion and key sensitivity to assess the proposed cryptosystem. Experiments are performed on a workstation equipped with an Intel Core™ i7-3630QM and 16 GB RAM, running MATLAB R2016b. Our dataset consists of three medical images, each of size 256 × 256 × 3 (see Fig. 21), obtained from various medical image sources including the COVID-CT database [53], the most important publicly available database of COVID-19 Computed Tomography (CT) images. Setting the initial seed for iterating the presented TLTN model as in Eq. 6, Fig. 21 shows that the input medical images are no longer recognizable after encryption and can be sent over the transmission system securely. However, we still need to assess the security of the transmitted (encrypted) images, given that a visual test is not a sufficient condition for security [54].

Fig. 21

Visual test of the dataset images. The plain medical images are no longer recognizable after encryption

5.3.1 Histogram and Chi-square tests

Any good encryption scheme must pass the histogram and chi-square tests to be able to resist the statistical intrusion of a third party [55, 56]. The histogram of a plain image is usually distributed randomly, whereas the histogram of the corresponding cipher is required to be uniform. Figure 22 presents the histograms of the plain and cipher medical images. The histograms of the plain images are randomly distributed while the histograms of the encrypted data are flat. This flatness can be checked using the chi-square test. Table 4 provides the resulting chi-square values at the 0.05 significance level. The flatness of the histogram is validated if the chi-square value of the test sample is less than the critical value 293.2478 (chi-square distribution with 255 degrees of freedom), indicating a p-value higher than 0.05. Regarding Table 4, the histograms of the various test samples are validated.
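The chi-square statistic against a uniform 256-bin histogram can be sketched as below; the helper name and the synthetic test images are illustrative, not part of the paper's code.

```python
import numpy as np

def chi_square(img):
    """Chi-square statistic of an 8-bit image histogram against the
    uniform distribution (256 bins, hence 255 degrees of freedom)."""
    hist = np.bincount(img.ravel(), minlength=256)
    expected = img.size / 256.0                     # uniform expectation per bin
    return float(((hist - expected) ** 2 / expected).sum())

# A histogram is deemed flat at the 0.05 level when the statistic falls
# below the critical value 293.2478 quoted in the text.
rng = np.random.default_rng(0)
cipher_like = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
constant = np.zeros((256, 256), dtype=np.uint8)     # worst case: one bin holds everything
```

A noise-like image yields a statistic near 255 (the number of degrees of freedom), while a constant image yields an enormous value, matching the intuition that cipher histograms should be flat and plain histograms are not.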

Fig. 22

Histograms for each plain data set and its corresponding cipher

Table 4 Chi-square values for each encrypted test data

5.3.2 Correlation coefficients

If the correlation coefficients of the cipher data are very close to zero, the proposed cryptosystem can resist all forms of statistical attacks [57, 58]. The correlation coefficient is computed from a random selection of \(10^{4}\) pairs of neighboring pixels in each plane of the image. The following formula is applied:

$$C_{mn} = \frac{{\sum\nolimits_{x = 1}^{A} {\left( {m_{x} - \overline{m}} \right)} \left( {n_{x} - \overline{n}} \right)}}{{\sqrt {\sum\nolimits_{x = 1}^{A} {\left( {m_{x} - \overline{m}} \right)^{2} \sum\nolimits_{x = 1}^{A} {\left( {n_{x} - \overline{n}} \right)^{2} } } } }}$$
(17)

Here \(m_{x}\), \(n_{x}\) denote the values of adjacent pixels and A denotes the total number of adjacent pixel pairs (\(10^{4}\)). Table 5 provides the computed correlation coefficients of each encrypted medical image in the red (R), green (G), and blue (B) components. The correlation coefficients of the cipher data are very close to zero; consequently, the proposed cryptosystem is able to resist all forms of statistical attacks. The distribution of correlation coefficients for the plain data set and the corresponding cipher is also represented in Fig. 23. As a linear relation is observed for the plain images, their pixels are highly correlated. On the other hand, no linear relation is observed for the cipher images, implying that there is no correlation between the pixels of the encrypted images. Consequently, no profitable information can be retrieved from these encrypted data.
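Equation (17) applied to randomly sampled adjacent pairs can be sketched as follows; the function name, the sampling seed, and the choice of horizontal/vertical neighbors via `axis` are illustrative assumptions.

```python
import numpy as np

def adjacent_correlation(img, n=10_000, axis=1, seed=0):
    """Correlation coefficient, Eq. (17), of n randomly chosen
    horizontally (axis=1) or vertically (axis=0) adjacent pixel pairs."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    if axis == 1:
        r = rng.integers(0, h, n)
        c = rng.integers(0, w - 1, n)                 # leave room for the right neighbor
        m, nbr = img[r, c].astype(float), img[r, c + 1].astype(float)
    else:
        r = rng.integers(0, h - 1, n)                 # leave room for the lower neighbor
        c = rng.integers(0, w, n)
        m, nbr = img[r, c].astype(float), img[r + 1, c].astype(float)
    return float(np.corrcoef(m, nbr)[0, 1])
```

On a smooth gradient image the coefficient is essentially 1 (strong linear relation, as for plain images), whereas on uniform noise it is close to 0, as expected for a good cipher.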

Table 5 Correlation coefficients of each encrypted color test data
Fig. 23

Distribution of correlation for plain data set and corresponding cipher

5.3.3 NPCR and UACI tests

To assess the capability of an encryption algorithm to withstand differential attacks, the NPCR ("Number of Pixels Change Rate") and UACI ("Unified Average Changing Intensity") are commonly used [59]. These metrics evaluate how a change in the original image propagates to its equivalent cipher. The numerical value of the NPCR is computed as:

$${\text{NPCR}} = \frac{{\sum\nolimits_{m;n} {{\text{Diff}}(m,n)} }}{D} \times 100\%, \quad \;{\text{Diff}}(m,n) = \left\{ {\begin{array}{*{20}c} {0\,\,{\text{if}}\,\,P(m,n) = C(m,n)} \\ {1\,\,{\text{if}}\,\,P(m,n) \ne C(m,n)} \\ \end{array} } \right.$$
(18)

Here D denotes the total number of pixels in the image. The numerical value of the UACI is computed as:

$${\text{UACI}} = \frac{100}{{m \times n}}\sum\nolimits_{1}^{m} {\sum\nolimits_{1}^{n} {\frac{{\left| {{\text{IC}}_{1} \left( {m,n} \right) - {\text{IC}}_{2} \left( {m,n} \right)} \right|}}{255}} }$$
(19)

where \({\text{IC}}_{1}\) and \({\text{IC}}_{2}\) are two encrypted images obtained from plain images differing in just one pixel, and \(m\) and \(n\) are the dimensions of the images.
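Equations (18) and (19) translate directly into code; this is a generic sketch of the two metrics, with function names chosen here for illustration.

```python
import numpy as np

def npcr(c1, c2):
    """Number of Pixels Change Rate, Eq. (18): percentage of positions
    where the two cipher images differ."""
    return float((c1 != c2).mean() * 100.0)

def uaci(c1, c2):
    """Unified Average Changing Intensity, Eq. (19): mean absolute
    intensity difference, normalized by 255, in percent."""
    diff = np.abs(c1.astype(np.int16) - c2.astype(np.int16))
    return float(diff.mean() / 255.0 * 100.0)
```

Identical ciphers give 0 for both metrics; for a good cipher of two plain images differing in one pixel, the NPCR approaches 100% and the UACI approaches about 33.46%, the theoretical ideal for 8-bit images.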

The outcomes of the NPCR and UACI for the experimental dataset are displayed in Table 6. From these results, the given encryption approach has a high sensitivity to tiny pixel changes in the original image. Consequently, the encrypted images are secure against any form of differential attack.

Table 6 NPCR and UACI of each encrypted test data

5.3.4 Global and local entropy

To estimate the distribution of the pixel values of each plane in the image, local and global entropy is commonly used [60]. Global entropy is computed as:

$$E(Y) = - \sum\limits_{a = 1}^{{2^{b} - 1}} {p(y_{a} } )\log_{2} \left( {p(y_{a} )} \right)$$
(20)

Here \(p(y_{a} )\) represents the probability of \(y_{a}\) and b denotes the pixel bit depth, so that the ideal entropy value for an 8-bit image is 8. Local entropy (which is more accurate than global entropy) is calculated by averaging the global entropy over all non-overlapping blocks of the image. Table 7 states the outcomes of the local and global entropy values for the encrypted images; the values for the cipher images are very close to 8.
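Equation (20) and the block-averaging definition of local entropy can be sketched as below. The 16 × 16 block size is an assumption for illustration; the standard local-entropy test in the literature typically uses larger blocks.

```python
import numpy as np

def global_entropy(img):
    """Shannon entropy of an 8-bit image, Eq. (20)."""
    p = np.bincount(img.ravel(), minlength=256) / img.size
    p = p[p > 0]                         # skip empty bins, 0*log2(0) := 0
    return float(-(p * np.log2(p)).sum())

def local_entropy(img, block=16):
    """Mean global entropy over non-overlapping block x block tiles."""
    h, w = img.shape
    vals = [global_entropy(img[i:i + block, j:j + block])
            for i in range(0, h - block + 1, block)
            for j in range(0, w - block + 1, block)]
    return float(np.mean(vals))
```

A constant image has entropy 0, and an image using all 256 gray levels equally often attains the maximum of exactly 8 bits, matching the "close to 8" criterion applied to the ciphers in Table 7.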

Table 7 Global and local entropy of each encrypted test data

5.3.5 Occlusion analysis

In this work, medical images are transferred securely from one user to another. During the transmission process, encrypted data may easily suffer partial information loss (occlusion) [61]. The decryption algorithm at the receiver side must be able to recover the original image with minimal loss. To assess the capability of the proposed cryptosystem to resist occlusion, we cut blocks of various sizes of pixels from the ciphered Enc-Img03 and then decrypted it. Figure 24 presents the outcomes of the data loss attacks, in which the decrypted images retain good visual quality.

Fig. 24

Img03 is encrypted, occluded and decrypted with success to illustrate occlusion attacks

Fig. 25

Img02 is decrypted with various keys to illustrate key sensitivity

5.3.6 Key sensitivity analysis

A good encryption algorithm is required to be highly sensitive to the encryption key [62]. To assess the key sensitivity of the proposed techniques, we executed the decryption procedure on the encrypted image Enc-Img02 with several slightly different initial values; the outcomes are presented in Fig. 25.

5.3.7 Complexity analysis of the proposed technics

Complexity analysis is one of the most important measures for assessing the performance of an algorithm. This complexity can be computed in terms of running speed, the Encryption Throughput (ET), and the Number of Cycles (NC) required to secure one byte of the plain image. Note that the encryption time is measured using the "tic-toc" functions of MATLAB, while the ET and NC are computed as:

$${\text{ET}} = \frac{{{\text{Size}}\,{\text{of}}\,{\text{the}}\,{\text{image}}\,({\text{Byte}})}}{{{\text{Encryption}}\,{\text{time}}\,({\text{s}} )}}$$
(21)
$${\text{NC}} = \frac{{{\text{CPU}}\,{\text{speed}}\,(Hz)}}{{{\text{ET}}({\text{Byte}}/{\text{s}} )}}$$
(22)
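Equations (21) and (22) can be wrapped around any encryption routine as sketched below. This is a Python illustration of the measurement only (the paper used MATLAB's tic-toc); the function name and the placeholder XOR "cipher" in the usage line are assumptions, not the paper's algorithm.

```python
import time
import numpy as np

def throughput_metrics(encrypt, image, cpu_hz=2.4e9):
    """Encryption Throughput, Eq. (21), and Number of Cycles per byte,
    Eq. (22); cpu_hz defaults to the 2.4 GHz workstation of Sect. 5.3."""
    t0 = time.perf_counter()
    encrypt(image)
    elapsed = time.perf_counter() - t0   # plays the role of MATLAB's tic-toc
    et = image.nbytes / elapsed          # ET in bytes per second
    nc = cpu_hz / et                     # CPU cycles spent per encrypted byte
    return et, nc

# Illustrative use with a trivial XOR stand-in for the real cipher
img = np.random.default_rng(2).integers(0, 256, size=(512, 512, 3), dtype=np.uint8)
et, nc = throughput_metrics(lambda im: im ^ 0x5C, img)
```

A fast algorithm shows a high ET and a low NC, which is the comparison criterion used in Tables 8 and 9.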

A good encryption algorithm should take less encryption time and fewer cycles (NC) and offer a higher ET to be suitable for real-time implementation. Table 8 contains the running speed of the encryption algorithm for various sizes of the test image Img01 (e.g., 512 × 512 × 3 bytes), while Table 9 provides the ET and NC computed with the 512 × 512 × 3 bytes version of Img01. The computational workstation is characterized by a 2.4 GHz Intel® Core™ i7-3630QM processor, 16 GB RAM, and MATLAB R2016b. The computational time increases with the size of the plain image; note that it also depends on the capacity of the workstation (processor and RAM). Tables 8 and 9 show that an acceptable complexity is obtained, and the algorithm is competitive with some of the fastest state-of-the-art chaos-based cryptosystems.

Table 8 Computational time (in milliseconds) for various size test images and comparison with existing works
Table 9 ET and NC computed with 512 × 512 × 3 bytes version of Img01

6 Superiority of the encryption scheme based on the TLN chaotic attractor

In this section, a comparative analysis in terms of encryption/decryption performance and key sequence is carried out. Table 10 compares the complexity, NPCR, UACI, χ2-values, entropy, and key sequence of the present work with some of the fastest chaos-based encryption/decryption algorithms from the literature. This comparative analysis shows that the proposed cryptosystem achieves high security and is competitive with the fastest chaos-based cryptosystems.

Table 10 Comparative analysis for Img01 in terms of computational time (CT), Encryption Throughput (ET), Number of Cycles (NC), NPCR, UACI, χ2-values, entropy and key sequence

7 Conclusion and future works

This paper focused on the dynamics of a non-autonomous tabu learning two-neuron model with compound activation functions. For some discrete values of the two parameters of this compound function, its characteristic curves have been obtained. The graphical representations of the equilibria under variation of the learning rates of the first and second neurons, as well as their stability, have been investigated. It has been found that the adapting synapse-based neuron model displays self-excited dynamics. Using the learning rate of the first neuron as the bifurcation control parameter, some complex behaviors, including periodic states, chaotic behavior, and periodic windows separated by areas of chaotic behavior, are observed using bifurcation diagrams supported by the graph of the Lyapunov spectrum. Besides, phenomena such as crisis, parallel bifurcation branches, and the coexistence of up to six disconnected stable states have also been captured during our numerical simulations. The representations of the basins of attraction of some of those coexisting states have shown that they are all self-excited. An analog circuit was developed [7, 9, 11, 48] and implemented in PSPICE. Results similar to those from the numerical approach were obtained, and good agreement was observed between the two [17, 20, 46,47,48, 66, 67]. Finally, the complex sequence of random numbers obtained from the dynamical analysis of the TLTN model was used to design an encryption/decryption algorithm based on a modified Julia set and confusion-diffusion operations.
The security performance of the built cryptosystem was analyzed in terms of computational time (CT = 1.82), Encryption Throughput (ET = 151.82 MBps), Number of Cycles (NC = 15.80), NPCR = 99.6256, UACI = 33.6512, χ2-value = 243.7786, global entropy = 7.9992 and local entropy = 7.9083, and it was found that the algorithm is highly secure compared to some of the fastest neuron chaos-based cryptosystems and thus suitable for IoMT security.

Clustering analysis is commonly used for data classification. Unfortunately, this idea is out of the scope of this work and may be considered in future works: one could use an unsupervised clustering technique to extract the meaningful information from complex data such as medical images and encrypt only these meaningful data to improve the encryption time. Let us mention that the aim of the present work is to derive a robust and simple encryption algorithm, based on the chaotic sequences of the TLTN model, useful for complex data such as medical images.