1 Introduction

Several techniques have been reported in the literature for detecting defective sensors in an array antenna. Rodríguez-González et al. (2000; 2009) diagnosed defective sensors using the genetic algorithm (GA), where a fitness function compares the measured radiation pattern with a given configuration of failed/unfailed sensors. Patnaik et al. (2007) used a neural network (NN) approach to detect a maximum of three defective sensors in a small array composed of 16 sensors. Bucci et al. (2000) considered the ambiguity of the result in continuous and discrete on-off cases. Xu et al. (2007) used the support vector machine (SVM) to diagnose the defective sensors in a small array composed of four sensors. However, this technique is not applicable to large arrays, where the number of possible fault combinations grows rapidly. Moreover, the available techniques are computationally expensive, as they require not only storing the patterns of all possible defective sensors in the array but also scanning the entire region from 0° to 180°. Oliveri et al. (2009) presented a linear thinned array with predictable and well-behaved sidelobes, in which element placement is based on almost difference sets and the array power pattern is forced to pass through uniformly spaced values. Oliveri et al. (2010) further proposed an analytical technique based on almost difference sets for thinning planar arrays with well controlled sidelobes. Khan et al. (2015) used the compressed sensing technique hybridized with the genetic algorithm for the detection of faulty sensors, while Mailloux (1996), Yeo and Lu (1999), and Khan et al. (2013; 2014) developed different algorithms for failure correction.

Today, biologically inspired techniques, especially differential evolution (DE) and the cultural algorithm (CA), are considered efficient and reliable optimization methods (Zaman et al., 2012a; 2012b).

CA and DE are optimization techniques that incorporate domain knowledge obtained during the evolutionary process. For many optimization problems, both CA and DE have successfully overcome the shortcomings of conventional optimization techniques owing to their flexibility and effectiveness (Reynolds and Chung, 1996a; 1996b; Jin and Reynolds, 1999; Reynolds and Peng, 2005; Becerra and Coello, 2006). DE and CA are stochastic search algorithms in which the function parameters are encoded as floating-point variables. They are simple in structure, converge fast, and are robust against noise. Fonollosa et al. (2013) developed a more reliable and robust electronic nose (e-nose) system, in which machine learning based on multiple kernels was used to overcome sensor failures. Their results confirm that multi-kernel models are more robust to sensor failures when the sub-kernel models are trained with small sets of sensors.

In this paper, the detection of fully and partially defective sensors in a linear array composed of N sensors is addressed. First, a symmetrical linear array structure is exploited. Second, a hybrid technique based on CA and DE is developed, in which the results achieved through CA are further tuned using DE. The mean squared error (MSE) is used as the objective function, defining the error between the responses of the desired and estimated patterns. The symmetrical structure has two advantages: (1) instead of finding all damaged patterns, only (N−1)/2 patterns are needed; (2) only the region from 0° to 90° needs to be scanned, instead of from 0° to 180°. Thus, the computational complexity is reduced. Monte Carlo simulations are carried out to validate the proposed scheme and show that it outperforms the conventional method of Choudhury et al. (2013) in terms of computational time and MSE.

2 Problem formulation

Consider a uniform linear array (ULA) composed of N=2M+1 sensors placed along the x-axis and centered at the origin. The far-field array factor (AF) of a healthy setup of equally spaced sensors with nonuniform amplitudes and progressive phase excitations can be written as (Wolff, 1937)

$$\mathrm{AF}(\theta_i) = \sum\limits_{n=-M}^{M} w_n \exp[\mathrm{j}n(kd\cos\theta_i + \alpha)],$$
((1))

where wn is the nonuniform weight of the nth sensor, d is the spacing between adjacent sensors, θ is the observation angle (θi being its ith sample), k=2π/λ is the wave number with wavelength λ, and α=−kd cos θs is the progressive phase shift, with θs the steering angle of the main beam. For an unhealthy setup (Fig. 1), AF can be written as

$$\mathrm{AF}(\theta_i) = \sum\limits_{\substack{n=-M \\ n \neq m}}^{M} w_n \exp[\mathrm{j}n(kd\cos\theta_i + \alpha)].$$
((2))
Fig. 1 Nonuniform amplitude array composed of 2M+1 sensors, with sensor w2 defective

If either wm or w−m is damaged, i.e., its weight is set to zero, the array factor with the mth sensor damaged in a noisy environment is given by

$$\mathrm{AF}(\theta_i) = \sum\limits_{\substack{n=-M \\ n \neq m}}^{M} w_n \exp[\mathrm{j}n(kd\cos\theta_i + \alpha)] + \eta_i,$$
((3))

where ηi is additive zero-mean complex Gaussian noise with variance σ at the ith sample, and AF(θi) is the pattern when wm or w−m is fully faulty (Fig. 1). The measurement signal-to-noise ratio (SNR, usually quoted in dB) can be expressed as

$$\mathrm{SNR} = \frac{\sum\limits_{i=1}^{K} \left|\mathrm{AF}(\theta_i)\right|^2}{\sum\limits_{i=1}^{K} \left|\eta_i\right|^2}.$$
((4))

Assume that sensor w4 fails in the array. The method of locating a faulty element in a linear array starts with the measurement of several samples of the faulty pattern. The damaged array pattern for sensor w4 is shown in Fig. 2, where one can clearly observe that the pattern is symmetrical about θ=90°.
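
The symmetry just described can be reproduced numerically. The following sketch is illustrative only (it is not the authors' code; the 51-sensor size, λ/2 spacing, broadside steering, and SciPy's chebwin taper are our assumptions): it evaluates Eqs. (1) and (2) and confirms that zeroing w4 or w−4 yields the same power pattern, symmetric about θ=90°.

```python
import numpy as np
from scipy.signal.windows import chebwin   # Chebyshev taper (assumed -30 dB SLL)

M = 25                                     # N = 2M + 1 = 51 sensors
n = np.arange(-M, M + 1)                   # sensor indices -M..M
w = chebwin(2 * M + 1, at=30)              # nonuniform Chebyshev weights
k_d = np.pi                                # k*d for d = lambda/2
alpha = 0.0                                # broadside steering
theta = np.deg2rad(np.arange(0, 181))      # observation angles 0..180 deg

def array_factor(weights):
    """AF(theta_i) = sum_n w_n exp[j n (k d cos(theta_i) + alpha)], Eqs. (1)/(2)."""
    phase = np.outer(np.cos(theta) * k_d + alpha, n)   # (angles x sensors)
    return (np.exp(1j * phase) * weights).sum(axis=1)

w_fault_pos = w.copy()
w_fault_pos[n == 4] = 0.0                  # w_4 fully faulty
w_fault_neg = w.copy()
w_fault_neg[n == -4] = 0.0                 # w_-4 fully faulty

p_pos = np.abs(array_factor(w_fault_pos)) ** 2
p_neg = np.abs(array_factor(w_fault_neg)) ** 2
print(np.allclose(p_pos, p_neg))           # True: identical power patterns
print(np.allclose(p_pos, p_pos[::-1]))     # True: symmetric about theta = 90 deg
```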

Fig. 2 Original Chebyshev array and the w4 sensor damage pattern

3 Proposed methodology

In this section, we develop a method based on faulty patterns that are symmetric about θ=90°. Due to the symmetric structure, the pattern is the same whether wm or w−m is damaged; e.g., the failure of w4 or w−4 gives the same pattern (Figs. 2 and 3). To detect the faulty sensor, we therefore tabulate only half the number of faulty patterns; i.e., only (N−1)/2 faulty patterns are required. The other advantage is that the damage pattern needs to be scanned only from 0° to 90°, as it is symmetrical about the line θ=90°. The location of the faulty sensor can be found as

$$C_m = \sum\limits_{i=1^\circ}^{90^\circ} \left| P_{\mathrm{F}}(\theta_i) - P_m(\theta_i) \right|^2,$$
((5))

where PF(θi) is the measured faulty pattern and Pm(θi) is the pattern when wm or w−m is fully faulty (1≤m≤(N−1)/2). In Eq. (5), the faulty pattern is compared with each configuration containing one fully faulty sensor, and the minimum Cm gives the location of the faulty sensor. Then, based on a threshold, we decide whether the sensor is fully or partially defective. The threshold value was chosen on the basis of MSE: if the lowest error is not larger than Eth (set to 0.5), the weight wm or w−m is fully faulty; if it is larger than Eth, the weight is partially faulty. We use the cultural algorithm with differential evolution (CADE) technique to find the weights of a partially defective sensor. The fitness function is given by

$$G = \sum\limits_{i=1^\circ}^{90^\circ} \left| P_{\mathrm{F}}(\theta_i) - P_{\mathrm{CADE}}(\theta_i) \right|^2,$$
((6))

where PF(θi) is the desired response and PCADE(θi) is the pattern obtained by the CADE technique. The proposed method starts by tabulating the faulty patterns {F1(θi), F2(θi), …, F(N−1)/2(θi)}, i.e., the single-defect patterns Pm(θi), evaluated over the range 0° to 90°. Then Cm in Eq. (5) is computed for each m, and the faulty sensor is identified as the one that minimizes Cm between the measured faulty pattern and the pattern with wm or w−m fully faulty.

Fig. 3 Original Chebyshev array and the w−4 sensor failure pattern

The proposed method is computationally efficient, as it requires only half the number of samples {θ0°, θ2°, …, θ90°} and only (N−1)/2 tabulated faulty patterns to detect faulty sensors. The procedure starts with measuring the faulty pattern. If a defective sensor fails but still radiates some power (i.e., the defective array pattern can still be obtained from Eq. (2)), the defective weight is a fraction of the original one.
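
A minimal sketch of this detection step is given below (function and variable names are ours; the half-range sampling and the threshold Eth=0.5 follow the description above). It evaluates the cost Cm of Eq. (5) against a tabulated dictionary of the (N−1)/2 single-fault patterns and applies the threshold to decide between a full and a partial fault.

```python
import numpy as np

E_TH = 0.5  # threshold on the minimum cost, as set in the text

def locate_fault(p_faulty, fault_dictionary):
    """Locate a single faulty sensor from a half-range (0-90 deg) power pattern.

    p_faulty         : measured faulty pattern sampled on 0..90 deg, shape (K,)
    fault_dictionary : shape ((N-1)//2, K); row m-1 holds P_m, the pattern with
                       w_m (equivalently w_{-m}) fully faulty
    Returns (m, status, C_m) with status in {'full', 'partial'}.
    """
    costs = np.sum(np.abs(fault_dictionary - p_faulty) ** 2, axis=1)  # C_m, Eq. (5)
    m = int(np.argmin(costs)) + 1                                     # sensor index
    status = 'full' if costs[m - 1] <= E_TH else 'partial'
    return m, status, float(costs[m - 1])
```

If the decision is 'partial', the CADE search of Eq. (6) is then run to estimate the remaining fraction of the weight.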

3.1 Differential evolution

DE, developed by Storn and Price (1997), is used to solve real-valued optimization problems. It is a stochastic search algorithm with a simple structure, fast convergence, and robustness against noise, and it has shown good results for multimodal and non-differentiable fitness functions. DE is based on a mutation operator that perturbs a base vector by the scaled difference between two randomly chosen individuals of the current population. It has therefore found numerous applications (Rogalsky et al., 2000; Das and Konar, 2006). The basic steps are given as pseudocode as follows:

  1. Step 1

    (Initialization): First we randomly initialize Q chromosomes, each with a length of 1×P. The P genes in each chromosome represent the weights of the array antenna, given as

    $$S = \begin{pmatrix} w_{1,1} & w_{1,2} & \cdots & w_{1,P} \\ w_{2,1} & w_{2,2} & \cdots & w_{2,P} \\ \vdots & \vdots & & \vdots \\ w_{Q,1} & w_{Q,2} & \cdots & w_{Q,P} \end{pmatrix},$$

    wi,k ∈ ℝ, lb ≤ wi,k ≤ ub, ∀i = 1, 2, …, Q, k = 1, 2, …, P, where lb and ub are the lower and upper bounds of wi,k, respectively.

  2. Step 2

    (Update): All the chromosomes from 1 to Q of the current generation are updated. Choose \(d_h^{q,\,{g_e}}\) from the matrix, where ge and h denote the generation index and the position within the chromosome, respectively. The main task is to obtain the chromosome of the next generation, \(d^{q,\,{g_{e+1}}}\), by using the mutation, crossover, and selection operations.

    Mutation: To perform the mutation process, we select randomly three different chromosomes from matrix S:

    $$f^{q,\,g_e} = d^{c_1,\,g_e} + F\left(d^{c_2,\,g_e} - d^{c_3,\,g_e}\right), \quad 0.5 \le F \le 1, \;\; 1 \le c_1, c_2, c_3 \le Q, \;\; c_1 \neq c_2 \neq c_3 \neq q.$$
    ((7))

    Crossover: Crossover is performed using

    $$r_h^{q,\,g_e} = \begin{cases} f_h^{q,\,g_e}, & \mathrm{rand()} \le \mathrm{CR} \ \text{or} \ h = h_{\mathrm{rand}}, \\ d_h^{q,\,g_e}, & \text{otherwise}, \end{cases}$$
    ((8))

    where 0.5≤CR≤1 and hrand is a randomly chosen gene index.

    Selection: The next-generation chromosome is generated by

    $$d^{q,\,g_{e+1}} = \begin{cases} r^{q,\,g_e}, & \mathrm{error}(r^{q,\,g_e}) \le \mathrm{error}(d^{q,\,g_e}), \\ d^{q,\,g_e}, & \text{otherwise}, \end{cases}$$

    where \(\mathrm{error}(r^{q,\,g_e})\) and \(\mathrm{error}(d^{q,\,g_e})\) are the fitness values of the trial chromosome and the current chromosome in matrix S, respectively.

  3. Step 3

    (Stopping criterion): The stopping criterion is based on the following condition:

    The algorithm stops when \(\mathrm{error}(d^{q,\,g_{e+1}}) < \varepsilon\) or when the maximum number of iterations is reached.
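
The steps above can be condensed into a short sketch of a generic DE loop (a rand/1/bin variant written by us under the stated parameter ranges; the fitness function error(·) is left abstract and would be the MSE of Eq. (6) in this application):

```python
import numpy as np

def de_rand1_bin(error, P, Q=30, F=0.7, CR=0.8, lb=0.0, ub=1.0,
                 max_gen=500, eps=1e-6, seed=0):
    """Minimize error(w) over w in [lb, ub]^P with a DE/rand/1/bin loop."""
    rng = np.random.default_rng(seed)
    S = rng.uniform(lb, ub, size=(Q, P))              # Step 1: Q random chromosomes
    fit = np.array([error(s) for s in S])
    for _ in range(max_gen):                          # Step 2: generation update
        for q in range(Q):
            c1, c2, c3 = rng.choice([i for i in range(Q) if i != q], 3, replace=False)
            mutant = np.clip(S[c1] + F * (S[c2] - S[c3]), lb, ub)   # mutation, Eq. (7)
            cross = rng.random(P) <= CR                             # crossover, Eq. (8)
            cross[rng.integers(P)] = True             # keep at least one mutant gene
            trial = np.where(cross, mutant, S[q])
            f_trial = error(trial)
            if f_trial <= fit[q]:                     # selection: keep the better one
                S[q], fit[q] = trial, f_trial
        if fit.min() < eps:                           # Step 3: stopping criterion
            break
    best = int(np.argmin(fit))
    return S[best], fit[best]
```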

3.2 Cultural algorithm

CA was developed by Reynolds (1994) to model the evolution of the cultural component of an evolutionary computational system over time. The main idea behind CA is to explicitly acquire problem-solving knowledge from the evolving population and to apply this knowledge to guide the search (Reynolds and Chung, 1996b). CA uses culture as a vehicle for storing information that is accessible to the entire population over many generations. The parameter settings are given in Table 1, and the flow diagram of CA is shown in Fig. 4. CA consists of three components: a population space, a belief space, and a communication protocol. The first component contains the population to evolve and the mechanisms for its evaluation; the population space consists of a set of possible solutions to the problem, and in this study it is evolved by DE. The second component, the belief space, represents the bias acquired by the population during its problem-solving process; it is the information repository in which individuals store their experience for other individuals to learn from. These two spaces are connected through a communication protocol composed of two functions, i.e., acceptance and influence. The acceptance function selects the experience of the best individuals from the population space and stores it in the belief space, where the knowledge is then refreshed through the update function. The influence function uses this knowledge to guide the search. In the present work, the belief space is divided into two knowledge components, i.e., situational knowledge and normative knowledge. CA is used as an optimization algorithm whose belief space is stored and updated alongside the population space during each generation, and the knowledge in the belief space guides the search toward the required solution.

Fig. 4 Generic flow diagram of the cultural algorithm

Table 1 Parameters used in the cultural algorithm with differential evolution

The situational knowledge influences the differential evolution variation operators in the following way:

$${y^{i,\,g}} = {Q_i} + F({w^{{n_2},\,g}} - {w^{{n_3},\,g}}),$$
((9))

where Qi is the ith component of the individuals stored in the situational knowledge.

The normative knowledge includes a scaling factor dsi that influences the mutation operator adopted in DE. The following expression shows the influence of the normative knowledge on the variation operators:

$$z_j^{i,\,g} = \begin{cases} w^{n_1,\,g} + F\left(w^{n_2,\,g} - w^{n_3,\,g}\right), & w^{n_1,\,g} < l_i, \\ w^{n_1,\,g} - F\left(w^{n_2,\,g} - w^{n_3,\,g}\right), & w^{n_1,\,g} > u_i, \\ w^{n_1,\,g} + \dfrac{u_i - l_i}{\mathrm{ds}_i}\, F\left(w^{n_2,\,g} - w^{n_3,\,g}\right), & \text{otherwise}, \end{cases}$$
((10))

where li and ui are the lower and upper bounds of the ith decision variable, respectively, and \(w^{n_1,\,g}\) represents the jth component of the ith individual selected from the gth generation by the acceptance function (i=1, 2, …, naccepted, where naccepted is the number of best individuals accepted at the gth generation). The scaling factor dsi is updated with the difference \((w^{n_2,\,g} - w^{n_3,\,g})\) produced by the variation operators of the previous generation. The normative knowledge leads individuals back into the promising range if they are outside it. It is updated as follows. Let \(\{x_{a_1}, x_{a_2}, \ldots, x_{a_{n_{\mathrm{accepted}}}}\}\) denote the accepted individuals of the current generation, where \(\{a_1, a_2, \ldots, a_{n_{\mathrm{accepted}}}\}\) is the set of their indices. Thus, we have

$$u_i = \begin{cases} w_{i,\,\max_i}, & w_{i,\,\max_i} > u_i \ \text{or} \ f(w_{\max_i}) > U_i, \\ u_i, & \text{otherwise}, \end{cases}$$
((11))
$$l_i = \begin{cases} w_{i,\,\min_i}, & w_{i,\,\min_i} < l_i \ \text{or} \ f(w_{\min_i}) < L_i, \\ l_i, & \text{otherwise}, \end{cases}$$
((12))

where \(w_{\min_i}\) and \(w_{\max_i}\) are the accepted individuals with the minimum and maximum values of parameter i, respectively. If li and ui are updated, the values of Li and Ui are updated accordingly. The scaling factor dsi is updated with the largest difference \(|w_{i,\,r_1} - w_{i,\,r_2}|\) encountered by the variation operators at the previous generation.
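
For illustration, the influence of the normative knowledge on the DE mutation and the interval update of Eqs. (10)-(12) might be sketched as follows (our own simplified rendering: the fitness-based conditions on Li and Ui and the acceptance step are omitted):

```python
import numpy as np

def influenced_mutation(w1, w2, w3, l, u, ds, F=0.7):
    """Normative-knowledge influence on the DE mutation, in the spirit of Eq. (10).
    All arguments are per-variable numpy arrays of equal length."""
    diff = F * (w2 - w3)
    below = w1 < l                        # outside on the low side: push upward
    above = w1 > u                        # outside on the high side: push downward
    inside = w1 + (u - l) / ds * diff     # inside the range: step scaled by (u-l)/ds
    return np.where(below, w1 + diff, np.where(above, w1 - diff, inside))

def update_normative(l, u, ds, accepted):
    """Expand the per-variable interval and scaling factor from the accepted
    individuals (rows of `accepted`), simplified from Eqs. (11)-(12)."""
    l_new = np.minimum(l, accepted.min(axis=0))
    u_new = np.maximum(u, accepted.max(axis=0))
    ds_new = np.maximum(ds, accepted.max(axis=0) - accepted.min(axis=0))
    return l_new, u_new, ds_new
```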

4 Simulation results and discussions

In this section, we discuss several cases based on different numbers of defective sensors in an array.

Case 1 Consider a Chebyshev linear array composed of 51 sensors with λ/2 inter-sensor spacing as the test antenna. The sensors were placed symmetrically along the x-axis and excited symmetrically about the center of the array. An analytical technique was used to find the nonuniform weights for a −30 dB constant sidelobe level (SLL) in the Chebyshev array. To diagnose the faulty sensor in the linear symmetrical array, the radiation patterns for the fully and partially faulty sensors were generated, and samples were taken from the patterns in the region of 0° to 90°. Fifteen samples were taken from each pattern at an interval of 6° to scan the region of 0° to 90° for the detection of fully and partially faulty sensors. It is clear from Figs. 2 and 3 that the failure of either w4 or w−4 gives the same pattern, which is symmetrical about θ=90°. For the detection of the faulty sensor, we have to tabulate the faulty patterns {F1(θi), F2(θi), …, F25(θi)}, i.e., the single-fault patterns Pm(θi) obtained when either wm or w−m is damaged (m=0, 1, …, 25). The cost function in Eq. (5) is then minimized over the given samples {θ0°, θ1°, …, θ90°}, and its minimization gives the location of the faulty sensor. If Cm ≤ Eth = 0.5, the weight wm is fully faulty; if Cm > Eth = 0.5, the weight is partially faulty, and the CADE technique (Eq. (6)) is used to find the weights. Simulation results for full and partial faults have been checked, confirming the validity of the proposed method.
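
The setup of Case 1 can be sketched as follows (illustrative only; SciPy's chebwin is used in place of the analytical Chebyshev synthesis, and the exact sample endpoints are assumed). The resulting dictionary of 25 single-fault patterns is what the detection step of Section 3 compares against.

```python
import numpy as np
from scipy.signal.windows import chebwin

N, M = 51, 25
n = np.arange(-M, M + 1)                    # sensor indices -M..M
w = chebwin(N, at=30)                       # -30 dB constant-SLL Chebyshev taper
theta = np.deg2rad(np.arange(6, 91, 6))     # 15 samples at a 6-degree step (endpoints assumed)
psi = np.outer(np.cos(theta), np.pi * n)    # n*k*d*cos(theta) for d = lambda/2, broadside

def power_pattern(weights):
    return np.abs((np.exp(1j * psi) * weights).sum(axis=1)) ** 2

# dictionary of the (N-1)/2 = 25 single-fault patterns P_m, m = 1..25
P = np.empty((M, theta.size))
for m in range(1, M + 1):
    w_faulty = w.copy()
    w_faulty[n == m] = 0.0                  # w_m fully faulty (w_{-m} gives the same pattern)
    P[m - 1] = power_pattern(w_faulty)
```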

Fig. 5 shows the behavior of the diagnostic error (MSE) for different values of the signal-to-noise ratio (SNR). As can be seen from Fig. 5, the MSE decreases as the SNR increases.

Fig. 5 Mean squared error versus the number of iterations

Case 2 Consider that a sensor fails partially and radiates some power; i.e., its weight is not zero but a fraction of the original one. First, we consider that sensor w2 is 50% damaged; this damage pattern is created by making the weight of the sensor half of its original value in the original Chebyshev array. The resulting array factor, obtained from Eq. (1) with the weight of sensor w2 halved, is represented by PF(θi) in the fitness function of Eq. (6). The CADE technique is then used to locate the sensor and to minimize this MSE-based fitness function, which in turn gives the weight of the defective sensor. To check the performance of the CADE technique, these weights are compared with those obtained by the Chebyshev method in which the weight of the sensor equals half of the original one. The weight obtained by CADE is given in Table 2, and the pattern recovered using the CADE technique is shown in Fig. 6.
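
The recovery of the halved w2 weight can be imitated with any real-valued optimizer; the sketch below uses SciPy's differential_evolution as a stand-in for CADE (our own illustration, with chebwin weights, λ/2 spacing, broadside steering, and only the w2 weight treated as unknown):

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.signal.windows import chebwin

M = 25
n = np.arange(-M, M + 1)
w = chebwin(2 * M + 1, at=30)
theta = np.deg2rad(np.arange(0, 91))            # half-range scan, 0..90 deg
psi = np.outer(np.cos(theta), np.pi * n)        # lambda/2 spacing, broadside

def pattern(weights):
    return np.abs((np.exp(1j * psi) * weights).sum(axis=1)) ** 2

w_damaged = w.copy()
w_damaged[n == 2] *= 0.5                        # w2 radiating at 50%
p_f = pattern(w_damaged)                        # "measured" P_F(theta_i)

def fitness(x):                                 # Eq. (6) with only w2 unknown
    w_trial = w.copy()
    w_trial[n == 2] = x[0]
    return np.sum((pattern(w_trial) - p_f) ** 2)

res = differential_evolution(fitness, bounds=[(0.0, 1.0)], seed=0, tol=1e-10)
print(float(res.x[0]), float(w[n == 2][0] * 0.5))   # recovered vs. true halved weight
```

With a noiseless measured pattern, the recovered value should coincide with half of the original w2 weight, which mirrors the comparison reported in Table 2.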

Fig. 6 Weight distributions of the original array, the w2-defective array, and that obtained by the cultural algorithm with differential evolution (CADE)

Now suppose that sensor w5 is 50% faulty, i.e., its weight is half of the original value. The CADE technique was used to locate its position. The array factor was obtained by making the weight of sensor w5 equal to half of its original value in Eq. (1), and the CADE technique was then used to minimize the fitness function in Eq. (6), which in turn gave the weight of the faulty array. The weights obtained by the CADE technique for this partial failure are given in Table 2. To check the validity, they were compared with the weights of the 50%-damaged Chebyshev array. The original weight distribution, the partially faulty weight distribution, and the weight distribution obtained by CADE are depicted in Fig. 7; from this comparison, the partial fault can be clearly identified, and comparing the CADE weights with those of the defective array indicates the level of the partial fault. The weights of the original Chebyshev array and the weights obtained by the CADE technique for the different cases are given in Table 2. Finally, we assume that sensor w10 is 25% faulty. The CADE algorithm is run to locate the partial fault of this sensor, and the weights obtained for the detection of the 25% partial fault are given in Table 2 and shown in Fig. 8.

Table 2 Chebyshev weights and normalized weights obtained by the cultural algorithm with differential evolution
Fig. 7 Weight distributions of the original array, the w5-defective array, and that obtained by the cultural algorithm with differential evolution (CADE)

Fig. 8 Weight distributions of the original array, the w10-defective array, and that corrected by the cultural algorithm with differential evolution (CADE)

Case 3 Consider a linear Chebyshev array composed of 24 sensors, taken as the reference antenna to execute the fault-diagnosis method developed by Choudhury et al. (2013), with which the proposed method was compared. The fully and partially faulty patterns were generated by setting the corresponding weights either to zero or to some fractions of the original weights. Assume that the 4th, 10th (50%), and 17th (100%) sensors in the array have become faulty. The faulty pattern and its symmetrical counterpart for the failure of the 4th, 10th (50%), and 17th (100%) sensors are shown in Figs. 9a and 9b, respectively. It is clear from Figs. 9a and 9b that the two failures give the same power pattern; therefore, only half of the faulty patterns need to be tabulated. The second advantage of using a symmetrical linear array is that the faulty pattern is symmetrical about θ=90°; i.e., only half the number of samples is needed to scan the pattern. First, we simulated the patterns with 1–3 faulty elements, using a set of 1162 patterns. In each case, the faulty pattern contains M samples as the input to check the fault diagnosis; here, we take 18 samples, 35 samples, or a random number of samples to validate the performance of the proposed method. Assuming a maximum of three defective sensors yields a total of \(\sum_{f=1}^{3} \frac{N!}{f!(N-f)!} = 2324\) patterns for the conventional method, whereas only 1162 patterns are required by the proposed method. To locate the faulty sensors in the array, the weight of each sensor was taken as the optimization parameter for the bacteria foraging optimization (BFO) and CADE algorithms, and CADE converged to the optimal solution. For the array diagnosis, 35 samples were taken in the range of 0° to 180° at an interval of 5°. We supposed that the 4th and 10th sensors were partially faulty and that the 17th sensor was fully faulty, and ran BFO and the proposed method to diagnose the faulty sensors. Fig. 10a shows the faulty pattern with the positions of the 35 samples, and the fault diagnosed by the conventional method is shown in Fig. 10b. The same process was then repeated using the proposed method (Figs. 11a and 11b).
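
The pattern counts quoted above follow directly from the binomial coefficients; a quick check (illustrative only):

```python
from math import comb

N = 24
total = sum(comb(N, f) for f in range(1, 4))   # at most three defective sensors
print(total, total // 2)                        # 2324 and 1162, as quoted above
```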

Fig. 9 Patterns of the Chebyshev array and the array with the 4th, 10th (50%), and 17th (100%) sensors faulty (a), and of the Chebyshev array and the symmetrical counterpart of that faulty array (b)

Fig. 10 Defective array pattern with faults at the 4th, 10th (50%), and 17th (100%) sensors with 35 sample points (a) and the fault diagnosed by the conventional method (Choudhury et al., 2013) (b)

Fig. 11 Defective array pattern with faults at the 4th, 10th (50%), and 17th (100%) sensors with 19 sample points (a) and the fault diagnosed by the proposed method (b)

With the proposed method, the fault was diagnosed with half the number of samples. The diagnosis was then repeated with 18 samples and with a random number of samples for the conventional and proposed methods, and the corresponding results are shown in Figs. 12 and 13. The proposed method can detect the faulty sensors accurately even with fewer sample points. Fig. 14 shows the MSE plots for the conventional and proposed methods.

Fig. 12 Defective array pattern with faults at the 4th, 10th (50%), and 17th (100%) sensors with 10 sample points (a) and the fault diagnosed by the proposed method (b)

Fig. 13 Defective array pattern with faults at the 4th, 10th (50%), and 17th (100%) sensors with a random number of sample points (a) and the fault diagnosed by the proposed method (b) (BFO: bacteria foraging optimization)

Fig. 14 Mean squared error performance of the conventional (Choudhury et al., 2013) and proposed methods

For the first few iterations, the MSE was high, but it decreased after some iterations. BFO and the proposed method were run to detect the locations of the faults for six random cases and to find the average detection time for various scenarios. The results are given in Tables 3–6. From the simulation results, it is clear that the computation time increases as the number of faulty sensors increases.

Table 3 Time comparison for six configurations of one defective sensor
Table 4 Time comparison for six configurations of two defective sensors
Table 5 Time comparison for six random configurations of three defective sensors
Table 6 Time comparison for six random configurations of fully and partially faulty sensors

5 Conclusions

We proposed a computationally efficient technique to find fully and partially defective sensors in a linear array. The symmetrical linear array structure brings two advantages. First, the failure of wm or w−m gives the same pattern; i.e., only (N−1)/2 patterns are required instead of all damaged patterns. Second, only half of the damage pattern {θ0°, θ1°, …, θ90°} needs to be scanned, as the patterns are symmetrical about θ=90°. The decision between a fully and a partially faulty sensor is made based on the cost function: if Cm ≤ 0.5, the sensor is fully faulty; if Cm > 0.5, the sensor is partially faulty, and the CADE technique is used to estimate the defective weights. This method can be extended to planar arrays and L-type arrays.