Computation-distributed probability hypothesis density filter
EURASIP Journal on Advances in Signal Processing volume 2016, Article number: 126 (2016)
Abstract
Particle probability hypothesis density filtering has become a promising approach for multi-target tracking due to its capability of handling an unknown and time-varying number of targets in nonlinear, non-Gaussian systems. However, its computational complexity increases linearly with both the number of received measurements and the number of particles, which can be very time consuming, particularly when numerous targets and heavy clutter exist in the surveillance region. To address this issue, we present a distributed computation particle probability hypothesis density (PHD) filter for target tracking. It runs several local decomposed particle PHD filters in parallel on multiple processing elements. Each processing element is responsible for a portion of the particles but receives all measurements and provides local state estimates. A central unit controls particle exchange among the processing elements and specifies a fusion rule to match and fuse the estimates from the different local filters. The proposed framework is suitable for parallel implementation. Simulations verify that the proposed method significantly accelerates computation while maintaining accuracy comparable to that of the standard particle PHD filter.
1 Introduction
Multi-target filtering is a class of dynamic state estimation problems in which the object of interest is a finite set consisting of a random number of elements whose individual values are also random [1]. Classical approaches, such as the nearest neighbor (NN) filter [2], the joint probabilistic data association filter (JPDA) [3], and multi-hypothesis tracking (MHT) [4], are based on the framework of filtering combined with data association. Recently, considerable work has been devoted to random finite set (RFS)-based approximations, such as the probability hypothesis density (PHD) filter [5, 6], the cardinalized PHD (CPHD) filter [7, 8], and the multi-target multi-Bernoulli (MeMBer) filter [9]. These methods avoid the data association problem and provide set-valued estimates of the target states.
Among these RFS-based filters, the MeMBer filter is better suited to low-clutter environments. The CPHD filter propagates both the intensity of the RFS and the posterior cardinality distribution; consequently, it provides a more accurate target number estimate but at lower efficiency. For the PHD filter, a particle implementation suitable for nonlinear and/or non-Gaussian MTT problems, and hence for more general scenarios, was proposed in [1]. Another implementation is a closed-form solution [6] under linear Gaussian assumptions, called the GM-PHD filter. The GM-PHD filter is generally the most efficient but is constrained to linear Gaussian systems, whereas the particle PHD filter can handle nonlinear or non-Gaussian MTT problems even in dense-clutter environments. However, the particle PHD filter suffers from high computational complexity because hundreds of thousands of particles are required. To reduce this complexity, Hong et al. [10] proposed a new update model for the particle PHD filter that is suitable for hardware implementation. In [11], a data-driven particle PHD filter was developed for real-time multi-target tracking in nonlinear/non-Gaussian systems with dense clutter. Using gating, Li et al. [12] accelerated the PHD filter to some extent. A pipeline approach was also introduced into the particle PHD filter in [13]. In contrast to the aforementioned methods, we attempt to improve the efficiency of the particle PHD filter through a distributed architecture, which is an effective way to accelerate its computation so as to satisfy the timing demands of MTT and of distributed hardware such as wireless sensor networks (WSNs) [14, 15]. The existing distributed PHD/CPHD filters are generally based on an architecture in which each sensor runs a local PHD/CPHD filter with its own local measurements, called the distributed sensing PHD (DSPHD) filter [16, 17].
In this paper, we propose a distributed implementation of the particle PHD filter using multiple processing elements (PEs) and a central unit (CU). The CU first transfers all measurements to all PEs, and then each PE runs a local particle PHD filter and provides labeled target state estimates independently and in parallel. To match the local estimates from different PEs, we replace the general particle PHD filter with the decomposed particle PHD filter [18] at each PE, which associates the estimated states with their corresponding measurements. Finally, the global estimates are obtained at the CU through the association and fusion of the local estimated states from the different PEs. Two fusion rules are presented for tracking scenarios with different clutter levels. Moreover, a particle exchange strategy is exploited to mitigate particle degradation.
Compared to the DSPHD filter, which exploits local measurements and information exchange to estimate the global states, our algorithm allocates the measurements differently. In our algorithm, each PE runs with all sensor measurements but only a subset of the particles, whereas in the general distributed PHD/CPHD filter, each PE runs with only some of the sensor measurements. In the DSPHD filters, the local means and covariances of the state estimates are exchanged among the PEs and the CU. Because the local filters use different particles and measurements, their local representations are not directly comparable, and they must reach consensus through additional mechanisms. In contrast, in our distributed computation PHD filter, all PEs share the same measurements and differ only in the subset of particles they hold; their representations are therefore compatible, and the PEs can exchange particle-weight pairs directly.
The main contributions of this paper consist of the following three parts:
First, we propose a distributed architecture for the PHD filter, which partially parallelizes the update and resampling steps of the particle PHD filter and thus significantly accelerates it. This architecture is valid not only for the particle PHD filter but also for the GM-PHD filter.
Second, we exploit the decomposed PHD filter to extract the estimated states and obtain their corresponding measurement labels, which guarantees the association and fusion of states from different PEs. Furthermore, we present two rules for the fusion of local state estimations, considering the influence of clutter on the PHD filter.
Finally, based on the architecture and algorithm of the proposed distributed particle PHD filter, the real-time performance is enhanced while the tracking accuracy remains comparable to that of the traditional particle PHD filter.
The remainder of this paper is organized as follows. The standard particle PHD filter is briefly described in Section 2. In Section 3, we present and analyze our distributed particle PHD filter in detail. Simulation results are presented in Section 4. Section 5 provides the conclusions.
2 Background
2.1 The PHD filter
The PHD filter was initially developed in the framework of finite set statistics (FISST) [5]. The PHD function $D_{\Xi}$ is the first-order moment of the random finite set (RFS) $\Xi$ and can be defined as

$$ D_{\Xi}(x) = E\left[\delta_{\Xi}(x)\right] = \int \delta_{X}(x)\, P_{\Xi}(dX) \qquad (1) $$

where $\delta_{\Xi}(x) = \sum_{y \in \Xi} \delta_{y}(x)$ is the random density representation of $\Xi$, $P_{\Xi}$ is the probability distribution of the RFS $\Xi$, and $E[\cdot]$ is the expectation operator. The PHD $D_{\Xi}$ of $\Xi$ is a unique function on the space $E$, up to a set of measure zero. The PHD has the following properties [19]: for any measurable subset $S\subseteq E$, the integral $\int_{S} D_{\Xi}(x)\lambda(dx)$ is the expected number of targets in $S$. In addition, the peaks of the PHD function provide estimates of the target states.
The PHD filter consists of two steps: a prediction step and an update step. It recursively propagates the posterior intensity using the multi-target transition density $f_{k|k-1}(\cdot)$ and the measurement likelihood $g_{k}(\cdot)$. Assuming that the RFS is Poisson, it has been shown that the recursion propagating the PHD $D_{k|k}$ of the multi-target posterior $p_{k|k}$ follows [5]

$$ D_{k|k} = \left(\Psi_{k} \circ \Phi_{k|k-1}\right) D_{k-1|k-1} \qquad (2) $$

where $\circ$ represents the composition of functions, $\Phi_{k|k-1}$ is the prediction operator, and $\Psi_{k}$ is the update operator. They are defined as follows:

$$ \left(\Phi_{k|k-1} D\right)(x) = \gamma_{k}(x) + \int \left[ e_{k|k-1}(\xi)\, f_{k|k-1}(x|\xi) + b_{k|k-1}(x|\xi) \right] D(\xi)\, \lambda(d\xi) \qquad (3) $$

$$ \left(\Psi_{k} D\right)(x) = \left[ 1 - P_{D}(x) + \sum_{z \in Z_{k}} \frac{P_{D}(x)\, g_{k}(z|x)}{\kappa_{k}(z) + \int P_{D}(\xi)\, g_{k}(z|\xi)\, D(\xi)\, \lambda(d\xi)} \right] D(x) \qquad (4) $$

where $P_{D}(\cdot)$ is the probability of detection, $\kappa_{k}(\cdot)=\lambda_{k} c_{k}(\cdot)$ denotes the intensity function of the clutter at time $k$ with $c_{k}(z)$ the clutter probability density, $\gamma_{k}(\cdot)$ denotes the intensity function of the RFS of spontaneous target births, $e_{k|k-1}(\xi)$ denotes the survival probability of a target with state $\xi$, and $b_{k|k-1}(\cdot|\xi)$ denotes the PHD of the RFS $B_{k|k-1}(\{\xi\})$ spawned by a target with previous state $\xi$.
2.2 Particle PHD filter
As an approximate implementation of the PHD filter, the particle PHD filter is composed of three steps (for detailed parameter notation, refer to [1]):
At time $k>0$, let $L_{k}$ and $J_{k}$ denote the number of surviving particles and newborn particles, respectively, and let $q_{k}$ denote the importance function.
1) Prediction step.

For $i=1,\ldots,L_{k-1}$, sample $\widetilde{x}_{k}^{i} \sim q_{k}(\cdot\,|x_{k-1}^{i}, Z_{k})$ and compute the predicted weights

$$ \widetilde{w}_{k|k-1}^{i}=\frac{\phi_{k|k-1}(\widetilde{x}_{k}^{i}, x_{k-1}^{i})}{q_{k}(\widetilde{x}_{k}^{i}|x_{k-1}^{i}, Z_{k})}\,w_{k-1}^{i} \qquad (5) $$

For $i=L_{k-1}+1,\ldots,L_{k-1}+J_{k}$, sample $\widetilde{x}_{k}^{i} \sim p_{k}(\cdot\,|Z_{k})$ and compute the weights of the newborn particles

$$ \widetilde{w}_{k|k-1}^{i}=\frac{\gamma_{k}(\widetilde{x}_{k}^{i})}{p_{k}(\widetilde{x}_{k}^{i}|Z_{k})}\,\frac{1}{J_{k}} \qquad (6) $$

2) Update step.

For each $z\in Z_{k}$, compute

$$ C_{k}(z)=\sum_{j=1}^{L_{k-1}+J_{k}}\psi_{k, z}(\widetilde{x}_{k}^{j})\, \widetilde{w}_{k|k-1}^{j} \qquad (7) $$

For $i=1,\ldots,L_{k-1}+J_{k}$, update the weights

$$ \widetilde{w}_{k}^{i}=\left[ \nu(\widetilde{x}_{k}^{i})+ \sum_{z\in Z_{k}}\frac{\psi_{k, z}(\widetilde{x}_{k}^{i})}{\kappa_{k}(z)+C_{k}(z)}\right] \widetilde{w}_{k|k-1}^{i} \qquad (8) $$

3) Resampling step.

Compute the total target mass $N_{k}=\sum_{j=1}^{L_{k-1}+J_{k}}\widetilde{w}_{k}^{j}$ and resample $\left\{\widetilde{x}_{k}^{i}, \widetilde{w}_{k}^{i}/N_{k}\right\}_{i=1}^{L_{k-1}+J_{k}}$ to obtain $\left\{{x_{k}^{i}}, {w_{k}^{i}}/N_{k}\right\}_{i=1}^{L_{k}}$.
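For concreteness, the following is a minimal Python/NumPy sketch of one recursion of this filter. The dynamic model, the birth density, the Gaussian likelihood, and all numeric parameters are illustrative placeholders, not the models used later in this paper; the measurements are assumed to live in the same space as the particles.

```python
import numpy as np

def phd_step(particles, weights, Z, J_k, p_detect=0.95, clutter_intensity=1e-4):
    """One recursion of a bootstrap particle PHD filter (Eqs. (5)-(8) plus resampling).

    particles: (L, d) surviving particles; weights: (L,) weights; Z: list of
    measurement vectors (assumed non-empty); J_k: number of birth particles.
    """
    rng = np.random.default_rng()
    L, d = particles.shape

    # Prediction, Eq. (5): propagate survivors through a placeholder dynamic model.
    p_survive = 0.95
    pred = particles + rng.normal(0.0, 1.0, size=(L, d))
    pred_w = p_survive * weights

    # Prediction, Eq. (6): draw birth particles from an assumed uniform birth density.
    birth = rng.uniform(-100.0, 100.0, size=(J_k, d))
    birth_w = np.full(J_k, 0.1 / J_k)           # total birth mass gamma_k = 0.1 (assumed)

    x = np.vstack([pred, birth])
    w = np.concatenate([pred_w, birth_w])

    # Update, Eqs. (7)-(8), with a placeholder Gaussian likelihood psi_{k,z}.
    def likelihood(z, xs):
        return np.exp(-0.5 * np.sum((xs - z) ** 2, axis=1))

    psi = np.array([p_detect * likelihood(z, x) for z in Z])    # shape (|Z|, L + J_k)
    C = psi @ w                                                 # Eq. (7): C_k(z)
    w = ((1.0 - p_detect) + (psi / (clutter_intensity + C)[:, None]).sum(axis=0)) * w

    # Resampling: preserve the total mass N_k (the expected target number).
    N_k = w.sum()
    L_out = max(int(round(N_k)), 1) * 200       # e.g., 200 particles per expected target
    idx = rng.choice(len(w), size=L_out, p=w / N_k)
    return x[idx], np.full(L_out, N_k / L_out), N_k
```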
As with the particle filter, the application of the particle PHD filter is limited by its computational complexity, which is caused primarily by the resampling step and also by the update step, in which all particles participate.
3 Distributed computation particle PHD filter
The particle PHD filter inherits from the particle filter the capability of handling nonlinear, non-Gaussian dynamics. However, it also suffers from a high computational cost, which limits its application in real-time systems. To improve its efficiency, we propose a distributed computation particle PHD filter (DCPPHD). This method is motivated by distributed resampling with non-proportional allocation (DRNA), proposed by Bolic et al. [20]. The underlying concept of DRNA is to attain parallelism by resampling independently within different subsets of particles. In a multicore computer with K processing elements (PEs), n=1,…,K, a particle filter is performed locally by each PE. To keep the aggregated weights balanced, the PEs exchange part of their particles with each other. Although DRNA provides an effective distribution framework, challenges remain in the distributed implementation of the particle PHD filter. First, more than one target generally exists in the surveillance region; thus, each PE provides a local set of state estimates rather than a single estimate. How to match these state estimates across PEs, that is, how to determine which estimates originate from the same target, is a key issue. Furthermore, after associating the state estimates, a fusion method is needed to obtain the global estimates. To address these problems, the proposed DCPPHD exploits the decomposed PHD filter [18] to obtain labeled state estimates corresponding to the measurements, which enables the direct association of the estimated states. The global estimates are then obtained from the labeled local state estimates. Moreover, fusion strategies are presented for different tracking scenarios.
3.1 General structure
In the proposed DCPPHD, the particles are divided into groups so that the update and resampling steps can be performed in parallel across groups. Each PE contains one group of particles and runs a particle PHD filter independently until the PEs exchange part of their particles with each other. A central unit (CU) is responsible for transferring the measurements to each PE, fusing the local estimates from the PEs, and providing the global state estimates. Assume that there are K PEs and that each PE runs a local particle PHD filter with M particles, so the total number of particles in the DCPPHD is N=MK. The CU transfers all measurements to the K PEs. After the prediction, update, and resampling steps, each local particle PHD filter provides its local estimates (target number and states) to the CU in parallel. In particular, resampling is performed locally without interacting with the particles in other PEs. After receiving the local estimates from the different PEs, the CU associates and fuses them according to the proposed rules. To avoid degeneracy of the particles in the local PEs, a particle exchange step is employed, in which each PE exchanges part of its particles with another PE.
The structure of the DCPPHD with four PEs is shown in Fig. 1. At each time step, the CU broadcasts all the measurements to all PEs; then, each PE runs its local decomposed particle PHD filter and transmits its local estimated states and corresponding index set \(\left \{{\zeta _{k}^{l}}, {I_{k}^{l}} \right \}\) to the CU, where \({I_{k}^{l}}\) denotes the index of the measurement that is related to \({\zeta _{k}^{l}}\). Subsequently the CU can associate the local estimated states based on the measurement index to construct global estimates. Following the local estimation step, each PE exchanges L particles with its neighboring PEs.
Note that the particle exchange step begins once the local estimations have been submitted to the CU.
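The control flow of one DCPPHD time step can be summarized in the following Python sketch. The callables local_filter, fuse, and exchange stand for the operations detailed in Section 3.2, and the dictionary-based return format is an assumption made for illustration; this is a structural sketch, not the authors' implementation.

```python
def dcpphd_step(pe_states, measurements, local_filter, fuse, exchange):
    """One DCPPHD time step: broadcast, local filtering per PE, fusion, particle exchange.

    pe_states   : list with one particle set per processing element (PE)
    measurements: all sensor measurements of the current scan (broadcast to every PE)
    local_filter, fuse, exchange: callables implementing the steps of Section 3.2
    """
    # 1) The CU broadcasts all measurements; every PE runs its local decomposed
    #    particle PHD filter. In a real deployment each iteration of this loop
    #    runs in parallel on its own PE.
    results = [local_filter(state, measurements) for state in pe_states]

    pe_states = [r["particles"] for r in results]
    labeled = [r["labeled_estimates"] for r in results]   # (state, measurement index) pairs

    # 2) The CU groups local estimates that share a measurement index and fuses
    #    each group into one global estimate (Rules 1 and 2 in Section 3.2.2).
    global_estimates = fuse(labeled)

    # 3) Neighboring PEs swap a few high-weight particles so that no PE's
    #    aggregate weight collapses (Section 3.2.3).
    pe_states = exchange(pe_states)

    return pe_states, global_estimates
```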
3.2 Algorithm of DCPPHD filter
The algorithm of the DCPPHD filter primarily involves four steps and runs on PEs and the CU: local filter, local estimation, particle exchange on PEs, and global estimation on the CU. The details of each step are summarized in the following.
3.2.1 Local filter and local labeled state estimation
To identify the set-valued state estimates from different PEs, we use the decomposed PHD filter rather than the standard particle PHD filter, because it relates the estimated states to the measurements. In the decomposed PHD filter, the estimated PHD is decomposed into several sub-PHDs in the weight domain. This concept comes from the fact that the multi-target PHD $D_{t|t}$ can be rewritten as

$$ D_{t|t}(x) = \Delta D_{t|t}(x|\phi) + \sum_{p=1}^{M_{t}} \Delta D_{t|t}(x|z_{t,p}) \qquad (9) $$

where

$$ \Delta D_{t|t}(x|z_{t,p}) = \frac{\psi_{t, z_{t,p}}(x)}{\kappa_{t}(z_{t,p}) + C_{t}(z_{t,p})}\, D_{t|t-1}(x) \qquad (10) $$

and

$$ \Delta D_{t|t}(x|\phi) = \nu(x)\, D_{t|t-1}(x) \qquad (11) $$

where $\Delta D_{t|t}(x|\phi)$ denotes the intensity contributed by targets for which no measurement is received. From Eq. (9), the PHD can be considered as the sum of $M_{t}+1$ sub-PHDs, and each of the first $M_{t}$ sub-PHDs corresponds to one measurement received at time $t$. Consequently, the target states can be estimated directly from the sub-PHDs, and each target state is labeled by its corresponding measurement. The local filter and local labeled state estimation algorithm is as follows:
1. Step 1: Prediction. At time $t-1$, assume that the particle set $\left\{x_{t-1}^{(n, m)}, w_{t-1}^{(n, m)}\right\}$, $n=1,\ldots, K$, $m=1, \ldots, L_{t-1}^{n}$, is available, where $L_{t-1}^{n}$ denotes the number of particles in the $n$th PE. For the $n$th group and $m=1, \ldots, L_{t-1}^{n}$, sample $\left\{ x_{t}^{(n, m)} \right\}$ from $q_{t}$ for surviving targets; for $m=L_{t-1}^{n}+1,\ldots, L_{t-1}^{n}+J_{t}/K$, sample $\left\{ x_{t}^{(n, m)} \right\}$ from $p(x)$ for newborn targets.
2. Step 2: Update. Let $Z_{t}$ denote the measurement set. For each group $n$, $n=1,\ldots,K$, and each observation $z_{t,p} \in Z_{t}$, $p=1,\ldots,M_{t}$, compute

$$ C_{t}(z_{t, p})=\sum_{i=1}^{L_{t-1}^{n}+J_{t}/K}\psi_{t, z_{t, p}}(\widetilde{x}_{t}^{(i,n)})\, \widetilde{w}_{t|t-1}^{(i, n)} \qquad (12) $$

$$ G_{t}^{i, p, n} = \frac{\psi_{t, z_{t,p}}(\widetilde{x}_{t}^{(i, n)})}{\kappa_{t}(z_{t, p})+C_{t}(z_{t, p})} \qquad (13) $$

then calculate the sub-weight of each particle for observation $z_{t,p}$

$$ \Delta\widetilde{w}_{t}^{i, p, n} = G_{t}^{i, p, n}\, \widetilde{w}_{t|t-1}^{(i, n)} \qquad (14) $$

Additionally, the particle sub-weight for targets with no received measurement is

$$ \Delta \widetilde{w}_{t}^{i, 0, n} = \nu(\widetilde{x}_{t}^{(i, n)})\,\widetilde{w}_{t|t-1}^{(i, n)} \qquad (15) $$

Based on Eq. (9), the particle weights can be computed as

$$ \widetilde{w}_{t}^{i, n} = \Delta\widetilde{w}_{t}^{i, 0, n} + \sum_{p=1}^{M_{t}} \Delta \widetilde{w}_{t}^{i, p, n} \qquad (16) $$
3. Step 3: Local estimation. For each measurement $z_{t,p}$, $p=1,\ldots,M_{t}$, compute the sum of the sub-weights $\Delta \widetilde{w}_{t}^{i, p, n}$ relevant to $z_{t,p}$ in group $n$,

$$ \Delta W_{t}^{n, p} = \sum_{i=1}^{L_{t-1}^{n}+J_{t}/K}\Delta\widetilde{w}_{t}^{i, p, n} \qquad (17) $$

Similarly, the sum of the sub-weights $\Delta W_{t}^{n, 0}$ corresponding to targets without observations is

$$ \Delta W_{t}^{n, 0} = \sum_{i=1}^{L_{t-1}^{n}+J_{t}/K}\Delta\widetilde{w}_{t}^{i, 0, n} \qquad (18) $$

The target number can be estimated by

$$ L{T_{t}^{n}} = \operatorname{round} \left(\sum_{i=1}^{L_{t-1}^{n}+J_{t}/K}\widetilde{w}_{t}^{i, n}\right) \qquad (19) $$

where $\operatorname{round}(\cdot)$ returns the nearest integer to its argument. Find the $L{T_{t}^{n}}$ largest sums $\Delta W_{t}^{n, p}$ and the index set ${I_{t}^{n}}$ of the measurements associated with them. The local estimated target state for each $l \in {I_{t}^{n}}$ can be calculated as $\zeta_{t, l} = \sum_{i} w_{t}^{i, l, n}\, \widetilde{x}_{t}^{(i, n)}$, with

$$ w_{t}^{i, l, n} = \frac{\Delta\widetilde{w}_{t}^{i, l, n}}{ \sum_{i'=1}^{L_{t-1}^{n}+J_{t}/K}\Delta \widetilde{w}_{t}^{i', l, n}} \qquad (20) $$

Once the $n$th PE has obtained its local estimate set ${\zeta_{t}^{l}}$, $l= 1, \ldots, L{T_{t}^{n}}$, it transmits the pairs $\left\{ \zeta_{t}^{l,n,p}, I_{t}^{l,p,n} \right\}$ to the CU, where $I_{t}^{l,p,n}$ indicates that the state estimate $\zeta_{t}^{l,n,p}$ from the $n$th PE originates from the measurement $z_{t,p}$.
4. Step 4: Resampling. Each PE calculates its own estimated number of targets $L{T_{t}^{i}}$ from its total particle mass according to Eq. (19). Consequently, the number of particles of the $i$th PE is updated as

$$ \tilde{N}_{i} = L{T_{t}^{i}}\, \rho \qquad (21) $$

where $\rho$ denotes the number of particles allocated per target. Resampling is then performed locally at each PE. Subsequently, the particle weights at each PE are normalized by their sum.
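The following Python sketch illustrates the decomposed update and the labeled local estimation of Steps 2 and 3 for a single PE. The Gaussian likelihood, the clutter intensity, and the detection probability are illustrative assumptions rather than the settings used in this paper.

```python
import numpy as np

def local_decomposed_update(particles, pred_weights, Z, p_detect=0.95,
                            clutter_intensity=1e-4):
    """Decomposed PHD update and labeled estimation for one PE (Eqs. (12)-(20)).

    particles    : (N, d) predicted particles of this PE
    pred_weights : (N,) predicted weights
    Z            : (M, d_z) measurements (broadcast by the CU)
    Returns updated weights and a list of (state estimate, measurement index) pairs.
    """
    N = len(particles)
    M = len(Z)

    def likelihood(z, x):                      # placeholder measurement likelihood
        return np.exp(-0.5 * np.sum((x[:, :z.size] - z) ** 2, axis=1))

    # Sub-weights: one column per measurement (Eqs. 12-14) plus a "missed" column (Eq. 15).
    sub_w = np.zeros((N, M + 1))
    sub_w[:, 0] = (1.0 - p_detect) * pred_weights            # Delta w^{i,0,n}
    for p, z in enumerate(Z):
        psi = p_detect * likelihood(z, particles)
        C = np.dot(psi, pred_weights)                        # Eq. (12)
        sub_w[:, p + 1] = psi / (clutter_intensity + C) * pred_weights   # Eqs. (13)-(14)

    weights = sub_w.sum(axis=1)                              # Eq. (16)

    # Labeled estimation (Eqs. 17-20): pick the measurements with the largest sub-masses.
    W = sub_w[:, 1:].sum(axis=0)                             # Delta W^{n,p}, Eq. (17)
    n_targets = int(round(weights.sum()))                    # Eq. (19)
    labeled_estimates = []
    for p in np.argsort(W)[::-1][:n_targets]:
        if W[p] <= 0.0:
            continue
        w_norm = sub_w[:, p + 1] / W[p]                      # Eq. (20)
        state = w_norm @ particles                           # weighted-mean state estimate
        labeled_estimates.append((state, int(p)))            # label = measurement index

    return weights, labeled_estimates
```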
3.2.2 Global estimation
After receiving the local state estimates $\left\{ \zeta_{t}^{n, l, p}, I_{t}^{l,p,n} \right\}$ from all PEs, the CU combines the local estimated states and their corresponding measurement indices from each group to construct the global estimates. The estimated states with the same measurement index are classified into a group $\zeta_{t,i} = \left\{\zeta_{t}^{1, l_{1}, p}, \zeta_{t}^{2, l_{2}, p}, \ldots, \zeta_{t}^{K, l_{K}, p}\right\}$, where $\zeta_{t}^{n, l_{n}, p} = \phi$ if the $n$th PE does not provide an estimated state for measurement $z_{t,p}$.
For each estimated state group ζ t,i , calculate a unified state estimation according to a fusion rule. Here, we propose two fusion rules for global estimation under the basic assumption “at most one measurement per target” [21].
Rule 1: Only if $|\zeta_{t,i}|=K$ is a global state obtained, as $\hat{x}_{t} = \frac{1}{K}\sum_{n=1}^{K}\zeta_{t}^{n,l_{n},p}$. This rule is designed for tracking scenarios with a higher clutter rate, in which false state estimates caused by false alarms tend to occur. Such false state estimates are rarely produced by all PEs simultaneously, so Rule 1 helps eliminate some of them: an observation is considered to originate from a target only when all PEs agree.
Rule 2: If $|\zeta_{t,i}| \geq K/2$ (half the number of PEs), a global state is obtained as $\hat{x}_{t} = \text{mean}(\zeta_{t,i})$. This rule is designed for tracking scenarios with lower clutter rates. Rule 2 relaxes the condition for declaring a target: once the majority of the PEs consider an observation to originate from a target, the global estimate also treats it as originating from a target.
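A compact Python sketch of the CU-side fusion is given below. It groups the labeled local estimates by measurement index and applies Rule 1 or Rule 2; the data layout (lists of (state, index) pairs) is an assumption made for illustration.

```python
import numpy as np
from collections import defaultdict

def fuse_by_measurement_index(local_estimates, num_pes, rule=1):
    """Fuse labeled local estimates from all PEs into global state estimates.

    local_estimates: one list per PE of (state_vector, measurement_index) pairs
    rule=1: require all K PEs to report the index (high-clutter scenarios)
    rule=2: require at least K/2 PEs to report the index (low-clutter scenarios)
    """
    groups = defaultdict(list)
    for pe_list in local_estimates:
        for state, meas_idx in pe_list:
            groups[meas_idx].append(np.asarray(state))

    threshold = num_pes if rule == 1 else num_pes / 2.0
    global_estimates = []
    for meas_idx, states in groups.items():
        if len(states) >= threshold:
            # Unified estimate: average of the local estimates sharing this label.
            global_estimates.append(np.mean(states, axis=0))
    return global_estimates

# Example: two PEs, each reporting one target labeled with measurement index 3.
est = fuse_by_measurement_index(
    [[(np.array([1.0, 0.2]), 3)], [(np.array([1.2, 0.1]), 3)]],
    num_pes=2, rule=1)
```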
3.2.3 Particle exchange
The particles in the $n$th PE tend to degenerate when its aggregate weight becomes negligible relative to the aggregated weights of the other PEs; the $n$th PE then hardly contributes to the approximation of the posterior probability distribution. To keep all PEs useful, neighboring PEs should exchange a portion of their particles.
The $n$th PE transmits $P$ particles to its neighbor. The particles selected for transfer from the $n$th PE to the $s$th PE are denoted as $\mathcal{M}_{t}^{n,s}=\left\{x_{t}^{(n,i_{1})},\ldots,x_{t}^{(n,i_{P})} \right\}$. After the exchange step, the $i$th PE therefore holds its own particle-weight pairs together with those received from its neighbor.
One possible exchange method is for each PE to select its $P$ highest-weight particles for exchange. Let $\hat{p}_{i}(X_{k}|Z_{k})$ be the local approximation of the posterior pdf formed from the weighted particles of the $i$th local PHD filter. Let $\hat{w}_{k}^{i,j}$ denote the weight of the $j$th particle of the $i$th PE after the exchange step but prior to resampling, and let $\hat{N}_{i}=N_{i}+P$ be the total number of particles of the $i$th PE after the exchange step and prior to resampling.
It is known that the most representative particles of $\hat{p}_{i}(X_{k}|Z_{k})$ are those with the highest weights. We use the Kullback-Leibler (KL) divergence to measure the distance between two pdfs: the smaller the KL divergence, the closer the two pdfs. Consider the approximate posterior density $\hat{p}_{i}^{'}(X_{k}|Z_{k})$ computed using only the $P$ highest-weight particles. The KL divergence between $\hat{p}_{i}(X_{k}|Z_{k})$ and $\hat{p}_{i}^{'}(X_{k}|Z_{k})$ can be expressed in terms of the retained weight mass: maximizing $\sum_{j=1}^{P} w_{k}^{i,j}$ is equivalent to minimizing $D(\hat{p}_{i},\hat{p}_{i}^{'})$.
Therefore, the exchange information is constructed by selecting the $P$ highest-weight particles to be communicated to the other PEs.
The global posterior $\hat{p}(X_{t}|Z_{t})$ can be represented through the local posteriors $\hat{p}_{i}(X_{t}|Z_{t})$, and the cumulative sum of the local KL divergences, with weights $\Omega_{i}$ and $\Omega = \sum_{i=1}^{K}\Omega_{i}$, measures how far the local posteriors are from the global one. From this measure, we can infer that the higher the number of exchanged particles, the closer the local and global posteriors become; as $P$ increases, each local posterior approaches the global posterior $\hat{p}(X_{t}|Z_{t})$. However, increasing the number of exchanged particles leads to a higher communication burden. In particular, in an all-to-all network configuration, in which each PE receives every other PE's local approximate posterior, the best estimation is obtained compared with other types of networks. The network and the set of exchanged particles must therefore strike a compromise between the communication burden and the estimation quality; we simply select the highest-weight particles for exchange to balance these two factors.
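As a concrete illustration, the following Python sketch implements the exchange step on a ring of PEs, where each PE sends copies of its P highest-weight particles to its neighbor. The ring topology and the choice to append (rather than overwrite) the received particles are assumptions consistent with $\hat{N}_{i}=N_{i}+P$ above.

```python
import numpy as np

def exchange_particles(pe_particles, pe_weights, P):
    """Each PE sends its P highest-weight particles to the next PE on a ring.

    pe_particles: list of (N_i, d) arrays, one per PE
    pe_weights  : list of (N_i,) arrays, one per PE
    Returns new lists in which every PE has N_i + P particles (before resampling).
    """
    K = len(pe_particles)
    sent = []
    for x, w in zip(pe_particles, pe_weights):
        top = np.argsort(w)[-P:]                 # indices of the P highest weights
        sent.append((x[top].copy(), w[top].copy()))

    new_particles, new_weights = [], []
    for i in range(K):
        rx, rw = sent[(i - 1) % K]               # receive from the previous PE on the ring
        new_particles.append(np.vstack([pe_particles[i], rx]))
        new_weights.append(np.concatenate([pe_weights[i], rw]))
    return new_particles, new_weights

# Example with two PEs, four particles each, exchanging P = 1 particle.
xs = [np.random.randn(4, 2), np.random.randn(4, 2)]
ws = [np.array([0.1, 0.4, 0.2, 0.3]), np.array([0.25, 0.25, 0.4, 0.1])]
xs2, ws2 = exchange_particles(xs, ws, P=1)
```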
3.3 Analysis of DCPPHD
The efficiency improvement obtained by parallel computing can be roughly estimated using Amdahl's law. According to Amdahl's law [22], the acceleration of the DCPPHD is given by

$$ S(K) = \frac{1}{p + \dfrac{1-p}{K}} $$

where $S(K)$ is the theoretical acceleration achieved with $K$ PEs and $p$ is the proportion of the algorithm that is executed sequentially. For a completely parallelizable algorithm ($p=0$), the DCPPHD has a theoretical maximum achievable acceleration of $K$. Note that Amdahl's law only represents a theoretically predicted acceleration; in practical cases, the extra time introduced by parallel computing should also be taken into consideration.
In the case of the DCPPHD filter, the parallel operations take a higher percentage; thus, the DCPPHD algorithm has the potential to achieve good acceleration.
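For illustration, a small Python helper computes this bound; the sample values of p and K below are arbitrary and chosen only to show how quickly the sequential fraction caps the achievable speedup.

```python
def amdahl_speedup(p_sequential, num_pes):
    """Theoretical speedup S(K) = 1 / (p + (1 - p) / K) from Amdahl's law."""
    return 1.0 / (p_sequential + (1.0 - p_sequential) / num_pes)

# With 8 PEs: a 5% sequential fraction already limits the speedup to about 5.9x,
# and a 20% sequential fraction limits it to about 3.3x.
for p in (0.05, 0.20):
    print(f"p = {p:.2f}: S(8) = {amdahl_speedup(p, 8):.2f}")
```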
3.4 Remark
The distributed computation PHD filter can be extended to jump Markov systems (JMS) in which the transition probability is a constant rather than a variable following some distribution. For jump Markov systems, several versions of the PHD filter have been proposed: (1) the augmented PHD filter and its extensions for linear and nonlinear jump Markov models proposed by Pasha et al. [23]; (2) the generalized pseudo-Bayesian (GPB) PHD filter and the best-fitting Gaussian (BFG) PHD filter proposed in [24] and [25], respectively. These methods can be extended to distributed versions based on the mechanism proposed in this paper. For example, in the augmented PHD filter, multiple PHD filters work independently under different models; consequently, each of them can be converted into our DCPPHD filter running in parallel on the PEs, which does not affect the key idea of the augmented PHD filter. On the other hand, for the JMS PHD filters proposed in [24] and [25], the same approximation of the jump Markov system by a single model via GPB or BFG can also be performed on the CU in our DCPPHD before filtering. Therefore, our DCPPHD filter can be applied to JMS under the assumption that the transition probability is constant. However, for the semi-Markovian jump systems discussed in [26] and [27], the transition probabilities are time-varying because the sojourn time follows, e.g., a Weibull, Laplace, or Gaussian distribution; this does not satisfy the assumptions of the current JMS PHD filters, so our DCPPHD cannot be applied to semi-Markovian jump systems directly, since the transition probabilities are unknown and would need to be estimated at each time step. For details on the augmented PHD filter, the GPB PHD filter, and the BFG PHD filter, refer to [23–25], respectively.
As a typical class of distributed systems, multi-agent systems have been widely used in many applications [28–31]. The PHD filter, including both the proposed DCPPHD and the DSPHD, can be applied to multi-agent systems with switching network topologies and periodic sampling through an extension to the consensus PHD filter. However, none of these filters are suitable for stochastic switching in multi-agent systems, because of the fluctuation of the deterministic parameters and of the timescales of communication and observation. To address this problem, mean-square and almost-sure convergence can be introduced into the PHD filter for periodic multi-agent systems. In [32], the problems of random link failures, stochastic communication noises, and Markovian switching topologies are addressed, and mean-square and almost-sure convergence are established. However, for the multi-agent systems with event-based sampling discussed in [33] and [34], the DSPHD filter and our algorithm cannot be applied directly, because non-periodic event-based samples are taken only when certain events occur rather than at a fixed sampling interval, which violates the model assumptions of the PHD filter. Therefore, additional approximations or other mechanisms are required to adapt the PHD filter to such multi-agent systems. One possible solution may be developed along the lines of [35], in which a modified Kalman filter for target tracking is combined with send-on-delta sampling [36]: when no measurement is received, the last received measurement with an inflated (dynamic) measurement noise is used in the update step.
4 Simulation results
In this section, we provide results from experiments on simulated data to evaluate the performance of the proposed DCPPHD filter. The DCPPHD filter is compared to two traditional particle PHD filters with different particle numbers. The total number of particles in the DCPPHD is the same as that of the first particle PHD filter (PHD1); both use K times as many particles as the second particle PHD filter (PHD2), i.e., the number of particles in PHD2 equals the number of particles in a single PE of the DCPPHD filter. The purpose of the comparison with PHD2 is to verify whether the performance of the DCPPHD filter degenerates to that of a single PE. The algorithms are implemented in MATLAB and run on an HP Z600 workstation with two Intel Xeon processors at 2.53 GHz and 6 GB of RAM, running 64-bit Windows 7 Professional.
4.1 Model
For simplicity, we consider a two-dimensional tracking scenario. The dynamic state model is a nearly constant-velocity model of the form

$$ \mathbf{x}_{k} = \left[ \begin{array}{cccc} 1 & T & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & T \\ 0 & 0 & 0 & 1 \end{array} \right] \mathbf{x}_{k-1} + G\, w_{k} $$

where $\mathbf{x}_{k}=[ x_{k}, \dot{x}_{k}, y_{k}, \dot{y}_{k} ]^{T}$ is the target state vector at time $k$, $T=1$ is the sampling period, and $G$ is the process-noise input matrix. $[x_{k}, y_{k}]$ is the position and $[\dot{x}_{k},\dot{y}_{k}]$ is the velocity. $w_{k}=[ {\omega_{k}^{x}}, {\omega_{k}^{y}} ]^{T}$ is a vector of independent zero-mean Gaussian white noises with standard deviations $\left[ \begin{array}{cc} 0.025 & 0 \\ 0 & 4 \\ \end{array} \right]$. There is a single sensor in the scenario, and the target-originated measurements consist of bearing and range,

$$ \mathbf{z}_{k} = \left[ \begin{array}{c} \arctan\left(y_{k} / x_{k}\right) \\ \sqrt{x_{k}^{2}+y_{k}^{2}} \end{array} \right] + \left[ \begin{array}{c} v_{k}^{\theta} \\ {v_{k}^{R}} \end{array} \right] $$

where ${v_{k}^{R}}$ and $v_{k}^{\theta}$ are mutually independent zero-mean Gaussian white noises with variances $\sigma_{R}=1\ \mathrm{m}^{2}$ and $\sigma_{\theta}=0.01\ \mathrm{rad}^{2}$. Clutter is uniformly distributed over the surveillance region, with the number of clutter returns per scan Poisson distributed with rate $\lambda_{k}$. The probability of target survival is 0.9, and the detection probability is $P_{D,k}=1$. The intensity of target birth is $\mathcal{N}(\cdot;\overline{x},Q)$, where $\mathcal{N}(\cdot;\overline{x},Q)$ denotes a normal density with mean $\overline{x}$ and covariance $Q$. In this experiment, $\overline{x} = \left[ 0, 3, 0, -3 \right]^{T}$ and $Q=\mathrm{diag}([10, 1, 10, 1])$. The clutter rate is evaluated at three levels ($\lambda=0, 10, 50$).
The surveillance region is $[-\pi,\pi]\ \mathrm{rad} \times [-1000, 1000]\ \mathrm{m}$. There are eight PEs in the experiment. The initial number of particles is 2000.
Every existing or newborn target is assigned 200 particles in each group.
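A Python sketch of this simulation model is given below. The constant-velocity transition and the bearing-range measurement follow the description above; the process-noise gain structure and the clutter generator are our own illustrative assumptions, not the exact settings of the paper.

```python
import numpy as np

T = 1.0                                      # sampling period
F = np.array([[1, T, 0, 0],                  # constant-velocity transition matrix
              [0, 1, 0, 0],
              [0, 0, 1, T],
              [0, 0, 0, 1]], dtype=float)
G = np.array([[T**2 / 2, 0],                 # assumed process-noise gain
              [T,        0],
              [0, T**2 / 2],
              [0,        T]], dtype=float)
SIGMA_W = np.array([0.025, 4.0])             # process-noise standard deviations
SIGMA_THETA, SIGMA_R = 0.1, 1.0              # measurement noise std (rad, m)

rng = np.random.default_rng(0)

def propagate(x):
    """One step of the dynamic model x_k = F x_{k-1} + G w_k."""
    return F @ x + G @ (SIGMA_W * rng.standard_normal(2))

def measure(x):
    """Bearing-range measurement of a single target state [x, vx, y, vy]."""
    theta = np.arctan2(x[2], x[0]) + SIGMA_THETA * rng.standard_normal()
    r = np.hypot(x[0], x[2]) + SIGMA_R * rng.standard_normal()
    return np.array([theta, r])

def clutter(rate):
    """Poisson-distributed clutter, uniform over the bearing-range region (assumed)."""
    n = rng.poisson(rate)
    return np.column_stack([rng.uniform(-np.pi, np.pi, n),
                            rng.uniform(0.0, 1000.0, n)])

# Example: propagate a target born at the mean of the birth intensity and observe it.
x = np.array([0.0, 3.0, 0.0, -3.0])
for _ in range(3):
    x = propagate(x)
Z = [measure(x)] + list(clutter(rate=10))
```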
4.2 Simulation results
We run 100 independent simulations of the model given in Subsection 4.1.
The true trajectories of the five targets and the estimated positions by PHD1 and DCPPHD are plotted in Figs. 2 a–c, respectively. The red points denote the estimated positions. As shown, the estimated positions of the DCPPHD and PHD1 are similar, and they are all close to the true tracks.
The true target number and the average estimated target number by DCPPHD, PHD1, and PHD2 at each scan are presented in Fig. 3. It can be observed that under the same simulation conditions, PHD1 and the DCPPHD filter achieve similar estimation accuracy, whereas PHD2 has the largest estimated error.
We use the optimal sub-pattern assignment (OSPA) distance as the multi-target miss-distance metric [37], with parameters p=1 and c=100 in our evaluation. Figure 4 shows the OSPA distances of the DCPPHD, PHD1, and PHD2, and the execution times of the three methods are illustrated in Fig. 5. From Fig. 5, it is observed that the time of the DCPPHD is considerably less than that of PHD1 and slightly more than that of PHD2. Note that the time of the DCPPHD is the theoretical execution time.
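For reference, a minimal Python sketch of the OSPA distance with p = 1 and cutoff c is shown below; it relies on scipy's linear_sum_assignment for the optimal assignment and is a generic illustration, not the evaluation code used in this paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, c=100.0):
    """OSPA distance of order p = 1 with cutoff c between two sets of state vectors."""
    X, Y = list(X), list(Y)
    if len(X) == 0 and len(Y) == 0:
        return 0.0
    if len(X) > len(Y):                      # ensure |X| <= |Y|
        X, Y = Y, X
    if len(X) == 0:                          # only the cardinality penalty remains
        return float(c)
    m, n = len(X), len(Y)
    # Pairwise Euclidean distances, cut off at c.
    D = np.array([[min(np.linalg.norm(x - y), c) for y in Y] for x in X])
    row, col = linear_sum_assignment(D)      # optimal sub-pattern assignment
    # Localization cost of assigned pairs plus cardinality penalty c per unassigned point.
    return (D[row, col].sum() + c * (n - m)) / n

# Example: two estimates compared against three true positions.
truth = [np.array([0.0, 0.0]), np.array([50.0, 50.0]), np.array([200.0, -100.0])]
est = [np.array([1.0, -1.0]), np.array([52.0, 49.0])]
print(ospa(est, truth, c=100.0))
```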
Table 1 shows the average results over 100 Monte Carlo runs, including the execution time and the mean and standard deviation (std) of the OSPA distance. One objective of the simulations is to assess the potential efficiency of the DCPPHD. For this purpose, we record the following times:

- $T_{cp}$ is the average execution time used by the operations of the DCPPHD that cannot be implemented in parallel. It is calculated as the total time of global estimation and particle exchange.

- $T_{pi}$ is the average execution time of the operations of the DCPPHD that can be implemented in parallel.

- $T_{si}$ is the average execution time of the overall implementation of the particle PHD filter. For the DCPPHD, it is calculated as $T_{si}=T_{cp}+T_{pi}/K$.
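This timing bookkeeping can be reproduced with a few lines of Python; the numeric values below are placeholders for illustration only, not the measurements reported in Table 1.

```python
def dcpphd_effective_time(t_cp, t_pi, num_pes):
    """Effective DCPPHD time T_si = T_cp + T_pi / K and acceleration over a serial run."""
    t_si = t_cp + t_pi / num_pes
    speedup = (t_cp + t_pi) / t_si           # theoretical acceleration versus 1 PE
    return t_si, speedup

# Placeholder values (seconds per scan): serial part 0.02 s, parallelizable part 0.50 s.
t_si, s = dcpphd_effective_time(t_cp=0.02, t_pi=0.50, num_pes=8)
print(f"T_si = {t_si:.3f} s, theoretical acceleration = {s:.2f}x")
```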
As shown in Table 1, the mean OSPA distances of the DCPPHD filter and PHD1 are smaller than that of PHD2 at all clutter rates, showing that the DCPPHD filter has accuracy comparable to PHD1 and outperforms PHD2. This result indicates that the DCPPHD achieves the same accuracy level as PHD1 and does not degenerate to the PHD2 level, i.e., to that of a single PE of the DCPPHD. Although each PE has the same number of particles as PHD2, the estimates obtained by the fusion strategy over eight PEs are better than the result of each individual PE.
The approximate times consumed by these approaches are also listed in Table 1, which shows that the DCPPHD has great potential ($T_{pi}/K \gg T_{cp}$) to achieve a much shorter execution time while producing the same level of accuracy as the traditional particle PHD filter. In the different clutter environments, the theoretical maximum achievable accelerations are 3.38, 5.03, and 6.68, respectively.
4.3 Comparison with the parallel particle PHD filter
In the work proposed in [38], the measurements at each filtering iteration are separated and sent to different PEs. Each PE runs an independent filter and obtains conditionally independent local estimates; in this respect, it can be viewed as a distributed algorithm. We compare our algorithm with this parallel PHD algorithm.
Figure 6 presents the OSPA results over 100 Monte Carlo trials for a clutter rate of r=10 for both algorithms; the results of the two filters are very close. The average computing times of the algorithms are presented in Fig. 7.
It can be observed that our algorithm provides an OSPA comparable to that of the parallel PHD filter, which indicates that our filter has accuracy similar to the parallel PHD filter. Although the execution times of the two filters are similar, note that the parallel PHD filter uses more PEs than our method, which suggests that our method can become more efficient if more PEs are used. Moreover, the data communication of the parallel PHD filter is significantly larger than that of our method: it not only broadcasts all particles from the CU to the PEs and collects all the updated particles back at the CU but also sends one measurement to each PE at every iteration, all of which causes significant communication overhead. Our method simply broadcasts all the measurements to all the PEs, and the particles do not need to travel between the CU and the PEs; only part of the particles move between neighboring PEs. Therefore, the communication overhead of our method is considerably lower.
5 Conclusions
This paper presents a distributed scheme for the particle PHD filter with several PEs and a CU. The PEs run local decomposed particle PHD filters independently, apart from occasionally exchanging part of their particles; the CU then associates and fuses the local state estimates submitted by all PEs. This architecture and fusion strategy make the parallelization of the local particle PHD filters possible and provide filtering accuracy comparable to that of the sequential particle PHD filter. The advantage of the DCPPHD filter over the particle PHD filter lies in the fact that the DCPPHD filter can be implemented in parallel. The simulation results verify the performance of the DCPPHD filter in terms of accuracy and efficiency.
Possible topics for future work include the following: (1) various methods for the exchange step, such as random exchange [20], exchange based on the divergence between the empirical distribution and the complete particle population, or exchange of representative particles, to further improve the filter's performance; (2) implementation of the DCPPHD algorithm in practical WSNs, particularly under constraints on real-time operation and communication capabilities (compared to a centralized PHD filter).
References
BN Vo, S Singh, A Doucet, Sequential Monte Carlo methods for multi-target filtering with random finite sets. IEEE Trans. Aerosp. Electron. Syst. 41(4), 1224–1245 (2008).
RA Singer, JJ Stein, in Decision and Control, 1971 IEEE Conference On, 10. An optimal tracking filter for processing sensor data of imprecisely determined origin in surveillance systems (IEEE, Washington, D.C., 1971), pp. 171–175.
TE Fortmann, Y Bar-Shalom, M Scheffe, Sonar tracking of multiple targets using joint probabilistic data association. IEEE J. Ocean. Eng. 8(3), 173–184 (1983).
SS Blackman, Multiple hypothesis tracking for multiple target tracking. IEEE Aerosp. Electron. Syst. Mag. 19(1), 5–18 (2004).
R Mahler, Multitarget Bayes filtering via first-order multitarget moments. IEEE Trans. Aerosp. Electron. Syst. 39(4), 1152–1178 (2003).
B-N Vo, W-K Ma, The Gaussian mixture probability hypothesis density filter. IEEE Trans. Signal Process. 54(11), 4091–4104 (2006).
R Mahler, PHD filters of higher order in target number. IEEE Trans. Aerosp. Electron. Syst. 43(4), 1523–1543 (2007).
BT Vo, BN Vo, A Cantoni, Analytic implementations of the cardinalized probability hypothesis density filter. IEEE Trans. Signal Process. 55(7), 3553–3567 (2007).
B-T Vo, B-N Vo, A Cantoni, The cardinality balanced multi-target multi-Bernoulli filter and its implementations. IEEE Trans. Signal Process. 57(2), 409–423 (2009).
S Hong, L Wang, Z-G Shi, KS Chen, Simplified particle PHD filter for multiple-target tracking: algorithm and architecture. Prog. Electromagn. Res. 120:, 481–498 (2011).
Y Zheng, Z Shi, R Lu, S Hong, X Shen, An efficient data-driven particle PHD filter for multitarget tracking. IEEE Trans. Ind. Inform. 9(4), 2318–2326 (2013).
T Li, S Sun, TP Sattar, High-speed sigma-gating SMC-PHD filter. Sig. Process. 93(9), 2586–2593 (2013).
Z-G Shi, Y Zheng, X Bian, Z Yu, Threshold-based resampling for high-speed particle PHD filter. Prog. Electromagn. Res. 136:, 369–383 (2013).
L Liu, X Zhang, H Ma, in INFOCOM 2009, IEEE. Dynamic node collaboration for mobile target tracking in wireless camera sensor networks (IEEE, Rio de Janeiro, 2009), pp. 1188–1196.
X Zhang, Adaptive control and reconfiguration of mobile wireless sensor networks for dynamic multi-target tracking. IEEE Trans. Autom. Control. 56(10), 2429–2444 (2011).
G Battistelli, L Chisci, C Fantacci, A Farina, A Graziano, Consensus CPHD filter for distributed multitarget tracking. IEEE J. Sel. Top. Sign. Process. 7(3), 508–520 (2013).
G Battistelli, L Chisci, C Fantacci, A Farina, RP Mahler, in Signal processing, sensor/information fusion, and target recognition XXIV. Distributed fusion of multitarget densities and consensus PHD/CPHD filters (SPIE, Baltimore, 2015), pp. 94740–94740.
L Zhao, P Ma, X Su, H Zhang, in Information Fusion (FUSION), 2010 13th Conference On. A new multi-target state estimation algorithm for PHD particle filter (IEEE, Edinburgh, 2010), pp. 1–8.
K Panta, B-N Vo, S Singh, A Doucet, in Signal processing, sensor fusion, and target recognition XIII, 5429. Probability hypothesis density filter versus multiple hypothesis tracking (SPIE, Orlando, 2004), pp. 284–295.
M Bolic, PM Djuric, S Hong, Resampling algorithms and architectures for distributed particle filters. IEEE Trans. Sign. Process. 53(7), 2442–2450 (2005).
R Streit, The probability generating functional for finite point processes, and its application to the comparison of PHD and intensity filters. J. Adv. Inf. Fusion. 8:, 119–132 (2013).
JL Gustafson, Reevaluating Amdahl's law. Commun. ACM. 31(5), 532–533 (1988).
SA Pasha, B-N Vo, HD Tuan, W-K Ma, A gaussian mixture PHD filter for jump Markov system models. IEEE Trans. Aerosp. Electron. Syst. 45(3), 919–936 (2009).
C Ouyang, H-b Ji, Z-q Guo, Extensions of the SMC-PHD filters for jump Markov systems. Sign. Process. 92(6), 1422–1430 (2012).
W Li, Y Jia, Gaussian mixture PHD filter for jump Markov models based on best-fitting Gaussian approximation. Sign. Process. 91(4), 1036–1042 (2011).
Y Wei, J Qiu, S Fu, Mode-dependent nonrational output feedback control for continuous-time semi-Markovian jump systems with time-varying delay. Nonlinear Anal. Hybrid Syst. 16:, 52–71 (2015).
Y Wei, J Qiu, HR Karimi, M Wang, Filtering design for two-dimensional Markovian jump systems with state-delays and deficient mode information. Inf. Sci. 269:, 316–331 (2014).
F Previtali, L Iocchi, in Multisensor Fusion and Integration for Intelligent Systems (MFI), 2015 IEEE International Conference On. Ptracking: distributed multi-agent multi-object tracking through multi-clustered particle filtering (IEEE, 2015), pp. 110–115.
AT Kamal, JH Bappy, JA Farrell, AK Roy-Chowdhury, Distributed multi-target tracking and data association in vision networks. IEEE Trans. Pattern Anal. Mach. Intell. 38(7), 1397–1410 (2016).
R Claessens, A de Waal, P de Villiers, A Penders, G Pavlin, K Tuyls, in Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems AAMAS ’15. Multi-agent target tracking using particle filters enhanced with context data: (demonstration) (ACM, Istanbul, 2015), pp. 1933–1934.
C Lin, Z Lin, R Zheng, G Yan, G Mao, Distributed source localization of multi-agent systems with bearing angle measurements. IEEE Trans. Autom. Control. 61(4), 1105–1110 (2016).
Q Zhang, J-F Zhang, Distributed parameter estimation over unreliable networks with Markovian switching topologies. IEEE Trans. Autom. Control. 57(10), 2545–2560 (2012).
Y Fan, L Liu, G Feng, Y Wang, Self-triggered consensus for multi-agent systems with zeno-free triggers. IEEE Trans. Autom. Control. 60(10), 2779–2784 (2015).
Y Fan, J Yang, Average consensus of multi-agent systems with self-triggered controllers. Neurocomputing. 177:, 33–39 (2016).
JW Marck, J Sijs, in Sensor Technologies and Applications (SENSORCOMM), 2010 Fourth International Conference On. Relevant sampling applied to event-based state-estimation (IEEEVenice, 2010), pp. 618–624.
YS Suh, VH Nguyen, YS Ro, Modified Kalman filter for networked monitoring systems employing a send-on-delta method. Automatica. 43(2), 332–338 (2007).
D Schuhmacher, B-T Vo, B-N Vo, A consistent metric for performance evaluation of multi-object filters. IEEE Trans. Sign. Process. 56(8), 3447–3457 (2005).
T Li, S Sun, M Bolić, JM Corchado, Algorithm design for parallel implementation of the SMC-PHD filter. Sign. Process. 119:, 115–127 (2016).
Acknowledgements
This paper was supported by the National Natural Science Foundation of China (NSFC) under Grants 61175027 and 61305013, by the Fundamental Research Funds for the Central Universities (Grant No. HIT.NSRIF.2014071), and by the Research Fund for the Doctoral Program of Higher Education of China (No. 20132302120044).
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Wang, J., Zhao, L., Su, X. et al. Computation-distributed probability hypothesis density filter. EURASIP J. Adv. Signal Process. 2016, 126 (2016). https://doi.org/10.1186/s13634-016-0418-z