Abstract
The correlation-extremum systems theory is extended to stochastic systems with random structure (switching parameters), and new suboptimal (owing to the nonlinear system state and measurement equations) filtering and parameter identification algorithms, together with their linearized form, are derived. These algorithms provide adaptive features and reliable operation for the proposed combined correlation-extremum dynamic systems with random structure under environmental influences, and represent new solutions of the linearization problem for the case of great estimation errors. The obtained linearized solution simplifies the a priori investigation of filter performance at the signal processing system design stage.
1. Introduction
The known nonlinear filtering algorithms (e.g., the extended Kalman filter (EKF) and its numerous extensions), developed for nonlinear systems (in particular, for radar or optical tracking of an airborne target whose dynamics is described by nonlinear differential equations), are based on linearization of the nonlinear functions in the state dynamics and measurement equations with respect to the estimation errors or about the current state estimate.
These algorithms, providing optimal or suboptimal (for nonlinear systems) estimates (e.g., in the least-squares, maximum-likelihood, or maximum a posteriori probability density sense), remain valid when the estimation errors are small enough to justify linearization. At the same time, the normal conditional probability density still exists in the case of great estimation errors and spreads out with increasing variances. In this case, application of the traditional linearization theory becomes incorrect.
Among the previous work devoted to the linearization problem in the case of great estimation errors or great biases, one of the first approaches was taken in [1]. There, a two-feedback filtering system (with deterministic structure) was suggested whose operating conditions change sequentially depending on the estimation error of the observed object coordinates: as soon as the parameter estimation error exceeded a determined threshold, a supplementary initial-value bias was introduced so that the system motion with respect to the newly inserted position could be described rather adequately by a linear differential equation.
Linearization of equations is known as one of the causes of unstable filter behavior. The typical causes of EKF divergence analyzed and generalized in [2] include: inaccurate description of the state and measurement models; linearization of equations (polynomial simplifications (commonly second-order) of the dynamics and measurement-signal equations); lack of full information on the real physical problem; simplifying assumptions made to obtain a mathematical description of the problem; and errors in modelling the probabilistic characteristics of noises and unknown input signals. Furthermore, filter instability may result from round-off errors, which usually arise in digital implementation of filtering algorithms and may lead to the loss of positive definiteness and symmetry of the estimation error covariance matrix.
The lack of stability and the possible presence of a wide range of uncertain parameter values in the system and measurement models generate a need for adaptive filtering algorithms and corresponding adaptive systems.
Many scientific investigations have already been performed in the class of adaptive filtering schemes combining state vector estimation with parameter identification, using algorithms for both adaptive estimation and control that represent a bank of elemental estimators, each matched to a possible parameter value (for example, [3–5]). In [4] a robust adaptive state-feedback control algorithm is proposed for a class of single-input/single-output uncertain nonlinear systems affected both by uncertain time-varying parameters (with known bounds) and by unknown time-varying bounded disturbances. Finite-dimensional filters for linear Gaussian state-space models derived in [5] can be used with the expectation-maximization algorithm to yield maximum-likelihood estimates of the model parameters, with a possibility of parallel implementation on a multiprocessor system.
Many authors applying EKFs to tracking problems (Moura et al. [6] among the first; see also [7]) have come to the conclusion that numerical ill-conditioning may arise in this approach if the ratio between the maximum and minimum eigenvalues of the covariance matrix is not small enough. To overcome this problem, the use of an EKF with a square-root algorithm combined with maximum a posteriori probability techniques was proposed in [7].
The noise identification problem is an important part of adaptive estimation (especially in maneuvering-target tracking in measurement noise with rapidly varying statistics). In [8–10] the typically rapid changes of the background noises, in comparison with the rather slow changes of the actual target image (particularly when a background is swept behind a moving target), are stated as a common problem in discriminating between target and background (e.g., infrared (IR)) intensity patterns.
A further restriction of the Kalman filtering algorithms (linear or suboptimal extended ones) is their capacity to process only time-varying signal functions. One of the earlier approaches to processing spatial-time-varying (STV) signals, such as two-dimensional IR (FLIR) target images, through a combination of an enhanced correlator with a linear Kalman filter was considered in [8]. In the more recent work [9], a missile target tracker was designed using a filter/correlator (with adaptive target shape identification) based on forward-looking IR (FLIR) sensor measurements to track the center-of-intensity of a hard-body/plume combination, together with another filter using Doppler information to achieve smaller bias and error variance.
The design of a moving-bank multiple-model adaptive controller, incorporating a parallel bank of Kalman-filter-based controllers (for linear system models, quadratic cost, and Gaussian noise models), which provides a method to estimate a wide range of parameter variations and quells oscillations in structure, is presented in [10]. For comparison, both a previously developed IR tracking algorithm based on an EKF and a method based on reduced sufficient statistics are used in [11] to track a target through a sequence of IR images.
There is a significant class of filtering and identification problems of interest (some of which are mentioned above), especially in tracking systems, in which the performance of the EKF becomes unstable: the cases of great estimation errors, tracking interruption, abrupt increases of the measurement noises, and jump changes of the estimated process parameters (e.g., when a target exhibits considerably changing trajectory characteristics that must be reflected in the dynamics model), etc.
The solution of the above-mentioned problems applied to STV signal processing was originally proposed in [12], where the correlation-extremum systems theory was first extended to systems with random structure and new suboptimal filtering algorithms were derived, both for the general case when the change of structure is governed by a Markov process with a finite-dimensional set of states and for the case when the parameter changes form a Markov process with two states. The latter case is considered in this paper, and the linearized solution of the proposed estimation algorithm is presented.
In [12–15] the correlation-extremum methods were originally applied to signal processing for systems with random structure, and nonlinear estimation algorithms were deduced for measurement models described by STV signals in spatial-time-varying Gaussian white noise (STVGWN) and in spatial-time-varying Gauss–Markov colored noise (STVGMCN).
To overcome the mentioned contradiction between the existence of the normal conditional probability density in the case of great estimation errors and the incorrectness of applying the traditional linearization theory, the theory of stochastic systems with random structure (switching parameters) and Markov processes has been originally extended to correlation-extremum systems, and the new algorithms considered in this work have been derived.
The purpose of the present investigation is the solution of the problem of synthesis and analysis of correlation-extremum signal processing algorithms for stochastic dynamic systems with random structure (switching parameters) with noise statistics identification, and their linearization, which can provide the filter with adaptive capability and assure system operation under varying natural and/or artificial environmental influences in a number of possible civil and military areas of application.
1.1. The estimation problem statement
The problem under consideration is adaptive estimation for the dynamic state process model described by a stochastic nonlinear differential equation [12–15] (Eq. (1)):

$\dot{\mathbf{\Lambda}}(t)=\mathbf{F}^{(l)}(\mathbf{\Lambda},\mathbf{u},t)+\mathbf{W}^{(l)}(t),$
where $\mathbf{\Lambda}(t)$ is the state vector (of dimension $n$ in the general case), $\mathbf{\Lambda}\in R^{n}$, which includes the random, unknown, and time-varying parameter vector $\mathbf{a}^{\mathrm{T}}=(a_1,\dots,a_q)$, with Gaussian initial value $\mathbf{\Lambda}(t_0)$; $\mathbf{F}^{(l)}(\mathbf{\Lambda},\mathbf{u},t)$ is the nonlinear deterministic Lipschitz-continuous vector function $\mathbf{F}^{(l)}(\mathbf{\Lambda},\mathbf{u},t)=\Vert f_i^{(l)}(\mathbf{\Lambda},\mathbf{u},t)\Vert$, $(i=\overline{1,n})$; $\mathbf{u}(t)$ is the known control vector, which may be a function of the state estimate components; $l(t)$ is a stationary Markov process taking values in the set $\{1,2,\dots,p\}$ (system mode index or state number). Here $\mathbf{W}^{(l)}(t)$ is a vector process, $\mathbf{W}\in R^{d}$, of state Gaussian white noise with zero mean $E[\mathbf{W}^{(l)}(t)]=0$ and correlation matrix $K_w(t,\tau)=E[\mathbf{W}^{(l)}(t)\,\mathbf{W}^{(l)T}(\tau)]=\mathbf{Q}^{(l)}(t)\,\delta(t-\tau)$, where $\mathbf{Q}^{(l)}(t)$ is the diagonal intensity matrix $\mathbf{Q}^{(l)}(t)=\Vert q_j^{(l)}(t)\Vert$, $(j=\overline{1,d},\ d\le n)$, and $\delta$ is the delta function. The following notations are used: $E[\bullet]$ denotes the expectation operator for stochastic processes; $\mathbf{W}^{(l)T}$ is the transpose of $\mathbf{W}^{(l)}$.
The following measurement equation (Eq. (2)):

$\mathbf{r}(x,y,t)=\mathbf{S}^{(l)}(x,y,\mathbf{\Lambda},t)+\mathbf{N}^{(l)}(x,y,t),$
describes the observable signal $\mathbf{r}(x,y,t)$ as a spatial-time-varying process (of dimension $m$), $\mathbf{r}\in R^{m}$, $(m\le n)$, where $x$, $y$ are the space variables (space coordinates at any point), $x\in X=[x_0,x_X]$, $y\in Y=[y_0,y_Y]$, and $t$ is the time variable, $t\in T=[t_0,t_T]$; $\mathbf{S}^{(l)}(x,y,\mathbf{\Lambda},t)$ is the vector of STV signals of different physical nature; $\mathbf{N}^{(l)}(x,y,t)$ is the measurement STVGWN with diagonal intensity matrix $\mathbf{C}_0^{(l)}$ and correlation function $h^{(l)}(\Delta x,\Delta y,\Delta t)=\mathbf{C}_0^{(l)}\,\delta(\Delta x,\Delta y,\Delta t)$. The state system noise and the measurement noise are assumed independent and temporally uncorrelated.
The Markov process describes the random changes of the structure with $p$ finite states and transition intensities $\nu_{jl}(t)$ and $\nu_j(t)$, where $j,l=\overline{1,p}$. The system behavior may be explicated as follows. The system begins in a particular mode of operation, say $l(t)=1$; then, at a random time, the system jumps to one of the other $(p-1)$ possible modes of operation and may or may not remain in this state, and the dynamics differential equations corresponding to the different switch positions form the description of the system dynamics for each state. The transitions may occur from one state (or location) to another under varying and uncertain external conditions (e.g., when the process dynamics at a certain state would take the associated continuous state outside a distinct region of the state space), with Markov process models describing the continuous stochastic process of system (e.g., target) dynamics and the discrete process of mode or structure changes.
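As a minimal illustrative sketch (not part of the original derivation), the two-state mode process $l(t)$ with a constant symmetric transition intensity $\nu$ can be simulated by drawing exponentially distributed holding times; the function name and the parameter values below are hypothetical:

```python
import random

def simulate_mode_path(nu, t_end, l0=1, seed=0):
    """Simulate a two-state continuous-time Markov chain l(t):
    the holding time in each state is exponential with mean 1/nu,
    and each jump switches to the other state."""
    rng = random.Random(seed)
    t, l = 0.0, l0
    path = [(t, l)]                      # list of (jump time, new state)
    while t < t_end:
        t += rng.expovariate(nu)         # exponential holding time
        if t >= t_end:
            break
        l = 2 if l == 1 else 1           # jump to the other state
        path.append((t, l))
    return path

path = simulate_mode_path(nu=0.5, t_end=20.0)
```

Such a sample path supplies the switching sequence that selects which mode's dynamics equation is active at each instant.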
The nonlinear filtering or estimation problem described above by the system and measurement equations (Eqs. (1)–(2)) is to determine a finite-dimensional dynamical system whose output is the best minimum-variance estimate of the joint Markov process $(\mathbf{\Lambda}(t),\,l(t))^{T}$, for $t\ge 0$, given the STV observed data $\mathbf{r}(x,y,t)$.
1.2. The algorithms synthesis and linearization problem solution
The solution of the STV signal processing algorithm synthesis and analysis problem [12–15] was built upon an integration of two theories, the correlation-extremum systems theory and the theory of stochastic systems with random structure, and was based on the generalized Fokker–Planck–Kolmogorov–Stratonovich differential equation for the evolution of the joint conditional probability density function of the state dynamics $\mathbf{\Lambda}(t)$ and the system structure $l(t)$ given the STV measurement data $\mathbf{r}(x,y,t)$: $\omega(\mathbf{\Lambda},l,t\mid \mathbf{r}(x,y,\tau),\,t_0\le\tau\le t)=\widehat{\omega}(\mathbf{\Lambda},l,t)=\widehat{\omega}^{(l)}(\mathbf{\Lambda},t)$ (with the initial value $\omega(\mathbf{\Lambda}_0,t_0)$ of the probability density of the state $\mathbf{\Lambda}(t_0)$); the sign $\widehat{\ }$ denotes the a posteriori function value.
The a posteriori probability density $\widehat{\omega}(\mathbf{\Lambda},t)$ for the whole dynamics process is determined by the following expression (Eq. (3)):

$\widehat{\omega}(\mathbf{\Lambda},t)=\sum_{l=1}^{p}\widehat{P}_l(t)\,\widehat{\omega}^{(l)}(\mathbf{\Lambda},t),$
where $\widehat{P}_l(t)$ is the a posteriori probability of the $l$th state, whose evolution is defined by the state probability estimate differential equations (Eq. (4)) (or discrete ones, for a discrete problem statement). The presence of these equations in the filtering and identification algorithms, and the relation between them, form the main distinguishing properties of signal processing in systems with random structure [12, 13, 15]:
$+\frac{1}{2}\widehat{P}_l(t)\left\{\int_{-\infty}^{\infty}\mathcal{F}^{(l)}(\mathbf{z},\mathbf{r},t)\,\widehat{\omega}^{(l)}(\mathbf{z},t)\,d\mathbf{z}-\sum_{k=1}^{p}\widehat{P}_k(t)\int_{-\infty}^{\infty}\mathcal{F}^{(k)}(\mathbf{z},\mathbf{r},t)\,\widehat{\omega}^{(k)}(\mathbf{z},t)\,d\mathbf{z}\right\},$
where $l=\overline{1,p}$ and $\mathcal{F}^{(l)}(\mathbf{\Lambda},\mathbf{r},t)$ is the derivative of the logarithm of the likelihood function in the $l$th state ($\mathbf{z}$ is the integration variable, $\mathbf{z}\in\mathbf{\Lambda}$). The a priori state probabilities $P_l(t)$ can be found using the known Kolmogorov equations.
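For the two-state case, the Kolmogorov equations for the a priori probabilities can be sketched numerically; the explicit-Euler routine below is a minimal illustration under the assumption of a constant symmetric intensity $\nu$, with hypothetical parameter values:

```python
import math

def a_priori_probabilities(nu, p1_0, t_end, dt=1e-4):
    """Integrate the two-state Kolmogorov forward equations
    dP1/dt = -nu*P1 + nu*P2, with P2 = 1 - P1, by explicit Euler."""
    p1 = p1_0
    for _ in range(int(t_end / dt)):
        p1 += dt * (-nu * p1 + nu * (1.0 - p1))
    return p1, 1.0 - p1

# closed-form solution for comparison: P1(t) = 0.5 + (P1(0) - 0.5)*exp(-2*nu*t)
p1_exact = 0.5 + (1.0 - 0.5) * math.exp(-2.0 * 0.5 * 2.0)
p1_euler, p2_euler = a_priori_probabilities(nu=0.5, p1_0=1.0, t_end=2.0)
```

The probabilities relax toward the stationary value $1/2$ at rate $2\nu$, which is the a priori behavior against which the measurement-driven a posteriori probabilities of Eq. (4) are compared.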
All processes are defined on the probability space $(\Omega,\mathcal{F},\tilde{P})$, where $\{\tilde{P}_{\lambda},\lambda\in\mathbf{\Lambda}\}$ is a family of probability measures on $(\Omega,\mathcal{F})$ which are absolutely continuous with respect to a fixed probability measure $\tilde{P}_0$, with the corresponding $\sigma$-algebra $\mathcal{F}$. The likelihood function $L_f(\mathbf{\Lambda})$ for obtaining the estimate $\widehat{\mathbf{\Lambda}}$ of the state process $\mathbf{\Lambda}$ (including parameters) is based on the information contained in the STV measurements $\mathbf{r}(x,y,\tau)$, $t_0\le\tau\le t$, and both are defined by the expressions:
(the relationship between the likelihood function and the Radon–Nikodym derivative), and $\widehat{\mathbf{\Lambda}}\in\underset{\lambda\in\mathbf{\Lambda}}{\mathrm{argmax}}\,L_f(\mathbf{\Lambda})$. To determine the function $\mathcal{F}^{(l)}(\mathbf{\Lambda},\mathbf{r},t)$ it is necessary to know the likelihood functional for the additive measurement STVGWN $\mathbf{N}^{(l)}(x,y,t)$. The conditional probability density $\omega(\mathbf{r}(x,y,\tau)\mid\mathbf{\Lambda},l,\tau,\,t_0\le\tau\le t)$, called the likelihood function $L_f(\mathbf{\Lambda},l)=L_f^{(l)}(\mathbf{\Lambda})$ (as a function of $\mathbf{\Lambda}$ and $l$, for systems with random structure), is clearly normal for a linear system. In the case under consideration the measurement $\mathbf{r}(x,y,\tau)$ is the sum of the normal random process $\mathbf{N}^{(l)}(x,y,\tau)$ and the deterministic signal function $\mathbf{S}^{(l)}(x,y,\mathbf{\Lambda},\tau)$ (or possibly a random signal function depending on the Gaussian value $\mathbf{\Lambda}(\tau)$, for example, when the target intensity pattern is modeled by a Gaussian function).
Denote the limit of the conditional probability density $\omega(\mathbf{r}(x,y,\tau)\mid\mathbf{\Lambda},l,\tau,\,t_0\le\tau\le t)$ by $L^{(l)}(\mathbf{\Lambda})$:
The principle of the maximum of the likelihood functional $L^{(l)}(\mathbf{\Lambda})$ for the STVGWN $\mathbf{N}^{(l)}(x,y,t)$ with spectral density matrix $\mathbf{C}_0^{(l)}$ is first extended in this research to systems with random structure in the form (Eq. (5)):
$\left(l=\overline{1,\mathrm{}p}\right),$
where $c^{(l)}$ is a value depending on $\mathbf{C}_0^{(l)}$, and $X$, $Y$, and $T$ are the spatial and time limits of integration (the spatial and time domains of observation).
The suboptimal estimate $\widehat{\mathbf{\Lambda}}(t)$ of the state dynamics Markov process $\mathbf{\Lambda}(t)$ under the assumption of a mean-square loss function is the conditional mathematical expectation. The optimal estimate of the discrete process $l(t)$ by the a posteriori probability criterion is the value of $l$ that maximizes the a posteriori probability $\widehat{P}_l(t)$.
The suboptimal (due to nonlinearities) estimate of the state is $\widehat{\mathbf{\Lambda}}\left(t\right)=\sum _{l=1}^{p}{\widehat{P}}_{l}\left(t\right){\widehat{\mathbf{\Lambda}}}^{\left(l\right)}\left(t\right)$.
The signal position on the image plane $XOY$ (for example, the FLIR image plane) can be determined by the parameter vector $\lambda_x(t)=\phi_x(\mathbf{\Lambda},t)$, $\lambda_y(t)=\phi_y(\mathbf{\Lambda},t)$ (whose dynamics defines the target intensity pattern on the image plane). Then the signal $\mathbf{S}^{(l)}(x,y,\mathbf{\Lambda},t)$ may be represented as a function $\mathbf{S}^{(l)}(x,y,\mathbf{\Lambda},t)=\mathbf{S}^{(l)}(x-\lambda_x,\,y-\lambda_y,\,t)$. In this case the suboptimal estimator is defined as a tracker system. Assuming, to simplify the derivation, that the signal or image position along one of the axes (e.g., in the $y$ direction) is known, and denoting the STV signal $\mathbf{S}^{(l)}(x,\mathbf{\Lambda},t)=\mathbf{S}^{(l)}(x-\lambda_x,t)$ and the state parameter $\lambda_x$ without index, $\lambda_x=\lambda$, the measurement equation (Eq. (2)) becomes: $\mathbf{r}(x,t)=\mathbf{S}^{(l)}(x-\lambda,t)+\mathbf{N}^{(l)}(x,t)$.
For the case when the changes of the structure and corresponding parameters form a Markov process with two states ($l=\overline{1,2}$) and transition intensity $\nu(t)$, the nonlinear filtering problem solution for correlation-extremum systems with random structure derived in [12, 13] is presented by the following correlation-extremum algorithms (Eqs. (6)–(9)).
The differential equation (Eq. (6)) for the a posteriori probabilities of state $\widehat{P}_l(t)$ is presented below:
$+\left\{\nu-\frac{\widehat{P}_1(t)}{C_X^{(2)}}\left[k_2+B^{(2)}(\Delta\lambda_{(2)})+\frac{1}{2}\sigma_{(2)}^2(t)\frac{\partial^2 B^{(2)}(\Delta\lambda_{(2)})}{\partial\Delta\lambda_{(2)}^2}\right]\right\}\left[1-\widehat{P}_1(t)\right],$
$\widehat{P}_2(t)=1-\widehat{P}_1(t)$, where $\widehat{P}_2(t)$ is the a posteriori probability of the second state; $\Delta\lambda_{(l)}(t)$ is the state estimation error, $\Delta\lambda_{(l)}(t)=\lambda(t)-\widehat{\lambda}^{(l)}(t)$, $l=\overline{1,2}$; $\sigma_{(l)}^2(t)$ is the variance of the a posteriori probability density function, $\sigma_{(l)}^2(t)=\langle[\lambda(t)-\widehat{\lambda}^{(l)}(t)]^2\rangle$, $l=\overline{1,2}$; $B^{(l)}(\Delta\lambda_{(l)},t)$ is the spatial correlation function in the $l$th state, $B^{(l)}(\Delta\lambda_{(l)},t)=\langle\mathbf{S}^{(l)T}(x-\widehat{\lambda}^{(l)},t)\,\mathbf{S}^{(l)}(x-\lambda,t)\rangle$ (or, for the scalar measurement, $B^{(l)}(\Delta\lambda_{(l)},t)=\langle S^{(l)}(x-\widehat{\lambda}^{(l)},t)\,S^{(l)}(x-\lambda,t)\rangle$); $C_X^{(l)}$ is the specific spectral intensity of the STVGWN $\mathbf{N}^{(l)}(x,t)$ in the $l$th state: $C_X^{(l)}=C_0^{(l)}/X$.
The derivation of Eq. (6) assumes that the parameters are considered as “unpowered” (a term well known in signal processing theory, first applied in radar signal processing), which is to say that the integrals $\int_{-X}^{X}[S^{(l)}(x-\widehat{\lambda}^{(l)},t)]^2\,dx$ (representing the signal power) and $\int_{-X}^{X}r^2(x,t)\,dx$, which are explicitly independent of the estimated parameter, may be taken into account in the coefficients $k_1$ and $k_2$. A further assumption in the algorithm synthesis is that the integration limits $X$ and $Y$ are vastly larger than the signal correlation intervals ($X\gg\Delta_{X\,\mathrm{cor}}$, $Y\gg\Delta_{Y\,\mathrm{cor}}$).
The state estimate equation (Eq. (7)) has been derived in the form:
${\widehat{\lambda}}^{\left(l\right)}\left({t}_{0}\right)={{\widehat{\lambda}}_{0}}^{\left(l\right)},(l,j=\overline{\mathrm{1,2}};j\ne l),$
where $N_X^{(l)}=\int_{-X}^{X}\frac{\partial\mathbf{S}^{(l)T}(x-\widehat{\lambda}^{(l)},t)}{\partial\widehat{\lambda}^{(l)}}\,\mathbf{N}^{(l)}(x,t)\,dx$ (as is known from signal theory, the measurement signals and noises in many cases are, or are assumed to be, uncorrelated). The suboptimal state estimate of the whole process (for two states) can be obtained by using a weighted sum: $\widehat{\lambda}(t)=\widehat{P}_1(t)\widehat{\lambda}^{(1)}(t)+\widehat{P}_2(t)\widehat{\lambda}^{(2)}(t)$.
The variance differential equation (Eq. (8)) is presented below:
$+\nu\frac{\widehat{P}_j(t)}{\widehat{P}_l(t)}\left[\sigma_{(j)}^2(t)-\sigma_{(l)}^2(t)+\left(\widehat{\lambda}^{(j)}(t)-\widehat{\lambda}^{(l)}(t)\right)^2\right],\quad\sigma_{(l)}^2(t_0),\ (l,j=\overline{1,2};\ j\ne l),$
and the estimate error variance for the whole process is: ${\sigma}^{2}\left(t\right)={\widehat{P}}_{1}\left(t\right){\sigma}_{\left(1\right)}^{2}\left(t\right)+{\widehat{P}}_{2}\left(t\right){\sigma}_{\left(2\right)}^{2}\left(t\right)$.
The suboptimal (due to nonlinearities) estimate $\widehat{\lambda}(t)$ and variance $\sigma^2(t)$ for the whole process represent weighted sums, where the $l$th weighting factor $\widehat{P}_l(t)$ is the a posteriori probability conditioned on the $l$th hypothesis.
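The weighted-sum combination of the two channel outputs can be sketched as follows (a minimal illustration; the function name and values are hypothetical):

```python
def combined_estimate_and_variance(p1, lam1, lam2, var1, var2):
    """Whole-process suboptimal estimate and error variance as
    probability-weighted sums of the per-state channel quantities
    (two-state case, with P2 = 1 - P1)."""
    p2 = 1.0 - p1
    lam = p1 * lam1 + p2 * lam2      # weighted state estimate
    var = p1 * var1 + p2 * var2      # weighted error variance
    return lam, var

lam, var = combined_estimate_and_variance(0.25, 2.0, 6.0, 1.0, 3.0)
```

As the a posteriori probability of one state approaches unity, the combined output smoothly collapses onto that channel's estimate.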
In this paper, spatial-time-varying filtering in systems with random structure with changing noise statistics (intensities), as well as other characteristics, for each state is proposed. For this noise intensity identification problem, the suboptimal estimate (in the minimum mean-square error sense) of the measurement noise intensity $C_0$ (considered as a time-varying parameter) at a given time $t_i$ can be obtained from the following conditional mean value (Eq. (9)), originally presented in this paper for spatial-time-varying signal processing:

$\widehat{C}_0(t_i)=\sum_{l=1}^{p}C_0^{(l)}\,P_l(t_i\mid \mathbf{r}_i),$
where $P_l(t_i\mid\mathbf{r}_i)$ is the conditional probability of the state or mode $l$, conditioned on the measurements observed up to time $t_i$ (e.g., for two states (or structures), two STVGWN models can be used with different intensity levels corresponding to the lower and upper statistics bounds). The spectral densities may be treated as $\mathbf{C}_0^{(l)}$ for the case of stationary measurement noise, and as $\mathbf{C}_0^{(l)}(t)$, depending on some of the state vector components or parameters, for the case of nonstationary noise (e.g., [13]).
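For a finite set of modes, the conditional-mean identification of Eq. (9) reduces to a probability-weighted sum of the per-mode intensity levels; a minimal sketch with hypothetical lower/upper intensity bounds follows:

```python
def identify_noise_intensity(c0_levels, posteriors):
    """Conditional-mean (MMSE) estimate of the measurement-noise
    intensity: each candidate intensity level is weighted by the
    a posteriori probability of its mode (cf. Eq. (9))."""
    assert abs(sum(posteriors) - 1.0) < 1e-9   # probabilities must sum to 1
    return sum(c * p for c, p in zip(c0_levels, posteriors))

# e.g. lower/upper intensity bounds 0.1 and 1.0 with posteriors 0.8 / 0.2
c0_hat = identify_noise_intensity([0.1, 1.0], [0.8, 0.2])
```

The estimate moves between the bounds as the a posteriori mode probabilities evolve, which is what gives the filter its noise-adaptive behavior.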
Spatial-time-varying filtering in STVGMCN, proposed in [15], may also be useful to discriminate between target (or other object) and background intensity patterns.
The filtering algorithm (Eqs. (6)–(8)) is composed of two separate estimators processing the STV signals or images in parallel and exchanging information according to the state estimate equation (Eq. (7)). The signals of different physical nature fields are used to compute the a posteriori probabilities $\widehat{P}_l(t)$ via Eq. (6). The suboptimal parameter estimate $\widehat{\lambda}(t)$ corresponds to the maximum value of the cross-correlation function of the reference and received STV signals, attained when $\Delta\lambda_{(l)}(t)=0$.
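The correlation-extremum principle (taking the parameter estimate at the maximum of the cross-correlation of the reference and received signals) can be sketched numerically; the Gaussian intensity pattern and the noise-free received signal below are purely illustrative assumptions, not the paper's specific signal model:

```python
import math

def spatial_correlation(signal, reference, shifts, xs):
    """Evaluate B(shift) = sum_x S(x - shift) * r(x) * dx on the grid xs
    (simple Riemann sum) for each candidate shift."""
    dx = xs[1] - xs[0]
    return [sum(reference(x - s) * r for x, r in zip(xs, signal)) * dx
            for s in shifts]

ref = lambda x: math.exp(-x * x / 2.0)          # hypothetical Gaussian pattern
xs = [i * 0.01 - 10.0 for i in range(2001)]     # spatial grid [-10, 10]
true_lambda = 1.3
received = [ref(x - true_lambda) for x in xs]   # noise-free for clarity

shifts = [i * 0.01 - 5.0 for i in range(1001)]  # candidate positions [-5, 5]
B = spatial_correlation(received, ref, shifts, xs)
lam_hat = shifts[B.index(max(B))]               # correlation-extremum estimate
```

The cross-correlation peaks where the shifted reference aligns with the received pattern, i.e., at $\Delta\lambda=0$, recovering the signal position.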
The proposed algorithms (Eqs. (6)–(9)) can be applied to the solution of the linearization problem in the case of great estimation errors. The fundamental difference between the approaches taken in [1] and in the present work lies in the first application here of the theory of systems with random structure to STV signal processing.
In this paper the following new linearized correlation-extremum algorithms for systems with random structure have been derived (Eqs. (10)–(12)), using the Taylor series expansion 1) of the cross-correlation function $B^{(l)}(\Delta\lambda_{(l)})$ and 2) of its second derivative $\frac{\partial^2 B^{(l)}(\Delta\lambda_{(l)})}{\partial\Delta\lambda_{(l)}^2}$, taking into account the features of singular processes ($B'^{(l)}(0)=0$ and $B''^{(l)}(0)<0$), to simplify the study of the behavior of the a posteriori probability functions $\widehat{P}_l(t)$.
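The singular-process properties used in this expansion can be checked numerically for a Gaussian-shaped correlation function (an illustrative choice of $B$, not the paper's specific signal model):

```python
import math

def B(dl, a=1.0):
    """Hypothetical Gaussian-shaped spatial correlation function."""
    return math.exp(-dl * dl / (2.0 * a * a))

def taylor_B(dl, a=1.0):
    """Second-order Taylor expansion about dl = 0, using the
    singular-process properties B'(0) = 0 and B''(0) = -1/a**2 < 0."""
    return B(0.0, a) + 0.5 * (-1.0 / (a * a)) * dl * dl

# central-difference check that the first derivative vanishes at zero
h = 1e-5
dB0 = (B(h) - B(-h)) / (2 * h)
```

Near the extremum the quadratic term dominates, which is why the expansion with a constant $B''(0)$ coefficient is adequate for small estimation errors.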
The equations (Eq. (10)) for the a posteriori probabilities of state may be defined as:
$+\left\{\nu-\frac{\widehat{P}_1(t)}{C_X^{(2)}}\left[k_2+B^{(2)}(0)-\frac{1}{2}\left.\frac{\partial^2 B^{(2)}(\Delta\lambda_{(2)})}{\partial\Delta\lambda_{(2)}^2}\right|_{\Delta\lambda_{(2)}=0}\left(\Delta\lambda_{(2)}^2-\sigma_{(2)}^2(t)\right)\right]\right\}\left[1-\widehat{P}_1(t)\right],$
${\widehat{P}}_{2}\left(t\right)=1{\widehat{P}}_{1}\left(t\right).$
The linearization of the state estimate equation (Eq. (7)) has been obtained by 1) applying the series approximation to $f(\widehat{\lambda}^{(l)},u,t)$ and 2) applying the series approximation to the spatial correlation function derivatives $\frac{\partial B^{(l)}(\Delta\lambda_{(l)},t)}{\partial\Delta\lambda_{(l)}}$, in the form (Eq. (11)):
$\frac{\sigma_{(l)}^2(t)}{C_X^{(l)}}\left[\left.\frac{\partial B^{(l)}(\Delta\lambda_{(l)},t)}{\partial\Delta\lambda_{(l)}}\right|_{\Delta\lambda_{(l)}=0}+\left.\frac{\partial^2 B^{(l)}(\Delta\lambda_{(l)})}{\partial\Delta\lambda_{(l)}^2}\right|_{\Delta\lambda_{(l)}=0}\Delta\lambda_{(l)}\right]$
$+\frac{\sigma_{(l)}^2(t)}{C_0^{(l)}}N_X^{(l)}+\nu\frac{\widehat{P}_j(t)}{\widehat{P}_l(t)}\left[\widehat{\lambda}^{(l)}(t)-\widehat{\lambda}^{(j)}(t)\right],\quad\widehat{\lambda}^{(l)}(t_0)=\widehat{\lambda}_0^{(l)},\ (l,j=\overline{1,2};\ j\ne l).$
For the whole process the suboptimal state estimate is $\widehat{\lambda}(t)=\widehat{P}_1(t)\widehat{\lambda}^{(1)}(t)+\widehat{P}_2(t)\widehat{\lambda}^{(2)}(t)$. The variance equation (of Riccati type) with linearized functions (Eq. (12)) is:
$+q^{(l)}(t)+\nu\frac{\widehat{P}_j(t)}{\widehat{P}_l(t)}\left[\sigma_{(j)}^2(t)-\sigma_{(l)}^2(t)+\left(\widehat{\lambda}^{(j)}(t)-\widehat{\lambda}^{(l)}(t)\right)^2\right],\quad\sigma_{(l)}^2(t_0),\ (l,j=\overline{1,2}),$
and for the whole process the estimation error variance is $\sigma^2(t)=\widehat{P}_1(t)\sigma_{(1)}^2(t)+\widehat{P}_2(t)\sigma_{(2)}^2(t)$. As a remark: for quasi-singular processes the first derivative of the cross-correlation function at zero is not equal to zero ($B'^{(l)}(0)\ne 0$), so the linearized algorithm equations will contain terms with the first derivative of the cross-correlation function. In a nonlinear estimator it is necessary to know the correlation function derivatives for every $\Delta\lambda_{(l)}$, but using the series approximation, the value $\left.\frac{\partial^2 B(\xi)}{\partial\xi^2}\right|_{\xi=0}$ can be interpreted as a constant coefficient. The linearized estimate equations (Eq. (11)) define the filtering scheme as a two-state (or two-channel) tracking system, which changes its threshold according to the linearized Eq. (10).
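As a rough numerical illustration only: dropping the switching terms ($\nu=0$) and assuming a scalar Riccati-type form consistent with the structure described in the text (the exact Eq. (12) is not reproduced here; the form, names, and values below are assumptions), the linearized variance equation can be integrated by explicit Euler:

```python
def integrate_variance(f_prime, b2, c_x, q, var0, t_end, dt=1e-3):
    """Explicit-Euler integration of an ASSUMED scalar Riccati-type
    variance equation with switching terms dropped:
        d(sigma^2)/dt = 2*f_prime*sigma^2 - (sigma^4 / c_x)*abs(b2) + q,
    where b2 stands for B''(0) < 0 and q for the process-noise intensity.
    This is a sketch of the equation class, not the paper's Eq. (12)."""
    v = var0
    for _ in range(int(t_end / dt)):
        v += dt * (2.0 * f_prime * v - (v * v / c_x) * abs(b2) + q)
    return v

v_ss = integrate_variance(f_prime=-0.1, b2=-1.0, c_x=0.5, q=0.2,
                          var0=1.0, t_end=50.0)
```

The quadratic damping term, driven by the correlation-function curvature $|B''(0)|$ and scaled by the noise intensity $C_X$, is what makes the variance settle at a bounded steady state instead of growing with the process noise.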
For the class of tracking systems, the adaptation mechanism of the proposed estimation algorithms represents an effective means of adaptive switching (expansion and contraction) of the effective field of view of a radar tracker or an optical (IR image) tracker, and ensures system operation, in particular, over a wide dynamic range of target maneuvers.
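This expansion/contraction mechanism can be illustrated schematically. A minimal Python sketch, in which the field of view is tied to the current estimation-error standard deviation through a hypothetical design constant `k` (not the paper's actual tracker law):

```python
import math

def field_of_view(sigma2, k=3.0):
    # Sketch: gate (field-of-view) half-width proportional to the current
    # estimation-error standard deviation. A large variance, e.g. after a
    # target maneuver, widens the gate; a small one contracts it.
    # k is a hypothetical design constant, an assumption of this sketch.
    return k * math.sqrt(sigma2)

fov_quiet = field_of_view(0.01)    # small error variance -> narrow gate
fov_maneuver = field_of_view(1.0)  # large error variance -> wide gate
```

Because the variance itself responds to the a posteriori structure probabilities, the gate expands and contracts automatically as the estimated regime changes.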
The variance equations (the nonlinear and the linearized one) represent new Riccati-type differential equations, obtained here for the first time for STV signal processing in STVGWN for systems with random structure (with cross-correlation functions (and their derivatives) in the nonlinear term).
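The behavior of such a Riccati-type variance equation can be sketched by simple Euler integration. The following Python fragment uses a simplified scalar form with a random-structure mixing term of the same shape as the variance equation above; all coefficients (`a`, `q`, `r`, `nu`) and initial values are hypothetical placeholders, not the paper's models:

```python
# Sketch (hypothetical coefficients): Euler integration of a simplified
# scalar Riccati-type variance equation with a random-structure mixing term.

def variance_step(s2_l, s2_j, dlam, p_l, p_j, a, q, r, nu, dt):
    # d(sigma_l^2)/dt = 2*a*sigma_l^2 + q - sigma_l^4/r
    #                   + nu*(P_j/P_l)*(sigma_j^2 - sigma_l^2 + dlam^2)
    ds2 = (2.0 * a * s2_l + q - s2_l**2 / r
           + nu * (p_j / p_l) * (s2_j - s2_l + dlam**2))
    return s2_l + dt * ds2

s2 = 0.5  # hypothetical initial error variance
for _ in range(1000):
    s2 = variance_step(s2, s2_j=0.2, dlam=0.1, p_l=0.6, p_j=0.4,
                       a=-0.5, q=0.01, r=0.05, nu=0.1, dt=0.01)
# s2 settles near the level where the growth, noise-injection,
# measurement-update, and mixing terms balance.
```

The quadratic measurement term drives the variance down while the process-noise and mixing terms hold it away from zero, which is the balance the Riccati-type equation expresses.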
The adaptive filtering algorithms (nonlinear and linearized) for correlation-extremum systems with random structure have been derived 1) for estimation of the signal position along the $y$ axis, and 2) for both components of the state vector, ${\lambda}_{x}\left(t\right)$ and ${\lambda}_{y}\left(t\right)$.
The application of the proposed correlation-extremum filtering algorithms to systems with interrupted signal information (the filtering problem with adaptive observation-process feedback control) has been investigated, and new filters have been derived both for the estimation problem with unobservable moments of structure change and for the special case of estimation with known moments of structure change.
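The interrupted-observation case can be illustrated schematically. A minimal Python sketch (with hypothetical scalar models, not the paper's filter) in which the estimator falls back to prediction alone while the signal is lost, letting the variance grow until observations resume:

```python
# Sketch (hypothetical models): scalar predict/update filter with an
# observation flag. During a tracking interruption only the prediction
# runs, so the error variance grows; a resumed measurement contracts it.

def filter_step(x_hat, s2, z, observed, a=-0.2, q=0.02, r=0.1, dt=0.05):
    # Prediction (always performed).
    x_hat = x_hat + dt * a * x_hat
    s2 = s2 + dt * (2.0 * a * s2 + q)
    if observed:
        # Measurement update only while the signal is present.
        k = s2 / (s2 + r)
        x_hat = x_hat + k * (z - x_hat)
        s2 = (1.0 - k) * s2
    return x_hat, s2

x_hat, s2 = 0.0, 0.01
for _ in range(20):                      # signal interrupted: coast
    x_hat, s2 = filter_step(x_hat, s2, z=0.0, observed=False)
s2_coast = s2                            # variance has grown while coasting
x_hat, s2 = filter_step(x_hat, s2, z=0.1, observed=True)
# the first resumed measurement contracts the variance again
```

The growing coasting variance is exactly what an adaptive field-of-view mechanism can exploit: the gate widens during the interruption and contracts once measurements return.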
2. Conclusions
The proposed correlation-extremum filtering and identification algorithms (nonlinear and linearized), including the differential equations for the a posteriori state probabilities, the state estimates, and the variances, are derived for stochastic dynamic systems with random structure for the adaptive estimation problem, where the system state and parameter models are described by Markov processes and the measurements are nonlinear STV signals of physical fields of different nature against a background of additive STVGWN. The identification of the noise intensity a) is obtained here for the first time 1) for STV signal processing 2) in systems with random structure, and b) reflects more adequately the true noise statistics existing under real external conditions.
The adaptation mechanism of the systems with random structure, coupled with the correlation-extremum signal processing techniques, yields the advantages of the combined system with respect to both performance attributes (robust properties) and computational loading, taking into account the recent increases in processor speeds.
The derived algorithms (nonlinear and linearized) 1) represent new solutions of the linearization problem in the class of recursive filters for stochastic dynamic systems, and 2) provide the estimator with adaptive capability (e.g., for the cases of great estimation errors, tracking interruption, abrupt increase of the measurement noise, and jump changes of the estimated process parameters), with a capacity to change a) the filter gains (obtained here for the first time as analytical functions, not as experimentally modified values) without the need for artificial or experimental tuning of the gain matrix, and b) the field of view (for tracking systems) rapidly and effectively, owing to the advantage of the combined correlation-extremum system with random structure over nonlinear filters in systems with deterministic structure.
In comparison with the traditional estimation algorithms, the proposed STV signal processing algorithms show for the first time a relationship between the a posteriori probability density maximum criterion, the likelihood maximum criterion, the covariance matrix minimum criterion, and the cross-correlation function maximum criterion.
The possible civil and military areas of application of the derived STV signal processing algorithms include, in particular, complex stochastic dynamic systems such as tracking and navigation systems and robotics equipped with image sensors (e.g., radar, digital, optical, etc.) using the STV signals of fields of different nature.
References

[1] Baklitski V., Yuriev A. Correlation-Extremum Methods in Navigation. Radio and Communication, Moscow, 1982, (in Russian).

[2] Sage A. P., Melsa J. L. Estimation Theory with Applications to Communication and Control. New York, McGraw-Hill, 1976, (in Russian).

[3] Chang C., Athans M. State estimation for discrete systems with switching parameters. IEEE Transactions on Aerospace and Electronic Systems, Vol. 14, Issue 3, 1978, p. 418-425.

[4] Marino R., Tomei P. Robust adaptive state-feedback tracking for nonlinear systems. IEEE Transactions on Automatic Control, Vol. 43, Issue 1, 1998, p. 84-89.

[5] Elliott R. J., Krishnamurthy V. New finite-dimensional filters for parameter estimation of discrete-time linear Gaussian models. IEEE Transactions on Automatic Control, Vol. 44, Issue 5, 1999, p. 938-951.

[6] Moura J. M. F., Van Trees H. L., Baggeroer A. B. Space-time tracking by a passive observer. Proceedings of the 4th Symposium on Nonlinear Estimation, San Diego, CA, 1973.

[7] Cortina E., Otero D., D'Attellis C. E. Maneuvering target tracking using extended Kalman filter. IEEE Transactions on Aerospace and Electronic Systems, Vol. 27, Issue 1, 1991, p. 155-158.

[8] Maybeck P. S., Suizu R. I. Adaptive tracker field-of-view variation via multiple model filtering. IEEE Transactions on Aerospace and Electronic Systems, Vol. 21, Issue 4, 1985, p. 529-538.

[9] Maybeck P. S., Herrera T. D., Evans R. J. Target tracking using infrared measurements and laser illumination. IEEE Transactions on Aerospace and Electronic Systems, Vol. 30, Issue 3, 1994, p. 758-768.

[10] Gustafson J. A., Maybeck P. S. Flexible space-structure control via moving-bank multiple model algorithms. IEEE Transactions on Aerospace and Electronic Systems, Vol. 30, Issue 3, 1994, p. 750-757.

[11] Anderson K. L., Iltis R. A. A tracking algorithm for infrared images based on reduced sufficient statistics. IEEE Transactions on Aerospace and Electronic Systems, Vol. 33, Issue 2, 1997, p. 464-471.

[12] Kolosovskaya T. Spatial-time-varying signals processing algorithms in systems with random structure. Mechanical Engineering and Machine Reliability Problems, Vol. 5, 1995, p. 105-112, (in Russian).

[13] Kolosovskaya T. Nonlinear filtering and identification algorithms for correlation-extremum dynamic systems with random structure. Journal of Vibroengineering, Vol. 8, 2016, p. 531-537.

[14] Kolosovskaya T. Adaptive estimation using linearized spatial-time-varying signal processing algorithms in systems with random structure. 15th International Conference on Aviation and Cosmonautics, Moscow, 2016, p. 450-452.

[15] Kolosovskaya T. P. Nonlinear signal processing in systems with random structure for the case of spatial-time-varying colored Gaussian-Markov noise. Journal of Vibroengineering, Vol. 13, 2017, p. 272-279.
About this article
The author is grateful to the mentioned authors [1-11], whose scientific contributions form the foundations for further research.