Abstract
This article addresses the problem of blind source separation, in which the source signals are most often convolutive mixtures and, moreover, generally cannot be assumed independent and identically distributed. One prevailing and representative class of approaches for overcoming these difficulties is the joint block diagonalization (JBD) method. To improve on existing JBD methods, we present a class of simple Jacobi-type JBD algorithms based on the LU or QR factorization. Using Jacobi-type matrices, we replace a high-dimensional minimization problem with a sequence of simple one-dimensional problems. The novel methods are more general, i.e., the matrix set need not be orthogonal, positive definite or symmetric, a preliminary whitening stage is no longer compulsory, and convergence is guaranteed. The performance of the proposed algorithms, compared with existing state-of-the-art JBD algorithms, is evaluated with computer simulations and vibration experiments. The numerical results demonstrate that the robustness and effectiveness of the two novel algorithms provide a significant improvement, i.e., less convergence time, higher convergence precision and a better success rate of block diagonalization. The proposed algorithms are also effective in separating vibration signals of the convolutive type.
1. Introduction
Blind source separation (BSS) deals with the problem of finding both the unknown input sources and the unknown mixing system from observed output mixtures only. BSS has recently become the focus of intensive research due to its high potential in many applications such as antenna processing, speech processing and pattern recognition [1-3]. The recent successes of BSS can also be exploited in mechanical engineering [4-7]. In these reported applications, two kinds of BSS models are used, i.e., instantaneous and convolutive mixture models. Instantaneous mixture models, with their simple structure, have been described in many papers and books [8-10]. However, when dealing with convolutive mixture signals, BSS faces a number of difficulties which seriously hinder its feasibility [11, 12]. There is currently considerable research effort devoted to separating convolutive mixture signals, yet no fully satisfying algorithm has been proposed so far.
Many approaches have been proposed in recent years to solve the convolutive BSS (CBSS) problem. One prevailing and representative class of approaches is joint block diagonalization (JBD), which can produce a potentially more elegant solution for CBSS in the time domain [13]. In this article, we focus on the JBD problem, which was first treated in [14] for a set of positive definite symmetric matrices; the same conditions were also considered in [15]. To solve the JBD problem, Belouchrani et al. sketched several Jacobi strategies in [16-18]: the JBD problem was turned into a minimization problem processed by iterative methods, where each Givens rotation in the product minimized a block-diagonality criterion around a fixed axis. Févotte and Theis [19, 20] pointed out that the behavior of the Jacobi approach depends strongly on the initialization of the orthogonal basis and also on the choice of the successive rotations. They then proposed several strategies to improve the efficiency of JBD. However, several critical constraints remain: the joint block-diagonalizer is an orthogonal (unitary in the complex case) matrix, and the spatial prewhitening is likely to introduce a larger error which, moreover, cannot be corrected in the subsequent analysis. In [21], a gradient-based JBD algorithm was used to achieve the same task for non-unitary joint block-diagonalizers. This approach suffers from a slow convergence rate since the iteration uses a fixed step size. To address this shortcoming, gradient-based JBD algorithms with an optimal step size have been provided and studied in [22-24]. However, these algorithms are apt to converge to a local minimum and have low computational efficiency. To eliminate the degenerate solutions of the non-unitary JBD algorithm, Zhang [25] optimized a weighted least-squares criterion with a penalty term.
In [26], a novel tri-quadratic cost function was introduced, and an efficient algebraic method based on triple iterations was used to search for the minimum of the cost function. Unfortunately, this method produces redundant values and introduces errors when the mixing matrix is inverted. Some new Jacobi-like algorithms [27, 28] for non-orthogonal joint diagonalization have been proposed, but unfortunately they cannot be used to solve the block diagonalization problem.
Our purpose here is not only to tackle the approximate JBD problem by discarding the orthogonality constraint on the joint block-diagonalizer, i.e., imposing as few assumptions as possible on the matrix set, but also to propose JBD algorithms characterized by simplicity, effectiveness and computational efficiency. Accordingly, we suggest two novel non-orthogonal JBD algorithms following Jacobi-type schemes. The new methods extend the joint diagonalization (JD) algorithms [29] based on the LU and QR decompositions mentioned in [30, 31] to the block diagonal case.
This article is organized as follows: the JBD problem is stated in Section 2.1. The two proposed algorithms are derived in Section 2.2, and their convergence is proved in Section 2.3. In Section 3, we show how to apply the JBD algorithms to the CBSS problem. Section 4 gives numerical simulation results comparing the proposed algorithms with the state-of-the-art gradient-based algorithms introduced in [23]. In Section 5, the novel JBD algorithms are shown to be effective in separating vibration sources of the convolutive type, outperforming the JBD_{OG} and JRJBD algorithms [19].
2. Problem formulation and nonorthogonal JBD algorithms
2.1. The joint block diagonalization problem
Let ${\left\{{\mathbf{R}}_{k}\right\}}_{k=1}^{K}\in {C}^{M\times M}$ be a set of $K$ matrices (square but not necessarily Hermitian or positive definite) that can be approximately block diagonalized as:
where $\mathbf{A}\in {C}^{M\times M}$ is a general mixing matrix, the matrix ${\mathbf{N}}_{k}\in {C}^{M\times M}$ denotes residual noise, and ${\left(\cdot \right)}^{H}$ denotes the complex conjugate transpose (replaced by ${\left(\cdot \right)}^{T}$ in the real domain). ${\mathbf{D}}_{k}(k=1,\cdot \cdot \cdot ,K)$ is an $M\times M$ block diagonal matrix whose diagonal blocks are square matrices of any size and whose off-diagonal blocks are zero matrices, i.e.:
where ${\mathbf{D}}_{k,ii}$ denotes the $i$th diagonal block of size ${n}_{i}\times {n}_{i}$ such that ${n}_{1}+{n}_{2}+\cdot \cdot \cdot +{n}_{r}=M$, and ${0}_{ij}$ denotes the ${n}_{i}\times {n}_{j}$ null matrix. In the general case, e.g. in the context of CBSS, the diagonal blocks are assumed to have the same size, i.e. $M={n}_{i}\times r$. The JBD problem consists of estimating $\mathbf{A}$ and ${\mathbf{D}}_{k}(k=1,\cdot \cdot \cdot ,K)$ when the matrices ${\left\{{\mathbf{R}}_{k}\right\}}_{k=1}^{K}$ are given. Note that the JBD model remains unchanged if one substitutes $\mathbf{A}\mathrm{\Lambda}\mathbf{P}$ for $\mathbf{A}$ and ${\mathbf{P}}^{-1}{\mathrm{\Lambda}}^{-1}{\mathbf{D}}_{k}({\mathbf{P}}^{-1}{\mathrm{\Lambda}}^{-1}{)}^{H}$ for ${\mathbf{D}}_{k}$, where $\mathrm{\Lambda}$ is a nonsingular block diagonal matrix whose blocks have the same dimensions as those of ${\mathbf{D}}_{k}$, and $\mathbf{P}$ is an arbitrary block-wise permutation matrix. The JBD model is essentially unique when it is only subject to these indeterminacies of amplitude and permutation [24].
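As a quick numerical illustration of these indeterminacies, the following sketch (hypothetical sizes $M=6$, $L=2$) builds one set member $\mathbf{R}=\mathbf{A}\mathbf{D}{\mathbf{A}}^{T}$ and verifies that replacing $\mathbf{A}$ by $\mathbf{A}\mathrm{\Lambda}\mathbf{P}$ leaves $\mathbf{R}$ unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
M, L = 6, 2                 # matrix size and block size (hypothetical values)
r = M // L                  # number of diagonal blocks

def rand_block_diag(rng, r, L):
    """Random block-diagonal matrix with r blocks of size L x L."""
    D = np.zeros((r * L, r * L))
    for b in range(r):
        D[b*L:(b+1)*L, b*L:(b+1)*L] = rng.standard_normal((L, L))
    return D

A = rng.standard_normal((M, M))
D = rand_block_diag(rng, r, L)
R = A @ D @ A.T

# Indeterminacy: substitute A*Lam*P for A and (Lam*P)^-1 D (Lam*P)^-T for D
Lam = rand_block_diag(rng, r, L)            # nonsingular block-diagonal scaling
P = np.zeros((M, M))
for i, j in enumerate(rng.permutation(r)):  # block-wise permutation matrix
    P[j*L:(j+1)*L, i*L:(i+1)*L] = np.eye(L)

A2 = A @ Lam @ P
D2 = np.linalg.inv(Lam @ P) @ D @ np.linalg.inv(Lam @ P).T
print(np.allclose(A2 @ D2 @ A2.T, R))       # the model is unchanged
```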
To this end, our aim is to present new algorithms to solve the non-orthogonal JBD problem. Neglecting the noise term, the cost function suggested in [23] is considered:
The above cost function can be regarded as the off-diagonal-block error; our aim is to find a nonsingular $\mathbf{B}$ such that ${C}_{JBD}\left(\mathbf{B}\right)$ is as small as possible, where ${\Vert \cdot \Vert}_{F}$ is the Frobenius norm and $\mathbf{B}$ stands for the inverse of the matrix $\mathbf{A}$ (in the BSS context, $\mathbf{B}$ serves as the separating matrix). Consider the square matrix ${\mathbf{R}}_{k}=\left({\mathbf{R}}_{k,ij}\right)\in {C}^{M\times M}$:
where ${\mathbf{R}}_{k,ij}$ for all $i,j=1,\cdot \cdot \cdot ,r$ are ${n}_{i}\times {n}_{j}$ matrices (with ${n}_{1}+{n}_{2}+\cdot \cdot \cdot +{n}_{r}=M$), and the two matrix operators $\mathrm{Bdiag}(\cdot )$ and $\mathrm{OffBdiag}(\cdot )$ are defined respectively as:
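In code, the two operators and the cost of Eq. (3) can be sketched as follows (a minimal NumPy version assuming equal block sizes $L$):

```python
import numpy as np

def bdiag_mask(M, L):
    """Boolean mask that is True on the L x L diagonal blocks of an M x M matrix."""
    idx = np.arange(M) // L
    return idx[:, None] == idx[None, :]

def Bdiag(R, L):
    """Keep the diagonal blocks, zero the rest."""
    return np.where(bdiag_mask(R.shape[0], L), R, 0.0)

def OffBdiag(R, L):
    """Zero the diagonal blocks, keep the rest."""
    return np.where(bdiag_mask(R.shape[0], L), 0.0, R)

def C_JBD(B, Rs, L):
    """Off-diagonal-block error: sum_k ||OffBdiag(B R_k B^T)||_F^2."""
    return sum(np.linalg.norm(OffBdiag(B @ Rk @ B.T, L)) ** 2 for Rk in Rs)
```

For an exact model ${\mathbf{R}}_{k}=\mathbf{A}{\mathbf{D}}_{k}{\mathbf{A}}^{T}$, choosing $\mathbf{B}={\mathbf{A}}^{-1}$ drives this cost to zero (up to floating-point error).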
2.2. Two novel joint block diagonalization algorithms based on LU and QR factorizations
Any nonsingular matrix $\mathbf{B}$ admits the LU factorization [30]:
where $\mathbf{L}$ and $\mathbf{U}$ are $M\times M$ unit lower and upper triangular matrices, respectively (a unit triangular matrix is a triangular matrix whose diagonal elements all equal one). $\mathbf{B}$ also admits the QR factorization:
where $\mathbf{Q}$ is an $M\times M$ orthogonal matrix. Considering the JBD model’s indeterminacies, any nonsingular square separating matrix can be represented by either of these two decompositions. Here, we implement the JBD in the real domain (the case usually encountered in BSS), i.e., in a real triangular and orthogonal basis. It is therefore reasonable to consider the decompositions of Eq. (4) or (5) and to replace the minimization problem of Eq. (3) by two alternating stages involving the following sub-optimizations:
where ${\mathbf{R}}_{k}^{\text{'}}=\stackrel{~}{\mathbf{U}}{\mathbf{R}}_{k}{\stackrel{~}{\mathbf{U}}}^{T}$, and $\stackrel{~}{\mathbf{L}}$, $\stackrel{~}{\mathbf{U}}$ and $\stackrel{~}{\mathbf{Q}}$ denote the estimates of $\mathbf{L}$, $\mathbf{U}$ and $\mathbf{Q}$, respectively. Moreover, we adopt a Jacobi-type scheme to solve Eqs. (6), 7(a) and 7(b) via a set of elementary rotations.
The unit lower triangular Jacobi matrix is denoted ${\mathbf{L}}_{ij}\left(a\right)$, where the parameter $a$ corresponds to the position $(i,j)$ ($i>j$), i.e., ${\mathbf{L}}_{ij}\left(a\right)$ equals the identity matrix except that the entry indexed $(i,j)$ is $a$. In a similar fashion, we define the unit upper triangular Jacobi matrix ${\mathbf{U}}_{ij}\left(a\right)$ with parameter $a$ at position $(i,j)$ ($i<j$). To solve Eqs. (6) and 7(a), we first find the optimal ${\mathbf{U}}_{ij}\left(a\right)$ and ${\mathbf{L}}_{ij}\left(a\right)$ in each iteration. For a fixed position $(i,j)$, one iteration of the method consists of:
1. Solving Eq. (6) with respect to $a$ of ${\mathbf{U}}_{ij}\left(a\right)$, and updating ${\mathbf{R}}_{k}\leftarrow {\mathbf{U}}_{ij}\left(a\right){\mathbf{R}}_{k}{\mathbf{U}}_{ij}(a{)}^{T}$ for all $k$ (U-stage).
2. Solving Eq. (7a) with respect to $a$ of ${\mathbf{L}}_{ij}\left(a\right)$, and updating ${\mathbf{R}}_{k}\leftarrow {\mathbf{L}}_{ij}\left(a\right){\mathbf{R}}_{k}{\mathbf{L}}_{ij}(a{)}^{T}$ for all $k$ (L-stage).
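Since ${\mathbf{L}}_{ij}\left(a\right)=\mathbf{I}+a{\mathbf{e}}_{i}{\mathbf{e}}_{j}^{T}$, each congruence update touches only row $i$ and column $i$ of ${\mathbf{R}}_{k}$. A sketch of this cheap update (our own illustration of the elementary step):

```python
import numpy as np

def elem(M, i, j, a):
    """Jacobi matrix: identity except entry (i, j) = a (lower if i > j, upper if i < j)."""
    E = np.eye(M)
    E[i, j] = a
    return E

def congruence_update(R, i, j, a):
    """R <- E R E^T with E = I + a e_i e_j^T, in O(M) per row/column."""
    R = R.copy()
    R[i, :] += a * R[j, :]   # row i <- row i + a * row j
    R[:, i] += a * R[:, j]   # column i <- column i + a * column j
    return R
```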
We note that the two proposed non-orthogonal JBD algorithms are both of the above-mentioned Jacobi type, differing only in the adopted decomposition (LU or QR) and in implementation details. Next, we give the details of the proposed algorithms. Following Eq. (6), we have:
$\stackrel{~}{a}=\mathrm{argmi}{\mathrm{n}}_{a}\;{a}^{2}{R}_{1}+2a{R}_{2}+\mathrm{const},\quad i>L,$
$\text{or}\quad \stackrel{~}{a}=\mathrm{argmi}{\mathrm{n}}_{a}\;{a}^{2}{R}_{3}+2a{R}_{4}+\mathrm{const},\quad i\le L,$
where $L$ is the block dimension, $i,j=1,\dots ,M$ and $i<j$.
For a matrix $\mathbf{R}$, $\mathbf{R}(i,index)$ denotes the row vector whose elements are taken from the $i$th row of $\mathbf{R}$ at the positions given by $index$; the column vector $\mathbf{R}(index,j)$ is defined similarly.
The optimal $\stackrel{~}{a}$ in Eq. (8) is computed as:
If ${R}_{1}=0$ or ${R}_{3}=0$, set $\stackrel{~}{a}=0$, i.e., ${C}_{JBD}\left(\mathbf{U}\right)$ cannot be reduced by this particular ${\mathbf{U}}_{ij}\left(a\right)$.
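The one-dimensional sub-problem is an upward-opening parabola, so its minimizer is available in closed form. A small helper illustrating this step (the symbol names are ours):

```python
def optimal_a(Rc, Rd):
    """Minimizer of f(a) = a^2 * Rc + 2 * a * Rd + const, with Rc >= 0.
    When Rc == 0 the criterion cannot be reduced, so return 0."""
    return 0.0 if Rc == 0 else -Rd / Rc
```

For instance, minimizing ${a}^{2}\cdot 2+2a\cdot 3$ gives $\stackrel{~}{a}=-3/2$.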
For the lower triangular matrices ${\mathbf{L}}_{ij}\left(a\right)$, we have a similar result:
$\stackrel{~}{a}=\mathrm{argmi}{\mathrm{n}}_{a}\;{a}^{2}{R}_{1}^{\text{'}}+2a{R}_{2}^{\text{'}}+\mathrm{const},\quad i>L,$
$\text{or}\quad \stackrel{~}{a}=\mathrm{argmi}{\mathrm{n}}_{a}\;{a}^{2}{R}_{3}^{\text{'}}+2a{R}_{4}^{\text{'}}+\mathrm{const},\quad i\le L,$
where $L$ is the block dimension, $i,j=1,\dots ,M$ and $i>j$:
$l$ is a row vector whose elements ${l}_{1}$ satisfy $\lceil {l}_{1}/L\rceil \ne \lceil i/L\rceil$, ${l}_{1}\in [1:i-1,\,i+1:M]$, where $\lceil X\rceil$ rounds the elements of $X$ to the nearest integers towards infinity:
The optimal $\stackrel{~}{a}$ in Eq. (10) is computed as:
If ${R}_{1}^{\text{'}}=0$ or ${R}_{3}^{\text{'}}=0$, set $\stackrel{~}{a}=0$, i.e., ${C}_{JBD}\left(\mathbf{L}\right)$ cannot be reduced by this particular ${\mathbf{L}}_{ij}\left(a\right)$. The computation of the optimal parameter $\stackrel{~}{a}$ requires solving a polynomial of degree 2 in the real domain, which is more efficient than other JBD methods that need to solve a polynomial of degree 4, such as JBD_{OG}, JBD_{ORG} and JBDNCG, etc. [22-24].
In the QR algorithm, we consider the QR decomposition of $\mathbf{B}$; the sub-optimization problem of the Q-stage in Eq. 7(b) is then an orthogonal JBD problem, which can be solved by Févotte’s Jacobi-type algorithm [19]. Févotte indicated that the behavior of the Jacobi approach depends strongly on the initialization of the orthogonal basis and also on the choice of the successive rotations. Here, the algorithm is initialized with the matrix ${\mathbf{Q}}_{JD}$ provided by joint diagonalization of ${\mathbf{R}}_{k}$, and at each iteration the couple $(p,q)$ ensuring a maximum decrease of the criterion ${C}_{JBD}\left(\mathbf{Q}\right)$ is chosen.
Having obtained the U-stage and L-stage (Q-stage) of the proposed algorithms, we loop over these two stages until convergence is reached. There are several ways to control the convergence of the JBD algorithms. For example, we could stop the iterations when the parameter values in each iteration of the U-stage or L-stage (Q-stage) are small enough, which indicates a minute contribution from the elementary rotations and hence convergence. We may also monitor the sum of off-diagonal squared norms ${C}_{JBD}\left(\mathbf{B}\right)$ over the iterations, and stop the loop when its change is smaller than a preset threshold. The condition ${\Vert \mathbf{L}\mathbf{U}-\mathbf{I}\Vert}_{F}^{2}<\epsilon$ between two successive complete runs of the U-stage and L-stage is also commonly used as a termination criterion. Here, we stop the iterations when ${I}_{off}=\sum _{k=1}^{K}{\Vert \mathrm{OffBdiag}\left({\mathbf{R}}_{k}\right)\Vert}_{F}^{2}/\sum _{k=1}^{K}{\Vert \mathrm{Bdiag}\left({\mathbf{R}}_{k}\right)\Vert}_{F}^{2}<\epsilon$, which reflects the relative weight of the off-block-diagonal part against the block-diagonal part. This criterion can be adapted to all iterative methods for solving the JBD problem, which also allows a more intuitive comparison between these methods; it is therefore more rational and effective. In the remainder of this manuscript, we use one of the following termination criteria:
(st1) ${I}_{off}<1{0}^{-7}$;
(st2) $\left|{C}_{JBD}^{t+1}-{C}_{JBD}^{t}\right|/{C}_{JBD}^{t}<1{0}^{-7}$;
(st3) $\left|{I}_{conv}^{t+1}-{I}_{conv}^{t}\right|<1{0}^{-5}$ (defined in Section 4).
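The stopping quantity ${I}_{off}$ used by (st1) can be sketched as follows (equal block sizes $L$ assumed):

```python
import numpy as np

def I_off(Rs, L):
    """Relative off-block-diagonal energy:
    sum_k ||OffBdiag(R_k)||_F^2 / sum_k ||Bdiag(R_k)||_F^2."""
    idx = np.arange(Rs[0].shape[0]) // L
    mask = idx[:, None] == idx[None, :]          # True on the diagonal blocks
    off = sum(float(np.sum(Rk[~mask] ** 2)) for Rk in Rs)
    on = sum(float(np.sum(Rk[mask] ** 2)) for Rk in Rs)
    return off / on
```

The loop then terminates when, e.g., `I_off(Rs, L) < 1e-7`.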
We name the novel JBD approaches based on the LU and QR factorizations LUJBD and QRJBD, respectively, and summarize them as follows:
1. Set $\mathbf{B}=\mathbf{I}$.
2. U-stage (R-stage): set $\mathbf{U}=\mathbf{I}$, for $1\le i<j\le M$
Find ${\stackrel{~}{a}}_{ij}=\mathrm{a}\mathrm{r}\mathrm{g}\mathrm{m}\mathrm{i}{n}_{a}{C}_{JBD}\left({\mathbf{U}}_{ij}\right(a\left)\right)$ from Eq. (9)
Update ${\mathbf{R}}_{k}\leftarrow {\mathbf{U}}_{ij}\left({\stackrel{~}{a}}_{ij}\right){\mathbf{R}}_{k}{\mathbf{U}}_{ij}({\stackrel{~}{a}}_{ij}{)}^{T}$ and $\mathbf{U}\leftarrow {\mathbf{U}}_{ij}\left({\stackrel{~}{a}}_{ij}\right)\mathbf{U}$
3. L-stage (Q-stage): set $\mathbf{L}=\mathbf{I}$ ($\mathbf{Q}={\mathbf{Q}}_{JD}$), for $1\le j<i\le M$
Find ${\stackrel{~}{a}}_{ij}=\mathrm{argmi}{\mathrm{n}}_{a}{C}_{JBD}\left({\mathbf{L}}_{ij}\left(a\right)\right)$ from Eq. (11) (${\theta}_{ij}=\mathrm{argmi}{\mathrm{n}}_{\theta}{C}_{JBD}\left({\mathbf{G}}_{ij}\left(\theta \right)\right)$ from [20])
Update ${\mathbf{R}}_{k}\leftarrow {\mathbf{L}}_{ij}\left({\stackrel{~}{a}}_{ij}\right){\mathbf{R}}_{k}{\mathbf{L}}_{ij}({\stackrel{~}{a}}_{ij}{)}^{T}$ (${\mathbf{R}}_{k}\leftarrow {\mathbf{G}}_{ij}\left({\theta}_{ij}\right){\mathbf{R}}_{k}{\mathbf{G}}_{ij}({\theta}_{ij}{)}^{T}$)
and $\mathbf{L}\leftarrow {\mathbf{L}}_{ij}\left({\stackrel{~}{a}}_{ij}\right)\mathbf{L}$ ($\mathbf{Q}\leftarrow {\mathbf{G}}_{ij}\left({\theta}_{ij}\right)\mathbf{Q}$)
4. If the chosen termination criterion from (st1), (st2) or (st3) is not yet satisfied, then $\mathbf{B}\leftarrow \mathbf{L}\mathbf{U}\mathbf{B}$ ($\mathbf{B}\leftarrow \mathbf{Q}\mathbf{U}\mathbf{B}$) and go to step 2; otherwise stop.
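To make the loop concrete, here is a compact NumPy sketch of the LU-JBD scheme. It derives the quadratic coefficients $R_c$ and $R_d$ directly from the form of the criterion restricted to the entries an elementary shear changes (our generic derivation; the paper's Eqs. (8)-(11) split these into the cases $i>L$ and $i\le L$), so it is a sketch rather than a verbatim implementation:

```python
import numpy as np

def jbd_lu(Rs, L, n_sweeps=20):
    """Sketch of the LU-JBD Jacobi scheme: alternating U-stage (i < j) and
    L-stage (i > j) sweeps of elementary updates R_k <- E R_k E^T with
    E = I + a e_i e_j^T and the closed-form optimal a = -Rd / Rc."""
    M = Rs[0].shape[0]
    Rs = [Rk.copy() for Rk in Rs]
    blk = np.arange(M) // L
    B = np.eye(M)
    for sweep in range(2 * n_sweeps):
        lower = sweep % 2 == 1              # even sweeps: U-stage, odd: L-stage
        for i in range(M):
            for j in range(M):
                if i == j or (i > j) != lower:
                    continue
                off = blk != blk[i]         # entries outside the block of index i
                Rc = Rd = 0.0
                for Rk in Rs:               # quadratic coefficients of the criterion
                    Rc += Rk[j, off] @ Rk[j, off] + Rk[off, j] @ Rk[off, j]
                    Rd += Rk[i, off] @ Rk[j, off] + Rk[off, i] @ Rk[off, j]
                a = 0.0 if Rc == 0 else -Rd / Rc
                for Rk in Rs:               # congruence update, O(M) per matrix
                    Rk[i, :] += a * Rk[j, :]
                    Rk[:, i] += a * Rk[:, j]
                B[i, :] += a * B[j, :]      # accumulate the separating matrix
    return B, Rs
```

By construction each elementary step minimizes the cost exactly over its single parameter, so the off-block energy of the returned set never exceeds that of the input.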
Using triangular and orthogonal Jacobi matrices, we replace the $M(M-1)/2$-dimensional minimization problems by a sequence of simple one-dimensional problems. Note that for updating ${\mathbf{R}}_{k}$, the matrix multiplications can be realized by a few vector scalings and additions. In addition, this costs less time than other non-orthogonal JBD methods [22-24]. The existence and uniqueness of the joint block diagonalization of this cost function has been proved in [20].
2.3. Convergence of the algorithm
By construction, the algorithm ensures a decrease of the criterion ${C}_{JBD}\left(\mathbf{B}\right)$ during the iteration process. According to Eq. (8) or (10), we have:
where, respectively, ${R}_{c}\in \{{R}_{1},{R}_{3},{R}_{1}^{\text{'}},{R}_{3}^{\text{'}}\}$, ${R}_{d}\in \{{R}_{2},{R}_{4},{R}_{2}^{\text{'}},{R}_{4}^{\text{'}}\}$, and ${R}_{c}>0$.
Substituting $\stackrel{~}{a}$ from Eq. (9) or (11):
Equality in Eq. (12) holds only when ${R}_{c}=0$ or ${R}_{d}=0$ (in which case we set $\stackrel{~}{a}=0$), but usually ${R}_{c}\ne 0$ and ${R}_{d}\ne 0$ in the CBSS setting of Section 3.
Similarly, ${C}_{JBD}\left({\mathbf{L}}^{t+1}\right)-{C}_{JBD}\left({\mathbf{L}}^{t}\right)\le 0$, and ${C}_{JBD}\left({\mathbf{Q}}^{t+1}\right)-{C}_{JBD}\left({\mathbf{Q}}^{t}\right)\le 0$ has been proved in [20].
At each iteration of the algorithm, the matrix ${\mathbf{R}}_{k}^{t+1}$ obtained after the rotations is thus ‘at least as block diagonal as’ the matrix ${\mathbf{R}}_{k}^{t}$ of the previous iteration. Since every bounded monotonic sequence of real numbers converges, the convergence of our algorithm is guaranteed.
3. Application to CBSS
The problem of convolutive BSS (CBSS) occurs in various applications. One typical application, fully studied in this paper, is the blind separation of vibration signals. CBSS consists of estimating a set of unobserved source signals from their convolutive mixtures without a priori knowledge of the sources or of the corresponding mixing system. CBSS can then be solved by means of JBD of a set of covariance matrices. We consider the following discrete-time MIMO model [25]:
where $t$ is the discrete time index, ${l}^{\text{'}}=1,\dots ,{L}^{\text{'}}$, and ${L}^{\text{'}}$ denotes the FIR filter length. $\mathbf{S}\left(t\right)={\left[{s}_{1}\left(t\right),\cdot \cdot \cdot ,{s}_{m}\left(t\right)\right]}^{T}$ denotes the source signal vector with $m$ sources, and $\mathbf{X}\left(t\right)={\left[{x}_{1}\left(t\right),\cdot \cdot \cdot ,{x}_{n}\left(t\right)\right]}^{T}$ is the mixture vector formed by the $n$ observed signals. In the linear time-invariant mixing system, the matrix impulse response $\mathbf{H}\left({l}^{\text{'}}\right)=\left[{\mathbf{h}}_{1}\left({l}^{\text{'}}\right),\cdot \cdot \cdot ,{\mathbf{h}}_{m}\left({l}^{\text{'}}\right)\right]$ consists of the channel impulse responses ${h}_{ij}\left({l}^{\text{'}}\right)$ $(i=1,\dots ,n$, $j=1,\dots ,m)$. For the signal received on the $i$th array element, we apply a sliding window of length $P+1$ and form the column vector:
Then, stacking the windowed array elements together, we define the observed signal vector:
Hence:
where $\bar{\mathbf{S}}\left(t\right)={\left[{\mathbf{s}}_{1}\left(t\right),{\mathbf{s}}_{2}\left(t\right),\cdot \cdot \cdot ,{\mathbf{s}}_{m}\left(t\right)\right]}^{T}$, and:
${\mathbf{H}}_{ij}$ is the block element of $\mathbf{H}$ of dimension $(P+1)\times ({L}^{\text{'}}+P)$, and:
The following assumptions concerning the above mixture model Eq. (14) have to be made to ensure that it is possible to apply the proposed algorithms to CBSS [25].
Assumption 1. The source signals are zero mean, individually correlated in time but mutually uncorrelated, i.e., for all time delays $\tau $, the cross-correlation functions ${r}_{{s}_{i}{s}_{j}}(t,\tau )=0$, $\forall i\ne j$, and the autocorrelation functions ${r}_{{s}_{i}{s}_{i}}(t,\tau )\ne 0$, $i=1,\dots ,m$. Moreover, the source signals have different autocorrelation functions.
Assumption 2. The sensor noises $\mathbf{N}\left(t\right)$ are zero mean, independent and identically distributed with the same power ${\sigma}_{N}^{2}$. The noises are assumed to be independent of the sources.
Assumption 3. The mixing matrix $\mathbf{H}$ is assumed to have full column rank. This requires that the length of the sliding window $P$ satisfies $n(P+1)\ge m(P+{L}^{\text{'}})$.
Assumption 1 is the core assumption. As shown in [25], it enables us to separate the sources from their convolutive mixtures by diagonalizing the second-order statistics of the reformulated observed signals (addressed below). Assumption 2 enables us to deal with the noise easily, and Assumption 3 guarantees that the mixing system is invertible; it is therefore a necessary condition for the source signals to be completely separable.
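Assumption 3 fixes the minimal window length: solving $n(P+1)\ge m(P+{L}^{\text{'}})$ for $P$ (possible only when $n>m$) gives $P\ge \lceil (m{L}^{\text{'}}-n)/(n-m)\rceil$. A small check of this bound:

```python
import math

def min_window(n, m, Lp):
    """Smallest sliding-window parameter P with n*(P+1) >= m*(P+Lp)
    (Assumption 3); requires more sensors than sources (n > m)."""
    assert n > m, "needs n > m"
    return max(0, math.ceil((m * Lp - n) / (n - m)))

print(min_window(5, 3, 13))   # -> 17, matching P = 17 chosen in Section 5
```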
Under these assumptions, the spatial covariance matrices of the observations satisfy:
where ${\tau}_{k}\in \left[0,1,2,\dots ,q-1\right](k=1,\dots ,K)$ are the successive time delays, and ${\mathbf{R}}_{N}(t,t-{\tau}_{k})$ is computed according to [26]. It can be deduced from the above assumptions that the matrices in Eq. (15) take the following forms, respectively:
where the block matrices in ${\mathbf{R}}_{\bar{x}}\left({\tau}_{k}\right)$ and ${\mathbf{R}}_{\bar{s}}\left({\tau}_{k}\right)$ have the following form:
where $r(a,b)=E\left({x}_{i}\left(t-a\right){x}_{j}^{T}\left(t-b\right)\right)$, and ${\mathbf{R}}_{{s}_{i}{s}_{i}}\left({\tau}_{k}\right)$ has a similar form, being an $({L}^{\text{'}}+P)\times ({L}^{\text{'}}+P)$ matrix. According to Eq. (15), the group of matrices $\mathbf{R}=\left\{{\mathbf{R}}_{k},k=1,\dots ,K\right\}$ can be jointly block diagonalized, satisfying ${\mathbf{R}}_{k}=\mathbf{A}{\mathbf{D}}_{k}{\mathbf{A}}^{T}$ with ${\mathbf{D}}_{k}$ block diagonal. Hence, the JBD methods of Section 2.2 can be used to solve the CBSS problem. Once the joint block-diagonalizer $\mathbf{B}$ is determined, the recovered signals are obtained, up to a permutation and a filter, by:
It is worth mentioning that the indeterminacies of amplitude and permutation in the JBD algorithms correspond to the well-known indeterminacies in CBSS. In practice, the correlation matrices ${\mathbf{R}}_{\bar{x}}\left({\tau}_{k}\right)$ are replaced by their discrete-time sample estimates. To acquire good estimates of the discrete correlation matrices, we may divide the observed sequences (the output of the reformulated model (15)) into samples of appropriate length.
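A sketch of forming the stacked sliding-window observations and their sample correlation matrices at a set of lags (the variable names are ours, and the transpose convention of the lag product may differ from [25]):

```python
import numpy as np

def stacked_obs(X, P):
    """Stack each sensor signal with a (P+1)-sample sliding window:
    column t holds [x_1(t),...,x_1(t-P), ..., x_n(t),...,x_n(t-P)]^T."""
    n, T = X.shape
    cols = [np.concatenate([X[i, t - P:t + 1][::-1] for i in range(n)])
            for t in range(P, T)]
    return np.array(cols).T                     # shape (n*(P+1), T-P)

def corr_set(Xbar, taus):
    """Sample correlation matrices R(tau) ~ E[x(t) x(t-tau)^T]."""
    T = Xbar.shape[1]
    return [Xbar[:, tau:] @ Xbar[:, :T - tau].T / (T - tau) for tau in taus]
```

The list returned by `corr_set` is the matrix set handed to the JBD algorithm.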
4. Numerical simulations
Simulations are now provided to illustrate the behavior and performance of the proposed JBD algorithms (LUJBD, QRJBD). We also compare the proposed algorithms with JBD_{OG} and JBD_{ORG} in terms of robustness and efficiency on randomly generated data. To this end, a set of real block-diagonal matrices ${\mathbf{D}}_{k}$ (for all $k=1,\dots ,K$) is drawn with random entries from a Gaussian distribution of zero mean and unit variance. Then, random noise entries with a Gaussian distribution of zero mean and variance ${\sigma}_{N}^{2}$ are added to the off-diagonal blocks of the matrices ${\mathbf{D}}_{k}$. A signal-to-noise ratio is defined as $SNR=10\mathrm{log}(1/{\sigma}_{N}^{2})$.
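The synthetic test set can be generated along these lines (a sketch; as described above, the Gaussian perturbation is added to the off-diagonal blocks of ${\mathbf{D}}_{k}$ before mixing):

```python
import numpy as np

def make_test_set(M, L, K, snr_db, rng):
    """Random JBD test set: R_k = A (D_k + N_k) A^T, with block-diagonal D_k
    and off-diagonal-block Gaussian noise N_k of variance 10^(-SNR/10)."""
    sigma = 10.0 ** (-snr_db / 20.0)         # from SNR = 10*log10(1/sigma^2)
    blk = np.arange(M) // L
    mask = blk[:, None] == blk[None, :]
    A = rng.standard_normal((M, M))
    Rs = []
    for _ in range(K):
        Dk = np.where(mask, rng.standard_normal((M, M)), 0.0)
        Nk = np.where(mask, 0.0, sigma * rng.standard_normal((M, M)))
        Rs.append(A @ (Dk + Nk) @ A.T)
    return A, Rs
```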
To measure the quality of the different algorithms, the following performance index is used [22]:
where ${\mathbf{G}}_{ij}$ ($i,j\in \left\{1,\cdot \cdot \cdot ,r\right\}$) denotes the $\left(i,j\right)$th block of $\mathbf{G}=\stackrel{~}{\mathbf{B}}\mathbf{A}$. This index is also used for the CBSS, since it takes into account the inherent indeterminacies of the BSS problem. Clearly, the smaller the performance index ${I}_{conv}\left(\mathbf{G}\right)$, the better the separation quality. In the charts, ${I}_{conv}\left(\mathbf{G}\right)$ is given in dB, i.e., ${I}_{conv}\left(\mathbf{G}\right)dB=10\mathrm{log}\left({I}_{conv}\left(\mathbf{G}\right)\right)$. In all simulations, the true mixing matrix $\mathbf{A}$ or $\mathbf{H}$ is randomly chosen with zero mean and unit variance.
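Since the exact expression from [22] is not reproduced here, the following sketch implements a common block generalization of the Moreau-Amari index with the same invariances (zero exactly when $\mathbf{G}$ is a block-scaled block permutation); the normalization may differ from the paper's definition:

```python
import numpy as np

def I_conv(G, L):
    """Block Moreau-Amari-style index on G = B~ A (an assumed common form,
    not necessarily the exact normalization of [22])."""
    r = G.shape[0] // L
    # E[i, j] = squared Frobenius norm of the (i, j)th L x L block of G
    E = np.array([[np.linalg.norm(G[i*L:(i+1)*L, j*L:(j+1)*L], 'fro') ** 2
                   for j in range(r)] for i in range(r)])
    rows = (E / E.max(axis=1, keepdims=True)).sum(axis=1) - 1
    cols = (E / E.max(axis=0, keepdims=True)).sum(axis=0) - 1
    return (rows.sum() + cols.sum()) / (2 * r * (r - 1))
```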
In Fig. 1 and Fig. 2, we focus on the exactly determined case $M=$ 9, $L=$ 3, $K=$ 20, $SNR=$ 60 dB; the results have been averaged over 100 runs. Fig. 1 presents the percentage of successful runs, where a run is declared successful when one of the following criteria is satisfied: ${I}_{conv}<1{0}^{-7}$, (st1), (st2). Fig. 2 presents the average running time per successful convergence. Comparing the LUJBD and QRJBD approaches with the state-of-the-art JBD_{OG} and JBD_{ORG}, we can confirm that the approaches proposed in Section 2.2 improve the JBD performance over the gradient-based methods: the LUJBD and QRJBD methods converge to the global minimum more frequently (Fig. 1) and faster (Fig. 2) than JBD_{OG} and JBD_{ORG}. Under the same termination criteria, the LUJBD and QRJBD methods also outperform the JBD_{OG} and JBD_{ORG} methods, with LUJBD showing the best performance. We can also conclude that the sensitivity to the different convergence termination criteria varies between JBD methods, and the percentage of successful runs and the average running time vary accordingly. In other words, one should choose an appropriate termination criterion that balances the block diagonalization accuracy, the success rate and the convergence speed.
In Fig. 3, we focus on the exactly determined case $M=$ 9, $L=$ 3, $K=$ 20, $SNR=$ 60 dB; the results have been averaged over 100 runs. Because the various approaches have different per-step iteration times, we here let all methods run for the same amount of time, which is more reasonable than running them for a fixed number of iteration steps as in [23, 25]. The evolution of the performance index versus the convergence time shows that the convergence performance of the LUJBD and QRJBD methods is better than that of the JBDOG and JBDORG methods. The LUJBD and QRJBD algorithms need less time for the performance index ${I}_{conv}$ to reach a stable convergence level, and achieve a smaller ${I}_{conv}$ when all algorithms are run for the same time. In other words, the BSS methods proposed in this paper possess less convergence time, higher convergence precision and faster convergence speed.
Fig. 1The percentage of successful runs
Fig. 2The average running time per successful convergence
Fig. 3Average performance index Iconv versus the convergence time for LUJBD, QRJBD, JBDOG, JBDORG algorithms. SNR= 60 and K= 20 matrices
In Fig. 4, we discuss how the number of matrices affects the performance. The results have been averaged over 100 runs. We use the same stopping rule, i.e., one of the termination criteria (st1), (st2) and (st3) is satisfied, and set $M=$ 9, $L=$ 3, $SNR=$ 60 dB. The following observations can be made: the more matrices are jointly block-diagonalized, the better the performance obtained. However, the computational cost also increases as the number of matrices rises. Therefore, the choice of the number of matrices should balance the accuracy of the JBD algorithms against their complexity. From Fig. 4, 20 matrices is a good choice. The LUJBD algorithm, with its better convergence, turns out to be slightly superior to the other JBD algorithms.
Fig. 4Average Iconv versus the number of matrices for LUJBD, QRJBD, JBDOG, JBDORG algorithms. SNR= 60, averaged over 100 runs
Fig. 5Average Iconv versus SNR for LUJBD, QRJBD, JBDOG, JBDORG algorithms. K= 20, averaged over 100 runs
With the same stopping rule and other assumptions as in Fig. 4, one can observe from Fig. 5 that as the SNR grows, the average performance of each algorithm improves except for a few fluctuation points. The noise sensitivity of LUJBD is slightly higher than that of the remaining three methods; however, for a given value of SNR, the average performance index of LUJBD and QRJBD is always better than that of the two gradient-based methods.
Finally, in Fig. 6 and Fig. 7, we study the separation performance versus the matrix dimension $M$ and the block dimension $L$ for both algorithms ($K=$ 20, $SNR=$ 60 dB); the results have been averaged over 100 runs. One can observe that for the same block dimension $L$, the larger the matrix dimension $M$, the lower the estimation accuracy of the mixing matrix; and for the same matrix dimension $M$, the larger the block dimension $L$, the better the estimation accuracy. Therefore, we can improve the performance index by increasing the number of matrices $K$ and the block dimension $L$ when the dimension of the target matrices increases.
Fig. 6Average Iconv versus the convergence time with different M and L for LUJBD algorithms. SNR=60 and K= 20 matrices
Fig. 7Average Iconv versus the convergence time with different M and L for QRJBD algorithms. SNR=60 and K= 20 matrices
5. Applying CBSS to the vibration source separation
The experimental model is a double-stiffened cylindrical shell depicted in Fig. 8, used to simulate a cabin model. Underwater vibration tests of the double-stiffened cylindrical shell were carried out in an anechoic water pool 16 m long, 8 m wide and 8 m deep. In the shell, three exciters were arranged in the front part (No. 1 excitation source), middle part (No. 2 excitation source) and rear part (No. 3 excitation source), respectively, to simulate the vibration sources of the internal equipment. Twenty-nine vibration acceleration sensors were arranged on the inner shell, and four accelerometers containing abundant vibration information were arranged in the vicinity of each excitation point. The locations of the exciters and acceleration sensors are shown in Fig. 9. Only the vertical excitation and response were considered in this test, and the model was located 3.75 m underwater. During the test, the three exciters were controlled from the shore, and each exciter was turned on separately, or multiple exciters were operated simultaneously, according to the different test requirements. The three exciters emitted continuous sinusoidal signals of different excitation frequencies and equal energy: 5 kHz, 4 kHz and 3 kHz for the No. 1, No. 2 and No. 3 exciters, respectively. The vibration data were collected when the exciters were in stable operation. The sampling frequency was 16384 Hz and the sampling time was 10 s.
Fig. 11(d), (e), (f) show the mixture signals obtained by three sensors (17, 20, 23) on the inner shell when all of the exciters act simultaneously. It is obvious that the mixed signals, with mutual spectral aliasing, cannot represent the real vibration characteristics or be used directly. This also illustrates that a mixture of vibrations is most often of the convolutive type, which is difficult to handle, and moreover, that strict independence among the vibration sources is often not satisfied. It is therefore difficult to separate mechanical vibration sources using traditional source separation methods. The Jacobi-type JBD algorithms based on the LU or QR factorizations proposed in this article can overcome these shortcomings effectively.
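A convolutive mixture differs from an instantaneous one in that each source is filtered by the transfer path before summation at a sensor. The following minimal sketch illustrates this model with the excitation frequencies and sampling rate of the test above; the FIR channels `H` are random placeholders, not the measured transfer paths of the shell:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, L_filt = 16384, 13                  # sampling rate (Hz), FIR channel length
t = np.arange(fs) / fs                  # one second of samples
# three sinusoidal "exciter" sources at 5, 4 and 3 kHz
S = np.stack([np.sin(2 * np.pi * f * t) for f in (5000, 4000, 3000)])
n_sens, n_src = 5, 3
# hypothetical FIR mixing filters H[i, j, :] from source j to sensor i
H = rng.standard_normal((n_sens, n_src, L_filt))
# convolutive mixture: x_i(t) = sum_j (h_ij * s_j)(t)
X = np.zeros((n_sens, S.shape[1]))
for i in range(n_sens):
    for j in range(n_src):
        X[i] += np.convolve(S[j], H[i, j])[: S.shape[1]]
```

Because each sensor sees delayed and filtered copies of every source, the spectra overlap exactly as observed in Fig. 11(d)-(f).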
To ensure the stability of the solution, the termination criterion $\epsilon = \left|I_{off}^{t+t_1} - I_{off}^{t+t_1+1}\right| < 10^{-6}$ $(t_1 = 0, \dots, 9)$ is adopted. We select $n=$ 5 observed signals, i.e., sensors 17, 20, 23, 26 and 29, and $m=$ 3 source signals. The model parameters $L^{\prime}$, $P$ and $K$ are selected to guarantee that the solution accuracy satisfies $I_{off} \le$ –35 dB: the filter length is $L^{\prime}=$ 13, the sliding window is $P=$ 17, and a set of $K=$ 30 covariance matrices is formed with time lags $\tau_k$ taking linearly spaced values.
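The target matrices for JBD in convolutive BSS are commonly built by stacking $P$ delayed copies of each sensor signal and computing time-lagged covariance matrices, which are block-structured when the model holds. The sketch below illustrates this construction; the stacking order and the linearly spaced integer lags are assumptions for illustration, not necessarily the paper's exact formulation:

```python
import numpy as np

def lagged_covariances(X, P=17, K=30):
    """Stack P delayed copies of each sensor signal, then form K
    time-lagged covariance matrices (the JBD target set)."""
    n, T = X.shape
    # block row for sensor i holds X[i, p : T-P+1+p] for p = 0..P-1
    Xs = np.vstack([np.vstack([X[i, p:T - P + 1 + p] for p in range(P)])
                    for i in range(n)])                # shape (n*P, T-P+1)
    Xs = Xs - Xs.mean(axis=1, keepdims=True)
    Tp = Xs.shape[1]
    taus = np.arange(1, K + 1)                         # linearly spaced lags
    R = [Xs[:, :Tp - tau] @ Xs[:, tau:].T / (Tp - tau) for tau in taus]
    return R                                           # K matrices, each (n*P, n*P)
```

With $n=$ 5 sensors and $P=$ 17 this yields $K=$ 30 matrices of size 85 × 85, to be jointly block-diagonalized with block size related to the filter length.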
In Fig. 10, the evolution of the performance index $I_{off}$ versus the convergence time shows that the convergence performance of the LUJBD and QRJBD methods is superior to that of the JBDOG and JRJBD methods [20]. The LUJBD and QRJBD algorithms require less time to reach $I_{off}<$ –20 dB, and attain a smaller performance index $I_{off}$ for the same convergence time. In other words, the BSS methods proposed in this paper possess higher convergence precision and faster convergence speed. The lower accuracy and efficiency of the latter two algorithms are mainly due to the following reasons: (1) the JBDOG algorithm generally suffers from a slow convergence rate and is apt to converge to a local minimum, and the accuracy of blind source separation is often hindered by the inversion of ill-conditioned matrices; (2) the joint block-diagonalizer of the JRJBD algorithm is an orthogonal matrix, so spatial prewhitening, which is likely to introduce a larger error, needs to be applied; moreover, this error cannot be corrected in the subsequent analysis.
Fig. 11(a), (b), (c) show the source signals acquired by the acceleration sensors near the excitation points (here we choose acceleration sensors 1, 8 and 14, which have a higher signal-to-noise ratio) when each exciter acts separately. Comparison of the recovered primary sources – see Fig. 11(g), (h), (i) for the LUJBD model and (j), (k), (l) for the QRJBD model – with the true ones shows that the proposed separation algorithms are effective, subject only to the indeterminacies of permutation and amplitude.
Fig. 8The experimental cabin model
Fig. 9The location of exciters and acceleration sensors
Fig. 10 Average performance index Iconv versus the convergence time for four algorithms. K = 30, L' = 13, P = 17
Fig. 11 Separation results for the known channel. Sources: a) s1, b) s2, c) s3. Mixtures: d) x1, e) x2, f) x3. Separated sources for LUJBD: g) s1', h) s2', i) s3'. Separated sources for QRJBD: j) s1'', k) s2'', l) s3''
6. Conclusions
In this article, to solve the convolutive BSS (CBSS) problem, we present a class of simple Jacobi-type JBD algorithms based on the LU or QR factorizations. Using Jacobi-type matrices, high-dimensional minimization problems are replaced by a sequence of simple one-dimensional problems. In the two novel methods, named LUJBD and QRJBD, the joint block-diagonalizer is no longer required to be orthogonal, positive definite or symmetric. In addition, we propose a novel convergence criterion that reflects the relative change between the off-block-diagonal and block-diagonal parts. This criterion can be applied to all iterative methods for solving the JBD problem, and it also gives a more intuitive comparison between different methods. In the LUJBD and QRJBD methods, computing the optimal parameter requires solving a polynomial of degree 2 in the real domain, which is more efficient than other JBD methods that need to solve a polynomial of degree 4. Therefore, the two new algorithms require less time than other non-orthogonal JBD methods, and moreover, their convergence is also guaranteed.
A series of comparisons of the proposed approaches with the state-of-the-art gradient-based JBD approaches (JBD_{OG} and JBD_{ORG}) is carried out through a variety of numerical simulations. The results show that the LUJBD and QRJBD methods converge to the global minimum more frequently, and faster, than JBD_{OG} and JBD_{ORG}. Choosing an appropriate termination criterion is beneficial to the accuracy of block diagonalization, the success rate and the convergence speed. It can be readily observed that the more target matrices selected for joint block-diagonalization, the better the performance obtained; but the computational cost also increases with the number of matrices, so the choice of the number of matrices should balance accuracy against complexity. The performance index can also be improved by increasing the block dimension and decreasing the matrix dimension. The two novel JBD algorithms and the JBD_{OG} and JRJBD methods are then applied to practical vibration sources, and we conclude that the convergence performance and accuracy of the LUJBD and QRJBD methods are superior to those of the JBD_{OG} and JRJBD methods. Finally, comparison of the recovered primary sources with the true ones demonstrates the validity of the proposed algorithms for separating vibration signals of convolutive mixtures.
References

1. Castell M., Bianchi P., Chevreuil A., et al. A blind source separation framework for detecting CPM sources mixed by a convolutive MIMO filter. Signal Processing, Vol. 86, 2006, p. 1950-1967.
2. Zhang Y., Zhao Y. X. Modulation domain blind speech separation in noisy environments. Speech Communication, Vol. 55, Issue 10, 2013, p. 1081-1099.
3. Pelegrina G. D., Duarte L. T., Jutten C. Blind source separation and feature extraction in concurrent control charts pattern recognition: novel analyses and a comparison of different methods. Computers and Industrial Engineering, Vol. 92, 2016, p. 105-114.
4. Popescu T. D. Blind separation of vibration signals and source change detection: application to machine monitoring. Applied Mathematical Modelling, Vol. 34, 2010, p. 3408-3421.
5. McNeill S. I., Zimmerman D. C. Relating independent components to free-vibration modal responses. Shock and Vibration, Vol. 17, 2010, p. 161-170.
6. Lee D. S., Cho D. S., Kim K., et al. A simple iterative independent component analysis algorithm for vibration source signal identification of complex structures. International Journal of Naval Architecture and Ocean Engineering, Vol. 7, Issue 1, 2015, p. 128-141.
7. Li Y. B., Xu M. Q., Wei Y., et al. An improvement EMD method based on the optimized rational Hermite interpolation approach and its application to gear fault diagnosis. Measurement, Vol. 63, 2015, p. 330-345.
8. Popescu T. D. Analysis of traffic-induced vibrations by blind source separation with application in building monitoring. Mathematics and Computers in Simulation, Vol. 80, 2010, p. 2374-2385.
9. Babaie-Zadeh M., Jutten C. A general approach for mutual information minimization and its application to blind source separation. Signal Processing, Vol. 85, 2005, p. 975-995.
10. Jing J. P., Meng G. A novel method for multi-fault diagnosis of rotor system. Mechanism and Machine Theory, Vol. 44, 2009, p. 697-709.
11. Antoni J. Blind separation of vibration components: principles and demonstrations. Mechanical Systems and Signal Processing, Vol. 19, 2005, p. 1166-1180.
12. Rhabi M. E., Fenniri H., Keziou A., et al. A robust algorithm for convolutive blind source separation in presence of noise. Signal Processing, Vol. 93, 2013, p. 818-827.
13. Bousbia-Salah H., Belouchrani A., Abed-Meraim K. Blind separation of convolutive mixtures using joint block diagonalization. International Symposium on Signal Processing and its Applications (ISSPA), Kuala Lumpur, Malaysia, 2001, p. 13-16.
14. Flury B. D., Neuenschwander B. E. Simultaneous diagonalization algorithm with applications in multivariate statistics. International Series of Numerical Mathematics, Vol. 19, 1994, p. 179-205.
15. Pham D. T. Blind separation of cyclostationary sources using joint block approximate diagonalization. Lecture Notes in Computer Science, Vol. 4666, 2007, p. 244-251.
16. Belouchrani A., Amin M. G., Abed-Meraim K. Direction finding in correlated noise fields based on joint block-diagonalization of spatio-temporal correlation matrices. IEEE Signal Processing Letters, 1997, p. 266-268.
17. Belouchrani A., Abed-Meraim K., Hua Y. Jacobi-like algorithms for joint block diagonalization: application to source localization. Proceedings of IEEE International Workshop on Intelligent Signal Processing and Communication Systems, Melbourne, 1998.
18. Abed-Meraim K., Belouchrani A. Algorithms for joint block diagonalization. 12th European Signal Processing Conference, Vienna, Austria, 2004, p. 209-212.
19. Févotte C., Theis F. J. Pivot selection strategies in Jacobi joint block-diagonalization. Lecture Notes in Computer Science, Vol. 4666, 2007, p. 177-184.
20. Févotte C., Theis F. J. Orthonormal approximate joint block-diagonalization. Technical Report GET/Telecom Paris 2007D007, 2007, p. 1-24.
21. Ghennioui H., Fadaili E. M., Thirion-Moreau N., et al. A nonunitary joint block diagonalization algorithm for blind separation of convolutive mixtures of sources. IEEE Signal Processing Letters, Vol. 14, Issue 11, 2007, p. 860-863.
22. Ghennioui H., Thirion-Moreau N., Moreau E., et al. Non-unitary joint block diagonalization of complex matrices using a gradient approach. Lecture Notes in Computer Science, Vol. 4666, 2007, p. 201-208.
23. Ghennioui H., Thirion-Moreau N., Moreau E., et al. Gradient-based joint block diagonalization algorithms: application to blind separation of FIR convolutive mixture. Signal Processing, Vol. 90, 2010, p. 1836-1849.
24. Nion D. A tensor framework for nonunitary joint block diagonalization. IEEE Transactions on Signal Processing, Vol. 59, Issue 10, 2011, p. 4585-4594.
25. Zhang W. T., Lou S. T., Lu H. M. Fast nonunitary joint block diagonalization with degenerate solution elimination for convolutive blind source separation. Digital Signal Processing, Vol. 22, 2012, p. 808-819.
26. Xu X. F., Feng D. Z., Zheng W. X. Convolutive blind source separation based on joint block Toeplitzation and block-inner diagonalization. Signal Processing, Vol. 90, 2010, p. 119-133.
27. Zhang W. T., Lou S. T. A recursive solution to nonunitary joint diagonalization. Signal Processing, Vol. 93, 2013, p. 313-320.
28. Cheng G. H., Li S. M., Moreau E. New Jacobi-like algorithms for non-orthogonal joint diagonalization of Hermitian matrices. Signal Processing, Vol. 128, 2016, p. 440-448.
29. Zeng T. J., Feng Q. Y. Non-orthogonal joint diagonalization algorithm based on hybrid trust region method and its application to blind source separation. Neurocomputing, Vol. 133, 2014, p. 280-294.
30. Afsari B. Simple LU and QR based non-orthogonal matrix joint diagonalization. Lecture Notes in Computer Science, Vol. 3889, 2006, p. 1-7.
31. Gong X. F., Wang K., Lin Q. H., et al. Simultaneous source localization and polarization estimation via non-orthogonal joint diagonalization with vector-sensors. Sensors, Vol. 12, 2012, p. 3394-3417.
About this article
The work described in this paper was supported by the National Natural Science Foundation of China (No. 51609251), "Research on the identification of cross-coupling mechanical vibration sources of warships based on transfer path analysis".