Abstract
Applying the hidden Markov model (HMM) separately to the two channel signals from the same data collection point yields different descriptions of the machine state. Moreover, a single channel signal cannot describe the fault feature information comprehensively, so a wrong fault diagnosis result may be obtained. In theory, two channel signals collected from the same data collection point contain much more fault information than a single channel signal does, but coupling may occur between the two channels. The coupled hidden Markov model (CHMM) is an improvement of the HMM that can fuse the information of two channel signals from the same data collection point efficiently, so a more reliable diagnosis result can be obtained with the CHMM than with the HMM. Accordingly, a fault diagnosis method for rolling element bearings based on wavelet kernel principal component analysis (WKPCA) and the CHMM is proposed. First, WKPCA is used to extract fault feature vectors and to increase the efficiency of the proposed method. Then the CHMM is applied to the extracted fault feature vectors, and a satisfactory fault diagnosis result is obtained. The feasibility and advantages of the proposed method are verified by experiment.
1. Introduction
The HMM has been used widely in fault diagnosis of rotating machinery [1-5]. However, it cannot solve the multi-channel data fusion problem. Many machine condition monitoring techniques based on multi-channel data acquisition systems have been proposed [6]. Current data fusion techniques fall mainly into three categories: data-level fusion, feature-level fusion and decision-level fusion. Vibration and current signals were fused based on Dempster-Shafer (D-S) theory to improve diagnostic accuracy [7]. Vibration parameters such as RMS, peak and peak-to-peak value were used to detect defects in bearings [8]. To obtain a better diagnostic result, a waterfall fusion model was adopted to fuse information from two different kinds of sensors, an accelerometer and a load cell [9]. The CHMM [10] was first proposed as a novel sensory fusion architecture to solve the data fusion problem in audio-visual speech recognition (AVSR). Xie [11] proposed a coupled hidden Markov model approach to video-realistic speech animation and realized realistic facial animations driven by speaker-independent continuous speech. In [12], dependent faults occurring over time were diagnosed successfully by the proposed coupled factorial hidden Markov model method. In [13], the spatial and temporal dynamics in multi-channel electrocorticographic (ECoG) time series were investigated using the CHMM. Although the CHMM has been used widely in the aspects stated above, very few papers have applied it to fault diagnosis of rolling element bearings. The CHMM was used in rolling element bearing fault diagnosis and in performance degradation assessment in [14] and [15] respectively, and satisfactory experimental results were obtained. Therefore, the use of the CHMM in fault diagnosis of rolling element bearings is studied in this paper, with WKPCA as the feature extraction method.
2. Wavelet kernel principal component analysis
Various feature parameters are expected to be extracted so as to reflect the running state of the machinery comprehensively. However, the efficiency of the subsequent intelligent diagnosis decreases greatly when too many feature vectors are used as input vectors. Besides, some of the feature parameters are redundant or useless, which decreases the accuracy of intelligent diagnosis to some extent. Principal component analysis (PCA) and its nonlinear improvement, kernel principal component analysis (KPCA) [16, 17], are the commonly used linear and nonlinear feature dimensionality reduction methods for resolving this contradiction. The schematic diagrams of PCA and KPCA are given in Fig. 1 and Fig. 2. KPCA not only retains the virtues of PCA but can also analyze nonlinear problems that PCA cannot; its other advantages are described in [18]. Although KPCA improves on PCA greatly, the traditional KPCA still has defects: first, the kernel function is selected by experience; second, there is no criterion for selecting the kernel function's parameters. In theory, any function can be fitted by a wavelet function [19], so a novel feature reduction method named WKPCA is proposed in this paper: the wavelet function is used as the kernel function instead of the radial basis function (RBF) commonly used in KPCA, which increases the nonlinear mapping ability of KPCA greatly. The relevant definitions and theory of WKPCA are given below.
Fig. 1. The schematic diagram of PCA
Fig. 2. The schematic diagram of KPCA
Definition 1 [20]: A kernel is a function $K$ that satisfies the following equation for any $x,{x}^{\text{'}}\in {R}^{n}$:

$K(x,{x}^{\text{'}})=\varphi {(x)}^{T}\varphi ({x}^{\text{'}}),$

where the superscript $T$ denotes the transpose and $\varphi (\cdot )$ is a mapping from the data space ${R}^{n}$ to the feature space $F$, which can be written as:

$\varphi :{R}^{n}\to F, x\mapsto \varphi (x).$

The kernel not only allows the inner product to be calculated efficiently but also avoids computing the mapping $\varphi $ explicitly. The kernel function must satisfy the Mercer condition [21].
Theorem 1 [22]: Suppose $K\in {L}_{\infty}({R}^{n}\times {R}^{n})$ is a continuous symmetric function such that the integral operator ${T}_{K}:{L}_{2}\left({R}^{n}\right)\to {L}_{2}\left({R}^{n}\right)$:

is positive, that is, the following relationship holds:

In Eq. (4), the $\otimes $ symbol denotes the convolution operation. If the above conditions are satisfied, $K\left(x,{x}^{\text{'}}\right)$ can be used to represent the dot product in the feature space.
A translation invariant kernel function $K(x,{x}^{\text{'}})=K(x-{x}^{\text{'}})$ satisfies the Mercer condition under Theorem 2.

Theorem 2 [23]: A translation invariant kernel function $K(x,{x}^{\text{'}})=K(x-{x}^{\text{'}})$ is an allowable kernel if its Fourier transform (FT) satisfies the following condition:

$F[K](\omega )=(2\pi {)}^{-n/2}{\int}_{{R}^{n}}\mathrm{exp}(-j\omega x)K(x)dx\ge 0.$
Compared with the kernel functions commonly used in traditional KPCA, such as the RBF, the wavelet function has the peculiar characteristic of multiresolution analysis and can fit any function much more precisely. Therefore, the wavelet function is combined with kernel PCA in place of the commonly used kernel function, and a much stronger nonlinear mapping capability is obtained. This combination is named wavelet kernel principal component analysis, WKPCA for short.
Suppose $\psi \left(x\right)\in {L}_{2}\left(R\right)$ is a mother wavelet function and $x,{x}^{\text{'}}\in {R}^{n}$. A translation invariant wavelet kernel function satisfying the Mercer condition can be constructed as follows [22]:

$K(x,{x}^{\text{'}})=\prod _{i=1}^{n}\psi \left(\frac{{x}_{i}-{x}_{i}^{\text{'}}}{{a}_{i}}\right),$

where ${a}_{i}$ is the scale factor.
When the wavelet kernel function is constructed, not only is the Mercer condition satisfied but the properties of the wavelet function are also taken into account. A kernel function constructed from a wavelet meeting the wavelet frame conditions has an obvious advantage because it accounts for both the sparseness of the training data and the complexity of the constructed kernel function. The Mexican hat wavelet is such a function [24]. The Mexican hat wavelet function, shown in Eq. (7), is used to construct the translation invariant kernel function:

$\psi (x)=(1-{x}^{2})\mathrm{exp}\left(-\frac{{x}^{2}}{2}\right).$

The constructed translation invariant wavelet kernel function is shown in Eq. (8):

$K(x,{x}^{\text{'}})=\prod _{i=1}^{n}\left(1-{\left(\frac{{x}_{i}-{x}_{i}^{\text{'}}}{{a}_{i}}\right)}^{2}\right)\mathrm{exp}\left(-\frac{({x}_{i}-{x}_{i}^{\text{'}}{)}^{2}}{2{a}_{i}^{2}}\right).$
The proof that the Mexican hat wavelet satisfies Theorem 2 is given as follows. Consider the Mexican hat wavelet written with an explicit scale factor $\gamma $, as shown in Eq. (9):

$\psi (x)=\left(1-\frac{{x}^{2}}{{\gamma}^{2}}\right)\mathrm{exp}\left(-\frac{{x}^{2}}{2{\gamma}^{2}}\right),$

where $\gamma $ has the same meaning as ${a}_{i}$ in Eqs. (6) and (8). Taking the Fourier transform, Eq. (10) is obtained:
$F[K](\omega )=(2\pi {)}^{-n/2}{\int}_{{R}^{n}}\mathrm{exp}(-j(\omega \cdot x))\prod _{i=1}^{n}\left[\left(1-{\left(\frac{{x}_{i}}{\gamma}\right)}^{2}\right)\mathrm{exp}\left(-\frac{{\Vert {x}_{i}\Vert}^{2}}{2{\gamma}^{2}}\right)\right]dx$
$=(2\pi {)}^{-\frac{n}{2}}\prod _{i=1}^{n}{\int}_{-\infty}^{\infty}\left(1-{\left(\frac{{x}_{i}}{\gamma}\right)}^{2}\right)\mathrm{exp}\left(-\frac{{\Vert {x}_{i}\Vert}^{2}}{2{\gamma}^{2}}-j{\omega}_{i}{x}_{i}\right)d{x}_{i}$
$=\prod _{i=1}^{n}{\omega}_{i}^{2}{\left|\gamma \right|}^{3}\mathrm{exp}\left(-\frac{{\omega}_{i}^{2}{\gamma}^{2}}{2}\right)\ge 0.$
This completes the proof that the Mexican hat wavelet satisfies Theorem 2 and can therefore be used to construct an allowable kernel function.
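As an illustrative sketch (our own NumPy code, not the authors' implementation; `mexican_hat_kernel` and `wkpca` are hypothetical helper names, and `gamma` plays the role of the scale factor above), the WKPCA procedure amounts to building the wavelet kernel matrix of Eq. (8), centering it in feature space, and projecting onto the leading eigenvectors:

```python
import numpy as np

def mexican_hat_kernel(X, Y, gamma=1.0):
    """Translation-invariant wavelet kernel of Eq. (8):
    K(x, x') = prod_i (1 - (x_i - x'_i)^2 / gamma^2)
                      * exp(-(x_i - x'_i)^2 / (2 gamma^2))."""
    D = X[:, None, :] - Y[None, :, :]            # pairwise coordinate differences
    term = (1.0 - (D / gamma) ** 2) * np.exp(-D ** 2 / (2.0 * gamma ** 2))
    return np.prod(term, axis=2)

def wkpca(X, n_components=3, gamma=1.0):
    K = mexican_hat_kernel(X, X, gamma)
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # center kernel in feature space
    vals, vecs = np.linalg.eigh(Kc)              # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]  # keep the leading components
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                           # projected (reduced) features

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 12))                    # e.g. 20 samples, 12 features
Z = wkpca(X, n_components=3)
print(Z.shape)                                   # (20, 3)
```

The centering step is the usual kernel-PCA centering; only the kernel function differs from RBF-based KPCA.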
3. CHMM
The CHMM consists of multiple HMM chains coupled through cross-time and cross-chain conditional probabilities, as illustrated in Fig. 3 and Fig. 4, and it can be regarded as a special case of a dynamic Bayesian network. As in the HMM, the observation of each chain in the CHMM is determined by the corresponding state in the same chain, and the unobservable state sequence can only be estimated from the observation sequence. Different from the HMM, the state of the CHMM system at a given time slice comprises the state variables of all chains, and the state of each chain is determined by the states of all chains at the previous time slice. Therefore, a more comprehensive bearing fault diagnosis result can be obtained with the CHMM because it has the potential to fuse multi-channel data. The basic theory of a two-chain CHMM is introduced below.
Fig. 3. The schematic diagram of HMM
Fig. 4. The schematic diagram of CHMM
3.1. Elements of CHMM
The chain index is denoted by $c$, $c\in \left\{\mathrm{1,2}\right\}$. The set of hidden states of chain $c$ is ${\mathbf{S}}^{c}=\left\{{S}_{1}^{c},{S}_{2}^{c},\dots ,{S}_{{N}_{c}}^{c}\right\}$. Let ${\mathbf{o}}_{t}=\left\{{o}_{t}^{1},{o}_{t}^{2}\right\}$ denote the observation vector and ${\mathbf{q}}_{t}=\left\{{q}_{t}^{1},{q}_{t}^{2}\right\}$ the hidden state at time $t$. The elements of the CHMM are described by $\lambda =(\mathbf{A},\mathbf{B},\pi )$:
(1) $\mathbf{A}=\left\{{a}_{i,j}\right\}$ is the state transition probability matrix. The system transfers from the state ${\mathbf{S}}_{i}=\{{S}_{{i}_{1}}^{1},{S}_{{i}_{2}}^{2}\}$ to the state ${\mathbf{S}}_{j}=\{{S}_{{j}_{1}}^{1},{S}_{{j}_{2}}^{2}\}$ with probability ${a}_{i,j}$, which can be represented by the following equation:
(2) The observation probability matrix is $\mathbf{B}=\left\{{b}_{j}\left({\mathbf{o}}_{t}\right)\right\}$. Each state ${\mathbf{S}}_{j}=\{{S}_{{j}_{1}}^{1},{S}_{{j}_{2}}^{2}\}$ generates the output ${\mathbf{o}}_{t}$ with a probability distribution function given by the following equation:
(3) The initial state distribution is $\pi =\left\{{\pi}_{i}\right\}$, where ${\pi}_{i}$ is the probability that the system's initial state is ${\mathbf{S}}_{i}=\{{S}_{{i}_{1}}^{1},{S}_{{i}_{2}}^{2}\}$:
For continuous observations, the probability distribution can be modeled by a Gaussian mixture model (GMM) as follows:
where ${M}_{j}^{c}$ is the number of Gaussian mixtures of chain $c$ in state ${S}_{j}^{c}$, ${w}_{j,m}^{c}$ is the weight for each Gaussian mixture, and $N({\mathbf{o}}_{t}^{c},{\mu}_{j,m}^{c},{\sum}_{j,m}^{c})$ is a Gaussian density with mean vector ${\mu}_{j,m}^{c}$ and covariance matrix ${\sum}_{j,m}^{c}$.
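A minimal sketch of evaluating such a GMM observation density (plain NumPy with illustrative parameter values; `gaussian_pdf` and `gmm_density` are our own helper names, assuming full covariance matrices):

```python
import numpy as np

def gaussian_pdf(o, mu, cov):
    """Multivariate Gaussian density N(o; mu, cov)."""
    d = o.shape[0]
    diff = o - mu
    inv = np.linalg.inv(cov)
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * diff @ inv @ diff) / norm

def gmm_density(o, weights, mus, covs):
    """b_j(o) = sum_m w_m N(o; mu_m, Sigma_m)."""
    return sum(w * gaussian_pdf(o, mu, cov)
               for w, mu, cov in zip(weights, mus, covs))

# Toy two-component mixture on a 2-D observation vector
o = np.array([0.1, -0.2])
weights = [0.6, 0.4]
mus = [np.zeros(2), np.ones(2)]
covs = [np.eye(2), 0.5 * np.eye(2)]
p = gmm_density(o, weights, mus, covs)
print(p > 0.0)
```

In practice the weights, means and covariances are estimated during CHMM training rather than set by hand.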
3.2. Basic problems of the CHMM

Three basic problems arise when the CHMM is used in practice:
(1) Evaluation. Given a CHMM $\lambda $, how is the probability of the observation sequence $\mathbf{O}=\left\{{\mathbf{o}}_{1},{\mathbf{o}}_{2},\dots ,{\mathbf{o}}_{T}\right\}$ computed, i.e., $P(\mathbf{O}|\lambda )$?

(2) Decoding. Given the observation sequence $\mathbf{O}=\left\{{\mathbf{o}}_{1},{\mathbf{o}}_{2},\dots ,{\mathbf{o}}_{T}\right\}$ and a CHMM $\lambda $, how do we select a hidden state sequence $\mathbf{S}=\left\{{\mathbf{S}}_{1},{\mathbf{S}}_{2},\dots ,{\mathbf{S}}_{T}\right\}$ that best explains the observations, i.e., ${\mathrm{max}}_{\mathbf{S}}P(\mathbf{S}|\mathbf{O},\lambda )$?

(3) Learning. Given the observation sequence $\mathbf{O}=\left\{{\mathbf{o}}_{1},{\mathbf{o}}_{2},\dots ,{\mathbf{o}}_{T}\right\}$, how do we adjust the model parameters $\lambda $ to maximize the probability $P(\mathbf{O}|\lambda )$?
Algorithms such as the Viterbi algorithm, the forward-backward procedure and the Baum-Welch method have been proposed to solve these problems; reference [21] gives more details.
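The evaluation problem can be illustrated with the classical scaled forward procedure. For a two-chain CHMM, the joint states of the two chains can be treated as compound states of a single HMM, so the sketch below (our simplification, not the CHMM-specific algorithm) computes $\log P(\mathbf{O}|\lambda )$ given per-frame observation likelihoods:

```python
import numpy as np

def forward_loglik(pi, A, B):
    """Scaled forward procedure: log P(O | lambda).
    pi: (N,) initial distribution over (compound) states;
    A:  (N, N) state transition matrix;
    B:  (T, N) observation likelihoods b_j(o_t), evaluated per frame."""
    alpha = pi * B[0]
    log_p = 0.0
    for t in range(1, B.shape[0]):
        c = alpha.sum()                    # scaling avoids numerical underflow
        log_p += np.log(c)
        alpha = (alpha / c) @ A * B[t]     # propagate and absorb observation
    return log_p + np.log(alpha.sum())

# Toy example: N = 2 compound states, T = 3 frames
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.7, 0.1], [0.6, 0.2], [0.1, 0.8]])
print(forward_loglik(pi, A, B))
```

In a real two-chain CHMM the compound state space has ${N}_{1}\times {N}_{2}$ states and the transition probabilities factor across chains; the forward recursion itself is unchanged.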
4. Experiment
The flow chart of the proposed method based on WKPCA-CHMM is shown in Fig. 5, and the specific details of each step are given as follows:
Step 1: Data collection: collect the signals of the four states (normal state, outer race fault state, rolling element fault state and inner race fault state) of the rolling element bearing using two accelerometer sensors per data collection point.
Table 1. Time-domain statistical indexes

1  Peak  $x=\mathrm{max}\{|x(1)|,|x(2)|,\cdots ,|x(N)|\}$
2  Peak-to-peak value  ${x}_{pp}=x(n{)}_{max}-x(n{)}_{min}$
3  Mean amplitude  ${\overline{x}}_{p}=\frac{1}{N}\sum _{n=1}^{N}|x(n)|$
4  Root amplitude  ${x}_{r}={\left(\frac{1}{N}\sum _{n=1}^{N}\sqrt{|x(n)|}\right)}^{2}$
5  Root mean square  ${x}_{RMS}=\sqrt{\frac{1}{N}\sum _{n=1}^{N}{x}^{2}\left(n\right)}$
6  Waveform index  ${S}_{f}=\frac{{x}_{RMS}}{{\overline{x}}_{p}}$
7  Pulse index  ${I}_{f}=\frac{x}{{\overline{x}}_{p}}$
8  Peak index  ${C}_{f}=\frac{x}{{x}_{RMS}}$
9  Margin index  $C{L}_{f}=\frac{x}{{x}_{r}}$
10  Skewness  ${S}_{k}=\frac{1}{N}\sum _{n=1}^{N}{\left(\frac{x\left(n\right)-\overline{x}}{\sigma}\right)}^{3}$
11  Kurtosis  ${K}_{u}=\frac{1}{N}\sum _{n=1}^{N}{\left(\frac{x\left(n\right)-\overline{x}}{\sigma}\right)}^{4}$
Remark: $x\left(n\right)$ is the time-domain discrete signal, $\overline{x}=\frac{1}{N}\sum _{n=1}^{N}x\left(n\right)$, $\sigma =\sqrt{\frac{1}{N-1}\sum _{n=1}^{N}(x(n)-\overline{x}{)}^{2}}$
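The Table 1 formulas translate directly into code; the sketch below (our NumPy illustration with a hypothetical function name) computes several of the indexes for a signal segment:

```python
import numpy as np

def time_domain_indexes(x):
    """Compute a subset of the Table 1 time-domain statistical indexes."""
    x = np.asarray(x, dtype=float)
    peak = np.max(np.abs(x))                       # index 1
    rms = np.sqrt(np.mean(x ** 2))                 # index 5
    mean_amp = np.mean(np.abs(x))                  # index 3
    xbar = np.mean(x)
    sigma = np.std(x, ddof=1)                      # sample std (N - 1)
    return {
        "peak": peak,
        "pp": np.max(x) - np.min(x),               # index 2
        "rms": rms,
        "waveform_index": rms / mean_amp,          # S_f, index 6
        "pulse_index": peak / mean_amp,            # I_f, index 7
        "peak_index": peak / rms,                  # C_f, index 8
        "skewness": np.mean(((x - xbar) / sigma) ** 3),   # index 10
        "kurtosis": np.mean(((x - xbar) / sigma) ** 4),   # index 11
    }

# 1024-point segment, matching the group length used in Step 2
t = np.linspace(0, 1, 1024, endpoint=False)
idx = time_domain_indexes(np.sin(2 * np.pi * 12 * t))
print(round(idx["peak_index"], 3))   # crest factor of a pure sine: sqrt(2) ~ 1.414
```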
Step 2: Data separation and feature extraction: separate the data of each of the eight channel signals into 50 groups (groups 1-40 are used as CHMM training data and groups 41-50 as testing data), giving 400 groups in all, with 1024 points in each group. Apply the eleven time-domain statistical indexes (the indexes and their calculation formulas are shown in Table 1) and one time-frequency domain index (the wavelet packet energy entropy (WE), described below) to each group.

Note: the 11 indexes are the traditional, commonly used time-domain statistical features, and they reflect the running state correctly when the fault signal is linear. However, signals usually take on nonlinear characteristics when a fault occurs in machinery, so a time-frequency index is also needed to capture the characteristics of the fault signal. In this paper, the wavelet packet energy entropy (WE) is used as the time-frequency index, as discussed in the subsequent sections.
Step 3: Dimensionality reduction: apply WKPCA to the feature vectors obtained in step 2 in order to obtain dimensionality reduction feature vectors.
Step 4: CHMM models training: use the dimensionality reduction training feature vectors to train four CHMM models (normal state CHMM, inner race fault state CHMM, outer race fault state CHMM and rolling element fault state CHMM).
Step 5: Diagnosis: input the dimensionality-reduced testing feature vectors into the four state CHMMs trained in Step 4, and the fault diagnosis results are obtained.
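The decision logic of Step 5 can be sketched as a maximum-log-likelihood rule over the four trained models. The scoring functions below are toy stand-ins (our own, hypothetical) for trained CHMM likelihood evaluators $\log P(\mathbf{O}|\lambda )$:

```python
import numpy as np

def diagnose(o, models):
    """Assign a test sample to the state whose model scores it highest.
    models: dict mapping state name -> scoring function log P(O | lambda)."""
    scores = {name: score(o) for name, score in models.items()}
    return max(scores, key=scores.get)

# Toy stand-ins: each "model" prefers observations near its own mean level
models = {
    "normal":        lambda o: -np.sum((o - 0.0) ** 2),
    "outer race":    lambda o: -np.sum((o - 1.0) ** 2),
    "inner race":    lambda o: -np.sum((o - 2.0) ** 2),
    "rolling elem":  lambda o: -np.sum((o - 3.0) ** 2),
}
print(diagnose(np.array([0.9, 1.1]), models))   # "outer race"
```

With real trained models, each scoring function would be the forward-procedure log-likelihood of the corresponding state CHMM.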
Fig. 5. The framework of the proposed method
Fig. 6. The test rig
The test rig is shown in Fig. 6. The two ends of the shaft are supported by two rolling element bearings, and the right end is detachable, which is convenient for replacing the test bearings. The shaft is driven by an AC motor through a coupling; the rated power of the motor is 1.1 kW. The test rig is equipped with a hydraulic positioning and clamping device used to fix the outer race of the test bearing. Very tiny point corrosions are eroded into the inner race, rolling element and outer race of the test bearings respectively using electrical discharge machining (EDM) to simulate the three kinds of bearing faults. The type of the test rolling bearing is GB203. The outer race is fixed on the bench and the inner race rotates synchronously with the shaft during the test. The rotation frequency of the shaft is ${f}_{r}=$ 12 Hz. The parameters and the rotation frequency of the test bearings are shown in Table 2.
Table 2. Rolling bearing's parameters and the rotating frequency

Type  Pitch diameter $D$ (mm)  Ball diameter $d$ (mm)  Ball number $Z$  Contact angle $\alpha $ (deg)
GB203  28.5  6.747  7  0
Feature frequency  Calculation formula  Calculated result (Hz)
Shaft frequency  ${f}_{r}=\frac{n}{60}$  12
Remark: $n$ represents the shaft rotation speed (r/min)
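Given the Table 2 geometry, the characteristic defect frequencies of the bearing follow from the standard bearing-kinematics formulas (these formulas are not given in the paper and are added here for illustration):

```python
import math

def defect_frequencies(D, d, Z, alpha_deg, f_r):
    """Standard bearing defect frequencies.
    D: pitch diameter (mm), d: ball diameter (mm), Z: ball number,
    alpha_deg: contact angle (degrees), f_r: shaft frequency (Hz)."""
    r = (d / D) * math.cos(math.radians(alpha_deg))
    return {
        "BPFO": Z / 2 * f_r * (1 - r),             # outer race defect frequency
        "BPFI": Z / 2 * f_r * (1 + r),             # inner race defect frequency
        "BSF":  D / (2 * d) * f_r * (1 - r ** 2),  # rolling element (ball spin)
    }

# Values taken from Table 2: GB203 bearing at f_r = 12 Hz
f = defect_frequencies(D=28.5, d=6.747, Z=7, alpha_deg=0, f_r=12.0)
print({k: round(v, 1) for k, v in f.items()})
```

These frequencies are what one would look for in the envelope spectrum of each fault state; note that BPFO + BPFI = Z·f_r by construction.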
In the traditional vibration data collection method, only one sensor is installed. In this paper, two accelerometers are installed on the same bearing housing synchronously; the two installation directions are shown in Fig. 7.
Fig. 7. The installation directions of the two sensors
The four states of the test rolling bearings are run respectively, and the corresponding time-domain waveforms of the two channel signals from the same data collection point are shown in Fig. 8. The sampling frequency is ${f}_{s}=$ 25.6 kHz.
Whatever the condition of the rolling bearing (normal or faulty), its vibration signal usually takes on non-Gaussian and nonlinear characteristics. Time-frequency analysis is a very effective tool for handling nonlinear and non-Gaussian signals and for extracting the nonlinear features buried in the original signal. In this paper, the wavelet packet energy entropy is used as the time-frequency indicator; its calculation process is as follows:
Apply the wavelet packet transform (WPT) to the original signal to obtain the energies ${E}_{i}$ ($i=1,2,\dots ,{2}^{N}$, where $N$ is the decomposition level), named the wavelet packet energies, on each node of level $N$; these constitute a division of the original signal in the time-frequency domain. In theory, better frequency-domain resolution is obtained with a bigger value of $N$, but the amount of calculation also increases. Therefore, $N=$ 3 is selected here as a compromise, and the experimental results below verify that satisfactory frequency-domain performance is obtained. The wavelet packet energy entropy (WE) is defined in Eq. (15):

$WE=-\sum _{i=1}^{{2}^{N}}{p}_{i}\mathrm{ln}{p}_{i}.$
In Eq. (15), ${p}_{i}={E}_{i}/\sum _{i=1}^{{2}^{N}}{E}_{i}$ is the normalized wavelet packet energy, i.e., the probability distribution of the signal energy over the nodes. The normalized wavelet packet energy and wavelet packet energy entropy results of the signals shown in Fig. 8 are summarized in Fig. 9.
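The WE calculation can be sketched with a self-contained Haar filter bank (the paper does not specify its wavelet basis; Haar is assumed here purely to keep the example dependency-free):

```python
import numpy as np

def haar_split(x):
    """One Haar wavelet packet split into low-pass and high-pass halves."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (high-pass)
    return a, d

def wavelet_packet_energies(x, level=3):
    """Energies E_i on the 2**level nodes of the wavelet packet tree."""
    nodes = [np.asarray(x, dtype=float)]
    for _ in range(level):
        nodes = [half for node in nodes for half in haar_split(node)]
    return np.array([np.sum(n ** 2) for n in nodes])

def energy_entropy(x, level=3):
    """WE = -sum_i p_i ln p_i with p_i the normalized node energies (Eq. (15))."""
    E = wavelet_packet_energies(x, level)
    p = E / E.sum()
    p = p[p > 0]                            # ignore empty bands (0 ln 0 := 0)
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(1)
print(round(energy_entropy(rng.normal(size=1024)), 3))
```

Because the Haar split is orthogonal, the node energies sum exactly to the signal energy, and the entropy is bounded by $\mathrm{ln}{2}^{N}$ (reached for a uniform energy distribution).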
Fig. 8. The time-domain waveforms of the two channel signals from the same data collection point of the four states
a) The timedomain waveform of channel 1 of normal state
b) The timedomain waveform of channel 2 of normal state
c) The timedomain waveform of channel 1 of outer race fault state
d) The timedomain waveform of channel 2 of outer race fault state
e) The timedomain waveform of channel 1 of rolling element fault state
f) The timedomain waveform of channel 2 of rolling element fault state
g) The timedomain waveform of channel 1 of inner race fault state
h) The timedomain waveform of channel 2 of inner race fault state
Thus, the dimensionality of the training feature vectors of each channel is 4×40×12. Apply WKPCA to the 4×40×12 feature vectors of channels 1 and 2 respectively; the analysis results are shown in Fig. 10 and Fig. 11. Fig. 12 gives the curves of the classification result versus the number of kernel principal components (PCs); evidently the correct classification ratio is almost unchanged when the number of PCs varies from 3 to 12. Although in theory the correct ratio reaches its maximum when 12 PCs are selected, as shown in Fig. 12, the classification speed would then decrease too much. Therefore, 3 PCs are selected as a compromise: the classification speed is ensured while the classification accuracy is guaranteed. After dimensionality reduction, the feature vectors of each channel are reduced to 4×40×3.
Fig. 9. Normalized wavelet packet energy and wavelet packet energy entropy
a) The normalized energy of channel 1
b) The normalized energy of channel 2
c) The energy entropy of channel 1
d) The energy entropy of channel 2
Fig. 10. The three principal components of the twelve features of channel 1 of the four states analyzed by the WKPCA method
From Fig. 10 it can be seen that almost all sample vectors of the four states (* denotes the normal state, + the outer race fault state, blue o the rolling element fault state and yellow o the inner race fault state) are classified correctly. Although a small number of sample vectors of the outer race fault and inner race fault states are misclassified in Fig. 11, most of the other sample vectors of the four states are classified correctly. To verify the advantage of WKPCA over KPCA, the analysis results of the signals shown in Fig. 8 using KPCA are given in Fig. 13 and Fig. 14. Comparing Fig. 10 and Fig. 11 with Fig. 13 and Fig. 14, the advantage of WKPCA over KPCA is obvious: many more sample vectors of the four states are misclassified in Fig. 13 and Fig. 14. Besides, the clustering in Fig. 10 and Fig. 11 is better, with a bigger between-class distance and a smaller intra-class distance, than that obtained in Fig. 13 and Fig. 14.
Fig. 11. The three principal components of the twelve features of channel 2 of the four states analyzed by the WKPCA method
Fig. 12. The curves of classification result versus the number of kernel principal components
Fig. 13. The three principal components of the twelve features of channel 1 of the four states analyzed by the KPCA method
The 4×40×3×2 feature vectors are used as training feature vectors, and the normal state CHMM, outer race fault state CHMM, rolling element fault state CHMM and inner race fault state CHMM diagnosis models are trained respectively. Then the 4×10×3×2 testing feature vectors are input into the four trained diagnosis models, and the diagnosis results are shown in Fig. 15(c). From Fig. 15(c), the ten groups of testing feature vectors of each of the four states are all classified correctly.
Fig. 14. The three principal components of the twelve features of channel 2 of the four states analyzed by the KPCA method
To verify the advantage of the CHMM over the HMM, diagnosis based on WKPCA-HMM is also carried out for the two channel signals of the four states. The training and testing process is the same as for the CHMM models above: first, the 4×40×3 feature vectors of each channel are used as training feature vectors, and the normal state HMM, outer race fault state HMM, rolling element fault state HMM and inner race fault state HMM diagnosis models of the two channel signals are trained respectively; then the 4×10×3 testing feature vectors are input into the eight trained diagnosis models, and the diagnosis results are shown in Fig. 15(a) and Fig. 15(b) respectively. Comparing Fig. 15(a) and Fig. 15(b) with Fig. 15(c), the advantages of the proposed method are obvious: in Fig. 15(a), three groups of rolling element fault state testing feature vectors are misclassified as the normal state; in Fig. 15(b), one group of rolling element fault state testing feature vectors is misclassified as the normal state, and three groups of outer race fault state testing feature vectors are misclassified as the inner race fault state. These results verify the advantage of the CHMM over the HMM in fault diagnosis of rolling element bearings: the CHMM can fuse the information of the two channel signals from the same data collection point efficiently and can also handle the possible coupling between the two channel signals, so a much more reliable diagnosis result is obtained than with the HMM. Besides, because WKPCA is used for feature dimensionality reduction, the contradiction between dimension redundancy and dimension insufficiency is resolved, and the diagnosis efficiency and the correct classification ratio are increased.
In addition, the computation time and correct classification ratio of three related methods (WKPCA-HMM, RKPCA-CHMM and CHMM without dimensionality reduction) and the proposed method (WKPCA-CHMM) are shown in Table 3, which further verifies the advantages of the proposed method.
Table 3. The computation time and correct classification ratio of the related methods and the proposed method

Method  Computation time (s)  Correct classification ratio
WKPCA-HMM  34.5  70 %
RKPCA-CHMM  40.4  65 %
CHMM  65.3  60 %
WKPCA-CHMM  38.6  100 %
Fig. 15. The diagnosis results based on WKPCA-HMM and WKPCA-CHMM
a) The diagnosis results of the channel 1 signals of the four states based on WKPCA-HMM
b) The diagnosis results of the channel 2 signals of the four states based on WKPCA-HMM
c) The diagnosis results of the two channel signals of the four states based on WKPCA-CHMM
5. Conclusions
This paper presents an integrated WKPCA-CHMM method to realize intelligent fault diagnosis of rolling element bearings. The advantage of the CHMM over the HMM is as follows: the CHMM can fuse the information of the two channel signals from the same data collection point efficiently and can also handle the possible coupling between the two channel signals. WKPCA is used as the feature dimensionality reduction method because it not only resolves the contradiction between dimension redundancy and dimension insufficiency but is also much more flexible than RKPCA. The feasibility and validity of the proposed method are verified through experiment, and its advantages over the other related methods are also verified and presented.
References

[1] Jiang R., Yu J., Makis V. Optimal Bayesian estimation and control scheme for gear shaft fault detection. Computers and Industrial Engineering, Vol. 63, Issue 4, 2012, p. 754-762.
[2] Boutros T., Liang M. Detection and diagnosis of bearing and cutting tool faults using hidden Markov models. Mechanical Systems and Signal Processing, Vol. 25, Issue 6, 2011, p. 2102-2124.
[3] Geramifard O., Xu J. X., Panda S. K. Fault detection and diagnosis in synchronous motors using hidden Markov model-based semi-nonparametric approach. Engineering Applications of Artificial Intelligence, Vol. 26, Issue 8, 2013, p. 1919-1929.
[4] Georgoulas G., Mustafa M. O., Tsoumas I. P. Principal component analysis of the start-up transient and hidden Markov modeling for broken rotor bar fault diagnosis in asynchronous machines. Expert Systems with Applications, Vol. 40, Issue 17, 2013, p. 7024-7033.
[5] Purushotham V., Narayanan S., Prasad S. A. N. Multi-fault diagnosis of rolling bearing elements using wavelet analysis and hidden Markov model based fault recognition. NDT&E International, Vol. 38, Issue 8, 2005, p. 654-664.
[6] Jardine A. K. S., Lin D., Banjevic D. A review on machinery diagnostics and prognostics implementing condition-based maintenance. Mechanical Systems and Signal Processing, Vol. 20, Issue 7, 2006, p. 1483-1510.
[7] Yang B. S., Kim K. J. Application of Dempster-Shafer theory in fault diagnosis of induction motors using vibration and current signals. Mechanical Systems and Signal Processing, Vol. 20, Issue 2, 2006, p. 403-420.
[8] Kulkarni S., Bewoor A. Vibration based condition assessment of ball bearing with distributed defect. Journal of Measurements in Engineering, Vol. 4, Issue 8, 2016, p. 87-94.
[9] Safizadeh M. S., Latifi S. K. Using multi-sensor data fusion for vibration fault diagnosis of rolling element bearings by accelerometer and load cell. Information Fusion, Vol. 18, Issue 4, 2014, p. 1-8.
[10] Brand M., Oliver N., Pentland A. Coupled hidden Markov models for complex action recognition. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, USA, 1997, p. 994-999.
[11] Xie L., Liu Z. Q. A coupled HMM approach to video-realistic speech animation. Pattern Recognition, Vol. 40, Issue 8, 2007, p. 2325-2340.
[12] Kodali A., Pattipati K. Coupled factorial hidden Markov models (CFHMM) for diagnosing multiple and coupled faults. IEEE Transactions on Systems, Man, and Cybernetics: Systems, Vol. 43, Issue 3, 2013, p. 522-534.
[13] Zhao R., Schalk G., Ji Q. Coupled hidden Markov model for electrocorticographic signal classification. 22nd International Conference on Pattern Recognition, 2014, p. 1858-1862.
[14] Xiao W. B., Chen J., Dong G. M. A multichannel fusion approach based on coupled hidden Markov models for rolling element bearing fault diagnosis. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, Vol. 226, Issue 1, 2012, p. 202-216.
[15] Liu T., Chen J., Zhou X. N., Xiao W. B. Bearing performance degradation assessment using linear discriminant analysis and coupled HMM. 25th International Congress on Condition Monitoring and Diagnostic Engineering, Journal of Physics: Conference Series, Vol. 364, 2012.
[16] Bellino A., Fasana A., Garibaldi L. PCA-based detection of damage in time-varying systems. Mechanical Systems and Signal Processing, Vol. 24, Issue 4, 2010, p. 2250-2260.
[17] Xiao Y. Q., Feng L. G. A novel linear ridgelet network approach for analog fault diagnosis using wavelet-based fractal analysis and kernel PCA as preprocessors. Measurement, Vol. 45, Issue 3, 2010, p. 297-310.
[18] Cao M. S., Ding Y. J., Ren W. X., Wang Q., Ragulskis M. Hierarchical wavelet-aided neural intelligent identification of structural damage in noisy conditions. Applied Sciences, Vol. 7, Issue 391, 2017, https://doi.org/10.3390/app7040391.
[19] Ganey J. L., Block W. M., Jenness J. S. Mexican spotted owl home range and habitat use in pine-oak forest: implications for forest management. Forest Science, Vol. 45, Issue 1, 1999, p. 127-135.
[20] Taylor J. S., Cristianini N. Kernel Methods for Pattern Analysis. Cambridge University Press, Cambridge, 2004.
[21] Nefian A. V., Liang L. H., Pi X. P. Dynamic Bayesian networks for audio-visual speech recognition. EURASIP Journal on Applied Signal Processing, 2002, https://doi.org/10.1155/S1110865702206083.
[22] Zhang L., Zhou W. D., Jiao L. C. Wavelet support vector machine. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 34, Issue 1, 2004, p. 34-39.
[23] Smola A. J., Scholkopf B., Muller K. R. The connection between regularization operators and support vector kernels. Neural Networks, Vol. 11, Issue 4, 1998, p. 637-649.
[24] Wen X. J., Xu X. M., Cai Y. Z. Least-squares wavelet kernel method for regression estimation. International Conference on Natural Computation, Changsha, 2005, p. 582-591.
Acknowledgements
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: the research is supported by the National Natural Science Foundation of China (Grant Nos. 51405453 and 51205371).