Abstract
To realize automatic and highly accurate recognition of the pedestal looseness extent of rotating machinery, a novel recognition method based on vibration-sensitive time-frequency features and manifold learning dimension reduction is proposed. Firstly, the pedestal looseness extent is characterized by the vibration signal of the rotating machinery and its spectrum, and time-frequency features are extracted from the vibration signal to construct the original looseness extent feature set. Secondly, a looseness sensitivity index algorithm is designed to filter the non-sensitive and poorly sensitive features out of the original feature set, avoiding their interference. The sensitive features are selected to construct the looseness extent sensitive feature set, which has a stronger characterization capability than the original feature set. Moreover, an effective manifold learning method called linear local tangent space alignment (LLTSA) is introduced to compress the sensitive feature set into a low-dimensional sensitive feature set. Finally, the low-dimensional sensitive feature set is input into a weighted K nearest neighbor classifier (WKNNC), whose recognition accuracy is more stable than that of a K nearest neighbor classifier (KNNC), to recognize the different pedestal looseness extents; pedestal looseness extent recognition of rotating machinery is thereby realized. The feasibility and validity of the present method are verified by its successful application to a rotating machine.
1. Introduction
Pedestal looseness often occurs in rotating machinery such as wind turbines, tunnel fans, electric machines and rolling mills, due to poor installation quality or long-term vibration. Pedestal looseness adversely affects the normal operation of rotating machinery and may even lead to serious damage accidents, resulting in significant economic losses. For example, the tunnel suspension jet fan is one of the important parts of tunnel ventilation systems [1, 2]; its adverse operating circumstances easily cause problems, such as imbalance and poor lubrication, which lead to violent vibration. This vibration easily loosens the fan pedestal, and the pedestal looseness makes the fan oscillation more severe, which in turn makes the looseness worse, forming a vicious circle. Pedestal looseness is bound to affect the normal operation of the fan and may even cause it to fall, threatening traffic safety. Therefore, effective pedestal looseness extent recognition is an important task to ensure the normal operation of rotating machinery and to avoid accidents.
Up to now, the pedestal looseness of rotating machinery has been studied by many researchers. Qin studied the influence of bolt loosening on rotor dynamics using nonlinear FE simulations, and the results showed that the rotor dynamic characteristics are changed [3]. Zhang proposed a continuous wavelet grey moment approach to extract fault features of pedestal looseness [4]. Lee obtained better rub and looseness diagnosis results using EMD than with STFT and wavelet analysis [5]. Wu chose information-rich IMFs to construct marginal Hilbert spectra and then defined a fault index to identify looseness faults in a rotor system [6]. Wu et al. combined EEMD and an autoregressive model to identify looseness faults of rotor systems [7]. Nembhard proposed a multi-speed multi-foundation technique to improve the clustering and isolation of the conditions tested for bow, looseness and seal rub [8]. However, current pedestal looseness diagnosis methods are not completely suitable for looseness extent recognition, for three reasons. 1) They usually use a single or single-domain signal processing method for looseness feature extraction, as in [4, 7], making it very difficult to dig out the weak, nonlinear and strongly coupled looseness extent features of rotating machinery. 2) They generally require prior knowledge or experience to select looseness extent features, and the diagnostic process needs human participation; for example, in [5] a person must determine the type of failure from exact time and frequency information. 3) They generally only determine whether the pedestal is loose, as in [6, 7], or distinguish looseness faults from other faults, as in [8], and do not recognize the looseness extent. Therefore, it is hard to achieve high precision and high efficiency in pedestal looseness extent recognition with these methods.
Typical automatic fault recognition methods for rotating machinery have been used extensively. Such a method generally involves three steps. Firstly, fault features are extracted to construct the fault feature set [9, 10]; the fault signal is analyzed by classic signal processing methods, such as the short-time Fourier transform (STFT), Wigner-Ville distribution (WVD), wavelet decomposition and empirical mode decomposition (EMD). Secondly, the main eigenvectors, with low dimension and ease of identification, are extracted from the high-dimensional fault feature set by applying an appropriate dimensionality reduction method, such as principal component analysis (PCA) [11], locality preserving projection (LPP) [12] or linear discriminant analysis (LDA) [13]. Thirdly, the low-dimensional feature set is input into a learning machine for pattern recognition, for example a K nearest neighbor classifier (KNNC) [14, 15], artificial neural network (ANN) [16, 17] or support vector machine (SVM) [18, 19]. Applying this fault recognition approach to the looseness extent of a fan foundation faces the following problems. 1) Looseness feature extraction, and interference from non-sensitive or poorly sensitive features. Effective extraction of the looseness features is directly related to the construction of the looseness feature set, quantitative characterization and automatic identification of looseness. In order to fully reflect the looseness of the fan foundation, multiple features must be extracted to construct the looseness feature set. That set includes non-sensitive and poorly sensitive features, which are bound to weaken its characterization capability and to interfere with the recognition results. 2) Over-high dimension and nonlinearity of the feature set. Since the features are extracted from the vibration signal, the looseness feature set is generally nonlinear.
An over-high dimension of the feature set not only increases computation time but also reduces the recognition accuracy, so it is necessary to obtain the main eigenvectors, with low dimension and ease of identification, from the high-dimensional looseness feature set by applying a dimensionality reduction method. Traditional methods such as PCA, LPP and LDA can effectively reduce a linear high-dimensional feature set; however, they have limited effect on the nonlinear feature set for looseness of a fan foundation.
To handle these problems and realize automatic, highly accurate pedestal looseness extent recognition for rotating machinery, a recognition method based on vibration-sensitive time-frequency features and manifold learning is proposed. Firstly, the vibration signal is collected, and the pedestal looseness extent is characterized by the characteristics of the vibration signal. Then, 15 time-domain features and 14 frequency-domain features are extracted to construct the original looseness extent time-frequency feature set, which achieves quantitative characterization of pedestal looseness. Secondly, a looseness sensitivity index algorithm is designed based on scatter matrices to select the looseness extent sensitive features, avoiding the interference of non-sensitive or poorly sensitive features, and the looseness extent sensitive feature set is constructed from the selected features. Thirdly, the high-dimensional sensitive feature set is compressed into a low-dimensional sensitive feature set with linear local tangent space alignment (LLTSA) [20], a novel manifold learning algorithm with superior clustering performance and an advantage in nonlinear reduction compared with other algorithms; it achieves effective differentiation of looseness patterns while automatically compressing the time-frequency looseness extent feature set. Finally, the low-dimensional sensitive feature set is input into a weighted K nearest neighbor classifier (WKNNC) [21, 22] for looseness recognition; the recognition results of the WKNNC are insensitive to the neighborhood size k, and its robustness is good. Thus, the pedestal looseness extent recognition method for rotating machinery is realized.
Overall, a pedestal looseness extent recognition method comprising looseness extent time-frequency feature extraction, sensitive feature selection, dimensionality reduction and pattern recognition is proposed for rotating machinery. The feasibility and validity of the present method are verified by experimental results.
2. The extraction method of pedestal looseness extent features and construction method of the pedestal looseness sensitive feature set based on vibration signals
2.1. The characterization method of pedestal looseness extent based on vibration
The structural dynamic characteristics of rotating machinery are bound to change when the pedestal is loose, as in [3], and this change is reflected as a change in the vibration characteristics. So the pedestal looseness extent of rotating machinery can be characterized by vibration characteristics.
Fig. 1. The waveform and spectrum of 5 kinds of looseness extents: a) all bolts tightened; b) 1 bolt loose; c) 2 bolts loose; d) 3 bolts loose; e) 4 bolts loose
Vibration signals are collected for different pedestal looseness extents: T1 (all bolts tightened), T2 (1 bolt loose), T3 (2 bolts loose), T4 (3 bolts loose) and T5 (4 bolts loose). As the number of loose bolts increases, the pedestal looseness extent strengthens. Fig. 1 shows the collected vibration signals and their spectra for the five looseness extents.
Fig. 1 shows differences among the vibration signals and their spectra, for example in the amount of spectrum energy, the position of the main frequency band and the degree of decentralization or centralization of the spectrum. For the five extents, the dominant frequency is 175 Hz, 156.3 Hz, 943.8 Hz, 425 Hz and 175 Hz respectively; the dominant frequency amplitude is 0.677 m/s², 0.802 m/s², 0.759 m/s², 0.793 m/s² and 0.872 m/s² respectively; and the amplitude at 287.5 Hz is 0.247 m/s², 0.369 m/s², 0.596 m/s², 0.401 m/s² and 0.203 m/s² respectively.
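As a concrete illustration of how such a spectrum and its dominant frequency can be read off, the following sketch (Python/NumPy; not part of the original method, and the sampling rate and test signal are assumed for illustration) computes a single-sided amplitude spectrum:

```python
import numpy as np

def amplitude_spectrum(x, fs):
    """Single-sided amplitude spectrum of a vibration record x sampled at fs Hz."""
    N = len(x)
    spec = np.abs(np.fft.rfft(x)) * 2.0 / N      # scale bins to amplitude units (e.g. m/s^2)
    freqs = np.fft.rfftfreq(N, d=1.0 / fs)       # frequency axis in Hz
    return freqs, spec

def dominant_frequency(x, fs):
    """Frequency bin with the largest amplitude, ignoring the DC component."""
    freqs, spec = amplitude_spectrum(x, fs)
    k = 1 + np.argmax(spec[1:])
    return freqs[k], spec[k]
```

For a synthetic 175 Hz sine of amplitude 0.7 sampled at 3200 Hz for one second, `dominant_frequency` returns 175 Hz with amplitude close to 0.7.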
Fig. 2. The probability density function for the 5 kinds of looseness extent
To further observe the characteristics of the different looseness extents, the probability density function is calculated, as shown in Fig. 2. Its waveform differs with the pedestal looseness extent, for example in the degree of asymmetry of the probability density function and in the peak value of the density curve at the mean.
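A minimal sketch of how such an empirical probability density can be estimated from a vibration record (a normalized histogram is assumed here; the paper does not state which estimator it uses):

```python
import numpy as np

def empirical_pdf(x, bins=50):
    """Empirical probability density of a vibration record via a normalized histogram."""
    density, edges = np.histogram(x, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])     # bin centers for plotting the curve
    return centers, density
```

Plotting `density` against `centers` for each looseness extent gives curves comparable to Fig. 2; the skewness and peak height near the mean can then be compared directly.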
Overall, the characteristics of the vibration signal change with the pedestal looseness extent, and the different extents are characterized by different characteristics of the vibration signal and its spectrum.
2.2. Looseness extent feature extraction and looseness extent feature set construction
The pedestal looseness extent of rotating machinery can be visibly characterized by the time-frequency characteristics of the vibration signal, but these are difficult for a pattern recognition algorithm to recognize automatically. In order to characterize the looseness extent quantitatively and easily implement intelligent recognition, multiple features are extracted to construct the original looseness extent feature set. To reflect the changes in the vibration signal as comprehensively as possible, integrating time- and frequency-domain information, 29 time-frequency parameters are extracted. 15 time-domain parameters are extracted from the vibration signal, such as maximum amplitude, average amplitude, root mean square, shape factor, kurtosis index, crest factor, impulse factor, clearance factor and skewness index. And 14 frequency-domain parameters are extracted from the frequency spectrum, such as average frequency amplitude, centroid frequency, mean-square frequency, frequency variance and root mean square frequency. Finally, the 15 time-domain and 14 frequency-domain feature parameters, provided in Table 1, are combined into a 29-dimensional time-frequency feature parameter set in order to capture the pedestal looseness extent features fully and reliably. Thus the original looseness extent feature set is obtained.
In Table 1, $x\left(n\right)$ represents the time series of a signal, $n=$ 1, 2,…, $N$, where $N$ is the sampling number. $s\left(k\right)$ represents the frequency spectrum of $x\left(n\right)$, $k=$ 1, 2,…, $K$, where $K$ is the number of spectrum lines. ${f}_{k}$ is the frequency of the $k$th spectrum line. $u$ represents the mean value of $x\left(n\right)$. Time domain feature parameters ${P}_{1}$–${P}_{6}$ represent the amplitude and energy of the time domain signal; ${P}_{7}$–${P}_{15}$ denote the distribution of the time series of the signal. Frequency domain feature parameter ${P}_{16}$ characterizes the spectrum energy; ${P}_{17}$–${P}_{20}$ characterize the position change of the main frequency band; ${P}_{21}$–${P}_{29}$ characterize the degree of decentralization or centralization of the frequency spectrum.
Table 1. The time-frequency feature parameters
No.  Feature parameter expression  No.  Feature parameter expression  No.  Feature parameter expression  No.  Feature parameter expression
$P_1$  $\max\left|x(n)\right|$  $P_9$  $\frac{1}{N}\sum_{n=1}^{N}(x(n)-u)^4$  $P_{17}$  $\frac{\sum_{k=1}^{K}f_k s(k)}{\sum_{k=1}^{K}s(k)}$  $P_{25}$  $\frac{\sum_{k=1}^{K}\sqrt{|f_k-p_{18}|}\,s(k)}{\sqrt{p_{24}}K}$
$P_2$  $\frac{1}{N}\sum_{n=1}^{N}|x(n)|$  $P_{10}$  $\frac{p_8}{p_6^3}$  $P_{18}$  $\sqrt{\frac{\sum_{k=1}^{K}f_k^2 s(k)}{\sum_{k=1}^{K}s(k)}}$  $P_{26}$  $\frac{\sum_{k=1}^{K}(f_k-p_{18})^2 s(k)}{\sum_{k=1}^{K}s(k)}$
$P_3$  $\left(\frac{1}{N}\sum_{n=1}^{N}\sqrt{|x(n)|}\right)^2$  $P_{11}$  $\frac{p_9}{p_6^4}$  $P_{19}$  $\sqrt{\frac{\sum_{k=1}^{K}f_k^4 s(k)}{\sum_{k=1}^{K}f_k^2 s(k)}}$  $P_{27}$  $\frac{\sum_{k=1}^{K}(f_k-p_{18})^3 s(k)}{p_{24}^3 K}$
$P_4$  $\sqrt{\frac{1}{N}\sum_{n=1}^{N}x^2(n)}$  $P_{12}$  $\frac{p_4}{p_2}$  $P_{20}$  $\frac{\sum_{k=1}^{K}f_k^2 s(k)}{\sqrt{\sum_{k=1}^{K}s(k)\sum_{k=1}^{K}f_k^4 s(k)}}$  $P_{28}$  $\frac{\sum_{k=1}^{K}(f_k-p_{18})^4 s(k)}{p_{24}^4 K}$
$P_5$  $\frac{1}{N}\sum_{n=1}^{N}x^2(n)$  $P_{13}$  $\frac{p_1}{p_2}$  $P_{21}$  $\sqrt{\frac{1}{K-1}\sum_{k=1}^{K}\left[s(k)-p_{17}\right]^2}$  $P_{29}$  $\frac{p_{24}}{p_{17}}$
$P_6$  $\sqrt{\frac{1}{N-1}\sum_{n=1}^{N}(x(n)-u)^2}$  $P_{14}$  $\frac{p_1}{p_4}$  $P_{22}$  $\frac{\sum_{k=1}^{K}\left[s(k)-p_{17}\right]^3}{K p_{21}^3}$
$P_7$  $\frac{1}{N-1}\sum_{n=1}^{N}(x(n)-u)^2$  $P_{15}$  $\frac{p_1}{p_3}$  $P_{23}$  $\frac{\sum_{k=1}^{K}\left[s(k)-p_{17}\right]^4}{K p_{21}^4}$
$P_8$  $\frac{1}{N}\sum_{n=1}^{N}(x(n)-u)^3$  $P_{16}$  $\frac{1}{K}\sum_{k=1}^{K}s(k)$  $P_{24}$  $\sqrt{\frac{\sum_{k=1}^{K}(f_k-p_{18})^2 s(k)}{K}}$
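A sketch of how a few of the parameters in Table 1 might be computed (Python/NumPy is assumed; only a representative subset of the 29 parameters is shown, with numbering following the table):

```python
import numpy as np

def time_domain_features(x):
    """A few of the 15 time-domain parameters of Table 1."""
    u = x.mean()
    p1 = np.max(np.abs(x))                       # maximum amplitude
    p2 = np.mean(np.abs(x))                      # average amplitude
    p3 = np.mean(np.sqrt(np.abs(x))) ** 2        # square-mean-root amplitude
    p4 = np.sqrt(np.mean(x ** 2))                # root mean square
    p6 = x.std(ddof=1)                           # standard deviation
    p11 = np.mean((x - u) ** 4) / p6 ** 4        # kurtosis index
    p14 = p1 / p4                                # crest factor
    return np.array([p1, p2, p3, p4, p6, p11, p14])

def frequency_domain_features(freqs, spec):
    """A few of the 14 frequency-domain parameters; s(k) is the spectrum amplitude."""
    p16 = spec.mean()                                        # mean spectrum amplitude
    p17 = np.sum(freqs * spec) / np.sum(spec)                # centroid frequency
    p18 = np.sqrt(np.sum(freqs ** 2 * spec) / np.sum(spec))  # root mean square frequency
    return np.array([p16, p17, p18])
```

Concatenating the full 15 time-domain and 14 frequency-domain values per vibration record yields one 29-dimensional row of the original looseness extent feature set.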
Compared with single-domain feature extraction, the 29-dimensional time-frequency feature set has three advantages: 1) it is more amenable to pattern recognition than classic spectrum analysis methods; 2) it reflects the characteristics of the looseness extent more comprehensively than single time-domain or single frequency-domain feature extraction; 3) it is more sensitive to changes in the looseness extent characteristics than STFT, WVD, wavelet decomposition or EMD.
2.3. Looseness extent sensitive feature selection and the looseness extent sensitive feature set construction
The looseness extent sensitivity of each feature in the original feature set is different; the non-sensitive and poorly sensitive features are bound to weaken the characterization capability of the feature set and to interfere with the recognition results. Therefore, the features sensitive to pedestal looseness extent must be selected to construct the looseness extent sensitive feature set, whose characterization capability is stronger than that of the original feature set, so that the recognition accuracy is improved.
The scatter matrices include the between-class scatter matrix and the within-class scatter matrix [23, 24], from which the between-class and within-class scatter values can be calculated. The between-class scatter value reflects the separability of different feature classes: the larger it is, the better the separability of the feature. The within-class scatter value reflects the clustering of samples within the same class: the smaller it is, the better the clustering of the feature. If a feature has a large between-class scatter value and a small within-class scatter value, it separates different looseness extents well and clusters samples of the same extent well, so its recognition capability for different pedestal looseness extents is stronger and its looseness sensitivity is better. Therefore, the looseness sensitivity index algorithm is designed based on the between-class and within-class scatter matrices, according to the feature's ability to reflect separability and degree of clustering. The looseness extent sensitive features can then be selected from the original feature set to construct the looseness extent sensitive feature set.
There are $C$ classes of looseness samples with different looseness extents, with ${N}_{i}$ samples in the $i$th class. The original high-dimensional feature set is $X=\left\{{x}_{1},{x}_{2},\dots ,{x}_{D}\right\}$, where $D$ is the number of dimensions.
The between-class scatter matrix ${S}_{B}$ is as follows:
$${S}_{B}=\sum_{i=1}^{C}{N}_{i}\left({u}_{i}-{u}_{0}\right){\left({u}_{i}-{u}_{0}\right)}^{T},$$
where ${u}_{i}$ is the mean of the $i$th class and ${u}_{0}$ is the mean of the total samples.
The within-class scatter matrix ${S}_{W}$ is as follows:
$${S}_{W}=\sum_{j=1}^{C}\sum_{i=1}^{{N}_{j}}\left({x}_{i}^{j}-{u}_{j}\right){\left({x}_{i}^{j}-{u}_{j}\right)}^{T},$$
where ${x}_{i}^{j}$ is the $i$th sample of the $j$th class and ${u}_{j}$ is the mean of the $j$th class.
Then $tr\left\{{S}_{W}\right\}$ and $tr\left\{{S}_{B}\right\}$ are calculated, where $tr\left\{\cdot \right\}$ denotes the trace of a matrix. $tr\left\{{S}_{W}\right\}$ is an average measure of the feature variance over all classes, and $tr\left\{{S}_{B}\right\}$ is an average measure of the distance between the global mean and the mean of each class.
Different looseness samples can be identified because they are located in different regions of the feature space; the larger the distance between these regions, the better the separability. So the looseness sensitivity index $J$ is designed as follows:
$$J=\frac{tr\left\{{S}_{B}\right\}}{tr\left\{{S}_{W}\right\}}.$$
Obviously, the value of $J$ becomes larger if $tr\left\{{S}_{B}\right\}$ is larger or $tr\left\{{S}_{W}\right\}$ is smaller. The looseness sensitivity index $J$ reflects a feature's recognition capability: the larger $J$ is, the stronger the recognition capability; the smaller $J$ is, the weaker it is.
The looseness extent sensitive features are selected by the value of $J$. The information is more complete if more features are selected, which helps recognition accuracy; but if too many features are selected, non-sensitive or poorly sensitive features will be brought into the feature set and the recognition accuracy will suffer. So deciding how many features to select is very important. First, the looseness sensitivity index ${J}_{i}$ ($i=$ 1, 2,…, $D$) of each feature is calculated. Secondly, ${u}_{J}$, the mean of all sensitivity indexes, is calculated. Finally, the features whose sensitivity index satisfies ${J}_{i}\ge {u}_{J}$ are selected as looseness extent sensitive features, and the looseness extent sensitive feature set is constructed.
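The selection rule above can be sketched as follows (a per-feature scalar version of the scatter-trace index J = tr{S_B}/tr{S_W} is assumed here, and the function names are illustrative, not from the paper):

```python
import numpy as np

def sensitivity_index(feature, labels):
    """Looseness sensitivity index J of one feature column over all samples."""
    classes = np.unique(labels)
    u0 = feature.mean()                                   # global mean
    # between-class scatter: class sizes times squared distance of class means to u0
    s_b = sum(np.sum(labels == c) * (feature[labels == c].mean() - u0) ** 2
              for c in classes)
    # within-class scatter: squared deviations of samples from their class mean
    s_w = sum(np.sum((feature[labels == c] - feature[labels == c].mean()) ** 2)
              for c in classes)
    return s_b / s_w

def select_sensitive_features(X, labels):
    """Keep the columns of X whose index J_i is at least the mean u_J of all J values."""
    J = np.array([sensitivity_index(X[:, i], labels) for i in range(X.shape[1])])
    return np.where(J >= J.mean())[0]
```

A feature whose class means are well separated and whose classes are tight gets a large J and survives the mean threshold; a feature distributed identically across classes gets J near zero and is discarded.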
3. Dimensionality reduction of the high-dimensional looseness sensitive feature set based on linear local tangent space alignment (LLTSA)
3.1. Problem description
Consider a looseness sample set ${\mathbf{X}}_{ORG}=\left[{\mathbf{x}}_{org1},{\mathbf{x}}_{org2},\dots ,{\mathbf{x}}_{orgN}\right]$ ($N$ is the number of all looseness samples), used for training and testing, with noise, drawn from ${\mathbf{R}}^{m}$, in which there exists an underlying nonlinear manifold ${\mathbf{M}}^{d}$ of dimension $d$. Suppose ${\mathbf{M}}^{d}$ is embedded in the ambient Euclidean space ${\mathbf{R}}^{m}$, where $d<m$. The problem that dimension reduction with LLTSA solves is finding a transformation matrix $\mathbf{A}$ which maps the sample set ${\mathbf{X}}_{ORG}$ in ${\mathbf{R}}^{m}$ to the set $\mathbf{Y}=\left[{\mathbf{y}}_{1},{\mathbf{y}}_{2},\dots ,{\mathbf{y}}_{N}\right]$ in ${\mathbf{R}}^{d}$ as follows:
$$\mathbf{Y}={\mathbf{A}}^{T}{\mathbf{X}}_{ORG}{\mathbf{H}}_{N},$$
where ${\mathbf{H}}_{N}=\mathbf{I}-\mathbf{e}{\mathbf{e}}^{T}/N$ is the centering matrix, $\mathbf{I}$ is the identity matrix, $\mathbf{e}$ is an $N$-dimensional column vector of all ones, and $\mathbf{Y}$ recovers the underlying $d$-dimensional nonlinear manifold ${\mathbf{M}}^{d}$ of ${\mathbf{X}}_{ORG}$.
3.2. The algorithm of LLTSA
Given the data set $\mathbf{X}$ obtained by treating ${\mathbf{X}}_{ORG}$ with principal component analysis, for each point ${\mathbf{x}}_{i}\in \mathbf{X}$ denote its $k$ nearest neighbors by the matrix ${\mathbf{X}}_{i}=\left[{\mathbf{x}}_{i1},{\mathbf{x}}_{i2},\dots ,{\mathbf{x}}_{ik}\right]$. In order to preserve the local structure of each ${\mathbf{X}}_{i}$, a local linear approximation of the data points in ${\mathbf{X}}_{i}$ is computed in the tangent space as follows:
$$\underset{{\mathbf{Q}}_{i},{\mathrm{\Theta}}_{i}}{\mathrm{min}}{\Vert {\mathbf{X}}_{i}{\mathbf{H}}_{k}-{\mathbf{Q}}_{i}{\mathrm{\Theta}}_{i}\Vert}^{2},$$
where ${\mathbf{H}}_{k}=\mathbf{I}-\mathbf{e}{\mathbf{e}}^{T}/k$, ${\mathbf{Q}}_{i}$ is an orthonormal basis matrix of the tangent space, formed by the $d$ left singular vectors of ${\mathbf{X}}_{i}{\mathbf{H}}_{k}$ corresponding to its $d$ largest singular values, and ${\mathrm{\Theta}}_{i}$ is defined as:
$${\mathrm{\Theta}}_{i}={\mathbf{Q}}_{i}^{T}{\mathbf{X}}_{i}{\mathbf{H}}_{k}=\left[{\theta}_{1}^{\left(i\right)},{\theta}_{2}^{\left(i\right)},\dots ,{\theta}_{k}^{\left(i\right)}\right],$$
where ${\theta}_{j}^{\left(i\right)}$ is the local coordinate corresponding to the basis ${\mathbf{Q}}_{i}$. Now we construct the global coordinates ${\mathbf{y}}_{i}$, $i=$ 1, 2,…, $N$, in ${\mathbf{R}}^{d}$ based on the local coordinates ${\theta}_{j}^{\left(i\right)}$, which represent the local geometry, as follows:
$${\mathbf{y}}_{ij}={\bar{\mathbf{y}}}_{i}+{\mathbf{L}}_{i}{\theta}_{j}^{\left(i\right)}+{\epsilon}_{j}^{\left(i\right)},\quad j=1,2,\dots ,k,$$
where ${\bar{\mathbf{y}}}_{i}$ is the mean of the ${\mathbf{y}}_{ij}$, ${\mathbf{L}}_{i}$ is a local affine transformation matrix that needs to be determined, and ${\epsilon}_{j}^{\left(i\right)}$ is the local reconstruction error. Letting ${\mathbf{Y}}_{i}=\left[{\mathbf{y}}_{i1},{\mathbf{y}}_{i2},\dots ,{\mathbf{y}}_{ik}\right]$ and ${\mathbf{E}}_{i}=\left[{\epsilon}_{1}^{\left(i\right)},{\epsilon}_{2}^{\left(i\right)},\dots ,{\epsilon}_{k}^{\left(i\right)}\right]$, we have:
$${\mathbf{E}}_{i}={\mathbf{Y}}_{i}{\mathbf{H}}_{k}-{\mathbf{L}}_{i}{\mathrm{\Theta}}_{i}.$$
To preserve as much of the local geometry as possible in the low-dimensional feature space, we find ${\mathbf{y}}_{i}$ and ${\mathbf{L}}_{i}$ that minimize the reconstruction errors ${\epsilon}_{j}^{\left(i\right)}$ as follows:
$$\underset{{\mathbf{Y}}_{i},{\mathbf{L}}_{i}}{\mathrm{min}}\sum_{i}{\Vert {\mathbf{E}}_{i}\Vert}^{2}=\underset{{\mathbf{Y}}_{i},{\mathbf{L}}_{i}}{\mathrm{min}}\sum_{i}{\Vert {\mathbf{Y}}_{i}{\mathbf{H}}_{k}-{\mathbf{L}}_{i}{\mathrm{\Theta}}_{i}\Vert}^{2}.$$
Therefore, the optimal affine transformation matrix ${\mathbf{L}}_{i}$ has the form ${\mathbf{L}}_{i}={\mathbf{Y}}_{i}{\mathbf{H}}_{k}{\mathrm{\Theta}}_{i}^{+}$, and ${\mathbf{E}}_{i}={\mathbf{Y}}_{i}{\mathbf{H}}_{k}\left(\mathbf{I}-{\mathrm{\Theta}}_{i}^{+}{\mathrm{\Theta}}_{i}\right)$, where ${\mathrm{\Theta}}_{i}^{+}$ is the Moore-Penrose generalized inverse of ${\mathrm{\Theta}}_{i}$.
Let $\mathbf{Y}=\left[{\mathbf{y}}_{1},{\mathbf{y}}_{2},\dots ,{\mathbf{y}}_{N}\right]$ and let ${\mathbf{S}}_{i}$ be the 0-1 selection matrix such that $\mathbf{Y}{\mathbf{S}}_{i}={\mathbf{Y}}_{i}$; then the objective function is converted to the form:
$$\underset{\mathbf{Y}}{\mathrm{min}}\sum_{i}{\Vert \mathbf{Y}{\mathbf{S}}_{i}{\mathbf{W}}_{i}\Vert}^{2}=\underset{\mathbf{Y}}{\mathrm{min}}\,tr\left(\mathbf{Y}\mathbf{S}\mathbf{W}{\mathbf{W}}^{T}{\mathbf{S}}^{T}{\mathbf{Y}}^{T}\right),$$
where $\mathbf{S}=\left[{\mathbf{S}}_{1},{\mathbf{S}}_{2},\dots ,{\mathbf{S}}_{N}\right]$ and $\mathbf{W}=\mathrm{diag}\left({\mathbf{W}}_{1},{\mathbf{W}}_{2},\dots ,{\mathbf{W}}_{N}\right)$ with ${\mathbf{W}}_{i}={\mathbf{H}}_{k}\left(\mathbf{I}-{\mathrm{\Theta}}_{i}^{+}{\mathrm{\Theta}}_{i}\right)$. According to the numerical analysis in [17], ${\mathbf{W}}_{i}$ can also be written as follows:
$${\mathbf{W}}_{i}={\mathbf{H}}_{k}\left(\mathbf{I}-{\mathbf{V}}_{i}{\mathbf{V}}_{i}^{T}\right),$$
where ${\mathbf{V}}_{i}$ is the matrix of $d$ right singular vectors of ${\mathbf{X}}_{i}{\mathbf{H}}_{k}$ corresponding to its $d$ largest singular values. To uniquely determine $\mathbf{Y}$, we impose the constraint $\mathbf{Y}{\mathbf{Y}}^{T}={\mathbf{I}}_{d}$. Considering the map $\mathbf{Y}={\mathbf{A}}^{T}\mathbf{X}{\mathbf{H}}_{N}$, the objective function takes the ultimate form:
$$\underset{{\mathbf{A}}^{T}\mathbf{X}{\mathbf{H}}_{N}{\mathbf{X}}^{T}\mathbf{A}={\mathbf{I}}_{d}}{\mathrm{min}}\,tr\left({\mathbf{A}}^{T}\mathbf{X}{\mathbf{H}}_{N}\mathbf{B}{\mathbf{H}}_{N}{\mathbf{X}}^{T}\mathbf{A}\right),$$
where $\mathbf{B}=\mathbf{S}\mathbf{W}{\mathbf{W}}^{T}{\mathbf{S}}^{T}$. It can be proved that the above minimization problem is equivalent to solving a generalized eigenvalue problem as follows:
$$\mathbf{X}{\mathbf{H}}_{N}\mathbf{B}{\mathbf{H}}_{N}{\mathbf{X}}^{T}\mathbf{a}=\lambda \mathbf{X}{\mathbf{H}}_{N}{\mathbf{X}}^{T}\mathbf{a}.$$
Let the column vectors ${\mathbf{a}}_{1},{\mathbf{a}}_{2},\dots ,{\mathbf{a}}_{d}$ be the solutions of this generalized eigenvalue problem, ordered according to their eigenvalues ${\lambda}_{1}<{\lambda}_{2}<\dots <{\lambda}_{d}$. The transformation matrix ${\mathbf{A}}_{LLTSA}$ which minimizes the objective function is then:
$${\mathbf{A}}_{LLTSA}=\left[{\mathbf{a}}_{1},{\mathbf{a}}_{2},\dots ,{\mathbf{a}}_{d}\right].$$
In practical problems, one often encounters the difficulty that $\mathbf{X}{\mathbf{H}}_{N}{\mathbf{X}}^{T}$ is singular, which stems from the fact that the number of data points is much smaller than their dimension. To tackle the singularity of $\mathbf{X}{\mathbf{H}}_{N}{\mathbf{X}}^{T}$, we use PCA to project the data set onto its principal subspace; in addition, the PCA preprocessing can reduce the noise. ${\mathbf{A}}_{PCA}$ denotes the transformation matrix of PCA.
Therefore, the ultimate transformation matrix is $\mathbf{A}={\mathbf{A}}_{PCA}{\mathbf{A}}_{LLTSA}$, and $\mathbf{X}\to \mathbf{Y}={\mathbf{A}}^{T}{\mathbf{X}}_{ORG}{\mathbf{H}}_{N}$.
Because of LLTSA’s good clustering performance, the $d$-dimensional looseness feature set $\mathbf{Y}={\mathbf{A}}^{T}{\mathbf{X}}_{ORG}{\mathbf{H}}_{N}={\left({\mathbf{A}}_{PCA}{\mathbf{A}}_{LLTSA}\right)}^{T}{\mathbf{X}}_{ORG}{\mathbf{H}}_{N}$ is output by LLTSA from the high-dimensional looseness extent sensitive feature set ${\mathbf{X}}_{ORG}$. The low-dimensional feature set $\mathbf{Y}$ has better separability than ${\mathbf{X}}_{ORG}$, and its features are mutually independent, so $\mathbf{Y}$ is well suited to pattern recognition. The set $\mathbf{Y}$ is input into the weighted $K$ nearest neighbor classifier for looseness extent recognition.
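The LLTSA procedure of this section can be sketched in NumPy as follows. This is a simplified illustration, not the authors' implementation: the PCA preprocessing step is replaced by a small ridge term in the generalized eigenproblem, and brute-force neighbor search is used.

```python
import numpy as np

def lltsa(X, d=2, k=8):
    """Sketch of LLTSA. X holds one sample per column (m x N); returns (A, Y = A^T X H_N)."""
    m, N = X.shape
    Xc = X - X.mean(axis=1, keepdims=True)          # X H_N: column-wise centering
    # brute-force pairwise squared distances for neighbor search
    D2 = ((Xc[:, :, None] - Xc[:, None, :]) ** 2).sum(axis=0)
    B = np.zeros((N, N))                            # alignment matrix S W W^T S^T
    for i in range(N):
        idx = np.argsort(D2[i])[:k]                 # k nearest neighbors (incl. the point itself)
        Xi = Xc[:, idx]
        Xi = Xi - Xi.mean(axis=1, keepdims=True)    # local centering: X_i H_k
        # the d largest right singular vectors span the local tangent coordinates
        _, _, Vt = np.linalg.svd(Xi, full_matrices=False)
        G = np.hstack([np.ones((k, 1)) / np.sqrt(k), Vt[:d].T])
        B[np.ix_(idx, idx)] += np.eye(k) - G @ G.T  # accumulate alignment penalty
    # generalized eigenproblem (Xc B Xc^T) a = lambda (Xc Xc^T) a, smallest d eigenvalues
    M1 = Xc @ B @ Xc.T
    M2 = Xc @ Xc.T + 1e-9 * np.eye(m)               # small ridge in place of the PCA step
    w, V = np.linalg.eig(np.linalg.solve(M2, M1))
    order = np.argsort(w.real)
    A = V[:, order[:d]].real                        # A_LLTSA: d smallest eigenvectors
    Y = A.T @ Xc                                    # low-dimensional feature set
    return A, Y
```

Applied to the looseness extent sensitive feature set (one sample per column), `Y` is the low-dimensional set fed to the classifier in the next section.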
4. Pedestal looseness extent recognition by weighted $\mathit{K}$ nearest neighbor classifier (WKNNC) and the looseness extent recognition process
4.1. Looseness extent recognition by WKNNC
In order to finally realize automatic pedestal looseness extent recognition, the low-dimensional looseness extent sensitive feature set output by LLTSA should be recognized by a classifier. The simplicity of the $K$ nearest neighbor classifier (KNNC) algorithm makes it easy to implement. Compared with other classification algorithms such as support vector machines, neural networks and decision trees, the KNNC has no complex training process; it directly uses the local information of the classified data to form the final classification decision, which improves its computational efficiency and makes it easy to realize and widely used. But it suffers from poor classification performance when samples of different classes overlap in some regions of the feature space, its result is affected by the neighborhood size $k$, and its stability is poor. The weighted $K$ nearest neighbor classifier (WKNNC) [21] overcomes these shortcomings: it retains all the advantages of the KNNC while being insensitive to the neighborhood size $k$ and more robust. So the WKNNC is used for pedestal looseness extent recognition.
Given a training set $\mathbf{X}=\left\{\left({x}_{i},{l}_{i}\right),{x}_{i}\in {\mathbf{R}}^{m},i=1,2,\dots ,n\right\}$ of $n$ samples, the class label ${l}_{i}\in \left\{{l}_{1},{l}_{2},\dots ,{l}_{r}\right\}$ of each sample ${x}_{i}$ is known, and the class label ${l}_{t}$ of a testing sample ${x}_{t}$ needs to be identified. The general idea of the WKNNC is as follows: for the given testing sample ${x}_{t}$, its $k$ nearest neighbors are selected from the training samples, and the class label ${l}_{t}$ is identified based on the class labels of those $k$ nearest neighbors. The target of classification is to minimize the classification error $M$; for each value ${l}_{j}$, the classification error $M\left({l}_{j}\right)$ is as follows:
$$M\left({l}_{j}\right)=\sum_{i=1}^{r}R\left({l}_{i},{l}_{j}\right)P\left({l}_{i}|{x}_{t}\right),$$
where $P\left({l}_{i}|{x}_{t}\right)$ is the probability of ${x}_{t}$ being classified as ${l}_{i}$, and $R\left({l}_{i},{l}_{j}\right)$ is the error of class label ${l}_{i}$ being classified as ${l}_{j}$. The WKNNC sets all misclassification errors to be the same, as follows:
$$R\left({l}_{i},{l}_{j}\right)=\left\{\begin{array}{ll}0,& i=j,\\ 1,& i\ne j.\end{array}\right.$$
Then, the calculation steps of the WKNNC are described as follows:
Step 1: The Euclidean distance $d\left({x}_{t},{x}_{i}\right)$ between the testing sample ${x}_{t}$ and a training sample ${x}_{i}$ is defined as:
$$d\left({x}_{t},{x}_{i}\right)=\Vert {x}_{t}-{x}_{i}\Vert =\sqrt{\sum_{q=1}^{m}{\left({x}_{t}^{\left(q\right)}-{x}_{i}^{\left(q\right)}\right)}^{2}}.$$
The $k+1$ nearest neighbor samples are selected from the training samples based on the value of $d\left({x}_{t},{x}_{i}\right)$, denoted ${x}_{t,1}$, ${x}_{t,2}$,…, ${x}_{t,k+1}$.
Step 2: From the $k+1$ nearest neighbor samples, the sample ${x}_{t,k+1}$ whose Euclidean distance to ${x}_{t}$ is maximum is selected; this maximum distance, denoted $d\left({x}_{t},{x}_{t,k+1}\right)$, is used to standardize the Euclidean distances between the other $k$ nearest neighbor samples and ${x}_{t}$ as follows:
$$D\left({x}_{t},{x}_{i}\right)=\frac{d\left({x}_{t},{x}_{i}\right)}{d\left({x}_{t},{x}_{t,k+1}\right)},\quad i=1,2,\dots ,k,$$
where $D\left({x}_{t},{x}_{i}\right)$ is the standardized Euclidean distance.
Step 3: Using the Gauss kernel function, the $D\left({x}_{t},{x}_{i}\right)$ are transformed into the similarity probability $p\left({x}_{i}|{x}_{t}\right)$ between ${x}_{t}$ and ${x}_{i}$ as follows:
$$p\left({x}_{i}|{x}_{t}\right)=\frac{1}{\sqrt{2\pi}}\mathrm{exp}\left(-\frac{D{\left({x}_{t},{x}_{i}\right)}^{2}}{2}\right).$$
Step 4: The posterior probability $P\left({l}_{i}|{x}_{t}\right)$ that ${x}_{t}$ belongs to class ${l}_{i}$ ($i=$ 1, 2,…, $r$) is calculated from the similarity probabilities $p\left({x}_{j}|{x}_{t}\right)$ between ${x}_{t}$ and its $k$ nearest neighbor samples as follows:
$$P\left({l}_{i}|{x}_{t}\right)=\frac{\sum_{j=1}^{k}p\left({x}_{j}|{x}_{t}\right)I\left({l}_{j}={l}_{i}\right)}{\sum_{j=1}^{k}p\left({x}_{j}|{x}_{t}\right)},$$
where $I\left(\cdot \right)$ is the indicator function;
then the most likely classification result $KNN\left({x}_{t}\right)$ is obtained as:

$KNN\left({x}_{t}\right)=\underset{{l}_{i}}{\mathrm{argmax}}\,P\left({l}_{i}\mid {x}_{t}\right).$
According to the similarity degree between the nearest neighbor samples and the testing sample ${x}_{t}$, the WKNNC gives different weights to the nearest neighbor samples, so that the classification result of the testing sample follows the most similar training samples more closely. Therefore, the WKNNC is insensitive to the neighborhood size $k$, and the robustness of the looseness extent identification results is better.
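The WKNNC procedure above can be sketched in Python; this is a minimal illustration under stated assumptions (a unit Gauss kernel bandwidth and NumPy in place of the authors' implementation), not the original code:

```python
import numpy as np

def wknnc_predict(X_train, y_train, x_t, k=9):
    """Weighted k-nearest-neighbor classification of one testing sample
    x_t: closer neighbors receive larger Gaussian weights in the vote.
    Assumes the (k+1)-th neighbor distance is nonzero."""
    # Step 1: Euclidean distances to all training samples.
    d = np.linalg.norm(X_train - x_t, axis=1)
    idx = np.argsort(d)[:k + 1]                  # k+1 nearest neighbors
    # Step 2: standardize by the distance to the (k+1)-th neighbor.
    neighbors = idx[:k]
    D = d[neighbors] / d[idx[-1]]
    # Step 3: Gauss kernel turns standardized distances into weights
    # (bandwidth fixed at 1 here, an assumption).
    p = np.exp(-D ** 2 / 2.0)
    # Step 4: weighted vote gives a posterior probability per label;
    # the label with the largest posterior is returned.
    labels = np.unique(y_train)
    post = np.array([p[y_train[neighbors] == l].sum() for l in labels]) / p.sum()
    return labels[np.argmax(post)]
```

Because the weights decay with distance, adding a few extra neighbors changes the vote only slightly, which is consistent with the insensitivity to $k$ noted above.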
4.2. Looseness extent recognition process
Based on the above preparation, the flowchart of the proposed method is shown in Fig. 3; the steps of the method are described as follows:
Step 1: data acquisition. The vibration signal is collected, the corresponding frequency spectrum is calculated, and the training samples and testing samples are obtained.
Step 2: extract the original features and construct the original looseness extent feature set. 15 time-domain parameters are extracted from the vibration signal and 14 frequency-domain parameters are extracted from the frequency spectrum; the 29-dimensional time-frequency looseness extent feature set is then constructed by fusing the 15 time-domain parameters and 14 frequency-domain parameters.
Step 3: select the looseness sensitive features and construct the looseness extent sensitive feature set. First, the looseness sensitivity index ${J}_{i}$ ($i=$ 1, 2, …, $D$) of each feature is calculated. Second, ${u}_{J}$, the mean of all looseness sensitivity indices, is calculated. Then, the looseness features whose sensitivity index satisfies ${J}_{i}\ge {u}_{J}$ are selected, and the looseness extent sensitive feature set is constructed.
Step 4: dimension reduction. The high-dimensional looseness extent sensitive feature set is compressed into the low-dimensional looseness extent sensitive feature set with LLTSA, yielding a feature set with low dimension and good classification performance.
Step 5: recognize the looseness extents and output the recognition result. The low-dimensional feature set is inputted into WKNNC for looseness recognition, and pedestal looseness extent recognition for rotating machinery is realized.
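The sensitivity-index filtering of Step 3 can be sketched as follows; `F` and `J` are hypothetical stand-ins for the extracted 29-dimensional feature matrix and its sensitivity indices:

```python
import numpy as np

def select_sensitive_features(F, J):
    """Keep only the features whose looseness sensitivity index J_i
    reaches the mean index u_J (Step 3 above).
    F: (n_samples, D) feature matrix; J: (D,) sensitivity indices."""
    u_J = J.mean()
    mask = J >= u_J            # sensitive features satisfy J_i >= u_J
    return F[:, mask], mask
```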
Fig. 3. The flowchart of the proposed method
5. Application of the proposed method and discussion
5.1. Experiment set up and signal acquisition
In this study, a pedestal looseness experiment on rotating machinery is performed to verify the effectiveness of the proposed looseness extent recognition method. The test rig is constructed as shown in Fig. 4.
Fig. 4. Field installation diagram and test diagram (Notes: 1, 2, …, 10 are the serial numbers of the connecting bolts for the embedded steel plate)
The embedded steel plate is fixed to the top of the tunnel by 10 bolts, and the mounting bracket of the fan is attached to the embedded steel plate. The pedestal looseness extent is simulated by different numbers of loose bolts: as the number of loose bolts increases, the pedestal looseness extent strengthens. With the fan speed at 1500 r/min, the vibration signals of 5 kinds of looseness extent, namely T1 (all bolts tightened), T2 (1 bolt loose), T3 (2 bolts loose), T4 (3 bolts loose) and T5 (4 bolts loose), are collected at test point 1 and test point 2, as listed in Table 2. Fig. 1 shows the collected vibration signal and its spectrum for the five kinds of looseness extent; only the time-domain waveform of the first 4096 points and the corresponding spectrum in the frequency range of 0-5000 Hz, collected at test point 1, are shown. The complete sensor selection and distribution scheme is shown in Table 3.
Table 2. The looseness extents of the pedestal
Looseness extent  Description 
T1  All bolts tightened 
T2  1 bolt loose (bolt 8 loose, others tightened) 
T3  2 bolts loose (bolts 7 and 9 loose, others tightened) 
T4  3 bolts loose (bolts 6, 8 and 10 loose, others tightened) 
T5  4 bolts loose (bolts 1, 4, 6 and 9 loose, others tightened) 
Table 3. The main parameters of the experiment
Serial number  Parameter/device  Value/ type 
1  Accelerometer  PCB 352C03 
2  Sensitivity of the accelerometer  1.031 mV/(m/s^{2}) 
3  Frequency range of the accelerometer  0.5 to 15000 Hz 
4  Data acquisition card  NI 9234 
5  Data acquisition system  DAQ3.0 
6  Sampling frequency  25.6 kHz 
7  Sampling points  100k 
5.2. Experimental results and analysis
For each looseness extent, 2048 continuous data points are intercepted from each record as a time series sample, yielding 100 samples per looseness extent. We randomly select 20 of the 100 samples as training samples, and randomly select 20 of the remaining 80 samples as testing samples. The proposed method is then applied to recognize the looseness extent of the pedestal.
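The random sampling scheme described here can be sketched as follows (a minimal illustration; the fixed seed is only for reproducibility and is an assumption):

```python
import numpy as np

def split_samples(n_total=100, n_train=20, n_test=20, seed=0):
    """Randomly pick n_train training samples out of n_total, then
    n_test testing samples out of the remainder, per looseness extent."""
    perm = np.random.default_rng(seed).permutation(n_total)
    train_idx = perm[:n_train]                    # 20 training samples
    test_idx = perm[n_train:n_train + n_test]     # 20 of remaining 80
    return train_idx, test_idx
```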
In order to compare the dimension reduction and redundancy removal effects, the PCA, LPP, LDA and LLTSA methods are used to reduce the dimension of the origin looseness extent feature set. For comparability, the target dimension of PCA, LPP, LDA and LLTSA is set to 3. The comparison of dimension reduction results is shown in Fig. 5(a)-(d).
By comparing Figs. 5(a)-(d), we can see that the PCA-based dimension reduction method cannot effectively separate the high-dimensional looseness feature set: the looseness extent T2 is separated, but serious aliasing remains, which will degrade the accuracy of the WKNNC looseness recognition. The LPP-based method can partly separate the different looseness feature sets, but some data remain mixed together: looseness extents T1, T3 and T4 overlap. The LDA-based method cannot separate the feature sets of looseness extents T1, T2 and T3. The LLTSA-based method works better than the PCA, LPP and LDA methods, but looseness extents T3 and T4 are not completely separated, and the clustering performance is not good enough.
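Of the four methods compared, only PCA has a direct linear-algebra form; a numpy-only sketch of that baseline (centering the feature set and projecting it onto the top principal directions) is:

```python
import numpy as np

def pca_reduce(F, n_components=3):
    """Project a feature set onto its top principal components,
    as in the 3-dimensional PCA baseline above."""
    Fc = F - F.mean(axis=0)                  # center each feature
    # Rows of Vt are the principal directions of the centered data.
    U, s, Vt = np.linalg.svd(Fc, full_matrices=False)
    return Fc @ Vt[:n_components].T          # (n_samples, n_components)
```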
The looseness sensitivity index of each of the 29 features is calculated as shown in Table 4, and the mean of the sensitivity indices is ${u}_{J}=$ 12.352. Then, the 9 features (bolded in Table 4) whose sensitivity index satisfies ${J}_{i}\ge {u}_{J}$ are selected to construct the looseness sensitive feature set.
In order to compare the characterization capabilities of the origin looseness extent feature set and the looseness extent sensitive feature set, the looseness extent sensitive feature set is inputted into LLTSA for dimension reduction, with the target dimension set to 3. The result is shown in Fig. 6.
By comparing Fig. 5(d) and Fig. 6, we can see that the result in Fig. 6 is better: the 5 different looseness extents are completely separated, and better clustering performance is obtained.
In order to compare the recognition accuracy of the origin looseness extent feature set with that of the looseness extent sensitive feature set, and the recognition accuracy of the dimension reduction methods PCA, LPP, LDA and LLTSA, both feature sets are respectively inputted into PCA, LPP, LDA and LLTSA for dimension reduction. The dimension reduction results are then inputted into WKNNC to recognize the looseness extent. The target dimension is set to 3, and the nearest neighbor size $k$ is set to 9. The computation time was measured in the following computer environment: 10 GB RAM, 3.4 GHz Intel Core i7-3770 CPU, 64-bit operating system and MATLAB R2015b.
Fig. 5. The comparison of dimension reduction results for the origin looseness extent feature set
a) The dimension reduction results of PCA
b) The dimension reduction results of LPP
c) The dimension reduction results of LDA
d) The dimension reduction results of LLTSA
The comparison results are shown in Table 5. In Table 5, the recognition accuracy rate ${\eta}_{i}$ is defined as:

${\eta}_{i}=\frac{{N}_{C}^{{T}_{i}}}{{N}^{{T}_{i}}}\times 100\,\%,$
where ${N}^{{T}_{i}}$ represents the total number of testing samples, ${N}_{C}^{{T}_{i}}$ represents the number of testing samples correctly recognized, ${T}_{i}$ represents the pedestal looseness extent of the testing samples, and $i=$ 1, 2, …, 5.
The average recognition accuracy rate $\eta $ can be expressed as:

$\eta =\frac{1}{5}\sum_{i=1}^{5}{\eta}_{i}.$
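The two accuracy measures can be computed as follows (a small sketch; class labels are assumed to be coded 0 to 4 for T1-T5):

```python
import numpy as np

def accuracy_rates(y_true, y_pred, n_classes=5):
    """Per-extent recognition accuracy rate eta_i (in %) and the
    average rate eta over the n_classes looseness extents."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    eta = np.array([
        100.0 * np.mean(y_pred[y_true == c] == c)   # N_C^Ti / N^Ti
        for c in range(n_classes)
    ])
    return eta, eta.mean()
```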
Table 4. The sensitivity indices
Feature  Sensitivity index  Feature  Sensitivity index  Feature  Sensitivity index  Feature  Sensitivity index  Feature  Sensitivity index 
${P}_{1}$  10.353  ${P}_{7}$  5.857  ${P}_{13}$  3.257  ${P}_{19}$  9.566  ${P}_{25}$  32.803 
${P}_{2}$  9.481  ${P}_{8}$  0.072  ${P}_{14}$  3.562  ${P}_{20}$  43.857  ${P}_{26}$  18.232 
${P}_{3}$  8.782  ${P}_{9}$  2.585  ${P}_{15}$  3.052  ${P}_{21}$  9.574  ${P}_{27}$  4.0230 
${P}_{4}$  10.489  ${P}_{10}$  0.079  ${P}_{16}$  28.247  ${P}_{22}$  2.289  ${P}_{28}$  11.627 
${P}_{5}$  5.857  ${P}_{11}$  3.996  ${P}_{17}$  24.299  ${P}_{23}$  1.820  ${P}_{29}$  13.847 
${P}_{6}$  10.489  ${P}_{12}$  13.786  ${P}_{18}$  34.497  ${P}_{24}$  31.834 
Fig. 6. The dimension reduction results of LLTSA for the looseness extent sensitive feature set
Table 5. The comparison of recognition accuracy
The kind of feature set  Dimension reduction method  ${\eta}_{1}$ (%)  ${\eta}_{2}$ (%)  ${\eta}_{3}$ (%)  ${\eta}_{4}$ (%)  ${\eta}_{5}$ (%)  $\eta $ (%)  Computation time (s) 
Origin looseness extent feature set  PCA  45  50  80  75  95  69  0.29 
LPP  50  45  95  85  100  75  0.21  
LDA  60  75  70  60  90  71  0.19  
LLTSA  80  95  80  90  85  86  0.46  
Looseness extent sensitive feature set  PCA  75  90  75  55  85  76  0.20 
LPP  70  85  80  95  100  86  0.18  
LDA  75  95  80  85  90  85  0.16  
LLTSA  95  100  95  100  95  97  0.37 
Table 5 indicates that the average recognition accuracy rate $\eta $ achieved on the testing samples by the looseness extent sensitive feature set reaches 97 %, higher than that of the origin looseness extent feature set for the same dimension reduction method. The reason is that the looseness extent sensitive feature set does not include the non-sensitive and poorly sensitive features, which enhances the ability to characterize the pedestal looseness and improves the recognition accuracy rate. We can also see that with LLTSA-based dimension reduction the looseness extent recognition accuracy improves significantly, much more than with the other dimension reduction methods, because LLTSA is well suited to nonlinear dimension reduction and has superior clustering performance, which is conducive to recognition. The computation time of LLTSA is 0.46 s, more than that of the other dimension reduction methods, but still fast enough not to hinder engineering applications. We can also find that, for the same dimension reduction method, the computation time is reduced when the looseness extent sensitive feature set is used for recognition.
Next, the recognition accuracy of WKNNC is compared with that of KNNC, where the neighborhood size of both WKNNC and KNNC is varied over $k=$ 3 to 20 and the dimension reduction method is LLTSA. Both the origin looseness extent feature set and the looseness extent sensitive feature set are used for comparison. The recognition accuracy rate curves versus the neighborhood size $k$ are shown in Fig. 7; for each curve, the average recognition accuracy rate, standard deviation and peak-to-peak value are calculated, as shown in Table 6.
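The per-curve summary statistics reported in Table 6 can be computed as follows (a minimal sketch; `acc_curve` is assumed to hold the recognition accuracy rate for each tested neighborhood size $k$):

```python
import numpy as np

def curve_stats(acc_curve):
    """Average, standard deviation and peak-to-peak value of a
    recognition-accuracy-versus-k curve, as in Table 6."""
    acc = np.asarray(acc_curve, dtype=float)
    return acc.mean(), acc.std(), np.ptp(acc)   # ptp = max - min
```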
Fig. 7. The comparison of recognition results
Table 6. Recognition result analysis
Origin looseness extent feature set + KNNC  Origin looseness extent feature set + WKNNC  Looseness extent sensitive feature set + KNNC  Looseness extent sensitive feature set + WKNNC  
Average recognition accuracy rate $\eta $ (%)  77  86.17  84.5  97.06 
Standard deviation (%)  9.16  5.18  5.81  0.99 
Peak-to-peak value (%)  31  19  21  3 
Fig. 7 and Table 6 show that the recognition accuracy rate of WKNNC is higher than that of KNNC, and that the standard deviation and peak-to-peak value of WKNNC are smaller, because the nearest neighbor samples are given different weights according to their similarity to the testing sample. This also proves that WKNNC is insensitive to the neighborhood size $k$ and has better stability and robustness than KNNC. For the same pattern recognition algorithm, the recognition accuracy rate based on the looseness extent sensitive feature set is higher than that based on the origin looseness extent feature set, which again confirms that the characterization capability of the looseness extent sensitive feature set is stronger. Likewise, the standard deviation and peak-to-peak value based on the looseness extent sensitive feature set are smaller, which confirms that the stability and clustering performance of the looseness extent sensitive feature set are better. Together, Table 5, Fig. 7 and Table 6 show that the recognition accuracy rate of the proposed method is higher than those of the other methods, and that it is insensitive to the neighborhood size $k$, with better stability and robustness.
This application example demonstrates the performance of the proposed method, which comprehensively uses the characteristics of the vibration signal to characterize the pedestal looseness extent: it extracts time-frequency features to construct the origin looseness extent feature set, obtains the looseness extent sensitive feature set to enhance the characterization capability, reduces the dimension with LLTSA, and recognizes the looseness extent with WKNNC. The results confirm that the proposed method can recognize the pedestal looseness extent, and its feasibility and validity are verified by the experimental results.
6. Conclusions
A pedestal looseness extent recognition method for rotating machinery based on vibration sensitive timefrequency feature and manifold learning has been proposed in this paper.
1) When the pedestal of rotating machinery is loose, the structural dynamic characteristics of the machinery change, and the vibration signal and its spectrum change too. Therefore, the pedestal looseness extent of rotating machinery can be exactly characterized by the characteristics of the vibration signal and its spectrum.
2) The time-frequency features are extracted to construct the origin looseness extent feature set, which includes 15 time-domain parameters and 14 frequency-domain parameters. The origin looseness extent feature set thus realizes a quantitative characterization of the pedestal looseness extent, which facilitates intelligent diagnosis.
3) The looseness sensitivity index algorithm is designed, and the non-sensitive and poorly sensitive features are removed from the origin looseness feature set. The resulting looseness extent sensitive feature set has better characterization capability and clustering performance than the origin feature set. The manifold learning algorithm LLTSA has excellent clustering and dimension reduction characteristics and is well suited to reducing nonlinear feature sets; it is used to reduce the high-dimensional, nonlinear looseness extent sensitive feature set.
4) The WKNNC gives different weights to the nearest neighbor samples based on their similarity to the testing sample, so the WKNNC is insensitive to the neighborhood size $k$, and the robustness of the pedestal looseness extent recognition results is better.
5) The proposed method exploits the advantages of all of its parts together to realize pedestal looseness extent recognition of rotating machinery with better recognition accuracy and efficiency. The feasibility and performance of the proposed method were proved by a successful pedestal looseness extent recognition application on a fan pedestal.
6) The proposed method could be used for the identification and classification of other rotating machinery faults, for example rolling bearing faults such as inner race cracks, outer race cracks and ball cracks. It must be noted that the training samples and testing samples should come from the same operating conditions (e.g. rotor speed and load), because the time-frequency features can hardly characterize the faults of rotating machinery effectively when the operating conditions differ.
It should be pointed out that although the pedestal looseness extent can be effectively recognized, one problem remains: for example, we can determine that two bolts are loose, but it is difficult to determine which two, so each bolt must still be checked manually when the equipment is maintained. It would be interesting to study this problem in the future.
References

[1] Se Camby M. K., Lee Eric W. M., Lai Alvin C. K. Impact of location of jet fan on airflow structure in tunnel fire. Tunnelling and Underground Space Technology, Vol. 27, Issue 1, 2012, p. 30-40.
[2] Costantino Antonio, Musto Marilena, Rotondo Giuseppe, et al. Numerical analysis for reduced-scale road tunnel model equipped with axial jet fan ventilation system. Energy Procedia, Vol. 45, 2014, p. 1146-1154.
[3] Qin Zhaoye, Han Qinkai, Chu Fulei. Bolt loosening at rotating joint interface and its influence on rotor dynamics. Engineering Failure Analysis, Vol. 59, 2016, p. 456-466.
[4] Zhang Yanping, Huang Shuhong, Hou Jinghong, et al. Continuous wavelet grey moment approach for vibration analysis of rotating machinery. Mechanical Systems and Signal Processing, Vol. 20, Issue 5, 2006, p. 1202-1220.
[5] Lee Seung-Mock, Choi Yeon-Sun. Fault diagnosis of partial rub and looseness in rotating machinery using Hilbert-Huang transform. Journal of Mechanical Science and Technology, Vol. 22, Issue 11, 2008, p. 2151-2162.
[6] Wu T. Y., Chung Y. L., Liu C. H. Looseness diagnosis of rotating machinery via vibration analysis through Hilbert-Huang transform approach. Journal of Vibration and Acoustics, Vol. 132, Issue 3, 2010, p. 031005.
[7] Wu Tian-Yau, Hong Huei-Cheng, Chung Yu-Liang. A looseness identification approach for rotating machinery based on post-processing of ensemble empirical mode decomposition and autoregressive modeling. Journal of Vibration and Control, Vol. 18, Issue 6, 2011, p. 796-807.
[8] Nembhard Adrian D., Sinha Jyoti K., Yunusa-Kaltungo A. Development of a generic rotating machinery fault diagnosis approach insensitive to machine speed and support type. Journal of Sound and Vibration, Vol. 337, Issue 17, 2015, p. 321-341.
[9] Muralidharan V., Sugumaran V. Rough-set based rule learning and fuzzy classification of wavelet features for fault diagnosis of monoblock centrifugal pump. Measurement, Vol. 46, Issue 9, 2013, p. 3057-3063.
[10] Su Zuqiang, Tang Baoping, Deng Lei, et al. Fault diagnosis method using supervised extended local tangent space alignment for dimension reduction. Measurement, Vol. 62, 2015, p. 1-14.
[11] Yang C. Y., Wu T. Y. Diagnostics of gear deterioration using EEMD approach and PCA process. Measurement, Vol. 61, 2015, p. 75-87.
[12] He Fei, Xu Jinwu. A novel process monitoring and fault detection approach based on statistics locality preserving projections. Journal of Process Control, Vol. 37, 2016, p. 46-57.
[13] Akbari Ali, Khalil Arjmandi Meisam. An efficient voice pathology classification scheme based on applying multilayer linear discriminant analysis to wavelet packet-based features. Biomedical Signal Processing and Control, Vol. 10, 2014, p. 209-223.
[14] Li Feng, Wang Jiaxu, Tang Baoping, et al. Life grade recognition method based on supervised uncorrelated orthogonal locality preserving projection and k-nearest neighbor classifier. Neurocomputing, Vol. 138, Issue 22, 2014, p. 271-282.
[15] Souza Roberto, Rittner Letícia, Lotufo Roberto. A comparison between k-Optimum Path Forest and k-Nearest Neighbors supervised classifiers. Pattern Recognition Letters, Vol. 39, Issue 1, 2014, p. 2-10.
[16] Saravanan N., Kumar Siddabattuni V. N. S., Ramachandran K. I. Fault diagnosis of spur bevel gear box using artificial neural network (ANN), and proximal support vector machine (PSVM). Applied Soft Computing, Vol. 10, Issue 1, 2010, p. 344-360.
[17] Moosavi S. S., Djerdir A., Ait-Amirat Y., et al. ANN based fault diagnosis of permanent magnet synchronous motor under stator winding shorted turn. Electric Power Systems Research, Vol. 125, 2015, p. 67-82.
[18] Salahshoor Karim, Kordestani Mojtaba, Khoshro Majid S. Fault detection and diagnosis of an industrial steam turbine using fusion of SVM (support vector machine) and ANFIS (adaptive neuro-fuzzy inference system) classifiers. Energy, Vol. 35, Issue 12, 2010, p. 5472-5482.
[19] Muralidharan V., Sugumaran V., Indira V. Fault diagnosis of monoblock centrifugal pump using SVM. Engineering Science and Technology, Vol. 17, Issue 3, 2014, p. 152-157.
[20] Zhang Tianhao, Yang Jie, Zhao Deli, et al. Linear local tangent space alignment and application to face recognition. Neurocomputing, Vol. 70, Issues 7-9, 2007, p. 1547-1553.
[21] Lei Yaguo, Zuo Ming J. Gear crack level identification based on weighted K nearest neighbor classification algorithm. Mechanical Systems and Signal Processing, Vol. 23, Issue 5, 2009, p. 1535-1547.
[22] Chen Lifei, Guo Gongde. Nearest neighbor classification of categorical data by attributes weighting. Expert Systems with Applications, Vol. 42, Issue 6, 2015, p. 3142-3149.
[23] Wang Yuli, Chakrabarti Amitabha, Sorensen Christopher M. A light-scattering study of the scattering matrix elements of Arizona road dust. Journal of Quantitative Spectroscopy and Radiative Transfer, Vol. 163, 2015, p. 72-79.
[24] Yao Chao, Lu Zhaoyang, Li Jing, et al. An improved Fisher discriminant vector employing updated between-scatter matrix. Neurocomputing, Vol. 173, 2016, p. 154-162.
Acknowledgements
This research was supported by the National Natural Science Foundation of China (Project No. 51305471, No. 51405048, No. 51505048), China Postdoctoral Science Foundation (Project No. 2014M560719), Chongqing Research Program of Basic Research and Frontier Technology (Project No. cstc2014jcyjA70009), Science and Technology Research Project of Chongqing Education Commission (Project No. KJ1400308, No. KJ1500516), National Scholarship Project (No. 201408505081). Finally, the authors are very grateful to the anonymous reviewers for their helpful comments and constructive suggestions.
Renxiang Chen contributed to the conception of the study and proposed the method. Zhiyan Mu designed and performed the experiments. Lixia Yang performed the data analyses and wrote the manuscript. Xiangyang Xu helped perform the analysis with constructive discussions. Xia Zhang played an important role in interpreting the results.