
Generalized Principal Component Analysis

(Foreign Literature Translation)

Original authors: René Vidal, Member, IEEE, Yi Ma, Member, IEEE, and Shankar Sastry, Fellow, IEEE

Abstract: This paper presents an algebro-geometric solution to the problem of segmenting an unknown number of subspaces of unknown and varying dimensions from sample data points. We represent the subspaces with a set of homogeneous polynomials whose degree is the number of subspaces and whose derivatives at a data point give normal vectors to the subspace passing through that point. When the number of subspaces is known, we show that these polynomials can be estimated linearly from the data; hence, subspace segmentation is reduced to classifying one point per subspace. We select these points optimally from the data set by minimizing a certain distance function, thus dealing automatically with moderate noise in the data. A basis for the complement of each subspace is then recovered by applying standard PCA to the collection of derivatives (normal vectors). Extensions of generalized principal component analysis (GPCA) that deal with data in a high-dimensional space and with an unknown number of subspaces are also presented. Our experiments on low-dimensional data show that GPCA outperforms existing algebraic algorithms based on polynomial factorization and provides a good initialization to iterative techniques such as K-subspaces and Expectation Maximization. We also present applications of GPCA to computer vision problems such as face clustering, temporal video segmentation, and 3D motion segmentation from point correspondences in multiple affine views.

Keywords: Principal component analysis (PCA); subspace segmentation; Veronese map; dimensionality reduction; temporal video segmentation; dynamic scenes and motion segmentation

1 Introduction

Principal Component Analysis (PCA) [12] refers to the problem of fitting a linear subspace of unknown dimension to sample points; it is a multivariate statistical method that summarizes many correlated variables with a small number of linear combinations. The problem shows up in many fields, e.g., pattern recognition, data compression, regression, and image processing, and can be solved in a remarkably simple way from the singular value decomposition (SVD) of the mean-subtracted data matrix. With noisy data, this linear algebraic solution has the geometric interpretation of minimizing the sum of squared distances from the (noisy) data points to their projections onto the estimated subspace.
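
As a concrete illustration of the SVD solution just described, the following minimal sketch (an illustration written for this translation, not code from the paper; the data and the choice d = 2 are assumptions) fits a d-dimensional subspace by taking the top left singular vectors of the mean-subtracted data matrix; the discarded singular values give the sum of squared distances to the fit.

    import numpy as np

    def pca_subspace(X, d):
        """Fit a d-dimensional affine subspace to the columns of X (D x N) via SVD."""
        mean = X.mean(axis=1, keepdims=True)               # center the data
        U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
        basis = U[:, :d]                                   # top-d left singular vectors span the subspace
        residual = np.sum(S[d:] ** 2)                      # sum of squared distances to the projections
        return mean, basis, residual

    # Illustrative data: noisy samples near a 2D plane in R^3
    rng = np.random.default_rng(0)
    X = np.vstack([rng.standard_normal((2, 200)), 0.01 * rng.standard_normal((1, 200))])
    mean, basis, residual = pca_subspace(X, d=2)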

Beyond these algebraic and geometric interpretations, PCA can also be understood probabilistically. In Probabilistic PCA [20] (PPCA), the noise is assumed to be drawn from an unknown distribution, and the problem becomes one of identifying the subspace and the distribution parameters in a maximum-likelihood sense. When the noise distribution is Gaussian, the algebro-geometric and probabilistic interpretations coincide [2]. However, when the noise distribution is non-Gaussian, the solution to PPCA is no longer linear, as shown in [2], where PCA is generalized to arbitrary distributions in the exponential family.
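
For the Gaussian case mentioned above, the maximum-likelihood PPCA estimates have a well-known closed form: the noise variance is the average of the discarded covariance eigenvalues and the loading matrix is built from the leading eigenvectors. The sketch below is an illustration of that closed form under these assumptions, not an implementation taken from the paper.

    import numpy as np

    def ppca_ml(X, q):
        """Closed-form maximum-likelihood PPCA with isotropic Gaussian noise; X is D x N."""
        D, N = X.shape
        mean = X.mean(axis=1, keepdims=True)
        C = (X - mean) @ (X - mean).T / N                  # sample covariance
        evals, evecs = np.linalg.eigh(C)                   # ascending eigenvalues
        evals, evecs = evals[::-1], evecs[:, ::-1]         # sort in descending order
        sigma2 = evals[q:].mean()                          # ML noise variance: mean of discarded eigenvalues
        W = evecs[:, :q] @ np.diag(np.sqrt(np.maximum(evals[:q] - sigma2, 0.0)))
        return mean, W, sigma2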

Another extension of PCA is nonlinear principal component analysis (NLPCA), or Kernel PCA (KPCA), which identifies a nonlinear manifold from sample points. The standard solution to NLPCA [16] is based on first embedding the data into a higher-dimensional feature space F and then applying standard PCA to the embedded data. Since the dimension of F can be large, a more practical solution is obtained from the eigenvalue decomposition of the so-called kernel matrix; hence, the name KPCA. One of the disadvantages of KPCA is that, in practice, it is difficult to determine which kernel function to use, because the choice of kernel naturally depends on the nonlinear structure to be identified. In fact, learning kernel functions is an active topic of research in machine learning. To the best of our knowledge, our work is the first to prove analytically that the Veronese map (a polynomial embedding) is the natural embedding for data lying in a union of multiple subspaces.
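
To make the polynomial embedding referred to above concrete, the snippet below computes a degree-n Veronese map: a point x is sent to the vector of all its degree-n monomials, so that any homogeneous polynomial of degree n becomes a linear function of the embedded point. The implementation and the example point are a sketch written here for illustration.

    import numpy as np
    from itertools import combinations_with_replacement

    def veronese(x, n):
        """Degree-n Veronese map: the vector of all degree-n monomials in the entries of x."""
        return np.array([np.prod([x[i] for i in idx])
                         for idx in combinations_with_replacement(range(len(x)), n)])

    # A point in R^3 under the degree-2 map: [x1^2, x1*x2, x1*x3, x2^2, x2*x3, x3^2]
    print(veronese(np.array([1.0, 2.0, 3.0]), 2))          # [1. 2. 3. 4. 6. 9.]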

In this paper, we consider the following alternative extension of PCA to the case of data lying in a union of subspaces, as illustrated in Fig. 1 for two subspaces of R^3.

1.1 Previous Work on Subspace Segmentation

Subspace segmentation is a fundamental problem in many applications in computer vision (e.g., image/motion/video segmentation), image processing (e.g., image representation and compression), and systems theory (e.g., hybrid system identification). It is usually regarded as a "chicken-and-egg" problem: if the segmentation of the data were known, one could easily fit a single subspace to each group of points using standard PCA; conversely, if the subspace bases were known, one could easily find the data points that best fit each subspace. Since, in practice, neither the subspace bases nor the segmentation of the data are known, most existing methods randomly choose a basis for each subspace and then iterate between data segmentation and subspace estimation. This can be done using, for example, K-subspaces [10], an extension of K-means to the case of subspaces based on PCA [19], subspace growing and subspace selection [15], or Expectation Maximization (EM) for mixtures of subspaces. Unfortunately, most iterative methods are, in general, very sensitive to initialization; hence, they may not converge to the global optimum [21].
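
The alternating scheme described above can be sketched as follows in the style of K-subspaces: assign each point to the subspace with the smallest residual, then refit each subspace by PCA on its assigned points. The random initialization, fixed iteration count, and function name are illustrative assumptions, and the dependence on the initial bases is exactly the weakness noted above.

    import numpy as np

    def k_subspaces(X, dims, n_iter=50, seed=0):
        """Alternate between segmenting the columns of X (D x N) and refitting one
        linear subspace per group by PCA; dims lists the subspace dimensions."""
        D, N = X.shape
        rng = np.random.default_rng(seed)
        bases = [np.linalg.qr(rng.standard_normal((D, d)))[0] for d in dims]   # random initialization
        labels = np.zeros(N, dtype=int)
        for _ in range(n_iter):
            # Segmentation step: squared distance from each point to each subspace
            dists = np.stack([np.sum((X - B @ (B.T @ X)) ** 2, axis=0) for B in bases])
            labels = np.argmin(dists, axis=0)
            # Estimation step: refit each subspace by PCA on its group
            for k, d in enumerate(dims):
                Xk = X[:, labels == k]
                if Xk.shape[1] >= d:
                    U, _, _ = np.linalg.svd(Xk, full_matrices=False)
                    bases[k] = U[:, :d]
        return labels, bases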

The need for good initialization has motivated the recent development of algebro-geometric approaches to subspace segmentation that do not require initialization. In [13] (see also [3]), it is shown that when the subspaces are orthogonal, of equal dimension, and intersect only at the origin, one can use the SVD of the data to define a similarity matrix from which the segmentation of the data can be obtained using spectral clustering techniques. Unfortunately, this method is sensitive to noise in the data, as shown in [13], [27], where various improvements are proposed, and it cannot handle subspaces that intersect arbitrarily [14], [22], [28]. The latter case has been addressed in an ad hoc fashion by using clustering algorithms such as K-means, spectral clustering, or EM [14], [28] to segment the data and then applying PCA to each group. The only algebraic approaches that deal with arbitrary intersections are [17], which studies the case of two planes, and [24], which studies the case of subspaces of codimension one, i.e., hyperplanes, and shows that hyperplane segmentation is equivalent to homogeneous polynomial factorization. Our previous work [23] extended this framework to subspaces of unknown and possibly different dimensions, under the assumption that the number of subspaces is known. This paper unifies the results of [24] and [23] and extends them to the case in which both the number and the dimensions of the subspaces are unknown.
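
For the factorization-based approach of [13], [3] discussed above, a simplified illustration is given below: under the orthogonality and equal-dimension assumptions, the matrix built from the top right singular vectors of the data is (ideally) zero for pairs of points lying in different subspaces, so its entrywise magnitude can be used as a similarity matrix for spectral clustering. This is a sketch written here, not the exact procedure of those papers.

    import numpy as np

    def similarity_from_svd(X, r):
        """Similarity matrix from the top-r right singular vectors of the D x N data X.
        For independent subspaces, |Q[i, j]| is ideally zero when points i and j
        belong to different subspaces; noise makes these entries only approximately zero."""
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        Vr = Vt[:r].T                        # N x r
        return np.abs(Vr @ Vr.T)             # entrywise magnitude used as an affinity

    # Q = similarity_from_svd(X, r) can then be passed to any spectral clustering routine, e.g.,
    # sklearn.cluster.SpectralClustering(n_clusters=n, affinity="precomputed").fit_predict(Q)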

1.2 Organization of the Paper

In this paper, we propose an algebro-geometric approach to subspace segmentation called Generalized Principal Component Analysis (GPCA), which is based on fitting, differentiating, and dividing polynomials. Unlike previous work in this area, we do not restrict the subspaces to be orthogonal, to intersect trivially, or to have known and equal dimensions. Instead, we address the most general case of an arbitrary number of subspaces of unknown and possibly different dimensions (e.g., Fig. 1), with arbitrary intersections among the subspaces.

In Section 2, we motivate and highlight the key ideas of our approach by solving the simple example shown in Fig. 1.

In Section 3, we generalize this example to a known number of subspaces of unknown and possibly different dimensions. We show that the union of all subspaces can be represented by a set of homogeneous polynomials whose degree is the number of subspaces and whose factors encode normal vectors to the subspaces. The coefficients of these polynomials can be estimated linearly from sample data points on the subspaces, and the set of normal vectors to each subspace can then be obtained by evaluating the derivatives of these polynomials at any point lying on that subspace. Therefore, subspace segmentation is reduced to classifying one point per subspace. When such points are given (e.g., in semisupervised learning), this means that a single positive example per class is sufficient to learn the mixture of subspaces. When all the data points are unlabeled (e.g., in unsupervised learning), we use polynomial division to recursively select points in the data set that minimize their distance to the algebraic set, hence dealing automatically with moderate noise in the data. A basis for the complement of each subspace is then recovered by applying standard PCA to the derivatives of the polynomials (normal vectors) at those points. The final result is a global, noniterative subspace segmentation algorithm based on simple linear and polynomial algebra.
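
The sketch below illustrates the fit-and-differentiate steps above in the simplest setting of two planes in R^3 (noise handling, polynomial division, and the general case are omitted, and the helper names are assumptions made here): the coefficients of a single degree-2 homogeneous polynomial are estimated linearly as the null vector of the Veronese-embedded data matrix, and the gradient of that polynomial at each data point gives a normal to the plane containing the point.

    import numpy as np
    from itertools import combinations_with_replacement

    def veronese_deg2(x):
        """Degree-2 Veronese map of x in R^3: [x1^2, x1*x2, x1*x3, x2^2, x2*x3, x3^2]."""
        idx = combinations_with_replacement(range(3), 2)
        return np.array([x[i] * x[j] for i, j in idx])

    def fit_two_planes(X):
        """X: 3 x N points drawn from a union of two planes through the origin.
        Returns a unit normal estimate for the plane through each point."""
        # Fit: the coefficient vector c of p(x) = c . veronese(x) lies in the null space
        # of the embedded data matrix, so it is estimated linearly from the data.
        L = np.stack([veronese_deg2(x) for x in X.T])      # N x 6
        c = np.linalg.svd(L)[2][-1]                        # right singular vector of the smallest singular value
        # Differentiate: grad p(x) is normal to the plane passing through x.
        def grad_p(x):
            x1, x2, x3 = x
            return np.array([2*c[0]*x1 + c[1]*x2 + c[2]*x3,
                             c[1]*x1 + 2*c[3]*x2 + c[4]*x3,
                             c[2]*x1 + c[4]*x2 + 2*c[5]*x3])
        normals = np.array([grad_p(x) for x in X.T])
        # Points near the intersection of the planes have near-zero gradients and are unreliable.
        return normals / np.linalg.norm(normals, axis=1, keepdims=True)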

In Section 4, we discuss some extensions of our approach. We show how to deal with low-dimensional subspaces of a high-dimensional space via a linear projection onto a low-dimensional subspace that preserves the number and dimensions of the subspaces. We also show how to generalize the basic GPCA algorithm to the case in which the number of subspaces is unknown, via a recursive partitioning algorithm.
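
As a small illustration of the projection step above, and under the assumption that a generic linear projection preserves the number and dimensions of the subspaces, one can project the D-dimensional data onto d_max + 1 dimensions before segmenting. The random orthonormal projection below is one simple choice made here for illustration, not necessarily the projection analyzed in the paper.

    import numpy as np

    def project_data(X, d_max, seed=0):
        """Project the D x N data X onto d_max + 1 dimensions (assumes D > d_max) with a
        random orthonormal map; for a generic projection, subspaces of dimension at most
        d_max keep their number and dimensions."""
        D = X.shape[0]
        rng = np.random.default_rng(seed)
        Q = np.linalg.qr(rng.standard_normal((D, d_max + 1)))[0]   # D x (d_max+1), orthonormal columns
        return Q.T @ X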

In Section 5, we present experiments on low-dimensional data showing that GPCA gives about half the error of existing algebraic algorithms based on polynomial factorization and improves the performance of iterative techniques such as K-subspaces and EM by about 50 percent with respect to random initialization. We also present applications of GPCA to computer vision problems such as face clustering, temporal video segmentation, and 3D motion segmentation from point correspondences in multiple affine views.

In this paper, we derive a constructive algebro-geometric solution to the subspace segmentation problem when the number of subspaces n is known; the case in which the number of subspaces is unknown is also discussed. The algebro-geometric solution can be summarized by the following statement:

A union of n subspaces can be represented by a set of homogeneous polynomials of degree n. These polynomials can be estimated linearly given sufficiently many sample points on the union of the subspaces. Information about each subspace (a basis for its complement) can be obtained from the derivatives of these polynomials at a point lying in that subspace, and such points can be selected recursively via polynomial division. Therefore, the subspace segmentation problem is mathematically equivalent to fitting, differentiating, and dividing a set of homogeneous polynomials.
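
A small numerical check of this statement for two hyperplanes: the product of the two linear forms is a degree-2 homogeneous polynomial that vanishes exactly on the union of the planes, and its gradient at a point of one plane is parallel to that plane's normal, because the factor for the other plane is nonzero there. The specific vectors below are arbitrary illustrative choices.

    import numpy as np

    b1 = np.array([1.0, 0.0, 0.0])              # normal of plane 1: b1 . x = 0
    b2 = np.array([1.0, -1.0, 2.0])             # normal of plane 2: b2 . x = 0
    p = lambda x: (b1 @ x) * (b2 @ x)           # degree-2 polynomial vanishing on both planes
    grad_p = lambda x: b1 * (b2 @ x) + b2 * (b1 @ x)   # product rule

    x1 = np.array([0.0, 3.0, 5.0])              # a point on plane 1
    print(p(x1))                                # 0.0: the point lies on the union of the planes
    g = grad_p(x1)
    print(g / np.linalg.norm(g))                # [1. 0. 0.]: parallel to b1, the normal of plane 1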

References

[1] D.S. Broomhead and M. Kirby, “A New Approach to Dimensionality Reduction Theory and Algorithms,” SIAM J. Applied Math.,vol. 60, no. 6, pp. 2114-2142, 2000.

[2] M. Collins, S. Dasgupta, and R. Schapire, “A Generalization of Principal Component Analysis to the Exponential Family,” Advances in Neural Information Processing Systems, vol. 14, 2001.

[3] J. Costeira and T. Kanade, “A Multibody Factorization Method for Independently Moving Objects,” Int'l J. Computer Vision, vol. 29, no. 3, pp. 159-179, 1998.

[4] C. Eckart and G. Young, “The Approximation of One Matrix by Another of Lower Rank,” Psychometrika, vol. 1, pp. 211-218, 1936.

[5] M.A. Fischler and R.C. Bolles, “RANSAC Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography,” Comm. ACM, vol. 26, pp. 381-395, 1981.

[6] I.M. Gelfand, M.M. Kapranov, and A.V. Zelevinsky, Discriminants, Resultants, and Multidimensional Determinants. Birkhäuser, 1994.

[7] J. Harris, Algebraic Geometry: A First Course. Springer-Verlag, 1992.

[8] R. Hartshorne, Algebraic Geometry. Springer, 1977.

[9] M. Hirsch, Differential Topology. Springer, 1976.

[10] J. Ho, M.-H. Yang, J. Lim, K.-C. Lee, and D. Kriegman, “Clustering Appearances of Objects under Varying Illumination Conditions,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 1, pp. 11-18, 2003.

[11] K. Huang, Y. Ma, and R. Vidal, “Minimum Effective Dimension for Mixtures of Subspaces: A Robust GPCA Algorithm and Its Applications,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 2, pp. 631-638, 2004.

[12] I. Jolliffe, Principal Component Analysis. New York: Springer-Verlag, 1986.

[13] K. Kanatani, “Motion Segmentation by Subspace Separation and Model Selection,” Proc. IEEE Int'l Conf. Computer Vision, vol. 2, pp. 586-591, 2001.

[14] K. Kanatani and Y. Sugaya, “Multi-Stage Optimization for Multi-Body Motion Segmentation,” Proc. Australia-Japan Advanced Workshop Computer Vision, pp. 335-349, 2003.

[15] A. Leonardis, H. Bischof, and J. Maver, “Multiple Eigenspaces,” Pattern Recognition, vol. 35, no. 11, pp. 2613-2627, 2002.



Generalized Principal Component Analysis (GPCA)

René Vidal, Member, IEEE, Yi Ma, Member, IEEE, and Shankar Sastry, Fellow, IEEE

Abstract—This paper presents an algebro-geometric solution to the problem of segmenting an unknown number of subspaces of unknown and varying dimensions from sample data points. We represent the subspaces with a set of homogeneous polynomials whose degree is the number of subspaces and whose derivatives at a data point give normal vectors to the subspace passing through the point. When the number of subspaces is known, we show that these polynomials can be estimated linearly from data; hence, subspace segmentation is reduced to classifying one point per subspace. We select these points optimally from the data set by minimizing a certain distance function, thus dealing automatically with moderate noise in the data. A basis for the complement of each subspace is then recovered by applying standard PCA to the collection of derivatives (normal vectors). Extensions of GPCA that deal with data in a high-dimensional space and with an unknown number of subspaces are also presented. Our experiments on low-dimensional data show that GPCA outperforms existing algebraic algorithms based on polynomial factorization and provides a good initialization to iterative techniques such as K-subspaces and Expectation Maximization. We also present applications of GPCA to computer vision problems such as face clustering, temporal video segmentation, and 3D motion segmentation from point correspondences in multiple affine views.

Index Terms—Principal component analysis (PCA), subspace segmentation, Veronese map, dimensionality reduction, temporal video segmentation, dynamic scenes and motion segmentation.

1 INTRODUCTION

Principal Component Analysis (PCA) [12] refers to the problem of fitting a linear subspace S of R^D of unknown dimension d < D to N sample points {x_j} lying in S. This problem shows up in a variety of applications in many fields, e.g., pattern recognition, data compression, regression, image processing, etc., and can be solved in a remarkably simple way from the singular value decomposition (SVD) of the (mean-subtracted) data matrix [x_1, x_2, ..., x_N] in R^(D x N). With noisy data, this linear algebraic solution has the geometric interpretation of minimizing the sum of the squared distances from the (noisy) data points x_j to their projections in S.

In addition to these algebraic and geometric interpretations, PCA can also be understood in a probabilistic manner. In Probabilistic PCA [20] (PPCA), the noise is assumed to be drawn from an unknown distribution and the problem becomes one of identifying the subspace and distribution parameters in a maximum-likelihood sense. When the noise distribution is Gaussian, the algebro-geometric and probabilistic interpretations coincide [2]. However, when the noise distribution is non-Gaussian, the solution to PPCA is no longer linear, as shown in [2], where PCA is generalized to arbitrary distributions in the exponential family.

Another extension of PCA is nonlinear principal components (NLPCA) or Kernel PCA (KPCA), which is the problem of identifying a nonlinear manifold from sample points. The standard solution to NLPCA [16] is based on first embedding the data into a higher-dimensional feature space F and then applying standard PCA to the embedded data. Since the dimension of F can be large, a more practical solution is obtained from the eigenvalue decomposition of the so-called kernel matrix; hence, the name KPCA. One of the disadvantages of KPCA is that, in practice, it is difficult to determine which kernel function to use because the choice of the kernel naturally depends on the nonlinear structure of the manifold to be identified. In fact, learning kernels is an active topic of research in machine learning. To the best of our knowledge, our work is the first one to prove analytically that the Veronese map (a polynomial embedding) is the natural embedding for data lying in a union of multiple subspaces.
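
As a brief illustration of the kernel-matrix route described above, the snippet below performs KPCA with a polynomial kernel: the kernel matrix is centered and its leading eigenvectors, scaled by the square roots of the eigenvalues, give the nonlinear principal component scores of the samples. The kernel choice and parameterization are assumptions made here for illustration only.

    import numpy as np

    def kernel_pca(X, q, degree=2):
        """Kernel PCA on the columns of X (D x N) with a polynomial kernel of the given degree."""
        K = (X.T @ X + 1.0) ** degree                      # N x N kernel matrix
        N = K.shape[0]
        J = np.eye(N) - np.ones((N, N)) / N
        Kc = J @ K @ J                                     # center the kernel matrix in feature space
        evals, evecs = np.linalg.eigh(Kc)                  # ascending eigenvalues
        evals, evecs = evals[::-1][:q], evecs[:, ::-1][:, :q]
        return evecs * np.sqrt(np.maximum(evals, 0.0))     # scores of the samples on the top-q components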

In this paper, we consider the following alternative extension of PCA to the case of data lying in a union of subspaces, as illustrated in Fig. 1 for two subspaces of R^3.

1.1 Previous Work on Subspace Segmentation

Subspace segmentation is a fundamental problem in many applications in computer vision (e.g., image/motion/video segmentation), image processing (e.g., image representation and compression), and systems theory (e.g., hybrid system identification), which is usually regarded as a “chicken-and-egg” problem. If the segmentation of the data was known, one could easily fit a single subspace to each group of points using standard PCA. Conversely, if the subspace bases were known, one could easily find the data points that best fit each subspace. Since, in practice, neither the subspace bases nor the segmentation of the data are known, most existing methods randomly choose a basis for each subspace and then iterate between data segmentation and subspace estimation. This can be done using, e.g., K-subspaces [10], an extension of K-means to the case of subspaces, subspace growing and subspace selection [15], or Expectation Maximization (EM) for mixtures of PCAs [19]. Unfortunately, most iterative methods are, in general, very sensitive to initialization; hence, they may not converge to the global optimum [21].

The need for initialization methods has motivated the recent development of algebro-geometric approaches to subspace segmentation that do not require initialization. In [13] (see, also, [3]), it is shown that when the subspaces are orthogonal, of equal dimension d, and intersect only at the origin, which implies that D >= nd, one can use the SVD of the data to define a similarity matrix from which the segmentation of the data can be obtained using spectral clustering techniques. Unfortunately, this method is sensitive to noise in the data, as shown in [13], [27], where various improvements are proposed, and it cannot deal with subspaces that intersect arbitrarily [14], [22], [28].




