Monday, October 03, 2005

Face Recognition Using Laplacianfaces

Face Recognition Using Laplacianfaces [1]

This method proposes an approach to face analysis (representation and recognition) that explicitly considers the manifold structure. To be specific, the manifold structure is modeled by a nearest-neighbor graph that preserves the local structure of the image space. An image subspace is obtained by means of “Locality Preserving Projections” (LPP). Each image in the image space is mapped to a lower-dimensional face subspace, which is characterized by a set of feature images called Laplacianfaces (Figure 1). The face subspace preserves local structure and seems to have more discriminating power than the PCA approach for classification purposes.

The method also provides a theoretical analysis to show that PCA, LDA, and LPP can all be obtained from different graph models, based on the idea that a graph structure is inferred on the data points; LPP finds a projection that respects this graph structure. The theoretical analysis shows how PCA, LDA, and LPP arise from the same principle applied to different choices of this graph structure.

The generalized eigenvector problem of LDA can be written as:

$X L X^{T} \mathbf{w} = \lambda\, C\, \mathbf{w}$    (1)

where $\mathbf{w}$ represents the eigenvectors, $\lambda$ the corresponding eigenvalues, $C$ the covariance matrix, $X$ the data matrix of the class samples, and $L$ the Laplacian matrix.

The LPP projections can then be obtained by solving the following generalized eigenvalue problem:

$X L X^{T} \mathbf{w} = \lambda\, X D X^{T} \mathbf{w}$    (2)

where $D$ is a diagonal matrix whose entries are the column sums of the weight matrix $S$ of the nearest-neighbor graph, and $L = D - S$ is the Laplacian matrix.

The Laplacianfaces are able to capture the intrinsic manifold structure of the face space to a certain extent. Both the Laplacian Eigenmap and LPP methods help find a map that preserves the local facial structure. Their objective function is:

$\min \sum_{ij} (y_{i} - y_{j})^{2}\, S_{ij}$    (3)

where $y_{i}$ is the projection of face image $\mathbf{x}_{i}$ and $S_{ij}$ is the weight of the edge joining the neighboring images $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$.


The lowest error rate obtained with this method was 4.6%.
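
To make the LPP step above more concrete, here is a minimal NumPy sketch of how the projection of Eq. (2) could be computed from a set of face images (a sketch under my own assumptions, not the authors' code): the neighborhood size k, the heat-kernel parameter t, and the small regularization term are illustrative choices; the paper itself first projects the images into a PCA subspace so that $X D X^{T}$ is nonsingular.

import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def lpp(X, n_components=10, k=5, t=1.0):
    """X: data matrix of shape (d, m), one flattened face image per column."""
    d, m = X.shape
    dist = cdist(X.T, X.T, "sqeuclidean")            # pairwise squared distances

    # Build the symmetric k-nearest-neighbor graph with heat-kernel weights S_ij.
    S = np.zeros((m, m))
    idx = np.argsort(dist, axis=1)[:, 1:k + 1]       # k nearest neighbors (skip self)
    for i in range(m):
        S[i, idx[i]] = np.exp(-dist[i, idx[i]] / t)
    S = np.maximum(S, S.T)                           # symmetrize the graph

    D = np.diag(S.sum(axis=1))                       # diagonal degree matrix
    L = D - S                                        # graph Laplacian

    # Generalized eigenproblem X L X^T w = lambda X D X^T w (Eq. (2));
    # the eigenvectors with the smallest eigenvalues are the Laplacianfaces.
    A = X @ L @ X.T
    B = X @ D @ X.T + 1e-6 * np.eye(d)               # small ridge instead of the PCA step
    vals, vecs = eigh(A, B)                          # eigenvalues in ascending order
    return vecs[:, :n_components]                    # columns = projection vectors w

New faces are then mapped into the Laplacianface subspace as y = W^T x and classified there, for example with a nearest-neighbor rule.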



Figure 1. Comparison of different face representation methods: (a) Eigenfaces, (b) Fisherfaces, and (c) Laplacianfaces.


[1] X. He, S. Yan, Y. Hu, P. Niyogi, and H. J. Zhang, “Face Recognition Using Laplacianfaces”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 3, March 2005.

Tuesday, September 20, 2005

Graphic of the results

This is the graph of the results obtained during the analysis of all the methodologies.
The graph shows the recognition rate (%) versus the number of faces that each system can recognize.

Friday, September 02, 2005

State of the art research results

These are my findings on the state of the art for face recognition.

Num Publication Name

1 Face Recognition Using the Weighted Fractal Neighbor Distance
2 Face Recognition Using Line Edge Map
3 Face Recognition Based on Fitting a 3D Morphable Model
4 GA-Fisher: A New LDA-Based Face Recognition Algorithm With Selection of Principal Components
5 Deformation Analysis for 3D Face Matching
6 Face Recognition Using Laplacianfaces
7 Face Detection and Identification Using a Hierarchical Feed-forward Recognition Architecture
8 Gabor-Based Kernel PCA with Fractional Power Polynomial Models for Face Recognition
9 Generalized 2D Principal Component Analysis
10 Locally Linear Discriminant Analysis for Multimodally Distributed Classes for Face Recognition with a Single Model Image
11 Appearance-Based Face Recognition and Light-Fields
12 Appearance-Based Face Recognition and Light-Fields
13 Bayesian Shape Localization for Face Recognition Using Global and Local Textures
14 High-Speed Face Recognition Based on Discrete Cosine Transform and RBF Neural Networks
15 Kernel Machine-Based One-Parameter Regularized Fisher Discriminant Method for Face Recognition
16 Nonlinearity and Optimal Component Analysis
17 Face Recognition Using Fuzzy Integral and Wavelet Decomposition Method
18 Face Recognition Using the Discrete Cosine Transform
19 Acquiring Linear Subspaces for Face Recognition under Variable Lighting
20 Face Recognition System Using Local Autocorrelations and Multiscale Integration
21 Combined Subspace Method Using Global and Local Features for Face Recognition
22 Gabor Wavelet Associative Memory for Face Recognition
23 N-feature neural network human face recognition
24 Probabilistic Matching for Face Recognition
25 Face Recognition by Applying Wavelet Subband Representation and Kernel Associative Memory
26 Real-time Embedded Face Recognition for Smart Home
27 A Unified Framework for Subspace Face Recognition
28 Discriminative Common Vectors for Face Recognition
29 Wavelet-based PCA for Human Face Recognition
30 Face Recognition Using Artificial Neural Network Group-Based Adaptive Tolerance (GAT) Trees
31 Face Recognition Using Kernel Direct Discriminant Analysis Algorithms


Well, that’s all.
The next step is to analyze this data and justify my own research project, and also to see which way I’ll go.

Wednesday, August 24, 2005

Project Idea

I just got an idea, and I’m recording it on my blog just to let everyone know this is my idea; also, if you want to work on it, it’s OK with me…

Fuzzy-Principal Component Analysis (FPCA): as we know, a normal PCA gives us a vector that characterizes a matrix (in my case, an image), but that vector holds specific values rounded by our computations, and in that calculation we can lose valuable information that could be useful for computing the Euclidean distance between n-dimensional objects. So, to avoid losing information, we could fuzzify the output vectors, i.e., round them in a fuzzy manner.
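
Just to sketch what I mean (a hypothetical sketch of my own, not a worked-out method): each PCA coefficient could be widened into an interval that represents the rounding/numerical uncertainty, and the Euclidean distance between two faces then becomes interval-valued instead of a single crisp number. The spread parameter eps and the interval representation below are illustrative assumptions.

import numpy as np

def fuzzify(coeffs, eps=0.5):
    """Turn crisp PCA coefficients into (lower, upper) interval bounds."""
    return coeffs - eps, coeffs + eps

def fuzzy_euclidean(a_lo, a_hi, b_lo, b_hi):
    """Interval-valued Euclidean distance: returns (min_dist, max_dist)."""
    # Smallest per-dimension gap between the intervals (0 where they overlap).
    gap = np.maximum(0.0, np.maximum(a_lo - b_hi, b_lo - a_hi))
    # Largest possible per-dimension gap between any two points of the intervals.
    span = np.maximum(np.abs(a_hi - b_lo), np.abs(b_hi - a_lo))
    return np.sqrt(np.sum(gap ** 2)), np.sqrt(np.sum(span ** 2))

With eps = 0 this reduces to the ordinary Euclidean distance, so a classifier could fall back to the crisp behavior whenever the intervals don't change the ranking.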

Well, that’s all. I think this could improve the efficiency of a classifier.

Any comments?

Reviewing eigenfaces

Ok, last night I was trying to implement the eigenface algorithm [1] in MATLAB, using the AT&T database [2], but I had some issues with it...
Let me explain what I did:

For a given face like this one:

This face can be thought of as a vector $\Gamma_{i}$, so the mean face $\Psi$ should be the sum of the $n$ sample faces divided by $n$, i.e., $\Psi = \frac{1}{n}\sum_{i=1}^{n}\Gamma_{i}$. The mean face just defined looks as follows:

Now, given the mean face, we can get the difference between the mean face and the original face, in terms of the spatial domain and their intensity values, and it looks like this:

The eigenvalues of the covariance matrix are defined by $C\,u_{k} = \lambda_{k} u_{k}$, where $u_{k}$ are the corresponding eigenvectors.

So then, the eigenvalues just obtained were multiplied by the image obtained by subtracting the mean face from the original face, giving something like this:

This is called an eigenface; however, it doesn't look like the eigenfaces shown in the journal articles about eigenfaces, so I have to ask myself some questions to fully understand the main idea behind them:

  • "The eigenvector of the covariance matrix..." <--about which covariance matrix are they talking about? the original face covariance matrix? or the differenced face covariance matrix?
  • The eigenface is the product between the eigenvalues and the differenced face?
  • What the Karhunen-Loeve Transform does exactly?
  • Is the eigenface part of the Karhunen-Loeve Transform?
  • Where does the Principal Component Analysis (PCA) ends? Doues it ends getting the Karhunen-Loeve Transform; or, ends obtaining the eigenface?
  • Do I deserve to eat today?

Well, I hope to get answers to some of my questions; if someone wants to comment on anything, I'll appreciate it.
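
Meanwhile, here is a minimal NumPy sketch of the procedure as I currently understand it from [1] (a sketch, not a verified implementation): the eigenfaces come from the eigenvectors of the covariance matrix of the mean-subtracted faces, computed through the small A^T A matrix trick, rather than from the eigenvalues multiplied by the difference image. The array shapes assume flattened AT&T images.

import numpy as np

def compute_eigenfaces(faces, n_components=20):
    """faces: array of shape (n_samples, height * width), one flattened face per row."""
    mean_face = faces.mean(axis=0)                      # the mean face
    A = (faces - mean_face).T                           # columns are difference faces Phi_i
    # Instead of the huge covariance C = A A^T, use the small matrix A^T A:
    # if v is an eigenvector of A^T A, then A v is an eigenvector of C ([1]).
    small = A.T @ A
    eigvals, eigvecs = np.linalg.eigh(small)            # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_components]    # keep the largest ones
    eigenfaces = A @ eigvecs[:, order]                  # back-project to image space
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)    # normalize each eigenface
    return mean_face, eigenfaces

def project(face, mean_face, eigenfaces):
    """Weights of a face in eigenface space, used for Euclidean-distance matching."""
    return eigenfaces.T @ (face - mean_face)

If this is right, displaying the columns of eigenfaces reshaped back to image size should give the ghostly faces shown in the papers.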

[1] M. Turk, A. Pentland, "Eigenfaces for Recognition", Journal of Cognitive Neuroscience, Massachusetts Institute of Technology, Vol. 3, No. 1, 1991.
[2] Face database proprietary of AT&T Research Laboratories.

Tuesday, August 23, 2005

Research Update

These past few days I’ve been writing down a table with some interesting information about the current state of the art. This information includes: name of the publication (transaction papers, journal papers, etc.), PIE robustness, recognition rate (%), type of classifier, facial features extracted, year, database tested, and application. The papers covered so far are the following:

Deformation Analysis for 3D Face Matching
Discriminative Common Vectors for Face Recognition
Face Recognition Using Laplacianfaces
High-Speed Face Recognition Based on Discrete Cosine Transform and RBF Neural Networks
Locally Linear Discriminant Analysis for Multimodally Distributed Classes for Face Recognition with a Single Model Image
Wavelet-based PCA for Human Face Recognition
Real-time Embedded Face Recognition for Smart Home
Acquiring Linear Subspaces for Face Recognition under Variable Lighting
Appearance-Based Face Recognition and Light-Fields
Bayesian Shape Localization for Face Recognition Using Global and Local Textures
A Unified Framework for Subspace Face Recognition
Probabilistic Matching for Face Recognition
Face Recognition Based on Fitting a 3D Morphable Model
Appearance-Based Face Recognition and Light-Fields
Face Recognition Using Artificial Neural Network Group-Based Adaptive Tolerance (GAT) Trees
Face Recognition by Applying Wavelet Subband Representation and Kernel Associative Memory
Face Recognition Using Kernel Direct Discriminant Analysis Algorithms
Face Recognition Using Fuzzy Integral and Wavelet Decomposition Method
Face Recognition Using Line Edge Map
Face Recognition Using the Discrete Cosine Transform
Face Recognition System Using Local Autocorrelations and Multiscale Integration
Face Recognition Using the Weighted Fractal Neighbor Distance
Gabor-Based Kernel PCA with Fractional Power Polynomial Models for Face Recognition
Gabor Wavelet Associative Memory for Face Recognition
N-feature neural network human face recognition
GA-Fisher: A New LDA-Based Face Recognition Algorithm With Selection of Principal Components
Kernel Machine-Based One-Parameter Regularized Fisher Discriminant Method for Face Recognition


Some authors try to confuse the reader about the results, trying to make their own methodology look like the one with the best recognition rate. This is where I lose most of my time: figuring out the real recognition rate.

Tuesday, August 16, 2005

The first blog

The very first blog...

Well, well, well, this thing looks nice, I'll try to put some stuff as soon as possible.
Maybe I should call it "Pablog" <--- it's funny, isn't it?

Ok, c-you.