List of Figures

  1. Recognition process presented in [19]
  2. The image normalization in [19] was done using this geometry (the image is from our database)
  3. Feature Extraction in [19]
  4. Proposed Classification Process
  5. The number-plate is detected first. Then the surrounding rectangle is cut from the image
  6. Car geometry is measured in the space of the number-plate
  7. ROC curve for the evaluated descriptors in [14], here for a rotated image
  8. Feature representation used for learning
  9. The obtained orientation is taken modulo $\pi$ so that (a) and (b) yield the same result
  10. Feature Extraction
  11. Possible distinction between random texture and circle-like shapes
  12. Overlapping Tiles
  13. These two distributions are very close to each other, but their Euclidean distance is quite large.
  14. How mass units are transported from distribution $f_2(x)$ to $f_1(x)$.
  15. The relation of the Transportation Problem to EMD. The solid line shows the minimum-work flow of 2 mass units
  16. Classification using k-NN, $k=3$. The test sample T is classified as $\times$ because the hypersphere surrounding T contains 2 elements from $\times$ and only one from $\circ$.
  17. Projection from 2D to 1D -- we should be able to find some threshold for classification
  18. Projection from 2D to 1D -- not a very good projection for classification purposes
  19. The best is to separate the classes -- the projected distance between $m_1$ and $m_2$ should be maximized and the projected scatters $s_1$ and $s_2$ minimized.
  20. The LDA transformation tries to maximize $d_1^2 + \dots + d_c^2$ in the space $\Re^{c-1}$
  21. Sample car images in our database
  22. Sample truck images in our database
  23. Optimal Number of Bins
  24. SIFT topology and its influence on the classification rate
  25. SIFT type and its influence on the classification rate
  26. EMD vs. Euclidean Distance Function
  27. The best $k$ for the k-NN algorithm
  28. The explanation of figure 4.7
  29. We can see that SIFT with topology 15x9x25 performed best
  30. Rotation and stability: (a) sample rotated $\approx 1^{\circ}$ to the left with 9 bins, (b) sample rotated $\approx 2^{\circ}$ to the right with 9 bins, (c) sample rotated $\approx 1^{\circ}$ to the left with 10 bins, (d) sample rotated $\approx 2^{\circ}$ to the right with 10 bins
  31. SIFTs with a higher number of tiles have better classification power, but from a certain point the classification rate stagnates
  32. Here we can see that overlapping SIFTs performed better except in the simplest case
  33. We can see that the EMD and Euclidean distance measures performed almost identically
  34. The influence of $k$ on the classification rate
  35. The FLD transformation performed on features extracted from car images
  36. The FLD transformation performed on features extracted from car images
  37. The optimal feature space dimension selection for car images
  38. The optimal feature space dimension selection for car images
  39. The training set was reduced to 80% and 60% of its original size.
  40. k-NN classifier sensitivity to image blur (truck images)
  41. SIFT is almost invariant to blur: (a) original image, (b) Gaussian filter with a 6x6 region size applied to the image
  42. Images after noise addition. (a) $v=0.02$, (b) $v=0.3$
  43. k-NN classifier sensitivity to image added noise
  44. Here we can see that SIFT's discrimination power decreases with an increasing amount of noise; for $v=0.3$ the distribution is almost random (d). (a) original image, (b) added noise, $v=0.04$, (c) added noise, $v=0.1$, (d) added noise, $v=0.3$
  45. The training set was reduced to 80% and 60% of its original size.
  46. k-NN classifier sensitivity to image blur
  47. k-NN classifier sensitivity to image added noise
  48. The Architecture Overview
  49. Program Flow
  50. Feature Extraction Module
  51. Error Module
  52. Classification Module

