http://stackoverflow.com/questions/36908402/dlib-training-shape-predictor-for-194-landmarks-helen-dataset

 

dlib: Training shape_predictor for 194 landmarks (HELEN dataset)

 

I am training dlib's shape_predictor on 194 face landmarks using the HELEN dataset; the resulting model is then used to detect face landmarks through dlib's face_landmark_detection_ex.cpp example.

Training produced an sp.dat binary file of around 45 MB, which is smaller than the file provided for 68 face landmarks (http://sourceforge.net/projects/dclib/files/dlib/v18.10/shape_predictor_68_face_landmarks.dat.bz2). The training reported:

  • Mean training error: 0.0203811
  • Mean testing error: 0.0204511

When I used the trained model to get face landmark positions, I got this result:

[image: landmark result from the 194-point model]

which deviates badly from the result obtained with the 68-landmark model:

68 landmark image:

[image: result from the 68-landmark model]

Why?

-----------------------------------------------------

OK, it looks like you haven't read the comments in the example training code:

shape_predictor_trainer trainer;
// This algorithm has a bunch of parameters you can mess with.  The
// documentation for the shape_predictor_trainer explains all of them.
// You should also read Kazemi's paper which explains all the parameters
// in great detail.  However, here I'm just setting three of them
// differently than their default values.  I'm doing this because we
// have a very small dataset.  In particular, setting the oversampling
// to a high amount (300) effectively boosts the training set size, so
// that helps this example.
trainer.set_oversampling_amount(300);
// I'm also reducing the capacity of the model by explicitly increasing
// the regularization (making nu smaller) and by using trees with
// smaller depths.  
trainer.set_nu(0.05);
trainer.set_tree_depth(2);

Have a look at the Kazemi paper ("One Millisecond Face Alignment with an Ensemble of Regression Trees"), Ctrl-F for 'parameter', and read what it says about tuning these values.

Posted by uniqueone