We encountered several roadblocks while implementing our project on Android, almost all of them due to technicalities inherent in mobile development. Progress on the sequential Android code was slow, and we realized we were spending far too much time dealing with Android-specific errors.
After speaking with our mentor, we decided to switch platforms and instead build a C++-based project in which we implement a highly parallel face-detection algorithm using C++/CUDA. We have made some progress implementing the sequential version of the Viola-Jones algorithm. We also read a number of research papers to choose the most suitable face-detection algorithm for our project, to understand the chosen one (Viola-Jones), to understand how eye-state detection works and how we can implement it, to design our machine-learning model for eye-state and smile detection, and finally to make a detailed plan for the parts we will parallelize.
Our project now consists of three parts:
- Developing a highly parallel face-detection utility in C++
- Implementing smile and eye-state detection on detected faces
- Analyzing and returning the best image in the Burst based on eye and smile states
Goals and Deliverables
We think we will be able to produce a working and highly efficient C++-based application that accurately detects the best pictures from a burst. We have read and discussed many low-level technical details, and we will soon start parallelizing our program. From our initial tests, the sequential version appears to be extremely slow, and we expect a parallel implementation to provide large speedups.
A "nice-to-have" would be making our program extremely flexible: rather than tuning our parallel implementation to one particular machine's specifications, it would adapt to and exploit whatever hardware it runs on.
We plan to show a demo along with speedup graphs.
We are concerned about detecting faces that are not looking straight at the camera, i.e., faces at different angles; this can be difficult. We are also not sure how much data we should train our ML model on.
It has been updated on the main page.