Face recognition using EmguCV 3.0 and typing pattern recognition
Introduction
An MSc project titled Student Examination System. The objective is to place students under examination conditions but, instead of having an invigilator in an examination centre, the system itself ensures the exam is conducted properly, so it can be used as an online examination system. The system can:
- Recognize the face shape of a particular student
- Detect if there is more than one person in the examination room
- Analyze the typing pattern of a student and detect whether someone other than the registered student is taking part in the exam
- Recognize the student's voice and detect if there is more than one person speaking in the examination room
Setup
Download Emgu CV from http://www.emgu.com/wiki/index.php/Main_Page
Download Haarcascade from https://github.com/opencv/opencv/tree/master/data/haarcascades
Create an account at https://www.keytrac.net/
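Before wiring anything up, it is worth checking that the cascade file and the webcam both load. A minimal sketch, assuming haarcascade_frontalface_default.xml has been copied next to the executable and a webcam is attached (this helper is not part of the original project; it only reuses the Emgu CV types from the snippets below):

// Quick sanity check (assumption: cascade XML sits next to the executable).
private void CheckSetup()
{
    HaarCascade cascade = new HaarCascade("haarcascade_frontalface_default.xml");
    Capture webcam = new Capture();                       // default camera (index 0)
    Image<Bgr, byte> testFrame = webcam.QueryFrame();     // null if no frame could be grabbed
    MessageBox.Show(testFrame != null ? "Webcam and cascade loaded" : "No frame captured");
}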
Face recognition
The snippets below illustrate how Emgu CV is loaded when the application starts and how, when the user clicks Capture, the face pattern is processed and saved in the database.
private void btnCapture_Click(object sender, EventArgs e)
{
    // Make sure the folder used to store captured face images exists
    if (!Directory.Exists(Application.StartupPath + "/Faces"))
    {
        Directory.CreateDirectory(Application.StartupPath + "/Faces");
    }

    count = count + 1;

    // Grab a grayscale frame and run the Haar cascade face detector on it
    grayFace = camera.QueryGrayFrame().Resize(320, 240, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
    MCvAvgComp[][] detectedFaces = grayFace.DetectHaarCascade(faceDetected, 1.2, 10,
        Emgu.CV.CvEnum.HAAR_DETECTION_TYPE.DO_CANNY_PRUNING, new Size(20, 20));

    // Keep only the first detected face
    foreach (MCvAvgComp f in detectedFaces[0])
    {
        trainedFace = frame.Copy(f.rect).Convert<Gray, byte>();
        break;
    }

    // Normalize the face to 100x100, add it to the training set and label it with the student name
    trainedFace = trainedFace.Resize(100, 100, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
    trainingImage.Add(trainedFace);
    labels.Add(student.StudentName);

    // Save the face to disk and record the file name against the student
    biometric.FaceId = string.Format("{0}.bmp", student.StudentName);
    trainingImage.ToArray()[0].Save(Application.StartupPath + "/Faces/" + biometric.FaceId);
    trainingImage.Add(new Image<Gray, byte>(Application.StartupPath + "/Faces/" + biometric.FaceId));
    SaveBiometric(student.StudentId);

    btnCapture.Enabled = false;
}
public partial class Biometric : Form
{
    MCvFont font = new MCvFont(Emgu.CV.CvEnum.FONT.CV_FONT_HERSHEY_TRIPLEX, 0.6d, 0.6d);
    HaarCascade faceDetected;
    Image<Bgr, Byte> frame;
    Capture camera;
    Image<Gray, byte> result;
    Image<Gray, byte> trainedFace = null;
    Image<Gray, byte> grayFace = null;
    List<Image<Gray, byte>> trainingImage = new List<Image<Gray, byte>>();
    List<string> labels = new List<string>();
    List<string> users = new List<string>();
    int count, numLabeles, t;
    string name, names = null;

    public Biometric()
    {
        InitializeComponent();
        student = LoadDetails(Register.StudentId);
        StartWebcam();
    }

    private void StartWebcam()
    {
        try
        {
            // Load the Haar cascade, open the default webcam and process frames
            // whenever the UI thread is idle
            faceDetected = new HaarCascade("haarcascade_frontalface_default.xml");
            camera = new Capture();
            camera.QueryFrame();
            Application.Idle += new EventHandler(FrameProcedure);
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.Message);
        }
    }

    private void FrameProcedure(object sender, EventArgs e)
    {
        users.Add("");
        frame = camera.QueryFrame().Resize(320, 240, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
        grayFace = frame.Convert<Gray, Byte>();

        // Detect every face in the current frame
        MCvAvgComp[][] faceDetectedNow = grayFace.DetectHaarCascade(faceDetected, 1.2, 10,
            Emgu.CV.CvEnum.HAAR_DETECTION_TYPE.DO_CANNY_PRUNING, new Size(20, 20));

        // More than one face means someone else is in the room: abort the registration
        if (faceDetectedNow[0].Count() > 1)
        {
            AbortRegistraction(student.StudentId);
            LogEvent(student.StudentId, "Two persons detected, registration aborted");
            MessageBox.Show("Two persons detected, registration aborted");
            Login login = new Login();
            login.Show();
            this.Close();
        }

        foreach (MCvAvgComp f in faceDetectedNow[0])
        {
            result = frame.Copy(f.rect).Convert<Gray, Byte>().Resize(100, 100, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
            frame.Draw(f.rect, new Bgr(Color.Green), 3);

            // Once at least one face has been captured, try to recognize the current
            // face and draw the matching student name next to the rectangle
            if (trainingImage.ToArray().Length != 0)
            {
                MCvTermCriteria termCriterias = new MCvTermCriteria(count, 0.001);
                EigenObjectRecognizer recognizer = new EigenObjectRecognizer(trainingImage.ToArray(), labels.ToArray(), 1500, ref termCriterias);
                name = recognizer.Recognize(result);
                frame.Draw(name, ref font, new Point(f.rect.X - 2, f.rect.Y - 2), new Bgr(Color.Red));
            }
            users.Add("");
        }

        cameraBox.Image = frame;
        name = "";
        users.Clear();
    }
}
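The post only shows faces being added to trainingImage while the registration form is open. For the recognizer to label a returning student, the saved face images presumably have to be reloaded when the form starts; a minimal sketch under that assumption (this helper is not shown in the post; the file name doubles as the label, matching how btnCapture_Click saves it):

// Assumed helper (not in the original post): reload saved face images so the
// EigenObjectRecognizer can label returning students. Each file was saved as
// "<StudentName>.bmp", so the file name is reused as the label.
private void LoadTrainedFaces()
{
    foreach (string file in Directory.GetFiles(Application.StartupPath + "/Faces", "*.bmp"))
    {
        trainingImage.Add(new Image<Gray, byte>(file));
        labels.Add(Path.GetFileNameWithoutExtension(file));
    }
}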
Typing recognition
- Enrollment
Enrollment is the process where a unique user ID is generated and the student's typing pattern is recorded and associated with that user ID.
private static bool Enroll(string dna, string typingId)
{
    // Build the enrollment request: the typing sample ("dna") is associated with the user ID
    TypingEnrolViewModel request = new TypingEnrolViewModel();
    request.samples = new List<string>();
    request.user_id = typingId;
    request.samples.Add(dna);

    string jsonObject = new JavaScriptSerializer().Serialize(request);

    // Post the sample to the KeyTrac enrollment endpoint
    HttpResponse<String> response = Unirest.post("https://api.keytrac.net/anytext/enrol")
        .header("Authorization", "XXXXX")
        .header("Content-Type", "application/json")
        .body(jsonObject)
        .asJson<String>();

    var results = JsonConvert.DeserializeObject<TypingViewModel>(response.Body);
    return results.OK;
}
namespace StudentExam.Model
{
    // Response returned by the KeyTrac API
    public class TypingViewModel
    {
        public string id { get; set; }
        public bool OK { get; set; }
        public bool authenticated { get; set; }
        public int score { get; set; }
    }

    // Request body sent to the enrol/authenticate endpoints
    public class TypingEnrolViewModel
    {
        public string user_id { get; set; }
        public List<string> samples { get; set; }
    }
}
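A possible call site for Enroll (sketch only: the GUID typing ID and capturedDna, the sample produced by the KeyTrac recorder for the enrollment text, are assumptions and are not shown in the post):

// Hypothetical enrollment call: a fresh GUID becomes the KeyTrac user ID and is
// stored with the student's biometric record if enrollment succeeds.
string typingId = Guid.NewGuid().ToString();
if (Enroll(capturedDna, typingId))          // capturedDna = recorder output for the sample text
{
    biometric.TypingId = typingId;
    SaveBiometric(student.StudentId);
}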
- Pattern authentication
When students are taking part in the exam, their typing pattern is analyzed through the KeyTrac API, which returns a response in the form of a score. If the score is above 50, it indicates that the student who claims to be taking the exam is indeed the one typing.
public static string search(string dna, string id)
{
    BiometricViewModel biometric = new BiometricViewModel();

    if (!string.IsNullOrWhiteSpace(dna) && !string.IsNullOrWhiteSpace(id))
    {
        // Look up the typing ID that was enrolled for this student
        TypingEnrolViewModel request = new TypingEnrolViewModel();
        request.samples = new List<string>();
        int studentId = Convert.ToInt32(id);
        biometric = GetBiometricDetailsByStudentId(studentId);
        request.user_id = biometric.TypingId;
        request.samples.Add(dna);

        string jsonObject = new JavaScriptSerializer().Serialize(request);

        // Ask KeyTrac to score the new sample against the enrolled profile
        HttpResponse<String> response = Unirest.post("https://api.keytrac.net/anytext/authenticate")
            .header("Authorization", "XXXXXXXXXX")
            .header("Content-Type", "application/json")
            .body(jsonObject)
            .asJson<String>();

        var results = JsonConvert.DeserializeObject<TypingViewModel>(response.Body);

        // A score below 50 means the typing pattern does not match the registered student
        if (results.score < 50)
        {
            AbortRegistraction(studentId);
            LogEvent(studentId, "Authentication failed: the registered student's typing pattern was not recognized (score " + results.score + ")");
            return "Authentication failed: the registered student's typing pattern was not recognized (score " + results.score + ")";
        }
        return "worked";
    }
    else
    {
        return "Failed to retrieve the student details and typing pattern";
    }
}
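During the exam the method can be called with the sample recorded for the current answer, for example (sketch; answerDna is assumed to come from the KeyTrac recorder on the exam form):

// Hypothetical check while the exam is running: anything other than "worked"
// means search() has already aborted the exam and logged the reason.
string outcome = search(answerDna, student.StudentId.ToString());
if (outcome != "worked")
{
    MessageBox.Show(outcome);
}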
Testing
- Registration face pattern
The screenshot below demonstrates how the application detects the face shape of an individual.
Once the user clicks the Capture button, the application processes the user's face shape.
Once the user's face shape has been processed and recognized, the user's name appears every time that individual uses the application.
- Enroll typing pattern
The next step of registration is typing recognition. In short, the user has to type the text appearing above the textbox; each keystroke and the overall typing pattern are converted into an alphanumeric string (a rough illustration of the idea follows below).
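KeyTrac's recorder produces the "dna" string itself, so the project never handles raw keystrokes. Purely to illustrate what a typing pattern consists of, here is a sketch of recording key press/release timestamps in a WinForms textbox, from which dwell times (how long a key is held) and flight times (gaps between keys) can be derived; this is not KeyTrac's actual format:

// Illustration only (not KeyTrac's format): log press/release times per key so
// dwell and flight times can be computed later. Stopwatch needs System.Diagnostics.
private readonly List<string> keyEvents = new List<string>();
private readonly Stopwatch typingClock = Stopwatch.StartNew();

private void typingTextBox_KeyDown(object sender, KeyEventArgs e)
{
    keyEvents.Add(string.Format("D,{0},{1}", e.KeyCode, typingClock.ElapsedMilliseconds));
}

private void typingTextBox_KeyUp(object sender, KeyEventArgs e)
{
    keyEvents.Add(string.Format("U,{0},{1}", e.KeyCode, typingClock.ElapsedMilliseconds));
}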
- Two faces appearing
After the registration process is done and the user logs into the system to take part in the exam, the exam will be aborted if two faces are detected by the system, as the examination conditions were not met (a sketch of this check follows below).
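The snippet in the face recognition section only shows this check during registration (in FrameProcedure); presumably the exam form runs the same test on every frame. A sketch under that assumption (AbortExam is a hypothetical counterpart of AbortRegistraction):

// Assumed exam-time check, mirroring the registration logic: more than one face
// in the frame aborts the exam and logs the violation.
MCvAvgComp[][] faces = grayFace.DetectHaarCascade(faceDetected, 1.2, 10,
    Emgu.CV.CvEnum.HAAR_DETECTION_TYPE.DO_CANNY_PRUNING, new Size(20, 20));

if (faces[0].Count() > 1)
{
    AbortExam(student.StudentId);          // hypothetical: ends the exam session
    LogEvent(student.StudentId, "Two faces detected, exam aborted");
    this.Close();
}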