Today, user identification in online and web services typically relies on a previously assigned username and password. This information, with or without the user's consent, can lead to identity theft, since credentials can be shared, stolen, or lost. It is well known that people can communicate and interact through Internet services without having to prove their identity, using pseudonyms, false identities, or impersonating others.
At Smiley Owl Tech S.L (hereinafter Smowltech) we found this problem to be critical in the world of online teaching. In recent years, virtual campuses and e-learning platforms have seen exponential growth in their number of students/users, and one of the biggest problems for the entity offering courses is the uncertainty of not knowing exactly who is taking the course on the other side of the screen, and therefore whether or not to certify the qualification the student will receive.
Smowltech was founded in 2012 to address this gap, marketing a continuous user-authentication service based on a monomodal biometric system using face recognition alone. The system was marketed under the name SMOWL, and today we are a benchmark in the online student authentication market in the world of eLearning, an international market in continuous growth and still largely untapped for this purpose.
However, the current state of the art in facial recognition does not allow reliable recognition when photo capture is transparent to the user (a passive system that does not interrupt the user's tasks within the session) and in uncontrolled environments, where lighting, the user's pose, appearance (beard, haircut, ...), accessories (glasses, hat, ...), partial occlusions, gestures, and expressions all vary. Posing in front of a facial recognizer to unlock a device (smartphone) or enter a physical space (such as a room), where the user faces the camera head-on, without accessories (glasses, cap), always with the same expression, with good lighting and no occlusions, is not the same as maintaining that same face throughout an online session (online teaching) lasting many hours. Moreover, the market demands the ability to capture evidence not only of what happens in front of the device's screen but also of what is happening on it. For this reason, the MULTIBIO system must also include a screenshot-capture component to preserve evidence of what took place during the session.
The main objective of the project is therefore to create the first MULTIBIO commercial prototype: a completely new product/service, independent of SMOWL and more complete, that can be commercialized more competitively and position the company as a leader at the international level.
This new MULTIBIO service will provide automatic, passive (the user does not actively participate, e.g., by posing before the camera), and continuous authentication for online services that require a higher degree of security in identifying their users. Authentication will be performed through automatic, large-scale, and efficient facial and voice identification in uncontrolled environments (pose, lighting, appearance, accessories, partial occlusions, gestures/expressions, ambient noise, mixed voices, ...) on images captured by the webcam and audio captured by the microphone of the device (often low-resolution images and low-quality audio) with which the users to be verified access the Internet.
Once inside a virtual-campus session, the main objectives of MULTIBIO are to:
a) Capture from the user's device: a.1) random images through the webcam, a.2) audio from the microphone, and a.3) screenshots. All of this continuously, for the entire duration of the session, with the user remaining passive (no need to pose or read predefined paragraphs).
b) Automatically verify, through facial and voice biometrics, that someone is in front of the device and that this someone is the person who should be receiving training on that platform.
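The capture-and-verify loop described in objectives a) and b) can be sketched as follows. This is a minimal illustration of the control flow only: the capture functions, the biometric matchers, and all names (`Evidence`, `run_session`, the enrolled-user identifier) are hypothetical stubs, not part of the actual MULTIBIO implementation.

```python
import random
from dataclasses import dataclass


@dataclass
class Evidence:
    """One piece of session evidence: webcam frame, audio clip, screenshot."""
    tick: int
    webcam_image: str
    audio_clip: str
    screenshot: str
    face_match: bool
    voice_match: bool


# --- hypothetical stubs; a real system would use actual device capture
#     and trained facial/voice matchers ---

def capture_webcam(tick):
    return f"frame-{tick}.jpg"     # would grab a (possibly low-resolution) frame

def capture_audio(tick):
    return f"clip-{tick}.wav"      # would record a short microphone clip

def capture_screen(tick):
    return f"screen-{tick}.png"    # would take a screenshot of the session

def verify_face(image, enrolled_id):
    return True                    # stub matcher: always matches in this sketch

def verify_voice(clip, enrolled_id):
    return True                    # stub matcher


def run_session(enrolled_id, ticks, rng):
    """Capture evidence at random moments throughout the session and
    passively verify that the enrolled user is still present."""
    evidence = []
    for tick in range(ticks):
        # Random sampling: the user never knows when a capture happens,
        # so there is nothing to pose for (passive authentication).
        if rng.random() < 0.5:
            continue
        img = capture_webcam(tick)
        clip = capture_audio(tick)
        shot = capture_screen(tick)
        evidence.append(Evidence(
            tick, img, clip, shot,
            face_match=verify_face(img, enrolled_id),
            voice_match=verify_voice(clip, enrolled_id),
        ))
    return evidence


evidence = run_session("student-42", ticks=10, rng=random.Random(0))
all_ok = all(e.face_match and e.voice_match for e in evidence)
```

The key design point the sketch tries to convey is that captures happen at unpredictable times during the whole session (continuous authentication) rather than once at login, and that each capture bundles all three evidence types together.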
Funded by: FEDER / Ministry of Science, Innovation and Universities – State Research Agency / Project (RTC-2016-5711-7).
03/15/2016 – 03/31/2019