Video KYC — The Banking future is here!

At a macro level, India seems to be going through an “identity crisis”. Not in terms of whether she is a potential superpower or a struggling economy, but rather in terms of which papers and bills identify her constituents as Indian citizens.

Zooming in to the fintech ecosystem of the country, constantly identifying individuals through Know Your Customer (KYC) processes is imperative, but the latest developments in the sector are far from bleak. The past few years have seen rapid developments in ideas and technologies, with the regulatory space dishing out amendments to keep up.

With concepts like Artificial Intelligence (AI), face-matching, and Computer Vision now a practical reality instead of something fresh out of a sci-fi movie, the processes of authenticating customers have taken a step away from the physically daunting and expensive task of onboarding. Along the same lines, the regulatory body RBI is also tasked with updating its KYC compliance norms. The fintech space is fast changing, and sometimes companies developing futuristic tech have solutions relegated to waiting in the wings until official norms give them the green light. This may require sitting back with a tub of popcorn for a few years.

The build-up here is to introduce an erstwhile non-compliant, yet simple, secure, and scalable method to establish the identity of an individual: Video KYC (V-CIP).

Reaching Compliance: The past

  • In an earlier phase of “identity crisis”, the question was whether the unique identification card “Aadhaar” had constitutional validity itself. On 26 September 2018, the Supreme Court affirmed its constitutional validity but scrapped Section 57 of the Aadhaar Act that allowed private companies to use Aadhaar authentication and eKYC.

With the 1,448-page judgment up for interpretation, a cloud of ambiguity loomed over India’s booming fintech industry: when was Aadhaar authentication to be stopped, and would the private sector have to give up the paperless, cashless, and presence-less verification method it had adopted? Potential customers found themselves on the wrong side of the regulatory door as the industry struggled to onboard new customers after the judgment.

  • About nine months later, on June 26, 2019, an expert committee on Micro, Small and Medium Enterprises (MSMEs), headed by UK Sinha, former chairman of the Securities and Exchange Board of India (SEBI), proposed the need for online video KYC. The panel recognized the drawbacks of physical presence and the sheer data handling required even for eKYC. Video KYC was seen as a simple, seamless process conducted over a video chat in which the customer can display documents. At the time, the committee suggested it could be done through apps like Google Duo or Apple FaceTime.

Experts pointed out that since these applications were of foreign origin, the RBI was unlikely to allow them. With the Data Protection Bill pending and the debate around data localization ongoing, the central bank was unwilling to let companies store customer data in foreign locations.

  • In the latest installment of updates, the RBI approved Aadhaar-based video authentication as an alternative to eKYC on January 9, 2020. The amendment to the KYC norms allows banks and other lending institutions regulated by the RBI to adopt a Video-based Customer Identification Process (V-CIP) as a consent-based alternative method of identity verification for customer onboarding.

Explaining Compliance: The present

Making sense of the latest amendments to regulations is not easy. We at Signzy have distilled them into a 20-point cheat sheet to make sure it is. The changes introduced by V-CIP are:

  1. Informed consent to be obtained from individual customer before the live V-CIP process
  2. RE (Regulated Entity) official to record a video of the customer present for identification
  3. RE official is to capture a photograph of the customer during the session
  4. RE official to obtain identification information. This can be done through two methods depending on the entity type:
    Banks: OTP based Aadhaar eKYC authentication
    Non-bank RE: only Offline Verification of Aadhaar
  5. RE official to capture a clear image of PAN card which is to be displayed during the process
  6. Live location is to be recorded during the session
  7. RE official to ensure customer’s photograph matches them
  8. RE official to ensure provided identification details match the details on the Aadhaar/PAN
  9. Randomization of questions to ensure there is no pre-recording. This means that the sequence and/or type of questions during video interactions should be varied in order to establish that the interactions are in real-time
  10. The Aadhaar XML or Secure QR provided for offline verification should not be more than 3 days old
  11. Accounts opened through the V-CIP process will only be operational after a concurrent audit
  12. RE official to carry out a liveness check
  13. The audiovisual interaction should be triggered from the domain of the RE itself
  14. An activity log along with the credentials of the official carrying out the process should be preserved
  15. Video to have a timestamp and be safely stored
  16. The amendment encourages the use of AI and face-matching technology
  17. RE official to redact/blackout Aadhaar number as per standard guidelines
  18. The interaction is to be necessarily done by a bank official and not an agent
  19. The process is to be operated only by specifically trained officials
  20. RE to ensure security, robustness and end to end encryption of the V-CIP application

This is a monumental step towards digitizing the authentication process for banks, lending startups and non-banking financial institutions.

Signzy: The future

Signzy’s video technology came into existence before the license to use it did. In 2016, bankers told us our tech was too futuristic and not practical, but now the future is here! True to its promise of delivering future-ready digital onboarding solutions, Signzy is ready with a plug-and-play, end-to-end digital Video KYC solution with V-CIP features.

Our systems are built to banking-grade standards, which means they meet the strictest infosec regulations and data-security requirements. Signzy’s video KYC is already being used by SEBI-regulated institutions to onboard thousands of customers every month. The solution has matured across dialects, browsers, and low-bandwidth scenarios, and is backed by one of the best facial recognition technologies (you can read more here). With the RBI’s progressive move to bring in Video KYC (Video Customer Identification Process) in 2020, we look forward to onboarding RBI-regulated institutions onto our battle-tested solution!

If you would like to know more, take a look at the Video KYC section on our website:

www.signzy.com

About Signzy

Signzy is a market-leading platform redefining the speed, accuracy, and experience of how financial institutions are onboarding customers and businesses – using the digital medium. The company’s award-winning no-code GO platform delivers seamless, end-to-end, and multi-channel onboarding journeys while offering customizable workflows. In addition, it gives these players access to an aggregated marketplace of 240+ bespoke APIs that can be easily added to any workflow with simple widgets.

Signzy is enabling ten million+ end customer and business onboarding every month at a success rate of 99% while reducing the speed to market from 6 months to 3-4 weeks. It works with over 240+ FIs globally, including the 4 largest banks in India, a Top 3 acquiring Bank in the US, and has a robust global partnership with Mastercard and Microsoft. The company’s product team is based out of Bengaluru and has a strong presence in Mumbai, New York, and Dubai.

Visit www.signzy.com for more information about us.

You can reach out to our team at reachout@signzy.com

Written By:

Ankit Ratan, CEO-Signzy

 

Removing blur from images

Everyone misses a perfect shot once in a while. Yeah, that’s a real shame (we all do it all the time!).

There are special moments we want to capture and make memorable for a lifetime, but a shaky camera or a noisy sensor can ruin them, resulting in blurred images (maybe your subject is on the move; the culprit is not always a bad camera but bad timing as well!).

So, if you are one of us who has missed out on a special moment, this post is just for you. In this post, you will learn how you can restore blurred images. All the thanks and applause go to neural networks.

What are you going to learn?

From this blog post, you will learn how to deblur images using Scale-Recurrent Networks. For more information on the technique, you can access this link. The network takes a sequence of blurry images at different scales as input and produces a set of sharp images; the final output image is at full resolution.

Figure 1: SRN architecture from the original paper

The method uses an end-to-end trainable, multi-scale convolutional neural network, in line with the state-of-the-art approach.

These methods start from a coarse estimate computed on a downscaled blurry image and gradually recover the sharp image at higher resolutions.

This Scale-Recurrent Network (SRN) reuses a single recurrent network across scales for multi-scale deblurring. In a well-established multi-scale method, the solver and its parameters are the same at every scale, a natural choice since each scale is solving the very same problem. Varying the parameters across scales can cause instability and an unnecessarily unconstrained solution space. Another concern is that input images may have different motion scales and resolutions.

If you allow too much parameter tweaking at each scale, the result might be a solution overfitted to a specific motion scale. Some believe this scheme also applies to CNN-based methods, yet some recent cascaded networks still prefer independent parameters for every single scale. The SRN authors make a plausible counter-argument: sharing network weights across scales significantly reduces training difficulty and introduces a stability benefit.

Their experiments show that, with the recurrent structure and the advantages above combined, the end-to-end deep image deblurring framework greatly improves training efficiency, using less than a third of the trainable parameters with faster testing time. Their method also produces high-quality results, both qualitatively and quantitatively. Let’s not dive deeper into the research paper for now; below is a minimal sketch of the coarse-to-fine idea, followed by our use case for this deblurring technology.
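To make the coarse-to-fine loop concrete, here is a minimal PyTorch sketch. It assumes a hypothetical deblurring module `model` that takes the blurry input concatenated with the previous estimate; the actual SRN also passes a recurrent hidden state between scales, which is omitted here for brevity.

```python
import torch
import torch.nn.functional as F

def deblur_multiscale(model, blurry, num_scales=3, ratio=0.5):
    """Run one shared network coarse-to-fine, feeding each prediction upward.

    `model` is a placeholder network; the same weights are reused at every
    scale, which is the core of the scale-recurrent design.
    """
    h, w = blurry.shape[-2:]
    pred = None
    for i in reversed(range(num_scales)):              # start at the coarsest scale
        size = (int(h * ratio ** i), int(w * ratio ** i))
        blurry_s = F.interpolate(blurry, size=size, mode="bilinear", align_corners=False)
        if pred is None:
            pred = blurry_s                             # initialize with the blurry input
        else:
            pred = F.interpolate(pred, size=size, mode="bilinear", align_corners=False)
        pred = model(torch.cat([blurry_s, pred], dim=1))
    return pred                                         # full-resolution sharp estimate
```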

We are a well-established global digital trust company operating primarily in the domain of verification. For this verification process, our customers click photos of their documents and submit them. These photographs may be blurred due to camera shake or motion, which makes the document text hard to read.

To solve the blurred-image problem, we fed these images into the aforementioned deblurring model. The results were exhilarating. Below are some of the samples.

Concluding Remarks

What did you learn from this blog? How to use a scale-recurrent network to deblur images. With this technology, you can extract data from blurred identity-card images without having to poke your customers again and again to re-submit documents due to bad quality or blur. Thanks for the read, and do leave a comment to let me know what you think about this technology. Adios for now, fellas!

About Signzy

Signzy is a market-leading platform redefining the speed, accuracy, and experience of how financial institutions are onboarding customers and businesses – using the digital medium. The company’s award-winning no-code GO platform delivers seamless, end-to-end, and multi-channel onboarding journeys while offering customizable workflows. In addition, it gives these players access to an aggregated marketplace of 240+ bespoke APIs that can be easily added to any workflow with simple widgets.

Signzy is enabling ten million+ end customer and business onboarding every month at a success rate of 99% while reducing the speed to market from 6 months to 3-4 weeks. It works with over 240+ FIs globally, including the 4 largest banks in India, a Top 3 acquiring Bank in the US, and has a robust global partnership with Mastercard and Microsoft. The company’s product team is based out of Bengaluru and has a strong presence in Mumbai, New York, and Dubai.

Visit www.signzy.com for more information about us.

You can reach out to our team at reachout@signzy.com

Written By:

Signzy

Written by an insightful Signzian intent on learning and sharing knowledge.

 

How we built a modern, state of the art OCR pipeline — PreciousDory

I am finally very happy to be writing this blog after a long wait. As the title suggests, PreciousDory is a modern optical character recognition (OCR) engine that performs better than engines from tech giants like Google, Microsoft, and ABBYY in KYC use cases. We feel it is now time to tell the world how we built this strong OCR pipeline over the last couple of years.

We at Signzy are trying to build a global digital trust system, and we solve various fascinating problems related to AI and computer vision. Of these, text extraction from document images was one of the critical problems we had to solve. In the initial phase of our journey, we used a traditional rule-based OCR pipeline to extract text from document images. Those OCR engines were not efficient enough to compete with global competitors, so in an urge to stay competitive with the global market, we took an ambitious decision to build a modern, in-house OCR pipeline. We wanted to build an OCR engine that would surpass the global leaders in this segment.

 

The herculean challenge was out, and our AI team accepted it gladly. We knew that building a production-ready OCR engine and achieving best-in-class results would not be easy, but we are a bunch of gallant people. When we started researching the problem, we found very few resources to help us out, and we also stumbled upon the meme below.

 

If You Can’t Measure It, You Can’t Improve It

The first task our team did was to create a test dataset that would represent all the real-world scenarios we could encounter. The scenarios include varying viewpoints, illumination, deformation, occlusion, background clutter, etc. Below are some samples from our test dataset.

Sample test data

When you have a big problem to solve, break it down into smaller ones

We spent quite a lot of time on a literature study, trying to break the problem into sub-problems so that individual team members could start working on them. We ended up with the macro-level architecture below.

Macro level architecture

After coming up with the basic architecture, our team started exploring the individual components. Our core OCR engine comprises four key components.

  1. CropNET
  2. RotationNET
  3. Text localizer
  4. Word classifier

CropNET

This is the first step in the OCR pipeline. The input documents for our engine can have a lot of background noise, so we needed an algorithm to crop out exactly the region of interest to make the job easier in the subsequent steps. In the initial phase we tried out a lot of traditional image-processing techniques like edge detection, color matching, and Hough lines, but none of them could withstand our test data. We then took the deep-learning approach: the idea was to build a regression model to predict the four corners of the document to be processed. The training data for this model was the ground truth containing the four corner coordinates of the document. We implemented a custom shallow architecture for predicting the outputs and achieved good performance from the model.
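A minimal sketch of that corner-regression idea is shown below; the layer sizes, loss, and optimizer are illustrative assumptions rather than our production architecture, and the corners are assumed to be normalized to [0, 1].

```python
import torch
import torch.nn as nn

class CropNet(nn.Module):
    """Shallow CNN that regresses the 4 document corners as 8 normalized values."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 8)            # (x1, y1, ..., x4, y4)

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

model = CropNet()
criterion = nn.SmoothL1Loss()                   # robust regression against ground-truth corners
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```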

RotationNET

This is the second stage in the pipeline. After cropping, the next problem to solve is rotation. It was estimated that 5% of production documents would be rotated at arbitrary angles, but for the OCR pipeline to work properly the document should be at zero degrees. To tackle the problem we built a classification model that predicts the angle of the document, with 360 classes corresponding to each degree of rotation. The challenge was in creating the training data: as we had only a few real-world samples for each class, we had to build a custom, exhaustive pipeline for preparing synthetic training data that closely matches real-world data. Upon training, we achieved impressive results from the model.
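The synthetic-data idea can be sketched roughly as follows; the resizing and file handling here are illustrative assumptions, since a real pipeline would also add noise, backgrounds, and other real-world effects.

```python
import random
import torch
from PIL import Image
from torchvision import transforms

to_tensor = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

def make_rotation_sample(path):
    """Create one synthetic (image, angle-class) pair from a zero-degree document scan."""
    angle = random.randint(0, 359)                      # the class label is simply the angle
    img = Image.open(path).convert("RGB")
    img = img.rotate(angle, expand=True, fillcolor=(255, 255, 255))
    return to_tensor(img), torch.tensor(angle)
```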

Text localizer

The third stage is localizing the text areas. This is the most challenging problem to solve: given a document, the algorithm must be able to localize the text regions for further processing. We knew building this algorithm from scratch would be a mammoth task, so we benchmarked various open-source text detection models on our test datasets.

Text localization — Benchmark

After rigorous testing we decided to go with CTPN. The Connectionist Text Proposal Network (CTPN) accurately localizes text lines in natural images. It detects a text line as a sequence of fine-scale text proposals directly in convolutional feature maps. It was developed with a vertical anchor mechanism that jointly predicts the location and text/non-text score of each fixed-width proposal, considerably improving localization accuracy. The sequential proposals are naturally connected by a recurrent neural network, which is seamlessly incorporated into the convolutional network, resulting in an end-to-end trainable model. This allows CTPN to explore rich context information in the image, making it powerful enough to detect extremely ambiguous text.

 

Word classifier

This is the final stage and the most critical step in the OCR engine; it is where most of our effort and time went. After localizing the text regions in the document, the regions of interest are cropped out. The final challenge is to predict the text from these crops. After a rigorous literature study, we arrived at two approaches for solving this problem.

  1. Character level classification
  2. Word level classification

Character level

This is one of the traditional approaches. In this method, the bounding boxes of individual characters are estimated, and from them the characters are cropped out and presented for classification. What we then have in hand is an MNIST-like dataset, and building a classifier for this type of task is a tried and tested method. The real challenge in this approach was building the character-level bounding-box predictor. Normal segmentation methods failed to perform on our test dataset. We thought of developing an FRCNN-like object-detection pipeline for localizing the individual characters, but creating the training data for this method was tedious and involved a lot of manual work, so we ended up dropping it.

Word level classifier

This method is based on deep learning. We pass the full localized text region into an end-to-end pipeline and directly get the predicted text. The cropped text region is passed into a CNN for spatial feature extraction and then on to an RNN for extracting temporal (sequence) features, and we use CTC loss to train the architecture. CTC solves two problems: 1. you can train the network from (image, text) pairs without having to specify at which position a character occurs, and 2. you don’t have to post-process the output, as a CTC decoder transforms the network output into the final text.
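A minimal sketch of that CNN + RNN + CTC idea, assuming 32-pixel-high grayscale word crops; the exact layer sizes and hyperparameters here are placeholders rather than our production model.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Tiny CNN + BiLSTM word reader trained with CTC loss (illustrative only)."""
    def __init__(self, num_classes):                  # num_classes includes the CTC blank
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.rnn = nn.LSTM(128 * 8, 256, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(512, num_classes)

    def forward(self, x):                              # x: (B, 1, 32, W) grayscale crops
        f = self.cnn(x)                                # (B, 128, 8, W/4)
        f = f.permute(0, 3, 1, 2).flatten(2)           # one time step per image column
        out, _ = self.rnn(f)
        return self.fc(out).log_softmax(2)             # (B, T, C)

# nn.CTCLoss expects log-probs shaped (T, B, C), so permute before computing the loss
ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)
```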

The training data for this pipeline is cropped word-image regions and their corresponding ground-truth text. Since a large amount of training data was required to make the model converge, we built a separate data-creation pipeline: we first get the cropped word regions from the document, then feed them into a third-party OCR engine to get the corresponding text. We benchmarked this data against manually created labels, which were themselves verified by a two-stage human process to make sure they were right.

We achieved impressive results with the model. A sample output from the model.

 

Time for results

At last we combined all four key components into a single end-to-end pipeline. The algorithm now takes an input image of a document and gives the corresponding OCR text as output. Below is a sample input and output for a document.

 

Now the engine was ready to face our quality analysis team for validation. They benchmarked the pipeline against popular global third party OCR engines on our custom validation set. Below are the test results for certain important documents we were handling.

 

We tested our OCR engine against other top engines in different scenarios, including cases with no background, different backgrounds, high brightness, and low brightness. The results show that we perform better than the popular OCR engines in most scenarios.

Productionization

The pipeline was now built and tested, but it was still not ready to face the real world. Some of the challenges in productionizing the system are listed below.

  1. Our OCR engine was using a GPU for inference, but since we wanted the solution to be usable by our clients without any change to their infrastructure, we removed all GPU dependencies and rewrote the code to run on CPU.
  2. To serve a large number of requests more efficiently, we built a queueing mechanism.
  3. For easier integration with existing client infrastructures, we provided the solution as a REST API.
  4. Finally, the whole pipeline was containerized to ease deployment at enterprises.

Summary

Thus the mammoth task of building a modern OCR pipeline was accomplished. A special thanks to my team members Nishant and Harshit for making this project successful. One of the key takeaways from the project was that if you have an exciting problem and a passionate team, you can make the impossible possible. I could not explain many of the steps in detail, since I had to keep the blog short; do write to me if you have any queries.

About Signzy

Signzy is a market-leading platform redefining the speed, accuracy, and experience of how financial institutions are onboarding customers and businesses – using the digital medium. The company’s award-winning no-code GO platform delivers seamless, end-to-end, and multi-channel onboarding journeys while offering customizable workflows. In addition, it gives these players access to an aggregated marketplace of 240+ bespoke APIs that can be easily added to any workflow with simple widgets.

Signzy is enabling ten million+ end customer and business onboarding every month at a success rate of 99% while reducing the speed to market from 6 months to 3-4 weeks. It works with over 240+ FIs globally, including the 4 largest banks in India, a Top 3 acquiring Bank in the US, and has a robust global partnership with Mastercard and Microsoft. The company’s product team is based out of Bengaluru and has a strong presence in Mumbai, New York, and Dubai.

Visit www.signzy.com for more information about us.

You can reach out to our team at reachout@signzy.com

Written By:

Signzy

Written by an insightful Signzian intent on learning and sharing knowledge.

 

Democratizing AI using Live Face Detection


Since the dawn of AI, facial recognition systems have been evolving rapidly to exceed our expectations at every turn. In a few years, you’ll be able to go through the airport using basically just your face. If you have bags to drop off, you’ll use the self-service system and simply have your face captured and matched. Then you’ll go to security, where the same thing happens: you just use your biometric. The big tech giants have proved this can be done on a massive scale. The world now needs higher adoption through the democratization of this technology, so that even small organizations can use it with a plug-and-play solution.

The answer to this is Deep Auth, Signzy’s in-house facial recognition system. This allows large-scale face authentication in real-time, using your everyday mobile device cameras in the real world.


Deep Auth, Facial Recognition System from Signzy

While a one-to-one face match is now very popular (thanks to the latest Apple iPhone X), it is still not easy to authenticate people against larger datasets and pick you out from thousands of other images. What is even more challenging is doing this in real time. And just to add a bit of realism, sending images and videos over mobile internet slows this down even further.

This system can detect and recognize faces in real time at any event, organization, or office space without any special device. This makes Deep Auth an ideal candidate for real-world scenarios where it might not be possible to deploy a large human workforce or spend millions of dollars to monitor people and events. Workplaces, educational institutes, bank branches, and even large residential buildings are all valid areas of use.

Digital journeys can benefit from face-based authentication, eliminating the friction of usernames and passwords while adding the security of biometrics. There can also be hundreds of other use cases which, hopefully, our customers will come up with and which will help us improve our tech.


 

Deep Auth doing door access authorization.

Deep Auth is robust to appearance variations like sporting a beard or wearing eyeglasses. This is made possible by ensuring that Deep Auth learns facial features dynamically (online training).


 

Deep Auth working across different timelines

Technology

The technology behind face recognition is powered by a series of Convolution Neural Networks(CNN). Let’s divide the tech into two parts :

  • Face Detection
  • Face Recognition

Face Detection:

This part involves a three-stage cascaded CNN to ensure the face is robustly detected. In the first stage, we propose regions (with an objectness score) and their regression boxes. In the second stage, we take these proposed regression boxes as input and re-propose them to reduce the number of false positives. Non-maximal suppression is applied after each stage to further reduce the number of false positives.


3 stage cascaded CNN for face detection.

In the final stage, we compute the facial landmarks with 5-point localization: both eyes, the nose, and the two corners of the mouth. This stage is essential to ensure the face is aligned before we pass it to the face recognizer. The loss function is an ensemble of the center loss and the IoU (Intersection over Union) loss. We trained the network for 150k iterations on the WIDER Face dataset.
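For reference, the non-maximal suppression applied after each stage can be written in a few lines of NumPy; this is the generic algorithm, not our exact thresholds.

```python
import numpy as np

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring boxes and drop heavily overlapping ones.

    `boxes` is an (N, 4) array in (x1, y1, x2, y2) format; returns kept indices.
    """
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the top-scoring box with the remaining candidates
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_threshold]
    return keep
```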

Face Recognition:

The extracted faces are then passed to a siamese network, where we use a contrastive loss to converge the network. The siamese network is a 152-layer ResNet whose output is a 512-D vector representing the encoding of the given face.
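The contrastive loss mentioned here has a standard form; the sketch below is the generic formulation on a pair of embeddings, with the margin as an assumed hyperparameter rather than our tuned value.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(emb1, emb2, same, margin=1.0):
    """Generic contrastive loss on a pair of embedding batches.

    `same` is a float tensor of 1s (same person) and 0s (different people).
    Matching pairs are pulled together; non-matching pairs are pushed apart
    until they are at least `margin` apart.
    """
    d = F.pairwise_distance(emb1, emb2)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()
```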

 

Democratizing AI using Live Face Detection

Resnet acts as the backbone for the siamese network.

We then use K-Nearest Neighbours (KNN) to classify each encoding against the nearest face encodings injected into the KNN during the training phase. The 512-D vectorization used here, compared to the 128-D vectorization used in other face recognition systems, helps distinguish fine details across faces. This gives the system high accuracy even with a large number of non-discriminative faces. We are also working on extending the siamese network to extract 1024-D face encodings.
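A rough sketch of that enrollment-and-lookup step using scikit-learn’s KNN; the file names, distance threshold, and neighbor count here are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical enrollment data: (N, 512) embeddings from the siamese network
# and their person labels (the .npy file names are placeholders).
enrolled_encodings = np.load("enrolled_encodings.npy")
person_ids = np.load("person_ids.npy")

knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean")
knn.fit(enrolled_encodings, person_ids)

def identify(query_encoding, threshold=0.6):
    """Return the closest enrolled identity, or None if no neighbor is close enough."""
    dist, _ = knn.kneighbors(query_encoding.reshape(1, -1), n_neighbors=1)
    if dist[0, 0] > threshold:
        return None                     # treat as an unknown face
    return knn.predict(query_encoding.reshape(1, -1))[0]
```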

Benchmarks

Deep Auth posts impressive metrics on the FDDB database. We use two images to train each of 1,678 distinct faces and then evaluate against the remaining test images. The precision and recall come out to 99.5629 and 91.2835 respectively, with an F1 score of 95.2436.


 

Deep Auth’s Impressive scores!

We also showcase Deep Auth working in real time by matching faces in a video.

Deep Auth in Action!

We tried something a little cheekier and got our hands on a picture of our twin co-founders posing together, a rare sight indeed! We then checked how good Deep Auth really was: could it distinguish between identical twins?


 

And Voila! It worked

Deep Auth is accessed through a REST API interface, making it suitable for online training and real-time recognition. Because it is robust to ageing and changes in appearance, it is largely self-servicing, which makes it an ideal solution to deploy in remote areas.

Conclusion

Hopefully, this blog was able to explain more about Deep Auth and the technology behind it. Ever since UIDAI made face recognition mandatory for Aadhaar authentication, face recognition has been set to prevail in every nook and corner of the nation for biometric authentication. The democratization of face authentication thus allows even small companies to access this technology within their organizations. Hopefully, this will allow more fair play and give everyone a chance to use advanced technology to improve their lives and businesses.

In the next blog, we will explain how we have paired face recognition with spoof detection to make Deep Auth robust to spoof attacks. Please keep reading more on our AI section to understand how this is done.

About Signzy

Signzy is a market-leading platform redefining the speed, accuracy, and experience of how financial institutions are onboarding customers and businesses – using the digital medium. The company’s award-winning no-code GO platform delivers seamless, end-to-end, and multi-channel onboarding journeys while offering customizable workflows. In addition, it gives these players access to an aggregated marketplace of 240+ bespoke APIs that can be easily added to any workflow with simple widgets.

Signzy is enabling ten million+ end customer and business onboarding every month at a success rate of 99% while reducing the speed to market from 6 months to 3-4 weeks. It works with over 240+ FIs globally, including the 4 largest banks in India, a Top 3 acquiring Bank in the US, and has a robust global partnership with Mastercard and Microsoft. The company’s product team is based out of Bengaluru and has a strong presence in Mumbai, New York, and Dubai.

Visit www.signzy.com for more information about us.

You can reach out to our team at reachout@signzy.com

Written By:

Signzy

Written by an insightful Signzian intent on learning and sharing knowledge.

Survey of facial feature descriptors

Face recognition technology has always been a concept that lived in fictional worlds, whether it was a tool to solve a crime or open doors. Today, our technology in this field has developed significantly as we are seeing it become more common in our everyday lives. In the mission of building a truly digital trust system, we at Signzy use Facial recognition technology to identify and authenticate individuals. The technology is able to perform this task in three steps: detecting the face, extracting features from the target, and finally matching and verifying. As a visual search engine tool, this technology is able to identify key factors within the given image of the face.

To pioneer our facial recognition technology, we wanted an edge over current deep-learning-based facial recognition models. Our idea was to embed human-crafted knowledge into state-of-the-art CNN architectures to improve their accuracy. For that, we needed to do an extensive survey of the best facial feature descriptors. In this blog, we share a part of our research describing some of these features.

Local binary patterns

LBP looks at points surrounding a central point and tests whether the surrounding points are greater than or less than the central point (i.e., gives a binary result). This is one of the basic and simple feature descriptors.
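As a quick illustration, scikit-image ships an LBP implementation; the image path below is a placeholder input.

```python
import numpy as np
from skimage import io, color
from skimage.feature import local_binary_pattern

face = color.rgb2gray(io.imread("face.jpg"))          # placeholder input image
lbp = local_binary_pattern(face, P=8, R=1, method="uniform")
# A normalized histogram of the LBP codes is the actual feature vector
hist, _ = np.histogram(lbp, bins=np.arange(0, 11), density=True)
```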

Gabor wavelets

They are linear filters used for texture analysis: a Gabor filter analyses whether there is any specific frequency content in the image, in specific directions, in a localized region around the point or region of analysis.
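scikit-image also provides Gabor filtering; a small sketch with an assumed frequency and orientation (a real filter bank would sweep both):

```python
from skimage import io, color
from skimage.filters import gabor

face = color.rgb2gray(io.imread("face.jpg"))          # placeholder input image
# Real and imaginary responses of one Gabor filter; vary `frequency` and
# `theta` to build a bank covering several scales and orientations.
real, imag = gabor(face, frequency=0.3, theta=0.0)
```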

 

 

Gabor jet similarities

A Gabor jet is the collection of the (complex-valued) responses of all Gabor wavelets of a family at a certain point in the image. It is a local texture descriptor that can be used for various applications. One of these applications is to locate a texture in a given image; e.g., one might locate the position of an eye by scanning over the whole image. At each position, the similarity between the reference Gabor jet and the Gabor jet at that location is computed, for example using bob.ip.gabor.Similarity.

Local phase quantisation

The local phase quantization (LPQ) method is based on the blur invariance property of the Fourier phase spectrum. It uses the local phase information extracted using the 2-D DFT or, more precisely, a short-term Fourier transform (STFT) computed over a rectangular M-by-M neighborhood at each pixel position x of the image f(x) defined by:

where Wu is the basis vector of the 2-D Discrete Fourier Transforms (DFT) at frequency u, and fx is another vector containing all M2 image samples from Nx.

Difference of Gaussians

It is a feature enhancement algorithm that involves subtracting one blurred version of an original image from another, less blurred version of the original. In the simple case of grayscale images, the blurred images are obtained by convolving the original with Gaussian kernels of differing standard deviations. Blurring an image with a Gaussian kernel suppresses only high-frequency spatial information, so subtracting one blurred image from the other preserves spatial information that lies between the ranges of frequencies preserved in the two. The difference of Gaussians is thus a band-pass filter that discards all but a handful of the spatial frequencies present in the original grayscale image. Below are a few examples with varying sigma (standard deviation) of the Gaussian kernel, with detected blobs.
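A difference of Gaussians is only a couple of lines with SciPy; the sigmas here are arbitrary example values.

```python
from scipy.ndimage import gaussian_filter
from skimage import io, color

face = color.rgb2gray(io.imread("face.jpg"))          # placeholder input image
# Subtracting two Gaussian blurs acts as a band-pass filter
dog = gaussian_filter(face, sigma=1) - gaussian_filter(face, sigma=3)
```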

 

Histogram of gradients

The technique counts occurrences of gradient orientation in localized portions of an image. The idea behind HOG is that local object appearance and shape within an image can be described by the distribution of intensity gradients or edge directions. The image is divided into small connected regions called cells, and for the pixels within each cell, a histogram of gradient directions is compiled. The descriptor is the concatenation of these histograms.
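With scikit-image the descriptor can be computed directly; the cell and block sizes below are common defaults, used here only as examples.

```python
from skimage import io, color
from skimage.feature import hog

face = color.rgb2gray(io.imread("face.jpg"))          # placeholder input image
features, hog_image = hog(
    face,
    orientations=9,
    pixels_per_cell=(8, 8),
    cells_per_block=(2, 2),
    visualize=True,        # also return a visualization image alongside the descriptor
)
```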

 

FFT

Fourier Transform is used to analyze the frequency characteristics of various filters. For images, the 2-D Discrete Fourier Transform (DFT) is used to find the frequency domain. For a sinusoidal signal x(t) = A sin(2πft), f is the frequency of the signal, and if its frequency domain is taken, we can see a spike at f. If the signal is sampled to form a discrete signal, we get the same frequency domain, but it is periodic in the range [−π, π] or [0, 2π] (or [0, N] for an N-point DFT).

You can consider an image as a signal sampled in two directions, so taking Fourier transforms in both the X and Y directions gives you the frequency representation of the image.
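Computing the 2-D DFT and its magnitude spectrum with NumPy looks like this (the input path is a placeholder):

```python
import numpy as np
from skimage import io, color

face = color.rgb2gray(io.imread("face.jpg"))          # placeholder input image
f = np.fft.fft2(face)                                  # 2-D DFT of the image
fshift = np.fft.fftshift(f)                            # move zero frequency to the center
magnitude_spectrum = 20 * np.log(np.abs(fshift) + 1e-8)
```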

Blob features

These methods are aimed at detecting regions in a digital image that differ in properties, such as brightness or color, compared to surrounding regions. Informally, a blob is a region of an image in which some properties are constant or approximately constant; all the points in a blob can be considered in some sense to be similar to each other.
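One common way to detect such blobs is the Laplacian-of-Gaussian detector in scikit-image; the parameters below are illustrative.

```python
from skimage import io, color
from skimage.feature import blob_log

face = color.rgb2gray(io.imread("face.jpg"))          # placeholder input image
# Laplacian-of-Gaussian blob detection; each returned row is (y, x, sigma)
blobs = blob_log(face, max_sigma=30, num_sigma=10, threshold=0.1)
```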

CenSurE features

This feature detector is a scale-invariant center-surround detector (CenSurE) that claims to outperform other detectors and gives results in real time.

ORB features

This is a very fast binary descriptor based on BRIEF, which is rotation invariant and resistant to noise.
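ORB is available out of the box in OpenCV; a minimal usage sketch with an assumed input image:

```python
import cv2

face = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder input image
orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(face, None)   # 32-byte binary descriptors
```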

Dlib — 68 facial key points

This is one of the most widely used facial feature descriptors. The facial landmark detector included in the dlib library is an implementation of the paper “One Millisecond Face Alignment with an Ensemble of Regression Trees” by Kazemi and Sullivan (2014). The method starts by using:

  1. A training set of labeled facial landmarks on an image. These images are manually labeled, specifying specific (x, y)-coordinates of regions surrounding each facial structure.
  2. Priors, or more specifically, the probability of distance between pairs of input pixels.

Given this training data, an ensemble of regression trees is trained to estimate the facial landmark positions directly from the pixel intensities themselves (i.e., no “feature extraction” is taking place). The end result is a facial landmark detector that can be used to detect facial landmarks in real-time with high-quality predictions.

Code: https://www.pyimagesearch.com/2017/04/17/real-time-facial-landmark-detection-opencv-python-dlib/
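A minimal usage sketch along the lines of that tutorial, assuming the standard pre-trained 68-point model file is available locally:

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
# Pre-trained model file distributed by dlib (path is a placeholder)
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("face.jpg")                   # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
for rect in detector(gray, 1):                 # upsample once while detecting
    shape = predictor(gray, rect)
    points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
```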

Conclusion

Thus, in this blog we have compiled different facial feature descriptors along with code snippets. Different algorithms capture different facial features, and the choice of descriptor that gives high performance truly depends on the dataset at hand: its size, diversity, sparsity, and complexity all play a critical role in the selection of the algorithm. These human-engineered features, when fed into convolutional networks, improve their accuracy.

About Signzy

Signzy is a market-leading platform redefining the speed, accuracy, and experience of how financial institutions are onboarding customers and businesses – using the digital medium. The company’s award-winning no-code GO platform delivers seamless, end-to-end, and multi-channel onboarding journeys while offering customizable workflows. In addition, it gives these players access to an aggregated marketplace of 240+ bespoke APIs that can be easily added to any workflow with simple widgets.

Signzy is enabling ten million+ end customer and business onboarding every month at a success rate of 99% while reducing the speed to market from 6 months to 3-4 weeks. It works with over 240+ FIs globally, including the 4 largest banks in India, a Top 3 acquiring Bank in the US, and has a robust global partnership with Mastercard and Microsoft. The company’s product team is based out of Bengaluru and has a strong presence in Mumbai, New York, and Dubai.

Visit www.signzy.com for more information about us.

You can reach out to our team at reachout@signzy.com

Written By:

Signzy

Written by an insightful Signzian intent on learning and sharing knowledge.

 
