How we built a modern, state-of-the-art OCR pipeline — PreciousDory

I am delighted to finally write this blog after a long wait. As the title suggests, PreciousDory is a modern optical character recognition (OCR) engine that performs better than the engines from tech giants like Google, Microsoft, and ABBYY in KYC use cases. We feel it is now time to tell the world how we built this strong OCR pipeline over the last couple of years.

We at Signzy are building a global digital trust system, and we solve various fascinating problems in AI and computer vision. Among them, text extraction from document images was one of the critical problems we had to solve. In the initial phase of our journey we used a traditional rule-based OCR pipeline to extract text from document images, but those OCR engines were not efficient enough to compete with global alternatives. So, in an effort to stay competitive in the global market, we took the ambitious decision to build an in-house modern OCR pipeline. We wanted to build an OCR engine that would surpass the global leaders in this segment.

 

The herculean challenge was out, and our AI team accepted it gladly. We knew that building a production-ready OCR engine and achieving best-in-class results would not be easy, but ours is a gallant bunch. When we started researching the problem we found very few resources to help us out, and we also stumbled upon a fitting meme.

 

If You Can’t Measure It, You Can’t Improve It

Our team's first task was to create a test dataset representing all the real-world scenarios we could encounter. The scenarios include varying viewpoints, illumination, deformation, occlusion, background clutter, and more. Below are some samples from our test dataset.

Sample test data

When you have a big problem to solve, break it down into smaller ones

We spent quite a lot of time in literature study trying to break the problem into sub-problems so that individual team members could start working on them. We ended up with the macro-level architecture below.

Macro level architecture

After coming up with the basic architecture, our team started exploring the individual components. Our core OCR engine comprises four key components.

  1. CropNET
  2. RotationNET
  3. Text localizer
  4. Word classifier

CropNET

This is the first step in the OCR pipeline. The input documents for our engine carry a lot of background noise, so we needed an algorithm to crop out exactly the region of interest, making the job easier in the subsequent steps. In the initial phase we tried out a lot of traditional image processing techniques like edge detection, color matching, and Hough lines, but none of them held up against our test data. We then took the deep learning approach. The idea was to build a regression model that predicts the four corners of the document to be processed. The training data for this model was the ground truth containing the four corner coordinates of each document. We implemented a custom shallow architecture for predicting the outputs and achieved good performance from the model.
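The post does not include CropNET's actual code. As a minimal sketch of the training setup it describes, assuming the model regresses the four corner coordinates normalized by image size and is trained with a mean-squared-error objective (the helper names are ours, not CropNET's):

```python
import numpy as np

def normalize_corners(corners, width, height):
    """Scale the 4 (x, y) document corners into [0, 1] so the
    regression target is independent of image resolution."""
    pts = np.asarray(corners, dtype=np.float64)
    return pts / np.array([width, height])

def corner_mse(pred, target):
    """Mean squared error over the 8 regressed values
    (4 corners x 2 coordinates) used to train a corner-regression head."""
    pred = np.asarray(pred, dtype=np.float64)
    target = np.asarray(target, dtype=np.float64)
    return float(np.mean((pred - target) ** 2))

# Ground truth: a document occupying the central region of a 640x480 photo.
gt = normalize_corners([(64, 48), (576, 48), (576, 432), (64, 432)], 640, 480)
# A hypothetical model prediction that is slightly off on every coordinate.
pred = gt + 0.01
print(round(corner_mse(pred, gt), 6))  # 0.0001
```

Normalizing the targets keeps the loss scale consistent across input resolutions, which matters when production documents arrive at arbitrary sizes.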

RotationNET

This is the second stage in the pipeline. After cropping, the next problem to solve is rotation. We estimated that 5% of the production documents would be rotated at arbitrary angles, but for the OCR pipeline to work properly the document must be at zero degrees. To tackle the problem we built a classification model that predicts the angle of the document, with 360 classes corresponding to each degree of rotation. The challenge was in creating the training data: as we had only a few real-world samples per class, we had to build a custom, exhaustive pipeline for preparing synthetic training data that closely matches real-world data. Upon training, we achieved impressive results from the model.
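The label handling for such a 360-class model can be sketched as follows (the helper names are illustrative, not RotationNET's code). The wrap-around error metric matters when evaluating: a prediction of 359° for a 0° document is only 1° off, not 359°:

```python
import random

NUM_CLASSES = 360  # one class per degree of rotation

def angle_to_class(angle_deg):
    """Map an arbitrary rotation angle (possibly negative) to one of
    the 360 degree classes."""
    return int(round(angle_deg)) % NUM_CLASSES

def angular_error(pred_class, true_class):
    """Smallest rotation between two classes, accounting for wrap-around:
    359 vs 1 is 2 degrees apart, not 358."""
    diff = abs(pred_class - true_class) % 360
    return min(diff, 360 - diff)

def synthetic_labels(n, seed=0):
    """Sketch of synthetic-data labelling: draw random angles to rotate
    clean documents by, and record each one's class label."""
    rng = random.Random(seed)
    angles = [rng.uniform(0, 360) for _ in range(n)]
    return [(a, angle_to_class(a)) for a in angles]

print(angle_to_class(-5))     # 355
print(angular_error(359, 1))  # 2
```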

Text localizer

The third stage is localizing the text areas. This is the most challenging problem to solve: given a document, the algorithm must localize the text regions for further processing. We knew building this algorithm from scratch would be a mammoth task, so we benchmarked various open-source text detection models on our test datasets.

Text localization — Benchmark

After rigorous testing we decided to go with CTPN. The Connectionist Text Proposal Network (CTPN) accurately localizes text lines in natural images. It detects a text line as a sequence of fine-scale text proposals directly in convolutional feature maps. It was developed with a vertical anchor mechanism that jointly predicts the location and text/non-text score of each fixed-width proposal, considerably improving localization accuracy. The sequential proposals are naturally connected by a recurrent neural network, which is seamlessly incorporated into the convolutional network, resulting in an end-to-end trainable model. This allows the CTPN to exploit the rich context information of an image, making it powerful enough to detect extremely ambiguous text.
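CTPN's fine-scale, fixed-width proposals must be chained together into full text lines. A simplified, illustrative version of that grouping step (greedy left-to-right chaining; the thresholds here are made up for the example, not CTPN's actual values):

```python
def vertical_overlap(a, b):
    """Fraction of vertical overlap between two boxes given as (x, y, w, h)."""
    top = max(a[1], b[1])
    bottom = min(a[1] + a[3], b[1] + b[3])
    inter = max(0, bottom - top)
    return inter / min(a[3], b[3])

def connect_proposals(proposals, max_gap=20, min_v_overlap=0.7):
    """Greedy sketch of text-line building: sort fixed-width proposals
    left to right, then chain each one onto a line whose last proposal
    is horizontally close and vertically aligned with it."""
    props = sorted(proposals, key=lambda p: p[0])
    lines = []
    for p in props:
        for line in lines:
            last = line[-1]
            gap = p[0] - (last[0] + last[2])
            if 0 <= gap <= max_gap and vertical_overlap(p, last) >= min_v_overlap:
                line.append(p)
                break
        else:
            lines.append([p])  # start a new text line
    return lines

# Three proposals on one line and one proposal on another.
boxes = [(0, 0, 16, 20), (18, 0, 16, 20), (40, 1, 16, 20), (0, 50, 16, 20)]
print(len(connect_proposals(boxes)))  # 2
```

The real CTPN learns this connectivity with a recurrent layer rather than hand-tuned thresholds, which is what makes it robust on cluttered documents.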

 

Word classifier

This is the final stage and the most critical step in the OCR engine, and the one where most of our effort and time went. After localizing the text regions in the document, each region of interest is cropped out. The final challenge is to predict the text from these crops. After a rigorous literature study we arrived at two approaches for solving this problem.

  1. Character level classification
  2. Word level classification

Character level

This is the traditional approach. The bounding boxes of individual characters are estimated, and the characters are cropped out and presented for classification. What we then have in hand is an MNIST-like dataset, and building a classifier for such a task is a tried-and-tested method. The real challenge in this approach was building the character-level bounding box predictor. Normal segmentation methods failed on our test dataset. We considered developing a Faster R-CNN-like object detection pipeline for localizing the individual characters, but creating the training data for this method was tedious and involved a lot of manual work, so we ended up dropping it.

Word level classifier

This method is based on deep learning. We pass the full text-localized region into an end-to-end pipeline and directly get the predicted text. The cropped text region is fed into a CNN for spatial feature extraction and then into an RNN for extracting sequential features. We use CTC (Connectionist Temporal Classification) loss to train the architecture. CTC solves two problems: first, you can train the network from (image, text) pairs without having to specify the position at which each character occurs; second, you don't have to post-process the output, as a CTC decoder transforms the network output into the final text.
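The "CTC decoder transforms the network output into the final text" step can be illustrated with best-path (greedy) decoding, the simplest CTC decoder: take the most likely label at each time step, collapse repeats, then drop blanks. This is a generic sketch, not our production decoder:

```python
import numpy as np

BLANK = 0  # CTC reserves one label for "no character"

def ctc_greedy_decode(logits, alphabet):
    """Best-path CTC decoding over a (T, num_labels) score matrix:
    argmax per time step, collapse repeated labels, skip blanks."""
    best = np.argmax(logits, axis=1)          # one label per time step
    chars, prev = [], BLANK
    for label in best:
        if label != prev and label != BLANK:
            chars.append(alphabet[label - 1])  # labels 1..N index the alphabet
        prev = label
    return "".join(chars)

alphabet = "ab"
# Scores over (blank, 'a', 'b'); best path is: a, a, blank, b, b -> "ab"
logits = np.array([
    [0.1, 0.8, 0.1],
    [0.1, 0.8, 0.1],
    [0.9, 0.05, 0.05],
    [0.1, 0.1, 0.8],
    [0.1, 0.1, 0.8],
])
print(ctc_greedy_decode(logits, alphabet))  # ab
```

The blank label and the collapse rule are what let the network emit the same character over several consecutive frames without duplicating it in the output.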

The training data for this pipeline consists of cropped word-image regions and their corresponding ground-truth text. Since a large amount of training data was required for the model to converge, we built a separate data creation pipeline: we first extract the cropped word regions from a document, then feed them into a third-party OCR engine to get the corresponding text. We benchmarked this data against manually labelled data, which was itself verified by a two-stage human process to make sure the labels were right.

We achieved impressive results with the model. A sample output is shown below.

 

Time for results

At last we combined all four key components into a single end-to-end pipeline. The algorithm now takes an input image of a document and gives the corresponding OCR text as output. Below is a sample input and output for a document.

 

Now the engine was ready to face our quality analysis team for validation. They benchmarked the pipeline against popular global third party OCR engines on our custom validation set. Below are the test results for certain important documents we were handling.

 

We tested our OCR engine against other top engines in different scenarios, including cases with no background, different backgrounds, high brightness, and low brightness. The results show that we perform better than the popular OCR engines in most scenarios.

Productionization

The pipeline was now built and tested, but it was still not ready to face the real world. Some of the challenges in productionizing the system are listed below.

  1. Our OCR engine used a GPU for inference. Since we wanted clients to use the solution without any change to their infrastructure, we removed all GPU dependencies and rewrote the code to run on CPU.
  2. To serve a large number of requests more efficiently, we built a queueing mechanism.
  3. For easier integration with existing client infrastructure, we provided the solution as a REST API.
  4. Finally, the whole pipeline was containerized to ease deployment at enterprises.
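The queueing idea can be sketched with Python's standard library (a toy illustration, not our production code, which sits behind the REST API): CPU-bound OCR jobs go onto a queue and a fixed worker pool drains it, so bursts of requests don't spawn unbounded inference processes.

```python
import queue
import threading

def start_workers(handler, num_workers=2):
    """Start a fixed pool of worker threads that drain a job queue,
    storing each job's result under its job id."""
    jobs = queue.Queue()
    results = {}

    def worker():
        while True:
            job_id, payload = jobs.get()
            if job_id is None:      # poison pill shuts a worker down
                jobs.task_done()
                break
            results[job_id] = handler(payload)
            jobs.task_done()

    threads = [threading.Thread(target=worker, daemon=True) for _ in range(num_workers)]
    for t in threads:
        t.start()
    return jobs, results, threads

# Stand-in for the OCR pipeline: "recognize" by upper-casing the payload.
jobs, results, threads = start_workers(lambda doc: doc.upper())
for i, doc in enumerate(["passport", "licence"]):
    jobs.put((i, doc))
jobs.join()                         # block until both jobs are processed
print(results[0], results[1])       # PASSPORT LICENCE
```

Bounding the worker count is the key design choice: with CPU-only inference, more concurrent jobs than cores just adds contention, while a queue keeps latency predictable under load.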

Summary

Thus the mammoth task of building a modern OCR pipeline was accomplished. A special thanks to my team members Nishant and Harshit for making this project successful. One of the key takeaways from the project is that if you have an exciting problem and a passionate team in hand, you can make the impossible possible. I could not explain many steps in detail, since I had to keep the blog short. Do write to me if you have any queries.

About Signzy

Signzy is a market-leading platform redefining the speed, accuracy, and experience of how financial institutions are onboarding customers and businesses – using the digital medium. The company’s award-winning no-code GO platform delivers seamless, end-to-end, and multi-channel onboarding journeys while offering customizable workflows. In addition, it gives these players access to an aggregated marketplace of 240+ bespoke APIs that can be easily added to any workflow with simple widgets.

Signzy is enabling ten million+ end customer and business onboarding every month at a success rate of 99% while reducing the speed to market from 6 months to 3-4 weeks. It works with over 240+ FIs globally, including the 4 largest banks in India, a Top 3 acquiring Bank in the US, and has a robust global partnership with Mastercard and Microsoft. The company’s product team is based out of Bengaluru and has a strong presence in Mumbai, New York, and Dubai.

Visit www.signzy.com for more information about us.

You can reach out to our team at reachout@signzy.com

Written By:

Signzy

Written by an insightful Signzian intent on learning and sharing knowledge.

 

Democratizing AI using Live Face Detection


Since the dawn of AI, facial recognition systems have been evolving rapidly to exceed our expectations at every turn. In a few years, you'll be able to go through the airport using basically just your face. If you have bags to drop off, you'll use the self-service system and simply have your face captured and matched. You'll then go to security, where the same thing happens: you just use your biometrics. The big tech giants have proved this can be done on a massive scale. The world now needs higher adoption through the democratization of this technology, so that even small organizations can use this advanced technology with a plug-and-play solution.

The answer to this is Deep Auth, Signzy’s in-house facial recognition system. This allows large-scale face authentication in real-time, using your everyday mobile device cameras in the real world.


Deep Auth, Facial Recognition System from Signzy

While a one-to-one face match is now very popular (thanks to the latest Apple iPhone X), it's still not easy to authenticate people against larger datasets that must identify you among thousands of other images. What is even more challenging is doing this in real time. And to add a bit of realism, sending images and videos over mobile internet slows this down even further.

This system can detect and recognize faces in real time at any event, organization, or office space without any special device. This makes Deep Auth an ideal candidate for real-world scenarios where it might not be possible to deploy a large human workforce or spend millions of dollars to monitor people and events. Workplaces, educational institutes, bank branches, and even large residential buildings are all valid areas of use.

Digital journeys can benefit from face-based authentication, eliminating the friction of usernames and passwords while adding the security of biometrics. There can also be hundreds of other use cases, which hopefully our customers will come up with, helping us improve our tech.


 

Deep Auth doing door access authorization.

Deep Auth is robust to appearance variations like sporting a beard or wearing eyeglasses. This is made possible by ensuring that Deep Auth learns facial features dynamically (online training).


 

Deep Auth working across different timelines

Technology

The technology behind face recognition is powered by a series of Convolutional Neural Networks (CNNs). Let's divide the tech into two parts:

  • Face Detection
  • Face Recognition

Face Detection:

This part involves a three-stage cascaded CNN, which ensures the face is robustly detected. In the first stage, we propose regions (via an objectness score) and their regression boxes. In the second stage, we take these proposed regression boxes as input and refine them to reduce the number of false positives. Non-maximum suppression is applied after each stage to further reduce the number of false positives.
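Non-maximum suppression is standard enough to sketch. This is the textbook greedy version (not necessarily our exact implementation): keep the highest-scoring box, drop every remaining box that overlaps it too much, and repeat.

```python
import numpy as np

def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression: visit boxes in descending score
    order and keep a box only if it does not overlap an already-kept
    box above the threshold. Returns indices of the kept boxes."""
    order = np.argsort(scores)[::-1]
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
            keep.append(int(i))
    return keep

# Two near-duplicate detections of one face plus a separate face.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]
```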


3 stage cascaded CNN for face detection.

In the final stage, we compute the facial landmarks with five-point localization: both eyes, the nose, and the corners of the mouth. This stage is essential to ensure that the face is aligned before we pass it to the face recognizer. The loss function is an ensemble of the center loss and an IoU (Intersection over Union) loss. We trained the network for 150k iterations on the WIDER FACE dataset.
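The post does not spell out the alignment math, but a common way to use the two eye landmarks (an assumption on our part, not a description of Deep Auth's internals) is to compute the in-plane roll angle and rotate the crop so the eyes are level:

```python
import math

def roll_angle(left_eye, right_eye):
    """In-plane roll angle in degrees implied by the two eye landmarks;
    rotating the face crop by -angle levels the eyes before recognition."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

print(round(roll_angle((30, 40), (70, 40))))  # 0   (already level)
print(round(roll_angle((30, 40), (70, 80))))  # 45  (tilted face)
```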

Face Recognition:

The extracted faces are then passed to a siamese network, where we use a contrastive loss to converge the network. The siamese network is a 152-layer ResNet whose output is a 512-D vector depicting the encoding of the given face.
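The contrastive loss can be sketched directly for a single pair of embeddings (the margin value here is illustrative): matching faces are pulled together via squared distance, while non-matching faces are pushed at least a margin apart via a hinge term.

```python
import numpy as np

def contrastive_loss(e1, e2, same, margin=1.0):
    """Contrastive loss for one embedding pair: squared distance for a
    matching pair, squared hinge on (margin - distance) otherwise."""
    d = np.linalg.norm(np.asarray(e1, float) - np.asarray(e2, float))
    if same:
        return d ** 2
    return max(0.0, margin - d) ** 2

a = np.zeros(512)            # toy 512-D face encodings
b = np.zeros(512); b[0] = 0.4
print(round(contrastive_loss(a, b, same=True), 2))   # 0.16
print(round(contrastive_loss(a, b, same=False), 2))  # 0.36
```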

 


Resnet acts as the backbone for the siamese network.

We then use k-Nearest Neighbours (KNN) to classify each encoding against the nearest face encodings injected into the KNN index during the training phase. The 512-D vectorization used here, compared to the 128-D vectorization used in other face recognition systems, helps distinguish fine details across faces. This keeps the system accurate even with a large number of non-discriminative faces. We are also working on extending the siamese network to extract 1024-D face encodings.

Benchmarks

Deep Auth posts impressive metrics on the FDDB database. We use 2 images to train each of 1,678 distinct faces and then evaluate against the rest of the test images. Precision and recall come out at 99.5629 and 91.2835 respectively, with an F1 score of 95.2436.
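The reported F1 is simply the harmonic mean of the stated precision and recall, which we can verify:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reproduces the F1 reported above from the stated precision and recall.
print(round(f1_score(99.5629, 91.2835), 4))  # 95.2436
```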


 

Deep Auth’s Impressive scores!

We also showcase Deep Auth working in real time, by matching faces in a video.

Deep Auth in Action!

We tried something a little more cheeky and got our hands on a picture of our twin co-founders posing together, a rare sight indeed, and checked how good Deep Auth really was. Was it able to distinguish between identical twins?


 

And Voila! It worked

Deep Auth is accessed through a REST API interface, making it suitable for online training and real-time recognition. Because it is robust to aging and appearance changes, Deep Auth needs little manual upkeep, which makes it an ideal solution to deploy in remote areas.

Conclusion

Hopefully, this blog was able to explain more about Deep Auth and the technology behind it. Ever since UIDAI made face recognition mandatory for Aadhaar authentication, face recognition will start to prevail in every nook and corner of the nation for biometric authentication. Thus democratization of face authentication allows even small companies to access this technology within their organizations. Hopefully, this should allow more fair play and give everyone a chance to use advanced technology to improve their lives and businesses.

In the next blog, we will explain how we have paired face recognition with spoof detection to make Deep Auth robust to spoof attacks. Please keep reading more on our AI section to understand how this is done.


Data privacy for Banks & Financial Institutions

About 85 countries in the world have data privacy policies in place. Sadly, India isn't one of them. While the Information Technology Act, 2000 does touch upon privacy, it's hardly sufficient. The countries that have data privacy regimes are also evolving their models to suit the big data wave. For example, the US, where user data privacy is protected under a patchwork of legislation like the Children's Online Privacy Protection Act, the Gramm-Leach-Bliley Act for financial information, and the California Online Privacy Protection Act, is still looking for a better way to regulate.

Comparing the US framework with the EU's, Michelle De Mooy, the director for privacy and data at the Center for Democracy & Technology, explains that Europe has a “people-first mentality” that's “more than we do here in our capitalist society, where innovation is sort of equated with letting businesses do whatever they need to grow. That has translated into pretty weak data protection.”

The EU is tightening its laws further with the upcoming GDPR, which has already got companies hustling to make their privacy policies compliant with the new rules. As the world gears up for a more stringent GDPR, let's look at how Indian banks and financial institutions can approach data privacy despite the lack of regulations.

Failing on the data privacy score

Most banks and financial companies are committed to maintaining their data integrity and protecting it against breaches. However, the same isn't true when it comes to ensuring both security and privacy; you could say there's some degree of laxity. Blame it on the “largely self-regulated” privacy guidelines or the “depends-on-the-context” grounds, but banks and financial institutions offering both data security and privacy are few.

In a global survey of more than 180 senior data privacy and security professionals, Capgemini found that fewer than 29% of them “offered both strong data privacy practices and a sound security strategy.”

 

What makes the situation more serious is that today’s banks use a giant tech ecosystem with partners sharing data to build better digital experiences for the end users. As data exchanges hands and lives in multiple places, the risk of data privacy breaches increases. This calls for an even more robust and thorough data privacy regime applying to the entire banking and fintech ecosystem.

But without much legal guidance on approaching data privacy, banks and financial institutions too are forced to take the self-regulation route just like the cryptocurrency businesses. Here’s how banks can handle data privacy until the regime gets regulated.

Self-regulation

While data privacy laws are ever-evolving, some best-practice measures can prepare banks and financial institutions for the time when the laws and policies are actually formulated. PwC offers six excellent action points for financial institutions handling data privacy:

  • Define privacy as primarily a legal and compliance regulatory matter.
  • Create a privacy office that develops privacy guidelines and interfaces with other stakeholders. If the financial institution does not currently have a separate privacy office, we recommend for the institution to hold an internal “privacy summit” that convenes key stakeholders from the lines of business, technology, compliance, and legal.
  • Identify and understand what the data is, where it resides, how it is classified, and how it flows through various systems. For example, financial, medical, and PII are subject to different restrictions in different jurisdictions.
  • Develop appropriate global data-transfer agreements for PII and other data that falls under privacy requirements.
  • Recognize and adhere to requirements when developing core business processes and cross-border data flows.
  • Preserve customer trust as the primary goal.

McKinsey & Company recommends another great tactic that companies can adopt to become data stewards: creating a “golden record” of every personal-data processing activity in a company to ensure compliance and traceability. This goes “beyond documenting the system inventory and involves maintaining a full record of where all personal data comes from, what is done with them, what the lawful grounds for processing are, and whom the data are shared with.”

This tactic applies seamlessly to banks and financial institutions. They can start by building records of what data they collect from their users and how it is shared with their tech partners, all while ensuring users' consent for every operation that uses the data.

In fact, in addition to self-regulating the data collection, usage, and sharing regime, banks must also build a data privacy taskforce that’s committed to ensuring compliance with the internal data privacy framework.

With the right records and resources, banks and financial institutions must also see how they can build data privacy into their services and offerings, by design and by default.

At Signzy, we don’t just view user data privacy proactiveness as a risk management strategy, but we see it as a core building block of a digital trust system. It’s a competitive advantage. We believe that data privacy inspires trust. And when we build digital solutions to tackle challenging legacy financial processes, we make sure that our solutions are structured in a way that user data privacy isn’t compromised while balancing both user expectations and regulatory compliance.

Wrapping it up

Although privacy is largely law-regulated — and we currently lack the laws — it's still not optional. And it goes way beyond just seeking users' consent for collecting and storing information. While banks and financial institutions probably can't go so far as to give their users the “right to erasure” or the “right to be forgotten,” they can surely embrace data privacy as the norm. With stringent self-regulation measures, Indian banks and financial companies can contribute to building trust and transparency in Indian digital banking until the laws get formulated.


 


Smart Contracts — An Indian Perspective

Smart Contracts, within the burgeoning realm of blockchain technology, are beginning to gain traction in India’s technological and legal landscapes. These self-executing contracts, with terms of agreement directly written into code, promise a future of transparent, tamper-proof, and efficient transactions. While the global community has been quick to adopt and integrate these into various sectors, India stands at a pivotal juncture, balancing its rich legal traditions with the innovations of the digital age. As the nation grapples with the challenges and opportunities presented by smart contracts, it’s essential to understand their implications, regulatory frameworks, and potential transformative power in reshaping the Indian contractual ecosystem.

Demonetization in India has placed blockchain-based smart contracts in a visible space. Blockchain technology has enabled a smooth transition from traditional to smart contracts by making them simpler and less expensive. Smart contracts are a vital step forward in automating the terms of an agreement between two parties.

For smart contracts to completely penetrate the Indian business circuit, the following aspects need to be focused upon:

  • The myth that smart contracts are not analogous to traditional contracts needs to be addressed.
  • Legal clarification on the status of digital currency is vital. Adequate regulation of digital currency and smart contracts will help integrate digital contracts into present industrial standards. But this transition needs the regulatory and logistical help of the RBI and government structures.

What are Smart Contracts?

Smart contracts are computer protocols that embed the terms and conditions of a contract. The human-readable terms of a contract are encoded as executable computer code that can run on a network. Many contractual clauses are thereby made partially or fully self-executing, self-enforcing, or both.

Understanding Smart Contracts and Blockchain Technology

  • Smart contracts are self-performing and operate in combination with blockchain. This enables them to move information of value on the blockchain between parties.
  • Blockchain forms the backbone of digital contracts and of currencies like Bitcoin. It creates a transaction database shared by all nodes participating in a system based on the Bitcoin protocol.

Smart Contracts vs. Traditional Contracts

Contracts can be understood as agreements which are legally enforceable. The rights and obligations created by this agreement are recognized by law.

The idea of smart contracts is compatible with our understanding of traditional contract principles. Since smart contracts also have legal backing, they fulfil the requirements of traditional contract law.

An important distinction between traditional and smart contracts is the medium on which the contract is formed. Commerce depends on individuals being able to form stable, predictable agreements with one another. Communication and physical ratification are the primary ways of creating a legal relationship, and they infuse the parties with confidence in enforceability. That legal legitimacy and confidence of enforceability make traditional contracts the preferred way of forming contractual relations.

In smart contracts, the terms and conditions of the contractual agreement are entered into software code. But this does not take away from the original character of the agreement: as long as the agreement creates a set of rights and duties or obligations, it is a valid contract.

Smart contracts comprise a new set of tools for articulating terms. The process of forming and articulating a contract is now embedded in a self-enforcing automated contract. Hence blockchain-based smart contracts are a way to complement, or replace, existing legal contracts.

For a wide range of potential applications, blockchain-based-smart contracts offer many benefits:

  • Speed — Smart contracts use software code to automate tasks that are typically accomplished manually, and so can increase the speed of a wide variety of business processes.
  • Accuracy — The probability of manual error is reduced due to automated transactions.
  • Lower cost — Smart Contracts need less human intervention, fewer intermediaries and thus reduce costs.
  • Auto-enforcement — Smart contracts are unique in their enforceability since these clauses are embedded in the applicable software itself.

Despite these benefits, there is hesitancy to participate in transactions involving smart contracts, because the status of digital currency is still ambiguous in India. Unlike with traditional contracts, the legal position on enforcement, jurisdiction, etc. is unsettled.

Yet smart-contract-based transactions are much more popular in international parlance, and recognition of such transactions in major international commercial law statutes has a profound impact.

Opponents of smart contracts in India argue that cryptocurrencies do not have legal status as currency in India. Hence, there is ambiguity about whether they constitute ‘valid consideration’ as per traditional contractual principles.

  • Cryptocurrency is undefined under FEMA, the RBI Act, and the Coinage Act.
  • It is uncertain as to how Cryptocurrencies will be taxed and whether such tax will be a central or state subject.
  • Recently, a multi-stakeholder panel comprising members from the RBI and the IDRBT looked into the implications of blockchain technology.[1]
  • Since all transactions take place over the internet, the dispute resolution or clause reposing jurisdiction to courts or excluding jurisdiction of courts needs to be clearly spelt out. “Smart contract itself should envisage a dispute resolution mechanism involving external arbitrators and/or courts, where the contract is frozen pending proceedings, and the award of the court is incorporated into the terms of the smart contract. With regards to evidence, a dual-integration mechanism comprising hybrid ‘code + paper’ contracts can be presented in court.”[2]

Commercial agreements comprise clauses that protect parties from various liabilities, and these are not always suitable for representation and execution through code. Hence smart legal contracts will need a blend of code and natural language.

Smart contracts in the commercial realm are at a nascent stage. Hence, regulation in this regard will render adequate clarity to the functioning of smart contracts. This would ensure a smooth transition from traditional contracts to smart contracts in the near future.


 
