The data economy can seem like a Catch-22: it can succumb to corporate surveillance capitalism on the one hand or to an authoritarian digital “welfare” state on the other. The European Union (EU) positions itself as the alternative to both, and its strategy for regulating technology over the next decade is meant to set that precedent. Whether it succeeds remains to be seen. On 19th February 2020, the European Commission (EC, the executive branch of the European Union) published a 26-page whitepaper on Artificial Intelligence (AI). The paper, titled “A European Approach to Excellence and Trust”, states the EC’s intent to both regulate and advance AI.
This blog will explore the reach, requirements, and reservations of the guidelines the whitepaper introduces.
Reach: A Risk Barometer Approach
The whitepaper will have consequences for those using and developing AI, specifically businesses participating in the data economy. It is drafted to regulate AI effectively without being dictatorial, since overly strict measures could create a disproportionate burden for small and medium-sized enterprises (SMEs).
The paper defines AI as
“Systems that display intelligent behavior by analyzing their environment and taking actions — with some degree of autonomy — to achieve specific goals.”
However, the proposed requirements will mainly affect AI deemed “high-risk”. The EC enumerates this as AI:
“…deployed in health care, transport, energy and parts of the public sector, or if it is used in the employment sphere (for recruitment purposes or in situations impacting workers’ rights), or for remote biometric identification and other intrusive surveillance technologies.”
Due to this definition and set scope, the suggestions would not apply to advertising technology or consumer privacy. The assumption here is that risk can be finitely calculated. This leaves many contentious issues, such as data brokers that leverage AI to predict identities and hyper-targeted advertising, outside the purview of the guidelines.
It is anticipated that the new framework will have extraterritorial impact, like the GDPR.
Requirements: The Precursor to Compliance
The AI applications classified as high-risk would be regulated through the following key requirements, which center on safety, security, fairness, and transparency:
- Training data
The paper reiterates that if there is no data, there is no AI. The decisions and performance of an AI system depend on the data sets it has been trained on. To ensure that the services or products the AI system enables are safe, the requirements dictate that it must be trained on a sufficiently broad data set. The training data must also be representative, to avoid inadvertently coded discrimination. The data collected must adhere to privacy and data protection standards, i.e. the GDPR. (Interested in reading more on the data protection regulations in place in the EU and India? Take a look at our article comparing the GDPR and PDP Bill.)
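To make the representativeness requirement concrete, here is a minimal sketch, not from the whitepaper, of how a team might flag under-represented groups in a training set. The column names ("gender", "region") and the 10% threshold are assumptions chosen purely for illustration.

```python
# Hypothetical sketch: flag under-represented groups in a training set.
# The attribute columns and the 10% threshold are illustrative assumptions;
# the whitepaper itself prescribes no specific test.
import pandas as pd

def underrepresented_groups(df: pd.DataFrame, column: str,
                            min_share: float = 0.10) -> list[str]:
    """Return values of `column` whose share of rows falls below `min_share`."""
    shares = df[column].value_counts(normalize=True)
    return shares[shares < min_share].index.tolist()

training_data = pd.read_csv("training_data.csv")  # assumed input file
for attribute in ["gender", "region"]:
    flagged = underrepresented_groups(training_data, attribute)
    if flagged:
        print(f"Warning: {attribute} groups under-represented: {flagged}")
```

A check like this is only a starting point; representativeness ultimately depends on the system’s deployment context, not a fixed threshold.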
- Data and record-keeping
Considering the opacity and complexity of many AI systems, certain requirements are put forth to verify compliance and to allow potentially problematic decisions or actions by the AI to be traced back. The regulatory framework proposes that the following records be kept (a hypothetical sketch of such a record follows the list):
a. Records related to the programming of the algorithm
b. Data sets used to train and test the high-risk AI systems (when justified), along with a description of their main characteristics and the reason for their selection
c. Documentation on the algorithm and the training methodologies adopted to build, test, and validate the AI
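As an illustration of points (a) to (c), the sketch below shows one way such records might be structured in code. The field names and example values are hypothetical, not mandated by the whitepaper.

```python
# Hypothetical record-keeping structure covering points (a)-(c);
# field names and values are our own, not mandated by the whitepaper.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIAuditRecord:
    model_name: str
    algorithm_description: str      # (a) programming of the algorithm
    training_datasets: list[str]    # (b) data sets used to train/test
    dataset_characteristics: str    # (b) their main characteristics
    selection_rationale: str        # (b) why these data sets were chosen
    methodology_docs: str           # (c) build/test/validation methodology
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = AIAuditRecord(
    model_name="kyc-face-match-v2",
    algorithm_description="CNN-based face verification, thresholded at 0.92",
    training_datasets=["internal_kyc_2019.csv"],
    dataset_characteristics="1.2M labelled image pairs, 60/40 train/test split",
    selection_rationale="Only consented, GDPR-compliant customer records",
    methodology_docs="docs/training_protocol_v2.md",
)
```

Keeping a record like this alongside each model version is what would make it possible to trace a problematic decision back to the data and methodology that produced it, which is the whitepaper’s stated intent.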
- Information to be provided
Apart from the above information, the AI system’s limitations and capabilities must be proactively disclosed, including the degree of accuracy with which the system can achieve its specified purpose. This information could be useful to those deploying the AI application. The whitepaper reiterates that citizens should be duly informed when they are interacting with an AI and not a real person. The details should be easy to understand, concise, and objective.
- Robustness and accuracy
Across its life cycle, the AI system must correctly reflect its degree of accuracy, and the whitepaper mentions that outcomes should be reproducible. The AI system must be able to deal with errors and inconsistencies, endure overt attacks, and be resilient against manipulated data.
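On reproducibility, one small (and by no means sufficient) ingredient is pinning random seeds so that repeated training runs yield the same outcome. A minimal sketch, assuming a Python/NumPy stack:

```python
# One small ingredient of reproducibility: pinning random seeds so that
# repeated runs produce identical outcomes. Full reproducibility also
# requires versioned data, code, and environments.
import random
import numpy as np

SEED = 42  # arbitrary fixed value

random.seed(SEED)
np.random.seed(SEED)
# A training framework's seed would be pinned too,
# e.g. torch.manual_seed(SEED) when using PyTorch.
```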
- Human oversight
The AI system must be ethical and trustworthy. To avoid undermining human autonomy, the whitepaper insists on the AI being human-centric. This could manifest in different ways depending on the system’s purpose and functioning (a sketch of option (a) follows the list):
a. Output is reviewed and validated by a human before it becomes effective. For example, human intervention is needed to approve a person’s KYC.
b. Human intervention after the output has become effective. For example, reviewing why the AI rejected a credit application after the decision was put into effect.
c. Monitoring the operation of the AI system, with the possibility to intervene and stop its functioning in real time. For example, a deactivate button in a driverless car.
d. Constraints integrated during the design phase. For example, a driverless car will stop when visibility is low.
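To illustrate option (a), the hypothetical sketch below gates an AI-generated KYC verdict behind a human reviewer. The class, function, and field names are our own, not from the whitepaper.

```python
# Hypothetical sketch of oversight option (a): the model's KYC verdict is
# only a recommendation until a human reviewer finalizes it. All names
# here are illustrative.
from dataclasses import dataclass

@dataclass
class KYCDecision:
    applicant_id: str
    model_verdict: str        # "approve" or "reject", produced by the AI
    model_confidence: float
    human_approved: bool = False

def finalize(decision: KYCDecision, reviewer_verdict: str) -> str:
    """The AI output becomes effective only after human review."""
    decision.human_approved = True
    # The reviewer may confirm or override the model's recommendation.
    return reviewer_verdict

candidate = KYCDecision("applicant-001", model_verdict="reject",
                        model_confidence=0.71)
final = finalize(candidate, reviewer_verdict="approve")  # human overrides
print(f"{candidate.applicant_id}: model said {candidate.model_verdict}, "
      f"final decision {final}")
```

The design choice here is that the model’s output is never wired directly to an effect; the human review step is structurally unavoidable rather than optional.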
- Specific requirements (Example: For AI applications used for remote biometric identification)
The application of AI systems to functions such as facial recognition affects citizens’ fundamental rights, for example the right to a private life and the protection of one’s personal data. Processing biometric data to uniquely identify a person can only be done in special circumstances, with adequate safeguards. The whitepaper declares that the EC will begin a “broad European debate” on what these circumstances are and how they may be justified.
Reservations: Missing the Mark
The proposed guidelines address issues of personal data protection, privacy rights, non-discrimination, and cybersecurity. But they seem to miss the perils of technologies deemed “low-risk”, which face far weaker requirements.
The whitepaper overlooks that a classification of low risk is not absolute; what is low-risk in aggregate could still be very risky for some. The harms of technology are often amplified to disproportionately affect the marginalized.
A draft version of the whitepaper was leaked in January. Held against that draft, the released criteria are a feeble attempt to regulate the possible adverse implementations of AI. The draft proposed a prohibition, or “moratorium”, on facial recognition in public spaces for five years; the released guidelines merely call for a “broad European debate” on facial recognition policy.
Stakeholders can give their feedback on the whitepaper until 31st May 2020. The EC will start drafting legislation based on the proposal and the feedback received at the end of 2020.
About Signzy
Signzy is a market-leading platform redefining the speed, accuracy, and experience of how financial institutions onboard customers and businesses using the digital medium. The company’s award-winning no-code GO platform delivers seamless, end-to-end, multi-channel onboarding journeys while offering customizable workflows. In addition, it gives these players access to an aggregated marketplace of 240+ bespoke APIs that can be easily added to any workflow with simple widgets.
Signzy enables ten million+ end-customer and business onboardings every month at a success rate of 99%, while reducing the speed to market from 6 months to 3-4 weeks. It works with 240+ FIs globally, including the 4 largest banks in India and a top-3 acquiring bank in the US, and has robust global partnerships with Mastercard and Microsoft. The company’s product team is based out of Bengaluru, and it has a strong presence in Mumbai, New York, and Dubai.
Visit www.signzy.com for more information about us.
You can reach out to our team at reachout@signzy.com
Written By:
Signzy
Written by an insightful Signzian intent on learning and sharing knowledge.