Data vs Privacy: EU regulations for Artificial Intelligence

Will the development and use of AI-powered solutions such as voicebots and chatbots become more complicated in the coming years? An effective implementation of an AI solution always starts with a good dataset. And it is precisely the use of data for AI – especially “human” data – that is subject to new European regulations.

It is only a matter of time before there are new EU regulations for artificial intelligence (AI). The first part of the guidelines was published last spring and shows that the new regulations are based on risk assessments. The principle is: the extent of the risk is probability times impact. Even if the chance that something goes wrong is small, a high impact still makes the risk considerable, and the rules become more restrictive accordingly. Chatbots, for example, are classified in risk category 2, which carries an obligation to provide information: companies must be transparent about how the application works. It is precisely on this point that customer service may face an additional obligation to inform consumers.
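Purely as an illustration of that “probability times impact” logic – the EU proposal itself defines its risk categories qualitatively, and the tiers and thresholds below are invented for this sketch – the idea could be expressed like this:

```python
def risk_score(probability: float, impact: float) -> float:
    """Risk as probability (0-1) times impact (here scored 1-10)."""
    return probability * impact


def risk_category(score: float) -> str:
    """Map a score to an illustrative tier; the cut-offs are invented,
    the EU proposal defines its categories qualitatively, not numerically."""
    if score >= 7:
        return "high risk: strict requirements"
    if score >= 3:
        return "limited risk: transparency obligations (e.g. chatbots)"
    return "minimal risk"


# Example: a chatbot with a moderate chance of giving a misleading answer
# and a significant impact on the consumer when it does.
print(risk_category(risk_score(probability=0.5, impact=7)))  # limited risk
```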

Algorithm watchdog

“With the EU’s legislative proposals and policy notes published so far, the tone has been set,” says tech and privacy lawyer Menno Weij (Partner Tech & Privacy Law at BDO Nederland). “The fact that there will be rules is good, but the impact these rules will have on businesses and consumers worries me at the moment. In the Netherlands, for example, there is discussion about setting up an algorithm watchdog, a task that would be assigned to the Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP). The AP has been complaining for years about insufficient resources and is already lagging far behind on privacy files alone.”

Openness and transparency

A second line of the EU’s plans relates to openness and transparency around the use and deployment of AI. Menno Weij: “Legally, it comes down to making it clear that, as a consumer, you are dealing with a chatbot rather than a real person, for example. The idea behind this is to give consumers the opportunity to make an informed choice or to distance themselves from a certain situation. A person can, for example, call customer service instead of talking to a chatbot.”

Data as training material: major impact

In addition, Weij expects another striking element of the upcoming laws and regulations to have considerable impact. “One of the leaders in European privacy policy is the French privacy watchdog CNIL (Commission Nationale Informatique & Libertés). CNIL, a progressive and respected privacy watchdog in Europe, has analysed the different roles of AI suppliers in the context of the GDPR. That analysis can have far-reaching consequences for companies that use AI applications towards consumers.”

Chatbot notification

At the moment it is not yet clear how this transparency obligation would be implemented. You could think of a message such as “you are now using an automated service that uses algorithms and data, where the data is collected, among other things, during your use of the service”. Menno Weij suspects that, from the perspective of the AI legislation, people would need to know that they are dealing with a robot. In addition, the consumer, as a data subject, should also know what the consequences of such an interaction with a robot are, according to Femke Schemkes, a colleague of Weij. “On the one hand, automated answers are generated in response to the questions asked; on the other hand, data is collected and used during the service, and also to train the AI system further. For users who know nothing about AI systems, this will not be immediately clear.”
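What such a notification could look like on the technical side is equally open. A minimal sketch, assuming a simple chatbot backend (the function names and the wording of the disclosure are hypothetical, not taken from any regulation or product):

```python
AI_DISCLOSURE = (
    "You are now using an automated service that uses algorithms and data; "
    "data is collected, among other things, during your use of the service."
)


def generate_reply(message: str) -> str:
    # Placeholder for the actual chatbot engine.
    return "Thank you for your question. One moment please."


def start_conversation(first_user_message: str) -> list[str]:
    """Return the opening messages of a session, with the disclosure first.

    Hypothetical sketch: a real implementation would also log that the
    disclosure was shown and offer a route to a human agent.
    """
    return [AI_DISCLOSURE, generate_reply(first_user_message)]


for line in start_conversation("I want to change my subscription"):
    print(line)
```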

Weij: “From the privacy angle, the variant in which the supplier also becomes responsible is particularly interesting. The party with whom consumers do business will then have to point this out, for example by referring to a privacy statement from the original supplier.”

Dataset has a major influence on AI effectiveness

Then there is the technical side of AI. Arjan van Hessen (Head of Imagination at Telecats/University of Twente) also thinks that it is high time to develop laws and regulations, “because technological developments are moving at lightning speed.” He explains that the operation of AI is highly dependent on the way in which datasets are compiled.

Van Hessen: “AI comes down to recognizing patterns in collected data. The broader the dataset, the better AI usually works. By imposing more restrictions on the use of certain data from your dataset, AI becomes less accurate. Take, for example, whether or not you use special categories of personal data. Because privacy rules are considered less important in the US, and even more so in China, we will end up with a situation in which Chinese AI will soon ‘score’ better than European AI. In Europe, the starting point is that you do not use special personal data unless you can justify it very well. In addition, the European Union is strongly committed to the explainability of algorithms, so that the (end) user can know why a certain decision was made. With AI, in contrast to the more classical algorithms, that is certainly not easy, but it is good that it will at least be attempted.”
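Van Hessen’s point that restricting a dataset can cost accuracy is easy to reproduce in a toy experiment. The sketch below, on synthetic scikit-learn data, trains the same model with and without a block of columns we pretend are “special” personal data; the numbers are only illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic dataset: 20 features, shuffle=False so the 8 informative ones
# come first. We pretend the first 5 columns are "special" personal data
# that a stricter regime would force us to drop.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           shuffle=False, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)


def fit_and_score(train, test):
    model = LogisticRegression(max_iter=1000).fit(train, y_train)
    return accuracy_score(y_test, model.predict(test))


print(f"full dataset:       {fit_and_score(X_train, X_test):.3f}")
print(f"restricted dataset: {fit_and_score(X_train[:, 5:], X_test[:, 5:]):.3f}")
```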

GDPR already provides guidance for data use in AI

For the use of (special) personal data, the legislation was amended in 2018 at both national and European level (GDPR). The GDPR stipulates that processing data for “statistical purposes” is simply permitted. However, the result of AI training should not be applied directly back to the individuals involved, and certainly not as automated decision-making (Article 22 GDPR).

Quality of the solution

There is a second reason why preventing bias is important: besides the risk of discrimination and abuse, the performance of the solution is also at stake. Van Hessen: “If AI is built from a dataset that is not a good representation of reality and developers use it widely, the solution ultimately does not work well. If you develop speech recognition without including, say, Ukrainians who will be speaking English, then the solution does not work for society as a whole.”

Van Hessen emphasizes that this problem already exists: speech recognition does not work well for the very elderly, for children and for people who speak English as a second language, simply because these groups are usually not included in the training of the speech recognizer. “If we want to make the speech recognizer more inclusive, then the speech data of all the smaller groups that speak English will also have to be used. And this process never stops, because ‘new English-speaking people’ are constantly being added, so the spoken language keeps evolving.”
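One way to make such gaps visible is to evaluate the recognizer per group instead of only on the test set as a whole. A minimal sketch with a self-contained word-error-rate (WER) calculation – the group labels and transcripts are invented for illustration:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Classic WER: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


# Invented evaluation data: (group, reference transcript, recognizer output)
samples = [
    ("native adult",    "i would like to change my subscription",
                        "i would like to change my subscription"),
    ("elderly",         "i would like to change my subscription",
                        "i would like to chase my prescription"),
    ("second language", "i would like to change my subscription",
                        "i would like change my subscription"),
]

per_group = {}
for group, ref, hyp in samples:
    per_group.setdefault(group, []).append(word_error_rate(ref, hyp))

for group, scores in per_group.items():
    print(f"{group:15s} WER = {sum(scores) / len(scores):.2f}")
```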

Everything starts with the right training material

The training material of the AI must therefore be representative of the target group. That target group can also change over time – think of the arrival of new groups in society (an aging population, multicultural communities) – and then the AI must also be trained for them, according to Van Hessen. “Under the GDPR, your voice is now regarded as special personal data, because it is a biometric feature that can be used for unique identification. That makes improving speech recognition technology more complicated. And you don’t want to leave making and keeping technology inclusive to Microsoft or Google alone.”

Strengthen expertise

Arjan van Hessen thinks it would be a good idea if organizations had more expertise about their own data collection and what is done with it. “If you do not look closely at your data collection and your analysis method, things can get out of hand. Having experts evaluate the methods used, so that you can learn from them, happens too little in my opinion. Consider the relationship between the nature and quality of the results and the dataset. I hear more and more often from young data scientists that they find it difficult to convince their managers why something does or does not work well, or why something is actually not statistically sound.” Of course, these managers will slowly get a better picture of data, algorithms and AI, but it would help enormously to make a bit more progress here, according to Van Hessen.

What should we take into account?

Back to the inevitable laws and regulations. For consumers, this could become something like the cookie notifications: soon you may also have to click away an AI notification from a chatbot. “The question is what explanation the consumer will get about this, and whether that explanation can be understood. Of course, offering a choice is a great asset. But such an additional notification can also increase consumers’ ‘consent fatigue’,” Weij fears. “The cure is then worse than the disease.”

There is a good chance that the new rules will result in a lot of extra work for companies – and a paradise for lawyers, according to Weij. When it comes to low-threshold development of applications – think of low-code chatbots, something that can simply be picked up within the contact center – he is convinced that the current laws and regulations already cover this. “Of course, low-code developers must also be compliant. Companies that do not have their governance in order in this area and have low-code applications developed and rolled out without thinking will sooner or later get into trouble.”

Article by Erik Bouwer for Ziptone
