European AI guidelines give hesitant developers green light

By Alex Hamilton | 15 April 2019

European Commission guidelines on the implementation of artificial intelligence (AI) and machine learning (ML) have given developers a green light and quashed industry hesitancy to experiment with new AI solutions, though care must be taken to control the data they are trained on, say vendors.

“For regulatory compliance in financial crime, we’ve been working with a lot of companies on the application of AI but there has been this hesitancy to move forward because they’re worried about the support from regulators,” says Marc Andrews, vice president of Watson Financial Services Solutions at IBM. “What this statement from the EU does is provide some guide rails and an accepted framework.”

Last week the European Commission published a set of ethics guidelines for trustworthy AI, inviting industry, research institutes and public authorities to test them on their new AI systems. The guidelines were developed by a panel of independent experts appointed by the commission in June 2018. The project sits under the European Union’s AI Strategy, which aims to increase public and private investment in AI to €20bn per year by 2030.

In a statement accompanying the announcement, European Commissioner for Digital Economy and Society Mariya Gabriel said: “Today, we are taking an important step towards ethical and secure AI in the EU. We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia and civil society. We will now put these requirements to practice and at the same time foster an international discussion on human-centric AI.”

The commission plans to launch a pilot phase involving “a wide range of stakeholders” this summer, while the guidelines and assessment list will be reviewed in early 2020.

“Across the industry there are certain regulations adhered to fairly broadly. Things like fair lending acts and non-discriminatory practices,” says Andrews. “There is a danger in AI that you could introduce some of those discriminatory things back into the equation. When you consider this, it’s crucial that these types of regulations and guidelines are applied. If we’re making decisions purely on data, there’s a risk that the human element is missed.

“This is especially true in areas like financial crime, which is a big area where [AI] is being applied. There’s a potential for the learning to feed off its own data and go in directions where there aren’t sufficient proof points, and things are going to be missed. You need to have human experts and human guidelines.”

Early days

Henry Vaage Iversen, chief operating officer of Boost.ai, believes that proper controls over the data machine learning systems train themselves on are key. “If you have companies starting to find specific use cases within AI, and starting to train algorithms, the more training and data the system gets the better it will be. If the framework for controlling this is difficult to work with then that could potentially damage the business or opportunities for European AI companies.

“I’m curious how they can actually start enforcing this because the application of AI onto data is a huge competitive advantage for firms,” he adds. “When you start enforcing that companies cannot apply the AI in certain ways, ways that might have been very profitable, that might cause issues for a lot of companies. It’s difficult to see how this can be developed as a framework because it definitely requires a lot of control and insight into companies and how they work.

“As long as you do it in the right way, I think this will definitely be a positive thing for the industry,” says Iversen. “A lot of what they listed is essential to achieving trustworthy AI. In Europe people are starting to implement systems but it is still very early days. There will be a lot of discussion about this going forward.”

It may take time, says Andrews, for kinks in the guidelines to be worked out: “There’s a comment about technical robustness and safety, that they need to be resilient and secure and have a fallback plan in case something goes wrong. What does that mean? Those are the kind of details that will probably continue to be worked on over time. It’s not too dissimilar to when GDPR started. Before it launched, there were guidelines on how to protect customer data, but they weren’t too specific; as it launched, that specificity evolved. IBM believes this will move in a similar manner.

“Almost every bank we work with has an AI or innovation department that is working on various applications across their business. Usually these AI folks are off running independent things and coming up with ideas; maybe now they’ll be forced to work with the business to understand what its requirements are.”
