The impact of artificial intelligence on business and governance
Key takeaways from the panel discussion featuring Ananda KAUTZ | Head of Innovation and Payments, ABBL (Moderator) - Solenne NIEDERCORN-DESOUCHES | Independent Director and Senior Advisor in FinTech/VC, Podcast host Finscale - Theodoros EVGENIOU | Professor of Decision Sciences and Technology Management, INSEAD - Yannick BRUCK | CTO at Luxembourg Stock Exchange - Emilia TANTAR | Chief Data and AI Officer, Black Swan Lux
Getting a handle on AI

The stunning potential of artificial intelligence came to life for many with ChatGPT, but for Bill Gates, this software is “pretty dumb” compared to what is coming down the pipe over the next five years. A panel discussed the impact of artificial intelligence on business and governance.


Understand the philosophy

A recent ABBL survey found that three-quarters of financial sector business leaders see generative AI as an opportunity, said panel chair Ananda Kautz, Head of Innovation and Payments at the ABBL. Yet as with all new technology, how these opportunities (and latent threats) will manifest themselves is hard to grasp. “When I discuss AI with boards, they don't necessarily see all the opportunities nor realise the major changes and risks it can imply in their sector,” said Solenne Niedercorn-Desouches, an independent director and senior advisor in fintech. Just 8% of senior management surveyed by the ABBL said they were very comfortable with AI technology. 

Experts can advise, ideally from a seat on the board, but Theodoros Evgeniou, Professor of Decision Sciences and Technology Management at INSEAD, said this is not sufficient. “Debates cannot happen when expertise is very one-sided. Everybody on boards has to know enough to enable them to challenge decisions,” he said. “This is not necessarily about becoming an AI geek. It’s philosophy, which is much more interesting,” he added.

Mostly value-adding

He cited a recent US study by teams from Harvard, MIT and UPenn working with Boston Consulting Group staff. On average, the speed and quality of work improved when consultants used ChatGPT as a “co-pilot”, with the least experienced consultants benefiting most. More surprisingly, however, efficiency fell consistently on some tasks during these tests. “I'm sure we'll see, on average, greater productivity, increased quality, more innovation, and even value-creating creativity,” said Mr Evgeniou. Yet he cautioned that this might not be universal, with AI destroying value in some areas.

Regulatory avenues

EU member states are currently debating the European Parliament's proposals for an AI Act, with a draft text targeted for the end of the year. As someone working with the European Commission on technical specifications, Emilia Tantar, Chief Data and AI Officer at Black Swan Lux, is well placed to give a broad outline of the direction of travel of AI regulation globally. In particular, she is contributing to an expert AI working group seeking to understand the range of risks, with the aim of publishing technical standards in 2025. “We do not know what we do not know, but this is the first step to creating an operational risk management framework,” she said. Data protection, cyber security, the role of human oversight, AI testing and much more come into these considerations.

Mr Evgeniou commented on the work so far, including the proposal that AI should conform to European values. “Be careful,” he warned. “AI designed to European values might not suit conditions in Asia. And what about Chinese medical AI that could save lives, but which is trained to Chinese norms?” He said there will need to be trade-offs, with regulation needing to be an iterative process that evolves with the technology. Ms Tantar spoke of the need for “ethical translation” to adapt AI to meet varying social norms and “continuous monitoring” of innovation.

Just as patients trust medicines they do not fully understand because they have faith in regulators, similar processes are required for AI, with checks on how the tool was developed. “The most impressive regulator of AI at the moment is the US FDA [Food and Drug Administration], which has already approved about 500 AI medical devices,” said Mr Evgeniou.

Workforces will be affected by AI: jobs will be lost, jobs will be created and existing roles will change. “Focus on tasks, not on jobs,” he said. “Be aware that technology always has second-order effects that are difficult to predict,” he added.

A case study

After the panel, a case study of how the Luxembourg Stock Exchange uses AI was presented by its CTO, Yannick Bruck. He discussed how investing in cloud-based operations has opened up possibilities, as has investing in a start-up to help the exchange with the specific use case of extracting data from legal and financial documents. He cautioned that the cloud system must be kept secure, particularly to protect the AI training data introduced into it.
He described the innovation process used at the exchange. “Start with something that makes sense and that people can rely on, and this will open the door for the application of this technology to many use cases across the organisation,” he said. 

Ananda Kautz
Head of Innovation and Payments, ABBL (Moderator)

Theodoros Evgeniou
Professor of Decision Sciences and Technology Management, INSEAD

Solenne Niedercorn-Desouches
Independent Director and Senior Advisor in FinTech/VC

Emilia Tantar
Chief Data and AI Officer, Black Swan Lux

Yannick Bruck
CTO at Luxembourg Stock Exchange


Sustainability & Cognitive biases: Why are we in denial? How do we consider scientific information? What holds us up?
Key takeaways from the presentation by Kris DE MEYER | Neuroscientist, Director UCL Climate Action Unit