Imagine your company's new AI tool discriminates against people with darker skin tones or, worse, leaks users' personal data. Scenarios like these are happening right now, as you're reading this.
AI is quickly weaving its way into our lives, at home and at work, making its governance more urgent than ever. We've known about AI's ethical challenges for decades, but developments over the last five years have turned these concerns into real threats to everyone's rights and wellbeing. Some legislators, like the European Parliament and the State of New York, have introduced legislation to put guardrails around AI development and use. Others, like the UK government, are taking a more cautious approach. Meanwhile, companies are adopting AI at pace but are not always building or maintaining governance at the same rate.
These issues were discussed at a recent panel at the GRC #Risk conference at London's ExCeL Centre. I chaired the panel, which also included Teodora Pimpireva Tapping, Global Head of Privacy at Bumble; Eleonor Duhs, Head of Data Privacy at Bates Wells LLP and Chief Negotiator for the UK on the GDPR; Ivan Djordjevic, Principal Architect for Security, Privacy, and Identity at Salesforce; and Marc Rubbinaccio, Head of Compliance at Secureframe. The conference brought together governance, risk and compliance experts from around the globe to discuss these and related issues.
The panel covered three core areas: the current challenges; how to move beyond lists of principles; and the motivation to put robust governance in place, especially where there is no overarching legislation, as in the UK.
Current Challenges
A core challenge raised repeatedly on the panel was the need for cross-functionality. AI governance isn't just for lawyers or tech specialists; it's like assembling a football team. You need everyone on board - lawyers, tech experts, ethicists, and more - working together towards the same goal. At Sopra Steria, for example, the AI Governance Board consists of our Chief Technical Officer, Chief Information and Security Officer, Head of Legal, Head of Procurement, Data Protection Officer and Head of Ethics Consulting.
Governance is also harder in some jurisdictions, such as the UK, precisely because there is no overarching legislation. The UK currently has a patchwork of laws and regulations that collectively govern AI use (such as the Equality Act 2010, the UK GDPR and others), which makes compliance complex and uncertain, especially for small and medium-sized businesses that lack the resources for specialised AI governance oversight.
Principles vs Practice
While principles are important as a starting point, they cannot be the last word on the matter. Stopping there only creates confusion when different principles clash and there is no clear guidance as to which should give way. Think of a case where profitability clashes with explainability. It's easy to say explainability should always come first, but in reality businesses have to balance explainability against profitability and their risk tolerance, while remaining ethical and within the law. Should we stop using (and should OpenAI and Anthropic stop offering) tools such as ChatGPT and Claude because their output is not fully explainable?
Again, the need for cross-functionality came up as an essential prerequisite for moving effectively from principles to policy to the implementation of standards. Which standards to adopt (ISO27001, ISO42001, the NIST Risk Management Framework, among others) is another decision to be made.
Motivation
While organisations may recognise the need for governance, they may struggle to justify the budget if there is no legislation demanding it. Even so, in those contexts good governance can be a differentiator, and certifications such as ISO42001 will become increasingly valuable in helping suppliers stand out in a crowded market. Good governance can also help organisations bring some order to the chaos many of us are experiencing with AI. Lastly, there is the moral case. Even though some organisations may not be subject to, for instance, the fundamental rights requirements of the EU's AI Act, the call to respect human rights such as non-discrimination, privacy and freedom of expression - enshrined in the Universal Declaration of Human Rights - is universal.
To wrap up, the panellists left us with some key takeaways: audit your AI systems so you know where they're being used; don't get swept up in the hype of new tech; make sure everyone across your organisation knows their responsibility for the models in use; and hold your suppliers to account for how they implement AI governance.
Conclusion
For all the excitement and pace of development in AI, some core risk management principles should underlie implementation. Know what your organisation has and is using; review what is coming into your organisation (and what is going out); and ensure that good governance sits within the organisational culture rather than residing in one function alone. Given the urgency around governance, if no one is taking clear responsibility for AI in your organisation, maybe it's time to ask yourself: what's your role in making sure AI is compliant, fair and ethical in your workplace?
Kevin Macnish has worked in AI governance for 14 years with a range of clients, from governments and universities to the European Commission and the financial services sector. He co-founded Sopra Steria's internal AI governance board and regularly advises clients on how to implement effective governance. If you would like to know more, please reach out to Kevin at Kevin.Macnish@soprasteria.com.