In today’s rapidly evolving technological landscape, the influence of AI is undeniable. As it becomes integrated into nearly every aspect of our lives, the insight of our subject-matter experts is more valuable than ever. Our AI leaders are grounded in years of experience and focused on shaping the future.
With the power of AI comes the responsibility to use it ethically and transparently, and making sure our clients understand the ethical considerations around AI is incredibly important to us. Dr Kevin Macnish is the Head of Sustainability Consulting at Sopra Steria Next, with a particular focus on AI. He has authored books on ethics and technology and is working on a new book, AI and Democracy. He's currently an advisor on the Code of Practice for the EU's AI Act.
Let's meet Dr Kevin Macnish.
Tell us a bit about your background and your pathway into the technology sector.
After an initial career at GCHQ, I returned to university in 2008 to get a doctorate in philosophy, studying ethics and surveillance. I then worked as a researcher, lecturer, professor and consultant in academia for 12 years in the UK and the Netherlands, specialising in ethics, technology (particularly AI), and governance. I've published widely, including several books, academic papers and book chapters. I've spoken at both Houses of Parliament and briefed the US Director of National Intelligence on AI ethics. I became a full-time consultant when I joined Sopra Steria early in 2021 and started leading the ethics and sustainability team in 2023.
What do you see as the main challenges facing organisations today when it comes to adopting AI?
The author William Gibson famously observed that the future is already here; it's just not evenly distributed. While AI will bring immense benefits to people, society and the planet, it also poses a number of risks. Many of these can be avoided if we think to ask the right questions, and that is what I see as the heart of ethics and sustainability: asking the right questions and helping clients get better answers.
Organisations need to ensure they have both the technical understanding and the ethical foresight to adopt AI responsibly. It is tempting for some to view ethics and sustainability as restrictions on innovation, and so avoid dealing with them. However, ethical considerations in other industries have led to seatbelts, blind-spot indicators and adaptive cruise control, all of which are for the best! Ethics and sustainability should be a spur to better innovation. After all, is biased AI better than its alternative?
How do you think emerging technology will impact people’s lives?
Emerging technologies are already influencing our lives, from recommender systems on Amazon and Netflix to advances in medical diagnostics. The benefits range from the relatively trivial (which film to watch next) to lifesaving. At the same time, we've become aware of some of the drawbacks of new technologies, not least with the Cambridge Analytica scandal, which broke in 2018 when it transpired that a company had taken data entered innocently into a Facebook survey and used it to create targeted advertising that undermined democratic practices. Less nefarious but just as harmful are AIs such as facial recognition systems that are biased against people on the basis of their gender or skin tone. These negative consequences can be avoided, but only if we approach technological development with our eyes fully open.
How is Sopra Steria Next different from other companies in the industry?
What sets us apart is our focus on a holistic approach to digital transformation. Unlike some companies that just push technology solutions, we recognise the importance of combining technology with human experience, strategy, and ethical thinking. It’s about more than just automation or AI; it’s about making sure these technologies align with our clients' and society’s values. We also put a lot of emphasis on co-creating with our clients, ensuring that solutions are tailored to real-world problems rather than forcing them to adapt to rigid tech platforms.
What is your vision for the future and how would you like to shape the experience for our customers?
My vision for the future is one where technology doesn’t just make things faster or more efficient but also genuinely better for society. I’d like to see AI and emerging technologies being used to enhance human dignity, not just replace human labour. For clients, that means building tools and systems that are transparent, ethical, and ultimately empowering. It’s not just about delivering a product, but about shaping a future where technology works for us, not the other way around.
Who is your inspiration? Who has changed the way you think about the future?
Onora O'Neill is a brilliant philosopher whose work spans trust, justice, and ethical communication; she is also an active member of the House of Lords and a former chair of the Equality and Human Rights Commission. Her work has had a significant impact on my thinking, particularly in relation to ethics in technology and AI.
Some of O'Neill’s central work is around trust and accountability. She argues that trust should not be something we simply assume or demand, but something that must be earned through transparency and responsibility. This has been influential in my views on AI and digital technologies. For instance, as we deploy AI systems that affect people's lives—whether in healthcare, finance, or policing—there must be clear structures of accountability. Users need to understand how decisions are made and who is responsible when things go wrong.
O'Neill's insistence on reason-giving is key: those who develop and deploy technologies must be able to justify their decisions and actions. Her philosophy has pushed me to advocate for a world where technology builds trust, enhances autonomy, and contributes to a more just and accountable society. She's a powerful reminder that ethics is not a barrier to innovation but a foundation for sustainable, human-centred progress.
Connect with Kevin on LinkedIn or send him an email to discuss more about ethics and sustainability, and how this impacts AI use.