Friso Spinhoven on the Responsible AI Framework

Responsible AI, that is, the careful and safe use of AI, is a prerequisite for sustainable value creation with AI. Conclusion has developed two practical models that show what organizations need to do to get value from AI and how to use it responsibly at the same time. In doing so, they also comply with the AI Act, the European Union regulation that introduces a common legal framework for artificial intelligence.

December 3rd, 2024   |   Blog   |   By: Conclusion


Friso Spinhoven, responsible AI consultant at Conclusion AI 360

What is Responsible AI?

The media pay a lot of attention to the negative effects that the use of AI can have, often citing examples of bias in models. However, Responsible AI is much broader than that. Friso: "Responsible AI also includes themes such as employee training, defining new roles and taking into account the ecological impact of AI."

To maintain a clear view of the full breadth of this spectrum, Conclusion has developed the Responsible AI Framework. The framework consists of six dimensions you must take into account if you want to derive sustainable value from AI. Each dimension is linked to questionnaires that the various stakeholders in the organization complete. This shows where you stand as an organization and helps you determine where to put your energy first. The Responsible AI Framework is a maturity model: you can use it to periodically measure the progress you have made and to see where additional investment is still needed to embed AI sustainably within your organization.
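The mechanics described above, questionnaire answers aggregated per dimension into a maturity score, can be sketched in a few lines. This is a minimal illustration only: the dimension names and the 1-5 scale below are assumptions, not the actual six dimensions of Conclusion's framework.

```python
from statistics import mean

# Hypothetical dimension names and questionnaire answers (1-5 scale),
# purely for illustration; the real framework defines six dimensions.
responses = {
    "governance": [3, 4, 2],
    "employee_training": [2, 2, 3],
    "ecological_impact": [1, 2, 2],
}

def maturity_scores(responses: dict[str, list[int]]) -> dict[str, float]:
    """Average the stakeholders' questionnaire answers per dimension."""
    return {dim: round(mean(vals), 1) for dim, vals in responses.items()}

def weakest_dimension(scores: dict[str, float]) -> str:
    """The dimension to put energy into first: the lowest-scoring one."""
    return min(scores, key=scores.get)

scores = maturity_scores(responses)
print(scores)                     # per-dimension maturity baseline
print(weakest_dimension(scores))  # -> "ecological_impact"
```

Repeating the same calculation on a later assessment and comparing the two score dictionaries gives the periodic progress measurement the framework describes.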

Responsible AI includes employee training, defining new roles and taking into account the ecological impact of AI.

Friso Spinhoven, Conclusion AI 360

Why does it often go unaddressed?

Friso recognizes that most organizations are enthusiastic about dealing with AI responsibly, but do not know exactly how. "Because of all the media attention, policymakers are well aware that it is easy to make mistakes when you use technology to support or even fully automate decisions. Mistakes that you often aren't even aware of and that have crept into your process or AI model completely unnoticed. That's why Responsible AI is currently high on the agenda of most organizations' management. But they find it difficult to give substance to it, because they don't know where to start. Moreover, they lack an overview of the full breadth of the spectrum," says Friso. The Responsible AI Framework changes that.

What makes it so complex?

Friso notes that maturity is strongly related to the nature of the company. The first step Conclusion takes with customers is to determine their AI Readiness (AIR) profile. "We have developed a model with which we can assign companies a score, because companies differ greatly in their modus operandi, and these differences affect their level of maturity. Companies in innovative sectors must come up with new products quickly and therefore dare to take risks; Responsible AI is generally not the first thing they think of. Companies in other sectors, by contrast, consider risk management a top priority, because they know that an error can have major consequences."

In addition to the sector in which organizations operate, AI Readiness is also determined by organizational culture. One company is more opportunistic, another is mainly pragmatic and yet another is naturally conservative or sceptical when it comes to change. Friso: "The AIR profile is a way of giving us a quick insight into the steps an organization needs to take before actually starting to work with AI. In many organizations we have to put on the brakes. These are the companies that easily fall for a sexy pilot that demonstrates the value of AI. But sometimes we also need to shift the focus from risk management to opportunities. These are the organizations that are on the cautious side of the spectrum. For those organizations I often use a Formula 1 metaphor: you can make a fast car very reliable. But you can't make a reliable car fast."

Two models, multiple applications

Friso: "We use both models in multiple ways. First and foremost, they guide our conversations with customers and ensure that we don't miss any topics. We also use the Responsible AI Framework as a maturity model: a baseline measurement provides overall insight into where our customer stands and in which areas maturity is lagging behind. We then repeat the assessment (semi-)annually to monitor whether the growth path we are pursuing together with the customer is bearing fruit. This close monitoring allows us to embed AI in our customer's organization in a responsible and sustainable manner."

In short, using the AI Readiness model and Responsible AI Framework gives you insight into where you can and should grow as an organization if you want to use AI responsibly. It gives you a grip on AI and helps you achieve your strategic goals in a sustainable way.

Emerging technologies trend report 2024 #2

The Conclusion companies continuously monitor emerging IT trends and explore new technologies that may be relevant to our clients. In our semi-annual trend report, we highlight several technologies that, in our view, deserve special attention.