Why the world needs responsible AI

By Touradj Ebrahimi,
EPFL Professor, Founder of RayShaper SA and JPEG Convenor (chairman)


For the last 30 years, the JPEG image format has been a staple for the Internet’s billions of users. While the technologies used to display images have evolved tremendously over the past few decades, the JPEG format is still used everywhere today. This is a great example of what can happen when a new technology develops under consensus-based, responsive and inclusive governance. 

Today, we have a chance to do this again. Artificial intelligence (AI) isn’t the first technology to impact day-to-day life for people all over the world, and it certainly won’t be the last. As a university professor, the founder of a company and a longstanding leader in international standards development, I have a fortunate vantage point. My many hats give me a clear view of both the current applications of AI and its future promise. To me, there is no doubt that responsible governance is the only way to deliver on the potential of AI while avoiding its negative side effects.

That governance must encompass education, technology and regulation. But most importantly, it must be founded on inclusive and reliable International Standards. 

Governing the future 

Some liken AI to social media, which also fundamentally changed the way we communicate and connect with each other. Just like social media, AI could create a host of opportunities for positive developments, but it would be foolish to pretend there are no drawbacks.

I prefer to compare AI to cars, because they are a perfect illustration of how ground-breaking technology can be used positively with responsible governance. People need a licence to legally drive a car, which is obtained through training; cars are significantly safer and easier to use than they used to be thanks to technological progress; and the industry is heavily regulated across the world. History shows that effective and responsible governance must be built on these three building blocks.

The same applies to AI. First, we need education. People must be informed about the potential risks they face when using AI technologies, and how to avoid them. This way, consumers can actively contribute towards their own safety.

Second, there must be technological solutions to counteract risks. Solutions already exist for threats like misinformation, but we must do more to create effective antidotes to AI’s risks.  

Finally, we need regulation, but we need to be careful about what we regulate. AI technologies and the tools that use them are very complex and move fast. Regulation must be designed and implemented with enough foresight so that it is still relevant by the time it comes into effect. Creating such smart, adaptable frameworks is challenging work, but it is pivotal to the responsible use of AI going forward.

Connecting streams 

The main challenge is that AI is currently evolving along many tracks, at different speeds. But the challenges and potential risks of AI are global. This calls for inclusive, fair and flexible solutions. To bring all of these streams together and move forward responsibly, we must gather all stakeholders from all over the world around the same table. 

The private sector, driven by shareholder value and competition, is innovating faster than anyone else. This means companies are effectively setting standards as they go, simply because they are the first to wade into unknown territory. This is not negative per se, but it leaves many key voices out of the debate. Scientists, engineers, consumer associations, governments and others must all weigh in and come together to establish the mechanisms needed to guide AI towards a benign and prosperous future.

ISO has a proven track record of doing exactly this, and ISO/IEC 42001 is evidence that AI is high on ISO’s agenda today. As the world’s first AI management system standard, ISO/IEC 42001 addresses the unique challenges AI poses, such as ethical considerations, transparency and continuous learning. It is applicable to any entity providing or utilizing AI-based products or services, ensuring responsible development and use of AI systems.

By taking into account all voices, ISO consistently works towards building International Standards that are inclusive and, most importantly, flexible. From the humble JPEG to global telephone networks and broadcasting systems, many of today’s technologies would not have been possible without standards. 

We stand on the brink of a new world powered by AI technologies. This provides an opportunity to minimize global risks by listening to all voices equally. To deliver on the promise of AI, we must act fast. Standing still is not an option.