GlobalData Unveils AI Governance Framework to Help Companies Implement AI Responsibly and Safely
Artificial Intelligence (AI) risks can seriously tarnish a company’s reputation. Potential issues range from copyright infringement to data privacy breaches and even physical harm. Increased use of AI will also reinforce and exacerbate many of society’s biggest challenges, including bias, discrimination, misinformation, and other online harms. To help companies implement AI responsibly and safely, GlobalData has developed an AI Governance Framework.
Companies that fail to adopt the highest standards of AI governance face substantial reputational and financial risk. For instance, in 2024, Google had to temporarily block its new AI image generation model after it inaccurately portrayed German Second World War soldiers as people of color. In 2023, iTutor Group paid $365,000 to settle a lawsuit after its AI-powered recruiting software automatically rejected applicants based on age.
GlobalData’s AI Governance Framework is a management tool that helps senior executives identify potential AI risks across five broad classifications.
Laura Petrone, Principal Analyst, Thematic Intelligence team at GlobalData, comments: “The journey towards responsible AI is complex and fraught with uncertainty. Risk can originate from different sources and multiply as AI systems are implemented. Companies investing in responsible AI early will have an advantage over their competitors. They can not only show that they are good corporate citizens but also actively prepare for upcoming regulations.”
There are currently no global regulatory standards for AI, so it can be difficult for CEOs to know what constitutes best-practice governance for AI systems. Instead, governments leave companies to voluntarily embed responsible AI values and practices into their AI strategies. Responsible AI is an approach to developing AI and managing AI-related risks from an ethical and legal perspective.
Petrone concludes: “While most corporate executives will outsource AI provision to tech vendors—often Big Tech—they must be mindful that their company’s reputation will suffer if something goes wrong. Therefore, if you are a senior executive deploying AI systems designed by a third-party tech vendor, the onus is on you to ensure that your business is using AI responsibly.”