Are Responsible AI Programs Ready for Generative AI? Experts Are Doubtful

For the second year in a row, MIT Sloan Management Review and Boston Consulting Group (BCG) have assembled an international panel of AI experts that includes academics and practitioners to help us understand how responsible artificial intelligence (RAI) is being implemented across organizations worldwide. This year, we’re examining the extent to which organizations are addressing risks that stem from the use of internally and externally developed AI tools. The first question we posed to our panelists was about third-party AI tools.

This month, we’re digging deeper into the specific risks associated with generative AI, a set of algorithms that can use unvetted training data to generate content such as text, images, or audio that may seem realistic or factual but may be biased, inaccurate, or fictitious. We found that a majority (63%) of our panelists agree or strongly agree with the following statement: Most RAI programs are unprepared to address the risks of new generative AI tools. That is, RAI programs are clearly struggling to address the potential negative consequences of using generative AI. But many of our experts asserted that an approach that emphasizes the core RAI principles embedded in an organization’s DNA, coupled with an RAI program that is continually adapting to address new and evolving risks, can help.

Based on insights from our panelists and drawing on our own observations and experience in this field, we offer some recommendations on how organizations can start to address the risks posed by the sudden, rapid adoption of powerful generative AI tools.

According to our experts, RAI programs are struggling to address the risks associated with generative AI in practice for at least three reasons. First, generative AI tools are qualitatively different from other AI tools.
Jaya Kolhatkar, chief data officer at Hulu and executive vice president of data for Disney Streaming, observes that recent developments “showcase some of the more general-purpose AI tools that we may not have considered as part of an organization’s larger responsible AI initiative.” Given the technology’s general-purpose nature, adds Richard Benjamins, chief AI and data strategist at Telefónica, “an RAI program that evaluates the AI technology or algorithm without a specific use case in mind … is not appropriate for generative AI, since the ethical and social impact will depend on the specific use case.”