Center for Inclusive Governance
Scaling Generative AI Solutions and Fostering an AI-First Approach
09/24/2024
Buoyed by extreme hype, consumer adoption of OpenAI’s ChatGPT raced to 100 million users within two months of launch. But scaling generative artificial intelligence (AI) across an enterprise requires deep commitment and an AI-centric transformation of the organization, driven from the top.
Infosys’s Generative AI Radar 2023: North America report suggests that corporations are on the right path, with the executives of large companies actively championing generative AI within their organizations. Investment in generative AI by firms in the United States and Canada is expected to jump 67 percent from last year’s figure to reach $5.6 billion in 2024. However, money alone is not enough to scale generative AI throughout an enterprise and extract its full potential; a broader effort is required.
Transform Data, Technology, and Talent
Enterprises must simplify and modernize their multigenerational technology landscape before moving forward with generative AI. They also need to address data-related challenges, which, along with skills shortages, pose the biggest barriers to successful adoption. This calls for “treating” the organization’s massive data resources—internal, external, structured, unstructured, tacit, and explicit—to make them AI-ready; clean, consistent, accurate, harmonized, and proprietary data are essential to unlocking the true value of generative AI as use cases evolve. To keep pace with the continual evolution of AI and other digital technologies, organizations should future-proof their infrastructure by creating abstractions that allow them to adapt to new models with ease.
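The abstraction idea above can be sketched in code. The example below is a minimal, hypothetical illustration (the class and function names are not from the article or any vendor SDK): business logic is written against a small interface, so the underlying generative model can be swapped—say, from an in-house model to a vendor-hosted one—without touching that logic.

```python
from typing import Protocol


class TextGenerator(Protocol):
    """Abstraction over any generative model backend."""

    def generate(self, prompt: str) -> str: ...


class LocalModel:
    """Hypothetical in-house backend, used for illustration only."""

    def generate(self, prompt: str) -> str:
        return f"[local] {prompt}"


class VendorModel:
    """Hypothetical vendor-hosted backend, also illustrative."""

    def generate(self, prompt: str) -> str:
        return f"[vendor] {prompt}"


def summarize_report(model: TextGenerator, report: str) -> str:
    # Business logic depends only on the TextGenerator abstraction,
    # so the backing model can change without code changes here.
    return model.generate(f"Summarize: {report}")


print(summarize_report(LocalModel(), "Q3 revenue up 12%"))
```

Swapping `LocalModel()` for `VendorModel()` changes the backend without altering `summarize_report`, which is the kind of future-proofing the article describes.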
In addition, generative AI is disrupting the world of work and redefining the skills required by enterprises. Organizations will need to train their workforce to meet future talent needs. Aside from reskilling and upskilling technical workers for redeployment in new opportunity areas, companies should also train other employees in how to use the technology in their day-to-day activities. To scale generative AI throughout the enterprise, every employee—leaders included—should be conditioned to be an AI creator or AI consumer. Technology skills aside, organizations should also inculcate uniquely human skills, such as creativity, problem solving, innovation, data governance, and an understanding of how to use AI responsibly, before plunging in.
Manage Risk, Responsibility, and Reward
Providing AI-ready data is only part of the problem; the bigger challenge is ensuring that those data are used responsibly. Customer data must always be used with consent and in compliance with security, privacy, and confidentiality norms. Data should be checked for trademark, copyright, and other legal protections before being consumed. AI training data should be fair, accurate, and complete to produce reliable and unbiased algorithmic outcomes. Enterprises should also be aware of generative AI’s tendency to hallucinate, or proffer inaccuracies with a great deal of confidence.
Unless managed carefully, the risks associated with generative AI can swiftly destroy its value to the organization, causing not only financial damage but also reputational and even personal harm. Take, for example, “Deepfake Elon Musk.” After a video of Musk appeared in 2022 endorsing BitVex, a fake cryptocurrency platform, The New York Times dubbed him the biggest scammer on the Internet. The ability of generative AI to replicate a person’s likeness opens the door to legal challenges, damaging both the organization and the individual whose likeness was misused.
It is the responsibility of senior leaders to ensure that their organizations have the necessary security and governance frameworks to address known risks and intelligent mechanisms to anticipate and detect evolving threats. Moreover, their vision should be global, aiming to build generative AI applications that are compliant across different jurisdictions. Large enterprises with significant generative AI ambitions should provide an “AI foundry” where appropriate employees can experiment with applications using large language models. However, actual deployment and scaling of these models globally will require narrow transformers that harness enterprise data, along with an AI factory setup, to comply with the differing regulations of various countries.
Scaling generative AI successfully is also a balancing act between risk and reward. McKinsey & Co.’s The Economic Potential of Generative AI: The Next Productivity Frontier estimates, based on a study of 63 generative AI use cases spanning 16 business functions, a combined annual value potential of between $2.6 trillion and $4.4 trillion. Risk avoidance aside, prioritizing the most beneficial use cases is key to unlocking the value of generative AI while optimizing the organization’s limited resources. Although it sounds simple, picking the best use cases from so many options is one of the biggest challenges on the path to generative AI adoption. Here, an AI-first approach can be helpful.
Go From AI to AI-First
A broad definition of the “AI-first” approach is that companies consider AI before any other solution to resolve any business problem. Of course, an enterprise must take its context into account before going AI-first; for example, a banking institution cannot use a generative AI solution in risk and compliance operations that require complete algorithmic transparency and explainability. Still, there is a rich menu of use cases even for the highly regulated finance industry, and banks are already experimenting with generative AI to do unprecedented things, such as embedding generative pretrained transformers, also known as GPTs, into business processes to auto-resolve fraud. But even in an AI-first approach, a human needs to monitor and oversee the various AI models at work to ensure ethical and responsible use.
The value potential of generative AI appears to exceed that of any technology in history. To tap into it, organizations must scale generative AI across the enterprise. For the first time, CEOs, chief information officers, and chief information security officers in the United States and Canada are personally driving the adoption of a technology. This reflects the significant business transformation that generative AI demands.
Infosys is a NACD partner, providing directors with critical and timely information and perspectives. Infosys is a financial supporter of the NACD.
Mohammed Rafee Tarafdar is the chief technology officer at Infosys, focused on building next-generation platforms, capabilities, and solutions.