History is replete with examples of industry leaders that paid the ultimate price after failing to innovate. Kodak stubbornly stuck to film when digital photography was taking over the world and filed for bankruptcy in 2012. When Blockbuster passed up the opportunity to buy Netflix for a mere US$50 million in 2000 (the streaming service’s market cap now sits at US$169 billion), the video giant signed its death warrant. And the list of tragic demises goes on and on.
Now, as global business watches the AI revolution unfold with bated breath, many organizations find themselves at a critical juncture. Should they embrace game-changing technology, or stick to their proven business models?
“Organizations often treat AI as an IT topic, but it’s a business topic because it has such huge implications for strategy,” says Richard Fleming, a Senior Partner and Head of APAC Advanced Analytics Practice at global management consulting firm Bain & Company.
“I’ve heard a lot of organizations say, ‘AI feels risky, let’s batten down the hatches and try to avoid it.’ But that’s a risky thing to do because people within your organization will use it, whether you like it or not.
“CEOs should lean into the risks and not lean out of them.”
In a new report on the state of AI, Bain highlights three transformative outcomes for organizations that take the plunge: supercharged productivity, reinvented customer value and groundbreaking new business models.
“Many organizations are starting to see real productivity gains with AI,” Fleming says. “In code development and software engineering, there’s good evidence that AI improves junior developers’ productivity by 50–60 percent.
“A lot of the writing done in organizations is repetitive, so those that use Microsoft Copilot for writing augmentation are experiencing up to 30 percent improvement in productivity. Organizations in highly regulated industries that need to produce regulatory reports are seeing 80 percent productivity improvements.”
By significantly reducing the margin for error, AI is also helping companies elevate their customer experience and boost their bottom line.
“An insurance company we’re working with is getting much better claims decisions through AI-augmented claims officers,” Fleming explains. “This is far more effective than claims officers having to manually search through all sorts of policy documents and making a lot of mistakes.”
“We’re also working with a travel agency that’s saving millions of dollars a month in mistakes by using AI for their booking systems with hotels,” he adds. “They’ll probably keep their pool of people about the same size, but mistakes are being reduced, which translates into a much better experience for their customers.”
As intelligent agents become increasingly sophisticated, companies are harnessing the power of hyperpersonalization to further enhance the customer journey.
“We recently created a grocery shopping bot for Carrefour in France that can create meal plans and advise customers on restocking their pantries,” Fleming says. “Customers love it and they’re using it a lot, so Carrefour is getting amazing insights into their customers and how they think when they shop. It’s creating some real benefits in being able to personalize their customer experience.”
Just as previous technological innovations gave rise to disruptive business models we now take for granted, AI will revolutionize the way we do business.
“In the same way that Uber and other businesses exploded into the world with mobile, AI will bring new avenues of creativity and new products in ways we probably haven’t even thought of yet,” he adds.
Beyond business and profit, AI can also be applied for social good.
“A simple example is providing wealth advice much more broadly,” Fleming notes. “There’s a huge advice gap in Australia, and it’s even bigger when you look at countries like India where so many people don’t have financial inclusion. So, the question is: how can you provide sound advice to everyone across a range of languages? AI can do that.”
Amid the eager anticipation surrounding AI’s potential business and social applications, fears about predictable and unforeseen risks loom like a dark cloud. From data security to copyright issues to a deep-seated, perennial angst that humans will be replaced by machines, AI anxiety is all too real. In a survey commissioned by Microsoft earlier this year, 49 percent of respondents said they’re worried they’ll lose their jobs to AI.
“The first thing to recognize is that a lot of the risks already existed, and most big organizations have sophisticated risk management systems in place to deal with them,” Fleming notes. “For example, financial services organizations have been thinking about potential bias in their algorithms for more than a decade.
“Some risks have become amplified with AI and there are new risks emerging that we have to be careful about, such as hallucination from the models and copyright issues. Every organization should have a responsible AI program, led from the top and founded in a discussion about the company’s values.”
“Things like fairness, bias and ethical principles around AI need to be considered and translated through the organization. Developers should have the tools and training to consider all these elements and explain their models and how they made decisions.”
In highly regulated industries, additional guardrails need to be put in place to safeguard customer data and protect the public from erroneous and potentially damaging advice. While banks cannot risk putting AI directly into the hands of their customers, they can use it internally to support their financial experts’ decision-making process.
“We’re currently building an AI assistant for a bank, and we’ve created characters that go through the data and compete with each other to make a recommendation for the customer,” Fleming explains. “The characters have to explain their rationales to each other, so the wealth advisor can evaluate both sides and decide whether they’re reasonable before making a decision.”
AI may eventually be strictly regulated, with companies required to divulge to customers when they’re dealing with bots instead of humans and AI models having to undergo a stringent certification process. But the onus currently lies on organizations to protect their customers’ data and win their trust.
“Even when we have a set of controls in place, organizations will still need ethical principles around AI,” Fleming says. “As we develop and operate these systems, we have to watch for instances where they go outside of our organization’s or our society’s values and correct them quickly.
“It’s also imperative for organizations to educate their workforce on how to use AI and what it means for the business. A lot of organizations tell me they plan to be fast followers, but the technology is moving really fast and it’s a huge paradigm shift, so it’s very difficult to be a fast follower.
“Organizations should be experimenting now to understand the technology, or they’ll fall behind very quickly.”