The Big Idea: Boardrooms are underestimating the risk of AI.
Is AI the elephant in the boardroom that everyone's trying to ignore? Lately, it seems that unless your CEO is a robot, business leaders are turning a blind eye to the colossal digital pachyderm tossing up data with its trunk.
Despite the cascading warnings—potential job losses, misinformation spread, and burgeoning legal risks—a recent Aon Global Risk Management Survey of over 2,800 business leaders reveals a startling blind spot toward AI's implications. In a bewildering twist, AI ranked 49th in business threats, far behind fears of cyber attacks and talent shortages, which fall in the top ten.
But ignoring AI today could amplify all other listed risks, leaving leaders to grapple with unforeseen complexities tomorrow.
Karim Lakhani, a Harvard Business School professor and AI expert, suggests this reluctance to prioritize AI may stem from a history of overhyped tech trends. The business world has watched many such waves rise and fade—from blockchain to the metaverse. However, the stakes with AI are significantly higher: it could create legal exposure, amplify human-capital risks, and widen cybersecurity vulnerabilities. And the risk landscape is only growing as the technology continues to introduce new challenges.
AI demands hands-on, executive-level involvement—a commitment seemingly absent from many current business strategies, according to Aon. As its implications become more evident and pressing, Lakhani predicts AI will follow a trajectory similar to cybersecurity's: slow to be prioritized, but prioritized eventually.
BCG senior partner Sylvain Duranton advises initiating a comprehensive task force, interlinking IT and HR, to identify areas ripe for AI-driven transformation that could lead to a 50% uptick in productivity.
But defining 'productivity' in knowledge work is trickier than it seems—Slack's global survey found that 43% of managers say their biggest challenge is keeping their teams motivated. Leaders often fixate on input metrics (think emails sent and time spent at your desk) rather than real outcomes (like goals met and revenue generated).
But the problem with focusing on inputs is you’re left with a workforce that doesn’t feel trusted—according to Slack’s research, employees who feel trusted are twice as productive and 30% more likely to go the extra mile.
So why are 60% of business leaders still counting the hours rather than focusing on results?
The point is, it might be time to adopt policies that embrace a new, more nuanced definition of productivity—and while you’re at it, keep some peanuts handy because that AI elephant is here to stay.
CHART OF THE WEEK
HR Leaders Identify Failure to Attract or Retain Top Talent as the #1 Enterprise Risk
So, what’s keeping business leaders up at night?
As organizations grapple with global workforce shortages, the legacy of pandemic-induced understaffing is now intensified by a tight labor market and an aging workforce. So it’s no surprise that “The Failure to Attract or Retain Top Talent” ranked at its highest position since Aon’s Global Risk Management Survey began.
Gaining increased recognition from C-suite executives, the talent crisis ranked as number four across survey respondents—and came in first for HR leaders.
As intense competition for talent in key skill areas causes businesses, and even entire sectors, to fall behind in innovation, it is increasingly important for companies to reassess their corporate human capital strategies.
The talent deficit amplifies vulnerabilities across domains like cybersecurity, regulatory compliance, supply chain, business continuity, and reputation. Consequently, potential revenue losses from workforce shortages that keep businesses from reaching their objectives have emerged as a primary worry for survey participants.
This precarious scenario is echoed by a stark drop in risk quantification among respondents—from 28% in 2021 to just 17% in 2023, a drop that Aon believes may reveal a troubling oversight. Without effective measurement of risks, organizations will find themselves navigating in the dark, impeding their capacity to mitigate or recover from challenges effectively.
Rushing to Cut Costs with AI Introduces Major Risk
The latest expert take from Editorial Board member Don Blumfield, former VP of Global IT at Heidrick & Struggles
AI’s transformative potential is clear, offering unparalleled opportunities to redefine industries and enhance profitability. However, the C-suite’s urgency for immediate profit growth often translates into a relentless drive for efficiency — colloquially, “doing more with less.” While this drive has successfully automated certain tasks over the past decade, it demands a cautious, methodical approach.
Research from IDC highlights a 32% improvement in customer support and retention scores, primarily due to the evolution of chatbots into sophisticated ML/AI tools. These advancements have enabled support organizations to address routine issues with 39% greater efficiency. However, this success stems from extensive machine-learning training and meticulous validation by human engineers, underscoring the importance of human oversight.
Today, support organizations can focus more on complex interactions, thanks to AI. But this transition requires careful validation of AI-provided solutions. Conversations with leaders across various departments reveal a growing emphasis on efficiency, often hinting at workforce reductions as part of AI adoption.
As executive leaders, our role is to safeguard the quality and veracity of generative AI content. Rushing towards cost-cutting or efficiency without sufficient human validation introduces substantial risk, potentially damaging our brands due to mediocre or inaccurate AI-generated content. The imperative is clear: we must strike a balance, ensuring our drive for efficiency does not compromise our commitment to excellence and the trust placed in our brands by stakeholders.
Why the Anti-Open Sourcers Have AI All Wrong
Critics of open-source AI raise concerns about the safety and responsibility of releasing powerful AI tools so broadly. The argument relies on the perceived dangers of putting advanced AI technologies into potentially unqualified or malicious hands. Skeptics worry that without proper safeguards, open AI can lead to unintended consequences, ranging from the spread of misinformation to the creation of more sophisticated cyber threats.
But they’ve got it all wrong.
The latest article from an Expert Builder in the A.Team network, Asaf Zamir, demonstrates how open source principles, with their focus on transparency and collaboration, consistently win over the restricted nature of walled gardens and proprietary secrets.
FUTURE OF WORK
Is Data Science the One Job That Won’t Be Automated Away?
With Maximilian Metti, A.Team’s Head of Data Science.
Build Mode: Job postings for data scientists and people with any AI/ML experience are skyrocketing. Do you feel like your data science skills are more in demand than ever?
Maximilian: In reality, these tools have democratized things. This stuff is now more accessible to people who weren't able to do it before. Whereas if you're a data scientist, now it's like, okay, I want you to take our models and make them even better. That's where the data scientist comes into play.
You're saying that your PhD doesn't help you when using generative AI?
I don't think it helps me as an end user. But let’s say I have a vector database and OpenAI helps me vectorize all of my documents. And let’s say I don't like the way that they vectorize my data, because it doesn't pull up relevant documents for me. I can find a better way to vectorize it. That's where the PhD comes in.
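Max's point about swapping in a better vectorizer can be sketched in a few lines. This is a toy illustration, not OpenAI's actual embedding API: the bag-of-words vectorizer, the `retrieve` helper, and the sample documents are all invented for the example. The idea is the same, though: documents become vectors, a query becomes a vector, and "relevant" means high cosine similarity.

```python
import numpy as np

def vectorize(docs):
    """Toy bag-of-words vectorizer: one dimension per vocabulary word."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    vecs = np.zeros((len(docs), len(vocab)))
    for row, d in enumerate(docs):
        for w in d.lower().split():
            vecs[row, index[w]] += 1
    return vecs, index

def retrieve(query, docs, vecs, index, k=2):
    """Rank documents by cosine similarity to the query vector."""
    q = np.zeros(len(index))
    for w in query.lower().split():
        if w in index:
            q[index[w]] += 1
    sims = vecs @ q / (np.linalg.norm(vecs, axis=1) * (np.linalg.norm(q) or 1.0))
    return [docs[i] for i in np.argsort(-sims)[:k]]

docs = [
    "quarterly revenue report",
    "employee onboarding checklist",
    "revenue forecast for next quarter",
]
vecs, index = vectorize(docs)
print(retrieve("revenue", docs, vecs, index, k=2))
```

If the off-the-shelf embedding keeps surfacing irrelevant documents, the fix is to replace `vectorize` with something smarter (TF-IDF weighting, a domain-tuned model) while the retrieval loop stays the same. Knowing which replacement to reach for is, as Max says, where the PhD comes in.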
Everyone in this economy from designers to copywriters is worried about getting their job automated. Data science seems like it won’t be automated any time soon.
We're in this age where everyone wants to get their generative AI features out. It’s going to be a house of cards. You think you’re adding a new breakthrough thing and then it turns out literally anybody else can also make it. It’s like how no code hasn't killed developer jobs. Certain things will be automated away. Like semantic search—maybe that's not a relevant skill anymore. But I think that by solving one problem, many more pop up that have to be solved.
Data science might end up being the thing that takes a GPT wrapper and makes a defensible product.
Yeah, I think so. There's been a theme in machine learning for a long time, which is, do we use more advanced techniques? Or do we just use more data? And we’ve gotten to the point where there are general techniques that are advanced enough that you can just start throwing a bunch of data at it. That opens the doors for companies that don't necessarily have budget for data scientists. Like, Hey, let's just throw a shit ton of data at this.
Hear from Leaders on the Cutting Edge of AI
On November 30th at 3pm EST, the one-year anniversary of ChatGPT, we're partnering with Cooley LLP to gather some of the foremost builders, investors, executives, and lawmakers developing AI solutions for their workforces and the market in 2024. The event caps a year of working with hundreds of companies on their AI development from both a product and workforce standpoint.
We'll examine real-life case studies of how companies are leveraging AI to transform the way their teams work and build, and explore solutions to the biggest ethical and regulatory challenges that lie ahead.
We're bringing together a curated, invitation-only group of leaders from the world's most innovative enterprises, leading VC firms, and trail-blazing AI startups.
Join experts from Google, Inflection AI, & Microsoft for groundbreaking insights that will shape the future of AI.
AI DISCOVERY ZONE
The most natural way to get rid of frown lines? Don’t frown.
A tech artist out of SF has built Anti-Aging Software, an experience powered by a machine-learning model that analyzes your face and sounds an alarm when you frown.
DEEP DIVES FROM THE ARCHIVES
- Tech Founders: Don't Be Fooled By the Quiet Quitting Narrative
- The Great Betrayal: After Callous Layoffs, Workers Are Done With the Full-Time Work Model
GET TO KNOW YOUR CHATBOT
Every week we’ll ask AI a question, so we can get to know it a little better. This week, we wondered what ChatGPT smells like.
MEME OF THE WEEK
Missed last week’s issue of Build Mode? Read it here.