The Big Idea: Everyone needs an AI perspective. Do you know yours?
We just got our weekly AI dopamine hit.
Yesterday, Google unleashed its answer to GPT-4. It’s called Gemini, and the “Ultra” version, which won’t be released until next year, is supposed to rival GPT-4.5 Turbo. Its software-dev and multimodal capabilities will likely power future enterprise applications. You can start using it right now in Bard.
It’s thrilling. Everything is moving so fast! There’s so much we can do! The AI hype wave is a gale force wind sweeping us along. But as we head towards the end of the year, we need to take a step back and acknowledge an uncomfortable reality:
Most companies are rushing into AI implementation without something crucial: a coherent point of view on AI.
This was one of the biggest takeaways from our AI x Future of Work Summit last week, where Dr. Eric Solomon — one of the minds behind Spotify Discover Weekly and Spotify Wrapped — challenged the tech leaders in the room to actually understand why and what they're trying to build with AI.
According to Dr. Solomon, there are 4 stages to technological innovation and adoption:
1) PANIC!
2) Slowly Adapt
3) New Normal
4) Never Go Back
Guess where we are right now? The PANIC! stage.
We’ve talked to hundreds of companies this year about their AI strategies, and the truth is that most don’t fully know what they believe about AI and what they want to get out of it.
They’re nose-diving into the AI hype cycle without answering critical questions like: What business problems do you want to solve? What do you want to enable your employees to do better? What better experiences do you want to create for your customers?
In his workshop, Dr. Solomon introduced a powerful and humanistic heuristic to develop your PoV on AI. It has two elements — try it out for yourself:
Ambition: What do you want to accomplish with AI?
Ethical commitments: How do you commit to using AI ethically as you develop and build products?
This exercise generated some eureka moments for the audience. We hope it does for you too.
Of course, Dr. Solomon wasn’t the only AI expert in attendance on the first anniversary of ChatGPT. You can catch it all here, but we also wanted to share the biggest takeaways from our lineup of enterprise AI leaders, founders, VCs, and policy-makers, from DC to the workforce:
- Legal disclaimer: Everyone’s getting sued! Adam Ruttenberg, a partner at Cooley (which handles lots of tech deals), gave a TED-style talk on how to avoid getting sued over your LLM. “Here’s the short secret answer: You can’t.” We’re entering a period of uncertainty, especially around the legal framework for generative AI. What we do know for certain is that the hype is far in excess of the capabilities.
- Build Your Own Tools (before sharing them with customers): A superstar panel of female AI experts, from IBM to LinkedIn, explored enterprise AI use cases. Sofia Vizitiu of Pypestream shared one she built: a travel recommendation engine for an airline using generative AI, which gestures towards a whole new level of customer support. (They won’t let you take a carry-on, but they will give you chatbot travel tips!) Nisha Iyer of Atlassian said they’re focusing on internal tooling before they offer gen AI tools to customers.
- If your data is bad your model will be too: Athena Karp, HiredScore's CEO, led a fascinating panel with the CPO of Aptiv, the WSJ's Lauren Weber, and just-departed McKinsey Senior Partner Bill Schaninger on how the future of talent acquisition will require transitioning from conventional hiring methods to an approach that leverages AI. Schaninger’s take? “If you’ve done a lousy job knowing what you know, if you’ve done a lousy job with your routine knowledge management, any model you test on will be garbage.”
- Best bet for navigating future AI regulations? Don’t break any existing laws: The buzziest panel brought together Keith Sonderling, the EEOC Commissioner, Fox's in-house counsel and head of Responsible AI, and Insight's in-house AI expert to dive into what's ahead with responsible AI regulation next year. Rare sighting: Men wearing ties in the A.Team Clubhouse! Sonderling said the one-year anniversary of ChatGPT is also the one-year anniversary of most people in Washington caring about AI. “Or understanding what AI is.” Cue laughter from the crowd. His take: don’t hold your breath for AI-specific regulation from Congress, but DO take care to follow EXISTING regulations in your industries that protect against bias and discrimination.
Luckily, we recorded all the sessions and posted them on YouTube, so you can access them here and share them with your team.
AROUND THE WATERCOOLER
Searching for God: Understanding AI Monosemanticity
What’s inside AI?
Anthropic's researchers conducted an experiment using two AIs. First, they trained a small, simple AI model. Then they used a second network, called a sparse autoencoder, to interpret what the first model was doing.
They discovered that while the original model’s inner workings remained complex and hard to track, the features learned by the autoencoder they used to study it were monosemantic, meaning each one corresponded to a specific, distinct concept. For example, it could identify a particular concept, like "God," with a specific feature.
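For the technically curious, the core trick can be sketched in a few lines of numpy: train a small, overcomplete "dictionary" network with a sparsity penalty, so that each learned feature tends to light up for one underlying concept. Everything below (dimensions, the synthetic "activations," the training loop) is an illustrative toy of our own, not Anthropic's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model activations": 4-dim vectors that mix 8 underlying concepts
# (a stand-in for the small model's neuron activations).
n_concepts, d_model, d_hidden = 8, 4, 16
concept_dirs = rng.normal(size=(n_concepts, d_model))

def sample_activations(batch):
    # Each sample activates a few random concepts (superposition).
    mask = (rng.random((batch, n_concepts)) < 0.2).astype(float)
    return mask @ concept_dirs

# Sparse autoencoder: overcomplete ReLU encoder + linear decoder.
W_enc = rng.normal(scale=0.1, size=(d_model, d_hidden))
W_dec = rng.normal(scale=0.1, size=(d_hidden, d_model))
b_enc = np.zeros(d_hidden)

def sae(x):
    f = np.maximum(0.0, x @ W_enc + b_enc)  # sparse feature activations
    return f, f @ W_dec                     # features, reconstruction

lr, l1 = 0.05, 1e-3
for step in range(2000):
    x = sample_activations(64)
    f, x_hat = sae(x)
    err = x_hat - x
    # Gradient descent on ||x_hat - x||^2 + l1 * ||f||_1
    gW_dec = f.T @ err / len(x)
    gf = (err @ W_dec.T + l1 * np.sign(f)) * (f > 0)  # ReLU gradient
    gW_enc = x.T @ gf / len(x)
    b_enc -= lr * gf.mean(0)
    W_enc -= lr * gW_enc
    W_dec -= lr * gW_dec

# After training: features are non-negative and mostly inactive (sparse),
# which is what makes them easier to read off as individual "concepts."
x = sample_activations(256)
f, x_hat = sae(x)
recon_err = float(np.mean((x_hat - x) ** 2))
sparsity = float((f > 1e-6).mean())
```

The L1 penalty is what pushes each feature toward firing for a single concept; without it, the hidden units blur concepts together just like the original model's neurons do.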
This research shows that AI is not just a random generator of responses. It has the capacity for detailed and specific understanding, much like humans—providing fascinating insight into AI’s cognitive process and opening up new avenues for understanding both artificial and biological neural systems. By looking into AI's thought process, we can make it safer, more reliable, and more in tune with our needs.
In short, the study by Anthropic is a big step towards making AI more transparent and trustworthy—a significant finding as it contrasts with the commonly held notion that AI is an indecipherable black box.
CHART OF THE WEEK
There’s an AI talent shortage in the labor market
By 2028, 90% of companies plan to use AI, affecting not just IT but all departments.
A study by Access Partnership and Amazon Web Services reveals that 73% of employers consider hiring AI talent a priority, indicating that AI is becoming the most coveted skill in the tech job market. 51% of employers placed AI development among their top five skill requirements for the next five years, and they estimate that workers who acquire AI expertise could see their paychecks jump by 30% or more.
But the transition to an AI-driven workplace is not without challenges: About 75% of employers prioritizing AI talent acquisition report difficulties in finding qualified candidates. This shortage is exacerbated by a notable training awareness gap. Nearly 80% of employers admit to lacking knowledge in implementing AI training programs, paralleling the 79% of workers uncertain about available AI training opportunities.
What does this mean?
Organizations must navigate the complexities of integrating AI into their ecosystems, not just technologically but also in terms of workforce development. Concurrently, there's an urgent need to bridge the training awareness gap, ensuring that current and future employees are equipped to thrive in an AI-augmented landscape.
How Will DC Regulate AI? Check the 1964 Civil Rights Laws
Everyone's scratching their heads about what kind of AI regulation is going to come out of DC. But the quick answer is the most obvious one: Don't discriminate in hiring, and don't take data that isn't yours.
At the A.Team AI x Future of Work Summit, EEOC Commissioner Keith Sonderling and other experts outlined practical ways for businesses to cut through Washington’s noise and align AI strategies with the regulations that matter.
Join A.Team’s next private forum for AI leaders
On Thursday, we celebrated the first anniversary of ChatGPT at the AI x FoW Summit, with nearly 300 attendees, including VCs on the cutting edge of AI, top tech leaders, and members of the White House and SEC, to dissect and forecast the trajectory of AI and its profound impact on the workforce and beyond. Watch the recording here.
If you’re interested in joining A.Team’s next private forum, early registrations are now open. Sign up now to get access to our first event of 2024.
AI DISCOVERY ZONE
Ever wish you had an at-home polygraph? LiarLiar uses AI to detect when someone’s lying – and it’s compatible with Zoom and FaceTime, so no one in your life can ever deceive you again.
DEEP DIVES FROM THE ARCHIVES
- How A.Team Helped Supercharge Nurses' Productivity with AI for a Fortune 500 Health System
- Is OpenAI’s Q* the Next Step Towards AGI?
MEME OF THE WEEK
Missed last week’s issue of Build Mode? Read it here.