
What Spiders Can Teach Us About Generative AI's Potential

Just as spiders use their webs to extend their cognitive reach, we have the potential to use generative AI to extend ours.

Since dinosaurs roamed the earth, the orb-weaver spider has been the unsung architect of the animal kingdom, spinning massive, geometrically satisfying webs that range from the basic to the remarkably ornate. As miniature apex predators, they employ a suite of sensory and navigational tools for hunting. But these eight-legged engineers don't just build their own hunting grounds—they masterfully navigate them, using their webs as extensions of themselves to enhance their ability to interact with and understand their environments. In much the same way, humans have always looked for ways to extend our capabilities, from the first stone tools to the vast digital networks of today. Now, with AI, we're crafting our most complex tool yet.

In a study conducted in 2008, the Brazilian biologist Hilton Japyassú argued that the web isn’t just a trap for catching prey—it’s an integral part of the spider’s cognitive system. After collecting twelve species of orb-weaving spiders and clipping parts of their webs to mimic the nets of cobweb spiders, Japyassú observed a previously undocumented behavior in the orb weavers: the spiders adapted to the new web structure and began fishing for passing insects, offering a unique insight into the mysteries of behavioral evolution and extended cognition.

The concept of extended cognition, where our tools become intertwined with our cognitive faculties, was popularized by Andy Clark, a renowned philosopher and cognitive scientist. Clark posits that our cognitive processes aren't confined to our brains but can extend into the world through our use of tools and technology. He believes that, as humans, we have a “heightened ability to incorporate props and tools into our thinking, to use them to think thoughts we could never have otherwise.” Of course, in 1995, the idea that our smartphones could serve as an extension of our cognitive abilities seemed far-fetched—but as technology advanced and our reliance on our devices grew, Clark's idea began to resonate.

As humans, we've been diligently expanding our cognition for millennia, from the advent of writing to the evolution of smartphones. The adaptability of the orb weaver offers an important lesson: Extended cognition is a survival tool. Just as spiders use their webs to extend their cognitive reach, we have the potential to use generative AI to extend ours—expanding our cognitive boundaries and reshaping our interactions with the world. The capabilities generative AI unlocks could expand our cognitive web to a previously unimaginable scale. Which raises the question: How do we make AI an integral part of our own cognitive processes?

The Cognitive Leap: From Idea Generation to Idea Evaluation

At a recent conference, OpenAI’s CEO, Sam Altman, was asked about the future of AI and its implications for students, particularly in terms of their careers. Taking a page right out of the Almost Famous playbook, he addressed his young audience with the confidence of a rock star about to swan-dive into a swimming pool: “You are about to enter the greatest golden age,” he said.

Despite being overly optimistic—or delulu (that's Gen Z for 'delusional')—Altman’s not wrong. The advancements in AI, particularly in generative models, are a testament to the golden age he alludes to. Generative AI can produce ideas that are not only unique but also have a higher probability of success. A typical new-product innovation effort can encompass thousands of unique ideas—numbers so vast that an individual or even a small group of experts would struggle to identify the majority of them. With the assistance of generative AI, this once unimaginable level of productivity becomes possible.

In a study comparing the idea-generating abilities of GPT-4 with those of students from top U.S. universities, GPT-4 churned out a whopping 200 ideas in just fifteen minutes of human interaction; a person can muster only about five ideas in the same timeframe. But it's not just about quantity: the ideas GPT-4 generated were also higher quality than those conceived by humans. The average purchase probability of the human-generated product ideas was 40.4%; when seeded with the right data, the purchase probability of GPT-4’s ideas jumped to an impressive 49.3%.

Paired with an LLM, a human collaborator could potentially articulate nearly every idea within a given opportunity space, leading to faster product launches, more efficient R&D processes, and a greater emphasis on human-AI collaboration in the workplace. It's a shift that could permanently move the focus of innovation from idea generation to idea evaluation. But this is where balancing rapid AI-driven ideation with ethical considerations becomes crucial.

Sometime between 2030 and 2060, half of the work activities we currently perform could be automated, a timeline roughly a decade earlier than previous expert estimates. AI optimist Marc Andreessen, venture capitalist and cofounder of Andreessen Horowitz, explains that, used correctly, AI not only has the potential to save the world but is quite possibly “the most important – and best – thing our civilization has ever created.” And 98 percent of global executives agree, stating that generative AI will play a fundamental role in their organizations’ strategies over the next three to five years. But integrating AI into our cognitive processes isn't just about enhancing productivity. It's about creating a relationship where humans and AI work together, each amplifying the other.

In 2005, Playchess.com hosted a freestyle chess tournament in which teams of humans and computers competed together. Surprisingly, the winners weren't a grandmaster paired with a supercomputer but two amateur chess players. Their victory wasn't due to superior computational power, but to their ability to effectively coordinate, coach, and work with three computers. This underscores a pivotal lesson: the efficiency of a tech-human partnership hinges on the interaction process. GPT-4 can churn out ideas at scale, but it hasn't yet demonstrated the ability to produce consistently accurate results—it's the human perspective that determines which ideas resonate with real-world needs.

Expanding Our Cognitive Capability Without Losing Ourselves

You’ve probably seen the meme floating around the Twitternet stating that “AI won’t replace you, a person using AI will”—and while it sounds dramatic, behavioral evolution suggests there's truth to it. Over the past fifty years, the ‘natural selection’ of specific technical tools has led to increasingly widespread cognitive extension. Just as in nature, where certain traits become more common in a population because they help the species survive, the same happens with technology: the tools and systems that help us the most are the ones that become more popular and widespread.

This back-and-forth is what researchers refer to as co-evolution—a dynamic that could leave us technologically dependent. And the public's sentiment echoes this concern: A recent poll from the Artificial Intelligence Policy Institute (AIPI) and YouGov paints a picture of America's apprehension about AI's rapid advancement. The survey revealed that 60% of respondents fear AI might strip life of its meaning, pushing humans to the sidelines, while only 19% see AI as a potential enhancer of creative expression. A striking 62% believe AI might dull our cognitive edge by over-automating our lives, making us less reliant on our own abilities. And while 72% of voters want to slow down AI development and usage, it’s too late to stop the progress. All we can do now is regulate it and choose which risks we are willing to endure for the sake of technological advancement.

Earlier this month at A.Team’s Generative AI Salon, Juliette Powell, AI ethicist, researcher, and author of The AI Dilemma, provided a timely perspective on the broader implications of this shift. She delved into the double-edged nature of AI: its vast potential and its inherent risks. Powell pointed out that when leading companies set a precedent in AI development and deployment, others often follow—a dynamic she calls the "Apex Benchmark."

Just as spiders maintain ecological balance by ensnaring prey, industry leaders set the tone for the entire AI ecosystem. When they lower their standards, it prompts a ripple effect, causing competitors to follow suit. Powell highlighted the hasty release of Google's Bard as an example of an industry giant compromising in the race to stay ahead. This race, driven by the desire to dominate, can lead to a continuous lowering of standards across the entire industry. As our cognitive processes evolve to integrate more AI-driven tools and methods, the Apex Benchmark serves as a stark reminder: we must prioritize human considerations amid rapid technological advancement, ensuring that as we leap forward, we do so in a way that benefits all of humanity, not just big tech.

Altman might envision an AI that can build on human knowledge to delve deeper into the mysteries of nature and unlock new insights, but understanding the complexities of nature demands more than a language model—it requires human scientists. The best ideas don’t come from number crunching alone; humans rely on feeling, imagining, and sometimes just gazing at the stars. These human qualities, termed "authentic intelligence," are crucial, especially in open systems where interactions with external environments are continuous. If a large language model is fed flawed information, the output it produces could be just as unreliable. Until GPT can interact directly with the physical world, its outputs depend strictly on the data we feed it.

Eric Solomon sees AI as a tool we're probably going to be living with for the rest of our lives. A PhD and a veteran of the big leagues—ex-YouTube, Spotify, Google, and Instagram—he has worked at the intersection of psychology, technology, and creativity for over twenty years. Like any tool, generative AI’s effectiveness depends on the skill and understanding of the user. “There’s a 'black box' nature of AI, where even the creators are in the dark about its inner workings,” explained Solomon. If we don't understand the inner workings of these systems, how can we truly harness their potential without falling into pitfalls? Asked whether people can choose to opt out of AI, Solomon responded candidly: "It's a little bit foolish and reactionary to be like, ‘no, not for me.' Yeah, it is for you. The whole world's going to be inundated with it—like it or not."


The challenge lies in striking a balance: using AI to enhance our capabilities without letting it overshadow our unique human qualities. Think of the spider: it didn’t throw its eight legs up in despair and abandon its web when it was cut; it adapted to the new conditions and used them to its advantage. We can weave AI into the fabric of our society in a way that actually serves us. As Altman explained in a recent interview, we're on the brink of sharing our world with a powerful new form of intelligence that could reshape everything from our work to our personal relationships. Releasing ChatGPT was just his way of giving us a heads-up about what’s to come.

The spider and its web offer a powerful metaphor for the parallels between the natural and the technological worlds. With generative AI, we have the opportunity to extend our minds beyond our biological boundaries. As we continue to integrate AI into our work, our businesses, and our lives, we are extending our cognition in ways we never thought possible. We are outsourcing cognitive tasks to our devices, freeing up our minds for more creative and complex thinking. And as we do so, we are not just changing the way we interact with the world; we are changing the very definition of what it means to be human.
