
Can you tell if AI wrote this newsletter?

At a Stanford laboratory, researchers discovered something startling about one of the most advanced AI systems: ChatGPT was bad at math.

In March, it identified prime numbers with an almost impeccable 97.6% accuracy. Yet come June, it had stumbled to a mere 2.4%. When the study landed, it felt like watching a brilliant mathematician inexplicably forget basic arithmetic.

But beyond these inconsistencies lies an even more profound puzzle: Maybe our relationship with AI isn't really about its accuracy or reliability at all. A different study—this one from MIT Sloan—found that our perception of AI-generated content has just as much to do with our own biases as with the chatbot's output. In the study, researchers showed people content created by ChatGPT, by humans, and by a combination of both. When people knew the content was made by humans, they liked it more. When they didn’t know whether it was machine-made or human-made, they tended to prefer the GPT version.

The researchers call this “human favoritism.” We're in love with human creativity, and yet, paradoxically, our hearts seem to tilt toward AI-generated content. It says something interesting about how humans operate: We want answers, but we don’t always want to confront the messy process behind how the answer was generated.

If the World Economic Forum's prediction of a 39% increase in AI-driven job creation comes to fruition, human and machine content will only get harder to distinguish. A recent paper from a top applied linguistics journal found that expert "reviewers were largely unsuccessful in identifying AI versus human writing, with an overall positive identification rate of only 38.9%."

In an age where data is the new oil and AI its refinery, the true challenge for businesses lies not just in harnessing this power, but in presenting it in a way that respects and understands the complex ways that audiences engage with content.


Here’s what workers want from employers when it comes to AI


Research from Charter found that 52% of workers were worried about job loss or replacement from AI. But more than that, employees—a solid 62% of them—want clear communication about their company’s AI plans relative to their roles.

As one respondent put it, “Our company could clearly state how AI will be used and for what purposes. They can also indicate how our team can utilize AI to make their roles more productive for the future.”

The audience is there. As the latest Edelman Trust Barometer discovered, employees trust employer-provided media more than any other type, including information from other corporations or their social media feeds. This is a huge opportunity for companies to boldly communicate their vision for how AI will intersect with the future of work.


Adobe's Scott Belsky on AI and the Storytelling Soul


In April, Casey Neistat—one of YouTube’s most prolific vloggers—made an unusual video.

Titled “A Day in Downtown Manhattan,” the video featured Neistat riding his electric skateboard around downtown Manhattan, unironically taking his audience to tourist traps below 14th street — the Oculus, Battery Park, the Charging Bull on Wall Street.

“Let’s take a quick look inside Brookfield Place, one of my favorite spots in downtown Manhattan,” he says, walking into the Battery Park shopping mall, looking more bewildered than a Swiftie at a Jets game.

For Neistat's fans, the video was disorientingly basic—the vlog version of a pumpkin spice latte.

Once it was over, Neistat addressed the camera and explained what he’d just made: every line of dialogue and every shot had been scripted by GPT-4.

“That was the worst video I ever made,” said Neistat. “That video sucked because it had no humanity. It had no soul.”

Scott Belsky—Adobe’s illustrious Chief Strategy Officer—told this story at the end of his keynote at the Propelify conference this past Thursday. In some ways, it was surprising. Belsky had just spent 15 minutes optimistically explaining how AI would usher in a new era of creativity, creating limitless personalized experiences.

But like many of us, Belsky has conflicting feelings about AI.

Belsky explained that Neistat’s story “resonated with a lot of the feedback that I'm getting from a lot of great creatives that I admire that are using this technology. They are realizing that as good as this technology is, it's really bad at counterintuitive, soulful things. It's bad at things that conjure up emotion. It's bad at things that make us find meaning in something that we didn't expect."

“And so that soulfulness, I think, is something we'll crave more than ever," Belsky continued. "We're going to crave these craft experiences. We’re going to crave storytelling … which goes against a lot of what I just said [earlier in this talk].”

Read the Full Story



There’s An AI For That is a curated database of nearly 9,000 generative AI tools. Hold on a second and let that sink in—9,000 generative AI tools! Our recent favorites include GodMode, which just sounds cool, and the instant classic: Business Idea Generator AI.




Missed last week’s issue of MISSION? Read it here.
