A few days ago, I saw a post on LinkedIn:
Prediction: If your job can be done remotely AI will replace it. Maybe not tomorrow or in a year. But AI applications can do practically anything.
Really? Anything? A telehealth appointment with a doctor working remotely? A consult with a lawyer working remotely? Salespeople who build relationships and close deals remotely? As a matter of fact, I've worked for three different companies with CEOs working remotely since 2010. So, based on this person's logic, a CEO could be replaced as well.
(I went back to comment later, and the person had taken down the absurd post. Thankfully, screenshots live forever...)
While I'm sure the original poster shared this "hot take" as engagement bait (and the rest of his content confirms this theory), it shows a fundamental misunderstanding of 1) how much human-centered knowledge work can be done remotely and 2) how wide the gap is between AI hype and AI reality.
Disconnect #1: What humans actually want from AI
I'm sure some CEOs are salivating at the idea that AI can replace people. After all, people are expensive and problematic (they want promotions? and a harassment-free work environment? How dare they.) Airbnb CEO Brian Chesky said in an interview that he dislikes 1:1 meetings because the employee "owns the agenda" and brings up subjects the manager may not want to discuss. "You become like their therapist," said Chesky.
And there's no shortage of bad managers in the world who simply don't know how to deal with people or who haven't received proper training. Wouldn't AI be a lot easier...?
HR platform Lattice announced last year that its customers would be able to add "digital workers" (AI) into the company org chart. The backlash was so fierce that Lattice announced three days later that it would no longer pursue digital workers within its product. The assumption that digital workers are somehow "equal" to human counterparts and deserve a place within a company org chart was incredibly tone-deaf — especially as some people legitimately wonder how AI will impact their jobs in the future.
Much as CEOs might love the idea, their visions often don't match what people want. And I'm not just talking about employees who might have their jobs replaced by AI: I'm talking about what customers want. People consuming the product or service may not want to interact with AI. A study found that products described as using AI were consistently less popular among consumers. In a statement, lead author of the study Mesut Cicek said:
When AI is mentioned, it tends to lower emotional trust, which in turn decreases purchase intentions. We found emotional trust plays a critical role in how consumers perceive AI-powered products.
Meta seems to have missed that memo about customer trust with its announcement that it will introduce AI profiles on Instagram and Facebook. Connor Hayes, Meta's vice president for generative AI, said:
We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do. They’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform.
Meanwhile, actual users of Meta products think this will degrade the product experience. Meta has spent most of its existence fighting bots on its platforms, and now it thinks introducing more bots would be a good idea...? A few days later, as the backlash intensified and people began sharing screenshots of the "AI slop" from the profiles, Meta scrambled to remove them, calling the rollout an "early experiment."
There is some work that AI is well-suited for: repetitive tasks, working with large datasets, or predictive modeling. But expecting AI to replace the human experience is nonsense.
Disconnect #2: AI hype doesn't match reality
A few weeks ago, Klarna CEO Sebastian Siemiatkowski said in an interview that the company had stopped hiring a year ago. He also said, "I am of the opinion that AI can already do all of the jobs that we as humans do."
TechCrunch called B.S. After examining Klarna's job postings on its own website and comments from Klarna employees on LinkedIn, TechCrunch found that Klarna is hiring for more than 50 roles around the globe, and managers have reported that they are actively growing their teams. Open roles include policy, software engineering, and global partnerships. As it turns out, AI can't do all the jobs that humans can do.
Therein lies the second disconnect: a parallel universe in which AI can take all human jobs (as the original poster at the beginning of this article and the CEO of Klarna seem to believe, among many others). This flawed thinking assumes that humans (customers) want to interact with AI (hint: we don't) and that they won't seek alternatives when they're frustrated by AI being forced down their throats.
Last year, I wrote about LinkedIn's "takeaways" feature: AI-generated text that would appear below a post in the feed. The takeaways were a shallow summary of the original post and annoying to see in the feed. LinkedIn mistakenly assumed that users didn't want to read a few hundred words (or fewer) in the original post and instead wanted a recap devoid of personality... to what end? It was the feature no one asked for and no one needed. As I write this, takeaways have (thankfully) disappeared.
Meanwhile, AI startups are receiving billions in funding, with founders claiming that they know what people want. Yet 85% of all AI startups will fail within the first three years, according to Edge Delta, with one reason being poor product-market fit. Turns out, slapping AI onto everything doesn't work, and many, many startups fail to live up to their promise of actually solving a real-world problem.
OpenAI CEO Sam Altman claims that the company knows how to build AGI — artificial general intelligence, or AI that is as smart and general as a human. Current AI models rely on predictive responses, giving you the "best" answer based on a large dataset rather than actually thinking. AI experts think that AGI will happen somewhere between 2035 and 2050, though Altman claims it will be much sooner. He said, "My guess is we will hit AGI sooner than most people think, and it will matter much less," and that AGI has "become a very sloppy term."
Of course, downplaying AGI's capabilities makes the benchmark easier to reach. Lower the bar and, eventually, you'll hit it.
Most issues of this publication are free because I love sharing ideas and connecting with others about the future of work. If you want to support me as a writer, you can buy me a coffee.
If you love this newsletter and look forward to reading it every week, please consider forwarding it to a friend or becoming a subscriber.
Have a work story you’d like to share? Please reach out using this form. I can retell your story while protecting your identity, share a guest post, or conduct an interview.