Two distinct points of view have emerged in the "rise of AI" discourse. On the one hand, some claim that AI will be a useful tool for humans, allowing them to offload tedious tasks and focus on more meaningful work. On the other, some claim that AI will replace humans, saving companies boatloads of money in the process.
The release of ChatGPT in November 2022 catapulted the conversation around AI capabilities to the forefront. But since then, a lot of the shine has worn off. Most of the tools I use have some kind of AI feature, but to be honest, few of those features are genuinely good or useful. Yet many people remain bullish on AI and continue to proclaim its greatness.
Meanwhile, workers are rightly concerned about the narrative that AI will replace them. I recently listened to an episode of The Interview podcast featuring Ted Sarandos, the CEO of Netflix. He said:
"There's not a scenario, I don't believe, that an AI program is going to write a better screenplay than a great writer, or going to replace a great, great performance where we won't be able to tell the difference. AI is not going to take your job. The person who uses AI well might take your job."
I think his perspective is a good one. Creativity is something uniquely human. At the same time, AI isn't going away, much like computers didn't go away. People once had to learn tools like Microsoft Word and email as a core component of their jobs; eventually, that knowledge was simply presumed.
I'm disgusted by people who push the narrative that AI is some type of magical solution, mostly because I think they have ulterior motives (say, convincing people to buy their AI product...). It feels a lot like cryptocurrency hype at this point: a push for as much buy-in as possible, driven by people who benefit financially. While I don't think AI will suffer the same downfall as crypto, the overhype leaves people skeptical and worried rather than willing to explore an emerging technology.
Humans are expensive. Robots are cheaper. Bosses are greedy.
Recently, I saw someone on LinkedIn post a breakdown of the cost of writing a blog post. He speculated that it costs $1,000+ per post, covering roughly 20 hours of work split between a writer and an editor, with each role earning $50-$60/hour.
Under his proposed process, using generative AI, the first step is:
A human uses AI to generate a well-researched, structured draft in about 1 hour. The cost per article might be around $10.
He then has a human editor spend 2+ hours revising the article, bringing the final cost to $120.
Somehow, between his first and second examples, the writer went from earning $50/hour to $10/hour... The logic wasn't fully explained, but I can only assume it's because crafting an article from scratch is worth $50/hour in his view, while "AI prompting" is only worth $10/hour.
He ended with: “There's already been enough people doing really cool things with AI to demonstrably show that AI content can be high-quality when done correctly."
I have yet to see an example of an AI-generated article that didn't need substantial human editing — certainly more than the 2 hours this person suggests. And — as a shocker to no one — this person is trying to sell a generative AI product.
Then there was Lattice, an HR tech platform. Founded in 2015, Lattice is a fairly mature company in the startup space, having raised more than $300 million in venture capital funding.
On July 9th, the company announced that it was "leading the way in responsible employment of AI." In a blog post, the company wrote:
We will be the first to give digital workers official employee records in Lattice. Digital workers will be securely onboarded, trained, and assigned goals, performance metrics, appropriate systems access, and even a manager. Just as any person would be.
This marks a significant moment in the evolution of AI technology – the moment when the idea of an “AI employee” moves from concept to reality in a system and into an org chart.
The backlash was swift and well-deserved. Comments on CEO Sarah Franklin's LinkedIn post ranged from mockery to anger.
PR firm CEO Ed Zitron asked a series of pointed questions that laid bare the absurdity of Lattice adding AI "employees" to its org chart. A few days later, Lattice added the following update to its blog:
“This innovation sparked a lot of conversation and questions that have no answers yet. We look forward to continuing to work with our customers on the responsible use of AI, but will not further pursue digital workers in the product.”
Robots are not people. They shouldn't be considered on the same level as people. People earn paychecks and need health insurance and feel things. And if the AI “employee” breaks down or doesn’t do its job, a human will be the one to reprogram it.
If the entire episode is a head-scratcher, a look at Lattice's leadership may shed some light. Lattice's founder is Jack Altman, brother of Sam Altman, the CEO of OpenAI. Jack Altman still serves on Lattice's board as Executive Chairman.
Sam Altman was interviewed for the book Our AI Journey by Adam Brotman and Andy Sack. When referring to consumer brand marketing, Sam Altman said:
"95% of what marketers use agencies, strategists, and creative professionals for today will easily, nearly instantly and at almost no cost be handled by the AI — and the AI will likely be able to test the creative against real or synthetic customer focus groups for predicting results and optimizing. Again, all free, instant, and nearly perfect. Images, videos, campaign ideas? No problem."
As I said before, the people pushing this type of narrative are the people who benefit most from it. Of course, the idea that work can be done at a fraction of the cost is appealing to CEOs. They are desperate to believe it's true because humans are expensive, especially for any company doing the decent thing and paying its employees a fair wage.
Humans are hard. Robots are easier. Bosses are greedy.
The level-headed leaders in conversations about AI are those who rightly point out that AI is a tool for humans, not a replacement for humans.
May Habib is the CEO of Writer, a generative AI platform for enterprise companies. She wrote about a company that had recently laid off 1,800 workers but planned to hire 1,800 more who would be "better suited for an AI future."
May wondered why the company wouldn't let the existing employees re-interview and restructure their roles before laying them off. As she rightly points out: "It's not like the market is awash with knowledge workers highly skilled with how to use AI to get to 200% productivity."
Yes, some people will lose their jobs to AI. This has been true with many technological advancements. Many roles have simply ceased to exist over time. But I feel strongly that it's not as grave as people make it out to be. Responsible companies will incorporate AI alongside their existing employees. Bill Gates even thinks a 3-day workweek is possible in a world where AI can handle repetitive tasks.
However, the key word in my prediction is responsible. I might be overestimating the number of companies that will act responsibly.
That's a nice ideal, but likely not reality. One of two things will happen: either companies will use the freed-up time to hand humans more high-value work (without a pay increase for the added value), or they will assert that since humans are only working three days, they should only be paid for three days. Either way, the company benefits, not the worker. AI isn't replacing the worker, per se, but the worker still loses.
Throughout the history of work, we've equated productivity with the number of hours put in. The guy claiming an AI-assisted draft is only worth $10 is certainly doing that. I pushed back in the comments on his post, telling him that, as a writer, I don't charge for the number of hours I put into a post; I charge for the result, which reflects my skill and expertise. He quickly replied that his company also pays writers a flat rate, saying, "I want [the writers] getting to those outcomes in the least amount of time and with the least amount of effort."
That's the opposite of his original argument. His breakdown touted how much less a blog post would cost the company paying for it, and that math priced the writer's work by the hour, not by the outcome.
Perhaps AI will bring about a fundamental shift in how we think about work, one where we finally frame it around the quality of output rather than the number of hours put in. It's certainly possible. But maybe I'm being idealistic again.
If you love this newsletter and look forward to reading it every week, please consider forwarding it to a friend or becoming a subscriber. Subscribers get access to additional stories I publish.
Have a work story you’d like to share? Please reach out using this form. I can retell your story while protecting your identity, share a guest post, or conduct an interview.