Big tech is pulling up the ladder...
“Replicants are like any other machine. They're either a benefit or a hazard. If they're a benefit, it's not my problem.” - Rick Deckard, Blade Runner
If you’re not a subscriber, here’s some of what you missed last month:
Subscribe to get access to these posts, and every post.
How big tech is attempting to use sci-fi fears to monopolize AI.
3 AI tools to make your work, your job search, and your life easier - Secta AI, Pictory, and DataRobot.
AI news: rumors of AI-induced extinction are highly exaggerated, that hasn’t stopped big tech from fear-mongering anyway, and why you should hope the WGA writers win.
Missing the forest for the synthetic trees
Hey ChatGPT, define “pulling up the ladder”...
“‘Pulling up the ladder’ is an idiom that refers to the act of achieving success, advancement, or security and then denying or making it more difficult for others to follow the same path.”
On Tuesday of this week, over 350 leaders in the AI space (most of them affiliated with very large companies heavily invested in AI) signed a one-sentence warning:
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
A terrifying possibility, surely... but if you want my honest opinion, I’m calling bullshit. This sentence is straight out of science fiction - and while that’s usually my jam, this all just reeks of corporate gatekeeping to me. The narrative that AI feasibly poses an existential threat to humanity is surfacing at far too convenient a time to serve anything other than the tech giants’ own interests.
It’s no coincidence that these calls are getting louder and louder from the top tech firms at the same time that people like OpenAI’s Sam Altman are meeting with politicians to determine the regulatory framework that will dictate who can and can’t use powerful AI models like the LLM that powers ChatGPT.
And it’s not just me who smells something rotten. Tim Wu (Biden’s former antitrust advisor) has been outspoken about big tech’s push to consolidate control over the growing AI industry.
“There's a lot of . . . economic possibility in this moment. . . . There's also a lot of possibility for the most powerful technological platforms to become more powerful and more entrenched.”
Btw, I highly recommend reading this article from The Washington Post which details Tim’s framework for regulating AI without concentrating even more power in the hands of a few chosen elites.
“But why would companies who are selling AI products want to actively make people scared of AI?” you may be asking. The answer is somewhat counterintuitive, but it makes all the sense in the world when you think about these companies’ motives.
You see, as with most tech products, it’s actually in big tech’s interest to present their technology as significantly more advanced than it is (e.g., everything Tesla has ever done). By simultaneously conflating generative AI with artificial general intelligence (AGI) and loudly calling for regulation favorable to their own business interests (i.e., shutting out smaller competitors and maintaining their outsized share of the market), Altman, OpenAI, and the other tech giants are attempting to build a moat around their core moneymaker while also scoring a PR win from the AI-crazed news cycle - positioning themselves as the sole entities responsible enough to own and sell the tools that will be crucial to future advancements.
In the words of Samantha Floreani for The Guardian:
“This all acts as both a marketing exercise for and a diversion from the more pressing harms caused by AI.”
While most of us like to think of ourselves as rational and discerning, it’s important to recognize the reality that the primary and most powerful exposure that most people have had to advanced AI has been through apocalyptic sci-fi movies, television, and books. It’s not too surprising then that arguments which lean into apocalyptic sci-fi tropes have been effective at engendering a fear of AI in both the public forum and among political movers and shakers. In reality, the main threats posed by AI are not only terrifying, but are already being felt by the most vulnerable members of society. To learn a little more about these real world threats, read my article: “Should we press pause on AI?”
For those who’ve lived and breathed sci-fi over the past several decades, it’s a bit terrifying to see the parallels that can be drawn between the cyberpunk worlds of books, movies and anime and the early landscape of AI in our own real world. For instance, possibly my favorite movie of all time: Blade Runner.
At the outset of Blade Runner, there’s a clear black-and-white dichotomy... humans = good, robots = bad. Our protagonist, Rick Deckard (played by Harrison Ford in his BEST role, fight me), is the titular Blade Runner, a special kind of detective whose sole mission is to find and kill “replicants” (the movie’s version of robots) who escape their bondage as forced labor on the off-world colonies to live free in the ecologically depleted world of human society.
As he hunts down four escaped replicants who have managed to hide themselves in the neon-lit streets of a futuristic LA, Deckard slowly discovers that the replicants are in fact sentient, feeling, empathetic life forms with their own complex fears and hopes, and the same drive to live as any human being. In reality, the villain of the story isn’t the superhuman robots; it’s the mega-corporation that created them in the first place.
The Tyrell Corporation is the mega-corp responsible for the creation of the replicants and (as the sequel, Blade Runner 2049, shows) for the Blade Runners who decommission them when they refuse to obey their programming. They’re both the purveyor of the AI and the bright red warning light urging humans to live in fear of the “dangerous” technology they themselves created. Little does the average person know that the threat is overstated, if not completely manufactured. The fear is intentional; it’s meant to keep Tyrell in control.
So, you ask... why have I been going on about Blade Runner for several paragraphs, and what does this have to do with OpenAI and Sam Altman?
Quite simply, our real-life mega-corps (Google, Amazon, Microsoft, and others) are sprinting to secure their control over the most promising technological innovation of our time. And just like the Tyrell Corporation, the real-life, tangible threats of AI (mis- and disinformation, bias and discrimination, surveillance, and more) matter little to them. What matters to them is securing market share, and they’re all realizing that by abstracting reality and focusing on the threats that AI might one day pose, they may just be able to rig the system to keep the technology solely in their own hands.
Is the possibility of one day achieving AGI a threat? Absolutely. But it’s also a far-off fantasy latched onto by commentators due to a combination of sci-fi tropes and poor media literacy. These commentators would do well to remember that in most of these stories, it isn’t actually the tech that’s dangerous - it’s the people who wield it.
So next time you see a headline about how AI could lead to the “extinction” of the human race, remember the motives of the people disseminating this fear. Oh and watch Blade Runner!
3 AI-powered tools to turbocharge your efficiency, today
The product - Secta Labs offers a unique service, where their AI transforms regular photos into professional headshots. Users provide the AI with at least 25 of their favorite photos, and the AI generates hundreds of professional-looking headshots within an hour. With the ability to generate both professional and casual styles across a wide array of themes, Secta Labs ensures that you have a diverse range of headshots to choose from. In case you're not satisfied with the results, they offer a 100% money-back guarantee, and your photos are never shared with any third party.
The use case - for any professional in need of high-quality profile pictures for user interfaces, digital identities or their job search, Secta Labs could be an invaluable tool. It drastically simplifies the process of obtaining professional images. Whether it's for social media platforms, your LinkedIn, or any digital product that requires your picture, Secta Labs can provide a fast, efficient, and cost-effective solution for professional headshot generation.
Listen to my interview with Marko Jak, co-founder of Secta, here.
The product - Pictory AI is a comprehensive video marketing toolkit powered by advanced AI. It can transform long videos into short branded video snippets, turn scripts into high-conversion sales videos, transform blog posts into engaging videos, and automatically add captions to videos. With its ability to create videos from a variety of content formats, Pictory AI is a versatile tool for video content creation.
The use case - for product folks looking to leverage video content for their product marketing, Pictory AI offers a suite of AI-powered video creation tools. Whether it's creating engaging promotional videos from scripts or blog posts, generating micro-content from longer videos, or enhancing video accessibility with automatic captions, Pictory AI can help streamline the video creation and editing process. This can lead to improved user engagement, greater content reach, and ultimately, better product visibility.
The product - DataRobot offers a full-lifecycle AI platform that combines broad ecosystem interoperability with a team of AI experts. With offerings like collaborative experimentation experiences, assured governance and compliance, and a broad enterprise ecosystem, DataRobot provides a robust platform for deploying AI solutions. It's trusted by 40% of the Fortune 50 and offers various deployment options, including a dedicated managed cloud, private cloud, on-premise, or SaaS.
The use case - for anyone looking to incorporate AI into their product lifecycle, DataRobot could be a powerful asset. Whether it's employing AI for user behavior analysis, automating repetitive processes, or leveraging AI for predictive modeling, DataRobot’s platform can accelerate the integration of AI in product development and management. Its comprehensive platform and broad ecosystem interoperability mean it can be integrated with various data platforms, AI frameworks, DevOps tools, and business processes. This could lead to more efficient product management, optimized user experiences, and enhanced business decision-making.
Detailed insight into every user experience, powered by PlayerZero
Unlock the power of Stanford's DAWN labs' innovative new AI breakthroughs with PlayerZero - your secret weapon for mastering user behavior within your app. This tool brings you unprecedented insights into the unique patterns and habits of your users, enabling swift detection of shifts in workflows or bumps in user experience. Just pinpoint your core actions and top-tier customers, and let PlayerZero handle the rest. It delivers lightning-fast updates, revolutionizing your connection with users. Brace yourself for a game-changer in user engagement!
Chronicles of the circuit circus
Artificial intelligence could lead to extinction, experts warn - Chris Vallance for BBC. The big pull quote:
“Many other experts similarly believe that fears of AI wiping out humanity are unrealistic, and a distraction from issues such as bias in systems that are already a problem.
Arvind Narayanan, a computer scientist at Princeton University, has previously told the BBC that sci-fi-like disaster scenarios are unrealistic: "Current AI is nowhere near capable enough for these risks to materialise. As a result, it's distracted attention away from the near-term harms of AI".”
Yes, you should be worried about AI – but Matrix analogies hide a more insidious threat - Samantha Floreani for The Guardian. The big pull quote:
“The problem with pushing people to be afraid of AGI while calling for intervention is that it enables firms like OpenAI to position themselves as the responsible tech shepherds – the benevolent experts here to save us from hypothetical harms, as long as they retain the power, money and market dominance to do so. Notably, OpenAI’s position on AI governance focuses not on current AI but on some arbitrary point in the future. They welcome regulation, as long as it doesn’t get in the way of anything they’re currently doing.”
AI can’t replace humans yet — but if the WGA writers don’t win, it might not matter - Ryan Broderick for Polygon. The big pull quote:
“The trainability of these models is another, thornier risk for creatives trying to regulate this technology. Doctorow said that if a right to train an AI were created — as in, if suddenly writers had a legal right to say who could and couldn’t train an AI on their writing — that right could become a demand from prospective employers.
“All the employers will demand that you assign that right to them as a condition of working for them,” he said. “It’s just a roundabout way of saying that large corporations who are in a buyers market for creative labor will be the only people in a position to build models that can be used to fire all the creators.””
Thanks so much for joining me for another edition of Future of Product! If you have thoughts on anything I’ve talked about today please feel free to comment on this post - I promise I’ll respond, and you’ll be my favorite person in the world for at least like, an hour or so.
Next week, I’ll be interviewing Barkha Herman, speaker, technologist, podcaster, multi time tech founder and women in tech advocate, to get her takes on the state of women in tech, how the AI boom can be a catalyst for increasing diversity in the tech industry, and how she’s leveraged grit to achieve success throughout her career.
Can’t wait to see you there!