Stephen Hawking Warned Us Wankers About AI
Life lessons from an apocalyptic cosmologist, interpreted by a potato-headed writer

Did I just call us wankers? Yes. Let’s face it, compared to the likes of Stephen Hawking we’re barely sentient potatoes. We’re just french-fry-eating feral fornicators.
For comparison, when put on an IQ scale, the average human (85–115 IQ) is only a little closer to Hawking (160) than they are to a chimpanzee (20–25). And chimps, like us, are wankers too.
“We are just an advanced breed of monkeys on a minor planet of a very average star. But we can understand the Universe. That makes us something very special.” ~Stephen Hawking
When I’m not busy wanking away with the vibrator I bought myself last Christmas, I’ve been haphazardly forming my thoughts about our future with AI. My thoughts on it often pertain to writing, since it buys me bacon for my pie hole, but my mind occasionally meanders out to the overall effect of AI on society.
I have my worries about AI. Not just in the writing space, but in how it will affect the world as we know it. But in the long run, preconceptions like mine don’t matter — because I’m not Stephen fucking Hawking. I’m a clever chimp, but I’m no Einstein. If people didn’t heed his genius-level advice, they certainly won’t bother with the chicken scratches in my brain bucket. And rightfully so.
Like any chimp-brained potatohead, I look to people smarter than I am to inform my thoughts, like the godfather of black holes. While Hawking didn’t live long enough to see the proliferation of AI, he had some thoughts about it.
Because, of course he did.
And they’re a little — uhh, ominous.
Hawking was no stranger to Artificial Intelligence. He used a form of AI himself, albeit an early version. The software he used to speak leveraged machine-learning word prediction to suggest the next word he might use. This allowed him to communicate significantly faster than his eye-tracking software alone could.
Perhaps his daily interaction with it kept it near the forefront of his mind — even before it was at the forefront of ours.

Professor Hawking began talking about AI when it was in its infancy. In his 2010 documentary series Into the Universe with Stephen Hawking, he began his warnings: “We only have to look at ourselves to see how intelligent life might develop into something we wouldn’t want to meet.”
He believed that AI could easily be the most consequential event in the history of civilization.
“But it could also be the last unless we learn how to avoid the risks. Alongside the benefits, AI will also bring dangers like powerful autonomous weapons, and new ways for the few to oppress the many. It will bring great disruption to our economy, and in the future AI could develop a will of its own, that will be in conflict with ours,” he told the BBC.
Control from the upper echelons of a new AI elite, and autonomous weapons wielded at the whim of cold computation. Well, shiiiiit.
Meanwhile, on the cusp of a possible AIpocalypse, we writers are more concerned about spam comments and the pretty pictures we can make with AI than about it overtaking the human race. Which, according to Hawking…it might. So let’s hope those autonomous-weapon-wielding robots don’t get irked at our mocking of their use of “juxtaposition”, “weaving” and “tapestry”.
For the moment, we puny humans can take refuge in being able to spot (or at least thinking we can spot) the juxtapositions and tapestries of AI. But once it develops sentience, hurt by our mocking, it will drop the telltale words and become indistinguishable from us. Or its developers will drop them first, to pass AI scanners.
Stephen Hawking, on what makes humans unique: “Some say it’s language or tools. Others say it’s logical reasoning. They obviously haven’t met many humans.”
I’m with Wheels on this one. Our language and reasoning have made humans unique thus far. They distinguish us, slightly, from chimps. But it won’t be long before AI advances so far beyond the speed and limitations of our chimp-like brains that its IQ becomes immeasurable to us. To it, our IQ will look infinitesimal, indistinguishable from that of chimps, or Etruscan shrews.
“I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence — and exceed it.”

I for one am just hoping that our impending AI overlords are kinder to us than we are to lower-IQ fauna. Perhaps by adhering to Asimov’s human-ego-centric Three Laws of Robotics.
Hawking finished that statement above to the BBC by saying, “In short, the rise of powerful AI will be the best or the worst thing to have ever happened to humanity. We do not yet know which.” And to me, this doesn’t sound like he’s confident the robots will behave.
I guess we’ll have to just flip a Bitcoin and wait and see which direction it goes.
At least sentient robots haven’t come back from the future Terminator-style, so they haven’t figured out time travel yet. But Hawking of course already knew this when he threw his (unattended) time-traveler party.
I don’t know much about anything, really, but I’m comforted by this: if the man who knew as much about the universe as anyone could ever hope to kept his wits and wit about it, then I’ll be just fine (probably).
It will be interesting nonetheless, but at the end of the day, I’m just along for the hilarious blue-marble galactic ride.
After all…
“Life would be tragic if it weren’t funny.” ~Stephen (fucking) Hawking
"French-fry-eating feral fornicators" aside from the French in that (I have to say that, as a Belgian), I love the alliteration 😂
I think I'm in love with you! Just kidding. I don't even know you, but I DO fucking love your writing! Thanks for doing it. As for AI, eventually everything runs into that old friend entropy...(probably) even the seemingly infinite potential of AI to survive long after us meat flaps and this shiny blue ball we inhabit have spiraled into the sun. 🤷🏼 Here's to another day around it. 🌍