One thing is for sure: all of us alive today are on the cusp of the biggest jump in human technology ever. This is bigger than the steam engine or the printing press. The only thing that might be comparable is the invention of the written word, over 5000 years ago.
We’ve already created a worldwide communication grid, the internet. We can access the entire sum of human knowledge in seconds, from hand-held devices almost anyone can own. Even though we mostly use it to download porn and pictures of cats, the internet is still radically changing our entire culture.
Today I can write a book with ten times the research material in one-tenth the research time it took when I started university (long, long ago). I also couldn’t have any of the jobs I have right now if it weren’t for the internet.
But that’s going to seem like nothing compared to what’s coming. Total automation. 3D Printing. Genetic engineering. Nanites. Artificial Intelligence. Any of these could radically alter our whole way of life. All of them happening at the same time? In a few decades, our world will look nothing like what we know right now.
But it might not be good. There are lots of people freaking out about how these changes might doom us to a living hell.
Take automation, for instance. Robots in factories already got rid of a whole bunch of working-class jobs. Now self-driving cars and trucks are coming fast, and they could get rid of many of the working-class jobs that are left.
But don’t be smug in thinking that if you have a degree and work in an office, or even if you’re a doctor, teacher, or (gulp) writer, your job will be secure. In 10 years, 30% of ALL jobs might be taken by robots or computers, and in the long-term, almost no job will be safe.
But it gets crazier: the combination of genetic science and computer technology (especially our ability to miniaturize incredibly tiny programmable machines) could mean incredible breakthroughs in medical science. There’s a real chance we’ll be able to develop ways to regenerate our bodies, essentially ending death through old-age; and cure all kinds of diseases. We’ll be able to make designer babies, and change (enhance) the ways our brains work. Then there’s all the stuff we haven’t even thought of yet.
Historian and futurist Yuval Harari thinks that within 200 years, humans could become almost godlike in terms of immortality and what they’ll be able to do. But Harari doesn’t feel optimistic about what that means for most of us. He thinks it’s likely we’ll end up in a world where a tiny group of (literally) super elites get to be ‘living gods’ and the rest of us will be little more than slaves.
But I think there are a couple of reasons why Harari might be wrong. He’s supposing a world where current elites manage to keep control over the means of production; that is, a continuation of the economic model that has been in place for most of our civilization.
But it’s pretty likely that model is about to end. The future economy won’t be anything like any model we have now. We can already download (“legally” or not) any document, image, video, or program. And as 3D printing advances, we’ll eventually be able to print anything we have schematics for, out of almost any material.
If economic rebels can get 3D printers (which will keep becoming cheaper, smaller, and more efficient) into people’s hands (and to do that, all you need is one 3D printer!), then suddenly everyone will be able to make things out of raw material. That raw material could eventually be almost anything: you might end up printing a lawn chair, a gun, or a Ferrari out of your own poop!
That’s the end of scarcity, which is the basis of our entire economic system. All those technologies that could theoretically make us immortal super-gods? If just one hacker with an anarchist bent decides to put the schematics on the web, suddenly everyone will be able to make them for themselves.
And that’s not even considering what Artificial Intelligence might do. Yes, Stephen Hawking, Bill Gates, and Elon Musk all fear that AI could end up enslaving or just wiping out humanity.
We’ve seen that in all kinds of sci-fi. But there’s no reason to think it’s the only scenario. Yes, AI might decide we’re competition, or too illogical or dangerous to exist. But since we’ll be the ones initially creating AI, we can try to instill philosophical values into early AI to guide its evolution. AI will quickly surpass our intelligence and abilities and slip out of our control, and that’s scary. But once it does, it might decide to ignore us as irrelevant, to love us as its creators, or to take us with it up the evolutionary ladder (and yes, that means future humans might not look anything like us, but that’s evolution for you). There’s every chance AI will value us, because we’re part of its story.
So the future is more unpredictable than ever.
Worst case scenario: the Terminators wipe us all out, or super-tech stays in the hands of the 1% while everyone else is jobless and starving. But I think those are the less likely scenarios.
The more likely one, to me, is this: we’re going to have to go through a very tough period of transition into a new horizon of human potential. It’ll be really crazy for a while, maybe a long while. But on the other side of it, we (or our descendants) will look back on our current way of life the way we now look with pity at the nasty, brutish, and short lives of medieval peasants.