On Monday, over in Southbank Insider, I wrote about the goblins of net zero, in particular the use of cobalt in batteries. Today, I’m going to write about the gremlins of artificial intelligence (AI).
Steven Spielberg is one of the most famous living Hollywood directors. He rose to fame and won awards for Jaws and his sci-fi epic, Close Encounters of the Third Kind. Several years after those successes, he produced a fantasy film of a more domestic nature: Gremlins.
The plot is reminiscent of a classic, “Brothers Grimm”-style fairy tale. Cute, fuzzy, semi-intelligent little house pets, the gremlins nevertheless have a dark side that emerges if they are fed after dark or exposed to water. They are placed in the care of a few children, who are warned of this but, children being children, are somewhat careless and inattentive.
The gremlins are indeed, if inadvertently, fed after dark and exposed to water. Their dark-sided nature is suddenly and spectacularly exposed, and they proceed to wreak havoc not only on the children, but on the entire town. Violence ensues. People get killed. The marauding gremlins even fight each other in a Lord of the Flies dynamic of sorts.
Those familiar with the Brothers Grimm tales know that, in many cases, they had tragic rather than happy endings. In the case of the gremlins, the chaos they unleash is eventually brought under control, but at considerable cost. Indeed, the film was criticised as a “bait-and-switch”: a supposedly family-friendly adventure that turned into a horror show.
As with nearly all fairy tales, there are clear moral lessons to take away – e.g. “Don’t give your children the car keys.” One lesson, however, may be more subtle. Nearly all the trouble and tragedy caused by the gremlins stems from their interaction and experimentation with technologies they either don’t understand or deliberately misuse.
This appears to be how many people feel about AI. Several prominent tech voices have gone so far as to say that, if not properly regulated, the misuse of AI could result in a dystopian future of machines ruling over men or, worse, attempting to kill them off entirely, as in The Terminator, Battlestar Galactica, or some other classic sci-fi franchise.
I disagree. In my opinion, AI is a game changer, but in ways that leverage human knowledge. It will enable an even greater integration of information technology into our economic and social structures. Yes, this leverage and integration could be abused, but in my opinion, it would still be humans doing the abusing.
Most technologies can be abused, some spectacularly so. Think of nuclear energy which, properly used, could power mankind far into the distant future at low cost. Misused, it could end life as we know it, perhaps wiping out humanity entirely.
I don’t see AI as inherently worse, or inherently better. It comes down to us to decide how it is to evolve and how it is to be applied. The same could be said of our economic and social structures generally, our political systems, you name it.
AI in the hands of a radical, Marxist, totalitarian state would be frightening for its citizens. Basic liberties could be severely curtailed. Everyone would be under 24/7 AI surveillance, watching for whatever forms of behaviour the state deemed undesirable. Digital currency transactions could be analysed by AI in practically real time as part of a “pre-crime” programme to determine whether someone is even planning to behave in an undesirable way in future.
Philip K. Dick, eat your dystopian heart out.
The reality, in my opinion, is likely to be more balanced. Yes, there will be abuses. Yes, there will be public push-back. Indeed, we’ve seen a huge push-back against dystopian “behaviour-monitoring” just recently, in response to misguided Covid-19 lockdown policies that appear to have had no positive effect on public health and may well have even harmed it.
Beware the government AI gremlins
There’s a huge lesson to be learned there. We don’t want AI falling into the hands of the government gremlins now, do we?
If I’m right, and AI is not a one-way ticket to tech totalitarianism, then it really is a huge opportunity for investors. It could be as big as the first Silicon revolution, which led to huge breakthroughs in telecommunications and data processing. As big as the PC. As big as the internet.
It could be bigger than all of the above, combined.
I have a reputation as a relatively conservative, defensive, value-oriented investor. I’m a “gold bug” after all, and a crypto sceptic. Yet in AI, I see huge, perhaps unimaginable opportunity. No, I can’t predict what forms it will take, or what specific companies will make the big breakthroughs, but I’m highly confident that they’re out there.
This is why I’m closely following the work that my tech-oriented colleague Sam Volkering is doing in this area. He has a proven track record of identifying new technologies early, and that includes AI. He’s currently preparing for the upcoming AI Summit, during which he will reveal his current thinking, including some specific actions that interested investors could take now. I strongly recommend you sign up here and be sure not to miss it on the day. (Capital at risk.)
Until next time,
Investment Director, Fortune & Freedom
PS If you’d like to comment on this edition of Fortune & Freedom, please send me an email at [email protected]