Totalitarian? At the Partner Forum in Oslo, we learned that Orwell was wrong.
To identify children being used as potential suicide bombers, researchers at the Federal University Lokoja in Nigeria are working with artificial intelligence, developing self-learning machines.
Welcome to 2018 – the year when AI, or artificial intelligence, takes over from «big data». Let me mention a few examples: the Donders Institute in the Netherlands has presented a system that scans your brain and reconstructs images of faces you have seen.
«The hippies in San Francisco have created the new surveillance system of today, namely Facebook, Google and other companies of the West Coast.»
Most of us already use artificial intelligence, more or less consciously, through applications such as Apple’s Siri and Google’s Gmail. The latter reads your email and happily suggests reply options. Google also tracks your searches and sends you relevant ads. Companies such as Amazon and Netflix monitor our preferences and suggest suitable products to buy, or series and films to watch.
Totalitarian? At the Partner Forum in Oslo last December, the Director of the Norwegian Board of Technology, Tore Tennøe, pointed out during a debate about George Orwell’s novel 1984 that Orwell was wrong about where the totalitarian social system of the future would come from. Not the Soviet Union, but the United States. More specifically: the hippies of San Francisco have created today’s new surveillance system, namely Facebook, Google and the other West Coast companies.
The Economist reports that annual investment in artificial intelligence increased as much as 26-fold from 2015 to 2017 – now exceeding 180 billion NOK. These giants buy up small, promising startups, such as the small English AI company DeepMind, which Google acquired in 2014 for a reported sum of around half a billion dollars. Ten years ago, ExxonMobil and General Electric were the world’s largest companies – now Google, Apple, Facebook and Microsoft are the biggest.
And Google? Many of us rely on Google’s search engine, email system (Gmail), smartphone operating system (Android), video site (YouTube) and, soon to come, Google Photos (which scans faces and sorts them into groups), their payment system (Google Pay) – and perhaps, one day, Google’s (Alphabet’s) self-driving car, Waymo.
But back to the Partner Forum, which goes by the name «Breakfast with Bernt»: Professor Bernt Hagtvedt still believes (how ironic is he, really?) that today’s students are empty-minded and spiritually lazy. He points out that the totalitarianism Orwell described is just as much about changing habits of thought as about monitoring and control. Not merely that 2 + 2 = 5, as in the novel, but rather that «big data», combined with AI, will offer possibilities of control that will be of great interest to political as well as commercial forces. Alongside new surveillance technology comes new legislation. As the Director of the Data Protection Authority, Bjørn Erik Thon, told the forum audience: PST now wants to gather huge amounts of data, supposedly to be able to prevent criminal activity while it is still being planned.
Exactly: to move the threshold of criminality so that even thinking along the wrong lines becomes illegal – not so different from what Orwell predicted about the future back in 1949.
Having said that, increased monitoring brings a chilling effect with it: people become afraid to voice their opinions on the phone or in an email. According to Thon, statistical surveys confirmed this effect after Snowden’s disclosures about the United States’ massive surveillance of its own population. In Europe, such an extensive violation of privacy still runs counter to human rights: the idea is that if you feel constantly monitored, your ability to think independently suffers. The result is a democratic deficit.
«PST now wants to gather huge amounts of data to presumably be able to prevent criminal activity while it is being planned.»
During my studies at Berkeley, outside San Francisco, in the 1990s, «neural networks» were something new within artificial intelligence. Researchers in cognitive science and philosophy were trying to imitate human ways of thinking. My philosophy teacher – no longer a hippie, driving a Porsche – had a theory about machine learning based on the Chinese Room and its characters: patterns and programs being followed, without any understanding or thinking.