This ongoing row and hue and cry about the deposition of Facebook and other tech giants before a parliamentary committee on ‘personal data protection policy’ brings to the fore a three-pronged user mentality. First, that ‘it is social media that is responsible for all the terrible things’ happening to us. Second, that social media is deceptively manipulative, nudging us into spreading hatred, buying things we don’t need, and surrendering our privacy. And third, that a bunch of mean, greedy corporate honchos is meeting clandestinely behind locked boardroom doors, scheming something sinister against users for profit.
As an entrepreneur, I have a somewhat different take on this...
A little while ago I wrote a blog on ‘The Paradox of Information Overload’. Yet, having discussed in that article the dubious effects an overload of information has on us, on a more conscious level I still believe that we, as human beings, are ourselves responsible for most of the terrible things that happen around us! There is indeed a lot of truth in the fact that social media platforms run on self-evolving, behaviour-tracking algorithms that map the buying and habit patterns of their users, and yes, they do know more about us than we ‘think’ we know about ourselves. Further, yes, these technologies can be used for the good of mankind and also for nefarious purposes. But ultimately, we have ourselves to thank for allowing these technologies to know so much about us. Come to think of it, in a way, you are becoming a target of your own belief systems, and not of some ‘mean and greedy’ digital company. Fortunately or unfortunately, the algorithms are only doing what they are supposed to do; it is we who allow them to exploit the deep, dark aspects of our inherent human proclivities. It has been found that ‘emotionally arousing’ (often fake) news spreads six times faster than ‘mundane and drab’ news, because the shitty-ness factor of a piece of news is directly proportional to how far it spreads. The shittier the news, the more we stay glued to it, and the more we want to ‘spread it’.
Two essential elements have a tremendous impact on how glued we stay to a piece of information thrown at us: ‘surprise’ and ‘interest’. These are what grab people’s attention, retain it, and prompt them to share. It is at the docks of ‘surprise’ and ‘interest’ that all the conspiracy theories, gossip, and urban legends also get anchored in our minds. We are all emotional beings with our own sets of beliefs and theories, and all that social media does is smartly throw content at us that reinforces those beliefs and likes. Now, if your beliefs themselves are in the ‘not good’ category, what do you think will happen? What do you expect a user’s emotional reaction to a post to be, if the platform reinforces his or her (very radical) belief systems? Obviously, the response is going to be equally radical, right? At times I tend to believe that the platforms simply act as a mirror, showing us our own image, and we merely react to that. The platform smartly navigates you to content that reinforces those emotionally arousing beliefs of yours, validating them even more, making you feel good about yourself, making you want more, and prompting you to share it with your friends. It links you to people with similar behaviour and belief patterns. But remember, at the end of the day, the algorithms are self-evolving; there isn’t a geek sitting at a computer 24x7, coding behaviour patterns and changing them live and dynamically for every user on the planet! (Though, at a macro level, there are humans sitting behind desks who are responsible for sieving content.)
Unless a content provider’s motives are intentionally deceptive, the world is more or less driven by feelings. Given that we are all emotional at the core, it is easy to foresee that we will try to find scapegoats to hide and cover this weakness of ours. It is equally easy to predict that in the not-so-distant future we will see more instances of parliamentarians, politicians, and bureaucrats policing the social media companies, and the policing will cut both ways: at companies and at users. Though, in a way, the policing is needed, we must also take cognizance that it is ultimately an individual who chooses, of his or her own will and emotion, to post or share something seemingly ‘nasty’. Squarely blaming the social media platforms would take the user’s responsibility out of the equation altogether, as there are always two sides to a coin. Having said that, it is increasingly imperative that ‘arousing’ content posted by users be analysed in depth, sieved off, and deleted by the heads of these social media giants. But let’s face it: it is, after all, a human being who is going to do so. Herein lies the morality of it. Since a human being is involved in posting the content and a human being is involved in sieving it off, prejudices and biases are bound to creep in. In such a scenario, expecting politically agnostic, neutral pieces of content and an unbiased policy may be a far cry.
Social media companies follow the Hooked model, Nir Eyal’s trigger, action, variable reward, and investment loop, to create habit-forming technologies. They fundamentally create products that alter users’ behaviour, fostering habits by getting them hooked on the product. These user habits can be good or bad. But what responsibility do we, as users, bear while inculcating such habits through the usage of such technologies? That is what we need to deliberate on. As users, we take these technologies to bed. We wake up and check posts, tweets, notifications, and updates even before we give out our morning yawns. Most of the time, we choose to spend our moments not with humans but with these addictive technologies that we carry in our pockets all the time. Ian Bogost, the famous game creator, calls this wave of habit-forming technologies ‘the cigarette of this century’. I go one step further and call them ‘the dope of this century’. Okay, yeah! I know about the latest Netflix documentary called ‘The Social Dilemma’, and you can argue with me over whether it is gospel truth. But let me rest this argument by quoting the famed Silicon Valley investor Paul Graham, who says:
‘We haven’t had the time to develop societal antibodies to addictive new things (technologies) and unless we wish to become canaries in the coal mine of each new addiction – the people whose sad example becomes a lesson to future generations – we’ll have to figure out for ourselves what to avoid and how.’
As we make more and more progress towards AI, these matters of cyber-policing, social-media policing, and content ethicality may become redundant, for by then a conscious AI, unemotional, unequivocal, and ruthless, may be at the helm, able to do this identification and segregation in split seconds and take swift action. From the user’s perspective, the repercussions of content violations in that AI era would be cut-throat and non-negotiable in comparison with any court of law, parliamentary committee, or senate. In that era, the AI would be the judge and the jury, and in retrospect, the intensity of the parliamentary committee dealings happening today might seem like a cakewalk.