Old issues plaguing new solutions

“Is your software racist?” screamed a headline on a magazine article that I happened to reach from a blog. When I read the content, it felt straight out of the nightmarish scenario scripted in Hollywood sci-fi movies, where the machines take over the mantle from humans. But unlike the stories that find fault with AI gone awry, this article explores why those mistakes could have happened in the first place. After all, machines and robots are supposed to be subservient to humans, and when they go against the very purpose of their creation, it raises a lot of questions that border on the moral or ethical, while the answer is more technical. To put it simply, the article lays it all on bad coding! As simple as that.

Calling it “bad coding” might be trivializing the issue in one giant sweep, for the problem the author points to is far more dangerous and quite commonplace, even in this age of the “politically correct” world: racism, in other words. The example that probably led to the article started as a simple translation error, wherein a tool kept assigning specific genders to specific kinds of work even when the script being translated was gender neutral. To put it simply, unless explicitly told otherwise, the tool rendered roles like doctor and soldier as male domains, addressing the person as “he”, while roles like nurse and teacher were set as “she”.

At the outset it might seem like a trivial issue, not worth all the ruckus. But the problem runs far deeper than mere symbolism of feminism and chauvinism. Consider the case of face-mapping apps. It sounds fascinating that a machine can read your picture and identify you amidst scores of others, and it truly is an amazing invention, especially for screening out known delinquents in a crowd: a handy tool for the police. But it has a very serious flip side, as when an advanced version of such a tool clubbed dark-skinned people into the same category as apes and gorillas, raising a furore, courtesy of Google’s photo recognition tool in 2015. One can’t blame the tool, for it simply responded to the way it had been programmed. When that revelation dawns on you, it becomes all the more critical and a cause for worry: the depth to which racist assumptions have seeped into the collective subconscious is alarming, when people unknowingly and unwittingly make such errors.
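To make the mechanism concrete: a tool can end up with such a bias without a single prejudiced line of code ever being written. Below is a minimal, purely hypothetical sketch of the translation example, with invented corpus counts; real systems are vastly more complex, but the core failure mode is the same, namely that when the source language carries no gender, the model simply picks whichever pronoun co-occurred most often with the occupation in its training data.

```python
# A toy illustration (not any real translation system): pick a pronoun for a
# genderless source pronoun by majority vote over training-data counts.

# Hypothetical co-occurrence counts of (occupation, pronoun) in a corpus.
CORPUS_COUNTS = {
    "doctor":  {"he": 900, "she": 100},
    "soldier": {"he": 950, "she": 50},
    "nurse":   {"he": 80,  "she": 920},
    "teacher": {"he": 300, "she": 700},
}

def translate_pronoun(occupation: str) -> str:
    """Return the pronoun most often seen with this occupation in training data."""
    counts = CORPUS_COUNTS[occupation]
    return max(counts, key=counts.get)

for job in CORPUS_COUNTS:
    print(f"The {job} said {translate_pronoun(job)} would come soon.")
# The doctor and soldier become "he", the nurse and teacher become "she",
# even though the source sentence carried no gender at all.
```

No programmer here wrote “doctors are men”; the skew in the data did it for them, which is exactly why such flaws stay invisible until an incident flares up.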

The issue with problems like racism is that, many a time, they are not “visible”. The generalizations and stereotypes have been in existence for so long that, despite the volume of voices fighting against them, winning, and even silencing the perpetrators to a major extent, the scars haven’t healed fully; and considering that they are now seeping into the very technologies that are supposed to be the liberating factor, the war seems never ending. To me, if I had to point to a single root cause of racism, I would name trust. If you tend to trust a person who speaks your language, looks like you and behaves like you more than anyone who looks, talks and behaves alien, are you a racist, or simply someone who doesn’t trust outsiders easily? If you form an opinion about a specific set of people based on personal incidents, or on things you have read or observed, are you a racist or an informed, opinionated person? If those thoughts result in actions towards such people that may cause them physical or mental harm, are you securing yourself pre-emptively, or are you what the world collectively calls a racist? If such prejudiced thoughts, formed on the basis of ill-advised information, result in intended harm, then they truly fit the category. But what about those who in real life are decent people, devoid of any prejudice, yet who subconsciously, after years of repeated visuals and interpretations, even ones made in fun (though technically they are not fun), carry it into their work, resulting in a product that is perceived as racist? Especially products where the bias is not immediately identifiable until incidents like those mentioned above flare up?

Considering that every single government worth its salt is planning to remove human involvement in many areas, from fighting wars to policing, by handing such activities over to supposedly unbiased systems, it all boils back to the age-old question of who will guard the guardians. For, at the end of the day, however sophisticated the tech may be, it still relies on data provided by humans. As long as the base data itself is biased, there is no solution that will be fair to all. With Alexa, Siri and a whole lot of supposedly intelligent response systems cropping up every single day, hopefully there will be a universally acceptable, unbiased framework that forms the guideline and basis for these in the future. But then again, wouldn’t we be reinventing the same wheel? Only time will tell.
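Since the argument turns on the point that biased base data defeats even a neutral algorithm, here is a minimal, entirely hypothetical sketch of that feedback loop, in the policing context mentioned above. Every number is invented; the point is only that a rule trained on recorded data inherits, and then amplifies, whatever skew produced those records.

```python
# A toy sketch of "biased data in, biased decisions out". All numbers are
# fabricated: a hypothetical history in which one neighbourhood was patrolled
# far more heavily than another, so more incidents got *recorded* there.

historical_records = [
    # (neighbourhood, incidents_recorded)
    ("north", 1000),  # heavily patrolled: many officers, many records
    ("south", 100),   # lightly patrolled: similar streets, far fewer records
]

def patrol_priority(records):
    """Naive rule: send the next patrol wherever recorded incidents are highest."""
    return max(records, key=lambda r: r[1])[0]

print(patrol_priority(historical_records))  # -> "north"
# The rule sends patrols north again, which generates still more records there,
# so the next round of "data" is even more skewed. The guardian guards itself.
```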

Comments

Ramesh said…
Beautiful post, Gilsu. You bring out many nuances on the matter at hand. Bravo.

I am less perturbed than you are. AI will learn, and obvious mistakes will get corrected over time. You can never eliminate bias and prejudice; it will always remain, for as you rightly observed, we are all creatures of our upbringing and experiences. I don't believe AI will have any greater biases than humans.

Sriram Khé said…
I hope more and more coders and software gurus will think about the ethics of their work. In profession after profession we have removed the ethics from what the professionals do, as if ethics belong exclusively in the dark and gloomy academic corridors inhabited by eccentric professors. This AI issue is merely one of the many that will determine our collective future.
