Short piece - Artificial intelligence and unbiased bias (Eng. ver)

<Artificial intelligence and unbiased bias>

Category: Technology
 
 
 
A few days ago, I saw an attention-grabbing Tweet about Google Translate. The Tweet showed a translation result between Turkish – which makes no gender distinction in its third-person singular pronoun – and English – which distinguishes male and female in the third person singular.
 
 
 
<Image source: Google Translate (Try it!)>
 
 
First, translate the two given English sentences – 'he is a babysitter' and 'she is a doctor' – into Turkish. The third-person singular pronouns 'he' and 'she' are both translated into the Turkish gender-neutral pronoun 'O'. Surprisingly, when the two sentences are translated back into English, the genders of the original sentences are swapped: Google Translate renders the 'O' in front of 'doctor' as 'he', and the 'O' in front of 'babysitter' as 'she'. Many people responded to the Tweet with acerbic comments censuring the biased gender roles that permeate our society and are reflected in the artificial intelligence algorithm.
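For anyone who wants to reproduce the round trip programmatically, here is a minimal sketch using the unofficial googletrans Python package (an assumption on my part – the Tweet's author presumably used the web interface, and Google has since updated the model, so the output may no longer show the swap):

```python
# pip install googletrans==3.1.0a0
# Round-trip translation sketch: English -> Turkish -> English.
# Uses the unofficial googletrans package; results depend on the
# current state of Google Translate and may differ from the Tweet.
from googletrans import Translator

translator = Translator()

for sentence in ["he is a babysitter", "she is a doctor"]:
    turkish = translator.translate(sentence, src="en", dest="tr").text
    back = translator.translate(turkish, src="tr", dest="en").text
    print(f"{sentence!r} -> {turkish!r} -> {back!r}")

# At the time of the Tweet, the round trip reportedly produced
# 'she is a babysitter' and 'he is a doctor' -- genders swapped.
```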

 


It is a truly interesting result. The author of the Tweet used the term 'biased AI'. A machine, which follows designated steps and functions, is never biased in nature. Machines do what they are supposed to do – and when they do not, we call them jammed or malfunctioning – which is also one of the reasons why, as industrialization accelerated, casualties and accidents involving industrial machines increased. (A press is never 'biased' enough to distinguish whether the thing about to be squeezed is a piece of junk or a worker.) The reason the very first generation of translators was terrible is that all they did was retrieve and spit out a dictionary translation of each word, without considering the context or intention of the sentence itself. Through technological progress, however, machines today learn how to interpret 'context' with machine-learning algorithms. According to Google co-founder Larry Page, with machine learning a computer is no longer just a machine for carrying out pre-designated, built-in functions; it learns how to understand and interpret context and to solve problems. I am fairly sure the 'biased AI' from the Tweet has also gone through this process of context analysis. Had we fed the sentences to an old, callow translator, the result could have been something like 'that/he/she – 3rd person singular pronoun – one baby sitter', which is gender-neutral but does not make any sense. A toy version of that old approach is sketched below.
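Here is a minimal sketch of that first-generation, word-by-word approach (the Turkish dictionary entries are my own simplified assumptions, chosen only to illustrate the point):

```python
# Toy first-generation translator: each word is looked up in a fixed
# dictionary, with no notion of context or grammar. Entries are
# deliberately simplified for illustration.
TR_TO_EN = {
    "o": "that/he/she",   # Turkish 3rd-person pronoun: no single English equivalent
    "bir": "one/a",
    "bebek": "baby",
    "bakıcısı": "sitter/caretaker",
    "doktor": "doctor",
}

def word_by_word(sentence: str) -> str:
    """Translate by dictionary lookup alone -- no context considered."""
    return " ".join(TR_TO_EN.get(word, f"<{word}?>") for word in sentence.lower().split())

print(word_by_word("O bir bebek bakıcısı"))
# -> 'that/he/she one/a baby sitter/caretaker' -- gender-neutral, but nonsense
```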
 
Is the algorithm of Google Translate biased? The essence of context analysis and machine learning is data. Cognitive analysis is one of humanity's precious abilities (I would not say exclusive, since recent AI is trying to develop cognitive abilities). We feel, experience, think logically, and judge based on our experience and knowledge. Therefore, we humans have the ability to understand context. When you see a man in a suit, holding flowers in one hand, glancing at the watch on his other hand, entering a fine-dining restaurant, what would you think about him? Whether consciously or unconsciously, you might wonder, 'Is he about to propose?' The old-fashioned machine way of processing, however, would recognize only the superficial facts: 'a man', 'holding flowers', 'looking at his watch', 'walking into a restaurant'. Machine learning and context analysis are the result of an attempt to imitate, or counterfeit, this human cognitive ability. However, one great discrepancy between machine and human cognition remains: even with machine learning and context analysis, a machine does not feel or 'think' as we do; AI never has cognitive ability identical to a human being's. (It never actually recognizes anything.)
 
Alan Turing, who studied artificial intelligence in the 20th century, explained that artificial intelligence is not required to 'think' as humans do. The only thing it must do to qualify as 'intelligent' is to imitate our behavior and judgment, which are based on human cognition and intuition. This is why the test to certify whether an AI has achieved the same level of intelligence as a human is called the 'imitation game'. Today's AI and machine learning depend heavily on databases, and through myriad data-analysis algorithms they are designed to give the most appropriate result – the one that corresponds to the human context.
 
In the case of the Turkish translation, the context analysis also comes from a myriad of data. Ironically, the machine that performed the context analysis is not biased. The only thing it did was follow the algorithm designed to analyze context and translate between English and Turkish. The translation algorithm is not biased either. The only thing it did was analyze a myriad of data on the use of the English and Turkish languages, compute statistical probabilities, and return the most 'likely' result. In this case, I believe, it is the database that stands accused. The database may be biased, and we humans feel and interpret the translation as biased based on our 21st-century context (e.g., gender-role bias, gender discrimination).
 
I am not sure exactly how the algorithm led Google Translate to put 'he' in front of 'doctor' and 'she' in front of 'babysitter'. However, I am sure the translator did not autonomously judge that a doctor must be male and a babysitter female. Rather, I am fairly confident the algorithm analyzed a myriad of uses of the words 'babysitter' and 'doctor' (at least in English) and concluded that there is a higher probability of 'he' appearing before 'doctor' and of 'she' before 'babysitter'. Perhaps, across the entire web, male doctors are simply mentioned more often than female doctors. In that case we cannot say the AI is biased. Under the humanist values of the 21st century, however, predefining 'doctor' as male and 'babysitter' as female is definitely biased. The sketch below shows how plain frequency counting can produce exactly this outcome.
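To make the mechanism concrete, here is a minimal sketch of that frequency-based guess on a tiny made-up corpus (the real system is vastly more complex; this only shows how perfectly unbiased counting faithfully reproduces a biased corpus):

```python
from collections import Counter

# A tiny hypothetical corpus in which male doctors and female
# babysitters happen to be mentioned slightly more often.
corpus = [
    "he is a doctor", "he is a doctor", "she is a doctor",
    "she is a babysitter", "she is a babysitter", "he is a babysitter",
]

def most_likely_pronoun(noun: str) -> str:
    """Pick the pronoun that most often precedes 'is a <noun>' in the corpus."""
    counts = Counter(
        sentence.split()[0] for sentence in corpus
        if sentence.endswith(f"is a {noun}")
    )
    return counts.most_common(1)[0][0]

print(most_likely_pronoun("doctor"))      # -> 'he'  (2 vs. 1 in this corpus)
print(most_likely_pronoun("babysitter"))  # -> 'she' (2 vs. 1 in this corpus)
```

The counting itself is neutral; the skew in the output comes entirely from the skew in the data it was given.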
 
I would like to call this ironic bias shown by AI 'unbiased bias'. Machine-learning algorithms learn our behavior and cognition by following the trace of human behavior left behind as data. However, human cognition, values, and ways of thought are never static, and the change is speeding up as history goes on. Gender roles, equality, democracy, liberalism, humanism... Many of the things that define the 21st century were inconceivable at some point in our history. Therefore, I believe it would be difficult for AI to distinguish anachronistic or biased human behavior. Especially when it comes to humanist values, it may be difficult to program an AI that understands and stays in line with our constantly changing values.
 
This will be a ten-year task (or maybe more than a decade) for AI developers, since even today's context analysis is far from complete. It will take plenty of time. However, as the example of the Turkish translation implies, I believe the 'unbiased bias' of artificial intelligence could be a yardstick for humans to identify the biases that have permeated our society, and to show where we are heading.


 
 
 
 
 

