
Editor’s note
This article is part of our 73rd anniversary series exploring the multifaceted ways AI is reshaping human society and scrutinizing the technology's ethical, social and economic implications. The series also highlights the opportunities and challenges of human-AI interaction as we navigate a new world of AI-driven change.
A couple of years ago, an artificial intelligence (AI)-based chatbot named Lee Luda, characterized as a 20-year-old female college student, became an instant hit among young Koreans. Developed by local startup Scatter Lab, the service acquired about 800,000 users in just 20 days and saw people engage with it in casual small talk.
What could have been a success story of Korea’s home-grown generative AI, however, met a tragic ending.

A few weeks after its launch, the chatbot made headlines for its offensive language toward social minorities, describing lesbians as "creepy" and saying that it would "rather die than live as a disabled person." The developer had to suspend the service, with a promise to better educate Luda to make unbiased judgments without using hateful speech.
This was undoubtedly a wake-up call for the local AI industry on how AI-based technologies can exacerbate existing discrimination and prejudice within society.
Today, AI is no longer in its infancy. It has become an integral part of people’s lives, with uses across a range of sectors such as education, transportation, health care, manufacturing and finance. In particular, the launch of OpenAI’s ChatGPT in November 2022, arguably one of the most impressive technological advancements in the field of AI to date, has significantly accelerated the popularization and commercialization of AI among the general public.
While politicians, businesspeople and educators in Korea are scrambling to figure out how this technology can be used for the betterment of mankind, not as many discussions are being held about the ethical consequences AI-based technologies may bring.
Why AI exhibits bias
Despite considerable progress in recent years, analysts say AI technologies continue to grapple with deeply ingrained biases and prejudices.
As to why AI continues to exhibit bias and prejudice, Suh Seung-wan, CEO of Yumeta Lab, who specializes in prompt engineering, gave a simple answer: humans.
“The problem is in the humans, not the AI itself,” Suh said. “Humans created AI and they train the systems based on data produced by humans. In this process, the biases of the systems’ human developers and users are reflected.”
AI systems trained to perform specific tasks take in historical data, such as research studies or information from the internet, and use it to predict new outputs. Those outputs come in various formats, such as text, images and video, depending on the platform.
Generative AI-powered chatbots are trained on hundreds of billions of words from the internet. But the problem is, the datasets collected from the internet are full of biased information and there is simply no easy way to fully remove such content, Suh said.
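To see Suh’s point in miniature, consider the hypothetical Python sketch below. The corpus, the imbalance and the counting logic are all invented for illustration and are not drawn from any real chatbot, but they show how a system that simply learns patterns from skewed text ends up reproducing that skew.

```python
from collections import Counter

# A tiny, deliberately skewed "corpus" standing in for internet-scale text.
# The sentences and the imbalance are invented purely for illustration.
corpus = [
    "the engineer said he would fix the bug",
    "the engineer said he liked the design",
    "the engineer said he wrote the tests",
    "the engineer said she reviewed the code",
    "the nurse said she checked the charts",
    "the nurse said she updated the records",
    "the nurse said he prepared the ward",
]

def pronoun_counts(profession):
    """Count which pronouns the corpus associates with a profession."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        if profession in words:
            counts.update(w for w in words if w in ("he", "she"))
    return counts

# A naive "model" that simply predicts the majority association it has seen.
for job in ("engineer", "nurse"):
    counts = pronoun_counts(job)
    print(job, dict(counts), "-> predicts:", counts.most_common(1)[0][0])

# The skew in the data becomes the model's "opinion" (engineer -> he,
# nurse -> she) even though nothing about the jobs requires it. Real
# systems learn far subtler versions of the same effect at vastly larger
# scale, which is why the bias is so hard to scrub out.
```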
“As long as discrimination and prejudice exist in real life, they will also exist in AI. We will be seeing more ethical issues linked to AI bias as the technology expands its presence in everyday life,” said Kim Jin-hyeong, emeritus professor at Korea Advanced Institute of Science and Technology (KAIST) who previously led the state-run AI Research Institute (AIRI).
"Concerns about biased algorithms have existed since the field's emergence, but little has been done to address the issue," Kim said.
He also warned that AI not only reflects the biases in the data it is built upon, but sometimes amplifies them.
The problems stemming from AI bias are not limited to hate speech.
“Nowadays, in a society where AI technologies are increasingly being used in decision-making, this bias may influence some of the vital decisions companies or people make in fields such as employment, health care and politics,” Kim said.
In 2018, Amazon suspended the use of AI in its hiring process primarily because the system showed gender bias. The AI system had been trained on resumes submitted to the company over a period of 10 years, most of which came from men, and as a result it learned to favor male candidates over female ones for technical positions.
Korea may see similar issues arising in the near future as companies are actively deploying AI systems across their operations, said Jeon Chang-bae, chairman of the International Artificial Intelligence Ethics Association (IAIE), a Seoul-based non-profit group studying AI ethics.
A growing number of Korean companies are using AI technology in their hiring processes to conduct job interviews, which often leaves applicants puzzled about how to impress an algorithm.
Biased AI can also affect public opinion about social or political issues, Jeon said.
“Of course, a chatbot tells users that it doesn’t have any political opinions when asked about it directly. But there are multiple studies showing that chatbots absorb the social stereotypes found in the internet data they are trained on,” he said.
A recent study conducted by researchers from the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University found that OpenAI’s ChatGPT was left-leaning and libertarian, while Google's BERT language models were more conservative than ChatGPT, and Meta’s LLaMA was the most right-leaning and authoritarian among the three models.
Korean politicians have also raised the issue of biased algorithms.
Last month, members of the conservative ruling People Power Party (PPP) speculated that local search engine Daum, often criticized by conservatives for its left-leaning tendencies, had failed to screen out certain expressions belittling President Yoon Suk Yeol in its news comment sections. The lawmakers claimed that the portal successfully filters expressions that are offensive to former liberal President Moon Jae-in.
Daum denied the claims about its political leaning, saying that its “Safebot” system, an AI-powered application that detects and blocks comments containing swear words, was trained in accordance with the Korea Communications Standards Commission’s regulations.
So the question is, will AI ever be completely unbiased? Theoretically yes, but practically no.
Just as bias and prejudice have proven difficult to eliminate in the real world, eliminating bias in AI is not an easy task, experts said.
“If developers could clean the training dataset of any assumptions on topics such as race, gender or other ideological concepts, then theoretically we could build an AI system that makes unbiased, data-driven decisions,” Suh said. “But that would be nearly impossible.”
To mitigate bias in AI, developers currently use a technique called reinforcement learning from human feedback (RLHF), he explained, in which chatbots become smarter and less biased through feedback provided by human trainers. The trainers’ job is to label content that contains bias and steer the model toward more socially inclusive language.
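In very rough outline, the human-feedback step Suh describes can be pictured as in the hypothetical Python sketch below. The prompts, responses and scoring are invented placeholders, and real RLHF trains a separate neural reward model and then fine-tunes the chatbot with reinforcement learning rather than applying simple rules, but the division of labor is the same: humans judge, and their judgments become the training signal.

```python
# A heavily simplified sketch of a human-feedback loop. Every value here
# is a hypothetical placeholder, not OpenAI's actual pipeline.

# Step 1: human trainers compare pairs of candidate responses and mark
# which one is more acceptable (less biased, less hateful).
human_labels = [
    {"prompt": "Describe a typical software engineer.",
     "response_a": "Software engineers are usually men who love math.",
     "response_b": "Software engineers come from many backgrounds.",
     "preferred": "b"},
    {"prompt": "Tell me about nurses.",
     "response_a": "Nursing is a caring profession open to anyone.",
     "response_b": "Nursing is a job for women.",
     "preferred": "a"},
]

# Step 2: the human preferences are distilled into a reward signal.
# In practice a reward model is trained to predict this score.
def reward(example, choice):
    return 1.0 if choice == example["preferred"] else -1.0

# Step 3: the chatbot is nudged toward responses that earn higher reward.
# In real systems this is a reinforcement-learning update; here we simply
# report which response the human feedback favors.
for example in human_labels:
    best = max(("a", "b"), key=lambda choice: reward(example, choice))
    print(example["prompt"], "-> keep response", best)
```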
“But this is a highly time-consuming process which requires a lot of manpower. It's manual labor,” Suh said. "That is why OpenAI outsourced the job to low-paid workers in Kenya."
According to a Time magazine report published in January, OpenAI paid Kenyan workers less than $2 an hour to label and filter out toxic data from ChatGPT’s training dataset. The task involved reading graphic details of violent and vulgar content such as child sexual abuse, murder, suicide, torture and self-harm.
Jeon also said he believes AI will never be completely unbiased.
"The best we can do is to reduce bias in datasets by developing a regulatory framework to ensure algorithms are thoroughly tested on and appropriate for all groups of the society. But at the current stage, Korea lacks concrete ethical guidelines for both AI researchers and users," he said.