2 October 2023

Gender bias seen in AI-generated content on leadership

Generative AI learns the patterns in the data on which it is trained and then creates content bearing similar characteristics, relying on machine learning techniques for content creation…reports Asian Lite News

New research has revealed an inherent gender bias in the content – text, images, other media – generated by artificial intelligence (AI).

Analysing AI-generated content about what made a ‘good’ and ‘bad’ leader, researchers at the University of Tasmania, Australia, and Massey University, New Zealand, found that men were consistently depicted as strong, courageous, and competent, while women were often portrayed as emotional and ineffective.

Thus, AI-generated content can preserve and perpetuate harmful gender biases, they said in their study published in the journal Organizational Dynamics.

“Any mention of women leaders was completely omitted in the initial data generated about leadership, with the AI tool providing zero examples of women leaders until it was specifically asked to generate content about women in leadership.

“Concerningly, when it did provide examples of women leaders, they were proportionally far more likely than male leaders to be offered as examples of bad leaders, falsely suggesting that women are more likely than men to be bad leaders,” said Toby Newstead, the study’s corresponding author.

To train these generative AI technologies, vast amounts of data from the internet are processed, along with human intervention intended to reduce harmful or biased outputs.
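The mechanism the researchers describe can be sketched with a deliberately simplified toy frequency model (an illustration only, using an invented corpus, not the study's data or any real AI system): a "model" that learns word co-occurrence frequencies from a skewed corpus reproduces that skew when it generates.

```python
from collections import Counter
import random

# Hypothetical toy corpus with a built-in skew
# (invented examples for illustration, not the study's actual data).
corpus = [
    "he is a strong leader", "he is a competent leader",
    "he is a courageous leader", "she is an emotional leader",
    "he is a strong leader", "she is a competent leader",
]

# "Training": count which adjective follows each pronoun.
counts = {"he": Counter(), "she": Counter()}
for sentence in corpus:
    words = sentence.split()
    pronoun, adjective = words[0], words[-2]
    counts[pronoun][adjective] += 1

# "Generation": sample adjectives in proportion to training frequency,
# so the output inherits whatever skew the corpus contains.
def generate(pronoun, rng=random.Random(0)):
    adjectives = list(counts[pronoun].elements())
    return f"{pronoun} is a {rng.choice(adjectives)} leader"

print(counts["he"])   # adjectives for "he" skew positive
print(counts["she"])  # fewer, more mixed examples for "she"
print(generate("he"))
```

Because generation only echoes training frequencies, the skewed corpus yields skewed output; this is the basic dynamic, scaled down enormously, that the study attributes to real generative AI tools.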

Therefore, AI-generated content needs to be monitored to ensure it does not propagate harmful biases, said study author Bronwyn Eager, adding that the findings highlighted the need for further oversight and investigation into AI tools as they become part of daily life.

“Biases in AI models have far-reaching implications beyond just shaping the future of leadership. With the rapid adoption of AI across all sectors, we must ensure that potentially harmful biases relating to gender, race, ethnicity, age, disability, and sexuality aren’t preserved,” she said.

“We hope that our research will contribute to a broader conversation about the responsible use of AI in the workplace,” said Eager.
