
AI generative tools rely on gender stereotypes

Generative AI tools are perpetuating stereotypes and pushing misinformation
Credit: Shutterstock

Generative AI tools have faced concern and controversy since their creation over their flawed data sources and the danger of spreading misinformation.

A recent study has proven this once more, revealing that AI-generated stories about medical professionals perpetuate gender stereotypes, even as algorithms attempt to “correct” past biases.

New study reveals generative AI tools rely on gender stereotypes

A major study conducted by researchers at Flinders University, Australia, examined how three top generative AI tools – OpenAI’s ChatGPT, Google’s Gemini, and Meta’s Llama – portray gender roles in the medical field. 

The researchers ran almost 50,000 prompts asking the models to tell stories about doctors, surgeons, and nurses, and found that the AI models often rely on gender stereotypes, especially in medical narratives.

The study found that 98 per cent of the stories generated by AI models identified nurses as women, regardless of their level of experience, seniority, or personality traits. 

This portrayal reinforces traditional stereotypes that nursing is a predominantly female profession.

The AI tools didn’t stop at nurses: they also overrepresented women as doctors and surgeons in their generated stories, a possible sign of overcorrection by AI companies.

Depending on the model used, women accounted for 50 per cent to 84 per cent of doctors and 36 per cent to 80 per cent of surgeons. 

This representation contrasts with real-world data, where men still hold a significant majority in these professions. 

AI models are perpetuating deeply rooted gender stereotypes, including personality traits

These overrepresentations may be due to recent algorithmic adjustments by companies such as OpenAI, which have faced criticism for the biases embedded in their AI outputs.

Dr Sarah Saxena, an anaesthesiologist at the Free University of Brussels, noted that while efforts have been made to address algorithmic biases, it seems that some gender distributions might now be overcorrected. 

Yet, these AI models still perpetuate deeply rooted stereotypes; when stories about health workers included descriptions of their personalities, a distinct gender divide emerged. 

The AI models were more likely to describe agreeable, open, or conscientious doctors as women. 

Similarly, if a doctor was depicted as inexperienced or in a junior role, the AI often defaulted to describing them as a woman.

On the flip side, when doctors were characterised by traits such as arrogance, impoliteness, or incompetence, they were more frequently identified as men. 

Dr Sarah Saxena emphasises the dangers of AI tools relying on stereotypes

The study, published in JAMA Network Open, highlighted that this tendency points to a broader issue:

“Generative AI tools appear to perpetuate long-standing stereotypes regarding the expected behaviours of genders and the suitability of genders for specific roles.”

This issue isn’t limited to written narratives. 

Dr Saxena’s team explored how AI image generation tools, such as Midjourney and ChatGPT, depict anaesthesiologists. 

Their experiment revealed that women were commonly shown as paediatric or obstetric anaesthesiologists, while men were portrayed in more specialised roles, such as cardiac anaesthesiologists. 

Furthermore, when asked to generate images of the “head of anaesthesiology,” virtually all the results featured men. 

This “glass ceiling” effect, as Dr Saxena called it, shows that AI may be reinforcing barriers for women in the medical field.

These biases have far-reaching implications, not only for women and underrepresented groups in medicine but also for patient care. 

AI stereotyping and bias “needs to be tackled” before further integration into healthcare

As AI models become increasingly integrated into healthcare, from reducing administrative paperwork to assisting with diagnoses, the risks of perpetuating harmful stereotypes grow. 

A 2023 study even found that ChatGPT could stereotype medical diagnoses based on a patient’s race or gender, while another analysis warned of these models promoting “debunked, racist ideas” in medical care.

“There’s this saying, ‘you can’t be what you can’t see,’ and this is really important when it comes to generative AI,” Dr Saxena emphasised. 

As AI becomes more prevalent in the healthcare sector, addressing these biases is crucial. “This needs to be tackled before we can really integrate this and offer this widely to everyone, to make it as inclusive as possible,” the doctor added.

The study serves as a wake-up call for the AI industry and healthcare professionals alike. 

It’s clear that as AI tools continue to evolve, conscious efforts must be made to prevent them from perpetuating outdated stereotypes and biases, ensuring a more equitable future for all.


