ChatGPT reveals geographic bias on environmental justice issues: Report
Virginia Tech, a university in the United States, has published a report outlining potential biases in the artificial intelligence (AI) tool ChatGPT, suggesting variations in its outputs on environmental justice issues across different counties.
In a recent report, researchers at Virginia Tech argued that ChatGPT has limitations in conveying local information about environmental justice issues.
The study did, however, identify a trend: more urbanized states tended to have better access to location-specific information.
“In states with large urban populations, such as Delaware and California, less than 1% of the population lived in counties where specific information was not available.”
Meanwhile, equal access was lacking in less populated areas.
“In rural states like Idaho and New Hampshire, more than 90% of the population lived in counties without access to local information,” the report said.
Additionally, Kim, a lecturer in Virginia Tech's Department of Geography, was quoted urging further research now that bias has been uncovered.
“Although more research is needed, our findings show that geographic bias exists in the current ChatGPT model,” Kim said.
The research paper also includes a map showing the extent of the U.S. population that does not have access to location-specific information on environmental justice issues.
Related: ChatGPT passes neurology exam for the first time
This follows recent news that scholars are discovering potential political bias in ChatGPT.
On Aug. 25, Cointelegraph reported that researchers in the United Kingdom and Brazil had published a study declaring that large language models (LLMs) like ChatGPT output text containing errors and biases that could mislead readers, and that these models have the ability to propagate political biases presented by traditional media.
Magazine: Deepfake K-Pop porn, woke Grok, "OpenAI has a problem," Fetch.AI: AI Eye