Regional biases and stereotypes in ChatGPT models
LLMs are based on data and text collected from the internet, so as you might expect, when you query a chatbot for opinions about places, you get output that reflects the inputs. For the Washington Post, Geoffrey A. Fowler and Kevin Schaul examined regional biases and stereotypes in ChatGPT output.
This is based on the work of researchers at the University of Oxford and the University of Kentucky. Apparently you can't ask outright which region is the best or worst, but you can pit one region against another and ask which is better or worse. The researchers ran various tests for various qualities and calculated the percentages.
The project reminds me of when people were putzing around with Google suggestions to find stereotypes for U.S. states and countries. These were funny at the time, because you knew the suggestions were based on what people search for. With chatbots, the sourcing and output format make opinions look a lot like facts, which will lead to much confusion.
Related
Salary negotiation bias from chatbots
Posted to Artificial Intelligence
Google search suggestions by country
Posted to Statistics
State stereotypes suggested by Google
Posted to Maps