A new study by researchers at the University of Arkansas further highlights the complexities of artificial intelligence and its ability to think and behave like humans — in this case, its ability to engage in creative thought. The study is published in Scientific Reports.
The development of artificial intelligence systems continues to expand each day. These technologies are being applied in a range of helpful ways, particularly in the health and medical fields. Tasks such as diagnosing disease and assisting with surgery, for example, are being enhanced by artificial intelligence tools.
But one common question about artificial intelligence is whether it can effectively replace human thinking and judgment. Can it behave like a human? Think like a human? Respond like a human? The emergence of the ChatGPT platform brought many of these concerns to the forefront. It showed, at least on the surface, that it could produce what appeared to be thoughtful, intelligently written responses to questions — responses that sometimes seemed better written than a human's.
The study published in Scientific Reports was designed to explore how artificial intelligence performed on tests measuring a uniquely human trait: creativity. Specifically, the study compared how humans and ChatGPT performed on tests that measured what’s called divergent thinking. Often seen as a measure of creative thinking, divergent thinking is a thought process where a person explores multiple options and solutions to solve problems, particularly in situations where there is not a one-size-fits-all fix.
The 151 study participants were asked to perform three separate tasks: the Alternative Use Task (e.g., taking an everyday object and using it in a creative way), the Consequences Task (e.g., coming up with imagined scenarios or outcomes for a situation), and the Divergent Associations Task (e.g., coming up with ten completely different nouns). ChatGPT performed the same tests.
Researchers ultimately found that ChatGPT generated responses that were both more nuanced and more original than those of the human participants. However, the researchers add the caveat that they were measuring creative potential, not engagement in actual creative activities.
Sources: Science Daily; Scientific Reports; American Museum of Natural History