ChatGPT Dreams Up Fake Studies, Alaska Cites Them To Support School Phone Ban

Techdirt. 2024-10-31

Sometimes I love a good “mashup” story hitting on two of the different themes we cover here at Techdirt. This one is especially good: Alaska legislators relying on fake stats generated by an AI system to justify banning phones in schools, courtesy of the Alaska Beacon. It’s a mashup of the various stories about mobile phone bans in schools (which have been shown not to be effective) and people who should know better using ChatGPT as if it were trustworthy for research.

The state’s top education official relied on generative artificial intelligence to draft a proposed policy on cellphone use in Alaska schools, which resulted in a state document citing supposed academic studies that don’t exist.

The document did not disclose that AI had been used in its conception. At least some of that AI-generated false information ended up in front of state Board of Education and Early Development members. 

Oops.

Alaska’s Education Commissioner Deena Bishop tried to talk her way out of the story. She claimed that she had just used AI to help her “create the citations” for a “first draft” but that “she realized her error before the meeting and sent correct citations to board members.”

Except, that apparently is as accurate as the AI’s hallucinations. Are we sure Deena Bishop isn’t just three ChatGPTs in a trench coat?

However, mistaken references and other vestiges of what’s known as “AI hallucination” exist in the corrected document later distributed by the department and which Bishop said was voted on by the board.

The resolution directs DEED to craft a model policy for cellphone restrictions. The resolution published on the state’s website cited supposed scholarly articles that cannot be found at the web addresses listed and whose titles did not show up in broader online searches.

Four of the document’s six citations appear to be studies published in scientific journals, but were false. The journals the state cited do exist, but the titles the department referenced are not printed in the issues listed. Instead, work on different subjects is posted on the listed links. 

Cool cool. Passing policies based on totally made-up studies created by generative AI.

What could possibly go wrong?

And, really, this stuff matters a lot. We’ve had multiple discussions on how lawmakers seem completely drawn to junk science to push through ridiculous anti-tech bills “for the children.” Here, they’re skipping past junk science entirely and going straight to nonexistent, made-up science.

It’s difficult to see how you get good policy when it’s based on something dreamed up by an AI system.

Alaska officials pathetically tried to say this was no big deal and that these were “placeholder” citations:

After the Alaska Beacon asked the department to produce the false studies, officials updated the online document. When asked if the department used AI, spokesperson Bryan Zadalis said the citations were simply there as filler until correct information would be inserted. 

“Many of the sources listed were placeholders during the drafting process used while final sources were critiqued, compared and under review. This is a process many of us have grown accustomed to working with,” he wrote in a Friday email. 

Again, the version that had the hallucinated citations was distributed to the board and used as the basis for the vote.

Shouldn’t that matter?

For example, the department’s updated document still refers readers to a fictitious 2019 study in the American Psychological Association to support the resolution’s claim that “students in schools with cellphone restrictions showed lower levels of stress and higher levels of academic achievement.” The new citation leads to a study that looks at mental health rather than academic outcomes. Anecdotally, that study did not find a direct correlation between cellphone use and depression or loneliness.

Great. Great.

The Alaska Beacon article has a lot more details in it and is well worth a read. In the past, we’ve talked about concerns about people relying on AI too much, but that was more about things like figuring out prison sentences or whether or not someone should be hired.

Passing regulations based on totally AI-hallucinated studies is another thing entirely.