AI schoolwork

Language Log 2025-06-10

Current LLMs can answer questions and follow instructions well enough to be useful as cheap and quick clerical assistants. Many students use them for doing homework, writing papers, and even taking exams — and many journalists, government functionaries, lawyers, scientists, etc., are using them in similar ways. The main drawback, from users' point of view, is that LLMs often make stuff up — this seems to have happened a couple of weeks ago to the crew who composed the MAHA report, and it's an increasingly widespread problem in court documents. Attempts at AI detectors have totally failed, so the current academic trends run either toward testing methods that isolate students from LLM-connected devices, or toward syllabus structures that directly encourage students to use LLMs while trying to teach them to use those tools better.

Some of these attempts fall into the category of "prompt engineering" — that skill is certainly needed, but it's very much a moving target, so I'm skeptical of its long-term value. My colleague Chris Callison-Burch has devised some "AI-Enhanced Learning" assignments that strike me as more likely to help students learn course content as well as LLM skills. I'm planning to spend the next month or so re-doing (aspects of) the syllabus for my undergrad Linguistics course in a similar spirit. One problem is that students in different schools at Penn currently have access to different software licenses, so some assignments might be free for some students but require non-trivial access fees for others.

In the news recently was OSU's total capitulation: "Ohio State launches bold AI Fluency initiative to redefine learning and innovation", 6/4/2025:

Initiative will embed AI into core undergraduate requirements and majors, ensuring all students graduate equipped to apply AI tools and applications in their fields

With artificial intelligence poised to reshape the future of learning and work, The Ohio State University announced today an ambitious new initiative to ensure that every student will graduate with the AI proficiencies necessary to compete and lead now.

Launching this fall for first-year students, Ohio State’s AI Fluency initiative will embed AI education into the core of every undergraduate curriculum, equipping students with the ability to not only use AI tools, but to understand, question and innovate with them — no matter their major.

I gather that this was a top-down decision, made without a lot of faculty consultation, and it'll be interesting to see how it works out. Needless to say, there's been a certain amount of academic pushback from around the world…

Meanwhile, we continue to see a trickle of stories about AI stumbles — for example Mark Tyson, "ChatGPT 'got absolutely wrecked' by Atari 2600 in beginner's chess match — OpenAI's newest model bamboozled by 1970s logic", Tom's Hardware 6/9/2025 (and here's the LinkedIn post he's reporting on).

And there's a new term (and initialism) to cover such cases — AJI = "Artificial Jagged Intelligence". This turns out not to mean that AI systems can wound you if not handled carefully, though that's also true.

Lakshmi Varanasi, "AI leaders have a new term for the fact that their models are not always so intelligent", Business Insider 6/7/2025:

  • Google CEO Sundar Pichai says there's a new term for the current phase of AI: "AJI."
  • Pichai said it stands for "artificial jagged intelligence," and is the precursor to AGI.
  • AJI is marked by highs and lows, instances of impressive intelligence alongside a near lack of it.

Google CEO Sundar Pichai referred to this phase of AI as AJI, or "artificial jagged intelligence," on a recent episode of Lex Fridman's podcast.

"I don't know who used it first, maybe Karpathy did," Pichai said, referring to deep learning and computer vision specialist Andrej Karpathy, who cofounded OpenAI before leaving last year.

The cited podcast is here, FWIW.