LLM maybe hits median IQ?
2024-03-06 18:14:43.427428+01 by Dan Lyke 2 comments
I have been trying to figure out what people are seeing in LLMs. I'm like: if I wanted an overconfident 8th grader spewing the sort of bullshit they think would impress their English teacher, but doesn't, I'd find one of those. I'm extremely skeptical of the ability of LLMs to add value to any process. LLMs seem like the late-night TV kitchen appliance: it looks really cool when the shysters shyst with it, but in practice it's not as handy as good sharp tools.
I have often told the tale of talking with a brilliant and fairly well-known mathematician, back when the memory consumption difference between floats and doubles was a thing, about some details of entropy in coordinate spaces used to represent geometry. I said "wait a minute, that doesn't feel right", and we both kinda thought and scribbled for a moment before coming to the same conclusion, but it was plain that we'd gotten to that result through different means: he was doing symbolic manipulation, I was visualizing.
And my visualizing worked in 3 dimensions, his symbolic manipulation could be generalized to more.
In learning square dance choreography, and seeing all of the different ways that people maintain mental state around square dance choreography, I'm noticing similar things: people who track dancers, people who track Xs and Os, people who track resolve points.
So I think that part of what I'm seeing with LLMs and the praise for them is that people whose mental models revolve around a certain sort of language and symbolic manipulation see them as pretty good at that. They don't do well with the mental models that I carry, so I look at their output and roll my eyes.
And then there's the fact that they show potential: some people are seeing that potential carried forward, and some aren't.
Anyway: Maxim Lott: AIs ranked by IQ; AI passes 100 IQ for first time, with release of Claude-3