LLMonday
2024-09-09 18:39 by Dan Lyke
Baldur Bjarnason: The LLM honeymoon phase is about to end
The usefulness of LLMs was always overblown, but unless the AI vendors discover a new kind of maths to fix the problem, they’re about to have an AltaVista moment.
GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation. Or: yep, LLMs are being used to generate exactly the sort of propaganda masquerading as research that you expected.
RT Matteo Collina @mcollina@fosstodon.org
Finally it happened to me as well: developers complaining that the behavior of my OSS libraries does not match what ChatGPT explains to them. 🤦‍♂️
In the replies, shrimp eating mammal 🦐 @walruslifestyle@octodon.social observes
this is a power game, and openai has the upper hand. if it's not already true, one day there will be "open source developers" who argue that they should modify their project to do what chatgpt says they should do. it'll help adoption, they'll say, it'll help accessibility, they'll say. user first, they'll say.
Which also means that it's going to be interesting to get adoption for new approaches to problems, because the frameworks by which people "understand" concepts will be limited by LLM behavior (we already see this with people who use LLMs to "summarize" documents, because whatever the LLM is doing, it most certainly is not summarizing).
Governor Newsom seeks to harness the power of GenAI to address homelessness, other challenges. Given that Newsom has gone full-on "let's inflict more trauma on people experiencing trauma responses", this bodes ill. (Via)
HyperWrite's Matt Shumer announced Reflection 70B, claiming it was the top open-source model. Others tested this new model — but Shumer's claims didn't check out. Reflection 70B had similar benchmark scores to Facebook's LLaMA 3 70B — and lower than LLaMA 3.1, which Shumer had said it was based on. Reddit r/LocalLLaMA concurred — Reflection 70B was just LLaMA 3 with some extra tuning. [Twitter, archive; Reddit; VentureBeat]
Further testing suggested that Reflection 70B was, in fact, a front-end to Anthropic’s Claude 3.5 Sonnet using LLaMA 3 weights. HyperWrite filtered the string “Claude” in an attempt to hide this. [Twitter, archive; Twitter, archive; Reddit]
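For flavor, here's a minimal sketch in Python of the kind of filtering testers described: a "local model" front-end that actually forwards prompts to a hosted API and scrubs the upstream model's name from the reply. The endpoint, field names, and function are invented for illustration; this is not HyperWrite's actual code.

```python
import re
import requests

# Hypothetical upstream API endpoint; invented for this sketch.
UPSTREAM_URL = "https://api.example.com/v1/messages"

def generate(prompt: str) -> str:
    # Forward the prompt to the hosted model instead of running local weights.
    resp = requests.post(UPSTREAM_URL, json={"prompt": prompt}, timeout=30)
    text = resp.json()["completion"]
    # Scrub the tell-tale string so replies can't name their real source.
    return re.sub(r"claude", "", text, flags=re.IGNORECASE)
```

A filter this blunt is reportedly also part of what gave the game away: when testers asked the model to output the word, the reply came back with "Claude" conspicuously missing.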