chatbots as search
2025-03-20 23:52 by Dan Lyke
Probably linked this before, but: Emily Bender, writing in Mystery AI Hype Theater 3000: Information literacy and chatbots as search, makes two strong points. First, that
...a system that is right 95% of the time is arguably more dangerous than one that is right 50% of the time.
For obvious user-acclimation reasons: people stop double-checking a tool that's usually right. And second, that
Setting things up so that you get "the answer" to your question cuts off the user's ability to do the sense-making that is critical to information literacy.
I'm noticing this both from the sorts of people who send me Joe Rogan links as supporting evidence for their theories (like "Russia has every right to Ukraine, and will stop before Poland"), and from people who find the output "charming" and close enough for questions whose answers don't have to be right.
See my previous skepticism about the value of questions that don't have to have right answers, but that's my neurodivergence acting up again.