

I’m more inclined to believe it’s gotten better at being convincing.
And you can tell clients that it’s just made up and not actual confidence, but they will insist that they need it anyways…
That doesn’t justify flat out making shit up to everyone else, though. If a client is told information is made up but they use it anyway, that’s on the client. Although I’d argue that an LLM shouldn’t be in the business of making shit up unless specifically instructed to do so by the client.
It annoys me that ChatGPT flat out lies to you when it doesn’t know the answer, and doesn’t have any system in place to admit it isn’t sure about something. It just makes it up and tells you like it’s fact.
I wouldn’t have subbed to Spotify on my own. I got added to my wife’s family plan. For me the biggest benefit is just discovering new music. I used to have a big MP3 library, but after a couple computer upgrades, it’s kind of disappeared over the years. Having Spotify there has been really convenient for listening to old stuff I’ve lost as well. That said, if my FiL cancels, I probably wouldn’t sub for myself anyway.
I sometimes approach this like I do with students. Using your example, I’d ask it to restate the source, then ask it to read the title of that source directly. If it’s correct, I might ask it to briefly summarize what the source article covers. Then I’d ask it to restate what it told me about the source earlier, and to explain where the inconsistency lies. Usually by this point, the AI is accurately pointing out flaws in its prior logic. Then I ask again if it’s 100% sure it didn’t make a mistake, and it might actually concede that it was wrong. Finally, I tell it to remember how and why it was wrong so it can avoid similar errors in the future. I don’t know if that last part actually works, but it makes me feel better about it.