in which the service admitted to “a catastrophic error of judgement”
It’s fancy text completion - it does not have judgement.
The way he talks about it shows he still doesn’t understand that. It doesn’t matter that you tell it something in ALL CAPS, because that is no different from any other text.
Are you aware of generalization, and of the model’s ability to infer things and work with facts in a highly abstract way? That might not necessarily be judgement, but it is definitely more than just completion. If a model were capable of only completion (i.e. suggesting only the exact text strings present in its training set), it would suffer from heavy underfitting in AI terms.
Completion is not the same as only returning the exact strings in its training set.
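To make that concrete, here is a toy sketch (my own hypothetical illustration, not from any paper mentioned in this thread): even a trivial bigram completion model, trained on just two sentences, can emit a sentence that appears in neither. Producing novel strings does not require anything beyond completion:

    # Toy bigram "completion" model (hypothetical example)
    from collections import defaultdict

    corpus = ["the cat sat on the mat", "the dog sat on the log"]

    # word -> set of words observed immediately after it in training
    follows = defaultdict(set)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            follows[a].add(b)

    # Every bigram in this sentence was seen during training, so pure
    # next-word completion can generate it, yet the full string appears
    # in neither training sentence.
    novel = "the cat sat on the log"
    pairs = list(zip(novel.split(), novel.split()[1:]))
    print(all(b in follows[a] for a, b in pairs))  # True
    print(novel in corpus)                         # False

The point is narrow: “completion” already covers recombining seen pieces into unseen wholes, so “only exact training strings” is a strawman.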
LLMs don’t really seem to display true inference or abstract thought, even when it looks that way. A recent Apple paper demonstrated this quite clearly.
Coming up with ever vaguer terms to downplay it is missing the point. The point is simple: it’s able to solve complex problems and do very impressive things that even humans struggle with, in very little time. It doesn’t really matter what we consider true abstract thought or true inference. If that is something humans do, then what the model does might very well be more powerful than true abstract thought, because it can solve more complex problems and perform more complex pattern matching.
You mean like a calculator does?