We’re not even remotely close. The promise of AGI is part of the AI hype machine, and taking it seriously is playing right into its hands.
Irrelevant at best, harmful at worst 🤷
How do you know we’re not remotely close to AGI? Do you have any expertise on the issue? And expertise is not “I can download Python libraries and use them”; it is “I can explain the mathematics behind what is going on, and understand the technical and theoretical challenges.”
Engineer here with a CS minor in case you care about ethos: We are not remotely close to AGI.
I loathe Python irrationally (and I guess I’m a masochist who likes to reinvent the wheel programming-wise, lol), so I’ve written my own neural nets from scratch a few times.
Most common models are trained by gradient descent, which only works when you have a specific desired response for given inputs. You use the difference between the desired output and the actual output to calculate a change in weights that reduces that error.
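To make that concrete, here’s a minimal sketch of the idea (toy data, a single linear layer, mean squared error; none of this is any particular production model):

```python
import numpy as np

# Toy gradient descent: learn weights w so that X @ w matches the desired y.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))            # 100 examples, 4 input features
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w                           # the "desired outcomes"

w = np.zeros(4)                          # weights to be learned
lr = 0.1
for _ in range(500):
    pred = X @ w                         # actual outcome
    error = pred - y                     # difference from the desired outcome
    grad = X.T @ error / len(X)          # gradient of mean squared error w.r.t. w
    w -= lr * grad                       # nudge weights to shrink the error
```

After a few hundred steps `w` converges to `true_w`; the whole procedure presupposes you can say exactly what output each input should have produced.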
This creates two major roadblocks for AGI: input size limits, and determinism.
The weight matrices are sized for a fixed number of inputs. Unfortunately you can’t just add a new input unit and assume the weights will stay nearly the same; instead you have to retrain the entire network. (The research area concerned with reusing trained weights for new inputs and tasks is called transfer learning, if you want to learn more.)
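A quick illustration of what goes wrong (the shapes are made up; the point is only the dimensionality mismatch):

```python
import numpy as np

# The first weight matrix of a dense net is shaped (input_size, hidden_size),
# so it is hard-wired to a specific input dimensionality.
W1 = np.random.normal(size=(4, 16))   # trained for exactly 4 inputs

x = np.random.normal(size=5)          # same data plus one new input feature
# x @ W1 raises ValueError: shapes (5,) and (4,16) not aligned.
# There's no principled way to bolt on a 5th row of weights either:
# the existing rows were tuned jointly, so in practice you retrain.
```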
This input constraint works against AGI because it means a network trained like this cannot accept an input larger than a certain size. That’s a problem, since the illusion of memory that LLMs like ChatGPT have comes from running the entire conversation through the net. It’s also a problem from a size and training-time perspective, as increasing the input size blows up compute and training cost (for transformer-style models, the cost of attention alone grows quadratically with context length).
Point is, current models can only simulate memory by literally holding onto all the information and reprocessing all of it for each new word, which means their memory is capped unless you retrain the entire net to know the answers you want. (And it’s slow af.) Doesn’t sound like a mind to me…
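In pseudocode, chat “memory” looks something like this (`generate` and `tokenize` are stand-ins for a real model and tokenizer, not any actual API):

```python
MAX_CONTEXT = 4096  # tokens; stand-in for the net's fixed input size

history = []

def chat(user_message, generate, tokenize):
    # All "memory" is just re-feeding the whole transcript every turn.
    history.append(f"User: {user_message}")
    prompt = "\n".join(history)
    while len(tokenize(prompt)) > MAX_CONTEXT:
        history.pop(0)                 # oldest turns simply fall out of "memory"
        prompt = "\n".join(history)
    reply = generate(prompt)           # reprocesses ALL prior text every turn
    history.append(f"Assistant: {reply}")
    return reply
```

Every turn pays to reprocess the entire conversation, and anything that falls out of the window is simply gone.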
Now, determinism is the real problem for AGI from a cognitive standpoint. The neural nets you’ve probably used are not thinking… at all. They are literally just complicated predictive algorithms, like linear regression. I’m dead serious. It’s basically regression, just in a very high-dimensional vector space.
ChatGPT does not think about its answer. It doesn’t have any sort of object identification or thought delineation, because it doesn’t have thoughts. You train it on a bunch of text and have it attempt to predict the next word. If it’s off, you do some math to figure out what weight modifications would have led it to a better answer.
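The training objective itself fits in a few lines. Here’s a toy version with a single weight matrix mapping a context vector to a distribution over the vocabulary (sizes and learning rate are illustrative):

```python
import numpy as np

def train_step(W, context_vec, next_word_id, lr=0.1):
    """One step of next-word training: raise the probability of the word
    that actually came next, lower everything else. W is (dim, vocab_size)."""
    logits = context_vec @ W
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                      # softmax over the vocabulary
    # Cross-entropy gradient: predicted distribution minus one-hot target.
    grad_logits = probs.copy()
    grad_logits[next_word_id] -= 1.0
    W -= lr * np.outer(context_vec, grad_logits)   # "the math" that fixes weights
    return -np.log(probs[next_word_id])            # surprise at the true next word

dim, vocab_size = 64, 1000
W = np.random.normal(scale=0.01, size=(dim, vocab_size))
loss = train_step(W, np.random.normal(size=dim), next_word_id=42)
```

That update is all the “learning” there is; nothing in it represents objects or thoughts.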
All these models do is what they were trained to do. They were trained to predict human responses, so yeah, they sound pretty human. They were trained to reproduce answers from Stack Overflow and Reddit etc., so they can answer those questions relatively well. And hey, it is kind of cool that they can even answer some questions they weren’t trained on, because those are similar enough to the questions they were trained on… but it’s not thinking. It isn’t doing anything. The program is just multiplying the input by numbers that were previously set during training to find the most likely next word.
This is why LLMs can’t do math. They don’t actually see the numbers; they don’t know what numbers are. They don’t know anything at all, because they’re incapable of thought. Instead there are simply patterns in which certain numbers show up, and the model gets trained on some of them, but you can get it to make incredibly simple math mistakes by phrasing the math slightly differently, or just by surrounding it with different words, because the model was never trained for that scenario.
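Part of the reason is tokenization. Here’s a toy longest-match tokenizer (the vocabulary is invented and real subword tokenizers differ, but they chunk numbers just as arbitrarily):

```python
def greedy_tokenize(text, vocab, max_len=4):
    """Longest-match-first tokenizer: a crude stand-in for subword tokenization."""
    tokens, i = [], 0
    while i < len(text):
        for length in range(max_len, 0, -1):
            if text[i:i + length] in vocab:
                tokens.append(text[i:i + length])
                i += length
                break
        else:
            tokens.append(text[i])   # fall back to a single character
            i += 1
    return tokens

vocab = {"12", "123", "+", "="}
print(greedy_tokenize("12+12", vocab))     # ['12', '+', '12']
print(greedy_tokenize("123+1234", vocab))  # ['123', '+', '123', '4']
```

The model never sees “1234” as a quantity; it sees the chunks ['123', '4'], whose boundaries have nothing to do with place value.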
Models can only “know” as much as what was fed into them, and hey, sometimes those patterns extend, but a lot of the time they don’t. And you can’t just tell it “you were wrong,” because the model isn’t plastic (capable of changing from inputs alone). You have to retrain it with the correct response in mind to get it to “learn,” which again takes time and really isn’t learning or intelligence at all.
Now, there are some more exotic neural network architectures that could surpass these limitations.
Currently I’m experimenting with spiking neural nets (SNNs), which are much more capable of transfer learning, more closely model biological neurons, and have other cool features like handling temporal changes in input well.
However, these networks face significant obstacles and have less research behind them, because they only run well on specialized neuromorphic hardware (they’re meant to mimic biological neurons, which all fire in parallel) and you kind of have to train them slowly.
You can do some tricks to use gradient descent but doing so brings back the problems of typical ANNs (though this is still possibly useful for speeding up ANNs by converting them to SNNs and then building the neuromorphic hardware for them).
SNNs with time-based learning rules (typically some form of STDP, which mimics Hebbian learning as in biological neurons) are basically the only kinds of neural nets that are even remotely capable of having thoughts and learning (changing weights) in real time. Capable as in “this could have discrete, time-dependent waves of continuous, self-modifying spike patterns, which could theoretically be thoughts,” not as in “we can make something that thinks.”
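For reference, the pair-based STDP rule is tiny; the interesting part is that the weight change depends only on local spike timing, with no global error signal or desired output required (constants here are illustrative):

```python
import numpy as np

# Pair-based STDP: if the presynaptic neuron fires shortly BEFORE the
# postsynaptic one, strengthen the synapse (it helped cause the spike);
# if it fires after, weaken it.
A_PLUS, A_MINUS = 0.01, 0.012    # learning amplitudes (illustrative)
TAU = 20.0                       # time constant in ms (illustrative)

def stdp_dw(t_pre, t_post):
    dt = t_post - t_pre          # ms between pre- and post-synaptic spikes
    if dt > 0:                   # pre before post: causal, potentiate
        return A_PLUS * np.exp(-dt / TAU)
    else:                        # post before (or with) pre: depress
        return -A_MINUS * np.exp(dt / TAU)

w = 0.5
w += stdp_dw(t_pre=100.0, t_post=105.0)   # pre leads by 5 ms: dw ≈ +0.0078
```

Because the rule fires on every spike pair as it happens, the network’s weights change continuously while it runs, which is exactly the real-time learning that gradient-descent nets lack.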
Like, these neural nets are good with sensory input, and that’s about as far as we’ve gotten (hyperbole, but not by much). But these networks are still fascinating, and they do help us test theories about how the human brain works, so maybe we’ll eventually make a genuinely intelligent being with them. That day isn’t even on the horizon right now, though.
In conclusion, we are not remotely close to AGI. Current models that seem to think are verifiably not thinking and are incapable of it from a structural standpoint. You cannot make an actual thinking machine using the current mainstream model architectures.
The closest alternative that might be able to do this (as far as I’m aware) is relatively untested and difficult to prototype (trust me, I’m trying). Furthermore, the requirements of learning and thinking largely prohibit the use of gradient descent or similar algorithms, meaning training must be done on a much more rigorous and time-consuming basis that is not economically favorable. Ergo, we’re not even all that motivated to move toward AGI territory.
Lying and saying we’re close to AGI when we aren’t at all close, however, is economically favorable, which is why you get headlines like this.
We’re not even remotely close.
That’s just one side of a coin whose flip side claims AGI is right around the corner. The truth is, you couldn’t possibly know either way.
The truth is, you couldn’t possibly know either way.
I think the argument is that we’re not remotely close when considering the specific techniques used by the current generation of AI tools. Of course someone could make a new discovery any day and achieve AGI, but that’s a different discussion.
That’s true in a somewhat abstract way, but I just don’t see any evidence of the claim that it is just around the corner. I don’t see what currently existing technology can facilitate it. Faster-than-light travel could also theoretically be just around the corner, but it would surprise me if it was, because we just don’t have the technology.
On the other hand, the people who push the claim that AGI is just around the corner usually have huge vested interests.
The path to AGI seems inevitable - not because it’s around the corner, but because of the nature of technological progress itself. Unless one of two things stops us, we’ll get there eventually:
- Either there’s something fundamentally unique about how the biological brain processes information - something that cannot, even in principle, be replicated in silicon,
- Or we wipe ourselves out before we get the chance.
Barring those, the outcome is just a matter of time. This argument makes no claim about timelines - only trajectory. Even if we stopped AI research for a thousand years, it’s hard to imagine a future where we wouldn’t eventually resume it. That’s what humans do: improve our technology.
The article points to cloning as a counterexample, but that’s not a technological dead end; it’s a moral boundary. If one thinks we’ll hold that line forever, I’d call that naïve. When it comes to AGI, there’s no moral firewall strong enough to hold back the drive toward it. Not permanently.
something that cannot, even in principle, be replicated in silicon
As if silicon were the only technology we have to build computers.
Did you genuinely not understand the point I was making, or are you just being pedantic? “Silicon” obviously refers to current computing substrates, not a literal constraint on all future hardware. If you’d prefer I rewrite it as “in non-biological substrates,” I’m happy to oblige - but I have a feeling you already knew that.
And why is “non-biological” a limitation?
I haven’t claimed that it is. The point is, the only two plausible scenarios I can think of where we don’t eventually reach AGI are: either we destroy ourselves before we get there, or there’s something fundamentally mysterious about the biological computer that is the human brain - something that allows it to process information in a way we simply can’t replicate any other way.
I don’t think that’s the case, since both the brain and computers are made of matter, and matter obeys the laws of physics. But it’s at least conceivable that there could be more to it.
I personally think the additional ingredient that modern approaches miss (suppose it’s a kind of energy) is the sheer amount of entropy a human brain gets: plenty of many-times-duplicated sensory signals with pseudo-random fluctuations. I don’t know how one can use lots of entropy to replace lots of computation (OK, I know what the Monte Carlo method is, just not how it applies to AI here), but superficially this seems to be the route that will be taken at some point.
On your point - I agree.
I’d say we might reach AGI soon enough, but it will be impractical to use compared to a human.
Matching the brain’s efficiency is still very far away, because the human brain has undergone, so to speak, an optimization/compression powered by the energy of evolution since the beginning of life on Earth.
Ummm no? If moneyed interests want it, then it happens. We have absolutely no control over whether it happens. Did we stop Recall from being forced down our throats with Windows 11? Did we stop Gemini from being forced down our throats?
If capital wants it capital gets it. :(
Couldn’t we have a good old-fashioned Butlerian Jihad?