Picture this: it’s January 19th, 2038, at exactly 03:14:07 UTC. Somewhere in a data center, a Unix system quietly ticks over its internal clock counter one more time. But instead of moving forward one second, the signed 32-bit counter overflows to its most negative value, and the system suddenly believes it’s December 13th, 1901.
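For anyone who hasn’t watched the wraparound happen, here’s a minimal C sketch of it. It assumes a host with a 64-bit time_t (any modern 64-bit Linux or macOS) so the wrapped value can still be formatted; the cast through int32_t simulates the old 32-bit counter.

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void) {
    /* The largest value a signed 32-bit time counter can hold:
       2^31 - 1 seconds after the Unix epoch. */
    int64_t t = INT32_MAX;

    time_t before = (time_t)t;
    /* Adding one more second wraps a 32-bit counter; the int32_t cast
       reproduces that wrap on a 64-bit host (two's complement in practice). */
    time_t after = (time_t)(int32_t)(t + 1);

    char buf[64];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&before));
    printf("last valid tick:  %s\n", buf);   /* 2038-01-19 03:14:07 UTC */

    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&after));
    printf("one second later: %s\n", buf);   /* 1901-12-13 20:45:52 UTC */
    return 0;
}
```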
The problem doesn’t concern me as much as how bad we’ve become at maintaining shit that already works.
There is also the fact that back during Y2K, we weren’t nearly as reliant on computers as we are now.
There was also a worldwide effort to fix any potential problems before they happened.
COBOL mavens burned the candle at both ends and made bank, while making banks work.
Many were old enough to retire after that.
The 2038 problem will be easier to fix because many systems are already 64-bit; 32-bit systems could only handle 4 GB of RAM, and programs needed more, so the industry moved on long ago.
The remaining risk is critical software that still runs on 32-bit systems and has to be fixed before that date.
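If you want to check whether a given build is affected, the test is just the width of time_t. A quick sanity check in C (on 32-bit glibc 2.34+, you can opt into a 64-bit time_t by compiling with -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64; 64-bit platforms already have one):

```c
#include <stdio.h>
#include <time.h>

int main(void) {
    /* A time_t of 8+ bytes can represent dates far past 2038;
       a 4-byte signed time_t wraps on 2038-01-19 03:14:07 UTC. */
    if (sizeof(time_t) >= 8)
        printf("time_t is %zu bytes: good past 2038\n", sizeof(time_t));
    else
        printf("time_t is %zu bytes: wraps in January 2038\n", sizeof(time_t));
    return 0;
}
```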
I don’t understand why people always say that. The Pentium Pro could handle 64 GB even though it was a 32-bit CPU: it had a 36-bit address bus. Later models are the same.
People say it because it was a Windows limitation, not a computing limitation. Windows Server supported more, but for consumers it wasn’t easily doable. I believe there are modern workarounds, though. The real limit is how much memory a single application can address at any given time.
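That per-application limit falls straight out of pointer width: a 32-bit pointer can only name 2^32 distinct byte addresses, i.e. 4 GiB of virtual address space per process, even when PAE lets the kernel see a 36-bit (64 GB) physical bus. A little C sketch that reports the limit on whatever machine runs it:

```c
#include <stdio.h>

int main(void) {
    /* A pointer can name at most 2^(pointer bits) byte addresses, so a
       32-bit process is capped at 4 GiB of virtual address space no
       matter how much physical RAM PAE exposes to the kernel. */
    unsigned bits = (unsigned)(sizeof(void *) * 8);
    printf("pointer width: %u bits\n", bits);

    if (bits < 64) {
        unsigned long long space = 1ULL << bits;   /* bytes */
        printf("per-process address space: %llu GiB\n", space >> 30);
    } else {
        /* 2^64 bytes would overflow unsigned long long, so just report it. */
        printf("per-process address space: 16 EiB\n");
    }
    return 0;
}
```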