20 hours ago, g4borg said:
This debate got me interested in what our technical limits actually are at the moment, so I went and did a bit of research. I kept it within current design philosophy, so no quantum computing stuff.
To summarize my findings: physically, it seems we won't get much above 8 GHz with electrons, but for now it's mainly the silicon heat thresholds in CMOS that limit us to around 3.6 GHz.
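To put rough numbers on the heat argument: dynamic power in CMOS scales roughly as P ≈ α·C·V²·f, and pushing frequency usually forces a voltage bump too, so the heat to dissipate climbs much faster than the clock. A back-of-envelope sketch; the activity factor, switched capacitance, and voltages below are illustrative assumptions, not datasheet figures:

```python
# Why heat caps clock speed: dynamic CMOS switching power is roughly
# P ~= alpha * C * V^2 * f, and higher f usually needs higher V too.
# All values are illustrative assumptions, not datasheet figures.

def dynamic_power(activity, capacitance_f, voltage_v, freq_hz):
    """Classic switching-power approximation for CMOS logic."""
    return activity * capacitance_f * voltage_v**2 * freq_hz

base = dynamic_power(activity=0.2, capacitance_f=3e-8, voltage_v=1.2, freq_hz=3.6e9)
fast = dynamic_power(activity=0.2, capacitance_f=3e-8, voltage_v=1.5, freq_hz=8.0e9)

print(f"3.6 GHz @ 1.2 V: ~{base:.0f} W")
print(f"8.0 GHz @ 1.5 V: ~{fast:.0f} W ({fast/base:.1f}x the heat to get rid of)")
```

Barely over twice the clock, but about 3.5x the power: that's the wall being described here.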
Interestingly, the travel time of signals across the motherboard has lately started to affect the limits as well. It doesn't matter how fast the CPU gets if the data flow isn't; we'd basically have to find a way to speed up light.
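To see why board distances matter, here's a quick calculation of how far a signal can travel in a single clock cycle; the 0.5·c propagation speed on a copper trace is a rough rule-of-thumb assumption:

```python
# How far a signal can physically travel in one clock cycle.
C = 2.998e8  # speed of light in vacuum, m/s

for freq_ghz in (3.6, 8.0):
    cycle_s = 1 / (freq_ghz * 1e9)
    vacuum_cm = C * cycle_s * 100
    trace_cm = vacuum_cm * 0.5  # ~half of c on a PCB trace (rough assumption)
    print(f"{freq_ghz} GHz: {vacuum_cm:.1f} cm in vacuum, ~{trace_cm:.1f} cm on a trace")
```

At 3.6 GHz that's only about 4 cm of trace per cycle, and a motherboard is an order of magnitude bigger than that.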
Thankfully, that's only clock speed. There's lots of stuff that influences our progress in efficiency more than that: for example, it's only a matter of time until our applications make proper use of multicore designs, plus more pre-optimization and more prediction techniques. And in gaming, besides sheer graphics intensity, networking techniques are becoming a more important limiting factor for me personally as well.
Why this fluff? Because it kind of shows that our current hardware is quite mature; innovation has started to revolve around things other than clock speed by now. Unfortunately, we can't expect a THz CPU to appear miraculously either, and in terms of miniaturization we'll hit a threshold soon as well: Moore's law is projected to end around 2025.
But in the year 2525, you gotta survive.
A little curiosity: in cryptography they make use of Bremermann's limit. You assume a giant computer with the mass of the Earth, running at the absolute theoretical maximum of computing power per unit of mass, and test whether a cryptographic algorithm could still be broken by brute force in reasonable time. Quite funky.
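The numbers are fun to run. A back-of-envelope sketch, assuming (very generously) one key test per elementary bit operation at Bremermann's limit of roughly c²/h bits per second per kilogram:

```python
# Bremermann's limit for an Earth-mass computer vs. a 256-bit keyspace.
C = 2.998e8           # speed of light, m/s
H = 6.626e-34         # Planck constant, J*s
EARTH_MASS = 5.97e24  # kg

bits_per_sec_per_kg = C**2 / H                        # ~1.36e50
earth_ops_per_sec = bits_per_sec_per_kg * EARTH_MASS  # ~8e74

key_space = 2**256                       # candidate keys for a 256-bit cipher
seconds = key_space / earth_ops_per_sec  # assumes one key test per bit op

print(f"Earth-mass computer: ~{earth_ops_per_sec:.2e} ops/s")
print(f"Exhausting a 256-bit keyspace: ~{seconds:.0f} s (~{seconds/60:.1f} min)")
```

Even that absurd machine needs a couple of minutes for 256 bits, and every extra key bit doubles the time, so anything much bigger is safely out of reach.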
It's not only a problem of the conductors' heating/power limits. More megahertz needs a longer pipeline in the chip, with all the problems a long, poorly optimized pipeline brings (redundant work, mispredictions, flushed speculative requests), all stuff that sometimes makes it do twice the work a shorter-pipeline, lower-MHz chip would do.
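A simple model of the effect: every mispredicted branch flushes the pipeline, and the flush costs roughly one pipeline's worth of cycles, so a deeper pipeline pays more per mistake. A toy sketch with made-up rates (the branch prediction improvements mentioned further down correspond to lowering mispredict_rate):

```python
# Why a deeper pipeline can hurt: a branch misprediction flushes it,
# and the flush penalty grows with pipeline depth. Numbers are
# illustrative assumptions, not measured figures for any real chip.

def effective_ipc(base_ipc, branch_freq, mispredict_rate, flush_penalty_cycles):
    """Average instructions per cycle once misprediction stalls are included."""
    stall_per_instr = branch_freq * mispredict_rate * flush_penalty_cycles
    return base_ipc / (1 + base_ipc * stall_per_instr)

short = effective_ipc(base_ipc=1.0, branch_freq=0.2,
                      mispredict_rate=0.1, flush_penalty_cycles=10)
deep = effective_ipc(base_ipc=1.0, branch_freq=0.2,
                     mispredict_rate=0.1, flush_penalty_cycles=20)

print(f"short pipeline: {short:.2f} IPC, deep pipeline: {deep:.2f} IPC")
```

Double the flush penalty and the deep design loses real throughput per cycle, which is exactly what it was trying to buy back with clock speed.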
In this regard we had a historical example: the P4 Willamette versus the Athlon XP (Barton).
The first iteration of the P4 was so bad, for the reasons I describe above, that the lower-frequency Barton achieved higher IPC at a lower clock speed. That's why AMD basically mocked Intel by calling their CPU the Athlon XP 2500+ when its actual clock speed was around 1833 MHz: the model number signaled performance comparable to a 2500 MHz competitor. All I wrote here is history, not me bashing Intel; two minutes of research can verify it.
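The arithmetic behind the rating is just performance ≈ IPC × clock. A toy comparison; the IPC values here are made-up illustrative numbers, not measured figures for these chips:

```python
# Toy comparison of "performance ~= IPC x clock". The IPC values are
# illustrative assumptions, not benchmarks of the actual CPUs.

chips = {
    "Athlon XP 2500+ (Barton)": {"clock_mhz": 1833, "ipc": 1.10},
    "Pentium 4 class":          {"clock_mhz": 2500, "ipc": 0.80},
}

for name, c in chips.items():
    perf = c["clock_mhz"] * c["ipc"]  # relative performance score
    print(f"{name}: {c['clock_mhz']} MHz x {c['ipc']} IPC ~= {perf:.0f}")
```

With those assumed numbers the two land within a percent of each other, which is the whole point of the PR rating.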
That's the reason Intel let the first iteration of the P4 die fast and started to move (somewhat) toward the AMD approach with the "Core" architecture. Granted, branch prediction improved a lot over the years, which is why we started to see high frequencies again, but it's always a workaround for something that has no real solution if you want to push clock speed higher, even if you "solve" the thermal/power limits of the materials involved.
That's why we have multi-core architectures. Sadly, software optimization has always struggled, since the mass market is oriented toward 2/4-core CPUs, so basically: who cares. I blame AMD for that; for too long they left that Faildozer on the market without releasing competitive multi-core CPUs for the mass market. Let's see what happens now.
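For what "software actually using the cores" looks like at its most basic, here's a minimal sketch using Python's standard multiprocessing module; the workload is a toy stand-in:

```python
# Minimal sketch of spreading a CPU-bound job across all cores.
from multiprocessing import Pool

def heavy(n):
    """Toy CPU-bound task: sum of squares up to n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = [2_000_000] * 8
    with Pool() as pool:           # one worker process per core by default
        results = pool.map(heavy, inputs)
    print(len(results), "chunks finished in parallel")
```

The catch, as the post says, is that most real workloads don't split into independent chunks this cleanly, so the extra cores often sit idle.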
By the way, on the Ryzen topic: as far as I've seen, the motherboards are really struggling to reach decent reliability and performance optimization. It's a real bleeding-edge issue we're seeing right now. Especially ASUS is acting weird with their top-brand motherboards: they've already released something like three BIOS updates in a month, and it's still not enough to say they're off to a good start.
Even so, we're finally seeing something totally worth paying attention to.