Moore’s law has been a central pillar of computing for all my life. It is not really a law, more an observation: every 18 months, the number of transistors on a chip doubles. This has created a universe of plenty where, every couple of years, everything could double: performance, memory. My first computer had one 4 MHz 8-bit processor with 64 KB of RAM; my current laptop has two 2.8 GHz 64-bit cores with 8 GB of RAM. Basically a million times more capacity.
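The “million times” figure is a rough order of magnitude; a quick back-of-the-envelope in Python, using only the numbers above (how you combine the ratios into one “capacity” number is of course informal):

```python
# Ratios between the two machines described above.
ram_ratio = (8 * 2**30) / (64 * 2**10)   # 8 GB vs 64 KB of RAM
clock_ratio = (2 * 2.8e9) / (4e6)        # two 2.8 GHz cores vs one 4 MHz core

print(ram_ratio)    # 131072.0  (~10^5 more memory)
print(clock_ratio)  # 1400.0    (~10^3 more raw clock cycles)
```

Factor in the wider word size and the much greater work done per cycle by a modern core, and the overall figure is indeed in the millions.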
The end of Moore’s law has been prophesied for ages. I remember one of my university professors saying that circuits could not have features smaller than 120 nm because of the wavelength of the light used to etch them; nowadays they are at 30 nm. Still, engineers are increasingly hitting walls: processor frequency has stopped increasing at a few GHz, and instead the number of cores has started increasing. But programming multi-core systems is difficult, and Amdahl’s law still holds, so the number of cores has stayed low.
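Amdahl’s law is why piling on cores disappoints: if a fraction p of a program can be parallelised, the speedup on n cores is at most 1 / ((1 − p) + p/n). A minimal sketch in Python (the function name is mine):

```python
def amdahl_speedup(p, n):
    """Upper bound on speedup with n cores when a fraction p
    of the work is parallelisable (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 90% of the work parallelisable, 8 cores give
# nowhere near an 8x speedup, and the curve flattens fast:
for n in (2, 4, 8, 1000):
    print(n, round(amdahl_speedup(0.9, n), 2))
# 2 1.82
# 4 3.08
# 8 4.71
# 1000 9.91
```

The serial 10% caps the speedup at 10×, no matter how much silicon you throw at the other 90%.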
Running code on the graphics card, with its massive array of computing units, speeds up certain types of computation by an order of magnitude, but here again, throwing more silicon at the problem yields diminishing returns. Quantum processors might give another boost for certain classes of problems, but they are not ready for consumer production and only help with specific problems, not to mention that programming these things is a completely different art. Things are just getting harder to improve…
Moore’s law has been quite detrimental to software engineering: why spend years making code more efficient when simply waiting will give you a performance increase? As hardware improvements slow down, we will need to make gains at the software level.
In a way this is exciting news: there have been big improvements in algorithms and compilation techniques in the last ten years, they have just been overshadowed by hardware improvements, and there are certainly plenty more improvements possible. Also, deployed code is generally not highly optimised – fine-tuning code is a complicated process, and it is generally not cost-effective to bother with it. If we were to optimise as aggressively on today’s devices as we used to on 8-bit machines, we could get a large performance improvement.
As the improvements predicted by Moore’s law slow down, the value of software will increase – and this at a time when other problems, like security, are getting harder to ignore. This basically means that cheap software will become increasingly expensive.
The first effect of this situation is that the languages that dominate the next ten years will be ones that can be compiled to efficient native code, running on platforms that can somehow deploy native code. This probably means that we will have another decade dominated by the spawns of C.
The second effect will be that the classical computing stack will be increasingly challenged. Nowadays most devices run some heavily mutated variant of Unix as designed in the 1980s. It works, but it is far from efficient: most of the optimisations implicit in that design are outdated and irrelevant. Various parts of the canonical Unix system have already been challenged: the graphical system (X11), the process-launching infrastructure (inetd etc.). The security model has been augmented a lot, and I would not be surprised to see the networking stack change significantly with the shift to IPv6.
One thing that is bound to increase is the number of devices per person, in particular at home, where the TV is nowadays a respectable computer. Tapping into that pool of underused resources will be increasingly tempting; that was Sony’s vision with the Cell processor, which was probably 20 years too early.
Generally, you should expect innovation to be driven by increased interconnection more than by increased processing power. There is a large number of sensors around you, and connecting them represents a huge opportunity, both in terms of potential features and of possibilities for abuse.
Macintosh SE/30 image Creative Commons Attribution-Share Alike 2.5 Generic.