The End of Moore’s law…

Macintosh SE/30

Moore’s law has been a central pillar of computing for all my life. It is not really a law, more an observation: roughly every two years, the number of transistors in a chip doubles. This has created a universe of plenty where, every couple of years, everything could double: performance, memory. My first computer had one 4 MHz 8-bit processor with 64 KB of RAM; my current laptop has two 2.8 GHz 64-bit cores with 8 GB of RAM. Basically a million times more capacity.

The end of Moore’s law has been prophesied for ages. I remember one of my university professors saying that circuits could not have features smaller than 120 nm because of the wavelength of the light used to etch them; nowadays they are around 30 nm. Still, engineers are increasingly hitting walls: processor frequency has stopped increasing at a few gigahertz, and instead the number of cores has started growing. But programming multi-core systems is difficult and Amdahl’s law still holds – the serial part of a program limits how much extra cores can help – so the number of cores has stayed low.
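
To make Amdahl’s point concrete, here is a minimal sketch of the formula (plain Python, with purely illustrative numbers): if a fraction p of a program can run in parallel, n cores give a speedup of at most 1 / ((1 - p) + p / n).

    def amdahl_speedup(p, n):
        """Maximum speedup when a fraction p of the work is parallelisable
        and runs on n cores; the remaining (1 - p) stays serial."""
        return 1.0 / ((1.0 - p) + p / n)

    # Even with 95% of the code parallelisable, the serial 5% caps the gain:
    for cores in (2, 4, 8, 64):
        print(cores, round(amdahl_speedup(0.95, cores), 1))
    # 2 -> 1.9, 4 -> 3.5, 8 -> 5.9, 64 -> 15.4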

Running code on the graphics card, with its massive array of computing units, improves certain types of computation by an order of magnitude, but here again, throwing more silicon at the problem yields diminishing returns. Quantum processors might give another boost for certain classes of problems, but they are not ready for consumer production and only help with specific problems, not to mention that programming these things is a completely different art. Things are just getting harder to improve…
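
As a rough illustration of the kind of workload that benefits, here is a minimal sketch comparing the same element-wise computation on the CPU with NumPy and on the GPU with the CuPy library; it assumes a CUDA-capable card, and the array size is purely illustrative, not a benchmark.

    import numpy as np
    import cupy as cp  # assumption: a CUDA-capable GPU with CuPy installed

    n = 10_000_000
    x_cpu = np.random.rand(n)

    # CPU: the operation runs on a handful of general-purpose cores.
    y_cpu = np.sqrt(x_cpu * x_cpu + 1.0)

    # GPU: the same data-parallel operation is spread over thousands of small units.
    x_gpu = cp.asarray(x_cpu)          # copy the data into GPU memory
    y_gpu = cp.sqrt(x_gpu * x_gpu + 1.0)
    cp.cuda.Stream.null.synchronize()  # wait for the GPU to finish
    assert np.allclose(y_cpu, cp.asnumpy(y_gpu))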

Moore’s law has been quite detrimental to software engineering: why spend years making code more efficient when simply waiting will give you a performance increase? As hardware improvements slow down, we will need to make our gains at the software level.

In a way this is exciting news: there have been big improvements in algorithms and compilation techniques in the last ten years, they have just been overshadowed by hardware improvements, and there are certainly plenty more improvements possible. Also, deployed code is generally not highly optimised – fine-tuning code is a complicated process and it is generally not cost-effective to bother with it. If we were to optimise as aggressively on today’s devices as we used to on 8-bit machines, we could get a large performance improvement.
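
As a toy illustration of how much headroom unoptimised code leaves on the table (my example, not from the original post): the same sum of squares written as a plain interpreted loop and then pushed into optimised native code via NumPy, which typically shows an order-of-magnitude difference on the same hardware.

    import timeit
    import numpy as np

    values = list(range(1_000_000))
    array = np.array(values, dtype=np.int64)

    def naive_sum_of_squares(xs):
        # Straightforward interpreted loop, the way much deployed code is written.
        total = 0
        for x in xs:
            total += x * x
        return total

    def vectorised_sum_of_squares(arr):
        # Same arithmetic, pushed down into optimised native code.
        return int(np.dot(arr, arr))

    assert naive_sum_of_squares(values) == vectorised_sum_of_squares(array)
    print("naive:     ", timeit.timeit(lambda: naive_sum_of_squares(values), number=10))
    print("vectorised:", timeit.timeit(lambda: vectorised_sum_of_squares(array), number=10))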

As the improvements predicted by Moore’s law slow down, the value of software will increase, at a time when other problems like security are getting harder to ignore. This basically means that cheap software will become increasingly expensive.

The first effect of this situation is that the languages that will dominate in the next ten years will be the ones that can be compiled to efficient native code, and the platforms that can somehow deploy native code. This probably means that we will have another decade dominated by the spawn of C.

The second effect will be that the classical computing stack will be increasingly challenged. Nowadays most devices run some heavily mutated variant of Unix as designed in the 80s. It works, but it is far from efficient. Most of the optimisations that are implicit in the design are outdated and irrelevant. Various parts of the canonical Unix system have already been challenged: the graphical system (X11), the process launching infrastructure (init, cron, inetd, etc.). The security model has been augmented a lot, and I would not be surprised to see the networking stack change significantly with the shift to IPv6.

One thing that is bound to increase is the number of devices per person, in particular at home, where the TV is nowadays a respectable computer. Tapping into that pool of underused resources will be increasingly tempting; that was Sony’s vision with the Cell processor, which was probably 20 years too early.

Generally, you should expect innovation to be driven by increased interconnection more than by increased processing power: there is a large number of sensors around you, and connecting them represents a huge opportunity, both in terms of potential features and of possibilities for abuse.

Macintosh SE/30 image Creative Commons Attribution-Share Alike 2.5 Generic.

One thought on “The End of Moore’s law…”

  1. Processor progress may have slowed down, but SSDs have removed (well, pushed much farther) the hard-disk bottleneck. Network speed is improving.

    Compilers may have improved, but I see enterprise software that wastes hardware AND compiler improvements (SAP BI 4), and is slower at each release. Even Windows 7 is often slower than XP. We have much easier targets for optimisation before getting rid of the Unix philosophy.

    I sometimes dream of a big problem in East Asia that would make new computers rare and very expensive for a time: we would have a few years to improve the software on the current platforms.

    Anyway, we have reached a point where the current speed is good enough for most applications. I know, we have been saying that for 20 years. It was always true, but we were still discovering new applications for computers. Now, since people will not be creating 3D movies at home, I suppose we are seeing the end of that too.

    I don’t think that we’ll see a time when your toaster or your TV will lend its unused processor to the fridge, the Blu-ray player or the smartphone:
    1) this adds a big layer of complexity in the design (security!)
    2) this will not improve the sales of TVs in any way
    3) the bottleneck will be the network, like always, never the processor
    3b) even in big enterprises, I have never seen the (huge) combined processor power of all PCs used to process something during the night or the lunch break; high bandwidth between dedicated hardware is the most efficient
    3c) well, perhaps Google does it? :-)
    4) too much diversity (endianness, OS…)
    5) the processors in the TV or the fridge are lame compared to the ones in the laptop or the smartphone
    6) even if you need your distributed processor network, it will always be easier and probably cheaper to buy another mighty processor in a small box than to update the TV, the fridge and so on; or just buy a bit of processor time in the cloud
    7) I don’t think that all these connected objects will want to take part in a home ecosystem; they’ll obey their masters and send the data into the cloud too, which is much easier for most consumers.

