Programmers of my generation grew up with 8-bit computers. By today’s standards, the resources available on these machines were very scarce: memory was measured in kilobytes, processor speed in a few megahertz. This meant that very little was kept in memory; most things were instead generated on the fly.
The typical example was the content of the screen, which was not buffered, but instead composited from various sources, typically the video memory and the sprite registers. This also meant that you could change these sources during rendering to get richer results (or bugs). It also meant that a screen capture was something difficult: you could not just grab the content of the memory; instead you needed to reconstruct the state of the screen.
Making sure your code was in sync with the screen’s refresh rate was difficult. This is why newer systems use one or more buffers to store the intermediate renderings: the image on the screen is never live, but instead the one recorded a few milliseconds earlier.
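The buffering described above can be sketched in a few lines. This is a toy illustration, not any real graphics API: all names (`front`, `back`, `swap`) are made up for the example. The next frame is composed entirely off-screen, then swapped in as a whole, so the "screen" never shows a half-rendered image.

```python
# Toy double-buffering sketch: draw into an off-screen back buffer,
# then swap it with the front buffer in one step. All names here are
# illustrative, not any particular graphics API.

WIDTH, HEIGHT = 8, 4

def blank():
    return [["." for _ in range(WIDTH)] for _ in range(HEIGHT)]

front = blank()   # what the "screen" currently shows
back = blank()    # where the next frame is composed

def draw_frame(buf, tick):
    # compose the next frame entirely off-screen
    buf[tick % HEIGHT][tick % WIDTH] = "#"

def swap():
    # the only moment the visible image changes: one atomic flip
    global front, back
    front, back = back, front

for tick in range(3):
    draw_frame(back, tick)
    swap()            # the flip happens "between refreshes"
    back = blank()    # start the next frame from scratch
```

On an 8-bit machine there was no such back buffer: the raster beam read video memory directly, so any change you made mid-frame was immediately visible.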
Having enough memory to store things made a lot of things possible, and loading everything into one place to process it is much easier, but this is also a trap: however much RAM you have, there are datasets that won’t fit. Even with virtual memory, automatic garbage collection and similar mechanisms, you still need to have a clue whether your data will fit in main memory.
Most systems have mechanisms to handle data structures that are ephemeral, partially in memory, or fragmented: generators, iterators, memory-mapped containers, and ropes instead of strings. While it certainly does not make sense to use these all the time, knowing about them is really useful if you want to process non-trivial amounts of data.
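A generator is the simplest of these mechanisms. The sketch below, with a made-up file name and format, streams values out of a file one line at a time, so memory use stays constant however large the file grows:

```python
# Sketch: summing values from a file without loading it all into
# memory. The generator yields one parsed value at a time; the file
# name and its one-number-per-line format are invented for the example.
import os
import tempfile

def values(path):
    with open(path) as f:
        for line in f:          # the file is read lazily, line by line
            yield int(line)

# write a small sample file to demonstrate
path = os.path.join(tempfile.gettempdir(), "numbers.txt")
with open(path, "w") as f:
    for i in range(1000):
        f.write(f"{i}\n")

total = sum(values(path))       # streams through the file: O(1) memory
```

The same idea scales from a thousand lines to a terabyte; only the iteration takes longer, not the memory footprint.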
Maybe my generation was a privileged one, as we were not soothed by starry-eyed teachers into believing that those problems would be handled by garbage collection and high-level languages. Come to think of it, I had those teachers, I just never believed them…
Commodore C64 startup animation – public domain
2 thoughts on “Ephemeral video”
The people telling us that the GC would take care of everything were right: I’m playing with a huge Java-based SAP product, and the latest versions need 16–32 GB and a quad-core to behave properly, while all previous versions from 2002 to 2011 were okay on a 1 GB desktop, with the same basic features.
The experts on this monster say that if not enough memory is allowed, the threads will spend more time garbage-collecting memory than doing any work at all.
Anyway, I’d take a GC language anytime over the old leaky ones. In my field, I don’t have the knowledge to deal with memory better than the Python or Java runtime does. I’ll stick to application-layer optimisations (reducing what I need to put in memory). That’s maybe the same problem, on the next level…
Problem is: languages like Java or Python also leak memory, only people are usually not aware of it.
I’m using one of the old leaky languages (C++), and honestly memory leaks are not a significant problem: good tools and good discipline scale better than garbage collection. This is probably why the new batch of languages (Rust, Swift) avoid garbage collection and enforce the discipline instead…