Programmers of my generation grew up with 8-bit computers. By today’s standards, the resources available on those machines were very scarce: memory was measured in kilobytes, processor speed in a few megahertz. This meant that very little was kept in memory; instead, much was generated on the fly.
The typical example was the content of the screen, which was not buffered, but instead composited from various sources, typically the video memory and the sprite registers. This also meant that you could change these sources during rendering to get richer results (or bugs). It also meant that a screen capture was difficult: you could not just grab the content of the memory; instead you needed to reconstruct the state of the screen.
Making sure your code was in sync with the screen’s refresh rate was difficult, which is why newer systems use one or more buffers to store the intermediate renderings: the image on the screen is never live, but instead the one recorded a few milliseconds earlier.
Having enough memory to store things opened up many possibilities, and loading everything into one place to process it is much easier, but this is also a trap: however much RAM you have, there are datasets that won’t fit. Even with virtual memory, automatic garbage collection, and similar mechanisms, you still need to have a clue whether your data will fit in main memory.
Most systems have mechanisms to handle data structures that are ephemeral, partially in memory, or fragmented: generators, iterators, memory-mapped containers, and ropes instead of strings. While it certainly does not make sense to use these all the time, knowing about them is really useful if you want to process non-trivial amounts of data.
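To make this concrete, here is a minimal Python sketch of two of those mechanisms: processing a file in fixed-size chunks (so memory use stays constant regardless of file size) and searching it through a memory map (so the OS pages data in on demand). The function names and the chunk size are my own illustrative choices, not from any particular library.

```python
import mmap
import os
import tempfile

def count_lines_streaming(path, chunk_size=1 << 16):
    """Count lines by reading fixed-size chunks, never the whole file."""
    count = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            count += chunk.count(b"\n")
    return count

def find_mmap(path, needle):
    """Search a file via a memory map: pages are loaded only as touched."""
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            return mm.find(needle)

# Demonstration on a small throwaway file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"alpha\nbeta\ngamma\n")
    name = tmp.name

print(count_lines_streaming(name))   # 3
print(find_mmap(name, b"beta"))      # 6
os.remove(name)
```

The point is not these particular functions but the shape of the code: at no point does the whole dataset need to exist as one object in RAM, which is exactly the discipline the old machines forced on you.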
Maybe my generation was a privileged one, as we were not lulled by starry-eyed teachers into believing that these problems would be handled by garbage collection and high-level languages. Come to think of it, I had those teachers, I just never believed them…
Commodore C64 startup animation – public domain