There is a problem, but it seems to be caused mostly by two factors:
1. Business wants software now, written ASAP rather than after optimization, on the assumption that computers have almost infinite power. That may even work, but only as long as a single such badly written program is running at a time. This is now the development method not just of some bloated CAD package, but of whole operating systems. The trend then spills over into open source, which has always been under-optimized because most developers have computers much faster than average. The real trouble will start when they try to fix security with forced virtualization instead of firing the **** who routinely accesses arrays in C by walking a pointer across another array's boundary (a sketch of that anti-pattern follows this list). I spotted that in a scientific package that costs about 8K € per user.
2. Most of the third-degree Computer Science students I have met have no idea about elementary computer architecture: registers, assembler, memory-access methods. But hey, they can program in C# or JS. To use a singly linked list they will pull an 80 MB library into the project (a linked-list sketch also follows below).
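To make the pointer complaint concrete, here is a minimal sketch of the anti-pattern; the arrays and values are my own invention, not taken from the package in question. Reaching one array by overrunning a pointer into another may even print the "right" numbers on a given compiler and ABI, but it is undefined behavior and exactly the kind of code that breaks under a new compiler, a sanitizer, or a hardened runtime.

```c
#include <stdio.h>

int main(void) {
    double a[4] = {1.0, 2.0, 3.0, 4.0};
    double b[4] = {5.0, 6.0, 7.0, 8.0};

    double *p = a;
    /* Undefined behavior: indices 4..7 walk past the end of a[]
     * in the hope of landing in b[]. Nothing guarantees that b is
     * laid out right after a, and the compiler is free to assume
     * this access never happens. */
    for (int i = 0; i < 8; i++)
        printf("%f\n", p[i]);

    printf("b[0] read properly: %f\n", b[0]);
    return 0;
}
```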
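And for scale: a usable singly linked list fits in a few dozen lines of plain C, with no 80 MB dependency anywhere in sight. A minimal sketch:

```c
#include <stdio.h>
#include <stdlib.h>

/* One node of a singly linked list holding an int payload. */
struct node {
    int value;
    struct node *next;
};

/* Push a value onto the front of the list; returns the new head. */
static struct node *push(struct node *head, int value) {
    struct node *n = malloc(sizeof *n);
    if (!n)
        return head;           /* allocation failed, keep old list */
    n->value = value;
    n->next = head;
    return n;
}

/* Free every node in the list. */
static void destroy(struct node *head) {
    while (head) {
        struct node *next = head->next;
        free(head);
        head = next;
    }
}

int main(void) {
    struct node *list = NULL;
    for (int i = 0; i < 5; i++)
        list = push(list, i);
    for (struct node *p = list; p; p = p->next)
        printf("%d ", p->value); /* prints: 4 3 2 1 0 */
    printf("\n");
    destroy(list);
    return 0;
}
```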
I'm not sure why his 'bright spots' list seems to include only new stuff, rather than old stuff that's still maintained and still pretty small: things like bash and vim. And, to a lesser extent (because they keep adding codecs), ffmpeg and mplayer still run well on my old hardware.
I don't know exactly what is inside mplayer, but using it to play YouTube saves a lot of CPU power for more battery time or computation. And by "a lot" I mean about 40% of a dual-core machine.
I'm trying to write a Perl data extractor, but even that is very slow compared to opening the file in Excel and copy-pasting the data...
I was writing such a thing for a proprietary simulation program (with no documentation): a binary file, meshes with hundreds of thousands of nodes, lots of tables encoded in the most bizarre ways. The most efficient approach was to load everything into memory and run a state machine over the data. At least it was faster than the original postprocessor.
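The skeleton of that approach is simple. The record layout below is invented purely for illustration (one tag byte followed by a 32-bit little-endian payload length), since the real format was proprietary and undocumented; the point is only the shape of the loop: slurp the whole file into memory, then let a cursor and a switch statement roll through it.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical record tags; the real format was undocumented. */
enum { TAG_NODE = 0x01, TAG_TABLE = 0x02, TAG_END = 0xFF };

static uint32_t read_u32le(const unsigned char *p) {
    return (uint32_t)p[0] | (uint32_t)p[1] << 8 |
           (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
}

int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }

    /* Slurp the whole file into memory. */
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);
    unsigned char *buf = malloc((size_t)size);
    if (!buf || fread(buf, 1, (size_t)size, f) != (size_t)size) {
        fprintf(stderr, "read failed\n");
        return 1;
    }
    fclose(f);

    /* Roll a cursor through the buffer with a simple state machine:
     * each record is <tag byte><u32 payload length><payload>. */
    size_t pos = 0;
    int done = 0;
    while (!done && pos + 5 <= (size_t)size) {
        unsigned char tag = buf[pos];
        uint32_t len = read_u32le(buf + pos + 1);
        pos += 5;
        if (pos + len > (size_t)size) break;   /* truncated record */

        switch (tag) {
        case TAG_NODE:
            /* parse mesh node coordinates from buf + pos, len bytes */
            printf("node record, %u bytes\n", len);
            break;
        case TAG_TABLE:
            /* decode one of the oddly encoded tables */
            printf("table record, %u bytes\n", len);
            break;
        case TAG_END:
            done = 1;
            break;
        default:
            printf("unknown tag 0x%02x, skipping %u bytes\n", tag, len);
            break;
        }
        pos += len;
    }
    free(buf);
    return 0;
}
```

Everything stays in one flat buffer, so there is no per-record I/O or allocation, which is usually where this kind of postprocessing loses its time.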