Software bloat musings: Software Disenchantment


FBnil

http://tonsky.me/blog/disenchantment/

I stumbled upon a nice article: it starts off a bit dry, but then it goes to many places and leaves you thinking. Sprinkled with some humor, it's a good read (especially the links here and there).
 
Are you insisting that this code runs slow? That's violating the CoC!
 
I'm not sure why his 'bright spots' list only seems to include new stuff, rather than old stuff that's still being maintained and is still pretty small. Things like bash and vim. And, to a lesser extent (because they keep adding codecs), ffmpeg and mplayer still run well on my old hardware.
 
Yet the mpv devs removed something like 70% of mplayer's code when forking it, without losing features. Just because it runs well doesn't mean it's not bloated.
 
M$ Office is nowadays gigabytes in size, but users do exactly the same things they did decades ago.
 
Things like bash
Do you know dash? It runs faster too if you have scripts with loops and such (a quick timing sketch follows the listing below). And ksh is faster than bash too, although you pay for it with startup time.
Code:
$ ls -l /bin/dash /bin/bash /bin/ksh93
-rwxr-xr-x 1 root root 1029624 Nov  5  2016 /bin/bash
-rwxr-xr-x 1 root root  125400 Nov  8  2014 /bin/dash
-rwxr-xr-x 1 root root 1504944 Jan  8  2013 /bin/ksh93
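
If you want to verify the loop-speed claim yourself, here is a quick stopwatch. A minimal sketch that assumes both shells are installed; Python is only used as a neutral timer, and the loop body and iteration count are arbitrary:
Code:
# time the same POSIX loop under bash and dash
import subprocess
import time

SCRIPT = 'i=0; while [ "$i" -lt 200000 ]; do i=$((i+1)); done'

for shell in ("bash", "dash"):
    start = time.perf_counter()
    subprocess.run([shell, "-c", SCRIPT], check=True)
    print(f"{shell}: {time.perf_counter() - start:.3f}s")
On most systems dash should get through the loop noticeably faster, at the cost of fewer interactive features.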

@Linux-SWAT Exactly. But so many extras have been added: for example, multi-user editing of a spreadsheet that fills your hard drive, or (video) collaboration.
Although I am impressed with the speed of Excel: it opens a 40 MB file in 3 seconds, while LibreOffice takes 30.
 
So LibreOffice probably has to go through all the reverse-engineered format handling.
ODT would be a fairer comparison.
 
reverse engineering
It's XML too, just... different. See, there is one file that says A1 contains reference 1 and A2 contains reference 2, and then another XML file that says the string with id 1 is this, and the one with id 2 is that...
The XML is also bloated with extra parameters.
LibreOffice is not happy because the file contains many tabs.
I'm trying to write a Perl data extractor, but even that is very slow compared to opening the file in Excel and copy-pasting the data...
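
To see that indirection for yourself, Python's standard library is enough to crack an .xlsx open. A minimal sketch, assuming a simple workbook where string cells carry t="s" plus an index into the shared-string table; the file name is made up, and rich-text cells and other cell types are ignored:
Code:
# resolve the two-file indirection: sheet cells -> sharedStrings.xml entries
import zipfile
import xml.etree.ElementTree as ET

NS = "{http://schemas.openxmlformats.org/spreadsheetml/2006/main}"

with zipfile.ZipFile("data.xlsx") as z:
    # first XML file: the shared-string table, one <si> per unique string
    shared = [si.findtext(f"{NS}t") or "" for si in
              ET.fromstring(z.read("xl/sharedStrings.xml"))]
    # second XML file: the sheet, whose string cells store only an index
    sheet = ET.fromstring(z.read("xl/worksheets/sheet1.xml"))
    for cell in sheet.iter(f"{NS}c"):
        value = cell.findtext(f"{NS}v")
        if cell.get("t") == "s" and value is not None:
            value = shared[int(value)]      # dereference the shared string
        print(cell.get("r"), value)         # e.g. A1 <resolved value>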
 
This is linked from the OP's article, but I thought it was funny/interesting.


https://medium.com/@jdan/i-peeked-into-my-node-modules-directory-and-you-wont-believe-what-happened-next-b89f63d21558
 
This is linked from the OP's article, but I thought it was funny/interesting.
when a rouge developer
I'm interested in seeing a rouge developer, are they sunburnt? :-D
Jokes aside, I think it's shameful to call him a rogue because he removed his own code from a public repo.
 
I'm interested in seeing a rouge developer, are they sunburnt? :-D
Jokes aside, I think it's shameful to call him a rogue because he removed his own code from a public repo.
I think it's a joke. He mentions "tens of projects".
 
Ah, okay. To be fair, I didn't read his article, just the headline that shows here. But I do remember the incident from El Reg articles, which apparently upset a lot of people.
 
The writer makes lots of errors ("conclusino"), presumably since that's how kids write nowadays. Interesting points; I wonder if they still stand (e.g. glimmerjs including all of the Britannica just to include the definition of "glimmer" in their help file).
 
There is a problem. But it seems to stem mostly from two factors:
1. Business wants software now, written ASAP rather than optimized, on the assumption that computers have almost infinite power. That may even work, but only as long as a single badly written program is running at a time. This is now the development method not just of some bloated CAD package, but of whole operating systems. The trend then spills over into open source, which was always under-optimized because most devs have computers much faster than average. The real problem will start when they try to fix security with forced virtualization instead of firing the **** who notoriously indexes a C array by running a pointer past another array's bounds. Spotted in a scientific package that costs about 8K € per user.
2. Most third-year Computer Science students I have met have no idea about elementary computer architecture, registers, assembly, or memory access methods, but hey, they can program in C# or JS. To use a singly linked list, they will pull an 80 MB library into the project (a few lines of plain code would do; see the sketch after this list).
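
Just to make the point concrete, here is roughly what such a dependency buys you. A minimal sketch in Python, chosen only to keep the example dependency-free, not claiming it's what those students use:
Code:
# a singly (one-directional) linked list in a dozen lines, no libraries needed
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class LinkedList:
    def __init__(self):
        self.head = None

    def push(self, value):          # O(1) insert at the front
        self.head = Node(value, self.head)

    def __iter__(self):             # walk the chain from front to back
        node = self.head
        while node is not None:
            yield node.value
            node = node.next

items = LinkedList()
for v in (1, 2, 3):
    items.push(v)
print(list(items))                  # prints [3, 2, 1]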

I'm not sure why his 'bright spots' list only seems to include new stuff, rather than old stuff that's still being maintained and is still pretty small. Things like bash and vim. And, to a lesser extent (because they keep adding codecs), ffmpeg and mplayer still run well on my old hardware.
I don't know exactly what is inside mplayer, but using it to play YouTube saves lots of CPU power, for more battery time or computations. And by "lots" I mean about 40% of a dual-core machine.

I'm trying to write a Perl data extractor, but even that is very slow compared to opening the file in Excel and copy-pasting the data...
I was writing such a thing for a proprietary simulation program (without documentation). Binary file, meshes with hundreds of thousands of nodes, lots of tables encoded in the most bizarre ways. The most efficient way was to load everything into memory and run a state machine over the data. At least it was faster than the original postprocessor.
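The pattern is simple enough to sketch; the record layout below is entirely invented, since the real format was proprietary and undocumented:
Code:
# illustrative only: slurp the file, then walk an offset through the buffer
# with a tiny state machine; the header/record layout here is made up
import struct

with open("results.bin", "rb") as f:        # hypothetical file name
    buf = f.read()                          # load everything into memory once

pos, state, tables = 0, "header", []
while pos < len(buf):
    if state == "header":
        (count,) = struct.unpack_from("<I", buf, pos)   # pretend: table length
        pos += 4
        state = "table"
    else:  # "table": fixed-size records, one little-endian double per node
        tables.append(struct.unpack_from(f"<{count}d", buf, pos))
        pos += 8 * count
        state = "header"                    # expect the next table header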
 
I don't know exactly what is inside mplayer, but using it to play YouTube saves lots of CPU power, for more battery time or computations. And by "lots" I mean about 40% of a dual-core machine.
Yes, that's very true. I use it on some of my oldest computers to let them play fullscreen videos downloaded from YouTube, which works much better than using Flash video. I also use a more modern machine to transcode videos to the screen resolution of the slower machine, so that it can avoid having to scale anything, and play them over the local network using sshfs. I've been able to script ffprobe and ffmpeg to work out the aspect ratio of supplied videos and transcode them to fit inside the target resolution, so I can control it all over an SSH connection from my slower, more portable machines.
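
That glue is short enough to sketch. A minimal version, assuming Python for the scripting; the file names and the 1024x600 target screen are placeholders, but the ffprobe/ffmpeg invocations are standard:
Code:
# probe the source resolution, then transcode to fit a target screen
import json
import subprocess

SRC, DST, TW, TH = "in.mp4", "out.mp4", 1024, 600   # hypothetical values

probe = subprocess.run(
    ["ffprobe", "-v", "error", "-select_streams", "v:0",
     "-show_entries", "stream=width,height", "-of", "json", SRC],
    capture_output=True, text=True, check=True)
stream = json.loads(probe.stdout)["streams"][0]
w, h = stream["width"], stream["height"]

# fit inside the target while preserving aspect ratio; keep dimensions even
factor = min(TW / w, TH / h)
nw, nh = int(w * factor) // 2 * 2, int(h * factor) // 2 * 2

subprocess.run(["ffmpeg", "-i", SRC, "-vf", f"scale={nw}:{nh}", DST],
               check=True)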

ffmpeg could still be faster at transcoding WebM videos; it's appreciably close to real-time when handling MP4 videos for me, but the same conversions on WebM run many times slower. As I understand it, though, that's not because the codec is encumbered: VP8/VP9 are open, royalty-free codecs with public specs; the libvpx encoder is just far more CPU-hungry than x264.
 
This is linked from the OP's article, but I thought it was funny/interesting.


https://medium.com/@jdan/i-peeked-into-my-node-modules-directory-and-you-wont-believe-what-happened-next-b89f63d21558

Luckily, I don't want to do web development, but C++.
 