The inefficiency of computing platforms is like traffic congestion: both expand until they are just barely bearable.

If a computer is too slow, nobody will use it; if it's only irritatingly slow, some will use it; and if it's slow but still bearable, everybody grumbles but keeps using it. Building programs with higher-level languages and big, high-level frameworks saves on development cost but increases the cost of using the system. The result is that the majority of programs will always be optimized only to the point where they are barely usable.

Similarly, traffic is always as congested as the people on the roads can bear: those who really hate congestion, or would rather make their trip at another time, drop out, but they are replaced by people willing to pay the time tax. When space frees up on the highway or arterial road, some people figure out it's now worth the trouble and head out.

This means you can't make the user experience faster by getting faster computers, and you can't get rid of congestion by building more roads and lanes.

But there's more.

Because a large mass of congested traffic is harder to separate and channel into exits, congestion on large roads tends to be worse. If you have a village main street with one lane each way and that road gets congested, that's much less total congestion, area-wise, and it will clear up pretty quickly once the head of the queue is able to exit somewhere. The same goes for a grid of 1+1-lane streets, like in a small town. Because a small road congests quickly, it will only swallow a limited number of cars, which bounds the maximum congested area. Not so with a 10+10-lane highway.

The same goes for computer programs. In the early days, when everything was simpler, slow programs could be made faster with sufficient effort. A program might still struggle in some parts, but you could at least invest the effort to streamline the core operations and make the worst things fast enough.

In contrast, consider an enterprise Java program that depends on dozens or hundreds of libraries and amounts to a running image of half a gigabyte to a gigabyte. It takes ten seconds to start, and opening each window takes seconds, because there's slack everywhere. Everything runs on top of something else, and even simple operations spend seconds of wall-clock time running mundane pieces of code. The computer that's fast enough to run the program in the first place also has the computing power to run a lot of crap along with it, so the amount of crap grows as the computing power grows, and getting rid of it becomes an insurmountable task. It becomes close to impossible to make such a big program run fast, because you would have to shrink everything between the application and the hardware to half, or a tenth, of its size to gain any real speed. Coincidentally, old computer systems often had lower user interface latency and responded to inputs very promptly: they could do the cheap things really fast, even if the overall lack of computing power still made the programs spend seconds on heavy processing.



There's an element of truth to what you say, but I think software efficiency will grow in importance over time, especially once pushing past the limits of Moore's law becomes too expensive.



