The Flub Paradox (steved-imaginaryreal.blogspot.com)
98 points by iopq on June 8, 2015 | 74 comments


I just got back from a very positive strategy meeting with my partner (I do the code and he does the ops and the sales) and this essay struck a chord with me. Why are we making so much progress when I'm a decidedly average programmer? I think it's because I'm a decidedly average programmer in enough languages to know what their strengths and weaknesses are and thus to be able to pick the right tool for the job:

- hardware-accelerated video decoding on the ARM - C obviously, with a small amount of ASM.

- web server - Python using web2py

- web interface prettiness - jquery and javascript

- server processes and admin automation - Python

- core apps - LuaJIT because it's so fast on the very limited ARM boxes we're using

- customer data munging and reports - emacs lisp because it's right there

- desktop apps for Windows, OSX and Linux - Python again

- some bash and Perl for sysadmin stuff

What Steve doesn't say - although it is implied - is that he knows enough languages at a practical level to make these kinds of judgement calls. And I know for a fact he's a far better coder than I am because I use his Penlight library for Lua every day.

Perhaps Flub can never be a force for change the way Lisp was because modern languages are incorporating enough features from Lisp to bring them close enough to the top of the hierarchy not to matter as much as before.


That's a nice kitchen sink full of languages.


Well, but the thing is, productivity is not only about power of the language, but several factors, including proficiency in the given languages, available libraries, ease of getting developers in the given language, etc.

And of course, on the article "A disruptive startup is more likely to use Blub in creative ways, focusing on the idea, not the implementation. Facebook dominated its market - using PHP - which everyone agrees is crap. Google built up the world's biggest advertising agency using Java and C++; Android is a Java-like ecosystem that runs in most pockets these days."

Personally I don't care (too much) about using the fanciest language, but rather something a little more mature and easy to use.

Your startup is not going to make or break, in the vast majority of cases, because your language supports macros. And even then, you can build a service that encapsulates that part and write everything else in your "normal" language.

"The vitriol surrounding JavaScript is interesting and revealing"

Absolutely. Everybody hates it, but there isn't a right answer: it got popular (because of its power) and it works. Every language is going to have something people will hate.


Javascript didn't get popular because of "its power," it got popular because it was the only option* and because of inertia (all legacy scripts on the web were/are in Javascript, because it was/is the only option).

Edit: * Yes, I'm aware of VBScript, Flash, Java applets and ActiveX, and I'm aware they were all worse than JavaScript. Doesn't then follow that JS is good, or that it won on a level playing field. Frankly, it says something that for all its many faults Flash was often the superior option until the late 2000s.


> Doesn't then follow that JS is good

Of course not, and, sure, Flash was a better alternative for a long time (though mostly for things outside JS's scope, like what HTML5 capabilities now cover)

C is not great either, but between something good that doesn't do what is needed and something bad that does, guess which one gets used in the end.


I knew C; C was a friend of mine. Javascript, you're no C!

When it appeared, C was a revolution in programming languages. It offered a combination of performance and expressiveness that simply outclassed anything else available (still true today, 43 years later, if more narrowly). It instantly became the most popular language and has held that position to this day, becoming a major influence on virtually every other popular programming language that's come since.

Now, of course, C is much better suited for, say, writing an operating system than it is for writing security-intensive internet services. (A concept which did not exist in 1972). But, all things considered, it was as close to perfection as a new programming language has ever come at the time of its introduction.

JavaScript is not even in the same realm; much superior existing scripting languages (Python, Ruby, even Perl or some Lisp dialects) were passed over in favor of a deliberately half-assed effort to advertise Java with a language that shared a name and a superficially similar syntax, and which (almost accidentally) became the only VM that web browsers implement.


> much superior, existing scripting languages (Python, Ruby, even Perl or some Lisp dialects) were passed over

Python existed in 1995, but I doubt it was very well known at the time.

Ruby was created in 1995.

Lisp is older, sure, but maybe there were concerns about the syntax and interpreter requirements?

"But, all things considered, it was as close to perfection"

Pascal didn't have a braindead way of dealing with strings. C's syntax is weird and prone to bugs, "if (a = 1)" being one example.


Yes, C's string handling is pretty crappy (especially given how important a data type strings are in Unix). And its operator precedence rules aren't always the best. But just like 10 million Elvis fans can't be wrong, neither can a language that is to programming what Elvis, the Beatles, and Michael Jackson put together were to music be anything other than a triumph of engineering.

Better string handling or not, just as surely as video killed the radio star at about the same time, C killed the Pascal programmer. And the FORTRAN programmer, and COBOL, and BASIC, and assembler, and pre-1972 Lisp, and all the other old programming languages that are nowadays too obscure for me even to list. And operator precedence or not, every non-Lisp programming language of any popularity today apes C's syntax, not Ada or BASIC or somesuch.

I'm not saying C is perfect. I don't even particularly like coding in C; I'm just saying that it was a huge breakthrough in programming language design at the time it came out, and was overall the best-designed language to that point (which, Lisp fanatics and Donald Knuth aside, I think most would agree with). Certainly not something you could ever say about JavaScript with a straight face.

My point re scripting languages was that if Netscape had wanted to be on the bleeding edge, they still had much better choices. JavaScript is a programming language created in one week, where one of the major design considerations was to be an advertisement for a different programming language, and it shows. It certainly showed in 1995, before we had 20 years, billions of dollars and millions of man-hours invested by all the major companies in optimizing VMs, improving the standard, creating libraries and all the other miscellaneous forms of putting lipstick on a pig.


>Lisp is older, sure, but maybe there were concerns about the syntax and interpreter requirements

As I understand it, Brendan Eich originally spent a lot of effort on a Scheme implementation to serve the browser-scripting role for Netscape/Mozilla, but couldn't get it done in time and hacked up Javascript in a hurry to prevent an even worse alternative from becoming the de facto standard.


C string functions lets me copy an image from a memory position to the video controller for displaying, or to the disk for storage. Pascal's ones don't.

Calling it "better" is completely missing the point. Nobody would have written UNIX in Pascal, for obvious reasons.


Agreed with most of the above, with the caveat that I actually like JavaScript. Sure, I have grumbled about JavaScript more than about various niche languages that are in some ways more mathematically elegant, but that's for the same reason I have grumbled about C, C++ and Python - they are all languages that get enough of the important things right that I've ended up actually using them enough to run into the rough edges.


Those are very keen observations and not commonly voiced in that combination. I can't remember the last time I read an article about the development of PLs that I agreed so much with.

If I had to venture a guess about future development in PLs, there would only be one pattern that I find reliable enough: from a rough bird's-eye perspective, PLs have moved from hardware-specific things handling mutable state to more formally defined beasts fed by mathematical concepts. That's one trend I expect to continue, but then also not ad infinitum. In my view, Haskell is a good example of how you can overdo it, with the best intentions: concepts that are beautiful and frictionless in pure math become a cumbersome burden and an impractical cognitive load when translated too directly into a PL. I want more math in PLs, but I feel we haven't found a good balance yet. Maybe this turns out to be a moving target, too.

There are two languages I expect to keep their (niche) places for the foreseeable future: C and Lisp. Both are such a peculiar mix of properties that they are incredibly hard to beat in their respective domains. Maybe the same goes for Ada and Java, I wouldn't want to do any injustice :)


There are a couple of things I disagree with.

First, you are lumping together trends in academia with trends in industry, and the two are not always in tandem, though they certainly influence one another. For example, Go, a language that certainly goes against this trend, is gaining popularity much faster than any academic or academically influenced language (of course, it's easy to grow fast when you're small, but the more academically inspired languages are just as small). This mixing of domains, which in reality are quite complex, is also apparent in your examples. C is one of the most popular languages in use, and has been for a few decades; Lisps have been around even longer and have never broken beyond, say, 1% of production projects (except maybe for a very short while in the eighties).

Second, you say "I want more math in PLs" when referencing Haskell. While it is true that pure-FP, because of its relationship to some types of logic, lets you prove some properties about your code in the code itself, that does not make it any more mathematical than other languages and other approaches. For example (and I give this example often, but only because I'm really impressed by that language), Esterel, a language designed in the eighties, is an imperative language with lots and lots of global side effects, yet it allows you to verify its correctness absolutely, and you don't even need to write the proof yourself (it verifies temporal properties that are especially hard to prove using PFP). Esterel and languages (mostly graphical languages) inspired by it have been used by the industry orders of magnitude more than PFP languages for domains where correctness is crucial. That PFP allows you to write some proofs in code does not mean that there aren't better ways to verify programs mathematically. PFP is the answer if what you're asking for is more second-order logic in your PL; it's just one of many if you're just asking for more math (in fact, it's just one of many even if you want more interesting type systems).

Also, it's hard to trace influences, but I believe that the focus on immutability that is finding its way into more and more languages is actually inspired by Erlang and later Clojure, two languages that have little to do with PFP, and a lot to do with the design of multi-core and distributed systems. It was not the current academic trend of pursuing the Curry-Howard correspondence as a software verification technique, but an industry solution for the problem of handling concurrency.


For those who want to see what Esterel is all about, Céu is a modern language inspired by it (or at least its synchronous concurrency approach).

http://ceu-lang.org/


> Haskell is a good example of how you can overdo it, with the best intentions: concepts that are beautiful and frictionless in pure math become a cumbersome burden and an impractical cognitive load when translated too directly into a PL.

Why do you say this?

Sure, Haskell is radically different from most programming languages out there, even ones that claim to be 'functional'. But that does not mean it is 'overdone'.

With traditional imperative languages, you have years of experience behind you. Anytime you see a problem, you have those years of experience to guide you in choosing an appropriate way of solving it. All those idioms that you learn over decades of programming add up.

The problem with Haskell is that because it is so radically different, all that experience is largely useless. You start again from scratch. Remember the first time you learnt how to drive: it was confusing, and you paid close attention to every action and every turn. Eventually, with experience, it becomes instinctive and you don't even think about it under normal circumstances. Now, imagine that you wake up tomorrow with all of that instinct gone. It would be utterly disorienting and scary.

Now, time for an anecdote: I've spent the bulk of the past one and a half years learning Haskell, with a couple of years of experience with other, partly 'functional' languages like Scheme and Python. In the beginning, I was extremely confused, and whenever I came across a problem, I had no idea of how to express myself in Haskell. This was completely alien to me. No other language I had come across had left me so lost.

However, in a few months, I came to learn functional idioms and began to grasp the language. What really appealed to me was how expressive the language was once you knew how to "speak" it. To become 'fluent' in anything, you have to use it and think in it repeatedly. Eventually, it becomes second nature.

Because so many things in Haskell are based on solid mathematical foundations, it is extremely easy to generalize and, most importantly, compose. Haskellers tend to draw a lot of inspiration from category theory (which I've also taken a liking to). The essence of the concept of "composition" is a category. It often astounds me how simple, generic functions and combinators can be strung together to form complex programs. I haven't come across this level of generality and composability in any other programming language.

Programming in Haskell, once you learn to "speak" it, reduces cognitive burden by a huge margin. Often, you deal with concepts so general there is only one way to implement them. You don't have to care about details of the underlying representation, because the range of possible representations is so vast. This is initially extremely disconcerting, and is probably the reason why people face so many problems with monads. The concept is so general, and has no single "concrete representation" that would make you grok it. When you ask "what is a monad?", people answer in a vast variety of ways. In truth, the only correct way to describe a monad is the monad laws. Anything that follows the monad laws is a monad. Those three laws are it. Newcomers are often astounded by that. How can that be it? The laws don't really say anything! They can't say much, because they describe a vast number of things, from IO to exception handling to parallelism to nondeterminism. When you write code for a generic monad, you don't care about any of those details, or about what the monad you are dealing with actually is. All that cognitive burden is thrown away. In the beginning, it seems scary, but after a while I realised how liberating it actually was.


> You start again from scratch.

You describe my situation with Haskell quite exactly, and I agree you have good points. The non-monad parts of Haskell are, after all, the most beautiful PL I've ever seen, and that puts me between a rock and a hard place.

My difficulty with monads comes from the very reason why they're so powerful: You can do anything with them, but at the cost of the monadic form taking center stage and the original problem's form getting almost lost, scattered across the functions implementing the monad laws. Maybe that's not so bad, but I haven't adjusted my thought process to that yet.


For me, monads are just as beautiful as any other part of the language. Like almost everything new, you just have to work with them for some time, until you build intuition for the underlying mathematical object and everything just "clicks".


> In my view, Haskell is a good example of how you can overdo it, with the best intentions: concepts that are beautiful and frictionless in pure math become a cumbersome burden and an impractical cognitive load when translated too directly into a PL.

Can you give some examples of concepts that are a cumbersome burden and cause impractical cognitive load in Haskell? Perhaps some example code?


Partially, I can explain by quoting myself from another answer:

My difficulty with monads comes from the very reason why they're so powerful: You can do anything with them, but at the cost of the monadic form taking center stage and the original problem's form getting almost lost, scattered across the functions implementing the monad laws. Maybe that's not so bad, but I haven't adjusted my thought process to that yet.

Or try to temporarily throw some debug logging into a deeply nested function that doesn't have the IO monad in its return type. Maybe that's the wrong type of thinking for this language, but there doesn't seem to be a “howto” on what to do instead...


Haskell isn't "overdone". It's extremely principled and requires that, for example, you put the existence of stateful effects (e.g. I/O) in the type system. It requires you to specify state (to a degree, and with escape hatches); it doesn't eliminate it.

Haskell isn't that much of an investment when you consider what is gained. See, learning any new code base takes 3-6 months, and if the code is great, you learn a lot. If the code is garbage, well... perhaps you learn what not to do. Compared to the slog of learning a new corporate codebase, the effort involved in learning Haskell isn't that much... and the hope with Haskell is that the next software project you write won't be unmanageable... and I think that even average-case Haskell realizes it. (I focus on the average case because, of course, you can write shitty code in any language and you can write good code in any language, but languages differ in what tends to happen on average.)


I would be interested in your thoughts here because Haskell has always seemed, theoretically, a great fit for me but it has never panned out in practice.

I believe you and I have a common-enough mental framework (theoretical-leaning applied math Ph.D., happy to hold forth on e.g. algebraic geometry, information theory, nonlinear dynamical systems etc.). I look at Haskell and it certainly feels clean and beautiful like all the mathematics that I love. But then I try to use it and, objectively, I get less done (even though I knew what an endofunctor was before even seeing Haskell!).

My longest stretch of trying to use Haskell every day was about 3 months at which point I was still substantially less productive than my preferred mix of C/Python/Lua (I mainly do heavy-lifting numerical work coupled with low-latency server designs). I also feel like I get more done in OCaml.

So, you being someone whom I identify as similar in skills to myself, I ask you: what gives? :-)


I've never taken a calculus class and I'm very productive in Haskell. Maybe knowing all of that applied math, and knowing that a better solution is available in Haskell, keeps you from being productive?

I take the route of solving problems the first (or second) way I come up with, then take advantage of how easy it is to refactor Haskell code when I have a better solution later.

For instance I didn't understand Monads too well while writing my first Haskell program and just did everything in continuation passing style.


Well, depending on your problems, Haskell may simply not be the correct answer. It shines at complex behavior, for example, but "numerical work" normally means simple behavior (highly optimized), and I'd be wary of it in low-latency applications.

Haskell is powerful, but it won't solve all the problems in the world.


This is an excellent point and maybe you gave me a good compact summary of the situation. Perhaps you can tell me if you agree.

The kind of work I do tends to involve a small number of low-variability data types, and the ROI on complex logic (w.r.t. computational performance) has to be very high for it to be accepted into a project. We don't shy away from complex logic, but we do have a high hurdle for it to cross before it becomes a net win.

In contrast, Haskell is perhaps best for large numbers of high-variability datatypes where complex logic is inevitable, regardless of performance costs. And it helps precisely by encoding as much of the complex logic (e.g. safety rules) into the variability and expressiveness of the type system.

I guess all that may be obvious, in hindsight. But I think your comment helped me crystallize the distinction. So, thanks :)


That's what I was trying to say, put much more precisely.

Yet, I'd look at codygman's answer. It's a library that tries to improve memory locality by using more complex logic, and hides everything under Haskell abstractions so it's easy to use.

I'm not currently working on this area, so I can't really tell what it's good for.


> Well, depending on your problems, Haskell may simply not be the correct answer. It shines complex behavior for example, but "numerical work" normally means simple behavior (but highly optimized)

To give a rebuttal to Haskell not being good for "numerical work", you might be interested in this pre-alpha numerics library:

https://github.com/wellposed/numerical#performance-faq

I'm tempted to agree with you re: low latency, then I run across stuff like this:

https://github.com/tomahawkins/atom

I believe it (or a similar library) is used in quite a few of a certain company's (or several companies') flagship applications, though I can't recall the details.


Haskell took me a while, because there weren't many great resources on how to write many kinds of "real" code in Haskell. The quality of material is getting steadily better.

I think you've got a good 6-12 months before Haskell is more productive for you than OCaml or a C/Python/Lua stack. If you're coming from Java, you're more productive in Haskell almost instantaneously, before you really even know it (you still know very little Haskell, but you're pwning your former Java-toting self)... but in your case, you already know quite a few high-level languages, so it's not surprising that you get less done in a language that forces you to contend with monads (which aren't as hard as they're made out to be, but they're one more new concept) just to have state. If you're coming from Python, you're less productive day-by-day because the compiler keeps burning you, but you find long-term development going faster due to fewer code breakages. If you're coming from OCaml... the short answer is that they're both great languages with more in common than not (although I prefer Haskell), so it's not surprising that it takes a while to get up to your prior speed (because your prior speed is so much higher than that of someone coming from Java).

Real World Haskell is a good book, though a bit slow and dated. Learn You a Haskell is a decent intro book but it has some gaps. I'm working on a Summer Haskell Course at my company and will be publishing the slides. And Chris Allen recommends the CS 194 course that Penn offers (you can find it online). Resources are out there.


> I think you've got a good 6-12 months before Haskell is more productive for you than OCaml or a C/Python/Lua stack.

Though merely anecdotal, that data point eases my mind a bit. I come from Assembler, Lisp, C, OCaml, so maybe my difficulties are par for the course.


Haskell is a research language with lazy evaluation, and with a lot of fun type-stuff. I guess you could make a language which tries to be immediately pragmatic, but Haskell instead concentrates on a relatively small area and sees how far it can push those concepts.

Haskell is just one data point. Are PLs moving in one uniform direction, upwards to some kind of "ideal" state? Arguably not, until we manage to reconcile all the myriad aspects of programming languages, the problems they try to solve, and the constraints they work within (such as those pesky limited digital computers).

Is "math" some formalism that is ideally totally divorced from hardware? Then what about formalisms that try to model some kind of computation? What about linear logic and being able to model the disciplined handling of resources such as memory and handles? Is that incompatible with pure functional programming? Not really, since effectful functional programming is a thing, and an effect that is encapsulated (such as mutating a local variable inside a pure function) isn't an effect from the view of the caller of the pure function. What about using types to easily[1] declare the layout of memory, such as using SoA over AoS? I'm sure I've seen some paper on that.

[1] Or at least more easily and less ad-hoc than what you typically would do in C/++.


> Haskell is a research language

The language standard is from 1998 and is mostly unchanged since then. We now have Haskell 2010 and quite a few compiler extensions, but none of the experimental ones are necessary for production use. The researchy ones have their benefits, but they also make it pretty clear that they are not to be toyed with lightly in production code.


The description of Rust in this blog post is not accurate in my opinion.

1. Compile times aren't great right now, but 20s for 2400loc is a real anomaly, and improving compile times is a focus of the team now that the language is stable. This is the only specific pain point the blog post mentions with Rust, and it's just a single anecdote.

2. The idea that Rust is a premature optimization is a weird meme. Faster, safer, more expressive: Rust is at least two of the three compared with pretty much every 'mainstream' language. I would rather write Rust than any other language I know, and I would be more productive in it.

3. Rust is not a Flub language struggling to overcome a niche - not yet. Rust is a language that became stable 3 weeks ago and time has not yet told whether it will be 'mainstream' or not.

Rust has a memory model that includes some degree of manual management, like C (instead of malloc/free and pointer arithmetic, you have ownership and references). This makes it a poor first language, like C. But no one says C is a language for the 'elite' because you have to think about the memory model.


    The idea that Rust is a premature optimization is a weird meme.
I don't think the sentence you're referring to is really about Rust per se, but rather the author's feelings about manual memory management. Rust just happened to be the language being discussed at the time:

"To use Rust for writing programs that can afford the overhead of garbage collection would be premature optimization."

The idea being that manual memory management is such a productivity black hole that the lack of GC in a language dominates any 'Flubby' features the language provides.


It was perhaps unfair to single Rust out given that it is so young, but there are costs to any innovation, and the first cost that leaped out at me was stories of long compile times. Besides, extraordinary claims require extraordinary evidence. Personal anecdotes of increased productivity remain anecdotal; it may be more a property of the programmer than the language. I don't personally have a horse in the race, but pointing out the costs is necessary.


I really don't expect Rust to exhibit what most people mean by "increased productivity", ever. Most people mean something fairly local by that, or at most medium sized, that it allows them to take some task and write code faster than other languages that accomplishes the task. Rust is always going to labor under the disadvantage that it holds you to a higher degree of correctness than other languages, and at best it can tie with another language that doesn't hold you to those guarantees of correctness.

Where it'll increase productivity or not is at larger scales. If Servo comes out at production quality in some reasonable time (call it 2 years or less from now, given where they are), able to function on the general web most of the time, and it's also faster than current renderers by a significant margin, and with some small fraction of the development effort (probably even if you include the entire Rust development effort; do not underestimate the effort poured into the existing engines!), I will consider Rust's story proved. We've now had many people take cracks at writing renderers, in the fastest language of the time for the task (C++), and I daresay they've converged at fairly similar levels of performance with little reason to believe the renderers are going to get much faster. Said renderers have had massive optimizations applied to them, and this Servo that I'm expecting to be much faster will actually be version 1.0, basically.

If Rust is to work at all, there will very, very rapidly be a class of development that it will be actually insane to write in anything but Rust... but that will only be the very high-and-large end. (It will take many more years for this fact to penetrate through to the development community at large.)


The standards one holds emerging technologies to tends to be proportional to how much one personally dislikes them. (and of course the corollary vice versa)


Three observations, of the smallest fine-tuning variety (it's pretty good as is):

The essay fluidly rotates between two points of view best expressed by two direct quotes: "Note: this is about tracking fashion, not a comment on the actual virtues of these languages." and "getting things done". I don't think those are synonyms or have much to do with each other, so the close intermixing is a bit jarring. Often enough they oppose each other. They don't get along, so it's hard to use them as supporting arguments even if true.

Another observation is hard to quote but boils down to the idea that languages and paradigms are synonyms, which I disagree with. A language is a pile of paradigms wrapped up in some syntax to formalize it. You can write FORTRAN or Perl in any language. Or functional, at least somewhat. It's easier to write worse programs than the language and very hard to write better programs than the level of the language. In that way Lisp is the master language above all the others: it can express seemingly anything, so if everything is beneath it, it's above everything. The third paragraph is closest to what I'm trying to express and it comes very close to hitting the mark.

It would be interesting to see logic programming in a safety critical embedded space. I'm not sure if it would be "good" interesting or "train wreck" interesting. That would be an interesting suggestion for the second to last paragraph. If you define safety as solving a full set of safety constraints, then automating it would look like ...


> The more of an IT flavor the job descriptions had, the less dangerous the company was. The safest kind were the ones that wanted Oracle experience. You never had to worry about those. You were also safe if they said they wanted C++ or Java developers. If they wanted Perl or Python programmers, that would be a bit frightening-- that's starting to sound like a company where the technical side, at least, is run by real hackers. If I had ever seen a job posting looking for Lisp hackers, I would have been really worried.

PG wrote that in 2003.

Nowadays, what would be the most frightening languages?


It is not the languages themselves that are scary; it is the people using them. So if the competition is looking for, say, Erlang programmers for some kind of messaging app, then you should be worried. If they use Erlang for big files, you should be less worried, because they will need to solve a few more limitations in Erlang/OTP to make that work well.


Scala, Rust, Haskell, JavaScript, VHDL, ASM and PHP :P

If joke languages count then Brainfuck and BS[1]

[1]https://skillsmatter.com/skillscasts/6088-the-worst-programm...


The author holds up Yesod's low adoption rates (compared to what? Ruby on Rails? Ruby was arguably Flub a few years ago, and Yesod is much younger than Rails) as an example of how powerful programming languages aren't making as big a dent as they should be. But the thing is that Yesod is a CRUD web app framework, and although Haskell is quite capable at CRUD apps, they're not where it really shines. Look at the adoption rates of Haskell (and OCaml and F#) in the financial sector and you'll see "Flub" kicking some serious ass.


>Clojure is an attempt at re-Flubbing Lisp.

And it's a successful attempt - unless you require top tier adoption to qualify.

>Where is Flub being used as a secret weapon, in the way described by Paul Graham? Why aren't its practitioners exploiting their natural advantage?

Clojure is a good candidate for Flub not only because of its power and expressivity (and, as a Lisp, its natural fit as a PG-style Flub), but because it is being exploited to advantage by those who've discovered it.

A small sampling includes: Adzerk, Beanstalk, CircleCI, Cognitect, Consumer Reports, Daily Mail, Database Labs, Democracy Works, Deutsche Bank, eBay, Factual, FarmLogs, Groupon, Heroku, Intuit, LivingSocial, ViaSat, MixRadio, Netflix, Nubank, OpenTable, Pivotal, Precursor, Prismatic, Puppet Labs, RoomKey, Sonian, Soundcloud, Spotify, Staples, Teradata, ThoughtWorks, Cisco, Two Sigma, Upworthy, uSwitch, World Singles, Innovation Labs, Zendesk, ... (sorry if I left out your awesome Clojure-fueled project/company, but the list is too long)

The Clojure experience at these companies seems to mostly fall somewhere between gushingly effusive and merely enthusiastic. Sure, there's some self-selection, and it's hard to separate developer talent from language choice in those successes, but read a few case studies and you start to get the impression that Clojure itself is not only an advantage, but a significant one. That's not to say that Clojure is the only good candidate for flub; there are other powerful, productive, and pragmatic languages that can deliver high productivity in the right context. But if I had to pick one language to be dubbed Flub, it would be Clojure.


> Is BASIC more powerful than Assembly? I doubt it, you can do anything in assembly, and BASIC is deliberately limited [4].

Doesn't this definition make the "power" of all Turing-complete languages equal?

> In other words, Lisp was the Flub of its day. A few years ago Flub was Haskell, and recently the new contender appears to be Rust, judging from the buzz.

If the author thinks Haskell and, of all languages, Rust are most powerful, I think he has significantly misunderstood Graham's original essay.

Among other things, they are statically checked, with relatively difficult types that can take a long time to get right. That is obviously not the kind of language Graham had in mind for agilely prototyping around corporate behemoths.

The author seems to define Flub as "the most popular language among the elite". Lisp was rarely even that, however, and we also have painfully obvious evidence of the most popular not being the most powerful. As such, I think a large portion of this essay hinges on a strawman: he dismantles a definition of Flub that Graham never used.

> The Flub Paradox can be stated like this: although Flub is self-evidently more powerful and productive than Blub, relatively few people use it and its effect on the real landscape of innovation appears minimal.

Is this really a paradox anymore? We have so many examples of inferior products dominating through inertia that I've completely lost my faith in humanity at this point.

I could add a few more points, but I think that would only be unnecessarily negative. I do like some points from the essay. I agree that we should take another look at Paul Graham's power continuum: for instance, the power of Lisp scales with lines of code (with low LOC, macros don't really enter the equation, and using a "batteries included" language with comprehensive libraries is probably faster). Haskell is easier than many agile languages to refactor, another important dimension to consider. And so on.


My mistake was to do exactly what PG did not do; name names. By Flub I mean any new language that its users would like to see displacing Blub. I mentioned Haskell and Rust as _examples_ of languages which have gone through popularity surges, and remain curious why they haven't made more of an impact, by PG's original logic: pick the most powerful language you know.


I don't think naming names was a mistake; rather, I think the names demonstrate that your thesis doesn't have anything to do with "Beating the Averages".

I mean, by your definition, Flub is literally every new language, modulo a few that are too domain-specific. Hell, Flub need not even be a powerful language, just a language with crackpot users.

So, rather than being about any averages, it appears to me that your point is just to not-so-subtly poke at anyone who thinks any language is better than the status quo (if so, then why hasn't it made an impact?!).

I don't see that as being a difficult question in general; there are many real-world problems, such as technological inertia and corporate culture, that already skew the chances of any new language.

However, in this case, where you've broadly directed this question at, well, everyone, I'm not sure if an answer would do you any good. What is true for Haskell may not be true for Rust (which is way too new to have made a big impact), which may not be true for any given crackpot language, which, to avoid offending anyone, I'll call Crud.

Aside from ruffling feathers, was there any point to this?


Every day I work with C++, and I've learned not to shoot myself in the foot. If it were the pinnacle of PL development, I would be depressed; there has to be a successor at some point (data point: who remembers Algol 68 or PL/1?). It would please me if something like Nim gained traction, because it's C++ _done cleanly_. I was just pointing out that adoption of new languages is very much "Crossing the Chasm": after the enthusiastic early adopters comes the gap until the mainstream starts listening. (I apologize for ruffling feathers, by the way; I suspect I have a nasty sense of humour.)


C++ is obviously not the pinnacle of PL development, but real-world impact is something else entirely.

Unfortunately, the data point doesn't mean much; we tend to, collectively, reach a platform that's "good enough" and then just stop looking. So just because Algol 68 is no longer used does not mean C++ will eventually go the same way; indications are that it's a "good enough" language that will continue to see massive use.

More to the point, everything, or at least most things, written in C++ would have to be rewritten. This is enormous inertia compared to Algol 68.

>I was just pointing out that adoption of new languages is very much "Crossing the Chasm" - after the enthusiastic early adopters, the gap until the mainstream starts listening.

I don't get your point (not trying to be rude; I literally don't get how it ties in to your article).


> I apologize for ruffling feathers, by the way; I suspect I have a nasty sense of humour.

Don't worry about it. The other definition of Flub I've seen [0], in Doug Hoyte's Let Over Lambda, comes from a similar place.

[0] "A language that can only thread code by subroutines." [In this case, "thread" has an idiosyncratic meaning.] Basically, anything other than Forth. Or those Lisps with the right sort of macros to re-implement Forth.


>... another important dimension to consider.

That's it, right there. One crucial point is that the elegant simplicity of the 'programming language power' rating is in fact full of messy bits and a multitude of dimensions.


Is the idea of a single language that's clearly better than anything else, for everything, a common one?

If one thinks of languages as tools, it seems pretty obvious that different languages will be the best tool for different kinds of tasks.

Are you writing a REST wrapper around an existing java toolset? Clojure's your friend.

A cryptographic, secure communication library? Rust is probably (going to be? in a couple months?) the way to go (unless you only care about the JVM).

Or maybe you're gluing a couple unix tools together in a fairly complicated fashion, in which case something like python is likely the right choice.


What if it's a large project where you have to do all these things? Would you run ten different language runtimes, each with its own garbage collector? Would you pay the cost of marshaling and unmarshaling data at the boundaries? What about training new programmers to understand and extend your system? What if you want to move some functionality from one part of the system into another, can you do it easily? Etc, etc.

There's a lot of value in adopting one general purpose language as a company-wide standard, like Java. It doesn't have to be an excellent fit for every task, but it has to be adequate and not have any major flaws. That's harder than it sounds. I like the Rust approach (memory safety without GC), though Rust itself might not be the best possible realization of that approach.


The great thing about Rust is that it doesn't need to be garbage collected, so it's easy to use inside a Ruby project. That's why you can use Rust in a lot more places than you would use Java.


> To use it effectively, however, requires using an IDE, and Flubbists hate IDEs, partly because their finger muscle memory has been overspecialized from spending too much time in Eighties power editors, but also because using an IDE is too closely associated with Blub practices. (People who can't ride a bicycle without training wheels, basically.)

This also has an attitude. For certain things an IDE is almost required, for example data science with a REPL: IDEs like RStudio and Jupyter (a.k.a. IPython), or the REPL of Racket (a Scheme dialect).

Also, is this anti-Vim and anti-Emacs?


Ever so slightly ;) I think it admirable that people invest a lot of time in the environments they spend so much time in, but a professional has to be flexible. (It is entirely possible that my sarcasm got out of control here.)


Just out of curiosity, what makes you say IDEs are bad?


IDEs aren't bad, but...

Automatic code generation is bad (unless it's completely automatic, as in "you don't ever see it"), because code always has to change, and changes aren't automatic.

A heavyweight build setup is bad, because you cannot replicate or automate it.

Long names everywhere hinder legibility. (Although long names in confusing or ambiguous places are a good thing.)

And, finally, people learning to program should learn how to program without one at some point, so they know what they are doing.

None of that makes IDEs bad. It just neuters their biggest selling points, but there are lots of small gains you can get from one.


The article is negative about IDEs. I was stating that an IDE can in fact be a valuable tool in specific areas.


Is there even a meaningful definition of "powerful"? I guess C# 4 is strictly more powerful than C# 2, since it only adds features, but how do you compare the power of Haskell versus Rust? They are designed for different domains.


Of course. I wanted to emphasize that there is no single power continuum, so PG's argument rests on sand.


There is one for each problem domain: spreadsheets, for example, are a great tool for a tiny set of problems. Lisp was incredibly well suited to solving Viaweb's problems. The landscape changed and continues to evolve, but I can't use 2050's programming languages. So picking the best language for each project, based on when you're starting it, really is an important choice.

Also of note: Lisp was an old language in the mid-'90s, so the compiler was a non-issue.


You could also summarize this article in one sentence: the choice of programming language is driven by the market, not by the opinion of a single person. That means the best one doesn't always win; rather, the one with the most appealing properties does. And those are not necessarily beauty or some academic aspects, but rather popularity (and ease of finding programmers), understandability, availability of third-party libraries, productivity, and more...


>summarize this article in one sentence: the choice of programming language is driven by the market and not the opinion of a single person.

I wouldn't summarize it that way. More accurate would be: "the effectiveness of a programming language is driven by multi-dimensional factors instead of a single factor such as expressiveness of syntax. One's expert usage of a language can lead one to overestimate its overall effectiveness."

When people debate programming languages using words like "simple", "powerful", "better", it's not very precise. It's actually misleading. All those labels have multidimensional factors, but people discuss them as if they were a single scalar.

E.g. "Golang is simple": it's simple in keyword count, but not simple in terms of implementing a family of algorithms that differ only by type. The paradox of "simple" leads to contradictory conclusions because that word has multiple dimensions. Therefore, if one's typical use case of Go does not involve templated algorithms, the language is perceived as "simple", and one proceeds to evangelize it as such.

Same type of analysis can be done for words like "power", "better", "easier", etc.

Yes, languages like Haskell/Lisp/OCaml/F# etc. have more expressive syntax, but there are other dimensions in creating software that can overwhelm that advantage. This is why PG can have a reasonable opinion on the "power" of Lisp while Aaron Swartz came to a different reasonable opinion[1] and determined Python to be more appropriate for Reddit. Aaron was not stupid, and he knew how to write Lisp. To understand his viewpoint, you have to notice that he was using different dimensions of the word "better".

[1] http://www.aaronsw.com/weblog/rewritingreddit


I agree, but then again, the market accounts for all those "multi-dimensional factors".

Maybe what should be added is that there are external and internal factors that decide which language is better. External factors have already been discussed. Internal ones are those that correspond to the company's internal structure, culture, etc. For instance: if you have a team of people writing Java, you wouldn't choose Python at all.


> It could have easily been something much worse, like Tcl.

Ouch. Tcl might not be the right choice for many situations, but it's a fantastic language for many others. It is, by far, one of my favorite languages to work in.


The Schlub paradox: a sufficiently Flub language is so far divorced from its target platform that reasoning about the execution characteristics of a Flub program is non-trivial.

In data-oriented design the mantra is, "the hardware is the platform." A given CPU comes with some instructions it understands and our job as programmers is to use those instructions to apply transformations to a stream of data. The optimal language is of course the assembly language of the target platform. However it's often more practical to use a language that compiles down to a target platform's instruction set with the fewest number of transformations (barring the presence of optimizations).

Flub languages attempt to give programmers the tools to write programs which hold some provable invariant, provide denotational semantics, or some combination of these. The benefit of this is some reduced cognitive load, expressive power, and other "soft" features. This power is what lets small teams run circles around Blub programmers in the Blub/Flub paradox.

However, the utility of these features, and whether they actually do allow small teams to get more work done in less time, hasn't been proven to me yet. It may seem tedious to think of the statistically significant data-access pattern in a given operation and to form your data structures and transformations so that access pattern becomes the fastest route for the target platform... but it has big wins in terms of the execution speed and maintainability of such programs. One still requires a significant amount of mathematical reasoning and an understanding of what a compiler can and cannot do for you.

I'm slowly coming to the opinion that there won't ever be a compiler or Flub language that will give you your cake and let you eat it too. In practice I've found re-usability to be a low-priority concern, generics to be a performance hindrance, and I've yet to find a convincing example of how type-safety is the future of mission-critical software (though there are some promising projects for sure).

Thus in conclusion: I think Blub languages will continue to rule the roost. Flub will come and go and remain fashionable as the OP has alluded to. However I don't think it's the avenue worth pursuing in order to write better programs... too much abstraction makes it rather difficult to reason about what our programs are actually executing on the target platform and for many cases that can be critical. It always comes down to the data and at some-point one needs to consider the target platform anyway. Data-oriented design is one way to think of the data first and allows the programmer to choose the appropriate semantics for the necessary transformations to be expressed by their program.

(I hope I'm making sense).


Generics are not a performance hindrance unless you mean executable size or compile times. They run just as fast as code with potentially dangerous casts.


Funny that you mention data-oriented design, since I coincidentally mentioned research on types and data layout in my other comment in this post[1]. Maybe this won't turn out to be practical for data-oriented design, or for caring about memory layout in particular; maybe it won't give enough positive benefits, like being able to easily change the layout of data structures without changing a lot of the calling code. But the view that PL research and ideas are all about denying the underlying machine seems to be an overplayed meme. The same goes for the idea that people who want more powerful languages are lazy and don't want to think about engineering considerations, instead preferring to put their fingers in their ears and deny reality. I don't think we will be able to simply send our specifications to the compiler and let it figure everything out for us; thankfully, there are many more approaches that don't involve a black box like an omniscient compiler.

Jonathan Blow's Jai language also has some ideas about easily switching between SoA and AoS.

[1] http://dl.acm.org/citation.cfm?id=604147


> But the view that PL research and ideas are all about denying the underlying machine seems to be an overplayed meme.

Well I am not terribly hip to what's "played out," but I don't intend to disparage academic research.

What I am concerned with is the idea that you can't have mathematical reasoning and correct programs in the absence of a VM and some combination of denotational semantics, program invariants, and other features du jour. The trade-offs for "expressivity" are not zero, and we accept them rather casually these days. Other trade-offs, such as "correctness", also have a cost: non-trivial impacts on compile times, run-time support, etc. And the more abstract Flub is from a real machine, the harder it becomes to reason about these characteristics.

> Jonathan Blows Jai language also has some ideas about being able to easily change between SoA and AoS.

His work is pretty cool and I have been following it from a distance.

The paper you linked to looks interesting. I wish I was an ACM member so I could read it!


> it is unlikely that major operating systems will emerge written entirely in Rust

Why?


It is unlikely that a major new operating system will emerge at all, because of high investment cost, momentum, and compatibility.


This is intelligently argued and articulate but the content is a bit lacking.

First, Paul Graham isn't an expert on languages. He wrote a decent Lisp book, years ago. Relative to VCs, he's a technical genius. Relative to technical geniuses, he's a VC. I respect him for taking iconoclastic positions against the Establishment, and I miss that Paul Graham, but... he has serious blind spots when it comes to, say, statically typed languages.

Now, PG probably intended "Blub" to mean a mediocre language du jour like C++ (in the early 1990s) or Java (since the late '90s) but Blub is more of an attitude than a language. Yes, there really are one-language programmers who can't think out of a specific paradigm. They exist, and they're the ones who write FactoryVisitorFactory classes because they haven't been exposed to functional programming and the right way of solving certain problems. That said, there are cases where C or Java is the absolute right language to use. Not many, for Java, but they exist. Not everyone who uses those "Blub languages" is a mediocre programmer; Java is not always Blub, and C is definitely not always Blub (in fact, it's out of fashion among the Blubanistas, who avoid low level programming because "it's too hard".)

As for "Flub"... I don't even know where to begin. Just going to snipe specific points.

> 'Power' and 'Expressiveness' turn out to be separate concepts. In the end, there is no simple continuum that you can use to line programming languages up, nose to tail, feeble to awesome.

I agree with this, whole-heartedly. Assembly and C, I would argue, are more powerful than Haskell because they make it so much easier to create custom control flows and to manage memory explicitly. Haskell is more expressive: you trade some power in exchange for a language that allows you to write very robust code very quickly. But sometimes you need that power, and C is the best option.

> To use it effectively, however, requires using an IDE, and Flubbists hate IDEs, partly because their finger muscle memory has been overspecialized from spending too much time in Eighties power editors, but also because using an IDE is too closely associated with Blub practices.

No, we dislike IDEs (actually, we don't; we dislike broken languages and codebases that require us to use IDEs) because it's a lot easier to keep flow when you're using the keyboard only. Switching to the mouse to recompile breaks flow.

IDEs also have a stigma because they're most useful when you're maintaining other peoples' code, which generally is a disliked sort of work given to juniors. So, IDEs tend to be associated with the second-class programmers who spend most of their time on maintenance work. I, personally, think the "reading code is for losers" attitude that many programmers cop is stupid and counterproductive. I enjoy reading good code and I wish that I had better tools for doing it.

> A disruptive startup is more likely to use Blub in creative ways, focusing on the idea, not the implementation. Facebook dominated its market - using PHP - which everyone agrees is crap. Google built up the world's biggest advertising agency using Java and C++; Android is a Java-like ecosystem that runs in most pockets these days.

Oh, this shitty argument. "Your tech should be boring, your product should be interesting." Yawn. It's a bullshit argument and I'll tackle it another time.

Google was originally written in Python, which was more of a reach than Haskell is now. Python sucked in 1998. Also, the late '90s were a time when PL was considered to be a dead field, so PL never worked its way into Google's DNA and it's stayed that way. They use C++ because it was the best choice in the late 1990s, and they tolerate Java in the context of acquisitions, and when they tried to come up with their own language... the best they could do is Go.

Facebook started on PHP but is increasingly innovative on the language front, with Erlang, OCaml, and Haskell getting a presence.

> You put a group of merely good Lisp programmers on a project, and embed them in a corporate environment, and the stellar results are likely to be not reproducible. (I don't doubt they will be more productive than Java programmers. But will the result blend?)

"Will it blend?" I'm sorry, but I prefer not to think of my work as something that will be ground to a pulp. If that's the kind of work you want, then hire a Scrum drone, not someone like me.

> Big companies understand this problem well, and prefer Blub programmers, who are easier to source and come with less attitude.

False. Do Haskell programmers come with more "attitude" than mediocre Java programmers? Yeah, there's probably a slight difference. However, the problem isn't "Haskell programmers" or "Java programmers" but people. People are just a pain in the ass: all of us, to some degree. Since you need 30 Java programmers to do the work of 5 Haskell programmers, you not only have more sources of "attitude" with the 30, but you have more politics and less per-person productivity on account of the larger team.


But a good IDE won't make you take your hands off the keyboard. See, for example, IntelliJ. There are keyboard shortcuts for everything.


IDEs become indispensable primarily in giant, overly-OO tarballs where (in the words of Adele Goldberg) "everything happens somewhere else". In more sane languages/codebases you don't need to "jump to symbol definition" constantly because the code is pretty easy to navigate.


First, the complaint I was replying to was about having to take your hands off of the keyboard. Your reply to me therefore looks like a rant that was looking for a place to happen, rather than something that actually belongs in this thread.

Second: Fine, you don't like OO (at least the way it's often done). You don't want to have to jump to symbol definition. Great. But there are quite a few other useful things that an IDE can do to help you besides jumping to symbol definitions. Maybe you should learn some of them - you might find them useful.


No offense, but the whole essay hinges on some core concepts that are NOT logically robust; hence the ultimate "paradox" is really just an application of reductio ad absurdum.

Basically, the premise of what it means to be "powerful" is at best ill-defined and most likely not logically consistent.

More or less, there is no clear qualitative definition of "powerful", nor is "powerful" meaningful across all developers, organizations, or applications.

Now, by an empirical definition in this context, the meaning of "powerful" is clear: "powerful" means how strongly the programming language selection correlates with successful projects. Based on that, the "powerful" languages are pretty much the "Blub" languages; there are many, many successful PHP, Java, and especially C projects.



