Hacker News

Let's wait a few years and we'll see plenty of articles titled "Goodbye functional programming". You can write good and bad stuff with OOP, and you can do the same with FP. There is no one-size-fits-all programming style.


You're making a false equivalence. Functional programming has a strong basis in discrete mathematics that OOP simply does not. This makes it better suited to accurately describing computation at a high level than OOP.

However, you probably arrived at this conclusion because you still see programming languages as having intrinsic paradigms, and believe those paradigms say something meaningful about computation.

The fact of the matter is that all programming languages are little more than notation systems for algorithms. The best programming language is the most accurate algorithmic notation system.


You mean lambda calculus? That model has plenty of shortcomings. Someone recently pinpointed the problem for me: complexity analysis is impossible in a system that is inherently unaware of time and of the computational cost of its transition rules. Lambda calculus is inherently timeless, both in the theoretical sense (because of Turing equivalence) and in the practical sense of being unable to provide a proper framework for complexity analysis. See http://cstheory.stackexchange.com/questions/376/using-lambda....


So you list one shortcoming, and don't even address the position that lambda calculus is better suited as a foundation for a programming language despite not being better for time-complexity analysis than a Turing machine would be?

http://cstheory.stackexchange.com/questions/21705/what-is-th...

My point is basically made here:

  This algebraic view of computation relates naturally to programming languages 
  used in practise, and much language development can be understood as the search 
  for, and investigation of novel program composition operators.


This is the usual response. I didn't say the algebraic approach does not have advantages but the complexity analysis issue is always swept under the rug by proponents of the algebraic approach. Operational and axiomatic semantics also have their place and if the theoreticians haven't yet settled on "the one true way" why is it that regular programmers think they have?

I agree that the search for novel composition mechanisms is a prominent goal of PL theory and research but it is not the only one.


Fair enough. I'll agree that there is no "one true way" and contest whether it's likely there is one. However, I do think I understand the frustration some theoreticians have with the mainstream of programming, given that their contributions to programming language theory through denotational semantics, type theory, monads, etc. have been largely ignored for close to thirty years in favor of recycling the same tired dialects of Algol over and over again.


You can count your reduction steps in lambda calculus just fine. Can't you?
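As a minimal sketch of this idea (all names are my own, not from any standard library): a tiny normal-order evaluator for the untyped lambda calculus, using de Bruijn indices, that returns the number of beta-reduction steps as a cost measure.

```haskell
-- Sketch: counting beta-reduction steps in the untyped lambda calculus.
-- Terms use de Bruijn indices, so Var 0 refers to the nearest binder.
data Term = Var Int | Lam Term | App Term Term
  deriving (Eq, Show)

-- Shift free variables >= c by d (needed for capture-avoiding substitution).
shift :: Int -> Int -> Term -> Term
shift d c (Var k)   = Var (if k >= c then k + d else k)
shift d c (Lam t)   = Lam (shift d (c + 1) t)
shift d c (App f a) = App (shift d c f) (shift d c a)

-- Substitute s for variable j.
subst :: Int -> Term -> Term -> Term
subst j s (Var k)   = if k == j then s else Var k
subst j s (Lam t)   = Lam (subst (j + 1) (shift 1 0 s) t)
subst j s (App f a) = App (subst j s f) (subst j s a)

-- One normal-order (leftmost-outermost) beta step, if any redex remains.
step :: Term -> Maybe Term
step (App (Lam b) a) = Just (shift (-1) 0 (subst 0 (shift 1 0 a) b))
step (App f a)       = case step f of
                         Just f' -> Just (App f' a)
                         Nothing -> App f <$> step a
step (Lam b)         = Lam <$> step b
step (Var _)         = Nothing

-- Reduce to normal form, returning the step count as the "cost".
normalize :: Term -> (Term, Int)
normalize t = go t 0
  where go u n = maybe (u, n) (\u' -> go u' (n + 1)) (step u)
```

For example, applying the identity to itself (`normalize (App i i)` with `i = Lam (Var 0)`) takes exactly one step. The catch the parent comment alludes to: a step count in this model need not track real machine cost, since a single substitution can duplicate arbitrarily large subterms.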


If you want a foundation for computing, may I humbly suggest imperative algorithms in the integer RAM model. That gives correct answers to questions like "what's the time and space complexity of quicksort in the average and worst case?" Good luck answering that with lambda calculus, or Turing machines for that matter.


The RAM model is nice for some simple space and runtime complexity analysis. Of course, it doesn't know about eg caches or parallelism, either.

(Lambda calculus is an interesting foundation mostly for work on correctness of algorithms; analysing the cost of computations is indeed harder. Chris Okasaki has done a good case study of analysing the runtimes of purely functional algorithms and data structures.)


Can you use lambda calculus to prove the correctness of quicksort? :-)


Depends on which version you are talking about.

If you are happy to prove correctness of tree-sort, which does the same comparisons as quicksort, it's straightforward.
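As a sketch of that claim (toy code of my own, using the usual binary-search-tree formulation): tree-sort inserts each element into a BST and flattens it, and each insertion comparison against a node corresponds to a comparison quicksort's partition would make against the same pivot.

```haskell
-- Sketch: tree-sort as a purely functional stand-in for quicksort.
-- Each node plays the role of a quicksort pivot.
data Tree a = Leaf | Node (Tree a) a (Tree a)

insert :: Ord a => a -> Tree a -> Tree a
insert x Leaf = Node Leaf x Leaf
insert x (Node l p r)
  | x < p     = Node (insert x l) p r   -- the comparison partition would make
  | otherwise = Node l p (insert x r)

flatten :: Tree a -> [a]
flatten Leaf         = []
flatten (Node l p r) = flatten l ++ [p] ++ flatten r

-- Left fold so the first list element becomes the root, mirroring
-- quicksort with a first-element pivot.
treeSort :: Ord a => [a] -> [a]
treeSort = flatten . foldl (flip insert) Leaf
```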

If you want to talk about the clever in-place quicksort, you'll want to talk about a slightly more complicated model.

Either implicitly, where you still model everything as lambdas, or explicitly: you can model state (like an array) and its manipulation inside a lambda calculus framework just fine, e.g. using linear typing, or something like Haskell's state monad (which has a pure description, but GHC is smart enough to optimize it to what you would expect when translating to machine code).
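A sketch of the explicit route (my own code, using only the standard `array` package, not any particular library's quicksort): the classic in-place quicksort against a mutable array inside `ST`, wrapped so the function's interface stays pure.

```haskell
import Control.Monad (when)
import Data.Array (elems)
import Data.Array.ST (newListArray, readArray, writeArray, runSTArray)

-- Sketch: in-place quicksort on a mutable STArray; pure from outside.
quicksortST :: Ord a => [a] -> [a]
quicksortST xs = elems $ runSTArray $ do
    arr <- newListArray (0, length xs - 1) xs
    sort arr 0 (length xs - 1)
    return arr
  where
    sort arr lo hi = when (lo < hi) $ do
      p <- partition arr lo hi
      sort arr lo (p - 1)
      sort arr (p + 1) hi

    -- Lomuto partition with the last element as pivot.
    partition arr lo hi = do
      pivot <- readArray arr hi
      let go i j
            | j >= hi   = do swap arr i hi; return i
            | otherwise = do
                x <- readArray arr j
                if x < pivot
                  then do swap arr i j; go (i + 1) (j + 1)
                  else go i (j + 1)
      go lo lo

    swap arr i j = do
      a <- readArray arr i
      b <- readArray arr j
      writeArray arr i b
      writeArray arr j a
```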


So that would be the traditional analysis of quicksort, mucked up with linear logic terminology so you can't see what's going on.

The sweet spot of lambda calculus is structurally recursive algorithms on trees. You can use it for other kinds of algorithms, but there it just makes things harder, and I've never seen it solve a problem that wasn't solved first by other means.


I came to this conclusion because as soon as a paradigm becomes popular clueless people who don't really understand the foundation will start using it. Then you have consultants popping up who don't know much either but come up with slick patterns to sell. This happened with OOP and will happen with FP.

Functional programming is not that easy to do right in the real world. It takes some level of abstract thinking and understanding. Not too many people in this industry really want to do this level of thinking. They just want a recipe for getting things done with the least amount of thinking.


Yes, but the foundation under OOP is considerably less solid. I agree that you can write a bad program in any language, but the fact remains that it is impossible to write certain bugs in statically typed pure functional languages and that functions as a base abstraction are more apt for describing computation than objects are.


Totally agree. But don't forget that the real world is pretty messy and a lot of people are not willing to think deeply about a problem. They want quick solutions. Of the people I work with, I would trust only a small percentage to have taken the time to understand OOP. Only a few will take the time to understand FP. The rest pretty much just copy/paste boilerplate code.


Wow, that's depressing. I haven't had a lot of experience working with other programmers but if this is what the software industry looks like I can understand why you'd have these opinions.


People sort themselves into competency buckets by company as well.


> The best programming language is the most accurate algorithmic notation system.

Actually, the best programming language is the one you're programming in right now. /s

Although my comment is in jest, I do think it holds a lot of truth; I'm never going to write something for HPC in Haskell just because it has better algorithmic notation; I'd most likely choose Fortran or C. I'm not going to write a web back end in Haskell; I'm going to write it in Python, Ruby, JavaScript, or another language much better suited to the application. I'm not going to write statistical model analysis in Haskell; I'm going to write it in R, Python, Matlab, or Mathematica.

Saying that all programming languages are little more than notation systems for algorithms doesn't reflect the reality that programming languages are not just math operations, even if that's what they become in the compiler or on the CPU; telling people that disrespects both the industry and the academic side of programming and computer science.


Haskell is as well suited to backend web development as Python, Ruby, or JavaScript. Just look at Yesod. There's nothing about Haskell that makes it worse for web development, and a lot that makes it better.

For what it's worth, Haskell has uses in finance for statistical modeling: you can model different currencies and equities/derivatives with ADTs, and the type system will ensure you never confuse them in your code. Haskell syntax is beautiful and in many cases mimics mathematical notation, and as such it is used frequently by mathematicians.
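A toy illustration of that point (types invented here, not taken from any real finance library): wrap amounts in distinct newtypes and the compiler refuses to mix them.

```haskell
-- Sketch: distinct currency types the compiler keeps apart.
-- Newtypes have no runtime cost; the safety is purely compile-time.
newtype USD = USD Rational deriving (Eq, Ord, Show)
newtype EUR = EUR Rational deriving (Eq, Ord, Show)

addUSD :: USD -> USD -> USD
addUSD (USD a) (USD b) = USD (a + b)

-- Crossing currencies requires an explicit conversion with a rate.
eurToUSD :: Rational -> EUR -> USD
eurToUSD rate (EUR e) = USD (rate * e)

-- addUSD (USD 1) (EUR 1)   -- rejected at compile time: EUR is not USD
```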

But I don't mean this to be a defense of Haskell. There are many posts that do it better; just yesterday there was a blog post on the front page about one startup's use of Haskell in its main product: http://baatz.io/posts/haskell-in-a-startup/

There are other languages, like Lisp, that directly imitate lambda calculus. Lisp is elegant because it is like programming in a syntax tree, with the entire structure of the tree available for manipulation, such that you can create extremely concise and expressive programs with less code.


And Real Programmers(tm) can write Fortran in any language...



