Warming Up to Unit Testing (junglecoder.com)
51 points by yumaikas on Feb 24, 2020 | 73 comments


The article doesn't mention the single most important reason for unit testing: to enable you to refactor your code.

Without a suite of unit tests, there's no way to know that your refactor hasn't introduced bugs of some kind. A suite of tests won't necessarily catch every bug you'll introduce, but it will catch many of them. I've seen this repeatedly.

I've had many conversations with developers who claim they don't need unit tests. When I ask "How do you refactor?" the answer is usually some variant of "I don't."


I keep running into the exact inverse: people who seem to believe they only need unit tests (i.e. no functional/integration tests). It's insane. If I had to pick one, I'd go with the more "live" multi-component tests and ditch the unit tests. Unit tests still have value, as a lightweight way to catch dumb errors in future refactors. That's fantastic. Love it. As soon as they cease being lightweight (and the norm seems to be more effort spent creating mocks/stubs than the actual tests) then they fail even in that purpose and just become another impediment to useful work. I'm all for unit tests as part of a full testing meal, but a whole plate full of nothing but unit tests makes me want to throw up.


The way I've started to think about testing is like this: you should test your interfaces, because the interface is what matters. If your code is a web app, then you should be testing the user interfaces and workflows that users interact with throughout the app. If your code is a library, then you should be testing the classes and functions that users will be interacting with when they use your library. In my opinion, everything else is an implementation detail.

But I think this distinction gets blurrier the larger your organization is. You may have intended some class in your web app as an implementation detail, but the larger the organization, the more likely it is that somebody will start using that class for some other purpose. And thus an implementation detail becomes an interface that other people depend on.


I think mocks in unit tests are a code smell: they indicate that the business logic has not been correctly isolated from the rest of the code, or that the code shouldn't be unit tested at all and should instead be component/integration tested.


Most architectural guidance encourages separation of concerns into layers (handler, controller, gateway, etc). Each layer consumes the next. You will be hard pressed to unit test one layer without injecting a mock of the layer below.

When the layer is essentially pro forma and has no logic of its own, this is excruciating, but the line coverage number rules all.


Yeah, I think this separation of concern misses a step.

Let's say you have controller -> Business logic layer -> Data access layer.

The controller takes in some input, validates it, passes it to the business logic layer. The business logic layer retrieves some data from the DAL, does some logic, maybe writes data back to the DAL, and passes the data back up to the controller. The problem is that the business logic layer is usually doing two things at the same time, in the same function: the 'plumbing' (calling other things and passing data around), and the actual business logic.

When you try to define a boundary around the code, it gets messy, because what you want to test (the business logic) is mixed in with the plumbing, so you have to use either dependency injection or mocking to force a boundary. This makes your tests more brittle, as changes in things you don't want to test (your dependencies) affect your tests.

If you pull out the business logic into its own modules, and instead have a function that orchestrates the business logic (i.e. tell the database what data you want, tell the business logic functions to perform an operation on the data), you end up with a nice clean boundary around your business logic, which can be unit tested easily without mocks.

You then test the 'plumbing' part by using a component test that checks that the inputs and outputs of the component as a whole (business logic + plumbing + data access) are correct.
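A minimal Python sketch of that split, with hypothetical names (`apply_discount`, `process_order`): the business logic is a pure function that needs no mocks, while the orchestrating function is left to a component test.

```python
def apply_discount(total: float, loyalty_years: int) -> float:
    """Pure business logic: unit-testable with plain values, no mocks."""
    if loyalty_years >= 5:
        return round(total * 0.9, 2)
    return total


def process_order(order_id: int, db) -> float:
    """Plumbing: fetch data, delegate to the pure function, save.
    Covered by a component test of the whole flow, not a unit test."""
    order = db.load_order(order_id)
    discounted = apply_discount(order["total"], order["loyalty_years"])
    db.save_total(order_id, discounted)
    return discounted


# The pure function is trivially unit-testable:
assert apply_discount(100.0, 6) == 90.0
assert apply_discount(100.0, 2) == 100.0
```

The point of the shape is that `apply_discount` never touches a database or a mock of one; only the thin `process_order` wrapper does any I/O.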


I think what you’re describing is what we would call a controller, and management / senior ICs would still demand 90% line coverage by unit tests.


I agree. As always, "It Depends".

Use the best tool (type of test) to document & test the business or technical rules being requested.

Tests help teams (and future-self of the individual who wrote the code) understand why certain code was implemented & what purpose it is meant to solve.

As the business changes the rules they want implemented, having well-written tests seems to help significantly with handling those rules. It also helps when you want to refactor for the sake of code understanding & readability.

It is often the case where both developers & those requesting the changes forget why some of the rules were put in place to begin with.


> As soon as they cease being lightweight (and the norm seems to be more effort spent creating mocks/stubs than the actual tests) then they fail even in that purpose and just become another impediment to useful work.

Agreed. I think that in many cases it is a signal that your code is too complex, and may be violating the Single-Responsibility Principle.


I'd say that when I don't know where to start (because there are so many ways one could start building something) outside-in TDD is very useful. (About 40-50% of my code is outside-in TDD).


Now mix in VP-level accountability for unit test coverage KPIs, coupled with piss poor integration test infrastructure, and you have the uphill battle I fight daily.


There’s more nuance to this - poorly written unit tests (eg excessive mocking and things like testing private methods) actually make it harder to refactor code, and using strongly typed languages vastly reduces the importance of unit testing for refactors (to the point where the IDE can even do things like method extraction for you)


I propose "Granshaw's law". Every time the value of unit tests is discussed the debate over static vs dynamic typing must be looming eerily in the background.

That said, as someone who has spent the majority of his career writing dynamically typed code I can definitely recall far more instances where statically typed code has run and worked first time vs times when I've written some code with unit tests and it worked first time.


When I was a contractor there was a project where the tests were so extensive and complex that most change was impossible because of the impact on the tests. Testing private methods is especially evil.


Perhaps runtime assertions are the best option for private methods. Bake them right in.
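A minimal sketch of that idea in Python (the `Account` class and its methods are hypothetical): the private method enforces its own contract with runtime assertions instead of being exercised by a dedicated test.

```python
class Account:
    def __init__(self, balance: float) -> None:
        self._balance = balance

    def withdraw(self, amount: float) -> None:
        self._apply_debit(amount)

    def _apply_debit(self, amount: float) -> None:
        # The private method's contract is baked in as runtime
        # assertions, rather than tested from the outside.
        assert amount > 0, "debit must be positive"
        assert amount <= self._balance, "insufficient funds"
        self._balance -= amount


acct = Account(100.0)
acct.withdraw(30.0)
assert acct._balance == 70.0
```

One caveat with this approach in Python specifically: `assert` statements are stripped when running with `-O`, so they suit internal invariants, not validation of untrusted input.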


You need "tests" to be able to refactor. They don't have to be unit tests.


There is also the chicken and egg problem where it's hard to write good unit tests for code which is poorly abstracted and in need of refactoring.


Any form of maintenance or cleanliness is basically bistable.

Start off with things clean, consistently try to keep it that way, and it's actually pretty easy and not too much effort.

Start off with a mess, or start off clean but then neglect things for a while, and you will wind up at the bottom of a hole that it is going to be a long, painful slog to dig yourself out of.

This applies to tons of things: coding, keeping your bathroom clean, car and home maintenance, personal relationships, financial management and record-keeping, etc.


Unit tests also force code to be written in a way that it can be refactored and is abstracted at a (somewhat) reasonable level. It's very hard to get test coverage on a 2000-line mega-function.


Fixing bad code is like repairing an old building: you have to put in the scaffolding and supports (integration/component tests) first to make sure it stays up before you gut it and check what's salvageable (refactoring and unit tests).


Bit of a disappointing read for me. Given the title, I was expecting more "meat" to the topic, like more details about why and how the author is now warming up to unit tests, anything to get the article beyond a surface-level summary of events. Right now, it simply reads "I didn't previously unit test, then I read some books, and now I do".

Seems to me writing like this is best served with a few book recommendation tweets and doesn't really make sense in this format.


(author here)

Why:

Mostly, I think it's down to having a better understanding of where to try to break dependencies, specifically trying to abstract away from the outside world (akin to how they did it in the Moonpig billing system).

The how is still in progress. Perhaps the most notable instance I can think of is that recently, due to work on breaking dependencies, I was able to swap a RabbitMQ connection for an in-memory class, so that I could easily test how two components interacted without having to stand up an external service.
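That kind of dependency-breaking can be sketched like this in Python (the `MessageBus` interface and `notify_shipment` function are hypothetical stand-ins, not the author's actual code): the production code depends on a small interface, and the test supplies an in-memory implementation instead of a real broker connection.

```python
from typing import Protocol


class MessageBus(Protocol):
    def publish(self, topic: str, body: str) -> None: ...


class InMemoryBus:
    """Stands in for the real broker connection during tests."""
    def __init__(self) -> None:
        self.messages: list[tuple[str, str]] = []

    def publish(self, topic: str, body: str) -> None:
        self.messages.append((topic, body))


def notify_shipment(bus: MessageBus, order_id: int) -> None:
    bus.publish("shipments", f"order {order_id} shipped")


bus = InMemoryBus()
notify_shipment(bus, 42)
assert bus.messages == [("shipments", "order 42 shipped")]
```

Unlike a per-test mock, the in-memory bus is a reusable fake, so it can back many tests (and even local development) without restubbing each call.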

For me, looking back, I think it was the Moonpig article that's had the most influence on how I think about things.

Do you have any other questions?


We started using doctests to do our unit testing: https://github.com/supabase/doctest-js/blob/master/README.md

This was a good way to get our coverage up. Something about writing the ‘test’ in the context of the function makes it seem easier. Also it helps that you get tests + docs + intellisense all in one fell swoop.
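The linked project brings this style to JavaScript; Python ships the same idea in the standard library. A small sketch (the `slugify` function is a made-up example):

```python
def slugify(title: str) -> str:
    """Convert a title to a URL slug.

    The examples below are executable documentation: the doctest
    module runs them and compares the output.

    >>> slugify("Warming Up to Unit Testing")
    'warming-up-to-unit-testing'
    >>> slugify("  Hello,  World!  ")
    'hello-world'
    """
    import re
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)


if __name__ == "__main__":
    import doctest
    doctest.testmod()  # runs every example embedded in the docstrings
```

As the comment says, you get tests, docs, and editor hints in one place, since the examples live right next to the signature they describe.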


I admit I still haven't really latched onto unit testing. I gave it an honest effort about a year ago, but the amount of time I was spending to mock out the inputs and then actually write the test was absurd, and I didn't feel it provided enough benefits for the time it took me away from "productive" coding.

Maybe I'm doing it wrong, who knows...


Maybe, but there are a couple of more likely explanations.

(1) You're working on a system that is poorly structured for testing.

(2) You're trying to make a unit test do an integration test's job.

Both are incredibly common. In particular, if you find that you're spending a lot of time creating mocks/stubs to support unit tests you'd probably be better off finding a way to inject errors in an integration test. Error injection can be done at any level, works even when the code structure makes mocking difficult, and exercises code paths that cross multiple components in their "final" close-to-production form. IMX most of the people who go crazy about unit tests just haven't thought deeply enough about the problem to realize that error injection is strictly better (and a lot easier) most of the time.


#1 is very likely. However if it is extremely common to do unit tests "wrong", then that would mean I would need to find a testing guru to learn from in order to do it "right", and do a great deal of study to learn myself. The possibility of that happening is very low.

Does this not give credence to the notion that there may in fact be something wrong with unit testing in general? Is it not just compounding the problem?

I'd rather spend my limited cognitive currency and time in the system itself, not in mastering the art of unit tests... Unit testing should be a tool not an art. If a tool doesn't simplify my life, it has limited utility.


I don't exactly disagree with you. I've been on many projects where unit tests failed to justify the time spent writing them (including mocks) and dealing with spurious failures. I believe they can be beneficial, but only if people understand their limitations and play to their strengths. I'll gladly use them to vet code that implements a completely self-contained data structure or algorithm with no dependencies, and I've done so to good effect many times. Otherwise, I'd rather write a proper stub/simulator that can be used in integration tests to exercise multiple components at once than waste time with single-purpose mocks.

> I'd rather spend my limited cognitive currency and time in the system itself, not in mastering the art of unit tests

That's not quite an either/or. Over an entire career you'll probably work on many systems each requiring their own specific knowledge. Knowing one system is great as long as you're working on that system, and mostly useless thereafter. Knowledge of how to write good tests (unit or otherwise) is much more transferable. Every team can use somebody with those skills.


There's quite a bit of "no true Scotsman" in there? If few get 'unit testing' right, then I believe there may actually be something wrong with it, not just a failure to 'do it right'.

Mocking is for exercising all inputs to a method, not just error injection. In fact that's fundamental to unit testing, and a necessary part. And it can grow to become a nightmare.


> There's quite a bit of "no true Scotsman" in there?

Only in your head. I'm not trying to claim unit tests aren't useful, or that any particular unit tests aren't good. The only fallacy in sight here is strawman.

> Mocking is for exercising all inputs to a method

How useful is that if they're values that the real version of the mocked component can never return? As I said right at the start, this kind of validation has value when changing code later, but that's a lower priority than validating that the whole system as it is today does what it's supposed to.

Doing unit tests right is valuable, but is much more of an art than most people realize. A good unit test should cover inputs and paths well, but not constrain implementations. Most unit tests I've actually seen - quite a lot BTW - are the exact opposite. People need to stop thinking in terms of depth (adding mock after mock and implementation-specific check after check for a few obvious scenarios) and think in terms of breadth (lighter weight validation of more scenarios) instead.

P.S. If your "inputs" are results from functions that have to be mocked for testing, it's often a sign that the code has a backwards control flow. When that happens, it's better to fix the control flow than to add more mocks.
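That "backwards control flow" fix can be shown in a few lines of Python (function names hypothetical): instead of a function that calls a collaborator you must mock, have the caller fetch the data and pass plain values in.

```python
# Before: the function reaches out to a collaborator,
# so every test has to mock cart_service.
def total_price_bad(cart_service, user_id):
    items = cart_service.get_items(user_id)  # must be mocked in tests
    return sum(i["price"] * i["qty"] for i in items)


# After: the caller does the fetching; the function takes values.
# No mock needed, just literal data.
def total_price(items: list[dict]) -> float:
    return sum(i["price"] * i["qty"] for i in items)


assert total_price([{"price": 2.0, "qty": 3},
                    {"price": 1.5, "qty": 2}]) == 9.0
```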


Still my favourite source on unit testing and mocking: http://www.growing-object-oriented-software.com/


When you first hear about unit tests it seems like the greatest idea ever. Automation of a process you already do manually makes that process repeatable and quick. When I was a junior programmer I was all over it.

Over the years I have come to reject some things that are seemingly fundamental to software engineering in the industry. One of them is unit testing. I don't believe that unit testing is harmful in itself, but excessive use of it is a sign of weakness in another area.

- if your code relies on unit testing for correctness, then your coding methodology (or language) is error prone. A good programming methodology where the domain is well understood usually does not need a single unit test.

- If your code base utilizes dependency injection to make the code more "testable" with mocks, you are making the code 10x worse. There are other ways to make your code more modular without injecting new scope, logic and apis into an object.

This logic sort of applies to integration tests, which basically test the same thing as unit tests but touch functions that have IO. However, it is a little different. If your program does not need unit tests but needs integration tests, it means things like:

- your type checker does not extend across systems

- the programming methodologies become more error prone as you move to other systems and you have no control over it.

- you lack understanding of the foreign system.

The thing with integration tests is that all of the above is largely inevitable and integration testing really becomes the only way to correlate (not verify) the entire system with correctness when you lack the integrated approach of a single programming language. Therefore because of this I am not against integration testing.

I recently green fielded a small micro-service at my company and I did not write a single unit test. It had one logic error and that was largely due to a flaw in the type checker. I can imagine many engineers being aghast at the whole concept.

So essentially I've had the opposite conclusion. Unit testing to me is a symptom of bad engineering practices. If you rely on it, something is wrong.


A lot of the software I write would be considered "safety critical" in that a crashed program could lead to injury to people and damage to property.

Unit testing is by far the most valuable form of testing during development, in that it catches 90% of bugs before the code touches hardware. Also, after development, unit tests serve as regression tests that catch unintended changes made by others.

Mocking, whether or not through dependency injection, is absolutely necessary to enable testing when targeting bare-metal platforms, and very helpful to speed up testing when targeting an operating system. Dependency injection in particular usually constrains code to be written in a way that, although perhaps seemingly more complex at the surface, is a lot more maintainable long term.

Integration testing is also a very good idea. The thing with integration tests, though, is that they get bigger and more complicated and slower over time. Relying solely on integration testing will likely become a development bottleneck if your system grows in complexity. If you can run an integration test suite deterministically and efficiently on a development system, then the software under test probably isn't that complicated, relatively speaking. Also of note is the tendency for mocks and dependency injection to help out when integration testing as well; being able to completely abstract away hardware allows for faster than real-time deterministic simulation at a system level.

If you're doing well without unit tests and folk generally seem to be happy, then good on you. Unit tests are a pain to write. I'd suggest that you might be severely limiting your full potential as a software developer, though.


>If you're doing well without unit tests and folk generally seem to be happy, then good on you. Unit tests are a pain to write. I'd suggest that you might be severely limiting your full potential as a software developer, though.

Maybe it's you who hasn't taken the red pill.

Safety critical software would be even more safe if you used formal methods. Formal methods verify logic to a degree of 100% while integration tests dig out unintended side effects or leakage.

Let me put it another way. In Computer science there's a field called formal methods. With formal methods it's possible to do things like dependent typing. These techniques and technologies can verify your program to 100% correctness. This is not unit testing to correlate correctness to a certain degree but actual 100% verification.

Such a technique has pitfalls in the sense that it is brutally hard. I am saying that there are programming methodologies that can be used to go above standard typing and below dependent typing where the program is not verified to 100% correctness but the correlation of correctness is so high that you can forego unit tests.

Some languages like Haskell and Rust already sort of enforce these programming methodologies / technologies on you such that most unit tests become redundant.

You need far fewer unit tests for a Haskell program than you do for JavaScript.

Therefore a program that relies too heavily on unit tests is a sign that the programming methodology is flawed. Something is wrong.


My ideas of writing tests has changed over the years:

- It's a must-have for business logic, because errors can lead to actual dollar costs (and get looked at by the SVPs). Any maintenance change should require a unit test for this type of logic. This way our confidence in deploying a new feature or change goes up.

- Catching regressions. We've had errors we've missed in the past, and as our team changes and team members come and go, we want to make sure we are not making the same mistakes. Thus, automated regression testing helps our confidence in code stability go up.

- System failures. This is more obviously an integration concern, but it's good to have these tests in place because once again they can be traced to actual dollar amounts. Also helps the confidence level of deployments by junior/new employees.

- Maintenance of legacy code over YEARS. I think most people skip unit tests because they are writing new code, which gets tested over and over during development, but adding a new feature to massive legacy code is still an aspect of our jobs. I don't remember how my code worked last month let alone 3 years ago, so unit tests help not only in confidence levels, but in speed of development because I wrap mocks around any services that I don't want to wait for or should not be triggering for this rapid-iteration development.

Each circumstance is different, but I think unit testing goes beyond just knowing the domain well.


Unit testing is not just about catching errors while you build the code, it's (much more) about being able to catch errors at any time. Now, tomorrow, in 2 weeks, by someone else trying to refactor something 10 years from now. Refactoring without tests is a nightmare, and people will avoid touching anything because of it. With time that results in piles of old crap layered one over another, hack upon hack, as no one ever wanted to take the responsibility of untangling it and refactoring the core problems properly. Many things you can do without tests, but tests make it more approachable and less scary to try.


The real value in unit tests comes when you need to refactor a codebase. Especially if you're not the original author. Unit tests are a codified API contract.

It's easy to write something small and correct from scratch (like a greenfield microservice). It's far harder to maintain that over several years and several changes of hands.


A type checker should track an aspect of this thoroughly. If you find yourself relying on unit tests as a contract to prevent future changes, something is wrong with your coding methodology. Too many dependencies in your code, not enough modularity.

If your coding methodology was good, the type checker is enough to verify these contracts.

A language without a type checker will have more need of unit tests.


A type checker is very helpful, but type checkers catch different types of errors than unit tests do. I would not want to refactor a large program without both.

Unit tests ensure that everything you thought about still works. Types ensure that ALL types are compatible.

What types do well is the everything case, you cannot forget something with types. However they only cover compatibility not correctness. You can have a type correct program that is wrong.

There are formal methods that can get to correctness, if you have the right constraints. The only problem is you can have the wrong constraints. Within that limit, though, it shows the program works correctly in all cases.

Unit tests prove only the cases you thought about work. While this is far below the infinite number of cases the above prove, they are still very useful because it is generally very easy to reason about the one case you do choose. If you choose the cases well, the majority of possible cases are "close enough". Thus a few well chosen unit tests can give you assurance that your code works correctly in the important cases.


Unit tests operate in the same domain and constraints as formal methods. In terms of correctness formal methods beat unit tests hands down.

For things that live outside of this domain the term integration test becomes relevant. If you are testing for unknown side effects it is no longer a unit test but an integration test.

This is not my point.

Unfortunately I can't go into detail about my point because it's just too long to talk about. I glossed over it and called it "Programming methodology."

In the context of correctness, with the right programming methodology, programming language and type checker you can use this and your intuition alone to forego the need for unit tests. This does not apply to integration tests.

Anyway, without going into too much depth, the right programming methodology involves reducing complexity. It's similar to how, if a programming language doesn't have nulls, you can't have runtime errors caused by nulls. If you use the right programming language or right methodology, you can reduce much of the complexity of the program to the point where unit tests are redundant to your intuition.

Good examples of languages that pull this off are Rust and Haskell. You don't have to go as far as Coq, but those two languages force you into a methodology that ensures correctness to a degree that many unit tests become redundant.


Formal methods don't beat unit tests for one important reason: it is much easier to reason about the single case the unit test covers and convince yourself it is correct than about all the possible cases (which might be an infinite set) at the same time. As Donald Knuth has said "Beware of bugs in the above code; I have only proved it correct, not tried it". Because even Knuth himself is smart enough to know he sometimes makes mistakes in his proofs. Just a few trials (ie unit tests) give confidence it works in practice and in theory.

Or to put it a different way, unit tests guard against an incorrect specification. Code can be proven correct and wrong at the same time. Unit tests make it much easier to reason about if what is specified to happen is what should happen.

As an aside, I'm not against blurring the line between unit tests and integration tests.


>Formal methods don't beat unit tests for one important reason: it is much easier to reason about the single case the unit test covers and convince yourself it is correct than about all the possible cases (which might be an infinite set) at the same time.

Dependent types and/or automated proof checkers should bridge the gap of intuitive reasoning. Basically the dual relationship where code verifies the type and dependent types verify the code should be strong enough to forego unit tests and guard against "incorrect specification."

>As an aside, I'm not against blurring the line between unit tests and integration tests.

Sure that's fine. But then the type of test and the context of what I am referring to is strictly tests that only cover pure mathematical logic.

Things outside of that bound cannot be covered by proof nor by programming methodology and are also not the type of test I am referring to.


A type checker still accepts a correct but enormous interface without complaint. It’s only when you want to test it in isolation you realize that the function needs a dog+kitchen+universe you have trouble providing. So you refactor it to have a smaller interface and that lets you write the test. The act of writing the test now made you write less coupled software and it will remain and prevent anyone from adding the coupling in the future.

For me this is the best side of unit tests: their validation of “correctness” is just one aspect, but their job as a guarantee of decoupling and as living documentation is at least as important. Even a test that is only written and compiled but never run provides great value for architecture and documentation.


I am only arguing the fact that unit tests are unneeded FOR correctness now and in the future if you have the right programming methodology.

If you want to write unit tests for documentation that's a different argument. I'm not against that.


While a lot of domain knowledge might be possible to encode using types, most domain knowledge is probably easier to ensure via a combination of types and unit tests. It not only needs to be possible for unit tests to be unneeded, it needs to be practical (as in economical, possible in the language used). You can probably make a formally verified first-person shooter which validates its physics/network/game logic in Agda. But you likely can’t do it in C++ and be done by the holiday season. So unit tests are probably still needed and a good idea for most situations.

Unit tests famously don’t prove correctness (the absence of bugs), only the presence of bugs, so I guess it’s wrong to compare them to actually verifying correctness.


There's another emergent phenomenon that isn't often talked about with unit tests. It's known intuitively, though nobody mentions it.

Unit tests are like statistical samples of all possible test cases. So while unit tests don't verify correctness overall, they can correlate with correctness. That is why you only need a couple of unit tests to correlate with a degree of correctness in a domain that's almost infinite. The phenomenon that enables science also enables unit tests to work.

That being said, there are programming methodologies of managing complexity that negate the need for unit tests. This is my main point.


I like the idea of statistical samples. This makes a lot of intuitive sense. However, I'm unaware of any methodology that obviates the need for unit testing.

I presume you're talking about proofs, or maybe something like Idris?


No. While dependent types or something like Idris is the strongest form of correctness, the technology just isn't mainstream enough yet. This is not what I'm talking about.

It's really too long to get into but the programming methodology I'm talking about has to do with reducing complexity of your program to the point where your intuition along with the type checker should render unit tests mostly redundant.

For example, a language with no nulls renders all unit tests that test for null runtime errors unnecessary. Add exhaustive pattern matching into the mix and you'll render all tests that cover runtime errors unnecessary, because your program simply can't have a single runtime error (see Elm).
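The null case can be illustrated even in Python, under the assumption that a static checker such as mypy runs over the code (the `shipping_label` function is a made-up example; Python itself enforces nothing at runtime, unlike Elm or Haskell where the guarantee is built in):

```python
from typing import Optional


def shipping_label(name: Optional[str]) -> str:
    # A checker like mypy forces this branch to exist: calling
    # name.upper() without first narrowing away None is flagged as
    # a type error, so the "crashes on null" unit test never needs
    # to be written.
    if name is None:
        return "UNKNOWN RECIPIENT"
    return name.upper()


assert shipping_label(None) == "UNKNOWN RECIPIENT"
assert shipping_label("ada") == "ADA"
```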

Both of the examples above involve either the language itself or a programming methodology, or both. Rust and Haskell are two languages that force a lot of "methodology" onto your programming, where a significant number of unit tests that you would otherwise need in, say, Python or C++ are rendered unnecessary.

Another way to look at it is using a highly restrictive form of programming. The more restrictive the less opportunity for errors to the point where your intuition and type checker alone should be able to allow you to forego unit tests.

There's more to it, like segregating all IO from core logic, but I've already written too much. Suffice it to say, what I'm talking about falls within a subset of the domain of a semi-popular style of programming. I don't bring up the name because that would only detract from the main point.


I agree that you can remove a lot of simple representation bugs. You must have either a credit card or a billing address or your order simply cannot be in the completed state, and so on.

Everything including nulls, index-out-of-bounds is unnecessary and can be elegantly handled by the type system.

But complex business logic like building fire codes, tax law, network standards, traffic rules etc aren't that simple. You are quickly into Agda/Idris non-mainstream territory if you want to represent some tax law in types. It's basically "if you have two or more dependents and make under 10k then the tax is 4.5% of the income over 5k minus...". It's of course possible to represent anything using a complex enough set of types - but I haven't seen a good example of a complex piece of domain logic that didn't need tests. I'm a big fan of e.g. "Domain Modeling Made Functional" and similar, but usually the most complex calculations are just that: calculations. You can encode things like "the input must be positive" or "the output is guaranteed to be positive" with types easily, but when you want to encode that "given the input 3 the output must be 16 on a Thursday" you are probably beginning to see diminishing returns if doing it with types instead of e.g. property-based testing.
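The quoted tax rule can sketch that division of labor. Below, an example-based check pins the specific rule, and a tiny hand-rolled property check (a stand-in for a library like Hypothesis) covers the invariants the types can't easily express. The `tax_due` function and its rates are invented for illustration, not real tax law:

```python
import random


def tax_due(income: float, dependents: int) -> float:
    # Hypothetical rule from the comment above: with two or more
    # dependents and income under 10k, tax is 4.5% of income over 5k.
    if dependents >= 2 and income < 10_000:
        return max(0.0, income - 5_000) * 0.045
    return income * 0.10  # made-up flat rate for every other case


# Example-based check of the quoted rule:
assert tax_due(7_000, 2) == 2_000 * 0.045

# Property check across random inputs: tax is never negative
# and never exceeds income.
for _ in range(1_000):
    income = random.uniform(0, 50_000)
    deps = random.randint(0, 5)
    t = tax_due(income, deps)
    assert 0 <= t <= income
```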


I get where you're going with this. You are talking about macro-level phenomena that emerge from multitudes of simple axioms interacting with each other.

Let me put it this way:

Given an exact set of tax laws, network standards or traffic rules, I can implement this perfectly and easily with types, intuition and programming methodology alone. (Another hint: don't use mutable variables, or even better, forgo variables and go point-free; it eliminates a whole class of bugs that have to do with state.)

Implementing a specification is trivial even when the specification is complex. It's simply a mapping from English to programming logic. There is little complexity here.

I think what you're getting at is a sort of derivation of the specification from a macro goal.

For example, what is the best way to tax society so that society has maximal benefit and zero loopholes for bad actors? What is the best way to create a set of traffic rules and network standards so that throughput is maximized, fair among everyone, and (most importantly) 100% bug free?

This is hard and specs derived from a macro goal can be bad or wrong. This also usually happens before programming begins.

Bad specs are not something that unit tests are really used for. Unit tests are used for logic errors - to catch where a programmer implemented a spec incorrectly.

While I can see some logical value in using unit tests to test for macro-level goals, in practice this stuff happens at the IO level and is the domain of integration tests. You are not testing a "unit" when measuring a macro-level goal; the philosophy of an integration test applies here, because a macro-level goal basically operates at the integration level rather than the unit level.


crimsonalucard's reply is a good summary, but it must be understood that this kind of thing obviates the need for some unit tests, not all unit tests. Haskellers love unit testing, after all! (See QuickCheck and Hedgehog.)


For very simple programs, sure.

Type checkers do not verify behavior. Unit tests verify behavior.


Unit tests do not verify behavior, except for a few cases (out of a virtually infinite set of cases) that the programmer happened to think about.

Types do verify behavior - e.g. this function always returns an Int, that function only returns a UserId, this other function doesn't do any I/O.


That is a very different kind of behavior, though. If my multiply function ADDs its inputs and returns an int, it passes the type checker but is still wrong.


But if your multiply function fails to handle some special case that the tests don't catch (negatives, overflow/saturation, denorms, ...) then the tests are also wrong.

What you really want is to be able to state properties, e.g. "for all x/y, mul(x, 0) = 0, mul(x, 1) = x, mul(x, 2) = x + x, mul(x, y) = mul(y, x)", and so on. And working with these kinds of statements (via e.g. quickcheck or whatever) might generate tests, but feels a lot more like working with types. You're making proofs, not examples.


Which is why I have always (well for the last 5 years or so anyway) said that you want both because they catch different issues.


Speaking as someone who has fielded many reliable bits of software at least a decade or more before unit testing became a thing, I can't agree unit testing is a sign of bad engineering.

Tests add real value for me. They are now usually the first "user" of my code. I often find design issues when writing tests. They also catch bugs (rarely but often enough to be useful) and make it easier to refactor with confidence. YMMV!


It's a sign that your programming methodology is so error prone that you need unit tests to catch errors.

Most of the code I write doesn't need unit tests (automated or manual) because the way I programmed it reduces the complexity of the program to the point where my intuition and the type checker alone will cover correctness.

Basically, another way to put it is... the code doesn't have any errors in the first place, so any form of unit testing isn't really needed.

Note that I am talking only about unit tests, or tests that cover pure internal logic. Anything that touches IO or anything external becomes an integration test, and that is not what I'm talking about.


Can you point us to some code that demonstrates how you do this?


Haskell code.

I would try coding some solution in haskell with minimal IO. You should only need the type checker to verify correctness.


> If your code base utilizes dependency injection to make the code more "testable" with mocks, you are making the code 10x worse.

I don't know that I'd word it exactly like that, but one thing I've found over the years is any tests we have that rely on mocks basically just don't test anything at all.

If your whole test is "This gets called, then that, then that", you aren't actually writing a test. You're repeating what the code already says.


Well, "something" is being tested, as your code is verifying that an input produces an output.

But your philosophy is correct and we are in agreement on that.

The thing I was referring to is the abstraction of dependency injection itself. It leads to code that is overcomplicated, as the abstraction itself is not only pointless but adds unnecessary layers of complexity.

There are ways to achieve what dependency injection for unit testing is trying to accomplish without introducing the unnecessary complexity that comes with dependency injection. It's as simple as separating IO functions from core logical functions. Don't bundle these two things together into an object or a single function, that's it. (Also don't use objects, period, but that's another topic.)


I like dependency injection even when I'm not writing tests (which I usually skip for side-projects.)

I find it natural and convenient to think about "What dependencies on other parts of the system does this code have?" Expressing those dependencies explicitly feels like it reduces complexity, not adds it.

But, I'm not talking about pulling in a big fancy DI framework, just making dependencies explicit in your function/class parameters.

I will say that DI is sometimes used as a tool in overly-abstracted systems. An example that comes to mind is the ASP.NET MVC framework -- with DI and inheritance, literally any behavior can be overridden in fairly opaque ways. Trying to suss out the concrete behavior details is like swimming in quicksand. (Or it used to be that way, haven't touched ASP.NET in a while.)

As an aside -- I'm curious about your programming language of choice. I think DI is a lot more useful in some PLs than others. For example, I find JS code often uses imports to create complex graphs of implicit dependencies, and DI can help tame that complexity. But for other PLs like Python or Clojure, I basically don't use DI at all.


Javascript doesn't "need" dependency injection because modules are objects, and you can mock those directly by replacing what they reference. Very similar story in python. You could say people do DI in these dynamic languages without calling it DI.

Now Java and C# are different because they are compiled. You need a DI tool to do dynamic dispatch if you want to mock.


It's all aspects of the same thing.

You can't inject a nuclear reactor into a burrito in any language untyped or not because it will lead to an error.

The only difference is that in python/js the error will happen at runtime, while in C#/java the error happens at compile time.

The main difference is the type. In typed languages you need to describe an interface or a class of types if you want to do mocking or dynamic dispatch, while in dynamic languages you don't need to explicitly define this as a type; the definition exists in how you use the object that is passed in.

Either way, DI, however universal, is bad practice in any language.


Good dependency injection results in flatter code rather than more layers. Dependency injection simply moves the dependencies to the top of the program rather than the bottom.


> If your whole test is "This gets called, then that, then that", you aren't actually writing a test. You're repeating what the code already says.

Yeah, mocks are just a way to zero out the dependencies so you can focus on the unit you're actually trying to unit test. If all you're doing is zeroing out dependencies, you're not testing much. Every mock you bring in carries an assumption, and those assumptions need to be tested, too.

Cases like that really call for some small-scale integration tests, written as unit tests, where you actually bring in your dependencies and test them all working together.


As a personal rule, I use test versions of the real APIs (or another instance of the API if a testing version isn't provided) on the CI/CD, and I use stubbed versions locally. The main motivation was that race conditions can happen between the two, and there is only one CI/CD but many developers. I switch between the interfaces through feature flags/macros at compile time.

EDIT: What I'm working on currently doesn't have a hardware component to it, so this might not be practical in the future


I think verifying that your code calls this and that is verifying that it actually works the way you expect. Without such a test we might assume the code works one way when the happy path is not even reached.

I find it valuable at times, depending on the context, but I agree that extensive mock testing like this can get quite useless real quick.


> So essentially I've had the opposite conclusion. Unit testing to me is a symptom of bad engineering practices. If you rely on it, something is wrong.

I think that's being too extreme and dogmatic. Unit testing and small-scale integration testing (implemented as unit tests) are a way of testing your assumptions and documenting and communicating your assumptions to others. Both are good engineering practices.

You just have to be pragmatic about how you use these practices. Teams definitely get too dogmatic and narrow minded about mock-heavy unit testing, and some people do manage to write code without writing any unit tests, however those are both extremes that are best avoided or of only limited applicability.

The benefit of unit tests is that certain things are more economical to test that way. You literally can test everything through integration tests of varying levels (or manual testing!), but it'll take more effort and run more slowly. If you want to thoroughly test a regex, just test the regex by throwing a dozen strings at it. Don't bootstrap the program and construct valid input requests to test each of those strings (but do do that to test that everything can work together, which includes a subset of those strings).


>I think that's being too extreme and dogmatic. Unit testing and small-scale integration testing (implemented as unit tests) are a way of testing your assumptions and documenting and communicating your assumptions to others. Both are good engineering practices.

I use documentation for communication and documentation. Unit tests may do this too, but it's a side effect. The purpose of unit tests is to test your own logic and find mistakes you've made.

I'm saying there are ways to code where your intuition and type checker alone are enough to verify correctness without the need of a single unit test but with the same correctness as if you did verify your program with unit tests. You don't even know what this "way" is as I haven't even described it, so you can't even argue for or against it.

Dogmatic is a word for those who spout their point without understanding the reasoning behind the other point.


> If you want to thoroughly test a regex, just test the regex by throwing a dozen strings at it. Don't bootstrap the program and construct valid input requests to test each of those strings (but do do that to test that everything can work together, which includes a subset of those strings).

Or even write the test cases separately, then import them into a unit test runner that can check them all very fast as well as an integration test runner that can check them all very slowly.


  - If your code base utilizes dependency injection to make the code more "testable" with mocks, you are making the code 10x worse. There are other ways to make your code more modular without injecting new scope, logic and apis into an object.
what other ways?


Programming is about composition of primitives. You compose primitives together to reduce the number of modules and be better able to reason about complexity. During composition you also want to maintain flexibility, you want modules to be able to be swapped out and replaced.

Dependency Injection is one form of composition. One perspective to look at it, is that objects are like tiny scoped programs with variables and methods. To do a dependency injection is to put a program within a program.

Is this a good way to compose primitives? It's like parsing a python program inside a java program inside javascript. What are other ways to manage complexity and compose primitives without dependencies everywhere? What is a programming primitive? Is the most fundamental primitive of computing a function? Should we be composing bricks to form buildings or compose buildings to form bigger buildings?



