
> They accept malformed input & attempt to make sense of it, instead of rejecting it because the fields they care about are malformed.

I don't think that's true at all. The whole point of the law is that your interfaces should be robust, and still accept input that might be nonconforming in some way but can still be validated.

The principle still states that if you cannot validate input, you should not accept it.
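To make that concrete, here's a minimal sketch (the message format and field names are made up for illustration, not from any spec discussed here): tolerate cosmetic nonconformance like sloppy key casing or unknown extra fields, but refuse any input whose required fields can't be validated.

    import json

    REQUIRED = {"user_id", "action"}

    def parse_request(raw):
        # Malformed JSON is still rejected outright -- json.loads raises.
        msg = json.loads(raw)
        # Be liberal about cosmetics: trim and lowercase keys, ignore extras.
        msg = {k.strip().lower(): v for k, v in msg.items()}
        missing = REQUIRED - msg.keys()
        if missing:
            # The fields we care about can't be validated -> don't accept.
            raise ValueError("missing required fields: %s" % sorted(missing))
        return msg

    # Accepted: the oddly-cased key and the unknown "debug" field are tolerated.
    print(parse_request('{"User_Id ": 7, "action": "ping", "debug": true}'))
    # Rejected: "action" is absent, so the input is not accepted.
    # parse_request('{"user_id": 7}')  # -> ValueError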



The state of HTML parsing should convince you that if you follow Postel's law in one browser, then every other browser has to follow it in the same way.


That's a truism in general. If you're liberal in what you accept, then the allowances you make effectively become part of your protocol specification; and if you hope for interoperability, then everyone has to follow the same protocol specification, which now has to include all of those unofficial allowances you (and other implementors) have paved the road to hell with. If that's not the case, then you don't really have compatible services; you just have services that coincidentally happen to work the same way sometimes, and fail at other times in possibly spectacular ways.

I have always been a proponent of the exact opposite of Postel's law: If it's important for a service to be accommodating in what it accepts, then those accommodations should be explicit in the written spec. Services MUST NOT be liberal in what they accept; they should start from the position of accepting nothing at all, and then only begrudgingly accept the inputs the spec tells them they have to, and never more than that.
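For what it's worth, here's a minimal sketch of that stance (the field names and types are hypothetical, not from any real spec): enumerate exactly what the spec allows, start from accepting nothing, and reject everything else outright.

    import json

    # The entire "spec": nothing outside this table is accepted.
    ALLOWED_FIELDS = {"user_id": int, "action": str}

    def parse_strict(raw):
        msg = json.loads(raw)
        for key, value in msg.items():
            if key not in ALLOWED_FIELDS:
                raise ValueError("field not in spec: %r" % key)
            if not isinstance(value, ALLOWED_FIELDS[key]):
                raise ValueError("wrong type for field: %r" % key)
        if msg.keys() != ALLOWED_FIELDS.keys():
            raise ValueError("missing required fields")
        return msg

    print(parse_strict('{"user_id": 7, "action": "ping"}'))      # accepted
    # parse_strict('{"user_id": 7, "action": "ping", "x": 1}')   # rejected: unknown field
    # parse_strict('{"user_id": "7", "action": "ping"}')         # rejected: wrong type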

HTML eventually found its way there after wandering blindly in the wilderness for a decade and dragging all of us behind it kicking and screaming the entire time; but at least it got there in the end.


> The state of HTML parsing should convince you that if you follow Postel's law in one browser, then every other browser has to follow it in the same way.

No. Your claim expresses a critical misunderstanding of the principle. It's desirable for a browser to be robust enough to support broken but still perfectly parseable HTML. Otherwise, it fails to be even usable when dealing with anything but perfectly compliant documents, which, mind you, means absolutely none whatsoever.

But just because a browser supports broken documents, that doesn't make them less broken. It just means that the severity of the issue is downgraded, and users of said browser have one less reason to migrate.
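As a small illustration of what "robust to broken but still parseable HTML" looks like in practice, here's a sketch using Python's stdlib html.parser as a stand-in (it is not any browser's engine, but it behaves the same way on this point): it doesn't raise an error on an unclosed tag, it just reports whatever it could recover.

    from html.parser import HTMLParser

    class TagLogger(HTMLParser):
        # Just log whatever the parser manages to recover.
        def handle_starttag(self, tag, attrs):
            print("start:", tag)

        def handle_endtag(self, tag):
            print("end:", tag)

        def handle_data(self, data):
            if data.strip():
                print("text:", data.strip())

    # Malformed: <b> is never closed and </p> is missing entirely.
    TagLogger().feed("<p>Hello <b>world")
    # Prints start: p / text: Hello / start: b / text: world.
    # No error is raised; the breakage is silently tolerated, which is
    # exactly why documents like this never get fixed.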


The reason the internet consists of 99% broken HTML is that all browsers accept that broken HTML.

If browsers had conformed to a rigid specification and only accepted valid input from the start, people wouldn't have produced all that broken HTML, and we wouldn't be in the mess we're in now.



