Show HN: We launched a new web browser
203 points by ca98am79 on April 26, 2022 | 177 comments
My company launched a new open source web browser built on Chromium. It supports decentralized domains on Handshake and is the first browser to support .eth DNS. It is also the first browser to support secure web browsing with DANE.

Check it out:

https://impervious.com/beacon

https://github.com/imperviousinc/beacon



You launched a version of chrome, not a new web browser :/


Eh. People say "a new OS" when talking about a new Linux based distro. A new Chromium based browser is apt.

I think the ship has sailed and we won't have a new browser from scratch any time soon. If two of the richest companies in the world can't do it (Apple's Safari is always late with features, has significant bugs; MS abandoned their Edge), who could, and why? What would be the value add over just using Chromium? It'd be a massive undertaking which you can't profit from because the competition is free.

PS: I'd like someone like the EU Commission (for instance in the upcoming Digital Markets Act) to force Google to give away Chromium to an independent committee. It is the basis for Internet consumption, it shouldn't be in the hands of a single company.


26 days ago this topped HN:

SerenityOS Browser now passes the Acid3 test (https://news.ycombinator.com/item?id=30853392)

That is a new browser from scratch (well, actually it is a full OS from scratch). I got the impression it is developed by very few people.

So to answer your rhetorical questions:

Q: who could?

A: Andreas Kling, Linus Groh and Sam Atkins

Q: why?

A: Because they want to use it. "This is a system by us, for us, based on the things we like."


> Andreas Kling Mar 30
>
> There's still a long way to go before it's usable for daily browsing. For example we need to improve support for flexbox, grid and table layouts. Performance also needs work.
>
> Our JavaScript engine is decently mature, test262 score tracked here: https://libjs.dev/test262/

In a few years when they have achieved feature parity with Chromium or at least Safari we can discuss this again, but for now it's not ready for real life use.


Hahaha, I don't think a project as monumental as a browser deserves such dismissiveness. Let's not forget that Mozilla's full-Rust browser has not achieved (and never will achieve) feature parity with Safari or Chrome either, and they invested far more manpower and time into it.

The lone effort of a handful of developers that got equally far is worthy of a mention as a "new browser" even if it's not fit for human consumption yet.


Yeah, I remember the first Mozilla builds in the early 2000s. They were awful. Everything crashed constantly. It was such a memory hog. Fast forward twenty years and I use it on a daily basis, and I'm so happy I'm not forced to use Chrome (not to mention some abominations from the past like IE with ActiveX - and at some point such setups were mandatory in order to access some services).


Ah yes, the Mozilla Milestones of what came to be known as SeaMonkey. If the competition was flawless I'd agree. However, Netscape and MSIE also crashed constantly. Opera might've been the most stable, also in the sense it was the first to have proper session management and recovery (plus it had unique features like MDI long before tabs were a thing). It took ages till Firefox once again had certain features Netscape used to have (like sharing profile over network which Netscape could with LDAP). But such was abstracted on filesystem level in the Microsoft world. An OS or browser needs a reasonable unique selling point, standards compatibility, and stability and security. Feature-wise one can slack, as long as these aspects are strong enough.


I don’t think they’re dismissing it - that’s the point they’re making. And the other response here furthers it. Nobody is going to invest 20 years at this point until there’s a more compelling reason to do so.

So until then, why moan about the things people are doing to innovate in the space even though they’re using an off the shelf core.


I'm not dismissing their effort, on the contrary! It's commendable, but they're still way off, and would probably need years of work. A fully fledged browser is a monumental undertaking.

Mozilla... basically they're lingering on due to Google's fear of having the only browser available and the scrutiny associated. Ideally Firefox should be funded by public funds (like the EU donating money to FOSS projects it uses, such as VLC, 7-Zip, etc.). But if a browser company that's been at it for a long time can barely survive while also having a browser that's not "fully featured"... and Microsoft, with their infinite pockets, also abandoned theirs... what hope is there for anyone succeeding at it? And make no mistake, for any browser to succeed it would need the full feature set of Chromium.


Keep raising that bar


Can you answer this question?

> What would be the value add over just using Chromium?


Not using Google is by itself value added.


Not even the creators of the browser think that it is ready to use, implying it is unusable. So it is immediately disqualified.

Like what the sibling comment said: [0] Until it has achieved feature parity with at least Safari, then you can talk about 'value' or realistically using this browser over a Chromium one.

I can use a Chromium-derived browser today like Brave or ungoogled-chromium and have no 'Google' in it. The SerenityOS browser doesn't even qualify to be compared with the rest on usability other than being 'built from scratch'.

[0] https://news.ycombinator.com/item?id=31165497


Chromium is open source, not "Google", since all the bits that make it Google are in Chrome.


There is inherent value in the Web being accessed by a variety of clients, including a variety of rendering engines.


Still, I wish more "browser distros" like this would base themselves off Firefox rather than Chromium. Chrome doesn't exactly need the help at this point, and Firefox is the only fully independent browser left that isn't beholden to a major corporation.

A hard fork of Chrome would also be nice; would be a great way to create another fully independent browser without needing to do all the work of building a browser from scratch (though maintaining it would still require a lot of resources).

This is just a generic gripe with the overall state of the web though; obviously it's not up to OP specifically to solve it if that's not the goal of this project.


> Apple's Safari is always late with features, has significant bugs

While I agree on bugs, "late on features" is at least partly a lie. Features are pumped out by Chrome at a rate of ~400 new Web APIs per year [1]. And Chrome pretends they are standard even when both Safari and Mozilla are against them (e.g., most hardware APIs, constructible stylesheets in their original form, etc.).

[1] https://web-confluence.appspot.com/#!/confluence


> we won't have a new browser from scratch any time soon

What about Flow? It's a clean-slate closed-source browser and engine, designed to make effective use of parallelism: https://en.wikipedia.org/wiki/Flow_(web_browser)


> closed-source

Aaaand I'm out. Still not over Opera abandoning presto. With open source there's at least a chance it can be continued by someone else.


I would also prefer to see it be open source, but I think in this context that's not the point? My parent was saying that they do not expect to see new browsers from scratch, and this is one.

Aside: you could make the argument that both Chrome and Safari are only open source because they were not built from scratch.


If they weren't based on FOSS we would all be running Firefox and Opera now, not MSIE (Firefox was popular on desktop before Chrome existed, as was Opera on mobile). Or perhaps they'd have forked Firefox/Gecko instead, or collaborated the effort with these projects. Safari and Chrome are merely a wheel in the larger ecosystem of Apple and Google, a means to an end.


>> It is the basis for Internet consumption, it shouldn't be in the hands of a single company.

Somehow I agree with you, but this is a slippery slope.

Should we do the same with x86 ISA?

Should we do the same with Windows?

YouTube?

Where should we draw the line?


I know you weren’t asking me but

Yes

Yes

Yes

Infrastructure likewise shouldn't rest in the hands of a single company, so somewhere past that?


> Eh. People say "a new OS" when talking about a new Linux based distro. A new Chromium based browser is apt.

Really? Who are these people, and where are they saying this thing?


It depends. If you just want to support HTML, it's very easy. If you decide to support CSS, it's tedious but possible. If you decide to support JavaScript, it's theoretically possible, but you would need hundreds of millions of dollars, and you will be behind Chrome and Firefox anyway. On top of that, there is the issue of DRM, which is extremely difficult to solve for smaller vendors, and without it your browser will be crippled.
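A toy sketch of why the "just HTML" tier is the easy one: Python's stdlib parser can already tokenize tag soup, so extracting visible text takes a couple dozen lines. This is illustrative only, nothing like a real engine:

```python
# Toy illustration: parsing plain HTML is the easy part of a browser.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text, skipping <script> and <style> contents."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())

page = "<html><body><h1>Hello</h1><script>ignored()</script><p>world</p></body></html>"
extractor = TextExtractor()
extractor.feed(page)
print(" ".join(extractor.parts))  # -> Hello world
```

Of course, the gap between this and laying out CSS grid or running a JIT-compiled JavaScript engine is exactly the point being made.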


> I think the ship has sailed and we won't have a new browser from scratch any time soon.

I think the big mistake is that we let the HTML spec get so complicated that it's essentially become Google's platform. Although it's technically open, it certainly looks and smells like a monopoly.

How do we fix it? Maybe someone could make alternate rendering engines that use Canvas and WASM?


This is a topic dear to my heart. Actually, I'm working on a first-step project to help address this very issue. It's worth reflecting on a few important points.

1. "We" didn't "let" HTML5 become anything. HTML5 reflects whatever browser makers choose to ship, which is why it's un-versioned. The attempt at an industry-wide consensus-building effort around what HTML should be failed (W3C), largely because it spent most of its political capital on implementation-independent academic dreams rather than incremental improvements. But if you're shipping a browser then what you want is incremental improvements.

2. I think by alternative rendering engines you mean different approaches to rendering UI and documents, that aren't HTML. But then you describe something that would have to fit entirely within the constraints of whatever browser makers impose. You'd be trying to build a competitor to browsers on top of browsers. The industry has a history of that and it doesn't work - browser makers always find reasons to kill them off. Flash, Java, Shockwave, ActiveX. Some were more secure than others but in the end, browsers killed them all. They won't tolerate competing platforms that ride inside the browser. That means to make a competitor to HTML5 you need to go outside the browser and build new kinds of browser and document/app technology.

3. To do (2) it really helps to have a good, easy to use deployment technology.

Watch this space.


Seems really interesting so I have been Googling to try to figure out what you are talking about. Is the Hydraulic product a way to easily deploy GraalVM software? And maybe by adding some type of search and document browsing capability, it can become a new type of web browser?

Could you take a look at my comment in this thread and also at https://github.com/runvnc/tersenet

I have not totally updated that but my current thinking is that we really want to finish deconstructing the overlay operating system that the "web browser" has become. For example, we should actually not bundle the information browsing program and application VM together, but rather have a simple standard for them to work together. Such as, the info browser can save the list of the application binaries to a file or directory that the VM system knows to watch.

We also actually want to further decompose this into a multilevel window manager concept. On the first level, just a rule that applications save and reload window layouts when the user adjusts them.

I really think it should be a goal to standardize on some web assembly extension with simple UI features like canvas or framebuffer and keyboard events.


Yep, I saw your comment. You're definitely thinking along the right lines here. It's all about incremental decomposition and recombination.

Hydraulic's first product is currently in private beta. If you like, email me and I'll add you so you can see what it is. Actually email me anyway, because I'm putting together a list of people who are interested in post-web technologies for perhaps a podcast/interview series. My address is in my profile.


> To do (2) it really helps to have a good, easy to use deployment technology.

That's why I suggested "Canvas and WASM." It's very trivial to get a "Canvas and WASM" program running in the browser. All you need to do is plan the API inside of WASM very carefully, because...

> That means to make a competitor to HTML5 you need to go outside the browser and build new kinds of browser and document/app technology.

... the next step is to remove the HTML & Javascript adaptors, and "build new kinds of browser"s that are compatible with the WASM API in the "Canvas and WASM" hack!

In this case, the newer browser would probably perform much better than the "Canvas and WASM" hack. :)

Edit: If you read this and want to talk further, my profile has a link to my web page. You can email me or track me down on LinkedIn.


I can't email you because the email link on your website is broken unfortunately. But sure, would love to chat about this stuff. You can find my email in my profile if you want to say hello. I'm putting together a list of people who have some interest in the post-web space with the goal of starting a little community, or maybe a podcast series. Would be great to have you take part!

Re: canvas+HTML5. Yeah, you can definitely go that way but there are some issues. My own analysis took me down a slightly different route. The issue with using WASM/Canvas is:

1. You need a UI toolkit that can draw into the canvas. Those are hard work. Making a simple one that gets abandoned after a year is easy, making one that's got a competitive feature set and which is maintained over the long term; much harder. Some do exist and it'd make sense to leverage them.

2. Of the toolkits that do exist, none are particularly well suited to wasm (maybe Qt could work?). In particular you normally want to write GUIs in high level GCd languages, but wasm doesn't support GC nor language-specific JITCs.

3. If you look at things developers are expressing a need for today when they go outside the limits of HTML5, it's often a combination of things like performance, OS integration, hardware integration, a wish or need to use other programming languages etc. HTML5 is a poor UI toolkit or really barely a UI toolkit at all, but that's not what's driving people currently. So the question is what would your offer be in the short term if you're restricted to wasm and canvas?

4. Finally, if you look at where HTML5 is weak and not innovating, or not even in the game at all, in my view there's lots of areas but they mostly require you to be outside the browser to fix them.

So I think the way to go is to make being outside the browser a much more hospitable place and in that way encourage people to build competing neo-browsers. Let 1000 flowers bloom, if you see what I mean.


Thanks, I'll follow up. (And fix the email link)

FYI: You can do GC in WASM. I'm actively working in Blazor right now, which is C# compiled to WASM. Even though it's GC, it still relies on the Dispose pattern. (IE, if an object has a resource or needs explicit cleanup, it can't rely on the garbage collector to know when to release its resource or otherwise do its cleanup.)

But I'd be careful about being too opinionated about forcing a language onto the consumers of an API. I believe WinForms (the first C# UI API) was a thin wrapper around the Windows UI API, but I never did a Windows UI in Win32 directly to fully validate my assumption.


>"The attempt at an industry wide consensus building effort around what HTML should be failed (W3C), largely because it spent most of its political capital on implementation-independent academic dreams rather than incremental improvements."

Interesting. Can you elaborate on what some of those academic dreams are/were that derailed things? Did they end up in the spec or were they just bikeshedded to death?


What I meant here was the combination of X[HTML] and RDF.

During the era when the W3C was firmly in charge, their tech output was all oriented around making the web stack more rigorous and - as they saw it - better suited for training AI. HTML itself was more or less abandoned and everyone told to move to XHTML, the benefits of which would be the ability to embed new DSLs like SVG, app-specific markup, and the ability to express abstracted "knowledge" in the form of RDF triples. All these specs still exist of course and they were all implemented in browsers, but hardly anyone uses them.

There were several big problems:

1. XML has strict validation rules. Much, much easier to build a parser and tools for, but not compatible with most web content, which is full of markup errors. For a brief period some people heroically tried to fix all the errors in their markup and make it fully validating, but the effort involved was high, especially for big sites, and the reward was... well, there was no reward really. The only tool most people care about is the browser, and browsers had complicated hacky HTML parsers nobody understood. But pixels got to the screen and they got there reliably.

Especially consider how important that is given the prevalence of HTML-by-string-concatenation. In XHTML if you made a tiny error in that process then the browser would stop and render an error page, meaning a site outage that doesn't show up in your server logs. HTML has the opposite philosophy: keep on trucking no matter what. Your page might render a bit garbled at worst, but users can tolerate that occasionally.

2. RDF/XML/Tim Berners-Lee turned out to have the wrong idea about AI. Or, well, maybe. I suspect the jury is still out on that one given that stuff like GPT-3 doesn't really meet people's prior expectations of what AI will be like, but still. The idea of expressing human knowledge in the form of an abstracted graph of nodes, in which all the nodes and edges are labelled with URIs, and then serializing that to XML and embedding it into XHTML web pages. Yeah, no. Big, big specs. No tools that actually used any of it to do anything useful.

Meanwhile in all of this HTML4 was stagnant, missing lots of small quality of life fixes that ordinary webmasters and browser makers really wanted. So you can't blame browser makers for killing off the W3C. It wasn't meeting the world's needs.

On the other hand, once unleashed from any kind of broad consensus or standards process, HTML more or less ceased to be a spec you could actually implement. You can't even say you're compliant with it because a week after your statement it might have had another 100 pages added to it.
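The strict-vs-forgiving split in point 1 can be demonstrated with stdlib parsers: an XML parser rejects the whole document over one mismatched tag, while an HTML parser shrugs and emits what it can. A minimal illustration, not browser code:

```python
# XHTML-style strictness vs. HTML-style "keep on trucking".
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

broken = "<p>some text <b>bold</p>"  # <b> is never closed

# XML: the whole document is rejected on the first structural error.
try:
    ET.fromstring(broken)
    strict_result = "parsed"
except ET.ParseError as e:
    strict_result = f"fatal error: {e}"

# HTML: the parser tolerates the error and reports every token it can.
class TagLogger(HTMLParser):
    def __init__(self):
        super().__init__()
        self.events = []
    def handle_starttag(self, tag, attrs):
        self.events.append(("start", tag))
    def handle_endtag(self, tag):
        self.events.append(("end", tag))
    def handle_data(self, data):
        self.events.append(("data", data))

logger = TagLogger()
logger.feed(broken)

print(strict_result)   # the XML parse aborts with a mismatched-tag error
print(logger.events)   # the HTML parse yields all recoverable tokens
```

This is the exact failure mode described for HTML-by-string-concatenation: under XHTML rules, one bad interpolation means an error page instead of a slightly garbled one.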


Wow, thanks for the comprehensive response. This jogs my memory a bit although at the time I wasn't able to pay as much attention to this as I would have liked. What a sad situation. I was curious about this:

>"I suspect the jury is still out on that one given that stuff like GPT-3 doesn't really meet people's prior expectations of what AI will be like, but still."

I apologize if this is a naive question but what is the connection between GPT-3, browsers and Berners-Lee?


Before the 2000s-era machine learning revolution, AI was assumed to be solvable via symbolic reasoning. Pure logic, in other words. See: Cyc. Cyc was/is an attempt to encode all of human knowledge using predicate logic. If you have knowledge in this form then you can use the standard rules of logic to infer new knowledge and do reasoning.

TBL/W3C felt like the next big upgrade for humanity would be AI that could understand the web. To achieve that, you need web pages to be encoded in the form of logical triples. Cyc used a custom lisp-ish language called CycL to do that, RDF was the same concept but 'web-ified' i.e. using XML and URIs for everything.

Of course that approach never took off. Even by 2003 Google was starting to master machine learning techniques like scalable logistic regression, the webbiest of web companies just didn't care about this symbolic AI approach at all because in reality nobody was using it. Too complicated, too abstract, no tools or cool demos. Pure vapourware in other words. GPT-3 has now shown that you don't need everyone to learn new syntaxes or predicate logic to train AI on the web. You can rely purely on statistical techniques and neural networks.
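A hypothetical sketch of the symbolic model described above: knowledge as (subject, predicate, object) triples, with new facts derived by mechanically applying inference rules. The `ex:` URIs are invented for illustration:

```python
# Symbolic-AI sketch: RDF-style triples plus rule-based inference.
# URIs below are made up; real RDF uses full URIs for every node/edge.
triples = {
    ("ex:Cat", "rdfs:subClassOf", "ex:Mammal"),
    ("ex:Mammal", "rdfs:subClassOf", "ex:Animal"),
    ("ex:felix", "rdf:type", "ex:Cat"),
}

def infer(facts):
    """Apply two classic RDFS-style rules to a fixed point:
    subClassOf is transitive, and rdf:type propagates up the class tree."""
    facts = set(facts)
    while True:
        new = set()
        for (a, p1, b) in facts:
            for (c, p2, d) in facts:
                if p1 == p2 == "rdfs:subClassOf" and b == c:
                    new.add((a, "rdfs:subClassOf", d))
                if p1 == "rdf:type" and p2 == "rdfs:subClassOf" and b == c:
                    new.add((a, "rdf:type", d))
        if new <= facts:
            return facts
        facts |= new

closed = infer(triples)
print(("ex:felix", "rdf:type", "ex:Animal") in closed)  # -> True
```

The appeal was that "felix is an animal" never has to be stated; it falls out of pure logic. The problem, as noted, was getting anyone to author their web pages this way.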


This is fantastic historical information. It's even more fascinating (and horrifying) to think that these things collided the way they did and derailed the evolution of a proper HTML spec, and that ultimately one company, Google, shifted both the direction of the AI movement and the direction of the browser. Thanks for the wonderfully detailed responses, I really appreciate it.


You're welcome!



You can get a bunch of new browsers from scratch. You just have to completely redefine what a browser is. Heh.

The issue is that "browser" now is code for a specific type of complex operating system that runs inside of another.

That should be split up again, and also reset to content-centric networking (such as IPFS).

- View information: just need something like markdown or RST. The Info browser

- Simple applications: web assembly with framebuffer, audio and mouse/keyboard input. Media browser

- Complex applications: Web assembly plus every device I can think of. Web assembly plus a flexible pluggable and granularly permissioned device driver system. Extended media browsers.

https://github.com/runvnc/tersenet


I would say a browser is much more than its rendering engine; look at WebKit and all its forks and variations (just as a programming language is much more than its compiler).


Edge is chrome.


Exactly… I see the point people are trying to make, but new browsers aren’t that interesting. New browser rendering engines are.

It's great to have Edge, Brave and others, but they provide very little of interest.


Edge didn't use to be Chrome, and people would probably have made the same comment if Microsoft had launched a new browser that was just Chrome.


Congrats on the launch.

I think this indirectly slowed down the foreverdomains server with the HN traffic.

Since you advertise "decentralized internet" vs blockchain, I'd love to see this also support some more non-blockchain protocols such as Dat (https://dat.foundation/) and IPFS directly. Maybe even Bittorrent and Tor (Onion router).

It may already do some of that, I just couldn't tell from your wording.


<https://dat.foundation/> is apparently deprecated in favour of <https://dat-ecosystem.org/>, which is impressive since it only became the Dat Foundation in December 2019 ...


I just picked the first thing that came up when I googled the Dat protocol that seemed to match. Thank you for pointing out the mistake.


dat ecosystem though


Don't tell us, you're here all week... ;)


Don't forget the boys at the Oxen network and their Session program. Phenomenal work, better than Tor


> better than Tor

In what ways does it compare?

> the boys

I hope it's a much bigger world than that!


This looks really interesting. It looks like their lokinet uses an onion router though or am I misinterpreting the "better than Tor" here?


Isn't Loki the same cryptonote fork that forked signal to remove the phone requirement and then called it better than signal?


> Phenomenal work, better than Tor

Citation needed


Why are you swallowing "error" from invoking the "reall" tool? That seems like a great way to make contributors really frustrated: https://github.com/imperviousinc/beacon/blob/main/tools/src/...

and again https://github.com/imperviousinc/beacon/blob/main/tools/src/...

This trend of "I'm going to invent some build tool because there are not enough build tools in the world" is evidently leaking out of the node ecosystem

For clarity, I did see that this was inspired by the brave-browser model, but of the ones to draw inspiration from, that's for sure not it given that their CI is closed source and they're trying to use npm in lieu of a more structured, comprehensible system

I like trying out alternate browsers, so congratulations on the launch, and I'll for sure try to build it, but I wanted to draw these to your attention because my experience with software is that error handling is about 80% of the job
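For what it's worth, the pattern being criticized is easy to sketch in any language (the repo itself is Go; this Python sketch just illustrates the principle, with hypothetical function names):

```python
# Hypothetical sketch: never discard the error from an invoked tool.
import subprocess
import sys

def run_tool_swallowing(cmd):
    # Anti-pattern: exit status and stderr vanish; contributors are
    # left staring at a build that "succeeded" but did nothing.
    subprocess.run(cmd, capture_output=True)

def run_tool(cmd):
    # Better: surface the failure with the tool's own diagnostics.
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(
            f"{cmd[0]} failed with exit code {result.returncode}:\n"
            f"{result.stderr.strip()}"
        )
    return result.stdout

try:
    run_tool([sys.executable, "-c", "import sys; sys.exit(3)"])
except RuntimeError as e:
    print(e)  # names the tool and the exit code instead of failing silently
```

The extra five lines are the difference between a contributor filing a useful bug report and giving up.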


> Why are you swallowing "error" from invoking the "reall" tool? That seems like a great way to make contributors really frustrated

Glad you're looking at the code! There's still some refactoring needed for butil. It will get more polished soon ;)

> This trend of "I'm going to invent some build tool because there are not enough build tools in the world" is evidently leaking out of the node ecosystem

butil applies string replacements, patches and overrides all under one tool using a statically typed language, but it's still usable like a scripting lang since it can rebuild the actual tool as needed. You can technically do something similar with various Python scripts, I suppose, and use Chromium's GN build system. GN would just end up calling various scripts, so I think I prefer this more unless you have some other ideas.


It may be "statically typed" but from a consumer's PoV, 100% opaque. My first question when trying to build any project is "what is this going to do to my machine" and with some golang based build system, you're asking for a lot of trust on two different levels: 1. go install something (so now I need to read its source) 2. oh, turns out it actually delegates to an entirely separate binary that I now need to separately read

And I hear you about "but scripting!!1", however, using one of the existing tools means there is a non-zero chance of someone having experience with them and maybe even having reasonable editor support for it

It's your project, so I know I'm just some rando on the Internet, but I wanted to make sure you were choosing the contribution path you wanted to build upon, and not just hacking something together for expedience that then has to be unwound later. In this specific case, that goes double because it already has tight coupling to GN due to Chromium


> but I wanted to make sure you were choosing the contribution path you wanted to build upon

> that goes double because it already has tight coupling to GN due to Chromium

Interesting, I'll experiment with moving this to GN. I have more high-priority things to deal with atm, but I'll get to that.


Curious to know your team size and engineering challenges with building this on Chromium. Also, did you consider WebKit or Gecko?


What problems did you face that made you decide to create a browser fork of Chromium instead of a web browsing extension or a separate application that could utilize any other web browser as a client for using these protocols?


We already built a separate application and wanted to avoid making a browser for a while. The idea for Beacon started when we wanted decent support for iOS and Android, which nowadays is like the majority of users.

It's easy to install an app and start browsing. No hacks or workarounds and the UX can be much more tailored (even for desktop).


Neat and some interesting todo/roadmap items too! Will be keeping an eye on this.

Random thought, perhaps for later: some form of keyboard-centric navigation functionality - boosts your UX differentiation and it likely speaks to your target audience. Something along the lines of Nyxt browser[1] or the popular Vimium extension[2].

[1] https://nyxt.atlas.engineer/

[2] https://github.com/philc/vimium


Handshake has looked interesting for a while now, but what is the point of a blockchain-based DNS? Isn't a big selling point of DNS having the fastest possible response times?


No, the selling point of DNS is mainly that everybody knows how to work with this bit of technology from the 1970s. Mostly it's fast because modern implementations do a lot of caching and only occasionally query parents in their hierarchy. You can use the same strategy to make pretty much any alternative implementation fast as well. And of course slapping some sort of DNS compatible facade on that is not rocket science either. So, you don't even lose the backwards compatibility. Better still, you can fall back to existing DNS too.
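The caching described above is simple to sketch: keep each answer until its TTL expires, and only then go back upstream. `query_upstream` is a hypothetical stand-in for a real recursive lookup, and the address below is just an example value:

```python
# Minimal TTL-respecting resolver cache: only occasional queries
# ever reach the parents in the hierarchy.
import time

cache = {}  # name -> (expires_at, address)

def query_upstream(name):
    # Placeholder: a real resolver walks root -> TLD -> authoritative,
    # or (per the parent comment) could read any alternative backend.
    return ("93.184.216.34", 300)  # (address, ttl in seconds)

def resolve(name, now=None):
    now = time.monotonic() if now is None else now
    hit = cache.get(name)
    if hit and hit[0] > now:
        return hit[1]                      # served from cache, no network
    address, ttl = query_upstream(name)
    cache[name] = (now + ttl, address)     # remember until the TTL runs out
    return address

resolve("example.com", now=0.0)           # miss: asks upstream
print(resolve("example.com", now=100.0))  # hit: within the 300s TTL
print(resolve("example.com", now=400.0))  # TTL expired: asks upstream again
```

Nothing in this loop cares whether `query_upstream` is classic DNS, a blockchain index, or anything else, which is the point: the speed comes from the cache, not the backend.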

The key challenge with DNS is that it is hierarchical and that there's a root with an owner. That owner is an entity that we just have to trust to do the right things in terms of certificates, security, not censoring entries, implementing fair policies for allocating names, etc. Whether you trust the current owners is of course a bit political. Mostly I have no big issues with that myself but it is annoying to have to bribe certain companies for the privilege of using our registered trademark as a domain name. Literally the only service these companies deliver is 'maintaining' a record of that.

A blockchain can replace this with distributed ownership of a tamper proof ledger of ownership of domain names. Building an index of these names is not that hard. Any old search engine or database can do that. But the key part is administering ownership in a fair and transparent way.

It's one of those use cases where using a blockchain actually makes sense without devolving into some pyramid scheme for creating lots of wealth for some. When you think about it, existing DNS works exactly like a pyramid scheme (by design, even). Millions of people are paying on the order of $10 a year for a cheap database record in an ancient hierarchical database. The only value these companies deliver is monopolizing ownership of common endings like .com, .io, etc. And the only reason we need those is because it's an ancient hierarchical database.

With a blockchain, we get rid of those companies and potentially also the top level domains. I see no reason why google, twitter, etc. could not be top level domains.


I agree on most points and this is why I for the most part run my own DNS.


The lookups are near instant just like regular DNS, it just becomes un-censorable and not controlled by a central body.


I had a bit of a disappointing moment recently when I found out that only top level domains are uncensorable in handshake. Such a let down.


Since everybody, including you, can own as many top-level domains as your heart desires, this should be no problem.


Using blockchain prevents the domain from being seized. This is helpful for service providers so they can maintain uptime and it also helps users who might rely on a service.


It also means you can lose control of your domain permanently if you make any mistakes with your keys.


The block time would be more comparable to a record's TTL, and not how long it takes to look up that record.


Good to know - this is the kind of info I was looking for!


Why not a web extension?

Why use ENS/Namecoin versus a petname solution like GNS?

I like the idea of such a browser integration, but I don't see this actually being used. Perhaps you could have upstreamed those as experimental features in firefox/Torbrowser.

Edit: I suppose you could upstream it to brave, they are very crypto-friendly. Also, no Linux support?

Edit 2: Most of my points have been made moot by this: https://impervious.com/fingertip


DANE is really interesting. Wider adoption would be good for everyone I think.

I’m going to have to give this a try.


The problem with DANE is that it doesn't move us away from CAs, it just conflates the role of running a DNS zone with also being a CA. And even CAs may be problematic, the entities running DNS zones are often more so.


If you own a DNS zone, you should be the only one with the authority to issue certificates for it. That is what Handshake+DANE does.


Fair enough - I missed the part about Handshake. I don't know much about Handshake, but, if it does what I assume it does, I do see how DANE would make sense in that context. Whether or not Handshake is a good idea I guess is the real discussion point - but, maybe one I'm not too interested in debating, especially given my limited knowledge.


What is your vision of the web and how does the current browsers' default search engine fit into that vision?


Oh god another chromium fork.


This is an honest question because I really don’t get it: what problem does this solve, especially with regard to “distributed domains”?


DANE is a really big deal. It removes the dependency on TLS certificate authorities, so self-signed certificates can be used and verified via TLSA records published in DNS and signed with DNSSEC.

Though I have greater concerns regarding the privacy of the new browser. Is it like ungoogled-chromium? DANE should really be merged into upstream Chromium.
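To make the mechanism concrete, the check DANE replaces CA chains with is tiny. Here's a minimal sketch in Python, assuming a TLSA record with usage 3 (DANE-EE), selector 1 (SubjectPublicKeyInfo) and matching type 1 (SHA-256); the SPKI bytes below are made up, and real verification must also validate the DNSSEC signatures on the record itself:

```python
import hashlib

def tlsa_matches(spki_der: bytes, tlsa_assoc_data_hex: str) -> bool:
    """Check a TLSA record of the form "3 1 1 <hex>": the record's
    association data must equal the SHA-256 digest of the server's
    SubjectPublicKeyInfo as seen in the TLS handshake."""
    return hashlib.sha256(spki_der).hexdigest() == tlsa_assoc_data_hex.lower()

# Hypothetical SPKI bytes standing in for a real certificate's public key.
spki = b"example-subject-public-key-info"
record = hashlib.sha256(spki).hexdigest()  # what the zone owner publishes in DNS

print(tlsa_matches(spki, record))         # True: the pinned key matches
print(tlsa_matches(b"attacker", record))  # False: a substituted key fails
```

No third party is consulted; whoever controls the (DNSSEC-signed) zone controls which keys are valid for it.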


Interesting idea. This can be done with the existing DNS infrastructure I believe. It really sounds like a workable solution.


The first thing I would think of is not having to trust random certificate authorities, which would avoid incidents like this [0].

Another problem with the current state of the web was explained in an article I read a few days ago [1].

[0]: https://slate.com/technology/2016/12/how-the-2011-hack-of-di...

[1]: https://www.politico.com/news/2022/04/09/website-domain-more...


An interesting thought. This relates to DANE if I understand correctly. This can work on existing infrastructure.

But what about this “distributed domains“ thing though? Unless I’m understanding it wrong, there’s blockchain technology involved. Why?


You are repeating the question, even though both of the linked articles explain the current problem with the internet and help the reader understand the why.

Another user in this thread with the name 'jillesvangurp' has elaborated on the why, which is worth a read.


> It is also the first browser to support secure web browsing with DANE.

Props for that. I wish we moved away from CAs.


You get CAs with DNSSEC/DANE, just not PKIX CAs (though you can have those too).


Agreed. I don't like the trend of devices, especially IoT devices, that bake in support for CAs without supporting other certificates or allowing you to adjust their trust stores. Real attempts at decentralization away from CAs might force companies to change how their devices deal with certificates and trust, hopefully putting power back in the hands of device owners.


Is DNS really a problem with respect to 'centralization'?

Where is the harm being caused?

Where are the risks of harm?

What are systematic negative effects?

And if those can be articulated, are blockchain resolutions a solution to those problems?


You could have very well built an extension for this..


This is the kind of thing Linux users like.


...but cannot use, as of now.


Do any of the Chrome forks explicitly work together?

The standard IT pattern seems to be a dominant corporation vs a ragtag of small groups.

The browsers wars have already gone through this cycle and the ragtag group won the last time.

But there were also lots of speciality browsers built on IE and it's hard to tell where any new Chrome fork browser stands.


Imagine a world where Microsoft, Mozilla, Google and Apple had been cooperating and building the interconnecting blocks of a modern browser since the early 2000s. Instead, MS discontinued its own Edge engine, Mozilla has only 2% market share (still big in global terms), and 90% of internet traffic basically runs on Chrome. No innovation is possible that is not in line with the direction of the upstream, except at the increasing cost of constantly catching up to updates.


I have a question about blockchain DNS's. This might be the time to ask!

There is a centralization aspect in any DNS, blockchain or otherwise: to be useful at all, there needs to be consensus about which chain/operators own which .{name} extension for a given name. That creates a financial incentive for, say, an investor to 'bribe' browsers into using their DNS, which then charges money to end users, which, er... feels a lot like the DNS we have.

I think that "true" decentralization might come from nameless "domain". If a domain is a SHA256 hash with no name, then with QR Codes and search engines and hyperlinks it doesn't matter as much that they are not memorable (we managed with nameless phone numbers, right!). Hash providers can be any chain, and then your job as a browser is to add as many mainstream chains as possible (there is no need to decide who is the 'official' one, or rank them, so Ethereum is no better than Dogecoin) as hash collisions are practically impossible.
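As a rough sketch of what such a nameless domain could look like (a hypothetical scheme for illustration, not how Handshake or ENS actually work), the identifier could be derived from the owner's public key, so ownership is provable with the matching private key and no registrar is involved:

```python
import hashlib

def nameless_domain(pubkey: bytes) -> str:
    # Hypothetical scheme: the "domain" is simply the SHA-256 of the
    # owner's public key. Anyone holding the corresponding private key
    # can prove ownership; no naming authority needs to exist.
    return hashlib.sha256(pubkey).hexdigest()

domain = nameless_domain(b"alice-public-key")  # made-up key material
print(len(domain))  # 64 hex characters: unmemorable, but collision-free in practice
```

The identifier is deterministic, so QR codes, hyperlinks and search engines can all point at the same value without anyone ranking chains or deciding who the 'official' resolver is.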


A nameless phone number can be relayed verbally, and is easily memorized. Neither of those things is really true for a SHA256 hash.


If you are going to use an immemorable hash instead of a memorable name, why not use IP directly?


Because you don't necessarily own your IP, usually it's owned by your hosting provider.


There could be several immemorable hashes pointed at the same IP; the server needs to know which immemorable hash it should serve.

(don't think this makes much sense, just answering your question...)
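For what it's worth, this is exactly how name-based virtual hosting already works with the Host header: the client sends the name it resolved, and one IP dispatches to many sites. A toy sketch of the same idea with hashes (hypothetical hash-to-site mapping):

```python
import hashlib

# One server, one IP, many "nameless" sites: the client sends the hash
# it resolved (much like today's Host header) and the server dispatches
# on it. The key material here is made up for illustration.
sites = {
    hashlib.sha256(b"alice-key").hexdigest(): "<h1>Alice</h1>",
    hashlib.sha256(b"bob-key").hexdigest():   "<h1>Bob</h1>",
}

def serve(requested_hash: str) -> str:
    return sites.get(requested_hash, "404: unknown hash")

print(serve(hashlib.sha256(b"alice-key").hexdigest()))  # <h1>Alice</h1>
```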


IPv6 may work for that. But the idea is to have decentralised allocation, so hashes derived from your own generated public keys are better.


we already have "nameless" IP addresses, on top of which DNS is built as an abstraction. using SHA256 is no better than using IPv6 IMO


There is probably a decent case to be made for having an indirection layer above IP addresses, even if they are as meaningless and unmemorizable as a hash: for portability (IP addresses are tied to the host infra provider), load balancing, etc.


We already have that.


Do we? Where can I buy an IP address? Do all hosting providers support assigning an arbitrary IP address to a website?

Let's say I have a hosted wordpress blog with a custom domain. How do I get a static IP for it?


Except you have your ownership in the blockchain, allowing you to mutate the IP address. I don't know if you can get a guaranteed-for-life IPv6 address, or who dishes them out. It is centralised.


Is this just technically interesting? If it's more than that, what are the benefits for me?


A Chromium fork, which could've been an extension or a companion app. No thank you!


I apologize if this is a naive question, but a lot of this is new to me - are Forever domains, ENS and Handshake all complementary, or are they alternatives to each other?

Might anyone have any good resources for blockchain DNS they could share?


Forever domains are built on Handshake. Handshake is an attempt at a decentralized DNS root zone. ENS is a different, independent project.

I guess the Handshake people will make sure .eth (the ENS suffix) resolves to ENS on Handshake. Handshake would like to eclipse everything :)


It's surprising that nobody mentions theoldreader:

https://theoldreader.com/

it's kinda like the old google rss reader and works really really well!


Could you please update your page with more information?


Linux version for Mint? Mail list for updates?

Thanks


Congrats on your accomplishment!

However, I will never use it because a browser is one of those really important information gateways that I want to be very sure is not compromised.


Your strong wording is interesting here, and I'm curious about what this is implying about how you approach security.

The implication here is "not compromised by independent actors that wouldn't already be capable", right? You're delegating trust to closed source megacorp products or open-but-insanely-complex megacorp dependents (firefox/chromium babies?), all of which will naturally create a strong incentive to find or hide exploits, once they have some traction. Discussion on HN makes me think it's impossible to really trust a browser to be secure atm. The alternative is to hope there are no wide-sweeping exploits, and to try to remain anonymous.

I guess the real additional issue added by something like this is this introduction of another actor which you need to be inherently suspicious of since they're attempting to funnel you towards their system, which they have some control over. Just like other corps, but you've given them your trust already. It's not crazy, I don't know if it's even wrong.

But I'd say maybe we can reframe your issues adopting something like this. Would it be something you would trust to use daily if the following become true? :

- The team at Impervious develops a sufficient reputation for being stewards of open software with healthy communities over the coming years. (I'm implying that's a good approach to get security researchers giving time to your project; maybe it'd be sufficient to have a really good bug bounty program, or just to develop a sufficient security team)

- A large audience adopts this browser, so you're not one of the hundred beacon users that's easily picked out throughout the web (I assume fingerprinting techniques make this an issue though I admit I've little knowledge on the topic)

I'd love to hear what I'm missing, and if this conflicts with your approach to assessing security maybe you can help me see your perspective :)


For me to consider using it, the company would need to have millions of users, a large team of highly credentialed security engineers working on ensuring it is not vulnerable, and 6-figure bug bounties. If your unproven company can't be on the hook for a $250,000 bug, why should anyone trust it with their banking info?

Basically there are not enough signals here for me to believe they are life-or-death serious about it. I know it sounds dramatic, but people's finances, careers, lives, etc can be destroyed by mishandling the information that flows through the browser. They need to take it that seriously, and signal that to everyone.


[flagged]


Unfortunately, the number of people qualified, and with the time, to do a source code review of something as complex as a browser is very small.

For the vast majority of users that's not a realistic prospect...


Not only that, but review every single commit forever. It's just not realistic.


So you're saying that either way he has to trust someone. Then why not trust a company which at least open sources their code so anyone who has an interest in auditing the software can do so? (or have experts audit it for them,[who they also need to trust])

Yet people seem to trust google without batting an eye? "Google fired dozens of employees from 2018 to 2020 for accessing users' personal data."

https://www.businessinsider.com/google-fired-employees-abusi...


So (like everything in security) it depends on your threat model. Open source has significant advantages over closed source, from the perspective of allowing the possibility of review (although that can be a false sense of security as we've seen several bugs in high profile projects live for decades)

Where closed source might work better is where you are a large company with a smaller supplier. There you can use contractual controls to require a level of review to be done alongside other controls and have meaningful financial penalties if those requirements are not met.

At the moment honestly the idea of fully trusting any large project seems like a tricky one as most projects/products are comprised of large quantities of 3rd party open source libraries, which are trusted. Whilst there's work to address that (e.g. the OpenSSF) there's a looong way to go.

That's why defence-in-depth/segregation/detective controls are so important, relying on any one control is likely not to end well :)

As to trusting Google, again threat model. I have a gmail account, could a google staff member access that? yep they could. Do I think I'm likely to be a target for that, not really :)


> either way he has to trust someone. Then why not trust a company which at least open sources their code

Because the alternative is to trust a company which open sources their code _and_ a lot of security researchers regularly verify the code.

> Yet people seem to trust google without batting an eye

There are always people who believe in wrong things.


Trust in the original authors' code is only half of the attack surface. The other half is trust that no future contributors are malicious. Is the project more capable than Google in ensuring that malicious code can't land in the code base? I think the answer is clearly no.


I'm not sure what you expect Google to do? Permission to access personal data was limited to a small number of Google employees and when they abused their power they were fired. Do you want them to have some sort of AI that guesses the intent of looking at personal data?


You are being downvoted because what you suggest is impractical.

So either you don't realize that, or you are being disingenuous with your suggestion.

And that's without considering the need for verified builds, which is a separate issue.


As someone else already pointed out, you either put in the work or you have to delegate to people who you need to put your trust in.

Google has violated that trust many times, yet people just shrug it off and are like "well just accept your corporate overlords, everything else is just impractical".

Some people in this forum are just obnoxious, and every time I read the comments it reminds me why people call Hacker News toxic.


i would like an open source browser that shares data collected from all its users. so the public can have access to the types of data google et al have.


> i would like an open source browser that shares data collected from all its users. so the public can have access to the types of data google et al have.

That sounds like a disaster waiting to happen. The public once got a hold of a sample of the anonymized searches of AOL customers and it didn't take long for individuals to get identified. The amount of data google has on people goes far beyond the things they search for or the sites they visit. A web browser that publishes everyone's browsing habits for the public would also get collected by data brokers and 'google et al' to be exploited for their own gain. Who would sign up to use that?

I'm pretty sure that if the data google has were ever made available to the public, as soon as the first congress person reviewed their own dossier we'd see laws passed very quickly banning the use of that data and perhaps even preventing it from being collected in the first place. I imagine any popular browser or project exposing their users similarly would have the same problem.


either the data collection should be banned or everyone should have access. why should monopolies like google and those who can afford access to data brokers get to extend their power indefinitely? why do people support snowden's leaks but no one is leaking similar shit from google? it can be a payment for releasing your data publicly if people don't want to give it up for free.


Real whistleblowers acting against the interests of the powerful within a society need to be willing to sacrifice their own lives (and possibly even put their family and loved ones at risk) for ideology. And by definition, anyone positioned to have access to such information is probably also at a point in life where not leaking it would let them continue to lead an extremely comfortable life.

And ultimately, it may not even matter. Snowden's leaks revealed that basically every single privacy-related conspiracy theory of times past was true. They showed that the NSA was not only trading your nudes [1], but even had spies all the way down in things like World of Warcraft and Xbox Live [2]. They also demonstrated countless illegal acts that violated every single possible interpretation of the 4th Amendment.

And now? Snowden is holed up in some place in Russia, knowing full well he'll likely get Assanged, or worse, if he ever sets foot in a place friendly to the US. And the powers that be? They've only become more flagrant in all of their violations, endlessly protected from the law by a mixture of national security and a lack of standing legal claims.

Don't underestimate what genuinely leaking against the interests of the powerful entails. People like Snowden are truly unique and selfless individuals.

[1] - https://time.com/3010649/nsa-sexually-explicit-photographs-s...

[2] - https://www.smithsonianmag.com/smart-news/the-nsa-was-spying...


> why do people support snowden's leaks but no one is leaking similar shit from google?

Whistleblowers are very rare creatures to begin with. Not too many people are willing to risk the legal, financial and career ending repercussions assuming that they even have both access to the data and a means to copy it in bulk to back up their revelations. Even now, a large number of people would sadly still consider Snowden to be a traitor.

I imagine Google keeps the data they collect (and more importantly what they've inferred about us using the data they collect) under tight controls to avoid that problem. With only a limited number of people able to access the trove of data at all, and those people being kept happy (or perhaps more pessimistically, kept in line by fear. I mean let's face it, Google likely has enough dirt to bury any one of us alive) it's not hard to imagine that no one has been willing or able to come forward.


You're right, it should be banned. It should not be legal to covertly(through trackers running on every website, effectively spyware) catalogue people's browsing habits like that.

Although I'll give you this. A massive leak might stir up the public outrage necessary to convince the US Congress to actually do something about it.


> either the data collection should be banned

GDPR


> "I'm pretty sure that if the data google has were ever made available to the public, as soon as the first congress person reviewed their own dossier we'd see laws passed very quickly banning the use of that data and perhaps even preventing it from being collected in the first place."

And now consider the alternative world, the one we happen to live in, where it's all private unless it somehow ends up getting "hacked" or "leaked" to the public, indirectly of course. And it'd be a shame if that happened wouldn't it Mr. Senator.


You aren't wrong. The amount of power Google has is outright dangerous. Amazon and Microsoft aren't far behind. To make matters worse, we should assume the state has every scrap of data these companies have collected as well. It's a lot of power to have and I imagine it could be used to quickly identify upcoming threats to those in power while also providing a lot of ammo to use against them.


Wow congrats on the launch! This looks really exciting.


No GNU/Linux?


Will there be namecoin support?


Yet Another Chromium Fork. In a way, it's almost a shame Chrome was open sourced through Chromium. Maybe if it weren't, we'd have a healthier, more diverse browser ecosystem.


As much as I lament the same thing, I think it's more likely we'd just have nothing.


I doubt Chrome would be in the position it is if it weren't open source. After all, Blink just started off as a fork of WebKit (which was a fork of KHTML...). Had they had to build a browser from scratch, it would have taken them years to have something that could compete with IE, FF and Safari. Also, Apple didn't seem to be very interested in promoting the Windows version of Safari, so it was basically the only WebKit browser most people would even have access to.


Maybe. There's no way to tell, of course.


Well, we could look at history. For example, Internet Explorer’s former dominance.


Writing a browser from scratch is hard.


Hard is underselling it. At this point it’s borderline impossible. It’s unlikely we will ever see a new browser.


Why, licensing/proprietary issues?


That's one. Just look at some of the specs that have to be implemented. There are so many most devs have probably never even heard of. It's really mind boggling the amount of work required to create a full browser from scratch.

List of web APIs: https://developer.mozilla.org/en-US/docs/Web/API

Canvas API spec: https://html.spec.whatwg.org/multipage/canvas.html

Take one look at the docs for just rendering text and any sane developer would probably give up: https://www.w3.org/TR/css-text-3/


Web browsers are the most complex codebases in existence. Firefox contains more lines of code than Linux. It would require probably 100 developers just to keep up with new features being added. Then going back and catching up on the already existing components is virtually impossible.

Mozilla had a project called Servo which aimed at replacing just a small sliver of Firefox, and the project ran for years with a huge monetary investment and was eventually canceled.


> Mozilla had a project called Servo which aimed at replacing just a small slither of Firefox and the project ran for years with a huge monetary investment and eventually was canceled.

Haven't some parts of it been integrated in Firefox?

The project does still seem to go on, although outside of Mozilla: https://github.com/servo/servo


I think some CSS stuff did get merged. But if you look at the insights tab you can see the project is moving at 3% of its previous rate. Mozilla got rid of the Servo team, so I assume it's only hobbyists still working on it.


Not to be confused with the other impervious browser:

https://news.ycombinator.com/item?id=30968715


Yikes, what a naming collision given the "blockchain-y" tone underlying both uses of that name


To be fair, this one is called Beacon, Impervious is just the name of the developer. Not that much of a collision imo...


Let me guess: your company ultimately profits somehow, whether directly or indirectly, from the use ("sale", presumably) of these "decentralized domains" and/or ".eth DNS"?


> built on Chromium

If only Firefox(Gecko) would get some more attention...


And that goes double for trying to bootstrap a new protocol; I can only presume the Tor Browser is built on top of Firefox for a reason


There's a reason: legitimate performance differences that exist between them.


Maybe it would if it had a more permissive license.


They’ve gotten plenty of attention. No one wants to build off a sluggish browser.


Why exactly? I honestly do not understand the HN fascination with Firefox. Firefox isn't even in the top 3 browsers anymore and has about the same market share as Samsung Internet. It is irrelevant at this point. Blink is the standard for rendering now, just as Linux is the standard for web servers.


> Firefox isn't even in the top 3 browsers anymore and has about the same Marketshare as Samsung internet

This is precisely the issue. The fact that Google has such mass control over internet rendering means that they are free to write their own standards with absolutely no pushback. It is essentially an Internet Explorer situation: if Google wants something to work a certain way, you have no choice but to adopt it, lest certain websites stop working.

I don’t think a single commercial entity — let alone one with such a disastrous privacy record whose primary source of income is advertising — should be able to strongarm the Internet like it currently can.


Globally, Safari is #2 on mobile, and in the US specifically, #1. Even on desktop, Safari has a fairly strong showing.

Apple and Mozilla have basically just swapped places when it comes to keeping developers in check wrt adopting chrome-only features.

I don't know that more people using FF would change much at this point, to be honest.


> This is precisely the issue. The fact that Google has such a mass control over Internet rendering means that they are free to write their own standards with absolutely no pushback.

It is an open source community driven project with contributions from, "Google, Facebook, Microsoft, Opera Software, Adobe, Intel, IBM, Samsung, and others"

There are so many stakeholders, it isn't just google dictating what occurs in blink


If Firefox wasn't so sluggish maybe more people would use it.


That’s a big if, Mozilla has been declining for a long time. They don’t have the funding to progress their aging codebase. Firing everyone in the name of Wokeness hasn’t helped.


> Why exactly? I honestly do not understand HN fascination with Firefox.

Well, for starters, uBlock Origin works as intended on Firefox as opposed to Chromium and this will become more apparent with manifest v3.

I don't know about others but I also like how Firefox lets me choose the fonts that I want to see on the web while Chromium browsers don't.


> Blink is the standard for rendering now just as Linux is the standard for web servers.

It is THE problem. Software should not be "standard".


Without community supported and adopted standards software wouldn't have progressed all that much in the past 20 years


Standards must be standards. Software must implement standards.


I can already imagine the DNSSEC/DANE naysayers' complaints. I don't care, I love this.


Why would I want .eth DNS? What is DANE? Not really selling it here.


my thoughts exactly... this seems like a browser targeted for a specific audience. my guess would be the crypto community.


Based on chromium, looks like chromium. No thanks.


What happens to the old web browser?


All the .eth crap makes me gag and recoil, but DANE seems worth investigating.


Why do blockchain-based name systems (or ENS specifically) make you "gag and recoil?"


Proof of stake for one, and the cult it’s built on. Shouldn’t need to be said.


I think it does need to be said.

Nobody can engage with your opinions if you don't express them.


That's great! Do you think your product will lead to more companies creating Chrome-based browser forks? Cause there's about 20, I can't wait for more Chrome-based variety!



