
How is it different from searxng then?

I see nilch as leaning more toward being simple and not having unnecessary features. I share many of the values and goals of searxng (and really love their work!), but this is also about my own desire for something clean, with very little that is unnecessary.

searxng allows me to use instances in various locations:

https://searx.space/


I don't care. I use my own domain index, which also contains YouTube channels that I follow through RSS:

https://github.com/rumca-js/Internet-Places-Database

The most important links are available through a simple search:

https://rumca-js.github.io/search

I also have an RSS search:

https://rumca-js.github.io/feeds


It is a fine project for limiting 'doomscrolling', but I think the premise is wrong.

- I have created my own RSS reader that contains 500+ sources, and I do not doomscroll

- doomscrolling appears when a social media algorithm feeds you content, even content from a month ago

- I have various filters, so I can browse whatever I want

So an RSS reader just needs filters, categories, and extensive search capabilities to solve doomscrolling, while on the other hand it can provide you with extensive amounts of data.


If anyone is interested in RSS feeds, here are mine in an SQLite table:

https://github.com/rumca-js/Internet-feeds
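
If it helps, here is a rough sketch of how such a file could be queried. The table and column names below ("feeds", "title", "url") are assumptions for illustration, not the actual schema of that repository:

    # Sketch: explore and read an SQLite export of RSS feeds.
    # NOTE: the table/column names ("feeds", "title", "url") are assumptions;
    # check the real schema first (e.g. with ".schema" in the sqlite3 CLI).
    import sqlite3

    conn = sqlite3.connect("internet-feeds.db")
    conn.row_factory = sqlite3.Row

    # List the tables, since the actual schema may differ from the assumption.
    tables = [r["name"] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    print("tables:", tables)

    # Assuming a 'feeds' table with 'title' and 'url' columns:
    for row in conn.execute("SELECT title, url FROM feeds LIMIT 10"):
        print(row["title"], "->", row["url"])

    conn.close()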


A browser is a tool that allows you to browse the internet. It should be able to display HTML elements and so on.

LLMs are also a tool, but they are not necessary for web browsing. They should be installed into a browser as an extension, or integrated in such a way that they can easily be enabled or disabled. Surely they should not be deeply intertwined with the browser, imho.


that is hilarious :-)

I use my own library of domains exactly for scenarios like that:

https://github.com/rumca-js/Internet-Places-Database


Consider also checking out fmhy.net as a source of great websites


Needs a simple P2P desktop client.


For me, capturing a list of RSS sources is fun:

https://github.com/rumca-js/Internet-feeds

But I do not check all 3k sources regularly.


I think I use SQLite like that (to some extent):

- https://github.com/rumca-js/Internet-Places-Database

For the UI I use HTML, because Bootstrap already provides components, and everybody can use it without installing any software.

All the data comes from a single SQLite file, which is easy to read and query.

My database is really big, so it takes time to browse, and I wanted to provide a more meaningful way to limit the scope of searching.
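
As a minimal sketch of what limiting the search scope could look like on the query side, assuming a hypothetical "entries" table with "category", "title", and "url" columns (not the real schema), something like:

    # Sketch: limit the scope of a search by category before text matching,
    # so the LIKE scan only touches a subset of a large table.
    # NOTE: "entries", "category", "title", "url" are hypothetical names.
    import sqlite3

    def scoped_search(db_path, category, term, limit=20):
        conn = sqlite3.connect(db_path)
        conn.row_factory = sqlite3.Row
        rows = conn.execute(
            "SELECT title, url FROM entries"
            " WHERE category = ? AND title LIKE ?"
            " LIMIT ?",
            (category, f"%{term}%", limit),
        ).fetchall()
        conn.close()
        return [(r["title"], r["url"]) for r in rows]

    # Example: search only the 'personal' category for 'rss'.
    for title, url in scoped_search("places.db", "personal", "rss"):
        print(title, "->", url)

Filtering on a category column (ideally indexed) before the LIKE match keeps the text scan confined to a subset of the table.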

