Hacker News

I mostly focus on text-based content, so PDFs and webpages are easy to support. For PDFs I've thought about using https://github.com/phiresky/ripgrep-all or pdfgrep (https://pdfgrep.org/).

For images, what do you want to grep for? For EXIF data there's https://exiftool.org/. If you want to find image-based content, you'll need something smarter. This may be a place where tools such as https://github.com/ultralytics/yolov5 can shine for me: simple enough to work with most of my images and tag them according to some preferences, and I would save those tags in a txt file.

Anyway, all the metadata I store about images, links, etc. is persisted in txt files: summaries, tags, incoming/outgoing links, and so on each get their own file. There is one folder per link/content item, and under each folder, one file per type of metadata. So it's very easy to tell whether some metadata is missing for an item: no index needed, it's as simple as checking for the presence of a file. Everything stays compatible with grep.
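A minimal sketch of that layout (all folder and file names here are hypothetical, just for illustration):

```shell
# Hypothetical layout: one folder per link/content item,
# one plain-text file per type of metadata.
mkdir -p notes/example-article
printf 'grep plaintext metadata\n' > notes/example-article/tags.txt
printf 'Notes on grepping PDFs\n'  > notes/example-article/summary.txt

# Missing metadata == missing file; no index required.
for d in notes/*/; do
  [ -f "${d}summary.txt" ] || echo "missing summary: $d"
done

# And everything stays grep-able:
grep -rl 'plaintext' notes/
```

Because the metadata is just files, the same grep/find pipelines that work on the content work on the metadata too.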

Docx and xlsx are off my plate at this time; I haven't experimented enough to judge what works well. I hate those formats.



As docx / xlsx files are zip archives, I normally unzip them and then use some sort of XML-aware grep. But these formats are a rabbit hole of their own ;)



