
I am using two of those two million RPis, with camera modules, to record pollinators visiting flowers (backpackable, motion-activated, solar-powered). I'm still ironing out some kinks in the software, but the quality of the footage produced by the camera module, the low power requirements, and the flexibility you get with the RPi are really pretty amazing considering the price.


Agreed about the camera module, with the important caveat that, like most of the rest of the Pi, the guts are proprietary Broadcom stuff. In particular, some really basic camera settings (light metering, white balance) aren't accessible to the user.

Want to maintain a consistent white balance or exposure between shots? Sorry, no can do. And since it's a proprietary blob, it's not something you can fix.


I know many people have a problem with the binary blobs, but even with its limitations the camera module is at least $1000 cheaper than building a system around something like a PointGrey Flea 3, and better than any consumer-grade USB webcam currently on the market. Power consumption is low enough that solar power is a realistic option while keeping the system portable. JamesH has also been very responsive about adding new features to raspicam/raspistill, such as the very useful signal mode. It WOULD be nicer without the binary blobs.
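For anyone who hasn't used it, the signal mode mentioned above works roughly like this: raspistill started with -s stays resident and captures one frame each time it receives SIGUSR1, which fits motion-triggered capture nicely. A minimal sketch, assuming a Pi with the camera attached; the output path and %04d frame-counter pattern are placeholders:

```shell
# Launch raspistill in signal mode: it idles until it gets SIGUSR1,
# then captures one frame per signal into a numbered file.
start_camera() {
    raspistill -s -o /home/pi/frames/frame_%04d.jpg &
    echo $!    # return the PID so the motion handler can signal it
}

# Called when the motion sensor fires: one SIGUSR1 = one captured frame.
trigger_capture() {
    kill -USR1 "$1"
}
```

So a PIR sensor script just has to hold on to the PID from start_camera and call trigger_capture each time motion is detected, instead of spawning a fresh raspistill process per event.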


Yeah, I agree that the camera module is really good for the price. It's just a bit frustrating not being able to fix things that I know the hardware can do, because I don't have access to the source.


Do you have any pictures or footage? I'd love to see.


I use a camera module to capture and catalog a daily timelapse video of the sky and weather above Boston: http://guipinto.com/skylapse


That's really excellent.

Is this website being fed in real-time by the RPi?

What framework/tool are you using to create the UI for your website?


I have the raspi collecting images every 5 seconds, then shipping them off to a local box I'm dedicating to rendering the videos with ffmpeg (ffmpeg -r 30 -i "images_*.jpg" [+ x264 and CRF at 23]). Then I ship the videos off to S3 for storage/serving. The UI was custom built and is still very bare right now: it's just an HTML5 video element with two source options (ogv and mp4). The quality of the rendered video, as well as the capture settings (controlling exposure instead of leaving it on auto, etc.), still needs a lot of work, and this is an ongoing project for me.
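The render step described above can be sketched as a small script. This is a guess at the full invocation, not the author's actual setup: the image directory, output name, and S3 bucket are made-up placeholders, and -pattern_type glob is added so ffmpeg accepts the wildcard input pattern quoted in the comment:

```shell
IMG_DIR=/home/pi/skylapse/images          # assumed capture directory
OUT="skylapse_$(date +%Y%m%d).mp4"

# Stitch the day's JPEGs into an x264 MP4: frames read at 30 fps,
# CRF 23 quality, yuv420p for broad player compatibility.
render_cmd="ffmpeg -r 30 -pattern_type glob -i '$IMG_DIR/images_*.jpg' \
  -c:v libx264 -crf 23 -pix_fmt yuv420p $OUT"

echo "$render_cmd"
# After rendering, ship to S3 for storage/serving (bucket is an assumption):
# aws s3 cp "$OUT" s3://example-skylapse-bucket/
```

Rendering on a separate box rather than the Pi itself makes sense here: x264 encoding a full day of frames is CPU-heavy, and offloading it keeps the Pi free to capture on schedule.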


Very cool, how did you achieve this? Is there a write-up you could point me in the direction of?


I do. I'd rather not post links to them here for privacy reasons.


is it possible to share your motion activation / solar circuit?



