The main challenge there is: how do you ensure your database is resilient to machine or datacentre outages? I.e. what happens if the one server with the database is in a datacentre that loses Internet connectivity?
SQlite "merely" assumes that the problems that come with distributed systems are handled at the application layer. You'll have to solve those problems for yourself, sure, but in practice I have rarely (I think never actually) had dataloss through a fault of sqlite.
Also, did you know you can use in-memory instances (and share them across threads!) with the right incantation? And that you can back up your on-disk instance to an in-memory one, do your expensive transactions without hitting the disk, then back up the modified instance right back to disk, even in-place if you want!
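A minimal sketch of the disk → memory → disk round trip in Python, using `sqlite3.Connection.backup` (Python's wrapper over SQLite's online backup API, available since Python 3.7). The path is a made-up example, kept in a temp dir so the sketch is self-contained:

```python
import os
import sqlite3
import tempfile

# Hypothetical on-disk database; a temp path keeps the sketch self-contained.
db_path = os.path.join(tempfile.mkdtemp(), "app.db")

disk = sqlite3.connect(db_path)
mem = sqlite3.connect(":memory:")

# Copy disk -> memory via SQLite's online backup API.
disk.backup(mem)

# Do the expensive work entirely in RAM, without touching the disk...
mem.execute("CREATE TABLE t (x INTEGER)")
mem.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
mem.commit()

# ...then copy memory -> disk in one shot.
mem.backup(disk)
```

The "share an in-memory db across threads" incantation is a URI filename like `file:memdb1?mode=memory&cache=shared` (opened with `uri=True` in Python): every connection using that URI sees the same in-memory database.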
Sqlite is amazing when you don't expect the DB to do replication or failover on its own.
No, I did not know that. I've looked for a long time for a way to convert a sqlite3 database to an in-memory database and back again. Do you mean that there's support in sqlite3 for this? Could you point me in the right direction?
We both learned something new today. Looks like this is what you want in combination with using an in memory database.
I've been doing a hand-rolled in-memory cache layer to speed up data access, but with this, I can just call the db directly and then periodically sync to disk, redis rdb style. SQLite is a staggeringly good piece of technology!
In Python you just have to open the special file name “:memory:” to get a memory-based db. I don’t remember what the raw SQLite incantation is (or if it’s different). Also, pay attention to “ATTACH” - it’s the way to use multiple databases (file and/or memory) while still letting SQLite handle it all (e.g. join a memory db to a file db, insert the result into a 3rd file db - all without having to look at records in your own code)
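A small sketch of ATTACH in Python: an in-memory db joined against an attached file db in a single statement. The file path and table names are invented for illustration:

```python
import os
import sqlite3
import tempfile

# Hypothetical file db; a temp path keeps the sketch self-contained.
orders_path = os.path.join(tempfile.mkdtemp(), "orders.db")

con = sqlite3.connect(":memory:")
con.execute(f"ATTACH DATABASE '{orders_path}' AS filedb")

# One table in the memory db, one in the attached file db.
con.execute("CREATE TABLE users (id INTEGER, name TEXT)")
con.execute("INSERT INTO users VALUES (1, 'ada')")
con.execute("CREATE TABLE filedb.orders (user_id INTEGER, total REAL)")
con.execute("INSERT INTO filedb.orders VALUES (1, 9.99)")

# A single JOIN spanning both databases; SQLite handles the cross-db plumbing.
rows = con.execute(
    "SELECT u.name, o.total FROM users u "
    "JOIN filedb.orders o ON o.user_id = u.id"
).fetchall()
print(rows)  # [('ada', 9.99)]
```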
I feel like the sibling comments here are basically just saying "yep, that's the main challenge!" without providing useful tips. I personally haven't used it, but I'm aware that this library exists to help resolve this challenge. https://litestream.io
Just open your SQLite database in read-only mode :)
SQLite works really well for static or semi-static data. For example, a blog where you have a small number of users writing and many users reading from the DB. If the authors are content to use one server to edit the DB then you can easily push that DB to the servers handling the reads.
Yes, this can work; however, you are mostly relying on the operating system's file-system cache for speed. Other databases will try harder to keep their own cache. But true, there is lots of room where SQLite works nicely.
TBF: you don't. The moment you care about any shortcoming of SQLite, move away.
One of the cool things about it is that SQLite is very lax about what it accepts (mostly in the datatype area). You can write your SQL statements targeting whatever database you think you'll move to later and they'll work while you're still on SQLite. I believe having this migration work seamlessly towards PostgreSQL is one of the advertised features.
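To illustrate the laxness: SQLite columns have type *affinity*, not strict types, so a "wrong" value is coerced where possible and otherwise stored as-is rather than rejected. A quick sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, note TEXT)")

# "Wrong" types are accepted: SQLite applies type affinity instead of erroring.
con.execute("INSERT INTO t VALUES ('abc', 123)")

# 'abc' can't be losslessly converted to an integer, so it stays text;
# 123 is coerced to the text '123' by the column's TEXT affinity.
rows = con.execute("SELECT id, note FROM t").fetchall()
print(rows)  # [('abc', '123')]
```

(If you *want* the strictness of other databases, SQLite 3.37+ offers `CREATE TABLE ... STRICT`, which does reject mismatched types.)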