As a database person, I think most ORMs lack the ability to run complex queries. Sure, for a simple OLTP system, you might just need a few simple joins. But if you store valuable data (otherwise why store it), you will eventually want to ask questions about it.
So this ORM positively surprised me, as you can still just use SQL and get a JS function generated from it.
To bridge object-to-relational mapping, Oracle has an interesting concept called "JSON duality views". Don't dismiss it because it is Oracle, I think the concept is brilliant. The translation from relational to JSON happens in the database, and it also allows you to send an updated JSON back and the DB will automatically run the necessary DML operations. A good example is here: https://oracle-base.com/articles/23c/json-relational-duality...
You can definitely tell when an ORM was written by people who actually -like- databases rather than people who want to not have to think about them.
perl's DBIx::Class ORM had a rule from day one of "if you can't get the exact same SQL you'd've written by hand, that's either a bug or a missing feature" and close to two decades on people who've moved to other languages still regularly tell me they miss that aesthetic.
That is my biggest gripe with Mongo. As much as SQL requires a totally different way of thinking, as a query language it's simply unmatched in expressive power, except maybe by Datalog.
Mongo just doesn't feel like that. Every example I have seen is simply a "KV lookup". No discussion of things like range-based queries, or how data relates to other data. Yes, of course these things are possible, but they require writing an imperative query. It seems to encourage people to be lazy and duplicate data everywhere, which results in a poor data model.
Why would we dismiss the number one feature we have been asking database vendors to implement for the past 20 years just because Oracle happens to be among those vendors?
You can get something similar in SQLite with normal views and triggers using JSON1 functions, but it would be a lot of effort to create those triggers manually.
JVM/Kotlin ecosystem has a similar approach with SqlDelight - https://github.com/cashapp/sqldelight. I've been using it for a while and it's quite nice to use.
From experience…code generation is not the way. I've had my own custom solutions for both databases and web services. You end up maintaining both the library itself and the code you're working on (always `go generate` after each change).
(1) Abstractions are expensive. That's because, by their very nature/purpose, you build on top of them. Abstractions for code are very expensive because the code you build on top of them becomes tightly coupled to the abstraction, such that you typically have to heavily rewrite the code to change the abstraction.
(2) So, it behooves you to make sure that the abstraction solves a big problem for you. And you definitely want to make sure the abstraction doesn't make things more complex for you.
This kind of thing where you write one language in terms of another is pretty much always a bad abstraction to adopt, because you always end up needing to deal with the original language anyway. So you need to know the alternative language, the original language, and the details of the mapping mechanism. These things tend to handle the easy cases well, so at first it may seem a breeze, but soon enough you run into problems, and when you do, you are now debugging your code in the alternative language, the mapping code, and the result of the mapping in the original language. Sure, easy things are easier, but hard things are harder. Which do you want to optimize for?
Usually, the abstraction is too simple as well, meaning it doesn't provide a way to express or access some of the things you end up needing from the original (SQL started off simple as well -- all the complex stuff it has ended up there due to some need at the time -- some of those needs will be your needs too). So now you need to find a way to work around the abstraction, or learn the details of its extension mechanism and build yet more code on top of that.
And, there's usually some case or issue that ends up being important to you that the author of the abstraction doesn't see as that important. So you end up dealing with, living with and/or working around inefficiencies or gaps that wouldn't even exist without the abstraction.
In the end, the abstraction gives you the impression you don't have to deal with SQL, but this comes at a large and ongoing cost, and it ultimately doesn't even deliver, since you'll be debugging SQL the tool generated (which is more difficult than debugging SQL you generated). I'm not much of a fan of SQL, but you're kidding yourself if you think you can use something like sqlite or postgres and not deal with it.
> current_timestamp will not work properly when wanting to set the default date to the current time. This is because current_timestamp does not include timezone information and therefore when parsing the date string from the database, JavaScript will assume it is in local time when it is in fact in UTC time.
This might be the answer to an unusual date bug I noticed years ago but haven't prioritized fixing in a personal project. TIL!
Very nice work. I've always tended towards libraries that automate more of the mapping of database entities to their language-specific representation, such as Dapper on .NET. SQL is generally quite fine as a query language, warts and all. Better than digging through docs and issues to find out how to issue some lesser-used functionality that may or may not be supported by a query builder.
this is my goto. Dapper was also my intro to it, but it's a pattern I've followed in every other language since. Writing queries is easy; mapping is the annoying/repetitive part. sqlx for Golang has some decent mapping helpers too.
I don't want to learn/remember a new way to query the same underlying database every time I switch projects/stacks/etc... sql is better, literally.
But why would anyone use an ORM that is specific to one database? For me, the point of using an ORM is the ability to switch between databases, specifically SQLite for testing and PostgreSQL for production, without having to change any logic.
Curious how that works? Do you not use any features of Postgres that aren't present in SQLite? If so, why not just SQLite everywhere? Alternatively, why not spin up an ephemeral PG container for testing? Seems odd not to practice how you play to that extent.
It virtually never works. ORMs letting you change databases is something ORM proponents like to suggest but very rarely does it work without a ton of work (nullifying the point.)
I have been DB-agnostic for 20 years, and it has never been a problem, except when I meet ORMs or "SQL-based solutions" (usually procedures and/or views) that are specific to one database.
Luckily, I'm in a position where I can choose for our clients, so... :)
Oh I see MySQL added functional indexing in 8.0.13; that's the main thing I would miss from Postgres. Partial indexes are also nice to have, but I suppose a sophisticated enough ORM could map them to functional ones, at least insofar as testing is concerned. The memory use would likely be much higher than with a proper partial index.
Oh I was thinking of ORMs like Prisma, where the single ORM System is responsible for both runtime mapping of objects to relations, and development/deployment time provisioning of schemas, migrations, indexes, etc.
I prefer to only use ORMs that can be used "everywhere." PostgreSQL needs to be used on production for performance; SQLite is good, but not _that_ good :), but it's perfect for testing things locally.
I wonder if there’s a type system strong enough that it can actually parse files, send remote queries, etc… so that types will be generated on the fly as you type with no additional steps.
I heard TS’s and other languages’ type systems are Turing-complete, but surely they’re all isolated enough that this wouldn’t be possible?
Seems like it would be pretty cool. Also kind of a footgun, since the type system becomes a program of its own.
I have come to believe that select * should only be used in exploratory SQL and never in application code.
I know a lot of devs treat database schemas as sacrosanct and try not to modify the schema much once created. However, I do like being able to iterate on the schema quite a bit during the early stages of an application, and with dot-star queries I often end up with either runtime errors or type errors that are quite far from the query itself. So now I have settled on ensuring that all columns are always specified in queries originating from application code.