Not a database, just a map of json strings where you can update the json stored at some key. You could write the same interface on top of localStorage.
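For illustration, a minimal sketch of that same interface on top of localStorage in a browser; the class and method names here are made up, not taken from the project:

// Hypothetical sketch: a map of JSON values where you can read,
// write, and update the value stored at a key, backed by localStorage.
class JsonStore {
  constructor(private prefix = "jsonstore:") {}

  get(key: string): unknown {
    const raw = localStorage.getItem(this.prefix + key);
    return raw === null ? undefined : JSON.parse(raw);
  }

  set(key: string, value: unknown): void {
    localStorage.setItem(this.prefix + key, JSON.stringify(value));
  }

  // Update the JSON stored at a key by applying a function to it.
  update(key: string, fn: (current: unknown) => unknown): void {
    this.set(key, fn(this.get(key)));
  }

  remove(key: string): void {
    localStorage.removeItem(this.prefix + key);
  }
}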
Couchbase Mobile has been doing this for over a decade, and early versions of Membase 15 years ago used a SQLite backend as a NoSQL JSON datastore.
I'm using something like this for a small personal project that's only going to have a couple of users: basically, an app for me and my girlfriend to keep track of all the various restaurants, movies, recipes, TV shows, locations, etc. that we plan to get to at some point. It's essentially a glorified todo list that uses APIs (TheMovieDataBase, OpenStreetMap, etc.) to grab additional metadata and images to present everything nicely.
I want us both to be able to make notes/add ratings to each item, so the set of tables looks like this:
- TodoItems
- Notes
- Ratings
Where every TodoItem can have multiple Ratings/Notes attached. Because each of the TodoItems is going to be of a different type, with different metadata depending on the type of item (IMDB/TMDB id, image URL, GPS location), and I want it to be extensible in the future, its schema has ended up looking like this:
CREATE TABLE TodoItems (
    id INTEGER PRIMARY KEY NOT NULL,
    kind TEXT NOT NULL,     -- 'movie', 'restaurant', 'recipe', ...
    metadata BLOB NOT NULL  -- JSON; shape depends on kind
);
With SQLite's json manipulation functions, it's actually pretty pleasant to work with. As it grows I might end up adding some indexes, but for now the performance seems like it will be fine for this very low traffic use case. And it makes deployment and backups incredibly simple.
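For a concrete idea of what that looks like, here's a sketch using the better-sqlite3 Node package against the schema above; the inserted row, the field names, and the index name are illustrative, not taken from the project:

import Database from "better-sqlite3";

const db = new Database("./todo.sqlite");

// A hypothetical movie item; metadata is stored as a JSON string.
db.prepare("INSERT INTO TodoItems (kind, metadata) VALUES (?, ?)")
  .run("movie", JSON.stringify({ tmdbId: 603, imageUrl: "https://example.com/poster.jpg" }));

// json_extract pulls individual fields out of the metadata column.
const movies = db
  .prepare("SELECT id, json_extract(metadata, '$.tmdbId') AS tmdbId FROM TodoItems WHERE kind = 'movie'")
  .all();

// If it ever slows down, SQLite allows indexes on expressions:
db.exec("CREATE INDEX IF NOT EXISTS TodoItems_tmdbId ON TodoItems (json_extract(metadata, '$.tmdbId'))");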
Postgres added native support for JSON in 2012. People have been using RDBMSes to store denormalized data, and even as key-value stores, for way longer than that. In fact, it's very hard not to.
I did something similar with .NET and LINQ. The idea was to create something like Marten, but for SQLite instead of Postgres. I stopped working on it some time ago; the thing that was really slow was de/serialization, but with the new source generators for JSON it could maybe be sped up.
I'd be a little less reflexively hostile if the prompting used to generate it was prominently included, so that it was at least possible to see which properties of the output were deliberately asked for.
When I read a piece of code I don't understand, ideally I'm able to assume that the author intended what was written, and learn something when I figure out why. With AI generated code, I can't do that: the original author is more or less destroyed once the context is gone; at best I can ask someone who was kinda-sorta around when the author wrote it (i.e. the person who prompted it). At worst I'm really just asking someone (or something) else to guess what was intended.
Even granting that the generated code is implementing a coherent design (and that's a big if!), what we're left with is a very quick way to generate legacy code whose author is no longer around to answer questions. If such legacy code has been proven to work, I might just stick a “don't touch this” sign on it, but if it starts causing any problems, I'd end up doing a ground-up rewrite, if only to (potentially) understand what problems and concerns influenced the design and made it what it is.
AI can certainly be useful for this latter task, but it'd be far preferable to just start there, as a form of pair programming. And at that point, I doubt it'd be so interesting to say “built with ‹whatever model is popular today›” in the title.
It's stored in :memory:, but the same interface exists with a file-backed database.
This is a parody because the implementation is hidden, and I'm not convinced the implementation isn't just newing an object.
The implementation is very clearly included, if you... scrolled down.
And the point is, you could easily do `const db = new Database("./database.sqlite")` instead.
The wrapper makes it so manipulating your database is just like manipulating a plain Javascript object.
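As a sketch of how an interface like that can be built (this is illustrative, assuming better-sqlite3 again; it is not the project's actual implementation), a Proxy can forward property reads and writes to a key/value table:

import Database from "better-sqlite3";

function jsonObjectDb(path: string): Record<string, unknown> {
  const db = new Database(path);
  db.exec("CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT NOT NULL)");
  const read = db.prepare("SELECT value FROM kv WHERE key = ?");
  const write = db.prepare(
    "INSERT INTO kv (key, value) VALUES (?, ?) " +
    "ON CONFLICT(key) DO UPDATE SET value = excluded.value"
  );

  return new Proxy({} as Record<string, unknown>, {
    // Property reads become SELECTs; property writes become upserts.
    get(_target, key) {
      const row = read.get(String(key)) as { value: string } | undefined;
      return row === undefined ? undefined : JSON.parse(row.value);
    },
    set(_target, key, value) {
      write.run(String(key), JSON.stringify(value));
      return true;
    },
  });
}

// Usage: the database reads like a plain JavaScript object.
const store = jsonObjectDb("./database.sqlite");
store.watchlist = { movies: [603, 680] };
console.log(store.watchlist);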
Beautiful. Please turn it into a repository. You wrangled that AI masterfully for this. Well done! :)
The worst of both worlds, perfection.
> Built with o1.
NAK
Seems to mean "negative acknowledgement", if, like me, you've never seen this abbreviation used before.
And yeah I agree.
But is it web scale?
100% ;)
Doesn’t sqlite-utils do this and more, better?
At work we're using SQL Server, and I stored all JSON as base64 strings, though.
Not sure what your exact use case is (I'm curious, actually), but storing JSON strings should work much better: JSON functions have been supported since SQL Server 2016 [0]. This is how I do it at the moment: I store only indexable content in table columns, and everything else goes into an `attributes` JSON column. MSSQL supports indexes even on JSON fields, but I have not tried that yet.
0 - https://learn.microsoft.com/en-us/sql/relational-databases/j...
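For anyone curious, here's a sketch of that pattern; the table and column names are made up, and it assumes the mssql Node package in an ESM module, though the interesting part is the T-SQL:

import sql from "mssql";

await sql.connect(process.env.MSSQL_CONNECTION_STRING!);

// JSON_VALUE (SQL Server 2016+) extracts a scalar from a JSON column.
const result = await sql.query`
  SELECT id, JSON_VALUE(attributes, '$.quantity') AS quantity
  FROM Items
  WHERE CAST(JSON_VALUE(attributes, '$.quantity') AS int) >= 20`;
console.log(result.recordset);

// "Indexes on JSON fields" work through a computed column:
await sql.query`
  ALTER TABLE Items
  ADD quantity AS CAST(JSON_VALUE(attributes, '$.quantity') AS int)`;
await sql.query`CREATE INDEX IX_Items_quantity ON Items (quantity)`;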
This needs some explanation
I'm still figuring out why I do this.
Submitted it to The Daily WTF yet?
Why ??
Others are being mean by not explaining the joke.
Firstly, SQL Server has built-in JSON support, which lets you query and manipulate the JSON directly: https://learn.microsoft.com/en-us/sql/relational-databases/j...
Secondly, JSON is already serialized, so it doesn't make sense to store as a base64 string. You're adding 30% data overhead to transform a string into a string. Base64 is useful for serializing opaque binary formats.
Lastly, some people might be getting a wry smile that you have the power of a relational database but are just trying to store "json" rather than an actual relational model.
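The overhead figure is easy to sanity-check; a quick sketch in Node:

// Base64 turns every 3 bytes into 4 characters, so the encoded
// string is roughly a third larger than the JSON it wraps.
const json = JSON.stringify({ name: "example", tags: ["a", "b"] });
const encoded = Buffer.from(json, "utf8").toString("base64");
console.log(json.length, encoded.length, encoded.length / json.length);
// → 35 48 1.371... (~37% here; padding makes short strings worse)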
How do you query JSON with SQL Server? Like, let's say you have one data point like this:
{ "id": 42, "quantity": 12, bla bla bla
and you want the rows where this column has a quantity and quantity ≥ 20. How do you do it if you encode everything as base64?
You slap a full text index on the base64 string. There's only a finite number of base64 substrings for the un-encoded substrings "id", 42, etcetera, so you first filter on those. Then you decode those full strings into json and do the final filtering application side. Easy!
<joking>have col names id, quantity, json and greaterthan20
This is only a joke until a manager hears it. Then it’s part of the Q1 roadmap and we will refactor it in Q3.
> Built with o1.
Yes, yes, a database with AI-written code. NoSQL with a database that can't be trusted with your data? I. have. seen. this. before. To quote a classic:
> I suggest you pipe your data to devnull it will be very fast
In defense of the database that video was about: I worked as a software architect for the company which became its first commercial user, and Eliot hilariously didn't want to accept money for support at first. Good old days. However, around 2015, when all three large open source SQL databases (SQLite, PostgreSQL, MySQL) added JSON support, I felt there was no more need for these NoSQL systems.