So, I’ve been working on improving the developer experience around the current state of the database tool I’ve been building. I’m gonna include some excerpts and example usage from the in-house README here, but I’m leaving off some of the less fleshed-out parts.

The Project

The high-level goals of this project are to create a database which is simple and fast. In order to achieve these goals, it’s built with Rust for the logic, RocksDB for the data layer, gRPC for transport, and serialized protobuf messages as the storage format.

The advantages provided by Rust, RocksDB, and gRPC are correctness, speed, and a bit of savings in storage size.

In today’s database climate, we have a lot of different “camps” with really strong feelings about “what is right.” But the actual “right” answer is the one that best solves the problem for a user, so it doesn’t matter which letters you want to rearrange or put “No” in front of; at the end of the day we just want fast responses, and we want to minimize the number of things we have to manage ourselves in order to scale our systems.

Talking to you like you’re 5, again

I think it’s worth acknowledging that everything in data reduces back to “address” and “stuff at that address” (or “key” and “value,” but if you say “all data storage reduces to key value stores” the “LAMP is life” or “.NET MVC contractor” GenXers start to yell, and the “I only know how to use ActiveRecord and Prisma” millennials start to cry).

A hairy problem with data in distributed systems is figuring out how to split it all up. In a lot of systems you have to think about some form of sharding, replication, location, etc. etc. etc. So we need a minimally complex way to scale our data, but we also need to store it in a way that it can be speedily retrieved later. Holding these two things in tension, in our current world, I think we often come up with less-than-ideal solutions: our sharding strategies get very domain-specific, or are informed by developer preference, and a lot of the changes we end up needing to make later carry much higher overhead because they require migrations, code updates, breaking changes, and blah, blah, blah, blah, blah.

With that in mind, I think we really need to take a step back and start evaluating the idea of building on “backbones” – backbones which make up the “heart and soul” of a system’s data – and then… just get comfy being more loosey-goosey with the way we materialize views over top of the backbone (without ever putting the “backbone data” in danger). Our “backbones” would be our SSOTs, or “single sources of truth.” And honestly… I think the easiest way to build a backbone like this is with a distributed KV store. A distributed KV store can scale horizontally with minimal complexity: e.g., prefix the box address onto the primary key, IP_ADDRESS#RECORD_A, let the load balancer interpret the IP_ADDRESS, and you’re basically just extending the PK (this is not how I would do this IRL, this is just pseudo to convey an idea).
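
To make the pseudo a little more concrete, here’s a tiny Rust-flavored sketch of the idea; the key format and the address are made up purely for illustration, not a real routing scheme:

// purely illustrative: a made-up key format like "10.0.0.12#RECORD_A",
// where whatever sits in front of the load balancer peels the node
// address off the front of the key and forwards the rest
fn route(prefixed_key: &str) -> Option<(&str, &str)> {
    // split "IP_ADDRESS#RECORD_A" into ("IP_ADDRESS", "RECORD_A")
    prefixed_key.split_once('#')
}

fn main() {
    if let Some((node_addr, pk)) = route("10.0.0.12#RECORD_A") {
        println!("forward lookup of {pk} to node {node_addr}");
    }
}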

Why gRPC and Protobuf?

Okay, leaving the distributed stuff behind, because currently the project only supports a single instance at a time (no autoscaling stuff yet but hopefully, ya know, someday soon). gRPC works great because it’s more efficient on the wire than JSON over HTTP, and protobuf’s compact binary encoding also makes it a great candidate for the storage format. The other thing that’s cool is that gRPC has gzip codecs, so compression may show up as a configuration option later.
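
For what it’s worth, here’s a rough sketch of what that config option might boil down to if the server uses tonic. This is not the project’s actual code: it assumes tonic with its gzip feature enabled, a tonic-generated DatabaseServer wrapper, and a hypothetical MyDatabase impl of the generated service trait.

// a sketch, not the project's current code. `DatabaseServer` would come from
// the tonic-generated module and `MyDatabase` is a hypothetical impl of the
// generated `Database` service trait. Requires tonic's `gzip` feature.
use tonic::codec::CompressionEncoding;

fn compressed_service(db: MyDatabase) -> DatabaseServer<MyDatabase> {
    DatabaseServer::new(db)
        // advertise gzip compression on responses
        .send_compressed(CompressionEncoding::Gzip)
        // accept gzip-compressed requests from clients
        .accept_compressed(CompressionEncoding::Gzip)
}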

“Head in the Clouds”

There are some cool opportunities for collocated compute and data storage as well. I have thoughts about bolting a runtime onto the side of the database which would allow collocated procedures (“host-op queries”) to hit the data directly on disk (when this becomes distributed it would still feel and behave that way for the developer, but the node itself may need to go find records it doesn’t currently own on other nodes, and all of that would be opaque to the collocated compute function). So you say, “oh, these are just SPROCS” and I would say “yes! it’s just that those of you who know what a SPROC is really hate it whenever I start by saying that’s what it is.”

If you need to understand why I think this is a good idea (sadly, it’s not my idea… though I was really excited when I thought it was), look no further than the creator of Postgres. Michael Stonebraker’s VoltDB (since renamed Volt Active Data) has supported JVM-based SPROCS as one of its headline features for a long-ass time. Not only that, Postgres will let you run what are essentially Lambda-style JavaScript functions as stored procedures via the plv8 extension, and AWS RDS even supports plv8 on its Postgres instances, so I genuinely have no idea why nobody does this.

Anyway, collocating compute with data is cool and there is precedent.

Doubling down on theoretical SPROCS

I would probably opt for a JavaScript + WebAssembly runtime though, instead of the JVM. The JVM is cool ‘n all, but I need a way to ship around sandboxed Rust, and WASM is the best lightweight way to do that right now. I also think you need a good “move fast and break things” scripting language for certain kinds of tasks (though I am feeling less and less of that lately), and JavaScript/TypeScript seem like pretty sane defaults for people who don’t have delusions of turning your grandma’s Facebook clicks into the meaning of life (or whatever lie you tell yourself about what good machine learning is actually doing for the world right now).
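
As a rough sketch of what the hosting side could look like with the wasmtime crate (the module path and the exported run function are hypothetical stand-ins for a user-supplied procedure):

// a sketch with the wasmtime crate; "proc.wasm" and its exported `run`
// function are hypothetical stand-ins for a user-supplied procedure
use wasmtime::{Engine, Instance, Module, Store};

fn run_procedure() -> Result<i32, Box<dyn std::error::Error>> {
    let engine = Engine::default();
    // load a guest module (e.g. Rust compiled to WASM)
    let module = Module::from_file(&engine, "proc.wasm")?;
    let mut store = Store::new(&engine, ());
    let instance = Instance::new(&mut store, &module, &[])?;

    // call an exported function; a real host-op procedure would also get
    // host functions for reading/writing records, not just a bare i32
    let run = instance.get_typed_func::<i32, i32>(&mut store, "run")?;
    Ok(run.call(&mut store, 7)?)
}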

Actual Usage

To set up the database, you define a .proto file with messages in it. Each of those messages corresponds to a “model.” If you wanted to have the logical concept of a House in your system, you would write the following proto file:

// schema.proto
syntax = "proto3";
package database;

message House {
    // required. this is the key you query by later
    string pk = 1;

    string color = 2;
    string address = 3;
}

and then you would run the following:

## haven't settled on a name, so I'm just swapping out my `cargo run` with a made up one. "sean-db" def not gonna be it tho
sean-db create schema.proto

Running that command will create a server binary and a proto file for your consuming services. So in out/ you’ll have server and database.proto.

The database.proto is what any developer who is familiar with gRPC can use to code-gen and build a client for the database. Right now the database is very lightweight and has no administration infrastructure, permissions, auth, or mTLS, so you’d really only want to run it in something like a private subnet in the same VPC as its consumers… but I intend to evolve all of that over time.
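
For example, a Rust consumer using tonic could point tonic-build at that file from its build.rs. This is just a sketch; it assumes tonic-build as a build dependency and that you’ve copied out/database.proto into the client project as proto/database.proto.

// build.rs (sketch): assumes tonic-build as a build-dependency and that
// out/database.proto has been vendored into the client repo as proto/database.proto
fn main() -> Result<(), Box<dyn std::error::Error>> {
    tonic_build::compile_protos("proto/database.proto")?;
    Ok(())
}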

Running

Once RocksDB is finally done building (holy fuck that takes way too long, and I need to figure out how to cache it or do some other Rustacean magic to make it stop), you should be able to run the server with:

./out/server

Clients

Because the database server is just a gRPC server, you can use the gRPC libraries for any language you like. And IN FACT you can also roll over to your preferred gRPC client tool and type in localhost:50051, and because the server implements gRPC reflection, when you plug in the URL (if your tool supports it) it can automatically load up all the available RPCs and even generate placeholder request payloads to send.

Example

The simplest example here (because right now it’s “just a kv store” and I’m not gonna explain single-table design to everyone rn) is just one model, so we’re gonna pick Dog.

Schema

// cli/test.proto
syntax = "proto3";
package database;

message Dog {
  string pk = 1;

  string name = 2;
  uint32 age = 3;
  string breed = 4;
}

If we run our above command to generate ./out/server and ./out/database.proto, we get the actually useful proto definition for our client (./out/database.proto):

sean-db create cli/test.proto

Proto

When we run the CLI against our schema, it barfs out this new proto (database.proto). This new proto contains the actual primitives for database interaction and has all the fun stuff we need in application-land.

This file shouldn’t ever need to be modified by the developer directly; hand-editing it could break a lotta stuff.

syntax = "proto3";
package database;

message Dogs { 
  repeated Dog dogs = 1; 
}
message Dog {
  string pk = 1;
  string name = 2;
  uint32 age = 3;
  string breed = 4;
}

message IndexQuery {
  oneof expression {
    string eq = 1;
    string gte = 2;
    string lte = 3;
    string begins_with = 4;
  }
}

message Empty {}

service Database {
  rpc GetDogsByPk(IndexQuery) returns (Dogs) {}
  rpc DeleteDogsByPk(IndexQuery) returns (Empty) {}
  rpc PutDog(Dog) returns (Empty) {}
  rpc BatchPutDogs(Dogs) returns (Empty) {}
}

Client

I was gonna put together a full-on code example for every language, but that’s exhausting and grpcurl gives you the idea (there’s a rough Rust client sketch after the grpcurl calls below).

Basically we have 4 RPCs for each model:

  • GetXByPk
  • DeleteXByPk
  • PutX
  • BatchPutX

And then you have the IndexQuery helper, which basically lets you do key-range queries.

Here are the really basic examples:

## PutDog
grpcurl -plaintext -d '{
  "pk": "DOG#1",
  "name": "Rover",
  "age": 3,
  "breed": "Golden Retriever"
}' localhost:50051 database.Database.PutDog

## BatchPutDogs
grpcurl -plaintext -d '{
  "dogs": [
    {
      "pk": "DOG#2",
      "name": "Buddy",
      "age": 2,
      "breed": "Labrador"
    },
    {
      "pk": "DOG#3",
      "name": "Max",
      "age": 4,
      "breed": "Poodle"
    }
  ]
}' localhost:50051 database.Database.BatchPutDogs

## GetDogsByPk
grpcurl -plaintext -d '{"begins_with": "DOG#"}' localhost:50051 database.Database.GetDogsByPk

## DeleteDogsByPk
grpcurl -plaintext -d '{"eq": "DOG#3"}' localhost:50051 database.Database.DeleteDogsByPk

Conclusion

Really all I did was build a crappy KV-store ORM with Rust, gRPC, and RocksDB. But the thing that is (maybe) interestingly unique is that we actually create code paths based on our access patterns: instead of leaving the query plan to be parsed and interpreted at runtime from some structured string, we have “compiled” our access patterns into the database server itself.
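
To make that concrete, here’s a rough sketch (not the actual generated code) of what a compiled GetDogsByPk handler could look like over RocksDB; a begins_with query just becomes a forward iterator seeded at the prefix. It assumes a recent rust rocksdb crate and the prost-generated Dog message.

// a sketch of what generated code *could* look like, not the real server.
// Assumes a recent rust `rocksdb` crate and a prost-generated `Dog` message.
use prost::Message;
use rocksdb::{Direction, IteratorMode, DB};

fn get_dogs_by_prefix(db: &DB, prefix: &str) -> Vec<Dog> {
    let mut dogs = Vec::new();
    // seek to the first key >= prefix and walk forward
    for item in db.iterator(IteratorMode::From(prefix.as_bytes(), Direction::Forward)) {
        let Ok((key, value)) = item else { break };
        // stop once we've walked past the keys sharing the prefix
        if !key.starts_with(prefix.as_bytes()) {
            break;
        }
        // values are stored as serialized protobuf messages
        if let Ok(dog) = Dog::decode(value.as_ref()) {
            dogs.push(dog);
        }
    }
    dogs
}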

There are challenges with this, and currently we only support a very narrow set of access patterns… but with Model and Field Macros™ I think we’ll be able to compile in a lot more optimization for a broader set of access patterns.

The goal of this project is to develop an understanding of what we do OVER TOP OF these primitives in order to build our apps, so that we can take those learnings, “pull them back” upstream into the database, and figure out just how much of the repetitive work we currently delegate to our developers can simply be compiled instead.

Caveats

One thing that I haven’t thought a ton about… is migrations. The reason is that I have a pretty high degree of confidence that “don’t break the rules of protobuf” is a reasonable enough constraint: only add new fields, never reuse or renumber existing field tags, and don’t change a field’s type. As long as developers never break those rules, their stored data should keep deserializing just fine.

PS

Please don’t challenge me about relationships, sorting, searching, analytics, or any other bullshit you think I haven’t thought about. I am already thinking about those things and have roadmaps for how to manage each (mostly with those #ModelAndFieldMacros I mentioned earlier). If you have any thoughts, ideas, or constructive feedback, I’m always open to those.

If it can be written in code, it can be a plaintext annotation in the config file that indicates usage of code variation X instead of code variation Y. I also just think it’s worth noting that almost every feature you’re likely worried about because “it’s your favorite part of other system _____” was probably built using “addresses with stuff at those addresses” (aka “key value stores”), so… it’s fine. I like building stuff and know there is more work to do.