A canonical model, or normalized database schema, will not allow you to scale. As you add indexes to speed up queries, your inserts and updates slow down. The only way out is to break the normalization scam that DBAs are still milking. The easiest way to do that is to adopt a ledger for state. #EventModeling #EventSourcing
@adymitruk a rigid database schema will also eventually severely impede your ability to add functionality as your product evolves (this is perhaps the more relevant form of scalability for many apps).
@adymitruk How many times does one hear "Trying to get this DB migration done, no impediments" at standup? DB migrations *are* impediments!
@leviramsey @adymitruk so help me out, guys. what does it look like in the real world to update your schema and data from event streams?

@adymitruk @brandonmull @leviramsey

You don’t update the data of events; they are immutable facts of the past. You can add information in your interpretation of past events if it’s possible to use a default value. It’s the same with classical storage: you can’t magically make missing information appear from nowhere.
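A minimal sketch of that "default value" interpretation, assuming a hypothetical `OrderPlaced` event that gained a `currency` field after older events were already written (event and field names are illustrative):

```python
import json

def read_order_placed(raw: str) -> dict:
    """Tolerant reader: old events missing a later-added field get a default."""
    event = json.loads(raw)
    # Events written before "currency" existed are interpreted as USD;
    # the stored event itself is never rewritten.
    event.setdefault("currency", "USD")
    return event

old_event = '{"order_id": "A-1", "amount": 42}'
print(read_order_placed(old_event))
# {'order_id': 'A-1', 'amount': 42, 'currency': 'USD'}
```

The default lives in the reader, not the store, so history stays immutable.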

Schema updates: Depends on what change you have in mind, can you elaborate!?

@TonyBologni @brandonmull @leviramsey I wasn't implying that past events are modified
@adymitruk @leviramsey @brandonmull I replied to brandon ;-)
@TonyBologni @leviramsey @brandonmull I was thinking that. Mastodon doesn't rearrange the order of replied-to accounts, so i wrote that just in case but felt wrong doing so 😄

@brandonmull @adymitruk It depends.

You don't update the events in place. In some cases you might reproject (like if there's a wire-format change). More likely you're going to do some form of event versioning (whether explicitly versioning events or just defining a new type of event) and have a tolerant reader of some sort.

The schema of your journal might change, but needing to is probably a sign that you're trying to do too much at the journal level itself: smart plumbing is an antipattern.

@leviramsey @adymitruk i use upgraders to manage event schema changes. is that what you mean by "tolerant readers"? anyway, i was referring to migrations of read models, not events.
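A minimal sketch of such "upgraders" (often called upcasters), assuming hypothetical versioned events; the field names and version numbers are illustrative:

```python
def upgrade_v1_to_v2(event: dict) -> dict:
    """v2 split a single 'name' field into first/last (illustrative change)."""
    first, _, last = event.pop("name").partition(" ")
    return {**event, "version": 2, "first_name": first, "last_name": last}

def upgrade_v2_to_v3(event: dict) -> dict:
    """v3 added an 'email' field, defaulted for older events."""
    return {**event, "version": 3, "email": event.get("email", "unknown")}

UPGRADERS = {1: upgrade_v1_to_v2, 2: upgrade_v2_to_v3}

def upgrade(event: dict, target: int = 3) -> dict:
    # Apply each upgrader in turn until the event reaches the target version.
    while event["version"] < target:
        event = UPGRADERS[event["version"]](event)
    return event

print(upgrade({"version": 1, "name": "Ada Lovelace"}))
```

The stored events never change; each reader sees the latest shape after the chain runs.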
@brandonmull @adymitruk building a new read model from scratch is generally more effective than a migration.
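A minimal sketch of what "from scratch" means here, with a hypothetical in-memory journal of bank-account events (names and event types are illustrative):

```python
# Instead of migrating an existing read-model table in place, replay the
# journal into a brand-new projection and swap it in once it has caught up.
events = [
    {"type": "AccountOpened", "id": "acct-1"},
    {"type": "Deposited", "id": "acct-1", "amount": 100},
    {"type": "Deposited", "id": "acct-1", "amount": 50},
    {"type": "Withdrawn", "id": "acct-1", "amount": 30},
]

def project_balances(journal: list[dict]) -> dict[str, int]:
    """The new read model starts empty and is built purely from past events."""
    balances: dict[str, int] = {}
    for e in journal:
        if e["type"] == "AccountOpened":
            balances[e["id"]] = 0
        elif e["type"] == "Deposited":
            balances[e["id"]] += e["amount"]
        elif e["type"] == "Withdrawn":
            balances[e["id"]] -= e["amount"]
    return balances

print(project_balances(events))  # {'acct-1': 120}
```

A new schema just means a new projection function; the old read model keeps serving until the replacement is ready.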

@leviramsey @adymitruk @brandonmull

That works if you only have one read model that’s affected. When you have hundreds…

@asher @adymitruk @brandonmull why would you need to change more than one read model if you're not changing the event model? (Note that Brandon is talking about not changing the event side of the model.)

Other posts cover the case where the event model is changing, which typically will only result in a small number of read models changing if you have multiple read models.

@leviramsey @adymitruk @brandonmull

Wow.. I totally missed that comment 😂

@leviramsey @asher @adymitruk changes to reporting structure, esp. long term historical data
@asher @leviramsey @adymitruk a change in schema is typically targeted, so hundreds of read models isn't a scenario i think of as likely, nor one i'm interested in
@leviramsey @adymitruk that's what i meant. whether you're copying or reconstituting, it's a migration... from one schema to another
@brandonmull @adymitruk I probably tend to interpret it more in the in-place, mutable sense.