Improved Declarative Persistence in Wolverine
Continuing a consistent theme of Wolverine as the antidote to high-ceremony Clean/Onion Architecture approaches, Wolverine 4.8 adds some significant improvements to its declarative persistence support (prompted in part by seeing a recent JasperFx Software client write a fair amount of repetitive code).
A pattern I try to encourage (and many Wolverine users do like) is to make the main method of a message handler or an HTTP endpoint the "happy path" after validation and even data lookups, so that the method itself can be a pure function concerned mostly with business or workflow logic. Wolverine can do this for you through its "compound handler" support, which gets you a low-ceremony flavor of Railway Programming.
With all that out of the way, here's the kind of code I saw a client frequently writing: an endpoint that needs to process a command referencing one or more entities or event streams in their system:
public record ApproveIncident(Guid Id);

public class ApproveIncidentEndpoint
{
    // Try to load the referenced incident
    public static async Task<(Incident, ProblemDetails)> LoadAsync(
        // Say this is the request body, which we can *also* use in
        // LoadAsync()
        ApproveIncident command,

        // Pulling in Marten
        IDocumentSession session,
        CancellationToken cancellationToken)
    {
        var incident = await session.LoadAsync<Incident>(command.Id, cancellationToken);
        if (incident == null)
        {
            return (null, new ProblemDetails { Detail = $"Incident {command.Id} cannot be found", Status = 400 });
        }

        return (incident, WolverineContinue.NoProblems);
    }

    [WolverinePost("/api/incidents/approve")]
    public SomeResponse Post(ApproveIncident command, Incident incident)
    {
        // actually do stuff knowing that the Incident is valid
    }
}
I’d ask you to mostly pay attention to the LoadAsync() method, and imagine copying and pasting it dozens of times across a system. And sure, you could go back to returning IResult as a continuation from the HTTP endpoint method above, but that moves the clutter back into your HTTP method and adds more manual work to mark up the method with attributes for OpenAPI metadata. Or we could improve the OpenAPI metadata generation by returning something like Task<Results<Ok<SomeResponse>, ProblemHttpResult>>, but c’mon, that’s an absolute eyesore that detracts from the readability of the code.
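For reference, that noisier signature would look something like the sketch below. This uses ASP.NET Core's standard TypedResults helpers, and it reuses the types from the example above; it's here only to show the clutter, not as recommended Wolverine usage:

```csharp
// A sketch of the "eyesore" alternative: encoding both outcomes in the
// return type so OpenAPI metadata can be inferred from the signature.
[WolverinePost("/api/incidents/approve")]
public static async Task<Results<Ok<SomeResponse>, ProblemHttpResult>> Post(
    ApproveIncident command,
    IDocumentSession session,
    CancellationToken cancellationToken)
{
    var incident = await session.LoadAsync<Incident>(command.Id, cancellationToken);
    if (incident == null)
    {
        // The validation clutter is back inside the endpoint method itself
        return TypedResults.Problem(
            detail: $"Incident {command.Id} cannot be found",
            statusCode: 400);
    }

    // actually do stuff knowing that the Incident is valid
    return TypedResults.Ok(new SomeResponse());
}
```

The OpenAPI story works out, but every endpoint now carries the lookup, the null check, and the branching return type.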
Instead, let’s use the newly enhanced version of Wolverine’s [Entity] attribute to simplify the code above and still get OpenAPI metadata generation that reflects both the 200 SomeResponse happy path and 400 ProblemDetails with the correct content type. That would look like this:
[WolverinePost("/api/incidents/approve")]
public static SomeResponse Post(
    // The request body. Wolverine doesn't require [FromBody], but it wouldn't hurt
    ApproveIncident command,

    [Entity(OnMissing = OnMissing.ProblemDetailsWith400, MissingMessage = "Incident {0} cannot be found")]
    Incident incident)
{
    // actually do stuff knowing that the Incident is valid
    return new SomeResponse();
}
Behaviorally, at runtime that endpoint will try to load the Incident entity from whatever persistence tooling is configured for the application (Marten in the tests) using the “Id” property of the ApproveIncident object deserialized from the HTTP request body. If the data cannot be found, the HTTP request ends with a 400 status code and a ProblemDetails response carrying the message configured above. If the Incident can be found, it’s happily passed along to the main endpoint method.
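The same attribute applies in a plain message handler, where (as the OnMissing comments later in the post describe) a missing entity is logged and execution simply stops, since there's no HTTP response to shape. A minimal sketch, reusing the types from the endpoint above:

```csharp
public static class ApproveIncidentHandler
{
    // In a message handler, a missing Incident is logged and execution
    // stops quietly -- no 400/404 semantics apply here.
    public static IStorageAction<Incident> Handle(
        ApproveIncident command,
        [Entity] Incident incident)
    {
        incident.Approved = true;
        return Storage.Update(incident);
    }
}
```

Same declarative lookup, same pure-ish method body, with the transport-specific "not found" behavior handled by Wolverine.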
Not every endpoint or message handler is really this simple, but plenty of times you'd just be changing a property on the incident and persisting it. We can still keep the method mostly a pure function with Wolverine's existing persistence helpers, like so:
[WolverinePost("/api/incidents/approve")]
public static (SomeResponse, IStorageAction<Incident>) Post(
    // The request body. Wolverine doesn't require [FromBody], but it wouldn't hurt
    ApproveIncident command,

    [Entity(OnMissing = OnMissing.ProblemDetailsWith400, MissingMessage = "Incident {0} cannot be found")]
    Incident incident)
{
    incident.Approved = true;

    // actually do stuff knowing that the Incident is valid
    return (new SomeResponse(), Storage.Update(incident));
}
Here are some things I'd like you to know about the [Entity] attribute above and how it will work out in real usage:
The options so far for OnMissing behavior are:
public enum OnMissing
{
    /// <summary>
    /// Default behavior. In a message handler, the execution will just stop after logging
    /// that the data was missing. In an HTTP endpoint the request will stop w/ an empty
    /// body and 404 status code
    /// </summary>
    Simple404,

    /// <summary>
    /// In a message handler, the execution will log that the required data is missing and
    /// stop execution. In an HTTP endpoint the request will stop w/ a 400 response and a
    /// ProblemDetails body describing the missing data
    /// </summary>
    ProblemDetailsWith400,

    /// <summary>
    /// In a message handler, the execution will log that the required data is missing and
    /// stop execution. In an HTTP endpoint the request will stop w/ a 404 status code
    /// response and a ProblemDetails body describing the missing data
    /// </summary>
    ProblemDetailsWith404,

    /// <summary>
    /// Throws a RequiredDataMissingException using the MissingMessage
    /// </summary>
    ThrowException
}
The Future
This new improvement to declarative data access is meant to be part of a larger effort to address bigger use cases. Not every command or query involves just a single entity lookup or a single Marten event stream, so what do you do when there are multiple declared data lookups?
I’m not sure what everyone else’s experience is, but a leading cause of performance problems in the systems I’ve helped with over the past decade has been too much chattiness between the application servers and the database. The next step for declarative data access is to have at least the Marten integration opt into Marten’s batch-querying mechanism, so that multiple data lookups in a single HTTP endpoint or message handler are combined into fewer network round trips.
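To make the chattiness concrete, imagine an endpoint that touches two entities. Everything here other than the [Entity] and Storage helpers from earlier is hypothetical (the AssignIncident command, the Agent entity, and the AssignedTo property are made up for illustration), and I'm assuming Wolverine's id-matching conventions pair the IncidentId and AgentId properties with their entity types:

```csharp
public record AssignIncident(Guid IncidentId, Guid AgentId);

public static class AssignIncidentEndpoint
{
    [WolverinePost("/api/incidents/assign")]
    public static (SomeResponse, IStorageAction<Incident>) Post(
        AssignIncident command,

        // Today, each of these is its own round trip to the database.
        // The batching goal is to fetch both in a single round trip.
        [Entity] Incident incident,
        [Entity] Agent agent)
    {
        incident.AssignedTo = agent.Id;
        return (new SomeResponse(), Storage.Update(incident));
    }
}
```

The endpoint code wouldn't need to change at all for the batching optimization; the generated lookup code underneath it would.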
The step after that is to extend our Marten integration for command handlers so that you can craft message handlers or HTTP endpoints that work against two or more event streams with strong consistency and transactional support, while also leveraging Marten batch querying for all the efficiency we can wring out of the tooling. I mostly want this behavior because I've seen clients who could genuinely use it to make their systems more efficient and remove some repetitive code.
I'll also admit that I think an alternative "aggregate handler workflow" that lets you work efficiently with more than one event stream and/or projected aggregate at a time would put the Critter Stack ahead of the commercial tools pursuing "Dynamic Consistency Boundaries," with what I'll argue is an easier-to-use alternative.
It’s already possible to work transactionally with multiple event streams at one time with strong consistency and both optimistic and exclusive version protections, but there’s opportunity for performance optimization here.
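As a rough sketch of what that multi-stream work looks like today with plain Marten, using its FetchForWriting API for per-stream version protection (the IncidentTransferred event type here is hypothetical, and the details are simplified):

```csharp
// Working against two event streams in one Marten unit of work.
public static async Task TransferAsync(
    Guid fromIncidentId,
    Guid toIncidentId,
    IDocumentSession session,
    CancellationToken ct)
{
    // FetchForWriting loads the aggregate and arms optimistic
    // version protection for each stream independently
    var from = await session.Events.FetchForWriting<Incident>(fromIncidentId, ct);
    var to = await session.Events.FetchForWriting<Incident>(toIncidentId, ct);

    from.AppendOne(new IncidentTransferred(toIncidentId));
    to.AppendOne(new IncidentTransferred(fromIncidentId));

    // One transaction covering both streams, with the version
    // checks enforced at commit time
    await session.SaveChangesAsync(ct);
}
```

The consistency story is already there; the optimization opportunity is in how the two fetches hit the database.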
Summary
Pride goeth before destruction, and an haughty spirit before a fall.
(Proverbs 16:18 in the King James version)
With the quote above out of the way, let's jump into some cocky salesmanship! My hope and vision for the Critter Stack is that it becomes the most effective tooling for building typical server-side software systems. My personal philosophy for making software development more productive over time is to ruthlessly reduce repetitive code and eliminate code ceremony wherever possible. Our community's take is that we can achieve better results than more typical Clean/Onion/Hexagonal Architecture codebases by compressing and compacting code without ever sacrificing performance, resiliency, or testability.
The declarative persistence helpers in this article are, I believe, a nice example of the evolving “Critter Stack Way.”