Ever had a microservice dependency fail and bring down your entire application with it? It's a common problem in distributed systems, but one that can be solved with the Circuit Breaker pattern. I'm excited to share my new Dart package, dart_circuit_breaker, now available on pub.dev. It provides a simple, state-driven solution to prevent cascading failures and keep your services healthy. Learn how to build more robust applications and add it to your project today! 🔗 https://lnkd.in/dFKargyV #Dart #Flutter #SystemDesign #DistributedSystems #ResilientSystems #CircuitBreaker #SoftwareArchitecture
Introducing dart_circuit_breaker: Prevent cascading failures in Dart apps
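The package's exact API isn't shown in the post, but the pattern itself is language-agnostic. Here is a minimal sketch in Java with illustrative names (not the dart_circuit_breaker API): after a threshold of consecutive failures the breaker opens and fails fast, and after a reset timeout it lets one trial call through (half-open), closing again on success.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

// Sketch of the Circuit Breaker pattern (illustrative names, not the
// dart_circuit_breaker API): after `maxFailures` consecutive failures the
// breaker opens and fails fast; after `resetTimeout` it allows one trial
// call (half-open) and closes again if that call succeeds.
class CircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private final int maxFailures;
    private final Duration resetTimeout;
    private State state = State.CLOSED;
    private int failures = 0;
    private Instant openedAt;

    CircuitBreaker(int maxFailures, Duration resetTimeout) {
        this.maxFailures = maxFailures;
        this.resetTimeout = resetTimeout;
    }

    synchronized <T> T call(Supplier<T> action) {
        if (state == State.OPEN) {
            if (Instant.now().isAfter(openedAt.plus(resetTimeout))) {
                state = State.HALF_OPEN; // timeout elapsed: allow one trial call
            } else {
                throw new IllegalStateException("circuit open: failing fast");
            }
        }
        try {
            T result = action.get();
            failures = 0;            // success resets the failure count
            state = State.CLOSED;
            return result;
        } catch (RuntimeException e) {
            failures++;
            if (state == State.HALF_OPEN || failures >= maxFailures) {
                state = State.OPEN;  // trip the breaker
                openedAt = Instant.now();
            }
            throw e;
        }
    }

    synchronized State state() { return state; }
}
```

The key property is the fail-fast path: once open, callers get an immediate error instead of piling up requests against an already-struggling dependency.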
Virtual threads are fantastic for simplifying blocking I/O scenarios, but reactive streams still have several key advantages:

**Composition and Flow Control**: Reactive streams excel at complex data transformations and pipeline composition. You can elegantly chain operations like map, filter, flatMap, and handle backpressure declaratively. With virtual threads, you'd need more imperative coordination code for equivalent complex workflows.

**Backpressure Management**: This is huge for I/O-bound systems. Reactive streams have built-in backpressure handling: when downstream consumers can't keep up, the system can buffer, drop, or slow down producers automatically. Virtual threads don't inherently solve this; you still need explicit queue management and coordination.

**Resource Efficiency at Extreme Scale**: While virtual threads are lightweight, reactive streams can be even more efficient for scenarios with millions of concurrent operations, since they're event-driven rather than thread-based, even if those threads are virtual.

**Event-Driven Architectures**: For systems built around event streams, message brokers, or real-time data processing, reactive streams are more naturally aligned with the problem domain.

**Integration with Reactive Ecosystems**: If you're already using reactive databases, message systems, or frameworks, staying reactive end-to-end often makes more sense than mixing paradigms.

That said, virtual threads are game-changing for traditional request-response patterns and make blocking I/O code much simpler to write and debug. The choice often comes down to whether your problem is naturally stream-oriented versus request-oriented, and how much complexity you're willing to trade for the reactive benefits.

What's your take on the debugging and maintainability aspects? That's often where the rubber meets the road in real projects.
Simplifying Code: migrating from Reactive to Virtual Threads
This is exactly what virtual threads were made for: making developers' lives simpler by making code easier to maintain. And yes, the reactive code might have been faster and less resource-intensive, but it probably was still less economical. Looking forward to the 'yes, but...' comments here 😂
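The backpressure point raised above can be made concrete with the JDK's own java.util.concurrent.Flow API (Java 9+). This is a minimal sketch, not production code: the subscriber requests items one at a time, so the publisher can never overrun it, and submit() blocks once the buffer fills.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

// Declarative backpressure with java.util.concurrent.Flow: the subscriber
// pulls one item at a time via request(1), so a fast producer cannot
// overwhelm a slow consumer.
class BackpressureDemo {
    static List<Integer> consumeWithBackpressure(List<Integer> items) {
        List<Integer> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<Integer>() {
                private Flow.Subscription subscription;
                public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1); // ask for exactly one item to start
                }
                public void onNext(Integer item) {
                    received.add(item);
                    subscription.request(1); // pull the next item only when ready
                }
                public void onError(Throwable t) { done.countDown(); }
                public void onComplete() { done.countDown(); }
            });
            items.forEach(publisher::submit); // submit() blocks if the buffer fills
        } // close() signals onComplete once buffered items drain
        try { done.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return received;
    }
}
```

With virtual threads, the equivalent coordination would be an explicit bounded queue between producer and consumer, which is exactly the "explicit queue management" trade-off described above.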
For those of you who use Hexagonal/Onion Architecture in your applications:
1. What's your primary reason/goal for using it?
2. How do you assign code to the abstraction elements (ports, adapters, rings)?
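For context on question 2, one common assignment, sketched in Java with illustrative names: the domain core defines and owns the ports (interfaces), the core's services depend only on those ports, and adapters in the outer ring implement them.

```java
// Hexagonal structure sketch (illustrative names, one common assignment):
// the core owns the port; adapters live at the edges.

// Port (driven side): an abstraction the core needs, defined BY the core.
interface OrderRepository {
    void save(String orderId);
    boolean exists(String orderId);
}

// Domain core: pure application logic, no framework or I/O dependencies.
class PlaceOrderService {
    private final OrderRepository repository;
    PlaceOrderService(OrderRepository repository) { this.repository = repository; }
    boolean placeOrder(String orderId) {
        if (repository.exists(orderId)) return false; // idempotency rule lives in the core
        repository.save(orderId);
        return true;
    }
}

// Adapter (outer ring): here an in-memory fake; a JDBC/JPA adapter would
// occupy the same ring in production code.
class InMemoryOrderRepository implements OrderRepository {
    private final java.util.Set<String> ids = new java.util.HashSet<>();
    public void save(String orderId) { ids.add(orderId); }
    public boolean exists(String orderId) { return ids.contains(orderId); }
}
```

The payoff is the same in any variant: the core compiles and tests without any adapter, and swapping infrastructure means writing a new adapter, not touching the rings inside it.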
Low-level Swift: Linking
The LLVM backend transforms LLVM IR into machine code and produces object .o files. These files contain optimised, CPU-architecture-specific assembly instructions alongside metadata, constants, and debug information. ARM assembly object files might look a little like this: See the full blog here: https://lnkd.in/epWH2JYm
Want fewer on-target bugs? Pretend the hardware doesn't exist.

Most embedded developers do this:

if (gpio_read(PIN_BUTTON)) {
    // Do something
}

It looks harmless, right? But now your logic can't run unless a button is physically wired to that pin. You've just handcuffed your system to a piece of silicon.

Now, try this instead:

bool is_pressed = read_button_state();

Let read_button_state() handle the hardware. Let the rest of your code just deal with data. That one shift means:
- You can now simulate button presses in unit tests
- Your logic works the same whether you're on a dev board, in CI, or shipping to 100k units
- You're no longer blocked waiting for a dev board; you're already shipping code

That's how you ship faster, break less, and sleep better. ♻️ Repost this if you're tired of debugging on-target.
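The unit-testing claim above can be shown concretely. This sketch translates the idea into Java (names are illustrative; on a real target the adapter would wrap gpio_read()): the logic layer consumes a small interface, and tests substitute a scripted fake.

```java
// Hardware-abstraction sketch (illustrative names): logic depends on an
// interface, not on a GPIO register, so it runs identically in CI.
interface ButtonInput {
    boolean isPressed();
}

// Pure logic: counts rising edges (press events) across poll() calls.
class ClickCounter {
    private final ButtonInput button;
    private boolean wasPressed = false;
    private int clicks = 0;
    ClickCounter(ButtonInput button) { this.button = button; }
    void poll() {
        boolean pressed = button.isPressed();
        if (pressed && !wasPressed) clicks++; // count only the rising edge
        wasPressed = pressed;
    }
    int clicks() { return clicks; }
}

// Test double: replays a scripted sequence of samples, no hardware needed.
class FakeButton implements ButtonInput {
    private final boolean[] samples;
    private int i = 0;
    FakeButton(boolean... samples) { this.samples = samples; }
    public boolean isPressed() {
        return samples[Math.min(i++, samples.length - 1)]; // hold last sample
    }
}
```

On target, a thin adapter implementing ButtonInput is the only code that ever touches the pin.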
One key insight I gained from designing a scalable Pastebin is the critical role of a cache layer. In systems with a high read-to-write ratio, serving popular pastes from a cache keeps most requests from ever touching the database.
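That read-heavy pattern is typically implemented as a cache-aside (read-through) layer. A minimal sketch, with illustrative names and an in-memory map standing in for a real cache like Redis:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Cache-aside sketch for a read-heavy service: reads check the cache
// first and only fall through to the backing store on a miss.
class ReadThroughCache<K, V> {
    private final Map<K, V> cache = new HashMap<>();
    private final Function<K, V> backingStore;
    int storeLoads = 0; // exposed for the demo: counts cache misses

    ReadThroughCache(Function<K, V> backingStore) { this.backingStore = backingStore; }

    V get(K key) {
        V cached = cache.get(key);
        if (cached != null) return cached;   // hit: no database round trip
        storeLoads++;
        V loaded = backingStore.apply(key);  // miss: load from the store once...
        cache.put(key, loaded);              // ...then serve later reads from memory
        return loaded;
    }
}
```

With a, say, 100:1 read-to-write ratio, even a modest hit rate means the database sees a small fraction of total traffic; eviction and invalidation policy are the hard parts omitted here.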
Another bit of value rolled into #rlvgl. This is independent of lvgl and screen support, but it is an obstacle to anyone standing up an #embedded #rust project right now: the configuration of a modern microcontroller is complex. Some projects live off of device tree in the Linux world. In the bare-metal and RTOS world, vendors tend to ship a tool that helps you set this up. For Rust there is a well-standardized way of setting up HAL and PAC code in a vendor-agnostic manner, but the vendors have not caught up, so the vendor knowledge of clock trees and special cases can be lost or become a stumbling block. Since I want to demo rlvgl on real hardware, I need this layer, and rather than just write a bit of Rust code and work it out, I converted the STM open pin data repo to a Rust binary which includes a .ioc de-serializer that stores the data in a vendor-agnostic way. This then feeds templates for generating Rust BSP code, allowing rlvgl-creator to generate all of the pin, peripheral, and interrupt configuration via the agnostic #hal / #pac APIs, based on the custom configuration generated as a .ioc file by #cubemx. This was a quick #dall_e sketch, so the arrow on the right side is wrong: rlvgl-creator generates the PAC/HAL code in a vendor-agnostic way, translating from vendor-specific tool files. First up, #stmicro ...
🚀 Sneak peek: n8n observability with Parseable 🔍

Setting up n8n workflows is fun; debugging them should feel the same. Question for you: what's the single most important thing you need when instrumenting observability for an n8n flow? Drop your #1 must-have in the comments below.

Why does this matter? n8n ships its logs through the popular Winston logger. Handy, but teams keep tripping over a few bumps:
• Performance tax: Winston's heavier JSON serialization can add noticeable CPU & latency versus lightweight loggers.
• Trace context gaps: getting Winston logs to play nicely with OpenTelemetry often needs custom code.

Stay tuned for the full write-up on auto-instrumenting Winston, normalizing logs, and linking everything to metrics + traces in one place. Until then, try Parseable at demo.parseable.com

#n8n #observability #logging #NodeJS #Winston #Parseable
Did you know that since #Angular V20, you can make your host element type-safe 🤩 You simply add typeCheckHostBindings to your tsconfig and your host element becomes fully type-safe 🎉 Stop using @HostBinding and @HostListener, and start using the host element inside your component decorator ✅
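A minimal sketch of the tsconfig change the post describes, assuming the flag sits under angularCompilerOptions like other Angular template-checking options:

```json
{
  "angularCompilerOptions": {
    "typeCheckHostBindings": true
  }
}
```

With that enabled, expressions in the component decorator's host object are checked by the template type-checker instead of being treated as untyped strings.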
This is a re-post with a new picture. I had modified the RISC-V picture without changing the bottom portion to reflect x86. ϕEngine has only 4.5× the code density of x86; the 19× figure was for RISC-V. Sorry about that.

We compared RISC-V with ϕEngine on saturated add. But how does the x86 compare? At least it has flags, so the code won't be quite as bad as RISC-V's. For an even comparison, we will assume that the calling convention allows passing of the arguments in EAX and EBX:

0000: 01 d8           add eax,ebx
0002: 71 07           jno b <NoSat>
0004: 72 06           jb  c <Minus>
0006: B8 FF FF FF 7F  mov eax,0x7FFFFFFF
000B: c3              ret
000C: B8 00 00 00 80  mov eax,0x80000000
0011: C3              ret
0012:

Keep in mind that, when overflow happens, the sign is the opposite of what it should be. The code came to 18 bytes. That is 4.22× the code density of RISC-V. Much of that is because x86 has flags and can detect overflow in hardware. But the x86 can't do clamping in hardware, so it has to test the flags and load the appropriate values into the register that returns the result. So ϕEngine code density is 4.5× that of the x86. The x86 is about midway between RISC-V and ϕEngine on a log scale.
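For readers who prefer source over assembly, here is the operation all of these listings implement, sketched in Java: a 32-bit signed saturating add that clamps to INT_MAX or INT_MIN on overflow instead of wrapping.

```java
// 32-bit signed saturating addition: the semantics of the x86 listing
// above, expressed in plain code. Widening to 64 bits makes the true
// sum exact, so clamping is a simple comparison.
class SatAdd {
    static int satAdd(int a, int b) {
        long sum = (long) a + (long) b;                          // cannot overflow in 64 bits
        if (sum > Integer.MAX_VALUE) return Integer.MAX_VALUE;   // clamp to 0x7FFFFFFF
        if (sum < Integer.MIN_VALUE) return Integer.MIN_VALUE;   // clamp to 0x80000000
        return (int) sum;
    }
}
```

The two clamp constants are exactly the two immediates loaded by the mov instructions in the listing.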