I've seen engineers be incredibly productive when they're in a state of "flow." But the way we structure most programs actively prevents this.

Here's an analogy. Think of an engineer writing complex software like a mechanic assembling a high-performance engine. All the parts are carefully laid out on a table, making assembly easier. An interruption is like someone bumping the table and sending the parts across the floor. The work doesn't just resume; the engineer has to painstakingly reposition the parts to get back to where they were. I've experienced this first-hand writing code that wasn't all that complex.

This is the reality of "context switching." It’s why a "quick question" can kill 30-60 minutes of productive time. Protecting your team from these interruptions isn't "coddling" them; it's sound economic policy.

#ModernVRO #GovTech #ArmyFutures #PEO #DevSecOps
How interruptions kill productivity in software engineering
More Relevant Posts
-
Prompt Engineering vs. Context Engineering

A simple way to think about it:

Prompt Engineering → make it behave
↳ Small instruction set for role, style, format, and goal.

Context Engineering → make it know
↳ Grounding data, tools, and memory so the model has what it needs.

I recently shared 4 memory strategies that support Context Engineering (https://lnkd.in/dUcdUuYq), and over the next few days I’ll be sharing new posts on retrieval, tools, and guardrails to give a more complete overview of the other aspects of Context Engineering. If there is anything specific you'd like me to include in these next posts, let me know.
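To make the split concrete, here is a minimal, self-contained Python sketch (not from the post; the retriever, memory, and model call are stand-in stubs): the system prompt is the "make it behave" layer, while build_context assembles the "make it know" layer.

```python
# A minimal sketch of prompt vs. context engineering.
# retrieve_docs and call_llm are toy stand-ins, not a real stack.

def retrieve_docs(question: str, top_k: int = 2) -> list[str]:
    # Stand-in for a real retriever (vector search, keyword search, etc.).
    corpus = {
        "returns": "Orders can be returned within 30 days with a receipt.",
        "shipping": "Standard shipping takes 3-5 business days.",
    }
    return [text for key, text in corpus.items() if key in question.lower()][:top_k]

def call_llm(system: str, user: str) -> str:
    # Stand-in for an actual model call; just echoes what it was given.
    return f"[system]\n{system}\n\n[user]\n{user}"

# Prompt engineering -> make it behave: a small instruction set.
SYSTEM_PROMPT = (
    "You are a support assistant. Answer briefly, cite the snippet you "
    "used, and say 'I don't know' if the context lacks the answer."
)

# Context engineering -> make it know: assemble grounding data and memory.
def build_context(question: str, memory: list[str]) -> str:
    docs = retrieve_docs(question)
    return "\n".join(["## Retrieved snippets", *docs, "## Memory", *memory])

if __name__ == "__main__":
    memory = ["Customer placed order #841 two days ago."]
    question = "When will my shipping arrive?"
    context = build_context(question, memory)
    print(call_llm(SYSTEM_PROMPT, f"{context}\n\n## Question\n{question}"))
```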
-
Prompt Engineering is dead. It's time for Context Engineering.

Do you remember, a year or so ago, when companies were offering up to 300K USD for prompt engineering? We don’t see those posts, or those companies, anymore. Prompt engineering was kind of hacky from the start: a fancy way to make everyone believe they were engineers. But things have changed, and the field has taken on the shape of traditional software engineering, backed by the power of LLMs.

So, let's look at the ultimate guide on how to build real agentic workflows that actually scale and don’t break in production.

📢 Table Of Contents
👉 Context Is Everything, Not Prompting
👉 Natural Language → Tool Calls
👉 Own Your Prompts
👉 Own Your Context Window
👉 Tools Are Structured Outputs
👉 Unify Execution State and Business State
👉 Launch, Pause, Resume: Lifecycle Management via Simple APIs
👉 Own Your Control Flow
👉 Compact Errors into Context Window
👉 Next Gen Agents: Modular, Accessible & Stateless Agents

#AI #AgenticAI #LLM #LLMs #aiagents #Artificialintelligence #Machinelearning #DataScience #DataScientists #data https://lnkd.in/eUHcwuKr
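As one illustration of the "Natural Language → Tool Calls" and "Tools Are Structured Outputs" items, here is a rough, hypothetical Python sketch (not taken from the linked guide): the model's output is treated as structured data that is parsed and validated before anything executes.

```python
# Sketch: tool calls as structured outputs. The tool names and the JSON
# shape are illustrative assumptions, not a specific framework's API.

import json
from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str          # which tool the model chose
    arguments: dict    # validated, structured arguments -- not free text

def parse_tool_call(model_output: str) -> ToolCall:
    """Treat the model's output as data: parse and validate before acting."""
    payload = json.loads(model_output)          # fail loudly on malformed output
    if payload["name"] not in {"get_invoice", "refund_order"}:
        raise ValueError(f"Unknown tool: {payload['name']}")
    return ToolCall(name=payload["name"], arguments=payload["arguments"])

# Imagine the LLM was asked "Refund order 841" and returned this JSON.
raw = '{"name": "refund_order", "arguments": {"order_id": 841, "reason": "damaged"}}'
call = parse_tool_call(raw)
print(call.name, call.arguments)   # refund_order {'order_id': 841, 'reason': 'damaged'}
```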
-
Every customer request sounds reasonable in isolation. Every competitor feature feels like table stakes. Every stakeholder has "just one small addition."

However, we've learned that complexity is the worst enemy of reliability. The Minimal O1 succeeds because it masters three functions, not because it attempts thirty.

Our current strategies:
• For every new feature request: "What can we remove instead?"
• Measure success by problems solved, not features shipped
• Make feature advocates prove the case for "yes"

But honestly? It's an uphill battle. The pressure to add never stops.

Your turn: What's keeping you up at night as an engineer?
• Technical debt that's become technical bankruptcy?
• Resource constraints killing your timeline?
• Quality vs. speed trade-offs?
• Team scaling challenges?
• Something else entirely?

Drop your challenge below. Maybe we can crowdsource some solutions. Sometimes the best engineering insights come from engineers helping engineers.

#Engineering #TechChallenges #ProductDevelopment #TeamBuilding #minimalengineering
-
Under-engineering hurts today. Over-engineering hurts forever. When we hacked together early telemetry, the pain was instant. Agents streamed logs on demand. It broke as soon as we added scale. The temptation was obvious: patch it. Batching. Caching. Deduplication. That would’ve taken months and locked us into the wrong design. Instead, we scrapped it. Rebuilt with centralized collection. One week later: fixed. Here’s the rule: Under-engineering creates pain that forces you to adapt. Over-engineering creates momentum around the wrong path. Technical debt is manageable. Design debt compounds silently until it kills you.
-
𝗣𝗿𝗼𝗺𝗽𝘁 𝗲𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 𝗳𝗼𝗿 𝗻𝗼𝗻-𝗺𝗮𝘁𝗵 𝗽𝗲𝗼𝗽𝗹𝗲

You write a prompt. You look at the output. You tweak the prompt. You look again. Repeat.

At first it feels like trial and error, but if you zoom out, you’re actually doing something very structured:
• You test different “inputs” (prompts).
• You observe how the system reacts.
• You judge which outputs are more useful.
• You slowly converge towards a “better” formulation.

That is an optimization process — just without the equations.

𝘠𝘰𝘶’𝘳𝘦 𝘭𝘪𝘵𝘦𝘳𝘢𝘭𝘭𝘺 𝘤𝘭𝘪𝘮𝘣𝘪𝘯𝘨 𝘢𝘯 𝘪𝘯𝘷𝘪𝘴𝘪𝘣𝘭𝘦 𝘮𝘰𝘶𝘯𝘵𝘢𝘪𝘯, 𝘵𝘳𝘺𝘪𝘯𝘨 𝘵𝘰 𝘧𝘪𝘯𝘥 𝘵𝘩𝘦 𝘱𝘦𝘢𝘬 𝘰𝘧 𝘶𝘴𝘦𝘧𝘶𝘭𝘯𝘦𝘴𝘴.

It’s optimization under uncertainty. 🎯
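For the curious, that loop can be written down as a toy search over prompt variants. The "model" and the usefulness score below are stand-ins for the real LLM and your own judgment, so this is only a sketch of the idea, not a real optimizer.

```python
# The same loop made explicit: propose prompt variants, observe outputs,
# score usefulness, keep the best. Model and scorer are toy stand-ins.

def fake_model(prompt: str) -> str:
    # Stand-in for an LLM: just echoes which instructions it was given.
    return f"OUTPUT following: {prompt}"

def usefulness(output: str) -> int:
    # Stand-in for your judgment of the output ("is this actually useful?").
    wanted = ["3 bullets", "non-technical", "numbers"]
    return sum(term in output for term in wanted)

variants = [
    "Summarize this report.",
    "Summarize this report in 3 bullets.",
    "Summarize this report in 3 bullets for a non-technical manager.",
    "Summarize in 3 bullets for a non-technical manager, citing key numbers.",
]

# One step of "climbing the invisible mountain": evaluate and keep the peak.
scores = {p: usefulness(fake_model(p)) for p in variants}
best = max(scores, key=scores.get)
print(best, "->", scores[best])
```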
-
Debugging intermittent 500s in production when logs look clean 🚦

Intermittent 500s in production with clean logs can feel like a diagnostic riddle. Often the issue isn’t a single failing line, but a combination of resource pressure, timeouts, or upstream quirks that logs don’t surface by default. The result is silent symptoms that only appear under load or specific conditions.

💡 A pragmatic triage approach that has worked for me:
1. Build end‑to‑end visibility: propagate a unique request ID across services and map latency and failures through the call graph with lightweight tracing.
2. Look beyond logs: track latency distributions, error budgets, CPU and memory usage, GC pauses, and database connection pools to spot bottlenecks that logs miss.
3. Inspect environment and configs: verify worker limits, timeouts, keep‑alive settings, and any upstream quota or permission mismatches that could cause sporadic failures.
4. Instrument and sample: add structured, contextual logs around critical paths and enable short‑lived, controlled debug traces during a window of interest to avoid noise.
5. Reproduce safely in staging: simulate production load patterns and upstream failures to validate hypotheses before pushing changes.
6. Use targeted debugging when necessary: consider remote or in‑process debugging with strict scope and safeguards, only after observability has narrowed the cause.

These steps help separate flaky behavior from real bugs, turning silent failures into actionable work items and reliable improvements.

What patterns have you seen with intermittent 500s, and which triage step would you start with in your stack? What’s your go‑to technique to gain visibility quickly under pressure? ✨

#DevOps #Observability #SRE #Backend
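A small illustration of steps 1 and 4, using only the Python standard library (the framework hook where you would read the incoming X-Request-ID header is assumed, not shown): a context variable plus a logging filter stamps every log line with a per-request ID, so sporadic 500s can be correlated across services.

```python
# Sketch: per-request IDs on every log line, stdlib only.

import contextvars
import logging
import uuid

request_id_var = contextvars.ContextVar("request_id", default="-")

class RequestIdFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.request_id = request_id_var.get()   # inject into every record
        return True

def bind_request_id(incoming: str = "") -> str:
    """Reuse the upstream X-Request-ID if present, else mint a new one."""
    rid = incoming or uuid.uuid4().hex[:12]
    request_id_var.set(rid)
    return rid

logging.basicConfig(format="%(asctime)s %(levelname)s [req=%(request_id)s] %(message)s")
log = logging.getLogger("api")
log.addFilter(RequestIdFilter())
log.setLevel(logging.INFO)

# Simulated request handling: the same ID shows up in every log line.
bind_request_id()                       # in practice, taken from the HTTP header
log.info("fetching order from upstream")
log.warning("upstream responded in 4.8s, near timeout")
```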
-
PROMPT ENGINEERING

The right prompt determines the quality of the responses we get, from one tool to the next. Generic prompts produce generic responses: they don’t answer the real question, they miss insights specific to the scenario at hand, and you still end up doing a lot of the work yourself. With the right prompt, you get far more done in far less time. That’s effectiveness! Nothing beats that.
-
One of the things that I look for in strong engineers is the ability to troubleshoot. I find that a lot of folks are really strong at taking an idea of their own and building it out, but they have a strong mental block around dropping into some random codebase and taking a first-principles approach to dissecting an issue.

Back in my hardware days, the quality of a circuit designer wasn’t measured only by the strength of their design, but also by their ability to actually get something working during bring-up. You could have the most beautiful design, but if it didn’t work during bring-up, you had two options: either you triaged and found creative ways to still make that design function, or you sent the circuit back to manufacturing and waited 8-12 weeks. So there was a lot of value in engineers who could work with a suboptimal design and bring it to life.

With the ease of software engineering (on-demand compilation and runs), there’s sometimes a tendency to write more code to fix an issue rather than thinking deeply about why something is happening in the first place. The hesitation often stems from not having a deep understanding of the constituent components that make up a functional system, including model layers, middleware, gateways, networking, and firmware.

I am not advising every engineer to become an expert in every layer of the vertical integration stack that makes up the modern software or AI ecosystem. But my recommendation to early-in-career engineers is to always look one layer below and one layer above their responsibility, and build a good intuition for how things work. The archetype of the “fixer” in software engineering, someone who can drop into any problem and take a first-principles approach to troubleshooting and design tweaks, will only become more critical as we move up layers of abstraction with AI.
-
#Stop #Wasting #Your #Senior #Engineers' #Time on #Repetitive #Tasks

The power engineering talent shortage is real. Are your best people analyzing the grid or debugging Python scripts?

The industry faces a critical shortfall of experienced power systems engineers, with a projected need for hundreds of thousands in the coming years. Your senior talent is your most valuable, and most constrained, asset. Yet studies engineers often spend up to 30% of their time on tedious, automatable tasks: configuring simulation runs, managing data files, and wrestling with brittle scripts. This is a massive waste of expertise and a direct hit to your team's productivity.

The highest ROI comes from automating the tedium to amplify their analytical insight. Calculate the real cost of manual work on your team. DM us for a simple ROI calculator.

#EngineeringLeadership #PowerSystems #Talent #ROI #Automation
-
The Two Sides of Feature Engineering

When I worked on a fraud-detection model, the initial dataset had 16 core features. With just those, the model achieved over 90% recall. Curious to push performance further, I engineered 20+ additional features. Instead of improving results, recall dropped below 70%. The extra features added noise, introduced redundancy, and confused the model — a clear case of over-feature engineering.

I stepped back and focused on quality over quantity. By carefully selecting features like transaction velocity, device consistency, and geo-distance, and validating each one through ablations and correlation checks, the model improved: recall stayed high while false positives dropped by 22%.

The lesson was simple: feature engineering is about precision, not volume. The right features amplify signal, the wrong ones bury it.
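For readers who want the mechanics, here is a rough sketch of the correlation check and a leave-one-feature-out ablation, assuming scikit-learn and pandas are available; the data is synthetic, not the fraud dataset described in the post.

```python
# Sketch: screen for redundant features, then ablate each one and watch recall.

import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a fraud dataset (binary labels, a few redundant features).
X, y = make_classification(n_samples=2000, n_features=8, n_informative=4,
                           n_redundant=2, random_state=0)
df = pd.DataFrame(X, columns=[f"f{i}" for i in range(X.shape[1])])

# 1) Correlation check: flag near-duplicate (redundant) feature pairs.
corr = df.corr().abs()
redundant = [(a, b) for a in corr.columns for b in corr.columns
             if a < b and corr.loc[a, b] > 0.9]
print("highly correlated pairs:", redundant)

# 2) Ablation: does recall drop when a feature is removed? If not, it may be noise.
model = RandomForestClassifier(n_estimators=100, random_state=0)
baseline = cross_val_score(model, df, y, cv=3, scoring="recall").mean()
for col in df.columns:
    score = cross_val_score(model, df.drop(columns=[col]), y, cv=3,
                            scoring="recall").mean()
    print(f"drop {col}: recall {score:.3f} (baseline {baseline:.3f})")
```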
-
Sr DevSecOps Engineer | MBA | LSS-MBB | Certified ScrumMaster & CSP | ITSM / ITIL | ISC2: CC | ISO-20000 | ISO-22301 | ISO-27001 | PMQ-Certified Leader
Great point. On the CI/CD & Configuration Management team I supported (we in turn supported 200+ DevOps teams), we had 2 Sr Systems Engineers for that reason. Sprint 1, Tim was available for ad hoc calls while Sally focused on her projects; Sprint 2, vice versa. That allowed them to remain focused while still supporting the team. “You do realize that sometimes it takes 60 minutes just to get to the point on these servers and services that we can start to work? Everything leading up to that point is prep work, and has to be almost redone so we’re back to where we were disrupted?…[sic]”