Science Innovation: The Neurodivergent Kid on the Block - pt. II
In science, results aren't only numbers going up. A null result is still a result, and a very powerful one.


What an Intentionally 'Failed' LinkedIn Post Teaches About Startup Theater and Scientific Integrity

In early July, I ran an experiment.

It wasn’t dramatic. No bold claim, no thread, no storytelling arc. Just one sentence, posted without context:

“POV: AI isn’t #deeptech.”

That was it.

No pitch. No emotional pull. No invitation to debate. Just a hypothesis — left there like a seed to see what would grow.

Within seven hours, it received 112 impressions, two reactions, one comment.

Let's call it experimental data, schematic as it is ;D

For comparison: my usual posts draw between 500 and 800 views, 15 to 20 reactions, and several comments. I've posted consistently for years. The algorithm knows me. The network knows me. This one missed, badly.

And that was exactly the point.

Because in science, you don’t measure success by applause. You measure it by signal — and sometimes, silence is the signal.

This was never about engagement. It was about understanding how different systems interpret results. One system — the startup world — sees silence as failure. The other — science — sees it as data.

Experiments Always Deliver. Just Not Always What You Expected.

In science, every action produces a result. Maybe not the one you hoped for — but always something you can learn from.

If the hypothesis fails, you don’t throw it away. You observe. You adjust. You ask: under what conditions did it fail? What variables mattered? What might have interfered?

Even a null result is valuable. A failed reaction tells you exactly what doesn't work under those conditions. Silence isn't rejection; it's information.

This is where scientific minds differ from startup logic. Business tends to equate silence with irrelevance. No traction? Must be the wrong message. Wrong offer. Wrong direction. Pivot.

But a scientist doesn’t pivot after one trial. They replicate.

One Attempt ≠ a Pattern

In science, you don't draw conclusions from a single data point. You need replication. Controls. A pattern.

Standard experimental practice calls for at least three replicates before you can even begin to assess whether an effect is real or just noise.
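
To make that concrete, here's a minimal, purely illustrative sketch in Python. The numbers are invented for illustration (not my actual LinkedIn data); the point is that a single measurement gives you no estimate of its own spread, while three replicates do:

    import statistics

    # Hypothetical daily impression counts for the same post format.
    # All values are made up; any single day swings wildly.
    baseline_trials = [620, 540, 780]   # three replicates of a "usual" post
    experiment_trial = 112              # one replicate of the stripped-down post

    baseline_mean = statistics.mean(baseline_trials)
    baseline_sd = statistics.stdev(baseline_trials)

    print(f"Baseline: {baseline_mean:.0f} +/- {baseline_sd:.0f} impressions")
    print(f"Experiment (n=1): {experiment_trial} impressions")

    # With n=1 there is no way to estimate the experiment's own variance,
    # so we cannot yet say whether 112 reflects a real effect or an unlucky day.
    # Replication (n >= 3) is what turns a number into a pattern.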

But in startup culture, especially in pitch-driven ecosystems, one underperforming metric often triggers a complete change of direction. As if a single non-click were a referendum. As if one investor call that ends without funding meant the venture isn't viable.

This mindset is dangerous — not just because it’s reactive, but because it punishes depth. It turns real innovation into a popularity contest. It confuses surface-level response with structural readiness.

The One-Parameter Rule: Change One Thing at a Time

Scientific experimentation follows a core principle: change only one variable at a time.

This is how you isolate causality. How you identify what matters.

If I were to repeat the post — “POV: AI isn’t #deeptech” — next week, I’d only shift one element. Maybe the time of day. Maybe the tone. Not both. Not the structure, format, and audience all at once.
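
As a sketch of what that discipline looks like (hypothetical variants, not a real posting plan), each follow-up trial below differs from the baseline in exactly one field:

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class PostVariant:
        text: str
        time_of_day: str
        tone: str

    # Baseline: the original experiment.
    baseline = PostVariant(
        text="POV: AI isn't #deeptech.",
        time_of_day="morning",
        tone="flat",
    )

    # Each trial changes exactly one variable, never several at once,
    # so any shift in outcome can be attributed to that single change.
    trials = [
        replace(baseline, time_of_day="evening"),  # vary timing only
        replace(baseline, tone="provocative"),     # vary tone only
    ]

    for trial in trials:
        print(trial)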

But startup strategy often throws everything into the air: new tagline, new deck, new business model, new platform — and then tries to guess what “worked.”

That’s not strategy. That’s noise.

Science doesn't guess. It tests. It takes longer, but the results are non-negotiable.

Environmental Variables: There’s Always Noise in the System

Even in the most tightly controlled lab, some variables are out of your hands: humidity, ambient temperature, lab equipment calibration. You can’t remove these factors — but you can design with them in mind.

On LinkedIn, those environmental variables look like:

  • Who logged in that morning

  • Whether key readers were on vacation

  • What the algorithm prioritized that day

They may seem trivial, but they’re not. They’re the equivalent of room temperature in a chemical reaction — subtle, but potentially game-changing.

If you ignore them, you misinterpret the data. If you respect them, you can begin to understand why the outcome looked the way it did.

Most of the startup world doesn’t account for these variables. They assume the system is clean and the results are direct. But when you're working in complexity — and science always is — you don’t get clean data without context.

This Is Not Just About LinkedIn Posts — It's About Startups

Everything I just described? Applies directly to science-based startups.

Because scientific ventures operate under similar systemic constraints:

  • Market readiness

  • Regulatory shifts

  • Policy delays

  • Investor sentiment

  • Long R&D cycles

  • And sometimes, sheer macro timing

Yet we judge them as if they were SaaS: one pitch, one slide deck, one pilot. As if traction could be pulled out of thin air, when in reality it's built through aligned systems over time.

When a scientist runs an experiment and gets minimal results, they don’t shut down the lab. They document. They iterate. They refine. But when a startup doesn’t get funding after one try, we often treat it as dead.

This is the real issue. We’ve built a system that interprets scientific signals using startup logic — and then blames the scientist when it doesn’t work.

Science requires:

  • Patience

  • Controlled iteration

  • Environmental awareness

  • Intellectual humility

  • And above all, discipline

If we applied even a fraction of this mindset to venture building, we’d stop calling promising deep tech ventures “not investable” just because they didn’t produce immediate traction.

We’d start recognizing signal — even in the silence.


Coming Up in Part III

What happens when you try to force science into startup accelerators, pitch templates, and lean canvases?

It breaks. Or worse — it disappears.

In Part III, we’ll explore why the very structure of the startup world is designed to misread science. And why the dilution of the term deep tech is making the real thing invisible.

Read the full piece on our website

💜 eM. from Arise Innovations
Where silence is data, not defeat.

Agustin Harfuch


I liked this article, Maria! I support the idea that ventures often operate under long R&D cycles. In deep tech, results are frequently expected from one day to the next, when in fact, as you said, it is necessary to iterate and test, lots of testing. Only then can you draw real conclusions about the experiment. What I don't understand clearly is why this is compared to Software as a Service ("One pitch, one deck, one pilot"). Can you clarify this?
