#10. Mapping the Heavens, Mapping the Enterprise

How Greek constellations foretell the next-gen technical stack for agentic knowledge.

Prologue — the night Thales lost his footing

Legend says Thales of Miletus was so intent on reading the stars that he tumbled into a well. When teased for ignoring the ground beneath him, he traced lines between the lights overhead, turning random specks into shapes sailors could steer by.

Twenty-six centuries later, our companies glitter with PDFs, Jira tickets, IoT pings, and SaaS APIs—each a lonely star. Unless we connect them into patterns agents can read and extend, we’re Thales in mid-fall. Below is a pragmatic roadmap for drawing those lines: a mesh of graph shards, curator agents, and a shared micro-dialect that together support real-time agentic autonomous work—without sacrificing privacy or safety.

1. From isolated stars to guiding shapes

On a clear night the Greeks didn’t catalog Betelgeuse for its own sake—they cared that Betelgeuse sat on Orion’s shoulder and that Orion pointed east just before dawn. The pattern held the meaning; the single star was trivia. The same is true of data: individual points matter far less than the relationships that weave them into a coherent, decision-ready picture.

Inside a modern enterprise the same logic applies. A 200-page PDF describing a turbine seal is just another bright dot. What matters is the thread that ties Pump #P123 to its warranty, to the vendor who stocks the seal, to the safety rule that caps restart cycles. When an agent receives an alarm, it doesn’t shout “fetch the PDF” or “fetch the semantically close chunk of the PDF!”—it quickly traces those links and answers: divert load, dispatch crew, order seal.

To let agents think that way we flip three switches:

First, store the relationships before the prose. Treat every manual or ticket as raw ore and smelt its facts into a graph: Neo4j, Neptune, Blazegraph—whatever lets you ask in Cypher or SPARQL,

MATCH (p:Pump {id:'P123'})-[:HAS_WARRANTY]->(w)
RETURN w.id        

and get back a precise node, not a paragraph.
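
For the curious, here is what that lookup might look like from the agent’s side: a minimal Python sketch, assuming the official neo4j driver, where the connection details are placeholders and the STOCKED_BY and GOVERNED_BY relationship types are hypothetical companions to the HAS_WARRANTY edge above.

# Sketch: trace Pump #P123 to its warranty, vendor, and safety rule in one query.
# Assumes the official neo4j Python driver; STOCKED_BY and GOVERNED_BY are
# hypothetical relationship types used for illustration.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

TRACE_QUERY = """
MATCH (p:Pump {id: $pump_id})-[:HAS_WARRANTY]->(w:Warranty)
OPTIONAL MATCH (p)-[:STOCKED_BY]->(v:Vendor)
OPTIONAL MATCH (p)-[:GOVERNED_BY]->(r:SafetyRule)
RETURN w.id AS warranty, v.name AS vendor, r.text AS restart_rule
"""

def trace_pump(pump_id: str) -> dict:
    # Return a small, decision-ready dict instead of a 200-page document.
    with driver.session() as session:
        record = session.run(TRACE_QUERY, pump_id=pump_id).single()
        return record.data() if record else {}

print(trace_pump("P123"))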

Second, make the sky shimmer in real time. Every edit to a document, sensor stream, or Jira issue emits a tiny MERGE patch on Kafka or NATS.io. Your graph is never a quarterly snapshot; it is live starlight.
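
As a sketch of that loop on the consuming side, assume a Kafka topic named kg-patches carrying small Cypher MERGE statements, plus the kafka-python and neo4j client libraries; the topic name, message shape, and credentials are all illustrative.

# Sketch: consume small MERGE patches from Kafka and apply them to the graph as
# they arrive, so the graph stays live rather than a quarterly snapshot.
# Topic name, message format, and credentials are assumptions for illustration.
import json
from kafka import KafkaConsumer
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
consumer = KafkaConsumer(
    "kg-patches",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    patch = message.value  # e.g. {"cypher": "MERGE (p:Pump {id:'P123'}) SET p.status = 'alarm'", "params": {}}
    with driver.session() as session:
        session.run(patch["cypher"], **patch.get("params", {}))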

Third, guard the vocabulary like a harbor master. Classes and predicates live in a Git repo; a CI gate refuses duplicate verbs or fuzzy synonyms. That way hasWarranty is always spelled once, meaning one thing, across every constellation.
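
Here is one way that CI gate might look, using only the Python standard library; the predicates file path and the similarity threshold are assumptions, and a real gate would likely add stemming or an explicit alias list.

# Sketch of a CI gate for the ontology repo: fail the build when a predicate
# duplicates, or nearly duplicates, an existing verb. Path and threshold are
# illustrative.
import sys
from difflib import SequenceMatcher

def load_predicates(path: str = "ontology/predicates.txt") -> list[str]:
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def find_conflicts(predicates: list[str], threshold: float = 0.85) -> list[tuple[str, str]]:
    conflicts = []
    normalized = [p.lower() for p in predicates]
    for i, a in enumerate(normalized):
        for b in normalized[i + 1:]:
            if a == b or SequenceMatcher(None, a, b).ratio() >= threshold:
                conflicts.append((a, b))
    return conflicts

if __name__ == "__main__":
    clashes = find_conflicts(load_predicates())
    for a, b in clashes:
        print(f"Vocabulary clash: '{a}' vs '{b}'")
    sys.exit(1 if clashes else 0)  # non-zero exit blocks the merge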

With these habits in place the graph stops being a data graveyard and becomes what Orion was to the Greeks—a shape you can steer by when the sea turns dark.

2. Constellations, not a single globe

2.1 The case for many small skies

Ancient Greece never owned a single, official star-map. Each polis kept its own version of Orion, yet sailors stitched those variants into one continuous night-sky and navigated safely from Crete to Thrace. Your knowledge landscape should follow the same rule: stewardship stays local, discovery goes global.

  • Curator agents sit directly beside each domain source—one embedded in the CMMS, another in the ERP, a third protecting legal content. Every curator continuously ingests its local feeds, converts them into triples, and renders those triples as compact sentences in a controlled micro-dialect I call Pseudo-Structured English (PSE). I’ll unpack PSE with concrete examples in a follow-up post; for now, think of it as English engineered for zero ambiguity. Each curator also publishes a tiny Bloom-filter sketch—“I know about Pump, hasWarranty, and 18,000 IDs that look like this”—so the router can decide, in microseconds, which shards are worth querying.
  • A router / query-planner receives a question expressed in PSE, consults a shard catalog (itself a miniature graph), chooses the shards most likely to answer (a minimal sketch of that selection step follows this list), fans out the sub-queries, merges the signed replies, and streams a ranked result back.
  • The shard catalog holds heartbeat timestamps, sketches, public keys, accepted AIM scopes, and region tags so the router can satisfy data-residency rules while it plans.
  • When two shards disagree on a fact, a conflict-resolver opens a Jira or ServiceNow ticket that embeds both provenance trails. A human (or AI) steward picks the winner; a webhook then patches the losing triple.
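
As promised above, here is a compact sketch of the routing step: each curator publishes a Bloom-filter-style sketch of the identifiers and terms it knows, and the router tests a query’s entities against every sketch before fanning out. The bit-array size, hash count, and shard names are illustrative; a production mesh would reach for a hardened Bloom-filter library rather than this toy class.

# Sketch: a tiny Bloom-filter-style membership sketch published by each curator,
# and a router that consults the sketches to pick shards before fanning out.
# Sizes, hash count, and shard names are illustrative only.
import hashlib

class BloomSketch:
    def __init__(self, size_bits: int = 8192, hashes: int = 3):
        self.size = size_bits
        self.hashes = hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

# Curators publish their sketches; the router keeps them in the shard catalog.
cmms_sketch = BloomSketch()
for term in ("Pump", "hasWarranty", "P123"):
    cmms_sketch.add(term)

catalog = {"cmms-shard": cmms_sketch, "legal-shard": BloomSketch()}

def candidate_shards(query_entities: list[str]) -> list[str]:
    # Only fan out to shards whose sketch claims to know at least one entity.
    return [name for name, sketch in catalog.items()
            if any(sketch.might_contain(e) for e in query_entities)]

print(candidate_shards(["P123", "hasWarranty"]))  # -> ['cmms-shard']

A Bloom sketch can return false positives but never false negatives, so the router may occasionally query a shard that comes back empty-handed, yet it will never skip a shard that actually holds the answer.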

Even if an enterprise exposes a single, centralized entry point, empowering each business unit with agent-level access to its own compartmentalized knowledge drives faster innovation. A shared ontology keeps that access uniform, no matter how—or where—each slice of knowledge is stored.

2.2 Where MCP becomes the gravity between stars

Decentralisation is powerful only if every agent can plug into every curator without bespoke glue code. That’s exactly what the Model Context Protocol (MCP), an open protocol still in early adoption, provides.

  1. Tool discovery, not hard-wired wrappers – Each curator hosts a lightweight MCP endpoint that advertises JSON-schema tool cards—query_kg, write_kg, summarize_manual, open_ticket, and so on (a minimal curator-side sketch follows this list). An LLM agent reads the manifest, fills the arguments, and calls the tool just as it would call a local function. Whether the curator stores data in Neo4j, DynamoDB, or a vendor’s REST API is irrelevant; the MCP layer abstracts those details away.
  2. Uniform security envelope – Every call carries a purpose-scoped JWT. Shards verify the token before answering, so role-based access control and intent filtering travel with the request instead of being re-implemented in thirteen different SDKs.
  3. Hot-swap model independence – Because MCP is model-agnostic, you can point today’s router at Claude, tomorrow’s at GPT-5, and the same tool contracts still line up. No code freeze, no brittle prompt surgery.
  4. One-time SaaS onboarding – A SaaS “castle” that refuses to expose a graph no longer blocks the constellation. The vendor adds an MCP manifest once, mapping get_invoice, find_defect, or download_safety_sheet to its native API. From that day forward every enterprise agent can reason over the vendor’s slice as if it were part of the larger sky.
  5. Conflict resolution, native – The resolver itself is just another MCP tool—open_ticket—so any LLM can escalate ambiguity, watch for closure, and apply the correction without custom integration work.
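
To make point 1 concrete, here is a minimal curator-side sketch, assuming the official Python MCP SDK’s FastMCP helper; the query_kg tool name comes from the list above, while the graph it wraps and the connection details are placeholders.

# Sketch: a curator exposing its shard as an MCP tool. Assumes the official
# Python MCP SDK (FastMCP); the Cypher lookup it wraps is a placeholder.
from mcp.server.fastmcp import FastMCP
from neo4j import GraphDatabase

mcp = FastMCP("cmms-curator")
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

@mcp.tool()
def query_kg(cypher: str) -> list[dict]:
    """Run a read-only Cypher query against the CMMS shard and return the rows."""
    with driver.session() as session:
        return [record.data() for record in session.run(cypher)]

if __name__ == "__main__":
    mcp.run()  # advertises the query_kg tool card to any MCP-aware agent

Any MCP-capable agent that connects to this endpoint discovers query_kg from the manifest and can call it without knowing, or caring, that Neo4j sits behind it.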

Put differently, a cross-graph federation layer can stitch multiple shards into a single panoramic view for deep analytics, but MCP acts as the gravitational field that lets any agent focus on any shard—safely and predictably—regardless of who owns the data or where it resides.

By pairing local curator agents with an MCP-aware router, you keep data where it belongs, let subject-matter experts evolve their shards at their own pace, and still grant every autonomous agent a seamless, tool-rich universe to navigate—just like sailors once roamed the Aegean under a sky stitched together from many small stories.

3. Guardrails for a sky that stays clear

A constellation is only helpful if its light guides the crew without blinding them. Here are the practical checks most teams put in place.

  • Purpose-scoped access – Every request carries a signed role + purpose claim (e.g., maintenance-diagnostic, 90 min TTL). Curator shards verify both before releasing a single triple (see the sketch after this list).
  • Data-residency routing – Shards declare their region. The router enforces location filters so EU data answers from EU soil unless an audited exemption is present.
  • Prompt-sanitation chain – Curators strip scripts and suspicious markup. The router runs a lightweight detector for jailbreak strings before context is fused into the model prompt.
  • Provenance-first conflict handling – If shards disagree, each returns its signed fact. Resolver surfaces both; the answering agent must cite its chosen source or the response is rejected.
  • Encrypted transport and masked qualifiers – All mesh traffic moves encrypted. Sensitive qualifier values (prices, PII) are salted and hashed; only authorised readers can decode them.
  • Vector-only shards for high-sensitivity domains – For nuclear ops, zero-days, or medical data, the curator shares embeddings plus cryptographic similarity proofs, never raw text.
  • End-to-end traceability – Each query, tool call, and model response carries a unique trace_id; the mesh streams these spans into an immutable audit log with strict retention. Because chain-of-thought hashes and provenance IDs are recorded alongside, security teams can later replay exactly which prompt accessed which shard, what data returned, and why the agent took its next step—no black boxes, no gaps.
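
A minimal sketch of the first two checks above, assuming PyJWT for token verification; the claim names, purpose list, region tags, and exemption flag are illustrative.

# Sketch: verify a purpose-scoped JWT and enforce a data-residency filter before
# a shard answers. Claim names and key handling are illustrative; a real
# deployment would also pin issuer, audience, and key rotation.
import jwt  # PyJWT

SHARD_REGION = "eu-west-1"
ALLOWED_PURPOSES = {"maintenance-diagnostic"}

def authorize(token: str, public_key: str, requester_region: str) -> bool:
    try:
        # Signature and expiry (the 90-minute TTL) are checked here.
        claims = jwt.decode(token, public_key, algorithms=["RS256"])
    except jwt.InvalidTokenError:
        return False
    if claims.get("purpose") not in ALLOWED_PURPOSES:
        return False
    # Data residency: EU data answers from EU soil unless an audited exemption is present.
    if SHARD_REGION.startswith("eu") and not requester_region.startswith("eu"):
        return bool(claims.get("residency_exemption"))
    return True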

Guardrails are layered protection; prioritize them according to your risk profile. With them in place, the charted sky stays bright enough to navigate yet dark enough to protect what matters.

In the next installments I’ll chart the practical waypoints—how to roll out curator agents in phases and how PSE turns raw documents into unambiguous triples—so you can start mapping your own sky without capsizing daily operations.

Epilogue — Thales’ lesson re-learned

Thales’ fall reminds us that insight needs both a grounded footing and a connected sky.

By letting each domain steward its own stars, linking them through a lightweight catalog and router, and speaking one precise dialect for all internal reasoning, we give autonomous agents the same gift the constellations gave ancient navigators: a map they can trust in the dark.

When the next storm looms—whether it is a typhoon disrupting logistics or a zero-day shaking IT—your agents will lift their gaze, trace the right constellation, and guide the enterprise to safe harbor. That is the promise of a sky mapped for intelligent work.

References

1. Rebuilding Babylon—A Call for a Common Language for Intelligent Work, by Yuriy Yuzifovich

2. Everything a Developer Needs to Know About the Model Context Protocol (MCP), by Michael Hunger, Neo4j

3. Header image: Milky Way Starry Sky by will zhang, Pixabay
