Cisco-Splunk strategy shift unveiled with Data Fabric
"AI Canvas is the best AIOps interface I've yet seen," said Steven Dickens, CEO and principal analyst at HyperFRAME Research. "Mostly, what I liked about the AI Canvas piece was this multiplayer mode, where you build a canvas with [colleagues], and all of you look at things together … which [follows] how a major incident might happen -- a small ad hoc team of a network person, an observability person, maybe a server person, all come together to examine a problem for three or four hours to figure it out. Then that team dissolves, and people go back to their single-player mode." Read the full article by Beth Pariseau here: https://lnkd.in/ePi_st5W
More Relevant Posts
-
DATA is EVERYWHERE. But with FEDERATION, so is INSIGHT... Federal and public sector agencies are under pressure, balancing mission outcomes, regulatory mandates, and the explosion of AI adoption. But the challenge remains the same: how do you get the right data into the right hands, at the right time, without compromising security or control? Our latest Starburst blog explores why data federation is the key to unlocking AI and advanced analytics: not by moving or duplicating sensitive datasets, but by querying data where it lives. For agencies, that means:
- Secure access to distributed data across clouds, on-prem, and classified domains.
- Reduced risk by minimizing data movement and exposure.
- Faster insights to power AI, RAG, and mission-critical workloads without waiting on lengthy ETL processes.
- Flexibility to centralize only when it adds value, while maintaining compliance with strict governance standards.
In defense, intelligence, and civilian missions alike, federation ensures agencies can innovate with AI while still protecting the data that matters most. Read the full post here: https://lnkd.in/eP2TgKs5
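The "query data where it lives" idea can be sketched in a few lines. This is a toy illustration, not Starburst's engine: every name below is hypothetical. The point is that each source applies the filter locally (a pushed-down predicate) and ships back only matching rows, so sensitive datasets are never copied wholesale into a central store.

```python
# Toy sketch of data federation: filter at each source in place and merge
# only the matching rows, instead of ETL-ing full datasets to one location.
# Engines such as Trino (the basis of Starburst) do this across real catalogs.

# Two "systems of origin" that stay where they are (hypothetical data).
CLOUD_LOGS = [
    {"agency": "DOT", "event": "login", "classified": False},
    {"agency": "DOD", "event": "transfer", "classified": True},
]
ONPREM_RECORDS = [
    {"agency": "DOT", "event": "audit", "classified": False},
    {"agency": "DOE", "event": "login", "classified": False},
]

def scan(source, predicate):
    """Push the filter down to the source; only matching rows leave it."""
    return [row for row in source if predicate(row)]

def federated_query(sources, predicate):
    """Merge filtered results from every source without duplicating data."""
    results = []
    for source in sources:
        results.extend(scan(source, predicate))
    return results

rows = federated_query(
    [CLOUD_LOGS, ONPREM_RECORDS],
    lambda r: r["agency"] == "DOT" and not r["classified"],
)
print(rows)
```

In a real deployment the predicate pushdown happens inside each connector (object store, RDBMS, on-prem warehouse), which is what keeps data movement, and therefore exposure, to a minimum.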
-
There has been a surge of excitement around data fabrics and pipelines this past week, and for all the right reasons. At Databahn, we've been guided by a few core beliefs since day one:
1. Independent pipelines are the future: Data may be the new oil, but the value comes from a neutral, producer- and consumer-agnostic pipeline. Our "data Switzerland" delivers the right data, in the right format, to the various consumers and AI models.
2. Context is everything: Analysts and SREs shouldn't be stitching data together, least of all mid-incident. Context-first pipelines reduce friction and accelerate outcomes. AI models also depend on quality data: if you don't shift left, normalizing and enriching upfront, you'll pay for it in compute and token costs. Clean, enriched data is the foundation of every effective agent.
3. Decentralized data stores are the direction things are heading: Your business may rely on a data pond or data lake of its choice, IT may select another, and cybersecurity may operate with an entirely different flavor. Trying to co-locate all of this data in a single solution often leads to long, tedious projects, and the hidden egress costs can add up over time. The best strategies we see involve customers sending only critical data through independent pipelines for real-time detections while leaving the rest in its system of origin. Some also leverage cold storage effectively. Later, federated AI-powered search enables virtual co-location of data, delivering the benefits of centralization without the overhead.
4. The future is multi-model, not mono-model: No single model will win every task. You need the freedom to route workloads to the best model for cost, latency, capability, or governance, and to swap models as better ones emerge. Own your prompts, guardrails, and evals; keep them portable across providers.
5. AI will be the strongest consumer of your data: Models have everything except your institutional context. Independent, AI-ready pipelines, and the ability to choose and change models, are how you bridge that gap.
Net-net, our guiding principle is clear: architect for control. Control of your data and control of your model choices. Use open formats and frameworks (OCSF, OTEL). Avoid vendor lock-in, design for BYO/replaceable models, stay agile, and prepare for continuous AI upgrades. The organizations that do this won't just keep pace with current challenges; they'll be future-proofed for the AI wave. At Databahn, we're building for exactly that future. #dataneutrality
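The "shift left" idea in point 2 can be made concrete with a minimal sketch: normalize a raw vendor event into a common schema and enrich it with institutional context once, upfront, so every downstream consumer (SIEM, lake, AI agent) gets the same clean shape. Field names here are illustrative, loosely in the spirit of OCSF-style attributes; they are not Databahn's actual mapping, and a real schema is far richer.

```python
# Sketch of shift-left normalization + enrichment in a pipeline.
# All field names and the asset table are hypothetical examples.

# Institutional context that raw logs (and models) lack on their own.
ASSET_CONTEXT = {"10.0.0.5": {"owner": "payments-team", "criticality": "high"}}

def normalize(raw: dict) -> dict:
    """Map a vendor-specific event into one common, consumer-agnostic shape."""
    return {
        "class_name": "authentication",
        "src_ip": raw.get("SourceIp"),
        "user": raw.get("UserName", "").lower(),
        "status": "failure" if raw.get("Result") == "DENY" else "success",
    }

def enrich(event: dict) -> dict:
    """Attach context once, upstream, instead of per-consumer downstream."""
    event["asset"] = ASSET_CONTEXT.get(event["src_ip"], {})
    return event

raw = {"SourceIp": "10.0.0.5", "UserName": "Alice", "Result": "DENY"}
event = enrich(normalize(raw))
print(event)
```

Because the normalized event is smaller and consistent, routing it to a SIEM, cold storage, or an LLM costs less in compute and tokens than shipping raw, duplicated vendor formats to each consumer.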
-
Own your data, security leaders! You should not need to change your entire pipeline each time you change your SIEM vendor.
-
Open standards, hybrid deployments, and built-in governance are no longer optional. Cloudera is helping enterprises meet these demands with Iceberg support, GPU observability, and real-time AI at the edge. Great market overview here: https://shorturl.at/DK9FR
-
Current data systems were built for humans reading dashboards. But AI doesn't want summaries—it wants everything. An H100 can consume 4 million images per second. Most sit idle 70% of the time, waiting for data. We built Spiral to fix this: ⚡ 10-20x faster than existing solutions 🎯 Direct S3 to GPU data loading ✨ Unified system for all data types 🔒 Enterprise-grade security built in Already backed by Microsoft, Snowflake, and Palantir. $22M raised from Amplify Partners and General Catalyst. Full announcement: https://lnkd.in/ePPwkXXM #DataInfrastructure #AI #MachineLearning #DataEngineering
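The idle-accelerator problem comes from running loading and compute serially: the GPU waits on I/O, then I/O waits on the GPU. The generic fix is to overlap them, here sketched with a bounded prefetch queue. This is a standard pattern (the same idea behind deep-learning data loaders), not Spiral's implementation, and the batch sizes and names are made up.

```python
# Sketch of overlapping data loading with compute via a bounded prefetch
# queue, so the "GPU" never sits idle waiting for the next batch.
import queue
import threading

def load_batches(n: int, out: queue.Queue) -> None:
    """Producer: simulates pulling batches from storage (e.g. S3)."""
    for i in range(n):
        out.put(f"batch-{i}")
    out.put(None)  # sentinel: no more data

def train(src: queue.Queue) -> list:
    """Consumer: simulates accelerator compute on each prefetched batch."""
    seen = []
    while (batch := src.get()) is not None:
        seen.append(batch)
    return seen

# Bounded queue: the loader stays a few batches ahead without unbounded memory.
prefetch = queue.Queue(maxsize=4)
loader = threading.Thread(target=load_batches, args=(8, prefetch))
loader.start()
processed = train(prefetch)
loader.join()
print(len(processed))
```

Systems aimed at saturating an H100 push the same overlap much further down the stack (parallel range reads from object storage, decode on the fly, pinned-memory transfers), but the queueing structure is the same.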
-
Incredibly excited for Spiral to launch today. AI teams deserve a multimodal data platform that's actually built for them. Since we led their seed, Spiral's file format (Vortex) has already been adopted by The Linux Foundation. Check them out.
-
We just announced AI-centric advancements to the Splunk Platform, seamlessly connecting knowledge, business, and machine data. The result? Unmatched insights, faster AI model development, and smarter decisions at scale, so you can stay ahead of the curve. Turn data into intelligence with Splunk. https://lnkd.in/ddvyG94r
-
For existing StorageGRID users, additional functionality has just been released, enhancing your ability to turn data into a strategic advantage across your already secure, resilient, and high-performance data management infrastructure: The AI revolution is running full speed ahead, and with NetApp StorageGRID 12.0, we're providing the high-octane fuel to power it. We've packed this new release with incredible features, including some industry firsts, to help you manage your data and accelerate your AI workloads. Here's a quick look at what you can do with StorageGRID 12.0:
✅ Simplify AI workflows: Our new bucket branches feature makes collaborating on S3 a breeze, so you can manage and version AI content at scale.
⚡️ Accelerate AI workloads: Get up to 10x more speed with high-performance S3 caching, without changing your existing infrastructure.
📈 Scale to new heights: We've doubled our capacity to support over 600 billion objects in a single cluster, pushing the boundaries of what's possible.
Ready to see how you can transform the way you manage data? Check out the full blog here:
-
The AI revolution is here, and object storage is its unsung hero, powering every step from data ingestion to inferencing. Say hello to StorageGRID 12.0! Here's what's new, including some industry-first features:
✅ Version AI content at scale: Bucket branches make S3 collaboration seamless and simplify workflows.
⚡️ Accelerate AI workloads: High-performance S3 caching delivers up to 10x speed, no infrastructure changes needed.
📈 Extreme scale: Over 600 billion objects in a single cluster, a 2x increase that pushes boundaries for small-object workloads.
Stay at the forefront of data innovation and transform the way you manage data. Jo Smith Banon Zinna ☁Zak Thakur Alex Kennedy Thomas Spinks Adam Wilcox Gurjeet Singh Hayer Read more: https://ntap.com/4nkhu8j