CriblCon 2025 Recap: 3 Takeaways from the Front Lines of Observability
Oct 21, 2025
Matt Morrissey
I spent a few days in National Harbor, just outside Washington, D.C., for CriblCon 2025—and it was impossible not to feel the energy in the room.
From mainstage keynotes to hallway conversations, one message came through loud and clear: teams don’t want to do less with their data—they want to do more.
AI and exploding data volumes are forcing enterprises to rethink how they move, store, and analyze telemetry. The future belongs to architectures built for choice, control, and flexibility.
Here are my three biggest takeaways from seeing it all firsthand.
1. Freedom to Do More
Cribl CEO Clint Sharp captured the tone perfectly during his keynote:
“Cribl Stream is a telemetry pipeline. We pioneered that category. It decouples sources from destinations. It allows you to route data anywhere. It gives you the choice, the control, the flexibility to get the data in the right shape for the right destination. Enrich it. Reduce it. Filter it. Reshape it.”
That message summed up what nearly every customer echoed on stage.
Under Armour untangled years of syslog clusters and Splunk forwarders into a single, version-controlled pipeline—achieving roughly 70% cost savings and total visibility.
Johnson Controls went even further, using Cribl as a neutral data layer to migrate from Splunk to CrowdStrike NG SIEM in six months with zero downtime—now feeding five analytics platforms in parallel.
Cribl proved that when you decouple data movement from analysis, you gain the freedom to do more—to onboard new data, expand coverage, and experiment without breaking budgets or workflows.
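The decoupling principle Sharp describes can be sketched as a toy router: sources publish events, and routing rules, not the sources themselves, decide which destinations receive them. This is a conceptual Python sketch, not Cribl's implementation; all names here are hypothetical.

```python
from typing import Callable

# A toy telemetry router that decouples sources from destinations.
# Each route pairs a predicate (filter) with a destination sink.
class Router:
    def __init__(self) -> None:
        self.routes: list[tuple[Callable[[dict], bool], Callable[[dict], None]]] = []

    def add_route(self, match: Callable[[dict], bool], sink: Callable[[dict], None]) -> None:
        self.routes.append((match, sink))

    def publish(self, event: dict) -> None:
        # One event may fan out to several destinations in parallel.
        for match, sink in self.routes:
            if match(event):
                sink(event)

siem, archive = [], []
router = Router()
# Security events go to the SIEM; everything lands in cheap archive storage.
router.add_route(lambda e: e.get("sourcetype") == "auth", siem.append)
router.add_route(lambda e: True, archive.append)

router.publish({"sourcetype": "auth", "msg": "failed login"})
router.publish({"sourcetype": "vpcflow", "msg": "ACCEPT 10.0.0.1"})
```

The point of the design: onboarding a new destination is one `add_route` call; no source has to change, which is what makes experimenting cheap.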
That same principle—freedom through decoupling—is what inspired Imply Lumi.
Everywhere I looked at CriblCon, teams had solved how to move data, yet kept hitting the same downstream challenge: once that data landed, keeping it searchable at scale was costly and complex.
Lumi extends that same decoupling into the query layer. Its event-indexed format stores data up to 5× more efficiently than gzip while keeping it instantly queryable—no rehydration, no extra pipelines, no operational burden.
That efficiency isn’t just about saving money—it’s about unlocking data that was once out of reach.
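Lumi's format is proprietary, but the general idea, compressing telemetry by exploiting its repeated structure while remaining lossless, can be illustrated in a few lines: regroup JSON log fields into per-key columns before compressing, then reconstruct the original events exactly on read. This is an illustrative sketch under assumed uniform-schema synthetic data, not Lumi's actual format.

```python
import gzip
import json

# Synthetic JSON events with heavily repeated structure, typical of telemetry.
events = [
    {"level": "INFO", "service": "auth", "status": 200 + (i % 3) * 100}
    for i in range(1000)
]

# Columnar layout: one value list per field. Similar values cluster together,
# which tends to compress better than raw interleaved log lines.
columns = {key: [e[key] for e in events] for key in events[0]}
packed = gzip.compress(json.dumps(columns).encode())
raw = gzip.compress("\n".join(json.dumps(e) for e in events).encode())

# The transformation is lossless: decompress and recreate the exact events.
restored_columns = json.loads(gzip.decompress(packed).decode())
restored = [
    {key: restored_columns[key][i] for key in restored_columns}
    for i in range(len(events))
]
assert restored == events
```

The round-trip assertion is the important part: structure-aware storage only "unlocks" data if readers get back exactly what was written.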
With Cribl and Lumi, you can now use Splunk for high-volume, cloud-native sources like CloudWatch, CloudTrail, and VPC Flow logs—simply by connecting to the S3 buckets where those logs already live.
Cribl gives you the freedom to bring in more data. Lumi gives you the freedom to keep it—and do more with it.
2. Keep What Works, Expand What’s Possible
If there was one theme that cut through every hallway conversation, it was this: teams are lean, but their ambitions aren’t.
The winners are the ones who simplify without starting over.
At Getty Images, Simon Overbey’s team reduced Splunk ingest by 800 GB/day, offloaded logs to S3, extended retention, and automated ingestion with CI/CD—all without adding headcount.
At Pegasystems, engineers replaced legacy forwarders with Cribl Edge agents and modernized their SIEM, improving visibility and scalability across global teams.
What stood out wasn’t just efficiency—it was continuity. These teams improved performance and scale without changing how they work.
That’s the same idea behind Lumi. You keep your dashboards, queries, and alerts—everything.
No new workflows. No retraining. Just the same experience running faster, at a fraction of the cost.
Cribl made data movement simple. Lumi keeps that simplicity alive downstream—so you can build on what works and expand what’s possible.
3. Open Data, Infinite Potential
AI was everywhere at CriblCon—but the smartest conversations weren’t about algorithms; they were about architecture.
During his keynote, Clint Sharp described what he called the era of agentic telemetry—where autonomous systems continuously analyze logs, metrics, and traces alongside human context.
As he put it:
“AI isn’t coming to replace humans; it’s coming to replace bad architecture.”
That sentiment echoed across every session: AI can only succeed when the underlying data is open, structured, and accessible.
Open data isn’t just about interoperability—it’s about potential.
When telemetry flows freely through open pipelines and lands in open, queryable storage, both people and machines can act on it instantly.
That’s where Imply Lumi completes the picture.
Cribl gives teams control over how data moves; Lumi ensures that data stays accessible once it lands—whether it’s powering dashboards, alerts, or the next generation of AI-driven investigations.
Together, they form the foundation of an observability architecture built not for limits, but for possibilities.
The Big Picture
Walking out of CriblCon 2025, one thing was clear: the era of compromise is ending.
Decoupled pipelines, simplified operations, and open data aren’t about doing less—they’re about unlocking everything teams have been holding back.
Cribl brings freedom to the pipeline. Imply makes that data instantly usable and infinitely scalable.
Watch: Eric Tschetter in conversation with Bradley Chambers of Cribl
Bradley Chambers: Welcome back to the CriblCon 2025 CriblCast news desk. My name is Bradley Chambers, back here with Eric Tschetter from Imply. Eric, welcome to CriblCon. What's been your favorite part of the day?

Eric Tschetter: Thank you. My favorite part of the day was probably the keynote. Maybe you've gotten that answer a whole bunch by now, but I really enjoyed it, especially the irreverent kind of seriousness that was mentioned, the jokes and commentary about "AI is maybe not ready, oh no, now it's ready," and all of that. I also really liked the Star Trek references. I don't know if that's all under fair use or whatnot, but I loved them.

Bradley: Irreverent seriousness is one of our core values. We take our products and our customers very seriously; we just don't take ourselves very seriously. Even Clint will be self-deprecating in our company Slack. The world is complicated enough, so let's have fun while we're at it. We're glad you all are here, and we appreciate you making the trek up here. Now, if someone listening doesn't know who Imply is and what your products do, what do you all do, and how does that fit into a company's broader observability infrastructure?

Eric: So we are Imply, and we just introduced a new product called Imply Lumi, which is what we're here at CriblCon talking about. Imply Lumi is a Splunk-compatible data layer. It's not just Splunk-compatible, it's actually much more broadly compatible, but that's the easiest way to describe it. At its core, we've implemented our own compression technology that understands the inherent structure in data and leverages that structure to compress it. We then built our own indexing technology around that, also an optimized format. The two combined mean fewer bytes on disk, but still queryable and still searchable. On top of that we have query implementations, one being SPL, others being SQL, LogQL, and so on, which let us integrate with other ecosystems. Users can put data into us, take advantage of the lower unit cost of storage, and the indexing often speeds up their queries too. But the end user doesn't have to change their workflow. They keep using the tool they've been using and just get better compression, faster queries, more good stuff.

Bradley: I was going to make a joke: did you call it Pied Piper? I don't know if you'll get the reference from the Silicon Valley TV show.

Eric: Oh, yes. The compression technology is called middle-out compression, and as a roadmap item we're trying to figure out how to make the refrigerators do the compression for us. So no, but I do get the reference.

Bradley: Now, Lumi supports federated search from within Splunk. What role does Cribl play in optimizing or simplifying that query path and getting data where it's supposed to go?

Eric: We work very nicely with Cribl. Cribl Stream can be used, of course, to do all of the data prep. A bunch of people here are talking about inlining things like GeoIP lookups, doing lookups to understand where things are coming from, and that work really has to be done close to the source, at the time the data is generated. That data then flows downstream, and we just exist as a destination. We coexist with Cribl and all of its routing infrastructure, which picks the right place for the data. One of those places is now Imply Lumi, where you can take advantage of lower unit-cost storage and queries without pulling users out of their existing workflow.

Bradley: I think that's really nice, when you can replace some of the underpinnings and take advantage of newer technology without having to relearn anything or change your company's workflows. Changing those workflows can be almost cost-prohibitive; the training involved can sometimes cost more than the savings you're seeing.

Eric: Absolutely. A technical migration and a user migration are two completely different things, and whenever you try to tightly couple them, you get really high migration costs; it becomes really expensive to move from one thing to another. We're decoupling that, letting you do the technical migration without actually migrating your users.

Bradley: What's been the customer reaction, if this is their first time learning about Imply?

Eric: There's a lot of interest; it can be overwhelming sometimes how much interest there is. It's fantastic. The other thing about working alongside Cribl is that our compression technology lets us lean into the notion of index-time extractions. Traditionally, if you do index-time extractions, you pay an added cost, expanding the size of the data on disk, in order to have those fields available. With how we compress the data, there's no significant increase in actual storage for doing index-time extractions. What that means is, if you've used Cribl to do a whole bunch of data enrichment, enhancement, and addition, there's no added cost to indexing all of it at index time, but you get all of the space and query benefits. It changes the physics a little around the trade-offs between search time and index time, and it allows a lot of those transformations to happen in the stream.

Bradley: Last question before I let you go. I know it's still crazy here; it's getting later in the day, but somehow we've multiplied people. When you're ingesting data via Cribl Stream and working with Lumi together, how do you preserve the field structure and metadata without breaking things downstream?

Eric: With our compression and the way we deal with the data, we believe it's our job to show people exactly the data that we were given. Whatever we do to figure out the structure is all internal implementation detail, worked out automatically and dynamically by the system, so we're not actually decomposing or separating the data. We can recreate the exact data we were given at any point in time.

Bradley: That's such a Cribl way to do it, because you're not breaking things and you're not locking the data in. That's the Cribl mantra: it's the customer's data. We're going to do it right, and we can give it right back to you. You're kind of taking our anti-vendor-lock-in ethos.

Eric: Yes. Can you get the benefits with really no trade-offs?

Bradley: Well, Eric, this has been great. We're so thankful you're here, and I'm glad you're getting connected with some of the industry's biggest and brightest. We're thankful you've been here, and we certainly hope to see you back next year.

Eric: Thanks for having us. It's been great to be here. We're excited; CriblCon's been an awesome place.

Bradley: Well, thanks everybody for tuning in. If you want to learn more about Imply, we'll have a link to the website in the show notes on YouTube. Again, this has been Bradley Chambers from the CriblCon 2025 CriblCast news desk, and we'll be back later with more great interviews.
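The index-time-extraction trade-off Tschetter describes in the interview can be illustrated with a toy inverted index: extracting a field once at ingest lets later queries skip scanning every event. A minimal Python sketch (the field names and data are hypothetical, and this is not Lumi's indexing format):

```python
from collections import defaultdict

# Synthetic flow-log events.
events = [
    {"id": i, "src_ip": f"10.0.0.{i % 8}", "action": "ACCEPT" if i % 4 else "REJECT"}
    for i in range(10_000)
]

# Index-time extraction: build an inverted index on src_ip once, at ingest.
ip_index = defaultdict(list)
for e in events:
    ip_index[e["src_ip"]].append(e["id"])

# Search-time path: scan every event for each query.
scan_hits = [e["id"] for e in events if e["src_ip"] == "10.0.0.3"]

# Indexed path: a single dictionary lookup instead of a full scan.
index_hits = ip_index["10.0.0.3"]
assert scan_hits == index_hits
```

The classic cost of the indexed path is the extra storage the index consumes; the interview's claim is that structure-aware compression shrinks that cost enough to make index-time extraction the default choice.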
In this short interview, Eric Tschetter, CTO and co-creator of Apache Druid, sits down with Bradley Chambers from Cribl to discuss how Imply Lumi fits into the future of observability infrastructure. They explore:
How Lumi supports federated search across hot and cold data.
How Cribl optimizes and simplifies the query path to keep observability stacks efficient.
Why preserving field structure and metadata through Cribl Stream + Lumi ensures seamless downstream workflows and interoperability.
Together, they represent the architecture modern observability has been waiting for: open, federated, and ready for whatever comes next.
Ready to See How Far Your Data Can Go?
If you’re exploring how to pair Cribl’s pipeline flexibility with a high-performance, cost-efficient query and storage layer, we’d love to show you what’s possible. Book a demo and see how Imply Lumi helps you do more with your data—no migrations, no rehydration, no limits.