Orb and Apache Druid: Building customer trust through data correctness with Kshitij Grover

Aug 22, 2023
Reena Leone
 

Real-time data has many applications, but one place where it’s extremely valuable is usage tracking, billing, and reporting. Ensuring the freshness and availability of this data is essential not only for financial success but also for something much harder to build: trust. That’s precisely why Orb chose Apache Druid and Imply as the backbone of its modern pricing platform, which spans invoicing, usage monitoring, and comprehensive reporting. On this episode, Kshitij Grover, co-founder and CTO at Orb, walks us through how Orb uses Druid, letting customers define their own metrics as SQL queries over the usage events they send in. He also gives some great advice for those just starting on their real-time data journey. Definitely worth a listen…or two!

On this episode, Kshitij Grover, co-founder and CTO of Orb, discusses how Orb uses Apache Druid and Imply in its modern pricing platform. Orb leverages real-time data and analytics to provide billing for companies with hybrid billing needs. It prioritizes flexibility by letting customers specify their own metrics as SQL queries, which are then used to generate invoices and provide insights. The decision to use Druid and Imply was driven by the need for data correctness, resilience, scalability, and real-time visibility into customer usage data. With Druid and Imply, Orb gives its customers real-time transparency into usage, helping them make data-driven decisions and build trust with their own customers. Druid’s scalability and flexibility let Orb handle growing data volumes and evolving customer needs.

Listen to this episode to learn more about:

  • How Apache Druid and Imply help Orb give its customers real-time transparency and visibility into how their own customers use their products
  • How Orb ingests hundreds of thousands of events per second and handles up to about 1,000 queries per second with subsecond performance
  • Orb’s various use cases for Druid, including serving as a key part of the ingestion pipeline, powering graphs in the Orb web app, and keeping invoices up to date, which is Orb’s core responsibility
  • How Druid scales along with Orb’s growth as a business
  • What to consider before you start evaluating real-time databases, especially if you’re a startup

Learn more

About the Author

Kshitij Grover is the co-founder and CTO at Orb, the modern pricing platform for the world’s fastest growing companies. At Orb, Kshitij focuses on scaling Orb’s data infrastructure to support the most demanding customers while providing a safe and predictable developer experience. Before starting Orb, Kshitij was an engineering leader on Asana’s infrastructure teams.

Transcript

[00:00:00.330] – Reena Leone

Welcome to Tales at Scale, a podcast that cracks open the world of analytics projects. I’m your host, Reena from Imply, and today I am joined by my colleague Julia, and we’re here to bring you stories from developers doing cool things with Apache Druid, real-time data, and analytics, but way beyond your basic BI. We are talking about analytics applications that are taking data and insights to a whole new level. Julia, thank you for co-hosting with me today.

[00:00:23.090] – Julia Brouillette

Thank you for having me. Super excited to be here.

[00:00:25.600] – Reena Leone

So, on today’s show: one of the best use cases for real-time data is usage, billing, and reporting. Data freshness and accessibility are crucial not just to bottom lines, but to building something much more difficult: customer trust. That’s why Orb has chosen Apache Druid and Imply. So, Orb is a modern pricing platform that gives businesses the ability to bill for seats, consumption, and everything in between. They’re on a mission to provide every business with the infrastructure to unlock their revenue. And joining us today to talk us through that mission and where they are with Druid is Kshitij Grover, co-founder and CTO at Orb. Kshitij, welcome to the show.

[00:01:02.720] – Kshitij Grover

Thank you. Thank you so much for having me. Really excited to have this conversation with you.

[00:01:06.570] – Reena Leone

So before we get into your data architecture, I like to start every show with a little bit about you and your background and how you got to where you are today. So can you share a little bit about your journey?

[00:01:18.170] – Kshitij Grover

Yeah. So previously, my co-founder Alvaro [Morales] and I were at Asana, and I was an engineering leader there, mostly working on the infrastructure side. We were at Asana from that company being 100 or a couple hundred people all the way to direct listing, and really got to see the growth there. And at Asana, we saw a lot of business growth, but also a lot of pains around monetization and the pricing and packaging of the company changing over time. It was really effective for the business, really effective for the fundamentals of the business and how it evolved over that time, but it did involve a lot of engineering effort and a lot of collaboration between the product and engineering teams. So getting to see that growth up close, Alvaro and I were really excited to work in this space, and that’s kind of the origin story of Orb.

[00:02:10.000] – Reena Leone

Awesome. So let’s talk a little bit about Orb and what you guys do.

[00:02:16.160] – Kshitij Grover

Yeah, of course. So, Orb is a billing platform, and we’re primarily designed for companies that have what we call hybrid billing. So some combination of subscription billing, probably on seats, and usage-based billing. Of course, the usage part can be super varied. So we work with companies in the cloud infrastructure space, the developer tooling space, and even the fintech space. So companies that have some component of metered usage, where some dynamic element of what’s being used in the application counts towards their customers’ bills. And in a lot of ways, the core responsibility of Orb is to take in this usage data and generate invoices on our customers’ behalf. And in addition to just being that billing system, Orb does a lot on top of that. So we give you analytics and insights into that data. We generate those invoices, but also deliver those invoices. So Orb has a native, built-in invoicing solution that’s really transparency-forward, again reinforcing what you mentioned earlier, that idea of customer trust. And then we’ve really leaned into the finance audience as well. So we do financial reporting for revenue recognition on top of the invoices that Orb generates.

[00:03:29.610] – Reena Leone

So you mentioned a couple key words there, data and analytics. And so I want to kind of dive into what is powering that. I know you’re here today because Orb uses Apache Druid and Imply. What kind of prompted the search that led you down this path to Druid and Imply?

[00:03:46.640] – Kshitij Grover

Yeah, it’s a great question. So I think it kind of starts with where we were and how we got to Druid and Imply. From the beginning, one of the priorities at Orb was building our billing system with flexibility in mind. So instead of taking an approach where we would try to limit the sorts of things that you could express in the billing system, the priority was: we want to let our developers and our customers specify their own metrics in Orb with as close to SQL queries as possible, and ideally just having them write their metrics in SQL. To zoom out a little bit and explain what that means, the shape of Orb is: folks send us these usage events. These usage events are schemaless, right? There’s not a fixed schema to them. Then they set up metrics within the product. These metrics, as I was just saying, ideally are just queries on top of these events. And then we take those metrics, tie them to pricing, and generate invoices from them. With that framing in mind, we started with a Postgres-based solution.

[00:04:47.780] – Reena Leone

Ahhh OK, yep…

[00:04:48.300] – Kshitij Grover

Yep, Postgres is super flexible. Everyone’s familiar with SQL syntax, and they can just write their metrics, and we’ll execute those queries on Postgres on top of the events that they send in. Postgres also happens to have great support for JSONB columns. And so the queries that you’re writing can operate on top of the schemaless data. Your events can evolve over time. I think we got a lot of the promise of flexibility on top of Postgres. Now, the reason why we’re talking, and what you might imagine, is that as we started to experience a lot of super exciting growth on the business side, we realized that Postgres wasn’t the right long-term solution specifically for this problem. And so then we started exploring other databases that might make sense for this business problem and this technical problem and how we kind of marry those. And then we started talking to the Imply team and started exploring Druid in a little more detail.
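To make the Postgres-era setup concrete, here is a minimal sketch of what a customer-defined metric over schemaless JSONB events might look like. The table, columns, and metric are hypothetical illustrations, not Orb’s actual schema or code.

```python
# Minimal sketch: schemaless events in a JSONB column, and a customer-defined
# metric expressed as a SQL aggregation over that column.
# The "events" table, "properties" column, and the metric itself are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=billing")

# Hypothetical metric: total compute-seconds per customer over a billing period,
# read out of the schemaless JSONB payload.
METRIC_SQL = """
    SELECT customer_id,
           SUM((properties ->> 'compute_seconds')::numeric) AS usage
    FROM events
    WHERE event_name = 'compute_job_finished'
      AND occurred_at >= %(period_start)s
      AND occurred_at <  %(period_end)s
    GROUP BY customer_id
"""

with conn.cursor() as cur:
    cur.execute(METRIC_SQL, {"period_start": "2023-07-01", "period_end": "2023-08-01"})
    for customer_id, usage in cur.fetchall():
        print(customer_id, usage)
```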

[00:05:47.470] – Reena Leone

I see. With Postgres, sometimes there are issues with scale, and sometimes precomputation is the challenge people run into when they kind of move from Postgres to Druid. Is that something that you were seeing?

[00:06:00.500] – Kshitij Grover

Yeah. So in terms of the options that we explored, I think this is what you’re kind of getting at. The immediate thing that we had tried jumping to, or were exploring, was: well, Snowflake can handle lots of data, and it’s obviously a very flexible system in a lot of ways. Is that something that might work for us? And I think we quickly realized, even just starting the engineering evaluation, that that would mean having to build a sort of two-tier architecture, just because Snowflake is not designed for that real-time use case. We’d have to build some set of queries that access a different database, and then some set of queries that access Snowflake to kind of pre-compute that data and cache it in a different data layer. And we were not excited about that. I think we weren’t excited about that because it didn’t feel like the native solution to our problem. And it would have been a lot of engineering complexity for a team like ours, where we really wanted to focus on, again, the billing and the finance and the reporting side, rather than spending all of our time architecting around what felt like not quite the right database.

[00:07:06.480] – Kshitij Grover

So I think we kind of quickly dismissed Snowflake in some sense. And then we started looking at databases that were really meant for real-time analytics. Right. And I think in that class we looked at, obviously, Druid, we looked at Pinot, we looked at ClickHouse, and I think we were actually pretty quickly excited about Druid for a couple different reasons. I think the first thing is we knew that correctness and database resilience matter a lot for specifically our domain and service. So an example of this is Druid has an inbuilt Kafka connector, which means that Druid can provide these exactly-once guarantees for data coming into Druid via a streaming solution like Kafka. And again, that’s the sort of thing where there are ways to work around it in other solutions, and we would have to add more complexity to figure out how to get exactly that guarantee in something like ClickHouse or Pinot. But Druid really emphasizing that and building towards that from the foundational architecture was really appealing to us, and it gave us the confidence that we’d be able to maintain our correctness guarantees for Orb on the billing side. Naturally, maybe unlike a general-purpose analytics solution, you really can’t have duplicate events in billing, because it all comes back to customer trust.
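For readers who haven’t used it, the exactly-once behavior Kshitij refers to comes from Druid’s Kafka indexing service, which is configured through a supervisor spec. The sketch below shows roughly what submitting such a spec looks like; the datasource, topic, hosts, and column names are hypothetical, and a production spec would carry more tuning detail.

```python
# Rough sketch of a Kafka ingestion supervisor spec for Druid's streaming
# ingestion. Datasource, topic, and host names are hypothetical.
import requests

supervisor_spec = {
    "type": "kafka",
    "spec": {
        "dataSchema": {
            "dataSource": "usage_events",
            "timestampSpec": {"column": "occurred_at", "format": "iso"},
            # Schema auto-discovery helps with schemaless event payloads.
            "dimensionsSpec": {"useSchemaDiscovery": True},
            "granularitySpec": {"segmentGranularity": "hour", "queryGranularity": "none"},
        },
        "ioConfig": {
            "topic": "usage-events",
            "inputFormat": {"type": "json"},
            "consumerProperties": {"bootstrap.servers": "kafka:9092"},
            "useEarliestOffset": False,
        },
        "tuningConfig": {"type": "kafka"},
    },
}

# Submit to the Overlord. Druid tracks Kafka offsets alongside segment metadata,
# which is what backs the exactly-once guarantee discussed above.
resp = requests.post(
    "http://druid-overlord:8081/druid/indexer/v1/supervisor",
    json=supervisor_spec,
)
resp.raise_for_status()
```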

[00:08:25.890] – Julia Brouillette

Definitely.

[00:08:26.340] – Reena Leone

You mentioned Kafka. I’m going to throw it over to Julia because she’s kind of our resident Kafka expert.

[00:08:32.070] – Julia Brouillette

Yeah, you’ve actually mentioned a lot of things that Druid is known for when describing your use case: the subsecond queries at scale, consistency, like you just said, high concurrency, ingesting batch and real-time data at the same time, high reliability. Those are all pillars of Druid, you could say. So could you talk about why those things were important to you? Why were those features important to your use case? What would the impact on customers have been if you didn’t have a real-time architecture, for example?

[00:09:04.960] – Kshitij Grover

Yeah, that’s a great question. I think one of the big reasons why our customers choose Orb is because it allows them that real-time transparency and visibility into how their customers are using their product. So that’s actually one of the biggest value props of a product like Orb. And so we really needed a database that was designed with that in mind. And I think when it comes to the sort of complex metrics that people set up in our product, they need to be able to move across customers and slice and dice their data really quickly. And again, having a database that was built for that was important. I’ll say a couple of other things. Given that our product and the adoption of the product was growing very quickly, it was a little bit hard to predict exactly how that would scale over time. So we needed a solution that was easy to scale out and expand our usage very nimbly. And I think that’s, for example, one of the things where you look at ClickHouse (and I’m sure some of this has changed over the last couple of years), but scaling out a cluster in ClickHouse is a little more painful than in Druid.

[00:10:11.100] – Kshitij Grover

And we’ve been very successful at just adding nodes to Druid over time and making sure that it can keep up with our use case. And it’s funny, because our use case, of course, just scales with how quickly our customers are growing. So it’s a very good problem to have, in some sense, where our customers’ utilization is growing, we’re growing, and Druid can keep up very well with that. And similarly, one of the things you just mentioned was high concurrency. So today, to just throw some numbers in there, as a service we’re ingesting hundreds of thousands of events a second, and we have capacity up to, I’d say, about 1,000 queries per second. And I think that sort of concurrency, while maintaining subsecond queries, was really attractive to us, and it’s worked really well.

[00:10:54.050] – Julia Brouillette

That’s awesome to hear. And I actually wanted to talk about the couple of different use cases you touched on. You were talking about generating invoices, but also giving your customers that real-time insight through dashboards and such. Can you talk about which use cases Druid and Imply are powering specifically, and also the role of Kafka in that?

[00:11:18.030] – Kshitij Grover

Yeah, of course. So to take you through the ingestion pipeline, it’s actually pretty much as simple as what you would expect. We have events coming into our API layer. Those get ingested into Kafka as our streaming solution, or our message bus. And then we also have a deduplication layer where, again, because that’s so important in the world of billing, we actually use AWS’s MemoryDB in order to deduplicate events and provide an idempotency guarantee. And then finally, downstream of that, those events make it into Druid. And so one thing that’s kind of interesting about that ingestion pipeline, and where Druid sits in our architecture, is that in the ingestion pipeline we actually keep a precise count of how many events should make it to Druid at any given time, or for any given time period. And so we have quite a bit of monitoring, actually, to make sure that exactly that many events land in Druid. And if there’s any failure mode in the ingestion pipeline, maybe a node goes down and resends an event, even if that’s not a duplicate caused by Druid, that could still mean that there are multiple events that match each other downstream in Druid.
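As an illustration of the deduplication step described above (not Orb’s actual code), a Redis-compatible store such as AWS MemoryDB can be used to claim an idempotency key per event and to keep the per-window counts that downstream monitoring checks against Druid. Key names and the TTL here are made up for the example.

```python
# Illustrative dedup + reconciliation-count sketch in front of Kafka/Druid.
# Key names, TTL, and field names are hypothetical.
import redis

r = redis.Redis(host="memorydb.example.internal", port=6379)

def accept_event(event: dict) -> bool:
    """Return True if the event is new and should be forwarded downstream."""
    key = f"idem:{event['customer_id']}:{event['idempotency_key']}"
    # SET NX only succeeds for the first writer of this key, so retried or
    # resent events are dropped instead of double-counted on an invoice.
    is_new = r.set(key, 1, nx=True, ex=7 * 24 * 3600)
    if is_new:
        # Track the expected event count for this hour so monitoring can verify
        # that exactly this many rows eventually land in Druid.
        window = event["occurred_at"][:13]  # e.g. "2023-08-22T14"
        r.incr(f"expected:{event['customer_id']}:{window}")
    return bool(is_new)
```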

[00:12:26.210] – Kshitij Grover

And so that monitoring lets us basically ensure that we never finalize an invoice, as we call it. We never issue an invoice where the underlying data has duplicates. So we’re able to catch that and make sure that we’re still maintaining the correctness of our service. But to your question of what exactly we do with this data and how it functions in the product: Druid powers what I think is our core responsibility, which is keeping invoices up to date. So with all these events in Druid, we’re executing the queries that customers have defined, which power their line items and the quantities on those line items. And so we’re constantly keeping our customers’ customers’ invoices up to date with the data that’s in Druid. And then we also have a bunch of other use cases that are a little more auxiliary to that, like being able to provide alerting on your customers’ usage to you. So when a customer’s usage hits a threshold, you might want to know, either internally to alert your own teams or maybe even to alert your users: hey, you’ve hit 1,000 compute units and you might want to upgrade your allocation, for example.

[00:13:30.290] – Kshitij Grover

And then naturally, as you were saying, all of the graphs in the product itself in the Orb Web app are powered by Druid and the queries we execute on Druid.
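To give a flavor of how the invoice, alerting, and dashboard use cases sit on top of Druid, here is a minimal sketch of a usage query issued through Druid’s SQL API. The datasource, column names, threshold, and broker host are hypothetical.

```python
# Minimal sketch of querying usage out of Druid's SQL API; this is the kind of
# query that could back an invoice line item or a usage-threshold alert.
import requests

DRUID_SQL_URL = "http://druid-broker:8082/druid/v2/sql"

query = """
SELECT SUM("compute_seconds") AS usage
FROM "usage_events"
WHERE "customer_id" = ?
  AND "__time" >= TIMESTAMP '2023-08-01'
"""

resp = requests.post(
    DRUID_SQL_URL,
    json={"query": query, "parameters": [{"type": "VARCHAR", "value": "cus_123"}]},
)
resp.raise_for_status()
usage = resp.json()[0]["usage"]

# Example: fire an internal alert once the customer crosses a plan threshold.
if usage is not None and usage >= 1000:
    print("customer cus_123 has exceeded 1,000 compute units")
```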

[00:13:40.320] – Reena Leone

So we talked about the benefits of Druid, but I got to ask, you chose Druid with Imply as your path. Why did you decide to go with Imply and Druid instead of just open source Druid?

[00:13:52.230] – Kshitij Grover

Yeah, that’s a good question. And I think, actually, going back to our evaluation, it was a big part of it. So looking at solutions like ClickHouse, at least at the time, there wasn’t a very mature managed service around ClickHouse, and we saw that Imply had been working with Druid for a long time. Obviously, Imply also has a lot of the creators of Druid, and so we were excited about working with a really experienced team, whatever data store we chose. And one of the things that we get working with a cloud provider like AWS is that mission-critical support. And if we were going to go outside of AWS because we needed this sort of specialized time-series events data store, we wanted to make sure that we could match that level of mission-critical support. So I think that support piece was one of the big pieces of it. I think there are a couple of other pieces. Again, because we’re early in our growth cycle and we’re growing so quickly, we wanted to make sure that we had a team that could help give us recommendations on how, as our workload matured and changed, we’d be able to tune the cluster to those specifications.

[00:14:59.380] – Kshitij Grover

And then finally, I’d say, Imply actually has a lot of great resources around observability into the Druid cluster that you don’t get out of the box if you run it on your own. And that’s the sort of thing where the more observability we have, the better we can tailor our cluster to our use case.

[00:15:16.790] – Reena Leone

We love observability here, for sure. I feel like we talk a lot about it on this show, actually.

[00:15:23.750] – Kshitij Grover

Yeah. And I believe the product is called Clarity, if I’m not mistaken. I think whether it’s been on support calls with the Imply team, or even as our engineering team gets more used to using it effectively, it’s been really useful to understand what exactly our query pattern is, how we can make sure that the sorts of queries we’re running on Druid are performant, and that we’re not missing anything about the initial setup. For example, pretty recently we realized that in one case we were running a huge backfill through Druid’s real-time ingestion service, and it happened to work and nothing really fell over. But something Clarity taught us is: great, here’s how that’s actually affecting the rest of the queries on the cluster. And the Imply team was able to provide a better recommendation of using batch ingestion for specific types of workloads, right? So that’s an example of, even if it’s not monitoring or alarms that go off, Clarity can give us the insight to make sure that we’re using Druid correctly as our use cases expand.

[00:16:29.330] – Reena Leone

So I like to keep a balanced show, so I always like to dive into any challenges you might have run into and how you solved them, because whether you’re dealing with Druid and Imply or open source Druid, inevitably challenges come up and you’ve got to figure it out. So have you run into any of those? And can you share your solutions?

[00:16:49.240] – Kshitij Grover

Yeah, of course. So I think one of the big aspects, again, of adopting Druid was ensuring that the correctness guarantees that we want to offer our customers were available with whatever solution we picked. And when we picked Druid, we wanted to make sure that was true. So the process I was describing around deduplication did require us to build some more monitoring and process and infrastructure on our end, to make sure that any time there’s a duplicate we can alert and pause the finalization, or the issuing, of invoices. And today that does still involve a little bit of manual process. Right. We have to ensure that when we get alerted of a duplicate, we pause real-time ingestion, we run Druid’s built-in compaction task, we make sure that duplicate is gone, and then we restart ingestion. And again, it’s not a huge lift, but it is the sort of thing where being really aware of what correctness guarantees your data store is providing you is actually really useful, because then you can build these workarounds. And actually, in that specific case, I know, or I believe, that compaction of real-time data is coming on the Druid roadmap.
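The manual remediation loop Kshitij describes maps onto a few standard Druid APIs: suspend the streaming supervisor, run a compaction task over the affected interval, then resume. Here is a rough sketch under assumed names (supervisor id, datasource, interval are hypothetical); whether compaction actually removes a duplicate depends on how the datasource is modeled, for example whether rollup collapses matching rows.

```python
# Illustrative remediation sketch: pause streaming ingestion, compact the
# affected interval, then resume. Hosts, ids, and intervals are hypothetical.
import requests

OVERLORD = "http://druid-overlord:8081"
SUPERVISOR_ID = "usage_events"

# 1. Suspend the streaming supervisor so no new real-time segments are created
#    for the interval we are about to rewrite.
requests.post(f"{OVERLORD}/druid/indexer/v1/supervisor/{SUPERVISOR_ID}/suspend").raise_for_status()

# 2. Submit a compaction task over the interval that contains the duplicate.
compaction_task = {
    "type": "compact",
    "dataSource": "usage_events",
    "ioConfig": {
        "type": "compact",
        "inputSpec": {"type": "interval", "interval": "2023-08-22/2023-08-23"},
    },
}
requests.post(f"{OVERLORD}/druid/indexer/v1/task", json=compaction_task).raise_for_status()

# 3. Once the task succeeds and monitoring confirms the duplicate is gone,
#    resume streaming ingestion.
requests.post(f"{OVERLORD}/druid/indexer/v1/supervisor/{SUPERVISOR_ID}/resume").raise_for_status()
```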

[00:17:58.470] – Kshitij Grover

So we’re obviously excited about that, and it will let us eliminate one piece of manual process. The other thing that I think is interesting to add here is that one of the strong points of Druid’s architecture is that you’re able to add nodes. It’s very resilient. Anything can go down as long as your deep storage is up. But the trade-off that comes with is that it’s a more complex architecture. There are different node types. You have to think about the movement of data in a little bit more of a sophisticated way. And I think one of the things that initially gave us a little bit of hesitation was the operations of that architecture: our team being able to understand how to deal with the failure modes, how to understand all the different moving parts and concepts. And I think, you know, part of the solution here was that everyone on the team read the white paper and got a good understanding of the different processes. But also, one of the things I was really excited about was we got Vadim, one of the co-founders of Imply and obviously very familiar with Druid, to come and give a technical talk at our offices, and I think that was really great.

[00:19:00.030] – Kshitij Grover

And again, that’s where working with a team like Imply is really helpful, where we don’t just have to rely on documentation; we can have someone give us an overview of Druid that’s specialized to our use case and our business use case as well.

[00:19:14.930] – Reena Leone

Vadim is fantastic and actually technically he is the very first Druid user.

[00:19:21.020] – Kshitij Grover

Yeah, I heard that. That was pretty fun.

[00:19:23.830] – Reena Leone

You mentioned a couple of things about future vision, which is a good segue into my next question: is there anything on the upcoming Druid roadmap, or anything on your wish list, that you’d like to see in Druid?

[00:19:34.240] – Kshitij Grover

Yeah, I mean, if we can have it all, we want it all, right? I think the thing that Druid does really well is it gives us a flexible SQL query layer to query over these schemaless events very, very quickly, and it makes sure that that scales very well and is very resilient. And so I think the parts that we’d like more of, honestly, are just deepening each of those aspects. So, for example, in the SQL query layer, we’re excited for window function support.

[00:20:02.450] – Reena Leone

Yep, yep, we hear that!

[00:20:03.320] – Kshitij Grover

We have companies that could benefit from that in their metrics. And I think it’s the sort of thing where, of course, we can build more query semantics on top of the queries that we’re issuing to Druid, but the less we need to in that regard, the better. Another example of this is in the billing domain. What you might imagine is that we’re oftentimes querying mostly for the last, let’s say, 30 days of data, just because most people are billing on a monthly cycle, or maybe it’s the last quarter of data. And so one of the things that we’re really excited about on that front is being able to query from a cold tier, where if you have billing data or usage events that are a couple of years old, you’re probably accessing them very infrequently, and we’re probably okay trading off some query time in the product as well as in any of the analytics features we offer. And so that’s the sort of thing where having that built into the Druid architecture means that we don’t need to manually do that archiving of data or offload it from the cluster. We don’t need to reroute queries from one place to another.

[00:21:10.700] – Kshitij Grover

Druid will do that for us. So that’s just another one of those things that I know is on the roadmap, I know is coming soon, and that we’re increasingly excited about. And the last thing is what I mentioned before, which is compaction over those real-time segments, because right now, if we want to compact over the current day, we have to stop ingestion. We’ve built enough stuff upstream of Druid that that’s okay, but again, that would be helpful.
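To illustrate the window-function wish concretely, here is the kind of running-total metric a customer might want to express directly in Druid SQL once window functions are available; the datasource and column names are hypothetical.

```python
# Sketch of a window-function metric: a per-customer running total of usage by
# day across a billing period. Hypothetical datasource and columns; assumes a
# Druid version where SQL window functions are available.
RUNNING_TOTAL_SQL = """
SELECT
  "customer_id",
  TIME_FLOOR("__time", 'P1D') AS "day",
  SUM("compute_seconds") AS "daily_usage",
  SUM(SUM("compute_seconds")) OVER (
    PARTITION BY "customer_id"
    ORDER BY TIME_FLOOR("__time", 'P1D')
  ) AS "running_usage"
FROM "usage_events"
WHERE "__time" >= TIMESTAMP '2023-08-01'
GROUP BY "customer_id", TIME_FLOOR("__time", 'P1D')
"""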

[00:21:35.190] – Reena Leone

I swear I don’t ask this question to put pressure on the community, but I feel like everybody wants window functions and cold tier.

[00:21:43.310] – Reena Leone

They are very popular!

[00:21:44.740] – Julia Brouillette

The popular kids!

[00:21:46.180] – Kshitij Grover

And actually I think that’s given us a lot of comfort because the sorts of things we are looking for are very aligned with what Druid and Imply are building towards. And I think that’s nice because it almost makes us feel like we’re in the right place, we have the right use case, we’re using the product that’s kind of directionally designed for the sorts of problems we’re trying to solve.

[00:22:07.520] – Julia Brouillette

That’s huge. Yeah. Thank you for sharing that journey, Orb’s story and your journey to finding Druid. I think it’s really inspirational. And to that end, I wanted to ask if you have any advice or words of wisdom for folks who may be at square one of kind of where you started, just starting to define those real time requirements and looking for solutions. What would be your advice to them?

[00:22:31.390] – Kshitij Grover

Yeah, I think there are a lot of these generic guides on the tradeoffs between ClickHouse and Snowflake and Postgres and Druid. And the thing that I’d emphasize, especially if you’re a company that doesn’t have a super mature data infrastructure stack already, is starting from the business requirements, starting from what your customers care about. As I said, for Orb that’s correctness, first and foremost, because customer trust is a big part of Orb’s value, but then of course it’s also the real-timeness and the data volume. Really tease out what’s most important to your customers. And then from there, try to understand, okay, for each of the data stores we’re evaluating, what correctness guarantees do they offer, or what properties do they have that align with what we’re looking for? Maybe to summarize that: don’t try to evaluate these data stores in isolation for which is the best one, because what the best one is will vary with your use case. It’s oftentimes going to be very time-dependent, right? What you’re looking for in the next couple of years might very well change five years from now, or seven years from now.

[00:23:45.100] – Kshitij Grover

And I think on that topic, I’d say if you’re an early company, one of the pieces of advice that was given to me a couple of years ago is: aim for the architecture that doubles your lifespan. Right? So if you’ve been around for two years, aim for the architecture that buys you another two to three years. Don’t try to plan five, seven, ten years ahead, because your requirements are going to change and what you want out of the solution is going to change.

[00:24:09.330] – Reena Leone

Awesome. That’s such good advice, because so many people talk about future-proofing and they’re looking so far out. But especially if you are evaluating open source technologies, there are always going to be changes to the technology itself. New players enter the market. Nothing in our industry really stays the same.

[00:24:28.950] – Kshitij Grover

Yeah, exactly. And I think, in fact, Druid is a good example of that. It has, in my view, changed a lot of the ecosystem pretty significantly. Maybe two to four years ago, there weren’t that many solutions that could handle real-time analytics at scale. And I think Druid is an example of, you know, if you were to really build everything around an existing solution, and then something like Druid comes along and you realize, well, actually this provides a lot of the properties we’re looking for, you don’t want to be stuck in a position where it’s extremely hard to switch, or where you’ve made some irreversible constraints around your existing solution, like…

[00:25:08.440] – Reena Leone

Investing heavily in Snowflake. Because once you get into Snowflake, then that’s kind of it.

[00:25:13.100] – Kshitij Grover

Yeah. And actually, I think that’s an interesting point, in the sense that if you’re an engineering team that is small, or just generally has a lot of other product roadmap to build (which is basically everyone), I think you really want to be careful about where your innovation tokens go. Right. I think there are some companies where it makes sense to build your own data store. Obviously, Druid came out of Metamarkets, which built its own data store. But I think be really cognizant of: is this something where you want to spend a lot of roadmap time? How deep or important a solution is this for your business, and what are the consequences of that in terms of how much engineering effort you want to dedicate to it? So I think in our case, we’re very happy dedicating engineering effort to scaling and thinking about data store guarantees, building around Druid. But we’re very glad that we can use a solution like Druid to get us a lot of the fundamentals.

[00:26:12.780] – Reena Leone

This has been amazing to hear, and it’s so exciting for your company. I know you guys are relatively new, and it’s great to see so much success so soon, especially with Druid helping you get there. It’s been fantastic to talk to you today. Kshitij, thank you so much for joining us.

[00:26:27.840] – Kshitij Grover

Yeah, and thank you for having me. I really enjoyed this.

[00:26:30.810] – Reena Leone

And thank you, Julia, for joining me today in a new format where you’re my co-host.

[00:26:36.570] – Julia Brouillette

Thanks, Reena. This was really fun. Thank you, Kshitij. Yeah, this was awesome.

[00:26:41.240] – Reena Leone

Teamwork makes the dream work.

[00:26:43.040] – Julia Brouillette

Exactly.

[00:26:43.860] – Reena Leone

If you’d like to learn more about Orb, please visit www.withorb.com. And to learn more about Apache Druid, you can always visit druid.apache.org. And to get more info on Imply, you can visit imply.io. Until next time, keep it real.
