The importance of real-time data continues to grow. Companies recognize the value of collecting data in real time; however, they can struggle to identify who actually needs it and why. Product managers, sales, and marketing teams often express a need for real-time data, but it’s crucial to understand their actual requirements.
Real-time data goes hand-in-hand with customer-facing analytics, operational visibility, real-time decisioning and more. When dealing with real-time data, companies need to carefully consider their architecture to avoid creating systems that are either too simple or too complex. It’s all about balancing performance and flexibility while avoiding potential bottlenecks, and removing limitations so that developers can build and innovate faster.
We hear a lot about data-driven culture, but it’s essential for organizations to prioritize the right data, the right strategies, and the right tech. Listen to the podcast to learn more about:
- Who needs real-time data and why it’s important
- What challenges companies are facing when harnessing the power of real-time data and how they can overcome them
- How real-time data can build trust with customers
- Architecture considerations for building out real-time systems and applications
- Examples of companies leveraging real-time data in the right way
Learn more
- Druid Summit 2022 Keynote: Real time Analytics in Modern SaaS Applications with Gwen Shapira
- Build Real-Time Analytics for your Kafka Data
- Top 7 Questions about Kafka and Druid
About the guest
Gwen Shapira is a co-founder and CPO of Nile (thenile.dev). She has 20+ years of experience working with code and customers to build reliable and scalable data architectures, most recently as head of the Cloud Native Kafka engineering org at Confluent. Gwen is a committer to Apache Kafka and a co-author of “Kafka: The Definitive Guide” and “Hadoop Application Architectures.” You can find her speaking at tech conferences or talking about data at the SaaS Developer Community.
Transcript
Gwen, welcome. You’ve literally written the book on Hadoop architectures and Apache Kafka, and fast forward to today, you’re the co-founder and CPO of Nile. That’s quite a journey.
[00:01:14.960] – Speaker 2
I know, right? I would say that my entire career has been trying to solve one problem and then realizing that by solving that problem, a new problem is created in the world, and now I want to solve this other problem. So, I don’t know, it’s kind of a tech-driven, progressive career path, one may say. I worked on Hadoop, and a lot of the effort and the pain that I’ve seen with Hadoop was: how do we really get data into it, and how do we manage the data inside it? So Kafka was such a great solution. I moved to work on Kafka because it solved this serious issue of moving data around the organization, which turned out to be a really big problem in its own right. And then, working on Kafka, I joined Confluent. We went from mostly just writing the software and having other people run it, to running Kafka as a managed service. And I was really exposed to the extremely high expectations that customers have of managed services: the user experience they need, how desperately they want and need someone else to manage the service, but also how hard it is for them to start trusting this managed service, being okay with its limitations and failure modes, being okay with its pricing, and using it in the way it was intended.
[00:02:43.550] – Speaker 2
All this was just a whole new world. And problems that you’re not deeply familiar with always seem easier than they actually are. So we were like, okay, we need to run Kafka as a service. We know how to run Kafka. It’s not that hard. If we have, let’s say, eight engineers and we give it six months, a year tops, we’re probably done. And then it’s three years later, four years later, we have 200 engineers, we have 300 engineers. Are we done yet? Well, it’s not bad. Now one could call it a nice managed service. We are proud of it. But is it done? Not quite. So we really discovered that it’s a huge space, and when you look outside, so many companies have to build the same thing. It’s Confluent; I’m sure Imply has a managed service and had to do a lot of the same things. MongoDB, same story. Snowflake, similar story. Everyone has to go through this experience, and you don’t even know what the thing you’re building is. When you just start, you only see the tip of the iceberg. So yeah, I became deeply interested in what this modern software-as-a-service experience that customers expect really is, and what the core technologies are that will enable this kind of experience.
[00:04:06.770] – Speaker 2
And this is where we are today.
[00:04:09.330] – Speaker 1
Oh no, I know what you mean. It’s like you start on one project, you think it’s going to go one way, and it never goes quite the way you think, or the scale of it just continues to increase. So what you’re telling me is that your company today is all about providing a modern SaaS platform. And I would imagine with that comes a lot of data needs, and a lot of unique data needs, depending on the companies you’re working with and their use cases. And the theme of today is real-time data, which I think really plays into the modern SaaS space. Can you tell me a little bit about what real time means to you, or how you’re seeing it defined by the companies that you’re working with?
[00:04:50.380] – Speaker 2
Yes, this is super interesting, because you really tend to see that companies have a lot of data. They are aware that collecting it is valuable, but they’re a bit fuzzy on valuable to whom, who can use it, and how they will use it. And there’s a real need in the world, I feel, to really clarify the use cases, so that when you build a system, or, if you’re a startup, when you prioritize what you’re going to do with your data and how you’re going to build things, you can approach it in the right way. So we’re talking about real-time data. The thing that normally shows up first and foremost in a SaaS system is usage. How are people using my product? The clicks, the views, the scrolls, all those kinds of things are normally what people think about as usage data. And there are a lot of different types of people who are interested in this usage data. There are obviously product managers, and if you have sales, if you have marketing, they want to see reports. If it’s marketing, they want to see conversion go up. If it’s product managers, they want the clicks on their favorite feature to go up, all those things.
[00:06:12.180] – Speaker 2
This is valuable and useful. But if they tell you they need the data in real time, you kind of wonder, why? Are you going to run new marketing campaigns in real time? Are you going to prioritize features in real time? Or do you only do sprint planning once a week and bigger planning every quarter? Does it actually make sense? And when you ask them, you find out that they really just want to see more or less new data when they refresh the page with the dashboard, so they can be very flexible about their real-time requirement. So you really need to drill down: what do you actually need? Because real time can mean, is my query returning fast enough? It can mean, is the data really, really fresh? And it can also mean, is the data coming in at a very high scale, and if there’s a burst of traffic on my website, will my system keep up? Keeping up without slowing everyone else down is yet another sense of real time. So you have all those things conflated, and of course, there’s the question of how much querying capability you need.
[00:07:23.550] – Speaker 2
Can you pre-calculate some stuff, or do you need extremely flexible slicing and dicing? Product managers tend to want a lot of flexibility, but they don’t need extreme freshness. So that’s one thing you solve for. At the other extreme, your customers themselves use your product, and a lot of times they also need information about their own usage. Sometimes it’s for things like billing: I want to know how my usage is trending so I know how much I’m going to pay at the end of the month. This is one important sense in which real time matters. And note that slicing and dicing is not really an issue here. You know exactly what the one number your customers want to know is, and you can pre-calculate it. But it has to be accurate, or they’ll be extremely upset. And it has to be fairly fresh: if they have a usage spike and you don’t show it and then they’re surprised by their bill, they’re not going to be very happy about it. So this is almost the other end of the spectrum.
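To make the pre-calculation idea concrete, here is a minimal sketch in Python, assuming a simple per-customer counter that is updated as usage events arrive; the names and the pricing model are hypothetical, not anything described on the show.

```python
# Minimal sketch of the "one pre-calculated number" pattern for usage-based
# billing: aggregate at ingest time so the customer's cost-to-date is always
# accurate and fresh. All names and the pricing model are hypothetical.
from collections import defaultdict

usage_units = defaultdict(int)  # running per-customer total for the month

def record_usage(customer_id: str, units: int) -> None:
    # Update the running total as events arrive, instead of scanning
    # raw events at query time.
    usage_units[customer_id] += units

def bill_estimate(customer_id: str, price_per_unit: float) -> float:
    # The single number the customer actually wants: no slicing and
    # dicing required, but it must be accurate and fresh.
    return usage_units[customer_id] * price_per_unit

record_usage("acme", 120)
record_usage("acme", 80)
print(bill_estimate("acme", 0.02))  # 4.0
```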
[00:08:29.510] – Speaker 2
Even if you don’t have usage-based billing, a lot of product behaviors in modern SaaS are driven by usage data. Even hints like: hey, it looks like you haven’t used this feature since we released it three weeks ago, but we believe, based on our data, that it’s going to be useful for you, here’s why, here’s a hint, here’s a video you can watch. All those things, coming in the product at the right time, to the right person, are worth 500 marketing campaigns, obviously, because you are targeting a person who can actually do something about this information at the relevant time. And this is so powerful. Prioritizing that over the product manager himself seeing some metrics about the usage of the product, there is no comparison as to which is more useful for getting the product adopted, right? But it really depends on really fresh data. If you catch a user at a bad time, you’re merely annoying them. You’re no longer providing them useful information; now you’re just the annoying salesperson that they don’t really want to talk to anyway. So it’s extremely important to get it right. And in the product itself, freshness and speed beat flexibility by several miles.
[00:09:52.980] – Speaker 2
Now, you do need flexibility. The developers who build your products need to be able to actually do the things they need to do. But you don’t need to give tons of flexibility to the user. Again, unless this is specific to [word I can’t tell], there’s so much value without this level of flexibility. So you kind of need to know who you’re targeting. Otherwise you are basically building the wrong system, or you’re building one system that’s supposed to do everything and therefore makes nobody happy.
[00:10:18.760] – Speaker 1
You bring that up, and I was going to ask you: that’s kind of the right way to use the data, but what gaps are you seeing? Where are folks struggling, whether with where to get started or with how to implement this data-driven culture in the right way?
[00:10:36.350] – Speaker 2
I think maybe it’s a startup kind of thinking, but a lot of times culture really starts from the top, and at the end of the day, the data culture will align toward what the CEO asks for. If the CEO asks for a lot of sales reports, a lot of funnel analysis, all of it internal, then a lot of the effort will go to internal stuff. On the other hand, the CEO might say, okay, our strategy is product-led growth, which is, I think, one of the hottest things in the Valley these days. So we’re doing product-led growth, which means we don’t actually have that many salespeople, so it’s much more important that the product sells itself than to have another sales report. So now you start prioritizing: okay, how will the product sell itself? It should have good information about usage, about inefficiencies, about opportunities. It has to prioritize giving the customers, the only ones who can really make a choice to buy more, the right information at the right time. And I always bring up this example, but it’s just so good: Confluent Cloud. One of the features that everyone always wanted was an easy ability to scale a cluster up and down, and then we built it, but for safety, we built in a load indicator.
[00:12:06.500] – Speaker 2
So if you try to scale the cluster, you can see how loaded the current cluster is, and we suggest, hey, expand the cluster if it’s overloaded; and if you try to shrink it, we say, hey, the cluster is kind of loaded, it’s too dangerous, maybe you shouldn’t shrink it. It’s so useful, because with this information you can see when you need to buy more, and it’s an objective, trusted resource: it’s the actual data on the cluster. It’s not a salesperson telling me to buy more. I have the right information, I can make the right decision, and I have it at the right time. When I’m operating a cluster, I get the information that it’s dangerous to shrink the cluster because it’s loaded exactly when I’m about to shrink it, when I’m thinking about shrinking it. So this was very powerful, and at the end of the day, when we shipped this feature, people were far more excited about the load indicator and the advice we gave than about the actual capability to shrink and expand.
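A minimal sketch of that guardrail pattern, assuming a single load metric and an invented 70% threshold; this is illustrative only, not Confluent’s actual logic.

```python
# Sketch of the guardrail pattern described above: before a customer
# resizes a cluster, surface its current load and advise accordingly.
# The threshold and the metric are invented for illustration.

def resize_advice(load_pct: float, action: str) -> str:
    if action == "shrink":
        if load_pct >= 70:
            return "Cluster is heavily loaded; shrinking now is risky."
        return "Load is low enough that shrinking looks safe."
    if action == "expand":
        if load_pct >= 70:
            return "Cluster is overloaded; expanding is recommended."
        return "Cluster has headroom; expanding is optional."
    raise ValueError(f"unknown action: {action}")

print(resize_advice(85.0, "shrink"))  # warns before a dangerous shrink
```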
[00:13:10.130] – Speaker 1
It’s nice to talk about the other side of data, because a lot of times when we talk about real-time data usage especially, we’re thinking more of customer-facing analytics. And yeah, there is some of that, but thinking about how it can actually drive product strategy and feature implementation internally, using data in that way, and the way that you’re talking about software basically selling itself, is something I don’t think I’ve ever really thought about. But I have definitely been the person who gets excited about that one tiny new feature that’s exactly what I wanted. So it’s always nice to know that’s what’s going on on the other side. Now, in terms of these real-time architectures: to get that level of insight, you have to process the data that’s coming in, and how would you go about doing that? Say you’re looking at everyone who’s clicking on features in your product. How are you building ways to slice and dice that data, to build those visualizations, so that you can act on them?
[00:14:11.250] – Speaker 2
That’s a fantastic question, because there are just so many different ways to get things wrong, or to get them right temporarily but then build yourself into a corner that you don’t really want to be in. It’s always hard to balance. We do not want to overcomplicate our architectures, but if it’s simple but extremely dangerous, we don’t want that either. So we need to balance those two forces. And you definitely see both: architectures that are far too complex, and architectures that are far too simple. On the too-simple side of the scale, we see people who basically say, hey, I have a production database over here, Postgres is fine, why won’t I just record every click, every action the user takes, to this Postgres? What they don’t realize is that the normal use case of a relational database is different. In normal production use, you’re looking for this one row, right? Edit this record, get me this very specific set of records. You have a lot more reads than writes, so you rely heavily on indexes, slowing down writes a bit in order to optimize for reads. Your queries should return in microseconds and not scan huge amounts of data.
[00:15:34.540] – Speaker 2
Those are all good patterns that you build for. On the other side, if you’re an analytics engine, what you actually want for this real-time data is extremely fast ingest, with few indexes, or with indexes that are really optimized for specific analytics-related things, and you actually want to enable the analytics capabilities, so the reads themselves are more about slicing, dicing, and flexible things around that. And again, you don’t usually have a lot of preprocessing steps. So if you end up with a database that is a bit for this and a bit for that, even if it’s actually fairly fast at first, you will run into a dead end: this query is taking too long, but if I optimize it, then the other thing will break, and now what do I do? And the opposite is also true, if you say, I’m going to do everything on my data warehouse. Your data warehouse is also not built to serve your application, so it’s not like you can take that system and build everything with it. It’s very often not even owned by a production organization, and it doesn’t have the same uptime guarantees.
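As an illustration of the two access patterns being contrasted here, below are hypothetical queries, held as SQL strings in Python; the table and column names are invented.

```python
# Illustrative only: the OLTP pattern vs. the analytics pattern.
# Table and column names are hypothetical.

OLTP_QUERY = """
SELECT * FROM orders WHERE order_id = 12345;
-- one row via an index; returns in microseconds
"""

ANALYTICS_QUERY = """
SELECT feature, COUNT(*) AS clicks
FROM click_events
WHERE ts > now() - interval '1 hour'
GROUP BY feature
ORDER BY clicks DESC;
-- scans and aggregates millions of rows
"""

# The tension: every index added to keep queries like the first one fast
# makes each of the thousands-per-second INSERTs feeding the second slower.
```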
[00:16:50.030] – Speaker 1
That’s actually what we talk about sometimes with Apache Druid: finding this unique in-between use case that takes the best of what you can get from a data warehouse or a data lake, but is optimized for analytics applications, so you get subsecond query speed, you can deal with extremely large data sets that are likely coming in from streaming in real time, and you can handle scale and high concurrency. Like, if you’re getting data from an application that has hundreds of thousands of users who could all be hitting the system, with all that data pouring in, without it messing anything up. So it’s interesting that you brought up Postgres on one hand, as a good example, and a data warehouse on the other side, and how there’s not really one database that serves everything. For this particular use case you need something that can deal with data at large scale, that can handle the speed you need to slice and dice, and that can query extremely fast, so you can act on that data just like we talked about.
[00:18:01.560] – Speaker 2
Exactly. At the end of the day, as much as we would love to simplify the system so we’d have one database that does everything, we have different use cases, and they have different requirements. And we haven’t really seen a good option for optimizing a single system for all those requirements. So either you drop requirements, you just say, okay, this kind of thing I’m not going to do, which is kind of hard because your business needs it, or you say, okay, I am going to have all those things: how do I structure a data architecture that is reasonable for me to manage, but still gives me all those capabilities that I really need? I do feel like this is the game that all of us are playing today, and it’s not just about data engineers. A lot of the time you think, okay, I have to create a data platform, let’s find some data engineers. But I think that when it’s something the application relies on, the software engineers who build the application are probably the right people to design the platforms that their application will run on, as opposed to business-type reporting, which is something that data science, data analytics, and data engineering organizations typically do a very good job of owning.
[00:19:24.120] – Speaker 2
And dividing the data systems in that way usually works very well.
[00:19:29.580] – Speaker 1
Well, that makes sense, because I feel like if you’re talking about the more traditional data analyst role, they’re probably dealing with BI tools and things of that nature, versus these true analytics applications that are dealing with higher ingestion rates of data. I think you talked in your presentation about using a data lake or a data lakehouse, and how you could use something like reverse ETL, but that increases latency. There’s a give and take for everything. So if you were designing a system, what would you say the ideal state would be? Like, okay, this is the way to do it, right?
[00:20:09.250] – Speaker 2
I’m trying to think of a way to talk through an architecture that will make sense in a podcast, where I don’t have a slide or a whiteboard or anything. So I may sound a bit overly generic, but generally speaking, you are trying to find the shortest path between different points. Because you’re a software engineer building an application, do you really need all those pipelines? Sometimes you do, sometimes you don’t. Could your application write to your analytics DB directly? Maybe, maybe not. I’ve heard that some of them allow it, some don’t; I don’t even know which side Druid is on. If not, it can definitely write directly to Kafka. So your application itself is writing some data to Kafka and some data to your relational database. The data from Kafka, can it go directly into your analytics system, or do you need preprocessing? If you can avoid preprocessing and do all the processing in the analytics system, you just lost another step. So really look into the possibilities for taking stuff out of the system while still meeting all your requirements, and without backing yourself into a spot that you will not be able to live with.
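A minimal sketch of that shortest path, assuming the application emits click events straight to Kafka with the confluent-kafka client, and the analytics database ingests from the topic directly; the broker address, topic name, and event schema are placeholders.

```python
# Sketch: the application writes usage events straight to Kafka, with no
# intermediate pipeline; a system like Druid can then ingest the topic
# directly. Broker address, topic name, and schema are placeholders.
import json
import time
from confluent_kafka import Producer  # pip install confluent-kafka

producer = Producer({"bootstrap.servers": "localhost:9092"})

def emit_click(user_id: str, feature: str) -> None:
    event = {"user_id": user_id, "feature": feature, "ts": time.time()}
    # Key by user so each user's events stay ordered within a partition.
    producer.produce("usage-events", key=user_id, value=json.dumps(event))

emit_click("user-42", "load-indicator")
producer.flush()  # block until the event is actually delivered
```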
[00:21:33.390] – Speaker 2
And really read a lot of best practices about the systems that you’re using, because a lot of times it comes down to a very common thing: just because you can do something doesn’t mean you should do it. It’s easy to know whether you can, but you really want to find out whether you should do the thing you’re working towards.
[00:21:55.510] – Speaker 1
We’ve talked about this with some of our other developers: try new things. There are always new technologies available that can shorten that path, right? Especially within the open source space, and especially within the data realm, because I feel like every day I hear about a new database, a new analytics use case, a new way to process data that’s faster than another one. We’re always looking at benchmarks of different technologies to see which one is the fastest in terms of query speed, or which can deal with the largest data sets. I feel like every day there’s a new player in this market, especially around real-time analytics; that space is growing exponentially.
[00:22:43.570] – Speaker 2
Yeah. And I do think that a lot of developers, even senior ones, don’t have a good strategy around when to evaluate new technologies, how to evaluate them, and really the best way to go about it. I don’t know, it almost feels like when my managers asked me, hey Gwen, what is your technology strategy? Even when I was a Senior Director leading a large, very technological organization, I didn’t always even know what the question meant, much less have an answer. But I think on the individual level it’s important to have some kind of strategy that says: when will I learn new technologies? Am I going to pick the hottest thing in the new year and make a resolution to learn that? Am I going to pick an interesting problem or use case that I see in my company and, instead of picking the technologies that I know, say, okay, I’m going to try three different things for that use case and then pick the best one? If it’s not obvious, I believe the latter is actually the better way. It just gives you a more concrete way to evaluate new systems. It can be a lot for a lot of people.
[00:23:56.460] – Speaker 2
I almost never evaluate a new system until someone else in the organization twists my arm really hard, and I’ll still resist. That’s probably slightly suboptimal, but I think…
[00:24:09.710] – Speaker 1
I am that person too. Unfortunately. I, of all people, should know better. But no, that is me. Unless I have hit a dead end, right, and I need a new solution, like my current solution is just not working, or working suboptimally.
[00:24:27.340] – Speaker 2
Yeah, exactly. And this is respectable. If you tell your manager, I’m only going to look at new technologies if I know the current one is not working, or, whenever there is a use case, I’ll first use the thing I know until I find out that it doesn’t work, I mean, your manager will likely be extremely happy. You are not wasting time; you are very mission-focused. There are a lot of benefits to that. But the main problem is exactly that: if all you have is a hammer, everything looks like a nail. If you never expand your scope, you never even find out what your current system is missing and what cool new things you could be doing. And you don’t have the opportunity to go to your manager and say: we are not really offering our users good advice on when to expand their clusters, but if we did, it would probably have a very positive impact on our business. Can we try something? And maybe we can even try the POC with very little data, very carefully, with a very small number of users, on our current Postgres, so we won’t make the investment unless there is some benefit.
[00:25:40.090] – Speaker 2
But if it’s successful, we need to adopt Kafka, we need to adopt Druid, we need to get all those things in here.
[00:25:48.880] – Speaker 1
Yeah, but in that case, you’re setting yourself up to be a hero, right? Because if you evaluate a technology that provides a better experience for your users, that’s a huge win. Especially if you’re serving not just internal users but external users. When we were talking before the show, one big thing all of this can do at a macro level is build trust with your customers, which is absolutely huge. I know we’re talking about data in terms of features and product development, but who’s using those products? This can really build your business, right? It’s not just building a cool app. This can actually take your business to a whole new level.
[00:26:37.090] – Speaker 2
In many ways. Or I would say even the reverse, right? The lack of trust can cause very serious business issues. If you’re building software as a service, no matter what it is, think about how much trust your customers have to put in you. They have to believe that you will be more available than they could be themselves, that you will probably be faster than they could be, and that you will probably let them do things that they wouldn’t be able to build themselves. And they have to believe that you are extremely secure, because guess what? You have their data. Even if you are not officially a database, even if you do marketing campaigns, you still hold their data. And this is so sensitive, and it’s extremely easy to lose trust and extremely hard to gain it. And there is such a tight link between data and trust. One of my favorite role models, Deming, the very famous management guru, said: “In God we trust, everyone else must bring data.” And this is so true, especially for vendors, right? Because customers want to trust you, but not everyone is fully comfortable all the time trusting their vendors.
[00:27:59.810] – Speaker 2
They know that you have your own concerns as a vendor, your own agenda, your own stuff that you’re dealing with. So it’s really about this: if you show them a very accurate picture of what’s going on, they will eventually grow to trust you. For example, as we talked about, they give you data and they need to know it’s secure. Suppose you show them a complete audit log of every time someone touches data: who did it, how much they did, what they did. They can see it when they touch something themselves. They can ask a colleague, hey, can you go touch this data, and check what the log says. They can ask someone who has no permission: can you try touching this data? You failed; does my audit log show that it failed? This is fantastic. Over time, if the data is there immediately, they’ll trust you. If it takes five minutes for the data to show up, they’re like, oh, so if I have a security breach, it will take at least five minutes to catch it. If it takes three hours to catch, that’s even worse. And if it never shows up, oh my God, right? So the freshness and availability of the data is so critical.
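A toy sketch of the audit-log behavior described here, where every access attempt, allowed or denied, is recorded the moment it happens; the in-memory store and field names are purely illustrative.

```python
# Sketch: record every access attempt, successful or denied, immediately,
# so a customer can verify that a forbidden touch shows up as a failure
# right away. The in-memory list and field names are illustrative only.
import time

audit_log: list[dict] = []

def access_data(actor: str, resource: str, permissions: set[str]) -> bool:
    allowed = resource in permissions
    # Append the attempt before returning, so the log is fresh the
    # moment the customer refreshes it, failures included.
    audit_log.append({
        "ts": time.time(),
        "actor": actor,
        "resource": resource,
        "outcome": "success" if allowed else "denied",
    })
    return allowed

access_data("alice", "billing", {"billing"})  # logged as success
access_data("mallory", "billing", set())      # logged as denied
print(audit_log[-1]["outcome"])               # "denied"
```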
[00:29:13.670] – Speaker 2
And this applies to anything to do with access, anything to do with billing. Obviously, if you show me that I need to pay that much, does it actually record my latest usage? If I make a change, does it immediately reflect it? If you do all those things perfectly, they will not even notice. They will just feel like, oh, it’s working as expected and I can rely on this system. If it’s not working perfectly, a bit of a miss may just be, what the [edited]? If it’s a big miss, then it’s going to be…
[00:29:53.630] – Speaker 1
On the flip side, you want to make it look as simple as possible, right? All this can go on in the back end, but that’s kind of the key to building a really great architecture: whoever uses it afterwards knows that everything just works. Customers just want to know that it works, to access things when they need to, and to have things be accurate, right? That’s kind of the key. We can talk about which specific database it is and how you’re ingesting the data, all of that. They just want it to work.
[00:30:24.120] – Speaker 2
Exactly. And a lot of times you don’t need to show them all the data and give them full querying capability for them to get what they need out of it. A lot of times they just need some information, they just need a decision made for them, good advice in the right place. Again, it goes back to having just the right data in the right place at the right time. It’s always those things, it always comes…
[00:30:49.710] – Speaker 1
…full circle. So are there any companies that you have worked with, or that you see, that you can talk about, that are just doing this right? That really have a good hold on their real-time data and the right architecture, who really understand this?
[00:31:06.320] – Speaker 2
Yeah, it’s interesting, because the companies that I have in mind, a lot of them are very mature; they’ve been around for a long time. It’s hard for me to think of new companies where I’d say, oh my God, they’re doing it so right, which maybe speaks to the complexity of all this. I’m looking mostly from the software-as-a-service perspective, obviously, and one of the examples that comes to mind is Datadog. Now, obviously data is their entire thing, so there’s no doubt that they’re good at it.
[00:31:40.590] – Speaker 1
One would hope, haha.
[00:31:41.890] – Speaker 2
Yes, but they’re extremely good at it, and they’re also good at it in ways that are slightly unexpected. Going to their website and seeing all this data, they’re obviously good at that. But they also have unique links to all kinds of dashboards, and if you put one of those links in Slack, it captures kind of a screenshot with the data as of the moment you post the link, and shows that picture so people don’t have to click through to see what the data looks like. Which is next level, I think: the right data in the right place with the right timing.
[00:32:24.170] – Speaker 2
So I really like that one. There is also a company that sounds like it was supposed to be social media, but pivoted in a really weird way into B2B. It’s called Untappd. I talked about it with much excitement in my presentation. By being a slightly successful social network, they collected massive amounts of information about who is drinking what kind of beer, at what time, in what location. As a social network, obviously, it’s tiny. They couldn’t be a successful business with the usual model of advertising and all that; it’s not Facebook, it’s not Twitter, it’s just not that scale, because it’s so specific to one thing. But if you think about all this information about who is drinking beer and when, the place where it’s actually the most valuable is in the restaurant itself, at the moment a person stands at the bar and tries to decide what to order. Even though the beer selection may change several times a night, as kegs empty and new things rotate in, you always want an updated list of what is available, with descriptions.
[00:33:37.630] – Speaker 1
A far cry from erasing the chalkboard.
[00:33:40.320] – Speaker 2
Exactly. And even better, if you can show information like: oh, these are the beers being ordered right now, this one is the most popular, and this one is new but trending, it can actually influence demand. This is again a scenario where you bring the right data to the right person at the right time, and in settings that nobody else ever thought of. So it’s kind of cool.
[00:34:08.750] – Speaker 1
Well, this also brings up another point: building real-time applications and utilizing real-time data isn’t something that’s exclusive to, say, enterprises or large companies. When we talk about massive data sets, we tend to think of global brands or giant B2B companies that have a lot of users, a lot of customers, a lot of data flowing in. But really, this could be a game changer for startups and small businesses too, right?
[00:34:38.410] – Speaker 2
Absolutely. It can be THE thing that gives you an edge over the incumbents. Think about Uber. Uber today is the incumbent, but way back when they started, the norm was to call a taxi, and if you called a taxi, you didn’t know that the taxi was on the way. You didn’t know where it was. So actually showing real-time data, the name of the driver who is on the way, their location, all those things, was how they got customers initially. In the other direction: I remember when I was at Cloudera, it was still small, and I ended up in a competitive sales situation, in the room with an extremely large and lucrative account we desperately wanted to land. On the other side of the table were five, six, ten salespeople from IBM. I was there with one other saleswoman, and that’s it. So it very much felt like: how can we, as a small company, ever compete with a large enterprise when they have all those salespeople and all the resources in the world? The answer is that you cannot compete with a large enterprise by having more salespeople.
[00:35:58.780] – Speaker 2
They’re always going to win. You need a different strategy. Bringing this data to the customers when they need it, as they need it, and using this data to actually encourage them to use your product, to help them grow, builds this level of trust. You cannot build trust by taking them to more steak dinners than IBM or buying them more scotches than IBM. By using data correctly, you have an edge that maybe they don’t.
[00:36:27.680] – Speaker 1
Well, Gwen, thank you so much for joining me today. I feel like we really tapped into not just what real-time data is, but many different use cases, including ones that I had never thought of before. And now I’m all inspired to think about things in new ways: where else could we use this?
[00:36:50.290] – Speaker 2
Exactly.
[00:36:51.080] – Speaker 1
Thank you so much again for joining me. And if you want to know more about the show or anything that we talked about, you can check out imply.io. I’m Reena from Imply. Until next time. Keep it real.