00:13:42
Another great question. So, alert fatigue. The best proxy we found for alert fatigue and its implications, and how we learned to improve our model to fight it, was looking at cybersecurity, because that's an area that's slightly more mature than data governance and data observability, broadly speaking.
And a lot more attention is given to alerts there, because no company today is safe from a cybersecurity attack. So we looked a lot at that space and tried to learn from them how to make sure that our anomaly detection model was not creating extra work and extra alerts for the user.
And that it was actionable. We love the word actionable at Sifflet; we use it a lot. And yes, lineage helps a lot in the sense that it gives context to the anomalies a tool detects and tells you what you can do about the alert: where is it coming from, how is it impacting the user, who's looking into it, et cetera.
So you can build an incident report where you can follow each incident and know exactly what can be done to remediate it and, more importantly, to stop it from propagating and from happening again in the future. The second big element in fighting alert fatigue is, again, still about technology.
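To make the lineage idea concrete, here is a minimal sketch of how a lineage graph can answer "what does this anomaly propagate to?" The table names and graph structure are invented for illustration; they are not from any real Sifflet schema.

```python
from collections import deque

# Hypothetical lineage graph: each asset maps to its downstream dependents.
LINEAGE = {
    "raw.orders": ["staging.orders_clean"],
    "staging.orders_clean": ["marts.daily_revenue", "marts.customer_ltv"],
    "marts.daily_revenue": ["dashboard.exec_kpis"],
    "marts.customer_ltv": [],
    "dashboard.exec_kpis": [],
}

def downstream_impact(asset: str) -> list[str]:
    """Breadth-first walk of the lineage graph, listing every asset
    an anomaly on `asset` could propagate to."""
    seen, queue, impacted = {asset}, deque([asset]), []
    while queue:
        for child in LINEAGE.get(queue.popleft(), []):
            if child not in seen:
                seen.add(child)
                impacted.append(child)
                queue.append(child)
    return impacted

print(downstream_impact("raw.orders"))
# -> ['staging.orders_clean', 'marts.daily_revenue',
#     'marts.customer_ltv', 'dashboard.exec_kpis']
```

An incident report can then attach this downstream list to the alert, which is what turns a raw anomaly into something actionable: you know who is affected before the bad data reaches them.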
I'll also give you an argument that's less related to technology, which in my opinion is often overlooked. But still on the technology element: I think a lot of tools and a lot of approaches to anomaly detection want to focus on ML and apply it to get better, get smarter, and handle anomaly detection in more automated ways.
Right. And that's great, because you can cover a broader variety of use cases and automate a lot of the workflows. But you can't do ML-based anomaly detection if your ML model is not robust enough and equipped with a very solid feedback loop. Otherwise, it's a recipe for disaster.
That's how you create a lot of false negatives and false positives, and just a bunch of random alerts that are not filtered or presented to the user in a way that helps them trust the tool that does the monitoring for them. It's very funny, and very related to human psychology.
Because when you adopt an anomaly detection or observability tool, you use it to achieve more trust in your data and your data infrastructure, right? But if you don't trust the performance of the tool, then you're not going to use it, and adoption is going to be very poor. And for you to trust the tool, you need to experience the alerts you get from it and see how efficiently they help you monitor your data assets. So again, without making it about Sifflet specifically, we invested early in our ML-based anomaly detection engine precisely because we wanted to avoid falling into the trap of alert fatigue. That engine obviously has a very strong feedback loop, but it's also built in a way that it gets smarter.
It learns from the anomalies that the user confirms and from the actions the user takes in response to each alert (so, back to the lineage part). Overall, it makes sure that all the alerts are dealt with and that all the alerts are relevant. The final point is more related to people and internal evangelism.
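The feedback loop described above can be sketched in a toy form: a detector whose sensitivity drifts based on whether users confirm or dismiss its alerts. This is a simplified illustration of the general technique, not Sifflet's actual model, and the class and update factors are invented for the example.

```python
class FeedbackAwareDetector:
    """Toy z-score anomaly detector with a feedback loop: dismissed
    alerts (false positives) loosen the threshold, confirmed anomalies
    tighten it. Illustrative only, not a production model."""

    def __init__(self, threshold: float = 3.0):
        self.threshold = threshold  # how many standard deviations count as anomalous

    def is_anomaly(self, value: float, mean: float, std: float) -> bool:
        if std == 0:
            return False
        return abs(value - mean) / std > self.threshold

    def record_feedback(self, confirmed: bool) -> None:
        # Confirmed anomaly: we can afford to be more sensitive.
        # Dismissed alert: we were too noisy, back off a little.
        self.threshold *= 0.95 if confirmed else 1.05

detector = FeedbackAwareDetector()
print(detector.is_anomaly(150.0, mean=100.0, std=10.0))  # 5 sigma -> True
detector.record_feedback(confirmed=False)  # user dismisses the alert
print(round(detector.threshold, 2))        # loosens from 3.0 to 3.15
```

The point of even a crude loop like this is the one made above: every user action teaches the model something, so the alert stream converges toward relevance instead of noise.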
If you build a culture internally (and this is the job of data leaders and business leaders) that celebrates small wins around data quality and makes data quality almost part of the culture within the data team and the broader organization, then people are more incentivized to take data quality issues, and the alerts they get from anomaly detection, seriously. Unfortunately, there is no shortcut to achieving a good, healthy data culture within an organization. It's a lot of work and a lot of small initiatives, done repeatedly, to ensure that people are incentivized.
But you can't just rely on technology to do that; you also need strong internal adoption and strong messaging about the importance of ensuring good quality data. And, fortunately or unfortunately, this comes back to the question of how to get buy-in: there's a variety of ways to compute the ROI of data quality initiatives. There's also been, unfortunately, a big number of highly publicized data catastrophes where public companies paid fines or reported huge losses and so on, which you can find with a simple Google search.
And you'll see that if data quality is not taken seriously, things can get very serious and have real repercussions on the business. So about the buy-in: I think it's first of all a matter of aligning the business objectives with the data team's objectives.
And I think that's where a lot of data leaders get it wrong, because they go and invest a lot in a modern data stack and buy all the fancy tools, but they often lack the connection to the business that tells them, "Okay, this is exactly what we need from the data team, and this is why data quality is important."
And again, there's a variety of ways to go about that and get internal stakeholder buy-in. But from my experience (this helped me a lot in my previous role, because I was a hybrid business-slash-technology leader, and I see it play out quite nicely with a lot of the customers we're lucky to work with at Sifflet), the data quality and data governance initiatives that I see succeed are the ones where stakeholders from both the business and the technology side are involved in the discussion and in picking the tool for data observability.