I started my career as a first-generation analyst focusing on writing SQL scripts, learning R, and publishing dashboards. As things progressed, I graduated into Data Science and Data Engineering, where my focus shifted to managing the life-cycle of ML models and data pipelines. 2022 is my 16th year in the data industry, and I am still learning new ways to be productive and impactful. Today, I head a data science & data engineering function at a unicorn, and I would like to share my findings and where I am heading next.
Hi from the Data Management team at Eureka
ELT is becoming the default choice for data architectures and yet, many best practices focus primarily on “T”: the transformations.
The CRM is no longer seen as the definitive source of trust for enterprises when it comes to collecting customer data. Instead, it has become just another SaaS tool that is unable to handle the complex data architectures that modern enterprises have created.
Salesforce has long been considered the source of truth. However, in the last 5 years, the number of SaaS tools used by companies has grown tenfold.
Spotify is the world's most popular audio streaming service with 433M users, and probably needs no further introduction. While the company has continued to grow rapidly in recent years, so has the need for a fast and scalable infrastructure to support that growth. At our last Heroes of Data Meetup, Spotify's Sonja Ericsson joined us to talk about how they are migrating from Luigi to Flyte in order to build a next-generation workflow platform to power all of their 20,000+ daily workflows.
If you have spent any time in the data space in the last 10 years, you'll know that data job titles have gotten hilariously complicated and confusing. There are Data Analysts, Data Scientists, Analytics Engineers, Data Engineers, Business Analysts, Business Intelligence Analysts, Product Analysts, Product Data Scientists, Data Product Managers (?), ML Engineers, Data Enthusiasts, People Who Just Really Love Counting, and dozens of other titles floating around.
Showing what's happening in dashboards is informative, not insightful. It's useful to know if a key business metric is going up or down, but it's not actionable. Only the “why” behind these changes can drive recommendations and actions.
The modern data stack has helped democratize the creation, processing, and analysis of data across organizations.
This is a long-overdue post, predominantly because there is a lot of confusion around data lineage, data observability, and their interdependencies.
The term “data lineage” has been thrown around a lot over the last few years. What started as an idea of connecting between datasets quickly became a very confusing term that now gets misused often. It’s time to put order to the chaos and dig deep into what it really is. Because the answer matters quite a lot. And getting it right matters even more to data organizations.
Data matters more than ever – we all know that. But at a time when being a data-driven business is so critical, how much can we trust data and what it tells us? That's the question behind data reliability, which focuses on having complete and accurate data that people can trust. This article will explore everything you need to know about data reliability and the important role of data observability along the way, including:
Data is the most valuable asset for most businesses today. Or at least it has the potential to be. But to realize the full value, organizations must manage their data correctly. This management covers everything from how it’s collected to how it’s maintained and analyzed. And a big component of that is data governance.
Data is getting even bigger, and traditional data management just doesn’t work. DataOps is on the rise, promising to tame today’s chaos and context challenges.
Everyone is talking about the modern data stack (MDS) nowadays. I am a data system person. I started building core database systems in the big data era, and have witnessed the birth and prosperity of cloud computing over the last decade. But the first time I came across the term “modern data stack,” I felt confused - is it just yet another buzzword that the cloud service vendors created to attract people’s eyeballs? There are so many articles online, but most of them are quite markety and salesy. After running a startup building core systems in the modern data stack domain for a while, I would like to share my thoughts. In this article, I will explain “modern data stack” to you in simple terms, and discuss why modern data stack can really matter in companies.
Data governance is more than just having a strategy – it is about establishing a culture where quality data is achieved, maintained, valued, and used to drive the business. Modern-day businesses are supported by data and information in many ways and forms. In recent years, data has become the foundation for competition, productivity, growth, and innovation. We are seeing successful organizations shift their focus from producing data to consuming it, and data governance strategies becoming increasingly important to support their crucial business initiatives. Executives and shareholders are starting to realize that data is a strategic asset and data governance is a must if they want to get value from data.
In the past years, organizations have been investing heavily to convert themselves into data-driven organizations with the objective to personalize customer experiences, optimize business processes, drive strategic business decisions, etc. As a result, modern data environments are constantly evolving and becoming more and more complex. In general, more data means more business insights that can lead to better decision-making. However, more data also means more complex data infrastructure, which can cause decreased data quality, a higher chance of data breaking, and consequently erosion of data trust within organizations and risk of not being compliant with regulations. The data observability category — which has quickly been developing during the past couple of years — aims to solve these challenges by enabling organizations to trust their data at all times. Although the category is relatively young, there are already a wide variety of players with different offerings and applying various technologies to solve data quality problems.
One reality that many companies face when adopting cloud technology: designing data infrastructure for business in a cloud computing environment is just different. Legacy stacks can indeed suffice for many companies. But as business requirements grow and use cases increase, both in number and complexity, the models and assumptions that worked well enough in the data center become problematic.
If you are a Data Leader in 2022, Data Governance is most definitely on your radar. Regardless of your organization's data maturity stage, chances are, you have already implemented or started implementing a Data Governance Strategy.
The modern data stack or the Data Stack is a collection of cloud-native applications that serve as the foundation for an enterprise data infrastructure.
“There must be something wrong with Excel. I can't get these numbers to make sense.” For anyone who has had a similar experience of staring at a spreadsheet for far too long, we have news for you: Excel isn’t the problem; your data is.
Do you know the current status — quality, reliability, and uptime — of your data and data systems? Not last month or last week, but where they stand at this moment. As businesses grow, being able to confidently answer this question becomes more important. That’s because data needs to be clean, accurate, and up-to-date to be considered reliable for analysis and decision-making. This confidence comes through what’s known as data observability.
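Freshness is one of the most basic signals an observability setup monitors. A minimal sketch of such a check, where the function name, the staleness threshold, and the timestamps are all illustrative rather than from any particular tool:

```python
from datetime import datetime, timedelta, timezone

def is_fresh(last_loaded_at, max_staleness=timedelta(hours=24)):
    """Return True if the table's latest load is within the allowed staleness."""
    return datetime.now(timezone.utc) - last_loaded_at <= max_staleness

# A table loaded 2 hours ago passes; one untouched for 3 days is flagged.
recent = datetime.now(timezone.utc) - timedelta(hours=2)
stale = datetime.now(timezone.utc) - timedelta(days=3)
print(is_fresh(recent))  # True
print(is_fresh(stale))   # False
```

In practice, observability tools track many such signals (freshness, volume, schema changes, distribution drift) continuously rather than on demand.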
Without a clear and quick process, your dev, sales, and customer success teams can become overwhelmed by the amount of work required to delight new customers and ingest clean, validated data.
As the amount of data rapidly increases, so does the importance of data wrangling and data cleansing. Both processes play a key role in ensuring raw data can be used for operations, analytics, and insights, and can inform business decisions.
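A toy illustration of what cleansing typically involves; the records and field names below are invented to show common raw-data problems (inconsistent casing, stray whitespace, missing values, duplicates):

```python
# Illustrative raw records with typical quality problems.
raw = [
    {"email": " Alice@Example.com ", "country": "us"},
    {"email": "alice@example.com",   "country": "US"},   # duplicate of row 1
    {"email": "bob@example.com",     "country": None},   # missing country
]

def clean(record):
    """Normalize one record: trim/lowercase emails, fill missing countries."""
    email = record["email"].strip().lower()
    country = (record["country"] or "unknown").upper()
    return {"email": email, "country": country}

seen, cleaned = set(), []
for record in raw:
    row = clean(record)
    if row["email"] not in seen:  # deduplicate on the normalized email
        seen.add(row["email"])
        cleaned.append(row)

print(cleaned)
# [{'email': 'alice@example.com', 'country': 'US'},
#  {'email': 'bob@example.com', 'country': 'UNKNOWN'}]
```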
The data ecosystem has changed drastically over the last six years and we've witnessed the rise and fall of several different technologies. However, there's one constant that's remained the same: the cloud data warehouse.
Change data capture (CDC) is the process of recognising when data has been changed in a source system so a downstream process or system can action that change. A common use case is to reflect the change in a different target system so that the data in the systems stay in sync.
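The simplest flavor of CDC is timestamp-based polling, which can be sketched in a few lines. Everything here (the `orders` table, the `updated_at` column, the watermark values) is illustrative; production systems more often use log-based CDC, reading the database's transaction log instead:

```python
import sqlite3

def capture_changes(conn, last_seen):
    """Return rows in `orders` modified after the last watermark."""
    rows = conn.execute(
        "SELECT id, status, updated_at FROM orders WHERE updated_at > ?",
        (last_seen,),
    ).fetchall()
    # Advance the watermark to the newest timestamp we have observed.
    new_watermark = max((r[2] for r in rows), default=last_seen)
    return rows, new_watermark

# Demo source table: each row carries an `updated_at` timestamp.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT, updated_at TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 'paid', '2022-01-01T10:00:00')")
conn.execute("INSERT INTO orders VALUES (2, 'shipped', '2022-01-02T09:30:00')")

changes, watermark = capture_changes(conn, "2022-01-01T12:00:00")
print(changes)    # only order 2 changed after the watermark
print(watermark)  # '2022-01-02T09:30:00'
```

A downstream process would then apply `changes` to the target system and persist `watermark` for the next poll.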
The current data engineering ecosystem is filled with a wide range of tools, both open-source and third-party. While there still isn't a consensus on which path to choose, I think it's interesting to explore the possibilities of building an open-source data stack (and, with the current state of the market, it's honestly the best time to reconsider how you designed your data stack and explore open-source alternatives).
As data became more and more available to companies, data integration became one of the most crucial challenges for organizations. In the past decades, ETL (extract, transform, load) and later ELT (extract, load, transform) emerged as data integration methods that transfer data from a source to a data warehouse.
In our experience at Secoda working with many data teams, we've seen most data teams do not have the tools they need to succeed. For growing organizations, the data function is usually an afterthought. The first data hire is brought on before raising a Series A and is expected to manage the workload that comes afterward with little to no support.
From personal experience, I have always found it interesting to learn how to create an organized catalog of data. However, this interest was transformed into a passion when I began to realize the amount of time and effort it could save me within my job responsibilities. Creating a data catalog can greatly help you organize the data you collect, making it easier to find what you need when you need it.
Building a data practice is not only about making technological choices: you will likely have to start with a first iteration and expect it to evolve as your business grows.
Let’s not mince words. Product led growth (PLG) isn’t something that happens overnight. It has to infuse company culture and involves commitment from every team - not just the go-to-market teams on the front lines.
Just like data mesh or the metrics layer, active metadata is the latest hot topic in the data world. As with every other new concept that gains popularity in the data stack, there’s been a sudden explosion of vendors rebranding to “active metadata”, ads following you everywhere and… confusion.
The modern data stack is on the rise. Many companies use raw data from their SaaS analytics tools as input for their data warehouse, but this introduces problems downstream. Are there better ways?
Breaking down some of the problems I’ve seen in data collaboration and offering advice on how to make better, faster decisions with collaborative analytics.
The term “observability” means many things to many people. A lot of energy has been spent—particularly among vendors offering an observability solution—in trying to define what the term means in one context or another.
A majority of business leaders believe data insights are key to the success of their business in a digital environment. However, many companies struggle to build a data-driven culture, with a key reason being the lack of a sound data democratization strategy.
You’ve likely heard about ELT (Extract, Load, Transform), the Modern Data Stack’s evolution of ETL. This is a game changer in that it enables organizations to ingest raw data into the data warehouse and transform it later. ELT gives end users access to the entirety of the datasets they need by circumventing downstream issues of missing data that could prevent a specific business question from being answered.
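The load-raw-first, transform-later pattern can be sketched with an in-memory database. The table, event payloads, and field names below are made up for illustration, and the query assumes a SQLite build with the JSON1 functions available (standard in recent builds):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")

# "EL": land the raw payloads untouched in the warehouse.
conn.execute("CREATE TABLE raw_events (payload TEXT)")
events = [
    {"user": "a", "action": "signup", "plan": "free"},
    {"user": "b", "action": "signup", "plan": "pro"},
    {"user": "a", "action": "login"},
]
conn.executemany(
    "INSERT INTO raw_events VALUES (?)", [(json.dumps(e),) for e in events]
)

# "T": transform later, inside the warehouse, with every raw field
# still available for future questions nobody anticipated at load time.
signups = conn.execute(
    """
    SELECT json_extract(payload, '$.plan') AS plan, COUNT(*) AS n
    FROM raw_events
    WHERE json_extract(payload, '$.action') = 'signup'
    GROUP BY plan
    """
).fetchall()
print(sorted(signups))  # [('free', 1), ('pro', 1)]
```

Because the raw payloads are kept verbatim, a new transformation can be added later without re-extracting anything from the source system.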