
Observations from 10 Years in BI—and Why I’m All In On Data & AI Observability

AUTHOR | Tanner Brockbank

I have spent the past 10 years building and selling data products. 

At Tableau, I focused primarily on front-end reporting experiences, living and breathing dashboards and tabular reports.

The objective was to lead people to insights as clearly and efficiently as possible. If it led to an action, even better. 

I’d consult customers on how to connect to on-prem databases, newly adopted warehouses, and in some cases directly to source APIs. All the while, I expected the data pipelines to be reliable and trusted.

I left that responsibility to other teams. 

My worries primarily consisted of making data consumers happy by building timely KPIs that led to answers rather than more questions. Get in and get out!

And for a long time, I was content with that.

Reliability wasn’t my problem…or was it?

Salesforce eventually built several integrations into Tableau, like Einstein Discovery for operationalizing supervised ML. This presented new opportunities to make data science efforts more impactful to business users.

However, it also raised concerns about the reliability of the data feeding the models.

Fortunately, this wasn’t my problem… or so I thought. 

I remember working with one enterprise to operationalize a churn model for their account managers, one that primarily leveraged product usage data and a few other sources.

The results of the model only exacerbated those concerns when business stakeholders couldn’t easily verify trust in the results (or the data inputs behind them).

Tableau does have data quality warnings embedded in workbooks (if you have the Data Management add-on or Tableau Enterprise), but those may still only apply to the last-mile connection of the data extract or published data source.

I eventually found myself making tradeoffs between speed and trust. I wasn’t viewing reliability as my responsibility, but my actions were having a demonstrable effect on its outcome. 

How AI sharpened the knife

When I joined the team at Domo, it was a similar story – but this time my challenges grew beyond visualization. Data integration, transformation, workflow automation, and new integrations with LLMs all presented opportunities to leverage data in new processes. 

Domo prides itself on having robust connections directly to source, but recently pivoted to encourage customers to adopt a warehouse as well. I’d find myself regularly building ETLs with data coming from over a dozen sources. Manual metric alerts and ETL notifications exist in Domo, but like Tableau’s, these point solutions only covered the last mile of the journey.

Data quality and trust were an afterthought because they had to be. Reports were due. Stakeholders had expectations. I was serving my customers by shipping first and worrying about the outcomes later… or so I thought.

In hindsight, what I was actually doing was just ignoring the problem. But whether I prioritized it or not, I was responsible for it.

It’s easy to sacrifice data trust on the altar of speed simply because it’s too time-intensive to control those variables properly. But if your stakeholders can’t trust your products, they won’t use them. That goes for visualizations, platform developments, or AI agents.

Trust should be the foundation of every decision made with data. The question is: how do you prioritize trust at scale?

My data + AI observability awakening

As I’ve learned more about Monte Carlo’s capabilities, things like data threshold monitoring using AI and ML, schema change monitoring across a dataset’s lineage, and even pinpointing the origin of data problems in code changes or system failures, I’ve realized that data + AI teams don’t need to sacrifice speed for trust.

But they do need to operationalize a modern approach to reliability. 

You can build a data culture founded on trust without sacrificing agile exploration of the data and its insights; both are critical. It all starts with a thoughtful, automated, AI-enabled approach to detection and resolution that’s inclusive of the entire data and AI estate.
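To make “automated detection” a little more concrete, here is a minimal, illustrative sketch of the general idea behind threshold monitoring: learn a normal range from recent history and flag values that fall outside it. This is a toy example under my own assumptions (the daily row counts and the table name are hypothetical), not Monte Carlo’s implementation.

from statistics import mean, stdev

def detect_row_count_anomaly(history, latest, z_threshold=3.0):
    # Flag `latest` if it deviates from recent history by more than
    # `z_threshold` standard deviations (a simple learned threshold).
    if len(history) < 7:              # not enough history to learn a baseline
        return False
    baseline = mean(history)
    spread = stdev(history) or 1.0    # avoid dividing by zero on flat history
    z_score = abs(latest - baseline) / spread
    return z_score > z_threshold

# Hypothetical daily row counts for a table feeding a churn model
daily_row_counts = [10_210, 10_090, 10_305, 9_980, 10_150, 10_240, 10_110]
todays_count = 4_302  # a sudden drop, e.g. from a broken upstream ETL

if detect_row_count_anomaly(daily_row_counts, todays_count):
    print("Alert: churn model input table is outside its normal volume range")

The value of an observability platform is applying checks like this, with thresholds learned automatically, across freshness, volume, and schema for an entire data and AI estate rather than hand-coding them table by table.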

There’s an interesting example of this with our customer Roche, who has increased data trust throughout their organization by leveraging Monte Carlo to empower domain leaders to own reliability for their domains.

Don’t settle for speed or trust — embrace speed and trust

Looking back on my time with BI teams, I wish I had known then what I know now. I regret the wasted hours, the lost trust, and the heavy resources spent verifying data flows, troubleshooting ETL failures, investigating anomalies, and so much more.

It’s not that these exercises don’t matter – they absolutely do – it’s that a tool like Monte Carlo could have accomplished that work faster. And the less time we spend monitoring and troubleshooting, the more time we spend delivering value for our customers.

The data and AI estate – from its pipelines to its architectures – is only becoming more complex. And with the explosion of data products and AI applications happening right now, trust in their data, systems, code, and model responses will become increasingly important to ensuring their value.

It IS possible to achieve high trust and speed to insight at the same time. And I’m happy to say that tools like Monte Carlo are showing us the way.

Schedule a demo to see for yourself. 

Our promise: we will show you the product.