The largest and most mature vendor profiled here, Informatica offers data observability tools as part of a comprehensive portfolio for data management and governance. The Redwood City, CA-based company started as an ETL provider in 1993. It grew organically and via acquisition to address critical segments such as cataloging, DataOps, data privacy, and master data management, all integrated as modules on an AI-driven platform. Informatica has a long history in data quality observability, focused on structured data at rest and data in motion. It recently extended its offerings to address data pipeline observability, in particular to help enterprises operationalize and scale AI/ML, while controlling the cost of data delivery as they move to the cloud. Various Informatica products already gather relevant metadata, making data observability a natural add-on. All these capabilities are offered as cloud-native services of Informatica’s Intelligent Data Management Cloud.
Many enterprise stakeholders, including data engineers, DataOps engineers, and business owners, use Informatica’s graphical interface and AI-guided prompts for data observability. They observe three layers: infrastructure, data pipelines, and business consumption. At the infrastructure layer, Informatica monitors elements such as compute clusters, spots performance issues, and suggests how to fix them. It also helps measure, predict, and control the compute cost of the individual jobs that pipelines perform. At the pipeline layer, Informatica profiles data, detects anomalies, and helps remediate issues. At the business layer, Informatica tracks consumption by user and dataset to assist compliance with internal policies and external regulations.
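To make the pipeline-layer checks concrete, the sketch below illustrates the general pattern behind profiling and anomaly detection: compute a simple profile of a baseline batch, then flag values in a new batch that fall outside expected bounds. This is a minimal, generic Python example of this class of check, not Informatica’s product or API; the function names, data, and threshold are hypothetical.

```python
import statistics

def profile_column(values):
    """Compute a simple profile (row count, null count, mean, stdev) for a numeric column."""
    non_null = [v for v in values if v is not None]
    return {
        "rows": len(values),
        "nulls": len(values) - len(non_null),
        "mean": statistics.mean(non_null),
        "stdev": statistics.stdev(non_null),
    }

def detect_anomalies(values, baseline, z_threshold=3.0):
    """Flag values that deviate from the baseline mean by more than z_threshold standard deviations."""
    return [
        (i, v) for i, v in enumerate(values)
        if v is not None and abs(v - baseline["mean"]) > z_threshold * baseline["stdev"]
    ]

if __name__ == "__main__":
    yesterday = [100, 102, 98, 101, 99, 103, 97]   # baseline batch
    today = [100, 99, 250, 101, None, 98, 102]     # today's batch: one spike, one null
    baseline = profile_column(yesterday)
    print("baseline profile:", baseline)
    print("anomalous rows:", detect_anomalies(today, baseline))
    print("today's null count:", profile_column(today)["nulls"])
```

In practice, a data observability platform runs checks like these continuously across many tables and pipeline runs, learns baselines automatically, and routes alerts to the responsible engineers rather than printing results.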
Enterprises such as Amgen and Discount Tire rely on Informatica for data observability. Informatica generates about $1.5 billion in annual revenue, has more than 5,000 customers, and trades on the New York Stock Exchange.