One step closer to general availability

Richard Makara

Today, we are on the third major iteration of our core product: the data pipeline compiler.

Under the hood, we've been hard at work making the core concepts solid, so our customers can answer critical business questions. The engine automatically handles:

  • Resolving joins across multiple tables
  • Resolving attributes with complex business logic
  • Ensuring incrementality and cost optimization
  • Time travel: Historization of data sources beyond dbt snapshots, including relationships across tables. Read more on that here.
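Historization of this kind is commonly implemented as type-2 slowly changing dimensions, where each change closes the previous version of a row and opens a new one. As a minimal sketch of the general technique (our own illustration, not reconfigured's actual engine, with illustrative field names):

```python
from datetime import datetime, timezone

def historize(history, snapshot, now=None):
    """Merge a fresh snapshot of source rows into a type-2 history.

    history:  list of dicts with keys id, value, valid_from, valid_to
              (valid_to is None for the currently valid version)
    snapshot: dict mapping id -> current value in the source
    """
    now = now or datetime.now(timezone.utc)
    current = {r["id"]: r for r in history if r["valid_to"] is None}
    for id_, value in snapshot.items():
        row = current.get(id_)
        if row is None or row["value"] != value:
            if row is not None:
                row["valid_to"] = now  # close the previous version
            history.append({"id": id_, "value": value,
                            "valid_from": now, "valid_to": None})
    return history
```

Running it twice with a changed value keeps both versions of the row, which is what makes time travel across the model possible.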
"We gain learnings faster from our main system data with reconfigured. The platform gives us data engineering capabilities for in-housing data management."
Niko Sandell, CTO at M Room

Use cases we have been solving since the beta launch

In June, we celebrated our beta release with Rick Johnson & Data Cowboy in a garage because that is where ideas for software businesses are born.

Since then, we have learned by doing, and we owe all thanks to our customers who have entrusted us with their most valuable assets and tested the engine with their data and business requirements. We have seen how analytics teams build and manage transformation pipelines with the reconfigured interface directly in their data warehouse. (We unpack this sentence in the last chapter.)

To date, data sets built by our platform power:

  • Unit economics reporting
  • Staff efficiency calculations
  • Historical revenue evolution reporting
  • Embedded analytics for customers to check the service status
  • Board-level SaaS GTM performance dashboards
  • Machine learning training datasets

We firmly believe the product managers of reconfigured sit in our customers' offices. The roadmap changes based on customer demand, and one theme rising to the top is visualization.

"Reconfigured has allowed us to organize our data in a way that helps us to reveal hidden behaviours from our day to day operations in almost real-time."
Erno Berger, CTO at Yeply

Upcoming roadmap

We are turning reconfigured into a visual command center of the data model. We want to go beyond dbt lineage graphs, and we have already built a way to visualize all the allowed joins of your future models:

reconfigured jump points

Building further, we have plans for five more ways to visualize the buildup of your domain model. BI tools have solved data visualization, but who visualizes the structure of your data?
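To make the idea concrete, a domain model's allowed joins can be treated as a graph, and every joinable path enumerated from it. A minimal sketch under that assumption, with hypothetical model names (not reconfigured's implementation):

```python
# A hypothetical domain model as an adjacency map: each model lists
# the models it is allowed to join to (illustrative names only).
ALLOWED_JOINS = {
    "customers": ["orders", "subscriptions"],
    "orders": ["order_items"],
    "subscriptions": ["invoices"],
    "order_items": [],
    "invoices": [],
}

def join_paths(graph, start):
    """Enumerate every allowed join path reachable from one model."""
    paths = []
    stack = [[start]]
    while stack:
        path = stack.pop()
        if len(path) > 1:
            paths.append(path)
        for nxt in graph.get(path[-1], []):
            if nxt not in path:  # avoid cycles
                stack.append(path + [nxt])
    return paths
```

Feeding such a graph to any graph-drawing layer is then enough to render the "jump points" between models.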

We've discussed activities before, but we were too early with data event pipelines. We realized we could only start to tackle the event pipeline after being able to save the historical values of a domain model automatically, incrementally, and cost-efficiently.

Working with event data requires a different kind of approach. It is incremental by nature, yes, but what if you need to backfill? What if you need to create ghost events based on several other events happening, or not happening, within a timeframe? We are talking about business-critical events like a customer churning, a user becoming active, or onboarding being completed.
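As an illustration of the ghost-event idea, a "customer churned" event can be derived from the absence of activity within a window. A minimal sketch, assuming events are simply (customer_id, timestamp) pairs (the field names and the 30-day window are illustrative, not reconfigured's logic):

```python
from datetime import datetime, timedelta

def churn_ghost_events(events, horizon, window=timedelta(days=30)):
    """Emit a synthetic 'churned' event for each customer whose last
    activity is more than `window` before `horizon`.

    events: iterable of (customer_id, timestamp) activity events
    """
    last_seen = {}
    for customer_id, ts in events:
        if customer_id not in last_seen or ts > last_seen[customer_id]:
            last_seen[customer_id] = ts
    return [
        {"customer_id": cid, "event": "churned", "at": seen + window}
        for cid, seen in last_seen.items()
        if horizon - seen > window
    ]
```

Note that the ghost event is timestamped where the silence began plus the window, which is exactly the kind of derived fact that is painful to recompute during a backfill without historized inputs.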

The transparency and flexibility of the tool and the team sets reconfigured apart from other products. I can see – and understand – what's happening under the hood through the compiled code and even go beyond predetermined functionality by configuring their engine, chaining functions, and adding custom macros.
Tom Hämäläinen, CIO & Co-Founder at Phaver

Mental blocks to getting started with a data project

The bad news is that the timing is never right for an extensive rewrite of your data models. The good news:

Through customer cases, we have learned that the way forward is an iterative and automated approach you can start today! Start small, reconfigure the existing tools, and get small wins before the whole data model is in place.

Everything starts from a data-related opportunity you want to capture ASAP rather than next year. Share it with us; you are no longer alone in pushing the project forward!

Best regards,
Richard Makara, CEO

P.S. So what are you - reconfigured?

As promised, let's unpack this statement:

Analytics teams build and manage transformation pipelines with the reconfigured interface directly in their data warehouse.

Analytics teams:

These teams know the business requirements for data. With reconfigured, they can craft the data sets BI tools need to answer any leadership, marketing, product, or sales question.

Build and manage transformation pipelines:

At the heart of BI (or even Machine Learning & AI) are clean, robust, and reliable datasets. reconfigured transforms raw data and creates a domain model, which powers all reports, ensuring individual BI dashboards don't have to include complex logic that is hard to maintain.

reconfigured interface:

Provides a visual interface for working on the data model from all the necessary angles, without having to think about pipeline development at the same time. You plan the model, and the platform implements all the changes instead of your already busy data engineer.

Directly in their warehouse:

A centralized data warehouse is where the source of truth needs to be, and that is where your data stays: reconfigured doesn't need access to your data warehouse, since we only connect with your dbt project and version control, ensuring you own all the pipeline code. To create dbt-compatible code for your Git repository, reconfigured only needs to read metadata from the dbt project.
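For readers curious what "metadata from the dbt project" can look like in practice: dbt writes a manifest.json artifact into the project's target/ directory on every compile or run, and model metadata can be read from it without any warehouse connection. A minimal sketch (our own illustration, not reconfigured's code):

```python
import json
from pathlib import Path

def list_models(project_dir):
    """List the models defined in a dbt project by reading its
    compiled manifest (dbt writes target/manifest.json on every
    compile or run). No warehouse connection is needed.
    """
    manifest = json.loads(
        (Path(project_dir) / "target" / "manifest.json").read_text()
    )
    return sorted(
        node["name"]
        for node in manifest["nodes"].values()
        if node["resource_type"] == "model"
    )
```

Because the manifest also carries each node's dependencies, the same read-only approach is enough to reconstruct the whole lineage of a project.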

Feel like chatting instead?

We're in the early stages of development and always looking to hear from analytics professionals. Catch us via our calendar below.
