
The story of a pivot, and what we’re building at Rasgo

by Patrick Dougherty on 1/25/2023

Starting Out

When Jared and I decided to go all-in on building Rasgo in 2020, we had one central mission in mind for the first product we would build:


“Enable anyone to get trusted insights from data in less than 5 minutes.”

As we started developing the first prototype of Rasgo and showing it to potential users we trusted, we were able to confirm one hypothesis: data scientists are particularly under-served by tools that help them share and re-use valuable datasets. We oriented ourselves toward the data scientist as our primary user and set out to build a powerful tool for them to build, share, and deploy datasets. Coinciding with our launch, a market began to form around sharing and deploying datasets for data scientists… the feature store.

Well, we missed something.

Data scientists are under-served by data tools and write most of their code from scratch… but they’re (generally) just fine with that. In fact, the “killer use case” that emerged in the feature store market was much more engineer-focused than data-scientist-focused… namely, helping deploy features into production, usually at low latency.

This use case didn’t fit our early product or our mission statement. Our reaction? That’s ok! It happens to the best of ‘em. But where do we go next?

The Pivot

Well, we didn’t really need to go anywhere. The answer was, fortunately, right in front of us… and coming from our users. It was most apparent when we asked, “why are you using Rasgo?” The answer we heard repeatedly was, “It saves me time!”

Digging into this, we found a common theme… Data Practitioners were losing hours per day functioning in a support capacity, helping their Data Consumers get the data they needed. The frustration came from the fact that these requests were often repetitive, never-ending back-and-forths to construct SQL logic, provision access to data, share queries, and then explain that logic.

Enter Rasgo… one of our users described his new workflow as, “When I get one of those typical questions on Slack, I just send back a Rasgo URL.” With that URL, the Data Consumer can consume the context, query the data, join it with another table, and finally download it as a CSV… all in self-service.

Saving Data Practitioners time on support ticket requests unlocks huge opportunities within the organization for proactive insight generation and decision support… the value-add work your company likely had in mind when it set up a data team in the first place.

What We’re Building

After we validated this need with other users, the decision was obvious… and our new goal is to make Rasgo the fastest way for Data Consumers to self-serve enterprise data. The foundations of our product remain intact, but our mission is clearer than ever. In that vein, we’ve made some great progress… here are some highlights:

  1. Verification workflow for all SQL logic so that Data Practitioners can label which data assets have been verified for consumption
  2. Auto-generated documentation for all of your datasets via GPT-3
  3. dbt Integration to surface dbt models and metrics directly into Rasgo
  4. Build-your-own templated SQL queries, enabling Data Practitioners to store standard logic with configurable inputs that Data Consumers can customize (sketched below)
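
To make that last item concrete, here’s a rough sketch of the kind of template a Data Practitioner might store once and verify. The Jinja-style placeholders, table name, and input names are illustrative assumptions, not Rasgo’s exact template format.

    -- Illustrative templated query: the Data Practitioner stores and verifies
    -- this logic once, and the {{ ... }} placeholders are the configurable
    -- inputs a Data Consumer fills in through the UI. The syntax and names
    -- here are assumptions, not Rasgo's exact template format.
    SELECT
        region,
        DATE_TRUNC('{{ time_grain }}', closed_date) AS period,
        SUM(amount) AS total_closed_amount
    FROM {{ orders_table }}
    WHERE closed_date >= '{{ start_date }}'
      {% if region_filter %}
      AND region = '{{ region_filter }}'
      {% endif %}
    GROUP BY 1, 2
    ORDER BY 1, 2;

The Data Consumer only picks the inputs (a time grain, a start date, an optional region filter); the SQL itself never has to be rewritten or re-explained.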

I’m especially excited about our next big release: Apps.

Rasgo Apps are dynamic, interactive collections of tables, queries, metrics, and charts that are curated for Data Consumers. For example, you might have a Sales Performance app that lets any Sales Leader check on their attainment of the quarterly target, interactively query their Salesforce Leads, and tweak closing scenarios across their Opportunities to get a realistic picture of their total bonus incentives.

The power of this App is that, under the hood, it’s re-using SQL components that are centralized and verified once by Data Practitioners:

  • Attainment of target is a dbt metric, standardized in the dbt semantic layer
  • Salesforce Lead and Opportunity tables are stored in Snowflake, and interactively queried with Rasgo’s no-code filter tool
  • Total bonus incentives are calculated based on a verified query that the Data Practitioner published as a Rasgo Transform (sketched below)
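
To give a feel for how that last component might work under the hood, here’s a rough sketch of what the verified bonus-incentive query could look like as a reusable template. The table, columns, and configurable inputs are purely illustrative assumptions, not Rasgo’s actual Transform syntax.

    -- Hypothetical shape of the verified "total bonus incentives" logic a
    -- Data Practitioner might publish once. The table, columns, and {{ ... }}
    -- inputs are illustrative assumptions; probability is assumed to be
    -- stored as a percent.
    SELECT
        owner_name AS sales_leader,
        SUM(amount * (probability / 100) * {{ close_rate_adjustment }}) AS projected_closed_amount,
        SUM(amount * (probability / 100) * {{ close_rate_adjustment }}) * {{ commission_rate }} AS projected_bonus
    FROM salesforce.opportunities
    WHERE close_date BETWEEN '{{ quarter_start }}' AND '{{ quarter_end }}'
      AND stage_name NOT IN ('Closed Won', 'Closed Lost')
    GROUP BY owner_name;

The Sales Leader only tweaks the scenario inputs (close-rate adjustment, commission rate, quarter dates); the underlying logic stays centralized and verified.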

Our product approach can be summarized as: create SQL logic once, verify it, and then consume it everywhere in self-service. Sound interesting? We’re always looking for organizations that share this objective to kick off a proof-of-concept of Rasgo. Just send me an e-mail or connect on LinkedIn.

Sign Up for Your Free 30-Day Trial!