The choice to use Count was made before I joined the company; IIRC they migrated to it from Tableau.

We wanted to migrate (to Streamlit, back then) so the SQL wouldn't live locked inside a tool but in our git repository, where we could run tests on the logic, etc. The spaghetti mess was felt too, even if it wasn't the main reason to switch.

(But then, 1) some team changes pushed us towards Metabase, and 2) we found that Streamlit managed by Snowflake is quite costly compute-time wise: the compute server that starts when you open a Streamlit report stays live for tens of minutes, which was unexpected to us.)

----

Export to dbt sounds great. Count has "export to SQL", which walks the graph of cell dependencies and collects them into a chain of CTEs. I can imagine there being a way to export into a ZIP of SQL+YML files, with one SQL file per cell.
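
For fun, here's a minimal sketch of what that ZIP export could look like. Everything here is made up (the cell shape, the ref() rewriting, the file layout); I don't know Count's internals:

  import zipfile
  from graphlib import TopologicalSorter

  # Hypothetical cell shape: name -> (sql_text, upstream_cell_names)
  cells = {
      "raw_orders": ("select * from src.orders", []),
      "daily_revenue": (
          "select order_date, sum(amount) as revenue\n"
          "from {{ ref('raw_orders') }}\ngroup by 1",
          ["raw_orders"],
      ),
  }

  def export_to_zip(cells, path="models.zip"):
      # Dependency-first order, same walk the "export to SQL" feature does
      order = list(TopologicalSorter(
          {name: deps for name, (_, deps) in cells.items()}
      ).static_order())
      with zipfile.ZipFile(path, "w") as zf:
          for name in order:
              sql, _ = cells[name]
              # One SQL file per cell, dbt-model style
              zf.writestr(f"models/{name}.sql", sql + "\n")
          # Minimal schema.yml listing every exported model
          yml = "version: 2\nmodels:\n" + "".join(
              f"  - name: {n}\n" for n in order
          )
          zf.writestr("models/schema.yml", yml)

  export_to_zip(cells)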

Thank you so much for sharing, super helpful!

Great take on the SQL lock-in; that's something I need to think hard about. Ideally a git integration, maybe?

Kavla also traverses the DAG; pseudocode:

  deps = getDeps()  // recursive, returns upstream nodes in dependency order

  for dep in deps:
    if dep is query:
      run("CREATE OR REPLACE VIEW {dep.name} AS {dep.text}")
    if dep is source:
      skip  // source tables already exist, nothing to materialize
A selected chain of Kavla nodes could probably be turned into a single dbt model using CTEs!
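
e.g. something like this, assuming the chain has already been collected into name -> (sql, upstream names) pairs (made-up shapes, not Kavla's actual internals):

  from graphlib import TopologicalSorter

  # Hypothetical node shape: name -> (sql_text, upstream_names)
  chain = {
      "orders": ("select * from src.orders", []),
      "daily": ("select order_date, sum(amount) as revenue\n"
                "from orders group by 1", ["orders"]),
  }

  def chain_to_model(chain, target):
      # Dependency-first order so each CTE only references earlier ones
      order = TopologicalSorter(
          {n: deps for n, (_, deps) in chain.items()}
      ).static_order()
      ctes = ",\n".join(
          f"{n} as (\n{chain[n][0]}\n)" for n in order if n != target
      )
      final = chain[target][0]
      return f"with {ctes}\n{final}" if ctes else final

  print(chain_to_model(chain, "daily"))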

Thanks for making me think about this!