This seems ridiculously useful. What’s the catch?

The instrumentation has a performance overhead.

You enable the instrumentation with a prepend_around_action, e.g.:

    prepend_around_action :callstacking_setup, if: -> { params[:debug] == '1' }

When the request completes, the instrumented methods are removed (eliminating the overhead).
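
To make the lifecycle concrete, here's a minimal sketch of what an around action like that can do. The body of callstacking_setup below is my assumption, not the gem's actual internals (I'm using Ruby's built-in TracePoint as a stand-in), but it shows how the tracing can exist only for the duration of a single request:

    class ApplicationController < ActionController::Base
      prepend_around_action :callstacking_setup, if: -> { params[:debug] == '1' }

      private

      # Hypothetical body; the gem's real setup will differ.
      def callstacking_setup
        trace = TracePoint.new(:call, :return) do |tp|
          Rails.logger.debug("#{tp.event} #{tp.defined_class}##{tp.method_id}")
        end
        trace.enable
        yield              # run the rest of the callback chain and the action
      ensure
        trace&.disable     # instrumentation is gone once the request completes
      end
    end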

You have to enable it judiciously. But for a problematic request, it gives the entire team a holistic view of what is actually happening: which methods are called, with what parameters, and what they return.

You no longer have to reconstruct production scenarios piecemeal via the rails console.

I'm not familiar with Rails, so sorry if your reply above inherently answered this. So you're enabling the tracing with a call in your controller method, but how is the tool capturing function params and return values for sub-calls in the respective controller method?

Is it waiting for execution to return to the controller method and polling the stack trace from there?

As far as I can tell, it only executes the trace when asked. It's not an APM like New Relic. Most likely the trace meaningfully slows down the individual request.
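
As for the question above: in Ruby you don't have to poll the stack. A tracer can temporarily redefine each method of interest so the wrapper itself sees the arguments and the return value, then restore the original when done. I don't know that this gem does exactly this, but a toy version (the Instrument module here is made up for illustration) looks like:

    module Instrument
      # Replace klass#name with a logging wrapper; returns a lambda
      # that restores the original method (the "removal" step).
      def self.wrap(klass, name)
        original = klass.instance_method(name)
        klass.define_method(name) do |*args, &blk|
          puts "-> #{klass}##{name}(#{args.inspect})"
          result = original.bind_call(self, *args, &blk)
          puts "<- #{klass}##{name} => #{result.inspect}"
          result
        end
        -> { klass.define_method(name, original) }
      end
    end

    class Greeter
      def greet(name)
        "hello, #{name}"
      end
    end

    restore = Instrument.wrap(Greeter, :greet)
    Greeter.new.greet("world")  # logs the args and the return value
    restore.call                # un-instruments; no overhead afterwards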

When I was at ScoutAPM, we built a version of this that was stochastic instead of 100% predictable: we sampled the call stack every 10-50ms. Much lower overhead, and it caught the slower methods, which is quite helpful on its own, especially since slow behavior often isn't uniform; it tends to show up for only a small handful of your biggest customers. But it certainly missed many fast-executing methods.
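
For the curious, the sampling idea is simple enough to sketch in a few lines. This is a toy, not ScoutAPM's actual implementation: a background thread snapshots another thread's backtrace at a fixed interval and tallies frames, so the hottest frames dominate the counts:

    class StackSampler
      def initialize(target, interval: 0.025)   # ~25ms, in the 10-50ms range
        @counts  = Hash.new(0)
        @running = true
        @thread  = Thread.new do
          while @running
            target.backtrace&.each { |frame| @counts[frame] += 1 }
            sleep interval
          end
        end
      end

      # Stop sampling and return the ten most-seen frames.
      def stop
        @running = false
        @thread.join
        @counts.sort_by { |_, n| -n }.first(10)
      end
    end

    sampler = StackSampler.new(Thread.main)
    2_000_000.times { |i| Math.sqrt(i) }        # stand-in for a slow request
    sampler.stop.each { |frame, n| puts "#{n}x #{frame}" }

You can see the tradeoff right in the sketch: it's cheap and shows where the time went, but anything that runs between two samples is invisible.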

Different approaches for sure, solve different issues.

These types of profiling gems usually kill performance.