The justifications for this kind of thinking are explained very well and in great detail in the Destroy All Software screencast series by Gary Bernhardt.

Yes, it is a good idea to decouple your application from the framework. Yes, it is a good idea to write your own functions which wrap persistence layer library functions. Yes, it is a good idea to drive out the design of the system with automated tests, and to avoid having to test behaviour with more expensive integrated tests where possible.

It depends. Rails is great for rapid application development. You probably don't want to decouple at that early stage. But when your app struggles with slow responses and job management becomes increasingly difficult, you begin to see the flip side of the trade-off between ease of development and performance. This recently featured project, https://github.com/9001/copyparty, is extremely performant but forgoes many of the niceties of Rails. It uses the filesystem instead of a database and has no build step for assets. I'm not sure the solution is to push all queries into repositories, but it's quite easy to abuse the convenience of ActiveRecord by intertwining all the business logic and database access.
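The repository idea can be sketched in plain Ruby. This is a minimal sketch, not production code: `UserRepository` and its hash-backed store are hypothetical stand-ins for an ActiveRecord model, just to show business logic talking to a narrow interface instead of reaching into the ORM everywhere.

```ruby
# Hypothetical sketch: a repository that hides the persistence layer.
# In a real app the store would be an ActiveRecord model (e.g. User);
# here a plain hash stands in so the example is self-contained.
class UserRepository
  def initialize(store = {})
    @store = store
    @next_id = 1
  end

  # The rest of the app calls these methods and never touches the ORM.
  def create(attrs)
    id = @next_id
    @next_id += 1
    @store[id] = attrs.merge(id: id)
  end

  def find(id)
    @store.fetch(id) { raise KeyError, "no user #{id}" }
  end

  def active
    @store.values.select { |u| u[:active] }
  end
end

repo = UserRepository.new
repo.create(name: "Ada", active: true)
repo.create(name: "Bob", active: false)
puts repo.active.map { |u| u[:name] }.inspect  # => ["Ada"]
```

The point is not the pattern's name but the seam: swapping the store for a real model (or a faster backend later) doesn't ripple through the business logic.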

Every Rails project I've been involved with has had two phases:

The first phase was being amazed that we got basic functionality up and running so fast.

The second phase was feeling like we were spinning our wheels because we were always dealing with some performance issue or some complexity explosion that occurred when we stepped outside the bounds of a simple, low-volume CRUD app where Rails excels.

That does not mirror my experience. What I did see a lot was either spinning wheels because multiple people had divergent ideas about how the complexity should be managed, or spinning wheels because people on the project did not want to work with Rails at all and just wanted a rewrite of everything in something else.

Complexity does not have to be an explosion, but it always will be if the team designs in a way that is not coherent - and coherence is hard to achieve.

That project is a treat - but it's effectively an FTP server, so most of its useful data lives in the files it hosts. And it's deliberately built to run on a single server. I had a similar project a long while ago which would "wrap" an FTP server with a WeTransfer-like web UI, and it also had no database - since none was needed.

"No build step for assets" is how I do my Rails apps these days if I can help it, luckily modern browsers allow for that to a great extent.

I love the series (and the "wat?" talk, of course - somewhat less the whole TypeScript desiderata of Ok, computer).

And yes, it is nicer to avoid expensive tests if you can help it. It has been my experience, however, that if you can use a slightly-more integrated test - one which touches more parts of the system - you are likely to find more regressions, and find them much quicker, than if you subscribe to the notion of "testing one function with collaborators all around". It is academically very appealing, and your tests will be quite fast indeed - but they will be much harder to read and understand, and are always one "and_call_original" away from being useless.
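The failure mode can be shown in a few lines of plain Ruby. This is a hypothetical sketch (`TaxCalculator` and `OrderTotal` are made-up names, and the stub stands in for what a mocking library would generate): when the collaborator is stubbed, a real bug in it sails past the "isolated" check and only the path using the real object catches it.

```ruby
# Made-up collaborator with a deliberate bug: the rate is applied
# as a whole number instead of a fraction.
class TaxCalculator
  def tax_for(amount)
    amount * 20 # should be amount * 0.20
  end
end

# Made-up object under test, which delegates to the collaborator.
class OrderTotal
  def initialize(tax_calculator)
    @tax = tax_calculator
  end

  def total(subtotal)
    subtotal + @tax.tax_for(subtotal)
  end
end

# "Isolated" check: the collaborator is stubbed, so the bug is invisible.
stub = Object.new
def stub.tax_for(_amount)
  20.0
end
isolated = OrderTotal.new(stub).total(100.0)                  # => 120.0, "passes"

# "Integrated" check: the real collaborator is used, the bug surfaces.
integrated = OrderTotal.new(TaxCalculator.new).total(100.0)   # => 2100.0, fails loudly

puts [isolated, integrated].inspect
```

The isolated test was fast and green the whole time; it just wasn't testing anything that could break.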

As to decoupling from a framework in an ecosystem where, realistically, there is only one framework - and where, realistically, there are no deployment targets except for that framework - I don't believe this to be useful, sorry.

Yes, you can deploy your code into a cloud function (both AWS and GCP do Ruby cloud functions and you can get quite a bit of mileage out of it) but this is not an architecture you are likely to plan for - or need - until quite, quite late into the growth of a codebase. And even then - those runtimes have different constraints, so you may want to build a module which is accessible both from Rails and from such a cloud function.
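Such a shared module can be a plain Ruby module with no Rails dependency, called from both places. A hypothetical sketch - `ReportBuilder` and the event shape are invented for illustration; the `event:`/`context:` keyword signature is the one the AWS Lambda Ruby runtime expects for handlers:

```ruby
# Hypothetical shared module: no Rails, no gems, just plain Ruby,
# so it can be required from the Rails app and from a cloud function.
module ReportBuilder
  module_function

  def build(rows)
    {
      count: rows.size,
      total: rows.sum { |r| r[:amount] }
    }
  end
end

# From Rails this might be called in a controller or a job, e.g.:
#   render json: ReportBuilder.build(Order.recent.map(&:attributes))

# AWS Lambda entry point (Ruby runtime handler signature):
def handler(event:, context:)
  rows = event.fetch("rows", []).map { |r| { amount: r["amount"] } }
  ReportBuilder.build(rows)
end
```

The constraint that pays off here is the boring one: the module takes and returns plain data, so neither runtime's peculiarities leak into it.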

What I don't like is the "you must decouple in order to decouple" line of conversation, which is not a way to advocate for something at all.