> seems wasteful
This depends on what resources you're counting. If you're counting developer cycles, it is not.
ESP-IDF+FreeRTOS has great value: it solves a host of mundane problems that need solving in real products. Discarding all of that value is foolish; preserve it, and keep your work aligned with ongoing ESP-IDF and FreeRTOS development so that future you can adopt updates and supported tooling in a timely manner.
However, you also need at least some of your work to be hard real-time, bare-metal code. You do this through hardware peripherals, precise memory management, and tight ISRs that do not contend with whatever FreeRTOS or some Espressif driver is up to. Most of all, you never want to have to rework these parts because something in ESP-IDF and/or FreeRTOS, both rapidly moving targets, has changed.
Dedicating cores (core 0 for FreeRTOS, core 1 for you) provides exactly this, which is why ESP-IDF supports this model.
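To make that concrete, here is a minimal sketch of the split, assuming a stock dual-core ESP-IDF project. The names (rt_task, edge_isr), the GPIO pin, and the stack/priority values are illustrative assumptions, not anything Espressif prescribes; note that a busy-polling loop like this will also trip core 1's idle-task watchdog unless that check is disabled (CONFIG_ESP_TASK_WDT_CHECK_IDLE_TASK_CPU1) or the watchdog is otherwise fed:

```c
// Rough sketch, not a drop-in implementation. Pin number, names, and
// stack sizes are made up.
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "esp_attr.h"
#include "esp_intr_alloc.h"
#include "driver/gpio.h"

// ISR placed in IRAM so it never stalls on a flash-cache miss while
// core 0 (Wi-Fi, ESP-IDF drivers) has the cache busy.
static void IRAM_ATTR edge_isr(void *arg)
{
    // minimal, deterministic work only: no logging, no allocation
}

// The hard real-time loop: owns core 1, polls hardware directly, and
// never blocks on anything ESP-IDF or FreeRTOS owns.
static void rt_task(void *arg)
{
    for (;;) {
        // poll peripherals, feed DMA, bit-bang timing-critical I/O...
    }
}

void app_main(void)
{
    // Assumes GPIO 4 is already configured as an input with an edge
    // interrupt type selected.
    gpio_install_isr_service(ESP_INTR_FLAG_IRAM);
    gpio_isr_handler_add(GPIO_NUM_4, edge_isr, NULL);

    // Core 0 keeps FreeRTOS + all Espressif services; core 1 gets one
    // max-priority task that effectively becomes bare metal.
    xTaskCreatePinnedToCore(rt_task, "rt", 4096, NULL,
                            configMAX_PRIORITIES - 1, NULL, 1);
}
```

The specific calls matter less than the contract: nothing on core 1 ever waits on code that an ESP-IDF update might change underneath you.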
There is an ugly truth here. Ideally, one should not need to resort to such things: if the model and runtime behavior of the vendor's stack were extremely mature and could be relied upon with high confidence, it would not be necessary. However, anyone who has ever actually dealt with real-time requirements and/or needed to fully exploit hardware peripheral capabilities in the real world of endlessly changing, incomplete, buggy BSPs/RTOSes/etc. knows that they probably won't live to see that.
On basic microcontrollers, mixing message/command I/O with application threads on the same core often leads to missed incoming commands, so it's relatively normal to move that I/O onto its own core, free of application logic.
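For the ESP32 case at hand, a hedged sketch of that separation might look like this. The UART port, queue depth, message size, priorities, and the start_cmd_io helper are all assumptions for illustration, and it presumes uart_driver_install() has already been called:

```c
// Illustrative only: one task does nothing but drain the command UART
// into a queue, pinned to the driver core so heavy application work on
// the other core can never starve it. UART_NUM_1, CMD_MAX_LEN, and the
// queue depth are made-up values.
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "freertos/queue.h"
#include "driver/uart.h"

#define CMD_MAX_LEN 64

static QueueHandle_t cmd_queue;

static void cmd_rx_task(void *arg)
{
    uint8_t buf[CMD_MAX_LEN];
    for (;;) {
        // Block until bytes arrive, then copy them out immediately so the
        // driver's RX ring buffer never overflows and drops a command.
        int n = uart_read_bytes(UART_NUM_1, buf, sizeof(buf),
                                pdMS_TO_TICKS(20));
        if (n > 0) {
            // Fixed-size copy for brevity; a real protocol would frame
            // messages before queueing them.
            xQueueSend(cmd_queue, buf, 0);
        }
    }
}

void start_cmd_io(void)  // hypothetical init helper
{
    cmd_queue = xQueueCreate(8, CMD_MAX_LEN);
    // I/O lives on core 0 with the drivers; application logic stays on
    // core 1 and consumes cmd_queue at its leisure.
    xTaskCreatePinnedToCore(cmd_rx_task, "cmd_rx", 2048, NULL,
                            configMAX_PRIORITIES - 2, NULL, 0);
}
```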
This is reasonably common - Nordic and ST do this as well, on the nRF53 and STM32WB/WL respectively. It's convenient for concurrency and separation of concerns.
Also, running the network stack on a separate core lets the vendor ship it encrypted and signed, so that end users can't (easily) reverse engineer it. Which sucks for those of us who would like to run fully open-source code without binary blobs.
For instance, compare the reference manuals for the STM32WL3R and the STM32WB microcontrollers. The former has a single CPU, and it has almost 250 pages of detailed documentation about exactly how the hardware is controlled at a register level. The latter runs the network stack on an auxiliary CPU, and the manual just has a block diagram and a sentence that says "use our drivers" (which are only available in encrypted format).