What’s the current state of SDN development these days?

I remember working on related projects about ten years ago in grad school, and even back then it felt like a somewhat naive and overhyped form of “engineering innovation.”

Take OpenFlow, for example — in its reactive mode, the first packet of every new TCP connection had to go up to the controller, which then installed a per-connection flow match entry along the path. It always struck me as a bit absurd.
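To make that concrete, here is roughly what the reactive pattern looked like as controller code. This is a minimal sketch using the Ryu framework and OpenFlow 1.3 (not any particular production controller); the single-field match and the flood action are placeholder simplifications:

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class ReactiveSwitch(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        # Every packet with no matching flow entry gets punted here,
        # across the network, before the connection can proceed.
        @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
        def packet_in_handler(self, ev):
            msg = ev.msg
            dp = msg.datapath
            parser = dp.ofproto_parser
            ofp = dp.ofproto

            # Placeholder match; a per-connection controller would match
            # the full 5-tuple, i.e. one flow entry per TCP connection.
            match = parser.OFPMatch(in_port=msg.match['in_port'])
            actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                                 actions)]

            # Install the entry so later packets stay in the dataplane.
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=1,
                                          match=match, instructions=inst))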

At the time, the main push came from Stanford’s “clean slate” networking project led by Prof. Nick McKeown. It spun off things like Open vSwitch, Big Switch Networks, and even open-source hardware efforts like NetFPGA. Later, the professor went back into industry.

Looking back, the whole movement feels like a startup-driven experiment that got heavily “packaged” but never really solved any fundamental problem. I mean, traditional distributed-routing-based network gear was already working fine — didn’t it already have admin interfaces for configuration anyway (or should we just call that admin interface SDN)? lol ~

It's all at the big cloud service providers now. The focus isn't so much on the physical network (as originally imagined) but on overlay networks. See the various DPUs like Intel IPU, Nvidia/Mellanox BlueField, etc. Nvidia DOCA even uses OvS as the sort of out-of-the-box example software for implementing networking on BlueField. When your controller is Arm cores 5 cm away on the same PCB, per-connection setup is no longer as absurd ;)


To me, server/networking hardware companies have a wet dream of manipulating workloads on physical servers the way one manipulates VMs in cloud computing.

Except the dream is not to do it just within a blade enclosure, but across blades in multiple racks, with network-based storage, in a multi-tenant environment. Maybe even across datacenters.

At some point, dealing (in an automated manner) with discovery, abstraction, and routing across different networking topologies, blade enclosures, rack switches, etc. becomes insane.

Of course a sysadmin with a few shell scripts could practically do the same for meaningful use cases without the general solution’s decade-long engineering effort and vendor lock-in…

SDN is great if you're trying to build something like a multi-tenant cloud on top of another network of machines. Your DPUs can handle all the overlay logic as if there were a top-of-rack switch in each chassis.
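For a sense of what "a ToR in each chassis" means in practice: on each host (or on a DPU's Arm cores) you wire every tenant into its own VXLAN segment over the shared underlay. A sketch driving stock Open vSwitch from Python; the bridge and port names, the peer IP, and the VNI are all made-up values:

    import subprocess

    def vsctl(*args: str) -> None:
        # Thin wrapper around the standard ovs-vsctl CLI.
        subprocess.run(["ovs-vsctl", *args], check=True)

    # One integration bridge per host/DPU (hypothetical name).
    vsctl("add-br", "br-int")

    # VXLAN tunnel port toward a peer hypervisor/DPU; the tunnel key
    # (VNI) keeps this tenant's traffic isolated on the shared fabric.
    vsctl("add-port", "br-int", "vxlan0", "--",
          "set", "interface", "vxlan0", "type=vxlan",
          "options:remote_ip=192.0.2.10",
          "options:key=5001")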

I was in close contact with telecoms during that timeframe. They went bananas with it because all of it was new to them, so they abused and misused it.

One of them, for example, used OpenDaylight not for its OpenFlow capabilities but via a heavily customized plugin, as a kind of orchestrator for automation: crazy YANG models were sent for execution to a downstream orchestrator.

But from their perspective, and the perspective of management, they were doing SDN.

Traditional network gear had "element controllers". Some of them got rebranded into "SDN-something" and got interface facelifts.

P.S. SDN/OpenFlow as you describe it was absolutely out of the question for deployment in production networks. They could talk about all its benefits, but nobody dared to do anything with it and, arguably, they had no real need.

A lot of mistakes were made. Almost all the code has been thrown away and all the details are different, but maybe some of the ideas influenced things that exist today.

AFAIK OvS can use pre-programmed flows, so it doesn't require talking to a controller on every new TCP connection; the dataplane uses the in-kernel conntrack module. Google Cloud uses a heavily modified OvS for their VM networking (Andromeda), and I think some other cloud providers do as well.
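Concretely, the pre-programmed pattern looks something like the rules below: untracked packets are pushed through conntrack once, new connections are committed, and established traffic then matches entirely in the kernel with no controller in the loop. A sketch assuming a bridge named br0 and standard OvS ct() flow syntax:

    import subprocess

    def add_flow(flow: str) -> None:
        # Install one OpenFlow rule via the standard ovs-ofctl CLI.
        subprocess.run(["ovs-ofctl", "add-flow", "br0", flow], check=True)

    # Send untracked IP traffic through the in-kernel conntrack module.
    add_flow("table=0,priority=100,ip,ct_state=-trk,actions=ct(table=1)")

    # New connections: commit to conntrack, then forward normally.
    add_flow("table=1,priority=100,ip,ct_state=+trk+new,"
             "actions=ct(commit),NORMAL")

    # Established connections hit this rule purely in the dataplane.
    add_flow("table=1,priority=100,ip,ct_state=+trk+est,actions=NORMAL")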