OpenFlow and Software Defined Networking

Recently I’ve been getting very excited about OpenFlow and the various interrelated projects around it, particularly OpenStack’s Quantum and Open vSwitch.

What’s got me going about this, in particular, is that it gives people (admittedly not ordinary people) the ability to write their own network protocols. Why is this a good thing, you ask? Don’t we already have a bunch of protocols, both proprietary and open?

Well, yes we do, but the reason we have so many different protocols is that they meet different needs, and as technology changes the list of needs keeps changing and growing. These days we have clouds, highly available VMs, active/active data centres, 10, 40 and 100 GigE connectivity, and a whole raft of other things that were next to non-existent ten years ago.

However, whilst we have all these new protocols and features, implementing them can be a problem: it usually requires new ASICs, which means a long time to market for new features and a long wait for updates to existing ones. This is one of the greatest benefits Software Defined Networking (SDN) will bring: new features and protocols can be implemented quickly and easily, in software.
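
To make that concrete, here is a minimal, library-free sketch of the match/action idea that sits underneath OpenFlow. The class names, field names and packet representation are my own illustrations, not any real controller’s API; the point is simply that a “new feature” becomes rules pushed into a table rather than new silicon.

```python
# Minimal, library-free sketch of the OpenFlow match/action idea:
# a "feature" becomes software rules pushed into a flow table,
# not new ASICs. Class and field names are illustrative only.
from dataclasses import dataclass


@dataclass
class FlowRule:
    match: dict            # e.g. {"in_port": 1, "dl_vlan": 10}
    actions: list          # e.g. ["output:2"] or ["drop"]
    priority: int = 0


class FlowTable:
    def __init__(self):
        self.rules = []

    def install(self, rule):
        """What a controller does when it 'adds a feature': push rules."""
        self.rules.append(rule)
        self.rules.sort(key=lambda r: r.priority, reverse=True)

    def lookup(self, packet):
        """Return the actions of the highest-priority matching rule."""
        for rule in self.rules:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.actions
        return ["send_to_controller"]   # table miss


# Usage: "implementing" a trivial VLAN-isolation feature purely in software.
table = FlowTable()
table.install(FlowRule(match={"dl_vlan": 10, "in_port": 1},
                       actions=["output:2"], priority=100))
table.install(FlowRule(match={"dl_vlan": 10}, actions=["drop"], priority=50))

print(table.lookup({"in_port": 1, "dl_vlan": 10}))  # ['output:2']
print(table.lookup({"in_port": 3, "dl_vlan": 10}))  # ['drop']
```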

Like most things, there are always new challenges and new solutions to those challenges, and organisations like Google, Yahoo, HP and Rackspace are using OpenStack and OpenFlow to meet their specific challenges. Recently, though, some of the bigger challenges for more modest networks and data centres have been driven by the high densities and the mobility problems that server virtualization has brought.

Challenges like first-hop optimization, IP and VLAN configuration changes, Layer 2 security through things like PVLANs, and redundant bandwidth sitting unused because Spanning Tree, being an inherently “dumb” protocol, can’t make use of it.

Sure, there are technologies and solutions to some of these problems, but generally you’re paying a premium for hardware, features or licenses you won’t use just to get the one feature you do need, or, as noted above, the feature is still in development and a long way off when you need a solution now. OTV, for example, is only supported on the Nexus range, requiring a large capital outlay and horsepower overkill just to extend your Layer 2 networks between data centres.

What OpenFlow promises is that you can solve these problems now, without waiting for Cisco, Juniper or Extreme to come out with something. Done right, OpenFlow will either interoperate across multiple vendors or be optimised to work efficiently with the technologies you need in your situation, like TRILL or SPF.

OpenFlow also promises to change switching and networking the way virtualization technologies such as VMware, Xen and KVM did for server hardware: compute is now a commodity. These days driver issues, compatibility problems and over-speccing servers for burst or redundancy are all gone. Servers are purchased for raw power and connected to the virtualization platform of choice, and overall efficiency and utilization are greatly improved. Maintenance and hardware refreshes are also a lot easier: migrate the VMs off to another server, replace or upgrade the hardware, and migrate them back.

I see a day when OpenFlow brings this to the table for networking. Switches are bought based on raw power: you just buy the best-value switch with the required number of ports at the speed you want, with no worries about whether it supports Dot1X, VLANs, MSTP and so on, because all of that is handled in the controller. Need to add BGP, server load balancing or the latest fad to your network? No more forklift upgrade; simply buy a plug-in software module or key for the controller, enable it and start using the feature. Switch replacement for upgrades or maintenance also becomes easier: tell the controller to stop passing traffic through a particular switch, the rest of the network is updated, traffic routes around it, and you can safely remove the switch knowing your network isn’t going to break.
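
As a rough sketch of that last idea, here is how a controller might “drain” a switch before maintenance. The topology, switch names and drain() helper below are entirely made up for illustration; a real controller would recompute paths over its actual topology database and push updated flow rules, rather than just print a path.

```python
# Hypothetical sketch of "drain a switch for maintenance": mark a switch
# out of service, recompute paths around it, and (in a real controller)
# push the updated flow rules. Topology and names are invented.
from collections import deque

# Adjacency list for a small leaf/spine-style topology.
topology = {
    "leaf1":  {"spine1", "spine2"},
    "leaf2":  {"spine1", "spine2"},
    "spine1": {"leaf1", "leaf2"},
    "spine2": {"leaf1", "leaf2"},
}


def shortest_path(topo, src, dst, drained=frozenset()):
    """BFS shortest path that ignores drained switches."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in topo[path[-1]] - seen - drained:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None


drained = set()


def drain(switch):
    """Operator action: stop sending traffic through this switch."""
    drained.add(switch)
    # A real controller would now recompute and push flow-mods; here we
    # just show that traffic still has a path around the drained switch.
    print("leaf1 -> leaf2 via:",
          shortest_path(topology, "leaf1", "leaf2", frozenset(drained)))


print("before:", shortest_path(topology, "leaf1", "leaf2"))  # via spine1 or spine2
drain("spine1")                                              # path now avoids spine1
```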

I recently read an article about how, I think it was, Google uses SDN, and specifically OpenFlow, to pre-test traffic patterns for server or node moves and see what impact they will have. That’s a massive plus for management, and vastly different from the current process of a change request where the answer to “have you tested this?” is “No, we don’t have a spare $1M worth of networking kit to test on”. With OpenFlow and SDN you can potentially say “yes, we modelled it and know exactly what will happen”. There have already been white papers proposing cases for testing on live networks using slices (link here).

So what is my interest in OpenFlow?

Well, recently I had a project where we wanted to use PVLANs but hit a wall: different vendors implement them in incompatible ways, and in the case of VMware they aren’t implemented at all without the Cisco Nexus 1000V, which drives up costs. In the end we went with multiple VLANs, which means more complexity, more management and bigger gateway devices to handle dot1q trunking or sub-interfaces.

So probably the first thing I’ll be looking at is a vendor-compatible implementation of PVLANs using OpenFlow and Open vSwitch, as it seems a fairly easy project with almost immediate value; a rough sketch of the policy follows below. Then I might look at a TRILL or SPF implementation for OpenFlow. I’m also definitely thinking about how to solve gateway optimization with a better solution than the hybrid bastardization of GLBP, Proxy ARP and HSRP/VRRP blocking that is used now.
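
Here is the kind of PVLAN forwarding policy I’d want a controller to enforce on Open vSwitch via OpenFlow rules. The port classification and helper names are hypothetical, and this is only the decision logic; a real implementation would translate each verdict into per-port flow entries (forward vs. drop) pushed to the switch.

```python
# Rough sketch of classic PVLAN semantics within one primary VLAN.
# Port roles and names are hypothetical; a real controller would turn
# each allow/deny decision into OpenFlow forward or drop rules.
from dataclasses import dataclass


@dataclass(frozen=True)
class Port:
    name: str
    role: str               # "promiscuous", "isolated" or "community"
    community: str = ""     # only meaningful when role == "community"


def pvlan_allows(src: Port, dst: Port) -> bool:
    """Classic PVLAN semantics within a single primary VLAN."""
    if "promiscuous" in (src.role, dst.role):
        return True                          # gateways/firewalls talk to everyone
    if src.role == dst.role == "community":
        return src.community == dst.community
    return False                             # isolated-to-isolated, cross-community


# Example port layout: a gateway, a two-host community and an isolated host.
gateway = Port("gw0", "promiscuous")
web1    = Port("web1", "community", "web")
web2    = Port("web2", "community", "web")
db1     = Port("db1", "isolated")

for a, b in [(web1, web2), (web1, db1), (db1, gateway)]:
    verdict = "forward" if pvlan_allows(a, b) else "drop"
    print(f"{a.name} -> {b.name}: install {verdict} flow")
```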

Production quality, however, is probably a while away: version 1.0 of OpenFlow has no support or provision for controller failure, and the questions of capacity and performance are a long way from being addressed.

For now though it’s certainly a very exciting field and I think there will be a lot of development in this space.
