In this article, we will discuss Open Networking and dissect what the “open” in Open Networking really means. We will also look at different real-life products and solutions that fit the Open Networking model. Finally, we will walk through a practical lab example of Open Networking.
The “Open” in Open Networking
Depending on who you ask (vendor, engineer, etc.) about what the “open” in Open Networking means, the answer will vary widely. In this section, we will look at some of the possible definitions of “open” in terms of Open Networking and then home in on the definition this article will use.
Open as in Open Source
Many of us are familiar with the concept of open source – something whose “source” is freely available to the general public for viewing and modification. Sometimes, open source also means “free of charge”, but not always. A classic example of an open source solution versus a closed one is the Linux operating system versus Microsoft Windows. In the case of networking and networking devices, pfSense is an open source firewall application, as compared to the Cisco Adaptive Security Appliance (ASA), whose source code is closed and proprietary.
While open source software is more common, open source hardware is also available. For example, Facebook’s 6-pack is an open hardware modular switch. In terms of hardware, open source generally means the design of the hardware is freely available to the public. Several such products are available from the Open Compute Project (OCP).
Open as in Open APIs
By using Application Programming Interfaces (APIs), vendors can allow external applications to “program” their devices. Therefore, even though the devices themselves are closed and proprietary, they can be made programmable through APIs based on open standards.
An example of this is the Software Defined Networking (SDN) architecture which uses Northbound and Southbound APIs between the SDN controller and other parts of the architecture (network infrastructure and application layer). OpenFlow is an example of a southbound API that facilitates communication between an SDN controller and the underlying network infrastructure, and this protocol is based on open standards.
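To make this concrete, here is a hedged sketch of what an OpenFlow-style flow rule looks like, using Open vSwitch’s ovs-ofctl utility. This is an illustrative fragment, not something from the article’s lab: it assumes a machine running Open vSwitch with an existing bridge named “br0”, and in a real SDN deployment a controller would push comparable rules over the OpenFlow protocol rather than an admin typing them by hand.

```shell
# Sketch: install a flow that forwards traffic arriving on port 1 out of port 2.
# Assumes Open vSwitch is installed and a bridge named "br0" already exists.
ovs-ofctl add-flow br0 "in_port=1,actions=output:2"

# List the flows currently installed on the bridge to verify the rule was added.
ovs-ofctl dump-flows br0
```

The point is that the match/action model shown here is defined by the open OpenFlow standard, so any compliant controller or tool can program any compliant switch.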
Open as in Hardware Standardization
Instead of building custom Application-specific Integrated Circuits (ASICs), network device manufacturers can rely on merchant silicon (off-the-shelf chips) available from vendors like Broadcom and Intel. This results in some form of hardware standardization and manufacturers can focus on other areas of strength like software or support.
Open as in Network Disaggregation
Traditionally, network devices are sold as “black boxes” where the hardware and the software are tightly coupled/integrated together – you can’t get one without the other. For example, when you buy a Cisco 2960-X switch, you get the hardware (physical switch) and the software (Cisco IOS) together. While this is fine for many customers (e.g. enterprise networks), it does not scale well for customers with large networks/data centers (think Facebook, Google, Amazon). These high-end customers did not want to be locked into a particular vendor and needed to innovate quickly using devices that provided such flexibility.
This gave rise to a model where hardware and software are decoupled. This means that you can, for example, purchase bare-metal switches (i.e. the hardware without any software) and then run any network operating system (open source, commercial, or purpose-built) that you want on that hardware.
This form of open networking will be the main focus of this article.
Open Networking: Hardware/Software Disaggregation
Like we already mentioned above, open networking (as discussed in this article) is about separating the hardware and software aspects of a networking device. This is similar to how things are done in the server industry: you can buy hardware from any vendor (e.g. Dell, HP) and then install the operating system of your choice on this hardware (e.g. Microsoft, Linux).
In terms of the hardware aspect of open networking, there are two main variants: “white box” and “brite box”. These two solutions are very similar in that they both provide standard (bare-metal) network devices (e.g. switches). The difference is that brite box switches come from well-known manufacturers like Dell, Juniper, and HP, while white box switches come from comparatively smaller manufacturers. White box vendors include Accton and iwNetworks.
Note: Brite box basically means branded white box and the term was coined by Gartner.
Looking at the software, also known as the Network Operating System (NOS), to be installed on these bare-metal devices: it can be open source, commercial, or a mixture of both. Examples of NOS that can be installed on bare-metal switches include OpenSwitch, Cumulus Linux from Cumulus Networks, Switch Light by Big Switch Networks, and PicOS from Pica8. Most of these NOS are built on top of Linux, which makes them even more attractive to customers because of Linux’s open source nature and the existing Linux expertise in large environments.
Choosing the NOS to use will depend on a variety of factors like hardware compatibility (e.g. HP and Cumulus have a partnership), supported features and level of support available.
We have already seen real-world examples of successful open networking, such as Facebook’s 6-pack switch and the hardware designs shared through the Open Compute Project.
We already mentioned some of the benefits of Open Networking including avoiding vendor lock-in and flexibility to change something (hardware or software) without too much hassle.
Another benefit revolves around lower CapEx and OpEx. Bare-metal switches are generally less expensive than their black-box counterparts, and the operating expenses (e.g. support) are also considered lower than those of proprietary vendors. A Forrester report titled The Myth of White-Box Network Switches seemed to suggest that the cost difference between white box switches and proprietary, tightly coupled switches like the Cisco Nexus is small (about $900), but this report was criticized as inaccurate because it did not take into consideration Cisco’s support fees, which are usually quite steep.
First of all, it should be mentioned that open networking (in its current form) may not be suitable for your network today. If a vendor is offering you exactly what you need with their proprietary boxes, then you should think carefully before moving to another solution.
In any case, there are a couple of challenges associated with Open Networking including:
- Support: With dedicated appliances where hardware and software are tightly integrated, support from the vendor usually covers both hardware and software. With hardware/software disaggregation in open networking, who covers what? As an example, HP specifies that it will cover hardware support for its Altoline switches, while customers can buy software support from the NOS provider. By extension, what about security support? Who is responsible for patching security vulnerabilities when an exploit is discovered?
- Skillset: There will be a bit of a learning curve when first using these open networking solutions. For example, Cumulus Linux does not come packaged with the standard CLI that many network engineers are familiar with. And while server administrators may be familiar with the Linux-based syntax of such NOS, they may not understand networking at the level required to run the network. Fortunately, most of these NOS provide good documentation, e.g. the Cumulus Linux Documentation.
Open Networking, SDN and NFV
Two other technologies that seem to be gaining ground in the networking world are Software Defined Networking (SDN) and Network Functions Virtualization (NFV). However, these technologies are not the same as Open Networking.
SDN aims to decouple the control and data planes on network devices so that the network can be more easily programmed, providing greater automation and orchestration. Compared with Open Networking, SDN focuses on the network as a whole (programming and orchestration) while Open Networking focuses on individual devices. The two also differ in their very definitions: SDN separates the control plane from the data plane, while Open Networking separates hardware from software.
NFV is even closer to Open Networking than SDN is. NFV aims to virtualize network functions so that they can run on standard (commodity) computing resources (server, storage, and networking). NFV relies heavily on virtualization technology on commodity resources, while Open Networking is usually not virtualized (though it can be) and runs on networking hardware rather than servers.
Lab Scenario: Running Cumulus VX in VMware
Cumulus Networks provides ways for you to test out their product before purchasing, either in the cloud or as a Virtual Machine (VM). In our case, we will try out the VM in VMware, which you can download from here.
Note: You can also use VirtualBox or integrate it into GNS3 as an appliance.
Once you have downloaded the OVA file, import it into your virtualization software. By default, the VM comes with 4 network adapters. Make sure the first adapter is set to “NAT” while the other 3 are set to “Host-only”.
Note: This is not compulsory. It is just so that we have two separate networks – one for management and one for data.
When you are done with your settings, power on the VM. You will be presented with the login prompt as shown below:
The default username is “cumulus” with a password of “CumulusLinux!”.
When you log in, you will notice that it is just like any Linux box. Let us check a few things. By default, the management interface is “eth0” and it is configured to receive an IP address via DHCP (from your virtualization software). We can issue the ifconfig command to view the IP address on this interface:
My own VM received an IP address of 172.16.155.129. I can ping and SSH to this switch from my normal host:
Tolus-MacBook-Air:~ Tolu$ ping 172.16.155.129
PING 172.16.155.129 (172.16.155.129): 56 data bytes
64 bytes from 172.16.155.129: icmp_seq=0 ttl=64 time=0.487 ms
64 bytes from 172.16.155.129: icmp_seq=1 ttl=64 time=0.431 ms
64 bytes from 172.16.155.129: icmp_seq=2 ttl=64 time=0.494 ms
^C
--- 172.16.155.129 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.431/0.471/0.494/0.028 ms
Tolus-MacBook-Air:~ Tolu$ ssh cumulus@172.16.155.129
cumulus@172.16.155.129's password:

Welcome to Cumulus VX (TM)

Cumulus VX (TM) is a community supported virtual appliance designed for
experiencing, testing and prototyping Cumulus Networks' latest technology.
For any questions or technical support, visit our community site at:
http://community.cumulusnetworks.com

The registered trademark Linux (R) is used pursuant to a sublicense from LMI,
the exclusive licensee of Linus Torvalds, owner of the mark on a world-wide basis.

Last login: Sun Sep 10 19:17:14 2017 from 172.16.155.1
cumulus@cumulus:~$
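Because the NOS is ordinary Linux, you can script against command output with standard tools like grep and cut. As a minimal sketch, here is one way to pull the IPv4 address out of ifconfig-style output; the sample line and address below are illustrative stand-ins (a real script would capture live ifconfig output instead).

```shell
# Sample line in the style of ifconfig output (address is illustrative).
sample='inet addr:172.16.155.129  Bcast:172.16.155.255  Mask:255.255.255.0'

# Extract the IPv4 address that follows "addr:".
addr=$(echo "$sample" | grep -o 'addr:[0-9.]*' | head -n 1 | cut -d: -f2)
echo "$addr"    # prints 172.16.155.129
```

This kind of one-liner is exactly the sort of thing that makes Linux-based NOS attractive to teams with existing server automation skills.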
Let’s make some changes on this switch. First, switch to root using the sudo -i command (so that we don’t have to keep typing sudo before every command):
cumulus@cumulus:~$ sudo -i
[sudo] password for cumulus:
root@cumulus:~#
Now, let us view the interface settings by typing the following command: cat /etc/network/interfaces
root@cumulus:~# cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*.intf

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp
Notice what I said about the eth0 interface being set up to receive an IP address via DHCP. You can change this if you want by following the steps here.
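As a sketch of what such a change looks like (the addresses below are illustrative, not from our lab), a static configuration would replace the dhcp stanza in /etc/network/interfaces with the standard Debian-style ifupdown syntax:

```text
# Illustrative static configuration for the management interface.
auto eth0
iface eth0 inet static
    address 172.16.155.10
    netmask 255.255.255.0
    gateway 172.16.155.1
```

After editing the file, the change can be applied on Cumulus Linux (e.g. with ifreload -a) without rebooting the switch.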
For this lab, we will integrate this VM with a GNS3 topology containing one router. We will simulate a scenario where the router is connected to one of the switch’s ports (operating at Layer 3) and we will test connectivity.
First, let us set up the switchport to which the router will be connected. We will be using one of the other network adapters connected as “Host-only”. In Cumulus, these front panel ports are designated as “swp#” where # can be any number (depending on how many interfaces you have). In our case, we will be configuring swp1 with an IP address of 10.1.1.254. To do this, enter the following commands in the CLI:
net add interface swp1 ip address 10.1.1.254/24
net pending
net commit
If we run the ifconfig swp1 command, we will see that our configuration has been added:
root@cumulus:~# ifconfig swp1
swp1      Link encap:Ethernet  HWaddr 00:0c:29:59:68:60
          inet addr:10.1.1.254  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe59:6860/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:1272 (1.2 KiB)
Now, we will open GNS3 and add a Cloud. Right-click on the cloud and select “Configure”.
Make sure the “Show special Ethernet interfaces” checkbox is checked, select your “Host-only” adapter from the dropdown list and then click “Add”.
Note: The default Host-only adapter in VMware is “vmnet1”.
Now, add a router to your topology and connect one of its interfaces to the “Host-only” adapter of the cloud.
Start the router and configure an IP address on the connected interface, e.g. 10.1.1.1/24:

interface fa0/0
 ip address 10.1.1.1 255.255.255.0
 no shutdown
Let’s test connectivity between the router and the switch using ping (the first echo typically times out while ARP resolves, hence the 80 percent success rate):
R1#ping 10.1.1.254
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.1.1.254, timeout is 2 seconds:
.!!!!
Success rate is 80 percent (4/5), round-trip min/avg/max = 24/50/64 ms
R1#
This brings us to the end of this article where we have discussed various definitions of “Open Networking”. We focused on hardware/software disaggregation where Network Operating Systems can be installed on bare-metal network devices which come in two flavors: white box and brite box.
We highlighted examples of white box network switch vendors, including Accton and iwNetworks. We also looked at a few brite box vendors – basically white box with a known brand – including HP and Dell.
On the software side of the equation, we looked at NOS like OpenSwitch, Cumulus Linux and PicOS.
Finally, we looked at a lab scenario where Cumulus VX, a demo VM from Cumulus Networks, ran in VMware and was integrated with a router running in GNS3.