
Conference Paper · December 2016
DOI: 10.1109/ICCES.2016.7821979
Available at: https://www.researchgate.net/publication/312569297



Software Defined Networking
Concepts and Challenges
Mohammad Mousa, Ayman Bahaa-Eldin, Mohamed Sobh
Computers and Systems Eng. Dept., Faculty of Engineering, Ain Shams University

Abstract—Software Defined Networking (SDN) is an emerging networking paradigm that greatly simplifies network management tasks. In addition, it opens the door for network innovation through a programmable, flexible interface controlling the behavior of the entire network. In contrast, traditional IP networks have for decades been very hard to manage, error prone and difficult to extend with new functionalities. In this paper, we introduce the concepts & applications of SDN with a focus on the open research challenges in this new technology.

I. INTRODUCTION

Traditional IP network protocols were designed around a distributed control architecture in which network devices communicate with each other through a large set of network protocols to negotiate the exact network behavior based on the configuration of every individual device. Network devices are sold as closed components & network administrators are only able to change the parameters of the different network protocols. Network administrators have to translate high level network policies into low level scripts written for every individual device, commonly known as the device "configuration language". Also, every equipment vendor has its own configuration language, with its own degree of compliance to the large set of network standards, leading to several inter-operability issues when integrating equipment from different vendors. In addition, each time a new functionality is needed (such as a load balancer or a firewall), a new device is integrated into the network, or one of the existing devices is upgraded to perform the new functionality. In this manner, traditional IP networks are neither flexible nor programmable by any means. In such a distributed, multi-vendor, multi-protocol & human-dependent environment, service creation & troubleshooting became a very hard task.

SDN has introduced a paradigm shift in the networking industry. Instead of having a distributed control architecture, it consolidates all the control in a single node called the "Network Controller", which is simply software running on a commercial server platform. Network forwarding devices no longer participate in the network control & only forward packets based on the set of rules installed by the network controller. The network controller programs the forwarding rules of the forwarding devices through the "Openflow protocol" & hence the network forwarding devices are called "Openflow switches". Openflow is a standard, vendor-independent protocol & hence no vendor-specific knowledge is needed to control the forwarding behavior. In order to implement a new network functionality, only a new application needs to be installed on the network controller, while no change is needed on the forwarding devices' side. Figure 1 shows the architecture difference between a traditional network & a Software-Defined Network [4].

SDN has provided a very flexible interface for the creation of new services. Network programmers write their own network policies & services in a high level programming language (one example of a network policy is load-balancing the traffic to a certain destination over multiple paths to avoid the congestion of a certain path). The network controller should be able to translate these high level programs into low level forwarding rules on the individual forwarding devices. By using centralized high level programs to control the network behavior, network administrators are able to automate network tasks using programs written in high level general purpose programming languages like C++, Java & Python.
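As an illustration only (not part of the paper's evaluation), the following minimal Python sketch shows the shape of such a controller application: a learning-switch policy reacting to packet-in events. The event and rule formats are simplified stand-ins rather than the API of any specific controller platform (Ryu, POX, Floodlight, etc.).

# Illustrative sketch of the controller-side logic of a learning-switch application.
# The event/rule formats below are simplified stand-ins, not a specific controller's API.
mac_to_port = {}    # (switch, mac) -> port learned from previous packets

def on_packet_in(switch, in_port, src_mac, dst_mac):
    """Return the flow rule to install (or None to flood) for a packet-in event."""
    mac_to_port[(switch, src_mac)] = in_port             # learn where src_mac lives
    out_port = mac_to_port.get((switch, dst_mac))
    if out_port is None:
        return None                                       # unknown destination: flood
    return {                                              # high level intent becomes a low level rule
        "switch": switch,
        "priority": 10,
        "match": {"eth_dst": dst_mac},
        "actions": [("output", out_port)],
    }

# Two packet-in events: the second one can be answered by installing a rule.
print(on_packet_in("s1", in_port=1, src_mac="aa:aa", dst_mac="bb:bb"))   # None -> flood
print(on_packet_in("s1", in_port=2, src_mac="bb:bb", dst_mac="aa:aa"))   # rule towards port 1

Replacing or extending the network behavior then amounts to loading a different application on the controller, with no change on the switches.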
Figure 1 Architecture difference between traditional IP networks and a Software-Defined Network [4]

II. HISTORY OF NETWORK PROGRAMMABILITY

The Openflow protocol was developed in late 2008 by a group of researchers from different universities including Stanford University, University of Washington, MIT, Princeton University & others [1]. As explained by the authors, the main target of the project was to enable researchers to test experimental network protocols in their campus networks. Later on, the idea was used in other domains such as datacenters, with the emergence of cloud computing & the need for a flexible programmable network.
In 2012, SDN was adopted by Google to interconnect its datacenters spread around the world, due to the great flexibility of SDN for inter-datacenter traffic engineering. Although SDN has emerged only a few years ago, many of the concepts adopted by SDN were developed over the last two decades. SDN is built around three main concepts: Programmable Networks, Centralized Network Control and Control & Data planes separation. In this section, we briefly review some of the previous work done in each area.

A. Programmable Networks

Programmable Networks is the concept of deploying new functionalities in network nodes through a programming interface. In this networking paradigm, network nodes are not sold as closed "as-is" components. Active Networks [2] represents a very early trial of network programmability, appearing in the 90's of the last century. It exposed the resources of individual network nodes (such as processors, memory & packet queues) to be programmed for creating a new functionality for a specific pattern of packets. The code controlling the active nodes could be carried in the data packet itself sent from the end user (the capsule model), or it could be sent from a separate dedicated management interface (the programmable router/switch model).

Active networks faced much criticism related to network security, as end users would be able to tamper with the network nodes supporting in-band active programming. Also, although active networks proposed a flexible interface for network innovation, they did not find a compelling industry problem to solve, perhaps due to the limited spread of the internet at that time.
B. Centralized Network Control

Centralized network control is related to delegating a certain network function to a central node communicating with the network nodes through a standard protocol. Below we introduce SNMP & NETCONF as a sample of the work done in this area.

SNMP [3]: SNMP stands for Simple Network Management Protocol. It's an IETF protocol proposed thirty years ago to provide a unified interface for the management of network nodes, including statistics collection, alarm collection & network configuration through a remote SNMP agent. SNMP is widely used for statistics & alarm collection, while it's rarely used for configuration management due to many limitations in the modeling of configuration information and other reliability & security issues. The protocol tried to overcome these issues in subsequent versions.

NETCONF [4]: NETCONF is an IETF protocol proposed in 2006 to automate the configuration of network devices through a standard Application Programming Interface (API). It's based on a data modeling language called YANG that overcomes the limitations found in SNMP. NETCONF is not an alternative to the Openflow protocol: it only allows network administrators to configure their networks through a programmable interface, but all functionalities & logic must first be implemented at the network device itself before being configured by NETCONF, & hence it doesn't offer the capability to develop new functionalities as Openflow does.
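For illustration, the short Python sketch below pushes a configuration change to a NETCONF-enabled device using the open-source ncclient library. The hostname, credentials and the YANG-modeled XML payload are placeholders (they are not from the paper), and the exact payload depends on the data models the target device supports.

# Illustrative NETCONF example using ncclient; host, credentials and the
# XML payload below are placeholders.
from ncclient import manager

CONFIG = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <name>eth0</name>
      <description>configured over NETCONF</description>
    </interface>
  </interfaces>
</config>
"""

with manager.connect(host="192.0.2.1", port=830,
                     username="admin", password="admin",
                     hostkey_verify=False) as conn:
    conn.edit_config(target="running", config=CONFIG)     # push the change
    print(conn.get_config(source="running").data_xml)     # read back the configuration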
C. Control & Data Planes Separation

Network devices are involved in two major tasks. First, the devices communicate with each other through standard network protocols to agree on the traffic paths (as in the case of dynamic routing protocols), to prevent network loops (as in the case of the STP protocol) or to perform other functionalities depending on the protocol used. This task is commonly known as the "Control Plane" or the signaling plane. The output of this task is the set of data structures used to handle the traffic, such as the routing table found in network routers. Second, the devices forward the traffic based on the negotiations done in the first step. The action of forwarding the end user traffic (or blocking it, or any other action directly affecting the end user traffic) is known as the "Data Plane" or the forwarding plane.

Control & Data planes separation is the concept of having the control plane negotiations done in a separate node other than the node actually handling the end user traffic (the data plane). Of course, both types of nodes communicate through a standard protocol. In this section, we introduce some of the work done in this area.

ForCES [5]: ForCES stands for Forwarding & Control Element Separation. It's an IETF protocol proposed in 2004 to separate the control & data plane elements within the network device's internal infrastructure, in order to allow the control element of a device to communicate with a third-party forwarding element. Regardless of the separation, the network device appears as a single entity to other network devices.

PCE [6]: PCE stands for Path Computation Element. It's an element proposed by the IETF in 2006 to which the calculation of the path that traffic should follow in an MPLS network is delegated, in order to comply with a set of user-defined policies such as QoS assurance, load balancing or minimization of the WAN cost. Instead of having these calculations done & negotiated between edge routers, the function can be completely delegated to a standalone separate node.

Ethane [7]: The Ethane project was the predecessor of Openflow. The work was published in 2006 with the main target of building a flexible access control framework for enterprise networks. The framework is composed of two main components: the network controller, where the access policies exist, & Ethane switches controlled by the network controller. The controller compiles the access policies into flow table entries on the different switches to either forward or block a certain traffic flow.

III. SDN ARCHITECTURE

SDN architecture can be divided into four layers as shown in Figure 2. In this section, we explain the first three layers, while layer four (SDN Applications) is introduced in Section IV due to the large set of available applications. We explain the functions & interfaces of each layer together with a sample of the ongoing research work.
Figure 2 Layered architecture of an SDN network

A. Layer 1: Network Data Processors

In this layer the data processing devices exist. Data packets are handled based on the actions installed from the network controller in every individual device. Openflow is the standard protocol used for the communication between the network controller & the forwarding devices in order to install the data processing rules. As shown in Figure 3, an Openflow rule consists of four main parts: priority, matching condition, action & the related active counters. The priority field defines the order of matching a data packet against the set of installed rules; once a higher priority rule is matched, other lower priority rules are ignored. The matching condition consists of any combination of several IP/Ethernet headers (such as source IP, destination IP, port numbers & VLAN ID). The action field defines the action that should be taken upon receiving the packet, such as forwarding the packet to an outgoing interface, modifying some of the header fields before forwarding, or dropping the packet. Finally, the counters field defines the counters associated with the rule, such as the count of matching packets, whose values are sent to the controller to provide statistics about the data plane. Openflow has passed through many revisions, adding many new matching conditions & actions in order to satisfy more complicated use cases. Major changes between the different revisions are shown in Table 1. In the following, we highlight some of the open research challenges in this layer.

Figure 3 Structure of an Openflow rule (Priority | Matching Condition | Action | Counters)

OpenFlow Version | Year Introduced | Match Fields | Statistics
v1.0 | 2009 | Ingress port; Ethernet: source, destination, type, VLAN ID; IP: source, destination, protocol, ToS; TCP/UDP: source port, destination port | Per-table, per-flow, per-port & per-queue statistics
v1.1 | 2011 | Metadata, SCTP; MPLS: label, traffic class | Group statistics, action bucket statistics
v1.2 | 2011 | Openflow extensible match; IPv6: source, destination, flow label | -
v1.3 | 2012 | IPv6 extension headers, Provider Backbone Bridging tagging | Per-flow meter
v1.4 | 2013 | Optical port properties | -
v1.5 | 2014 | TCP flags | Extensible flow entry statistics

Table 1 A comparison between different versions of the Openflow protocol (reproduced from [8] with changes)
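To make the rule structure concrete, the following Python sketch (illustrative only, not tied to any specific Openflow library) models a flow table as a list of rules and applies the highest-priority matching rule to a packet:

# Illustrative model of an Openflow-style flow table; field names are simplified.
from dataclasses import dataclass

@dataclass
class FlowRule:
    priority: int
    match: dict                     # e.g. {"ip_dst": "10.0.0.5", "tcp_dst": 80}
    actions: list                   # e.g. [("output", 2)] or [("drop",)]
    packet_count: int = 0           # counter reported back to the controller

def lookup(flow_table, packet):
    """Return the actions of the highest-priority rule matching the packet."""
    candidates = [r for r in flow_table
                  if all(packet.get(k) == v for k, v in r.match.items())]
    if not candidates:
        return [("send_to_controller",)]          # table miss
    best = max(candidates, key=lambda r: r.priority)
    best.packet_count += 1
    return best.actions

table = [
    FlowRule(priority=100, match={"ip_dst": "10.0.0.5", "tcp_dst": 80},
             actions=[("output", 2)]),
    FlowRule(priority=10, match={}, actions=[("drop",)]),   # low-priority catch-all
]
print(lookup(table, {"ip_dst": "10.0.0.5", "tcp_dst": 80}))  # [('output', 2)]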
1) Software Switching: Software switching refers to using software running over a Linux OS for Openflow switching. It's commonly used in cloud computing to connect multiple VMs in the same host to the same external L2 network. Open vSwitch [9] is an open-source multi-layer switch for software switching in a virtualized environment. It shows a 25% increase in flow rate compared to a Linux kernel-based software switch. Suggested future work is related to supporting new functionalities such as Network Address Translation (NAT), Deep Packet Inspection (DPI) and stateful flow processing. Another research direction is related to decreasing the software switching delay.

2) Southbound Interface Protocols: While Openflow is the most widely used southbound interface protocol, other protocols already exist, such as OVSDB. OVSDB [10] stands for Open vSwitch Database Management Protocol. It's intended to provide a programmable interface to Open vSwitch to support advanced features that are not currently supported by Openflow, while Openflow is still used for the other operations. Compared to Openflow, OVSDB management operations occur on a relatively long timescale. Examples include: the creation of virtual bridges, where each bridge is considered a separate virtual switch controlled by a separate Openflow controller; the configuration of quality of service policies on interfaces with the associated traffic queues; the configuration of tunnels for the communication between virtual machines residing in separate hosts; & finally statistics collection. Suggested future work is related to the cooperation between OVSDB & Openflow for better network control.
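As a small illustration of this division of labour, the sketch below provisions an Open vSwitch bridge through the ovs-vsctl management utility (which speaks to the OVSDB database) and only then points the bridge at an Openflow controller. The bridge name, port name and controller address are placeholder values.

# Illustrative only: bridge/port names and the controller address are placeholders.
import subprocess

def sh(*cmd):
    subprocess.run(cmd, check=True)

# Long-timescale management operations go through OVSDB (via ovs-vsctl):
sh("ovs-vsctl", "add-br", "br0")                       # create a virtual bridge
sh("ovs-vsctl", "add-port", "br0", "vm1-eth0")         # attach a VM interface
# Per-packet forwarding behaviour is then delegated to an Openflow controller:
sh("ovs-vsctl", "set-controller", "br0", "tcp:192.0.2.10:6633")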
3) Network Slicing: Network slicing refers to slicing the forwarding infrastructure to allow multiple logical networks over the same physical forwarding devices. FlowVisor [11] is one of the early technologies for SDN slicing; it acts as a proxy between the controller & the Openflow switches, allowing each slice to be controlled by a separate controller. libNetVirt [12] is another network virtualization effort. It's written as a network virtualization library similar to the libvirt library in computer virtualization. It has the advantage of being able to communicate with southbound interfaces other than Openflow (such as MPLS), providing network slicing in a legacy network by creating agents that communicate with the libNetVirt driver & use CLI scripts or NETCONF to configure the legacy element. Suggested future work is related to creating business applications that depend on the benefits offered by network slicing.

B. Layer 2: Network Operating System

In this layer the network control exists, & hence it's called the "Network Controller" or "the network brain". A controller communicates with the data forwarding devices to install, update & remove Openflow rules based on the logic defined by the running applications. Just as the operating system of a normal computer provides functions such as resource management & file system management, the SDN controller provides similar functions to the SDN applications. There exist many open source SDN controllers supporting different Openflow versions and programming interfaces & providing different services for the running applications. Due to the centralized nature of the controller, it suffers from many reliability, scalability, security and performance issues. It's also noted that there are no standards defining the interfaces or the services offered by the SDN controller to the network applications, & hence many challenges exist in this layer. In the following, we highlight some of the open research challenges in this layer.

1) Controller Placement Problem: Controller placement plays an important role because it imposes a limitation on the processing speed of new flows (it's common for the switch to consult the controller for every new flow, to avoid overwhelming the switch memory with unused rules). In [13] the authors tried to find the optimum number of controllers & their locations in order to minimize the communication latency between the controller & the Openflow switches. They have shown the results when optimizing for two different performance metrics: the average latency & the worst case latency. Simulation results showed that random placement gives around double the delay compared to the optimal placement. It's also shown that a single controller is sufficient to satisfy the response time requirements of medium-size topologies.
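The placement problem can be stated very compactly. As an illustration (our simplification, not the algorithm of [13]), the brute-force sketch below picks the controller locations that minimize the average or worst-case propagation latency over a small topology described by a shortest-path distance matrix:

# Illustrative brute-force controller placement; distances are example data.
from itertools import combinations

# Shortest-path latency (ms) between the 4 switches of a toy topology.
DIST = [
    [0, 2, 5, 7],
    [2, 0, 3, 5],
    [5, 3, 0, 2],
    [7, 5, 2, 0],
]

def placement_cost(sites, metric="avg"):
    """Cost of placing controllers at `sites`: each switch uses its nearest controller."""
    per_switch = [min(DIST[s][c] for c in sites) for s in range(len(DIST))]
    return sum(per_switch) / len(per_switch) if metric == "avg" else max(per_switch)

def best_placement(k, metric="avg"):
    return min(combinations(range(len(DIST)), k),
               key=lambda sites: placement_cost(sites, metric))

print(best_placement(1))             # best single-controller location (average latency)
print(best_placement(2, "worst"))    # two controllers, optimizing worst-case latency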
2) Centralized vs. Distributed Controller: A single controller represents a single point of failure & a performance bottleneck, due to the increased delay of the communication with remote switches. To satisfy high-availability & fault-tolerance requirements, a distributed controller model has been proposed. On the other hand, a distributed controller will suffer from distributed systems issues such as state distribution, consistency & increased programming complexity. Onix [14] proposed the design of a logically centralized controller over distributed physical controllers, taking care of the state distribution between the individual controllers. HyperFlow [15] provides advanced features for distributed controllers, such as the localization of flow decisions to individual controllers, & hence it minimizes the communication latency. It also provides resilience to network partitioning due to individual controller failures & finally it enables interconnecting individually managed Openflow networks.

3) Controller Services & Application Portability: Traditional operating systems provide abstractions in the form of APIs (Application Programming Interfaces) to give access to shared resources (such as CPU scheduling) and to allow the communication with external devices using the appropriate driver. Network Operating Systems provide a similar function to network applications. While there are standards for the APIs & services provided by traditional operating systems, such as the POSIX family of standards, no similar standard exists for Network Operating Systems. Every controller provides its own set of APIs & SDN applications are written for a specific network controller. NOSIX [16] proposed a lightweight SDN application portability layer. However, NOSIX is not a generic northbound interface abstraction layer, but rather a higher layer abstraction of the flow tables of Openflow switches. In order to achieve more maturity in this regard, more applications need to be developed & used commercially to settle on a widely accepted set of controller services. We propose the use of SDN programming languages & their associated run-time environments as a solution for these inter-operability issues, as discussed in the following section. In Table 2, a comparison is shown between many available network controllers with their supported features.

C. Layer 3: Network Compiler

In this layer, the network applications are written using SDN-specific programming languages, & the associated compilers are used to translate the application code to the correct API supported by the network controller or SDN operating system in use. While using this layer is not common (as most applications are written directly against the controller APIs), we suggest using this layer for the following reasons:

1) Application portability: Due to the variety of network controllers & associated APIs, application portability is not possible in current SDN networks. With a network programming language, using the correct network compiler would be sufficient to guarantee application portability between different controllers. Of course, this implies that a compiler should exist for the target network operating system, but this is a one-time job & would be done by the compiler authors for most of the widely used network operating systems.

2) High level of network abstraction: Using a high level programming language implies that the developer takes care of the programming logic while leaving low level operations to the programming language. The developer specifies high level policies, while the compiler analyzes these policies & generates the related Openflow rules to be installed on individual switches, without bothering the developer with this level of detail. In this way, network innovation becomes much easier.

3) Code reusability: While most controllers offer a direct programming interface, they don't offer conflict-free execution of composite code, due to the low level nature of the exposed programming APIs. As reported in Frenetic [17], the sequential execution of two simple programs over a NOX controller (one program forwards the packets from an interface to another interface, while the other program measures the web traffic entering the same interface) will lead to completely incorrect results, & a third program would have to be developed to combine the executions of the two programs. The result is expected, as both programs install overlapping rules while the switch executes only the higher priority matching rule. As a result, a programming language is needed to allow the reuse of developed applications & libraries without the need to adjust the code for correct execution.
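The composition problem can be illustrated with a toy example (our own sketch, not Frenetic's syntax): each module alone produces sensible rules, but simply concatenating the rule sets lets the higher-priority forwarding rule shadow the counting rule, so the combined behavior has to be computed as a cross-product of the two intents.

# Toy illustration of why naive rule concatenation breaks module composition.
FORWARD = [{"priority": 20, "match": {"in_port": 1}, "actions": ["output:2"]}]
COUNT_WEB = [{"priority": 10, "match": {"in_port": 1, "tcp_dst": 80},
              "actions": ["count"]}]

# Naive composition: the priority-20 rule matches all port-1 traffic first,
# so web packets are forwarded but never counted.
naive = FORWARD + COUNT_WEB

def compose_parallel(rules_a, rules_b):
    """Combine two rule sets so overlapping matches perform both action lists."""
    combined = []
    for a in rules_a:
        for b in rules_b:
            # Skip pairs whose matches contradict each other on a shared field.
            if any(a["match"].get(k) not in (None, v) for k, v in b["match"].items()):
                continue
            combined.append({"priority": a["priority"] + b["priority"],
                             "match": {**a["match"], **b["match"]},
                             "actions": a["actions"] + b["actions"]})
    return combined + rules_a + rules_b     # keep originals for non-overlapping traffic

for rule in compose_parallel(FORWARD, COUNT_WEB):
    print(rule)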
In the following, we survey some of the challenges related to this layer:

1) Programming Language Design: Many features are desirable in an SDN programming language. Frenetic [17] introduced a collection of high-level compositional operators for querying and transforming streams of network traffic; a run-time system handles all of the details related to installing and uninstalling the low-level rules. FlowLog [18] provided extra features, such as a stateful rule-based language for writing middlebox applications such as stateful firewalls & application load balancers.

SDN Controller | Architecture | Prog. Lang. | Supported openflow version | License
OpenDayLight | Distributed | Java | v1.0 & v1.3 | EPL v1.0
Hyperflow | Distributed | C++ | v1.0 | No License
ONOS | Distributed | Java | v1.0 | No License
Onix | Distributed | Python, C | v1.0 | Commercial
Ryu NOS | Centralized, Multi-Threaded | Python | v1.0, v1.2 & v1.3 | Apache 2.0
Floodlight | Centralized, Multi-Threaded | Java | v1.1 | Apache
Beacon | Centralized, Multi-Threaded | Java | v1.0 | GPLv2
NOX-MT | Centralized, Multi-Threaded | C++ | v1.0 | GPLv3
NOX | Centralized | C++ | v1.0 | GPLv3
POX | Centralized | Python | v1.0 | GPLv3

Table 2 A comparison between different SDN controllers (reproduced from [8] with changes)
2) Big Switch Abstraction & Rule Placement Problem: The rule placement problem is related to simplifying the design of SDN applications by abstracting the whole switching network as a single big switch. It's the role of the SDN compiler to efficiently distribute the rules over the switches along the path between the source & the destination, in order to decrease the total number of installed rules & to respect the rule capacity of the individual switches. This is essentially an optimization problem. One Big Switch [19] proposed a solution to this problem by dividing the network into paths and taking the union of all rules over each path. It then divides the total rule capacity between paths according to the number of path policies. Finally, it tries to find a feasible solution for each path according to the granted rule capacity.
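The capacity-splitting step can be sketched in a few lines; this is an illustrative simplification of the approach (the real algorithm in [19] also rewrites and places the per-path rules):

# Illustrative sketch: split a switch rule budget across paths in proportion to
# how many endpoint policies each path must implement.
def split_rule_capacity(total_capacity, policies_per_path):
    total_policies = sum(policies_per_path.values())
    return {path: max(1, (count * total_capacity) // total_policies)
            for path, count in policies_per_path.items()}

def feasible(path_rules_needed, granted):
    """A path placement is feasible if its rules fit in the granted capacity."""
    return {path: path_rules_needed[path] <= granted[path] for path in granted}

policies = {"A->B": 40, "A->C": 10, "B->C": 50}          # example policy counts
granted = split_rule_capacity(total_capacity=1000, policies_per_path=policies)
print(granted)                                            # e.g. {'A->B': 400, 'A->C': 100, 'B->C': 500}
print(feasible({"A->B": 350, "A->C": 120, "B->C": 480}, granted))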
D. Cross Layer Challenges

1) Testing, Prototyping & Simulation: Testing & simulation represent a basic stage in the software development process. Many tools have been proposed to simulate the behavior of Openflow switches and hence the forwarding plane. Mininet [20] is a system for rapidly prototyping large SDN networks on the limited resources of a single laptop. By using OS-level virtualization features, it allows the virtualization to scale to hundreds of SDN switching nodes. fs-sdn [21] is another tool that performs simulation using flowlets instead of packets (a flowlet refers to the data volume emitted by an interface in a certain time period, for example 100 ms). It can simulate large networks of switches with more accurate results compared to Mininet. Controller simulation is a much simpler step, as a controller is a pure piece of software & can run on a personal computer as long as it meets its minimum hardware & software requirements.

2) Fault Troubleshooting: Currently, network troubleshooting is done in a very primitive way. Very elementary tools (such as ping, traceroute & SNMP statistics) are used to discover network forwarding faults, making fault discovery a very hard problem for large networks, especially when the problem is due to a malfunctioning node or a configuration mistake. Many of the following proposals could be used to troubleshoot forwarding faults in SDN & legacy networks.

XTrace [22] suggested a solution to this problem by using a generic tracing functionality across multiple technologies (routing, encrypted tunnels, VPNs, NAT & middleboxes), inserting its own metadata in all network layers along the path. The main problem with this technique is the degraded performance of network nodes due to the processing & propagation of the metadata. Interestingly, the technique is quite similar to the software debuggers used with programming languages.

NetReplay [23] proposed another technique for discovering routers causing packet loss by observing TCP retransmits. A packet is tagged with the ID of the first router that processed it, while routers don't tag packets they have processed in the near past. As a result, a retransmitted packet will be tagged with the ID of the router that dropped it the first time. Concerns related to this technique are that it is limited to packet loss while failing to find problems related to out-of-order delivery, that it cannot find routing problems & finally that it cannot find abnormalities related to UDP traffic such as VoIP traffic (where there are no retransmits).

NetCheck [24] proposed an algorithm to detect network application issues (such as bad Skype performance) through the analysis of the application system calls from the relevant end hosts. By correlating the events from the end hosts, it can discover issues such as MTU size mismatch, IP changes due to passing through a NAT device & UDP packet truncation due to using default buffer size settings. A limitation of this algorithm is that it cannot detect faults not impacting application system calls. Also, the detection engine could be enhanced using machine learning by comparing application system call patterns with previously observed patterns of correct & incorrect behaviors.

3) SDN Software Debugging: In addition to the complexity of troubleshooting forwarding problems (which is common with traditional IP networks), new challenges arise in SDN due to the need to debug the network application software. Ndb [25] proposes a debugger for SDN applications, to debug network applications exactly as in the case of normal software. The debugger utility is able to record the network state changes & to trace the sequence of events leading to exceptions in order to find their root causes. The ndb debugger lets the user define a packet breakpoint (such as an unforwarded packet or a packet hitting a certain rule) & provides the user with the set of forwarding actions seen by the packet leading to the breakpoint. The authors showed how their debugger can be used to detect errors related to race conditions, controller logic & switch implementation. The current implementation is not able to find network performance abnormalities such as non-line-rate forwarding or traffic imbalance.

NICE [26] is an automated debugging tool for discovering SDN software bugs by investigating the state space of the entire system through model checking (i.e. exploring all transitions between all states & checking whether each new state satisfies the defined correctness properties). Examples of correctness properties are loop-free & black-hole-free forwarding. Due to the large state space, the state transition input (i.e. the network packets that trigger certain state transitions) is reduced using symbolic execution of the packet event handlers (symbolic execution is used to determine which packets are going to change the code execution flow & trigger different execution paths, or in other words trigger certain state transitions).
4) SDN Forwarding Validation: Anteater [27] focused on discovering forwarding bugs by modeling the network state (i.e. the installed forwarding rules) together with the network bugs under investigation (such as routing loops, forwarding inconsistency & network cuts) as a Boolean satisfiability problem. The tool uses an off-the-shelf SAT solver to solve the problem & report whether any of the bugs apply to the network under investigation. Also, the tool provides one of the feasible solutions to the satisfiability problem (i.e. the network state that triggers the forwarding problem) to help the administrator check the reason for the problem. The main problem with this technique is the long time needed to check the correctness properties of a certain network state.

Veriflow [28] proposed another technique for real-time forwarding bug discovery. Veriflow sits in the middle between the network controller & the Openflow switches to check all rule installation or modification APIs in real time. Veriflow sends the rule to the switch only if the rule doesn't trigger any violation of the forwarding correctness properties. Veriflow uses equivalence classes to classify packets according to their forwarding path. For each equivalence class, a forwarding graph is built showing the sequence of traversed nodes. Finally, all forwarding paths are checked against stored network bugs (such as connectivity issues, asymmetric routing or routing black holes). Whenever a rule is received, Veriflow checks the modified equivalence class of the matching packets & only repeats the previous algorithm if the equivalence class has not passed the validation process before. Due to this incremental nature, Veriflow can be used for real-time forwarding validation.
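The core check is a graph property per equivalence class. The sketch below (an illustration of the idea, not Veriflow's implementation) builds the forwarding path of one equivalence class and flags loops and black holes before a new rule would be pushed to the switches:

# Illustrative check of one equivalence class's forwarding graph.
def check_forwarding_graph(next_hop, sources, destination):
    """next_hop maps node -> node (or None for a drop) for one packet class."""
    problems = []
    for src in sources:
        seen, node = set(), src
        while node is not None and node != destination:
            if node in seen:
                problems.append(f"loop involving {node} starting from {src}")
                break
            seen.add(node)
            node = next_hop.get(node)            # None means no rule: black hole
        if node is None:
            problems.append(f"black hole on the path from {src}")
    return problems

# Example: s3's rule update would forward back to s1, creating a loop.
graph = {"s1": "s2", "s2": "s3", "s3": "s1"}
print(check_forwarding_graph(graph, sources=["s1"], destination="s4"))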
IV. SDN APPLICATIONS

In this section, we survey some of the applications developed in five major SDN application categories: Hybrid Network applications, Traffic Engineering applications, Data Center networking applications, Mobility and Wireless applications & finally Network Security applications.

A. Hybrid Network Applications

While SDN networking offers a lot of benefits to network operators in terms of flexibility & programmability, migration to SDN is still proceeding at a slow pace. It's observed that only huge service providers such as Google [29] have migrated their globally deployed networks to be SDN based. The main reason for this slow deployment rate is the high CapEx needed to swap traditional network equipment for Openflow enabled switches. Also, the lack of experience with the new technology & the need to train network administrators in a new set of skills (such as network programming) represent another challenge. One of the most prominent solutions to this problem is the incremental migration strategy. While it's very logical to swap out-of-support nodes with SDN nodes, such a hybrid network comes with its own set of new challenges.

In [30] the authors suggested the classification of networks into two types: SDN networks (SDN) & traditional or COTS-based networks (CN), where network components are sold as commercial off-the-shelf (COTS) products. Hybrid SDN was suggested to have four forms, according to the way both types of networks are integrated. Topology-based integration was proposed, where both types of networks exist in different geographical areas. Service-based integration was proposed, where both types of networks exist together in the same network devices & each network handles a different type of service. Class-based integration was proposed, where both types of networks implement all the services but for a certain traffic class, such as using SDN for the routing of a delay sensitive application & relying on CN for the routing of other applications. Finally, in Integrated SDN the network forwarding devices keep using the current routing protocols & the SDN controller uses current legacy protocols to control the forwarding behavior of the traditional networking devices. Proposed research points are related to the identification & implementation of services through the cooperation between these paradigms & to tools that address the added complexity of such hybrid paradigms. Figure 4 summarizes the four integration strategies.

Figure 4 Types of hybrid SDN networks [30]

Fibbing [31] suggested the use of an augmented topology to control the routing process in IGP routing protocols such as OSPF & ISIS, by inserting fake entries into the topology database of the routing protocol. This technique was shown to achieve results that are not possible using pure IGP protocols, such as traffic distribution over unequal cost paths (an IGP always chooses the lowest cost path). Figure 5 shows the concept of topology augmentation. The algorithm passes through three steps: first, Path Compilation into a DAG (Directed Acyclic Graph); then Topology Augmentation to find the fake topology that satisfies the path requirements; & finally Topology Optimization to optimize the augmented topology in terms of the number of augmented routers. Figure 6 shows the main stages of the proposed algorithm. More work is suggested on the cooperation between Openflow & legacy devices in a hybrid SDN deployment.

Figure 5 The concept of augmented topology [31]
Figure 6 Fibbing algorithm [31]

OpenRouteFlow [32] proposed performing a software upgrade of the legacy routers' operating system in order to support Openflow while keeping the same hardware. While this seems a good suggestion for migration scenarios, such an assumption is not guaranteed to work properly without performance degradation. The paper also proposed a practical use case of how to control the network behavior using this hybrid control model.

Exodus [33] is a tool for the automatic migration of enterprise networks to an SDN architecture. The tool takes the configuration files of the legacy nodes as its input & produces the controller application of the equivalent SDN network, based on the FlowLog network programming language (briefly explained in Section III). The tool supports a group of non-trivial configuration items such as static & dynamic routing, ACLs, NAT & VLANs. Proposed future work is related to optimizing the auto-generated application by adopting the One Big Switch abstraction (briefly explained in Section III). Also, the concept of automatic migration could be generalized to include service provider network protocols such as MPLS & BGP.

ClosedFlow [34] proposed to provide Openflow-like functionalities while using legacy switches & routers. The authors proposed to use a central controller that communicates with legacy switches & routers through the vendor-defined CLI shell. They successfully implemented four types of Openflow capabilities over Cisco switches: connecting the controller to the switches; automatic discovery of the network topology; allowing the controller to add or modify the flow table entries of the switches (using Cisco route-map & access-list data structures to specify the flow matching conditions & the associated action); & finally allowing the send-to-controller action whenever a packet is received without matching any of the specified flow table entries. The work could be extended by allowing other Openflow functionalities, such as statistics reporting using the well-known SNMP protocol. Also, there should be a compiler that compiles Openflow APIs to vendor-specific CLI commands. Finally, there should be some benchmarking through the use of a real Openflow application & comparing the performance of ClosedFlow with native Openflow switches.
Panopticon [35] proposed a framework for deploying & operating a network consisting of traditional & SDN switches while benefiting from the simplified management & flexibility of SDN. The framework proposes a logical SDN network over a partially upgraded traditional network & hence extends SDN benefits to the entire network. Based on the performance evaluation, the authors were able to operate the entire network as a single SDN network while upgrading only 30% to 40% of the network's traditional distribution switches. Panopticon uses a mechanism called the Solitary Confinement Tree (SCT), which uses VLANs over legacy switches to ensure that traffic passes through at least one SDN switch, where the end-to-end traffic policies are deployed. The network is divided into cell blocks to allow VLAN reuse in different cell blocks. Cell blocks are divided by SDN switches (called the Frontier). It also uses an Inter-Switch Fabric (ISF), which is a virtual tunnel logically connecting two SDN switches. Using this technique, each access port of a legacy switch is controlled by a certain SDN switch & is seen as being connected to that SDN switch in the network's logical view. Figure 7 shows the proposed architecture. Panopticon adopts the "Big Switch Abstraction" [19] concept explained previously in Section III. Network administrators are supposed to specify their end-to-end traffic policies (such as traffic paths, load balancing or dropping a certain type of traffic), & Panopticon will deploy the needed SDN application with the associated VLAN configuration over the legacy switches. Future work is related to the development of network applications utilizing the proposed framework. Also, due to the increased complexity of the network topology, there's a need for a troubleshooting application on top of the proposed framework.

Figure 7 Panopticon proposed logical SDN network [35]

LegacyFlow [36] proposed another topology, where Openflow enabled switches are used at the edge of the network while traditional switches are used at the core of the network. Edge Openflow switches are connected to each other through virtual tunnels & rely on LegacyFlow to make the necessary tunnel configurations over the traditional core switches. Tunnels could be as simple as a VLAN or use more advanced technologies such as MPLS L2 VPN. The main advantage over Panopticon is the simplified network architecture, with the restriction of completely swapping all edge devices for Openflow switches, leading to less flexibility in the migration plan.

RFCP [37], or the RouteFlow Control Platform, is a framework for the inter-domain routing control of a service provider hybrid SDN network. A control node referred to as the RFCP acts as a gateway between traditional iBGP route reflectors & the Openflow controllers controlling the Openflow forwarding devices. Using this hybrid model, the authors were able to demonstrate new use cases that are hard to achieve using only iBGP, such as calculating the best BGP paths from the point of view of each ingress point, installing redundant paths using the Openflow multiple forwarding tables for fast recovery after a path failure & finally providing advanced distributed security services by using the flexibility of Openflow switches. Figure 8 shows the RFCP architecture.

Figure 8 RouteFlow Control Platform Architecture [37]
B. Traffic Engineering Applications

Traffic engineering is related to the dynamic monitoring & control of the network in order to achieve high level design objectives such as satisfying differentiated service delay requirements, fast failure recovery & maximizing the traffic that can be served by the network. In business terminology, traffic engineering (TE) is related to service continuity & cost optimization. TE has existed since the early days of ATM networks, passing through routing protocol TE (such as OSPF load balancing between equal cost paths), & currently MPLS TE is the standard TE solution in service provider networks. MPLS based TE suffers from many limitations, such as the complexity of configuration (especially the need to define backup paths or LSPs manually) and scalability & robustness limitations [38]. Due to its inherent flexibility & programmability, SDN is regarded as the future TE solution. Figure 9 shows the evolution of traffic engineering techniques.

Figure 9 Evolution of Traffic Engineering [38]

Google [29] proposed the design of an advanced traffic engineering mechanism using SDN for the routing between the company's data centers distributed around the globe. The proposed SDN controller is a modified version of the Onix controller. Through the proposed design, the company was able to drive links to near 100% utilization for extended periods (during WAN failures) while distributing the traffic among several paths & taking care of the bandwidth allocation to meet different application requirements. Quagga was used as an open source BGP/ISIS routing application running over the controller to communicate with existing routing protocols and support a hybrid network deployment. Future proposed work is related to solving bottlenecks in the communication between the controller & the switches.

In [39] the authors suggested an algorithm for choosing which legacy routers should be migrated first to the SDN paradigm in order to maximize the benefits from a traffic engineering point of view. The number of newly introduced alternative paths was used as the maximization target, compared to the least cost paths used by legacy routers running link-state protocols such as OSPF & ISIS. Simulation results showed that a proper choice of the first router to upgrade could yield capacity savings of up to 16%, compared to 9% when choosing a random router for the upgrade.

The authors of [40] proposed a technique that uses Openflow switches as backup paths for the recovery of any single link failure in a legacy network, while taking care not to congest the new paths with the extra traffic, through dynamic traffic distribution between the backup paths. In the proposed design, as the SDN controller is not aware of the status of the legacy network, a virtual tunnel is used between every router & its backup SDN switch in order to identify the failing link. The work could be enhanced by directly integrating the legacy network with the controller to have a holistic view over the network. Also, a mechanism should be proposed to deal with multiple link failures & the associated capacity problem while differentiating between different services.

In [41] the authors proposed a system for the incremental deployment of a hybrid SDN network & showed how to use such a hybrid system to satisfy a variety of traffic engineering goals such as load balancing & fast failure recovery. The system proposes a migration path in order to maximize the traffic engineering benefits of the initial upgrade steps. Simulation results showed a reduction of the maximum link utilization to an average of 32% (compared to least cost routing) while upgrading only 20% of the network. The authors went a step further, showing that a hybrid topology has its own benefits compared to a pure SDN deployment: the previous results required only 41% of the flow table capacity required by a pure SDN deployment. Future work is related to studying the scalability issues of the proposed system, using historical traffic demands to decide the SDN migration plan & finally the effect of frequent rule updates on the network consistency & the related traffic engineering performance.
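As an illustration of the migration-ordering idea behind [39] and [41] (our simplification, not their algorithms), the sketch below greedily upgrades the router that unlocks the most alternative paths beyond what the link-state shortest paths already provide:

# Greedy illustration: pick the next router to upgrade by counting how many new
# source->destination alternative paths its SDN capability would unlock.
def plan_upgrades(routers, alt_paths, budget):
    """alt_paths maps router -> set of (src, dst) pairs gaining an alternative path."""
    plan = []
    remaining = {r: set(pairs) for r, pairs in alt_paths.items()}
    for _ in range(min(budget, len(routers))):
        candidates = [r for r in routers if r not in plan]
        best = max(candidates, key=lambda r: len(remaining.get(r, set())))
        plan.append(best)
        unlocked = remaining.pop(best, set())
        # Pairs already covered by an upgraded router bring no extra benefit.
        remaining = {r: pairs - unlocked for r, pairs in remaining.items()}
    return plan

alt = {"r1": {("a", "b"), ("a", "c")}, "r2": {("a", "b")}, "r3": {("b", "c")}}
print(plan_upgrades(["r1", "r2", "r3"], alt, budget=2))    # ['r1', 'r3']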
C. Data Center Applications

In [42] the authors proposed the design of a large-scale, low-cost, multi-tenant data center. A multi-tenant data center is one that's shared between different users, while every user is unaware of the other users; network slicing is commonly used to indicate the same meaning. The designed network uses a combination of Openflow switches & low cost Ethernet switches in order to decrease the network cost. Dual SDN controllers are used to achieve the needed network resilience. Up to 64K tenants can share the same data center, with each tenant able to individually assign IP addresses & VLAN IDs to his own virtual machines. The design also supports virtual machine live migration, fast failure recovery, network load balancing & multi-path TCP to provide multiple sub-flows for increased throughput of the TCP connections between virtual machines.

In [43] the authors tackled the problem of decreasing the data center network energy consumption in low traffic periods while sustaining the same network performance. They developed an energy-aware routing application based on the OpenNaaS network manager, which can run over several SDN controllers such as the OpenDayLight, Ryu & Floodlight controllers.

In [44] the authors suggested using the SDN controller to update the look-up tables of an optical packet switching (OPS) node used for high speed data center switching. They were able to create a virtual data center network which benefits from the wire speed switching of the OPS node together with the flexibility of SDN technology. Openflow was extended to support the OPS switching paradigm. The setup was used to test many SDN features such as QoS guarantees for high priority data flows & successful network slicing for intra data center routing.

In [45] the authors tackled the problem of uneven traffic distribution over the servers of a data center. They suggested using an SDN controlled load balancer between the data center servers & the switches connecting the servers to the external network. SDN is used to collect statistics from the servers & switches to learn the traffic distribution & server loads, and accordingly to perform load distribution between the servers while guaranteeing QoS for high priority data flows. Even distribution of the traffic has the effect of decreasing the average data latency.

Figure 10 A multi-tenant data center design as proposed in [42]

D. Mobility & Wireless Applications

In [46] the authors investigated the function placement problem of the gateways (LTE SGW & PGW) of a mobile operator offering LTE services. As explained by the authors, the function placement problem is the choice between deploying the LTE gateways as a fully virtualized service, where both the control & data planes are processed in the same node in a datacenter, or as an SDN solution, where the data plane is kept at the transport network element, which can be programmed through an extended version of Openflow supporting LTE data plane functionalities such as GTP tunneling & traffic charging functions. The virtualized solution has the advantage of simplicity, at the cost of introducing additional traffic delay due to processing the traffic on the data center commodity servers. The SDN solution has the advantage of minimal delay, at the cost of increased complexity & additional network load for the Openflow communication between the transport network element & the virtualized controller. The authors developed a model that minimizes the additional network traffic (resulting from Openflow signaling) for a certain data plane delay budget. Figure 11 shows the different design alternatives of the GW function placement problem, where GW-c indicates the control (or signaling) part of the GW function, GW-u indicates the user data traffic handled by the GW function & NE is the external network entity routing the signaling & the user traffic to the GW.

In KLEIN [47] the authors investigated the problem of redesigning a mobile operator core network with the target of more elasticity with minimal disruption to the existing network. The authors proposed a resource management algorithm that's able to manage thousands of sites while working within the operational constraints of the existing 3GPP standards. Combined with a distributed mobile core network, the algorithm showed enhanced network performance, better load distribution & elastic fault recovery. Figure 12 shows the architecture of the proposed resource management framework.

In [48] the authors showed how to use SDN concepts to offload the core of a mobile operator network. Their framework suggests routing peer-to-peer traffic directly between access sites without passing through the core network, which improves the user experience by decreasing latency & decreases the load on the mobile core network. Suggested future work is related to supporting other functionalities over the SDN switches connecting the different access sites, such as charging functions & deep packet inspection, in order to be able to deploy the framework in a live network.

In [49] the authors suggested the use of SDN for WSNs (wireless sensor networks). They explained several SDN-based WSN use cases such as centralized networking, optimal code deployment & predicting WSN performance. Future work is related to building prototypes of the suggested use cases & to studying the trade-offs between using an SDN architecture & the energy efficiency of the whole network.
Figure 11 LTE GW function placement alternatives [46]

In [50] the authors suggested the use of SDN in vehicular ad-hoc networks (VANETs). They explained several use cases such as safety applications, surveillance applications & wireless network virtualization. They also compared the performance of SDN based routing to the performance of traditional VANET/MANET routing protocols, showing that SDN outperforms the other ad-hoc routing protocols mainly due to its knowledge of the whole network state & hence its fast failure recovery. Future work is related to exploring more SDN applications in VANETs & studying more complicated architectures, such as allowing the controller-to-sensor communication (the sensor could be either the vehicle or the Road Side Unit) in a peer-to-peer mode. Figure 13 shows the proposed architecture.

Figure 12 The mobile core network orchestration framework proposed in [47]
Figure 13 SDN-based VANET architecture as proposed in [50]

E. Security Applications

NICE [51] is a framework for cloud security; it prevents vulnerable virtual machines from being compromised in the cloud &, of course, secures the cloud against cloud zombies (which would be used in later stages to perform DDoS attacks against the cloud's legitimate users). NICE uses SDN to mirror the traffic of a suspicious host for deep packet inspection without interrupting the traffic of other users, as happens in the case of a proxy-based IPS/DPI system. Also, SDN is used to collect flow-based countermeasures (statistics) as requested by the framework for identifying compromised virtual machines. Future work is related to the use of host-based IDS to improve the detection accuracy. Also, decentralized network control & attack detection is proposed as a future improvement to the system. Figure 14 shows the architecture of the proposed SDN-based IDS system.

Figure 14 Architecture of the SDN-based network IDS framework as proposed in [51]

FlowGuard [52] proposed a software firewall as an application running over an SDN controller. While the idea seems direct & simple (as Openflow rules support dropping the packets of a certain traffic flow), many challenges had to be solved to achieve it. For example, due to the frequent Openflow rule updates, a dynamic policy violation checker is needed to make sure that the security policy is applied at all times. Another challenge is related to the Openflow "set" action, which allows an attacker to change the headers of a packet and consequently bypass the applied security policy; hence a dependency checking tool was developed to check the flow paths in the entire network & the related dependencies in the flow tables. Future work is related to the development of a stateful firewall application, which is widely used in business environments (i.e. a firewall that relates a packet to a network connection, such as a TCP connection, instead of dealing with each packet individually); this is a challenging topic due to the stateless nature of Openflow rules. Also, the work could be enhanced by the development of a policy visualization tool.
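A minimal illustration of the kind of check such a firewall application performs (our simplification, not FlowGuard's algorithm): before a new flow rule is installed, verify that the set of packets it forwards does not intersect the firewall's deny policy.

# Illustrative firewall-violation check: a rule violates a deny policy when their
# match spaces overlap and the rule would still forward the traffic.
def overlaps(match_a, match_b):
    """Two matches overlap unless they pin the same field to different values."""
    shared = set(match_a) & set(match_b)
    return all(match_a[f] == match_b[f] for f in shared)

def violates(new_rule, deny_policies):
    if not any(a[0] == "output" for a in new_rule["actions"]):
        return False                       # a dropping rule cannot leak traffic
    return any(overlaps(new_rule["match"], policy) for policy in deny_policies)

deny = [{"ip_dst": "10.0.0.5", "tcp_dst": 22}]            # policy: no SSH to this host
candidate = {"match": {"ip_dst": "10.0.0.5"},              # forwards *all* traffic there
             "actions": [("output", 3)]}
print(violates(candidate, deny))                           # True -> reject or rewrite the rule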
sFlow [53] proposed a design change in order to use SDN for anomaly detection. While flow statistics are a very rich source of information for the detection of network anomalies & attacks, the authors showed that using Openflow for flow statistics collection overloads the control plane & introduces scalability limitations. They proposed separating the data collection process from the control plane by using the sFlow mechanism, while relying on Openflow only to take the mitigation action over the detected anomalies. sFlow uses packet sampling for statistics reporting, instead of reporting the real matching counters as in the case of Openflow. sFlow requires deploying the packet sampling mechanism over the Openflow switches, which is not a standard feature for the time being, & hence there's an opportunity for extra work to achieve the same target while using only the available standard Openflow features.
In [54] a Provenance Verification Point component was proposed to use the network itself as a point of observation in a forensics system, relying on SDN reporting and traffic-steering capabilities. The idea is to transform every network link into a reporting tool by programming a distributed set of OpenFlow switches. When an operation cannot be performed within the switch itself (such as deep packet inspection), the traffic can be steered to a middlebox or to the controller itself for extra processing.
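The framework-free Python sketch below illustrates the steering decision described in [54] (it is not the paper's implementation): flows whose required operation the switch supports are forwarded normally, while flows needing operations such as deep packet inspection are redirected towards a middlebox port. The port numbers, capability set, and helper function are assumptions made for the example.

# Toy steering decision: handle a flow in the switch when possible, otherwise
# redirect it to a middlebox (e.g., for deep packet inspection).
SWITCH_CAPABILITIES = {"forward", "drop", "count"}
MIDDLEBOX_PORT = 7    # port leading to the DPI middlebox (assumed)
NORMAL_PORT = 2       # regular next-hop port (assumed)

def steering_action(flow, required_op):
    """Return the output port a rule for this flow should use."""
    if required_op in SWITCH_CAPABILITIES:
        return {"flow": flow, "out_port": NORMAL_PORT, "op": required_op}
    # Operation not supported in the data plane: steer to the middlebox,
    # which also serves as a reporting/observation point for forensics.
    return {"flow": flow, "out_port": MIDDLEBOX_PORT, "op": "redirect"}

print(steering_action(("10.0.0.1", "10.0.0.9", 80), "count"))
print(steering_action(("10.0.0.1", "10.0.0.9", 443), "deep_packet_inspection"))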
V. CONCLUSION
SDN is an emerging networking paradigm that allows the behavior of the network to be controlled through a centralized programming capability. SDN offers simplified and automated network management that meets the demands of increasingly complex networks and of several application domains. We explained the architecture of the SDN networking paradigm together with its open research challenges and surveyed some of the work done on each challenge. While SDN is only a networking paradigm, its benefits are realized through the applications built on top of it. We surveyed several applications utilizing the benefits of SDN in five major application domains: hybrid network control, traffic engineering, data center networking, wireless networks, and network security, indicating the recommended future work for each domain. The authors hope that this work will be useful for researchers who want to start research in the interesting area of SDN, as well as for professional engineers who wish to extend their knowledge of the benefits and terminology of future networking technologies.

REFERENCES
[1] McKeown, Nick, et al. "OpenFlow: enabling innovation in campus networks." ACM SIGCOMM Computer Communication Review 38.2 (2008): 69-74.
[2] Feamster, Nick, Jennifer Rexford, and Ellen Zegura. "The road to SDN: an intellectual history of programmable networks." ACM SIGCOMM Computer Communication Review 44.2 (2014): 87-98.
[3] Choi, Mi-Jung, et al. "XML-based configuration management for IP network devices." Communications Magazine, IEEE 42.7 (2004): 84-91.
[4] Nunes, Bruno AA, et al. "A survey of software-defined networking: Past, present, and future of programmable networks." Communications Surveys & Tutorials, IEEE 16.3 (2014): 1617-1634.
[5] Doria, Avri, et al. "Forwarding and control element separation (ForCES) protocol specification." Internet Requests for Comments, RFC Editor, RFC 5810 (2010).
[6] Farrel, Adrian, Jean-Philippe Vasseur, and Jerry Ash. "A path computation element (PCE)-based architecture." RFC 4655, August 2006.
[7] Casado, Martin, et al. "Ethane: taking control of the enterprise." ACM SIGCOMM Computer Communication Review. Vol. 37. No. 4. ACM, 2007.
[8] Kreutz, Diego, et al. "Software-defined networking: A comprehensive survey." Proceedings of the IEEE 103.1 (2015): 14-76.
[9] Pfaff, Ben, et al. "The design and implementation of Open vSwitch." 12th USENIX Symposium on Networked Systems Design and Implementation (NSDI 15). 2015.
[10] Pfaff, B., and B. Davie. "The Open vSwitch Database Management Protocol." RFC 7047. 2013.
[11] Sherwood, Rob, et al. "Flowvisor: A network virtualization layer." OpenFlow Switch Consortium, Tech. Rep. (2009): 1-13.
[12] Turull, Daniel, Markus Hidell, and Peter Sjödin. "libNetVirt: the network virtualization library." Communications (ICC), 2012 IEEE International Conference on. IEEE, 2012.
[13] Heller, Brandon, Rob Sherwood, and Nick McKeown. "The controller placement problem." Proceedings of the first workshop on Hot topics in software defined networks. ACM, 2012.
[14] Koponen, Teemu, et al. "Onix: A Distributed Control Platform for Large-scale Production Networks." OSDI. Vol. 10. 2010.
[15] Tootoonchian, Amin, and Yashar Ganjali. "HyperFlow: A distributed control plane for OpenFlow." Proceedings of the 2010 internet network management conference on Research on enterprise networking. 2010.
[16] Yu, Minlan, Andreas Wundsam, and Muruganantham Raju. "NOSIX: A lightweight portability layer for the SDN OS." ACM SIGCOMM Computer Communication Review 44.2 (2014): 28-35.
[17] Foster, Nate, et al. "Frenetic: A network programming language." ACM SIGPLAN Notices. Vol. 46. No. 9. ACM, 2011.
[18] Nelson, Tim, et al. "Tierless programming and reasoning for software-defined networks." 11th USENIX Symposium on Networked Systems Design and Implementation (NSDI 14). 2014.
[19] Kang, Nanxi, et al. "Optimizing the one big switch abstraction in software-defined networks." Proceedings of the ninth ACM conference on Emerging networking experiments and technologies. ACM, 2013.
[20] Lantz, Bob, Brandon Heller, and Nick McKeown. "A network in a laptop: rapid prototyping for software-defined networks." Proceedings of the 9th ACM SIGCOMM Workshop on Hot Topics in Networks. ACM, 2010.
[21] Gupta, Mukta, Joel Sommers, and Paul Barford. "Fast, accurate simulation for SDN prototyping." Proceedings of the second ACM SIGCOMM workshop on Hot topics in software defined networking. ACM, 2013.
[22] Fonseca, Rodrigo, et al. "X-trace: A pervasive network tracing framework." Proceedings of the 4th USENIX conference on Networked systems design & implementation. USENIX Association, 2007.
[23] Anand, Ashok, and Aditya Akella. "Netreplay: a new network primitive." ACM SIGMETRICS Performance Evaluation Review 37.3 (2010): 14-19.
[24] Zhuang, Yanyan, et al. "Netcheck: Network diagnoses from blackbox traces." 11th USENIX Symposium on Networked Systems Design and Implementation (NSDI 14). 2014.
[25] Handigol, Nikhil, et al. "Where is the debugger for my software-defined network?" Proceedings of the first workshop on Hot topics in software defined networks. ACM, 2012.
[26] Canini, Marco, et al. "A NICE way to test OpenFlow applications." Presented as part of the 9th USENIX Symposium on Networked Systems Design and Implementation (NSDI 12). 2012.
[27] Mai, Haohui, et al. "Debugging the data plane with Anteater." ACM SIGCOMM Computer Communication Review 41.4 (2011): 290-301.
[28] Khurshid, Ahmed, et al. "VeriFlow: Verifying network-wide invariants in real time." Presented as part of the 10th USENIX Symposium on Networked Systems Design and Implementation (NSDI 13). 2013.
[29] Jain, Sushant, et al. "B4: Experience with a globally-deployed software defined WAN." ACM SIGCOMM Computer Communication Review. Vol. 43. No. 4. ACM, 2013.
[30] Vissicchio, Stefano, Laurent Vanbever, and Olivier Bonaventure. "Opportunities and research challenges of hybrid software defined networks." ACM SIGCOMM Computer Communication Review 44.2 (2014): 70-75.
[31] Vissicchio, Stefano, et al. "Central control over distributed routing." Proceedings of the 2015 ACM Conference on Special Interest Group on Data Communication. ACM, 2015.
[32] Feng, Tao, and Jun Bi. "OpenRouteFlow: Enable Legacy Router as a Software-Defined Routing Service for Hybrid SDN." Computer Communication and Networks (ICCCN), 2015 24th International Conference on. IEEE, 2015.
[33] Nelson, Tim, et al. "Exodus: toward automatic migration of enterprise network configurations to SDNs." Proceedings of the 1st ACM SIGCOMM Symposium on Software Defined Networking Research. ACM, 2015.
[34] Hand, Ryan, and Eric Keller. "ClosedFlow: OpenFlow-like control over proprietary devices." Proceedings of the third workshop on Hot topics in software defined networking. ACM, 2014.
[35] Levin, Dan, et al. "Panopticon: Reaping the Benefits of Incremental SDN Deployment in Enterprise Networks." 2014 USENIX Annual Technical Conference (USENIX ATC 14). 2014.
[36] Farias, Fernando, et al. "LegacyFlow: Bringing OpenFlow to legacy network environments." (2011).
[37] Rothenberg, Christian Esteve, et al. "Revisiting routing control platforms with the eyes and muscles of software-defined networking." Proceedings of the first workshop on Hot topics in software defined networks. ACM, 2012.
[38] Akyildiz, Ian F., et al. "A roadmap for traffic engineering in SDN-OpenFlow networks." Computer Networks 71 (2014): 1-30.
[39] Caria, Marcel, Admela Jukan, and Marco Hoffmann. "A performance study of network migration to SDN-enabled traffic engineering." Global Communications Conference (GLOBECOM), 2013 IEEE. IEEE, 2013.
[40] Chu, Cing-Yu, et al. "Congestion-aware single link failure recovery in hybrid SDN networks." Computer Communications (INFOCOM), 2015 IEEE Conference on. IEEE, 2015.
[41] Hong, David Ke, et al. "Incremental Deployment of SDN in Hybrid Enterprise and ISP Networks." Proceedings of the 2nd ACM SIGCOMM Symposium on Software Defined Networking Research (SOSR 16). ACM, 2016.
[42] Lee, Steven SW, et al. "Design of SDN based large multi-tenant data center networks." Cloud Networking (CloudNet), 2015 IEEE 4th International Conference on. IEEE, 2015.
[43] Zhu, Hao, et al. "Joint flow routing-scheduling for energy efficient software defined data center networks: A prototype of energy-aware network management platform." Journal of Network and Computer Applications 63 (2016): 110-124.
[44] Miao, Wang, et al. "SDN-enabled OPS with QoS guarantee for reconfigurable virtual data center networks." Journal of Optical Communications and Networking 7.7 (2015): 634-643.
[45] Tu, Renlong, et al. "Design of a load-balancing middlebox based on SDN for data centers." Computer Communications Workshops (INFOCOM WKSHPS), 2015 IEEE Conference on. IEEE, 2015.
[46] Basta, Arsany, et al. "Applying NFV and SDN to LTE mobile core gateways, the functions placement problem." Proceedings of the 4th workshop on All things cellular: operations, applications, & challenges. ACM, 2014.
[47] Qazi, Zafar Ayyub, et al. "KLEIN: A Minimally Disruptive Design for an Elastic Cellular Core." Proceedings of the 2nd ACM SIGCOMM Symposium on Software Defined Networking Research (SOSR 16). ACM, 2016.
[48] Saunders, Ryan, et al. "P2P Offloading in Mobile Networks using SDN." Proceedings of the 2nd ACM SIGCOMM Symposium on Software Defined Networking Research (SOSR 16). ACM, 2016.
[49] Jacobsson, Martin, and Charalampos Orfanidis. "Using software-defined networking principles for wireless sensor networks." 11th Swedish National Computer Networking Workshop (SNCNW), May 28-29, 2015, Karlstad, Sweden. 2015.
[50] Ku, Ian, et al. "Towards software-defined VANET: Architecture and services." Ad Hoc Networking Workshop (MED-HOC-NET), 2014 13th Annual Mediterranean. IEEE, 2014.
[51] Chung, Chun-Jen, et al. "NICE: Network intrusion detection and countermeasure selection in virtual network systems." Dependable and Secure Computing, IEEE Transactions on 10.4 (2013): 198-211.
[52] Hu, Hongxin, et al. "FLOWGUARD: building robust firewalls for software-defined networks." Proceedings of the third workshop on Hot topics in software defined networking. ACM, 2014.
[53] Giotis, Kostas, et al. "Combining OpenFlow and sFlow for an effective and scalable anomaly detection and mitigation mechanism on SDN environments." Computer Networks 62 (2014): 122-136.
[54] Bates, Adam, et al. "Let SDN be your eyes: Secure forensics in data center networks." Proceedings of the NDSS workshop on security of emerging network technologies (SENT '14). 2014.