Software-Defined Networking
Nick McKeown
nickm@stanford.edu
Infocom, April 2009

Part 1: Inside the box
  Switch and Router Design
Part 2: Outside the box
  Software-defined networking

Part 1: Inside the box
How big should buffers be? [1/√N]
How to build really fast buffers? [Nemo]
Which schedulers give 100% throughput? [MWM]
Which schedulers are practical in hardware? [iSLIP]
How to schedule multicast? [ESLIP]
How to run the scheduler slower? [PPS]
How to avoid scheduling altogether? [VLB]
How to emulate an output-queued switch? [MUCFA]
How to look up quickly in hardware? [24-8]
Heuristic classification algorithms [HiCuts]

Three Open Topics
1. There's something special about "2x speedup"
2. Deterministic (instead of probabilistic) switch design
3. Making routers simpler
Topic 1: There's something special about "2x speedup"
With a speedup of 2, a maximal match crossbar scheduler gives 100% throughput [Dai & Prabhakar]
Makes a Clos network strictly non-blocking [Clos]
Allows a CIOQ switch to precisely emulate an output-queued switch [Chuang]
Allows a parallel stack of small switches to precisely emulate one big switch [Iyer]
Valiant Load-Balanced switch (or network) can give 100% throughput [Valiant]
Related observations
"2x speedup" is key for both deterministic & probabilistic systems
A maximum size bipartite match is at most twice the size of a maximal match
A switch has two simultaneous constraints: input and output
Local “selfish” routing decisions cost twice as much as “global” ones [Roughgarden]
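The maximal-vs-maximum bound is easy to verify concretely. The following sketch (mine, not from the talk; the graph and function names are illustrative) contrasts a greedy maximal matching with a brute-force maximum matching and checks the factor of 2:

```python
# Sketch (not from the talk): why the factor of 2 appears. Every edge of a
# maximum matching M* shares an endpoint with some edge of a maximal matching
# M (otherwise M could be extended), and each edge of M has just two
# endpoints, so |M*| <= 2|M|.
from itertools import combinations

def greedy_maximal_matching(edges):
    """One pass: keep any edge whose endpoints are both still free."""
    chosen, used = [], set()
    for u, v in edges:
        if u not in used and v not in used:
            chosen.append((u, v))
            used.update((u, v))
    return chosen

def maximum_matching(edges):
    """Brute force: the largest set of pairwise vertex-disjoint edges."""
    for k in range(len(edges), 0, -1):
        for subset in combinations(edges, k):
            endpoints = [x for e in subset for x in e]
            if len(endpoints) == len(set(endpoints)):
                return list(subset)
    return []

# Inputs {a, b}, outputs {1, 2}: greedy grabs (a, 1) and is stuck, while the
# maximum matching pairs (a, 2) with (b, 1) -- so the 2x bound is tight.
edges = [("a", "1"), ("a", "2"), ("b", "1")]
assert len(maximum_matching(edges)) <= 2 * len(greedy_maximal_matching(edges))
```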
Topic 2: Deterministic (instead of probabilistic) switch design
We need more analytical tools for "mimicking"
Generalized pigeon-hole principles

Topic 3: Making routers simpler
Routers today: 5,389 RFCs, a barrier to entry, bloated, power hungry. Many complex functions are baked into the infrastructure:
OSPF, BGP, multicast, differentiated services, traffic engineering, NAT, firewalls, MPLS, redundant layers, …
We have lost our way.

Process of innovation (idea, standardize, deploy): almost no technology transfer from academia.
Personal regret: I wish I had said it sooner and louder.
Our "dumb, minimal" datapath turned into a bloated 1960s mainframe!

The essence of my talk (1 of 2)
Hardware substrate: the PC industry found a simple, common hardware substrate (the x86 instruction set).
Software-definition: innovation exploded on top (applications) and in the infrastructure itself (operating systems, virtualization).
Open source: 100,000s of developers blew apart the standards process and accelerated innovation.
The essence of my talk (2 of 2)
It is up to us to make it happen. Until we (or someone else) do, networking remains ossified.
Let's define the substrate:
Hardware Substrate + Open Source Culture -> Software-Defined Network -> Innovation!

Part 1: Inside the box
Part 2: Outside the box
  The need for a substrate
  The inevitability of software-defined networking

[Diagram: the computer stack. Applications run on an OS; the OS abstracts the hardware substrate.]
Innovation in applications
[Diagram: x86 (computer), Windows (OS), applications on top.]

Simple, common, stable hardware substrate below
+ Programmability
+ Competition
= Innovation in OS and applications
[Diagram: x86 (computer); Windows, Linux, or Mac OS; applications on top.]

Simple, common, stable hardware substrate below
+ Programmability + Strong isolation model
+ Competition above
= Innovation in infrastructure

A simple, stable, common substrate:
Allows applications to flourish
  Internet: stable IPv4 led to the web
Allows the infrastructure on top to be defined in software
  Internet: routing protocols, management, …
Allows rapid innovation of the infrastructure itself
  Internet: er…? What's missing? What is the substrate…?
Mid-1990s: "To enable innovation in the network, we need to program on top of a simple hardware datapath."
  Problems: isolation, performance, complexity
Late-1990s: "To enable innovation in the network, we need the datapath substrate to be programmable."
  Problem: accelerated complexity of the datapath substrate

(Statement of the obvious) In networking, despite several attempts…
We’ve never agreed upon a clean separation between:
A simple common hardware substrate
And an open programming environment on top
But things are changing fast in data centers and service provider networks.

Observations
Prior attempts have generally:
Assumed the current IP routing substrate is fixed, and tried to program it externally
Including the routing protocols
Defined the programming and control model up-front
But to pick the right x86 instruction set, Intel didn’t define Windows XP, Linux or VMware
We need…
A clean separation between the substrate and an open programming environment
A simple hardware substrate that generalizes, subsumes and simplifies the current substrate
Very few preconceived ideas about how the substrate will be programmed
Strong isolation
Step 1: Separate intelligence from datapath
[Diagram: the intelligence (new functions written by operators, users, 3rd-party developers, researchers, …) moves out of the box, above the datapath.]
Step 2: Cache decisions in minimal flow-based datapath
[Diagram: a flow table in the switch caches decisions such as "If header = x, send to port 4"; "If header = ?, send to me (the controller)"; "If header = y, overwrite header with z, send to ports 5,6".]
This supports: 1. unicast; 2. multicast; 3. multipath (load-balancing, redundancy); 4. waypoints (middleware, intrusion detection, …).

Types of action
Allow/deny flow
Route & re-route flow
Isolate flow
Make flow private
Remove flow

What is a flow?
Application flow
All http
Jim's traffic
All packets to Canada
…

Packet-switching substrate
[Diagram: packet = payload + Ethernet (DA, SA, etc.) + IP (DA, SA, etc.) + TCP (DP, SP, etc.): a collection of bits to plumb flows (of different granularities) between end points.]
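A minimal sketch of Step 2 (my naming, not OpenFlow code): the datapath reduces to a cache lookup, with misses punted to a controller that installs the decision so later packets never leave the datapath:

```python
# Minimal sketch of "cache decisions in a minimal flow-based datapath"
# (illustrative names): the switch only matches headers against a flow
# table; a miss sends the packet to the controller, which caches its answer.

class Controller:
    """The intelligence, now outside the box."""
    def packet_in(self, switch, header):
        action = ("forward", [4])              # any policy could go here:
        switch.flow_table[header] = action     # unicast, multicast, waypoint...
        return action

class Switch:
    """The minimal flow-based datapath: match, act, nothing else."""
    def __init__(self, controller):
        self.flow_table = {}                   # header -> cached action
        self.controller = controller

    def receive(self, header):
        action = self.flow_table.get(header)
        if action is None:                     # "If header = ?, send to me"
            action = self.controller.packet_in(self, header)
        return action

sw = Switch(Controller())
assert sw.receive("x") == ("forward", [4])     # miss: decided and cached
assert "x" in sw.flow_table                    # hit path from now on
```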
Properties of a flow-based substrate
We need flexible definitions of a flow
Unicast, multicast, waypoints, load-balancing
Different aggregations
We need direct control over flows
Flow as an entity we program: To route, to make private, to move, …
Exploit the benefits of packet switching
It works and is universally deployed
It’s efficient (when kept simple)
Substrate: "Flowspace"
[Diagram: the same packet (payload + Ethernet + IP + TCP headers) reinterpreted as payload + a single user-defined flowspace header: a collection of bits to plumb flows (of different granularities) between end points.]

Flowspace: simple example
[Diagram: axes IP SA vs. IP DA. A point is a single flow; a line is all flows from host A; a rectangle is all flows between two subnets.]

Flowspace: generalization
[Diagram: n dimensions, Field 1 … Field n. A point is a single flow; a region is a set of flows.]

Properties of Flowspace
Backwards compatible
Current layers are a special case
No end points need to change
Easily implemented in hardware
e.g. TCAM flow-table in each switch
Strong isolation of flows
Simple geometric construction
Can prove which flows can/cannot communicate
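The geometric construction can be made concrete. A sketch (my reading of the slide; field names and ranges are illustrative): model each slice as one range per header field, and two slices can share a flow, and hence communicate, only if their regions intersect in every field:

```python
# Sketch of the geometric isolation argument (illustrative): a flowspace
# region is one inclusive (lo, hi) range per header field, i.e. an
# axis-aligned hyper-rectangle. Isolation proofs reduce to intersection tests.

def overlaps(region_a, region_b):
    """Regions are dicts: field name -> (lo, hi), inclusive."""
    for field, (a_lo, a_hi) in region_a.items():
        b_lo, b_hi = region_b[field]
        if a_hi < b_lo or b_hi < a_lo:
            return False              # disjoint in one field: no common flow
    return True

# Two slices of a 2-field flowspace (IP SA x IP DA, as integers):
research   = {"ip_sa": (10, 20), "ip_da": (0, 2**32 - 1)}
production = {"ip_sa": (30, 40), "ip_da": (0, 2**32 - 1)}

assert not overlaps(research, production)  # provably isolated (disjoint IP SA)
```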
A substrate
Flow-based
Small number of actions for each flow
Plumbing: Forward to port(s)
Control: Forward to controller
Routing between flow-spaces: Rewrite header
Bandwidth isolation: Min/max rate
External open API to flow-table
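To make the "small number of actions" concrete, here is an illustrative sketch (the names are mine, not a standard API) of a datapath that knows only these four actions, applied after a flow-table match:

```python
# Illustrative sketch: the whole datapath action vocabulary in one function.
# Packets are (header, payload) pairs; the match has already happened.

def apply_action(action, packet):
    header, payload = packet
    kind = action[0]
    if kind == "forward":                  # plumbing: forward to port(s)
        return [(port, packet) for port in action[1]]
    if kind == "to_controller":            # control: forward to controller
        return [("controller", packet)]
    if kind == "rewrite":                  # routing between flow-spaces
        new_header, ports = action[1], action[2]
        return [(port, (new_header, payload)) for port in ports]
    if kind == "rate_limit":               # bandwidth isolation: min/max rate
        return [("shaper", packet)]
    raise ValueError(f"unknown action: {kind!r}")

packet = ("header-y", b"payload")
print(apply_action(("forward", [4]), packet))
print(apply_action(("rewrite", "header-z", [5, 6]), packet))
```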
OpenFlow as a strawman flow-based substrate

Our Approach: 1. Define the substrate
OpenFlow is an open external API to a flow-table
Version 1.0
  Defined to be easy to add to existing hardware switches, routers, APs, …
  Timeframe: now
Version 2.0
  OpenFlow-optimized hardware
  General "flowspace"
  Timeframe: 2011

Our Approach: 2. Deploy
Deploy on college campuses
Deploy in national research backbone networks
Enable researchers to freely innovate on top
OpenFlow Hardware
Cisco Catalyst 6k, NEC IP8800, HP ProCurve 5400, Juniper MX-series, WiMAX (NEC), PC Engines, Quanta LB4G. More coming soon...

An OpenFlow Controller
Martin Casado and Scott Shenker ("Nicira") created the NOX controller
Available at http://NOXrepo.org

OpenFlow Basics
[Diagram: a classical Ethernet switch couples the data path (hardware) and the control path (software) in one box. OpenFlow moves the control path out to an OpenFlow controller, which programs the data path via the OpenFlow protocol over SSL.]

OpenFlow Basics (1)
Flow Table Entry: Rule (exact & wildcard) | Action | Statistics
Exploit the flow table already in switches, routers, and chipsets (Flow 1, Flow 2, …, Flow N).

OpenFlow Protocol Version 1.0
Rule: match on ten fields (+ a mask saying which fields to match):
  Switch Port | MAC src | MAC dst | Eth type | VLAN ID | IP Src | IP Dst | IP Prot | TCP sport | TCP dport
Action:
  Forward packet to port(s)
  Encapsulate and forward to controller
  Drop packet
  Send to normal processing pipeline
Stats: packet + byte counters

Examples (fields in the order above; * = wildcard)
  Switching:      *, *, 00:1f:.., *, *, *, *, *, *, * -> port6
  Flow switching: port3, 00:2e.., 00:1f.., 0800, vlan1, 1.2.3.4, 5.6.7.8, 4, 17264, 80 -> port6
  Firewall:       *, *, *, *, *, *, *, *, *, 22 -> drop
  Routing:        *, *, *, *, *, *, 5.6.7.8, *, *, * -> port6
  VLAN switching: *, *, *, *, vlan1, *, *, *, *, * -> port6, port7, port9
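A sketch of how such a table is consulted (illustrative, not the OpenFlow reference switch): rules are 10-tuples in the field order above, "*" matches anything, and the first matching rule's action wins:

```python
# Sketch (illustrative): wildcard lookup over the 10-tuple. Real flow tables
# also carry per-rule priorities; first-match-wins stands in for that here.

FIELDS = ["in_port", "mac_src", "mac_dst", "eth_type", "vlan_id",
          "ip_src", "ip_dst", "ip_proto", "tcp_sport", "tcp_dport"]

def matches(rule, packet):
    return all(r == "*" or r == packet[f] for f, r in zip(FIELDS, rule))

def lookup(flow_table, packet):
    for rule, action in flow_table:
        if matches(rule, packet):
            return action
    return "encapsulate and send to controller"   # table miss

flow_table = [
    (("*",) * 9 + ("22",), "drop"),                             # firewall row
    (("*", "*", "*", "*", "*", "*", "5.6.7.8", "*", "*", "*"),
     "forward to port6"),                                       # routing row
]

pkt = dict(zip(FIELDS, ["port3", "00:2e..", "00:1f..", "0800", "vlan1",
                        "1.2.3.4", "5.6.7.8", "4", "17264", "80"]))
assert lookup(flow_table, pkt) == "forward to port6"
```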
OpenFlowSwitch.org

OpenFlow Usage
[Diagram: a dedicated OpenFlow network. A controller running on a PC speaks the OpenFlow protocol to several OpenFlow switches; researcher "Peter" programs the network through the controller.]

Usage examples
Peter's code:
Static “VLANs”
His own new routing protocol: unicast, multicast, multipath, load-balancing
Network access control
Home network manager
Mobility manager
Energy manager
Packet processor (in controller)
IPvPeter
Network measurement and visualization
…

Separate VLANs for Production and Research Traffic
[Diagram: one physical switch carries research VLANs (OpenFlow processing) alongside production VLANs (normal L2/L3 processing).]

Virtualize OpenFlow Switch
[Diagram: one OpenFlow switch sliced further: Researcher A, B, and C VLANs are each governed by their own controller (Controller A, B, C), while production VLANs keep normal L2/L3 processing.]

Virtualizing OpenFlow
[Diagram: Craig's, Heidi's, and Aaron's controllers each speak the OpenFlow protocol to the same set of OpenFlow switches.]
[Diagram: slices need not map to people: broadcast, multicast, and an http load-balancer each run in their own slice.]

[Diagram: the computer analogy again. Virtualization on x86 lets Windows, Linux, and Mac OS (each with its own apps) run side by side: a simple, common, stable hardware substrate below + programmability + a strong isolation model + competition above = faster innovation.]

[Diagram: the network version. Virtualization (FlowVisor) sits between the OpenFlow switches and Controller 1, Controller 2, … (each with its own apps), slicing one physical network into many.]
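FlowVisor's role can be sketched as a proxy that slices flowspace between controllers. A toy version (mine; the real FlowVisor mediates the full OpenFlow protocol), with slices keyed on TCP port purely for brevity:

```python
# Toy sketch of the FlowVisor idea (illustrative): a transparent proxy
# between switches and controllers that confines each controller to its
# own slice of flowspace. Slices here are just sets of TCP ports.

class FlowVisor:
    def __init__(self):
        self.slices = {}                           # controller -> TCP ports

    def add_slice(self, controller, ports):
        self.slices[controller] = set(ports)

    def packet_in(self, pkt):
        """Deliver a switch's table-miss only to the controller owning it."""
        for controller, ports in self.slices.items():
            if pkt["tcp_dport"] in ports:
                return controller
        return None                                # unclaimed traffic

    def flow_mod(self, controller, rule_port):
        """Reject rules that reach outside the requester's slice."""
        if rule_port not in self.slices[controller]:
            raise PermissionError(f"{controller} may not touch port {rule_port}")
        return "installed"

fv = FlowVisor()
fv.add_slice("http_load_balancer", [80])
fv.add_slice("research", [8000, 8080])
assert fv.packet_in({"tcp_dport": 80}) == "http_load_balancer"
assert fv.flow_mod("research", 8080) == "installed"   # inside the slice
# fv.flow_mod("research", 80) would raise: isolation enforced by the proxy
```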
OpenFlow Deployments
Stanford deployments
Wired: CS Gates building, EE CIS building, EE Packard building
WiFi: 100 OpenFlow APs across SoE
WiMAX: OpenFlow service in SoE
Other deployments
Internet2 (NetFPGA switches)
JGN2plus, Japan (NEC switches)
10-15 research groups have switches