SDN and NFV Simplified
Jim Doherty
Copyright © 2016 Pearson Education, Inc.
All rights reserved. Printed in the United States of America. This publication is
protected by copyright, and permission must be obtained from the publisher
prior to any prohibited reproduction, storage in a retrieval system, or
transmission in any form or by any means, electronic, mechanical,
photocopying, recording, or likewise. For information regarding permissions,
request forms and the appropriate contacts within the Pearson Education Global
Rights & Permissions Department, please visit
www.pearsoned.com/permissions/.
ISBN-13: 978-0-13-430640-7
ISBN-10: 0-13-430640-6
Library of Congress Control Number: 2015956324
Text printed in the United States on recycled paper at RR Donnelley in
Kendallville, Indiana.
First printing: March 2016
Many of the designations used by manufacturers and sellers to distinguish their
products are claimed as trademarks. Where those designations appear in this
book, and the publisher was aware of a trademark claim, the designations have
been printed with initial capital letters or in all capitals.
The author and publisher have taken care in the preparation of this book, but
make no expressed or implied warranty of any kind and assume no responsibility
for errors or omissions. No liability is assumed for incidental or consequential
damages in connection with or arising out of the use of the information or
programs contained herein.
For information about buying this title in bulk quantities, or for special sales
opportunities (which may include electronic versions; custom cover designs; and
content particular to your business, training goals, marketing focus, or branding
interests), please contact our corporate sales department at
[email protected] or (800) 382-3419.
For government sales inquiries, please contact
[email protected].
For questions about sales outside the U.S., please contact [email protected].
Visit us on the Web: informit.com/aw
Editor-in-Chief
Dave Dusthimer
Executive Editor
Mary Beth Ray
Development Editor
Jeff Riley
Managing Editor
Sandra Schroeder
Project Editor
Mandie Frank
Copy Editor
Keith Cline
Indexer
Tim Wright
Proofreader
Debbie Williams
Technical Reviewer
Brian Gracely
Editorial Assistant
Vanessa Evans
Designer
Mark Shirar
Compositor
Studio Galou
To Katie, Samantha, and Conor
1. Primer on Virtualization
Over the past decade, we have witnessed a revolution in the way information
technology (IT) works from an infrastructure perspective. Gone are the days of
purchasing and deploying a new server for each new application. Instead, IT has
become much more adept at sharing existing underutilized resources across a
catalog of IT applications, a technique called virtualization.
Of course, sharing of resources is nothing new. Back in the days of mainframe
computers, IT would segment processing capacity to provide many smaller
logical central processing units (CPUs). Similarly, networks have used
virtualization for years to create segregated logical networks that share the same
wire (for example, virtual local-area networks [VLANs]). Even in desktop
computing, IT has logically partitioned large hard disks to create several smaller
independent drives, each with a dedicated label and capacity that is capable of
storing its own data and applications.
Therefore, we can see that the technique of virtualization has existed in
processing, storage, and networking for several decades. What has made
virtualization in the past decade different is the concept and development of the
virtual machine (VM).
If you look up the definition of a VM, you’ll find one such as the one found on
the VMware website: “A VM is a tightly isolated software container that runs its
own operating system and applications as if it were a physical computer.”
Obviously, this is correct, but it does not really give you the “a-ha” you were
probably hoping for, nor does it give you an understanding of why or how these
things have transformed servers, storage, and networking.
To help get to that a-ha, let’s start with the big problem that VMs fix.
Server Proliferation, Massive Power Bills, and Other IT
Nightmares
Over the past 30 years, we have become a data-driven society that relies on
computers (in their many forms) to help run both our daily lives and businesses.
In fact, businesses have become so reliant on computers, servers, and networking
that the absence of these things (due to an outage, virus, attack, or some other
calamity) even for a short period of time can have a significant financial impact
on a company.
As a result of living in this data-centric world, enterprises need to run
applications for nearly every facet of their business, and all these applications
have to run on servers. Not only that, but each application often requires its own
dedicated server, even when the application comes nowhere near using the server's
full capacity. In fact, that turns out to be the case most of the time. This has led
to a phenomenon known as
server proliferation, which is as bad as it sounds. It refers to the ever-increasing
need to buy more and more single-use servers to account for increasing data and
application usage.
Server proliferation is compounded by the need for businesses to always have their
computing resources available (meaning that every server needs a backup, and
sometimes the backups need backups), effectively doubling or tripling the number
of servers that a company needs, and that number was already big and getting
bigger.
Figure 1-1 shows a single physical server, running a single operating system,
which runs a single application.
Figure 1-1 Single physical server, running a single operating system, which
runs a single application
So, at this point you may be asking, “What’s so bad about server proliferation?”
The answer is that it wastes a lot of money, requires a lot of resources (power,
cooling, and personnel to manage it all), and is terribly inefficient.
How inefficient? Well, numerous studies have shown that, on average,
nonvirtualized enterprise servers run at only about 5% to 10% utilization. This is
partly due to advancements in chip density (they can just do more stuff in less
time), but it's mostly because almost every server supports just one application.
Multiple applications can be supported on a single server as long as they are all
supported by the same version of the operating system (OS), but for the purposes
of this chapter we will assume that servers have a 1:1 OS-to-application ratio.
This means that when a new application is needed, you can't just load it on an
existing underutilized server; you have to buy a new one.
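
To put rough numbers on that waste, here is a quick back-of-the-envelope sketch in Python. The fleet size and the 8% average are illustrative assumptions, not figures from this chapter; the 8% simply falls inside the 5% to 10% range cited above.

# Rough estimate of how much purchased capacity sits idle in a
# nonvirtualized fleet. All inputs are illustrative assumptions.
fleet_size = 100          # physical servers, one application each
avg_utilization = 0.08    # 8%, inside the 5%-10% range cited above

busy_equivalent = fleet_size * avg_utilization    # servers' worth of real work
idle_equivalent = fleet_size - busy_equivalent    # capacity paid for but unused

print(f"Useful work: about {busy_equivalent:.0f} servers' worth")
print(f"Idle capacity: about {idle_equivalent:.0f} servers' worth")

In other words, a 100-server fleet at these assumed numbers is doing roughly 8 servers' worth of work while you pay for 100.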
Figure 1-2 shows three servers, each with different operating systems. Although
each is underutilized, any new application would require its own server.
Figure 1-2 Three servers, each with different operating systems
So, now you have two servers, each running at about 10% to 15% utilization.
Unfortunately, you have to have these servers backed up in case a power outage
or some other disaster occurs. Now you have four servers, two here and two
somewhere else, and all four have about 90% of their computing resources idle.
Making matters worse, these servers have to be powered on all the time, and that
takes a good amount of power. It gets worse still, because even idle servers
generate a lot of heat; so, wherever they are, you need to keep them cool, and
that takes a lot more power. (Servers are often more expensive to cool than they
are to keep on.) There's also the matter of real estate. If your business is
growing, chances are that your server farm is growing really quickly, and now
you may have a space-planning issue on top of everything else.
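
To see how the power and cooling bill adds up, here is another hedged sketch; every wattage, overhead, and price figure below is an assumption chosen only to make the arithmetic concrete.

# Back-of-the-envelope power and cooling bill for a mostly idle,
# fully backed-up server fleet. Every input is an illustrative assumption.
primary_servers = 100
backup_factor = 2                # each server has a powered-on backup
watts_per_server = 300           # draw of a lightly loaded rack server
cooling_overhead = 1.5           # cooling can cost more than the IT load itself
price_per_kwh = 0.10             # dollars per kilowatt-hour
hours_per_year = 24 * 365

total_servers = primary_servers * backup_factor
it_load_kw = total_servers * watts_per_server / 1000
facility_kw = it_load_kw * (1 + cooling_overhead)
annual_cost = facility_kw * hours_per_year * price_per_kwh

print(f"{total_servers} servers draw about {facility_kw:.0f} kW including cooling")
print(f"Annual electricity bill: roughly ${annual_cost:,.0f}")

Even with these modest assumed numbers, a fleet that is roughly 90% idle still runs up a six-figure electric bill every year.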
Now let’s say that you are up to 100 or so of these servers and there’s a need for
a new application. As a result, your IT team has to go through the process of
spec’ing out the server(s), getting them ordered through procurement, getting
them installed, making sure they are replicated, loading the application...you get
the picture. It’s a lot of work and a lot of expense, and we can’t lose sight of the
fact that these servers are mostly underutilized and that many are just sitting
idle.
To put this in perspective, imagine if you had to buy a new car for every location
you had to visit, and even when you were not going anywhere you had to leave
all the cars running. Oh yeah, you also need two of each car in case one of them
breaks down, and all of those backup cars have to be running all the time too.
You don’t have to have an MBA to figure out that server proliferation and all its
associated costs are a bad deal for business. That is, of course, unless you’re in
the business of selling the servers, but more on that later.
This problem is exactly what VMs fix. Before we get to them, however, let’s
look first at why servers are so inefficient.
How Servers Work
A server, like most computers, is a collection of specialized hardware resources
that are accessed by a software OS through a series of specialized drivers. These
resources can be many things, but commonly consist of the following:
CPU (processor): Does the computing part
RAM (or memory): Stores and stacks short-term instructions and
information
Storage: Keeps long-term data
Network interface card (NIC): (Pronounced “nick”) Allows the
machine to connect to other computers or devices via a network
The OS communicates with the drivers to access these resources, and a one-to-
one relationship exists between them such that they comprise a set (hardware,
drivers, and OS). Once the OS and drivers are loaded, the hardware is locked in.
(Technically, you could reformat it and start over, but this is the equivalent of
throwing out the old one and buying a new one.)
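
For a concrete sense of what that resource set looks like from the OS's side, here is a small sketch in Python. It relies on the third-party psutil package, which is an assumption of this example (it is not part of the chapter and has to be installed separately, for example with pip install psutil), and it reports the CPU, RAM, storage, and NIC inventory of whatever machine it runs on.

# Inventory the four resource types listed above, as the OS sees them.
# Assumes the third-party psutil package is installed and a Unix-style
# root path ("/") for the disk check.
import psutil

cpus = psutil.cpu_count(logical=True)             # CPU: the computing part
ram_gb = psutil.virtual_memory().total / 2**30    # RAM: short-term working memory
disk_gb = psutil.disk_usage("/").total / 2**30    # Storage: long-term data
nics = list(psutil.net_if_addrs().keys())         # NICs: network connectivity

print(f"Logical CPUs : {cpus}")
print(f"RAM          : {ram_gb:.1f} GiB")
print(f"Root disk    : {disk_gb:.1f} GiB")
print(f"Interfaces   : {', '.join(nics)}")

This only reads the OS's view of the hardware; the point here is that, without virtualization, that entire inventory is dedicated to a single OS and, typically, a single application.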
Now comes the application. The application is loaded onto the OS, thus
becoming locked in to the OS, drivers, and hardware. Once loaded, you are all
set, and you’ve defined that server’s job. Running that application is what the
server does, or more accurately that’s all it does. Now this might be confusing to
you if your only experience with applications is what you load onto your phone
or tablet. After all, those things hold tons of apps, and they’re really little; so
why can’t these big honkin’ servers do more than one thing? Well, the reason is
twofold:
The applications running on servers are not the “Angry Birds” variety
of application. They are much more complicated and do things such as run
email systems for large enterprises.
The operating systems are dedicated to the application and are the
link to the drivers, which enable access to the hardware resources (CPU,
RAM, and disk). Also, it turns out that operating systems are not good at
sharing.
Figure 1-3 shows the relationship between the hardware resources, the OS, and
the application running on a server.
Figure 1-3 Relationship between the hardware resources, the OS, and the
application running on a server
At this point, you might be saying to yourself, “That’s dumb. Given all of those