Lee H. Marzke on 17 May 2018 06:04:27 -0700



Re: [PLUG] Fwd: VMware Releases Security Update


Perhaps some examples will help; the term likely has many definitions.

Traditional networking was set up with switches, routers, etc. in the data center, and those boxes have
grown larger over time; the current setup has many limitations.  Typically each switch or router makes
decisions locally, based on rules entered into its configuration, which usually means command-line changes
at the unit itself.  The management configuration of the switch is done in the same box that makes the
real-time decisions: the management plane sits in the same box as the control plane (the actual
configuration and forwarding decisions) and the data plane (the packet switching itself).
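
To make the jargon concrete, here is a toy Python sketch (purely illustrative, every name made up,
not any vendor's API) of a traditional switch with all three planes bundled in one box:

    # Illustrative only: a traditional switch bundles all three planes.
    class TraditionalSwitch:
        def __init__(self):
            self.forwarding_table = {}          # control-plane state, kept locally

        def cli_configure(self, prefix, next_hop):
            """Management plane: an operator logs into THIS box and edits config."""
            self.forwarding_table[prefix] = next_hop

        def forward(self, dst_prefix):
            """Data plane: the real-time decision is made from the local table."""
            return self.forwarding_table.get(dst_prefix, "drop")

    sw = TraditionalSwitch()
    sw.cli_configure("10.0.0.0/24", "port-1")   # every change means touching this unit
    print(sw.forward("10.0.0.0/24"))            # -> 'port-1'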

So when you move the management plane out of the box (as a Cumulus Linux switch does) and manage it from a
central location, that is more 'software defined': you get central control of the data center, orchestrated
updates of all units, and so on.
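
Pulling the management plane out looks roughly like this (again a made-up sketch, not the Cumulus or
NSX API): one controller holds the desired state and fans it out, so a change is one call instead of
forty logins:

    # Illustrative only: a central management plane pushing state to all units.
    class Switch:
        def __init__(self):
            self.forwarding_table = {}   # each box keeps only its data/control plane

    class Controller:
        def __init__(self, switches):
            self.switches = switches

        def push_route(self, prefix, next_hop):
            """Management plane lives here: one change fans out to every unit."""
            for sw in self.switches:
                sw.forwarding_table[prefix] = next_hop

    fleet = [Switch() for _ in range(40)]
    Controller(fleet).push_route("10.1.0.0/24", "spine-uplink")
    print(fleet[0].forwarding_table)     # every switch now has the new route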

There are two main commercial implementations of data-center SDN, which compete somewhat.  Cisco has ACI,
which removes the management from the switch/router and puts the intelligence into three hardware 'APIC'
controllers; in effect you need to buy new ACI switches, the three controllers, plus management software.
The other architecture is VMware NSX, which puts the switches and routers in the ESXi kernel and also uses
three software-based controllers.  VMware also runs edge gateways, VPNs, etc. in virtual machines, all
controlled from the NSX Manager software.

So one advantage: two VMs of a multi-tier application on the same host would normally have to route packets
all the way up through the switches and a physical router and back to communicate (i.e. hairpinning).  On
VMware, each ESXi kernel has a copy of the master routing table, so those VMs can talk without the traffic
ever leaving the host.  Since intra-host connectivity is > 20 Gb/s (in software) and physical networking is
usually 10 Gb/s or less, this gives both higher throughput and lower latency.
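
Some back-of-the-envelope arithmetic (using those rough figures) shows the difference for a bulk copy
between two VMs on the same host:

    # Rough arithmetic: moving 10 GB between two VMs that share a host.
    size_bits = 10 * 8 * 10**9           # 10 GB expressed in bits

    hairpin_gbps = 10                    # up through the physical switch/router and back
    in_host_gbps = 20                    # software path inside the ESXi kernel

    print(size_bits / (hairpin_gbps * 10**9))   # ~8.0 seconds via the hairpin
    print(size_bits / (in_host_gbps * 10**9))   # ~4.0 seconds staying in the host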

The other advantage is that since all the networking can be in software, that configuration can be
replicated or duplicated to your DR site.  DR sites often have different physical networking hardware, so
the ability to have identical network setups makes configuring DR easier.  There is also no need to ever
program the physical switches again (to add VLANs, etc.), because a tunnel is set up between hosts and all
the VLANs (actually VXLANs) are hidden from the physical hardware.  Any future networking change is done via
the NSX GUI and happens in seconds, instead of the typical days or weeks for a network change request to get
approved.
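
The hiding works because each layer-2 frame gets wrapped in a VXLAN header and carried host-to-host over
UDP port 4789, so the physical gear only ever sees host-to-host UDP.  A minimal sketch of the
encapsulation, using the standard RFC 7348 framing with a toy frame:

    import struct

    def vxlan_encap(vni, inner_frame):
        """Wrap an L2 frame in the 8-byte VXLAN header (RFC 7348).
        Flags byte 0x08 = 'VNI present'; the 24-bit VNI replaces the VLAN ID."""
        header = struct.pack("!BBHI", 0x08, 0, 0, vni << 8)
        return header + inner_frame      # then carried as UDP to dst port 4789

    packet = vxlan_encap(vni=5001, inner_frame=b"\x00" * 60)
    print(len(packet))                   # 68: 8-byte header + 60-byte toy frame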

As you add more software, things get interesting.  NSX supports micro-segmentation, i.e. firewalling traffic
between individual VMs.  This means effectively white-listing all known inter-VM traffic, so anything else
is suspicious.  You don't even need to separate multi-tier applications onto different networks: two apps on
the same layer-2 segment can now be fire-walled, because the firewall in the ESXi kernel filters traffic as
it enters and leaves the virtual NIC.
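
Conceptually the per-vNIC filter is just a default-deny rule set.  A hypothetical sketch (the VM names
and flows are invented, and this is not the NSX rule syntax):

    # Illustrative micro-segmentation: whitelist known flows, deny the rest.
    ALLOWED = {
        ("web-01", "app-01", 8443),      # web tier may call the app tier
        ("app-01", "db-01", 5432),       # app tier may call the database
    }

    def vnic_filter(src_vm, dst_vm, dst_port):
        """Applied as traffic enters or leaves each virtual NIC."""
        return "allow" if (src_vm, dst_vm, dst_port) in ALLOWED else "deny"

    print(vnic_filter("web-01", "app-01", 8443))   # allow
    print(vnic_filter("web-01", "db-01", 5432))    # deny: web never talks to db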

The firewall rules can now be written in terms of a VM name or the folder where the VM resides, and all the
IP-level rules are generated automatically from that info.  So the firewall rules become much simpler and
update automatically when new VMs are added to the folder, etc.
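
A sketch of why that stays simple (hypothetical inventory, not the NSX object model): the one logical
rule is re-expanded into IP rules whenever the folder membership changes:

    # Hypothetical inventory: folders -> VMs -> current IPs.
    FOLDERS = {
        "web-tier": {"web-01": "10.0.1.11", "web-02": "10.0.1.12"},
        "db-tier":  {"db-01": "10.0.2.21"},
    }

    def expand_rule(src_folder, dst_folder, port):
        """One logical rule becomes N x M IP rules, regenerated on every change."""
        return [(s_ip, d_ip, port)
                for s_ip in FOLDERS[src_folder].values()
                for d_ip in FOLDERS[dst_folder].values()]

    print(expand_rule("web-tier", "db-tier", 5432))
    FOLDERS["web-tier"]["web-03"] = "10.0.1.13"      # new VM dropped into the folder
    print(expand_rule("web-tier", "db-tier", 5432))  # rules update, no human edit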

The same thing is happening to SD-WAN with VMware's purchase of VeloCloud.  Typically SD-WAN does smarter
routing over both MPLS and Internet VPN tunnels.  Many legacy companies set up MPLS so that SAP etc. can
communicate between sites.  MPLS is fine for low-latency, low-bandwidth traffic, but it can't keep up with
today's volumes.  It makes sense to automatically route less latency-sensitive traffic over the VPN, with
central management of the worldwide network from one headquarters for ease of administration.
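
The path-selection idea boils down to a small policy.  An illustrative sketch (app names invented; real
SD-WAN products also measure loss and jitter per link in real time):

    # Illustrative SD-WAN steering: reserve MPLS for latency-sensitive traffic.
    LATENCY_SENSITIVE = {"voip", "sap", "vdi"}

    def pick_path(app, mpls_headroom_mbps):
        """Send interactive apps over MPLS while it has capacity; bulk goes VPN."""
        if app in LATENCY_SENSITIVE and mpls_headroom_mbps > 0:
            return "mpls"
        return "internet-vpn"            # cheap bandwidth for everything else

    print(pick_path("sap", mpls_headroom_mbps=3))     # mpls
    print(pick_path("backup", mpls_headroom_mbps=3))  # internet-vpn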

To further complicate this, VMware is splitting NSX off from the entirely commercial parts in ESXi, to run
NSX-compatible networking on KVM and other hypervisors.  On open-source platforms, Open vSwitch is used, and
parts of NSX run as agents inside each VM.  With this set up you get transparent networking across most of
the major clouds and on-prem with vSphere.  The new Pivotal Cloud Foundry (DevOps) releases are starting to
use cross-platform NSX to allow one single network (security) design to be pushed out to multiple clouds.

Lee








From: "Ron Mansolino" <rmsolino@gmail.com>
To: "Philadelphia Linux User's Group Discussion List" <plug@lists.phillylinux.org>
Sent: Thursday, May 17, 2018 7:17:56 AM
Subject: Re: [PLUG] Fwd: VMware Releases Security Update
Can someone provide a definition for "Software Defined" networking?
(as far as I can tell, it's "We have a control panel for provisioning")


On Wed, May 16, 2018 at 4:28 PM Lee H. Marzke <lee@marzke.net> wrote:

Not sure many VMware people use it yet, but it does appear to be one of the top SD-WAN products out there.


--
"Between subtle shading and the absence of light lies the nuance of iqlusion..."  - Kryptos

Lee Marzke,  lee@marzke.net     http://marzke.net/lee/
IT Consultant, VMware, VCenter, SAN storage, infrastructure, SW CM
___________________________________________________________________________
Philadelphia Linux Users Group         --        http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion  --   http://lists.phillylinux.org/mailman/listinfo/plug