NSX for DEV Environment (Networking Part)

2021-04-13

This article is about configuring NSX-T for networking (the Security part is not covered yet).
Summary environment guide:
a. NSX-T 3.1
b. vSphere 7.0

Note: This step-by-step guide is suitable for a Development Environment or a PoC.

Summary tasks:

  1. Configure prerequisites on DNS and vSphere, and the BGP peering on the physical router.
  2. Deploy and configure NSX Mgr.
  3. Configure NSX network for Host
  4. Configure NSX network for Edge
  5. Configure BGP peering
  6. Test Segment Networking

Configure Prereq on DNS, vSphere, and the physical router

  1. Create the required DNS records.
  2. Assuming the vSphere cluster is already configured along with the storage part, configure the VDS.
  3. Configure or review the BGP peering on the physical router. In my lab I use Quagga (the default setup when using the vPodRouter from the VMware lab).
    root@vPodRouter-HOL:/etc/quagga# cat bgpd.conf
    !
    ! Zebra configuration saved from vty
    ! 2019/12/31 11:13:11
    !
    hostname bgpd
    log file /var/log/quagga/quagga.log
    !
    router bgp 65002
    bgp router-id 192.168.100.1
    neighbor 192.168.210.3 remote-as 65012
    neighbor 192.168.210.3 default-originate
    neighbor 192.168.210.4 remote-as 65012
    neighbor 192.168.210.4 default-originate

In this sample:
The physical router's BGP AS is 65002.
The NSX BGP AS is 65012.
The accepted peering interfaces from NSX are 192.168.210.3 and 192.168.210.4.

Deploy and configure NSX Mgr

The NSX Mgr serves as the management plane (UI, API) and also the network control plane, so if you have more resources, it is better to deploy 3 NSX Mgr appliances.
The data plane lives in each of the ESXi nodes and Edge nodes.
If you have a dedicated management cluster, deploy the NSX Mgr appliance(s) into that cluster.

  1. Deploy the NSX Mgr appliances. I skip the OVA deployment steps since they are pretty easy.
  2. Configure the NSX Mgr VIP.
  3. Make sure the MTU is configured in the Global Fabric Settings.
  4. Add the NSX license.
  5. Add the vCenter as a Compute Manager. Use the service account already configured in vCenter, and make sure to tick Enable Trust.
  6. Configure IP Address Pools for the TEPs of both host nodes and Edge nodes.
  7. Configure RBAC.
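The TEP IP pool from step 6 can also be created through the NSX-T Policy API instead of the UI. This is only a sketch: the pool/subnet ids, CIDR, and allocation range below are assumptions for a lab, not values from this guide.

```python
import json

# Hypothetical static subnet for a TEP pool; the body would be PUT to
# /policy/api/v1/infra/ip-pools/tep-pool/ip-subnets/tep-subnet
# on the NSX Manager VIP (ids and addressing are assumed).
tep_subnet = {
    "resource_type": "IpAddressPoolStaticSubnet",
    "cidr": "192.168.130.0/24",  # assumed TEP network
    "allocation_ranges": [
        # assumed range; size it for 2 TEPs per host plus the Edge TEPs
        {"start": "192.168.130.51", "end": "192.168.130.100"}
    ],
}

print(json.dumps(tep_subnet, indent=2))
```

The same pool can then be referenced from both the host and Edge transport node profiles, which is what step 6 does in the UI.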

Configure NSX network for Host

  1. In NSX Mgr there is a quick-start wizard that can be used. For configuring the NSX network, use "Prepare NSX Transport Nodes".
  2. Select the "Host Cluster" option to configure the Host Overlay.
  3. Choose the target vSphere cluster and confirm the installation. Wait up to 30 seconds; NSX is installed on the hosts automatically.
  4. Configure the Transport Zone for the Host Overlay.
  5. Configure the uplink profile for the Host Overlay.
    It is best to use "Load Balance Source" as the teaming policy. Enter two active uplinks, and make sure to input the correct transport VLAN.
  6. Configure the transport node target.
    Select the target VDS, assign the IP pool, and map the VDS uplinks to the uplink profile created in step 5 above.
  7. Review. If you apply at this point, an error will occur. Don't be upset: the GUI mistakenly reports a mismatch between the global MTU and the MTU in the assigned profile.
  8. Duplicate the browser tab so the existing wizard steps still persist, then go to the uplink profiles. You will see that the MTU of the created profile is hardcoded instead of using the global MTU. Edit the profile and make sure the MTU setting is empty, then check the MTU settings again.
  9. Go back to the previous browser tab that has the quick wizard and click Finish. Wait up to 30 seconds.
  10. Check the host transport nodes. Make sure the NSX configuration status is "Success", each node has 2 TEP IP addresses, and the node status is "Up".
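As a cross-check for steps 5 and 8, this is roughly what the host overlay uplink profile looks like as a Manager API payload (POST /api/v1/host-switch-profiles). The profile name, uplink names, and transport VLAN are assumptions; note that the "mtu" key is deliberately absent so the global fabric MTU applies, which is exactly the fix from step 8.

```python
import json

# Sketch of a host overlay uplink profile (names and VLAN are assumed).
uplink_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "host-overlay-profile",  # assumed name
    "teaming": {
        "policy": "LOADBALANCE_SRCID",  # "Load Balance Source" in the UI
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
        ],
    },
    "transport_vlan": 130,  # assumed host TEP VLAN
    # No "mtu" key: leave it unset so the global fabric MTU is used (step 8).
}

print(json.dumps(uplink_profile, indent=2))
```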

Configure NSX network for Edge

  1. Use the quick-start wizard again. For configuring the NSX network, use "Prepare NSX Transport Nodes".
  2. Choose the NSX Edge VM option.
  3. Create a new NSX Edge VM.
  4. Select the transport zone. It must be the same overlay TZ previously created for the ESXi hosts. If you require tenancy/DMZ separation, configure a separate TZ linking the ESXi nodes and the target Edge nodes.
  5. Create a new uplink profile for the Edge Overlay. We need a new profile because the Edge VM is attached directly to the TEP portgroup, so this profile must not use a VLAN ID.
  6. Configure the transport zone for the overlay.
    Use the same IP pool as for the hosts. Use eth0 (the bottom one in the list) as the uplink for the overlay.
  7. Create a new transport zone for peering. In my lab I only create a single TZ since the peering target only has a single VLAN. Give the switch a name different from the host TZ switch.
  8. Create a new uplink profile for peering. Again, since the portgroup assigned to the VM already carries the VLAN, we don't use a VLAN ID in this profile.
  9. Configure the TZ for peering. Make sure to point to the correct network interface.
  10. Add the Edge to an Edge cluster.
  11. Name the transport node.
  12. Create another Edge VM following the same approach.
  13. Review the Edge VM nodes.
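Steps 5 and 8 above both boil down to "same profile shape as the host one, but no VLAN tag", because the vSphere portgroup already tags the traffic. A sketch of the Edge overlay uplink profile, with all names assumed (the single-uplink failover teaming is one common lab choice, not mandated by this guide):

```python
import json

# Edge overlay uplink profile: the TEP portgroup already tags the VLAN,
# so transport_vlan stays 0 here (contrast with the host profile).
edge_overlay_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "edge-overlay-profile",  # assumed name
    "teaming": {
        "policy": "FAILOVER_ORDER",  # assumed teaming for a single vNIC uplink
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"}
        ],
    },
    "transport_vlan": 0,  # VLAN handled by the vSphere portgroup, not NSX
}

print(json.dumps(edge_overlay_profile, indent=2))
```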

Configure BGP Peering

  1. Create a new NSX segment to be used for the peering interfaces. Leave the connected gateway empty, use the uplink TZ as the transport zone, and enter the VLAN ID as zero.
  2. Create the T0 gateway and name it. HA mode can be Active-Active or Active-Standby; use Active-Standby if you want to utilize network functions (NAT, LB, VPN, FW). Then save it.
  3. Continue editing the T0 gateway by setting the peering interfaces. Configure a peering interface for each Edge node, and make sure each interface points to the correct Edge node.
  4. Configure BGP. Set the local AS number. If using Active-Standby, make sure to disable ECMP. Configure the route filter as necessary and tune the timers as necessary.
  5. Configure the BGP neighbors. Make sure to include both interfaces.
  6. Review the BGP neighbors, then click on the detail. Make sure the status is "Established".
  7. Configure route redistribution.
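Tying steps 4 and 5 back to the Quagga config at the top: the NSX side is local AS 65012 peering with the router's AS 65002. A sketch of the matching Policy API bodies follows; the T0 id, locale-services id, and the router's VLAN-210 address are assumptions, and only the AS numbers come from the Quagga sample.

```python
import json

# PATCH /policy/api/v1/infra/tier-0s/t0-gw/locale-services/default/bgp
# (t0-gw and "default" locale-services id are assumed)
bgp_config = {
    "local_as_num": "65012",  # NSX AS from the Quagga sample
    "ecmp": False,            # disable ECMP when the T0 is Active-Standby
    "enabled": True,
}

# PUT .../bgp/neighbors/physical-router -- one entry per peer
bgp_neighbor = {
    "neighbor_address": "192.168.210.1",  # assumed router address on VLAN 210
    "remote_as_num": "65002",             # physical router AS from the sample
}

print(json.dumps({"bgp": bgp_config, "neighbor": bgp_neighbor}, indent=2))
```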

Configure Test Segment Networking

  1. Create a Tier-1 gateway. If there is no multitenancy or special network requirement, use the existing Edge nodes. Configure the route advertisement.
  2. Configure a test segment. Make sure it points to the Tier-1, and enter the subnet as the gateway address with its prefix length.
  3. Check the portgroup in vSphere.
  4. Test ping, and check the route created in the router:
    root@vPodRouter-HOL:/etc/quagga# ping 172.16.0.1
    PING 172.16.0.1 (172.16.0.1) 56(84) bytes of data.
    64 bytes from 172.16.0.1: icmp_req=1 ttl=64 time=1.46 ms
    64 bytes from 172.16.0.1: icmp_req=2 ttl=64 time=0.938 ms
    ^C
    --- 172.16.0.1 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 1001ms
    rtt min/avg/max/mdev = 0.938/1.199/1.460/0.261 ms
    root@vPodRouter-HOL:/etc/quagga# netstat -rn
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags MSS Window irtt Iface
    0.0.0.0         192.168.0.1     0.0.0.0         UG      0 0         0 eth0
    10.10.20.0      0.0.0.0         255.255.255.0   U       0 0         0 eth1
    10.10.30.0      0.0.0.0         255.255.255.0   U       0 0         0 eth1
    10.20.20.0      0.0.0.0         255.255.255.0   U       0 0         0 eth1
    10.20.30.0      0.0.0.0         255.255.255.0   U       0 0         0 eth1
    172.16.0.0      192.168.210.3   255.255.255.0   UG      0 0         0 eth1.210
    192.168.0.0     0.0.0.0         255.255.255.0   U       0 0         0 eth0
    192.168.100.0   0.0.0.0         255.255.255.0   U       0 0         0 eth1
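The test segment from step 2 maps to a Policy API body like the one below, which also makes the "subnet as the gateway address" wording concrete: NSX expects the gateway IP together with its prefix length. The segment id and Tier-1 path are assumptions; the 172.16.0.0/24 addressing is consistent with the ping test above.

```python
import json

# PUT /policy/api/v1/infra/segments/test-segment (ids/paths assumed)
test_segment = {
    "display_name": "test-segment",
    "connectivity_path": "/infra/tier-1s/t1-gw",  # assumed Tier-1 id
    "subnets": [
        # The "subnet" field in the UI is the gateway IP plus prefix length.
        {"gateway_address": "172.16.0.1/24"}
    ],
}

print(json.dumps(test_segment, indent=2))
```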

The last part is to configure the firewall. This can be done on the Distributed Firewall (mostly E-W traffic) or on the Gateway Firewall (N-S traffic). We live in a world of creative attackers.
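As a teaser for that security part, a Distributed Firewall policy is just another Policy API object. Everything below (the policy id, the group path, the service path, and the rule itself) is a hypothetical example, not something configured in this guide:

```python
import json

# PUT /policy/api/v1/infra/domains/default/security-policies/app-policy
# (policy id, group, and service paths are all hypothetical)
dfw_policy = {
    "resource_type": "SecurityPolicy",
    "category": "Application",  # DFW rules for E-W traffic land in this category
    "rules": [
        {
            "resource_type": "Rule",
            "display_name": "allow-web",
            "source_groups": ["ANY"],
            "destination_groups": ["/infra/domains/default/groups/web-vms"],  # hypothetical group
            "services": ["/infra/services/HTTPS"],
            "action": "ALLOW",
        }
    ],
}

print(json.dumps(dfw_policy, indent=2))
```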