Monday, March 05, 2018

Configure Monitors in Netscaler

Monitors are used to poll services on the Netscaler so that services are only marked in service when they are actually able to respond to clients.

Configuration

To create a new monitor:
add lb monitor <monitor name> tcp <options>
For example, to create a monitor named "alfa_8012" that checks TCP port 8012:
add lb monitor alfa_8012 tcp -destport 8012
Default options include the polling interval, timeouts, retries, etc., as per the example below; these can all be adjusted as needed.
> show monitor alfa_8012
1)   Name.......:      alfa_8012  Type......:       TCP State....:   ENABLED
Standard parameters:
 Interval.........:            5 sec   Retries...........:                3
 Response timeout.:            2 sec   Down time.........:           30 sec 
 Reverse..........:               NO   Transparent.......:               NO
 Secure...........:               NO   LRTM..............:         DISABLED
 Action...........:   Not applicable   Deviation.........:            0 sec 
 Destination IP...:    Bound service
 Destination port.:             8012
 Iptunnel.........:               NO
 TOS..............:               NO   TOS ID............:                0
 SNMP Alert Retries:               0     Success Retries..:                1
 Failure Retries..:                0
Done
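The defaults can then be tuned with set lb monitor. A minimal sketch, assuming we want slower probing on the same monitor (the values are illustrative; the response timeout must stay below the interval):
set lb monitor alfa_8012 tcp -interval 10 -retries 5 -resptimeout 4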
After creating a new monitor, you must bind it to an existing service.
bind lb monitor alfa_8012 pdc2axa002_8012
To see the current state of the services bound to a monitor:
> show lb monbindings alfa_8012
       alfa_8012       Type : TCP      State : ENABLED
               1) pdc1axa001_8012 (10.80.16.76:8012)   Type : TCP      Service state : UP      Monitor state :  ENABLED
               2) pdc2axa002_8012 (10.160.16.32:8012)  Type : TCP      Service state : UP      Monitor state :  ENABLED
Done
> 
Finally, you can see which monitor is bound to a given service:
> show service pdc1axa001_8012
       pdc1axa001_8012 (10.80.16.76:8012) - TCP
       State: UP
       Last state change was at Mon Jun 27 09:46:39 2016 
       Time since last state change: 7 days, 01:07:14.390
       Server Name: pdc1axa001 
       Server ID : None        Monitor Threshold : 0
       Max Conn: 0     Max Req: 0      Max Bandwidth: 0 kbits
       Use Source IP: NO
       Client Keepalive(CKA): NO
       Access Down Service: NO
       TCP Buffering(TCPB): NO
       HTTP Compression(CMP): NO
       Idle timeout: Client: 9000 sec  Server: 9000 sec
       Client IP: DISABLED 
       Cacheable: NO
       SC: OFF
       SP: ON
       Down state flush: ENABLED
       Appflow logging: ENABLED
        TD: 0
1)      Monitor Name: alfa_8012
               State: UP       Weight: 1       Passive: 0
               Probes: 95      Failed [Total: 0 Current: 0]
               Last response: Success - TCP syn+ack received.
               Response Time: 0.0 millisec
Done

Netscaler Documentation

Monday, February 26, 2018

Configure Persistency in Netscaler

To configure basic source IP based persistence (or "stickiness"):
set lb vserver dc2_lb_vs_bobj_pre_443 -persistenceType SOURCEIP
show lb vserver dc2_lb_vs_bobj_pre_443
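A persistence timeout (in minutes) can be set at the same time, and the resulting persistence entries inspected. A sketch using the standard options (verify the show command name on your firmware version):
set lb vserver dc2_lb_vs_bobj_pre_443 -persistenceType SOURCEIP -timeout 30
show lb persistentSessions dc2_lb_vs_bobj_pre_443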


Link below is to the Citrix Support page for configuring Persistence

Thursday, February 22, 2018

Useful Diagnostics

IPERF3

Iperf3 (https://github.com/esnet/iperf) - useful for testing site to site throughput
Example
To configure one end as a server
[root@PDC2MGT002 ~]# iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
To configure one end as a client, and to begin throughput testing.
[root@PDC1MGT002 ~]# iperf3 -c pdc2mgt002 -i1
Connecting to host pdc2mgt002, port 5201
[  4] local 10.80.139.17 port 59378 connected to 10.160.139.17 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  97.3 MBytes   816 Mbits/sec   51   1.22 MBytes       
[  4]   1.00-2.00   sec  93.8 MBytes   786 Mbits/sec    0   1.30 MBytes       
[  4]   2.00-3.00   sec  92.5 MBytes   776 Mbits/sec    0   1.36 MBytes       
[  4]   3.00-4.00   sec  66.2 MBytes   556 Mbits/sec    0   1.40 MBytes       
[  4]   4.00-5.00   sec  83.8 MBytes   703 Mbits/sec    0   1.43 MBytes       
[  4]   5.00-6.00   sec   106 MBytes   891 Mbits/sec    0   1.44 MBytes       
[  4]   6.00-7.00   sec   105 MBytes   881 Mbits/sec    0   1.45 MBytes       
[  4]   7.00-8.00   sec   108 MBytes   902 Mbits/sec    0   1.45 MBytes       
[  4]   8.00-9.00   sec   106 MBytes   891 Mbits/sec    0   1.45 MBytes       
[  4]   9.00-10.00  sec   109 MBytes   912 Mbits/sec    0   1.45 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec   967 MBytes   811 Mbits/sec   51             sender
[  4]   0.00-10.00  sec   965 MBytes   809 Mbits/sec                  receiver
iperf Done.
[root@PDC1MGT002 ~]#
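Two other iperf3 modes are often useful: -R reverses the test so the server sends to the client, and -u with -b runs a UDP test at a target bandwidth. A sketch against the same server:
[root@PDC1MGT002 ~]# iperf3 -c pdc2mgt002 -R
[root@PDC1MGT002 ~]# iperf3 -c pdc2mgt002 -u -b 100M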

Scamper

Scamper (http://www.wand.net.nz/scamper/) - useful for testing site to site latency and other things such as per hop MTU reporting
Example
To show traceroute and MTU of each hop to host 10.129.1.254
[root@PDC2MGT002 scamper-cvs-20110421]# scamper -c "trace -M" -i 10.129.1.254
traceroute from 10.160.139.17 to 10.129.1.254
 1  10.160.139.253  0.491 ms [mtu: 1500]
 2  10.161.230.251  0.564 ms [mtu: 1500]
 3  100.65.0.2  3.831 ms [mtu: 1422]
 4  10.221.1.253  3.789 ms [mtu: 1422]
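Scamper can also vary the traceroute probe method. A sketch of a Paris traceroute using ICMP probes against the same host (the -P option as per the scamper docs; verify on your build):
[root@PDC2MGT002 scamper-cvs-20110421]# scamper -c "trace -P icmp-paris" -i 10.129.1.254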

ESNET

http://fasterdata.es.net/performance-testing/network-troubleshooting-tools/ - A useful site with reference material on some of these tools.

MTR

mtr combines the functionality of the 'traceroute' and 'ping' programs in a single network diagnostic tool.
As mtr starts, it investigates the network connection between the host mtr runs on and a user-specified destination host. After it determines the address of each network hop between the machines, it sends a sequence of ICMP ECHO requests to each one to determine the quality of the link to each machine. As it does this, it prints running statistics about each machine.
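For example, a non-interactive report over 10 cycles against the host used above (standard mtr flags):
mtr --report --report-cycles 10 10.129.1.254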

Monday, February 19, 2018

Add New Load Balanced Service To Netscaler

For this example we have two web servers, rdc2bob010 & rdc2bob011, configured on the addresses 10.160.16.163 & 10.160.16.164.
We have allocated a Virtual IP Address (VIP) to the load balanced service of 10.160.20.233.
Load Balancing is included with the Standard Edition of NetScaler and with NetScaler Express, the free licence for the VPX; as long as you have a valid license installed you will be able to use the load balancing feature.

Enable Load Balancing Feature

If not already enabled:
enable feature LB
show feature | grep LB
If you are not sure of the feature name to enable, use the "show feature" command first and review the output.
With load balancing now enabled we can move on to creating the necessary elements to load balance.

Elements of Load Balancing

In order to set up basic load balancing on the Netscalers, you need to:
1. Add Servers - the physical servers that connections will be pushed to
2. Add Services - the logical services that reside on the physical servers - HTTP, HTTPS, FTP, etc
3. Create a Virtual Server - the virtual server that clients connect to and that gets load balanced
4. Bind Services to Virtual Servers - Which services reside on the selected Virtual Server

Creating Load Balanced Services

Create entries for backend servers

First we need to create entries for the back-end servers that we will be load balancing across. We have two web servers on our private network, so we will create records for them.
add server rdc2bob010 10.160.16.163
add server rdc2bob011 10.160.16.164
show server
[Image: Netscaler show server.png]

Create service entries

With the servers created, we now need to create entries for the services to load balance (in this case HTTP) and link them to the servers:
add service rdc2bob010_443 rdc2bob010 HTTP 443
add service rdc2bob011_443 rdc2bob011 HTTP 443
show service
[Image: Netscaler show service.png]

Create virtual server entry

Next, we create the virtual server for load balancing:
The virtual server will be assigned an IP address; often this would be a public address, and the DNS record for the website would point to this address. The load balancing method is set with the -lbMethod option; valid values include ROUNDROBIN, LEASTCONNECTION, LEASTRESPONSETIME, URLHASH and DOMAINHASH. The default is LEASTCONNECTION, which is what the entry below uses since no method is specified.
add lb vserver dc2_lb_vs_bobj_pre_443 http 10.160.20.233 443
show lb vserver dc2_lb_vs_bobj_pre_443
[Image: Netscaler show lb vserver.png]
Part of the output of the show lb vserver command shows the current state of the load balanced services; this way we can check which services are available to load balance across.
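If you later want a different method, for example round robin, it can be changed after creation. A minimal sketch using the -lbMethod parameter:
set lb vserver dc2_lb_vs_bobj_pre_443 -lbMethod ROUNDROBIN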

Bind services to Virtual Server

To actually make all this work, you need to bind the services to the lb vserver.
bind lb vserver dc2_lb_vs_bobj_pre_443 rdc2bob010_443
bind lb vserver dc2_lb_vs_bobj_pre_443 rdc2bob011_443
Now when we access the virtual server, 10.160.20.233, with a web browser, our connection will be assigned to a server as per LEASTCONNECTION. The DNS record for the website that we need to access should point to this address, the address of the vserver, and not to the individual web server records.
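Once traffic is flowing, you can watch how connections are distributed across the bound services with the stat command (a sketch, assuming the same vserver name):
stat lb vserver dc2_lb_vs_bobj_pre_443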

Sunday, February 18, 2018

Palo Alto HA Failover

Failover testing

To failover a device from the functional role to suspended role you’ll need to perform the following steps:
1. From within the GUI select Device > High-Availability.
2. Towards the top right of the user interface you’ll notice one of the following: if the FW is currently the master, ‘Suspend’ will be displayed; if the FW is currently the slave, ‘Functional’ will be displayed.
3. To force the current master into slave mode you’ll need to click suspend or use the following command:
>request high-availability state suspend
4. This will automatically put the slave device into functional mode. You may miss 1 ping during this transition.
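To confirm the roles after the failover, you can run the state command from the CLI on either device (also listed in the reference below):
>show high-availability state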

CLI Reference

Request Commands
Force configuration and session synchronisation to peer device
>request high-availability sync-to-remote
Fail to peer and suspend current device
>request high-availability state suspend
Re-enable HA on suspended system
>request high-availability state functional
Show Commands
Shows the high-availability information on current device
>show high-availability all
Shows the control link statistics
>show high-availability control-link
Shows the high-availability state information
>show high-availability state
Shows the synchronisation state to the peer device
>show high-availability state-synchronisation

Thursday, February 15, 2018

Palo Alto Cheat Sheet

Palo Alto CLI Commands

Device Management

If you want to:
Show general system health information.
show system info
Show percent usage of disk partitions.
show system disk-space
Show the maximum log file size.
show system logdb-quota
Show running processes.
show system software status
Show processes running in the management plane.
show system resources
Show resource utilization in the dataplane.
show running resource-monitor
Show the licenses installed on the device.
request license info
Show when commits, downloads, and/or upgrades are completed.
show jobs processed
Show session information.
show session info
Show information about a specific session.
show session id <session-id>
Show the running security policy.
show running security-policy
Show the authentication logs.
less mp-log authd.log
Restart the device.
request restart system
Show the administrators who are currently logged in to the web interface, CLI, or API.
show admins
Show the administrators who can access the web interface, CLI, or API, regardless of whether those administrators are currently logged in. When you run this command on the firewall, the output includes both local administrators and those pushed from a Panorama template.
show admins all
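Session lookups can usually be narrowed with filters rather than paging through the full table; a sketch (the destination IP is an example, and exact filter syntax can vary by PAN-OS version):
show session all filter destination 10.1.1.1
show session all filter application ssl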

Tuesday, February 13, 2018

Misc Netscaler Bits


Admin Partitions vs SDX

  • Admin partitions are a way of carving up a NetScaler at the application configuration and administrative layer. The physical resources (CPU, memory, disk, NICs) and the underlying OS/firmware are shared across all partitions, but the application configuration is separated and traffic flowing through the partitions can use their own VLANs. This provides the ability for an organisation to run multiple sets of NetScaler configuration on the same physical or virtual appliance. Licencing is performed on the ‘default’ partition and the available features are inherited by all of the ‘sub’ partitions. A NetScaler firmware upgrade would impact ALL partitions. You do have some controls on resource utilisation for each partition; for example, you can limit the amount of memory used by a single partition and you can also restrict the network bandwidth (see the sketch after this list).

  • The SDX platform is a single physical appliance that provides true multi-tenancy by hosting multiple separate virtual instances of NetScaler. The difference with SDX is that all virtual instances are totally separated at an OS and firmware level. Therefore, each instance can run its own version of NetScaler firmware independently from other instances. This is ideal if you have a test/dev environment that differs from production. Furthermore, the physical resources available on the appliance can be dedicated to individual instances (CPU, memory, network interfaces). Note, physical disks/SSDs are shared, but there is complete isolation of data across virtual instances. One note: it is not possible to use a standalone NetScaler VPX licence on an SDX virtual instance. SDX virtual instances are licenced by way of the number of instances purchased with the SDX appliance itself. Also, an SDX licence contains all NetScaler Platinum features by default.
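As a rough sketch of the per-partition resource controls mentioned above (the partition name is hypothetical; on recent firmware -maxBandwidth is in Kbps and -maxMemLimit in MB, so check the docs for your version):
add ns partition partition-hr -maxBandwidth 10240 -maxMemLimit 10
bind ns partition partition-hr -vlan 100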

Some further reading on Admin Partitions and SDX:

Admin Partitions

SDX

Admin Partition Restrictions
There are a few restrictions that you should be aware of when using Admin partitions:
  • Audit Logging - In a partitioned NetScaler, you cannot have specific log servers for a specific partition. The servers that are defined at the default partition are applicable across all admin partitions. Therefore, to view the audit logs for a specific partition, you will have to use the "show audit messages" command. The users of an admin partition do not have access to the shell and therefore are not able to access the log files.
  • VRRP - On a partitioned NetScaler appliance, Virtual Router Redundancy Protocol (VRRP) is supported on non-shared VLANs only. This protocol is blocked on a shared VLAN (tagged or untagged) bound to a default or any administrative partition.
  • Networking - With respect to tunnel configuration in a partition, an admin cannot create IPSec tunnels or GRE tunnels with an IPSec profile. An admin can create IPIP tunnels and GRE tunnels with ‘none’ as the IPSec profile.


NetScaler Best Practices

Here is a good document on NetScaler security best practices:


Also, as mentioned in the meeting, NetScaler MAS will assess the configuration of your NetScaler instances and make recommendations. It's probably best if you download and install it to have a look, because the recommendations are based on your own specific config. To show you an example, I have taken a screenshot of what it checks against here:


NetScaler Deployment Options

There is a lot of documentation on our main docs site – https://docs.citrix.com including details of the different deployment topologies we briefly discussed in the meeting (one-armed and inline). See below:

In terms of resiliency, there are 3 main options. Details below:


VPX Resource Requirements

You also asked about resource allocations for VPX appliances. The datasheet linked below details the required number of vCPU’s and Memory for each size of VPX:


NetScaler MAS Overview (Management and Analytics System)

NetScaler MAS (Management and Analytics System) is something we are very keen to see our customers using. If you have more than one NetScaler, it is well worth using MAS. It provides you with the ability to monitor the health, performance and security of all of your apps, leverage advanced analytics around the delivery of your apps, and manage your individual NetScaler instances whether they sit on-prem or in the cloud.

You can use the full capabilities for free if you monitor up to 30 vservers. Even above 30 vservers, all of the ‘fleet management’ type functionality remains free. It's well worth taking a look at to assess the potential. It currently runs as a VM on the usual hypervisors, and we have very recently introduced MAS as a service running in Citrix Cloud. You can sign up for a 30 day free trial by heading to https://cloud.citrix.com

As discussed, MAS would be a great way for you guys to manage your separate NetScaler appliances across diverse business units from a single place.

If you want to download and start using the on-prem version, head to our downloads page at https://www.citrix.com and log in to access the installer files. Let me know if you require any help downloading and installing MAS.



Licencing

There are a couple of new licencing features which you may be interested in knowing more about as mentioned in the meeting:

Check In Check Out Licencing (CICO)
This is applicable to VPX appliances only. Instead of applying a licence to an individual VPX instance, you instead load the same licence file into NetScaler MAS and then use MAS as the licence server to check licences in and out to your individual VPX instances. The advantage of doing this is that you can manage your licences from a central tool and you can move licences around as required.

Pooled Capacity
This is a totally new licencing model that would replace the perpetual licencing model you use today for your existing NetScalers. With Pooled Capacity, you buy a pool of capacity and a pool of instances. You can then allocate capacity to your instances how you see fit. Again, this is all controlled from the licence server on NetScaler MAS. This model is applicable across all NetScaler form-factors – MPX, SDX and VPX.


Orchestration – VMware NSX and Cisco ACI

MAS supports integration with a number of SDN controllers. See below for more details: