
Creating CA, server, and client certificates using OpenSSL for SSL VPN

Prerequisites:

1.      Change to the working directory: cd /opt/edoceo/etc/ssl
2.      Create the OpenSSL root CA configuration file (the full openssl.cnf is reproduced at the end of this article) and copy it to /opt/edoceo/etc/ssl/openssl.cnf.
3.      Create the following folders and files:
mkdir certs crl newcerts private csr
chmod 700 private
touch index.txt
echo 1000 > serial

Root CA certificate creation:

1.      Create the root key:
openssl genrsa -aes256 -out private/ca.key.pem 4096
chmod 400 private/ca.key.pem
2.      Create the root certificate:
openssl req -config openssl.cnf -key private/ca.key.pem -new -x509 -days 7300 -sha256 -extensions v3_ca -out certs/ca.cert.pem
chmod 444 certs/ca.cert.pem
3.      Verify the root certificate:
openssl x509 -noout -text -in certs/ca.cert.pem

Server certificate creation:

1.      Create a key
openssl genrsa -aes256 -out private/www.itzecurity.in.key.pem 2048
chmod 400 private/www.itzecurity.in.key.pem
2.      Create a certificate
openssl req -config openssl.cnf -key private/www.itzecurity.in.key.pem -new -sha256 -out csr/www.itzecurity.in.csr.pem
openssl ca -config openssl.cnf -extensions server_cert -days 375 -notext -md sha256 -in csr/www.itzecurity.in.csr.pem -out certs/www.itzecurity.in.cert.pem
chmod 444 certs/www.itzecurity.in.cert.pem
3.      Verify the certificate
openssl x509 -noout -text -in certs/www.itzecurity.in.cert.pem
openssl verify -CAfile certs/ca.cert.pem certs/www.itzecurity.in.cert.pem
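Tip (not part of the original steps): modern browsers validate the subjectAltName (SAN) extension rather than the Common Name. A minimal way to include one, assuming www.itzecurity.in is your server hostname, is to extend the [ server_cert ] section of the openssl.cnf shown at the end of this article before signing:

[ server_cert ]
# ...existing extensions, plus:
subjectAltName = DNS:www.itzecurity.in, DNS:itzecurity.in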



Client certificate creation:

1.      Create the client key (note: 1024-bit RSA is weak by modern standards; consider 2048)
openssl genrsa -des3 -out private/client.key.pem 1024
chmod 400 private/client.key.pem
2.      Create a certificate signing request (CSR)
openssl req -key private/client.key.pem -new -out csr/client.csr.pem
chmod 400 csr/client.csr.pem
3.      Create a certificate for client
openssl x509 -req -days 365 -in csr/client.csr.pem -CA certs/ca.cert.pem -CAkey private/ca.key.pem -set_serial 02 -out certs/user1.cert.pem
chmod 400 certs/user1.cert.pem
4.      Verify the certificate
openssl x509 -noout -text -in certs/user1.cert.pem
openssl verify -CAfile certs/ca.cert.pem certs/user1.cert.pem
5.      Convert to PKCS12
openssl pkcs12 -export -in certs/user1.cert.pem -inkey private/client.key.pem -certfile certs/ca.cert.pem -name "user1" -out certs/user1.p12

openssl pkcs12 -in certs/user1.p12 -noout -info
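The -info check above summarizes the bundle. If you later need to extract the pieces back out of the PKCS#12 file, for example for a client that expects separate PEM files, something like the following works (output file names are illustrative):

openssl pkcs12 -in certs/user1.p12 -clcerts -nokeys -out certs/user1-extracted.cert.pem
openssl pkcs12 -in certs/user1.p12 -nocerts -nodes -out private/user1-extracted.key.pem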


Compressing the files

sudo tar cvzf sslramesh.gz /opt/edoceo/etc/ssl

sudo cp sslramesh.gz /var/www/html/ssl


openssl.cnf

# OpenSSL root CA configuration file.
# Copy to '/opt/edoceo/etc/ssl/openssl.cnf'.

[ ca ]
# `man ca`
default_ca = CA_default

[ CA_default ]
# Directory and file locations.
dir               = /opt/edoceo/etc/ssl
certs             = $dir/certs
crl_dir           = $dir/crl
new_certs_dir     = $dir/newcerts
database          = $dir/index.txt
serial            = $dir/serial
RANDFILE          = $dir/private/.rand

# The root key and root certificate.
private_key       = $dir/private/ca.key.pem
certificate       = $dir/certs/ca.cert.pem

# For certificate revocation lists.
crlnumber         = $dir/crlnumber
crl               = $dir/crl/ca.crl.pem
crl_extensions    = crl_ext
default_crl_days  = 30

# SHA-1 is deprecated, so use SHA-2 instead.
default_md        = sha256

name_opt          = ca_default
cert_opt          = ca_default
default_days      = 375
preserve          = no
policy            = policy_strict

[ policy_strict ]
# The root CA should only sign intermediate certificates that match.
# See the POLICY FORMAT section of `man ca`.
countryName             = match
stateOrProvinceName     = match
organizationName        = match
organizationalUnitName  = optional
commonName              = supplied
emailAddress            = optional

[ policy_loose ]
# Allow the intermediate CA to sign a more diverse range of certificates.
# See the POLICY FORMAT section of the `ca` man page.
countryName             = optional
stateOrProvinceName     = optional
localityName            = optional
organizationName        = optional
organizationalUnitName  = optional
commonName              = supplied
emailAddress            = optional

[ req ]
# Options for the `req` tool (`man req`).
default_bits        = 2048
distinguished_name  = req_distinguished_name
string_mask         = utf8only

# SHA-1 is deprecated, so use SHA-2 instead.
default_md          = sha256

# Extension to add when the -x509 option is used.
x509_extensions     = v3_ca

[ req_distinguished_name ]
# See <https://en.wikipedia.org/wiki/Certificate_signing_request>.
countryName                     = Country Name (2 letter code)
stateOrProvinceName             = State or Province Name
localityName                    = Locality Name
0.organizationName              = Organization Name
organizationalUnitName          = Organizational Unit Name
commonName                      = Common Name
emailAddress                    = Email Address

# Optionally, specify some defaults.
countryName_default             = IN
stateOrProvinceName_default     = Tamilnadu
localityName_default            = Chennai
0.organizationName_default      = Itzecurity Ltd
organizationalUnitName_default  =
emailAddress_default            =

[ v3_ca ]
# Extensions for a typical CA (`man x509v3_config`).
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = critical, CA:true
keyUsage = critical, digitalSignature, cRLSign, keyCertSign

[ v3_intermediate_ca ]
# Extensions for a typical intermediate CA (`man x509v3_config`).
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = critical, CA:true, pathlen:0
keyUsage = critical, digitalSignature, cRLSign, keyCertSign

[ usr_cert ]
# Extensions for client certificates (`man x509v3_config`).
basicConstraints = CA:FALSE
nsCertType = client, email
nsComment = "OpenSSL Generated Client Certificate"
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
keyUsage = critical, nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, emailProtection

[ server_cert ]
# Extensions for server certificates (`man x509v3_config`).
basicConstraints = CA:FALSE
nsCertType = server
nsComment = "OpenSSL Generated Server Certificate"
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer:always
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth

[ crl_ext ]
# Extension for CRLs (`man x509v3_config`).
authorityKeyIdentifier=keyid:always

[ ocsp ]
# Extension for OCSP signing certificates (`man ocsp`).
basicConstraints = CA:FALSE
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
keyUsage = critical, digitalSignature
extendedKeyUsage = critical, OCSPSigning

Reverse proxy web caching and SSL offloading for an Internet web server
Supported version: FortiOS 5.4.x

In this configuration, clients on the Internet use HTTP and HTTPS to browse to a web server that is behind a FortiGate unit. A policy added to the FortiGate unit forwards the HTTP traffic to the web server. The policy also offloads HTTPS decryption and encryption from the web server so the web server only sees HTTP traffic.

The FortiGate unit also caches HTTP and HTTPS pages from the web server so when users access cached pages the web server does not see the traffic. Replies to HTTPS sessions are encrypted by the FortiGate unit before returning to the clients.

In this configuration, the FortiGate unit is operating as a web cache in reverse proxy mode. Reverse proxy caches can be placed directly in front of a web server. Web caching on the FortiGate unit reduces the number of requests that the web server must handle, therefore leaving it free to process new requests that it has not serviced before.

Using a reverse proxy configuration:
1.      Avoids the capital expense of additional web servers by increasing the capacity of existing servers
2.      Serves more requests for static content from web servers
3.      Serves more requests for dynamic content from web servers
4.      Reduces operating expenses including the cost of bandwidth required to serve content
5.      Accelerates web server response times and page download times for end users.

When planning a reverse proxy implementation, the web server's content should be written so that it is “cache aware” to take full advantage of the reverse proxy cache.

In reverse proxy mode, the FortiGate unit functions more like a web server for clients on the Internet. Replicated content is delivered from the proxy cache to the external client without exposing the web server or the private network residing safely behind the firewall.

Reverse proxy web caching and SSL offloading for an Internet web server using static one-to-one virtual IPs



General configuration steps

This section breaks down the configuration for this example into smaller procedures. For best results, follow the
procedures in the order given:

·         Configure the FortiGate unit as a reverse proxy web cache server.
·         Configure the FortiGate unit for SSL offloading of HTTPS traffic.
·         Add an SSL server to offload SSL encryption and decryption for the web server.

Also note that if you perform any additional actions between procedures, your configuration may have different results.

·         Enter the following command to add a static NAT virtual IP that translates destination IP addresses from 192.168.10.1 to 172.10.20.30 (and does not translate destination ports):
config firewall vip
edit Reverse_proxy_VIP
set extintf port2
set type static-nat
set extip 192.168.10.1
set mappedip 172.10.20.30
end
·         Enter the following command to add a port2 to port1 security policy that accepts HTTP and HTTPS traffic from the Internet. Enable web caching and HTTPS web caching. Do not select security profiles. Set the destination address to the virtual IP. You do not have to enable NAT.
config firewall policy
edit 0
set srcintf port2
set srcaddr all
set dstintf port1
set dstaddr Reverse_proxy_VIP
set schedule always
set service HTTP HTTPS
set action accept
set webcache enable
set webcache-https ssl-server
end
To add an SSL server to offload SSL encryption and decryption for the web server

·         Place a copy of the web server's certificate (file name Rev_Proxy_Cert_1.crt) in the root folder of a TFTP server.
·         Enter the following command to import the web server's certificate from the TFTP server. The IP address of the TFTP server is 10.31.101.30:

execute vpn certificate local import tftp Rev_Proxy_Cert_1.crt 10.31.101.30

The certificate key size must be 1024 or 2048 bits. 4096-bit keys are not supported.
·         From the CLI, enter the following command to add the SSL server. The SSL server ip must match the destination address of the SSL traffic after being translated by the virtual IP (172.10.20.30) and the SSL server port must match the destination port of the SSL traffic (443). The SSL server operates in half mode since it performs a single-step conversion (HTTPS to HTTP or HTTP to HTTPS).

config firewall ssl-server
edit rev_proxy_server
set ip 172.10.20.30
set port 443
set ssl-mode half
set ssl-cert Rev_Proxy_Cert_1
end

·         Configure other ssl-server settings that you may require for your configuration.

How to configure SSL Inspection for Chrome browser and delete HSTS from browsers

HTTP Strict Transport Security (HSTS) is a web security policy mechanism which helps to protect websites against protocol downgrade attacks and cookie hijacking. It allows web servers to declare that web browsers (or other complying user agents) should only interact with it using secure HTTPS connections, and never via the insecure HTTP protocol. HSTS is an IETF standards track protocol and is specified in RFC 6797.

The Google Chrome browser performs HTTP Strict Transport Security (HSTS) checks for client protection. This means the user may get certificate errors in Chrome when SSL inspection is enabled, which may cause browser activity to stop and lead to session disconnection.

When doing SSL inspection, the browser may start displaying a certificate warning each time a user connects to an HTTPS site. The reason for this warning is that the certificates received by the browser are signed by the FortiGate, a Certificate Authority (CA) that the browsers do not know and trust.

There are three ways of avoiding this warning:

·         The first option is to download the certificate used on the SSL Proxy and install it in all the workstations as a public authority.
·         The second option is to generate a new SSL Proxy certificate from a private CA.
·         The third option is to purchase a suitable certificate from a public CA.

SSL Inspection for Chrome browser.

HSTS is a security feature of the Google Chrome browser. It is designed to detect man-in-the-middle SSL attacks by making sure that any certificate presented when accessing a Google resource is signed by a specific CA. If it detects any other CA, it will simply refuse to continue the SSL handshake and prevent access to the website.


  • The only option that will allow content of the traffic to be inspected on Google Chrome is to replace the certificate on the SSL proxy with one that will satisfy the security settings.
  • Another option is to disable the settings causing this. HSTS can be turned off in Chrome, but this is not an option in all environments.
  • The last option is to bypass SSL Inspection of that traffic.
Other servers can have their own requirements on the certificates that are used for SSL.


How to clear HSTS from your browser

If you enabled HSTS on your site, you'll have to clear it from your browser after disabling it. Otherwise, your site will keep loading over SSL.

In Chrome:

·         In the address bar, type “chrome://net-internals/#hsts”.
·         Type the domain name in the text field below “Delete domain”.
·         Click the “Delete” button.
·         Type the domain name in the text field below “Query domain”.
·         Click the “Query” button.
·         Your response should be “Not found”.

Deleting HSTS in Firefox (manual method):

1.      Type about:support in Firefox.
2.      Click Show in Folder, which should open your profile folder.
3.      Find the file called SiteSecurityServiceState.txt and open it.
4.      Find the entry for your site URL and remove it. An entry looks something like: github.com:HSTS 120 17242 1521194647604,1,1
5.      Make sure Firefox is closed before you edit the file, so that it does not overwrite your change.

How to Delete HSTS Settings in Firefox:

We will cover two different methods for deleting HSTS settings in Firefox. The method below should work in most cases, and the manual method above is available if needed.

·         Close all open tabs in Firefox.
·         Open the full History window with the keyboard shortcut Ctrl + Shift + H (Cmd + Shift + H on Mac). You must use this window or the sidebar for the below options to be available.
·         Find the site you want to delete the HSTS settings for – you can search for the site at the upper right if needed.
·         Right-click the site in the list of items and click Forget About This Site. This should clear the HSTS settings (and other cache data) for that domain.
·         Restart Firefox and visit the site. You should now be able to visit the site over HTTP/broken HTTPS. If these instructions did not work, you can try the manual method described above.

Firefox stores HSTS entries in this file with their expiration periods. Removing the entry should allow you to hit the HTTP URL. To further prevent it from returning, you can change the file's permission to read-only.
NOTE: This will not work for well-known sites like Google, as those lists are preloaded by browsers. It works fine for others.


Authenticating SSL VPN users using LDAP

  1. Registering the LDAP server on the FortiGate
  2. Importing LDAP users
  3. Creating the SSL VPN user group
  4. Creating the SSL address range
  5. Configuring the SSL VPN tunnel
  6. Creating security policies



Registering the LDAP server on the FortiGate

·         Go to User & Device > Authentication > LDAP Servers and select Create New.
·         Enter the LDAP server's FQDN or IP in Server Name/IP. If necessary, change the Server Port Number (the default is 389).
·         Enter the Common Name Identifier. Most LDAP servers use “cn” by default.
·         In the Distinguished Name field, enter the base distinguished name for the server, using the correct X.500 or LDAP format.
·         Set the Bind Type to Regular, and enter the LDAP administrator’s distinguished name and password for User DN and Password.
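For reference, the equivalent CLI configuration looks roughly like this (server address, DN, and credentials are placeholders):

config user ldap
edit "LDAP-Server"
set server "10.10.10.10"
set cnid "cn"
set dn "dc=example,dc=com"
set type regular
set username "cn=administrator,cn=users,dc=example,dc=com"
set password <password>
next
end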


Importing LDAP users

·         Go to User & Device > User > User Definition, and create a new user, selecting Remote LDAP User.
·         Choose your LDAP Server from the dropdown list. You will be presented with a list of user accounts, filtered by the LDAP Filter to include only common user classes.

Note:

·         With a properly configured LDAP server, user and authentication data can be maintained independently of the FortiGate, accessed only when a remote user attempts to connect through the SSL VPN tunnel.

·         Instead of fetching individual users, it is always recommended to create a VPN group in AD and map it to a FortiGate group, as sketched below.
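A sketch of that group mapping on the FortiGate side (all names, including the AD group's DN, are placeholders):

config user group
edit "SSLVPN-Group"
set member "LDAP-Server"
config match
edit 1
set server-name "LDAP-Server"
set group-name "CN=VPN-Users,OU=Groups,DC=example,DC=com"
next
end
next
end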

About Policy Based Routing

Traditional routing is destination-based, meaning packets are routed based on destination IP address. However, it is difficult to change the routing of specific traffic in a destination-based routing system. With Policy Based Routing (PBR), you can define routing based on criteria other than destination network—PBR lets you route traffic based on source address, source port, destination address, destination port, protocol, or a combination of these.

Policy Based Routing can implement QoS by classifying and marking traffic at the network edge, and then using PBR throughout the network to route marked traffic along a specific path. This permits routing of packets originating from different sources to different networks, even when the destinations are the same, and it can be useful when interconnecting several private networks.

Some applications of policy based routing are:

1.      Equal-Access and Source-Sensitive Routing
2.      Quality of Service
3.      Cost Saving
4.      Load Sharing

Equal-Access and Source-Sensitive Routing - Below is an example of allowing internet access via ISP 1 or ISP 2 based on source IPs.


Configuration Example:

In this setup, I have two inside subnets that access the internet via different outside interfaces.
object network LanSubnet1_192.168.1.0
subnet 192.168.1.0 255.255.255.0

object network LanSubnet2_192.168.2.0
subnet 192.168.2.0 255.255.255.0

object network LanSubnet1_192.168.1.0
nat (LanSubnet1,OUTSIDE1) dynamic interface

object network LanSubnet2_192.168.2.0
nat (LanSubnet2,OUTSIDE2) dynamic interface
access-list LanSubnet1_internet extended permit ip 192.168.1.0 255.255.255.0 any
access-list LanSubnet2_internet extended permit ip 192.168.2.0 255.255.255.0 any

route-map PBR-MAP permit 10
match ip address LanSubnet1_internet
set interface OUTSIDE1        // this is not mandatory
set ip next-hop x.x.x.x

route-map PBR-MAP permit 20
match ip address LanSubnet2_internet
set interface OUTSIDE2          // this is not mandatory
set ip next-hop y.y.y.y

route-map PBR-MAP permit 30
set interface null0

interface GigabitEthernet0/0
nameif LanSubnet1
policy-route route-map PBR-MAP

interface GigabitEthernet0/1
nameif LanSubnet2
policy-route route-map PBR-MAP

route OUTSIDE1 0 0 <ISP 1> 1

route OUTSIDE2 0 0 <ISP 2> 2
Debug Commands

debug policy-route
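Beyond debug policy-route, you can sanity-check the route-map and simulate a flow with packet-tracer (interface name, addresses, and ports below are placeholders):

show route-map PBR-MAP
packet-tracer input LanSubnet1 tcp 192.168.1.10 12345 8.8.8.8 80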


Using Fiddler to debug SAML tokens issued from ADFS

Many applications that federate want to leverage certain attributes like nameid (nameidentifier), but the problem is that the format differs wildly from one application to another. One application might use a unique value like an employee ID, another a UPN, another an email address, and so on. Or maybe it isn't an attribute at all: perhaps you are using SHA-1 as your signature hashing algorithm while the application expects MD5.
In such cases you may not be sure what you are sending to the application, and you may be looking to the vendor to help you understand what to change in ADFS; or, if you are working on a custom application, you may need help debugging your claims rules to integrate with it. In this article, I will show you how to use Fiddler to capture the SAML tokens issued by ADFS so you can validate which attributes and values you are passing to the federated application.

Steps to install Fiddler:
1.      Download Fiddler from https://www.telerik.com/download/fiddler
2.      Install Fiddler on your local machine
3.      Click Cancel if prompted about AppContainers
Steps to perform the SAML trace:
1.      Open Fiddler
2.      With Fiddler open click on Tools -> Telerik Fiddler Options.
3.      Click on the HTTPS tab and check Decrypt HTTPS traffic and click OK


Note: you may be prompted to trust a certificate. You must trust the certificate so that Fiddler can intercept and decrypt your encrypted traffic. Fiddler does not capture traffic once the application is closed.

4.      Close Fiddler
5.      Open Fiddler
6.      Open ZAPP
7.      Drag the Crosshair icon onto ZAPP.
8.      Select the X icon with a dropdown and click Remove all to clear your trace.
9.      Try to log in to ZAPP by entering credentials.
10.  Click File > Capture Traffic to stop capturing in Fiddler.
11.  Within your logs, look for the last 200 responses from your ADFS server before being redirected to your application.
12.  Click on the Inspectors tab, and select the Raw tab at the bottom and copy the value from the hidden input tag with the name of wresult
13.  Paste the encoded value into an HTML encoder/decoder's Encoded text box and click Decode.
OR
Double-click the network activity and select the Inspectors tab, then Raw. Copy the SAML response line in its entirety and paste it into a text file.

14.  Copy and paste the SAML response to the SAML decoder. Make sure you remove the text "SAMLResponse=" from the beginning of the text. Remove "&RelayState=" from the end of the text.

Note: The encoder/decoder is all JavaScript-based and runs client-side, so no data will leave your network.
15.  Copy the Decoded HTML and paste it into an XML formatter of your choice.  Here I am using Bing:

16.  Copy the result into Notepad, and you can now read the information.
Going into the claims and how they work is outside the scope of this tutorial, but as you can see in the last screenshot above, we have the raw SAML token that will be sent to the relying party trust to consume. At this point, the vendor can be engaged to help troubleshoot any values or attributes that are in an incorrect format.
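If you prefer decoding offline rather than in a web page: the SAMLResponse value copied in step 14 is URL-encoded Base64, so after URL-decoding it you can recover the XML with standard tools (file names here are hypothetical):

base64 -d saml.b64 > saml.xml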
Cleaning up your SAML response
The Sublime Text editor has a plugin for auto-indenting XML-formatted text; this is a great way to clean up the text generated in the previous steps.
1.      Copy the text into Sublime
2.      Remove everything before <?xml version="1.0".......
3.      Remove everything after </saml2p:Response>
4.      Highlight all > click Selection > Format > Indent XML
5.      Verify that the result is a cleanly indented XML document.





Common issues or queries when using PAC file


My web browser doesn't seem to be using the PAC file despite the PAC URL being configured. What are some possible reasons for this?
Ensure that the web server has a MIME type application/x-ns-proxy-autoconfig configured for the .pac file extension. See the PAC Deployment page for more information.
Disable the PAC file location in the browser proxy settings and enter the location of the PAC file in the browser URL bar, it should be accessible. If not, investigate the web server serving the file.
Confirm that the JavaScript PAC file code is free of syntax errors/failures.
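For example, the MIME type mapping is a one-line change on common web servers (use whichever applies):

# Apache (httpd.conf or .htaccess)
AddType application/x-ns-proxy-autoconfig .pac

# nginx (inside the existing types { } block)
application/x-ns-proxy-autoconfig  pac;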
After updating my PAC file, the changes I made don't seem to have taken effect. Why not?
Browsers cache a PAC file rather than retrieving it for each request; in some cases a browser restart is insufficient for obtaining an updated version of the file. In order to obtain the latest version, it may be necessary to clear the browser cache, close all browser windows, and reopen the browser application.

Why might web browsing performance degrade when using a PAC file?
A PAC file may leverage several functions which rely on the local DNS server(s) in order to resolve a requested host. These functions are isInNet(), isResolvable(), and dnsResolve().
Should a DNS server be slow to respond, the initialization of these functions for a new (non-cached) host will result in a delay until the result is provided by the DNS server(s). This will only occur the first time the host is requested, or if the local caching period for the host DNS information has expired.
Such an issue can be isolated by utilizing a DNS application such as DIG which can report how long it took the DNS server to respond. While each environment is unique, response times exceeding 500ms are likely to be noticeable to an end-user. Many websites now use content delivery networks which may provide content from several different hosts, thus the delay could be significant for larger websites; each host is requested in serial rather than parallel (each DNS request must complete before the next begins).
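For example, dig prints the resolver's response time directly, which makes a slow DNS server easy to spot (hostname is an example; the reported time will vary):

dig www.example.com | grep "Query time"
# => ;; Query time: 612 msec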

Why might a browser initially stall/hang when configured with a PAC file and off-network?
When using the IP address location of the PAC file, the browser will attempt to connect to this IP address and will wait until the connection attempt times out. The timeout may not occur for several seconds or more, and may result in some browsers hanging until this occurs. This delay will occur when the browser is first loaded and a website accessed.
Given this timeout behavior, if the user machine is to be taken off-network it’s recommended that the browser be configured with the DNS hostname for accessing the PAC file.
When using the DNS hostname for the PAC file, the browser will try to resolve the internal DNS hostname against an external DNS server. This will result in the server informing the web browser that no such DNS record exists. This process will take tenths of a second and is unnoticed by the end-user. The browser, being unable to access the PAC file, will fail open (direct to the Internet).

I have multiple network adapters and the myIpAddress() function is returning an undesired IP address. Can I reorder the priority?
When assigning a value to the myIpAddress() function, a browser will use the first active network adapter offered by the operating system. Windows operating systems support re-arranging the network adapter order.
Click Start, click Run, type ncpa.cpl, and click OK.
Available connections can be found in the LAN and High-Speed Internet section of the Network Connections window.
Using the Advanced menu, click Advanced Settings, and click the Adapters and Bindings tab.
In the Connections area, select the connection that you want to reorder. Use the arrow buttons to change the order.
Further instructions and information for configuring the network adapter order can be found in the Microsoft Support Center.

I’m trying to load balance traffic between proxies using a PAC file, but applications and websites fail to load or produce error messages. How can I fix this?
There are several code examples available that attempt a viable load-balancing solution using a PAC file. Unfortunately, none of these examples can achieve true load balancing, and they often cause connection-management issues with applications or websites that require persistent connections and/or expect traffic to traverse the same route. These solutions typically depend on randomized traffic routing via various JavaScript hacks, so they do not even provide true load-based distribution of traffic.
Load balancing across proxies is best achieved using a hardware solution that sits in front of the proxies themselves, which can track the load across each, and distribute this traffic based on current volume.


ZAPP On - Captive Portal Detection

A forwarding mechanism like a GRE/IPsec tunnel to Zscaler with ZApp on is the best approach if we don't default-route to the gateway. A few DNS mappings might be required on the local DNS server, such as the PAC server (pac.zscaler.net), mobile.zscaler.net, clients4.google.com, mobilesupport.zscaler.com, d32a6ru7mhaq0c.cloudfront.net, ocsp.digicert.com, crl3.digicert.com, crl4.digicert.com, etc. Most importantly, the global ZEN IPs will be advertised inside the infrastructure.

The captive portal detection is complex, but the requirements that we check are very basic:

·         First, ZAPP attempts to connect to http://clients4.google.com/generate_204 - this page is guaranteed to return a 204 response code. Typically, a captive portal will return a 302, but some return a 200 and rewrite the page contents. Essentially, if we get anything but a 204, we trigger captive portal detection.

·         Second, some captive portals know this and will return a 204 with their own page contents, so we do one additional check: download the cloud default PAC file and try to parse it. If it isn't parsable, it generally means the portal is replacing the page contents with its own.

So, in summary, captive portal detection will trigger if:

1.      We do not get a 204 response code from the first test, or
2.      We get a 204, but the default PAC file is not actually a PAC file.

If neither condition occurs, you should never see the captive portal.
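You can reproduce the first check by hand from the affected network; a quick test, assuming curl is available:

curl -s -o /dev/null -w "%{http_code}\n" http://clients4.google.com/generate_204
# 204 means no captive portal is interfering; a 200 or 302 suggests one is.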

Time Intervals

You can define time intervals for use in policies. For example, if you want to block users from accessing shopping sites from 8 AM to 5 PM on weekdays, you can create a time interval called Weekdays that includes Monday through Friday from 8 AM to 5 PM.

When an organization creates time bound policies, policy behavior might differ between users. If the user is coming from a known location, the policy will be applied based on the time zone configured for their location. If the user is a remote user (including users using the Z-App), the policy will be applied according to the time at the Zscaler Enforcement Node (ZEN) they are connected to.

So, the outcome of time-interval rules for traffic from a location and for road warriors is as given below:

1.      If traffic is sent from a defined location, the policy action is based on the time zone defined for the location.

2.      If traffic is sent from an unknown location (consider a ZApp or PAC user), the policy action is based on the time zone of the ZEN node that is processing the traffic.

Improve upload/download speed of SSL VPN users

The Datagram Transport Layer Security (DTLS) protocol is supported for SSL VPN connections.
DTLS tunneling implementation avoids TCP over TCP issues and can improve throughput. DTLS support can be enabled in the CLI as described below:

To configure DTLS tunneling - CLI:

config vpn ssl settings
set dtls-tunnel [enable | disable] (default: enabled)
end
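To confirm the running value afterwards, on most FortiOS builds you can filter the settings output:

get vpn ssl settings | grep dtls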

VPN options on FortiClient

To configure VPN options:

1.      Go to File > Settings from the toolbar, and expand the VPN section.
2.      Select Enable VPN before logon to have the VPN connect before user logon.
3.      For the Preferred DTLS Tunnel option, do one of the following:

a.      Select the Preferred DTLS Tunnel checkbox to use DTLS if it is enabled on the FortiGate. If DTLS is disabled on the FortiGate or tunnel establishment is not successful, TLS is used even if the Preferred DTLS Tunnel option is enabled in FortiClient. DTLS tunnel uses UDP instead of TCP and can increase throughput over VPN.
b.      To use TLS, ensure the Preferred DTLS Tunnel checkbox is unselected.

4.      Click OK.


Allow specific channels while blocking access to the rest of YouTube

The following configuration explains how to allow certain content while still blocking access to the rest of YouTube.

1. Create a Custom URL Category for the Allow List

A)   Go to YouTube and record the relevant information for the content you want to allow. For channels and playlists make a copy of their URLs. For individual videos, make a copy of the ID code. You can find the IDs in the YouTube URLs.
(The video ID is the v= parameter in a video URL; the channel ID follows /channel/ in a channel URL.)

B)     In the Zscaler Admin Portal go to Administration > URL Categories.

Click Add to create a new custom URL category.
Complete the following:
·         Name: Enter a descriptive name for your category. For example, "YouTube Allowed".
·         URL Super Category: Leave this field as User-Defined.
·         Scope Type: Select the appropriate type for your organization.
·         Custom URLs: Leave this field empty.
·         URLs retaining parent category: Add the URLs for the channels and playlists you recorded in step A, and then add the following URLs:

.googlevideo.com
.yt3.ggpht.com
.youtube.com/yts/
.ytimg.com
s.youtube.com

·         Custom keywords: Add each video ID and channel ID that you recorded in step A.
·         Keywords retaining parent category: Leave this field empty.
·         Click Save.

2. Create a URL Category to Block Access to the Rest of YouTube

Create a custom URL category to block access to the remainder of YouTube.
·         Go to Administration > URL Categories.
·         Click Add to create a new custom URL category
·         Complete the following:
·         Name: Enter a descriptive name for your category. For example, "YouTube Blocked".
·         URL Super Category: Leave this field as User-Defined.
·         Scope Type: Select the appropriate type for your organization.
·         Custom URLs: Leave this field empty.
·         URLs retaining parent category: Add the following URLs:

.googlevideo.com
.yt3.ggpht.com
.youtube.com/yts/
.ytimg.com
s.youtube.com
suggestqueries.google.com
youtubei.googleapis.com
yt3.ggpht.com

·         Custom keywords: Leave this field empty.
·         Keywords retaining parent category: Leave this field empty.
·         Click Save.

3. Create a URL Filtering Rule to Allow Specific Content

·         Create a rule to control the specific content you want to allow. To do this:
·         Go to Policy > URL & Cloud App Control.
·         Click Add URL Filtering Rule.
·         Complete the following:
·         Status: Select Enabled to ensure the rule is actively enforced.
·         Admin Rank: Enter a value from 1-7 (1 is the highest rank).
·         Rule Name: Enter a unique name for the URL Filtering rule. For example, "YouTube Allowed Rule".
·         Rule Order: Policy rules are evaluated in ascending numerical order (Rule 1 before Rule 2, and so on), and the Rule Order reflects this rule's place in the order. Ensure this rule is evaluated before the rule you will create in Step 4, i.e., give it a smaller order number. For example, if this rule has an order of 3, the blocking rule should have an order of 4 or higher.
·         URL Categories: Select the URL category you created in step 1. This is the category that contains the specific content you want to allow. In this example, it's "YouTube Allowed".
·         HTTP Requests: Select All to apply the rule to all HTTP requests.
·         Users: Select the users this rule will apply to. Consider applying this rule to a small number of users to test that the policy functions as intended.
·         Groups: Select Any to apply the rule to all groups or select up to 8 groups.
·         Departments: Select Any to apply the rule to all departments or select up to 8 departments.
·         Locations: Select Any to apply the rule to all locations or select up to 8 locations.
·         Time: Select Always to apply this rule to all time intervals or select up to two time intervals.
·         Protocols: Select the protocols to which the rule applies.
·         Action: Select Allow.
·         Click Save.

4. Create a URL Filtering Rule to Block Access to the Rest of YouTube

Next, create another URL Filtering policy rule. This will block the remainder of the YouTube traffic. To do this:
·         Go to Policy > URL & Cloud App Control.
·         Click Add URL Filtering Rule.
·         Complete the following:
·         Status: Select Enabled to ensure the rule is actively enforced.
·         Admin Rank: Enter a value from 1-7 (1 is the highest rank).
·         Rule Name: Enter a unique name for the URL Filtering rule. For example, "YouTube Blocked Rule".
·         Rule Order: Ensure this rule has a higher order number than the rule you created in Step 3, so that it is evaluated after the allow rule.
·         URL Categories:  Select the custom URL category you created in step 2. This is the category that blocks the rest of YouTube.
·         HTTP Requests: Select All to apply the rule to all HTTP requests.
·         Users: Select the users this rule will apply to. Consider applying this rule to a small number of users to test that the policy functions as intended.
·         Groups: Select Any to apply the rule to all groups, or select up to 8 groups.
·         Departments: Select Any to apply the rule to all departments, or select up to 8 departments.
·         Locations: Select Any to apply the rule to all locations, or select up to 8 locations.
·         Time: Select Always to apply this rule to all time intervals, or select up to two time intervals.
·         Protocols: Select the protocols to which the rule applies.
·         Action: Select Block and leave Allow Override disabled.
·         Click Save and activate your changes. Ensure the rules and categories are in the correct order.

Procedure to access a specific video

1.      Go to youtube.com
2.      Search for and select the video, or enter the URL directly in the address bar.

Procedure to access specific channels:

1.      Go to youtube.com
2.      Search for the channel name and press Enter.
3.      Select the channel and go to Playlists.
4.      Click the appropriate playlist.
5.      Play the specific video from the playlist.




Personal Gmail restrictions for a specific group only.


In Zscaler, as of now, there is no option to block personal Gmail for a specific group only. But there is an option to allow only specific domains to access Google Apps.

Since this configuration is global, the change will apply to all users. In an enterprise, top management will always expect full internet access, so we have to split this into two parts:

1. Allow personal Gmail for all enterprise top management.
2. Block personal Gmail for all other users.


Allow personal Gmail for all enterprise top management.

Bypass the personal Gmail / Google Apps URLs in the PAC file and use that PAC file for top management.

For example:

// Gmail goes direct

if (shExpMatch(host, "*.gmail.com") ||
    shExpMatch(host, "accounts.google.com") ||
    shExpMatch(host, "myaccount.google.com") ||
    shExpMatch(host, "hangouts.google.com") ||
    shExpMatch(host, "calendar.google.com") ||
    shExpMatch(host, "contacts.google.com") ||
    shExpMatch(host, "mail.google.com"))
    return "DIRECT";
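For context, this bypass sits inside FindProxyForURL ahead of the default return to Zscaler. A minimal sketch (the ${GATEWAY} macros are Zscaler PAC variables, and the host list is abbreviated):

function FindProxyForURL(url, host) {
    // Personal Gmail / Google Apps go direct (abbreviated list)
    if (shExpMatch(host, "*.gmail.com") ||
        shExpMatch(host, "mail.google.com"))
        return "DIRECT";
    // Everything else is forwarded to Zscaler
    return "PROXY ${GATEWAY}:80; PROXY ${SECONDARY_GATEWAY}:80; DIRECT";
}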

Challenge:

The challenge here is that after bypassing the URLs, we have to forward that traffic directly to the internet, which is more complex if you are using a router or CPE without application intelligence. If the gateway is an application-aware CPE, a UTM-enabled firewall, or an NGFW, allowing the specific application is easy; otherwise, identifying the list of Gmail IP addresses becomes tedious.

Block personal Gmail for all other users.

In order to block personal Gmail accounts, use another PAC file without the Gmail bypass and forward that traffic directly to Zscaler. Since the global configuration on Zscaler allows only specific Google Apps domains, users can then log in to Gmail only with the allowed domains.

Note: SSL inspection has to be enabled for the location.

HTTP header trace in Chrome and Mozilla Firefox

To capture HTTP headers in Chrome:

  1. Open the developer tools window by pressing CTRL + SHIFT + i, or open the menu in the top-right corner and select More Tools > Developer Tools.
  2. Click the Network tab.
  3. Make sure the record button is red and the Preserve log option is checked.
  4. Once the test is completed, right-click on one transaction and select Save all as HAR with content.

Capturing HTTP headers in Mozilla Firefox:

  1. In Firefox, go to the desired website. In this example, it's www.itzecurity.com.
  2. Click the Menu button and then click Web Developer. The Web Developer menu appears.
  3. In this menu, select Inspector. The Inspector window appears.
  4. In this window, select the Network tab.
  5. In the Network tab, right-click the element you wish to inspect and select Save All As HAR. In this view, the HTTP headers are also visible in the Headers box on the right-hand side.
  6. Click OK in the dialog box that appears to download the file.
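As a quick alternative outside the browser, curl can dump the same request and response headers (URL is an example):

curl -v -o /dev/null https://www.itzecurity.com 2> headers.txt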

Client to Client communication in Zscaler Private Access

Validating a client hostname allows you to enable client-based remote assistance. To enable remote assistance, a regular expression of allowed hostnames is configured per tenant. This regular expression controls the targets to which the Zscaler Client Connector allows the client-to-client remote access traffic to be sent.

If an application configured for Privileged Remote Access (PRA) matches a valid client hostname configured for client-based remote assistance, and the user’s device is also configured for client-based remote assistance, then PRA is not supported.


Prior to enabling remote assistance, the following prerequisites must be met:

Devices must be domain-joined Windows devices.

Devices must be running Client Connector version 3.7 or above.


Enable client hostname validation

1.      Go to Administration.
2.      Select Application Segments.
3.      Click the application segment you want to edit.
4.      Choose Client Hostname Validation.
5.      Type the regular expression ".*\.itzecurity\.com" or ".*.itzecurity.com".
6.      Click Save.


Once client hostname validation is enabled, the ZCC client will display the hostname instead of the IP address.


Note: A machine tunnel has to be enabled if remote desktop access is with another account.



Internal Error Please contact Administrator (3005)

This error is typically seen when deploying ZCC on user machines. In most cases the issue is not solved by retrying, connecting from a different internet connection, or restarting the machine.

 

This appears to be a WMI error.

Windows Management Instrumentation (WMI) is a set of specifications from Microsoft for consolidating the management of devices and applications in a network from Windows computing systems. WMI provides users with information about the status of local or remote computer systems.


Put the script below into a *.bat file and run it as Administrator, or check with local IT and do the WMI cleanup.

-------------------------------------------------------------------------

@echo off
rem Temporarily disable and stop the WMI service
sc config winmgmt start= disabled
net stop winmgmt /y
rem Switch to the system drive and the WBEM directory
%systemdrive%
cd %windir%\system32\wbem
rem Re-register all WMI DLLs and the WMI service executables
for /f %%s in ('dir /b *.dll') do regsvr32 /s %%s
wmiprvse /regserver
winmgmt /regserver
rem Re-enable and restart the WMI service
sc config winmgmt start= auto
net start winmgmt
rem Recompile all MOF/MFL files into the WMI repository
for /f %%s in ('dir /s /b *.mof *.mfl') do mofcomp %%s

----------------------------------------------------------------------------------------------------


After executing the WMI cleanup script, users were able to connect successfully.

 

