
Restrict YouTube content on your network or managed devices


Google provides an article entitled "Restrict YouTube content on your network or managed devices" at https://support.google.com/youtube/answer/6214622

At this time, there are two options to restrict inappropriate content: DNS and HTTP header.

This is a FortiGate configuration example for adding the HTTP header to YouTube requests to implement safe-search for YouTube (FortiOS v5.4).

1. Configure a web proxy profile to add the header

config web-proxy profile
    edit Restrict
        set header-via-request add
        set header-via-response add
        config headers
            edit 1
                set name "YouTube-Restrict"  <-- this header name must be exactly "YouTube-Restrict"
                set content "Strict"
            next
        end
    next
end

2. Add the web proxy profile to a URL filter

 config webfilter urlfilter
    edit 1
        set name "Youtube"  <-- the name can be any value
        config entries
            edit 1
                set url "www.youtube.com"
                set action allow
                set web-proxy-profile "Restrict"
            next
            edit 2
                set url "m.youtube.com"
                set action allow
                set web-proxy-profile "Restrict"
            next
            edit 3
                set url "youtubei.googleapis.com"
                set action allow
                set web-proxy-profile "Restrict"
            next
            edit 4
                set url "youtube.googleapis.com"
                set action allow
                set web-proxy-profile "Restrict"
            next
            edit 5
                set url "www.youtube-nocookie.com"
                set action allow
                set web-proxy-profile "Restrict"
            next
        end
    next
end

3. Add the URL filter profile to a web filter profile

config webfilter profile
    edit "Youtube-Restrict"  <-- the name can be any value
        config web
            set urlfilter-table 1
        end
    next
end

4. Apply the above profile to the outgoing firewall policies:

  • Enable Web Filter and select the web filter profile created above
  • Enable SSL deep inspection (needed so the header can be injected into HTTPS traffic)
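To make the effect of the profile concrete, here is a sketch (illustration only, not FortiOS code) of the header injection the configuration performs. "Strict" enforces strict Restricted Mode; "Moderate" is the other value Google accepts.

```javascript
// Illustration only: the FortiGate injects this header transparently into
// each matching YouTube request. The request text below is a made-up sample.
function withYouTubeRestrict(request, level) {
  // Append the restriction header just before the end of the header block.
  return request.replace(/\r\n\r\n$/, `\r\nYouTube-Restrict: ${level}\r\n\r\n`);
}

const original = "GET /watch?v=abc HTTP/1.1\r\nHost: www.youtube.com\r\n\r\n";
const restricted = withYouTubeRestrict(original, "Strict");
console.log(restricted.includes("YouTube-Restrict: Strict")); // true
```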
 


Event log "NAT port is exhausted"


The following commands help determine whether the NAT port pool is exhausted.


·         Ensure the necessary logging is enabled: check on the FortiGate GUI under Log & Report > Local Logging & Archiving that logging to memory is activated (the default).
·         The following message is displayed when the NAT port pool is exhausted:

Message meets Alert condition
date=2011-02-01 time=19:52:01 devname=master device_id="" log_id=0100020007 type=event subtype=system pri=critical vd=root service=kernel status=failure msg="NAT port is exhausted."

·         NAT port exhaustion is also indicated by a rise in the 'clash' counter from the 'diagnose sys session stat' command:

FWF60B # diagnose sys session stat
misc info: session_count=16 setup_rate=0 exp_count=0 clash=889
memory_tension_drop=0 ephemeral=1/16384 removeable=3
delete=0, flush=0, dev_down=16/69
firewall error stat:
error1=00000000
error2=00000000
error3=00000000
error4=00000000
tt=00000000
cont=0005e722
ids_recv=000fdc94
url_recv=00000000
av_recv=001fee47
fqdn_count=00000000
tcp reset stat: syncqf=119 acceptqf=0 no-listener=3995 data=0 ses=2 ips=0
global: ses_limit=0 ses6_limit=0 rt_limit=0 rt6_limit=0
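The clash counter in the output above can be extracted programmatically and trended over time. A quick sketch (not a Fortinet tool; the counter can also increment for reasons other than port exhaustion, so watch whether it keeps rising rather than alerting on a single reading):

```javascript
// Extract the 'clash' counter from `diagnose sys session stat` output.
function parseClash(output) {
  const m = output.match(/\bclash=(\d+)/);
  return m ? parseInt(m[1], 10) : null; // null when the counter is absent
}

const sample = "misc info: session_count=16 setup_rate=0 exp_count=0 clash=889";
console.log(parseClash(sample)); // 889
```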



Usage of arp permit-non-connected


We came across a problem after upgrading an ASA from 8.2(5) to 9.1.x: some of the public IP addresses / servers hosted behind the firewall via public IPs were no longer reachable from the Internet, even though the team had confirmed access immediately after the upgrade. We suspect access worked right after the upgrade only because the upstream ARP caches had not yet been cleared.

Below was the router configuration:

interface GigabitEthernet0/1
 description ***LAN***
 ip address a.b.c.d 255.255.255.240 secondary
 ip address e.f.g.h 255.255.255.240 secondary
 ip address p.q.r.s  255.255.255.248
 ip accounting output-packets
 standby 1 ip x.x.x.x
 standby 1 timers 1 3
 standby 1 priority 110
 standby 1 preempt
 standby 1 name Primary
 standby 1 track 1 decrement 20
 load-interval 30
 duplex full
 speed 100
In the above configuration, note that multiple IP subnets are configured on the router interface that connects to the firewall.

After the upgrade, the public pool IPs from the directly connected subnet kept working, whereas the secondary and standby public pool IPs did not. We suspected that the firewall was not responding to ARP requests for IPs outside its directly connected subnet.

Our investigation led to two solutions.

1.      Change the router configuration. Keep only one IP on interface and route all other IPs towards that interface.

interface GigabitEthernet0/1
 description ***LAN***
 ip address a.b.c.d 255.255.255.240 secondary
 ip accounting output-packets
 load-interval 30
 duplex full
 speed 100
And add appropriate routes on the device.

2.      Enable "arp permit-nonconnected" on the ASA; the firewall will then respond to ARP requests even when the public IP is configured as a secondary IP on the ISP interface.

no arp permit-nonconnected (the default setting)

As a security device, ASA will not populate its Address Resolution Protocol (ARP) table with entries from non-directly-connected subnets. Furthermore, the ASA will not issue ARP requests for hosts on such subnets. This secure behavior may cause issues with suboptimal network configurations where a device is expected to process ARP packets to and from non-directly-connected subnets (as configured locally).

This enhancement request is filed to request a configuration command that would disable this security check and allow the ASA to process ARP packets to and from non-directly-connected subnets. This command should be used with caution as it reduces the level of protection that the ASA provides.

The most common reason to configure "arp permit-nonconnected" on the newer software is when the ISP has allocated two public subnets to the customer and configured both of those networks on its gateway interface: one as the link network between the ASA and the ISP gateway, and an additional subnet as a "secondary" network on the gateway interface.
In this case you would first run into problems when upgrading to software level 8.4(3), which changed the ARP behavior of ASA firewalls. In that release there was no simple command to change this behavior; the command only became available in subsequent updates.

Whether you face any problems depends on how your network is set up.
For example, suppose you have two public subnets from the ISP. One subnet is configured directly on the "outside" interface of the ASA, and the other is used only for static NAT IP addresses or similar. If your ISP has configured a route for this secondary subnet pointing to the current "outside" IP address of the ASA, then you will NOT run into any problems with ARP and non-connected subnets: the ISP will never ARP for the MAC address of any IP in the secondary subnet, because that subnet is not part of a directly connected network for the ISP (the problem would arise if the gateway device had this secondary subnet directly connected). Instead, the ISP forwards the traffic to the next hop, which is the ASA, and there are no problems.

Now consider a situation where the ISP has configured the two subnets directly on its gateway interface: one as the link network between the ISP gateway and your ASA, and the other as an additional "secondary" subnet for NAT use. If you also have the default setting for your software level, "no arp permit-nonconnected", you will run into connectivity problems with the secondary subnet.


What happens is that a user on the Internet tries to connect to one of your servers using an address from the secondary public IP range. The traffic reaches the ISP gateway, which sees the public IP address as part of a directly connected network, so it sends an ARP request. The sender IP of that ARP request belongs to the secondary subnet, which is not directly connected to any ASA interface, so the ASA won't populate its ARP table with the ISP gateway's secondary IP/MAC, and the ISP will never get an ARP reply from the ASA. Since the ISP cannot determine a MAC address for the destination IP in the secondary subnet, the connections fail.
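The decision described above can be sketched as a small function. This is a simplified model of the behavior, not ASA code; the subnets below are documentation-range placeholders, not the addresses from the original case.

```javascript
// Convert a dotted-quad IPv4 address to an unsigned 32-bit integer.
function ipToInt(ip) {
  return ip.split(".").reduce((acc, o) => (acc << 8) + parseInt(o, 10), 0) >>> 0;
}

// True if ip falls inside net/prefix.
function inSubnet(ip, net, prefix) {
  const mask = prefix === 0 ? 0 : (~0 << (32 - prefix)) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(net) & mask);
}

// Simplified model: the ASA replies to an ARP request only if the sender IP
// is in a directly connected subnet, unless "arp permit-nonconnected" is set.
function shouldReplyToArp(senderIp, connectedSubnets, permitNonconnected) {
  if (permitNonconnected) return true;
  return connectedSubnets.some(([net, prefix]) => inSubnet(senderIp, net, prefix));
}

// Link network 198.51.100.0/29 is directly connected; the "secondary"
// subnet 203.0.113.0/28 on the ISP gateway is not.
const connected = [["198.51.100.0", 29]];
console.log(shouldReplyToArp("198.51.100.1", connected, false)); // true
console.log(shouldReplyToArp("203.0.113.5", connected, false));  // false -> connections fail
console.log(shouldReplyToArp("203.0.113.5", connected, true));   // true with arp permit-nonconnected
```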

DNS Resolution – with & without Proxy


If you configure IE with an explicit proxy:

1.            The user enters www.itzecurity.in
2.            IE checks the address for a string match against the IE proxy exceptions list (i.e. "Bypass proxy for these addresses:")
3.            If it matches an entry in the bypass list, the client uses its DNS to resolve the name, and then the client connects directly to the target IP address on port 80 (assumed), then sends a request like:
                        GET /index.html HTTP/1.1
             Host: www.itzecurity.in

                        and that's the end of it for a matching entry.
If no bypass list entries match, continue:
4.            IE connects to its configured proxy, and sends a request of the form:
                        GET http://www.itzecurity.in/index.html HTTP/1.1
This use of the FQDN as the URL is one way you can tell that a client thinks it's talking to a proxy instead of a real web server.
5.            The proxy then resolves the host name using its own DNS, connects to the target site, etc, etc

When using WPAD/PAC:

In the case of using a WPAD or Auto configuration script (such as provided by ISA/TMG when auto configuration is enabled), it's different:
·         User types an address
·         Client downloads the current wpad.dat/autoproxy.js/.pac file from its configured location.
·         Client looks for the entry point "FindProxyForUrl" in the js file, and executes it
·         The Autoproxy script processes the hostname and URL. This is a limited-function javascript file, but lots of things are still possible:

1.            this may include name resolution (IsInNet, DnsResolve)
2.            this may include string matching (ShExpMatch)
3.            this may include counting to a million (i++)
4.            this may include narky alert popup messages if the admin's a jerk (or just funny (or debugging))

·         The FindProxyForUrl function returns at least one string: an ordered list of the best proxies to use (semicolon separated)

1.            either "DIRECT", in which case the client then needs to resolve the name itself, as per the bypass case above
2.            or "PROXY proxyname:8080" or similar, in which case the client connects to that port on the proxy, tells it to GET the full URL, and the proxy performs name resolution.

·         As an example: if the script function returned "PROXY yourProxy:8080; DIRECT" that tells the client to connect to yourproxy on TCP port 8080 to request this URL, and if that connection can't be established, just try going direct.

·         Note that TCP session setup failure isn't exactly quick, so this isn't likely to be a pleasant failover experience for a user, but beats nothing.
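The flow above can be sketched as a minimal PAC function. isPlainHostName is normally supplied by the browser's PAC engine and is shimmed here so the sketch can run standalone; "yourProxy" is a placeholder.

```javascript
// Shim for the browser-provided PAC helper, so this runs outside a browser.
function isPlainHostName(host) { return host.indexOf(".") === -1; }

// Minimal FindProxyForURL: internal shortnames go direct; everything else
// tries the proxy first and falls back to a direct connection.
function FindProxyForURL(url, host) {
  if (isPlainHostName(host)) return "DIRECT";
  return "PROXY yourProxy:8080; DIRECT";
}

console.log(FindProxyForURL("http://intranet/", "intranet"));   // DIRECT
console.log(FindProxyForURL("http://www.itzecurity.in/", "www.itzecurity.in"));
```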

FortiGate Firewall session list and state


To display the session table: diagnose sys session list


Description of the State field (proto_state) in the session table:

Proto_state field for TCP: a two-digit value tracking the TCP state; for example, proto_state=01 indicates an established session.

Proto_state field for UDP: although UDP is stateless, proto_state=00 means traffic has been seen in one direction only, and proto_state=01 means traffic has been seen in both directions.

Proto_state field for ICMP (proto 1): there are no states for ICMP; it always shows proto_state=00.

Site Review Utility in Zscaler


1.      Log in to https://sitereview.zscaler.com. This feature is only available to Zscaler customers, and traffic must be routed via Zscaler when the user accesses this site.
2.      Enter the URL, e.g. itzecurity.in
3.      Click Submit.
4.      Click Modify categories.
5.      Select the appropriate category and suggest changes.
6.      Also, whenever you make a request, add the following in the comment section:

**********
Requester: Ramesh M
E-mail:           ramesh.m@itzecurity.in
Please update when the review has been done
**********
7.      The Zscaler ops team will receive this request and apply the appropriate categorization.
8.      The requester will receive an update once done.


Zscaler Guidelines for URL categories:

Following are some guidelines for URL categories:
·         You cannot add classes, or edit or delete the predefined classes.
·         Each class has super-categories. You cannot add or delete super-categories, but you can move them from one class to another for easier management. For example, your organization is in the entertainment field and your users frequent entertainment sites. You can move the Entertainment/Recreation super-category to another class, such as Business Use.
·         You can add custom categories to any super-category. You cannot delete a category that is actively used in a URL Filtering rule. To delete the category, first deselect it in the URL Filtering rule.
·         For the predefined categories, you can add URLs and keywords for web sites that the service did not automatically categorize, but which you believe should be included in that category (note that URL keywords are simply text strings found within the URL).
·         You can add URLs to both predefined and custom categories. When manually adding URLs, you can enter sub-domains (for example, mail.google.com) and re-categorize them differently than their parent domain.
·         If you manually add a URL to an existing super-category, category, or custom category, you can specify whether you want the URL also to retain its original parent category. For example, if you manually add www.google.com to a User-Defined category, you can specify whether you want google.com also to retain its original "Web Search" category.
·         You can add up to 25,000 custom URLs (across all categories) and up to 48 custom categories. You can add up to 30 keywords per category, and up to 1,000 across all categories.

·         In URL filtering, File Type Control, SSL inspection, FTP and DLP policies, you can specify super-categories and select individual categories within super-categories as well.

Sniffer and debug flow in presence of NP2 ports


On FortiGate models that have NP2 interfaces (for example FortiGate-310B, FortiGate-620B, ...), some traffic is offloaded at the hardware level. This means the traffic does not reach the CPU (unless it is traffic destined to the FortiGate itself) and is therefore not seen by a debug flow command or a sniffer trace.

However, the first packets of any new session establishment are always seen, for example the SYN/SYN-ACK/ACK. Once the session is established, no further packets are seen, as they take the fast path.

To optimize performance, NP2/NP4 processors do not include traffic logging capabilities. Because of this and because offloaded traffic bypasses FortiOS, no traffic logs are generated for traffic offloaded to NP2/NP4 processors.

For troubleshooting purposes, when you want to capture packets or check the flow on the FortiGate, you can bypass hardware acceleration on a specific port with the following command.

Be aware that this may affect performance and should only be used for troubleshooting purposes.

 diagnose npu np2 fastpath-sniffer enable <port(s)_number>

==> this now shows all traffic for all sessions to/from this or those port(s) when using the sniffer or the diag debug flow commands

The command below re-enables hardware offloading:

 diagnose npu np2 fastpath-sniffer disable <port(s)_number>

Note that this setting is not saved in the configuration and is lost after a reboot. You can also use "config system npu" to disable offloading of IPsec VPN traffic.



Functions used in PAC files


isPlainHostName()
This function returns true if the hostname contains no dots. Example: http://intranet
Useful when applying exceptions for internal websites that may not require resolution of a hostname to IP address to determine if local.
Example:
if (isPlainHostName(host)) return "DIRECT";

dnsDomainIs()
Evaluates the hostname and returns true if it belongs to the given domain. Used mainly to match, and make exceptions for, entire domains.
Example:
if (dnsDomainIs(host, ".google.com")) return "DIRECT";

localHostOrDomainIs()
Evaluates hostname and only returns true if an exact hostname match is found.
Example:
if (localHostOrDomainIs(host, "www.google.com")) return "DIRECT";

isResolvable()
Attempts to resolve a hostname to an IP address and returns true if successful. WARNING - This may cause a browser to temporarily hang if a domain is not resolvable.
Example:
if (isResolvable(host)) return "PROXY proxy1.example.com:8080";

isInNet()
This function resolves the hostname to an IP address and returns true if the address falls within the specified subnet.
Example:
if (isInNet(host, "172.16.0.0", "255.240.0.0")) return "DIRECT";

dnsResolve()
Resolves hostnames to an IP address. This function can be used to reduce the number of DNS lookups.
Example:
var resolved_ip = dnsResolve(host);
if (isInNet(resolved_ip, "10.0.0.0", "255.0.0.0") ||
isInNet(resolved_ip, "172.16.0.0", "255.240.0.0") ||
isInNet(resolved_ip, "192.168.0.0", "255.255.0.0") ||
isInNet(resolved_ip, "127.0.0.0", "255.255.255.0"))
return "DIRECT";

myIpAddress()
Returns the IP address of the host machine.
Example:
if (isInNet(myIpAddress(), "10.10.1.0", "255.255.255.0")) return "DIRECT";

dnsDomainLevels()
This function returns the number of DNS domain levels (number of dots) in the hostname. Can be used to make exceptions for internal websites which use short DNS names, such as: http://intranet
Example:
if (dnsDomainLevels(host) > 0)
return "PROXY proxy1.example.com:8080";
else return "DIRECT";

shExpMatch()
Attempts to match hostname or URL to a specified shell expression and returns true if matched.
Example:
if (shExpMatch(url, "*vpn.domain.com*") ||
shExpMatch(url, "*abcdomain.com/folder/*"))
return "DIRECT";

weekdayRange()
Can be used to specify different proxies for a specific day range. Note: the example employs 'proxy1.example.com' Monday through Friday.
Example:
if (weekdayRange("MON", "FRI"))
return "PROXY proxy1.example.com:8080";
else return "DIRECT";

dateRange()
Can be used to specify different proxies for a specific date range. Note: The example employs 'proxy1.example.com' January through March.
Example:
if (dateRange("JAN", "MAR"))
return "PROXY proxy1.example.com:8080";
else return "DIRECT";

timeRange()
Can be used to specify different proxies for a specific time range. Note: The example employs 'proxy1.example.com' 8 AM to 6 PM.
Example:
if (timeRange(8, 18))
return "PROXY proxy1.example.com:8080";
else return "DIRECT";
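Putting several of the functions above together, here is a complete FindProxyForURL sketch. The shims at the top stand in for the browser's built-in PAC helpers so the file can be exercised outside a browser; proxy1.example.com and the domains are placeholders.

```javascript
// Shims for the browser-provided PAC helpers (standalone testing only).
function isPlainHostName(host) { return host.indexOf(".") === -1; }
function dnsDomainIs(host, domain) {
  return host.length >= domain.length &&
         host.substring(host.length - domain.length) === domain;
}
function shExpMatch(str, shexp) {
  // Translate the shell expression (* and ?) into an anchored RegExp.
  const re = "^" + shexp.replace(/[.+^${}()|[\]\\]/g, "\\$&")
                        .replace(/\*/g, ".*").replace(/\?/g, ".") + "$";
  return new RegExp(re).test(str);
}

function FindProxyForURL(url, host) {
  if (isPlainHostName(host)) return "DIRECT";               // internal shortnames
  if (dnsDomainIs(host, ".example.com")) return "DIRECT";   // internal domain
  if (shExpMatch(url, "*vpn.domain.com*")) return "DIRECT"; // VPN portal bypass
  return "PROXY proxy1.example.com:8080; DIRECT";           // proxy, then direct
}

console.log(FindProxyForURL("http://intranet/", "intranet"));         // DIRECT
console.log(FindProxyForURL("http://www.google.com/", "www.google.com"));
```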

Potential PAC function issues
A PAC file may have the following limitations:

dnsResolve
The function dnsResolve (and similar other functions) performs a DNS lookup that can block your browser for a long time if the DNS server does not respond.
If you cache proxy auto-configuration results by domain name in your browser (such as Microsoft's Internet Explorer 5.5 or higher) instead of the path of the URL, it limits the flexibility of the PAC standard. Alternatively, you can disable caching of proxy auto-configuration results by editing the registry.
It is recommended to always use IP addresses instead of host domain names in the isInNet function for compatibility with other Windows components that make use of the Internet Explorer PAC settings, such as .NET 2.0 Framework. For example,

if (isInNet(host, dnsResolve(sampledomain) , "255.255.248.0"))
// .NET 2.0 will resolve proxy properly
if (isInNet(host, sampledomain, "255.255.248.0"))
// .NET 2.0 will not resolve proxy properly

The current convention is to fail over to the direct connection when a PAC file is unavailable. When switching quickly between network configurations (for example, when entering or leaving a VPN), dnsResolve may give outdated results due to DNS caching. For instance, Firefox usually keeps 20 domain entries cached for 60 seconds. This may be configured via the network.dnsCacheEntries and network.dnsCacheExpiration preference variables. Flushing the system's dns cache may also help, (such as by using the sudo service dns-clean start in Linux).

myIpAddress
The myIpAddress function has often been reported to give wrong or unusable results (for example, 127.0.0.1, the IP address of the localhost). It may help to remove any lines referring to the machine hostname on the system's host file (such as /etc/hosts on Linux).
Also, when the browser is Firefox 3 or higher and the operating system has IPv6 enabled, which is the default in Windows 7 and Vista, the myIpAddress function returns the IPv6 address, which is not usually expected nor programmed for in the PAC file.

Note 

Some versions of Java have had problems with common proxy PAC file functions such as isInNet(). Please review the Java open issues in the release notes for the versions of Java used by your client browsers.

SAML Troubleshooting (ADFS)

Troubleshooting
Authentication – SAML - Browser Settings
This section describes the common issues faced due to incorrect browser settings.

Ø  A user's browser displays the error "Can't display the webpage."
This error can be due to any of the following reasons:
The connection is redirected to Zscaler.
The user's workstation cannot connect to the URL.
To avoid this issue, do the following:
Check whether an exception is present in your PAC file to make the connection to the server direct (bypassing Zscaler). You can use the default PAC file with the following exception:
if (shExpMatch(host, "*.company.tld")) return "DIRECT";
Check whether the workstation can connect to the remote server with the following command in a DOS shell: > telnet <hostname> 443. If you get a blank screen, the connection has been established.
Ø  A user’s browser is stuck on the redirection page.
If you are seeing the redirection for more than 10 seconds, it means that SAML redirection is looping. This is often due to your explicit proxy configuration. (Note that the connection to the server must be direct and not redirected to Zscaler.)
To avoid this issue, add an exception to your PAC file. You can use the default PAC file with the following exception:
if (shExpMatch(host, "*.company.tld")) return "DIRECT";
Ø  I am getting a basic Login pop-up; authentication is not transparent.
This issue can arise for two reasons:
User is not authenticated on the organization’s Federation Identity. Check the following:
a.      If the user's computer is registered in Active Directory.
b.      If the user is logged in to the Windows Domain.
The browser is not configured to forward the user token to this SAML server: by default, the Firefox and Chrome browsers do not relay NTLM tokens to the SAML server. To enable this, apply the browser-specific configuration described below.


Verify that ADSI is connected to AD.

Start – Administrative Tools – click ADSI Edit.
Check whether the default naming context tree is available in the left panel.
If not, right-click the ADSI Edit tree and click "Connect to...".
The DC was missing the ADSI configuration, which is mandatory for IWA to work properly. Once ADSI was connected to the ADFS and the SPN was added, the IWA process started to work properly.

SPN: http/adfsserver.itzecurity.in
Select Managed Service Accounts – right-click the CN and open Properties – add the SPN (http/adfsserver.itzecurity.in) – click OK.


Prompting for username and password in IE, Chrome and Mozilla Firefox.

Internet Explorer 
Internet Explorer supports IWA out-of-the-box, but may need additional configuration due to the network or domain environment.
In Active Directory (AD) environments, the default authentication protocol for IWA is Kerberos, with a fall back to NTLM.  If the IWA Adapter is configured for Kerberos within an AD environment, domain-joined clients will request a Kerberos ticket to be used within the Authenticate header response during an IWA transaction.  If Kerberos cannot be negotiated for whatever reason, the IWA adapter will fall back to NTLM challenge/response authentication.  In that case, a user will be prompted for their AD domain credentials.
Additionally, Internet Explorer uses security zones for distinguishing which hosts are Internet, Local intranet, Trusted sites, or Restricted sites.
Security zones in IE: Tools → Internet Options → Security.
By default, any IWA authentication request originating from an Internet host will not be allowed. The default setting only allows clients to automatically provide credentials to hosts within the Intranet zone. Sites are considered to be in the Intranet zone if: the connection was established using a UNC path (i.e. \\pingsso); the site bypasses the proxy server; or the host name contains no periods (i.e. http://pingsso).
Intranet Zone security settings:
Most PingFederate SSO connections will use the fully-qualified domain name (FQDN) in SSO URLs, so the host will not be categorized as being in the Intranet zone. As such, the browser must be configured to trust the host by adding the PingFederate hostname to the Trusted sites zone. Here, the default setting is Automatic logon with current user name and password, which implies Kerberos will be used if available, then NTLM. The setting Prompt for user name and password will bypass Kerberos and go straight to NTLM authentication; even if the IWA Adapter supports Kerberos, the client will not attempt to send a Kerberos token within the Authenticate header.
On computers (e.g. servers) with Internet Explorer Enhanced Security Configuration enabled, the automatic login behavior will be overridden with a logon prompt. The logon prompt will allow Kerberos and NTLM logon functionality; however, it will not use the cached credentials from the user login.
To configure Internet Explorer to fully support the IWA adapter: within Internet Explorer, choose Tools → Internet Options → click the Security tab → click Trusted sites → and click Custom level... Scroll all the way to the bottom to User Authentication and, under Logon, select Automatic logon with current user name and password.
Trusted Sites Zone security settings:
Once this is configured click OK, then click on the Sites button under Trusted sites, and insert the PingFederate server's hostname.  Optionally, wildcards can be included to trust any host name within the AD domain (i.e. *.adexample.pingidentity.com).
Trusted Sites:
The above settings work for domain-joined computers (i.e. computers with an Active Directory account principal and trust relationship) as well as non-domain-joined computers. For domain-joined computers, an AD user account would need to be logged in, and the Kerberos authentication protocol would be negotiated during SSO. In the case of a non-domain-joined computer, the Kerberos protocol (Negotiate in the WWW-Authenticate header) would not be negotiated, and the browser falls back to NTLM. In this case, the user would be prompted for credentials and would enter ADEXAMPLE\joe and the password to be authenticated.**

**Note:  The NetBIOS domain name (ADEXAMPLE in the example above) MUST be used to qualify the user name if: (1) the computer is not joined to an AD domain; or (2) there are multiple AD domains or forests and the user is authenticating over a cross-domain trust (i.e. the user is in DomainA, but the PingFederate NTLM computer account is joined to DomainB).  The NTLM protocol assumes the user is logging in to the domain where the PingFederate computer account exists.  This is why the user name must be qualified by the domain to function correctly.
Also note it is possible to add the PingFederate URL to the Local Intranet zone as an alternative to adding it to the Trusted sites zone. Reasons for this may vary based on the network design of the environment, but setting automatic logon for the Trusted sites zone implies that Negotiate/Authorization credentials may be sent in requests to sites outside of the Intranet Zone.

Firefox

Mozilla Firefox supports the SPNEGO authentication protocol, but must be configured to work correctly for Kerberos authentication.  Firefox does not use the concept of security zones like Internet Explorer, but will not automatically present Kerberos credentials to any host unless explicitly configured.  By default, Firefox rejects all SPNEGO challenges from any Web server, including the IWA Adapter.  Firefox must be configured for a whitelist of sites permitted to exchange SPNEGO protocol messages with the browser.
The two settings are:

network.negotiate-auth.trusted-uris
network.automatic-ntlm-auth.trusted-uris

These settings can be defined by:
1. Navigate to the URL about:config in Firefox and click the I'll be careful, I promise! button.
2. In the Search dialog box, search for the above preferences.
3. In each of the preferences, specify any host or domain names, delimited with commas. Please note that domains can be wildcarded by specifying a domain suffix with a dot in front (i.e. .adexample.pingidentity.com).
Just like in Internet Explorer, the computer making the SSO request to the IWA adapter must also be joined to Active Directory (AD) and be logged on with a domain user account.  The same goes for Kerberos vs. NTLM negotiation -- if the computer is not domain-joined, it will fall back to NTLM.
For Firefox running on Mac OS, SPNEGO will negotiate both Kerberos and NTLM if the computer is joined to AD. On non-domain-joined Mac OS, only NTLM will be selected as a mechanism for SPNEGO.

Chrome 

Google Chrome in Windows will use the Internet Explorer settings, so configure within Internet Explorer's Tools, Internet Options dialog, or by going to Control Panel and selecting Internet Options within sub-category Network and Internet.
For Chrome under Mac OS X, SPNEGO will work without any additional configuration, but will only negotiate NTLM. It is possible to configure a setting named AuthServerWhitelist to authorize host or domain names for SPNEGO protocol message exchanges. There are a couple of ways this can be done: (1) from the command line; or (2) joining Mac OS to AD.
·         Within a Mac OS Terminal shell, use the following commands.
You will need to get an initial ticket-granting ticket (TGT) from your Kerberos KDC (domain controller) in order to request service tickets for the IWA Adapter:
>kinit joe@ADEXAMPLE.PINGIDENTITY.COM
joe@ADEXAMPLE.PINGIDENTITY.COM's Password: (password here)

Now, cd into the Chrome directory and start Chrome with the AuthServerWhitelist parameter:
>cd "/Applications/Google Chrome.app/Contents/MacOS"
>./"Google Chrome" --auth-server-whitelist="*.adexample.pingidentity.com"

Once configured, this setting will persist every time Chrome is launched.  You will still need to run kinit every 10 hours in order to allow Chrome to request service tickets for the IWA adapter.
·         Joining Mac OS to Windows Active Directory:
For information on joining Mac OS to AD, please refer to the following: http://training.apple.com/pdf/Best_Practices_for_Integrating_OS_X_with_Active_Directory.pdf
For iOS (iPad and iPhone), only NTLM via SPNEGO has been tested.  Kerberos has not been tested or verified.
Safari
Safari on Windows supports SPNEGO with no further configuration.  It supports both Kerberos and NTLM as sub-mechanisms of SPNEGO.  The same rules apply to Safari as to Firefox or Chrome: the computer doing SSO must be domain-joined and logged in with a domain user account.  Otherwise, it will fall back to NTLM authentication.
Safari on Mac OS supports SPNEGO with Kerberos as an authentication mechanism if Mac OS is joined to AD (see here: http://training.apple.com/pdf/Best_Practices_for_Integrating_OS_X_with_Active_Directory.pdf). If Mac OS is not joined to AD, then SPNEGO will always negotiate NTLM as the authentication mechanism.

The Firefox and Chrome browsers may not relay NTLM tokens to the SAML server.

·         Additional configuration is required on the browsers as follows:
Google Chrome: Specify this parameter on the command line: google-chrome --auth-server-whitelist="*.clientdomain.tld"

·         Extended Protection mode is set to Required in your IIS configuration.
In that case, IIS will not accept the Chrome browser.  Extended Protection has to be disabled with the following steps:

Turn off Extended Protection. To do that, log in to the AD FS server:
·         Launch IIS Manager.
·         In the tree view on the left, go to Sites -> Default Web Site -> adfs -> ls.
·         With the "/adfs/ls" folder selected, double-click the Authentication icon.
·         Right-click Windows Authentication and select Advanced Settings.
·         In the Advanced Settings dialog, set Extended Protection to Off.

Under IIS 7.0, the Extended Protection configuration is hidden. You need to run the following command in order to disable it:

C:\Windows\System32\inetsrv>appcmd.exe set config "Default Web Site/adfs/ls" -section:system.webServer/security/authentication/windowsAuthentication /extendedProtection.tokenChecking:"None" /extendedProtection.flags:"None" /commit:apphost

Under Windows 2012:

1.         Open a PowerShell command window and load the AD FS PowerShell snap-in:
           Add-PsSnapIn Microsoft.Adfs.Powershell
2.         Disable the Extended Protection token check at the farm level:
           Set-ADFSProperties -ExtendedProtectionTokenCheck:None
3.         Restart AD FS and IIS (IISReset, Net Stop ADFS, Net Start ADFS)

Mozilla user agents are not authorized to authenticate under AD FS 3.0

To enable authentication with the Chrome, Firefox, and Safari browsers, update the supported browser list. Open a PowerShell CLI and execute the following command:

Set-ADFSProperties -WIASupportedUserAgents @("MSIE 6.0", "MSIE 7.0", "MSIE 8.0", "MSIE 9.0", "MSIE 10.0", "Trident/7.0", "MSIPC", "Windows Rights Management Client", "Mozilla/5.0", "Safari/6.0", "Safari/7.0")


NTLM and Kerberos work out of the box in some browsers. Others may require some configuration. See the details below for each browser.

Internet Explorer - Ensure that Internet Explorer can save session cookies.
Navigate to your Windows control panel and select Internet Options > Advanced.
Within Security, select Enable Integrated Windows Authentication, and then select OK.

Chrome - Chrome will only supply the NTLM token to the site if that site is on the approved list, provided as a parameter at browser startup. Without this parameter, the permission list will include Local Machine servers or Local Intranet security zone servers. To configure this parameter:
·         Create a shortcut for your Chrome browser.
·         Right-click on your Chrome browser shortcut and select Properties.
·         On the Shortcut tab, edit the Target field, adding the following parameter to the end of the existing value:
--auth-server-whitelist="hostname.company.com"
where hostname.company.com is the hostname and domain of the server hosting the OneLogin IWA. The URL must match exactly.
Select OK to confirm your settings.

Firefox - Make sure the deployment has been configured in HTTPS mode to avoid a user-visible transition warning.
In the address bar, enter "about:config".
If you see "This might void your warranty!" click "I'll be careful, I promise!"
On the configuration page, go to the network.negotiate-auth.trusted-uris and network.automatic-ntlm-auth.trusted-uris preference fields, double-click them, and enter the hostname of the OneLogin IWA server(s). You can enter multiple values separated by commas if two or more server instances are deployed.

If you are not entering the fully qualified domain name (FQDN) of your host servers, you will also need to toggle network.automatic-ntlm-auth.allow-non-fqdn and network.negotiate-auth.allow-non-fqdn to true, by selecting each preference and changing its value to True.

Network Slowness - Verify using Wireshark

A network can be slow for various reasons. If the root cause isn't obvious from performance graphs, cabling, and other hardware checks, Wireshark can be used to narrow it down. The following are some of the ways Wireshark can help:
·         What is being downloaded?

Once you have a packet capture opened in Wireshark, go to Statistics --> Protocol Hierarchy. This shows what types of traffic are going through the network. A high percentage of broadcast or peer-to-peer traffic is a bad sign. Also look for any other protocols that seem suspicious.

·         Quick snapshot of errors and connection issues.

In Wireshark, go to Analyze --> Expert Info. A high number of errors and warnings is cause for concern.

·         Connection speed to a particular website.

Use the Filter field to see traffic to only a particular website. For example, if your client has an IP of 192.168.2.25 and the website has an IP of 72.27.72.72, you can use a filter such as "ip.addr==192.168.2.25 && ip.addr==72.27.72.72".
Now go to Statistics --> Flow Graph. In the pop-up, choose "Displayed packets", "General flow", and "Standard source/destination addresses", and click "OK". The flow graph will show whether connection establishment is taking too long, whether there are too many retransmissions, and whether the connection is being re-established too many times.

·         Particular traffic type is consistently high over time.

This is much more useful after you have done the protocol analysis explained in point #1, or when you suspect a particular traffic flow.
In Wireshark, go to Statistics --> IO Graphs.

By default, it plots the total number of packets seen over time. If you want to see particular traffic as a portion of this total, type a display filter into the "Filter" field next to "Graph 2" and click "Graph 2". A second graph will appear under the default one.

·         How much time is spent in waiting for a response?

You can add a delta time column for this. Right-click on any of the column headers in Wireshark and click "Column Preferences". Click "Add", then change the "Field type" to "Delta time". You can also reposition the new column. It shows the time difference from the previous packet.



Restricting Groups

AD FS 2.0 federates all of a user's groups by default. You can restrict the groups to only those to which policies will be applied. Zscaler recommends putting users in groups that begin with a specific word, such as Internet, to make it easier to restrict group federation. For example, you can create groups such as Internet General and Internet Restricted.
To restrict the groups:
1.      Remove the group mapping from the rule that you created when you added a claim rule. 
To edit the existing claim rule:
a.      In the AD FS 2.0 Management window, open the Trust Relationships > Relying Party Trusts folder.
b.      Right-click the relying party trust that you created and select Edit Claim Rules.
c.       When the Edit Claim Rules window appears, click Edit Rule to modify the rule that you created when adding a claim rule.
d.     In the Configure Claim Rule window, delete the row that mapped the LDAP attribute for group to a claim rule type.
e.      Click OK.
2.      Create a new rule for group membership.
To add a new rule for group membership:
a.      In the AD FS 2.0 Management window, open the Trust Relationships > Relying Party Trusts folder.
b.      Right-click the relying party trust that you created and select Edit Claim Rules.
c.       When the Edit Claim Rules window appears, click Add Rule.
d.     Select Send Claims Using a Custom Rule and click Next.
e.      In the Custom Rule window, do the following:
·         Enter a name for this rule, such as “Return Group Membership”.
·         In the custom rule box, enter the following text to enumerate the group membership and put it into an array called memberOf:
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"] => add(store = "Active Directory", types = ("memberOf"), query = ";tokenGroups;{0}", param = c.Value);
f.        Click Finish.
g.      Click Add Rule.
h.      Select Send Claims Using a Custom Rule and click Next.
i.        In the Custom Rule window, do the following:
·         Enter a name for this rule, such as  “Restrict Group Membership”.
·         In the custom rule box, enter something like the following:
c:[Type == "memberOf", Value =~ "Internet.+"] => issue(claim = c);
The preceding regular expression matches any group name that begins with "Internet", e.g. 'Internet Access', 'InternetGroup3', or 'Internet Restricted'.
c:[Type == "memberOf", Value =~ "Sales|Marketing|HR"] => issue(claim = c);
The preceding regular expression matches only three groups: "Sales", "Marketing", and "HR".

c:[Type == "memberOf", Value =~ "AccessLevel[1-9]"] => issue(claim = c);
The preceding regular expression matches any group name that begins with "AccessLevel" followed by a digit 1 to 9, e.g. 'AccessLevel1' or 'AccessLevel7'.
f.        Click Finish.
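The claim-language patterns above can be sanity-checked against sample group names with any regex tool. Here grep -E stands in for the =~ operator (the group names are made-up examples):

```shell
# Check which sample group names the first rule's pattern ('Internet.+') matches.
matches=0
for group in "Internet Access" "InternetGroup3" "Sales" "AccessLevel7"; do
    if echo "$group" | grep -Eq 'Internet.+'; then
        echo "issued: $group"
        matches=$((matches + 1))
    fi
done
echo "$matches of 4 sample groups issued"
```

Swapping in 'Sales|Marketing|HR' or 'AccessLevel[1-9]' tests the other two rules the same way.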


Bandwidth quota and Bandwidth control



Bandwidth Quota

The bandwidth quota includes data uploaded to and downloaded from the URL category. To enforce the quota on specific users, groups, or departments, SSL inspection and authentication must be enabled.
If a user comes from a known location, the quota is reset at midnight based on the location time zone; for road warriors, the quota is reset based on the organization’s time zone.
The minimum value you can enter is 10 MB and the maximum value is 100000 MB.
Daily Time Quota: The time quota is based on the amount of time elapsed in a session while uploading and downloading data. Session idle time is ignored. The minimum value you can enter is 15 minutes and the maximum value is 600 minutes.

 If you apply a bandwidth quota and a daily time quota to certain users while keeping locations and groups set to Any, the rule applies to those particular users coming from any location and belonging to any group or department.

Bandwidth Control Policy

1.      Enable bandwidth control for the location.
 Specify the maximum upload and download bandwidth limits for each location in your organization. Note that about 5% - 7% of TCP traffic is overhead, such as packet headers. The Zscaler service does not include this in its bandwidth calculations; it only counts the application traffic. Therefore, the best practice for computing a location's bandwidth is as follows:
    Actual bandwidth – (5% - 7% overhead) = Upload and Download bandwidth
2.      Add rules to the policy.
            Policy > Web > Bandwidth Control.
NOTE: The service applies bandwidth controls to traffic from known locations only; that is, locations that are configured on the Zscaler admin portal. 
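As a worked example of the bandwidth formula in step 1 (the link size and overhead percentage are illustrative):

```shell
# A 100 Mbps link with ~6% protocol overhead leaves about 94 Mbps of
# application traffic to allocate as the location's upload/download limits.
link_mbps=100
overhead_pct=6    # pick a value in the 5-7% range
usable_mbps=$(( link_mbps * (100 - overhead_pct) / 100 ))
echo "Configure upload/download bandwidth: ${usable_mbps} Mbps"
```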

The Zscaler service rebalances the bandwidth in real time and buffers packets for application classes that hit the bandwidth quota limit during 1-second intervals. This behavior ensures that business-critical applications run at full speed, with no deterioration in quality.

  1. Each rule captures two measurable parameters: the guaranteed percentage of bandwidth allocated to the policy, and the maximum percentage it can use.
  2. Policies are executed from top to bottom.
  3. The last policy is hit only after the guaranteed bandwidth for all the policies above it has been allocated.
  4. The service applies bandwidth controls to traffic from known locations only; that is, locations that are configured on the Zscaler admin portal. The Bandwidth Control policy does not apply to road warriors because their traffic does not come from a configured location and their source IP address has unknown upload and download bandwidth values.

Tips : Zscaler Portal

Tips: Custom URL

1.      You can add 25,000 custom URLs across all categories.
2.      You can add 48 custom categories.
3.      You can add 30 keywords per category.
4.      You can add 1,000 keywords across all categories.

Tips: Policies

1.      Users, groups, and departments are combined with a logical OR.
2.      Location and time are combined with a logical AND.
3.      Policies are executed from top to bottom.
4.      Cloud App policies take precedence over URL Filtering. If you want to change this behavior, do the following:
·         Go to Administration > Cloud Configuration > Advanced Settings
·         Under Advanced Web App Control Options, enable Allow Cascading to URL Filtering
·         Save and activate
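The OR/AND evaluation in tips 1 and 2 can be sketched with simple boolean arithmetic (the match values are illustrative):

```shell
# 'Who' matches if the user OR group OR department criterion matches;
# the rule then fires only if who AND location AND time all match.
user_match=0; group_match=1; dept_match=0
location_match=1; time_match=1
who_match=$(( user_match | group_match | dept_match ))
rule_fires=$(( who_match & location_match & time_match ))
echo "rule fires: $rule_fires"
```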

Troubleshoot: Split brain seen intermittently on FGT a-p HA

Fortinet TAC requires the details below to investigate the issue further.
Provide the following output from both HA units in two separate files:

#get system status
#get system performance status
#diag sys top 1 40 (run for 30 seconds, then press CTRL+C to stop)
#diag sys top-summary (run for 30 seconds, then press CTRL+C to stop; only works on version 5, useful when troubleshooting memory issues)
#diagnose autoupdate versions
#diagnose hardware sys shm
#diag hard sys mem
#get sys ha status
#diag sys ha showcsum
#diag hard sys slab
#diag sys session stat
#diag firewall statistic show
#diag debug crashlog read

TAC will investigate the logs and provide an update. Typically, split brain occurs when HA connectivity fails, so verify the physical connectivity between the units before reaching out to TAC.

Forward specific URL or domain traffic using a FOR loop

Route specific URL or domain traffic to internal proxy and all other traffic to Zscaler.

function FindProxyForURL(url, host)
{
// Route the .cn domains to specific internal proxies (hostnames are examples)
            var Primary_proxy = "proxy1.internal.example.com";
            var Secondary_proxy = "proxy2.internal.example.com";
// List of hosts to connect to via the internal PROXY servers
            var proxy_list = new Array("*.cn", "*.cn/*");
// Return the internal proxies for matched domains/hosts
            for (var i = 0; i < proxy_list.length; i++)
            {
                var value = proxy_list[i];
                if (shExpMatch(url, value))
                {
                    return "PROXY " + Primary_proxy + ":80; PROXY " + Secondary_proxy + ":80";
                }
            }
  /* Default traffic forwarding to the Zscaler gateways. */
            return "PROXY ${GATEWAY}:80; PROXY ${SECONDARY_GATEWAY}:80";
}
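shExpMatch() performs shell-style glob matching against the URL, so the routing decision above can be sanity-checked with a bash case statement, which uses the same glob syntax (the proxy names below are placeholders, not the PAC's ${GATEWAY} macros):

```shell
# Mimic the PAC logic: URLs matching the .cn globs go to the internal proxy,
# everything else to the default gateway.
route() {
    case "$1" in
        *.cn|*.cn/*) echo "PROXY internal-proxy:80" ;;
        *)           echo "PROXY default-gateway:80" ;;
    esac
}
route "http://www.sina.cn/news"    # -> PROXY internal-proxy:80
route "https://www.example.com/"   # -> PROXY default-gateway:80
```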


FTP Control

By default, the Zscaler service does not allow users at a location to upload files to or download files from FTP sites. You can configure the FTP Control policy to allow access to specific sites. Zscaler nodes can be used to download/upload files to any FTP server on the Internet, and users from known locations can connect to FTP sites through Zscaler.
Note the following:
·         The FTP policy applies to traffic from the known locations of an organization.
·         The service supports FTP over HTTP. The anti-virus engine will scan the content for viruses and spyware. These connections are also subject to rules created under the URL Filtering Policy in the admin portal.
·         The service supports passive FTP only. If the destination server does not support passive FTP, the service generates an alert message to this effect in the end user's browser.
·         If a road warrior uses a dedicated port, then the service supports FTP over HTTP for road warriors. So when a road warrior’s browser connects to FTP sites and downloads files, the anti-virus engine of the service will be able to scan the content for viruses and spyware.
·         The service does not support AV scanning for native FTP traffic.
·         URL Filtering Policy rules take precedence over the FTP Control policy. For example, if you have a URL Filtering Policy rule that blocks access to Adult Material, the Zscaler service will block users who try to transfer files from ftp://ftp.playboy.com/
·         User-, department-, or group-level URL filtering rules blocking access to specific sites will not be enforced for FTP sites because FTP does not support cookies. Only rules applied to all users will be enforced. For example, if you have a catch-all URL Filtering rule that blocks access to Adult Material, anybody trying to ftp to ftp://ftp.playboy.com/ will get blocked.

Configuration and Use cases:

Under Policy > FTP Control, you will find FTP over HTTP and Native FTP Control.  These are global settings; user-based policy cannot be applied to FTP connections.
Enabling FTP over HTTP allows users to connect to FTP sites using a browser like IE or Firefox (with a manual proxy or PAC configured).  The URL policy is evaluated to allow or deny access to FTP.  For instance, suppose a user is trying to access ftp://tickets.zscaler.com, which is categorized under Professional Services.  The Professional Services category must be allowed in the URL policy for the location from which the user is connecting to the FTP site.
Enabling Native FTP Control allows users to use FTP clients, such as the FileZilla client with a proxy setting, to connect to FTP sites.  You can configure URL categories in this section to allow FTP connections.

Following is an example for each type of FTP control:

Case 1

 ftp://tickets.zscaler.com/ should be allowed and all other FTP sites should be blocked.
 FTP works for known-location users only.
 UI > Policy > Web > FTP Control > Enable FTP over HTTP, then create rules as follows.
 The service evaluates URL policies where Who is set to All: here Rule #3 blocks all sites, and Rule #2 allows FTP to ftp://ftp-it.denner.ch.



Note #1: If rule #3 is absent, all FTP sites would be allowed.

Note #2: If rule #3 exists but its Who is set to a specific user or group, all FTP sites would be allowed.

Create rule in UI > Policy > Web > URL & Cloud App Control accordingly.

Case 2

 How to block and allow native FTP client connections.
 E.g. using the FileZilla client to connect to ftp://ftp.ptcinfo.org/
 Make the client go through the proxy; settings:

ftp://ftp.ptcinfo.org/ would not connect (see image below)


Create a custom URL category and add the URL ftp.ptcinfo.org.  Then select the category or the URL ftp.ptcinfo.org under UI > Policy > Web > FTP Control > Native FTP Control.


 Now the Connection is successful

Note 1: FTP controls are a global configuration; there are no dedicated policies to control FTP traffic based on source.


SSL VPN conserve mode, one-time login per user, WAN link load balancing

SSL VPN conserve mode

FortiGate units perform all security profile processing in physical RAM. Since each model has a limited amount of memory, kernel conserve mode is activated when the remaining free memory is nearly exhausted or the AV proxy has reached the maximum number of sessions it can service. SSL VPN also has its own conserve mode: the FortiGate enters SSL VPN conserve mode before kernel conserve mode in an attempt to prevent the latter from triggering. During SSL VPN conserve mode, no new SSL connections are allowed. It starts when free memory is less than 25% of total memory (on FortiGates with less than 512 MB of memory) or less than 10% of total memory (on FortiGates with more than 512 MB built in). To determine if the FortiGate has entered SSL VPN conserve mode - CLI:
Run the following command in the CLI Console:
diagnose vpn ssl statistics
Result (note the SSLVPN state line):
FGVM080000120082 # diagnose vpn ssl statistics
SSLVPN statistics (root):
------------------
Memory unit:               1
System total memory:       2118737920
System free memory:        218537984
SSLVPN memory margin:      314572800
SSLVPN state:              conserve
Max number of users:       1
Max number of tunnels:     0
Max number of connections: 6
Current number of users:       0
Current number of tunnels:     0
Current number of connections: 0
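The sample output above shows why the unit is in conserve state: free memory has dropped below the SSLVPN memory margin. A quick check with the numbers from that output:

```shell
# Values copied from the 'diagnose vpn ssl statistics' sample output above.
total=2118737920      # System total memory
free=218537984        # System free memory
margin=314572800      # SSLVPN memory margin
free_pct=$(( free * 100 / total ))
echo "free memory: ${free_pct}% of total"
if [ "$free" -lt "$margin" ]; then
    echo "free memory is below the SSLVPN margin: conserve mode"
fi
```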

Allow one-time login per user

You can configure the SSL VPN portal so that each user can only log in once concurrently. That is, once logged into the portal, a user cannot go to another system and log in with the same credentials again. To allow one-time login per user - web-based manager:
Go to VPN > SSL-VPN Portals, select a portal, and enable Limit Users to One SSL-VPN Connection at a Time. It is disabled by default.
To allow one-time login per user - CLI:
config vpn ssl web portal
edit <portal_name>
set limit-user-logins enable
end

WAN link load balancing

You can set virtual-wan-link as the destination interface in a firewall policy (with SSL VPN as the source interface) for WAN link load balancing. This allows users to log into a FortiGate via SSL VPN for traffic inspection and then have their outbound traffic load balanced by WAN link load balancing.
CLI syntax
config firewall policy
edit <example>
set dstintf virtual-wan-link
next
end

Client device certificate authentication with multiple groups

Supported FortiOS version: 5.6.2

In the following example, we require clients connecting to a FortiGate SSL VPN to have a device certificate installed on their machine in order to authenticate to the VPN. Employees (in a specific OU in AD) will be required to have a device certificate to connect, while vendors (in a separate OU in AD) will not be required to have a device certificate. In VPN > SSL-VPN Settings, do not enable Require Client Certificate, but selectively enable client-cert in each authentication-rule based on the requirements through CLI instead. The following example assumes that remote LDAP users/groups have been pre-configured.

config vpn ssl settings
    set reqclientcert disable
    set servercert "Fortinet_Factory"
    set tunnel-ip-pools "SSLVPN_TUNNEL_ADDR1"
    set port 443
    set source-interface "wan1"
    set source-address "all"
    set default-portal "full-access"
    config authentication-rule
        edit 1
            set groups "Employee"
            set portal "tunnel-access"
            set realm ''
            set client-cert enable
            set cipher high
            set auth any
        next
        edit 2
            set groups "Vendor"
            set portal "tunnel-access"
            set realm ''
            set client-cert disable
            set cipher high
            set auth any
        next
    end
end



config user group
    edit "Employee"
        set member "user1"
    next
    edit "Vendor"
        set member "user2"
    next
end

Configure the remainder of the SSL VPN tunnel as normal (creating a firewall policy allowing SSL VPN access to the internal network, including the VPN groups, necessary security profiles, etc.).

If configured correctly, only the 'Employee' group should require a client certificate to authenticate to the VPN.

Generate a self-signed SSL certificate using the OpenSSL for DPI / Full inspection

To generate a self-signed SSL certificate using the OpenSSL, complete the following steps:

1.      Write down the Common Name (CN) for your SSL Certificate. The CN is the fully qualified name for the system that uses the certificate. If you are using Dynamic DNS, your CN should have a wild-card, for example: *.itzecurity.in. Otherwise, use the hostname or IP address set in your Gateway Cluster (for example. 192.16.183.131 or dp1.acme.com).

2.      Run the following OpenSSL command to generate your private key and public certificate. Answer the questions and enter the Common Name when prompted.
openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem

3.      Review the created certificate:
openssl x509 -text -noout -in certificate.pem

4.      Combine your key and certificate in a PKCS#12 (P12) bundle:
openssl pkcs12 -inkey key.pem -in certificate.pem -export -out certificate.p12

5.      Validate your P12 file:
openssl pkcs12 -in certificate.p12 -noout -info

Note:
·         Your P12 file must contain the private key, the public certificate from the Certificate Authority and all intermediate certificates used for signing.

·         Your P12 file can contain a maximum of 10 intermediate certificates.
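The five steps can also be run non-interactively as one script (a sketch: the CN and the P12 export password are placeholders, and -subj replaces the interactive questions from step 2):

```shell
# Generate a private key and self-signed certificate, review it, bundle it
# into a P12, and validate the bundle.
openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 \
    -subj "/CN=dp1.acme.com" -out certificate.pem
openssl x509 -text -noout -in certificate.pem > /dev/null        # step 3: review
openssl pkcs12 -inkey key.pem -in certificate.pem -export \
    -passout pass:changeit -out certificate.p12                  # step 4: bundle
openssl pkcs12 -in certificate.p12 -noout -info \
    -passin pass:changeit                                        # step 5: validate
```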
