Stumbling around the Fortinet CLI…

Blaming the firewall is a time-honored technique practiced by users, IT managers, and sysadmins alike. As network engineers we could point out that solar flares are as likely a cause of [insert issue of the day] as the firewall, but honestly, if they can’t see that the software updates they just did are the likely reason the thing that wasn’t broken now is, chances are you aren’t going to convince them the firewall isn’t actively plotting against them.

My most successful strategy has been to take up residence in Wireshark Land, where the packets don’t lie and blame-storming takes a back seat. Recently, for example, I took captures on two Linux servers: a web server in the DMZ and a database server on the internal network. The captures showed that the web server could initially reach the database server, but that communications broke down after a few minutes. The database server clearly never received the last of the web server’s packets.

Realizing there may actually be something to the “it’s the firewall” claim, I turned to the CLI of the firewall to see if the packets were even getting to the firewall interface and then out the other side. If you haven’t done this in the Fortigate world, it looks something like this, where port2 is my DMZ port:

My_Fortigate1 (MY_INET) # diag sniffer packet port2 'host 10.10.X.X'
interfaces=[port2]
filters=[host 10.10.X.X]
1.753661 10.10.X.X.33619 -> 10.10.X.X.5101: fin 669887546 ack 82545707
2.470412 10.10.X.X.33617 -> 10.10.X.X.5101: fin 990903181 ack 1556689010
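
Side note: the sniffer command also takes an optional verbosity level and packet count, which comes in handy when you want more than the one-line summaries above. Something like this (narrowing the filter to the port seen in the output above, printing the interface name with each packet via verbose level 4, and stopping after 100 packets) should do it:

My_Fortigate1 (MY_INET) # diag sniffer packet port2 'host 10.10.X.X and port 5101' 4 100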

I ran a similar sniffer session to confirm that the database server wasn’t seeing the traffic in question on the trust side of the network. Sure enough, a few minutes after communications were first established, packets that made it from the web server to the DMZ side of the firewall quit making their way out the trust side, never getting a chance to talk to the database server. The traffic log from the FortiAnalyzer showed the packets being denied for reason code “No session matched.” Fabulous.
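
While I was in there, the session table itself was worth a look. You can filter it and dump it from the CLI; something along these lines (filtering on the database server’s address) shows the matching entries, including an expire value counting down how many seconds the session has left before the firewall forgets about it:

My_Fortigate1 (MY_INET) # diag sys session filter dst 10.10.X.X
My_Fortigate1 (MY_INET) # diag sys session list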

Thinking this looked like a session timer of some kind, I examined the Fortigate policies from the GUI admin page, but couldn’t find anything labeled “hey dummy, here’s the setting that’s timing out your sessions.” That’s because the setting I was looking for is apparently only visible in the CLI.*

The CLI showed the full policy (output abbreviated), including the set session-ttl:

My_Fortigate1 (My_INET) # config firewall policy
My_Fortigate1 (policy) # edit 50
My_Fortigate1 (50) # show full
config firewall policy
    edit 50
        set srcintf "port2"
        set dstintf "port1"
        set srcaddr "MY WEBSERVER"
        set dstaddr "10.10.X.X" "Servers_10.10.X.X/32"
        set rtp-nat disable
        set action accept
        set status enable
        set session-ttl 0

A session-ttl of 0 means “use the default,” which in my case was 300 seconds. I was able to raise it just for the policy in question using these commands:

My_Fortigate1 (My_INET) # config firewall policy
My_Fortigate1 (policy) # edit 50
My_Fortigate1 (50) # set session-ttl 3900
My_Fortigate1 (50) # end
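
For what it’s worth, the 300-second default itself lives in its own corner of the CLI. Changing it globally rather than per-policy (which I didn’t want to do) would go something like this; there’s also a config port sub-section in there if you need protocol- or port-specific timeouts:

My_Fortigate1 (My_INET) # config system session-ttl
My_Fortigate1 (session-ttl) # set default 3600
My_Fortigate1 (session-ttl) # end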

The longer TTL gave the application we were dealing with enough time to end its sessions gracefully before the firewall so rudely cut them off, and it also kept my database guy from bugging me (for the rest of that day, anyway).

*If this is in the GUI, I certainly do not possess patience levels high enough to take the time to find it, but feel free to point me to its location in the comments.

Published 2/4/2014