FortiGate NGFW VM From Azure Marketplace

Draft

FortiGate Azure Edge Design

  • port1 = WAN (private IP, Azure LB SNAT)
  • port2 = LAN (static IP, no gateway)
  • Single default route via port1
  • UDR: AKS > FortiGate
  • Firewall policy: LAN > WAN with SNAT
  • Policy route: LAN > WAN with gateway
  • Asymmetric routing enabled
  • Inbound HTTPS via VIP

Check available VM images:

$ az vm image list \
  --publisher fortinet \
  --offer fortinet_fortigate-vm_v5 \
  --all \
  --output table
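
If this is the first deployment of the PAYG plan in the subscription, you may also need to accept the Marketplace terms (assuming the fortinet_fg-vm_payg_2023 SKU used later in this guide):

$ az vm image terms accept \
  --publisher fortinet \
  --offer fortinet_fortigate-vm_v5 \
  --plan fortinet_fg-vm_payg_2023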

Create FortiGate Infrastructure Using Terraform

Create WAN and LAN subnets:

resource "azurerm_subnet" "fgt_wan_dev" {
  name                 = "FortigateWanSubnet"
  resource_group_name  = data.azurerm_resource_group.main_rg.name
  virtual_network_name = azurerm_virtual_network.application_dev.name
  address_prefixes     = ["10.X.8.0/24"]
}

resource "azurerm_subnet" "fgt_lan_dev" {
  name                 = "FortigateLanSubnet"
  resource_group_name  = data.azurerm_resource_group.main_rg.name
  virtual_network_name = azurerm_virtual_network.application_dev.name
  address_prefixes     = ["10.X.9.0/24"]
}

Create WAN and LAN subnet network security groups (NSGs):

resource "azurerm_network_security_group" "fgt_wan_dev_nsg" {
  name                 = "fgt-wan-dev-nsg"
  location             = data.azurerm_resource_group.main_rg.location
  resource_group_name  = data.azurerm_resource_group.main_rg.name

  tags = {
    environment = "dev"
    application = "application"
  }
}

resource "azurerm_network_security_group" "fgt_lan_dev_nsg" {
  name                 = "fgt-lan-dev-nsg"
  location             = data.azurerm_resource_group.main_rg.location
  resource_group_name  = data.azurerm_resource_group.main_rg.name

  tags = {
    environment = "dev"
    application = "application"
  }
}

Create public IP address to accept incoming traffic:

resource "azurerm_public_ip" "application_fgt_wan_ip_dev" {
  name                 = "application-fortigate-wan-pip-dev"
  location             = data.azurerm_resource_group.main_rg.location
  resource_group_name  = data.azurerm_resource_group.main_rg.name
  allocation_method    = "Static"
  sku                  = "Standard"
}

output "application_fgt_wan_ip_dev" {
  value = azurerm_public_ip.application_fgt_wan_ip_dev.ip_address
}
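
After terraform apply, read the allocated address back with:

$ terraform output application_fgt_wan_ip_dev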

Create Azure load balancer (LB):

resource "azurerm_lb" "application_fgt_ingress_lb_dev" {
  name                 = "application-fgt-ingress-lb-dev"
  location             = data.azurerm_resource_group.main_rg.location
  resource_group_name  = data.azurerm_resource_group.main_rg.name
  sku                  = "Standard"

  frontend_ip_configuration {
    name                 = "public-frontend"
    public_ip_address_id = azurerm_public_ip.application_fgt_wan_ip_dev.id
  }
}

Configure Azure LB backend address pool:

resource "azurerm_lb_backend_address_pool" "application_fgt_backend_dev" {
  name            = "application-fgt-backend-dev"
  loadbalancer_id = azurerm_lb.application_fgt_ingress_lb_dev.id
}

resource "azurerm_network_interface_backend_address_pool_association" "application_fgt_wan_lb_attach_dev" {
  network_interface_id    = azurerm_network_interface.fgt_wan_nic_dev.id
  ip_configuration_name   = "ipconfig1"
  backend_address_pool_id = azurerm_lb_backend_address_pool.application_fgt_backend_dev.id
}

Create Azure LB probe:

resource "azurerm_lb_probe" "https" {
  name                = "https-probe"
  loadbalancer_id     = azurerm_lb.application_fgt_ingress_lb_dev.id
  protocol            = "Tcp"
  port                = 443
  interval_in_seconds = 5
  number_of_probes    = 2
}

Create Azure LB inbound HTTPS rule:

resource "azurerm_lb_rule" "https" {
  name                           = "https-rule"
  loadbalancer_id                = azurerm_lb.application_fgt_ingress_lb_dev.id
  protocol                       = "Tcp"
  frontend_port                  = 443
  backend_port                   = 443
  frontend_ip_configuration_name = "public-frontend"
  backend_address_pool_ids       = [azurerm_lb_backend_address_pool.application_fgt_backend_dev.id]
  probe_id                       = azurerm_lb_probe.https.id
  disable_outbound_snat          = true # NOTE: Must be true if outbound rules exist!
}

Create Azure LB outbound rule:

resource "azurerm_lb_outbound_rule" "fgt_outbound" {
  name                           = "fgt-outbound-rule"
  loadbalancer_id                = azurerm_lb.application_fgt_ingress_lb_dev.id
  protocol                       = "All"

  backend_address_pool_id        = azurerm_lb_backend_address_pool.application_fgt_backend_dev.id

  frontend_ip_configuration {
    name = "public-frontend"
  }

  idle_timeout_in_minutes = 30
  enable_tcp_reset        = true
}

Create LAN network interface (NIC):

resource "azurerm_network_interface" "fgt_lan_nic_dev" {
  name                 = "application-lan-nic-dev"
  location             = data.azurerm_resource_group.main_rg.location
  resource_group_name  = data.azurerm_resource_group.main_rg.name
  enable_ip_forwarding = true

  ip_configuration {
    name                          = "ipconfig1"
    subnet_id                     = azurerm_subnet.fgt_lan_dev.id
    private_ip_address_allocation = "Static"
    private_ip_address            = "10.X.9.10"
  }
}

Create WAN network interface (NIC):

resource "azurerm_network_interface" "fgt_wan_nic_dev" {
  name                 = "application-wan-nic-dev"
  location             = data.azurerm_resource_group.main_rg.location
  resource_group_name  = data.azurerm_resource_group.main_rg.name
  enable_ip_forwarding = true

  ip_configuration {
    name                          = "ipconfig1"
    subnet_id                     = azurerm_subnet.fgt_wan_dev.id
    private_ip_address_allocation = "Static"
    private_ip_address            = "10.X.8.10"
  }
}

Create network security rule to allow incoming traffic on WAN interface:

resource "azurerm_network_security_rule" "fgt_wan_dev_https_inbound" {
  name                        = "fgt_wan_dev_https_inbound"
  priority                    = 101
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  source_address_prefix       = "*"
  destination_port_range      = "443"
  destination_address_prefixes = [
    azurerm_network_interface.fgt_wan_nic_dev.private_ip_address,
    azurerm_public_ip.application_fgt_wan_ip_dev.ip_address
  ]
  resource_group_name         = data.azurerm_resource_group.main_rg.name
  network_security_group_name = azurerm_network_security_group.fgt_wan_dev_nsg.name
}

Create network security rule to allow inbound SSH traffic on WAN private IP:

resource "azurerm_network_security_rule" "fgt_wan_dev_ssh_inbound" {
  name                        = "fgt_wan_dev_ssh_inbound"
  priority                    = 102
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  source_address_prefix       = "*"
  destination_port_range      = "22"
  destination_address_prefixes = [
    azurerm_network_interface.fgt_wan_nic_dev.private_ip_address,
  ]
  resource_group_name         = data.azurerm_resource_group.main_rg.name
  network_security_group_name = azurerm_network_security_group.fgt_wan_dev_nsg.name
}

Create network security rule to allow FortiGate admin WebUI traffic on WAN private IP:

resource "azurerm_network_security_rule" "fgt_wan_dev_fgtwebui_inbound" {
  name                        = "fgt_wan_dev_fgtwebui_inbound"
  priority                    = 103
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  source_address_prefix       = "*"
  destination_port_range      = "8443"
  destination_address_prefixes = [
    azurerm_network_interface.fgt_wan_nic_dev.private_ip_address,
  ]
  resource_group_name         = data.azurerm_resource_group.main_rg.name
  network_security_group_name = azurerm_network_security_group.fgt_wan_dev_nsg.name
}

Create network security rule to allow incoming traffic from VNET CIDR and VPN CIDR on LAN subnet:

resource "azurerm_network_security_rule" "fgt_lan_dev_aks_inbound" {
  name                        = "fgt_lan_dev_aks_inbound"
  priority                    = 101
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  source_address_prefixes     = ["10.X.0.0/16", "172.22.0.0/16"]
  destination_port_range      = "*"
  destination_address_prefix  = "*"
  resource_group_name         = data.azurerm_resource_group.main_rg.name
  network_security_group_name = azurerm_network_security_group.fgt_lan_dev_nsg.name
}

Associate network security groups with WAN and LAN subnets:

resource "azurerm_subnet_network_security_group_association" "fgt_wan_dev_subnet_nsg_association" {
  subnet_id                 = azurerm_subnet.fgt_wan_dev.id
  network_security_group_id = azurerm_network_security_group.fgt_wan_dev_nsg.id
}

resource "azurerm_subnet_network_security_group_association" "fgt_lan_dev_subnet_nsg_association" {
  subnet_id                 = azurerm_subnet.fgt_lan_dev.id
  network_security_group_id = azurerm_network_security_group.fgt_lan_dev_nsg.id
}

Finally, create the FortiGate NGFW VM:

resource "azurerm_virtual_machine" "fortigate" {
  name                         = "${var.prefix}-vm"
  location                     = data.azurerm_resource_group.main_rg.location
  resource_group_name          = data.azurerm_resource_group.main_rg.name
  vm_size                      = "Standard_F2s_v2"
  primary_network_interface_id = azurerm_network_interface.fgt_wan_nic_dev.id
  network_interface_ids = [
    azurerm_network_interface.fgt_wan_nic_dev.id,
    azurerm_network_interface.fgt_lan_nic_dev.id,
  ]

  storage_image_reference {
    publisher = "fortinet"
    offer     = "fortinet_fortigate-vm_v5"
    sku       = "fortinet_fg-vm_payg_2023"
    version   = "latest"
  }

  storage_os_disk {
    name              = "application-dev-ftg-os-disk0"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name  = "${var.prefix}-host"
    admin_username = var.fgt_admin_username
    admin_password = var.fgt_admin_password
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  plan {
    name      = "fortinet_fg-vm_payg_2023"
    product   = "fortinet_fortigate-vm_v5"
    publisher = "fortinet"
  }

}
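
FortiGate on Azure can also be bootstrapped at deploy time via custom data, which the image reads as its day-0 CLI configuration. A minimal sketch of the os_profile block (the port2 settings are the same ones applied manually under Port 2 IP Configuration below):

os_profile {
  computer_name  = "${var.prefix}-host"
  admin_username = var.fgt_admin_username
  admin_password = var.fgt_admin_password

  # Day-0 bootstrap; FortiGate applies this CLI on first boot.
  custom_data = <<-EOT
    config system interface
        edit "port2"
            set mode dhcp
            set allowaccess ping https ssh
        next
    end
  EOT
}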

Configure UDR for AKS subnet to route all outbound traffic to the FortiGate NGFW VM:

resource "azurerm_route" "application_dev_aks_default" {
  name                   = "application-dev-aks-default-to-fortigate"
  resource_group_name    = data.azurerm_kubernetes_cluster.dev_aks_cluster.node_resource_group
  route_table_name       = "${element(split("/", data.azurerm_subnet.k8s_dev.route_table_id), length(split("/", data.azurerm_subnet.k8s_dev.route_table_id)) - 1)}"
  address_prefix         = "0.0.0.0/0"
  next_hop_type          = "VirtualAppliance"
  next_hop_in_ip_address = azurerm_network_interface.fgt_lan_nic_dev.private_ip_address
}
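
The route above assumes two data sources that are not defined elsewhere in this guide; a sketch with hypothetical names:

data "azurerm_kubernetes_cluster" "dev_aks_cluster" {
  name                = "application-dev-aks" # hypothetical cluster name
  resource_group_name = data.azurerm_resource_group.main_rg.name
}

data "azurerm_subnet" "k8s_dev" {
  name                 = "KubernetesSubnet" # hypothetical subnet name
  virtual_network_name = azurerm_virtual_network.application_dev.name
  resource_group_name  = data.azurerm_resource_group.main_rg.name
}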

VM creation takes approximately 2 minutes, and FortiGate initialization can take another 5 minutes.

Port 2 IP Configuration

A FortiGate VM (or any Azure multi-NIC VM) always brings up the primary NIC first and assigns it as its "main" interface during boot.

If the WAN NIC is primary, FortiGate brings up port1 (WAN) first, assigns it the private IP from the WAN subnet, and treats port1 as the default management interface. On FortiOS v5.x the static IP for port2 (LAN) is ignored until the configuration is applied inside FortiGate.

Azure sets the IP on the NIC, but FortiGate does not automatically configure the interface inside the firewall OS unless bootstrapped:

fgt-host # show system interface
config system interface
    edit "port1"
        set vdom "root"
        set mode dhcp
        set allowaccess ping https ssh fgfm
        set type physical
        set role wan
        set snmp-index 1
    next
    edit "ssl.root"
        set vdom "root"
        set type tunnel
        set alias "SSL VPN interface"
        set snmp-index 2
    next
end

Enable DHCP on port2:

config system interface
    edit port2
        set mode dhcp
    next
end
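
In Azure, dhcp mode is safe here: the platform's DHCP always hands the NIC its configured static private IP (10.X.9.10 above), so the guest stays in sync with the fabric.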

Enable remote access on port2:

config system interface
    edit port2
        set allowaccess ping https ssh
    next
end

Check the NSG applied to port2 using the Azure CLI:

az network nic list-effective-nsg -g <RG> -n <lan-nic-dev>

Check routing table:

fgt-host # get router info routing-table all
Routing table for VRF=0
Codes: K - kernel, C - connected, S - static, R - RIP, B - BGP
       O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, ia - IS-IS inter area
       * - candidate default

S*      0.0.0.0/0 [5/0] via 10.X.8.1, port1
C       10.X.8.0/24 is directly connected, port1

Add a custom static route BACK to the VPN client subnet if you want to access the FortiGate WebUI from port2 (optional):

fgt-host # config router static
    edit 100
        set dst 172.16.211.0 255.255.255.0
        set gateway 10.X.9.1
        set device port2
    next
end

Get system information using SSH access:

get system interface physical
get system global
get system interface

Change the default WebUI port:

fgt-host # config system global
    set admin-sport 8443
end
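
The WebUI is then reachable at https://<port1-private-IP>:8443, matching the fgt_wan_dev_fgtwebui_inbound NSG rule created earlier.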

Configure Virtual IP For External Access

Internet
   |
[ Public IP ]
   |
Azure LB
   |
FortiGate VM port1 (WAN; ingress traffic enters and leaves on port1)
   |
   +── 10.X.3.0/24
        └── 10.X.3.250:443

Virtual IP - Mapping the port1 private IP (the Azure LB forwards public traffic here) to the AKS ingress:

config firewall vip
    edit "fgt-private-ip-to-aks-ingress"
        set uuid be0ac9ec-dac2-51f0-d64a-b767e8d69c47
        set extip 10.X.8.10
        set extintf "port1"
        set portforward enable
        set mappedip "10.X.3.250"
        set extport 443
        set mappedport 443
    next
end
config firewall policy
    edit 1
        set name "WAN-to-AKS-Ingress"
        set uuid bd0d0dac-dad1-51f0-3941-847b5347b630
        set srcintf "port1"
        set dstintf "port1"
        set srcaddr "all"
        set dstaddr "fgt-private-ip-to-aks-ingress"
        set action accept
        set schedule "always"
        set service "HTTP" "HTTPS"
        set fsso disable
    next
end
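
To confirm the VIP and policy were created as intended:

show firewall vip
show firewall policy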

Diagnostics:

diagnose sniffer packet any "tcp port 443" 4 0 a
interfaces=[any]
filters=[tcp port 443]
2025-12-16 15:59:37.890804 port1 out 10.X.9.10.443 -> 172.16.211.2.41460: fin 759501264 ack 1582031139
2025-12-16 15:59:38.052880 port1 in 172.16.211.2.41460 -> 10.X.9.10.443: psh 1582031139 ack 759501265
2025-12-16 15:59:38.052914 port1 out 10.X.9.10.443 -> 172.16.211.2.41460: rst 759501265
2025-12-16 15:59:38.070814 port1 out 10.X.9.10.443 -> 172.16.211.2.41472: fin 4134871143 ack 686119872
2025-12-16 15:59:38.160840 port1 out 10.X.9.10.443 -> 172.16.211.2.41478: fin 1996628204 ack 1748540287

Inspection mode used during this setup:

config system settings
    set inspection-mode flow
end

Enable PORT2 > PORT1 Traffic

Sniffer A (LAN side)

diag sniffer packet port2 'tcp and port 443' 4

Sniffer B (WAN side)

diag sniffer packet port1 'tcp and port 443' 4

Result

fgt-host # diag sniffer packet port2 'tcp and port 443' 4
interfaces=[port2]
filters=[tcp and port 443]
1.543467 port2 -- 10.X.3.6.39846 -> 18.159.197.225.443: syn 3728816835
6.297131 port2 -- 10.X.3.5.37502 -> 34.240.123.193.443: syn 3353424429
7.431359 port2 -- 10.X.3.6.54384 -> 172.65.251.78.443: syn 2086802187
8.199222 port2 -- 10.X.3.6.44830 -> 34.241.54.72.443: syn 3797170027
12.075067 port2 -- 10.X.3.6.42914 -> 34.96.126.106.443: syn 1227266818
...
^C
43 packets received by filter
0 packets dropped by kernel

fgt-host # diag sniffer packet port1 'tcp and port 443' 4
interfaces=[port1]
filters=[tcp and port 443]
3.274125 port1 -- 168.63.129.16.63156 -> 10.X.8.10.443: fin 4101568114 ack 1410983035
3.274146 port1 -- 10.X.8.10.63156 -> 10.X.3.250.443: fin 4101568114 ack 1410983035
3.274749 port1 -- 168.63.129.16.63216 -> 10.X.8.10.443: syn 3634019582
3.274783 port1 -- 10.X.8.10.63216 -> 10.X.3.250.443: syn 3634019582
3.274988 port1 -- 10.X.3.250.443 -> 10.X.8.10.63156: fin 1410983035 ack 4101568115
3.274991 port1 -- 10.X.8.10.443 -> 168.63.129.16.63156: fin 1410983035 ack 4101568115
...
^C
12 packets received by filter
0 packets dropped by kernel

On port2 (LAN side) you see LOTS of SYNs like:

10.X.3.6.xxxxx > Internet:443  SYN
10.X.3.5.xxxxx > Internet:443  SYN
  • Traffic from AKS reaches FortiGate - OK.
  • Source IPs are 10.X.3.x (AKS node IPs) - OK.

On port1 (WAN side) you see ONLY THIS:

168.63.129.16 > 10.X.8.10:443
10.X.8.10 > 10.X.3.250:443

Those are Azure Load Balancer health probes (168.63.129.16 is Azure's probe source address) and your inbound VIP traffic.

  • ZERO packets from 10.X.3.x going out
  • ZERO SNATed packets
  • ZERO SYNs toward Internet

CONCLUSION: FortiGate is NOT forwarding AKS traffic from port2 to port1 at all. This is 100% a FortiGate-side issue, not an Azure one.

It turns out to be a strict Reverse Path Forwarding (RPF) problem:

  • Pods enter on port2
  • Policy route forces exit on port1
  • FortiGate sees asymmetric path
  • Strict RPF silently drops packets
  • No logs, no counters, no errors

Looks like black magic.

Most important: enable asymmetric routing! Without it, policy-routed traffic in Azure dies.

config system settings
    set asymroute enable
end
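
Keep in mind that asymroute relaxes stateful session checks for the whole VDOM, so treat it as a workaround and keep routing symmetric where the topology allows.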

Add a rule to allow outgoing traffic from the AKS nodes (the "AKS-Nodes" address object is created under Good To Know below):

config firewall policy
    edit 100
        set name "AKS-to-Internet"
        set srcintf "port2"        # interface toward AKS subnet
        set dstintf "port1"        # WAN interface
        set srcaddr "AKS-Nodes"    # 10.X.3.0/24 AKS Nodes subnet
        set dstaddr "all"
        set service "HTTPS"
        set action accept
        set nat enable
        set schedule "always"
    next
end

Good To Know

Stopping/deallocating the FortiGate VM will silently destroy your static routes.

How to create the address objects:

config firewall address
    edit "AKS-Nodes"
        set subnet 10.X.3.0 255.255.255.0
    next
    edit "AKS-Pods"
        set subnet 172.22.0.0 255.255.0.0
    next
end

On older FortiOS 5.x versions, creating router policies can help in some cases (on FortiOS 7.x everything works fine without any policy routes):

config router policy
    edit 1
        set input-device "port2"
        set src "10.X.3.0/24"
        set dst "0.0.0.0/0"
        set output-device "port1"
        set gateway 10.X.8.1
    next
end

NSG

You should add a 443 allow rule with both the WAN private IP and the public IP as destinations (as in the fgt_wan_dev_https_inbound rule above) to make everything work.

Subnets UDR

Workload Subnet > UDR > next hop = FortiGate private IP on WAN side
(or the FortiGate LAN interface, depending on topology)

FortiGate performs NAT to its WAN private IP. Azure translates to Public IP when needed.

Route name: default-to-fgt
Prefix: 0.0.0.0/0
Next hop type: Virtual appliance
Next hop: <FortiGate private IP on receiving interface>

Reboot Survival

After a reboot, verify the system state:

fgt-host # get system status
Version: FortiGate-VM64-AZUREONDEMAND v6.0.4,build0231,190107 (GA)
Virus-DB: 1.00000(2018-04-09 18:07)
Extended DB: 1.00000(2018-04-09 18:07)
Extreme DB: 1.00000(2018-04-09 18:07)
IPS-DB: 6.00741(2015-12-01 02:30)
IPS-ETDB: 0.00000(2001-01-01 00:00)
APP-DB: 6.00741(2015-12-01 02:30)
INDUSTRIAL-DB: 6.00741(2015-12-01 02:30)
Serial-Number: FGTAZRJEIY24W618
IPS Malicious URL Database: 5.00609(2025-12-03 01:15)
Botnet DB: 4.00987(2025-12-11 00:17)
VM Resources: 2 CPU, 3968 MB RAM
BIOS version: 04000002
Log hard disk: Not available
Hostname: fgt-host
Operation Mode: NAT
Current virtual domain: root
Max number of virtual domains: 1
Virtual domains status: 1 in NAT mode, 0 in TP mode
Virtual domain configuration: disable
FIPS-CC mode: disable
Current HA mode: standalone
Branch point: 0231
Release Version Information: GA
FortiOS x86-64: Yes
System time: Fri Dec 19 06:26:59 2025

Dump the FortiGate configuration:

ssh admin@<FGT-PRIVATE-IP> "show full-configuration" > ~/tmp/fgt-config.cfg

Logging

Syslogd > AKS Loki Promtail Daemonset

FortiOS v7.6.0. The idea is to expose the AKS Loki Promtail service and configure it to receive syslogd traffic from the FortiGate VM.

Show all available logging options:

config log ?

Confirm that syslogd is available for your FortiOS configuration:

syslogd                   Configure first syslog device.
syslogd2                  Configure second syslog device.
syslogd3                  Configure third syslog device.
syslogd4                  Configure fourth syslog device.

For the official Loki Helm chart, enable external access in the Promtail configuration:

promtail:
  extraPorts:
    syslog:
      name: tcp-syslog
      annotations:
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"
        service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "k8s_subnet_name"
      labels: {}
      containerPort: 1514
      protocol: TCP
      service:
        type: LoadBalancer
        clusterIP: null
        port: 1514
        externalIPs: []
        nodePort: null
        loadBalancerIP: 10.X.X.X
        loadBalancerSourceRanges: []
        externalTrafficPolicy: null

Enable syslog scraping configuration:

promtail:
  enabled: true
  config:
    logLevel: info
    serverPort: 3101
    clients:
      - url: http://{{ .Release.Name }}:3100/loki/api/v1/push
    snippets:
      scrapeConfigs: |
        - job_name: syslog
          syslog:
            listen_address: 0.0.0.0:1514
            idle_timeout: 60s
            label_structured_data: true
            labels:
              job: vm-syslog
          relabel_configs:
          - source_labels: ['__syslog_message_hostname']
            target_label: host

FortiGate sends RFC3164 by default, while Promtail parses RFC5424 unless told otherwise:

error parsing syslog stream
err="expecting a version value in the range 1-999 [col 5]"

RFC5424:

<PRI>VERSION TIMESTAMP HOST APP PROCID MSGID MSG
<134>1 2025-12-23T11:41:15Z fortigate FGT 1234 ID47 User login successful

RFC3164 (BSD syslog):

<PRI>MMM dd hh:mm:ss HOST TAG: MSG
<134>Dec 23 11:41:15 fortigate sshd[1234]: Accepted password for admin

Configure FortiGate:

config log syslogd setting
  set status enable
  set server "10.X.X.X"
  set port 1514
  set mode reliable
  set facility local7
  set source-ip <FORTIGATE_INTERFACE_IP>
  set format default
end

Enable log categories. FortiGate does not send logs unless categories are enabled.

config log syslogd filter
  set severity information
  set forward-traffic enable
  set local-traffic enable
  set multicast-traffic enable
  set sniffer-traffic enable
  set anomaly enable
  set voip enable
end

Force FortiGate to emit true RFC5424 format:

config log syslogd setting
  set format rfc5424
  set mode reliable
end
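
Generate a few test log entries on the FortiGate to confirm end-to-end delivery:

diagnose log test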

Verify using kubectl -n monitoring get svc:

loki-promtail-syslog                  LoadBalancer   172.23.252.124   10.X.X.X   1514:31742/TCP   18s

December 5, 2025