

After the basics in part I, on to IPv6 and NAT. The title is misleading here: iptables exists for IPv6 and iptables can do NAT, but iptables cannot do NAT for IPv6 connections.

As for IPv6, this part is very simple: just add a ‘6’ between ‘ip’ and ‘tables’…

iptables04

… and it will work for IPv6. As you can see above, since IPv6 addresses are longer, rules tend to split over two rows in a smaller console window.

NAT is an entirely different matter, as it involves translating an IP address in the IP header of a packet. It doesn't have much use on a standalone system, but if the Linux machine is used as a router, it's often needed.

I mentioned before that iptables uses chains, but it also uses tables. The table discussed so far is the 'filter' table; the 'nat' table takes care of NAT. Since it's an entirely different table, it has its own set of chains:

  • PREROUTING, which applies NAT before the packet is routed or checked by the ‘filter’ table. It is most useful for destination NAT.
  • INPUT is not present in all versions and rarely serves any practical purpose.
  • OUTPUT is for packets originating from the local machine. In general these don't need NAT, so this chain is rarely used.
  • POSTROUTING applies NAT after the routing of the packet, if it hasn’t been filtered by the ‘filter’ table. It is most useful for source NAT.

The current rule set can be viewed with iptables -L -v -t nat, and rules are added the same way as in the filter table, except that the parameter -t nat is added every time. The real difference is in the action to take for a rule that matches, the -j parameter.

For the 'filter' table, the possible targets are ACCEPT and DROP, but for the 'nat' table this is different:

  • DNAT specifies destination NAT. It must be followed by --to-destination and the destination. The destination can be an IP address, but if a port was specified in the rule it can also be a socket, e.g. 192.168.0.5:80. This makes port translation possible. A typical use case is making a server with a private IP address inside your network reachable from the internet.
  • SNAT is source NAT, typically used for static NAT translations from an inside host with a private IP address towards its public IP address. It must be followed by the --to-source parameter that defines an IP address. It can also define a pool of IP addresses (e.g. 203.0.113.5-203.0.113.8) and a range of source ports.
  • MASQUERADE is a special case: it is source NAT behind the outgoing interface's IP address (hide-NAT). This is ideal for interfaces that receive a public IP address from a provider via DHCP. No other parameters need to be specified, so the rule doesn't have to change every time the public IP address changes.

These targets by themselves do not block or allow a connection. It’s still required to define the connection in the main ‘filter’ table and allow it.

Examples for NAT:

  • Forward incoming SIP connections (control traffic and voice payload) towards an inside IP phone at 192.168.1.3. Allow the control traffic only from one outside SIP server at 203.0.113.10. The outside interface is eth1.
    iptables -t nat -A PREROUTING -i eth1 -p udp --dport 16384 -j DNAT --to-destination 192.168.1.3
    iptables -t nat -A PREROUTING -i eth1 -p udp --dport 5060 -j DNAT --to-destination 192.168.1.3
    iptables -A FORWARD -d 192.168.1.3 -p udp --dport 16384 -j ACCEPT
    iptables -A FORWARD -s 203.0.113.10 -d 192.168.1.3 -p udp --dport 5060 -j ACCEPT
  • Use IP address 203.0.113.40 as the outgoing IP address for SMTP traffic from server 192.168.2.20:
    iptables -t nat -A POSTROUTING -s 192.168.2.20 -p tcp --dport 25 -j SNAT --to-source 203.0.113.40
    iptables -A FORWARD -s 192.168.2.20 -p tcp --dport 25 -j ACCEPT
  • Use the interface IP address for all other outgoing connections on interface eth1.
    iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

iptables05

Notice that for rules in the 'filter' table that correspond to a NAT rule in the PREROUTING chain, the IP addresses used are the ones seen after NAT has taken place, while for POSTROUTING it's the original IP addresses that are used. This is because of the chain order mentioned earlier: PREROUTING happens before the 'filter' table, POSTROUTING after it.

This is the IPv6 and NAT part of iptables. Up next: optimization and hardening of the rule set.


Most modern Linux distributions come with a firewall package already active. Since it's usually set to an 'allow-all' mode, people are often unaware of it.

iptables01

Meet iptables, a basic yet powerful stateful firewall. You can see a default ‘allow-all’ policy above. Note that there are three different chains: INPUT, FORWARD and OUTPUT. Traffic can only match one of these three chains.

  • INPUT is for all traffic destined for the local Linux machine. It's typically used to filter local services, e.g. you can allow only certain subnets to connect to the machine via SSH, or shield off a port used by a process that you don't want to be visible from the internet. This also covers response traffic for connections initiated locally.
  • FORWARD is for all traffic traversing the device. This requires routing functionality to be activated. On Debian-based Linux versions you can do this by adding or modifying the line net.ipv4.ip_forward=1 in /etc/sysctl.conf and perhaps adding some static routes (see the example right after this list).
  • OUTPUT is for all traffic that originates from the local Linux machine. This covers both outbound connections and response traffic for local services.
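
As a quick illustration (a minimal sketch, assuming a Debian-based system), enabling routing could look like this:

sysctl -w net.ipv4.ip_forward=1                     # enable forwarding immediately
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf    # make it persistent across reboots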

Looking at the current rule set can be done with iptables -L -v, where the optional -v parameter displays extra detail. Adding a rule can be done with iptables -A followed by the chain and the parameters of the rule. The most common ones are:

  • -p defines the protocol: udp, tcp, icmp or an IP Protocol number. You can modify the /etc/protocols file to make Linux recognize an IP Protocol number by name.
  • -i is the incoming interface. This is not supported in the OUTPUT chain for obvious reasons.
  • -o is the outgoing interface. This is not supported in the INPUT chain.
  • -s is the source subnet or host.
  • -d is the destination subnet or host.
  • --dport is the destination port, only valid if the protocol is defined as TCP or UDP. This can be a single port or a range, with start and end separated by a colon.
  • --sport is the source port, or range of ports.
  • -j is the action to take. For the purposes of this article, let’s assume only ACCEPT and DROP are possible. More options will be discussed in upcoming blog posts.

The parameters not defined in a rule are assumed to have the value ‘any’. Examples:

  • Add a rule to allow SSH to the local Linux from one single host 192.168.5.5:
    iptables -A INPUT -p tcp -s 192.168.5.5 --dport 22 -j ACCEPT
  • Allowing subnet 10.0.1.0/24 to do Remote Desktop to server 10.100.0.10:
    iptables -A FORWARD -p tcp -s 10.0.1.0/24 --dport 3389 -j ACCEPT
  • Block any traffic through the device towards UDP ports 10000 to 11000, regardless of source and destination:
    iptables -A FORWARD -p udp --dport 10000:11000 -j DROP
  • Don’t allow any traffic from interface eth4 to interface eth6:
    iptables -A FORWARD -i eth4 -o eth6 -j DROP

There is one additional rule which you will likely need in the configuration, but which differs from the rest: the stateful traffic rule. Although iptables keeps a state table by default, it does not use it for traffic matching unless you tell it to. The rules to do this:

iptables -A INPUT -m conntrack --ctstate ESTABLISHED -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED -j ACCEPT
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED -j ACCEPT

This calls in the connection tracking module (-m conntrack) and matches any connection which has been seen before (ESTABLISHED). It is best to add this rule first, to avoid drop rules from dropping traffic of known connections.

To modify the default policy, use the -P switch. For example, to block all local incoming connections by default:

iptables -P INPUT DROP

WARNING: When configuring iptables for the first time, especially via SSH, you have to be careful not to lock yourself out of the system. On top of that, some processes use IP communication using the 127.0.0.1 loopback IP address internally. If you tell iptables to block this, it may break some applications!

Review what you want to achieve. For example, say you want to change the default iptables policy to drop any incoming connections, except http traffic and ssh from your computer:

  1. First add the rules allowing internal and stateful traffic:
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED -j ACCEPT
    iptables -A INPUT -i lo -j ACCEPT
  2. Then add the rules allowing the connections:
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT
    iptables -A INPUT -p tcp -s 192.168.1.10 --dport 22 -j ACCEPT
  3. Finally set the policy to deny by default:
    iptables -P INPUT DROP

iptables02

The OUTPUT chain can stay with a default allow action. If you modify it as well, be sure to add the rules for internal and stateful traffic again.

You can check the state table live via cat /proc/net/ip_conntrack

iptables03

Here you see an example of one SSH connection and an NTP connection in the state table.

These are the iptables basics. In upcoming blog posts, I’ll talk about NAT, IPv6, optimization, hardening iptables security and increasing the scalability for large rule sets.

I know, it's been quiet on this blog for the past months. But here we are again, starting off with a simple post. Maybe not of much real-world practical use, but fun to know.

Dealing with ACLs requires more protocol knowledge than dealing with a stateful firewall. A stateful firewall takes care of return traffic for you, and often even has some higher-layer functionality, so it can automatically allow the incoming data port of an FTP connection, for example.

ACLs don't do this. They're static and don't care about return traffic. On a switch in particular, they're handled by the ASIC, in hardware. This means no logging is possible. On the plus side, the filtering doesn't consume CPU. Many engineers assume stateful firewalls are superior to ACLs, and while this is certainly true for scalability and manageability, it's not 100% true for security. ACLs don't get fooled by certain attacks: they're not dynamic like stateful filtering, so attacks that try to use the stateful functionality of a firewall against it won't work on them. Attacks involving many packets that attempt to use up CPU don't work either. True, these attacks are not common, but it's still a forgotten advantage.

Concerning logging, an ACL does have the option, even on a switch. By adding the 'log' parameter to the end of a line you can count the hits on the ACL. However, all this does on the ASIC is punt the packet towards the CPU, which then processes the packet in software, increases a counter and logs a syslog message. If you do this for a lot of packets, or all of them, you're essentially using the CPU for switching. This defeats the purpose of the ASIC. Most Cisco switches don't have the CPU for that, limiting throughput and causing high CPU, latency, jitter, even packet loss.

But there’s a more efficient way to do this, for TCP at least: only log the connection initiations.

permit tcp any any established
permit tcp any any eq 80
permit tcp any any eq 443
permit tcp any any eq 22 log
permit udp any any
deny ip any any

In the above ACL the first line allows all TCP traffic for which a connection is already established. This rule alone doesn't allow any traffic: you still need to be able to initiate connections too. The two rules below it allow HTTP and HTTPS. Finally, the fourth rule allows SSH but also logs it. When a user creates an SSH session through the interface, the first SYN packet matches the log rule and the connection is logged. All subsequent packets of the same flow match the first rule and are switched in hardware. Minimal CPU strain, but connections are logged.

This way of building ACLs also has advantages on CPU-based platforms like (low-end) routers: most packets will hit the first line of the ACL and only a few packets will require multiple ACL entries to be checked by the CPU, decreasing overall load.
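
As a minimal sketch of how such an ACL could be defined and applied (the ACL name and interface are hypothetical, and the port must be on a platform that supports inbound ACLs):

Switch(config)#ip access-list extended LOG-SSH-INIT
Switch(config-ext-nacl)#permit tcp any any established
Switch(config-ext-nacl)#permit tcp any any eq 80
Switch(config-ext-nacl)#permit tcp any any eq 443
Switch(config-ext-nacl)#permit tcp any any eq 22 log
Switch(config-ext-nacl)#permit udp any any
Switch(config-ext-nacl)#deny ip any any
Switch(config-ext-nacl)#exit
Switch(config)#interface GigabitEthernet0/10
Switch(config-if)#ip access-group LOG-SSH-INIT in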

DoS attack types.

Everyone has heard of a DoS attack: a Denial of Service attack that consumes a server's resources, taking it (temporarily) offline. However, more than one type of DoS attack exists. I'm going to discuss a few here to clarify the complexity in defending against them.

The SYN attack

DDoS-Type1

One of the simplest and best-known DoS attacks: just send TCP packets with only the SYN flag set towards a web server. The server reserves resources (sockets, memory, CPU) for the incoming connections and replies, but the connections are never completed. This can go up to millions of packets per second.

While this will take down a server eventually, it can also affect the firewall in front of it: the state table will fill up. Worst case this affects reachability of all servers behind the firewall.

One way to deal with this is rate-limiting the number of incoming connections towards each server on the firewall, if the firewall supports it, making it one of the few attacks that can be countered without a dedicated anti-DoS appliance.
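
On a Linux firewall like the iptables setup from the earlier posts, a rough sketch of such a limit could look like this (the server address and thresholds are purely illustrative):

# allow roughly 200 new connections per second towards the server, drop the excess
iptables -A FORWARD -d 192.168.1.10 -p tcp --syn -m limit --limit 200/second --limit-burst 1000 -j ACCEPT
iptables -A FORWARD -d 192.168.1.10 -p tcp --syn -j DROP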

Another way is using SYN cookies: the firewall will send the reply (SYN,ACK) on behalf of the server and only if the client completes the connection (ACK), the firewall will connect to the server and tie both connections together.

The HTTP GET attack

DDoS-Type2

This attack bypasses the firewall, making it much more difficult to counter. A single TCP connection towards a web server is established. In that connection, instead of requesting the web page once using an HTTP GET, the web page is continuously requested over and over again, using up all server resources.

Countering this requires packet inspection on layer 6-7 which must be done by an IPS or anti-DoS appliance. Firewalls will not detect this.

The UDP flooding attack

DDoS-Type3

A UDP flooding attack is most often done using open DNS resolvers. A DNS request with a spoofed source IP address is sent towards a DNS server. This DNS server replies towards the spoofed IP address (the target server) with a large response. An example is an NS query for the root servers: the response is about four times larger than the request. This means about 250 Mbps of request traffic is enough to flood a gigabit uplink towards a server in this case.

DDoS-rootNSqueries

Larger multipliers exist for other types of queries, generating 10, 20, 30,… times as much output compared to the query. While this example uses DNS, UDP-based attacks now also exist for NTP and SNMP. Advantages of NTP and SNMP are large possible multiplier values and less awareness of the attack’s existence.

However, the bad part is that this kind of attack floods the uplinks towards data centers and LANs, which are shared with other servers or companies. Placing an anti-DoS appliance in the data center, right before the firewall but after the uplink from the ISP, will be ineffective. Countering this attack requires an appliance at the ISP, before the uplink, or a DoS cloud service that reroutes BGP IP ranges (assuming you have them) for filtering in case of a large-scale DoS.

Without an appliance, options are limited. Placing a firewall in front of the target will demand a lot of CPU from the firewall, as it still has to receive, check and drop the packets. Usually two options remain: switch ACLs and black hole routing. Black hole routing means setting up a null route for the destination server before the traffic reaches your infrastructure (at an ISP or BGP router), essentially giving up your server to save the rest of the network. ACLs on switches are hard to set up in the middle of an attack and not always possible, but the advantage is that packets are dropped in hardware by the switch ASIC and don't consume any firewall, router or server CPU. Most likely your server will still end up unreachable.

Others
The above are just a few common ones. Fact is, any layer 3 point in the network can be attacked. Even if it doesn't have any ports listening, it will have to use CPU to look at packets arriving for its IP address. And if it's a web server behind a firewall, it will be reachable on a port that can be exploited. Encryption (SSL) can ironically make this worse, because anti-DoS appliances can't look inside the HTTPS session for GET requests, or the encryption itself can be continuously renegotiated, taking up CPU.

Remember, most DoS attacks are legitimate packets. Just a lot of them.

Yes, I'm riding the 'OMG-NSA!' wave, but it's proven to be interesting. Eventually one starts pondering about it and even trying some stuff in a lab. Here are the results: I've managed to introduce a backdoor in a Cisco router so I can log in remotely using my own username and a non-standard port. Granted, it's far from perfect: it's detectable and will be negated if you use RADIUS/TACACS+. But if you're not paying attention it can go unnoticed for a long time. And a major issue for real-life implementation: it requires privileged EXEC access to set up in the first place (which is why I'm publishing this: if someone untrusted has privileged EXEC access, you have bigger problems on your hands).

The compromised system
Backdoor-IOS

The router which I tested is a Cisco 2800 series, IOS 15.1(2)GC. Nothing special here. The router is managed by SSH, a local user database and uses an ACL for the management plane.

Backdoor-VTY

The goal

Accidentally getting the password and gaining access is not a backdoor. I want to log in using my own private username and password, use a non-standard port for SSH access, and bypass the ACL for the management plane.

The Setup part 1 – Backdoor configuration

How it’s done: two steps. First, just plainly configure the needed commands.

Backdoor-Config

  • The username is configured.
  • The non-standard high port is configured using a rotary group.
  • The rotary group is added to the VTY (SSH) lines. Just 0 to 4 will do.
  • The ACL for the management plane has an extra entry listing a single source address from which we will connect.

The setup part 2 – Hiding the backdoor

So far it's still nothing special. Anyone checking the configuration can find this. But it can be hidden using the Embedded Event Manager (EEM).

Backdoor-EEM

These three EEM applets will filter out the commands and show a clean configuration instead!

  • The "backDOORrun" applet is the main one: it replaces the standard "show run" with one that doesn't list the rotary group, the extra ACL entry, the username or the EEM applets themselves. Note that it's handy to name all objects that are part of the backdoor in a similar way, e.g. "backDOOR", so they can be matched with a single string.
  • Since the above only affects "show run", two more applets are required for "show access-lists" and "show ip access-lists". Note that these are only needed if a non-standard port is used, to mask the ACL.

Detectability

Several things do give away that there might be a back door. First of all, port 4362 will respond (SYN,ACK) to a port scan, revealing that something is listening. Second, although the commands are replaced, there’s a distinct ‘extra’ CLI prompt after the commands:

Backdoor-Detection

This only shows if you don't use any pipe commands yourself, and it is easily mistaken for an accidental extra hit on the 'enter' key, but once you're aware of it, it does stand out.

And last, once you copy the running config off the device (through TFTP for example) and open it in a text editor, everything, including the hidden commands, will show. And by knowing the EEM applet names, you can remove them.

I have to admit, this article will sound a bit like an advertisement. But given that Cisco has gotten enough attention on this blog already, it can only bring variation into the mix.

A short explanation of a series of different products offered by F5 Networks. Why? If you’re a returning reader to this blog and work in the network industry, chances are you’ll either have encountered one of these appliances already, or could use them (or another vendor’s equivalent of course).

F5-LTM

LTM
The Local Traffic Manager’s main function is load balancing. This means it can divide incoming connections over multiple servers.
Why you would want this:
A typical web server will scale up to a few hundred or thousand connections, depending on the hardware and services it is running and presenting. But there may be more connections needed than one server can handle. Load balancing allows for scalability.
Some extra goodies that come with it:

  • Load balancing method: of course you can choose how to divide the connections. Simply round-robin, weighted in favor of a better server that can handle more, always to the server with the least connections,…
  • SSL Offloading: the LTM can provide the encryption for HTTPS websites and forward the connections in plain HTTP to the web servers, so they don’t have to consume CPU time for encryption.
  • OneConnect: instead of simply forwarding each connection to the servers in the load balancing pool, the LTM can set up a TCP connection with each server and reuse it for every incoming connection, e.g. a new HTTP GET for each external connection over the same inbound connection. Just like SSL Offloading, it consumes fewer resources on the servers. (Not every website handles this well.)
  • Port translation: not really NAT but you can configure the LTM for listening on port 80 HTTP or 443 HTTPS while the servers have their webpage running on different ports.
  • Health checks: if one of the servers in the pool fails, the LTM can detect this and stop sending connections to it. The service or website will stay up, it will just be able to handle fewer connections. You can even upgrade servers one by one without downtime for the website (but make sure to plan this properly).
  • IPv6 to IPv4 translation: your web servers and internal network do not have to be IPv6 capable. Just the network up to the LTM has to be.

F5-ASM

ASM
The Application Security Manager can be placed in front of servers (one server per external IP address) and functions as an IPS.
Why you would want this:
If you have a server reachable from the internet, it is vulnerable to attack. Simple as that. Even internal services can be attacked.
Some extra goodies that come with it:

  • SSL Offloading: the ASM can provide the encryption for HTTPS websites just like the LTM. The benefit here is that you can check for attack vectors inside the encrypted session.
  • Automated request recognition: scanning tools can be recognized and denied access to the website or service.
  • Geolocation blocks: it’s possible to block out entire countries with automatic lists of IP ranges. This way you can offer the service only where you want it, or stop certain untrusted regions from connecting.

GTM
The Global Traffic Manager is a DNS forwarding service that can handle many requests at the same time with some custom features.
Why you would want this:
This one isn’t useful if the service you’re offering isn’t spread out over multiple data centers in geographically different regions. If it is, it will help redirect traffic to the nearest data center and provide some DDoS resistance too.
Some extra goodies that come with it:

  • DNSSec: secured DNS support which prevents spoofing.
  • Location-based DNS: by matching the DNS request with a list of geographical IP allocations, the DNS reply will contain an A record (or AAAA record) that points to the nearest data center.
  • Caching: the GTM also caches DNS requests to respond faster.
  • DDoS proof: automated DNS floods are detected and prevented.

F5-APM

APM
The Access Policy Manager is a device that provides SSLVPN services.
Why you would want this:
The APM will connect remote devices with encryption to the corporate network with a lot of security options.
Some extra goodies that come with it:

  • SSLVPN: no technical knowledge required for the remote user and works over SSL (TCP 443) so there’s a low chance of firewalls blocking it.
  • SSO: Single Sign On support. Log on to the VPN and credentials for other services (e.g. Remote Desktop) are automatically supplied.
  • AAA: lots of different authentication options, local, Radius, third-party,…
  • Application publishing: instead of opening a tunnel, the APM can publish applications after the login page (e.g. Remote Desktop, Citrix) that open directly.

So what benefit would you have from knowing this? More than you think: many times when a network or service is designed, no attention is given to these components. Yet they can help scale out a service without resorting to complex solutions.

ASA: nice-to-know features.

I’ve already made an introduction to the ASA, but when working with them on a regular basis, it’s nice to know some features that come with the product to explain how it reacts and help troubleshooting. So for the interested reader with little ASA experience, below a few features that have proven handy to me.

Full NAT & socket state
Most consumer-grade routers with NAT keep a NAT state table that tracks only the source socket. A socket is an IP address and port paired together. For example, take the following setup:

ASA-NAT

When connecting to the web server, remote socket 198.51.100.5:80, a local socket is created, for example 192.168.1.2:37004. The router then does a NAT translation to its outside IP address (a NAT/PAT with overloading, or hide NAT), to socket 203.0.113.10:37004. This means that if return traffic arrives for destination 203.0.113.10 port 37004, it will be translated to 192.168.1.2 port 37004. However, without stateful firewalling, any packet arriving on port 37004 will be translated back in, regardless of its source. This is how some software, like torrent programs, does NAT hole punching. Also, no matter how big the pool of private IP addresses, the public IP address has a maximum of about 64,000 ports available for translations (okay, 65,535 technically, but some are probably reserved and a source port below 1,024 is generally not recommended).

The ASA handles this differently: in combination with the stateful firewall a full state is made for each connection, both source and destination socket. This means the above translation is still done but no return traffic from another source is allowed. On top of that, if another inside host makes a connection towards a different web server, the ASA can reuse that port 37004 for a translation. Return traffic from that different web server will be translated to the other inside host because the ASA keeps a full state. Result: no 64,000 ports per public IP address the device has, but 64,000 per remote public IP address! This allows for even more oversubscription of a single public IP address, assuming not everyone is going to browse the exact same website.

ASA-DoubleSocket

Sequence randomization
A bit further into layer 4: TCP uses sequence numbers to keep track of the right order in a packet flow. The initial sequence number is supposed to be random, but in practice this is often not the case. In fact, one quick Wireshark capture of a connection to Google gives me this:

ASA-Sequence

The problem is that guessing sequence numbers allows an attacker to intercept a TCP connection or guess an operating system based on the sequence number pattern. That’s where the second nice-to-know ASA feature comes into play: sequence randomization. By adding a random number to each sequence number (the same random number for each packet per flow) it becomes impossible to guess the initial sequence number of the next connection, as well as difficult to do any OS fingerprinting based on it.

Inspect policy-maps
For someone not familiar with the ASA, this is often a point of trouble. By default the ASA has no awareness above layer 4. This means any information not in the UDP or TCP header isn’t checked. Examples are HTTP headers, the FTP port used for transfer (which is in the payload) and ICMP Sequence numbers.

ASA-ICMP

The ASA requires configuration of policy-maps for this. This is why, by default, ping requests through the ASA don't work: it cannot create a state for them. And for HTTP inspection, it checks for proper HTTP headers as well as the presence of a user-agent header. This means non-HTTP traffic cannot be sent over port 80, and incoming telnets on port 80 towards web servers aren't accepted either, preventing some scans.
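
As a hedged sketch, ICMP inspection can be added to the default global policy like this (the policy-map and class names below assume the ASA's unmodified default configuration):

ASA(config)#policy-map global_policy
ASA(config-pmap)#class inspection_default
ASA(config-pmap-c)#inspect icmp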

Capturing
Finally, one of the most useful functions. While many other platforms with a Unix-based OS allow some form of tcpdump, Cisco does not support it here. However, you can do some form of capturing on an ASA, even with proper filtering.

First configure the ACL that will be used as a filter, otherwise you’ll capture all traffic for that interface.

ASA#configure terminal
ASA(config)#access-list ExampleCapture extended permit ip host 172.16.16.16 any
ASA(config)#exit

Next, find the correct interface name: the 'nameif', because the physical interface name will not do.

ASA#show run int vlan16
!
interface Vlan16
nameif Internal
security-level 50
ip address 172.16.16.1 255.255.255.0

Now you can start and show the capture.

ASA#capture TestCap interface Internal access-list ExampleCapture
ASA#show capture TestCap
76 packets captured

1: 16:45:13.991556 802.1Q vlan#16 P0 172.16.16.249.44044 > 203.0.113.10.22: S 3599242286:3599242286(0) win 8192 <mss 1460,nop,wscale 8,nop,nop,sackOK>
2: 16:45:14.035474 802.1Q vlan#16 P0 172.16.16.249.44044 > 203.0.113.10.22: . ack 1303526390 win 17520
3: 16:45:14.037824 802.1Q vlan#16 P0 172.16.16.249.44044 > 203.0.113.10.22: P 3599242287:3599242338(51) ack 1303526390 win 17520
4: 16:45:14.067196 802.1Q vlan#16 P0 172.16.16.249.44044 > 203.0.113.10.22: . ack 1303526754 win 17156
5: 16:45:14.072887 802.1Q vlan#16 P0 172.16.16.249.44044 > 203.0.113.10.22: P 3599242338:3599242898(560) ack 1303526754 win 17156
6: …

Note that traffic is seen in only one direction here. To see return traffic, add the reverse flow to the capture ACL as well. Unfortunately, the capture must stay running while you view the output. The capture can be stopped as follows:

ASA#no capture TestCap

This also erases the capture, so the show command will no longer work.

Additionally, you can do a real-time capture by adding the 'real-time' parameter, but it's a bit more tricky. This is not recommended for traffic-intensive flows, but it's ideal to see whether a SYN is actually arriving or not.

ASA#capture TestCap interface External access-list ExampleCapture real-time
Warning: using this option with a slow console connection may
result in an excessive amount of non-displayed packets
due to performance limitations.

Use ctrl-c to terminate real-time capture

1: 16:45:51.755454 802.1Q vlan#16 P0 172.16.16.16.43969 > 203.0.113.10.22: . ack 2670019600 win 16220
2: 16:45:51.768698 802.1Q vlan#16 P0 172.16.16.16.43969 > 203.0.113.10.22: . ack 2670019768 win 17520
3: 16:45:51.768774 802.1Q vlan#16 P0 172.16.16.16.43969 > 203.0.113.10.22: . ack 2670019968 win 17320
4: 16:45:51.777501 802.1Q vlan#16 P0 172.16.16.16.43969 > 203.0.113.10.22: . ack 2670020104 win 17184
5: …

Just don’t forget to remove the ACL after you’re done.

Setting up a routing protocol neighborship isn't hard. In fact, it's so easy I've made one by accident! How? There were already two OSPF neighbors in a subnet, and I was configuring a third router for OSPF with yet another, fourth router. But because the third router had an interface in that same subnet and I used the command 'network 0.0.0.0 255.255.255.255 area 0', the neighborships came up. This serves as an example that securing a neighborship is not only about avoiding malicious intent, but also about minimizing human error.

Session authentication
The most straightforward way to secure a neighborship is adding a password to the session. However, this is not as good as it sounds: it doesn't encrypt the session, so anyone can still read it, and the hash used is usually MD5, which can easily be broken at the time of writing. Nevertheless, a quick overview of the password protection for EIGRP, OSPF and BGP:

Router(config)#key chain KEY-EIGRP
Router(config-keychain)#key 1
Router(config-keychain-key)#key-string
Router(config-keychain-key)#exit
Router(config-keychain)#exit
Router(config)#interface Fa0/0
Router(config-if)#ip authentication mode eigrp 65000 md5
Router(config-if)#ip authentication key-chain eigrp 65000 KEY-EIGRP
Router(config-if)#exit

Router(config)#interface Fa0/1
Router(config-if)#ip ospf message-digest-key 1 md5
Router(config-if)#exit
Router(config)#router ospf 1
Router(config-router)#area 0 authentication message-digest
Router(config-router)#exit

Router(config)#router bgp 65000
Router(config-router)#neighbor 10.10.10.10 remote-as 65000
Router(config-router)#neighbor 10.10.10.10 password

Note a few differences. EIGRP uses a key chain. The positive side is that multiple keys can be used, each with its own lifetime. The downside: administrative overhead, and unless the keys change every 10 minutes it's not of much use. I doubt anyone uses this in a production network.

BGP configures the password per neighbor (or per peer-group of multiple peers at the same time, for scalability). Although there's no mention of hashing, it still uses MD5. It works with eBGP as well, but you'll need to agree on the password with the service provider.

OSPF sets the authentication key on the interface and can also activate authentication per interface, but here it's shown in the routing process, as it's likely you'll want it on all interfaces. It would have been even better if the key could be configured under the routing process, saving some commands and avoiding possible misconfigurations on the interfaces. OSPF authentication commands can be confusing, as Jeremy points out. However:

OSPFv3 authentication using IPsec
The new OSPF version allows for more. Now, before you decide to skip this because you don't run IPv6: OSPFv3 can be used for IPv4 as well. OSPFv3 does run on top of IPv6, but only over link-local addresses. This means you need IPv6 enabled on the interfaces, but you don't need IPv6 routing and there's no need to think about an IPv6 addressing scheme.

Router(config)#router ospfv3 1
Router(config-router)#address-family ipv4 unicast
Router(config-router-af)#area 0 authentication ipsec spi 256 sha1 8a3fe4a551b81dc24f6148b03e865b803fec49f7
Router(config-router-af)#exit
Router(config-router)#exit
Router(config)#interface Fa0/0
Router(config-if)#ipv6 enable
Router(config-if)#ospfv3 1 area 0 ipv4
Router(config-if)#ospfv3 bfd

This new OSPF version shows two advantages: you can configure authentication per area instead of per interface, and you can use SHA1 for hashing. The key has to be a 40-digit hex string; it will not accept anything else. A non-hex character, or 39 or 41 digits, gives a confusing 'command not recognized' error. The SPI value needs to be the same on both sides, just like the key of course. The final command enables BFD, which is optionally supported.

EIGRP static neighbors
For EIGRP you can define the neighbors on the router locally, instead of discovering them using multicast. This way, the router will not allow any neighborships from untrusted routers.

Router(config)#router eigrp 65000
Router(config-router)#network 10.0.0.0 0.0.0.255
Router(config-router)#neighbor 10.0.0.2 Fa0/0
Router(config-router)#exit

Static neighbor definition is one command, but there is a consequence: EIGRP will stop multicasting hello packets on the interface where the static neighbor is defined. This is expected behavior, but easily forgotten when setting it up. Also, the routing process still needs the 'network' command to include that interface, or nothing will happen.

BGP Secure TTL
Small yet useful: checking the TTL of eBGP packets. By default an eBGP session uses a TTL of 1. By issuing 'neighbor <peer> ebgp-multihop <hops>' you can change this value. The problem is that an attacker can send SYN packets towards a BGP router with a spoofed source of a BGP peer. This forces the BGP router to respond to the session request (SYN) with a half-open session (SYN-ACK). Many half-open sessions can overwhelm the BGP process and bring it down entirely.

TTL-Security

Secure TTL solves this by changing the way the TTL is checked: instead of setting it to the hop count, with the eBGP peer expecting a TTL of 1 on arrival, the TTL is set to 255 to begin with, and the peer checks on arrival that the TTL is at least 255 minus the configured number of hops. Result: an attacker can still send spoofed SYN packets, but since he'll be more hops away and the TTL can't be set higher than 255, the packets will arrive with a TTL that is too low and are dropped without any notification. The configuration needs to be done on both sides:

Router(config)#router bgp 1234
Router(config-router)#neighbor 2.3.4.5 remote-as 2345
Router(config-router)#neighbor 2.3.4.5 ttl-security hops

These simple measures can help defend against the unexpected, and although it’s difficult in reality to implement them in a live network, it’s good to know when (re)designing.

If you've ever managed a campus LAN, you'll know what happens when a lot of end users have access to ethernet cables on desks. The occasional rogue hub, a loop now and then, and if they have access to some more advanced tools, some BPDUs and a rogue DHCP server. Most of these events are not intended to be malicious (even the BPDUs and rogue DHCP), but they happen because end users are not aware of the impact of some devices on the network.

But, given malicious intent, what are the possibilities of attacking a switched Cisco network from a directly attached interface? The operating system for all the upcoming attacks is BackTrack Linux, which has many interesting tools installed.

MAC Flood
The classic attack first: flooding the switch’s CAM table with random source MAC addresses.
Tool: macof
Countermeasure: port-security

First, attacking without port-security: as expected, the CAM table fills, CPU increases and everything is flooded. Congestion everywhere.

L2Attack-1

Finding this attack without port-security is feasible by checking CPU processes: HLFM address learning doesn't normally consume that much CPU. Turning off MAC address learning does the same but without the CPU impact.

So does turning on port-security solve the problem? It depends. Turning it on and setting it to only block new MAC addresses, but not shut down the port, actually makes things worse for the CPU:

L2Attack-2

The best solution: port-security with shutdown of the port in case of too many MAC addresses. No flooding, no CPU hogging.
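
A minimal sketch of such a port-security configuration on an access port (the interface and maximum are just examples):

Switch(config)#interface GigabitEthernet0/5
Switch(config-if)#switchport mode access
Switch(config-if)#switchport port-security
Switch(config-if)#switchport port-security maximum 2
Switch(config-if)#switchport port-security violation shutdown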

CDP Flood
A Cisco-only attack: flooding CDP frames with fake neighbors, causing not only CPU spikes but also clogging the memory with all the neighbor entries. 'show cdp neighbor' becomes like showing the routing table on a BGP router: endless.
Tool: Yersinia
Countermeasure: disabling CDP on the port or globally.

The attack with CDP turned on (the default) is very effective:

L2Attack-3

Finding the attack is easy, as both CPU and memory will clearly show the CDP process using up resources. However, without CDP on the port, the attack does nothing. So the best solution: always turn CDP off on user-facing ports, even behind an IP Phone, although some functionality will be lost.
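
A sketch of disabling CDP on a user-facing port (the interface is just an example; 'no cdp run' would disable it globally):

Switch(config)#interface GigabitEthernet0/5
Switch(config-if)#no cdp enable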

Root BPDU inject
A funny one: inject a BPDU claiming to be root, causing spanning-tree recalculations and creating suboptimal paths in the network.
Tool: Yersinia
Countermeasure: Root Guard.

L2Attack-4

Notice the root ID, which has a nearly identical MAC address to make it difficult to spot the difference, and the aging time of two days, making this an attack that keeps working even after the attacker is no longer connected. Root Guard on the port counters this attack easily though.
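
A minimal sketch of Root Guard on a user-facing port (example interface):

Switch(config)#interface GigabitEthernet0/5
Switch(config-if)#spanning-tree guard root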

BPDU Flood
This attack doesn't try to change the spanning-tree topology, but rather to overload the STP process. The consequence is high CPU and eventually spanning-tree inconsistencies.
Tool: Yersinia
Countermeasure: BPDU Guard

L2Attack-5

Spanning tree should not use that much CPU on a switch. HLFM address learning will increase too, due to the random source MAC addresses, and depending on the switch, the Hulc LED Process will increase as well. This is the process that governs the LED status of all switchports: the more ports the switch has, the more CPU this process will consume when flooding attacks are happening.

BPDU Guard stops this effectively by shutting down the port. BPDU Filter not so much: it still needs to look at the BPDU to drop it and not forward it in hardware. BPDU Filter is generally not recommended anyway.
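
A sketch of BPDU Guard on a user-facing port (example interface); alternatively, 'spanning-tree portfast bpduguard default' enables it globally on all PortFast ports:

Switch(config)#interface GigabitEthernet0/5
Switch(config-if)#spanning-tree bpduguard enable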

DHCP Discover Flood
Not really a layer 2 attack, but still impactful for the local subnet: sending a flood of DHCP Discover messages, quickly overloading the DHCP server(s) for the subnet.
Tool: Yersinia
Countermeasure: DHCP Snooping and DHCP Snooping Rate Limit

If DHCP Snooping isn't enabled on the switch, it behaves like a MAC Flood attack and can be countered accordingly. Simply enabling DHCP Snooping, which protects against rogue servers and not against flooding, makes things worse.

L2Attack-6

Not only does it make the CPU spike, but it's one of the few attacks that makes the switch unresponsive in the data plane: not only is management lost, the switch also stops forwarding most frames, with packet loss on all ports. Plain DHCP Snooping does prevent the attack when it's executed from a virtual machine:

L2Attack-7

But to really protect against this attack, DHCP Snooping rate-limiting helps:

L2Attack-8
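
A rough sketch of DHCP Snooping with rate limiting (the VLAN, interface and rate in packets per second are just examples):

Switch(config)#ip dhcp snooping
Switch(config)#ip dhcp snooping vlan 10
Switch(config)#interface GigabitEthernet0/5
Switch(config-if)#ip dhcp snooping limit rate 15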

OSPF Flood
Sending a flood of OSPF Hello packets over a switch.
Tool: a virtual machine running ospfd (Vyatta, OpenBSD), and a hub between switch and computer with cable loop to cause the flood.
Countermeasure: ACL

For this one I didn't use any specific tool. I just made my computer send out an OSPF Hello, and made sure the hub between computer and switch was wired so it would flood the frame. The result: spectacular. The switch CPU rises to 100% and management connections, including console, are dropped. The reason is that the OSPF process has a higher priority. But now the shocking part: this was done on a layer 3 Cisco switch without OSPF configured, and without an IP address in the attacker's VLAN.

Explanation: Cisco switches use something called pak_priority. Certain packets are labeled on ingress by the interface driver as priority packets to be checked by the CPU (Source). This is done to make sure network control packets reach the CPU in case of congestion. It's the case for RIP, OSPF and EIGRP packets, but not for BGP packets.

I retried it with EIGRP (although this required a second Cisco device to generate the EIGRP hello) and the result was the same: no EIGRP configuration on the switch, still impact. The data plane is not affected: forwarding mostly continues as usual.

The solution? Strangely enough, an ACL on each port blocking EIGRP (IP protocol 88) and OSPF (IP protocol 89) and allowing everything else seems to work. The ACL is checked in hardware as long as the 'log' parameter isn't present. So for better security, it seems you're stuck with an ACL on each switchport of a layer 3 switch.
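
A minimal sketch of such a port ACL (the name and interface are hypothetical; 'eigrp' and 'ospf' are the IOS keywords for IP protocols 88 and 89):

Switch(config)#ip access-list extended BLOCK-ROUTING-PROTOCOLS
Switch(config-ext-nacl)#deny eigrp any any
Switch(config-ext-nacl)#deny ospf any any
Switch(config-ext-nacl)#permit ip any any
Switch(config-ext-nacl)#exit
Switch(config)#interface GigabitEthernet0/5
Switch(config-if)#ip access-group BLOCK-ROUTING-PROTOCOLS in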

Conclusion
I’m sure most of the readers now conclude that there’s still a security leak somewhere in their network. Just for reference, I’ll include the CPU graph of an hour of testing all these attacks.

L2Attack-9

MPLS, part II: VRF-aware MPLS-VPN.

Where MPLS part I explained the basics of labeling packets, that by itself doesn't give any advantage over normal routing, apart from faster table lookups. But extensions to MPLS allow for more. In this article I'll explain MPLS-VPN, and more specifically a Virtual Private Routed Network (VPRN).

A VPRN is a routed (layer 3) network over an MPLS cloud, that is VRF-aware, or customer-aware. This means several different routing instances (VRFs, remember?) can share the same MPLS cloud. How is this achieved when there’s only a point-to-point link between routers with one IP address? After all, an interface can only be assigned to one VRF. Solution: by adding a second MPLS label to the data: the ‘outer’ label (the one closest to the layer 2 header) is used to specify the destination router, the ‘inner’ label (closest to the original layer 3 header) is used to specify to which VRF a packet belongs. The outer label workings are identical to standard MPLS: these are learned by LDP and matched with a prefix in the routing table. But for the inner label, a VRF-aware process needs to run on each router that can handle label information and propagate it to other routers. That process is Multiprotocol-BGP, or MP-BGP.

MPLS-VPN-Header

The outer label is used to route the packet through the MPLS cloud, and the last router(s) use the inner label to see to which VRF a packet belongs. Let’s look at the configuration to understand it more. First, the basic setup:

MPLS-VPN-VRF

Notice the router names, as these are often used in MPLS terminology.

  • The Customer Edge router is a router that directly connects to a customer network. It's usually the demarcation point, where the equipment governed by the MPLS provider begins. Contrary to the name, the CE itself is often managed by the provider as well.
  • The Provider Edge router is the ‘first’ router (seen from a customer site point of view) that has MPLS enabled interfaces. It’s where the labels are applied for the first time.
  • A Provider router is a router completely internal to the MPLS cloud, having only MPLS enabled interfaces.

The connection between CE and PE is point-to-point, so a /30 subnet is logical. Routing between CE and PE is done using a simple routing protocol, like RIP, OSPF, EIGRP, or even static routes or standard BGP. The only notable part is that the PE router has to place the customer-facing interface in a VRF. Below is an example configuration:

Router-CE(config)#interface G0/1
Router-CE(config-if)#ip address 10.1.0.1 255.255.255.252
Router-CE(config-if)#exit
Router-CE(config)#router ospf 1
Router-CE(config-router)#network 10.1.0.0 0.0.0.3 area 0

Router-PE(config)#vrf definition VRF
Router-PE(config-vrf)#address-family ipv4
Router-PE(config-vrf-af)#exit
Router-PE(config-vrf)#exit
Router-PE(config)#interface G0/1
Router-PE(config-if)#vrf forwarding VRF
Router-PE(config-if)#ip address 10.1.0.2 255.255.255.252
Router-PE(config-if)#exit
Router-PE(config)#router ospf 2 vrf VRF
Router-PE(config-router)#network 10.1.0.0 0.0.0.3 area 0
*Mar  1 00:06:20.991: %OSPF-5-ADJCHG: Process 2, Nbr 10.1.0.1 on GigabitEthernet0/1 from LOADING to FULL, Loading Done

If you happen to run a router on an IOS before 15.0, the commands for the VRF change: it becomes ‘ip vrf VRF’ to define a VRF, without the need to specify an address family, as 12.x IOS versions aren’t VRF aware for IPv6. On the interface, the command is ‘ip vrf forwarding VRF’.

So far so good. Now on to activating MPLS between the PE and the P router, and making sure the routers learn the MPLS topology:

Router-PE(config)#interface G0/2
Router-PE(config-if)#mpls ip
Router-PE(config-if)#ip address 10.0.0.1 255.255.255.252
Router-PE(config-if)#ip ospf network point-to-point
Router-PE(config-if)#exit
Router-PE(config)#interface Loopback0
Router-PE(config-if)#ip address 10.0.1.1 255.255.255.255
Router-PE(config-if)#exit
Router-PE(config)#router ospf 1
Router-PE(config-router)#network 10.0.0.0 0.0.1.255 area 0

Router-P(config)#interface G0/1
Router-P(config-if)#mpls ip
Router-P(config-if)#ip address 10.0.0.2 255.255.255.252
Router-P(config-if)#ip ospf network point-to-point
Router-P(config-if)#exit
Router-P(config)#interface Loopback0
Router-P(config-if)#ip address 10.0.1.2 255.255.255.255
Router-P(config-if)#exit
Router-P(config)#router ospf 1
Router-P(config-router)#network 10.0.0.0 0.0.1.255 area 0
*Mar  1 00:02:29.023: %OSPF-5-ADJCHG: Process 1, Nbr 10.0.0.1 on GigabitEthernet0/2 from LOADING to FULL, Loading Done
*Mar  1 00:02:33.127: %LDP-5-NBRCHG: LDP Neighbor 10.0.0.1:0 (1) is UP

The ‘ip ospf network point-to-point’ is not really needed but used to reduce OSPF overhead. The loopbacks are needed for BGP later on.

Up until this point, we have MPLS in the default VRF and a separate VRF per customer for routing, but no routing of the VRFs over the MPLS. To exchange the inner labels needed to specify the VRF, MP-BGP between PE and P is configured:

Router-PE(config)#router bgp 65000
Router-PE(config-router)#neighbor 10.0.1.2 remote-as 65000
Router-PE(config-router)#neighbor 10.0.1.2 update-source Loopback0
Router-PE(config-router)#no address-family ipv4
Router-PE(config-router)#address-family vpnv4
Router-PE(config-router-af)#neighbor 10.0.1.2 activate
Router-PE(config-router-af)#neighbor 10.0.1.2 send-community extended

Router-P(config)#router bgp 65000
Router-P(config-router)#neighbor 10.0.1.1 remote-as 65000
Router-P(config-router)#neighbor 10.0.1.1 update-source Loopback0
Router-P(config-router)#no address-family ipv4
Router-P(config-router)#address-family vpnv4
Router-P(config-router-af)#neighbor 10.0.1.1 activate
Router-P(config-router-af)#neighbor 10.0.1.1 send-community extended
*Mar  1 00:11:31.879: %BGP-5-ADJCHANGE: neighbor 10.0.1.1 Up

Again some explanation. First off, BGP neighbors always need to be defined under the main process, after which they are activated for a specific address family. The 'no address-family ipv4' command means that no conventional routing information for the default VRF will be exchanged (we already have OSPF for that). The 'address-family vpnv4' activates the VPRN capability and the label exchange for VRFs; in this address family the neighbor is activated. The 'send-community extended' means BGP will exchange the extended community path attributes (PAs), which carry the information that ties routes to VRFs. The loopbacks are used to peer with each other, not only for redundancy in case a physical interface goes down, but also because LDP does not exchange labels with another router on a connected subnet for that subnet. This means that if the directly connected interfaces are used as BGP neighbors, the BGP process can't figure out the labeling properly.

So VRF-aware MPLS is running now, but on each PE router it needs to be specified which VRFs are injected into the MPLS cloud. MP-BGP does this using import and export of routing tables. For MP-BGP, a route needs to be uniquely identified, and it must be clear to which VRF it belongs. This is done with a Route Distinguisher (RD) and a Route Target (RT). Both are 64 bits, usually in the format AS:nn: the first part is the AS number and the last part a uniquely chosen number.

  • RD uniquely identifies a route. Inside MP-BGP, a route is prepended with its RD, e.g. 65000:1:192.168.1.0/24. This way, if the same prefix exists twice (in different VRFs), it's still unique, because the RD part of the prefix is different.
  • RT specifies to which VRF a route belongs. It handles the import and export of routes from a VRF to the BGP process. In its basic form, it’s the same number as the RD, and the same at all PE routers for a certain client.

Configuration of these parameters is done inside the VRF:

Router-PE(config)#vrf definition VRF
Router-PE(config-vrf)#rd 65000:1
Router-PE(config-vrf)#route-target both 65000:1

Now that the VRF can be used in the BGP process, it is imported into the process as follows:

Router-PE(config)#router bgp 65000
Router-PE(config-router)#address-family ipv4 vrf VRF
Router-PE(config-router-af)#redistribute ospf 2

The redistribution, of course, needs to be mutual between OSPF and BGP, so a few more lines of configuration are needed to complete everything:

Router-PE(config)#router ospf 2 vrf VRF
Router-PE(config-router)#redistribute bgp 65000 subnets

MPLS-VPN-Forwarding

And now everything is complete: the PE router learns routes from the CE router via OSPF and redistributes them into BGP to propagate them over the MPLS cloud. From this point on, the configuration is modular: on another PE router, the configuration looks the same. Adding a P router isn't any different from the P router in this example; the processes and parameters are the same each time. Do remember that routers don't advertise iBGP-learned routes to other iBGP peers, so the PE and P routers need to form a full mesh, unless you're using route reflectors or confederations.
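
To verify, a few standard IOS show commands (using the names from this example) can confirm that VRF routes and labels are being exchanged:

Router-PE#show ip route vrf VRF
Router-PE#show ip bgp vpnv4 all
Router-PE#show mpls forwarding-table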