F5 Load Balancer


F5 load balancers are important devices for distributing and balancing application and network traffic across servers. This is done to increase system capacity while delivering packets quickly and seamlessly. Load balancer appliances generally fall into two main categories.

1 – Transport layer load balancer

These perform load balancing based on Layer-3 or Layer-4 information (e.g. TCP flags, source or destination IP addresses, transport-layer ports). For instance, you can create simple policies on a Juniper router to load balance per packet. Similarly, aggregated links distribute traffic across multiple member links based on a LAG hashing algorithm, which differs between hardware platforms. Alternatively, you could load balance with BGP using the multipath feature, and so on.
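The hashing idea behind LAG and ECMP-style transport-layer balancing can be sketched in a few lines of Python. This is a minimal illustration, not vendor code: the backend addresses are made up, and real platforms use hardware-specific hash functions rather than MD5.

```python
import hashlib

# Hypothetical backend pool; these addresses are placeholders for the example.
BACKENDS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def pick_backend(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                 proto: str = "tcp") -> str:
    """Hash the L3/L4 5-tuple so every packet of a flow lands on the same
    backend -- the same idea a LAG hash or ECMP next-hop selection uses."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.md5(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(BACKENDS)
    return BACKENDS[index]

# Packets of the same flow always map to the same backend.
flow = ("198.51.100.7", "203.0.113.10", 51512, 443)
assert pick_backend(*flow) == pick_backend(*flow)
```

Because the decision uses only the 5-tuple, the device never looks inside the payload, which is exactly why this approach is cheap but blind to application-level detail.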

2 – Application layer load balancer

Performing deep packet inspection and extracting Layer-7 information is not an easy task, and specialized hardware is available at extra cost. A key factor in sizing this solution is the amount of traffic that needs to be inspected. While low-end F5 load balancers like the 2000 series can accommodate 200 to 400 thousand L7 requests per second, the 12000 series can handle up to 4 million requests per second. It comes down to cost and proper licensing, since application-layer load balancing is generally more CPU intensive.
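To see why L7 inspection costs more CPU, here is a toy sketch of content-based routing: the balancer must parse the HTTP request line before it can choose a pool. The pool names, addresses, and routing rules are invented for illustration and are not F5 configuration.

```python
# Hypothetical server pools keyed by content type; addresses are placeholders.
POOLS = {
    "api":     ["10.0.2.1:80", "10.0.2.2:80"],
    "static":  ["10.0.1.1:80", "10.0.1.2:80"],
    "default": ["10.0.3.1:80"],
}

def route_request(raw_request: bytes) -> str:
    """Parse just enough HTTP to make a pool decision -- this parsing step
    is the work a pure L3/L4 balancer never has to do."""
    head = raw_request.split(b"\r\n\r\n", 1)[0].decode("latin-1")
    request_line = head.split("\r\n")[0]
    _method, path, _version = request_line.split(" ", 2)
    if path.startswith("/api/"):
        pool = POOLS["api"]
    elif path.endswith((".css", ".js", ".png")):
        pool = POOLS["static"]
    else:
        pool = POOLS["default"]
    # A real balancer would apply round-robin or least-connections here;
    # we simply take the first member.
    return pool[0]

req = b"GET /api/v1/users HTTP/1.1\r\nHost: example.com\r\n\r\n"
print(route_request(req))  # -> 10.0.2.1:80 (a member of the "api" pool)
```

Every request forces a parse and a string comparison, which is why L7 throughput numbers are so much lower than raw L4 packet rates on the same hardware.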

Why load balance at the Application Layer?

Why would I need to use Layer-7 load balancing rather than Layer-3 or Layer-4? Simply put, in many cases transport-layer load balancing is not enough. The main goal is to balance traffic across the servers as evenly as possible, and with only L3-L4 information the task can become too difficult because you simply do not have enough information. The more sophisticated the traffic inspection, the more effective the load balancing. Let's look at a typical high-level example of a Local Traffic Manager (LTM) setup with virtual F5 appliances. There are two destination servers, but in a real deployment there can be hundreds of them.

F5 Load Balancer scenario

Hardware load balancer vs. Virtual load balancer

It's all about requirements and software limitations. Requirements come from the current setup: normally the number of requests per second is the tie breaker when deciding between a hardware and a virtual load balancer. If we are using a cloud hosted by someone else, the virtual option may be the better choice, since it is more flexible and enables quick migration.

However, there are limitations with the F5 virtual appliances. First, you need to allocate adequate physical server resources to the virtual machine, or use the recommended hardware, to obtain ideal performance. Second, and more importantly, the maximum is around 400K L7 requests per second, which compared to some of the hardware F5 boxes is sometimes not enough.

Global Traffic Manager vs. Local Traffic Manager

Some people misunderstand the difference between Global Traffic Manager (GTM) and Local Traffic Manager (LTM). Multiple LTMs across different locations do not make a GTM. The LTM is used within a local network or in cloud solutions; it distributes traffic as an HTTP proxy among multiple destination servers. The GTM, instead, is an intelligent DNS that load balances among multiple locations. Of course, these solutions can be combined; there's a great article on this that can be found here.
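The GTM-as-intelligent-DNS idea can be illustrated with a toy resolver: the same hostname is answered with a different virtual-server address depending on which data center is currently healthy. Data-center names, VIPs, and health states below are invented for the example.

```python
# Hypothetical sites, each fronted by its own LTM virtual server (VIP).
DATACENTERS = {
    "dc-east": {"vip": "203.0.113.10",  "healthy": True},
    "dc-west": {"vip": "198.51.100.20", "healthy": True},
}

def resolve(name: str, preferred: str = "dc-east") -> str:
    """Answer a DNS query with the preferred site's VIP if its monitors
    report it healthy, otherwise fail over to any healthy site."""
    dc = DATACENTERS.get(preferred)
    if dc and dc["healthy"]:
        return dc["vip"]
    for site in DATACENTERS.values():
        if site["healthy"]:
            return site["vip"]
    raise RuntimeError(f"no healthy site for {name}")

# Normal operation resolves to the east VIP; marking east down
# shifts subsequent DNS answers to the west site.
assert resolve("www.example.com") == "203.0.113.10"
DATACENTERS["dc-east"]["healthy"] = False
assert resolve("www.example.com") == "198.51.100.20"
```

This is the division of labor in a combined design: GTM steers clients to a site via DNS, and the LTM at that site spreads the connections across its local server pool.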

More features with F5

Load balancing is not the only feature you get when installing an F5 load balancer. Additional modules include Advanced Firewall Manager, Application Security Manager, and much more. A multi-tenant design is illustrated in the following picture, where the F5 services fabric creates a scalable, all-active container of powerful Layer 4-7 application services delivered as a flexible resource pool.

F5 Multi-tenant design

Multi-tenancy designs aim for the obvious benefit of cost savings, driven by better utilization of the underlying infrastructure, just as server virtualization and cloud platforms promised. By consolidating more services onto fewer devices (physical or virtual), you can realize savings in acquisition, ongoing support, and administration.


Milan Zapletal


I am an open-minded network engineer with 5+ years of experience, a passion for learning and technology in general, and an enjoyment of networking and network automation. GNS3 early contributor and fan.


About us

RouterFreak is a blog dedicated to professional network engineers. We focus on network fundamentals, product/service reviews, and career advancements.
