
Understanding Link Aggregation on a Linksys Switch

Link aggregation, defined in IEEE 802.3ad, is a computer networking technique that uses multiple Ethernet cables or ports in parallel to increase the link speed beyond the limit of any single cable or port, and to increase redundancy for higher availability. Other terms for this include Ethernet trunk, NIC teaming, port teaming, port trunking, EtherChannel, Multi-Link Trunking (MLT), DMLT, SMLT, DSMLT, R-SMLT, NIC bonding, and link aggregation group (LAG). Most implementations now conform to clause 43 of the IEEE 802.3 standard, informally referred to as 802.3ad.

The diagram below is an example of Link aggregation:

NOTE: A limitation of link aggregation is that all the physical ports in the link aggregation group must reside on the same switch. SMLT, DSMLT and R-SMLT technologies remove this limitation by allowing the physical ports to be split between two switches.

Understanding Link Aggregation as a Network Backbone

Link aggregation is an inexpensive way to set up a high-speed backbone network that transfers much more data than any single port or device can deliver. Although in the past various vendors used proprietary techniques, the preference today is to use the IEEE standard Link Aggregation Control Protocol (LACP). This allows several devices to communicate simultaneously at their full single-port speed while preventing any one device from monopolizing all available backbone capacity.

Link aggregation also allows the network's backbone speed to grow incrementally as demand on the network increases, without having to replace everything and buy new hardware.

For most backbone installations it is common to install more cabling or fiber optic pairs than are initially necessary, even if there is no immediate need for the additional cabling. This is done because labor costs are higher than the cost of the cable and running extra cable reduces future labor costs if networking needs change. Link aggregation can allow the use of these extra cables to increase backbone speeds for little or no extra cost if ports are available.

The benefits of link aggregation are:

  • Higher link availability
  • Increased link capacity
  • Improvements are obtained using existing hardware (no upgrading to higher-capacity link technology is necessary) 

Higher Link Availability

Link aggregation ensures that the failure of any single component link does not disrupt communications between the interconnected devices. The loss of a link within an aggregation reduces the available capacity, but the connection is maintained and the data flow is not interrupted.

Increased Link Capacity

Performance is improved because the capacity of an aggregated link is higher than that of each individual link alone. Standard LAN technology provides data rates of 10 Mb/s, 100 Mb/s, and 1000 Mb/s. Link aggregation can fill the gaps between these available data rates when an intermediate performance level is more appropriate; a factor-of-10 increase may be overkill in some environments. If more than 1000 Mb/s of capacity is needed, the user can group several SysKonnect 1000 Mb/s adapters together to form a high-speed connection and additionally benefit from the failover function that the SysKonnect driver for link aggregation supports. This also provides a migration path toward 10 Gigabit Ethernet solutions.
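The capacity arithmetic above can be sketched in a few lines. This is a hypothetical illustration, not vendor code: nominal LAG capacity is simply the sum of the member links, and losing one member reduces capacity without breaking the connection.

```python
# Sketch: nominal capacity of a link aggregation group (LAG).
# Values are illustrative; real throughput depends on how traffic
# is distributed across the member links.

def lag_capacity(link_speeds_mbps):
    """Nominal capacity of a LAG is the sum of its member link speeds."""
    return sum(link_speeds_mbps)

# Four 1000 Mb/s links give an intermediate step between 1 and 10 Gb/s.
lag = [1000, 1000, 1000, 1000]
print(lag_capacity(lag))   # 4000 (Mb/s)

# If one member link fails, capacity drops but the aggregate survives.
lag.remove(1000)
print(lag_capacity(lag))   # 3000 (Mb/s)
```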

Aggregating Replaces Upgrading

If the link capacity is to be increased, there are usually two possibilities: upgrade the native link capacity, or use an aggregate of two or more lower-speed links (if supported by the card's manufacturer). Upgrades typically occur in factors of 10. In many cases, however, the device cannot take advantage of this increase: a 1:10 performance improvement is not achieved; instead, the bottleneck is simply moved from the network link to some other element within the device. Performance will thus always be limited by the weakest link in the end-to-end connection.

The figures below show the different types of link aggregation:

Fig. 1 Switch to Switch Connection

Fig. 2 Switch to Station (switch to server)

Fig. 3 Station to Station

Understanding Link Aggregation on Network Interface Cards

Link aggregation is not just for core switching equipment. Network interface cards (NICs) can sometimes also be trunked together to form network links faster than any single NIC. For example, this allows a central file server to establish a 2-gigabit connection using two 1-gigabit NICs trunked together.

Note that when using Microsoft Windows, establishing a trunk with NICs usually only works among certain NIC types, and all must usually be of the same brand. The trunk itself is typically established at the device driver or NDIS level.

In Linux, Ethernet bonding (trunking) is implemented at a higher level and can therefore work with NICs from different manufacturers or drivers, as long as the NIC is supported by the kernel.

Trunking of Different Types of Cabling and Speeds

Typically the ports used in a trunk should be all of the same type, such as all copper ports (CAT-5E/CAT-6), all multi-mode fiber ports (SX), or all single-mode fiber ports (LX).

The ports also need to operate at the same speed and in full duplex: the 802.3ad standard requires that all links in an aggregation run at the same data rate and in full-duplex mode. Trunking a 100-megabit port together with a gigabit port will not work, and a half-duplex port cannot be aggregated with a full-duplex port.
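A switch or driver must check these constraints before admitting a port to a trunk. The following is a minimal sketch of such a check; the field names (speed_mbps, full_duplex, medium) are hypothetical, not any vendor's API.

```python
# Sketch: validate that a set of candidate ports may form one trunk.
# Per the constraints above: same speed, same medium, all full duplex.
# Field names are illustrative only.

def can_aggregate(ports):
    """Return True if all ports share speed and medium and are full duplex."""
    if not ports:
        return False
    first = ports[0]
    return all(
        p["speed_mbps"] == first["speed_mbps"]
        and p["medium"] == first["medium"]
        and p["full_duplex"]
        for p in ports
    )

copper_gig = {"speed_mbps": 1000, "full_duplex": True,  "medium": "copper"}
fast_eth   = {"speed_mbps": 100,  "full_duplex": True,  "medium": "copper"}
half_dup   = {"speed_mbps": 1000, "full_duplex": False, "medium": "copper"}

print(can_aggregate([copper_gig, dict(copper_gig)]))  # True
print(can_aggregate([copper_gig, fast_eth]))          # False: speed mismatch
print(can_aggregate([copper_gig, half_dup]))          # False: half duplex
```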

Trunking Support and Cross-brand Compatibility

A limitation of link aggregation is that it must avoid reordering Ethernet frames. That goal is approximated by sending all frames associated with a particular session across the same physical link. Depending on the traffic, this may not provide an even distribution across the links in the trunk.
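The per-session link choice is typically made by hashing frame header fields modulo the number of member links. A simplified sketch follows, assuming a hash over the source/destination MAC pair; real switches vary in which header fields they hash.

```python
# Sketch: session-to-link assignment by hashing MAC address pairs.
# Every frame of a given src/dst conversation maps to the same member
# link (preserving frame order), but load may spread unevenly.

from collections import Counter
import zlib

def pick_link(src_mac, dst_mac, n_links):
    """Choose a member link index for this conversation."""
    key = (src_mac + dst_mac).encode()
    return zlib.crc32(key) % n_links

# One heavy conversation always lands on a single link, no matter
# how many links the trunk has.
link = pick_link("00:11:22:33:44:55", "66:77:88:99:aa:bb", 4)
assert all(
    pick_link("00:11:22:33:44:55", "66:77:88:99:aa:bb", 4) == link
    for _ in range(10)
)

# Many different conversations spread across the links, though not
# necessarily evenly.
counts = Counter(
    pick_link(f"00:00:00:00:00:{i:02x}", "66:77:88:99:aa:bb", 4)
    for i in range(32)
)
print(dict(counts))
```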

Most gigabit trunking is now based on clause 43 of the IEEE 802.3 standard, added in March 2000 by the IEEE 802.3ad task force. Other proprietary trunking protocols existed before this standard was established; examples include Cisco's Port Aggregation Protocol (PAgP), Adaptec's Duralink trunking, and Nortel's Multi-Link Trunking (MLT). These proprietary protocols typically only work for interconnecting equipment from the same manufacturer or product line.

Even though many manufacturers now implement the standard, interoperability issues may still occur (for example, with Ethernet auto-negotiation). Testing before production deployment is prudent.

Intel has released a package for Linux called Advanced Networking Services (ANS) to bind Intel Fast Ethernet and Gigabit cards. Also, newer Linux kernels support bonding between NICs of the same type.

