TCP Congestion Control Mechanisms: A Detailed Overview

Aqeel Ahmed
6 min read · May 16, 2022


According to the Cambridge Dictionary, congestion means "a situation in which a place is too blocked or crowded, causing difficulties". Similarly, a computer network can face congestion if the data it carries exceeds the capacity of the network. In computer networking terms, congestion is defined as:

Congestion is the state of a node or link that carries more data than its capacity, resulting in reduced quality of service, queuing delay, frame or packet loss, and the blocking of new connections.

In a congested network, response times slow and throughput drops; in the extreme case, the network enters congestion collapse. Congestion occurs when bandwidth is insufficient and network data traffic exceeds capacity. It is a fundamental consequence of limited network resources, especially router processing time and link throughput.

Avoiding network congestion and collapse requires two major components:

  • Routers capable of reordering or dropping data packets when received rates approach critical levels
  • Flow-control mechanisms at the end hosts that respond appropriately when data flow rates approach critical levels

Approaches to Congestion Control

In an end-to-end approach to congestion control, the network layer provides no explicit support to the transport layer for congestion-control purposes. Even the presence of congestion in the network must be inferred by the end systems based only on observed network behavior (for example, packet loss and delay). TCP must necessarily take this end-to-end approach toward congestion control, since the IP layer provides no feedback to the end systems regarding network congestion.

Network-assisted congestion control

With network-assisted congestion control, network-layer components (that is, routers) provide explicit feedback to the sender regarding the congestion state in the network. This feedback may be as simple as a single bit indicating congestion at a link.

TCP Congestion Control

TCP must use end-to-end congestion control rather than network-assisted congestion control since the IP layer provides no explicit feedback to the end systems regarding network congestion.

The approach taken by TCP is to have each sender limit the rate at which it sends traffic into its connection as a function of perceived network congestion. If a TCP sender perceives that there is little congestion on the path between itself and the destination, then it increases its send rate; if it perceives that there is congestion along the path, then it reduces its send rate.

How does a TCP sender limit the rate at which it sends traffic into its connection?

The TCP congestion-control mechanism operating at the sender keeps track of a variable called the congestion window. The congestion window, denoted cwnd, imposes a constraint on the rate at which a TCP sender can send traffic into the network. Specifically, the amount of unacknowledged data at the sender may not exceed the minimum of cwnd and the receive window rwnd, that is: LastByteSent − LastByteAcked ≤ min{cwnd, rwnd}. TCP limits its sending rate by controlling the congestion window: it increases or decreases cwnd based on retransmission timeouts and the acknowledgments it receives.
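The constraint above can be sketched in a few lines of Python. This is an illustrative model, not real TCP code; the function name and the byte-counter arguments are hypothetical, chosen to mirror the variables in the formula.

```python
MSS = 1460  # maximum segment size in bytes (a typical value for Ethernet)

def usable_window(last_byte_sent, last_byte_acked, cwnd, rwnd):
    """Bytes the sender may still push into the network: the amount of
    unacknowledged data must stay within min(cwnd, rwnd)."""
    in_flight = last_byte_sent - last_byte_acked
    return max(0, min(cwnd, rwnd) - in_flight)

# With 4 MSS already in flight, cwnd of 8 MSS and rwnd of 6 MSS,
# the receive window is the binding constraint, leaving room for 2 MSS:
print(usable_window(4 * MSS, 0, 8 * MSS, 6 * MSS))
```

Whichever of cwnd and rwnd is smaller limits the sender, which is why the formula takes the minimum of the two.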

How does a TCP sender perceive that there is congestion on the path between itself and the destination?

TCP infers that a packet has been lost when a timeout occurs or when it fails to receive the packet's acknowledgment. When there is excessive congestion, one or more router buffers along the path overflow, causing a datagram (containing a TCP segment) to be dropped. The dropped datagram, in turn, results in a loss event at the sender, either a timeout or the receipt of three duplicate ACKs, which the sender takes as an indication of congestion on the sender-to-receiver path.
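The two loss events can be captured in a tiny sketch. This is a simplified model for illustration (the function and the string labels are made up), but the decision logic matches the two signals described above:

```python
def loss_event(timeout_fired, dup_ack_count):
    """TCP infers congestion from either of two loss events:
    a retransmission timeout, or three duplicate ACKs."""
    if timeout_fired:
        return "timeout"
    if dup_ack_count >= 3:
        return "triple-duplicate-ACK"
    return None  # no loss event: the path appears congestion-free

print(loss_event(False, 3))  # a third duplicate ACK signals a loss
```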

Now consider the case when the network is congestion-free, that is, when no loss event occurs. In this case, acknowledgments for previously unacknowledged segments are received at the TCP sender. TCP takes the arrival of these acknowledgments as an indication that all is well, that segments being transmitted into the network are being successfully delivered to the destination, and uses the acknowledgments to increase its congestion window size (and hence its transmission rate). Note that if acknowledgments arrive at a relatively slow rate (for example, if the end-to-end path has high delay or contains a low-bandwidth link), then the congestion window is increased at a relatively slow rate. On the other hand, if acknowledgments arrive at a high rate, then the congestion window is increased more quickly.

Because TCP uses acknowledgments to trigger (or clock) its increase in congestion window size, TCP is said to be self-clocking.

What algorithm should the sender use to change its send rate as a function of perceived end-to-end congestion?

If TCP senders collectively send too fast, they can congest the network, leading to congestion collapse. Two guiding principles follow:

  • A lost segment implies congestion, and hence, the TCP sender’s rate should be decreased when a segment is lost.
  • An acknowledged segment indicates that the network is delivering the sender’s segments to the receiver, and hence, the sender’s rate can be increased when an ACK arrives for a previously unacknowledged segment.

Slow Start

When a TCP connection begins, the value of cwnd is typically initialized to a small value of 1 MSS, resulting in an initial sending rate of roughly MSS/RTT. For example, if MSS = 500 bytes and RTT = 200 msec, the resulting initial sending rate is only about 20 kbps. Since the bandwidth available to the TCP sender may be much larger than MSS/RTT, the TCP sender would like to find the available bandwidth quickly. Thus, in the slow-start state, the value of cwnd begins at 1 MSS and increases by 1 MSS every time a transmitted segment is first acknowledged.
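The 20 kbps figure can be checked with a one-line calculation (variable names are just for illustration):

```python
MSS = 500 * 8      # segment size in bits (500 bytes)
RTT = 0.2          # round-trip time in seconds (200 msec)
initial_rate = MSS / RTT   # with cwnd = 1 MSS, one segment is sent per RTT
print(initial_rate)        # 20000.0 bits/s, i.e. about 20 kbps
```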

Slow-start mechanism (Figure 1)

In the figure shown above, TCP sends the first segment into the network and waits for an acknowledgment. When this acknowledgment arrives, the TCP sender increases the congestion window by one MSS and sends out two maximum-sized segments. These segments are then acknowledged, with the sender increasing the congestion window by 1 MSS for each of the acknowledged segments, giving a congestion window of 4 MSS, and so on. This process results in a doubling of the sending rate every RTT. Thus, the TCP send rate starts slow but grows exponentially during the slow start phase.

Fast Recovery

In fast recovery, the value of cwnd is increased by 1 MSS for every duplicate ACK received for the missing segment that caused TCP to enter the fast-recovery state. Eventually, when an ACK arrives for the missing segment, TCP enters the congestion-avoidance state after deflating cwnd. If a timeout event occurs, fast recovery transitions to the slow-start state after performing the same actions as in slow start and congestion avoidance: The value of cwnd is set to 1 MSS, and the value of ssthresh is set to half the value of cwnd when the loss event occurred.
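The transitions just described can be sketched as a small state-machine step. This is a simplified TCP Reno-style model for illustration; the function, state names, and event labels are hypothetical, and real implementations track more state:

```python
MSS = 1460  # maximum segment size in bytes

def fast_recovery_step(event, cwnd, ssthresh):
    """One transition out of the fast-recovery state, following the
    behaviour described above. Returns (next_state, cwnd, ssthresh)."""
    if event == "dup_ack":    # inflate cwnd by 1 MSS per duplicate ACK
        return ("fast_recovery", cwnd + MSS, ssthresh)
    if event == "new_ack":    # ACK for the missing segment: deflate cwnd
        return ("congestion_avoidance", ssthresh, ssthresh)
    if event == "timeout":    # fall back to slow start
        return ("slow_start", 1 * MSS, cwnd // 2)
    return ("fast_recovery", cwnd, ssthresh)

# A duplicate ACK while in fast recovery inflates cwnd by one MSS:
print(fast_recovery_step("dup_ack", 10 * MSS, 5 * MSS))
```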

Additive-increase, multiplicative-decrease (AIMD)

TCP congestion control is often referred to as an additive-increase, multiplicative-decrease (AIMD) form of congestion control. AIMD congestion control gives rise to the "sawtooth" behavior shown in Figure 2, which also nicely illustrates our earlier intuition of TCP "probing" for bandwidth: TCP linearly increases its congestion window size (and hence its transmission rate) until a triple duplicate-ACK event occurs. It then decreases its congestion window size by a factor of two, but then begins increasing it linearly again, probing to see if additional bandwidth is available.

AIMD mechanism (Figure 2)
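The sawtooth can be reproduced with a toy simulation. This is a sketch under simplifying assumptions (one step per RTT, a boolean per RTT standing in for a triple-duplicate-ACK event; the function name is made up):

```python
MSS = 1460  # maximum segment size in bytes

def aimd(cwnd, loss_per_rtt):
    """AIMD: add 1 MSS per congestion-free RTT, halve cwnd when a
    triple-duplicate-ACK event occurs. Returns the cwnd trace."""
    trace = []
    for loss in loss_per_rtt:   # one boolean per RTT: True = loss event
        cwnd = cwnd // 2 if loss else cwnd + MSS
        trace.append(cwnd)
    return trace

# Two clean RTTs, a loss, then a clean RTT: the window climbs,
# halves, and starts climbing again (the sawtooth).
print(aimd(4 * MSS, [False, False, True, False]))
```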

Congestion Avoidance

On entry to the congestion-avoidance state, the value of cwnd is approximately half its value when congestion was last encountered; congestion could be just around the corner! Thus, rather than doubling the value of cwnd every RTT, TCP adopts a more conservative approach and increases the value of cwnd by just a single MSS every RTT. This can be accomplished in several ways. A common approach is for the TCP sender to increase cwnd by MSS × (MSS/cwnd) bytes whenever a new acknowledgment arrives. For example, if MSS is 1,460 bytes and cwnd is 14,600 bytes, then 10 segments are being sent within an RTT. Each arriving ACK (assuming one ACK per segment) increases the congestion window size by 1/10 MSS, and thus the value of the congestion window will have increased by about one MSS after ACKs for all 10 segments have been received.
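The per-ACK increment can be checked numerically with the example figures above. This is a sketch of the common approach described, not real TCP code (the function name is hypothetical):

```python
MSS = 1460  # maximum segment size in bytes

def on_new_ack(cwnd):
    """Congestion avoidance: each new ACK grows cwnd by
    MSS * (MSS / cwnd) bytes, which is 1/10 MSS when cwnd = 10 MSS."""
    return cwnd + MSS * MSS / cwnd

cwnd = 10 * MSS                # 14,600 bytes: 10 segments per RTT
for _ in range(10):            # one ACK per segment over one RTT
    cwnd = on_new_ack(cwnd)
# after the 10 ACKs, cwnd has grown by roughly one MSS
print(cwnd)
```

The growth is slightly under one full MSS because cwnd rises as the ACKs arrive, shrinking each subsequent increment, but the approximation of "one MSS per RTT" holds well.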

References

Computer Networking: A Top-Down Approach by James F. Kurose and Keith W. Ross, 6th edition

Data Communications and Networking by Behrouz A. Forouzan and Sophia Chung Fegan
