TCP details

1. Introduction

We all know that TCP is a protocol located in the transport layer. It has a sibling, UDP, and together they make up the transport layer. Obviously there are significant differences between the two; otherwise only one protocol would be needed at the transport layer.

The most important difference is that one is connection-oriented and the other is not, and that difference determines whether reliable transmission can be guaranteed. Connectionless UDP cannot guarantee reliable delivery by itself; it can only count on the layers below it. We all know that the network layer uses the unreliable IP protocol, so it cannot provide that guarantee either, which leaves UDP relying, at best, on whatever reliability the link layer offers.

TCP, on the other hand, does not just lean on the underlying link layer: it provides its own connection-oriented service to guarantee reliable transmission. Of course, reliability is not the only thing that sets TCP apart from UDP; as mentioned earlier, it is just one important difference. In fact, TCP's three key features are exactly what distinguish the two:

* reliable transmission
* flow control
* congestion control

2. Reliable transmission

TCP relies mainly on the acknowledgement and retransmission mechanism, data checksums, reasonable segmentation and reordering of data, plus flow control and congestion control, to achieve reliable transmission. These methods are described in detail below.

1. Acknowledgement and retransmission

Acknowledgement and retransmission, simply put, means that after receiving a segment the receiver replies to the sender with an ACK, indicating that the data has been received. If the sender waits a certain amount of time without receiving that ACK, it assumes the packet was lost and resends it.

The mechanism above is easy to understand, but it raises a problem: if the receiver has already received the data but the returned ACK is lost, the sender will wrongly conclude the data was lost and retransmit it. The receiver then gets duplicate data, so how does it decide whether an arriving segment is a duplicate or something new?

This is where another TCP mechanism comes in: sequence numbers and acknowledgement numbers. Every segment sent carries its own sequence number and the acknowledgement number for the data received so far, so the receiver can check, against what already sits in its receive buffer, whether an arriving segment is a duplicate. If the ACK mentioned above is lost and a duplicate segment arrives as a result, the receiver simply discards the redundant segment.
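
A minimal sketch of the receiver-side duplicate check described above (the field names and buffering scheme are illustrative, not real TCP code):

# Receiver-side sketch: use sequence numbers to spot duplicates caused by a lost ACK.
received = {}          # seq -> payload already buffered or delivered
expected_seq = 0       # next byte we expect; this is the value we put in our ACKs

def on_segment(seq, payload):
    """Handle one incoming segment; always re-ACK so a lost ACK gets repaired."""
    global expected_seq
    if seq < expected_seq or seq in received:
        # We already have this data: the sender retransmitted because our
        # earlier ACK was lost. Discard the duplicate and resend the ACK.
        return ("ACK", expected_seq)
    received[seq] = payload
    # Advance expected_seq over any data that is now contiguous.
    while expected_seq in received:
        expected_seq += len(received[expected_seq])
    return ("ACK", expected_seq)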

OK, now we have a general picture of the acknowledgement and retransmission mechanism, but there are still details to work out, namely how TCP actually implements it.

1. Cumulative acknowledgement / stop-and-wait

The first question to settle is how to acknowledge. Two acknowledgement schemes are involved here: cumulative acknowledgement (also called piggybacked acknowledgement) and stop-and-wait.

Stop-and-wait is easiest to grasp with a diagram: every segment sent must be acknowledged, and the next segment cannot be sent until the sender has received that ACK.

Cumulative acknowledgement

The same ACK mechanism is used, but note that not every segment is acknowledged individually; only the last one is, and in the figure above that ACK confirms segment No. 203 and everything before it. To summarize, cumulative acknowledgement is clearly more efficient. First, fewer acknowledgement packets are sent, so the network carries mostly data rather than half data and half ACKs. Second, as the second figure shows, multiple segments can be sent back to back (how many can be in flight at once depends on the send window, which is determined by the receive window and the congestion window). Sending several segments at a time improves network throughput and efficiency; the proof is simple and is not repeated here.

Conclusion: cumulative acknowledgement comes out ahead in every respect, so naturally that is what TCP implementations adopt.
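
A back-of-the-envelope comparison of the two schemes, written as a toy Python model rather than a simulation of real TCP, counting roughly how many round trips each needs to move the same number of segments:

def stop_and_wait_rtts(n_segments):
    # One segment in flight at a time: every segment costs a full round trip.
    return n_segments

def cumulative_ack_rtts(n_segments, window):
    # Up to `window` segments go out per round trip; one cumulative ACK
    # confirms the whole batch.
    return -(-n_segments // window)   # ceiling division

print(stop_and_wait_rtts(100))        # 100 round trips
print(cumulative_ack_rtts(100, 10))   # 10 round trips with a window of 10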

2. Timeout calculation

The "specific time" mentioned above is the timeout. Where does this value come from? When the sender transmits data, it starts a timer for that data, and the timer's initial value is the timeout.

Calculating the timeout is actually a bit tricky, mainly because it is hard to settle on a single value: too long and the sender waits pointlessly, too short and it retransmits packets that were never actually lost. The TCP designers therefore devised a formula for it. The formula involves several concepts and looks a little fiddly, but that's fine; let's work through it step by step.

First, think about how we would design a timeout formula ourselves. The timeout must be related to the transmission time; in particular, it must be greater than the round-trip time of the data (the time a segment takes to travel from sender to receiver and for its acknowledgement to come back). So let's start from the round-trip time. But there is another problem: the round-trip time is not fixed, so how do we pin it down? A natural idea is to average the round-trip times over a short recent period and treat that as the round-trip time at this moment, much like the idea behind calculus.

OK, so we have the round-trip time (RTT). The timeout should then be the round-trip time plus some extra margin, and that margin should also be dynamic. We choose it to be the fluctuation of the round-trip time, that is, the difference between two adjacent round-trip time estimates.

The following is our estimated timeout formula:

TimeOut = AvgRTT_2 + | AvgRTT_2 - AvgRTT_1 |    (AvgRTT_1 and AvgRTT_2 are the previous and current averaged round-trip times)

Good. At this point you have essentially understood how the timeout is calculated. Our formula is not perfect, but the idea is right. Now let's look at what TCP implementations actually use:

EstimatedRTT = (1 - a) * EstimatedRTT + a * SampleRTT              (smoothed average RTT; a is usually 0.125)
DevRTT  = (1 - b) * DevRTT + b * |SampleRTT - EstimatedRTT|        (deviation of the RTT; b is usually 0.25)
TimeOut = EstimatedRTT + 4 * DevRTT                                (the retransmission timeout)

OK, that is how TCP computes the timeout, but the formula is not used in every situation. If the current network is in bad shape and keeps losing packets, TCP does not recompute the timeout this way; it simply doubles the previous timeout and uses that as the new value.
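
Putting the three formulas and the doubling rule together, here is a small sketch in Python (variable names follow the formulas above; initializing DevRTT to half of the first sample is the usual convention, not something stated in this article):

class RtoEstimator:
    def __init__(self, first_sample, alpha=0.125, beta=0.25):
        self.alpha, self.beta = alpha, beta
        self.estimated_rtt = first_sample
        self.dev_rtt = first_sample / 2

    def update(self, sample_rtt):
        """Feed one new RTT measurement (in seconds) and return the new timeout."""
        self.estimated_rtt = (1 - self.alpha) * self.estimated_rtt + self.alpha * sample_rtt
        self.dev_rtt = (1 - self.beta) * self.dev_rtt + \
                       self.beta * abs(sample_rtt - self.estimated_rtt)
        return self.estimated_rtt + 4 * self.dev_rtt

    def on_timeout(self, current_timeout):
        # After a timeout, don't recompute from samples: just double the value.
        return current_timeout * 2

est = RtoEstimator(first_sample=0.1)
print(est.update(0.12))   # new timeout after one 120 ms sample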

3. Fast retransmission

As we saw above, the sender only retransmits after the retransmission timeout expires, yet the timeout we compute may not be very accurate. In other words, a lot of what the sender does is simply wait, and the wait is often quite long. Can we optimize this?

Of course TCP implementations optimize it; that is the fast retransmission mechanism discussed here. The idea is that when the sender receives three duplicate ACKs, it retransmits the corresponding segment right away. Why three duplicate ACKs? Note that three duplicate ACKs actually means four identical ACKs in total. To see where they come from, let's first look at the receiver's ACK-generation policy, which is specified in RFC 5681.

OK, from this policy we can see that duplicate ACKs are generated only when segments with sequence numbers higher than the expected one keep arriving, i.e. when a gap is detected. Note, however, that this case covers two possibilities: the missing segment may have been lost, or it may simply be arriving out of order. Let's look at out-of-order arrival first.

Out-of-order arrival, first case

Out-of-order arrival, second case

Packet loss
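
A sketch of the sender-side counting that triggers fast retransmit (illustrative code; retransmit is just a placeholder callback, not a real API):

DUP_ACK_THRESHOLD = 3

last_ack = None
dup_count = 0

def on_ack(ack_no, retransmit):
    """Count duplicate ACKs; after the third one, resend the requested segment."""
    global last_ack, dup_count
    if ack_no == last_ack:
        dup_count += 1
        if dup_count == DUP_ACK_THRESHOLD:
            # The receiver keeps asking for the same byte: assume that segment
            # was lost and resend it without waiting for the timer to expire.
            retransmit(ack_no)
    else:
        last_ack = ack_no
        dup_count = 0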

4. Data retransmission methods

Once packet loss is detected, retransmission is needed, and there are two methods for it: GBN and SR, which translate to Go-Back-N retransmission and selective retransmission. Their behaviour is evident from the names. Go-Back-N retransmits everything from the lost segment onward; it is very simple to implement, essentially just moving the send and receive windows back, but it is not very practical, since it redoes a lot of work and is inefficient.

Selective retransmission is exactly what you would expect: only whatever was lost gets resent, with no wasted work.
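
A toy comparison of what each strategy would resend when one segment in the window is lost (segment numbers are made up):

window = [4, 5, 6, 7, 8]   # segments currently in flight
lost = {5}                 # segment 5 never arrived

def go_back_n(window, lost):
    # GBN: resend the first lost segment and everything sent after it.
    first_lost = min(lost)
    return [s for s in window if s >= first_lost]

def selective_repeat(window, lost):
    # SR: resend only the segments that were actually lost.
    return [s for s in window if s in lost]

print(go_back_n(window, lost))         # [5, 6, 7, 8]
print(selective_repeat(window, lost))  # [5]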

2. Data verification

Data verification is comparatively simple: a checksum is computed over the segment and carried in the header, and the receiver recomputes the checksum on arrival and compares the two values.
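
As an illustration, a minimal sketch of the 16-bit one's-complement checksum that TCP uses; the real TCP checksum also covers a pseudo-header containing the IP addresses, which is omitted here:

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:                  # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF             # one's complement of the running sum

segment = b"example TCP payload"
print(hex(internet_checksum(segment)))   # receiver recomputes and compares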

3. Reasonable segmentation and sorting of data

With UDP, the application-layer data is simply "thrown" at the other side's port with essentially no processing. So if the data handed down to the network layer is larger than 1500 bytes, i.e. larger than the MTU, the sender's IP layer has to fragment it, splitting the datagram into pieces each smaller than the MTU, and the receiver's IP layer has to reassemble them, which is a lot of extra work. Worse, because of how UDP works, if one fragment is lost in transit the receiver cannot reassemble the datagram, and the entire UDP datagram is discarded.

TCP, in contrast, divides the data sensibly according to the MTU. TCP has a concept called the maximum segment size (MSS), which limits how much data a single TCP segment may carry. Note that this does not include the TCP header, so its typical value is 1460 bytes (the TCP and IP headers take 20 bytes each). And because TCP has sequence numbers and acknowledgement numbers, the receiver buffers data that arrives out of order, reorders the segments by sequence number, and only then hands them to the application layer.
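
A tiny sketch of that segmentation, using the typical Ethernet numbers quoted above (illustrative code, not a real TCP stack):

MTU = 1500                 # typical Ethernet MTU in bytes
MSS = MTU - 20 - 20        # minus 20-byte IP header and 20-byte TCP header = 1460

def segment_stream(data: bytes, start_seq: int = 0):
    """Yield (sequence_number, chunk) pairs no larger than MSS."""
    for offset in range(0, len(data), MSS):
        yield start_seq + offset, data[offset:offset + MSS]

for seq, chunk in segment_stream(b"x" * 4000):
    print(seq, len(chunk))   # prints: 0 1460, 1460 1460, 2920 1080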

4. Flow control

Flow control addresses the situation where the receiver is accepting segments but the application above it is busy with other work and has no time to process the data sitting in the receive buffer. If the sender does not limit its sending rate, the receive buffer is likely to overflow and data will be lost.

A different situation is when the network between the two hosts is congested. If the sender keeps transmitting at a high rate, a large number of packets may be dropped, and here too the sender needs to slow down.

Although both situations can cause data loss and both require the sending host to reduce its rate, they must be kept separate: the former is flow control and the latter is congestion control, which is discussed later. Don't confuse the two concepts.

When it comes to flow control we have to mention the sliding window protocol, which is its foundation. Because a TCP connection is full duplex, data can be received while it is being sent, so both the sending end and the receiving end maintain a send window and a receive window at the same time. For ease of discussion we will consider only one direction here.

The receiver maintains a receive window and the sender a send window. When sending, the sender must know how much room is left in the receive window: the amount of data in flight cannot exceed the size of that window, otherwise the buffer overflows. When an ACK arrives from the receiver, the receive window slides so that the acknowledged data drops out of it, and the send window slides the acknowledged data out in the same way; that is how the two windows are kept in step. When the receiver cannot accept any more data, it advertises a window size of 0 and the sender stops sending. There is a catch, though: when the receive window later opens up again, the receiver does not actively notify the sender, so the sender has to probe and ask.

  it's more intuitive to draw a picture:
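
Alongside the picture, a minimal sketch of the sender-side bookkeeping just described (the class and method names are made up for illustration):

class SendWindow:
    def __init__(self, rwnd):
        self.base = 0          # oldest unacknowledged byte
        self.next_seq = 0      # next byte to be sent
        self.rwnd = rwnd       # receiver's advertised window

    def send(self, nbytes):
        if self.rwnd == 0:
            # Window is closed: the sender probes the receiver instead of
            # sending data (the "sender has to ask" case above).
            return False
        # Unacked bytes in flight plus the new data must fit in the window.
        if (self.next_seq - self.base) + nbytes > self.rwnd:
            return False
        self.next_seq += nbytes
        return True

    def on_ack(self, ack_no, new_rwnd):
        # The ACK slides the window: acknowledged bytes drop out on the left.
        self.base = max(self.base, ack_no)
        self.rwnd = new_rwnd

w = SendWindow(rwnd=3000)
print(w.send(1460), w.send(1460), w.send(1460))   # True True False (window full)
w.on_ack(2920, new_rwnd=3000)                     # ACK for the first two segments
print(w.send(1460))                               # True again after the window slides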

5. Congestion control

Congestion generally arises when hosts push too much data into the network, so that some heavily loaded routers along the path become congested. To keep data transmission stable in that situation we must apply congestion control, which of course also reduces the load on those routers and prevents congestion collapse.

Two congestion control methods are introduced here: slow start and fast recovery.

1. Slow start

2. Fast recovery
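
As a rough reference for the two methods named above, a simplified, textbook-style sketch of how the congestion window (cwnd) behaves; real implementations such as TCP Reno add more detail (for instance, temporarily inflating cwnd during fast recovery):

cwnd = 1.0        # congestion window, measured in segments (MSS) for simplicity
ssthresh = 16.0   # slow-start threshold

def on_rtt_all_acked():
    """Called once per round trip when everything sent was acknowledged."""
    global cwnd
    if cwnd < ssthresh:
        cwnd *= 2              # slow start: exponential growth
    else:
        cwnd += 1              # congestion avoidance: linear growth

def on_three_dup_acks():
    """Fast retransmit happened: enter fast recovery."""
    global cwnd, ssthresh
    ssthresh = cwnd / 2
    cwnd = ssthresh            # halve the window but keep sending

def on_timeout():
    """A timeout is taken as a sign of heavy congestion: start over."""
    global cwnd, ssthresh
    ssthresh = cwnd / 2
    cwnd = 1.0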

3. Connection management

1. Establishing a connection: the three-way handshake

So the questions are: why is a sequence number needed? Why three handshakes instead of two? And what is a SYN flooding attack?
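
For reference, a schematic trace of the three segments exchanged during the handshake; the initial sequence numbers x and y below are made up (real stacks choose them randomly):

x, y = 1000, 5000   # client and server initial sequence numbers (illustrative)

handshake = [
    ("client -> server", {"SYN": 1, "seq": x}),                             # 1st handshake
    ("server -> client", {"SYN": 1, "ACK": 1, "seq": y, "ack": x + 1}),     # 2nd handshake
    ("client -> server", {"ACK": 1, "seq": x + 1, "ack": y + 1}),           # 3rd handshake
]

for direction, fields in handshake:
    print(direction, fields)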

2. Releasing the connection: the four-way wave

What needs explaining here is the long TIME_WAIT state at the end. It exists mainly so that the client's final ACK can be delivered, and its value is generally one or two minutes.

4. Summary

Well, I really wrote a lot today, mainly to clarify TCP's reliable transmission and connection management, along with their details; it really took a lot of time. What is not covered is a detailed analysis of the TCP header. That part is not very hard, but this article is already long enough, and the header format is fixed and does not need much explanation.
