tcp.txt

TCP protocol
============

Last updated: 3 June 2017

Contents
========

- Congestion control
- How the new TCP output machine [nyi] works


Congestion control
==================

The following variables are used in the tcp_sock for congestion control:

snd_cwnd	The size of the congestion window.
snd_ssthresh	Slow start threshold. We are in slow start if
		snd_cwnd is less than this.
snd_cwnd_cnt	A counter used to slow down the rate of increase
		once we exceed the slow start threshold.
snd_cwnd_clamp	The maximum size that snd_cwnd can grow to.
snd_cwnd_stamp	Timestamp of when the congestion window was last
		validated.
snd_cwnd_used	Used as a high-water mark for how much of the
		congestion window is in use. It is used to adjust
		snd_cwnd down when the link is limited by the
		application rather than the network.

As of 2.6.13, Linux supports pluggable congestion control algorithms.
A congestion control mechanism can be registered through functions in
tcp_cong.c. The functions used by the congestion control mechanism are
registered via passing a tcp_congestion_ops struct to
tcp_register_congestion_control. As a minimum, the congestion control
mechanism must provide a valid name and must implement either the
ssthresh, cong_avoid, and undo_cwnd hooks or the "omnipotent"
cong_control hook.

Private data for a congestion control mechanism is stored in tp->ca_priv.
tcp_ca(tp) returns a pointer to this space. This is preallocated space - it
is important to check that your private data will fit in this space, or
alternatively, space could be allocated elsewhere and a pointer to it
stored here.

There are currently three kinds of congestion control algorithms: the
simplest ones are derived from TCP Reno (highspeed, scalable) and just
provide an alternative congestion window calculation. More complex
ones, like BIC, try to look at other events to provide better
heuristics. There are also round-trip-time-based algorithms like
Vegas and Westwood+.

Good TCP congestion control is a complex problem because the algorithm
needs to maintain fairness and performance. Please review current
research and RFCs before developing new modules.

The default congestion control mechanism is chosen based on the
DEFAULT_TCP_CONG Kconfig parameter. If you really want a particular default
value then you can set it using the sysctl net.ipv4.tcp_congestion_control.
The module will be autoloaded if needed and you will get the expected
protocol. If you ask for an unknown congestion method, then the sysctl
attempt will fail.

If you remove a TCP congestion control module, then the next available one
will be used. Since reno cannot be built as a module, and cannot be
removed, it will always be available.

How the new TCP output machine [nyi] works
==========================================

Data is kept on a single queue. The skb->users flag tells us if the frame is
one that has been queued already. To add a frame we throw it on the end. Ack
walks down the list from the start.

We keep a set of control flags:

 sk->tcp_pend_event
	TCP_PEND_ACK	Ack needed
	TCP_ACK_NOW	Needed now
	TCP_WINDOW	Window update check
	TCP_WINZERO	Zero probing

 sk->transmit_queue	The transmission frame begin
 sk->transmit_new	First new frame pointer
 sk->transmit_end	Where to add frames

 sk->tcp_last_tx_ack	Last ack seen
 sk->tcp_dup_ack	Dup ack count for fast retransmit

Frames are queued for output by tcp_write. We do our best to send the frames
off immediately if possible, but otherwise queue them and compute the body
checksum in the copy.

When a write is done we try to clear any pending events and piggyback them.
If the window is full we queue full-sized frames. On the first timeout in
zero window we split this.

On a timer we walk the retransmit list to send any retransmits, update the
backoff timers etc. A change of the route table stamp causes a change of
header and a recompute. We add any new TCP-level headers and refinish the
checksum before sending.