
== Increasing network performance ==

For projects using "fast" links, or even for internal networks, there
can be some performance gains from tuning system TCP settings.

A few links:
http://dsd.lbl.gov/TCP-tuning/background.html

http://dsd.lbl.gov/TCP-tuning/linux.html

http://proj.sunet.se/E2E/tcptune.html

http://www.onlamp.com/pub/a/onlamp/2005/11/17/tcp_tuning.html?page=1

http://www.psc.edu/networking/projects/tcptune/

http://www.acc.umu.se/~maswan/linux-netperf.txt

http://datatag.web.cern.ch/datatag/howto/tcp.html

http://www.aarnet.edu.au/engineering/networkdesign/mtu/local.html

http://www.hep.ucl.ac.uk/~ytl/tcpip/linux/txqueuelen/

And for bonding multiple links/interfaces:

http://www.linux-corner.info/bonding.html

http://www.devco.net/archives/2004/11/26/linux_ethernet_bonding.php

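A minimal sketch of bringing up a bonded pair (the interface names,
address, and bonding mode below are assumptions for illustration, not
recommendations for your network):

{{{
# Load the bonding driver: balance-rr is simple round-robin,
# miimon=100 enables link monitoring every 100 ms.
modprobe bonding mode=balance-rr miimon=100

# Create the bonded interface and enslave the two physical NICs
# (placeholder address and interface names).
/sbin/ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
/sbin/ifenslave bond0 eth0 eth1
}}}
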
The short story is that you can greatly improve your data transfer rate
over long internet links by increasing the system TCP tx and rx
window/buffer values. Note also the comments in
http://www.onlamp.com/pub/a/onlamp/2005/11/17/tcp_tuning.html?page=1
about software-set buffer sizes.
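
As a back-of-the-envelope check, the buffer needed is roughly the
bandwidth-delay product of the link. The figures below (100 Mbit/s,
150 ms RTT) are only an example:

{{{
# bandwidth (bits/s) / 8 * RTT (s) = bytes "in flight" on the link
BANDWIDTH_BPS=100000000
RTT_MS=150
echo $(( BANDWIDTH_BPS / 8 * RTT_MS / 1000 ))   # ~1875000 bytes, i.e. about 1.9 MBytes
}}}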

For internal networks fully under your control, you can greatly increase
transfer rates and reduce CPU overheads by using "jumbo frames" with an
MTU of 9000. (Most newer switches should support that; check carefully
before trying to go any larger.)
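
For example (eth0 is just a placeholder, and every host and switch port
on the segment must be configured for the same MTU):

{{{
/sbin/ifconfig eth0 mtu 9000
# or, equivalently, with iproute2:
ip link set dev eth0 mtu 9000
}}}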

Other comments are in this example /etc/sysctl.conf (for Linux 2.6.xx):

{{{
# Run "sysctl -p" to effect any changes made here
#
# TCP tuning
# See:
# http://www.onlamp.com/pub/a/onlamp/2005/11/17/tcp_tuning.html?page=1
#
# The optimal TCP buffer size for a given network link is double the
# one-way delay times the bandwidth (i.e. bandwidth * RTT):
#   buffer size = 2 * delay * bandwidth
# For example, assume a 100 Mbit/s link between California and the
# United Kingdom with an RTT of 150 ms. The optimal TCP buffer size for
# this link is about 1.9 MBytes.
#
# increase TCP maximum buffer size
# Example for 16 MBytes
#net.core.rmem_max = 16777216
#net.core.wmem_max = 16777216

# For a 10 Mbit/s link with a worst case of 350 ms RTT (Australia),
# 1 MByte is more than enough.
# Linux 2.6.17 (and later?) defaults to a 4194304 max, so match that instead...
net.core.rmem_max = 4194304
net.core.wmem_max = 4194304

# increase Linux autotuning TCP buffer limits
# min, default, and maximum number of bytes to use
# Example for 16 MBytes
#net.ipv4.tcp_rmem = 4096 87380 16777216
#net.ipv4.tcp_wmem = 4096 65536 16777216

# Scaled for 4 MBytes:
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 49152 4194304

# Notes:
#
# Defaults:
# net.ipv4.tcp_rmem = 4096 87380 174760
# net.ipv4.tcp_wmem = 4096 16384 131072
# net.ipv4.tcp_mem = 49152 65536 98304
#
# Do not adjust tcp_mem unless you know exactly what you are doing.
# This array (in units of pages) determines how the system balances the
# total network buffer space against all other LOWMEM memory usage. The
# three elements are initialized at boot time to appropriate fractions
# of the available system memory and do not need to be changed.
#
# You do not need to adjust rmem_default or wmem_default (at least not
# for TCP tuning). These are the default buffer sizes for non-TCP sockets
# (e.g. unix domain and UDP sockets).
#
#
# Also use, for example:
# /sbin/ifconfig eth2 txqueuelen 2000
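# (or, equivalently, with iproute2 -- eth2 is just an example here:)
# ip link set dev eth2 txqueuelen 2000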
#
# The default of 1000 is inadequate for long distance, high throughput
# pipes. For example, at an RTT of 120 ms and gigabit rates, a
# txqueuelen of at least 10000 is recommended.
#
# txqueuelen should not be set too large on slow links, to avoid
# excessive latency.
#
# If you are seeing "TCP: drop open request" for real load (not a DDoS),
# you need to increase tcp_max_syn_backlog (8192 worked much better than
# 1024 on heavy webserver load).
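# For example (leave commented out unless you actually hit this; 8192 is
# just the figure mentioned above):
#net.ipv4.tcp_max_syn_backlog = 8192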
#
# If you see stuff like "swapper: page allocation failure. order:0, mode:0x20"
# you definitely need to increase min_free_kbytes for the virtual memory.
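# (The sysctl is vm.min_free_kbytes; a suitable value depends on your
# RAM, so the figure below is only a placeholder:)
#vm.min_free_kbytes = 65536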
#
#
# All TCP settings can be listed with:
# sysctl -a | fgrep tcp
#
# Run "sysctl -p" to effect any changes made here
}}}
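
To check that the new values have taken effect after running "sysctl -p",
query the keys used above, for example:

{{{
sysctl net.core.rmem_max net.core.wmem_max
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
}}}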