Changes between Version 5 and Version 6 of MultiHost


Timestamp: May 27, 2007, 7:49:45 PM
Author: davea
[[PageOutline]]

= Increasing server capacity =

}}}
You can run scheduling servers on multiple hosts by running an instance of the feeder on each host, and including the URLs in your master file.

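As a rough sketch (the host names and CGI paths below are made up, and the exact tag format should be checked against the master page your project actually generates), the master page might then list one scheduler entry per scheduling host:

{{{
<!-- hypothetical master page fragment listing two scheduling servers -->
<scheduler>http://sched1.example.com/example_project_cgi/cgi</scheduler>
<scheduler>http://sched2.example.com/example_project_cgi/cgi</scheduler>
}}}
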
== Increasing network performance ==

For projects using "fast" links, or even for internal networks, there
can be some performance gains from tuning system TCP settings.

A few links:

http://dsd.lbl.gov/TCP-tuning/background.html

http://dsd.lbl.gov/TCP-tuning/linux.html

http://proj.sunet.se/E2E/tcptune.html

http://www.onlamp.com/pub/a/onlamp/2005/11/17/tcp_tuning.html?page=1

http://www.psc.edu/networking/projects/tcptune/

http://www.acc.umu.se/~maswan/linux-netperf.txt

http://datatag.web.cern.ch/datatag/howto/tcp.html

http://www.aarnet.edu.au/engineering/networkdesign/mtu/local.html

http://www.hep.ucl.ac.uk/~ytl/tcpip/linux/txqueuelen/

And for bonding multiple links/interfaces:

http://www.linux-corner.info/bonding.html

http://www.devco.net/archives/2004/11/26/linux_ethernet_bonding.php

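As a minimal sketch of the bonding idea with 2.6-era tools (the mode, interface names, and address are examples only; see the links above for the details and other modes):

{{{
# Load the bonding driver in round-robin mode and enslave two NICs
# (example values; adjust mode/interfaces for your network):
modprobe bonding mode=balance-rr miimon=100
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1
}}}
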
The short story is that you can greatly improve your data transfer rate
over long internet links by increasing the system TCP tx and rx
window/buffer values. Note also the comments in
http://www.onlamp.com/pub/a/onlamp/2005/11/17/tcp_tuning.html?page=1
about software-set buffer sizes.

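Before editing anything it can be useful to look at the current limits, and to try a value temporarily; a quick sketch (these are the standard Linux sysctl keys used in the example file below):

{{{
# Show the current buffer limits:
sysctl net.core.rmem_max net.core.wmem_max
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem

# Try a value until the next reboot, before making it permanent in /etc/sysctl.conf:
sysctl -w net.core.rmem_max=4194304
}}}
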
For internal networks fully under your control, you can greatly increase
transfer rates and reduce CPU overheads by using "Jumbo packets" with an
MTU of 9000. (Most new switches should support that. Check further
before trying to go any larger.)

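To try jumbo packets, the MTU can be raised per interface; a sketch (eth2 is an example name, and every host and switch port on the path must be configured for the same MTU):

{{{
# Set a 9000-byte MTU until the next reboot:
/sbin/ifconfig eth2 mtu 9000

# Verify:
/sbin/ifconfig eth2 | grep -i mtu
}}}
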
Other comments are in this example /etc/sysctl.conf (for Linux 2.6.xx):

{{{
# Run "sysctl -p" to effect any changes made here
#
# TCP tuning
# See:
# http://www.onlamp.com/pub/a/onlamp/2005/11/17/tcp_tuning.html?page=1
#
# The optimal TCP buffer size for a given network link is double the
# one-way delay times the bandwidth:
# buffer size = 2 * delay * bandwidth  (i.e. RTT * bandwidth)
# For example, assume a 100Mbits/s link between California and the
# United Kingdom with an RTT of 150ms. The optimal TCP buffer size for
# this link is about 1.9MBytes.
#
# increase TCP maximum buffer size
# Example for 16 MBytes
#net.core.rmem_max = 16777216
#net.core.wmem_max = 16777216

# For a 10Mbits/s link the worst case is Australia at 350ms RTT, so
# 1MByte is more than enough.
# Linux 2.6.17 (and later?) defaults to a 4194304 maximum, so match that instead...
net.core.rmem_max = 4194304
net.core.wmem_max = 4194304

# increase Linux autotuning TCP buffer limits
# min, default, and maximum number of bytes to use
# Example for 16 MBytes
#net.ipv4.tcp_rmem = 4096 87380 16777216
#net.ipv4.tcp_wmem = 4096 65536 16777216

# Scaled for 4MByte:
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 49152 4194304

# Notes:
#
# Defaults:
# net.ipv4.tcp_rmem = 4096        87380   174760
# net.ipv4.tcp_wmem = 4096        16384   131072
# net.ipv4.tcp_mem = 49152        65536   98304
#
# Do not adjust tcp_mem unless you know exactly what you are doing.
# This array (in units of pages) determines how the system balances the
# total network buffer space against all other LOWMEM memory usage. The
# three elements are initialized at boot time to appropriate fractions
# of the available system memory and do not need to be changed.
#
# You do not need to adjust rmem_default or wmem_default (at least not
# for TCP tuning). These are the default buffer sizes for non-TCP sockets
# (e.g. unix domain and UDP sockets).
#
#
# Also use for example:
# /sbin/ifconfig eth2 txqueuelen 2000
#
# The default of 1000 is inadequate for long distance, high throughput pipes.
# For example, for an RTT of 120ms at Gigabit rates, a txqueuelen of at
# least 10000 is recommended.
#
# txqueuelen should not be set too large for slow links, to avoid
# excessive latency.
#
# If you are seeing "TCP: drop open request" for real load (not a DDoS),
# you need to increase tcp_max_syn_backlog (8192 worked much better than
# 1024 on heavy webserver load).
#
# If you see stuff like "swapper: page allocation failure. order:0, mode:0x20"
# you definitely need to increase min_free_kbytes for the virtual memory.
#
#
# All TCP settings can be listed with
# sysctl -a | fgrep tcp
#
# Run "sysctl -p" to effect any changes made here
}}}
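
If you actually run into the symptoms mentioned in the comments above, the corresponding settings could go into the same file; the values below are illustrative sketches only, not recommendations from this page:

{{{
# Hypothetical additions, only if you see the symptoms described above
# (values are examples, not measured recommendations):

# "TCP: drop open request" under real (non-DDoS) load:
#net.ipv4.tcp_max_syn_backlog = 8192

# "page allocation failure" messages under heavy network load:
#vm.min_free_kbytes = 65536
}}}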