
{{{
<min_sendwork_interval> N </min_sendwork_interval>
}}}
Minimum number of seconds between sending jobs to a given host.
You can use this to limit the impact of faulty hosts,
but don't set it so long that a host goes idle after finishing its work and before getting new work.

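For example (the value here is purely illustrative), a project that wants at least a minute between RPCs that send work to the same host could use:
{{{
<min_sendwork_interval> 60 </min_sendwork_interval>
}}}
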
{{{
<max_wus_in_progress> N </max_wus_in_progress>
[ <max_wus_in_progress_gpu> M </max_wus_in_progress_gpu> ]
}}}
Limit the number of jobs in progress on a given host.
Starting with 6.8, the BOINC client reports the resources used by in-progress jobs;
in this case, the max CPU jobs in progress is '''N*NCPUS'''
and the max GPU jobs in progress is '''M*NGPUS'''.
Otherwise, the overall maximum is '''N*NCPUS + M*NGPUS'''.

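As a worked illustration (the parameter values and host are hypothetical), consider a host with 4 CPUs and 1 GPU, and the settings:
{{{
<max_wus_in_progress> 2 </max_wus_in_progress>
<max_wus_in_progress_gpu> 3 </max_wus_in_progress_gpu>
}}}
With a 6.8+ client this host is limited to 2*4 = 8 CPU jobs and 3*1 = 3 GPU jobs in progress; with an older client the combined limit is 2*4 + 3*1 = 11 jobs.
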
{{{
<gpu_multiplier> GM </gpu_multiplier>
}}}
If your project uses GPUs, set this to roughly the ratio
of GPU speed to CPU speed.
Used in the calculation of job limits (see next 2 items).

{{{
<max_wus_to_send> N </max_wus_to_send>
}}}
Maximum jobs returned per scheduler RPC is '''N*(NCPUS + GM*NGPUS)'''.
You can use this to limit the impact of faulty hosts,
but set it large enough that a host that connects to the network only occasionally gets enough work to keep it busy between connections.

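Continuing the hypothetical 4-CPU, 1-GPU host from the example above (values again purely illustrative), a project might set:
{{{
<gpu_multiplier> 10 </gpu_multiplier>
<max_wus_to_send> 5 </max_wus_to_send>
}}}
A single scheduler RPC would then return at most 5*(4 + 10*1) = 70 jobs to that host.
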

{{{
<daily_result_quota> MRD </daily_result_quota>
}}}
The maximum number of jobs sent to a given host in a 24-hour period is '''MRD*(NCPUS + GM*NGPUS)'''.
You can use this to limit the impact of faulty hosts.
Set it large enough that a host can download enough work to keep it busy if disconnected from the network for a few days.
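For the same hypothetical host (4 CPUs, 1 GPU, gpu_multiplier of 10), setting for example:
{{{
<daily_result_quota> 20 </daily_result_quota>
}}}
allows at most 20*(4 + 10*1) = 280 jobs to be sent to that host per 24-hour period.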