Changes between Version 26 and Version 27 of CreditNew
Timestamp: Mar 25, 2010, 2:48:20 PM
= New credit system design =

== Definitions ==

BOINC estimates the '''peak FLOPS''' of each processor.
For CPUs, this is the Whetstone benchmark score.
For GPUs, it's given by a manufacturer-supplied formula.

…

So a given job might take the same amount of CPU time
on a 1 GFLOPS host as on a 10 GFLOPS host.
The '''efficiency''' of an application running on a given host
is the ratio of actual FLOPS to peak FLOPS.

…

 * For our purposes, the peak FLOPS of a device
   uses single or double precision, whichever is higher.

== Credit system goals ==

…

 * Project neutrality: different projects should grant
   about the same amount of credit per host, averaged over all hosts.

== The first credit system ==

…

== Goals of the new (third) credit system ==

 * Completely automated - projects don't have to change code, settings, etc.

 * Device neutrality

…

(e.g., a CPU job that does lots of disk I/O)
PFC() won't reflect this. That's OK.
The key thing is that BOINC allocated the device to the job,
whether or not the job used it efficiently.
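The definitions above can be illustrated with a small sketch. The function names `efficiency` and `peak_flop_count` are ours, not from the BOINC source; the sketch assumes PFC(J) is the device's peak FLOPS multiplied by the time the device was allocated to the job, whether or not the job used it efficiently.

```python
def efficiency(actual_flops: float, peak_flops: float) -> float:
    """Efficiency of an app on a host: actual FLOPS / peak FLOPS."""
    return actual_flops / peak_flops

def peak_flop_count(peak_flops: float, elapsed_seconds: float) -> float:
    """PFC(J): the device's peak FLOPS times the time BOINC allocated it
    to the job (assumed definition; the exact one is in the elided text)."""
    return peak_flops * elapsed_seconds

# A 10 GFLOPS (peak) device running an app at 1 GFLOPS actual:
eff = efficiency(1e9, 10e9)        # 0.1, i.e. 10% efficient
pfc = peak_flop_count(10e9, 3600)  # 3.6e13 peak FLOPs for a one-hour job
```

Note that a CPU job doing lots of disk I/O gets the same PFC as a compute-bound job of the same duration; per the text above, that is intentional.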
 * peak_flops(J) may not be accurate; e.g., a GPU job may take

…

''A posteriori'' estimates of job size may also exist
(e.g., an iteration count reported by the app)
but this introduces a new cheating risk.

== Cross-version normalization ==

A given application may have multiple versions (e.g., CPU and GPU versions).
If jobs are distributed uniformly to versions,
all versions should get the same average credit.
We adjust the credit per job
so that the average is the same for each version.

…

An app version V's jobs are then scaled by the factor

 Scale(V) = (X/PFC^mean^(V))

Notes:

…

It's not exactly "Actual FLOPs", since the most efficient
version may not be 100% efficient.
 * If jobs are not distributed uniformly among versions
   (e.g. if SETI@home VLAR jobs are done only by the CPU version)
   then this mechanism doesn't work as intended.
   One solution is to create separate apps for separate types of jobs.

== Cross-project normalization ==

…

V's jobs are then scaled by S(V) as above.
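The cross-version scaling can be sketched as follows. The definition of X is in the elided text; this sketch assumes X is the minimum of the versions' mean PFC, which matches the note above that credit tracks the most efficient version. The name `version_scales` and the numbers are illustrative.

```python
def version_scales(pfc_mean: dict[str, float]) -> dict[str, float]:
    """Scale(V) = X / PFC_mean(V) for each app version V.

    Assumes X = min over versions of PFC_mean(V); the most efficient
    version (lowest mean PFC) then gets scale 1.0, and less efficient
    versions are scaled down so average credit per job is the same.
    """
    x = min(pfc_mean.values())
    return {v: x / m for v, m in pfc_mean.items()}

# Hypothetical per-version mean peak-FLOP counts for one application:
scales = version_scales({"cpu": 2.0e13, "gpu": 8.0e13})
# scales["cpu"] -> 1.0, scales["gpu"] -> 0.25
```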
Projects will export the following data:
{{{
for each app version
    app name
    platform name
    recent average granted credit
    plan class
    scale factor
}}}

The BOINC server will collect these from several projects
and will export the following:
{{{
for each plan class
    average scale factor (weighted by RAC)
}}}
We'll provide a script that identifies app versions
for GPUs with no corresponding CPU app version,
and sets their scaling factor based on the above.

Notes:

…

we maintain PFC^mean^(H, A),
the average of PFC(J)/E(J) for jobs completed by H using A.

This yields the host scaling factor

 Scale(H) = (PFC^mean^(V)/PFC^mean^(H, A))

There are some cases where hosts are not sent jobs uniformly:

…

than average.

== Claimed credit ==

The '''claimed FLOPS''' for a given job J is then

 F = PFC(J) * S(V) * S(H)

and the claimed credit (in Cobblestones) is

 C = F*100/86400e9

== Computing averages ==

…

The code that does this is
[http://boinc.berkeley.edu/trac/browser/trunk/boinc/lib/average.h here].
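The claimed-credit formulas above can be combined into a short sketch. The constant 100/86400e9 encodes the Cobblestone: one day (86400 s) of computing at 1 GFLOPS peak claims 100 credits. The function name is illustrative.

```python
def claimed_credit(pfc: float, scale_v: float, scale_h: float) -> float:
    """Claimed credit in Cobblestones for one job.

    F = PFC(J) * S(V) * S(H)   (claimed FLOPs)
    C = F * 100 / 86400e9      (100 credits per GFLOPS-day)
    """
    f = pfc * scale_v * scale_h
    return f * 100 / 86400e9

# A job whose PFC is one GFLOPS-day (8.64e13 FLOPs), with neutral
# version and host scale factors, claims exactly 100 Cobblestones:
c = claimed_credit(8.64e13, 1.0, 1.0)  # 100.0
```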
== Anonymous platform ==