Changes between Version 5 and Version 6 of CreditNew
Timestamp: Nov 3, 2009, 11:38:32 AM
Legend:
- Unmodified
- Added
- Removed
- Modified
CreditNew
v5 v6 189 189 == Host normalization == 190 190 191 For a given application, 192 all hosts should get the same average granted credit per job. 191 Assuming that hosts are sent jobs for a given app uniformly, 192 then for a given app 193 hosts should get the same average granted credit per job. 193 194 To ensure this, for each application A we maintain the average VNPFC*(A), 194 195 and for each host H we maintain VNPFC*(H, A). 195 The "claimed credit"for a given job J is then196 The '''claimed credit''' for a given job J is then 196 197 {{{ 197 198 VNPFC(J) * (VNPFC*(A)/VNPFC*(H, A)) 198 199 }}} 200 201 There are some cases where hosts are not sent jobs uniformly: 202 * job-size matching 203 * GPUGrid.net's scheme for sending some (presumably larger) 204 jobs to GPUs with more processors. 205 In these cases we must scale 199 206 200 207 Notes: … … 204 211 than average. 205 212 * VNPFC* is averaged over jobs, not hosts. 206 * This assumes that all hosts are sent the same distribution of jobs.207 There are two situations where this is not the case:208 a) job-size matching, and b) GPUGrid.net's scheme for sending209 some (presumably larger) jobs to GPUs with more processors.210 This can be dealt with using app units (see below).211 213 212 214 == Computing averages == … … 221 223 and we can't let this mess up the average. 222 224 223 In addition, we may as well maintain the standard deviation 224 of the quantities, 225 In addition, we may as well maintain the variance of the quantities, 225 226 although the current system doesn't use it. 
 So for each quantity we maintain the following object:
 {{{
+#define MIN_SAMPLES 20
+    // after this many samples, use exponentially averaged version
+#define SAMPLE_WEIGHT 0.001
+    // new samples get this weight in exp avg
+#define SAMPLE_LIMIT 10
+    // cap samples at recent_mean*10
+
 struct STATS {
     int nsamples;
-    double sum;
-    double exp_avg;
+    double mean;
+    double sum_var;
+    double recent_mean;
+    double recent_var;

     void update(double sample) {
-    }
-
-    double mean() {
+        if (sample < 0) return;
+        if (nsamples > MIN_SAMPLES) {
+            if (sample > recent_mean*SAMPLE_LIMIT) {
+                sample = recent_mean*SAMPLE_LIMIT;
+            }
+        }
+        // see http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance
+        nsamples++;
+        double delta = sample - mean;
+        mean += delta/nsamples;
+        sum_var += delta*(sample-mean);
+
+        if (nsamples < MIN_SAMPLES) {
+            recent_mean = mean;
+            recent_var = sum_var/nsamples;
+        } else {
+            // update recent averages
+            delta = sample - recent_mean;
+            recent_mean += SAMPLE_WEIGHT*delta;
+            double d2 = delta*delta - recent_var;
+            recent_var += SAMPLE_WEIGHT*d2;
+        }
     }
 };