Changes between Version 5 and Version 6 of ValidationSummary
Timestamp: Apr 14, 2010, 8:39:45 PM
Legend:
  (no prefix)  Unmodified
  +            Added (in v6)
  -            Removed (from v5)
  Modified lines appear as a removed line followed by its replacement; … marks unchanged lines omitted from the diff.
ValidationSummary
- = Validation, credit, and replication =
+ = Validation and replication =

- The execution of a job produces:
-
- * The output files;
- * The amount of CPU time used; this may be used to determine how much credit to grant for the result.
-
- In general, neither of these can be trusted, because:
+ The results of a job cannot be trusted, because:

  * Some hosts have consistent or sporadic hardware problems, typically causing errors in floating-point computation.
  * Some volunteers may maliciously return wrong results; they may even reverse-engineer your application, deciphering and defeating any internal validation mechanism it might contain.
- * Some volunteers may return correct results but falsify the CPU time.

- BOINC offers several mechanisms for validating results and credit.
+ BOINC offers several mechanisms for validating results.
  However, there is no "one size fits all" solution.
  The choice depends on your requirements, and on the nature of your applications
…
  of detecting wrong results with high probability.

- The credit question remains.
- Some possibilities:
-
- * Grant fixed credit (feasible if your jobs are uniform).
- * Put a cap on granted credit (this allows cheating).
- * If claimed credit exceeds a threshold, replicate the job.
-
  == Replication ==

…
  each job gets done on N different hosts,
  and a result is considered valid if a strict majority of hosts return it.
-
- Replication also provides a good solution to credit cheating, even for non-uniform apps:
- grant the average claimed credit, throwing out the low and high first
- (if N=2, grant the minimum).

  One problem with replication is that there are discrepancies
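To make the strict-majority rule above concrete, the sketch below shows the decision step in C++. It is only an illustration, not the actual BOINC validator interface: the `Result` struct and the `equivalent()` comparison are hypothetical stand-ins for a project's real output comparison.

{{{
// Minimal sketch of majority-vote replication (not the real BOINC validator API).
// A job is sent to N hosts; a returned result is accepted only if a strict
// majority of the N returned results agree with it.
#include <cstddef>
#include <vector>

struct Result {
    double value;   // hypothetical summary of a host's output file
    int host_id;    // hypothetical identifier of the host that returned it
};

// Hypothetical equivalence test. A real project compares output files,
// usually with some tolerance for floating-point differences between hosts.
bool equivalent(const Result& a, const Result& b) {
    return a.value == b.value;
}

// Return the index of a result that a strict majority of the N results
// agree with, or -1 if there is no such majority (the job must be re-issued).
int find_canonical(const std::vector<Result>& results) {
    const std::size_t n = results.size();
    for (std::size_t i = 0; i < n; i++) {
        std::size_t agree = 0;
        for (std::size_t j = 0; j < n; j++) {
            if (equivalent(results[i], results[j])) agree++;
        }
        if (2 * agree > n) return static_cast<int>(i);  // strict majority
    }
    return -1;
}
}}}

As the hardware-error bullet above suggests, the equivalence test generally cannot be an exact comparison, since hosts may produce slightly different floating-point output for the same job.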