Changes between Version 5 and Version 6 of JobSizeMatching
Timestamp: Apr 19, 2013, 12:56:39 PM
JobSizeMatching
= Job size matching =

The difference in throughput between a slow processor
(e.g. an Android device that runs infrequently)
and a fast processor (e.g. a GPU that's always on)
can be a factor of 1,000 or more.
Having a single job size can therefore present problems:

 * If the size is too small, hosts with GPUs get huge numbers of jobs.
   This causes performance problems on the client
   and a high DB load on the server.
 * If the size is too large, slow hosts can't get jobs,
   or they get jobs that take weeks to finish.

This document describes a set of mechanisms that address these issues.

== Job size classes ==

We'll assume that jobs for a given application can be generated
in several discrete '''size classes'''
(the number of size classes is a parameter of the application).

BOINC will try to send jobs of size class i
to devices whose effective speed is in the ith quantile,
where 'effective speed' is the product of the
device speed and the host's on-fraction.
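As a rough illustration of the matching rule above, here is a sketch of computing quantile boundaries from the host population and mapping a device's effective speed to a size class. This is not BOINC source; all names and the boundary convention are invented for illustration.

```python
# Sketch only: illustrates the quantile-matching rule described above.
# Function names and the boundary convention are invented, not BOINC code.
import bisect

def quantile_points(effective_speeds, n_size_classes):
    """Boundaries that split the host population into n equal-sized groups."""
    s = sorted(effective_speeds)
    n = len(s)
    return [s[(i * n) // n_size_classes] for i in range(1, n_size_classes)]

def size_class_for(device_speed, on_fraction, boundaries):
    """Effective speed = device speed * on-fraction; class = its quantile."""
    return bisect.bisect_left(boundaries, device_speed * on_fraction)

# 3 size classes: hosts in the slowest third get class 0 (small jobs), etc.
speeds = [1, 2, 3, 4, 5, 6, 7, 8, 9]
b = quantile_points(speeds, 3)       # -> [4, 7]
print(size_class_for(3, 0.5, b))     # slow, rarely-on host  -> 0
print(size_class_for(10, 0.9, b))    # fast, mostly-on host  -> 2
```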
This involves 3 new integer DB fields:
 * app.n_size_classes
 * workunit.size_class
 * result.size_class

The size class of a job is specified in the call to create_work().

Notes:
 * It's up to the project to decide how large jobs of different
   size classes should be.
   E.g. if there are 2 classes, the ratio of large to small could be 2:1 or 100:1.
   We should design tools to guide this decision.
 * It's up to the project to decide how many jobs
   of each size class to create.
 * app.n_size_classes can be changed, but you need to be careful
   because of existing jobs.
== Computing device quantiles ==

The order statistics of device effective speed will be computed
by a new program '''size_census'''.
For each app with n_size_classes > 1, it does the following:

 * enumerate the host_app_versions for that app
 * compute "effective speed" as hav.et.avg * host.available_frac
 * sort the results, and find the quantile points
 * write these to a flat file

The feeder reads these flat files (on startup, and when they change)
and stores the quantile points in shared memory.

== Scheduler changes ==

When the scheduler sends jobs of a given app to a given processor,
it should preferentially send jobs whose size class matches
the quantile of the processor.
Failing that, it should preferentially send jobs of smaller
size classes rather than larger.

To implement this policy,
we'll use a new version of score-based scheduling.
This will work as follows.

The scheduler will make a separate pass for each resource type (GPUs first).
For each resource:

 * For each app, find the best app version and decide whether it's reliable.
 * For each of these app versions, find the effective speed,
   and find which quantile the device falls into.
 * Scan N entries of the job array, starting at a random point.
   Make a list of jobs for which an app version is available,
   and that are of the right size.
 * For each job, compute a "score" that combines various factors
   (reliable, beta, previously infeasible, locality scheduling lite).
 * Include a factor for job size:
   decrement the score of jobs that are too small,
   and decrement it more for jobs that are too large.
 * Sort the list by score.
 * Scan the list; for each job:
   * Make sure it's still in the array
   ...
 * Leave the loop if the resource request is satisfied or we're out of disk space.

Notes:
 * The scan size N is configurable.
   E.g. WCG plans to have a job array of 3000-4000 entries, and N = ~500.
   We'd need to experiment with this to minimize contention between active requests
   and the resource load of maintaining a job array of that size.
 * All other factors being equal, the scheduler will send jobs of other apps
   rather than send a wrong-size job.
   This could potentially lead to starvation issues; we'll have to see.
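The job-size score factor described above can be sketched as follows. The weights here are invented for illustration; only the shape matters: too-small jobs are penalized mildly, too-large jobs more heavily.

```python
# Sketch of the job-size score adjustment (weights 0.5 and 2.0 are
# invented for illustration; the actual scoring formula is not fixed here).
def size_score_factor(job_size_class, device_quantile):
    if job_size_class == device_quantile:
        return 0.0                                         # right size
    if job_size_class < device_quantile:
        return -0.5 * (device_quantile - job_size_class)   # too small: mild
    return -2.0 * (job_size_class - device_quantile)       # too large: severe

print(size_score_factor(1, 1))   # matching size: no penalty
print(size_score_factor(0, 2))   # small job on fast device: -1.0
print(size_score_factor(2, 0))   # large job on slow device: -4.0
```

Adding this factor to the overall score lets wrong-size jobs still be sent when nothing better is available, rather than excluding them outright.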
== Regulating the flow of jobs into shared memory ==

How can we prevent shared memory from becoming "clogged" with jobs of one size?

One approach would be to allocate slots for each size.
This would be too complex because we already have two allocation schemes
(for HR and all_apps).

We could modify the work generator so that it polls the number of unsent
jobs of each size, and creates a few more jobs of a given size when this
number falls below a threshold.
Problem: this might not be able to handle a large spike in demand.
We'd like to be able to have a large buffer of unsent jobs in the DB.

Instead, we'll do the following:
 * When jobs are created (in the transitioner), set their state to
   INACTIVE rather than UNSENT.
   This is done if app.n_size_classes > 1.
 * Have a new daemon ('''size_regulator''') that polls for the number of unsent
   jobs of each size class, and changes a few jobs from INACTIVE to UNSENT
   if that number falls below a certain level.

The parameters of '''size_regulator''' are:
 * The name of the app.
 * Low- and high-water marks for the number of UNSENT jobs of each size class.
 * The polling frequency.
 * Ordering options as for the feeder
   (--priority_order, --priority_order_create_time, --random_order).
   This determines the order in which jobs are marked as UNSENT.
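One polling pass of the daemon described above might look like the following sketch. The in-memory counts and the function name are ours for illustration; the real daemon would query UNSENT counts from the result table and update job states in the configured order.

```python
# Sketch of one size_regulator polling pass (illustration only; the
# real daemon reads UNSENT counts from the DB and updates job states).
def regulate_once(unsent_counts, low_water, high_water):
    """Return, per size class, how many INACTIVE jobs to mark UNSENT."""
    promotions = {}
    for size_class, unsent in unsent_counts.items():
        if unsent < low_water:
            # Refill to the high-water mark; which jobs are promoted
            # depends on the ordering option (priority, create time, random).
            promotions[size_class] = high_water - unsent
    return promotions

# class 0 has fallen below the low-water mark; classes 1 and 2 are fine
print(regulate_once({0: 10, 1: 80, 2: 200}, low_water=50, high_water=100))
# -> {0: 90}
```

Using separate low- and high-water marks avoids promoting a few jobs on every poll; the daemon only acts when a class drains below the low mark, then refills it in one batch.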