wiki:JobSizeMatching

Job size matching

The difference in throughput between a slow resource (e.g. an Android device that runs infrequently) and a fast resource (e.g. a GPU that's always on) can be a factor of 1,000 or more. Having a single job size can therefore present problems:

  • If the size is too small, hosts with GPUs get huge numbers of jobs (which causes various problems) and there is a high DB load on the server.
  • If the size is too large, slow hosts can't get jobs, or they get jobs that take weeks to finish.

This document describes a set of mechanisms that address these issues.

Regulating the flow of jobs into shared memory

Let's suppose that an app's work generator can produce several sizes of job - say, small, medium, and large. We won't address the issue of how to pick these sizes.

How can we prevent shared memory from becoming "clogged" with jobs of one size?

One approach would be to allocate slots for each size. This would be complex because we already have two allocation schemes (for HR and all_apps).

We could modify the work generator so that it polls the number of unsent jobs of each size, and creates a few more jobs of a given size when this number falls below a threshold.

Problem: this might not be able to handle a large spike in demand. We'd like to be able to have a large buffer of unsent jobs in the DB.

Solution:

  • When jobs are created (in the transitioner), set their state to INACTIVE rather than UNSENT. A per-app field, num_size_classes, would indicate whether this should be done: it gives the number of size classes, and a value greater than 1 enables the mechanism.
  • Have a new daemon (call it the "regulator") that polls the number of unsent jobs of each size class and changes a few jobs from INACTIVE to UNSENT (see the sketch after this list). The daemon should manage either all apps with num_size_classes > 1 or a specific app passed on its command line. The buffer size for each size class and the polling frequency should both be command-line parameters. The criteria for choosing which jobs to advance should be the same as those available in the feeder.
  • Add a "size_class" field to workunit and result to indicate the size (e.g. S/M/L). This field should be an integer (or smallint). A project should not set this field to a value larger than num_size_classes.
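
To make the regulator concrete, here is a minimal sketch of its polling pass. It assumes the size_class column and INACTIVE state described above, uses the MySQL C API directly rather than BOINC's DB layer, and its state values, query text, and function names are illustrative only.

{{{
// Sketch of the regulator's polling pass (illustrative, not BOINC code).
// Assumes result.size_class exists and that INACTIVE/UNSENT have the
// numeric values shown below.

#include <mysql/mysql.h>
#include <cstdio>
#include <cstdlib>

const int STATE_INACTIVE = 1;   // assumed value of the INACTIVE server state
const int STATE_UNSENT   = 2;   // assumed value of the UNSENT server state

// Count UNSENT results of one app and size class.
int count_unsent(MYSQL* db, int appid, int size_class) {
    char q[256];
    snprintf(q, sizeof(q),
        "select count(*) from result"
        " where appid=%d and size_class=%d and server_state=%d",
        appid, size_class, STATE_UNSENT);
    if (mysql_query(db, q)) return -1;
    MYSQL_RES* r = mysql_store_result(db);
    if (!r) return -1;
    MYSQL_ROW row = mysql_fetch_row(r);
    int n = row ? atoi(row[0]) : -1;
    mysql_free_result(r);
    return n;
}

// Promote up to n INACTIVE results of one app and size class to UNSENT.
void promote(MYSQL* db, int appid, int size_class, int n) {
    char q[256];
    snprintf(q, sizeof(q),
        "update result set server_state=%d"
        " where appid=%d and size_class=%d and server_state=%d limit %d",
        STATE_UNSENT, appid, size_class, STATE_INACTIVE, n);
    mysql_query(db, q);
}

// One polling pass: keep roughly buffer_size UNSENT jobs in each class.
void regulate(MYSQL* db, int appid, int num_size_classes, int buffer_size) {
    for (int sc = 0; sc < num_size_classes; sc++) {
        int unsent = count_unsent(db, appid, sc);
        if (unsent >= 0 && unsent < buffer_size) {
            promote(db, appid, sc, buffer_size - unsent);
        }
    }
}
}}}

The daemon's main loop would call regulate() for each managed app, then sleep for the configured polling period.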

Scheduler changes

We need to revamp the scheduler. Here's how things currently work:

  • The scheduler makes up to 5 passes through the array:
    • "need reliable" jobs
    • beta jobs
    • previously infeasible jobs
    • locality scheduling lite (job uses file already on client)
    • unrestricted
  • We maintain a data structure that maps app to the "best" app version for that app.
    • In the "need reliable" phase this includes only reliable app versions; the map is cleared at the end of the phase.
    • If we satisfy the request for a particular resource and the best app version uses that resource, we clear the entry.

New approach: Do it one resource at a time (GPUs first). For each resource:

  • For each app, find the best app version and decide whether it's reliable
  • For each of these app versions, find the expected speed (taking on-fraction etc. into account). Based on this, and the statistics of the host population, decide what size job to send for this resource.
  • Scan the job array, starting at a random point. Make a list of jobs for which an app version is available, and that are of the right size.
  • Sort the list by a score that combines the above criteria (reliable, beta, previously infeasible, locality scheduling lite); a sketch of this scan-and-send pass follows the list.
  • Scan the list; for each job
    • Make sure it's still in the array
    • Do quick checks
    • Lock entry and do slow checks
    • Send job
    • Leave loop if resource request is satisfied or we're out of disk space
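
A rough sketch of this per-resource pass, assuming size classes and the scoring criteria are already attached to the shared-memory entries; the JOB struct, the score weights, and the helper names are placeholders, not actual scheduler code.

{{{
// Illustrative sketch of the proposed scan-and-send pass for one resource.

#include <algorithm>
#include <cstdlib>
#include <vector>

struct JOB {
    int    size_class;        // 0 .. num_size_classes-1
    bool   app_version_ok;    // a usable app version exists for this resource
    bool   reliable, beta, prev_infeasible, locality_lite;
    double est_flops;         // estimated work content of the job
};

// Combine the selection criteria into a single sort key (weights arbitrary).
static double job_score(const JOB& j) {
    return 8*j.reliable + 4*j.beta + 2*j.prev_infeasible + j.locality_lite;
}

// Handle one resource type (e.g. the GPU) of one scheduler request.
// 'request' is the remaining amount of work (FLOPs) the client asked for.
void scan_for_resource(std::vector<JOB>& shmem, int wanted_size_class,
                       double request) {
    if (shmem.empty()) return;
    std::vector<JOB*> candidates;

    // 1) Scan the array from a random starting point; keep jobs of the
    //    right size for which an app version is available.
    size_t n = shmem.size(), start = rand() % n;
    for (size_t i = 0; i < n; i++) {
        JOB& j = shmem[(start + i) % n];
        if (j.size_class == wanted_size_class && j.app_version_ok) {
            candidates.push_back(&j);
        }
    }

    // 2) Sort by the combined score.
    std::sort(candidates.begin(), candidates.end(),
              [](const JOB* a, const JOB* b) {
                  return job_score(*a) > job_score(*b);
              });

    // 3) Walk the list; the real scheduler would re-check that each job is
    //    still in the array, do the quick checks, lock the entry, do the
    //    slow checks, and only then send it.
    for (JOB* j : candidates) {
        request -= j->est_flops;   // pretend the job was sent
        if (request <= 0) break;   // request satisfied (or out of disk space)
    }
}
}}}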

[knreed] - I think we will need a parameter that controls how many entries of the job array are scanned for each request. For WCG, we would probably make the job array 3,000-4,000 entries long and have a given device scan about 500 of them. We would need to experiment with this to balance contention between active requests against the resource cost of maintaining a job array of that size.

Open questions

1) How to choose job sizes?

[knreed] - Projects should make the call. However, it would be interesting to create a tool that scans the DB and reports the distribution of device-resource available compute power over a 24-hour (elapsed) period. This would help a project identify target FLOPs sizes.
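
As an illustration, here is a sketch of what such a tool might compute, assuming the relevant host-table fields (p_fpops, on_frac, active_frac) have already been read into memory; the struct, function names, and quantile report are hypothetical.

{{{
#include <algorithm>
#include <cstdio>
#include <vector>

struct HostResource {
    double p_fpops;      // peak FLOPS of the device (CPU core or GPU)
    double on_frac;      // fraction of time the BOINC client is running
    double active_frac;  // fraction of that time computation is allowed
};

// FLOPs this device can be expected to deliver in 24 elapsed hours.
double flops_per_day(const HostResource& h) {
    return h.p_fpops * h.on_frac * h.active_frac * 86400.0;
}

// Print the boundaries that split the population into num_size_classes
// equal-sized groups; these suggest target job sizes in FLOPs.
void report_distribution(const std::vector<HostResource>& hosts,
                         int num_size_classes) {
    std::vector<double> daily;
    for (const auto& h : hosts) daily.push_back(flops_per_day(h));
    if (daily.empty()) return;
    std::sort(daily.begin(), daily.end());
    for (int i = 1; i < num_size_classes; i++) {
        size_t idx = daily.size() * i / num_size_classes;
        printf("boundary %d: %.3e FLOPs per day\n", i, daily[idx]);
    }
}
}}}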

2) Given an estimated speed, how to decide which size to send?

[knreed] - One thought: using the distribution of device-resource available compute power over a 24-hour period for a given app, break the population into app.num_size_classes groups. The feeder would maintain the jobs it places in shared memory so that they are also divided into app.num_size_classes groups. A device-resource should then be assigned a job in the same group as the device-resource.
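
A minimal sketch of this matching rule, assuming the group boundaries come from a 24-hour distribution like the one sketched above; the function name and 0-based class numbering are assumptions.

{{{
#include <vector>

// 'boundaries' holds num_size_classes-1 ascending values (FLOPs per day),
// e.g. the output of report_distribution() above.
int device_size_class(double daily_flops, const std::vector<double>& boundaries) {
    int c = 0;
    while (c < (int)boundaries.size() && daily_flops >= boundaries[c]) c++;
    return c;   // 0 = slowest group, num_size_classes-1 = fastest
}
}}}

The scheduler would then only consider jobs whose size_class equals the class returned for the requesting device-resource, and the feeder would keep unsent jobs in shared memory divided across the classes in a similar way.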

3) How do we prevent infeasible jobs from clogging shared memory? I think the only new consideration is: if an app has homogeneous app version enabled, and the resource currently looking at the job can process it, just not with the best app version, should the job be assigned?