
=== Prioritizing a user's batches ===

With the above design, the batches of a particular user
are processed in FCFS order (possibly with overlap).
It's possible to refine the mechanism to
let users prioritize their own batches.

Example: suppose a user U has a long batch A in progress, with LST(A) = x,
and they submit a short batch B
that they want to have priority over A.

Then: let LST(B) = x,
and add R(B)/share(U) to both LST(A) and LET(A).

In effect, B inherits A's initial global position,
and A's position is moved back accordingly.

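The swap above can be sketched as follows. The `Batch` record, its field names, and the `prioritize` helper are illustrative assumptions for this sketch, not part of the actual design:

```python
from dataclasses import dataclass

@dataclass
class Batch:
    name: str
    lst: float       # LST(B): logical start time
    let: float       # LET(B): logical end time
    resource: float  # R(B): estimated total resource usage

def prioritize(b: Batch, a: Batch, user_share: float) -> None:
    """Give new batch b priority over in-progress batch a (same user)."""
    delta = b.resource / user_share  # R(B)/share(U)
    b.lst = a.lst    # B inherits A's initial global position
    a.lst += delta   # A is moved back by R(B)/share(U)
    a.let += delta

# Usage: long batch A in progress, short batch B jumps ahead of it.
a = Batch("A", lst=100.0, let=500.0, resource=400.0)
b = Batch("B", lst=0.0, let=0.0, resource=10.0)
prioritize(b, a, user_share=0.5)
print(b.lst, a.lst, a.let)  # 100.0 120.0 520.0
```

Note that B's logical start time must be copied before A's is advanced, so that B takes over exactly the position A held.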
=== Notes ===

The scheme uses a priori batch size estimates.
These may be wildly wrong, perhaps intentionally.
We need a way to adjust logical start and end times
when batches complete (or even while they are in progress)
to compensate for bad initial estimates.

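One possible correction rule, assuming (as elsewhere in the scheme) that a batch's logical span is proportional to R(B)/share(U): once actual resource usage is known, shift LET by the estimation error divided by the user's share. The helper below is a hypothetical sketch, not part of the design:

```python
def adjust_let(est_resource: float, actual_resource: float,
               let: float, user_share: float) -> float:
    """Return a corrected logical end time once actual usage is known.

    A batch that under-estimated its size gets pushed later; one that
    over-estimated gets pulled earlier.
    """
    correction = (actual_resource - est_resource) / user_share
    return let + correction

# A batch estimated at 100 units actually used 250: its LET moves later.
print(adjust_let(100.0, 250.0, let=200.0, user_share=0.5))  # 500.0
```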
The scheme should handle throughput-oriented users
(i.e., those submitting an effectively infinite stream of single jobs)
by viewing each job as a 1-element batch.

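Under that view, each submitted job becomes a 1-element batch whose logical span advances the user's position. A minimal sketch, assuming LET = LST + R/share(U); the helper and its parameter names are hypothetical:

```python
def submit_single_job(user_lst: float, now: float,
                      job_resource: float, user_share: float):
    """Wrap a single job as a 1-element batch.

    Returns (lst, let) for the job; the caller would record let as the
    user's new LST(U).
    """
    lst = max(now, user_lst)               # start at the user's logical position
    let = lst + job_resource / user_share  # logical end of the 1-job batch
    return lst, let

# Usage: an idle user (LST(U)=0) submits a 5-unit job at time 10.
lst, let = submit_single_job(user_lst=0.0, now=10.0,
                             job_resource=5.0, user_share=0.25)
print(lst, let)  # 10.0 30.0
```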
The scheme doesn't use the idea of
accumulated credit proposed above;
it is replaced by LST(U).