Amazon's 'UltraCompute' service receives an array of n job fragments. Each fragment's size is recorded in jobSizes[i] for 0 ≤ i < n. At the same time, the fleet has m worker instances whose maximum throughputs are throughput[j] for 0 ≤ j < m.
Worker j finishes fragment i in exactly 1 second if jobSizes[i] is less than or equal to throughput[j]; otherwise it cannot run that fragment at all.
Each worker can process at most one fragment per second. If a worker is assigned multiple fragments, there is a mandatory one-second cooldown (pause) between completing one fragment and starting the next, so a worker that runs q fragments needs 2q − 1 seconds in total. Different workers may process different fragments in parallel.
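The per-worker timing implied above (1 second per fragment, 1-second cooldown between consecutive fragments, as shown in the worked example) can be sketched as a small helper; the function name is mine, not part of the problem:

```python
def worker_time(q: int) -> int:
    """Seconds for one worker to finish q fragments:
    q seconds of processing plus q - 1 one-second cooldowns."""
    return 2 * q - 1 if q > 0 else 0
```

For instance, a worker given two fragments spends 1 s + 1 s pause + 1 s = 3 s.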
Your task is to compute the minimum number of seconds needed to finish all fragments, or return -1 if at least one fragment is too large for every worker.
Example 1:
n = 3, jobSizes = [2, 8, 3], m = 3, throughput = [6, 2, 4]
Output: -1
The fragment of size 8 exceeds every worker's throughput, so executing all fragments is impossible.
Example 2:
n = 5, jobSizes = [1, 6, 2, 4, 3], m = 3, throughput = [7, 4, 4]
Output: 3
The worker with throughput 7 runs the fragments of size 1 and 6 (1 s + 1 s cooldown + 1 s = 3 s). One worker with throughput 4 runs the fragments of size 2 and 4 on the same timeline (3 s). The other worker with throughput 4 runs the fragment of size 3 alone in 1 s. All fragments are done after 3 seconds.
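One way to solve this (a sketch under the assumptions above; the source gives no reference solution, and the function name is mine): sort fragments and throughputs in descending order. The k largest fragments can only run on the workers fast enough for the k-th largest fragment, so one of those workers must take at least ⌈k / capable⌉ fragments, and a worker running q fragments needs 2q − 1 seconds.

```python
from math import ceil

def min_completion_seconds(job_sizes, throughputs):
    """Minimum seconds to finish all fragments, or -1 if some
    fragment exceeds every worker's throughput.

    Model: a capable worker finishes a fragment in 1 s and must
    pause 1 s between fragments, so q fragments take 2*q - 1 s.
    """
    jobs = sorted(job_sizes, reverse=True)       # largest fragment first
    workers = sorted(throughputs, reverse=True)  # fastest worker first
    m = len(workers)

    max_load = 0  # most fragments any single worker is forced to take
    capable = 0   # workers fast enough for the current (and smaller) jobs
    for k, size in enumerate(jobs, start=1):
        # Workers are sorted descending, so the capable set only grows
        # as the fragments shrink.
        while capable < m and workers[capable] >= size:
            capable += 1
        if capable == 0:
            return -1  # even the fastest worker cannot run this fragment
        # The k largest fragments can only be served by these `capable`
        # workers, so one of them takes at least ceil(k / capable).
        max_load = max(max_load, ceil(k / capable))

    return 2 * max_load - 1 if jobs else 0
```

On the samples above this returns -1 for Example 1 (the size-8 fragment has no capable worker) and 3 for Example 2 (two workers must each take two fragments). The scan runs in O(n log n + m log m) time after sorting.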
HackerRank • Pending