Consider the following scenario. A simulation is split into 10 independent (parallel) tasks. The simulation has a bunch of parameters. 4 Engines are up and connected to a single controller. Several clients submit the same simulation but with different simulation parameters to the controller via load balanced views.
What seems to happen is that the controller schedules the 10 tasks of the first client on its engines, and the 10 tasks of the second client only start after all tasks of the first client have finished. Let's assume every task takes exactly the same time T to compute. Then during the first 8 tasks of the first client all four engines run at full capacity (2T), but during the remaining 2 tasks two engines sit idle (1T), so each client's batch takes 3T. With two clients that gives an overall computation time of 6T. If the tasks of all connected clients shared a single task pool, the 20 tasks would keep all four engines busy the whole time and the computation would finish in 5T.
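The arithmetic above can be checked with a short calculation (just a sketch of the makespan model; the helper names are my own, not part of any IPython API):

```python
import math

def makespan_sequential(clients, tasks_per_client, engines):
    # Each client's batch runs to completion before the next
    # client's batch starts, so the per-batch makespans add up.
    return clients * math.ceil(tasks_per_client / engines)

def makespan_shared(clients, tasks_per_client, engines):
    # All tasks go into one shared pool; engines stay busy
    # until the pool drains.
    return math.ceil(clients * tasks_per_client / engines)

# Two clients, 10 tasks each, 4 engines (times in units of T):
print(makespan_sequential(2, 10, 4))  # → 6
print(makespan_shared(2, 10, 4))      # → 5
```

The gap grows with the number of clients, since every batch that doesn't divide evenly by the engine count leaves engines idle at its tail.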