nbodykit.batch

Functions

enum(*sequential, **named) Enumeration values to serve as status tags passed between processes
split_ranks(N_ranks, N[, include_all]) Divide the ranks into chunks, attempting to have N ranks in each chunk.

Classes

TaskManager(cpus_per_task[, comm, debug, …]) An MPI task manager that distributes tasks over a set of MPI processes, using a specified number of independent workers to compute each task.
class nbodykit.batch.TaskManager(cpus_per_task, comm=None, debug=False, use_all_cpus=False)[source]

An MPI task manager that distributes tasks over a set of MPI processes, using a specified number of independent workers to compute each task.

Given the specified number of independent workers (which compute tasks in parallel), the total number of available CPUs will be divided evenly.

The main function is iterate which iterates through a set of tasks, distributing the tasks in parallel over the available ranks.

Parameters:
  • cpus_per_task (int) – the desired number of ranks assigned to compute each task
  • comm (MPI communicator, optional) – the global communicator that will be split so each worker has a subset of CPUs available; default is COMM_WORLD
  • debug (bool, optional) – if True, set the logging level to DEBUG, which prints out much more information; default is False
  • use_all_cpus (bool, optional) – if True, use all available CPUs, including the remainder if cpus_per_task does not divide the total number of CPUs evenly; default is False

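A usage sketch of the pattern described above (the task function make_mock is hypothetical; assumes nbodykit and mpi4py are installed and the script is launched under MPI, e.g. with mpirun -np 9 python script.py):

```python
# Sketch only: requires nbodykit + mpi4py and an MPI launch to actually run.

def make_mock(seed):
    # hypothetical per-task work, e.g. generating one mock catalog
    return seed ** 2

def main(seeds):
    # deferred import: nbodykit expects an MPI environment when this runs
    from nbodykit.batch import TaskManager

    # with 8 non-master ranks and cpus_per_task=2, the tasks are computed
    # by 4 independent workers of 2 ranks each
    with TaskManager(cpus_per_task=2, use_all_cpus=True) as tm:
        for seed in tm.iterate(seeds):  # collective: called by all ranks
            make_mock(seed)
```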
Methods

is_root() Is the current process the root process?
is_worker() Is the current process a valid worker?
iterate(tasks) A generator that iterates through a series of tasks in parallel.
map(function, tasks) Like the built-in map() function, apply a function to all of the values in a list and return the list of results.
__enter__()[source]

Split the base communicator so that each worker is allocated the specified number of CPUs with which to compute its tasks

__exit__(exc_type, exc_value, exc_traceback)[source]

Exit gracefully by closing and freeing the MPI-related variables

is_root()[source]

Is the current process the root process?

Root is responsible for distributing the tasks to the other available ranks

is_worker()[source]

Is the current process a valid worker?

Workers wait for instructions from the master

iterate(tasks)[source]

A generator that iterates through a series of tasks in parallel.

Notes

This is a collective operation and should be called by all ranks

Parameters:
  • tasks (iterable) – an iterable of task items that will be yielded in parallel across all ranks
Yields:
  task – the individual items of tasks, iterated through in parallel
map(function, tasks)[source]

Like the built-in map() function, apply a function to all of the values in a list and return the list of results.

If tasks contains tuples, the arguments are passed to function using the *args syntax

Notes

This is a collective operation and should be called by all ranks

Parameters:
  • function (callable) – The function to apply to the list.
  • tasks (list) – The list of tasks
Returns:
  results – the list of the return values of function()
Return type:
  list
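The tuple-unpacking convention can be illustrated in plain Python (a sketch of the documented semantics only, not the library implementation; apply_one is a hypothetical helper):

```python
def apply_one(function, task):
    # per the docs: tuple tasks are unpacked into positional arguments
    if isinstance(task, tuple):
        return function(*task)
    return function(task)

# a serial stand-in for the per-task dispatch performed by map()
results = [apply_one(lambda a, b: a + b, t) for t in [(1, 2), (3, 4)]]
```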

nbodykit.batch.enum(*sequential, **named)[source]

Enumeration values to serve as status tags passed between processes
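This signature matches a common Python enum-factory recipe; a sketch consistent with it (the library's internals may differ):

```python
def enum_sketch(*sequential, **named):
    # sequential names get values 0, 1, 2, ...; keyword names keep their values
    values = dict(zip(sequential, range(len(sequential))), **named)
    return type('Enum', (), values)

# e.g. status tags passed between the master and worker processes
tags = enum_sketch('READY', 'DONE', 'EXIT', 'START')
```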

nbodykit.batch.split_ranks(N_ranks, N, include_all=False)[source]

Divide the ranks into chunks, attempting to have N ranks in each chunk. This excludes the master rank (rank 0), so that N_ranks - 1 ranks are available to be grouped

Parameters:
  • N_ranks (int) – the total number of ranks available
  • N (int) – the desired number of ranks per worker
  • include_all (bool, optional) – if True, do not force each group to have exactly N ranks; instead, include any remainder ranks as well; default is False
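One plausible reading of this grouping, as an illustrative sketch in plain Python (not the library source; in particular, the exact handling of the remainder under include_all is an assumption here):

```python
def split_ranks_sketch(N_ranks, N, include_all=False):
    """Yield (chunk_index, ranks) pairs, grouping ranks 1..N_ranks-1 by N."""
    workers = list(range(1, N_ranks))  # rank 0 is reserved as the master
    n_chunks = len(workers) // N       # number of full chunks of size N
    for i in range(n_chunks):
        chunk = workers[i * N:(i + 1) * N]
        if include_all and i == n_chunks - 1:
            chunk = workers[i * N:]    # fold any remainder into the last chunk
        yield i, chunk
```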