nbodykit.source.catalog.uniform

Classes

RandomCatalog(csize[, seed, comm])

A CatalogSource that can have columns added via a collective random number generator.

UniformCatalog(nbar, BoxSize[, seed, dtype, …])

A CatalogSource that has uniformly-distributed Position and Velocity columns.

class nbodykit.source.catalog.uniform.RandomCatalog(csize, seed=None, comm=None)[source]

A CatalogSource that can have columns added via a collective random number generator.

The random number generator stored as rng behaves as numpy.random.RandomState but generates random numbers only on the local rank in a manner independent of the number of ranks.

Parameters
  • csize (int) – the desired collective size of the Source

  • seed (int, optional) – the global seed for the random number generator

  • comm (MPI communicator) – the MPI communicator; set automatically if None
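
A minimal usage sketch (assuming a working nbodykit install; the column name Mass and its value range are illustrative, and rng mirrors the numpy.random.RandomState interface):

    from nbodykit.source.catalog.uniform import RandomCatalog

    # a catalog of 1024 objects collectively, split across the MPI ranks
    cat = RandomCatalog(csize=1024, seed=42)

    # draw a new column from the collective generator; the values do not
    # depend on how many ranks are running
    cat['Mass'] = cat.rng.uniform(low=1e12, high=1e13)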

Attributes
Index

The attribute giving the global index rank of each particle in the list.

attrs

A dictionary storing relevant meta-data about the CatalogSource.

columns

All columns in the CatalogSource, including those hard-coded into the class’s definition and override columns provided by the user.

csize

The total, collective size of the CatalogSource, i.e., summed across all ranks.

hardcolumns

A list of the hard-coded columns in the CatalogSource.

rng

A MPIRandomState that behaves as numpy.random.RandomState but generates random numbers in a manner independent of the number of ranks.

size

The number of objects in the CatalogSource on the local rank.

Methods

Selection(self)

A boolean column that selects a subset slice of the CatalogSource.

Value(self)

When interpolating a CatalogSource on to a mesh, the value of this array is used as the Value that each particle contributes to a given mesh cell.

Weight(self)

The column giving the weight to use for each particle on the mesh.

compute(self, *args, **kwargs)

Our version of dask.compute() that computes multiple delayed dask collections at once.

copy(self)

Return a shallow copy of the object, where each column is a reference of the corresponding column in self.

get_hardcolumn(self, col)

Construct and return a hard-coded column.

gslice(self, start, stop[, end, redistribute])

Execute a global slice of a CatalogSource.

make_column(array)

Utility function to convert an array-like object to a dask.array.Array.

persist(self[, columns])

Return a CatalogSource, where the selected columns are computed and persist in memory.

read(self, columns)

Return the requested columns as dask arrays.

save(self, output[, columns, dataset, …])

Save the CatalogSource to a bigfile.BigFile.

sort(self, keys[, reverse, usecols])

Return a CatalogSource, sorted globally across all MPI ranks in ascending order by the input keys.

to_mesh(self[, Nmesh, BoxSize, dtype, …])

Convert the CatalogSource to a MeshSource, using the specified parameters.

to_subvolumes(self[, domain, position, columns])

Domain-decompose a catalog, sending items to the ranks according to the supplied domain object.

view(self[, type])

Return a “view” of the CatalogSource object, with the returned type set by type.

create_instance

property Index

The attribute giving the global index rank of each particle in the list. It is an integer from 0 to self.csize.

Note that slicing changes this index value.

Selection(self)

A boolean column that selects a subset slice of the CatalogSource.

By default, this column is set to True for all particles, and all CatalogSource objects will contain this column.

Value(self)

When interpolating a CatalogSource on to a mesh, the value of this array is used as the Value that each particle contributes to a given mesh cell.

The mesh field is a weighted average of Value, with the weights given by Weight.

By default, this array is set to unity for all particles, and all CatalogSource objects will contain this column.

Weight(self)

The column giving the weight to use for each particle on the mesh.

The mesh field is a weighted average of Value, with the weights given by Weight.

By default, this array is set to unity for all particles, and all CatalogSource objects will contain this column.

__delitem__(self, col)

Delete a column; cannot delete a “hard-coded” column.

Note

If the base attribute is set, columns will be deleted from base instead of from self.

__finalize__(self, other)

Finalize the creation of a CatalogSource object by copying over any additional attributes from a second CatalogSource.

The idea here is to only copy over attributes that are similar to meta-data, so we do not copy some of the core attributes of the CatalogSource object.

Parameters

other – the second object to copy over attributes from; it must be a subclass of CatalogSourceBase for attributes to be copied

Returns

return self, with the added attributes

Return type

CatalogSource

__getitem__(self, sel)

The following types of indexing are supported:

  1. strings specifying a column in the CatalogSource; returns a dask array holding the column data

  2. boolean arrays specifying a slice of the CatalogSource; returns a CatalogSource holding only the relevant slice

  3. slice object specifying which particles to select

  4. list of strings specifying column names; returns a CatalogSource holding only the selected columns

Notes

  • Slicing is a collective operation

  • If the base attribute is set, columns will be returned from base instead of from self.
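
For illustration, the four indexing modes might look like the following sketch (cat is any CatalogSource; the Mass column is an assumption, e.g. the one added in the class-level example above):

    pos = cat['Position']              # 1. column name -> dask array
    heavy = cat[cat['Mass'] > 5e12]    # 2. boolean array -> sliced catalog
    head = cat[:10]                    # 3. slice -> selected particles
    sub = cat[['Position', 'Mass']]    # 4. list of names -> selected columns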

__len__(self)

The local size of the CatalogSource on a given rank.

__setitem__(self, col, value)

Add columns to the CatalogSource, overriding any existing columns with the name col.

property attrs

A dictionary storing relevant meta-data about the CatalogSource.

property columns

All columns in the CatalogSource, including those hard-coded into the class’s definition and override columns provided by the user.

Note

If the base attribute is set, the value of base.columns will be returned.

compute(self, *args, **kwargs)

Our version of dask.compute() that computes multiple delayed dask collections at once.

This should be called on the return value of read() to convert any dask arrays to numpy arrays.

Note

If the base attribute is set, compute() will be called using base instead of self.

Parameters

args (object) – Any number of objects. If the object is a dask collection, it’s computed and the result is returned. Otherwise it’s passed through unchanged.
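
A short sketch of the lazy-evaluation pattern (assuming the Mass column added in the class-level example):

    mass_lazy = cat['Mass']          # a dask array; nothing is evaluated yet
    mass = cat.compute(mass_lazy)    # a numpy array holding the local values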

copy(self)

Return a shallow copy of the object, where each column is a reference of the corresponding column in self.

Note

No copy of data is made.

Note

This is different from view in that the attributes dictionary of the copy is no longer related to self.

Returns

a new CatalogSource that holds all of the data columns of self

Return type

CatalogSource

property csize

The total, collective size of the CatalogSource, i.e., summed across all ranks.

It is the sum of size across all available ranks.

If the base attribute is set, the base.csize attribute will be returned.

get_hardcolumn(self, col)

Construct and return a hard-coded column.

These are usually produced by calling member functions marked by the @column decorator.

Subclasses may override this method and the hardcolumns attribute to bypass the decorator logic.

Note

If the base attribute is set, get_hardcolumn() will be called using base instead of self.

gslice(self, start, stop, end=1, redistribute=True)

Execute a global slice of a CatalogSource.

Note

After the global slice is performed, the data is scattered evenly across all ranks.

Note

The current algorithm generates an index on the root rank and does not scale well.

Parameters
  • start (int) – the start index of the global slice

  • stop (int) – the stop index of the global slice

  • step (int, optional) – the step size of the global slice (default 1)

  • redistribute (bool, optional) – if True, evenly re-distribute the sliced data across all ranks, otherwise just return any local data part of the global slice
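
For example, a sketch of keeping the first 100 objects of the global catalog:

    # a global slice; the result is re-balanced evenly across the ranks
    head = cat.gslice(0, 100)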

property hardcolumns

A list of the hard-coded columns in the CatalogSource.

These columns are usually member functions marked by the @column decorator. Subclasses may override this method and use get_hardcolumn() to bypass the decorator logic.

Note

If the base attribute is set, the value of base.hardcolumns will be returned.

static make_column(array)

Utility function to convert an array-like object to a dask.array.Array.

Note

The dask array chunk size is controlled via the dask_chunk_size global option. See set_options.

Parameters

array (array_like) – an array-like object; can be a dask array, numpy array, ColumnAccessor, or other non-scalar array-like object

Returns

a dask array initialized from array

Return type

dask.array.Array
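
A small sketch (the column name Ones is illustrative):

    import numpy

    # convert a local numpy array into a dask array with the configured
    # chunk size; assigning to a column name attaches it to the catalog
    cat['Ones'] = cat.make_column(numpy.ones(cat.size))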

persist(self, columns=None)

Return a CatalogSource, where the selected columns are computed and persist in memory.

read(self, columns)

Return the requested columns as dask arrays.

Parameters

columns (list of str) – the names of the requested columns

Returns

the list of column data, in the form of dask arrays

Return type

list of dask.array.Array
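
For example (Selection always exists; Mass is assumed from the earlier example):

    mass, sel = cat.read(['Mass', 'Selection'])   # dask arrays
    mass = cat.compute(mass)                      # evaluate to numpy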

property rng

A MPIRandomState that behaves as numpy.random.RandomState but generates random numbers in a manner independent of the number of ranks.

save(self, output, columns=None, dataset=None, datasets=None, header='Header', compute=True)

Save the CatalogSource to a bigfile.BigFile.

Only the selected columns are saved; attrs are saved in header. The attrs of each column are stored in the corresponding dataset.

Parameters
  • output (str) – the name of the file to write to

  • columns (list of str) – the names of the columns to save in the file, or None to use all columns

  • dataset (str, optional) – dataset to store the columns under.

  • datasets (list of str, optional) – names for the data set where each column is stored; defaults to the name of the column (deprecated)

  • header (str or None, optional) – the name of the data set holding the header information, where attrs is stored; if header is None, the header is not saved.

  • compute (boolean, default True) – if True, wait until the store operations finish; if False, return a dictionary mapping column names to future objects for the store. Use dask.compute() to wait for the store operations on the result.
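
A short sketch (the output path is illustrative):

    # write the Mass column, plus attrs meta-data, to a bigfile on disk
    cat.save('catalog.bigfile', columns=['Mass'])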

property size

The number of objects in the CatalogSource on the local rank.

If the base attribute is set, the base.size attribute will be returned.

Important

This property must be defined for all subclasses.

sort(self, keys, reverse=False, usecols=None)

Return a CatalogSource, sorted globally across all MPI ranks in ascending order by the input keys.

Sort columns must be floating or integer type.

Note

After the sort operation, the data is scattered evenly across all ranks.

Parameters
  • keys (list, tuple) – the names of columns to sort by. If multiple columns are provided, the data is sorted consecutively in the order provided

  • reverse (bool, optional) – if True, perform descending sort operations

  • usecols (list, optional) – the name of the columns to include in the returned CatalogSource
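
For example, a sketch assuming a Mass column:

    # sort globally by Mass in descending order, keeping only that column
    sorted_cat = cat.sort(['Mass'], reverse=True, usecols=['Mass'])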

to_mesh(self, Nmesh=None, BoxSize=None, dtype='f4', interlaced=False, compensated=False, resampler='cic', weight='Weight', value='Value', selection='Selection', position='Position', window=None)

Convert the CatalogSource to a MeshSource, using the specified parameters.

Parameters
  • Nmesh (int, optional) – the number of cells per side on the mesh; must be provided if not stored in attrs

  • BoxSize (scalar, 3-vector, optional) – the size of the box; must be provided if not stored in attrs

  • dtype (string, optional) – the data type of the mesh array

  • interlaced (bool, optional) – use the interlacing technique of Sefusatti et al. 2015 to reduce the effects of aliasing on Fourier space quantities computed from the mesh

  • compensated (bool, optional) – whether to correct for the resampler window introduced by the grid interpolation scheme

  • resampler (str, optional) – the string specifying which resampler interpolation scheme to use; see pmesh.resampler.methods

  • weight (str, optional) – the name of the column specifying the weight for each particle

  • value (str, optional) – the name of the column specifying the field value for each particle

  • selection (str, optional) – the name of the column that specifies which (if any) slice of the CatalogSource to take

  • position (str, optional) – the name of the column that specifies the position data of the objects in the catalog

  • window (str, deprecated) – use resampler instead.

Returns

mesh – a mesh object that provides an interface for gridding particle data onto a specified mesh

Return type

CatalogMesh
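
A hedged sketch (assumes the catalog carries a Position column, as UniformCatalog below does; BoxSize must be passed if it is not already stored in attrs):

    # paint onto a 256^3 mesh with TSC interpolation, correcting for the
    # resampler window and reducing aliasing via interlacing
    mesh = cat.to_mesh(Nmesh=256, BoxSize=512., resampler='tsc',
                       compensated=True, interlaced=True)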

to_subvolumes(self, domain=None, position='Position', columns=None)

Domain-decompose a catalog, sending items to the ranks according to the supplied domain object. The position column is used as the position of each object.

This will read in the full position array and all of the requested columns.

Parameters
  • domain (pmesh.domain.GridND object, or None) – the domain over which to distribute the catalog. If None, try to divide the volume evenly across ranks. The easiest way to obtain a domain object is to use pm.domain, where pm is a pmesh.pm.ParticleMesh object.

  • position (string_like) – the column to use as the position.

  • columns (list of string_like) – the columns to include in the new catalog; if not supplied, all columns will be exchanged.

Returns

A decomposed catalog source, where each rank contains only the objects belonging to that rank, as claimed by the domain object.

self.attrs are carried over as a shallow copy to the returned object.

Return type

CatalogSource

view(self, type=None)

Return a “view” of the CatalogSource object, with the returned type set by type.

This initializes a new empty class of type type and attaches attributes to it via the __finalize__() mechanism.

Parameters

type (Python type) – the desired class type of the returned object.

class nbodykit.source.catalog.uniform.UniformCatalog(nbar, BoxSize, seed=None, dtype='f8', comm=None)[source]

A CatalogSource that has uniformly-distributed Position and Velocity columns.

The random numbers generated do not depend on the number of available ranks.

Parameters
  • nbar (float) – the desired number density of particles in the box

  • BoxSize (float, 3-vector) – the size of the box

  • seed (int, optional) – the random seed

  • comm – the MPI communicator
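
A minimal construction sketch (the parameter values are illustrative):

    from nbodykit.source.catalog.uniform import UniformCatalog

    # expect roughly nbar * BoxSize**3 objects in total
    cat = UniformCatalog(nbar=1e-4, BoxSize=512., seed=42)
    print(cat.csize)      # the collective size, about 13,000 here
    print(cat.columns)    # includes Position, Velocity, Weight, ...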

Attributes
Index

The attribute giving the global index rank of each particle in the list.

attrs

A dictionary storing relevant meta-data about the CatalogSource.

columns

All columns in the CatalogSource, including those hard-coded into the class’s definition and override columns provided by the user.

csize

The total, collective size of the CatalogSource, i.e., summed across all ranks.

hardcolumns

A list of the hard-coded columns in the CatalogSource.

rng

A MPIRandomState that behaves as numpy.random.RandomState but generates random numbers in a manner independent of the number of ranks.

size

The number of objects in the CatalogSource on the local rank.

Methods

Position(self)

The position of particles, uniformly distributed in BoxSize

Selection(self)

A boolean column that selects a subset slice of the CatalogSource.

Value(self)

When interpolating a CatalogSource on to a mesh, the value of this array is used as the Value that each particle contributes to a given mesh cell.

Velocity(self)

The velocity of particles, uniformly distributed in 0.01 x BoxSize

Weight(self)

The column giving the weight to use for each particle on the mesh.

compute(self, *args, **kwargs)

Our version of dask.compute() that computes multiple delayed dask collections at once.

copy(self)

Return a shallow copy of the object, where each column is a reference of the corresponding column in self.

get_hardcolumn(self, col)

Construct and return a hard-coded column.

gslice(self, start, stop[, end, redistribute])

Execute a global slice of a CatalogSource.

make_column(array)

Utility function to convert an array-like object to a dask.array.Array.

persist(self[, columns])

Return a CatalogSource, where the selected columns are computed and persist in memory.

read(self, columns)

Return the requested columns as dask arrays.

save(self, output[, columns, dataset, …])

Save the CatalogSource to a bigfile.BigFile.

sort(self, keys[, reverse, usecols])

Return a CatalogSource, sorted globally across all MPI ranks in ascending order by the input keys.

to_mesh(self[, Nmesh, BoxSize, dtype, …])

Convert the CatalogSource to a MeshSource, using the specified parameters.

to_subvolumes(self[, domain, position, columns])

Domain-decompose a catalog, sending items to the ranks according to the supplied domain object.

view(self[, type])

Return a “view” of the CatalogSource object, with the returned type set by type.

create_instance

property Index

The attribute giving the global index rank of each particle in the list. It is an integer from 0 to self.csize.

Note that slicing changes this index value.

Position(self)[source]

The position of particles, uniformly distributed in BoxSize

Selection(self)

A boolean column that selects a subset slice of the CatalogSource.

By default, this column is set to True for all particles, and all CatalogSource objects will contain this column.

Value(self)

When interpolating a CatalogSource on to a mesh, the value of this array is used as the Value that each particle contributes to a given mesh cell.

The mesh field is a weighted average of Value, with the weights given by Weight.

By default, this array is set to unity for all particles, and all CatalogSource objects will contain this column.

Velocity(self)[source]

The velocity of particles, uniformly distributed in 0.01 x BoxSize
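
Both columns can be evaluated together, e.g. (a sketch using the catalog constructed in the class-level example above):

    # Position is uniform in [0, BoxSize); Velocity in [0, 0.01 * BoxSize)
    pos, vel = cat.compute(cat['Position'], cat['Velocity'])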

Weight(self)

The column giving the weight to use for each particle on the mesh.

The mesh field is a weighted average of Value, with the weights given by Weight.

By default, this array is set to unity for all particles, and all CatalogSource objects will contain this column.

__delitem__(self, col)

Delete a column; cannot delete a “hard-coded” column.

Note

If the base attribute is set, columns will be deleted from base instead of from self.

__finalize__(self, other)

Finalize the creation of a CatalogSource object by copying over any additional attributes from a second CatalogSource.

The idea here is to only copy over attributes that are similar to meta-data, so we do not copy some of the core attributes of the CatalogSource object.

Parameters

other – the second object to copy over attributes from; it must be a subclass of CatalogSourceBase for attributes to be copied

Returns

return self, with the added attributes

Return type

CatalogSource

__getitem__(self, sel)

The following types of indexing are supported:

  1. strings specifying a column in the CatalogSource; returns a dask array holding the column data

  2. boolean arrays specifying a slice of the CatalogSource; returns a CatalogSource holding only the relevant slice

  3. slice object specifying which particles to select

  4. list of strings specifying column names; returns a CatalogSource holding only the selected columns

Notes

  • Slicing is a collective operation

  • If the base attribute is set, columns will be returned from base instead of from self.

__len__(self)

The local size of the CatalogSource on a given rank.

__setitem__(self, col, value)

Add columns to the CatalogSource, overriding any existing columns with the name col.

property attrs

A dictionary storing relevant meta-data about the CatalogSource.

property columns

All columns in the CatalogSource, including those hard-coded into the class’s definition and override columns provided by the user.

Note

If the base attribute is set, the value of base.columns will be returned.

compute(self, *args, **kwargs)

Our version of dask.compute() that computes multiple delayed dask collections at once.

This should be called on the return value of read() to convert any dask arrays to numpy arrays.

Note

If the base attribute is set, compute() will be called using base instead of self.

Parameters

args (object) – Any number of objects. If the object is a dask collection, it’s computed and the result is returned. Otherwise it’s passed through unchanged.

copy(self)

Return a shallow copy of the object, where each column is a reference of the corresponding column in self.

Note

No copy of data is made.

Note

This is different from view in that the attributes dictionary of the copy is no longer related to self.

Returns

a new CatalogSource that holds all of the data columns of self

Return type

CatalogSource

property csize

The total, collective size of the CatalogSource, i.e., summed across all ranks.

It is the sum of size across all available ranks.

If the base attribute is set, the base.csize attribute will be returned.

get_hardcolumn(self, col)

Construct and return a hard-coded column.

These are usually produced by calling member functions marked by the @column decorator.

Subclasses may override this method and the hardcolumns attribute to bypass the decorator logic.

Note

If the base attribute is set, get_hardcolumn() will be called using base instead of self.

gslice(self, start, stop, end=1, redistribute=True)

Execute a global slice of a CatalogSource.

Note

After the global slice is performed, the data is scattered evenly across all ranks.

Note

The current algorithm generates an index on the root rank and does not scale well.

Parameters
  • start (int) – the start index of the global slice

  • stop (int) – the stop index of the global slice

  • step (int, optional) – the step size of the global slice (default 1)

  • redistribute (bool, optional) – if True, evenly re-distribute the sliced data across all ranks, otherwise just return any local data part of the global slice

property hardcolumns

A list of the hard-coded columns in the CatalogSource.

These columns are usually member functions marked by the @column decorator. Subclasses may override this method and use get_hardcolumn() to bypass the decorator logic.

Note

If the base attribute is set, the value of base.hardcolumns will be returned.

static make_column(array)

Utility function to convert an array-like object to a dask.array.Array.

Note

The dask array chunk size is controlled via the dask_chunk_size global option. See set_options.

Parameters

array (array_like) – an array-like object; can be a dask array, numpy array, ColumnAccessor, or other non-scalar array-like object

Returns

a dask array initialized from array

Return type

dask.array.Array

persist(self, columns=None)

Return a CatalogSource, where the selected columns are computed and persist in memory.

read(self, columns)

Return the requested columns as dask arrays.

Parameters

columns (list of str) – the names of the requested columns

Returns

the list of column data, in the form of dask arrays

Return type

list of dask.array.Array

property rng

A MPIRandomState that behaves as numpy.random.RandomState but generates random numbers in a manner independent of the number of ranks.

save(self, output, columns=None, dataset=None, datasets=None, header='Header', compute=True)

Save the CatalogSource to a bigfile.BigFile.

Only the selected columns are saved; attrs are saved in header. The attrs of each column are stored in the corresponding dataset.

Parameters
  • output (str) – the name of the file to write to

  • columns (list of str) – the names of the columns to save in the file, or None to use all columns

  • dataset (str, optional) – dataset to store the columns under.

  • datasets (list of str, optional) – names for the data set where each column is stored; defaults to the name of the column (deprecated)

  • header (str or None, optional) – the name of the data set holding the header information, where attrs is stored; if header is None, the header is not saved.

  • compute (boolean, default True) – if True, wait until the store operations finish; if False, return a dictionary mapping column names to future objects for the store. Use dask.compute() to wait for the store operations on the result.

property size

The number of objects in the CatalogSource on the local rank.

If the base attribute is set, the base.size attribute will be returned.

Important

This property must be defined for all subclasses.

sort(self, keys, reverse=False, usecols=None)

Return a CatalogSource, sorted globally across all MPI ranks in ascending order by the input keys.

Sort columns must be floating or integer type.

Note

After the sort operation, the data is scattered evenly across all ranks.

Parameters
  • keys (list, tuple) – the names of columns to sort by. If multiple columns are provided, the data is sorted consecutively in the order provided

  • reverse (bool, optional) – if True, perform descending sort operations

  • usecols (list, optional) – the name of the columns to include in the returned CatalogSource

to_mesh(self, Nmesh=None, BoxSize=None, dtype='f4', interlaced=False, compensated=False, resampler='cic', weight='Weight', value='Value', selection='Selection', position='Position', window=None)

Convert the CatalogSource to a MeshSource, using the specified parameters.

Parameters
  • Nmesh (int, optional) – the number of cells per side on the mesh; must be provided if not stored in attrs

  • BoxSize (scalar, 3-vector, optional) – the size of the box; must be provided if not stored in attrs

  • dtype (string, optional) – the data type of the mesh array

  • interlaced (bool, optional) – use the interlacing technique of Sefusatti et al. 2015 to reduce the effects of aliasing on Fourier space quantities computed from the mesh

  • compensated (bool, optional) – whether to correct for the resampler window introduced by the grid interpolation scheme

  • resampler (str, optional) – the string specifying which resampler interpolation scheme to use; see pmesh.resampler.methods

  • weight (str, optional) – the name of the column specifying the weight for each particle

  • value (str, optional) – the name of the column specifying the field value for each particle

  • selection (str, optional) – the name of the column that specifies which (if any) slice of the CatalogSource to take

  • position (str, optional) – the name of the column that specifies the position data of the objects in the catalog

  • window (str, deprecated) – use resampler instead.

Returns

mesh – a mesh object that provides an interface for gridding particle data onto a specified mesh

Return type

CatalogMesh

to_subvolumes(self, domain=None, position='Position', columns=None)

Domain-decompose a catalog, sending items to the ranks according to the supplied domain object. The position column is used as the position of each object.

This will read in the full position array and all of the requested columns.

Parameters
  • domain (pmesh.domain.GridND object, or None) – the domain over which to distribute the catalog. If None, try to divide the volume evenly across ranks. The easiest way to obtain a domain object is to use pm.domain, where pm is a pmesh.pm.ParticleMesh object.

  • position (string_like) – the column to use as the position.

  • columns (list of string_like) – the columns to include in the new catalog; if not supplied, all columns will be exchanged.

Returns

A decomposed catalog source, where each rank contains only the objects belonging to that rank, as claimed by the domain object.

self.attrs are carried over as a shallow copy to the returned object.

Return type

CatalogSource

view(self, type=None)

Return a “view” of the CatalogSource object, with the returned type set by type.

This initializes a new empty class of type type and attaches attributes to it via the __finalize__() mechanism.

Parameters

type (Python type) – the desired class type of the returned object.