nbodykit.source.catalog.hod.HODBase(halos, seed=None, use_cache=False, comm=None, **params)[source]
Bases: nbodykit.source.catalog.array.ArrayCatalog
A base class to be used for HOD population of a halo catalog.
The user must supply the __makemodel__() function, which returns the halotools composite HOD model. This abstraction allows the user to implement several different types of HOD models quickly, while using the population framework of this base class.
Attributes
attrs | A dictionary storing relevant meta-data about the CatalogSource.
columns | All columns in the CatalogSource, including those hard-coded into the class’s definition and any override columns provided by the user.
csize | The total, collective size of the CatalogSource, i.e., summed across all ranks.
hardcolumns | The union of the columns in the file and any transformed columns.
size | The local size of the CatalogSource on a given rank.
use_cache | If set to True, use the built-in caching features of dask to cache data in memory.
Methods
Position() | Galaxy positions, in units of Mpc/h
Selection() | A boolean column that selects a subset slice of the CatalogSource.
Value() | When interpolating a CatalogSource onto a mesh, the value of this array is used as the Value that each particle contributes to a given mesh cell.
Velocity() | Galaxy velocity, in units of km/s
VelocityOffset() | The RSD velocity offset, in units of Mpc/h
Weight() | The column giving the weight to use for each particle on the mesh.
compute(*args, **kwargs) | Our version of dask.compute() that computes multiple delayed dask collections at once.
copy() | Return a copy of the CatalogSource object
get_hardcolumn(col) | Return a column from the underlying data array/dict.
make_column(array) | Utility function to convert a numpy array to a dask.array.Array.
read(columns) | Return the requested columns as dask arrays.
repopulate([seed]) | Update the HOD parameters and then re-populate the mock catalog
save(output, columns[, datasets, header]) | Save the CatalogSource to a bigfile.BigFile.
to_mesh([Nmesh, BoxSize, dtype, interlaced, …]) | Convert the CatalogSource to a MeshSource, using the specified parameters.
update_csize() | Set the collective size, csize.
Selection()
A boolean column that selects a subset slice of the CatalogSource. By default, this column is set to True for all particles.
Value()
When interpolating a CatalogSource onto a mesh, the value of this array is used as the Value that each particle contributes to a given mesh cell. The mesh field is a weighted average of Value, with the weights given by Weight. By default, this array is set to unity for all particles.
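To make the Value/Weight combination concrete, here is a minimal pure-Python sketch (not nbodykit code) of how the value of a single mesh cell is formed from the particles it contains:

```python
# Minimal sketch (not nbodykit code): each mesh cell holds the weighted
# average of the particles' Value column, with weights from Weight.

def cell_value(values, weights):
    """Weighted average of `values` with `weights` for one mesh cell."""
    wsum = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / wsum

# With the defaults (Value = 1 and Weight = 1 for all particles), every
# cell reduces to 1, i.e. the mesh is a plain number-density field.
print(cell_value([1.0, 1.0, 1.0], [1.0, 1.0, 1.0]))  # -> 1.0
```

This is why overriding Value (e.g. with a velocity component) turns the painted field into a momentum-like field, while Weight re-weights each particle's contribution.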
VelocityOffset()[source]
The RSD velocity offset, in units of Mpc/h. This multiplies Velocity by 1 / (a*100*E(z)) = 1 / (a*H(z)/h).
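The conversion factor can be evaluated directly. A minimal sketch, assuming a flat ΛCDM form for E(z) and an illustrative Om0 value (in nbodykit the factor is derived from the cosmology stored in attrs):

```python
import math

def rsd_factor(z, Om0=0.3):
    """1 / (a * 100 * E(z)): converts a velocity in km/s to an RSD
    offset in Mpc/h, assuming flat LCDM E(z) = sqrt(Om0*(1+z)^3 + (1-Om0)).
    Om0 = 0.3 is an illustrative value, not a nbodykit default."""
    a = 1.0 / (1.0 + z)
    E = math.sqrt(Om0 * (1.0 + z) ** 3 + (1.0 - Om0))
    return 1.0 / (a * 100.0 * E)

# At z = 0: a = 1 and E = 1, so the factor is approximately 0.01,
# i.e. VelocityOffset is Velocity (km/s) divided by 100 km/s/(Mpc/h).
print(rsd_factor(0.0))
```

Multiplying the Velocity column by this factor gives the line-of-sight displacement used for redshift-space distortions.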
Weight()
The column giving the weight to use for each particle on the mesh. The mesh field is a weighted average of Value, with the weights given by Weight. By default, this array is set to unity for all particles.
__delitem__(col)
Delete a column; a “hard-coded” column cannot be deleted.
__getitem__(sel)
Index the CatalogSource; several types of indexing are supported.
__len__()
The local size of the CatalogSource on a given rank.
__makemodel__()[source]
Abstract method to be overridden by the user; this should return the HOD model instance that will be used to do the mock population. See the documentation for more details.
Returns: the halotools object implementing the HOD model
Return type: HodModelFactory
__makesource__()[source]
Make the source of galaxies by performing the halo HOD population.
Note: The mock population is done only by the root rank, and the resulting catalog is then distributed evenly amongst the available ranks.
__setitem__(col, value)
Add columns to the CatalogSource, overriding any existing columns with the name col.
attrs
A dictionary storing relevant meta-data about the CatalogSource.
columns
All columns in the CatalogSource, including those hard-coded into the class’s definition and any override columns provided by the user.
compute(*args, **kwargs)
Our version of dask.compute() that computes multiple delayed dask collections at once. This should be called on the return value of read() to convert any dask arrays to numpy arrays. If use_cache is True, this internally caches data, using dask’s built-in cache features.
Parameters: args (object) – Any number of objects. If the object is a dask collection, it’s computed and the result is returned. Otherwise it’s passed through unchanged.
Notes
The default dask optimizer induces too many (unnecessary) IO calls, so we turn this feature off by default. Eventually we will probably want our own optimizer.
copy()
Return a copy of the CatalogSource object.
Returns: the new CatalogSource object holding the copied data columns
Return type: CatalogCopy
csize
The total, collective size of the CatalogSource, i.e., summed across all ranks. It is the sum of size across all available ranks.
get_hardcolumn(col)
Return a column from the underlying data array/dict. Columns are returned as dask arrays.
hardcolumns
The union of the columns in the file and any transformed columns.
logger = <logging.Logger object>
make_column(array)
Utility function to convert a numpy array to a dask.array.Array.
read(columns)
Return the requested columns as dask arrays.
Parameters: columns (list of str) – the names of the requested columns
Returns: the list of column data, in the form of dask arrays
Return type: list of dask.array.Array
repopulate(seed=None, **params)[source]
Update the HOD parameters and then re-populate the mock catalog.
Warning: This operation is done in-place, so the size of the Source changes.
save(output, columns, datasets=None, header='Header')
Save the CatalogSource to a bigfile.BigFile. Only the selected columns are saved, and attrs are saved in header. The attrs of columns are stored in the datasets.
size
The local size of the CatalogSource on a given rank.
to_mesh(Nmesh=None, BoxSize=None, dtype='f4', interlaced=False, compensated=False, window='cic', weight='Weight', value='Value', selection='Selection', position='Position')
Convert the CatalogSource to a MeshSource, using the specified parameters.
Returns: mesh – a mesh object that provides an interface for gridding particle data onto a specified mesh
Return type: MeshSource
update_csize()
Set the collective size, csize. This function should be called in __init__() of a subclass, after size has been set to a valid value (not NotImplemented).
use_cache
If set to True, use the built-in caching features of dask to cache data in memory.
nbodykit.source.catalog.hod.HODCatalog(halos, logMmin=13.031, sigma_logM=0.38, alpha=0.76, logM0=13.27, logM1=14.08, seed=None, use_cache=False, comm=None)[source]
Bases: nbodykit.source.catalog.hod.HODBase
A CatalogSource that uses the HOD prescription of Zheng et al. 2007 to populate an input halo catalog with galaxies.
The mock population is done using halotools. See the documentation for halotools.empirical_models.Zheng07Cens and halotools.empirical_models.Zheng07Sats for further details regarding the HOD.
The columns generated in this catalog are:
csize
size
conc_NFWmodel
Velocity
Position
For further details, please see the documentation.
Note: Default HOD values are from Reid et al. 2014.
References
Zheng et al. (2007), arXiv:astro-ph/0703457
Attributes
attrs | A dictionary storing relevant meta-data about the CatalogSource.
columns | All columns in the CatalogSource, including those hard-coded into the class’s definition and any override columns provided by the user.
csize | The total, collective size of the CatalogSource, i.e., summed across all ranks.
hardcolumns | The union of the columns in the file and any transformed columns.
size | The local size of the CatalogSource on a given rank.
use_cache | If set to True, use the built-in caching features of dask to cache data in memory.
Methods
Position() | Galaxy positions, in units of Mpc/h
Selection() | A boolean column that selects a subset slice of the CatalogSource.
Value() | When interpolating a CatalogSource onto a mesh, the value of this array is used as the Value that each particle contributes to a given mesh cell.
Velocity() | Galaxy velocity, in units of km/s
VelocityOffset() | The RSD velocity offset, in units of Mpc/h
Weight() | The column giving the weight to use for each particle on the mesh.
compute(*args, **kwargs) | Our version of dask.compute() that computes multiple delayed dask collections at once.
copy() | Return a copy of the CatalogSource object
get_hardcolumn(col) | Return a column from the underlying data array/dict.
make_column(array) | Utility function to convert a numpy array to a dask.array.Array.
read(columns) | Return the requested columns as dask arrays.
repopulate([seed]) | Update the HOD parameters and then re-populate the mock catalog
save(output, columns[, datasets, header]) | Save the CatalogSource to a bigfile.BigFile.
to_mesh([Nmesh, BoxSize, dtype, interlaced, …]) | Convert the CatalogSource to a MeshSource, using the specified parameters.
update_csize() | Set the collective size, csize.
Position()
Galaxy positions, in units of Mpc/h
Selection()
A boolean column that selects a subset slice of the CatalogSource. By default, this column is set to True for all particles.
Value()
When interpolating a CatalogSource onto a mesh, the value of this array is used as the Value that each particle contributes to a given mesh cell. The mesh field is a weighted average of Value, with the weights given by Weight. By default, this array is set to unity for all particles.
Velocity()
Galaxy velocity, in units of km/s
VelocityOffset()
The RSD velocity offset, in units of Mpc/h. This multiplies Velocity by 1 / (a*100*E(z)) = 1 / (a*H(z)/h).
Weight()
The column giving the weight to use for each particle on the mesh. The mesh field is a weighted average of Value, with the weights given by Weight. By default, this array is set to unity for all particles.
__delitem__(col)
Delete a column; a “hard-coded” column cannot be deleted.
__getitem__(sel)
Index the CatalogSource; several types of indexing are supported.
__len__()
The local size of the CatalogSource on a given rank.
__makemodel__()[source]
Return the Zheng 07 HOD model. This model evaluates Eqs. 2 and 5 of Zheng et al. 2007.
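For reference, the two occupation functions that this model evaluates can be sketched in a few lines of pure Python, using the default parameter values from the HODCatalog signature (this is an illustrative sketch of Eqs. 2 and 5 of Zheng et al. 2007; the actual evaluation and sampling is done by halotools):

```python
import math

# Default HOD parameters (log10 of masses), as in the HODCatalog signature.
logMmin, sigma_logM, alpha, logM0, logM1 = 13.031, 0.38, 0.76, 13.27, 14.08

def N_cen(logM):
    """Eq. 2: mean central occupation, a smoothed step in log10(M)."""
    return 0.5 * (1.0 + math.erf((logM - logMmin) / sigma_logM))

def N_sat(logM):
    """Eq. 5: mean satellite occupation, a power law above the cutoff
    mass M0, modulated by the central occupation."""
    M, M0, M1 = 10.0 ** logM, 10.0 ** logM0, 10.0 ** logM1
    if M <= M0:
        return 0.0
    return N_cen(logM) * ((M - M0) / M1) ** alpha

# At M = Mmin, half of the halos host a central galaxy on average.
print(N_cen(logMmin))  # -> 0.5
```

In the full model, halotools draws the actual central count from a Bernoulli distribution with mean N_cen and the satellite count from a Poisson distribution with mean N_sat.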
__makesource__()
Make the source of galaxies by performing the halo HOD population.
Note: The mock population is done only by the root rank, and the resulting catalog is then distributed evenly amongst the available ranks.
__setitem__(col, value)
Add columns to the CatalogSource, overriding any existing columns with the name col.
attrs
A dictionary storing relevant meta-data about the CatalogSource.
columns
All columns in the CatalogSource, including those hard-coded into the class’s definition and any override columns provided by the user.
compute(*args, **kwargs)
Our version of dask.compute() that computes multiple delayed dask collections at once. This should be called on the return value of read() to convert any dask arrays to numpy arrays. If use_cache is True, this internally caches data, using dask’s built-in cache features.
Parameters: args (object) – Any number of objects. If the object is a dask collection, it’s computed and the result is returned. Otherwise it’s passed through unchanged.
Notes
The default dask optimizer induces too many (unnecessary) IO calls, so we turn this feature off by default. Eventually we will probably want our own optimizer.
copy()
Return a copy of the CatalogSource object.
Returns: the new CatalogSource object holding the copied data columns
Return type: CatalogCopy
csize
The total, collective size of the CatalogSource, i.e., summed across all ranks. It is the sum of size across all available ranks.
get_hardcolumn(col)
Return a column from the underlying data array/dict. Columns are returned as dask arrays.
hardcolumns
The union of the columns in the file and any transformed columns.
logger = <logging.Logger object>
make_column(array)
Utility function to convert a numpy array to a dask.array.Array.
read(columns)
Return the requested columns as dask arrays.
Parameters: columns (list of str) – the names of the requested columns
Returns: the list of column data, in the form of dask arrays
Return type: list of dask.array.Array
repopulate(seed=None, **params)
Update the HOD parameters and then re-populate the mock catalog.
Warning: This operation is done in-place, so the size of the Source changes.
save(output, columns, datasets=None, header='Header')
Save the CatalogSource to a bigfile.BigFile. Only the selected columns are saved, and attrs are saved in header. The attrs of columns are stored in the datasets.
size
The local size of the CatalogSource on a given rank.
to_mesh(Nmesh=None, BoxSize=None, dtype='f4', interlaced=False, compensated=False, window='cic', weight='Weight', value='Value', selection='Selection', position='Position')
Convert the CatalogSource to a MeshSource, using the specified parameters.
Returns: mesh – a mesh object that provides an interface for gridding particle data onto a specified mesh
Return type: MeshSource
update_csize()
Set the collective size, csize. This function should be called in __init__() of a subclass, after size has been set to a valid value (not NotImplemented).
use_cache
If set to True, use the built-in caching features of dask to cache data in memory.