nbodykit.io.tpm¶
Classes

TPMBinaryFile(path[, precision])
Read snapshot binary files from Martin White's TPM simulations.
- class nbodykit.io.tpm.TPMBinaryFile(path, precision='f4')[source]¶
Read snapshot binary files from Martin White’s TPM simulations.
These files are stored column-wise, with a 28-byte header at the beginning of the file.
The columns are:
- Position: 'f4' or 'f8' precision
the position data
- Velocity: 'f4' or 'f8' precision
the velocity data
- ID: 'u8' precision
integers specifying the particle ID
- Parameters
path (str) – the path to the binary file to load
precision ({'f4', 'f8'}, optional) – the string dtype specifying the precision
References
White M., 2002, ApJS, 579, 16
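The header fields belong to TPM itself and are not documented here; the following numpy sketch only illustrates the column-wise layout described above (a 28-byte header followed by the Position, Velocity, and ID columns stored contiguously). The particle count and file name are hypothetical:

```python
import numpy as np

# Hypothetical particle count, for illustration only
npart = 4

pos = np.arange(npart * 3, dtype='f4').reshape(npart, 3)  # Position, 'f4'
vel = np.ones((npart, 3), dtype='f4')                     # Velocity, 'f4'
pid = np.arange(npart, dtype='u8')                        # ID, 'u8'

with open('snapshot.bin', 'wb') as f:
    f.write(bytes(28))        # 28-byte header; its fields are not documented here
    f.write(pos.tobytes())    # each column is stored contiguously,
    f.write(vel.tobytes())    # one after another, rather than
    f.write(pid.tobytes())    # interleaved per particle

# Read the Position column back: skip the header, then npart*3 'f4' values
with open('snapshot.bin', 'rb') as f:
    f.seek(28)
    pos2 = np.fromfile(f, dtype='f4', count=npart * 3).reshape(npart, 3)

assert np.array_equal(pos, pos2)
```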
- Attributes
columns, dtype, ncol, shape, size
Methods
asarray()
Return a view of the file, where the fields of the structured array are stacked in columns of a single numpy array.
get_dask(column[, blocksize])
Return the specified column as a dask array, which delays the explicit reading of the data until dask.compute() is called.
keys()
Aliased function to return columns.
read(columns, start, stop[, step])
Read the specified column(s) over the given range.
- __getitem__(s)¶
This function provides numpy-like array indexing of the file object.
It supports:
- integer and slice indexing, similar to arrays
- string indexing, using column names in keys()
- array-like indexing, using integer lists or boolean arrays
Note
If a single column is being returned, a numpy array holding the data is returned, rather than a structured array with only a single field.
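Because the semantics are numpy-like, the behaviors listed above can be previewed on an ordinary numpy structured array standing in for the file object (the array and its values are illustrative, not part of the API):

```python
import numpy as np

# Stand-in for the file object: a structured array with the documented columns
data = np.zeros(5, dtype=[('Position', 'f4', (3,)),
                          ('Velocity', 'f4', (3,)),
                          ('ID', 'u8')])
data['ID'] = np.arange(5)

# integer and slice indexing, as with any array
row = data[0]
sub = data[1:3]
assert sub.shape == (2,)

# string indexing by column name: a single column comes back as a plain
# ndarray rather than a one-field structured array
ids = data['ID']
assert ids.dtype == np.dtype('u8') and ids.shape == (5,)

# array-like indexing with an integer list or a boolean mask
picked = data[[0, 2, 4]]
evens = data[data['ID'] % 2 == 0]
assert len(picked) == len(evens) == 3
```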
- asarray()¶
Return a view of the file, where the fields of the structured array are stacked in columns of a single numpy array
Examples
Start with a file object with three named columns, ra, dec, and z:
>>> ff.dtype
dtype([('ra', '<f4'), ('dec', '<f4'), ('z', '<f4')])
>>> ff.shape
(1000,)
>>> ff.columns
['ra', 'dec', 'z']
>>> ff[:3]
array([(235.63442993164062, 59.39099884033203, 0.6225500106811523),
       (140.36181640625, -1.162310004234314, 0.5026500225067139),
       (129.96627807617188, 45.970130920410156, 0.4990200102329254)],
      dtype=(numpy.record, [('ra', '<f4'), ('dec', '<f4'), ('z', '<f4')]))
Select a subset of columns, switch their ordering, and convert the output to a single numpy array:
>>> x = ff[['dec', 'ra']].asarray()
>>> x.dtype
dtype('float32')
>>> x.shape
(1000, 2)
>>> x.columns
['dec', 'ra']
>>> x[:3]
array([[ 59.39099884, 235.63442993],
       [ -1.16231   , 140.36181641],
       [ 45.97013092, 129.96627808]], dtype=float32)
Now, select only the first column (dec):
>>> dec = x[:,0]
>>> dec[:3]
array([ 59.39099884,  -1.16231   ,  45.97013092], dtype=float32)
- Returns
a file object that will return a numpy array with the columns representing the fields
- Return type
- property columns¶
A list of the names of the columns in the file.
This defaults to the named fields in the file's dtype attribute, but may differ if a view of the file has been returned with asarray().
- property dtype¶
A numpy.dtype object holding the data types of each column in the file.
- get_dask(column, blocksize=None)¶
Return the specified column as a dask array, which delays the explicit reading of the data until dask.compute() is called.
The dask array is chunked into blocks of size blocksize.
- Parameters
- Returns
the dask array holding the column, which encapsulates the functions needed to read the data but delays evaluation until the user explicitly requests it
- Return type
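The chunking implied by blocksize can be sketched without dask at all. The sketch below (plain numpy, illustrative names) shows the row ranges a chunked column would be split into; with the real dask array, each block would only be read when dask.compute() is eventually called:

```python
import numpy as np

def block_ranges(size, blocksize):
    """Yield (start, stop) row ranges covering `size` rows in
    chunks of at most `blocksize` rows each."""
    for start in range(0, size, blocksize):
        yield start, min(start + blocksize, size)

# A 10-row column split into blocks of 4: the last block is smaller
column = np.arange(10)
blocks = [column[start:stop] for start, stop in block_ranges(10, 4)]

assert [len(b) for b in blocks] == [4, 4, 2]
assert np.array_equal(np.concatenate(blocks), column)
```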
- property ncol¶
The number of data columns in the file.
- read(columns, start, stop, step=1)¶
Read the specified column(s) over the given range.
'start' and 'stop' should be between 0 and size, which is the total size of the binary file (in particles).
- Parameters
- Returns
structured array holding the requested columns over the specified range of rows
- Return type
numpy.array
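A ranged read of a single column can be sketched directly from the documented layout (28-byte header, columns stored contiguously): seek past the header plus start rows, read stop - start rows, then apply step. The file name, helper name, and particle count here are hypothetical:

```python
import numpy as np

npart = 8
pos = np.arange(npart * 3, dtype='f4').reshape(npart, 3)

# Write a minimal file: 28-byte header, then the Position column ('f4')
with open('tpm_demo.bin', 'wb') as f:
    f.write(bytes(28))
    f.write(pos.tobytes())

def read_position(path, start, stop, step=1):
    """Read rows [start, stop) of the Position column, then apply `step`."""
    rowsize = np.dtype('f4').itemsize * 3        # one row = 3 'f4' values
    with open(path, 'rb') as f:
        f.seek(28 + start * rowsize)             # skip header + `start` rows
        out = np.fromfile(f, dtype='f4', count=(stop - start) * 3)
    return out.reshape(-1, 3)[::step]

rows = read_position('tpm_demo.bin', 2, 6)
assert np.array_equal(rows, pos[2:6])
```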
- property shape¶
The shape of the file, which defaults to (size, ).
Multiple dimensions can be introduced into the shape if a view of the file has been returned with asarray().
- property size¶
The size of the file, i.e., the number of rows.