rail.evaluation.evaluator module

Abstract base class defining an Evaluator

The key feature is the evaluate() method, which takes both the data being evaluated and the corresponding truth.

class rail.evaluation.evaluator.Evaluator(args, comm=None)[source]

Bases: RailStage

Evaluate the performance of a photo-z estimator against reference point estimates

config_options = {'_random_state': <ceci.config.StageParameter object>, 'chunk_size': <ceci.config.StageParameter object>, 'exclude_metrics': <ceci.config.StageParameter object>, 'force_exact': <ceci.config.StageParameter object>, 'metric_config': <ceci.config.StageParameter object>, 'metrics': <ceci.config.StageParameter object>, 'output_mode': <ceci.config.StageParameter object>}
evaluate(data, truth)[source]

Evaluate the performance of an estimator

This will attach the input data and truth to this Evaluator (for introspection and provenance tracking).

Then it will call the run() and finalize() methods, which need to be implemented by the sub-classes.

The run() method will need to register the data that it creates to this Evaluator by using self.add_data('output', output_data); a minimal sketch follows the parameter listing below.

Parameters:
  • data (qp.Ensemble) – The sample to evaluate

  • truth (Table-like) – Table with the truth information

Returns:

output – The evaluation metrics

Return type:

Table-like
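
A minimal sketch of the sub-class contract described above. The class name, the toy "mean bias" metric, the 'input'/'truth' tag names, and the 'redshift' truth column are illustrative assumptions, not part of this API; only the run()/self.add_data('output', ...) pattern reflects the documented requirement.

    # Sketch of an Evaluator sub-class; the metric is a stand-in, not a RAIL metric.
    import numpy as np

    from rail.evaluation.evaluator import Evaluator


    class PointBiasEvaluator(Evaluator):  # hypothetical name
        """Toy evaluator: mean bias of the PDF modes against the true redshifts."""

        name = "PointBiasEvaluator"

        def run(self):
            data = self.get_data("input")    # qp.Ensemble attached by evaluate() (assumed tag)
            truth = self.get_data("truth")   # Table-like truth information (assumed tag)

            zgrid = np.linspace(0.0, 3.0, 301)
            z_point = np.squeeze(data.mode(grid=zgrid))   # per-object point estimates
            bias = np.mean(z_point - truth["redshift"])   # assumed truth column name

            # Register the created data under the 'output' tag, as documented above.
            self.add_data("output", {"mean_bias": np.atleast_1d(bias)})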

finalize()[source]

Finalize the stage, moving all its outputs to their final locations.

metric_base_class = None
name = 'Evaluator'
outputs = [('output', <class 'rail.core.data.Hdf5Handle'>), ('summary', <class 'rail.core.data.Hdf5Handle'>), ('single_distribution_summary', <class 'rail.core.data.QPDictHandle'>)]
run()[source]

Run the stage and return the execution status

run_single_node()[source]
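
A hedged usage sketch for the evaluate() entry point. The file names and configuration values are placeholders, and PointBiasEvaluator is the toy sub-class sketched above; make_stage() is the standard ceci/RailStage constructor.

    # Usage sketch; file names are placeholders for real estimator output and truth.
    import qp
    import tables_io

    from rail.core.stage import RailStage

    DS = RailStage.data_store            # RAIL's shared data store
    DS.__class__.allow_overwrite = True  # let re-runs overwrite cached data

    ensemble = qp.read("estimated_pdfs.hdf5")      # qp.Ensemble of photo-z PDFs
    truth = tables_io.read("truth_catalog.hdf5")   # Table-like truth information

    stage = PointBiasEvaluator.make_stage(name="toy_eval", force_exact=True)
    results = stage.evaluate(ensemble, truth)      # Table-like evaluation metrics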
class rail.evaluation.evaluator.OldEvaluator(args, comm=None)[source]

Bases: RailStage

Evaluate the performance of a photo-z estimator

Configuration Parameters:

  • output_mode [str]: What to do with the outputs (default=default)

  • zmin [float]: min z for grid (default=0.0)

  • zmax [float]: max z for grid (default=3.0)

  • nzbins [int]: # of bins in zgrid (default=301)

  • pit_metrics [str]: PIT-based metrics to include (default=all)

  • point_metrics [str]: Point-estimate metrics to include (default=all)

  • hdf5_groupname [str]: Name of group in hdf5 where redshift data is located (default=)

  • do_cde [bool]: Evaluate CDE Metric (default=True)

  • redshift_col [StageConfig]: (default={hdf5_groupname: 'photometry', zmin: 0.0, zmax: 3.0, nzbins: 301, dz: 0.01, nondetect_val: 99.0, bands: ['mag_u_lsst', 'mag_g_lsst', 'mag_r_lsst', 'mag_i_lsst', 'mag_z_lsst', 'mag_y_lsst'], err_bands: ['mag_err_u_lsst', 'mag_err_g_lsst', 'mag_err_r_lsst', 'mag_err_i_lsst', 'mag_err_z_lsst', 'mag_err_y_lsst'], mag_limits: {'mag_u_lsst': 27.79, 'mag_g_lsst': 29.04, 'mag_r_lsst': 29.06, 'mag_i_lsst': 28.62, 'mag_z_lsst': 27.98, 'mag_y_lsst': 27.05}, ref_band: 'mag_i_lsst', redshift_col: 'redshift', calculated_point_estimates: []})
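
A configuration sketch under the defaults listed above; the overridden values are illustrative, and anything not passed to make_stage() falls back to its default.

    # Configuration sketch for OldEvaluator; values shown mirror the defaults above.
    from rail.evaluation.evaluator import OldEvaluator

    old_eval = OldEvaluator.make_stage(
        name="old_eval",
        zmin=0.0,                      # min z for grid
        zmax=3.0,                      # max z for grid
        nzbins=301,                    # number of bins in zgrid
        pit_metrics="all",             # PIT-based metrics to include
        point_metrics="all",           # point-estimate metrics to include
        hdf5_groupname="photometry",   # hdf5 group holding the redshift data
        do_cde=True,                   # evaluate the CDE metric
    )

Calling old_eval.evaluate(data, truth) then returns the metric table described below.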

config_options = {'do_cde': <ceci.config.StageParameter object>, 'hdf5_groupname': <ceci.config.StageParameter object>, 'nzbins': <ceci.config.StageParameter object>, 'output_mode': <ceci.config.StageParameter object>, 'pit_metrics': <ceci.config.StageParameter object>, 'point_metrics': <ceci.config.StageParameter object>, 'redshift_col': {'bands': ['mag_u_lsst', 'mag_g_lsst', 'mag_r_lsst', 'mag_i_lsst', 'mag_z_lsst', 'mag_y_lsst'], 'calculated_point_estimates': [], 'dz': 0.01, 'err_bands': ['mag_err_u_lsst', 'mag_err_g_lsst', 'mag_err_r_lsst', 'mag_err_i_lsst', 'mag_err_z_lsst', 'mag_err_y_lsst'], 'hdf5_groupname': 'photometry', 'mag_limits': {'mag_g_lsst': 29.04, 'mag_i_lsst': 28.62, 'mag_r_lsst': 29.06, 'mag_u_lsst': 27.79, 'mag_y_lsst': 27.05, 'mag_z_lsst': 27.98}, 'nondetect_val': 99.0, 'nzbins': 301, 'redshift_col': 'redshift', 'ref_band': 'mag_i_lsst', 'zmax': 3.0, 'zmin': 0.0}, 'zmax': <ceci.config.StageParameter object>, 'zmin': <ceci.config.StageParameter object>}
evaluate(data, truth)[source]

Evaluate the performance of an estimator

This will attach the input data and truth to this Evaluator (for introspection and provenance tracking).

Then it will call the run() and finalize() methods, which need to be implemented by the sub-classes.

The run() method will need to register the data that it creates to this Evaluator by using self.add_data('output', output_data).

Parameters:
  • data (qp.Ensemble) – The sample to evaluate

  • truth (Table-like) – Table with the truth information

Returns:

output – The evaluation metrics

Return type:

Table-like

inputs = [('input', <class 'rail.core.data.QPHandle'>), ('truth', <class 'rail.core.data.Hdf5Handle'>)]
name = 'OldEvaluator'
outputs = [('output', <class 'rail.core.data.Hdf5Handle'>)]
run()[source]

Run method. Evaluate all the metrics and put them into a table.

Notes

  • Get the input data from the data store under this stage's 'input' tag

  • Get the truth data from the data store under this stage's 'truth' tag

  • Put the output data into the data store under this stage's 'output' tag
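
A hedged sketch of that flow, reusing the old_eval stage and data from the earlier sketches; the set_data()/get_data() calls follow the usual RailStage pattern and are an assumption here rather than part of this listing.

    # Manual-flow sketch using the documented data store tags.
    old_eval.set_data("input", ensemble)   # data store: this stage's 'input' tag
    old_eval.set_data("truth", truth)      # data store: this stage's 'truth' tag
    old_eval.run()                         # evaluate all the metrics
    metrics = old_eval.get_data("output")  # data store: this stage's 'output' tag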