facet.explanation.parallel.ExplainerQueue

class facet.explanation.parallel.ExplainerQueue(explainer, X, y=None, *, interactions, max_job_size, **kwargs)[source]

A queue splitting a data set to be explained into multiple jobs.

Bases

JobQueue[Union[ndarray[Any, dtype[Any]], List[ndarray[Any, dtype[Any]]]], Union[ndarray[Any, dtype[Any]], List[ndarray[Any, dtype[Any]]]]]

Metaclasses

ABCMeta

Parameters
  • explainer (BaseExplainer) – the SHAP explainer to use

  • X (Union[ndarray[Any, dtype[Any]], DataFrame, Pool]) – the feature values of the observations to be explained

  • y (Union[ndarray[Any, dtype[Any]], Series, None]) – the target values of the observations to be explained

  • interactions (bool) – if False, calculate SHAP values; if True, calculate SHAP interaction values

  • max_job_size (int) – the maximum number of observations to allocate to each job

  • kwargs (Any) – additional arguments specific to the explanation method

Raises

NotImplementedError – if X is a Pool; this is currently not supported
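
The sketch below shows how such a queue might be constructed, following the signature documented above. The names explainer and X_test are placeholders, not defined here: explainer stands for a BaseExplainer obtained elsewhere (for example from one of the explainer factories in facet.explanation), and X_test for a pandas DataFrame of feature values. The queue performs no computation by itself; it is meant to be passed to a pytools JobRunner via run_queue().

    from facet.explanation.parallel import ExplainerQueue

    # `explainer` and `X_test` are placeholders: a facet BaseExplainer and a
    # pandas DataFrame of feature values, respectively
    queue = ExplainerQueue(
        explainer,
        X_test,
        interactions=False,   # plain SHAP values; True would yield SHAP interaction values
        max_job_size=1000,    # allocate at most 1,000 observations to each job
    )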

Method summary

aggregate

Called by JobRunner.run_queue() to aggregate the results of all jobs once they have all been run.

jobs

Iterate the jobs in this queue.

on_run

See pytools.parallelization.JobQueue.on_run()

Attribute summary

lock

The lock used by class JobRunner to prevent parallel executions of the same queue

explainer

the SHAP explainer to use

interactions

if False, calculate SHAP values; otherwise, calculate SHAP interaction values

X

the feature values of the observations to be explained

y

the target values of the observations to be explained

max_job_size

the maximum number of observations to allocate to each job

kwargs

additional arguments specific to the explanation method

Definitions

aggregate(job_results)[source]

Called by JobRunner.run_queue() to aggregate the results of all jobs once they have all been run.

Parameters

job_results (List[Union[ndarray[Any, dtype[Any]], List[ndarray[Any, dtype[Any]]]]]) – list of job results, in the same order as the jobs generated by method jobs()

Return type

Union[ndarray[Any, dtype[Any]], List[ndarray[Any, dtype[Any]]]]

Returns

the aggregated result of running the queue
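
Conceptually, since each job explains a distinct chunk of observations, aggregation reassembles the per-job results in job order into a single result covering the full data set. The snippet below is a minimal, hypothetical illustration of that idea for the simple case where every job returns one array of SHAP values; it is not the library's actual implementation.

    import numpy as np

    # hypothetical per-job SHAP arrays, each of shape (n_observations_in_job, n_features)
    job_results = [np.zeros((1000, 5)), np.ones((1000, 5)), np.full((500, 5), 2.0)]

    # reassemble the per-job results row-wise, preserving job order
    shap_values = np.vstack(job_results)
    print(shap_values.shape)  # (2500, 5)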

jobs()[source]

Iterate the jobs in this queue.

Return type

Iterable[Job[Union[ndarray[Any, dtype[Any]], List[ndarray[Any, dtype[Any]]]]]]

Returns

the jobs in this queue
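
Each job is limited to at most max_job_size observations, as documented above. The snippet below sketches one plausible way of partitioning a data set under that constraint; it is a simplified stand-in for illustration, not the actual job construction.

    import numpy as np

    n_observations, max_job_size = 2500, 1000

    # number of jobs needed so that no job exceeds max_job_size (ceiling division)
    n_jobs = -(-n_observations // max_job_size)  # -> 3

    # assign consecutive index chunks to jobs; no chunk exceeds max_job_size
    index_chunks = np.array_split(np.arange(n_observations), n_jobs)
    print([len(chunk) for chunk in index_chunks])  # [834, 833, 833]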

on_run()

See pytools.parallelization.JobQueue.on_run()

X: numpy.ndarray[Any, numpy.dtype[Any]]

the feature values of the observations to be explained

explainer: facet.explanation.base.BaseExplainer

the SHAP explainer to use

interactions: bool

if False, calculate SHAP values; otherwise, calculate SHAP interaction values

kwargs: Dict[str, Any]

additional arguments specific to the explanation method

max_job_size: int

the maximum number of observations to allocate to each job

y: Optional[numpy.ndarray[Any, numpy.dtype[Any]]]

the target values of the observations to be explained