facet.explanation.parallel.ParallelExplainer#

class facet.explanation.parallel.ParallelExplainer(explainer, *, max_job_size=10, n_jobs, shared_memory=None, pre_dispatch=None, verbose=None)[source]#

A wrapper class that turns an explainer into a parallelized version, explaining chunks of observations in parallel.

Bases

ParallelizableMixin, BaseExplainer

Metaclasses

ABCMeta

Parameters
  • explainer (BaseExplainer) – the explainer to be parallelized by this wrapper

  • max_job_size (int) – the maximum number of observations to allocate to any of the explanation jobs running in parallel

  • n_jobs (Optional[int]) – number of jobs to use in parallel; if None, use joblib default (default: None)

  • shared_memory (Optional[bool]) – if True, use threads in the parallel runs; if False or None, use multiprocessing (default: None)

  • pre_dispatch (Union[int, str, None]) – number of batches to pre-dispatch; if None, use joblib default (default: None)

  • verbose (Optional[int]) – verbosity level used in the parallel computation; if None, use joblib default (default: None)
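Conceptually, the wrapper splits the observations to be explained into chunks of at most max_job_size rows, delegates each chunk to a parallel job, and concatenates the results in order. A minimal sketch of that chunking scheme, using only the standard library; the helper names below are illustrative and not part of FACET's API:

```python
from concurrent.futures import ThreadPoolExecutor

def chunk_indices(n_observations, max_job_size):
    # split range(n_observations) into consecutive chunks of at most
    # max_job_size observations each
    return [
        range(start, min(start + max_job_size, n_observations))
        for start in range(0, n_observations, max_job_size)
    ]

def parallel_explain(explain_fn, rows, max_job_size=10, n_jobs=None):
    # explain each chunk in a parallel job, then concatenate the
    # per-chunk results in their original order
    chunks = [rows[c.start:c.stop] for c in chunk_indices(len(rows), max_job_size)]
    with ThreadPoolExecutor(max_workers=n_jobs) as pool:
        results = list(pool.map(explain_fn, chunks))
    return [value for chunk_result in results for value in chunk_result]

# toy "explainer" standing in for a real explanation function:
# squares each observation
values = parallel_explain(lambda chunk: [x * x for x in chunk],
                          list(range(25)), max_job_size=10)
```

The actual implementation uses joblib (via ParallelizableMixin), which is what the n_jobs, shared_memory, pre_dispatch, and verbose parameters configure.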

Method summary

explain_row

See shap.explainers.Explainer.explain_row()

load

See shap.explainers.Explainer.load()

save

See shap.explainers.Explainer.save()

shap_interaction_values

Estimate the SHAP interaction values for a set of samples.

shap_values

Estimate the SHAP values for a set of samples.

supports_model_with_masker

See shap.explainers.Explainer.supports_model_with_masker()

Attribute summary

supports_interaction

True if the explainer supports interaction effects, False otherwise.

explainer

The explainer being parallelized by this wrapper.

max_job_size

The maximum number of observations to allocate to any of the explanation jobs running in parallel.

n_jobs

Number of jobs to use in parallel; if None, use joblib default.

shared_memory

If True, use threads in the parallel runs; if False or None, use multiprocessing.

pre_dispatch

Number of batches to pre-dispatch; if None, use joblib default.

verbose

Verbosity level used in the parallel computation; if None, use joblib default.

Definitions

__call__(*args, **kwargs)[source]#

Forward the call to the wrapped explainer.

Parameters
  • args (Any) – positional arguments to be forwarded to the wrapped explainer

  • kwargs (Any) – keyword arguments to be forwarded to the wrapped explainer

Return type

Explanation

Returns

the explanation returned by the wrapped explainer

explain_row(*row_args, max_evals, main_effects, error_bounds, outputs, silent, **kwargs)#

See shap.explainers.Explainer.explain_row()

classmethod load(in_file, model_loader=<bound method Model.load of <class 'shap.models.Model'>>, masker_loader=<bound method Serializable.load of <class 'shap.maskers.Masker'>>, instantiate=True)#

See shap.explainers.Explainer.load()

save(out_file, model_saver='.save', masker_saver='.save')#

See shap.explainers.Explainer.save()

shap_interaction_values(X, y=None, **kwargs)[source]#

Estimate the SHAP interaction values for a set of samples.

Parameters
  • X (Union[ndarray[Any, dtype[Any]], DataFrame, Pool]) – matrix of samples (# samples x # features) on which to explain the model’s output

  • y (Union[ndarray[Any, dtype[Any]], Series, None]) – array of label values for each sample, used when explaining loss functions (optional)

  • kwargs (Any) – additional arguments specific to the explainer implementation

Return type

Union[ndarray[Any, dtype[float64]], List[ndarray[Any, dtype[float64]]]]

Returns

SHAP interaction values as an array of shape (n_observations, n_features, n_features); a list of such arrays in the case of a multi-output model

shap_values(X, y=None, **kwargs)[source]#

Estimate the SHAP values for a set of samples.

Parameters
  • X (Union[ndarray[Any, dtype[Any]], DataFrame, Pool]) – matrix of samples (# samples x # features) on which to explain the model’s output

  • y (Union[ndarray[Any, dtype[Any]], Series, None]) – array of label values for each sample, used when explaining loss functions (optional)

  • kwargs (Any) – additional arguments specific to the explainer implementation

Return type

Union[ndarray[Any, dtype[float64]], List[ndarray[Any, dtype[float64]]]]

Returns

SHAP values as an array of shape (n_observations, n_features); a list of such arrays in the case of a multi-output model
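To illustrate how the two return shapes relate: for explainers that compute exact interaction values (such as shap's TreeExplainer), summing the interaction matrix over one feature axis recovers the per-feature SHAP values. A toy numpy illustration of the documented shapes, using random stand-in arrays rather than real explanations:

```python
import numpy as np

rng = np.random.default_rng(0)
n_observations, n_features = 4, 3

# stand-in interaction values with the documented shape
# (n_observations, n_features, n_features), symmetric in the feature axes
raw = rng.normal(size=(n_observations, n_features, n_features))
interaction_values = (raw + raw.transpose(0, 2, 1)) / 2

# summing over one feature axis yields an array with the documented
# shap_values shape (n_observations, n_features)
shap_values = interaction_values.sum(axis=2)
```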

static supports_model_with_masker(model, masker)#

See shap.explainers.Explainer.supports_model_with_masker()

explainer: facet.explanation.base.BaseExplainer#

The explainer being parallelized by this wrapper.

max_job_size: int#

The maximum number of observations to allocate to any of the explanation jobs running in parallel.

property supports_interaction: bool#

True if the explainer supports interaction effects, False otherwise.

Return type

bool