facet.explanation.parallel.ParallelExplainer
- class facet.explanation.parallel.ParallelExplainer(explainer, *, max_job_size=10, n_jobs, shared_memory=None, pre_dispatch=None, verbose=None)
A wrapper class that turns an explainer into a parallelized version, explaining chunks of observations in parallel.
- Bases
- Metaclasses
- Parameters
    - explainer (BaseExplainer) – the explainer to be parallelized by this wrapper
    - max_job_size (int) – the maximum number of observations to allocate to any of the explanation jobs running in parallel
    - n_jobs (Optional[int]) – number of jobs to use in parallel; if None, use joblib default (default: None)
    - shared_memory (Optional[bool]) – if True, use threads in the parallel runs; if False or None, use multiprocessing (default: None)
    - pre_dispatch (Union[int, str, None]) – number of batches to pre-dispatch; if None, use joblib default (default: None)
    - verbose (Optional[int]) – verbosity level used in the parallel computation; if None, use joblib default (default: None)
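The max_job_size parameter bounds how many observations any single parallel explanation job receives. A minimal sketch of that chunking idea, assuming contiguous row chunks (the helper split_into_jobs and the chunking strategy are illustrative, not part of facet's public API):

```python
import math

def split_into_jobs(n_observations, max_job_size=10):
    # Split row indices into contiguous chunks of at most max_job_size
    # observations each, one chunk per parallel explanation job.
    # (Illustrative helper; not part of facet's public API.)
    return [
        range(start, min(start + max_job_size, n_observations))
        for start in range(0, n_observations, max_job_size)
    ]

jobs = split_into_jobs(25, max_job_size=10)
assert len(jobs) == math.ceil(25 / 10)          # 3 jobs
assert [len(job) for job in jobs] == [10, 10, 5]  # no job exceeds max_job_size
```

Each chunk is then explained by the wrapped explainer in its own joblib job, and the per-chunk results are concatenated in the original observation order.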
Method summary
- explain_row(): see shap.explainers.Explainer.explain_row()
- load(): see shap.explainers.Explainer.load()
- save(): see shap.explainers.Explainer.save()
- shap_interaction_values(): estimate the SHAP interaction values for a set of samples
- shap_values(): estimate the SHAP values for a set of samples
- supports_model_with_masker(): see shap.explainers.Explainer.supports_model_with_masker()
Attribute summary
- True if the explainer supports interaction effects, False otherwise
- explainer: the explainer being parallelized by this wrapper
- max_job_size: the maximum number of observations to allocate to any of the explanation jobs running in parallel
- n_jobs: number of jobs to use in parallel; if None, use joblib default
- shared_memory: if True, use threads in the parallel runs; if False or None, use multiprocessing
- pre_dispatch: number of batches to pre-dispatch; if None, use joblib default
- verbose: verbosity level used in the parallel computation; if None, use joblib default
Definitions
- __call__(*args, **kwargs)
Forward the call to the wrapped explainer.
- Returns
the explanation returned by the wrapped explainer
- explain_row(*row_args, max_evals, main_effects, error_bounds, outputs, silent, **kwargs)
See shap.explainers.Explainer.explain_row()
- classmethod load(in_file, model_loader=<bound method Model.load of <class 'shap.models.Model'>>, masker_loader=<bound method Serializable.load of <class 'shap.maskers.Masker'>>, instantiate=True)
See shap.explainers.Explainer.load()
- save(out_file, model_saver='.save', masker_saver='.save')
See shap.explainers.Explainer.save()
- shap_interaction_values(X, y=None, **kwargs)
Estimate the SHAP interaction values for a set of samples.
- Parameters
    - X (Union[ndarray[Any, dtype[Any]], DataFrame, Pool]) – matrix of samples (# samples x # features) on which to explain the model’s output
    - y (Union[ndarray[Any, dtype[Any]], Series, None]) – array of label values for each sample, used when explaining loss functions (optional)
    - kwargs (Any) – additional arguments specific to the explainer implementation
- Return type
Union[ndarray[Any, dtype[float64]], List[ndarray[Any, dtype[float64]]]]
- Returns
SHAP interaction values as an array of shape \((n_\mathrm{observations}, n_\mathrm{features}, n_\mathrm{features})\); a list of such arrays in the case of a multi-output model
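Each observation's interaction matrix in this tensor is symmetric, and summing it over one feature axis yields a per-feature attribution matrix with the shape returned by shap_values(). A toy sketch of these shape relationships using synthetic data (not real model output):

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_feat = 4, 3

# Synthetic stand-in for a shap_interaction_values() result, shaped
# (n_observations, n_features, n_features) and symmetrized per observation.
a = rng.normal(size=(n_obs, n_feat, n_feat))
interactions = (a + a.transpose(0, 2, 1)) / 2

assert interactions.shape == (n_obs, n_feat, n_feat)
# Each observation's interaction matrix is symmetric.
assert np.allclose(interactions, interactions.transpose(0, 2, 1))
# Summing over the last feature axis gives a per-feature attribution
# matrix with the shap_values() shape (n_observations, n_features).
per_feature = interactions.sum(axis=2)
assert per_feature.shape == (n_obs, n_feat)
```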
- shap_values(X, y=None, **kwargs)
Estimate the SHAP values for a set of samples.
- Parameters
    - X (Union[ndarray[Any, dtype[Any]], DataFrame, Pool]) – matrix of samples (# samples x # features) on which to explain the model’s output
    - y (Union[ndarray[Any, dtype[Any]], Series, None]) – array of label values for each sample, used when explaining loss functions (optional)
    - kwargs (Any) – additional arguments specific to the explainer implementation
- Return type
Union[ndarray[Any, dtype[float64]], List[ndarray[Any, dtype[float64]]]]
- Returns
SHAP values as an array of shape (n_observations, n_features); a list of such arrays in the case of a multi-output model
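Because the return type depends on the model, calling code often normalizes the result to one shape before further processing. A minimal sketch, assuming the single-output/multi-output convention described above (the helper name as_output_list is illustrative, not part of facet's API):

```python
import numpy as np

def as_output_list(shap_values):
    # Normalize a shap_values() result to a list of matrices:
    # a single (n_observations, n_features) array (single-output model)
    # becomes a one-element list; a list of such arrays (multi-output
    # model) is returned unchanged.
    # (Illustrative helper; not part of facet's public API.)
    return shap_values if isinstance(shap_values, list) else [shap_values]

single = np.zeros((5, 3))                    # single-output result
multi = [np.zeros((5, 3)), np.ones((5, 3))]  # two-output result
assert len(as_output_list(single)) == 1
assert len(as_output_list(multi)) == 2
```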
- static supports_model_with_masker(model, masker)
See shap.explainers.Explainer.supports_model_with_masker()
- explainer: facet.explanation.base.BaseExplainer
The explainer being parallelized by this wrapper