BeyondAttention
- class txv.exp.BeyondAttention(model: Module)
Link to Paper: Transformer Interpretability Beyond Attention Visualization
- __init__(model: Module) → None
- Parameters:
model (torch.nn.Module) – A model from txv.vit
Caution
The model must be an LRP model. You can obtain the LRP version of a model by passing lrp=True to the model function.
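A minimal construction sketch; the vit_base_patch16_224 factory name below is an assumption, so substitute any model function exposed by txv.vit:

```python
import txv.vit as vit
from txv.exp import BeyondAttention

# Hypothetical factory name; any model function from txv.vit works.
# lrp=True is required: BeyondAttention expects the LRP variant of the model.
model = vit.vit_base_patch16_224(lrp=True)
model.eval()

explainer = BeyondAttention(model)
```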
- explain(input: Tensor, index: int | None = None, alpha: float = 1.0, abm: bool = True, **kwargs) → Tensor
- Parameters:
input (torch.Tensor) – Input tensor
index (int, optional) – Index of the class to explain, by default the predicted class is explained
alpha (float, optional) – Alpha value for LRP, by default 1.0
abm (bool, optional) – Whether to apply the architecture-based modification, by default True
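A hedged usage sketch for explain; the model factory, image file, and ImageNet-style preprocessing below are assumptions, not part of the documented API:

```python
from PIL import Image
from torchvision import transforms

import txv.vit as vit
from txv.exp import BeyondAttention

# Hypothetical factory name; see the construction sketch above.
model = vit.vit_base_patch16_224(lrp=True)
model.eval()
explainer = BeyondAttention(model)

# Assumed ImageNet-style preprocessing for a 224x224 ViT input.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
image = preprocess(Image.open("cat.jpg").convert("RGB")).unsqueeze(0)

# Explain the predicted class (index defaults to None); pass an explicit
# index to target a specific class, e.g. explainer.explain(image, index=281).
relevance = explainer.explain(image, alpha=1.0, abm=True)
print(relevance.shape)
```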