GenericAttention
- class txv.exp.GenericAttention(model)
Link to Paper: Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers
- __init__(model)
- Parameters:
model (torch.nn.Module) – A model from txv.vit
Tip
Use the model with lrp=False, as LRP models have a higher memory footprint (see the construction sketch below).
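A minimal construction sketch, assuming a hypothetical loader name: this page only says the model comes from txv.vit, so `vit_base_patch16_224` and its `lrp` keyword are illustrative, not a confirmed API.

```python
import txv.exp
import txv.vit

# Hypothetical loader name -- the docs only require "a model from txv.vit".
# lrp=False keeps the memory footprint down, per the tip above.
model = txv.vit.vit_base_patch16_224(lrp=False)

# Wrap the model in the explainer.
explainer = txv.exp.GenericAttention(model)
```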
- explain(input: Tensor, index: int | None = None, layer: int = 1, abm: bool = True, **kwargs) → Tensor
- Parameters:
input (torch.Tensor) – Input tensor
index (int, optional) – Index of the class to explain; by default the predicted class is explained
layer (int, optional) – Layer number from which to start computing the attention weights, by default 1
abm (bool, optional) – Architecture-based modification, by default True
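Continuing the sketch above, a hedged usage example: the 1×3×224×224 input shape and the absence of preprocessing are assumptions for illustration, not requirements stated on this page.

```python
import torch

# Dummy image batch; a real image would first go through the model's
# normalization transform (the 224x224 shape is an assumption).
x = torch.randn(1, 3, 224, 224)

# Explain the predicted class (index=None), starting the attention
# computation at layer 1 with architecture-based modification enabled.
heatmap = explainer.explain(x, index=None, layer=1, abm=True)
print(heatmap.shape)  # the returned torch.Tensor relevance map
```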