GradSAM#

class txv.exp.GradSAM(model: Module)#

Link to Paper: Grad-SAM: Explaining Transformers via Gradient Self-Attention Maps

__init__(model: Module) → None#
Parameters:

model (torch.nn.Module) – A model from txv.vit

Tip

Use the model with lrp=False, as LRP-enabled models have a higher memory footprint.

explain(input: Tensor, index: int | None = None, abm: bool = True) → Tensor#
Parameters:
  • input (torch.Tensor) – Input tensor to the model

  • index (int, optional) – Index of the class to explain; by default, the predicted class is explained

  • abm (bool, optional) – Whether to apply the architecture-based modification; True by default
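Conceptually, Grad-SAM scores each token by weighting the self-attention maps with the ReLU of their gradients and averaging the result over layers, heads, and query positions. The following is a minimal pure-Python sketch of that aggregation on synthetic nested lists, not the txv implementation (which obtains the attention maps and gradients from the ViT forward/backward pass inside explain(), and whose exact normalization may differ):

```python
def relu(x):
    """Elementwise ReLU on a scalar."""
    return x if x > 0.0 else 0.0

def grad_sam_scores(attn, grad):
    """Toy Grad-SAM aggregation.

    attn, grad: nested lists of shape [layers][heads][tokens][tokens],
    holding attention weights and their gradients w.r.t. the target logit.
    Returns one relevance score per token:
        mean over layers, heads, and query rows of attn * ReLU(grad).
    """
    n_layers = len(attn)
    n_heads = len(attn[0])
    n_tokens = len(attn[0][0])
    scores = [0.0] * n_tokens
    for l in range(n_layers):
        for h in range(n_heads):
            for i in range(n_tokens):        # query row
                for j in range(n_tokens):    # key/token column
                    scores[j] += attn[l][h][i][j] * relu(grad[l][h][i][j])
    count = n_layers * n_heads * n_tokens
    return [s / count for s in scores]

# One layer, one head, two tokens: negative gradients are zeroed by ReLU.
attn = [[[[0.6, 0.4], [0.2, 0.8]]]]
grad = [[[[1.0, -1.0], [0.5, 2.0]]]]
print(grad_sam_scores(attn, grad))  # → [0.35, 0.8]
```

The key design point is the ReLU on the gradients: attention entries whose gradient is negative (i.e., that argue against the explained class) contribute nothing, so the final map highlights only tokens that support the prediction.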