BeyondIntuition

class txv.exp.BeyondIntuition(model: Module)

Link to paper: Beyond Intuition: Rethinking Token Attributions inside Transformers

__init__(model: Module) → None
Parameters:

model (torch.nn.Module) – A model from txv.vit

Tip

Instantiate the model with lrp=False, since LRP-enabled models have a higher memory footprint.
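A hypothetical end-to-end sketch of constructing the explainer. The constructor name `vit_base` and the input shape are placeholders (not part of this API reference); only `BeyondIntuition` and `explain()` come from this page:

```python
# Hypothetical usage sketch. `vit_base` stands in for whichever model
# constructor txv.vit actually provides; check the txv.vit docs for
# the real names. lrp=False follows the tip above.
import torch
from txv.exp import BeyondIntuition
import txv.vit as vit

model = vit.vit_base(lrp=False)          # placeholder constructor name
explainer = BeyondIntuition(model)

image = torch.randn(1, 3, 224, 224)      # a single batched input image (assumed shape)
attribution = explainer.explain(
    image,
    method="head",   # head-wise attention maps (the default)
    steps=20,        # Riemann steps for the path integral
)
```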

explain(input: Tensor, method: Literal['head', 'token'] = 'head', index: int | None = None, layer: int = 0, steps: int = 20, baseline: Tensor | None = None, abm: bool = True) → Tensor
Parameters:
  • input (torch.Tensor) – Input tensor

  • method (Literal['head','token'], optional) – Type of attention map: head-wise or token-wise, by default ‘head’

  • index (int, optional) – Index of the class to attribute; by default the predicted class is explained

  • layer (int, optional) – Layer at which to start computing attention weights, by default 0

  • steps (int, optional) – Number of steps in the Riemann approximation of the integral, by default 20

  • baseline (torch.Tensor, optional) – Baseline tensor; by default None (a tensor of zeros is used)

  • abm (bool, optional) – Whether to apply the architecture-based modification, by default True
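The steps parameter controls the resolution of the Riemann sum used to approximate the path integral from baseline to input: more steps means a tighter approximation at higher cost. A minimal pure-Python sketch of that idea, independent of txv, with a toy scalar function standing in for the model:

```python
# Sketch of the Riemann-sum approximation behind the `steps` parameter:
# integrated gradients along the straight path from baseline to input.
# The toy gradient grad_f (for f(x) = x**2) is illustrative only.

def grad_f(x: float) -> float:
    """Gradient of the toy function f(x) = x**2."""
    return 2.0 * x

def integrated_gradient(x: float, baseline: float = 0.0, steps: int = 20) -> float:
    """Midpoint Riemann sum of grad_f along the path baseline -> x."""
    total = 0.0
    for k in range(steps):
        alpha = (k + 0.5) / steps              # midpoint of each sub-interval
        point = baseline + alpha * (x - baseline)
        total += grad_f(point)
    # average the gradients and scale by the path length
    return (x - baseline) * total / steps

# For f(x) = x**2 with baseline 0 the exact attribution is f(x) - f(0) = x**2;
# the approximation converges to it as steps grows.
print(integrated_gradient(3.0, steps=20))
```

In the real explainer the same trade-off applies: a small steps value is cheaper but noisier, a large one converges toward the exact path integral.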