
torch.nn.modules.module.register_module_full_backward_hook

torch.nn.modules.module.register_module_full_backward_hook(hook)

Registers a backward hook common to all modules.

Warning

This adds global state to the nn.module module and it is only intended for debugging/profiling purposes.

The current implementation does not exhibit the documented behavior for a complex Module that performs many operations. In some failure cases, grad_input and grad_output will contain the gradients for only a subset of the inputs and outputs. For such a Module, you should use torch.Tensor.register_hook() directly on a specific input or output to get the required gradients.
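A minimal sketch of that fallback, hooking the tensors of interest directly; the module and tensor names here are hypothetical:

import torch
import torch.nn as nn

lin = nn.Linear(4, 2)  # stand-in for a more complex Module
x = torch.randn(3, 4, requires_grad=True)

out = lin(x)
# Each tensor hook receives exactly that tensor's gradient,
# regardless of how the module routes its inputs and outputs.
x.register_hook(lambda grad: print("grad w.r.t. x:", grad.shape))
out.register_hook(lambda grad: print("grad w.r.t. out:", grad.shape))

out.sum().backward()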

The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature:

hook(module, grad_input, grad_output) -> Tensor or None

The grad_input and grad_output are tuples. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations. grad_input corresponds only to the inputs given as positional arguments; keyword arguments will not appear in the hook. Entries in grad_input and grad_output will be None for all non-Tensor arguments.
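A minimal sketch of a global hook that logs gradient shapes for every module in a backward pass; grad_logger and the model are hypothetical, not part of the API:

import torch
import torch.nn as nn

def grad_logger(module, grad_input, grad_output):
    # grad_input and grad_output are tuples; entries are None for
    # non-Tensor arguments.
    print(type(module).__name__,
          [g.shape if g is not None else None for g in grad_input],
          [g.shape if g is not None else None for g in grad_output])
    # Returning None leaves grad_input unchanged; returning a tuple of
    # Tensors would replace it in subsequent gradient computations.
    return None

handle = torch.nn.modules.module.register_module_full_backward_hook(grad_logger)

model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 1))
model(torch.randn(2, 8, requires_grad=True)).sum().backward()

handle.remove()  # stop the global hook from firing in later backward passes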

Global hooks are called before hooks registered with register_backward_hook.
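A minimal sketch of this ordering, assuming the per-module counterpart register_full_backward_hook; the hook names are hypothetical:

import torch
import torch.nn as nn

def global_hook(module, grad_input, grad_output):
    print("global hook:", type(module).__name__)

def local_hook(module, grad_input, grad_output):
    print("per-module hook:", type(module).__name__)

g = torch.nn.modules.module.register_module_full_backward_hook(global_hook)
lin = nn.Linear(4, 2)
h = lin.register_full_backward_hook(local_hook)

lin(torch.randn(3, 4, requires_grad=True)).sum().backward()
# Prints the global line before the per-module line.

g.remove()
h.remove()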

Returns

a handle that can be used to remove the added hook by calling handle.remove()

Return type

torch.utils.hooks.RemovableHandle
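For example, a hook can be installed temporarily and detached through the returned handle; noisy_hook and the module are hypothetical:

import torch
import torch.nn as nn

def noisy_hook(module, grad_input, grad_output):
    print("backward through", type(module).__name__)

handle = torch.nn.modules.module.register_module_full_backward_hook(noisy_hook)

lin = nn.Linear(3, 1)
lin(torch.randn(2, 3, requires_grad=True)).sum().backward()  # hook fires
handle.remove()
lin(torch.randn(2, 3, requires_grad=True)).sum().backward()  # hook is gone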
