Operations¶
Custom operations for compression models.
Overview¶
This module contains custom operations optimized for learned compression, including quantization and entropy coding operations.
ops¶
LowerBound¶
Bases: Module
Lower bound operator: computes `torch.max(x, bound)` with a custom
gradient.
The gradient is passed through unchanged when `x` is above the bound, or
when the update would move `x` toward the bound; otherwise the gradient
is set to zero.
NonNegativeParametrizer¶
Bases: Module
Non-negative reparametrization.
Used for stability during training.
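One common way to keep a learned parameter non-negative is to store its square root and square it in the forward pass. The sketch below follows that idea; the `eps` value, the clamping (the real module may use `LowerBound` instead of `torch.clamp`), and the method names are assumptions:

```python
import torch


class NonNegativeParametrizerSketch(torch.nn.Module):
    """Stores sqrt(value); forward squares it, so the output is >= 0."""

    def __init__(self, minimum: float = 0.0, eps: float = 2 ** -18):
        super().__init__()
        self.eps = eps
        # Smallest allowed value in parameter (sqrt) space.
        self.bound = (minimum + eps ** 2) ** 0.5

    def init(self, x: torch.Tensor) -> torch.Tensor:
        # Map an initial non-negative value into parameter space.
        return torch.sqrt(torch.clamp(x + self.eps ** 2, min=self.eps ** 2))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Clamp to the bound, then square: result is always non-negative.
        return torch.clamp(x, min=self.bound) ** 2
```

Squaring keeps gradients well-behaved near zero, which is the stability benefit mentioned above.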
compute_padding¶
Returns tuples for padding and unpadding.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `in_h` | `int` | Input height. | *required* |
| `in_w` | `int` | Input width. | *required* |
| `out_h` | `int \| None` | Output height. | `None` |
| `out_w` | `int \| None` | Output width. | `None` |
| `min_div` | `int` | Length that output dimensions should be divisible by. | `1` |
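As an illustration of the parameters above, here is a plain-Python sketch. Only the signature comes from the table; the centered padding split, the `(left, right, top, bottom)` ordering (as used by `torch.nn.functional.pad`), and the negated unpadding tuple are assumptions:

```python
import math


def compute_padding(in_h, in_w, out_h=None, out_w=None, min_div=1):
    """Return (padding, unpadding) tuples for resizing to target dims.

    If out_h/out_w are not given, round each input dimension up to the
    nearest multiple of min_div.
    """
    if out_h is None:
        out_h = math.ceil(in_h / min_div) * min_div
    if out_w is None:
        out_w = math.ceil(in_w / min_div) * min_div

    pad_h = out_h - in_h
    pad_w = out_w - in_w

    # Split the padding roughly evenly on both sides of each dimension.
    left = pad_w // 2
    right = pad_w - left
    top = pad_h // 2
    bottom = pad_h - top

    padding = (left, right, top, bottom)
    # Negative values undo the padding (crop back to the original size).
    unpadding = tuple(-p for p in padding)
    return padding, unpadding
```

For example, a 252x252 input with `min_div=64` is padded up to 256x256, adding 2 pixels on every side.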
quantize_ste¶
Rounding with non-zero gradients: the derivative of the rounding operation is replaced with the identity function (a straight-through estimator).
Used in `Lossy Image Compression with Compressive Autoencoders
<https://arxiv.org/abs/1703.00395>`_.
.. note::
    Implemented with the PyTorch `detach()` reparametrization trick:
    `x_round = (x.round() - x).detach() + x`
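A minimal sketch of this trick as a standalone function (the body follows the note above; it may differ cosmetically from the library's exact code):

```python
import torch


def quantize_ste(x: torch.Tensor) -> torch.Tensor:
    """Round with a straight-through gradient.

    Forward pass returns round(x); in the backward pass the detached
    residual contributes no gradient, so d(output)/dx is the identity.
    """
    return (torch.round(x) - x).detach() + x
```

The forward value is exactly `round(x)`, yet calling `backward()` yields a gradient of 1 everywhere, so the quantizer is trainable end to end.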