Operations

Custom operations for compression models.

Overview

This module contains custom operations optimized for learned compression, including quantization and entropy coding operations.

ops

LowerBound

LowerBound(bound)

Bases: Module

Lower bound operator, computes `torch.max(x, bound)` with a custom gradient.

The derivative is replaced by the identity function when the gradient would move x towards the bound; otherwise the gradient is set to zero.
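
As a minimal sketch of this behavior (not necessarily the library's exact implementation), the custom gradient can be expressed with `torch.autograd.Function`; the pass-through condition below is inferred from the description above:

```python
import torch

class _LowerBoundSketch(torch.autograd.Function):
    # Sketch: forward computes max(x, bound); backward passes the gradient
    # through where x is already above the bound, or where a gradient-descent
    # step would move x up towards the bound (grad_output < 0). Elsewhere
    # the gradient is zeroed, as in plain max().
    @staticmethod
    def forward(ctx, x, bound):
        ctx.save_for_backward(x, bound)
        return torch.max(x, bound)

    @staticmethod
    def backward(ctx, grad_output):
        x, bound = ctx.saved_tensors
        pass_through = (x >= bound) | (grad_output < 0)
        return pass_through.type_as(grad_output) * grad_output, None

x = torch.tensor([0.5, 2.0], requires_grad=True)
y = _LowerBoundSketch.apply(x, torch.tensor(1.0))
y.sum().backward()
print(x.grad)  # tensor([0., 1.]): the entry pushed away from the bound gets no gradient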

NonNegativeParametrizer

NonNegativeParametrizer(minimum=0, reparam_offset=2 ** -18)

Bases: Module

Non-negative reparametrization.

Used for stability during training.
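
The idea is to store a transformed parameter and map it back to a non-negative value on each forward pass. The sketch below assumes a square-root reparametrization (common for GDN-style parameters); the exact functional form in the library may differ:

```python
import torch
import torch.nn as nn

class NonNegativeSketch(nn.Module):
    # Sketch: parameters are stored as roughly sqrt(value + pedestal), so the
    # recovered value v**2 - pedestal stays non-negative and gradients remain
    # well-behaved near zero.
    def __init__(self, minimum=0.0, reparam_offset=2 ** -18):
        super().__init__()
        self.pedestal = reparam_offset ** 2
        self.bound = (minimum + reparam_offset ** 2) ** 0.5

    def init(self, x):
        # Map an initial (non-negative) value to its stored form.
        return torch.sqrt(torch.clamp(x + self.pedestal, min=self.pedestal))

    def forward(self, x):
        # Clamp to the bound (the LowerBound operator above would be used
        # here instead to keep useful gradients), then undo the sqrt.
        x = torch.clamp(x, min=self.bound)
        return x ** 2 - self.pedestal
```

A GDN-style layer would call `init(...)` once when creating its raw parameter, and `forward(...)` on every pass to recover the constrained value.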

compute_padding

compute_padding(in_h, in_w, *, out_h=None, out_w=None, min_div=1)

Returns tuples for padding and unpadding.

Parameters:

in_h (int, required): Input height.
in_w (int, required): Input width.
out_h (int | None, default None): Output height.
out_w (int | None, default None): Output width.
min_div (int, default 1): Length that output dimensions should be divisible by.
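
A typical use is padding an input so its spatial dimensions are divisible by the model's total downsampling factor, then cropping the output back. This usage sketch assumes the function returns two 4-tuples in `F.pad` order, (left, right, top, bottom), with the second tuple holding the negative values used for cropping:

```python
import torch
import torch.nn.functional as F
from compressai.ops import compute_padding  # import path assumed

x = torch.rand(1, 3, 252, 252)
# Pad so both sides are divisible by 64 (e.g. a codec with six stride-2 stages).
pad, unpad = compute_padding(252, 252, min_div=64)
x_padded = F.pad(x, pad, mode="constant", value=0)   # -> 1 x 3 x 256 x 256
# ... encode / decode x_padded ...
x_restored = F.pad(x_padded, unpad)                  # negative padding crops
assert x_restored.shape == x.shape
```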

quantize_ste

quantize_ste(x)

Rounding with non-zero gradients: the derivative of rounding is approximated by the identity function (a straight-through estimator).

Used in "Lossy Image Compression with Compressive Autoencoders" (https://arxiv.org/abs/1703.00395).

Note:

Implemented with the PyTorch `detach()` reparametrization trick:

`x_round = x_round - x.detach() + x`
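
Putting the note together, a self-contained version consistent with the formula above (a sketch, not necessarily the library's exact code):

```python
import torch

def quantize_ste(x: torch.Tensor) -> torch.Tensor:
    # Forward: plain rounding. Backward: identity, because the rounding
    # residual (round(x) - x) is detached from the graph.
    return (torch.round(x) - x).detach() + x

x = torch.tensor([0.3, 1.7], requires_grad=True)
y = quantize_ste(x)   # tensor([0., 2.], grad_fn=...)
y.sum().backward()
print(x.grad)         # tensor([1., 1.]) -- straight-through gradient
```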