Layers¶
Neural network layers for compression models.
GDN - Generalized Divisive Normalization¶
The GDN layer is commonly used in learned image compression for its effectiveness at decorrelating features.
GDN¶
Bases: Module
Generalized Divisive Normalization layer.
Introduced in ["Density Modeling of Images Using a Generalized Normalization Transformation"](https://arxiv.org/abs/1511.06281), by Johannes Ballé, Valero Laparra, and Eero P. Simoncelli (2016).
$$
y[i] = \frac{x[i]}{\sqrt{\beta[i] + \sum_j \gamma[j, i] \cdot x[j]^2}}
$$
Source code in tinify/layers/gdn.py
forward¶
Source code in tinify/layers/gdn.py
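The GDN formula above can be checked directly in numpy; this is a minimal sketch of the normalization itself (no trainable parameters or reparameterization), not the tinify implementation.

```python
import numpy as np

def gdn(x, beta, gamma):
    """Apply y[i] = x[i] / sqrt(beta[i] + sum_j gamma[j, i] * x[j]**2).

    x:     (C, H, W) feature tensor
    beta:  (C,) positive bias per channel
    gamma: (C, C) non-negative cross-channel weights
    """
    C = x.shape[0]
    x2 = (x ** 2).reshape(C, -1)           # squared activations, (C, H*W)
    denom = beta[:, None] + gamma.T @ x2   # beta[i] + sum_j gamma[j, i] * x[j]^2
    return (x.reshape(C, -1) / np.sqrt(denom)).reshape(x.shape)
```

With `gamma = 0` and `beta = 1` the layer is the identity; with `gamma` the identity matrix and `beta = 0` each channel is divided by its own magnitude.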
Attention Modules¶
AttentionBlock¶
Bases: Module
Self attention block.
Simplified variant from ["Learned Image Compression with Discretized Gaussian Mixture Likelihoods and Attention Modules"](https://arxiv.org/abs/2001.01568), by Zhengxue Cheng, Heming Sun, Masaru Takeuchi, and Jiro Katto.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `N` | `int` | Number of channels | required |
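In Cheng et al.'s simplified variant, a trunk branch is modulated by a sigmoid mask branch and added back to the input. The sketch below shows only that residual-gating structure; the `trunk` and `mask` callables stand in for the stacks of residual units used in the actual module and are illustrative placeholders, not tinify APIs.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_block(x, trunk, mask):
    """Simplified attention: out = x + trunk(x) * sigmoid(mask(x))."""
    return x + trunk(x) * sigmoid(mask(x))
```

When the mask branch outputs zeros, `sigmoid` gives 0.5 everywhere, so the block passes half of the trunk signal through the skip connection.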
Convolutional Layers¶
conv3x3¶
subpel_conv3x3¶
3x3 sub-pixel convolution for up-sampling.
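Sub-pixel convolution pairs a 3x3 convolution (producing `r**2` output channels per target channel) with a depth-to-space rearrangement. The shuffle step can be sketched in numpy; this mirrors the standard pixel-shuffle ordering, not tinify's code.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r**2, H, W) tensor into (C, H*r, W*r)."""
    Cr2, H, W = x.shape
    C = Cr2 // (r * r)
    x = x.reshape(C, r, r, H, W)     # split channels into (C, r, r)
    x = x.transpose(0, 3, 1, 4, 2)   # interleave: (C, H, r, W, r)
    return x.reshape(C, H * r, W * r)
```

Each group of `r**2` input channels becomes one `r`-times-larger spatial block, which is how the layer up-samples without transposed convolutions.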
Residual Blocks¶
ResidualBlock¶
Bases: Module
Simple residual block with two 3x3 convolutions.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `in_ch` | `int` | Number of input channels | required |
| `out_ch` | `int` | Number of output channels | required |
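The two-convolutions-plus-skip structure can be sketched in numpy. `conv3x3` here is a naive same-padding convolution written for illustration (not the tinify helper), and the identity skip assumes `in_ch == out_ch`.

```python
import numpy as np

def conv3x3(x, w):
    """Naive 3x3 convolution with same padding.

    x: (C_in, H, W), w: (C_out, C_in, 3, 3) -> (C_out, H, W).
    """
    C_in, H, W = x.shape
    C_out = w.shape[0]
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))  # zero-pad H and W by 1
    out = np.zeros((C_out, H, W))
    for o in range(C_out):
        for c in range(C_in):
            for i in range(3):
                for j in range(3):
                    out[o] += w[o, c, i, j] * xp[c, i:i + H, j:j + W]
    return out

def residual_block(x, w1, w2):
    """out = conv2(relu(conv1(x))) + x, assuming in_ch == out_ch."""
    h = np.maximum(conv3x3(x, w1), 0.0)  # first 3x3 conv + ReLU
    return conv3x3(h, w2) + x            # second 3x3 conv + skip
```

With zero weights the block reduces to the identity, which is the property that makes residual blocks easy to train.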
ResidualBlockUpsample¶
Bases: Module
Residual block with sub-pixel upsampling on the last convolution.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `in_ch` | `int` | Number of input channels | required |
| `out_ch` | `int` | Number of output channels | required |
| `upsample` | `int` | Upsampling factor | `2` |
ResidualBlockWithStride¶
Bases: Module
Residual block with a stride on the first convolution.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `in_ch` | `int` | Number of input channels | required |
| `out_ch` | `int` | Number of output channels | required |
| `stride` | `int` | Stride value | `2` |
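With `stride=2` the first convolution halves the spatial dimensions. Assuming the usual 3x3 convention of padding 1 (an assumption; this page does not state the padding), the output size follows the standard formula:

```python
def conv_out_size(n, k=3, stride=2, pad=1):
    """Spatial size after a k x k convolution: (n + 2*pad - k) // stride + 1."""
    return (n + 2 * pad - k) // stride + 1
```

For a 32-pixel input this gives 16, and odd sizes round up (17 -> 9), matching the familiar "halve the feature map" behavior of strided encoder blocks.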