The table shows the results of a user study comparing our method against the state of the art on the Cityscapes dataset.
Our method is preferred over BPG, even when BPG uses more than twice the number of bits per pixel (bpp).
In the paper, we obtain similar results for the ADE20K dataset and the well-known Kodak compression benchmark.
We measure how well semantics are preserved via the mIoU achieved by a PSPNet pre-trained for semantic segmentation on Cityscapes, shown in the following figure.
We obtain a significantly higher mIoU compared to BPG, which is further improved when guiding the training with semantics.
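As a rough illustration of the metric (not the evaluation code from the paper), the mIoU between two label maps averages per-class intersection-over-union; here the label maps are assumed to come from running the pre-trained PSPNet on the original and the reconstructed image:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union between two integer label maps.

    pred/target: arrays of class indices (e.g. PSPNet outputs on the
    reconstructed vs. the original image). Classes absent from both
    maps are skipped so they do not distort the average.
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Identical label maps give a perfect score of 1.0.
labels = np.array([[0, 1], [1, 2]])
print(mean_iou(labels, labels, num_classes=3))
```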
Our method allows for selectively preserving some regions while fully synthesizing the rest of the image (keeping the semantics intact).
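Conceptually, the selective preservation above amounts to compositing preserved pixels with synthesized content under a binary mask; a minimal sketch (the mask and the synthesis step are assumptions, not the paper's pipeline):

```python
import numpy as np

def composite(original, synthesized, mask):
    """Keep `original` where mask == 1, use `synthesized` elsewhere.

    original/synthesized: float image arrays of the same shape.
    mask: binary array broadcastable to that shape (1 = preserve).
    """
    mask = mask.astype(original.dtype)
    return mask * original + (1.0 - mask) * synthesized

# Preserve the left half of a toy 2x2 "image", synthesize the right half.
orig = np.array([[1.0, 1.0], [1.0, 1.0]])
synth = np.array([[0.0, 0.0], [0.0, 0.0]])
mask = np.array([[1, 0], [1, 0]])
print(composite(orig, synth, mask))
```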
Check the paper for more details!