Friday, November 27, 2015

Future Directions in Lossless Compression

My current guesses on where this field could go. This is biased towards asymmetric codecs (offline compression for data distribution, not real-time compression/decompression).

Short Term

LZ4: Higher ratio using near-optimal parsing but same basic decompressor/instruction set (note I doubt the LZ4 arrow can move up as much as I've illustrated).

LZHAM: Faster decompression by breaking out the literals/delta literals/matches into separate entropy coded blocks and switching to a new entropy decoder. Other ideas: multithreaded entropy decoding, and combining multiple binary symbols into single non-binary symbols (see the sketch after this list).

ZSTD: Refine current implementation: stronger compressor, profile and optimize all loops.

BROTLI: Brotli's place on the decompression frontier is currently too fuzzy on the ratio axis, so an easy prediction is the compressor will get tightened up. Its current entropy coding design may have trouble expanding to the right much further.
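
As a rough illustration of the last LZHAM idea above (fusing several serially dependent binary decisions into one non-binary symbol), here's a minimal sketch. The enum layout is made up for illustration and is not LZHAM's actual bitstream; the point is simply that one table-driven decode of a small fused alphabet can replace up to three dependent binary decodes.

// Sketch: fuse the is_match / is_rep / rep-index binary decisions into one
// small alphabet. Decoding one symbol from this alphabet (e.g. via a Huffman
// table) replaces up to three dependent binary symbol decodes; unpacking the
// decisions afterwards is just integer math. Illustrative layout only.
#include <cassert>
#include <cstdint>

enum FusedOp : uint32_t {
    kLiteral = 0,                 // was: is_match = 0
    kFullMatch,                   // was: is_match = 1, is_rep = 0
    kRep0, kRep1, kRep2, kRep3,   // was: is_match = 1, is_rep = 1, 2-bit rep index
    kFusedAlphabetSize
};

struct DecodedOp {
    bool is_match;
    bool is_rep;
    uint32_t rep_index;  // only meaningful when is_rep is true
};

static DecodedOp unpack_fused_op(uint32_t sym) {
    assert(sym < kFusedAlphabetSize);
    DecodedOp op = {};
    op.is_match = (sym != kLiteral);
    op.is_rep = (sym >= kRep0);
    op.rep_index = op.is_rep ? (sym - kRep0) : 0;
    return op;
}

int main() {
    DecodedOp op = unpack_fused_op(kRep2);
    assert(op.is_match && op.is_rep && op.rep_index == 2);
    return 0;
}

The decoder-side win is fewer adaptive coder round trips per LZ op; the cost is a larger alphabet and, for adaptive schemes, more contexts to maintain.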

Long Term

New Territory: Theoretical future "holy grail" codec which will obsolete most other codecs. Once this codec is on the scene most others will be as relevant as compress. If you are working in the compression space commercially this is where you should be heading.

Note the circle is rough. I tried to roughly match Brotli's ratio in region 4, but it could go higher to be closer to LZMA/LZHAM.

Some ideas: blocked interleaved entropy coding with SIMD optimizations, entropy decoding in parallel with decompression, near-optimal parsing with best of X arrivals, cloud compression to search through hundreds of compression options, universal preprocessing, LZMA-like instruction set/state machine, rep matches with relative distances, partial matches with compressed fixup sideband.

Until very recently I thought codecs in this region were impossible.

Another interesting place to target is directly to the right of region 3 (or directly above region 2). Target this spot too and you've redefined the entire frontier.

Interesting Recent Developments

- Key paper showing a very promising path forward in the entropy decoding space:

- Squash benchmark - Easily explore the various frontiers with a variety of test data and CPUs.

- Squash library - Universal compression library wrapping over 30 codecs behind a single API with streaming support.

- Zstd - Promising new codec showing interesting ways of breaking up the usual monolithic decoder loop into separate blocks (i.e. it decouples entropy decoding from the main decoder loop).

Thursday, November 26, 2015

Quick Survey of the Lossless Decompression Pareto Frontier

I first learned about the compression "Pareto Frontier" concept on Matt Mahoney's Large Text Compression Benchmark page. Those charts are for compression throughput vs. ratio, not decompression throughput vs. ratio, which I personally find far more interesting. Simple charts like this allow engineers to judge at a glance which codec(s) they should consider for specific use cases.
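
For concreteness, here's a minimal sketch of what a decompression-throughput vs. ratio Pareto frontier is: keep only the codecs that no other codec beats on both axes at once. The codec names and numbers below are placeholders, not Squash measurements.

// Sketch: extract the decompression-throughput vs. ratio Pareto frontier from
// a set of benchmark points (placeholder data, not real benchmark results).
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

struct CodecPoint {
    std::string name;
    double ratio;        // uncompressed_size / compressed_size (higher is better)
    double decomp_mbps;  // decompression throughput in MB/s (higher is better)
};

// A codec is on the frontier if every faster codec has a worse ratio.
static std::vector<CodecPoint> pareto_frontier(std::vector<CodecPoint> pts) {
    std::sort(pts.begin(), pts.end(),
              [](const CodecPoint &a, const CodecPoint &b) { return a.decomp_mbps > b.decomp_mbps; });
    std::vector<CodecPoint> frontier;
    double best_ratio = 0.0;
    for (const CodecPoint &p : pts) {
        if (p.ratio > best_ratio) {  // strictly improves on every faster codec's ratio
            frontier.push_back(p);
            best_ratio = p.ratio;
        }
    }
    return frontier;
}

int main() {
    // Made-up points purely to show the shape of the computation.
    std::vector<CodecPoint> pts = {
        { "fast_codec",      2.1, 1800.0 },
        { "mid_codec",       3.0,  400.0 },
        { "strong_codec",    3.6,   60.0 },
        { "dominated_codec", 2.0,  300.0 },  // worse than mid_codec on both axes
    };
    for (const CodecPoint &p : pareto_frontier(pts))
        std::printf("%s: ratio %.2f, %.0f MB/s\n", p.name.c_str(), p.ratio, p.decomp_mbps);
    return 0;
}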

This chart was generated by the Squash Benchmark (options selected: Core i5-2400, 20.61MB of tarred Samba source code). Using exclusively text data is sub-optimal for a comparison like this, but this was one of the larger files in the Squash corpus. The ugly circles are my loose categorizations (or clusterizations):

1. Speed Kings, compression and decompression throughput >= disk read rate
Examples: LZ4, Snappy
Typical properties: low memory, low ratio, block-based API, symmetrical (super fast compression/decompression)

2. High ratio, decompression throughput >= avg network read rate
Examples: LZMA, LZHAM, LZNA, Brotli
Properties: Asymmetrical (compression is kinda slow but tolerable), decompression is fast enough to occur in parallel with the network download (so it's "free"), the ideal codec has the best ratio while decompressing just fast enough to overlap with the download

3. Godlike ratio, decompression throughput < avg network rate
Examples: PAQ series
Properties: Needs godly amounts of RAM to match its godly ratio, extremely slow (seconds per MB), symmetrical

Possible use is for data that must be transmitted to a remote destination with lots of compute but an extremely limited network or radio link (deep space?)

Note: I made 3's circle on the ratio axis very wide because in practice I've seen PAQ's ratio tank on many binary files.

4. Intermediate ratio, decompression throughput > avg network rate, but < disk read rate
Examples: zlib, zstd
Major property: Symmetrical (fast compression and decompression), very low to reasonable amount of RAM

5. Specialized/wildcards
Examples: Heatshrink for embedded CPUs, or RAM compression
Properties: Low ratio, work memory: fixed, extremely low, or none, code size: usually tiny

Note there are some outliers that defy this simple categorization, like LZ4-HC which is cat 1 but with a cat 4-like ratio.


- brotli appears to have pushed the decompression frontier forward and endangered (obsoleted?) several codecs. It's even endangering several region 4 codecs (but its compressor isn't as fast as the other region 4 codecs).

Right now Brotli's compressor is still getting tuned, and will undoubtedly improve. It's currently weak on large binary files, and its max dictionary size is not big enough (so it's not as strong for large files/archives, and it'll never fare very well on huge file benchmarks until this is fixed). So its true position on the frontier is fuzzy, i.e. somewhat dependent on your source data.

- zstd is smack dab in the middle of region 4. If it moves right just a bit more (faster decompression) it's going to obsolete a bunch of codecs in its category.

If zstd's decompressor is sped up and it gets a stronger parser, it could be a formidable competitor.

Until zstd's decompressor is further optimized, brotli puts it in danger.

- brotli support should be added to 7-zip as a plugin. Actually, probably all the major Decompression Frontier leaders should be added to 7-zip because they all have value in different usage scenarios.

- LZHAM must move to the right of this graph or it's in trouble. Switching the literals, delta literals, and the match/len symbols over to using Zstd's blocked coding scheme seems like the right path forward.

- Perhaps the "Holy Grail" in practical lossless compression is a region 1 codec with region 2-like ratio. (Is this even possible?) Maybe a highly asymmetrical codec with a hyper-fast SIMD entropy decoder could do it.

Saturday, November 21, 2015

LZHAM custom codec plugin for 7-zip v15.12

7-zip is a powerful and reliable command line and GUI archiver. I've been privately using 7-zip to thoroughly test the LZHAM codec's streaming API for several years. There's been enough interest in this plugin that I've finally decided to make it an official part of LZHAM.

Why bother using this? Because LZHAM extracts and tests around 2x-3x faster than LZMA, with similar compression ratios.

Importantly, if you create any archives using this custom codec DLL, you'll (obviously) also need this DLL to extract those archives. The LZHAM v1.x bitstream format is locked in stone, so future DLLs using newer versions of LZHAM will be forwards and backwards compatible with this release.

You can find the source to the plugin on github here. The plugin itself lives in the lzham7zip directory.


I've uploaded precompiled DLLs for the x86 and x64 versions of 7-zip v15.12 here.

To use this, create a new directory named "codecs" wherever you installed 7-zip, then copy the correct DLL (either x86 or x64) into this directory. For example, if you've installed the 32-bit version of 7-zip, extract the file LzhamCodec_x86.dll into "C:\Program Files (x86)\7-Zip\codecs". For the 64-bit version, extract it into  "C:\Program Files\7-Zip\codecs".

To verify the installation, enter "7z.exe i" in a command prompt (cmd.exe) to list all the installed codecs. You should see this:

 0  ED  6F00181 AES256CBC
 1  ED  4F71001 LZHAM

Build Instructions

If you want to compile this yourself, first grab the source code to 7-zip v15.12 and extract the archive somewhere. Next, "git clone" the lzham_codec_devel repo into this directory. Your final directory structure should be:

11/21/2015  05:00 PM    <DIR>          .
11/21/2015  05:00 PM    <DIR>          ..
11/21/2015  05:00 PM    <DIR>          Asm
11/21/2015  05:00 PM    <DIR>          bin
11/21/2015  05:00 PM    <DIR>          C
11/21/2015  05:00 PM    <DIR>          CPP
11/21/2015  05:00 PM    <DIR>          DOC
11/21/2015  05:00 PM    <DIR>          lzham_codec_devel

Now load this Visual Studio solution: lzham_codec_devel/lzham7zip/LzhamCodec.sln.

It builds with VS 2010 and VS 2015. The DLLs will be output into the bin directory.

Usage Instructions

Example command line:

7z -m0=LZHAM -ma=0 -mmt=8 -mx=9 -md=64M a temp.7z *.dll

For maximum compression, use the 64-bit version and use:

7z -m0=LZHAM -ma=0 -mmt=8 -mx=9 -md=512M a temp.7z *.dll

Command line options (also see this guide):

-m0=LZHAM - Use LZHAM for compression
-ma=[0,1] - 0=non-deterministic mode, 1=deterministic (slower)
-mmt=X - Set total # of threads to use. Default = total number of CPUs.
-mx=[0-9] - Compression mode, 0=fastest, 9=slowest
-md={Size}[b|k|m] - Set dictionary size, up to 64MB in x86, 512MB in x64

Unfortunately, the file manager GUI doesn't allow you to select LZHAM via the UI. Instead, you must specify custom parameters: 0=bcj 1=lzham mt=8

Note if you don't specify "mt=X", where X is the number of threads to use for compression, LZHAM will just use whatever value is in the GUI's "Number of CPU threads" pulldown (1 or 2 threads), which will be very slow.

Thursday, November 12, 2015

Visualizing the Calgary compression corpus

The Calgary corpus is a collection of text and binary files commonly used to benchmark and test lossless compression programs. It's now quite dated, but it still has some value because the corpus is so well known.

Anyhow, here are the files visualized using the same approach described in my previous blog post. The block size (each pixel) = 512 bytes.

paper1 and paper6 have some interesting shifts at the ends of each file, which correspond to the bottom-right sections of the images. It turns out these are the appendices, which have very different content than the rest of each paper.

Wednesday, November 11, 2015

Visualization of file contents using LZ compression

I love tools that can create cool looking images out of piles of raw bits.

These images were created by a new tool I've been working on named "fileview", a file data visualization and debugging tool similar to Matt Mahoney's useful FV utility. FV visualizes string match lengths and distances at the byte level, while this tool visualizes the compressibility of each file block using every other block as a static dictionary. Tools like this reveal high-level file structure and the overall compressibility of each region in a file. These images were computed from single files, but it's possible to build them from two different files, too.

Each pixel represents the ratio of matched bytes for a single source file block. The block size ranges from 32KB to 2MB depending on the source file's size. To compute the image, the tool compresses every block in the file against every other block, using LZ77 greedy parsing against a static dictionary built from the other block. The dictionary is not updated as the block is compressed, unlike standard LZ. Each pixel is set to 255*total_match_bytes/block_size.

The brighter a pixel is in this visualization, the more the two blocks being compared resemble each other in an LZ compression ratio sense. The first scanline shows how compressible each block is using a static dictionary built from the first block, the second scanline uses a dictionary from the 2nd block, etc.:

X axis = Block being compressed
Y axis = Static dictionary block

The diagonal pixels are all-white, which makes sense because a block LZ compressed using a static dictionary built from itself should be perfectly compressible (i.e. just one big match).
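
For anyone curious, here's a simplified sketch of the per-pixel measurement described above: greedy LZ-style matching of one block against a static dictionary built from another block, with the pixel set to 255*total_match_bytes/block_size. It keeps a single hash candidate per 4-byte key and does no lazy matching, so it's a stand-in for the idea, not the actual fileview code.

// Sketch: LZ-style greedy matching of 'block' against a static dictionary
// built from 'dict_block'. The dictionary is never updated while matching,
// mirroring the description above. Returns one pixel value (0..255).
#include <cstdint>
#include <cstring>
#include <unordered_map>
#include <vector>

static uint32_t read_u32(const uint8_t *p) {
    uint32_t v;
    std::memcpy(&v, p, sizeof(v));
    return v;
}

static uint32_t block_similarity(const std::vector<uint8_t> &dict_block,
                                 const std::vector<uint8_t> &block) {
    const size_t kMinMatch = 4;
    if (block.empty() || dict_block.size() < kMinMatch)
        return 0;

    // Static dictionary: map each 4-byte sequence in dict_block to one position.
    std::unordered_map<uint32_t, size_t> dict;
    for (size_t i = 0; i + kMinMatch <= dict_block.size(); ++i)
        dict[read_u32(&dict_block[i])] = i;

    size_t total_match_bytes = 0, pos = 0;
    while (pos + kMinMatch <= block.size()) {
        auto it = dict.find(read_u32(&block[pos]));
        if (it == dict.end()) {
            ++pos;  // unmatched literal byte
            continue;
        }
        // Greedy parse: extend the match as far as both blocks allow.
        size_t dpos = it->second, len = 0;
        while (pos + len < block.size() && dpos + len < dict_block.size() &&
               block[pos + len] == dict_block[dpos + len])
            ++len;
        total_match_bytes += len;
        pos += len;
    }
    return static_cast<uint32_t>((255 * total_match_bytes) / block.size());
}

int main() {
    // Two identical blocks should score at (or near) full brightness.
    std::vector<uint8_t> a(4096, 0x41), b(4096, 0x41);
    return block_similarity(a, b) >= 250 ? 0 : 1;
}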

- High-resolution car image in BMP format:

Source image in PNG format:

- An old Adobe installer executable - notice the compressed data block near the beginning of the file:

- A test file from my data corpus that caused an early implementation of LZHAM codec's "best of X arrivals" parser to slow to an absolute crawl:

- enwik8 (XML text) - a highly compressible text file, so all blocks highly resemble each other:

- Test file (cp32e406.exe) from my corpus containing large amounts of compressed data with a handful of similar blocks:

- Hubble space telescope image in BMP format:

Source image in PNG format:

- Large amount of JSON text from a game:

- Unity46.exe:

- Unity demo screenshot in BMP format:

- English word list text file:

Monday, November 2, 2015

More on bsdiff and delta compression

bsdiff is a simple delta compression algorithm, and it performs well compared to its open and closed source competitors. (Note that bsdiff doesn't scale to large files due to its memory requirements, but that's somewhat fixable by breaking the problem up into blocks.) It also beats LZHAM in static dictionary mode by a large margin, and I want to understand why.

It conceptually works in three high-level phases:

1. Build suffix array of original file

2. Preprocess the new file, using the suffix array to find exact or approximate matches against the original file. This results in a series of 24-byte patch control commands, followed by a "diff" block (bytes added to various regions of the original file) and an "extra" block (unmatched literal data).

3. Separately compress this data using bzip2, with a 512KB block size.

As a quick experiment, switching step 3 to LZMA or LZHAM results in easy gains (no surprise as bzip2 is pretty dated):

Original file: Unity510.exe 52,680,480
New file: Unity512.exe 52,740,568

bsdiff preprocess+lzma: 3,012,581
bsdiff preprocess+lzham_devel: 3,152,188
bsdiff preprocess+bzip2: 3,810,343
lzham_devel (in delta mode, seed dict=Unity510.exe): 4,831,025

The bsdiff patch control blocks consist of a series of Add/Copy/Skip triples (x, y, z). Each command consists of three 8-byte values:

- Add x bytes from the original file to x bytes from the diff block, write results to output
(Literally - add each individual byte.)

- Copy y bytes from the extra block to output

- Skip forwards or backwards z bytes in the original file

Most of the output consists of diff bytes in this test:

Commands: 33,111 (794,664 bytes)
Diff bytes: 51,023,396
Extra bytes: 1,717,172

The bsdiff control block data can be viewed as a sort of program that outputs the new file, given the old file and a list of "addcopyskip x,y,z" instructions along with three automatically updated machine "registers": the offsets into the old file data, the diff bytes block, and the extra bytes block. bsdiff then compresses this program and the two data blocks (diff+extra) individually with bzip2.
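
Here's a minimal sketch of executing that "program", in the spirit of bspatch but ignoring the bzip2 layer and the on-disk format (the struct and function names are mine, not bsdiff's):

// Sketch: apply an add/copy/skip control stream to rebuild the new file from
// the old file, a diff block, and an extra block. No bounds/format checking
// beyond asserts; not a drop-in bspatch replacement.
#include <cassert>
#include <cstdint>
#include <vector>

struct Control { int64_t add_len, copy_len, seek; };  // the (x, y, z) triple

static std::vector<uint8_t> apply_patch(const std::vector<uint8_t> &old_file,
                                        const std::vector<Control> &controls,
                                        const std::vector<uint8_t> &diff_block,
                                        const std::vector<uint8_t> &extra_block) {
    std::vector<uint8_t> out;
    size_t old_pos = 0, diff_pos = 0, extra_pos = 0;
    for (const Control &c : controls) {
        // 1) Add: byte-wise sum of old file bytes and diff block bytes.
        for (size_t i = 0; i < static_cast<size_t>(c.add_len); ++i) {
            assert(old_pos + i < old_file.size() && diff_pos + i < diff_block.size());
            out.push_back(static_cast<uint8_t>(old_file[old_pos + i] + diff_block[diff_pos + i]));
        }
        old_pos += static_cast<size_t>(c.add_len);
        diff_pos += static_cast<size_t>(c.add_len);

        // 2) Copy: unmatched literals straight from the extra block.
        for (size_t i = 0; i < static_cast<size_t>(c.copy_len); ++i) {
            assert(extra_pos + i < extra_block.size());
            out.push_back(extra_block[extra_pos + i]);
        }
        extra_pos += static_cast<size_t>(c.copy_len);

        // 3) Skip: move the old file pointer forwards or backwards.
        old_pos = static_cast<size_t>(static_cast<int64_t>(old_pos) + c.seek);
    }
    return out;
}

int main() {
    // Tiny example: old = "ABCD", new = "ABCE" (one byte bumped) plus "XY" appended.
    std::vector<uint8_t> old_file = { 'A', 'B', 'C', 'D' };
    std::vector<uint8_t> diff     = { 0, 0, 0, 1 };   // mostly zeros, as in real patches
    std::vector<uint8_t> extra    = { 'X', 'Y' };
    std::vector<Control> ctrl     = { { 4, 2, 0 } };
    std::vector<uint8_t> out = apply_patch(old_file, ctrl, diff, extra);
    assert(out.size() == 6 && out[3] == 'E' && out[4] == 'X');
    return 0;
}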

I think there are several key reasons why bsdiff works well relative to LZHAM with a static dictionary (and LZMA too, except it doesn't have explicit support for static dictionaries):

- LZHAM has no explicit way of referencing the original file (the seed dictionary). It must encode high match distances to reach back into the seed dictionary, beyond the already decoded data. This is expensive, and grows increasingly so as the dictionary grows larger.

- bsdiff is able to move its original file pointer either forwards or backwards, by encoding the distance to travel and a sign bit. LZHAM can only reuse previous match distances (from the 4-entry LRU match history buffer), or it must spend many bits coding absolute match distances.

- LZHAM doesn't have delta matches (bsdiff's add operation) to efficiently encode the "second order" changes discussed in Colin Percival's thesis. The closest it gets is special handling of single-byte LAMs (literals after matches): it XOR's the mismatched byte against the lookahead byte and codes the result with Huffman coding. But immediately after a LAM it always falls back to plain literals or exact matches.

- The bsdiff approach is able to apply the full bzip2 machinery to the diff bytes. In this file, most diff bytes are 0 with sprinklings of 2 or 3 byte deltas. Patterns are readily apparent in the delta data.

After some thought, one way to enhance LZHAM with stronger support for delta compression would be to support multibyte LAMs (perhaps by outputting the LAM length with arithmetic coding when in LAM mode), and maybe to add the ability to encode REP matches with delta distances.
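
To see why multibyte LAMs look attractive here, consider a toy example (not LZHAM's actual LAM coding): XOR'ing a short run of lookahead bytes against the corresponding old-file bytes turns bsdiff-style "second order" changes into mostly-zero deltas, which are exactly what an entropy coder handles cheaply.

// Toy example: XOR a run of new-file bytes against the corresponding old-file
// bytes. Where the files agree the delta is zero, so the delta stream is highly
// compressible. Illustrative only; not LZHAM's actual LAM handling.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

static std::vector<uint8_t> xor_deltas(const std::vector<uint8_t> &old_bytes,
                                       const std::vector<uint8_t> &new_bytes) {
    size_t n = std::min(old_bytes.size(), new_bytes.size());
    std::vector<uint8_t> deltas(n);
    for (size_t i = 0; i < n; ++i)
        deltas[i] = old_bytes[i] ^ new_bytes[i];  // 0 wherever the two files agree
    return deltas;
}

int main() {
    std::vector<uint8_t> old_bytes = { 0x10, 0x20, 0x30, 0x40, 0x50, 0x60 };
    std::vector<uint8_t> new_bytes = { 0x10, 0x21, 0x30, 0x40, 0x53, 0x60 };
    size_t zeros = 0;
    for (uint8_t d : xor_deltas(old_bytes, new_bytes))
        zeros += (d == 0);
    std::printf("%zu of 6 deltas are zero\n", zeros);  // prints "4 of 6 deltas are zero"
    return 0;
}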

Or, use LZHAM or LZMA as-is and do the delta compression step as a preprocess, just as bsdiff does with bzip2.

A few observations about Unity

I've only been at Unity Technologies for a month, and obviously I have much to learn. Here are a few things I've picked up so far:

High programmer empathy, low ego developers tend to be successful here.

Interesting company attributes: Distributed, open source style development model, team oriented, high cooperation. Automated testing, good processes, build tools, etc. are very highly valued. "Elegant code" is a serious concept at Unity.

Hiring: Flexible. Unity has local offices almost everywhere, giving it great freedom in hiring.