* Add Perl 5.34
Fri May 28 11:05:31 2021 +0100
d09a13acfe81f8c4f5fa2b29b35a2c4bbbee095a
0.202 2020-07-30
- Add ZSTD_MIN_CLEVEL
5039d7e079f68760c86b94ef3fc0b2cf339f84b2
0.201 2020-07-20
- branch to Compress-Stream-Zstd
- added extra streaming support
0.20 2019-09-29T10:25:55Z
- zstd 1.4.3
0.19 2019-08-12T08:33:36Z
- zstd 1.4.2
0.18 2019-04-29T08:45:05Z
- zstd 1.4.0
0.13 2018-04-01T03:50:50Z
- zstd 1.3.4
0.12 2018-03-19T17:28:58Z
- zstd 1.3.3
0.11 2017-10-28T09:58:01Z
- zstd 1.3.2
0.10 2017-09-30T07:13:12Z
- Add streaming interfaces
0.06 2017-09-24T12:23:22Z
- zstd 1.3.1
- Support >= Perl 5.26.0 (@INC problem)
- Support *BSD
0.05 2017-03-18T08:02:21Z
- zstd 1.1.4
0.04 2017-03-11T09:32:56Z
lib/Compress/Stream/Zstd/CompressionContext.pm
lib/Compress/Stream/Zstd/CompressionDictionary.pm
lib/Compress/Stream/Zstd/Compressor.pm
lib/Compress/Stream/Zstd/DecompressionContext.pm
lib/Compress/Stream/Zstd/DecompressionDictionary.pm
lib/Compress/Stream/Zstd/Decompressor.pm
minil.toml
ppport.h
t/00_compile.t
t/01_basic.t
t/02_streaming.t
t/03_context.t
t/04_dictionary.t
t/05_streaming_windowlog.t
t/test.dic
typemap
xt/01_leaktrace.t
ext/zstd/CHANGELOG
ext/zstd/CODE_OF_CONDUCT.md
ext/zstd/CONTRIBUTING.md
ext/zstd/COPYING
ext/zstd/LICENSE
ext/zstd/Makefile
ext/zstd/Package.swift
ext/zstd/doc/images/zstd_cdict_v1_3_5.png
ext/zstd/doc/images/zstd_logo86.png
ext/zstd/doc/zstd_compression_format.md
ext/zstd/doc/zstd_manual.html
ext/zstd/examples/Makefile
ext/zstd/examples/README.md
ext/zstd/examples/common.h
ext/zstd/examples/dictionary_compression.c
ext/zstd/examples/dictionary_decompression.c
ext/zstd/examples/multiple_simple_compression.c
ext/zstd/examples/multiple_streaming_compression.c
ext/zstd/examples/simple_compression.c
ext/zstd/examples/simple_decompression.c
ext/zstd/examples/streaming_compression.c
ext/zstd/examples/streaming_compression_thread_pool.c
ext/zstd/examples/streaming_decompression.c
ext/zstd/examples/streaming_memory_usage.c
ext/zstd/lib/BUCK
ext/zstd/lib/Makefile
ext/zstd/lib/README.md
ext/zstd/lib/common/allocations.h
ext/zstd/lib/common/bits.h
ext/zstd/lib/common/bitstream.h
ext/zstd/lib/common/compiler.h
ext/zstd/lib/common/cpu.h
ext/zstd/lib/common/debug.c
ext/zstd/lib/common/debug.h
[Build status](https://github.com/pmqs/Compress-Stream-Zstd/actions)
# NAME
Compress::Stream::Zstd - Perl interface to the Zstd (Zstandard) (de)compressor
# NOTE
This module is a fork of [Compress-Zstd](https://github.com/spiritloose/Compress-Zstd).
It contains a few changes to make streaming compression/decompression more robust.
The only reason for this fork is to allow the module to work with `IO-Compress-Zstd`.
The hope is that the changes made here can be merged back upstream and this module can be retired.
# SYNOPSIS
use Compress::Stream::Zstd;
my $compressed = compress($bytes);
my $decompressed = decompress($compressed);
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# AUTHOR
Jiro Nishiguchi <jiro@cpan.org>
Some streaming enhancements by Paul Marquess <pmqs@cpan.org>
Zstandard by Facebook, Inc.
ext/zstd/CHANGELOG
build: fix MSVC+ClangCL linking issue (#3569) by @tru
build: fix zstd-dll, version of zstd CLI that links to the dynamic library (#3496) by @yoniko
build: fix MSVC warnings (#3495) by @embg
doc: updated zstd specification to clarify corner cases, by @Cyan4973
doc: document how to create fat binaries for macos (#3568) by @rickmark
misc: improve seekable format ingestion speed (~+100%) for very small chunk sizes (#3544) by @Cyan4973
misc: tests/fullbench can benchmark multiple files (#3516) by @dloidolt
v1.5.4 (Feb 2023)
perf: +20% faster huffman decompression for targets that can't compile x64 assembly (#3449, @terrelln)
perf: up to +10% faster streaming compression at levels 1-2 (#3114, @embg)
perf: +4-13% for levels 5-12 by optimizing function generation (#3295, @terrelln)
perf: +3-11% compression speed for `arm` target (#3199, #3164, #3145, #3141, #3138, @JunHe77 and #3139, #3160, @danlark1)
perf: +5-30% faster dictionary compression at levels 1-4 (#3086, #3114, #3152, @embg)
perf: +10-20% cold dict compression speed by prefetching CDict tables (#3177, @embg)
perf: +1% faster compression by removing a branch in ZSTD_fast_noDict (#3129, @felixhandte)
perf: Small compression ratio improvements in high compression mode (#2983, #3391, @Cyan4973 and #3285, #3302, @daniellerozenblit)
perf: small speed improvement by better detecting `STATIC_BMI2` for `clang` (#3080, @TocarIP)
perf: Improved streaming performance when `ZSTD_c_stableInBuffer` is set (#2974, @Cyan4973)
cli: Asynchronous I/O for improved cli speed (#2975, #2985, #3021, #3022, @yoniko)
cli: Change `zstdless` behavior to align with `zless` (#2909, @binhdvo)
cli: Keep original file if `-c` or `--stdout` is given (#3052, @dirkmueller)
cli: Keep original files when result is concatenated into a single output with `-o` (#3450, @Cyan4973)
cli: Preserve Permissions and Ownership of regular files (#3432, @felixhandte)
cli: Print zlib/lz4/lzma library versions with `-vv` (#3030, @terrelln)
cli: Print checksum value for single frame files with `-lv` (#3332, @Cyan4973)
cli: Print `dictID` when present with `-lv` (#3184, @htnhan)
cli: when `stderr` is *not* the console, disable status updates, but preserve final summary (#3458, @Cyan4973)
cli: support `--best` and `--no-name` in `gzip` compatibility mode (#3059, @dirkmueller)
ext/zstd/CHANGELOG
cli: Fix decompression memory usage reported by `-vv --long` (#3042, @u1f35c, and #3232, @zengyijing)
cli: Fix infinite loop when empty input is passed to trainer (#3081, @terrelln)
cli: Fix `--adapt` doesn't work when `--no-progress` is also set (#3354, @terrelln)
api: Support for Block-Level Sequence Producer (#3333, @embg)
api: Support for in-place decompression (#3432, @terrelln)
api: New `ZSTD_CCtx_setCParams()` function, set all parameters defined in a `ZSTD_compressionParameters` structure (#3403, @Cyan4973)
api: Streaming decompression detects incorrect header ID sooner (#3175, @Cyan4973)
api: Window size resizing optimization for edge case (#3345, @daniellerozenblit)
api: More accurate error codes for busy-loop scenarios (#3413, #3455, @Cyan4973)
api: Fix limit overflow in `compressBound` and `decompressBound` (#3362, #3373, Cyan4973) reported by @nigeltao
api: Deprecate several advanced experimental functions: streaming (#3408, @embg), copy (#3196, @mileshu)
bug: Fix corruption that rarely occurs in 32-bit mode with wlog=25 (#3361, @terrelln)
bug: Fix for block-splitter (#3033, @Cyan4973)
bug: Fixes for Sequence Compression API (#3023, #3040, @Cyan4973)
bug: Fix leaking thread handles on Windows (#3147, @animalize)
bug: Fix timing issues with cmake/meson builds (#3166, #3167, #3170, @Cyan4973)
build: Allow user to select legacy level for cmake (#3050, @shadchin)
build: Enable legacy support by default in cmake (#3079, @niamster)
build: Meson build script improvements (#3039, #3120, #3122, #3327, #3357, @eli-schwartz and #3276, @neheb)
build: Add aarch64 to supported architectures for zstd_trace (#3054, @ooosssososos)
build: support AIX architecture (#3219, @qiongsiwu)
ext/zstd/CHANGELOG
perf: Improve compression ratio for small windowLog by @cyan4973 (#1624)
perf: Faster compression speed in high compression mode for repetitive data by @terrelln (#1635)
api: Add parameter to generate smaller dictionaries by @tyler-tran (#1656)
cli: Recognize symlinks when built in C99 mode by @felixhandte (#1640)
cli: Expose cpu load indicator for each file on -vv mode by @ephiepark (#1631)
cli: Restrict read permissions on destination files by @chungy (#1644)
cli: zstdgrep: handle -f flag by @felixhandte (#1618)
cli: zstdcat: follow symlinks by @vejnar (#1604)
doc: Remove extra size limit on compressed blocks by @felixhandte (#1689)
doc: Fix typo by @yk-tanigawa (#1633)
doc: Improve documentation on streaming buffer sizes by @cyan4973 (#1629)
build: CMake: support building with LZ4 @leeyoung624 (#1626)
build: CMake: install zstdless and zstdgrep by @leeyoung624 (#1647)
build: CMake: respect existing uninstall target by @j301scott (#1619)
build: Make: skip multithread tests when built without support by @michaelforney (#1620)
build: Make: Fix examples/ test target by @sjnam (#1603)
build: Meson: rename options out of deprecated namespace by @lzutao (#1665)
build: Meson: fix build by @lzutao (#1602)
build: Visual Studio: don't export symbols in static lib by @scharan (#1650)
build: Visual Studio: fix linking by @absotively (#1639)
build: Fix MinGW-W64 build by @myzhang1029 (#1600)
ext/zstd/CHANGELOG
api: Fix ZSTD_decompressDCtx() corner cases with a dictionary
api: Move ZSTD_getDictID_*() functions to the stable section
api: Add ZSTD_c_literalCompressionMode flag to enable or disable literal compression by @terrelln
api: Allow compression parameters to be set when a dictionary is used
api: Allow setting parameters before or after ZSTD_CCtx_loadDictionary() is called
api: Fix ZSTD_estimateCStreamSize_usingCCtxParams()
api: Setting ZSTD_d_maxWindowLog to 0 means use the default
cli: Ensure that a dictionary is not used to compress itself by @shakeelrao
cli: Add --[no-]compress-literals flag to enable or disable literal compression
doc: Update the examples to use the advanced API
doc: Explain how to transition from old streaming functions to the advanced API in the header
build: Improve the Windows release packages
build: Improve CMake build by @hjmjohnson
build: Build fixes for FreeBSD by @lwhsu
build: Remove redundant warnings by @thatsafunnyname
build: Fix tests on OpenBSD by @bket
build: Extend fuzzer build system to work with the new clang engine
build: CMake now creates the libzstd.so.1 symlink
build: Improve Meson build by @lzutao
misc: Fix symbolic link detection on FreeBSD
misc: Use physical core count for -T0 on FreeBSD by @cemeyer
ext/zstd/CHANGELOG
misc: all /contrib projects fixed
misc: added /contrib/docker script by @gyscos
v1.3.3 (Dec 21, 2017)
perf: faster zstd_opt strategy (levels 16-19)
fix : bug #944 : multithreading with shared dictionary and large data, reported by @gsliepen
cli : fix : content size written in header by default
cli : fix : improved LZ4 format support, by @felixhandte
cli : new : hidden command `-S`, to benchmark multiple files while generating one result per file
api : fix : support large skippable frames, by @terrelln
api : fix : streaming interface was adding a useless 3-bytes null block to small frames
api : change : when setting `pledgedSrcSize`, use `ZSTD_CONTENTSIZE_UNKNOWN` macro value to mean "unknown"
build: fix : compilation under rhel6 and centos6, reported by @pixelb
build: added `check` target
v1.3.2 (Oct 10, 2017)
new : long range mode, using --long command, by Stella Lau (@stellamplau)
new : ability to generate and decode magicless frames (#591)
changed : maximum nb of threads reduced to 200, to avoid address space exhaustion in 32-bits mode
fix : multi-threading compression works with custom allocators
fix : ZSTD_sizeof_CStream() was over-evaluating memory usage
ext/zstd/CHANGELOG
cli : improved --list output
cli : new : can split input file for dictionary training, using command -B#
cli : new : clean operation artefact on Ctrl-C interruption
cli : fix : do not change /dev/null permissions when using command -t with root access, reported by @mike155 (#851)
cli : fix : write file size in header in multiple-files mode
api : added macro ZSTD_COMPRESSBOUND() for static allocation
api : experimental : new advanced decompression API
api : fix : sizeof_CCtx() used to over-estimate
build: fix : no-multithread variant compiles without pool.c dependency, reported by Mitchell Blank Jr (@mitchblank) (#819)
build: better compatibility with reproducible builds, by Bernhard M. Wiedemann (@bmwiedemann) (#818)
example : added streaming_memory_usage
license : changed /examples license to BSD + GPLv2
license : fix a few header files to reflect new license (#825)
v1.3.1 (Aug 21, 2017)
New license : BSD + GPLv2
perf: substantially decreased memory usage in Multi-threading mode, thanks to reports by Tino Reichardt (@mcmilk)
perf: Multi-threading supports up to 256 threads. Cap at 256 when more are requested (#760)
cli : improved and fixed --list command, by @ib (#772)
cli : command -vV to list supported formats, by @ib (#771)
build : fixed binary variants, reported by @svenha (#788)
ext/zstd/CHANGELOG
build: enabled Multi-threading support for *BSD, by Baptiste Daroussin
tools: updated Paramgrill. Command -O# provides best parameters for sample and speed target.
new : contrib/linux-kernel version, by Nick Terrell
v1.1.4 (Mar 18, 2017)
cli : new : can compress in *.gz format, using --format=gzip command, by Przemyslaw Skibinski
cli : new : advanced benchmark command --priority=rt
cli : fix : write on sparse-enabled file systems in 32-bits mode, by @ds77
cli : fix : --rm remains silent when input is stdin
cli : experimental : xzstd, with support for xz/lzma decoding, by Przemyslaw Skibinski
speed : improved decompression speed in streaming mode for single shot scenarios (+5%)
memory: DDict (decompression dictionary) memory usage down from 150 KB to 20 KB
arch: 32-bits variant able to generate and decode very long matches (>32 MB), by Sean Purcell
API : new : ZSTD_findFrameCompressedSize(), ZSTD_getFrameContentSize(), ZSTD_findDecompressedSize()
API : changed : dropped support of legacy versions <= v0.3 (can be changed by modifying ZSTD_LEGACY_SUPPORT value)
build : new: meson build system in contrib/meson, by Dima Krasner
build : improved cmake script, by @Majlen
build : added -Wformat-security flag, as recommended by Padraig Brady
doc : new : educational decoder, by Sean Purcell
v1.1.3 (Feb 7, 2017)
ext/zstd/CHANGELOG
dictBuilder : improved dictionary generation quality, thanks to Nick Terrell
API : new : lib/compress/ZSTDMT_compress.h multithreading API (experimental)
API : new : ZSTD_create?Dict_byReference(), requested by Bartosz Taudul
API : new : ZDICT_finalizeDictionary()
API : fix : ZSTD_initCStream_usingCDict() properly writes dictID into frame header, by Gregory Szorc (#511)
API : fix : all symbols properly exposed in libzstd, by Nick Terrell
build : support for Solaris target, by Przemyslaw Skibinski
doc : clarified specification, by Sean Purcell
v1.1.2 (Dec 15, 2016)
API : streaming : decompression : changed : automatic implicit reset when chain-decoding new frames without init
API : experimental : added : dictID retrieval functions, and ZSTD_initCStream_srcSize()
API : zbuff : changed : prototypes now generate deprecation warnings
lib : improved : faster decompression speed at ultra compression settings and 32-bits mode
lib : changed : only public ZSTD_ symbols are now exposed
lib : changed : reduced usage of stack memory
lib : fixed : several corner case bugs, by Nick Terrell
cli : new : gzstd, experimental version able to decode .gz files, by Przemyslaw Skibinski
cli : new : preserve file attributes
cli : new : added zstdless and zstdgrep tools
cli : fixed : status displays total amount decoded, even for file consisting of multiple frames (like pzstd)
cli : fixed : zstdcat
zlib_wrapper : added support for gz* functions, by Przemyslaw Skibinski
install : better compatibility with FreeBSD, by Dimitry Andric
source tree : changed : zbuff source files moved to lib/deprecated
v1.1.1 (Nov 2, 2016)
New : command -M#, --memory=, --memlimit=, --memlimit-decompress= to limit allowed memory consumption
New : doc/zstd_manual.html, by Przemyslaw Skibinski
Improved : slightly better compression ratio at --ultra levels (>= 20)
Improved : better memory usage when using streaming compression API, thanks to @Rogier-5 report
Added : API : ZSTD_initCStream_usingCDict(), ZSTD_initDStream_usingDDict() (experimental section)
Added : example/multiple_streaming_compression.c
Changed : zstd_errors.h is now installed within /include (and replaces errors_public.h)
Updated man page
Fixed : zstd-small, zstd-compress and zstd-decompress compilation targets
v1.1.0 (Sep 28, 2016)
New : contrib/pzstd, parallel version of zstd, by Nick Terrell
added : NetBSD install target (#338)
Improved : speed for batches of small files
Improved : speed of zlib wrapper, by Przemyslaw Skibinski
Changed : libzstd on Windows supports legacy formats, by Christophe Chevalier
ext/zstd/CHANGELOG
v1.0.0 (Sep 1, 2016)
Change Licensing, all project is now BSD, Copyright Facebook
Small decompression speed improvement
API : Streaming API supports legacy format
API : ZDICT_getDictID(), ZSTD_sizeof_{CCtx, DCtx, CStream, DStream}(), ZSTD_setDStreamParameter()
CLI supports legacy formats v0.4+
Fixed : compression fails on certain huge files, reported by Jesse McGrew
Enhanced documentation, by Przemyslaw Skibinski
v0.8.1 (Aug 18, 2016)
New streaming API
Changed : --ultra now enables levels beyond 19
Changed : -i# now selects benchmark time in second
Fixed : ZSTD_compress* can now compress > 4 GB in a single pass, reported by Nick Terrell
Fixed : speed regression on specific patterns (#272)
Fixed : support for Z_SYNC_FLUSH, by Dmitry Krot (#291)
Fixed : ICC compilation, by Przemyslaw Skibinski
v0.8.0 (Aug 2, 2016)
Improved : better speed on clang and gcc -O2, thanks to Eric Biggers
New : Build on FreeBSD and DragonFly, thanks to JrMarino
ext/zstd/CHANGELOG
v0.5.1 (Feb 18, 2016)
New : Optimal parsing => Very high compression modes, thanks to Przemyslaw Skibinski
Changed : Dictionary builder integrated into libzstd and zstd cli
Changed (!) : zstd cli now uses "multiple input files" as default mode. See `zstd -h`.
Fix : high compression modes for big-endian platforms
New : zstd cli : `-t` | `--test` command
v0.5.0 (Feb 5, 2016)
New : dictionary builder utility
Changed : streaming & dictionary API
Improved : better compression of small data
v0.4.7 (Jan 22, 2016)
Improved : small compression speed improvement in HC mode
Changed : `zstd_decompress.c` has ZSTD_LEGACY_SUPPORT to 0 by default
fix : bt search bug
v0.4.6 (Jan 13, 2016)
fix : fast compression mode on Windows
New : cmake configuration file, thanks to Artyom Dymchenko
ext/zstd/TESTING.md
- Small tests (`tests/legacy.c`, `tests/longmatch.c`) on x86_64
Medium Tests
------------
Medium tests run on every commit and pull request to `dev` branch, on TravisCI.
They consist of the following tests:
- The following tests run with UBsan and Asan on x86_64 and x86, as well as with
Msan on x86_64
- `tests/playTests.sh --test-large-data`
- Fuzzer tests: `tests/fuzzer.c`, `tests/zstreamtest.c`, and `tests/decodecorpus.c`
- `tests/zstreamtest.c` under Tsan (streaming mode, including multithreaded mode)
- Valgrind Test (`make -C tests test-valgrind`) (testing CLI and fuzzer under `valgrind`)
- Fuzzer tests (see above) on ARM, AArch64, PowerPC, and PowerPC64
Long Tests
----------
Long tests run on all commits to `release` branch,
and once a day on the current version of `dev` branch,
on TravisCI.
They consist of the following tests:
- Entire test suite (including fuzzers and some other specialized tests) on:
ext/zstd/contrib/linux-kernel/linux_zstd.h
* Return: A zstd decompression context or NULL on error.
*/
zstd_dctx *zstd_init_dctx(void *workspace, size_t workspace_size);
/**
* zstd_decompress_dctx() - decompress zstd compressed src into dst
* @dctx: The decompression context.
* @dst: The buffer to decompress src into.
* @dst_capacity: The size of the destination buffer. Must be at least as large
* as the decompressed size. If the caller cannot upper bound the
* decompressed size, then it's better to use the streaming API.
* @src: The zstd compressed data to decompress. Multiple concatenated
* frames and skippable frames are allowed.
* @src_size: The exact size of the data to decompress.
*
* Return: The decompressed size or an error, which can be checked using
* zstd_is_error().
*/
size_t zstd_decompress_dctx(zstd_dctx *dctx, void *dst, size_t dst_capacity,
const void *src, size_t src_size);
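/*
 * Illustrative sketch (not part of this header): one-shot decompression using
 * the two calls documented above. The helper name is made up; the caller is
 * assumed to have sized `wksp` appropriately for zstd_init_dctx().
 */
static int example_decompress_oneshot(void *dst, size_t dst_capacity,
                                      const void *src, size_t src_size,
                                      void *wksp, size_t wksp_size)
{
    zstd_dctx *dctx = zstd_init_dctx(wksp, wksp_size);
    size_t ret;

    if (dctx == NULL)
        return -1;
    ret = zstd_decompress_dctx(dctx, dst, dst_capacity, src, src_size);
    return zstd_is_error(ret) ? -1 : 0;
}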
/* ====== Streaming Buffers ====== */
/**
* struct zstd_in_buffer - input buffer for streaming
* @src: Start of the input buffer.
* @size: Size of the input buffer.
* @pos: Position where reading stopped. Will be updated.
* Necessarily 0 <= pos <= size.
*
* See zstd_lib.h.
*/
typedef ZSTD_inBuffer zstd_in_buffer;
/**
* struct zstd_out_buffer - output buffer for streaming
* @dst: Start of the output buffer.
* @size: Size of the output buffer.
* @pos: Position where writing stopped. Will be updated.
* Necessarily 0 <= pos <= size.
*
* See zstd_lib.h.
*/
typedef ZSTD_outBuffer zstd_out_buffer;
/* ====== Streaming Compression ====== */
ext/zstd/contrib/linux-kernel/linux_zstd.h
/**
* zstd_cstream_workspace_bound() - memory needed to initialize a zstd_cstream
* @cparams: The compression parameters to be used for compression.
*
* Return: A lower bound on the size of the workspace that is passed to
* zstd_init_cstream().
*/
size_t zstd_cstream_workspace_bound(const zstd_compression_parameters *cparams);
/**
* zstd_init_cstream() - initialize a zstd streaming compression context
* @parameters The zstd parameters to use for compression.
* @pledged_src_size: If params.fParams.contentSizeFlag == 1 then the caller
* must pass the source size (zero means empty source).
* Otherwise, the caller may optionally pass the source
* size, or zero if unknown.
* @workspace: The workspace to emplace the context into. It must outlive
* the returned context.
* @workspace_size: The size of workspace.
* Use zstd_cstream_workspace_bound(params->cparams) to
* determine how large the workspace must be.
*
* Return: The zstd streaming compression context or NULL on error.
*/
zstd_cstream *zstd_init_cstream(const zstd_parameters *parameters,
unsigned long long pledged_src_size, void *workspace, size_t workspace_size);
/**
* zstd_reset_cstream() - reset the context using parameters from creation
* @cstream: The zstd streaming compression context to reset.
* @pledged_src_size: Optionally the source size, or zero if unknown.
*
* Resets the context using the parameters from creation. Skips dictionary
* loading, since it can be reused. If `pledged_src_size` is non-zero the frame
* content size is always written into the frame header.
*
* Return: Zero or an error, which can be checked using
* zstd_is_error().
*/
size_t zstd_reset_cstream(zstd_cstream *cstream,
unsigned long long pledged_src_size);
/**
* zstd_compress_stream() - streaming compress some of input into output
* @cstream: The zstd streaming compression context.
* @output: Destination buffer. `output->pos` is updated to indicate how much
* compressed data was written.
* @input: Source buffer. `input->pos` is updated to indicate how much data
* was read. Note that it may not consume the entire input, in which
* case `input->pos < input->size`, and it's up to the caller to
* present remaining data again.
*
* The `input` and `output` buffers may be any size. Guaranteed to make some
* forward progress if `input` and `output` are not empty.
*
* Return: A hint for the number of bytes to use as the input for the next
* function call or an error, which can be checked using
* zstd_is_error().
*/
size_t zstd_compress_stream(zstd_cstream *cstream, zstd_out_buffer *output,
zstd_in_buffer *input);
/**
* zstd_flush_stream() - flush internal buffers into output
* @cstream: The zstd streaming compression context.
* @output: Destination buffer. `output->pos` is updated to indicate how much
* compressed data was written.
*
* zstd_flush_stream() must be called until it returns 0, meaning all the data
* has been flushed. Since zstd_flush_stream() causes a block to be ended,
* calling it too often will degrade the compression ratio.
*
* Return: The number of bytes still present within internal buffers or an
* error, which can be checked using zstd_is_error().
*/
size_t zstd_flush_stream(zstd_cstream *cstream, zstd_out_buffer *output);
/**
* zstd_end_stream() - flush internal buffers into output and end the frame
* @cstream: The zstd streaming compression context.
* @output: Destination buffer. `output->pos` is updated to indicate how much
* compressed data was written.
*
* zstd_end_stream() must be called until it returns 0, meaning all the data has
* been flushed and the frame epilogue has been written.
*
* Return: The number of bytes still present within internal buffers or an
* error, which can be checked using zstd_is_error().
*/
size_t zstd_end_stream(zstd_cstream *cstream, zstd_out_buffer *output);
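/*
 * Illustrative sketch (not part of this header): compress a whole buffer with
 * the streaming calls above. The helper name is made up; the caller supplies a
 * workspace sized with zstd_cstream_workspace_bound() and a filled-in
 * zstd_parameters structure. Both loops bail out if the destination fills up.
 */
static int example_compress_stream(void *dst, size_t dst_capacity, size_t *dst_len,
                                   const void *src, size_t src_size,
                                   const zstd_parameters *params,
                                   void *wksp, size_t wksp_size)
{
    zstd_cstream *cstream = zstd_init_cstream(params, src_size, wksp, wksp_size);
    zstd_in_buffer in = { src, src_size, 0 };
    zstd_out_buffer out = { dst, dst_capacity, 0 };
    size_t ret;

    if (cstream == NULL)
        return -1;

    /* Feed input; zstd_compress_stream() may not consume it all in one call. */
    while (in.pos < in.size) {
        ret = zstd_compress_stream(cstream, &out, &in);
        if (zstd_is_error(ret) || out.pos == out.size)
            return -1;
    }

    /* Flush internal buffers and write the frame epilogue. */
    do {
        ret = zstd_end_stream(cstream, &out);
        if (zstd_is_error(ret))
            return -1;
    } while (ret != 0 && out.pos < out.size);

    if (ret != 0)
        return -1;          /* destination buffer too small */
    *dst_len = out.pos;
    return 0;
}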
ext/zstd/contrib/linux-kernel/linux_zstd.h
/**
* zstd_dstream_workspace_bound() - memory needed to initialize a zstd_dstream
* @max_window_size: The maximum window size allowed for compressed frames.
*
* Return: A lower bound on the size of the workspace that is passed
* to zstd_init_dstream().
*/
size_t zstd_dstream_workspace_bound(size_t max_window_size);
/**
* zstd_init_dstream() - initialize a zstd streaming decompression context
* @max_window_size: The maximum window size allowed for compressed frames.
* @workspace: The workspace to emplace the context into. It must outlive
* the returned context.
* @workspace_size: The size of workspace.
* Use zstd_dstream_workspace_bound(max_window_size) to
* determine how large the workspace must be.
*
* Return: The zstd streaming decompression context.
*/
zstd_dstream *zstd_init_dstream(size_t max_window_size, void *workspace,
size_t workspace_size);
/**
* zstd_reset_dstream() - reset the context using parameters from creation
* @dstream: The zstd streaming decompression context to reset.
*
* Resets the context using the parameters from creation. Skips dictionary
* loading, since it can be reused.
*
* Return: Zero or an error, which can be checked using zstd_is_error().
*/
size_t zstd_reset_dstream(zstd_dstream *dstream);
/**
* zstd_decompress_stream() - streaming decompress some of input into output
* @dstream: The zstd streaming decompression context.
* @output: Destination buffer. `output.pos` is updated to indicate how much
* decompressed data was written.
* @input: Source buffer. `input.pos` is updated to indicate how much data was
* read. Note that it may not consume the entire input, in which case
* `input.pos < input.size`, and it's up to the caller to present
* remaining data again.
*
* The `input` and `output` buffers may be any size. Guaranteed to make some
* forward progress if `input` and `output` are not empty.
* zstd_decompress_stream() will not consume the last byte of the frame until
ext/zstd/contrib/seekable_format/zstd_seekable.h
* middle of an archive only requires zstd to decompress at most a frame's
* worth of extra data, instead of the entire archive.
******************************************************************************/
typedef struct ZSTD_seekable_CStream_s ZSTD_seekable_CStream;
typedef struct ZSTD_seekable_s ZSTD_seekable;
typedef struct ZSTD_seekTable_s ZSTD_seekTable;
/*-****************************************************************************
* Seekable compression - HowTo
* A ZSTD_seekable_CStream object is required to track streaming operations.
* Use ZSTD_seekable_createCStream() and ZSTD_seekable_freeCStream() to create/
* release resources.
*
* Streaming objects are reusable to avoid allocation and deallocation;
* to start a new compression operation, call ZSTD_seekable_initCStream() on the
* compressor.
*
* Data streamed to the seekable compressor will automatically be split into
* frames of size `maxFrameSize` (provided in ZSTD_seekable_initCStream()),
* or if none is provided, will be cut off whenever ZSTD_seekable_endFrame() is
ext/zstd/contrib/seekable_format/zstdseek_compress.c
U32 checksum;
} framelogEntry_t;
struct ZSTD_frameLog_s {
framelogEntry_t* entries;
U32 size;
U32 capacity;
int checksumFlag;
/* for use when streaming out the seek table */
U32 seekTablePos;
U32 seekTableIndex;
} framelog_t;
struct ZSTD_seekable_CStream_s {
ZSTD_CStream* cstream;
ZSTD_frameLog framelog;
U32 frameCSize;
U32 frameDSize;
ext/zstd/doc/educational_decoder/README.md
Educational Decoder
===================
`zstd_decompress.c` is a self-contained implementation in C99 of a decoder,
according to the [Zstandard format specification].
While it does not implement as many features as the reference decoder,
such as the streaming API or content checksums, it is written to be easy to
follow and understand, to help understand how the Zstandard format works.
It's laid out to match the [format specification],
so it can be used to understand how complex segments could be implemented.
It also contains implementations of Huffman and FSE table decoding.
[Zstandard format specification]: https://github.com/facebook/zstd/blob/dev/doc/zstd_compression_format.md
[format specification]: https://github.com/facebook/zstd/blob/dev/doc/zstd_compression_format.md
While the library's primary objective is code clarity,
it also happens to compile into a small object file.
ext/zstd/doc/educational_decoder/README.md
`harness.c` provides a simple test harness around the decoder:
harness <input-file> <output-file> [dictionary]
As an additional resource to be used with this decoder,
see the `decodecorpus` tool in the [tests] directory.
It generates valid Zstandard frames that can be used to verify
a Zstandard decoder implementation.
Note that to use the tool to verify this decoder implementation,
the --content-size flag should be set,
as this decoder does not handle streaming decoding,
and so it must know the decompressed size in advance.
[tests]: https://github.com/facebook/zstd/blob/dev/tests/
ext/zstd/doc/educational_decoder/zstd_decompress.c
}
}
/// Decompress the data from a frame block by block
static void decompress_data(frame_context_t *const ctx, ostream_t *const out,
istream_t *const in) {
// "A frame encapsulates one or multiple blocks. Each block can be
// compressed or not, and has a guaranteed maximum content size, which
// depends on frame parameters. Unlike frames, each block depends on
// previous blocks for proper decoding. However, each block can be
// decompressed without waiting for its successor, allowing streaming
// operations."
int last_block = 0;
do {
// "Last_Block
//
// The lowest bit signals if this block is the last one. Frame ends
// right after this block.
//
// Block_Type and Block_Size
//
ext/zstd/doc/zstd_compression_format.md
0.3.9 (2023-03-08)
Introduction
------------
The purpose of this document is to define a lossless compressed data format,
that is independent of CPU type, operating system,
file system and character set, suitable for
file compression, pipe and streaming compression,
using the [Zstandard algorithm](https://facebook.github.io/zstd/).
The text of the specification assumes a basic background in programming
at the level of bits and other primitive data representations.
The data can be produced or consumed,
even for an arbitrarily long sequentially presented input data stream,
using only an a priori bounded amount of intermediate storage,
and hence can be used in data communications.
The format uses the Zstandard compression method,
and optional [xxHash-64 checksum method](https://cyan4973.github.io/xxHash/),
ext/zstd/doc/zstd_compression_format.md
Content compressed by Zstandard is transformed into a Zstandard __frame__.
Multiple frames can be appended into a single file or stream.
A frame is completely independent, has a defined beginning and end,
and a set of parameters which tells the decoder how to decompress it.
A frame encapsulates one or multiple __blocks__.
Each block contains arbitrary content, which is described by its header,
and has a guaranteed maximum content size, which depends on frame parameters.
Unlike frames, each block depends on previous blocks for proper decoding.
However, each block can be decompressed without waiting for its successor,
allowing streaming operations.
Overview
---------
- [Frames](#frames)
- [Zstandard frames](#zstandard-frames)
- [Blocks](#blocks)
- [Literals Section](#literals-section)
- [Sequences Section](#sequences-section)
- [Sequence Execution](#sequence-execution)
- [Skippable frames](#skippable-frames)
ext/zstd/doc/zstd_manual.html
<li><a href="#Chapter9">Streaming decompression - HowTo</a></li>
<li><a href="#Chapter10">Simple dictionary API</a></li>
<li><a href="#Chapter11">Bulk processing dictionary API</a></li>
<li><a href="#Chapter12">Dictionary helper functions</a></li>
<li><a href="#Chapter13">Advanced dictionary and prefix API (Requires v1.4.0+)</a></li>
<li><a href="#Chapter14">experimental API (static linking only)</a></li>
<li><a href="#Chapter15">Frame header and size functions</a></li>
<li><a href="#Chapter16">Memory management</a></li>
<li><a href="#Chapter17">Advanced compression functions</a></li>
<li><a href="#Chapter18">Advanced decompression functions</a></li>
<li><a href="#Chapter19">Advanced streaming functions</a></li>
<li><a href="#Chapter20">Buffer-less and synchronous inner streaming functions (DEPRECATED)</a></li>
<li><a href="#Chapter21">Buffer-less streaming compression (synchronous mode)</a></li>
<li><a href="#Chapter22">Buffer-less streaming decompression (synchronous mode)</a></li>
<li><a href="#Chapter23">Block level API (DEPRECATED)</a></li>
</ol>
<hr>
<a name="Chapter1"></a><h2>Introduction</h2><pre>
zstd, short for Zstandard, is a fast lossless compression algorithm, targeting
real-time compression scenarios at zlib-level and better compression ratios.
The zstd compression library provides in-memory compression and decompression
functions.
The library supports regular compression levels from 1 up to ZSTD_maxCLevel(),
ext/zstd/doc/zstd_manual.html
NOTE: Providing `dstCapacity >= ZSTD_compressBound(srcSize)` guarantees that zstd will have
enough space to successfully compress the data.
@return : compressed size written into `dst` (<= `dstCapacity`),
or an error code if it fails (which can be tested using ZSTD_isError()).
</p></pre><BR>
<pre><b>size_t ZSTD_decompress( void* dst, size_t dstCapacity,
const void* src, size_t compressedSize);
</b><p> `compressedSize` : must be the _exact_ size of some number of compressed and/or skippable frames.
`dstCapacity` is an upper bound of originalSize to regenerate.
If user cannot imply a maximum upper bound, it's better to use streaming mode to decompress data.
@return : the number of bytes decompressed into `dst` (<= `dstCapacity`),
or an errorCode if it fails (which can be tested using ZSTD_isError()).
</p></pre><BR>
<pre><b>#define ZSTD_CONTENTSIZE_UNKNOWN (0ULL - 1)
#define ZSTD_CONTENTSIZE_ERROR (0ULL - 2)
unsigned long long ZSTD_getFrameContentSize(const void *src, size_t srcSize);
</b><p> `src` should point to the start of a ZSTD encoded frame.
`srcSize` must be at least as large as the frame header.
hint : any size >= `ZSTD_frameHeaderSize_max` is large enough.
@return : - decompressed size of `src` frame content, if known
- ZSTD_CONTENTSIZE_UNKNOWN if the size cannot be determined
- ZSTD_CONTENTSIZE_ERROR if an error occurred (e.g. invalid magic number, srcSize too small)
note 1 : a 0 return value means the frame is valid but "empty".
note 2 : decompressed size is an optional field, it may not be present, typically in streaming mode.
When `return==ZSTD_CONTENTSIZE_UNKNOWN`, data to decompress could be any size.
In which case, it's necessary to use streaming mode to decompress data.
Optionally, application can rely on some implicit limit,
as ZSTD_decompress() only needs an upper bound of decompressed size.
(For example, data could be necessarily cut into blocks <= 16 KB).
note 3 : decompressed size is always present when compression is completed using single-pass functions,
such as ZSTD_compress(), ZSTD_compressCCtx() ZSTD_compress_usingDict() or ZSTD_compress_usingCDict().
note 4 : decompressed size can be very large (64-bits value),
potentially larger than what local system can handle as a single memory segment.
In which case, it's necessary to use streaming mode to decompress data.
note 5 : If source is untrusted, decompressed size could be wrong or intentionally modified.
Always ensure return value fits within application's authorized limits.
Each application can set its own limits.
note 6 : This function replaces ZSTD_getDecompressedSize()
</p></pre><BR>
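The notes above combine naturally into a compress/decompress round trip: size the destination with ZSTD_compressBound(), then recover the original size with ZSTD_getFrameContentSize() before calling ZSTD_decompress(). The sketch below is not part of the manual; the helper name is invented and error handling is minimal.

#include <stdlib.h>
#include <string.h>
#include <zstd.h>

/* Compress `src`, then decompress it again and verify the round trip.
 * Returns 0 on success, -1 on any failure. */
static int roundtrip(const void *src, size_t srcSize, int level)
{
    size_t const cBound = ZSTD_compressBound(srcSize);
    void *cBuf = malloc(cBound);
    if (cBuf == NULL) return -1;

    size_t const cSize = ZSTD_compress(cBuf, cBound, src, srcSize, level);
    if (ZSTD_isError(cSize)) { free(cBuf); return -1; }

    unsigned long long const rSize = ZSTD_getFrameContentSize(cBuf, cSize);
    if (rSize == ZSTD_CONTENTSIZE_UNKNOWN || rSize == ZSTD_CONTENTSIZE_ERROR) {
        free(cBuf); return -1;   /* size unknown: fall back to the streaming API */
    }

    void *rBuf = malloc(rSize ? (size_t)rSize : 1);
    if (rBuf == NULL) { free(cBuf); return -1; }

    size_t const dSize = ZSTD_decompress(rBuf, (size_t)rSize, cBuf, cSize);
    int const ok = !ZSTD_isError(dSize) && dSize == rSize
                && memcmp(src, rBuf, dSize) == 0;

    free(cBuf); free(rBuf);
    return ok ? 0 : -1;
}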
<pre><b>ZSTD_DEPRECATED("Replaced by ZSTD_getFrameContentSize")
ZSTDLIB_API
unsigned long long ZSTD_getDecompressedSize(const void* src, size_t srcSize);
</b><p> NOTE: This function is now obsolete, in favor of ZSTD_getFrameContentSize().
ext/zstd/doc/zstd_manual.html
* Special: value 0 means default, which is controlled by ZSTD_CLEVEL_DEFAULT.
* Note 1 : it's possible to pass a negative compression level.
* Note 2 : setting a level does not automatically set all other compression parameters
* to default. Setting this will however eventually dynamically impact the compression
* parameters which have not been manually set. The manually set
* ones will 'stick'. */
</b>/* Advanced compression parameters :<b>
* It's possible to pin down compression parameters to some specific values.
* In which case, these values are no longer dynamically selected by the compressor */
ZSTD_c_windowLog=101, </b>/* Maximum allowed back-reference distance, expressed as power of 2.<b>
* This will set a memory budget for streaming decompression,
* with larger values requiring more memory
* and typically compressing more.
* Must be clamped between ZSTD_WINDOWLOG_MIN and ZSTD_WINDOWLOG_MAX.
* Special: value 0 means "use default windowLog".
* Note: Using a windowLog greater than ZSTD_WINDOWLOG_LIMIT_DEFAULT
* requires explicitly allowing such size at streaming decompression stage. */
ZSTD_c_hashLog=102, </b>/* Size of the initial probe table, as a power of 2.<b>
* Resulting memory usage is (1 << (hashLog+2)).
* Must be clamped between ZSTD_HASHLOG_MIN and ZSTD_HASHLOG_MAX.
* Larger tables improve compression ratio of strategies <= dFast,
* and improve speed of strategies > dFast.
* Special: value 0 means "use default hashLog". */
ZSTD_c_chainLog=103, </b>/* Size of the multi-probe search table, as a power of 2.<b>
* Resulting memory usage is (1 << (chainLog+2)).
* Must be clamped between ZSTD_CHAINLOG_MIN and ZSTD_CHAINLOG_MAX.
* Larger tables result in better and slower compression.
ext/zstd/doc/zstd_manual.html
* Must be clamped between 0 and (ZSTD_WINDOWLOG_MAX - ZSTD_HASHLOG_MIN).
* Default is MAX(0, (windowLog - ldmHashLog)), optimizing hash table usage.
* Larger values improve compression speed.
* Deviating far from default value will likely result in a compression ratio decrease.
* Special: value 0 means "automatically determine hashRateLog". */
</b>/* frame parameters */<b>
ZSTD_c_contentSizeFlag=200, </b>/* Content size will be written into frame header _whenever known_ (default:1)<b>
* Content size must be known at the beginning of compression.
* This is automatically the case when using ZSTD_compress2(),
* For streaming scenarios, content size must be provided with ZSTD_CCtx_setPledgedSrcSize() */
ZSTD_c_checksumFlag=201, </b>/* A 32-bits checksum of content is written at end of frame (default:0) */<b>
ZSTD_c_dictIDFlag=202, </b>/* When applicable, dictionary's ID is written into frame header (default:1) */<b>
</b>/* multi-threading parameters */<b>
</b>/* These parameters are only active if multi-threading is enabled (compiled with build macro ZSTD_MULTITHREAD).<b>
* Otherwise, trying to set any other value than default (0) will be a no-op and return an error.
* In a situation where it's unknown if the linked library supports multi-threading or not,
* setting ZSTD_c_nbWorkers to any value >= 1 and consulting the return value provides a quick way to check this property.
*/
ZSTD_c_nbWorkers=400, </b>/* Select how many threads will be spawned to compress in parallel.<b>
ext/zstd/doc/zstd_manual.html
@return : compressed size written into `dst` (<= `dstCapacity`),
or an error code if it fails (which can be tested using ZSTD_isError()).
</p></pre><BR>
<a name="Chapter6"></a><h2>Advanced decompression API (Requires v1.4.0+)</h2><pre></pre>
<pre><b>typedef enum {
ZSTD_d_windowLogMax=100, </b>/* Select a size limit (in power of 2) beyond which<b>
* the streaming API will refuse to allocate memory buffer
* in order to protect the host from unreasonable memory requirements.
* This parameter is only useful in streaming mode, since no internal buffer is allocated in single-pass mode.
* By default, a decompression context accepts window sizes <= (1 << ZSTD_WINDOWLOG_LIMIT_DEFAULT).
* Special: value 0 means "use default maximum windowLog". */
</b>/* note : additional experimental parameters are also available<b>
* within the experimental section of the API.
* At the time of this writing, they include :
* ZSTD_d_format
* ZSTD_d_stableOutBuffer
* ZSTD_d_forceIgnoreChecksum
* ZSTD_d_refMultipleDDicts
ext/zstd/doc/zstd_manual.html
size_t pos; </b>/**< position where reading stopped. Will be updated. Necessarily 0 <= pos <= size */<b>
} ZSTD_inBuffer;
</b></pre><BR>
<pre><b>typedef struct ZSTD_outBuffer_s {
void* dst; </b>/**< start of output buffer */<b>
size_t size; </b>/**< size of output buffer */<b>
size_t pos; </b>/**< position where writing stopped. Will be updated. Necessarily 0 <= pos <= size */<b>
} ZSTD_outBuffer;
</b></pre><BR>
<a name="Chapter8"></a><h2>Streaming compression - HowTo</h2><pre>
A ZSTD_CStream object is required to track streaming operation.
Use ZSTD_createCStream() and ZSTD_freeCStream() to create/release resources.
ZSTD_CStream objects can be reused multiple times on consecutive compression operations.
It is recommended to re-use ZSTD_CStream since it will play nicer with system's memory, by re-using already allocated memory.
For parallel execution, use one separate ZSTD_CStream per thread.
note : since v1.3.0, ZSTD_CStream and ZSTD_CCtx are the same thing.
Parameters are sticky : when starting a new compression on the same context,
it will re-use the same sticky parameters as previous compression session.
ext/zstd/doc/zstd_manual.html
ZSTD_CCtx_reset(zcs, ZSTD_reset_session_only);
ZSTD_CCtx_refCDict(zcs, NULL); // clear the dictionary (if any)
ZSTD_CCtx_setParameter(zcs, ZSTD_c_compressionLevel, compressionLevel);
Note that ZSTD_initCStream() clears any previously set dictionary. Use the new API
to compress with a dictionary.
</p></pre><BR>
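A compact sketch of the loop this HowTo describes, written against the current advanced API (ZSTD_compressStream2); it is not the manual's own example and the helper name is invented.

#include <stdio.h>
#include <stdlib.h>
#include <zstd.h>

/* Compress fin to fout with one reusable ZSTD_CCtx: set the sticky
 * compression level once, then drive ZSTD_compressStream2() with
 * ZSTD_e_end on the last chunk until it reports the frame is flushed. */
static int stream_compress(FILE *fin, FILE *fout, int level)
{
    ZSTD_CCtx *cctx = ZSTD_createCCtx();
    size_t const inCap = ZSTD_CStreamInSize();
    size_t const outCap = ZSTD_CStreamOutSize();
    void *inBuf = malloc(inCap);
    void *outBuf = malloc(outCap);
    int err = (cctx == NULL || inBuf == NULL || outBuf == NULL);

    if (!err)
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, level);

    while (!err) {
        size_t const readSz = fread(inBuf, 1, inCap, fin);
        int const lastChunk = (readSz < inCap);
        ZSTD_EndDirective const mode = lastChunk ? ZSTD_e_end : ZSTD_e_continue;
        ZSTD_inBuffer input = { inBuf, readSz, 0 };
        int finished = 0;
        while (!finished && !err) {
            ZSTD_outBuffer output = { outBuf, outCap, 0 };
            size_t const remaining = ZSTD_compressStream2(cctx, &output, &input, mode);
            if (ZSTD_isError(remaining)) {
                err = 1;
            } else {
                fwrite(outBuf, 1, output.pos, fout);
                finished = lastChunk ? (remaining == 0) : (input.pos == input.size);
            }
        }
        if (lastChunk) break;
    }

    ZSTD_freeCCtx(cctx);
    free(inBuf);
    free(outBuf);
    return err ? -1 : 0;
}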
<a name="Chapter9"></a><h2>Streaming decompression - HowTo</h2><pre>
A ZSTD_DStream object is required to track streaming operations.
Use ZSTD_createDStream() and ZSTD_freeDStream() to create/release resources.
ZSTD_DStream objects can be re-used multiple times.
Use ZSTD_initDStream() to start a new decompression operation.
@return : recommended first input size
Alternatively, use advanced API to set specific properties.
Use ZSTD_decompressStream() repetitively to consume your input.
The function will update both `pos` fields.
If `input.pos < input.size`, some input has not been consumed.
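The matching decompression loop, again as an illustrative sketch rather than the manual's own example (helper name invented, error handling minimal):

#include <stdio.h>
#include <stdlib.h>
#include <zstd.h>

/* Decompress fin to fout: feed each chunk to ZSTD_decompressStream()
 * until input.pos reaches input.size, writing out whatever is produced. */
static int stream_decompress(FILE *fin, FILE *fout)
{
    ZSTD_DCtx *dctx = ZSTD_createDCtx();
    size_t const inCap = ZSTD_DStreamInSize();
    size_t const outCap = ZSTD_DStreamOutSize();
    void *inBuf = malloc(inCap);
    void *outBuf = malloc(outCap);
    int err = (dctx == NULL || inBuf == NULL || outBuf == NULL);
    size_t readSz;

    while (!err && (readSz = fread(inBuf, 1, inCap, fin)) > 0) {
        ZSTD_inBuffer input = { inBuf, readSz, 0 };
        while (!err && input.pos < input.size) {
            ZSTD_outBuffer output = { outBuf, outCap, 0 };
            size_t const ret = ZSTD_decompressStream(dctx, &output, &input);
            if (ZSTD_isError(ret)) err = 1;
            else fwrite(outBuf, 1, output.pos, fout);
        }
    }

    ZSTD_freeDCtx(dctx);
    free(inBuf);
    free(outBuf);
    return err ? -1 : 0;
}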
ext/zstd/doc/zstd_manual.html
<a name="Chapter15"></a><h2>Frame header and size functions</h2><pre></pre>
<pre><b>ZSTDLIB_STATIC_API unsigned long long ZSTD_findDecompressedSize(const void* src, size_t srcSize);
</b><p> `src` should point to the start of a series of ZSTD encoded and/or skippable frames
`srcSize` must be the _exact_ size of this series
(i.e. there should be a frame boundary at `src + srcSize`)
@return : - decompressed size of all data in all successive frames
- if the decompressed size cannot be determined: ZSTD_CONTENTSIZE_UNKNOWN
- if an error occurred: ZSTD_CONTENTSIZE_ERROR
note 1 : decompressed size is an optional field, that may not be present, especially in streaming mode.
When `return==ZSTD_CONTENTSIZE_UNKNOWN`, data to decompress could be any size.
In which case, it's necessary to use streaming mode to decompress data.
note 2 : decompressed size is always present when compression is done with ZSTD_compress()
note 3 : decompressed size can be very large (64-bits value),
potentially larger than what local system can handle as a single memory segment.
In which case, it's necessary to use streaming mode to decompress data.
note 4 : If source is untrusted, decompressed size could be wrong or intentionally modified.
Always ensure result fits within application's authorized limits.
Each application can set its own limits.
note 5 : ZSTD_findDecompressedSize handles multiple frames, and so it must traverse the input to
read each contained frame header. This is fast as most of the data is skipped,
however it does mean that all frame data must be present and valid.
</p></pre><BR>
<pre><b>ZSTDLIB_STATIC_API unsigned long long ZSTD_decompressBound(const void* src, size_t srcSize);
</b><p> `src` should point to the start of a series of ZSTD encoded and/or skippable frames
ext/zstd/doc/zstd_manual.html
ZSTDLIB_STATIC_API size_t ZSTD_estimateCCtxSize_usingCParams(ZSTD_compressionParameters cParams);
ZSTDLIB_STATIC_API size_t ZSTD_estimateCCtxSize_usingCCtxParams(const ZSTD_CCtx_params* params);
ZSTDLIB_STATIC_API size_t ZSTD_estimateDCtxSize(void);
</b><p> These functions make it possible to estimate memory usage
of a future {D,C}Ctx, before its creation.
ZSTD_estimateCCtxSize() will provide a memory budget large enough
for any compression level up to selected one.
Note : Unlike ZSTD_estimateCStreamSize*(), this estimate
does not include space for a window buffer.
Therefore, the estimation is only guaranteed for single-shot compressions, not streaming.
The estimate will assume the input may be arbitrarily large,
which is the worst case.
When srcSize can be bound by a known and rather "small" value,
this fact can be used to provide a tighter estimation
because the CCtx compression context will need less memory.
This tighter estimation can be provided by more advanced functions
ZSTD_estimateCCtxSize_usingCParams(), which can be used in tandem with ZSTD_getCParams(),
and ZSTD_estimateCCtxSize_usingCCtxParams(), which can be used in tandem with ZSTD_CCtxParams_setParameter().
Both can be used to estimate memory using custom compression parameters and arbitrary srcSize limits.
ext/zstd/doc/zstd_manual.html
ZSTDLIB_STATIC_API size_t ZSTD_estimateDStreamSize_fromFrame(const void* src, size_t srcSize);
</b><p> ZSTD_estimateCStreamSize() will provide a budget large enough for any compression level up to selected one.
It will also consider src size to be arbitrarily "large", which is worst case.
If srcSize is known to always be small, ZSTD_estimateCStreamSize_usingCParams() can provide a tighter estimation.
ZSTD_estimateCStreamSize_usingCParams() can be used in tandem with ZSTD_getCParams() to create cParams from compressionLevel.
ZSTD_estimateCStreamSize_usingCCtxParams() can be used in tandem with ZSTD_CCtxParams_setParameter(). Only single-threaded compression is supported. This function will return an error code if ZSTD_c_nbWorkers is >= 1.
Note : CStream size estimation is only correct for single-threaded compression.
ZSTD_DStream memory budget depends on window Size.
This information can be passed manually, using ZSTD_estimateDStreamSize,
or deducted from a valid frame Header, using ZSTD_estimateDStreamSize_fromFrame();
Note : if streaming is init with function ZSTD_init?Stream_usingDict(),
an internal ?Dict will be created, which additional size is not estimated here.
In this case, get total size by adding ZSTD_estimate?DictSize
Note 2 : only single-threaded compression is supported.
ZSTD_estimateCStreamSize_usingCCtxParams() will return an error code if ZSTD_c_nbWorkers is >= 1.
Note 3 : ZSTD_estimateCStreamSize* functions are not compatible with the Block-Level Sequence Producer API at this time.
Size estimates assume that no external sequence producer is registered.
</p></pre><BR>
<pre><b>ZSTDLIB_STATIC_API size_t ZSTD_estimateCDictSize(size_t dictSize, int compressionLevel);
ext/zstd/doc/zstd_manual.html
</p></pre><BR>
<pre><b>ZSTDLIB_STATIC_API size_t ZSTD_DCtx_refPrefix_advanced(ZSTD_DCtx* dctx, const void* prefix, size_t prefixSize, ZSTD_dictContentType_e dictContentType);
</b><p> Same as ZSTD_DCtx_refPrefix(), but gives finer control over
how to interpret prefix content (automatic ? force raw mode (default) ? full mode only ?)
</p></pre><BR>
<pre><b>ZSTDLIB_STATIC_API size_t ZSTD_DCtx_setMaxWindowSize(ZSTD_DCtx* dctx, size_t maxWindowSize);
</b><p> Refuses allocating internal buffers for frames requiring a window size larger than provided limit.
This protects a decoder context from reserving too much memory for itself (potential attack scenario).
This parameter is only useful in streaming mode, since no internal buffer is allocated in single-pass mode.
By default, a decompression context accepts all window sizes <= (1 << ZSTD_WINDOWLOG_LIMIT_DEFAULT)
@return : 0, or an error code (which can be tested using ZSTD_isError()).
</p></pre><BR>
<pre><b>ZSTDLIB_STATIC_API size_t ZSTD_DCtx_getParameter(ZSTD_DCtx* dctx, ZSTD_dParameter param, int* value);
</b><p> Get the requested decompression parameter value, selected by enum ZSTD_dParameter,
and store it into int* value.
@return : 0, or an error code (which can be tested with ZSTD_isError()).
ext/zstd/doc/zstd_manual.html
ZSTD_DCtx* dctx,
void* dst, size_t dstCapacity, size_t* dstPos,
const void* src, size_t srcSize, size_t* srcPos);
</b><p> Same as ZSTD_decompressStream(),
but using only integral types as arguments.
This can be helpful for binders from dynamic languages
which have troubles handling structures containing memory pointers.
</p></pre><BR>
<a name="Chapter19"></a><h2>Advanced streaming functions</h2><pre> Warning : most of these functions are now redundant with the Advanced API.
Once Advanced API reaches "stable" status,
redundant functions will be deprecated, and then at some point removed.
<BR></pre>
<h3>Advanced Streaming compression functions</h3><pre></pre><b><pre></pre></b><BR>
<pre><b>ZSTD_DEPRECATED("use ZSTD_CCtx_reset, see zstd.h for detailed instructions")
ZSTDLIB_STATIC_API
size_t ZSTD_initCStream_srcSize(ZSTD_CStream* zcs,
int compressionLevel,
unsigned long long pledgedSrcSize);
ext/zstd/doc/zstd_manual.html
Older compression APIs such as compressCCtx(), which predate the introduction of
"advanced parameters", will ignore any external sequence producer setting.
The sequence producer can be "cleared" by registering a NULL function pointer. This
removes all limitations described above in the "LIMITATIONS" section of the API docs.
The user is strongly encouraged to read the full API documentation (above) before
calling this function.
</p></pre><BR>
<a name="Chapter20"></a><h2>Buffer-less and synchronous inner streaming functions (DEPRECATED)</h2><pre>
This API is deprecated, and will be removed in a future version.
It allows streaming (de)compression with user allocated buffers.
However, it is hard to use, and not as well tested as the rest of
our API.
Please use the normal streaming API instead: ZSTD_compressStream2,
and ZSTD_decompressStream.
If there is functionality that you need, but it doesn't provide,
please open an issue on our GitHub.
<BR></pre>
<a name="Chapter21"></a><h2>Buffer-less streaming compression (synchronous mode)</h2><pre>
A ZSTD_CCtx object is required to track streaming operations.
Use ZSTD_createCCtx() / ZSTD_freeCCtx() to manage resource.
ZSTD_CCtx object can be re-used multiple times within successive compression operations.
Start by initializing a context.
Use ZSTD_compressBegin(), or ZSTD_compressBegin_usingDict() for dictionary compression.
Then, consume your input using ZSTD_compressContinue().
There are some important considerations to keep in mind when using this advanced function :
- ZSTD_compressContinue() has no internal buffer. It uses externally provided buffers only.
- Interface is synchronous : input is consumed entirely and produces 1+ compressed blocks.
ext/zstd/doc/zstd_manual.html
- ZSTD_compressContinue() detects that prior input has been overwritten when `src` buffer overlaps.
In which case, it will "discard" the relevant memory section from its history.
Finish a frame with ZSTD_compressEnd(), which will write the last block(s) and optional checksum.
It's possible to use srcSize==0, in which case, it will write a final empty block to end the frame.
Without last block mark, frames are considered unfinished (hence corrupted) by compliant decoders.
`ZSTD_CCtx` object can be re-used (ZSTD_compressBegin()) to compress again.
<BR></pre>
<h3>Buffer-less streaming compression functions</h3><pre></pre><b><pre>ZSTD_DEPRECATED("The buffer-less API is deprecated in favor of the normal streaming API. See docs.")
ZSTDLIB_STATIC_API size_t ZSTD_compressBegin(ZSTD_CCtx* cctx, int compressionLevel);
ZSTD_DEPRECATED("The buffer-less API is deprecated in favor of the normal streaming API. See docs.")
ZSTDLIB_STATIC_API size_t ZSTD_compressBegin_usingDict(ZSTD_CCtx* cctx, const void* dict, size_t dictSize, int compressionLevel);
ZSTD_DEPRECATED("The buffer-less API is deprecated in favor of the normal streaming API. See docs.")
ZSTDLIB_STATIC_API size_t ZSTD_compressBegin_usingCDict(ZSTD_CCtx* cctx, const ZSTD_CDict* cdict); </b>/**< note: fails if cdict==NULL */<b>
</pre></b><BR>
<pre><b>size_t ZSTD_copyCCtx(ZSTD_CCtx* cctx, const ZSTD_CCtx* preparedCCtx, unsigned long long pledgedSrcSize); </b>/**< note: if pledgedSrcSize is not known, use ZSTD_CONTENTSIZE_UNKNOWN */<b>
</b></pre><BR>
<pre><b>size_t ZSTD_compressBegin_advanced(ZSTD_CCtx* cctx, const void* dict, size_t dictSize, ZSTD_parameters params, unsigned long long pledgedSrcSize); </b>/**< pledgedSrcSize : If srcSize is not known at init time, use ZSTD_CONTENTSIZE_UNKNOWN */...
</b></pre><BR>
<a name="Chapter22"></a><h2>Buffer-less streaming decompression (synchronous mode)</h2><pre>
A ZSTD_DCtx object is required to track streaming operations.
Use ZSTD_createDCtx() / ZSTD_freeDCtx() to manage it.
A ZSTD_DCtx object can be re-used multiple times.
First typical operation is to retrieve frame parameters, using ZSTD_getFrameHeader().
Frame header is extracted from the beginning of compressed frame, so providing only the frame's beginning is enough.
Data fragment must be large enough to ensure successful decoding.
`ZSTD_frameHeaderSize_max` bytes is guaranteed to always be large enough.
result : 0 : successful decoding, the `ZSTD_frameHeader` structure is correctly filled.
>0 : `srcSize` is too small, please provide at least result bytes on next attempt.
errorCode, which can be tested using ZSTD_isError().
ext/zstd/doc/zstd_manual.html
Skippable frames allow integration of user-defined data into a flow of concatenated frames.
Skippable frames will be ignored (skipped) by decompressor.
The format of skippable frames is as follows :
a) Skippable frame ID - 4 Bytes, Little endian format, any value from 0x184D2A50 to 0x184D2A5F
b) Frame Size - 4 Bytes, Little endian format, unsigned 32-bits
c) Frame Content - any content (User Data) of length equal to Frame Size
For skippable frames ZSTD_getFrameHeader() returns zfhPtr->frameType==ZSTD_skippableFrame.
For skippable frames ZSTD_decompressContinue() always returns 0 : it only skips the content.
<BR></pre>
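For illustration, the three fields listed above are simple enough to emit by hand; the sketch below is not part of the manual and the helper name is invented.

#include <stdint.h>
#include <string.h>

/* Write one skippable frame: 4-byte little-endian magic (0x184D2A50..5F),
 * 4-byte little-endian content size, then the user data itself.
 * Returns the number of bytes written, or 0 if dst is too small. */
static size_t write_skippable_frame(uint8_t *dst, size_t dstCapacity,
                                    const void *userData, uint32_t userSize)
{
    uint32_t const magic = 0x184D2A50;   /* any value in 0x184D2A50..0x184D2A5F */
    size_t const total = 8 + (size_t)userSize;
    int i;
    if (dstCapacity < total) return 0;
    for (i = 0; i < 4; i++) {
        dst[i]     = (uint8_t)(magic >> (8 * i));
        dst[4 + i] = (uint8_t)(userSize >> (8 * i));
    }
    memcpy(dst + 8, userData, userSize);
    return total;
}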
<h3>Buffer-less streaming decompression functions</h3><pre></pre><b><pre></pre></b><BR>
<pre><b>ZSTDLIB_STATIC_API size_t ZSTD_decodingBufferSize_min(unsigned long long windowSize, unsigned long long frameContentSize); </b>/**< when frame content size is not known, pass in frameContentSize == ZSTD_CONTENTSIZE_UNKNOWN */<b>
</b></pre><BR>
<pre><b>typedef enum { ZSTDnit_frameHeader, ZSTDnit_blockHeader, ZSTDnit_block, ZSTDnit_lastBlock, ZSTDnit_checksum, ZSTDnit_skippableFrame } ZSTD_nextInputType_e;
</b></pre><BR>
<a name="Chapter23"></a><h2>Block level API (DEPRECATED)</h2><pre></pre>
<pre><b></b><p> You can get the frame header down to 2 bytes by setting:
- ZSTD_c_format = ZSTD_f_zstd1_magicless
- ZSTD_c_contentSizeFlag = 0
- ZSTD_c_checksumFlag = 0
ext/zstd/examples/Makefile
LIB = $(LIBDIR)/libzstd.a
.PHONY: default
default: all
.PHONY: all
all: simple_compression simple_decompression \
multiple_simple_compression\
dictionary_compression dictionary_decompression \
streaming_compression streaming_decompression \
multiple_streaming_compression streaming_memory_usage
$(LIB) :
$(MAKE) -C $(LIBDIR) libzstd.a
simple_compression.o: common.h
simple_compression : $(LIB)
simple_decompression.o: common.h
simple_decompression : $(LIB)
multiple_simple_compression.o: common.h
multiple_simple_compression : $(LIB)
dictionary_compression.o: common.h
dictionary_compression : $(LIB)
dictionary_decompression.o: common.h
dictionary_decompression : $(LIB)
streaming_compression.o: common.h
streaming_compression : $(LIB)
multiple_streaming_compression.o: common.h
multiple_streaming_compression : $(LIB)
streaming_decompression.o: common.h
streaming_decompression : $(LIB)
streaming_memory_usage.o: common.h
streaming_memory_usage : $(LIB)
.PHONY:clean
clean:
@$(RM) core *.o tmp* result* *.zst \
simple_compression simple_decompression \
multiple_simple_compression \
dictionary_compression dictionary_decompression \
streaming_compression streaming_decompression \
multiple_streaming_compression streaming_memory_usage
@echo Cleaning completed
.PHONY:test
test: all
cp README.md tmp
cp Makefile tmp2
@echo -- Simple compression tests
./simple_compression tmp
./simple_decompression tmp.zst
./multiple_simple_compression *.c
./streaming_decompression tmp.zst > /dev/null
@echo -- Streaming memory usage
./streaming_memory_usage
@echo -- Streaming compression tests
./streaming_compression tmp
./streaming_decompression tmp.zst > /dev/null
@echo -- Edge cases detection
! ./streaming_decompression tmp # invalid input, must fail
! ./simple_decompression tmp # invalid input, must fail
touch tmpNull # create 0-size file
./simple_compression tmpNull
./simple_decompression tmpNull.zst # 0-size frame : must work
@echo -- Multiple streaming tests
./multiple_streaming_compression *.c
@echo -- Dictionary compression tests
./dictionary_compression tmp2 tmp README.md
./dictionary_decompression tmp2.zst tmp.zst README.md
$(RM) tmp* *.zst
@echo tests completed
ext/zstd/examples/README.md view on Meta::CPAN
Only compatible with simple compression.
Result remains in memory.
Introduces usage of : `ZSTD_decompress()`
- [Multiple simple compression](multiple_simple_compression.c) :
Compress multiple files (in simple mode) in a single command line.
Demonstrates memory preservation technique that
minimizes malloc()/free() calls by re-using existing resources.
Introduces usage of : `ZSTD_compressCCtx()`
- [Streaming memory usage](streaming_memory_usage.c) :
Provides amount of memory used by streaming context.
Introduces usage of : `ZSTD_sizeof_CStream()`
- [Streaming compression](streaming_compression.c) :
Compress a single file.
Introduces usage of : `ZSTD_compressStream()`
- [Multiple Streaming compression](multiple_streaming_compression.c) :
Compress multiple files (in streaming mode) in a single command line.
Introduces memory usage preservation technique,
reducing impact of malloc()/free() and memset() by re-using existing resources.
- [Streaming decompression](streaming_decompression.c) :
Decompress a single file compressed by zstd.
Compatible with both simple and streaming compression.
Result is sent to stdout.
Introduces usage of : `ZSTD_decompressStream()`
- [Dictionary compression](dictionary_compression.c) :
Compress multiple files using the same dictionary.
Introduces usage of : `ZSTD_createCDict()` and `ZSTD_compress_usingCDict()` (a minimal sketch follows this list)
- [Dictionary decompression](dictionary_decompression.c) :
Decompress multiple files using the same dictionary.
Result remains in memory.
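For orientation, the snippet below is a minimal, hedged sketch of the dictionary workflow the last two examples build on (in-memory buffers, error handling reduced to asserts; `compress_with_dict` and its arguments are made up for illustration):

```c
#include <assert.h>
#include <zstd.h>

/* Compress one in-memory buffer with a pre-digested dictionary.
 * dictBuf/dictSize would typically come from a dictionary produced by `zstd --train`.
 * dst must be at least ZSTD_compressBound(srcSize) bytes. */
size_t compress_with_dict(void* dst, size_t dstCap,
                          const void* src, size_t srcSize,
                          const void* dictBuf, size_t dictSize)
{
    ZSTD_CCtx*  const cctx  = ZSTD_createCCtx();
    ZSTD_CDict* const cdict = ZSTD_createCDict(dictBuf, dictSize, 3 /* compression level */);
    assert(cctx != NULL && cdict != NULL);
    {   size_t const cSize = ZSTD_compress_usingCDict(cctx, dst, dstCap, src, srcSize, cdict);
        assert(!ZSTD_isError(cSize));
        ZSTD_freeCDict(cdict);
        ZSTD_freeCCtx(cctx);
        return cSize;
    }
}
```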
ext/zstd/examples/dictionary_decompression.c view on Meta::CPAN
return ddict;
}
static void decompress(const char* fname, const ZSTD_DDict* ddict)
{
size_t cSize;
void* const cBuff = mallocAndLoadFile_orDie(fname, &cSize);
/* Read the content size from the frame header. For simplicity we require
* that it is always present. By default, zstd will write the content size
* in the header when it is known. If you can't guarantee that the frame
* content size is always written into the header, either use streaming
* decompression, or ZSTD_decompressBound().
*/
unsigned long long const rSize = ZSTD_getFrameContentSize(cBuff, cSize);
CHECK(rSize != ZSTD_CONTENTSIZE_ERROR, "%s: not compressed by zstd!", fname);
CHECK(rSize != ZSTD_CONTENTSIZE_UNKNOWN, "%s: original size unknown!", fname);
void* const rBuff = malloc_orDie((size_t)rSize);
/* Check that the dictionary ID matches.
* If a non-zstd dictionary is used, then both will be zero.
* By default zstd always writes the dictionary ID into the frame.
ext/zstd/examples/multiple_streaming_compression.c view on Meta::CPAN
/* Reset the context to a clean state to start a new compression operation.
* The parameters are sticky, so we keep the compression level and extra
* parameters that we set in createResources_orDie().
*/
CHECK_ZSTD( ZSTD_CCtx_reset(ress.cctx, ZSTD_reset_session_only) );
size_t const toRead = ress.buffInSize;
size_t read;
while ( (read = fread_orDie(ress.buffIn, toRead, fin)) ) {
/* This loop is the same as streaming_compression.c.
* See that file for detailed comments.
*/
int const lastChunk = (read < toRead);
ZSTD_EndDirective const mode = lastChunk ? ZSTD_e_end : ZSTD_e_continue;
ZSTD_inBuffer input = { ress.buffIn, read, 0 };
int finished;
do {
ZSTD_outBuffer output = { ress.buffOut, ress.buffOutSize, 0 };
size_t const remaining = ZSTD_compressStream2(ress.cctx, &output, &input, mode);
ext/zstd/examples/simple_decompression.c view on Meta::CPAN
#include <zstd.h> // presumes zstd library is installed
#include "common.h" // Helper functions, CHECK(), and CHECK_ZSTD()
static void decompress(const char* fname)
{
size_t cSize;
void* const cBuff = mallocAndLoadFile_orDie(fname, &cSize);
/* Read the content size from the frame header. For simplicity we require
* that it is always present. By default, zstd will write the content size
* in the header when it is known. If you can't guarantee that the frame
* content size is always written into the header, either use streaming
* decompression, or ZSTD_decompressBound().
*/
unsigned long long const rSize = ZSTD_getFrameContentSize(cBuff, cSize);
CHECK(rSize != ZSTD_CONTENTSIZE_ERROR, "%s: not compressed by zstd!", fname);
CHECK(rSize != ZSTD_CONTENTSIZE_UNKNOWN, "%s: original size unknown!", fname);
void* const rBuff = malloc_orDie((size_t)rSize);
/* Decompress.
* If you are doing many decompressions, you may want to reuse the context
ext/zstd/examples/streaming_decompression.c view on Meta::CPAN
void* const buffIn = malloc_orDie(buffInSize);
FILE* const fout = stdout;
size_t const buffOutSize = ZSTD_DStreamOutSize(); /* Guarantee to successfully flush at least one complete compressed block in all circumstances. */
void* const buffOut = malloc_orDie(buffOutSize);
ZSTD_DCtx* const dctx = ZSTD_createDCtx();
CHECK(dctx != NULL, "ZSTD_createDCtx() failed!");
/* This loop assumes that the input file is one or more concatenated zstd
* streams. This example won't work if there is trailing non-zstd data at
* the end, but streaming decompression in general handles this case.
* ZSTD_decompressStream() returns 0 exactly when the frame is completed,
* and doesn't consume input after the frame.
*/
size_t const toRead = buffInSize;
size_t read;
size_t lastRet = 0;
int isEmpty = 1;
while ( (read = fread_orDie(buffIn, toRead, fin)) ) {
isEmpty = 0;
ZSTD_inBuffer input = { buffIn, read, 0 };
ext/zstd/examples/streaming_memory_usage.c view on Meta::CPAN
(*stringPtr)++ ;
if (**stringPtr=='i') (*stringPtr)++;
if (**stringPtr=='B') (*stringPtr)++;
}
return result;
}
int main(int argc, char const *argv[]) {
printf("\n Zstandard (v%s) memory usage for streaming : \n\n", ZSTD_versionString());
unsigned wLog = 0;
if (argc > 1) {
const char* valStr = argv[1];
wLog = readU32FromChar(&valStr);
}
int compressionLevel;
for (compressionLevel = 1; compressionLevel <= MAX_TESTED_LEVEL; compressionLevel++) {
#define INPUT_SIZE 5
ext/zstd/lib/README.md view on Meta::CPAN
The build directory, where object files are stored,
can also be controlled manually using the variable `BUILD_DIR`,
for example `make BUILD_DIR=objectDir/v1`,
in which case the hash function doesn't matter.
#### Deprecated API
Obsolete APIs on their way out are stored in the directory `lib/deprecated`.
At this stage, it contains older streaming prototypes, in `lib/deprecated/zbuff.h`.
These prototypes will be removed in some future version.
Consider migrating code towards the supported streaming API exposed in `zstd.h`.
#### Miscellaneous
The other files are not source code. They are :
- `BUCK` : support for `buck` build system (https://buckbuild.com/)
- `Makefile` : `make` script to build and install zstd library (static and dynamic)
- `README.md` : this file
- `dll/` : resources directory for Windows compilation
ext/zstd/lib/common/xxhash.h view on Meta::CPAN
* }
* hash = XXH32_digest(state); // Finalize the hash
* XXH32_freeState(state); // Clean up
* return hash;
* }
* @endcode
*/
/*!
* @typedef struct XXH32_state_s XXH32_state_t
* @brief The opaque state struct for the XXH32 streaming API.
*
* @see XXH32_state_s for details.
*/
typedef struct XXH32_state_s XXH32_state_t;
/*!
* @brief Allocates an @ref XXH32_state_t.
*
* Must be freed with XXH32_freeState().
* @return An allocated XXH32_state_t on success, `NULL` on failure.
ext/zstd/lib/common/xxhash.h view on Meta::CPAN
* @see
* XXH32(), XXH3_64bits_withSeed(), XXH3_128bits_withSeed(), XXH128():
* Direct equivalents for the other variants of xxHash.
* @see
* XXH64_createState(), XXH64_update(), XXH64_digest(): Streaming version.
*/
XXH_PUBLIC_API XXH64_hash_t XXH64(const void* input, size_t length, XXH64_hash_t seed);
/******* Streaming *******/
/*!
* @brief The opaque state struct for the XXH64 streaming API.
*
* @see XXH64_state_s for details.
*/
typedef struct XXH64_state_s XXH64_state_t; /* incomplete type */
XXH_PUBLIC_API XXH64_state_t* XXH64_createState(void);
XXH_PUBLIC_API XXH_errorcode XXH64_freeState(XXH64_state_t* statePtr);
XXH_PUBLIC_API void XXH64_copyState(XXH64_state_t* dst_state, const XXH64_state_t* src_state);
XXH_PUBLIC_API XXH_errorcode XXH64_reset (XXH64_state_t* statePtr, XXH64_hash_t seed);
XXH_PUBLIC_API XXH_errorcode XXH64_update (XXH64_state_t* statePtr, const void* input, size_t length);
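/* Usage sketch (not part of xxhash.h): hashing an input supplied in two chunks
 * with the XXH64 streaming API declared above, from a translation unit that
 * does `#include "xxhash.h"`. Parameter names are illustrative only. */
static XXH64_hash_t example_xxh64_two_chunks(const void* p1, size_t n1,
                                             const void* p2, size_t n2)
{
    XXH64_hash_t hash = 0;
    XXH64_state_t* const st = XXH64_createState();
    if (st == NULL) return 0;
    if (XXH64_reset(st, /* seed */ 0) == XXH_OK) {
        XXH64_update(st, p1, n1);
        XXH64_update(st, p2, n2);
        hash = XXH64_digest(st);   /* equals XXH64() over the concatenated input */
    }
    XXH64_freeState(st);
    return hash;
}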
ext/zstd/lib/common/xxhash.h view on Meta::CPAN
 * all implementations generate exactly the same hash value on all platforms.
* Starting from v0.8.0, it's also labelled "stable", meaning that
* any future version will also generate the same hash value.
*
* XXH3 offers 2 variants, _64bits and _128bits.
*
* When only 64 bits are needed, prefer invoking the _64bits variant, as it
* reduces the amount of mixing, resulting in faster speed on small inputs.
* It's also generally simpler to manipulate a scalar return type than a struct.
*
* The API supports one-shot hashing, streaming mode, and custom secrets.
*/
/*-**********************************************************************
* XXH3 64-bit variant
************************************************************************/
/* XXH3_64bits():
* default 64-bit variant, using default secret and default seed of 0.
* It's the fastest variant. */
XXH_PUBLIC_API XXH64_hash_t XXH3_64bits(const void* data, size_t len);
ext/zstd/lib/common/xxhash.h view on Meta::CPAN
* This is not necessarily the case when using the blob of bytes directly
* because, when hashing _small_ inputs, only a portion of the secret is employed.
*/
XXH_PUBLIC_API XXH64_hash_t XXH3_64bits_withSecret(const void* data, size_t len, const void* secret, size_t secretSize);
/******* Streaming *******/
/*
* Streaming requires state maintenance.
* This operation costs memory and CPU.
* As a consequence, streaming is slower than one-shot hashing.
* For better performance, prefer one-shot functions whenever applicable.
*/
/*!
* @brief The state struct for the XXH3 streaming API.
*
* @see XXH3_state_s for details.
*/
typedef struct XXH3_state_s XXH3_state_t;
XXH_PUBLIC_API XXH3_state_t* XXH3_createState(void);
XXH_PUBLIC_API XXH_errorcode XXH3_freeState(XXH3_state_t* statePtr);
XXH_PUBLIC_API void XXH3_copyState(XXH3_state_t* dst_state, const XXH3_state_t* src_state);
/*
* XXH3_64bits_reset():
ext/zstd/lib/common/xxhash.h view on Meta::CPAN
*/
XXH_PUBLIC_API XXH_errorcode XXH3_64bits_reset(XXH3_state_t* statePtr);
/*
* XXH3_64bits_reset_withSeed():
* Generate a custom secret from `seed`, and store it into `statePtr`.
* digest will be equivalent to `XXH3_64bits_withSeed()`.
*/
XXH_PUBLIC_API XXH_errorcode XXH3_64bits_reset_withSeed(XXH3_state_t* statePtr, XXH64_hash_t seed);
/*
* XXH3_64bits_reset_withSecret():
* `secret` is referenced, it _must outlive_ the hash streaming session.
* Similar to one-shot API, `secretSize` must be >= `XXH3_SECRET_SIZE_MIN`,
* and the quality of produced hash values depends on secret's entropy
* (secret's content should look like a bunch of random bytes).
* When in doubt about the randomness of a candidate `secret`,
* consider employing `XXH3_generateSecret()` instead (see below).
*/
XXH_PUBLIC_API XXH_errorcode XXH3_64bits_reset_withSecret(XXH3_state_t* statePtr, const void* secret, size_t secretSize);
XXH_PUBLIC_API XXH_errorcode XXH3_64bits_update (XXH3_state_t* statePtr, const void* input, size_t length);
XXH_PUBLIC_API XXH64_hash_t XXH3_64bits_digest (const XXH3_state_t* statePtr);
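/* Usage sketch (not part of xxhash.h): streaming XXH3-64 over chunked input,
 * assuming a translation unit that includes "xxhash.h". `read_chunk` is a
 * hypothetical callback returning 0 at end of input. */
static XXH64_hash_t example_xxh3_stream(size_t (*read_chunk)(void* buf, size_t cap, void* ctx),
                                        void* ctx, XXH64_hash_t seed)
{
    char buf[4096];
    XXH64_hash_t hash = 0;
    XXH3_state_t* const st = XXH3_createState();
    if (st == NULL) return 0;
    if (XXH3_64bits_reset_withSeed(st, seed) == XXH_OK) {
        size_t n;
        while ((n = read_chunk(buf, sizeof buf, ctx)) != 0)
            XXH3_64bits_update(st, buf, n);
        hash = XXH3_64bits_digest(st);   /* equals XXH3_64bits_withSeed() over the whole input */
    }
    XXH3_freeState(st);
    return hash;
}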
ext/zstd/lib/common/xxhash.h view on Meta::CPAN
} XXH128_hash_t;
XXH_PUBLIC_API XXH128_hash_t XXH3_128bits(const void* data, size_t len);
XXH_PUBLIC_API XXH128_hash_t XXH3_128bits_withSeed(const void* data, size_t len, XXH64_hash_t seed);
XXH_PUBLIC_API XXH128_hash_t XXH3_128bits_withSecret(const void* data, size_t len, const void* secret, size_t secretSize);
/******* Streaming *******/
/*
* Streaming requires state maintenance.
* This operation costs memory and CPU.
* As a consequence, streaming is slower than one-shot hashing.
* For better performance, prefer one-shot functions whenever applicable.
*
* XXH3_128bits uses the same XXH3_state_t as XXH3_64bits().
* Use already declared XXH3_createState() and XXH3_freeState().
*
* All reset and streaming functions have same meaning as their 64-bit counterpart.
*/
XXH_PUBLIC_API XXH_errorcode XXH3_128bits_reset(XXH3_state_t* statePtr);
XXH_PUBLIC_API XXH_errorcode XXH3_128bits_reset_withSeed(XXH3_state_t* statePtr, XXH64_hash_t seed);
XXH_PUBLIC_API XXH_errorcode XXH3_128bits_reset_withSecret(XXH3_state_t* statePtr, const void* secret, size_t secretSize);
XXH_PUBLIC_API XXH_errorcode XXH3_128bits_update (XXH3_state_t* statePtr, const void* input, size_t length);
XXH_PUBLIC_API XXH128_hash_t XXH3_128bits_digest (const XXH3_state_t* statePtr);
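/* Usage sketch (not part of xxhash.h): the 128-bit streaming calls mirror the
 * 64-bit ones; only the digest type changes. Assumes "xxhash.h" is included. */
static XXH128_hash_t example_xxh3_128(const void* data, size_t len)
{
    XXH128_hash_t h = { 0, 0 };
    XXH3_state_t* const st = XXH3_createState();
    if (st == NULL) return h;
    if (XXH3_128bits_reset(st) == XXH_OK) {
        XXH3_128bits_update(st, data, len);
        h = XXH3_128bits_digest(st);   /* same value as one-shot XXH3_128bits(data, len) */
    }
    XXH3_freeState(st);
    return h;
}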
/* The following helper functions make it possible to compare XXH128_hash_t values.
ext/zstd/lib/common/xxhash.h view on Meta::CPAN
***************************************************************************** */
/*
* These definitions are only present to allow static allocation
* of XXH states, on stack or in a struct, for example.
* Never **ever** access their members directly.
*/
/*!
* @internal
* @brief Structure for XXH32 streaming API.
*
* @note This is only defined when @ref XXH_STATIC_LINKING_ONLY,
* @ref XXH_INLINE_ALL, or @ref XXH_IMPLEMENTATION is defined. Otherwise it is
* an opaque type. This allows fields to safely be changed.
*
* Typedef'd to @ref XXH32_state_t.
* Do not access the members of this struct directly.
* @see XXH64_state_s, XXH3_state_s
*/
struct XXH32_state_s {
ext/zstd/lib/common/xxhash.h view on Meta::CPAN
XXH32_hash_t mem32[4]; /*!< Internal buffer for partial reads. Treated as unsigned char[16]. */
XXH32_hash_t memsize; /*!< Amount of data in @ref mem32 */
XXH32_hash_t reserved; /*!< Reserved field. Do not read nor write to it. */
}; /* typedef'd to XXH32_state_t */
#ifndef XXH_NO_LONG_LONG /* defined when there is no 64-bit support */
/*!
* @internal
* @brief Structure for XXH64 streaming API.
*
* @note This is only defined when @ref XXH_STATIC_LINKING_ONLY,
* @ref XXH_INLINE_ALL, or @ref XXH_IMPLEMENTATION is defined. Otherwise it is
* an opaque type. This allows fields to safely be changed.
*
* Typedef'd to @ref XXH64_state_t.
* Do not access the members of this struct directly.
* @see XXH32_state_s, XXH3_state_s
*/
struct XXH64_state_s {
ext/zstd/lib/common/xxhash.h view on Meta::CPAN
* @brief Default size of the secret buffer (and @ref XXH3_kSecret).
*
* This is the size used in @ref XXH3_kSecret and the seeded functions.
*
* Not to be confused with @ref XXH3_SECRET_SIZE_MIN.
*/
#define XXH3_SECRET_DEFAULT_SIZE 192
/*!
* @internal
* @brief Structure for XXH3 streaming API.
*
* @note This is only defined when @ref XXH_STATIC_LINKING_ONLY,
* @ref XXH_INLINE_ALL, or @ref XXH_IMPLEMENTATION is defined.
* Otherwise it is an opaque type.
 * Never use this definition in combination with a dynamic library.
* This allows fields to safely be changed in the future.
*
* @note ** This structure has a strict alignment requirement of 64 bytes!! **
* Do not allocate this with `malloc()` or `new`,
* it will not be sufficiently aligned.
ext/zstd/lib/common/xxhash.h view on Meta::CPAN
#undef XXH_ALIGN_MEMBER
/*!
* @brief Initializes a stack-allocated `XXH3_state_s`.
*
* When the @ref XXH3_state_t structure is merely emplaced on stack,
* it should be initialized with XXH3_INITSTATE() or a memset()
* in case its first reset uses XXH3_NNbits_reset_withSeed().
* This init can be omitted if the first reset uses default or _withSecret mode.
* This operation isn't necessary when the state is created with XXH3_createState().
* Note that this doesn't prepare the state for a streaming operation,
* it's still necessary to use XXH3_NNbits_reset*() afterwards.
*/
#define XXH3_INITSTATE(XXH3_state_ptr) { (XXH3_state_ptr)->seed = 0; }
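/* Usage sketch (not part of xxhash.h): a stack-allocated state as described
 * above. The struct definition requires XXH_STATIC_LINKING_ONLY (or
 * XXH_INLINE_ALL) to be defined before including "xxhash.h" in the caller. */
static XXH64_hash_t example_xxh3_stack_state(const void* data, size_t len, XXH64_hash_t seed)
{
    XXH3_state_t state;            /* alignment is carried by the type itself */
    XXH3_INITSTATE(&state);        /* needed because the first reset uses a seed */
    XXH3_64bits_reset_withSeed(&state, seed);
    XXH3_64bits_update(&state, data, len);
    return XXH3_64bits_digest(&state);
}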
/* XXH128() :
* simple alias to pre-selected XXH3_128bits variant
*/
XXH_PUBLIC_API XXH128_hash_t XXH128(const void* data, size_t len, XXH64_hash_t seed);
ext/zstd/lib/common/xxhash.h view on Meta::CPAN
if ((((size_t)input) & 3) == 0) { /* Input is 4-bytes aligned, leverage the speed benefit */
return XXH32_endian_align((const xxh_u8*)input, len, seed, XXH_aligned);
} }
return XXH32_endian_align((const xxh_u8*)input, len, seed, XXH_unaligned);
#endif
}
/******* Hash streaming *******/
/*!
* @ingroup xxh32_family
*/
XXH_PUBLIC_API XXH32_state_t* XXH32_createState(void)
{
return (XXH32_state_t*)XXH_malloc(sizeof(XXH32_state_t));
}
/*! @ingroup xxh32_family */
XXH_PUBLIC_API XXH_errorcode XXH32_freeState(XXH32_state_t* statePtr)
{
ext/zstd/lib/common/xxhash.h view on Meta::CPAN
XXH_PUBLIC_API XXH64_hash_t
XXH3_64bits_withSecretandSeed(const void* input, size_t len, const void* secret, size_t secretSize, XXH64_hash_t seed)
{
if (len <= XXH3_MIDSIZE_MAX)
return XXH3_64bits_internal(input, len, seed, XXH3_kSecret, sizeof(XXH3_kSecret), NULL);
return XXH3_hashLong_64b_withSecret(input, len, seed, (const xxh_u8*)secret, secretSize);
}
/* === XXH3 streaming === */
/*
 * Allocates a pointer that is always aligned to align.
*
* This must be freed with `XXH_alignedFree()`.
*
* malloc typically guarantees 16 byte alignment on 64-bit systems and 8 byte
* alignment on 32-bit. This isn't enough for the 32 byte aligned loads in AVX2
* or on 32-bit, the 16 byte aligned loads in SSE2 and NEON.
*
ext/zstd/lib/common/xxhash.h view on Meta::CPAN
}
/*! @ingroup xxh3_family */
XXH_PUBLIC_API XXH128_hash_t
XXH128(const void* input, size_t len, XXH64_hash_t seed)
{
return XXH3_128bits_withSeed(input, len, seed);
}
/* === XXH3 128-bit streaming === */
/*
 * All initialization and update functions are identical to the 64-bit streaming variant.
* The only difference is the finalization routine.
*/
/*! @ingroup xxh3_family */
XXH_PUBLIC_API XXH_errorcode
XXH3_128bits_reset(XXH3_state_t* statePtr)
{
return XXH3_64bits_reset(statePtr);
}
ext/zstd/lib/common/zstd_trace.h view on Meta::CPAN
/**
* ZSTD_VERSION_NUMBER
*
* This is guaranteed to be the first member of ZSTD_trace.
* Otherwise, this struct is not stable between versions. If
* the version number does not match your expectation, you
* should not interpret the rest of the struct.
*/
unsigned version;
/**
* Non-zero if streaming (de)compression is used.
*/
unsigned streaming;
/**
* The dictionary ID.
*/
unsigned dictionaryID;
/**
* Is the dictionary cold?
* Only set on decompression.
*/
unsigned dictionaryIsCold;
/**
ext/zstd/lib/compress/zstd_compress.c view on Meta::CPAN
#ifndef ZSTD_HASHLOG3_MAX
# define ZSTD_HASHLOG3_MAX 17
#endif
/*-*************************************
* Helper functions
***************************************/
/* ZSTD_compressBound()
* Note that the result from this function is only valid for
* the one-pass compression functions.
* When employing the streaming mode,
* if flushes are frequently altering the size of blocks,
* the overhead from block headers can make the compressed data larger
* than the return value of ZSTD_compressBound().
*/
size_t ZSTD_compressBound(size_t srcSize) {
size_t const r = ZSTD_COMPRESSBOUND(srcSize);
if (r==0) return ERROR(srcSize_wrong);
return r;
}
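/* Usage sketch (not part of zstd_compress.c): sizing a one-shot destination
 * buffer with ZSTD_compressBound(), per the note above. Assumes <zstd.h> and
 * <stdlib.h>; the function name and out-parameter are illustrative only. */
static size_t example_compress_once(void** dstOut, const void* src, size_t srcSize, int level)
{
    size_t const bound = ZSTD_compressBound(srcSize);   /* worst case for one-pass compression */
    void* const dst = malloc(bound);
    if (dst == NULL) return 0;
    {   size_t const cSize = ZSTD_compress(dst, bound, src, srcSize, level);
        if (ZSTD_isError(cSize)) { free(dst); return 0; }
        *dstOut = dst;
        return cSize;
    }
}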
ext/zstd/lib/compress/zstd_compress.c view on Meta::CPAN
}
#endif
{ ZSTD_frameProgression fp;
size_t const buffered = (cctx->inBuff == NULL) ? 0 :
cctx->inBuffPos - cctx->inToCompress;
if (buffered) assert(cctx->inBuffPos >= cctx->inToCompress);
assert(buffered <= ZSTD_BLOCKSIZE_MAX);
fp.ingested = cctx->consumedSrcSize + buffered;
fp.consumed = cctx->consumedSrcSize;
fp.produced = cctx->producedCSize;
fp.flushed = cctx->producedCSize; /* simplified; some data might still be left within streaming output buffer */
fp.currentJobID = 0;
fp.nbActiveWorkers = 0;
return fp;
} }
/*! ZSTD_toFlushNow()
* Only useful for multithreading scenarios currently (nbWorkers >= 1).
*/
size_t ZSTD_toFlushNow(ZSTD_CCtx* cctx)
{
#ifdef ZSTD_MULTITHREAD
if (cctx->appliedParams.nbWorkers > 0) {
return ZSTDMT_toFlushNow(cctx->mtctx);
}
#endif
(void)cctx;
return 0; /* over-simplification; could also check whether the context is currently running in streaming mode, in which case it would report how many bytes are left to be flushed within the output buffer */
}
static void ZSTD_assertEqualCParams(ZSTD_compressionParameters cParams1,
ZSTD_compressionParameters cParams2)
{
(void)cParams1;
(void)cParams2;
assert(cParams1.windowLog == cParams2.windowLog);
assert(cParams1.chainLog == cParams2.chainLog);
assert(cParams1.hashLog == cParams2.hashLog);
ext/zstd/lib/compress/zstd_compress.c view on Meta::CPAN
}
cctx->stage = ZSTDcs_created; /* return to "created but no init" status */
return op-ostart;
}
void ZSTD_CCtx_trace(ZSTD_CCtx* cctx, size_t extraCSize)
{
#if ZSTD_TRACE
if (cctx->traceCtx && ZSTD_trace_compress_end != NULL) {
int const streaming = cctx->inBuffSize > 0 || cctx->outBuffSize > 0 || cctx->appliedParams.nbWorkers > 0;
ZSTD_Trace trace;
ZSTD_memset(&trace, 0, sizeof(trace));
trace.version = ZSTD_VERSION_NUMBER;
trace.streaming = streaming;
trace.dictionaryID = cctx->dictID;
trace.dictionarySize = cctx->dictContentSize;
trace.uncompressedSize = cctx->consumedSrcSize;
trace.compressedSize = cctx->producedCSize + extraCSize;
trace.params = &cctx->appliedParams;
trace.cctx = cctx;
ZSTD_trace_compress_end(cctx->traceCtx, &trace);
}
cctx->traceCtx = 0;
#else
ext/zstd/lib/compress/zstd_compress_internal.h view on Meta::CPAN
int initialized;
seqStore_t seqStore; /* sequences storage ptrs */
ldmState_t ldmState; /* long distance matching state */
rawSeq* ldmSequences; /* Storage for the ldm output sequences */
size_t maxNbLdmSequences;
rawSeqStore_t externSeqStore; /* Mutable reference to external sequences */
ZSTD_blockState_t blockState;
U32* entropyWorkspace; /* entropy workspace of ENTROPY_WORKSPACE_SIZE bytes */
/* Whether we are streaming or not */
ZSTD_buffered_policy_e bufferedPolicy;
/* streaming */
char* inBuff;
size_t inBuffSize;
size_t inToCompress;
size_t inBuffPos;
size_t inBuffTarget;
char* outBuff;
size_t outBuffSize;
size_t outBuffContentSize;
size_t outBuffFlushedSize;
ZSTD_cStreamStage streamStage;
ext/zstd/lib/compress/zstd_compress_internal.h view on Meta::CPAN
/* ZSTD_getCParamsFromCCtxParams() :
* cParams are built depending on compressionLevel, src size hints,
* LDM and manually set compression parameters.
* Note: srcSizeHint == 0 means 0!
*/
ZSTD_compressionParameters ZSTD_getCParamsFromCCtxParams(
const ZSTD_CCtx_params* CCtxParams, U64 srcSizeHint, size_t dictSize, ZSTD_cParamMode_e mode);
/*! ZSTD_initCStream_internal() :
* Private use only. Init streaming operation.
* expects params to be valid.
* must receive dict, or cdict, or none, but not both.
* @return : 0, or an error code */
size_t ZSTD_initCStream_internal(ZSTD_CStream* zcs,
const void* dict, size_t dictSize,
const ZSTD_CDict* cdict,
const ZSTD_CCtx_params* params, unsigned long long pledgedSrcSize);
void ZSTD_resetSeqStore(seqStore_t* ssPtr);
ext/zstd/lib/compress/zstdmt_compress.c view on Meta::CPAN
{
ZSTDMT_jobDescription* const job = (ZSTDMT_jobDescription*)jobDescription;
ZSTD_CCtx_params jobParams = job->params; /* do not modify job->params ! copy it, modify the copy */
ZSTD_CCtx* const cctx = ZSTDMT_getCCtx(job->cctxPool);
rawSeqStore_t rawSeqStore = ZSTDMT_getSeq(job->seqPool);
buffer_t dstBuff = job->dstBuff;
size_t lastCBlockSize = 0;
/* resources */
if (cctx==NULL) JOB_ERROR(ERROR(memory_allocation));
if (dstBuff.start == NULL) { /* streaming job : doesn't provide a dstBuffer */
dstBuff = ZSTDMT_getBuffer(job->bufPool);
if (dstBuff.start==NULL) JOB_ERROR(ERROR(memory_allocation));
job->dstBuff = dstBuff; /* this value can be read in ZSTDMT_flush, when it copies the whole job */
}
if (jobParams.ldmParams.enableLdm == ZSTD_ps_enable && rawSeqStore.seq == NULL)
JOB_ERROR(ERROR(memory_allocation));
/* Don't compute the checksum for chunks, since we compute it externally,
* but write it in the header.
*/
ext/zstd/lib/compress/zstdmt_compress.c view on Meta::CPAN
mtctx->produced = 0;
if (ZSTDMT_serialState_reset(&mtctx->serial, mtctx->seqPool, params, mtctx->targetSectionSize,
dict, dictSize, dictContentType))
return ERROR(memory_allocation);
return 0;
}
/* ZSTDMT_writeLastEmptyBlock()
* Write a single empty block with an end-of-frame to finish a frame.
* Job must be created from streaming variant.
* This function is always successful if expected conditions are fulfilled.
*/
static void ZSTDMT_writeLastEmptyBlock(ZSTDMT_jobDescription* job)
{
assert(job->lastJob == 1);
assert(job->src.size == 0); /* last job is empty -> will be simplified into a last empty block */
assert(job->firstJob == 0); /* cannot be first job, as it also needs to create frame header */
assert(job->dstBuff.start == NULL); /* invoked from streaming variant only (otherwise, dstBuff might be user's output) */
job->dstBuff = ZSTDMT_getBuffer(job->bufPool);
if (job->dstBuff.start == NULL) {
job->cSize = ERROR(memory_allocation);
return;
}
assert(job->dstBuff.capacity >= ZSTD_blockHeaderSize); /* no buffer should ever be that small */
job->src = kNullRange;
job->cSize = ZSTD_writeLastEmptyBlock(job->dstBuff.start, job->dstBuff.capacity);
assert(!ZSTD_isError(job->cSize));
assert(job->consumed == 0);
ext/zstd/lib/compress/zstdmt_compress.h view on Meta::CPAN
ZSTD_threadPool *pool);
size_t ZSTDMT_freeCCtx(ZSTDMT_CCtx* mtctx);
size_t ZSTDMT_sizeof_CCtx(ZSTDMT_CCtx* mtctx);
/* === Streaming functions === */
size_t ZSTDMT_nextInputSizeHint(const ZSTDMT_CCtx* mtctx);
/*! ZSTDMT_initCStream_internal() :
* Private use only. Init streaming operation.
* expects params to be valid.
* must receive dict, or cdict, or none, but not both.
* mtctx can be freshly constructed or reused from a prior compression.
* If mtctx is reused, memory allocations from the prior compression may not be freed,
* even if they are not needed for the current compression.
* @return : 0, or an error code */
size_t ZSTDMT_initCStream_internal(ZSTDMT_CCtx* mtctx,
const void* dict, size_t dictSize, ZSTD_dictContentType_e dictContentType,
const ZSTD_CDict* cdict,
ZSTD_CCtx_params params, unsigned long long pledgedSrcSize);
ext/zstd/lib/decompress/zstd_decompress.c view on Meta::CPAN
{
RETURN_ERROR_IF(regenSize > dstCapacity, dstSize_tooSmall, "");
if (dst == NULL) {
if (regenSize == 0) return 0;
RETURN_ERROR(dstBuffer_null, "");
}
ZSTD_memset(dst, b, regenSize);
return regenSize;
}
static void ZSTD_DCtx_trace_end(ZSTD_DCtx const* dctx, U64 uncompressedSize, U64 compressedSize, unsigned streaming)
{
#if ZSTD_TRACE
if (dctx->traceCtx && ZSTD_trace_decompress_end != NULL) {
ZSTD_Trace trace;
ZSTD_memset(&trace, 0, sizeof(trace));
trace.version = ZSTD_VERSION_NUMBER;
trace.streaming = streaming;
if (dctx->ddict) {
trace.dictionaryID = ZSTD_getDictID_fromDDict(dctx->ddict);
trace.dictionarySize = ZSTD_DDict_dictSize(dctx->ddict);
trace.dictionaryIsCold = dctx->ddictIsCold;
}
trace.uncompressedSize = (size_t)uncompressedSize;
trace.compressedSize = (size_t)compressedSize;
trace.dctx = dctx;
ZSTD_trace_decompress_end(dctx->traceCtx, &trace);
}
#else
(void)dctx;
(void)uncompressedSize;
(void)compressedSize;
(void)streaming;
#endif
}
/*! ZSTD_decompressFrame() :
* @dctx must be properly initialized
* will update *srcPtr and *srcSizePtr,
* to make *srcPtr progress by one frame. */
static size_t ZSTD_decompressFrame(ZSTD_DCtx* dctx,
void* dst, size_t dstCapacity,
ext/zstd/lib/decompress/zstd_decompress.c view on Meta::CPAN
* ZSTD_decompressBlock_internal to never write past ip.
*
* See ZSTD_allocateLiteralsBuffer() for reference.
*/
oBlockEnd = op + (ip - op);
}
switch(blockProperties.blockType)
{
case bt_compressed:
decodedSize = ZSTD_decompressBlock_internal(dctx, op, (size_t)(oBlockEnd-op), ip, cBlockSize, /* frame */ 1, not_streaming);
break;
case bt_raw :
/* Use oend instead of oBlockEnd because this function is safe to overlap. It uses memmove. */
decodedSize = ZSTD_copyRawBlock(op, (size_t)(oend-op), ip, cBlockSize);
break;
case bt_rle :
decodedSize = ZSTD_setRleBlock(op, (size_t)(oBlockEnd-op), *ip, blockProperties.origSize);
break;
case bt_reserved :
default:
ext/zstd/lib/decompress/zstd_decompress.c view on Meta::CPAN
RETURN_ERROR_IF(remainingSrcSize<4, checksum_wrong, "");
if (!dctx->forceIgnoreChecksum) {
U32 const checkCalc = (U32)XXH64_digest(&dctx->xxhState);
U32 checkRead;
checkRead = MEM_readLE32(ip);
RETURN_ERROR_IF(checkRead != checkCalc, checksum_wrong, "");
}
ip += 4;
remainingSrcSize -= 4;
}
ZSTD_DCtx_trace_end(dctx, (U64)(op-ostart), (U64)(ip-istart), /* streaming */ 0);
/* Allow caller to get size read */
DEBUGLOG(4, "ZSTD_decompressFrame: decompressed frame of size %zi, consuming %zi bytes of input", op-ostart, ip - (const BYTE*)*srcPtr);
*srcPtr = ip;
*srcSizePtr = remainingSrcSize;
return (size_t)(op-ostart);
}
static size_t ZSTD_decompressMultiFrame(ZSTD_DCtx* dctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize,
ext/zstd/lib/decompress/zstd_decompress.c view on Meta::CPAN
}
case ZSTDds_decompressLastBlock:
case ZSTDds_decompressBlock:
DEBUGLOG(5, "ZSTD_decompressContinue: case ZSTDds_decompressBlock");
{ size_t rSize;
switch(dctx->bType)
{
case bt_compressed:
DEBUGLOG(5, "ZSTD_decompressContinue: case bt_compressed");
rSize = ZSTD_decompressBlock_internal(dctx, dst, dstCapacity, src, srcSize, /* frame */ 1, is_streaming);
dctx->expected = 0; /* Streaming not supported */
break;
case bt_raw :
assert(srcSize <= dctx->expected);
rSize = ZSTD_copyRawBlock(dst, dstCapacity, src, srcSize);
FORWARD_IF_ERROR(rSize, "ZSTD_copyRawBlock failed");
assert(rSize == srcSize);
dctx->expected -= rSize;
break;
case bt_rle :
ext/zstd/lib/decompress/zstd_decompress.c view on Meta::CPAN
default:
RETURN_ERROR(corruption_detected, "invalid block type");
}
FORWARD_IF_ERROR(rSize, "");
RETURN_ERROR_IF(rSize > dctx->fParams.blockSizeMax, corruption_detected, "Decompressed Block Size Exceeds Maximum");
DEBUGLOG(5, "ZSTD_decompressContinue: decoded size from block : %u", (unsigned)rSize);
dctx->decodedSize += rSize;
if (dctx->validateChecksum) XXH64_update(&dctx->xxhState, dst, rSize);
dctx->previousDstEnd = (char*)dst + rSize;
/* Stay on the same stage until we are finished streaming the block. */
if (dctx->expected > 0) {
return rSize;
}
if (dctx->stage == ZSTDds_decompressLastBlock) { /* end of frame */
DEBUGLOG(4, "ZSTD_decompressContinue: decoded size from frame : %u", (unsigned)dctx->decodedSize);
RETURN_ERROR_IF(
dctx->fParams.frameContentSize != ZSTD_CONTENTSIZE_UNKNOWN
&& dctx->decodedSize != dctx->fParams.frameContentSize,
corruption_detected, "");
if (dctx->fParams.checksumFlag) { /* another round for frame checksum */
dctx->expected = 4;
dctx->stage = ZSTDds_checkChecksum;
} else {
ZSTD_DCtx_trace_end(dctx, dctx->decodedSize, dctx->processedCSize, /* streaming */ 1);
dctx->expected = 0; /* ends here */
dctx->stage = ZSTDds_getFrameHeaderSize;
}
} else {
dctx->stage = ZSTDds_decodeBlockHeader;
dctx->expected = ZSTD_blockHeaderSize;
}
return rSize;
}
case ZSTDds_checkChecksum:
assert(srcSize == 4); /* guaranteed by dctx->expected */
{
if (dctx->validateChecksum) {
U32 const h32 = (U32)XXH64_digest(&dctx->xxhState);
U32 const check32 = MEM_readLE32(src);
DEBUGLOG(4, "ZSTD_decompressContinue: checksum : calculated %08X :: %08X read", (unsigned)h32, (unsigned)check32);
RETURN_ERROR_IF(check32 != h32, checksum_wrong, "");
}
ZSTD_DCtx_trace_end(dctx, dctx->decodedSize, dctx->processedCSize, /* streaming */ 1);
dctx->expected = 0;
dctx->stage = ZSTDds_getFrameHeaderSize;
return 0;
}
case ZSTDds_decodeSkippableHeader:
assert(src != NULL);
assert(srcSize <= ZSTD_SKIPPABLEHEADERSIZE);
ZSTD_memcpy(dctx->headerBuffer + (ZSTD_SKIPPABLEHEADERSIZE - srcSize), src, srcSize); /* complete skippable header */
dctx->expected = MEM_readLE32(dctx->headerBuffer + ZSTD_FRAMEIDSIZE); /* note : dctx->expected can grow seriously large, beyond local buffer size */
ext/zstd/lib/decompress/zstd_decompress_block.c view on Meta::CPAN
bpPtr->blockType = (blockType_e)((cBlockHeader >> 1) & 3);
bpPtr->origSize = cSize; /* only useful for RLE */
if (bpPtr->blockType == bt_rle) return 1;
RETURN_ERROR_IF(bpPtr->blockType == bt_reserved, corruption_detected, "");
return cSize;
}
}
/* Allocate buffer for literals, either overlapping current dst, or split between dst and litExtraBuffer, or stored entirely within litExtraBuffer */
static void ZSTD_allocateLiteralsBuffer(ZSTD_DCtx* dctx, void* const dst, const size_t dstCapacity, const size_t litSize,
const streaming_operation streaming, const size_t expectedWriteSize, const unsigned splitImmediately)
{
if (streaming == not_streaming && dstCapacity > ZSTD_BLOCKSIZE_MAX + WILDCOPY_OVERLENGTH + litSize + WILDCOPY_OVERLENGTH)
{
/* room for litbuffer to fit without read faulting */
dctx->litBuffer = (BYTE*)dst + ZSTD_BLOCKSIZE_MAX + WILDCOPY_OVERLENGTH;
dctx->litBufferEnd = dctx->litBuffer + litSize;
dctx->litBufferLocation = ZSTD_in_dst;
}
else if (litSize > ZSTD_LITBUFFEREXTRASIZE)
{
/* won't fit in litExtraBuffer, so it will be split between end of dst and extra buffer */
if (splitImmediately) {
ext/zstd/lib/decompress/zstd_decompress_block.c view on Meta::CPAN
/* fits entirely within litExtraBuffer, so no split is necessary */
dctx->litBuffer = dctx->litExtraBuffer;
dctx->litBufferEnd = dctx->litBuffer + litSize;
dctx->litBufferLocation = ZSTD_not_in_dst;
}
}
/* Hidden declaration for fullbench */
size_t ZSTD_decodeLiteralsBlock(ZSTD_DCtx* dctx,
const void* src, size_t srcSize,
void* dst, size_t dstCapacity, const streaming_operation streaming);
/*! ZSTD_decodeLiteralsBlock() :
* Where it is possible to do so without being stomped by the output during decompression, the literals block will be stored
* in the dstBuffer. If there is room to do so, it will be stored in full in the excess dst space after where the current
* block will be output. Otherwise it will be stored at the end of the current dst blockspace, with a small portion being
* stored in dctx->litExtraBuffer to help keep it "ahead" of the current output write.
*
* @return : nb of bytes read from src (< srcSize )
* note : symbol not declared but exposed for fullbench */
size_t ZSTD_decodeLiteralsBlock(ZSTD_DCtx* dctx,
const void* src, size_t srcSize, /* note : srcSize < BLOCKSIZE */
void* dst, size_t dstCapacity, const streaming_operation streaming)
{
DEBUGLOG(5, "ZSTD_decodeLiteralsBlock");
RETURN_ERROR_IF(srcSize < MIN_CBLOCK_SIZE, corruption_detected, "");
{ const BYTE* const istart = (const BYTE*) src;
symbolEncodingType_e const litEncType = (symbolEncodingType_e)(istart[0] & 3);
switch(litEncType)
{
case set_repeat:
ext/zstd/lib/decompress/zstd_decompress_block.c view on Meta::CPAN
break;
}
RETURN_ERROR_IF(litSize > 0 && dst == NULL, dstSize_tooSmall, "NULL not handled");
RETURN_ERROR_IF(litSize > ZSTD_BLOCKSIZE_MAX, corruption_detected, "");
if (!singleStream)
RETURN_ERROR_IF(litSize < MIN_LITERALS_FOR_4_STREAMS, literals_headerWrong,
"Not enough literals (%zu) for the 4-streams mode (min %u)",
litSize, MIN_LITERALS_FOR_4_STREAMS);
RETURN_ERROR_IF(litCSize + lhSize > srcSize, corruption_detected, "");
RETURN_ERROR_IF(expectedWriteSize < litSize , dstSize_tooSmall, "");
ZSTD_allocateLiteralsBuffer(dctx, dst, dstCapacity, litSize, streaming, expectedWriteSize, 0);
/* prefetch huffman table if cold */
if (dctx->ddictIsCold && (litSize > 768 /* heuristic */)) {
PREFETCH_AREA(dctx->HUFptr, sizeof(dctx->entropy.hufTable));
}
if (litEncType==set_repeat) {
if (singleStream) {
hufSuccess = HUF_decompress1X_usingDTable(
dctx->litBuffer, litSize, istart+lhSize, litCSize,
ext/zstd/lib/decompress/zstd_decompress_block.c view on Meta::CPAN
break;
case 3:
lhSize = 3;
RETURN_ERROR_IF(srcSize<3, corruption_detected, "srcSize >= MIN_CBLOCK_SIZE == 2; here we need lhSize = 3");
litSize = MEM_readLE24(istart) >> 4;
break;
}
RETURN_ERROR_IF(litSize > 0 && dst == NULL, dstSize_tooSmall, "NULL not handled");
RETURN_ERROR_IF(expectedWriteSize < litSize, dstSize_tooSmall, "");
ZSTD_allocateLiteralsBuffer(dctx, dst, dstCapacity, litSize, streaming, expectedWriteSize, 1);
if (lhSize+litSize+WILDCOPY_OVERLENGTH > srcSize) { /* risk reading beyond src buffer with wildcopy */
RETURN_ERROR_IF(litSize+lhSize > srcSize, corruption_detected, "");
if (dctx->litBufferLocation == ZSTD_split)
{
ZSTD_memcpy(dctx->litBuffer, istart + lhSize, litSize - ZSTD_LITBUFFEREXTRASIZE);
ZSTD_memcpy(dctx->litExtraBuffer, istart + lhSize + litSize - ZSTD_LITBUFFEREXTRASIZE, ZSTD_LITBUFFEREXTRASIZE);
}
else
{
ZSTD_memcpy(dctx->litBuffer, istart + lhSize, litSize);
ext/zstd/lib/decompress/zstd_decompress_block.c view on Meta::CPAN
break;
case 3:
lhSize = 3;
RETURN_ERROR_IF(srcSize<4, corruption_detected, "srcSize >= MIN_CBLOCK_SIZE == 2; here we need lhSize+1 = 4");
litSize = MEM_readLE24(istart) >> 4;
break;
}
RETURN_ERROR_IF(litSize > 0 && dst == NULL, dstSize_tooSmall, "NULL not handled");
RETURN_ERROR_IF(litSize > ZSTD_BLOCKSIZE_MAX, corruption_detected, "");
RETURN_ERROR_IF(expectedWriteSize < litSize, dstSize_tooSmall, "");
ZSTD_allocateLiteralsBuffer(dctx, dst, dstCapacity, litSize, streaming, expectedWriteSize, 1);
if (dctx->litBufferLocation == ZSTD_split)
{
ZSTD_memset(dctx->litBuffer, istart[lhSize], litSize - ZSTD_LITBUFFEREXTRASIZE);
ZSTD_memset(dctx->litExtraBuffer, istart[lhSize], ZSTD_LITBUFFEREXTRASIZE);
}
else
{
ZSTD_memset(dctx->litBuffer, istart[lhSize], litSize);
}
dctx->litPtr = dctx->litBuffer;
ext/zstd/lib/decompress/zstd_decompress_block.c view on Meta::CPAN
size_t const maxOffbase = ((size_t)1 << (STREAM_ACCUMULATOR_MIN + 1)) - 1;
size_t const maxOffset = maxOffbase - ZSTD_REP_NUM;
assert(ZSTD_highbit32((U32)maxOffbase) == STREAM_ACCUMULATOR_MIN);
return maxOffset;
}
}
size_t
ZSTD_decompressBlock_internal(ZSTD_DCtx* dctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize, const int frame, const streaming_operation streaming)
{ /* blockType == blockCompressed */
const BYTE* ip = (const BYTE*)src;
DEBUGLOG(5, "ZSTD_decompressBlock_internal (size : %u)", (U32)srcSize);
/* Note : the wording of the specification
* allows compressed block to be sized exactly ZSTD_BLOCKSIZE_MAX.
* This generally does not happen, as it makes little sense,
* since an uncompressed block would feature same size and have no decompression cost.
 * Also, note that decoders from the reference libzstd before v1.5.4
 * would consider this edge case an error.
* As a consequence, avoid generating compressed blocks of size ZSTD_BLOCKSIZE_MAX
* for broader compatibility with the deployed ecosystem of zstd decoders */
RETURN_ERROR_IF(srcSize > ZSTD_BLOCKSIZE_MAX, srcSize_wrong, "");
/* Decode literals section */
{ size_t const litCSize = ZSTD_decodeLiteralsBlock(dctx, src, srcSize, dst, dstCapacity, streaming);
DEBUGLOG(5, "ZSTD_decodeLiteralsBlock : cSize=%u, nbLiterals=%zu", (U32)litCSize, dctx->litSize);
if (ZSTD_isError(litCSize)) return litCSize;
ip += litCSize;
srcSize -= litCSize;
}
/* Build Decoding Tables */
{
/* Compute the maximum block size, which must also work when !frame and fParams are unset.
* Additionally, take the min with dstCapacity to ensure that the totalHistorySize fits in a size_t.
ext/zstd/lib/decompress/zstd_decompress_block.c view on Meta::CPAN
}
}
size_t ZSTD_decompressBlock_deprecated(ZSTD_DCtx* dctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize)
{
size_t dSize;
ZSTD_checkContinuity(dctx, dst, dstCapacity);
dSize = ZSTD_decompressBlock_internal(dctx, dst, dstCapacity, src, srcSize, /* frame */ 0, not_streaming);
dctx->previousDstEnd = (char*)dst + dSize;
return dSize;
}
/* NOTE: Must just wrap ZSTD_decompressBlock_deprecated() */
size_t ZSTD_decompressBlock(ZSTD_DCtx* dctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize)
{
ext/zstd/lib/decompress/zstd_decompress_block.h view on Meta::CPAN
*/
/* note: prototypes already published within `zstd_internal.h` :
* ZSTD_getcBlockSize()
* ZSTD_decodeSeqHeaders()
*/
/* Streaming state is used to inform allocation of the literal buffer */
typedef enum {
not_streaming = 0,
is_streaming = 1
} streaming_operation;
/* ZSTD_decompressBlock_internal() :
* decompress block, starting at `src`,
* into destination buffer `dst`.
* @return : decompressed block size,
* or an error code (which can be tested using ZSTD_isError())
*/
size_t ZSTD_decompressBlock_internal(ZSTD_DCtx* dctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize, const int frame, const streaming_operation streaming);
/* ZSTD_buildFSETable() :
* generate FSE decoding table for one symbol (ll, ml or off)
* this function must be called with valid parameters only
* (dt is large enough, normalizedCounter distribution total is a power of 2, max is within range, etc.)
* in which case it cannot fail.
* The workspace must be 4-byte aligned and at least ZSTD_BUILD_FSE_TABLE_WKSP_SIZE bytes, which is
* defined in zstd_decompress_internal.h.
* Internal use only.
*/
ext/zstd/lib/decompress/zstd_decompress_internal.h view on Meta::CPAN
/* dictionary */
ZSTD_DDict* ddictLocal;
const ZSTD_DDict* ddict; /* set by ZSTD_initDStream_usingDDict(), or ZSTD_DCtx_refDDict() */
U32 dictID;
int ddictIsCold; /* if == 1 : dictionary is "new" for working context, and presumed "cold" (not in cpu cache) */
ZSTD_dictUses_e dictUses;
ZSTD_DDictHashSet* ddictSet; /* Hash set for multiple ddicts */
ZSTD_refMultipleDDicts_e refMultipleDDicts; /* User specified: if == 1, will allow references to multiple DDicts. Default == 0 (disabled) */
int disableHufAsm;
/* streaming */
ZSTD_dStreamStage streamStage;
char* inBuff;
size_t inBuffSize;
size_t inPos;
size_t maxWindowSize;
char* outBuff;
size_t outBuffSize;
size_t outStart;
size_t outEnd;
size_t lhSize;