From a43299f798377fb7a40e6196c89445464be78aa4 Mon Sep 17 00:00:00 2001
From: pengrunlin
Date: Mon, 4 Aug 2025 19:17:38 +0800
Subject: [PATCH 1/5] Add the BPSF algorithm usage guide
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 README.md    | 235 +++---------------------------------------------
 doc/Setup.md | 250 +++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 263 insertions(+), 222 deletions(-)
 create mode 100644 doc/Setup.md

diff --git a/README.md b/README.md
index 0f7478e10..e4c01bd01 100644
--- a/README.md
+++ b/README.md
@@ -1,227 +1,18 @@
-
-
-Zstandard
-
+# KSAL BPSF
+## Introduction
+The Storage Algorithm Acceleration Library (KSAL) is Huawei's self-developed storage algorithm acceleration library. It currently includes the BPSF algorithm, EC algorithm, CRC16 T10DIF algorithm, CRC32C algorithm, optimized memcpy algorithm, DAS intelligent prefetch algorithm, and the zstd compression algorithm for Ceph ten-billion-object storage metadata. For a detailed introduction to KSAL's features, refer to the Storage Acceleration Algorithm Library documentation.
+
-__Zstandard__, or `zstd` as short version, is a fast lossless compression algorithm,
-targeting real-time compression scenarios at zlib-level and better compression ratios.
-It's backed by a very fast entropy stage, provided by [Huff0 and FSE library](https://github.com/Cyan4973/FiniteStateEntropy).
+## Repository description
+This repository is mainly used to build and install the KSAL BPSF algorithm package.
+
-Zstandard's format is stable and documented in [RFC8878](https://datatracker.ietf.org/doc/html/rfc8878). Multiple independent implementations are already available.
-This repository represents the reference implementation, provided as an open-source dual [BSD](LICENSE) OR [GPLv2](COPYING) licensed **C** library,
-and a command line utility producing and decoding `.zst`, `.gz`, `.xz` and `.lz4` files.
-Should your project require another programming language,
-a list of known ports and bindings is provided on [Zstandard homepage](https://facebook.github.io/zstd/#other-languages).
+## Supported CPUs
+Huawei Kunpeng processors
+
-**Development branch status:**
+## Supported software versions
+zstd 1.5.6
+
-[![Build Status][travisDevBadge]][travisLink]
-[![Build status][CircleDevBadge]][CircleLink]
-[![Build status][CirrusDevBadge]][CirrusLink]
-[![Fuzzing Status][OSSFuzzBadge]][OSSFuzzLink]
+## Supported operating systems
+openEuler
+
-[travisDevBadge]: https://api.travis-ci.com/facebook/zstd.svg?branch=dev "Continuous Integration test suite"
-[travisLink]: https://travis-ci.com/facebook/zstd
-[CircleDevBadge]: https://circleci.com/gh/facebook/zstd/tree/dev.svg?style=shield "Short test suite"
-[CircleLink]: https://circleci.com/gh/facebook/zstd
-[CirrusDevBadge]: https://api.cirrus-ci.com/github/facebook/zstd.svg?branch=dev
-[CirrusLink]: https://cirrus-ci.com/github/facebook/zstd
-[OSSFuzzBadge]: https://oss-fuzz-build-logs.storage.googleapis.com/badges/zstd.svg
-[OSSFuzzLink]: https://bugs.chromium.org/p/oss-fuzz/issues/list?sort=-opened&can=1&q=proj:zstd
-
-## Benchmarks
-
-For reference, several fast compression algorithms were tested and compared
-on a desktop running Ubuntu 20.04 (`Linux 5.11.0-41-generic`),
-with a Core i7-9700K CPU @ 4.9GHz,
-using [lzbench], an open-source in-memory benchmark by @inikep
-compiled with [gcc] 9.3.0,
-on the [Silesia compression corpus].
- -[lzbench]: https://github.com/inikep/lzbench -[Silesia compression corpus]: https://sun.aei.polsl.pl//~sdeor/index.php?page=silesia -[gcc]: https://gcc.gnu.org/ - -| Compressor name | Ratio | Compression| Decompress.| -| --------------- | ------| -----------| ---------- | -| **zstd 1.5.1 -1** | 2.887 | 530 MB/s | 1700 MB/s | -| [zlib] 1.2.11 -1 | 2.743 | 95 MB/s | 400 MB/s | -| brotli 1.0.9 -0 | 2.702 | 395 MB/s | 450 MB/s | -| **zstd 1.5.1 --fast=1** | 2.437 | 600 MB/s | 2150 MB/s | -| **zstd 1.5.1 --fast=3** | 2.239 | 670 MB/s | 2250 MB/s | -| quicklz 1.5.0 -1 | 2.238 | 540 MB/s | 760 MB/s | -| **zstd 1.5.1 --fast=4** | 2.148 | 710 MB/s | 2300 MB/s | -| lzo1x 2.10 -1 | 2.106 | 660 MB/s | 845 MB/s | -| [lz4] 1.9.3 | 2.101 | 740 MB/s | 4500 MB/s | -| lzf 3.6 -1 | 2.077 | 410 MB/s | 830 MB/s | -| snappy 1.1.9 | 2.073 | 550 MB/s | 1750 MB/s | - -[zlib]: https://www.zlib.net/ -[lz4]: https://lz4.github.io/lz4/ - -The negative compression levels, specified with `--fast=#`, -offer faster compression and decompression speed -at the cost of compression ratio (compared to level 1). - -Zstd can also offer stronger compression ratios at the cost of compression speed. -Speed vs Compression trade-off is configurable by small increments. -Decompression speed is preserved and remains roughly the same at all settings, -a property shared by most LZ compression algorithms, such as [zlib] or lzma. - -The following tests were run -on a server running Linux Debian (`Linux version 4.14.0-3-amd64`) -with a Core i7-6700K CPU @ 4.0GHz, -using [lzbench], an open-source in-memory benchmark by @inikep -compiled with [gcc] 7.3.0, -on the [Silesia compression corpus]. 
- -Compression Speed vs Ratio | Decompression Speed ----------------------------|-------------------- -![Compression Speed vs Ratio](doc/images/CSpeed2.png "Compression Speed vs Ratio") | ![Decompression Speed](doc/images/DSpeed3.png "Decompression Speed") - -A few other algorithms can produce higher compression ratios at slower speeds, falling outside of the graph. -For a larger picture including slow modes, [click on this link](doc/images/DCspeed5.png). - - -## The case for Small Data compression - -Previous charts provide results applicable to typical file and stream scenarios (several MB). Small data comes with different perspectives. - -The smaller the amount of data to compress, the more difficult it is to compress. This problem is common to all compression algorithms, and reason is, compression algorithms learn from past data how to compress future data. But at the beginning of a new data set, there is no "past" to build upon. - -To solve this situation, Zstd offers a __training mode__, which can be used to tune the algorithm for a selected type of data. -Training Zstandard is achieved by providing it with a few samples (one file per sample). The result of this training is stored in a file called "dictionary", which must be loaded before compression and decompression. -Using this dictionary, the compression ratio achievable on small data improves dramatically. - -The following example uses the `github-users` [sample set](https://github.com/facebook/zstd/releases/tag/v1.1.3), created from [github public API](https://developer.github.com/v3/users/#get-all-users). -It consists of roughly 10K records weighing about 1KB each. 
- -Compression Ratio | Compression Speed | Decompression Speed -------------------|-------------------|-------------------- -![Compression Ratio](doc/images/dict-cr.png "Compression Ratio") | ![Compression Speed](doc/images/dict-cs.png "Compression Speed") | ![Decompression Speed](doc/images/dict-ds.png "Decompression Speed") - - -These compression gains are achieved while simultaneously providing _faster_ compression and decompression speeds. - -Training works if there is some correlation in a family of small data samples. The more data-specific a dictionary is, the more efficient it is (there is no _universal dictionary_). -Hence, deploying one dictionary per type of data will provide the greatest benefits. -Dictionary gains are mostly effective in the first few KB. Then, the compression algorithm will gradually use previously decoded content to better compress the rest of the file. - -### Dictionary compression How To: - -1. Create the dictionary - - `zstd --train FullPathToTrainingSet/* -o dictionaryName` - -2. Compress with dictionary - - `zstd -D dictionaryName FILE` - -3. Decompress with dictionary - - `zstd -D dictionaryName --decompress FILE.zst` - - -## Build instructions - -`make` is the officially maintained build system of this project. -All other build systems are "compatible" and 3rd-party maintained, -they may feature small differences in advanced options. -When your system allows it, prefer using `make` to build `zstd` and `libzstd`. - -### Makefile - -If your system is compatible with standard `make` (or `gmake`), -invoking `make` in root directory will generate `zstd` cli in root directory. -It will also create `libzstd` into `lib/`. 
- -Other available options include: -- `make install` : create and install zstd cli, library and man pages -- `make check` : create and run `zstd`, test its behavior on local platform - -The `Makefile` follows the [GNU Standard Makefile conventions](https://www.gnu.org/prep/standards/html_node/Makefile-Conventions.html), -allowing staged install, standard flags, directory variables and command variables. - -For advanced use cases, specialized compilation flags which control binary generation -are documented in [`lib/README.md`](lib/README.md#modular-build) for the `libzstd` library -and in [`programs/README.md`](programs/README.md#compilation-variables) for the `zstd` CLI. - -### cmake - -A `cmake` project generator is provided within `build/cmake`. -It can generate Makefiles or other build scripts -to create `zstd` binary, and `libzstd` dynamic and static libraries. - -By default, `CMAKE_BUILD_TYPE` is set to `Release`. - -#### Support for Fat (Universal2) Output - -`zstd` can be built and installed with support for both Apple Silicon (M1/M2) as well as Intel by using CMake's Universal2 support. -To perform a Fat/Universal2 build and install use the following commands: - -```bash -cmake -B build-cmake-debug -S build/cmake -G Ninja -DCMAKE_OSX_ARCHITECTURES="x86_64;x86_64h;arm64" -cd build-cmake-debug -ninja -sudo ninja install -``` - -### Meson - -A Meson project is provided within [`build/meson`](build/meson). Follow -build instructions in that directory. - -You can also take a look at [`.travis.yml`](.travis.yml) file for an -example about how Meson is used to build this project. - -Note that default build type is **release**. 
- -### VCPKG -You can build and install zstd [vcpkg](https://github.com/Microsoft/vcpkg/) dependency manager: - - git clone https://github.com/Microsoft/vcpkg.git - cd vcpkg - ./bootstrap-vcpkg.sh - ./vcpkg integrate install - ./vcpkg install zstd - -The zstd port in vcpkg is kept up to date by Microsoft team members and community contributors. -If the version is out of date, please [create an issue or pull request](https://github.com/Microsoft/vcpkg) on the vcpkg repository. - -### Visual Studio (Windows) - -Going into `build` directory, you will find additional possibilities: -- Projects for Visual Studio 2005, 2008 and 2010. - + VS2010 project is compatible with VS2012, VS2013, VS2015 and VS2017. -- Automated build scripts for Visual compiler by [@KrzysFR](https://github.com/KrzysFR), in `build/VS_scripts`, - which will build `zstd` cli and `libzstd` library without any need to open Visual Studio solution. - -### Buck - -You can build the zstd binary via buck by executing: `buck build programs:zstd` from the root of the repo. -The output binary will be in `buck-out/gen/programs/`. - -### Bazel - -You easily can integrate zstd into your Bazel project by using the module hosted on the [Bazel Central Repository](https://registry.bazel.build/modules/zstd). - -## Testing - -You can run quick local smoke tests by running `make check`. -If you can't use `make`, execute the `playTest.sh` script from the `src/tests` directory. -Two env variables `$ZSTD_BIN` and `$DATAGEN_BIN` are needed for the test script to locate the `zstd` and `datagen` binary. -For information on CI testing, please refer to `TESTING.md`. - -## Status - -Zstandard is currently deployed within Facebook and many other large cloud infrastructures. -It is run continuously to compress large amounts of data in multiple formats and use cases. -Zstandard is considered safe for production environments. - -## License - -Zstandard is dual-licensed under [BSD](LICENSE) OR [GPLv2](COPYING). 
-
-## Contributing
-
-The `dev` branch is the one where all contributions are merged before reaching `release`.
-If you plan to propose a patch, please commit into the `dev` branch, or its own feature branch.
-Direct commit to `release` are not permitted.
-For more information, please read [CONTRIBUTING](CONTRIBUTING.md).
+## Contributing
+If you would like to contribute code to this repository, please send an email to any maintainer of the repository; if you find any bug in the product, you are welcome to open an issue
\ No newline at end of file
diff --git a/doc/Setup.md b/doc/Setup.md
new file mode 100644
index 000000000..1a2f66412
--- /dev/null
+++ b/doc/Setup.md
@@ -0,0 +1,250 @@
+# BPSF Algorithm Usage Guide
+## Supported CPUs
+Huawei Kunpeng processors
+
+## Supported software versions
+zstd 1.5.6
+
+## Supported operating systems
+openEuler
+
+# Part 1: Software Download and Environment Preparation
+## 1. Install the rpmbuild tool
+(1) Create the working directory and enter it.
+```
+mkdir -p /home/ksal_bpsf
+cd /home/ksal_bpsf
+```
+(2) Obtain BoostKit-KSAL_1.11.0.zip and place it in the `/home/ksal_bpsf` directory.
+(3) Unzip BoostKit-KSAL_1.11.0.zip in the `/home/ksal_bpsf` directory.
+```
+unzip BoostKit-KSAL_1.11.0.zip
+```
+(4) Obtain zstd-1.5.6.tar.gz and place it in the `/home/ksal_bpsf` directory.
+(5) Obtain the files required for the build, namely Makefile, ksal-bpsf-zstd.patch, ksal_bpsf.spec, and libksal_bpsf_zstd_so_create.sh, and place them in the `/home/ksal_bpsf` directory.
+(6) Install rpmbuild.
+```
+yum install rpmdevtools -y
+rpmdev-setuptree
+```
+## 2. 
Modify the rpmbuild build directory
+(7) Move the "rpmbuild" directory to `/home/ksal_bpsf`.
+After running the rpmbuild setup command, edit the ".rpmmacros" file and change the "%_topdir" path to `/home/ksal_bpsf/rpmbuild`.
+```
+vi /root/.rpmmacros
+```
+After the change, run the rpmbuild setup command again.
+```
+rpmdev-setuptree
+```
+
+# Part 2: Building the RPM Package
+## Building the RPM package (release version)
+(1) Run the following commands in the `/home/ksal_bpsf` directory to generate the RPM package used to install and deploy KSAL BPSF.
+```
+cd /home/ksal_bpsf/
+sh libksal_bpsf_zstd_so_create.sh
+```
+(2) Install the generated RPM package.
+```
+cd /home/ksal_bpsf/rpmbuild/RPMS/aarch64
+rpm -ivh ksal_bpsf-1.0.0-openEuler.aarch64.rpm
+```
+(3) Run the following command to check the RPM installation.
+```
+rpm -qi ksal_bpsf-1.0.0-openEuler.aarch64
+```
+(4) Confirm the installation paths.
+Run the following commands to check the `/usr/lib64/` and `/usr/include` directories and confirm that the KSAL BPSF shared library and header file are both present.
+```
+ll /usr/lib64/libksal_bpsf.so
+ll /usr/include/ksal_bpsf.h
+```
+(5) When using the library, link against the shared library.
+```
+-lksal_bpsf
+```
+(6) When using the library, configure the environment variable.
+```
+export LD_LIBRARY_PATH=/usr/lib64:$LD_LIBRARY_PATH
+```
+(7) Uninstall the package.
+```
+yum remove ksal_bpsf -y
+```
+
+## Building the RPM package (debug version)
+(1) Run the following commands in the `/home/ksal_bpsf` directory to generate the RPM package used to install and deploy KSAL BPSF.
+```
+cd /home/ksal_bpsf/
+sh libksal_bpsf_zstd_so_create.sh debug
+```
+(2) Install the generated RPM package.
+```
+cd /home/ksal_bpsf/rpmbuild/RPMS/aarch64
+rpm -ivh ksal_bpsf_debug-1.0.0-openEuler.aarch64.rpm
+```
+(3) When using the library, configure the environment variable.
+```
+export LD_LIBRARY_PATH=/usr/lib64:$LD_LIBRARY_PATH
+```
+(4) When using the library, link against the shared library.
+```
+-lksal_bpsf
+```
+(5) Uninstall the package.
+```
+yum remove ksal_bpsf_debug -y
+```
+
+
+# Part 3: Usage Example
+(1) Open the file `test_bpsf.c`.
+```
+vi test_bpsf.c
+```
+Enter the following content:
+```
+#include "ksal_bpsf.h"
+#include <stdio.h>
+#include <stdint.h>
+
+const size_t BLOCK_SIZE_4096 = 4096;
+const size_t BLOCK_SIZE_4160 = 4160;
+const size_t SEGMENT_COUNT = 8;
+const uint8_t kValue = 0x10;
+
+int main() {
+    const size_t src_len = BLOCK_SIZE_4096 * SEGMENT_COUNT;
+    uint8_t p_src[src_len];
+    size_t dst_len = BPSF_compressBound(src_len);
+    uint8_t p_dst[dst_len];
+    uint16_t offset[8] = {0};
+    uint16_t len[8] = {0};
+
+    for (size_t i = 0; i < src_len; i++) {
+        p_src[i] = i * kValue;
+    }
+
+    int compress_result = BPSF_compress(p_src, src_len, BLOCK_SIZE_4096, p_dst, &dst_len, offset, len);
+    printf("compress result: %d\n", compress_result);
+
+    int start = 0;
+    int end = 3;
+    uint8_t decompressed_data[BLOCK_SIZE_4096 * (end - start + 1)];
+    size_t decompressed_len = 0;
+
+    uint16_t union_offset, union_len;
+    int union_result = BPSF_union(offset, len, start, end, &union_offset, &union_len);
+    printf("union result: %d\n", union_result);
+
+    /* One call decompresses the merged range of blocks [start, end]. */
+    int decompress_result = BPSF_decompress(
+        p_dst + union_offset,
+        union_len,
+        end - start + 1,
+        BLOCK_SIZE_4096,
+        decompressed_data,
+        &decompressed_len
+    );
+    printf("decompress result: %d\n", decompress_result);
+    return 0;
+}
+```
+Type `:wq!` to save and exit.
+
+(2) Build and run the example.
+```
+gcc test_bpsf.c -o test_bpsf -lksal_bpsf
+./test_bpsf
+```
+The run succeeds if every printed result is 0.
+
+# Part 4: Logging
+(1) Log description
+BPSF output can be viewed in `/var/log/messages`.
+The release package logs at the Error level; the debug package logs at both the Error and Info levels.
+The log header file is installed at `/usr/include/bpsf_log.h`.
+(2) Using the logs
+Users can review the BPSF log header file and hook their own logging into BPSF.
+(3) Create the file `test_bpsf_log.c`.
+```
+vi test_bpsf_log.c
+```
+Enter the following content:
+```
+#include "ksal_bpsf.h"
+#include "bpsf_log.h"
+#include <stdio.h>
+#include <stdint.h>
+
+const size_t BLOCK_SIZE_4096 = 4096;
+const size_t BLOCK_SIZE_4160 = 4160;
+const size_t SEGMENT_COUNT = 8;
+const uint8_t kValue = 0x10;
+
+static void customer_logger(LogLevel level, const char *message) {
+    const char *level_str;
+    switch (level) {
+        case LOG_ERR:
+            level_str = "ERROR";
+            break;
+        case LOG_INFO:
+            level_str = "INFO";
+            break;
+        default:
+            level_str = "ERROR";
+    }
+    printf("[%s] %s\n", level_str, message);
+}
+
+int main() {
+    SetLogFunction(customer_logger); /* register the custom log function customer_logger */
+
+    const size_t src_len = BLOCK_SIZE_4096 * SEGMENT_COUNT;
+    uint8_t p_src[src_len];
+    size_t dst_len = BPSF_compressBound(src_len);
+    uint8_t p_dst[dst_len];
+    uint16_t offset[8] = {0};
+    uint16_t len[8] = {0};
+
+    for (size_t i = 0; i < src_len; i++) {
+        p_src[i] = i * kValue;
+    }
+
+    int compress_result = BPSF_compress(p_src, src_len, BLOCK_SIZE_4096, p_dst, &dst_len, offset, len);
+    printf("compress result: %d\n", compress_result);
+
+    int start = 0;
+    int end = 3;
+    uint8_t decompressed_data[BLOCK_SIZE_4096 * (end - start + 1)];
+    size_t decompressed_len = 0;
+
+    uint16_t union_offset, union_len;
+    int union_result = BPSF_union(offset, len, start, end, &union_offset, &union_len);
+    printf("union result: %d\n", union_result);
+
+    /* One call decompresses the merged range of blocks [start, end]. */
+    int decompress_result = BPSF_decompress(
+        p_dst + union_offset,
+        union_len,
+        end - start + 1,
+        BLOCK_SIZE_4096,
+        decompressed_data,
+        &decompressed_len
+    );
+    printf("decompress result: %d\n", decompress_result);
+    return 0;
+}
+```
+(4) Build and run the example.
+```
+gcc test_bpsf_log.c -o test_bpsf_log -lksal_bpsf
+./test_bpsf_log
+```
+When SetLogFunction(customer_logger) is set, BPSF log messages appear through the custom logger; when it is not set, they go to `/var/log/messages` by default.
-- 
Gitee

From b65941164eb581d613a03dfff8f231afdd8992b4 Mon Sep 17 00:00:00 2001
From: Berlin_Peng
Date: Mon, 4 Aug 2025 19:28:09 +0800
Subject: [PATCH 2/5] Revise the README description
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index e4c01bd01..869786ae3 100644
--- a/README.md
+++ b/README.md
@@ -3,7 +3,7 @@
 The Storage Algorithm Acceleration Library (KSAL) is Huawei's self-developed storage algorithm acceleration library. It currently includes the BPSF algorithm, EC algorithm, CRC16 T10DIF algorithm, CRC32C algorithm, optimized memcpy algorithm, DAS intelligent prefetch algorithm, and the zstd compression algorithm for Ceph ten-billion-object storage metadata. For a detailed introduction to KSAL's features, refer to the Storage Acceleration Algorithm Library documentation.
 
 ## Repository description
-This repository is mainly used to build and install the KSAL BPSF algorithm package.
+This repository is mainly used to build and install the KSAL BPSF algorithm package
 
 ## Supported CPUs
 Huawei Kunpeng processors
-- 
Gitee

From bfda95edcc4dac193beb9d66a4a17af6b2dc7087 Mon Sep 17 00:00:00 2001
From: Berlin_Peng
Date: Mon, 4 Aug 2025 19:39:18 +0800
Subject: [PATCH 3/5] Revise the README
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 README.md | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/README.md b/README.md
index 322567ff7..0de7eaa7e 100644
--- a/README.md
+++ b/README.md
@@ -1,18 +1,18 @@
-KSAL BPSF
-Introduction
+# KSAL 
BPSF
+## Introduction
 The Storage Algorithm Acceleration Library (KSAL) is Huawei's self-developed storage algorithm acceleration library. It currently includes the BPSF algorithm, EC algorithm, CRC16 T10DIF algorithm, CRC32C algorithm, optimized memcpy algorithm, DAS intelligent prefetch algorithm, and the zstd compression algorithm for Ceph ten-billion-object storage metadata. For a detailed introduction to KSAL's features, refer to the Storage Acceleration Algorithm Library documentation.
 
-Repository description
-This repository is mainly used to build and install the KSAL BPSF algorithm package
+## Repository description
+This repository is mainly used to build and install the KSAL BPSF algorithm package; for detailed build and installation instructions, see the [BPSF Algorithm Usage Guide](https://gitee.com/Berlin_Peng/zstd/blob/bpsf/doc/Setup.md)
 
-Supported CPUs
+## Supported CPUs
 Huawei Kunpeng processors
 
-Supported software versions
+## Supported software versions
 zstd 1.5.6
 
-Supported operating systems
+## Supported operating systems
 openEuler
 
-Contributing
+## Contributing
 If you would like to contribute code to this repository, please send an email to any maintainer of the repository; if you find any bug in the product, you are welcome to open an issue
\ No newline at end of file
-- 
Gitee

From 2213825d903817cae87ab72a1b50c3e44f2858a2 Mon Sep 17 00:00:00 2001
From: Berlin_Peng
Date: Mon, 4 Aug 2025 19:43:51 +0800
Subject: [PATCH 4/5] Add the software download link to the BPSF Setup document
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 doc/Setup.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/doc/Setup.md b/doc/Setup.md
index 1a2f66412..eaacb8c50 100644
--- a/doc/Setup.md
+++ b/doc/Setup.md
@@ -8,6 +8,9 @@ zstd 1.5.6
 ## Supported operating systems
 openEuler
 
+## Software package download link
+[BPSF algorithm download page](https://gitee.com/kunpengcompute/zstd/releases/tag/ksal_bpsf)
+
 # Part 1: Software Download and Environment Preparation
 ## 1. Install the rpmbuild tool
 (1) Create the working directory and enter it.
-- 
Gitee

From 7f830e922257cbe995cc8c0c355769d5d952cca4 Mon Sep 17 00:00:00 2001
From: Berlin_Peng
Date: Tue, 5 Aug 2025 09:28:13 +0800
Subject: [PATCH 5/5] Fix the documentation link in the README
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 0de7eaa7e..d54f21b10 100644
--- a/README.md
+++ b/README.md
@@ -3,7 +3,7 @@
 The Storage Algorithm Acceleration Library (KSAL) is Huawei's self-developed storage algorithm acceleration library. It currently includes the BPSF algorithm, EC algorithm, CRC16 T10DIF algorithm, CRC32C algorithm, optimized memcpy algorithm, DAS intelligent prefetch algorithm, and the zstd compression algorithm for Ceph ten-billion-object storage metadata. For a detailed introduction to KSAL's features, refer to the Storage Acceleration Algorithm Library documentation.
 
 ## Repository description
-This repository is mainly used to build and install the KSAL BPSF algorithm package; for detailed build and installation instructions, see the [BPSF Algorithm Usage Guide](https://gitee.com/Berlin_Peng/zstd/blob/bpsf/doc/Setup.md)
+This repository is mainly used to build and install the KSAL BPSF algorithm package; for detailed build and installation instructions, see the [BPSF Algorithm Usage Guide](https://gitee.com/kunpengcompute/zstd/tree/bpsf/doc/Setup.md)
 
 ## Supported CPUs
 Huawei Kunpeng processors
-- 
Gitee
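For readers unfamiliar with this file format: the five patches above form a `git format-patch` (mbox) series, which is normally applied with `git am`. A minimal, self-contained sketch of that workflow on a throwaway repository follows; every name and path in it is a placeholder for illustration, not something taken from the series itself.

```shell
# Demonstrates the format-patch / am round trip that a series like the
# one above goes through. Runs entirely in a scratch directory.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "init"

# Create a commit that adds a doc file and export it as an mbox patch...
mkdir -p doc
echo '# BPSF usage guide' > doc/Setup.md
git add doc/Setup.md
git -c user.email=a@b -c user.name=demo commit -q -m "Add BPSF usage guide"
git format-patch -q -1 -o ../patches

# ...then rewind and re-apply it with `git am`, as a reviewer would.
git reset -q --hard HEAD~1
git -c user.email=a@b -c user.name=demo am -q ../patches/*.patch
test -f doc/Setup.md && echo "patch applied"
```

Applied to this series, the equivalent would be `git am *.patch` (or feeding the whole mbox to `git am`) on top of a zstd 1.5.6 checkout of the target branch.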