Message-ID: <20230711143337.3086664-1-glider@google.com>
Date:   Tue, 11 Jul 2023 16:33:27 +0200
From:   Alexander Potapenko <glider@...gle.com>
To:     glider@...gle.com, catalin.marinas@....com, will@...nel.org,
        pcc@...gle.com, andreyknvl@...il.com
Cc:     linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
        eugenis@...gle.com, yury.norov@...il.com
Subject: [PATCH 0/5] Implement MTE tag compression for swapped pages

Currently, when MTE pages are swapped out, their tags are kept in
memory, occupying 128 bytes per page. This is especially problematic for
devices that use zram-backed in-memory swap, because tags stored
uncompressed in the heap effectively reduce the amount of available swap
memory.
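
For reference, the 128 bytes per page follow from MTE keeping a 4-bit
tag for every 16-byte granule; the macro names below are illustrative,
not taken from the tree:

/*
 * Illustration: a 4 KB page has 4096 / 16 = 256 tag granules, each with
 * a 4-bit tag, i.e. 256 * 4 bits = 1024 bits = 128 bytes of tag storage.
 */
#define EXAMPLE_MTE_GRANULE_SIZE	16
#define EXAMPLE_MTE_TAG_BITS		4
#define EXAMPLE_TAG_STORAGE_BYTES \
	(PAGE_SIZE / EXAMPLE_MTE_GRANULE_SIZE * EXAMPLE_MTE_TAG_BITS / 8)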

The RLE-based EA0 algorithm suggested by Evgenii Stepanov and
implemented in this patch series efficiently compresses these 128-byte
tag buffers, achieving practical compression ratios between 2.5x and
20x. In most cases the compressed data fits into a 63-bit XArray value,
so no extra memory allocation is needed.
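
A rough sketch of how a compressed buffer that fits in 63 bits can be
kept directly in an XArray value entry (the helper and variable names
here are made up for the example, not taken from the series):

/*
 * Illustration only: if the compressed tags fit into 63 bits, they can
 * be stored as an XArray value entry instead of a pointer, avoiding a
 * separate allocation. xa_mk_value()/xa_to_value() are the stock XArray
 * helpers for this; value entries lose one bit, hence the 63-bit limit.
 */
#include <linux/xarray.h>

static DEFINE_XARRAY(example_tag_storage);

static int example_store_tags(unsigned long index, unsigned long packed63)
{
	return xa_err(xa_store(&example_tag_storage, index,
			       xa_mk_value(packed63), GFP_KERNEL));
}

static bool example_load_tags(unsigned long index, unsigned long *packed63)
{
	void *entry = xa_load(&example_tag_storage, index);

	if (!xa_is_value(entry))
		return false;	/* absent, or stored out of line */
	*packed63 = xa_to_value(entry);
	return true;
}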

Our measurements show that EA0 provides better compression than existing
kernel compression algorithms (LZ4, LZO, LZ4HC, ZSTD) can offer, because
EA0 specifically targets 128-byte buffers.

To implement compression/decompression, we introduce <linux/bitqueue.h>,
which provides a simple bit queue interface.
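
The exact API is defined in the patches themselves; the sketch below
only illustrates the kind of interface meant here, with names and
signatures that are assumptions rather than copies from the series:

/*
 * Sketch only -- not the actual <linux/bitqueue.h> API. A bit queue
 * lets the compressor append variable-width bit fields to a buffer and
 * read them back in the same order when decompressing.
 */
#include <linux/types.h>

struct bitq_example {
	u8 *data;	/* backing buffer, assumed zero-initialized */
	size_t pos;	/* current write position, in bits */
};

/* Append the @bits low-order bits of @val, most significant bit first. */
static inline void bitq_example_push(struct bitq_example *q,
				     unsigned long val, unsigned int bits)
{
	while (bits--) {
		if (val & (1UL << bits))
			q->data[q->pos / 8] |= 1 << (q->pos % 8);
		q->pos++;
	}
}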

We refactor arch/arm64/mm/mteswap.c to support both the compressed
(CONFIG_ARM64_MTE_COMP) and the non-compressed cases. For the former, in
addition to tag compression, we move tag allocation from kmalloc() to
separate kmem caches, providing greater locality and relaxing the
alignment requirements.
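
For illustration, a dedicated cache for out-of-line compressed buffers
could be set up roughly as below; the cache name, object size and
alignment are made up for the example:

/*
 * Illustration only: a dedicated cache sized for the compressed tag
 * buffers packs objects more tightly than generic kmalloc() buckets and
 * lets the alignment be chosen explicitly.
 */
#include <linux/slab.h>

static struct kmem_cache *mte_tags_example_cache;

static int mte_tags_example_init(void)
{
	mte_tags_example_cache = kmem_cache_create("mte_tags_example",
						   64 /* size */,
						   16 /* align */, 0, NULL);
	return mte_tags_example_cache ? 0 : -ENOMEM;
}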

Alexander Potapenko (5):
  linux/bitqueue.h: add a KUnit test for bitqueue.h
  arm64: mte: implement CONFIG_ARM64_MTE_COMP
  arm64: mte: add a test for MTE tags compression
  arm64: mte: add compression support to mteswap.c
  fixup mteswap

 arch/arm64/Kconfig               |  20 ++
 arch/arm64/include/asm/mtecomp.h |  60 +++++
 arch/arm64/mm/Makefile           |   7 +
 arch/arm64/mm/mtecomp.c          | 398 +++++++++++++++++++++++++++++++
 arch/arm64/mm/mteswap.c          |  19 +-
 arch/arm64/mm/mteswap.h          |  12 +
 arch/arm64/mm/mteswap_comp.c     |  50 ++++
 arch/arm64/mm/mteswap_nocomp.c   |  37 +++
 arch/arm64/mm/test_mtecomp.c     | 175 ++++++++++++++
 lib/Kconfig.debug                |   8 +
 lib/Makefile                     |   1 +
 lib/test_bitqueue.c              | 244 +++++++++++++++++++
 12 files changed, 1020 insertions(+), 11 deletions(-)
 create mode 100644 arch/arm64/include/asm/mtecomp.h
 create mode 100644 arch/arm64/mm/mtecomp.c
 create mode 100644 arch/arm64/mm/mteswap.h
 create mode 100644 arch/arm64/mm/mteswap_comp.c
 create mode 100644 arch/arm64/mm/mteswap_nocomp.c
 create mode 100644 arch/arm64/mm/test_mtecomp.c
 create mode 100644 lib/test_bitqueue.c

-- 
2.41.0.255.g8b1d071c50-goog
