Message-ID: <20230329032731.GA3319@hu-cgoldswo-sd.qualcomm.com>
Date:   Tue, 28 Mar 2023 20:29:41 -0700
From:   Chris Goldsworthy <quic_cgoldswo@...cinc.com>
To:     Matthew Wilcox <willy@...radead.org>
CC:     <linux-fsdevel@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: Multi-index entries, clarifying storage overhead with respect to
 range alignment

Hi Matthew,

Consider the following excerpt from the XArray documentation on multi-index
entries, which summarizes the [2^N, 2^(N+1) - 1] alignment requirement for
using them [1]:

"The current implementation only allows tying ranges which are aligned powers of
two together; eg indices 64-127 may be tied together, but 2-6 may not be. This
may save substantial quantities of memory; for example tying 512 entries
together will save over 4kB."

Won't we still use multi-index entries for any power-of-two-sized range that is
aligned to its own size? That is, for any range [i, j] satisfying:

	(A): j - i + 1 = 2^k for some k, and i % 2^k == 0.
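
To pin down exactly what I mean by (A), here is an illustrative check (the
helper below is mine, not anything in lib/xarray.c; is_power_of_2() is from
<linux/log2.h>):

#include <linux/log2.h>
#include <linux/types.h>

/* Illustrative only: does the range [first, last] satisfy condition (A)? */
static inline bool range_satisfies_A(unsigned long first, unsigned long last)
{
	unsigned long nr = last - first + 1;

	/* A power-of-two number of indices, starting on a multiple of that size. */
	return is_power_of_2(nr) && (first % nr == 0);
}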

This is more permissive than requiring the range to be exactly
[2^N, 2^(N+1) - 1]. I'm basing this assumption (calling it that since I'm not
100% certain yet) on the following:

(1) Counting the number of kmem_cache_alloc_lru() calls made when storing over
ranges satisfying (A) while varying the starting index.

(2) Walking through xa_store_range(). The call to xas_set_range() [2] sets the
final xa_shift S corresponding to the range we want to cover. When we then call
xas_store() [3], which in turn calls xas_create() [4], the shift of the
lowest-level node [5] we allocate is no smaller than S - XA_CHUNK_SHIFT + 1,
i.e. a single stored entry will still cover multiple indices as long as S is
large enough (though we may still need multiple entries to cover the whole
range). Based on what happens in xas_store_range(), this holds regardless of
how the range is aligned. Let me know if I'm misinterpreting something.
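
For concreteness, the kind of store I have in mind looks roughly like the
sketch below (my own test-style snippet, not code from the tree; it assumes
CONFIG_XARRAY_MULTI). The range [192, 255] is 64 entries long and 64-aligned,
so it satisfies (A), yet it is not of the form [2^N, 2^(N+1) - 1]:

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/xarray.h>

static DEFINE_XARRAY(test_xa);

/* Sketch: store one value over indices 192-255 and read it back. */
static int store_range_example(void *entry)
{
	void *ret = xa_store_range(&test_xa, 192, 255, entry, GFP_KERNEL);

	if (xa_is_err(ret))
		return xa_err(ret);

	/* Any index inside the range should now return the same entry. */
	return xa_load(&test_xa, 200) == entry ? 0 : -EINVAL;
}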

Thanks,

Chris.

[1] https://docs.kernel.org/core-api/xarray.html#multi-index-entries
[2] https://elixir.bootlin.com/linux/latest/source/lib/xarray.c#L1740
[3] https://elixir.bootlin.com/linux/latest/source/lib/xarray.c#L1741
[4] https://elixir.bootlin.com/linux/latest/source/lib/xarray.c#L789
[5] https://elixir.bootlin.com/linux/latest/source/lib/xarray.c#L679
