Message-ID: <CAC_TJveU2v+EcokLKJVVZ8Xje2nYmmUg8bvCD8KO1oC5MgmWCA@mail.gmail.com>
Date: Tue, 1 Apr 2025 00:19:48 -0700
From: Kalesh Singh <kaleshsingh@...gle.com>
To: Ryan Roberts <ryan.roberts@....com>
Cc: Matthew Wilcox <willy@...radead.org>, Andrew Morton <akpm@...ux-foundation.org>, 
	David Hildenbrand <david@...hat.com>, Dave Chinner <david@...morbit.com>, 
	Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>, 
	linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org, 
	linux-fsdevel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v3] mm/filemap: Allow arch to request folio size for exec memory

On Sat, Mar 29, 2025 at 3:08 AM Ryan Roberts <ryan.roberts@....com> wrote:
>
> On 28/03/2025 15:14, Matthew Wilcox wrote:
> > On Thu, Mar 27, 2025 at 04:23:14PM -0400, Ryan Roberts wrote:
> >> + Kalesh
> >>
> >> On 27/03/2025 12:44, Matthew Wilcox wrote:
> >>> On Thu, Mar 27, 2025 at 04:06:58PM +0000, Ryan Roberts wrote:
> >>>> So let's special-case the read(ahead) logic for executable mappings. The
> >>>> trade-off is performance improvement (due to more efficient storage of
> >>>> the translations in iTLB) vs potential read amplification (due to
> >>>> reading too much data around the fault which won't be used), and the
> >>>> latter is independent of base page size. I've chosen 64K folio size for
> >>>> arm64 which benefits both the 4K and 16K base page size configs and
> >>>> shouldn't lead to any read amplification in practice since the old
> >>>> read-around path was (usually) reading blocks of 128K. I don't
> >>>> anticipate any write amplification because text is always RO.
> >>>
> >>> Is there not also the potential for wasted memory due to ELF alignment?
> >>
> >> I think this is an orthogonal issue? My change isn't making that any worse.
> >
> > To a certain extent, it is.  If readahead was doing order-2 allocations
> > before and is now doing order-4, you're tying up 0-12 extra pages which
> > happen to be filled with zeroes due to being used to cache the contents
> > of a hole.
>
> Well, we would still have read them in before; nothing has changed there. But I
> guess your point is more about reclaim? Because those pages are now contained in
> a larger folio, if part of the folio is in use then all of it remains active.
> Whereas before, if the folio was fully contained in the pad area and never
> accessed, it would fall down the LRU quickly and get reclaimed.
>


Hi Ryan,

I agree this was happening before and we don't need to completely
address it here, though with the patch it's more likely that the holes
will be cached, so I'd like to minimize that if possible. Since this is
for exec mappings, a simple check we could use is to limit the new
behavior to VM_EXEC VMAs:

+	if (vm_flags & VM_EXEC) {
+		int order = arch_exec_folio_order();
+
+		if (order >= 0 && ((end - address) * 2) >= (1 << order)) { /* Fault around case */
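
(Context for anyone skimming: the "order >= 0" check is against the -1
sentinel from the patch's generic arch_exec_folio_order() fallback,
quoted further down in the thread. An opted-in arch would override it;
a rough sketch of what I'd expect the arm64 version to look like, given
the 64K target in the commit message, is below. This is my guess, not
the exact hunk from the patch:)

static inline int arch_exec_folio_order(void)
{
	/* 64K folios: order-4 with 4K base pages, order-2 with 16K. */
	return ilog2(SZ_64K >> PAGE_SHIFT);
}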

For reference, I found the (coincidentally?) similar distributions
below on my devices:

== x86 Workstation ==

Total unique exec segments:   906

Exec segments >= 16 KB:   663 ( 73.18%)
Exec segments >= 64 KB:   414 ( 45.70%)

== arm64 Android Device ==

Total unique exec segments:   2171

Exec segments >= 16 KB:  1602 ( 73.79%)
Exec segments >= 64 KB:   988 ( 45.51%)

The results were gathered using the script below:

cat /proc/*/maps | grep 'r-xp' | \
awk '
BEGIN { OFS = "\t" }
$NF ~ /^\// {
	path = $NF;
	split($1, addr, "-");
	size = strtonum("0x" addr[2]) - strtonum("0x" addr[1]);
	print size, path;
}' | \
sort -u | \
awk '
BEGIN {
	FS = "\t";
	total_segments = 0;
	segs_ge_16k = 0;
	segs_ge_64k = 0;
}
{
	total_segments++;
	size = $1;
	if (size >= 16384) segs_ge_16k++;
	if (size >= 65536) segs_ge_64k++;
}
END {
	if (total_segments > 0) {
		percent_gt_16k = (segs_ge_16k / total_segments) * 100;
		percent_gt_64k = (segs_ge_64k / total_segments) * 100;

		printf "Total unique exec segments: %d\n", total_segments;
		printf "\n";
		printf "Exec segments >= 16 KB: %5d (%6.2f%%)\n", segs_ge_16k, percent_gt_16k;
		printf "Exec segments >= 64 KB: %5d (%6.2f%%)\n", segs_ge_64k, percent_gt_64k;
	} else {
		print "No executable segments found.";
	}
}'
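
(Note: strtonum() is a gawk extension, and the script should be run as
root so that /proc/<pid>/maps is readable for every process; otherwise
segments belonging to other users' processes are silently skipped.)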

> >
> >>> Kalesh talked about it in the MM BOF at the same time that Ted and I
> >>> were discussing it in the FS BOF.  Some coordination required (like
> >>> maybe Kalesh could have mentioned it to me rather than assuming I'd be
> >>> there?)
> >>
> >> I was at Kalesh's talk. David H suggested that a potential solution might be for
> >> readahead to ask the fs where the next hole is and then truncate readahead to
> >> avoid reading the hole. Given it's padding, nothing should directly fault it in
> >> so it never ends up in the page cache. Not sure if you discussed anything like
> >> that if you were talking in parallel?
> >
> > Ted said that he and Kalesh had talked about that solution.  I have a
> > more bold solution in mind which lifts the ext4 extent cache to the
> > VFS inode so that the readahead code can interrogate it.
> >

Sorry about the hiccup in coordination, Matthew; it was my bad for not
letting you know I planned to discuss it in the MM BoF. I'd like to
hear your and Ted's ideas on this when possible.

Thanks,
Kalesh

> >> Anyway, I'm not sure if you're suggesting these changes need to be considered as
> >> one somehow or if you're just mentioning it given it is loosely related? My view
> >> is that this change is an improvement independently and could go in much sooner.
> >
> > This is not a reason to delay this patch.  It's just a downside which
> > should be mentioned in the commit message.
>
> Fair point; I'll add a paragraph about the potential reclaim issue.
>
> >
> >>>> +static inline int arch_exec_folio_order(void)
> >>>> +{
> >>>> +  return -1;
> >>>> +}
> >>>
> >>> This feels a bit fragile.  I often expect to be able to store an order
> >>> in an unsigned int.  Why not return 0 instead?
> >>
> >> Well 0 is a valid order, no? I think we have had the "is order signed or
> >> unsigned" argument before. get_order() returns a signed int :)
> >
> > But why not always return a valid order?  I don't think we need a
> > sentinel.  The default value can be 0 to do what we do today.
> >
>
> But a single order-0 folio is not what we do today. Note that my change as
> currently implemented requests to read a *single* folio of the specified order.
> And note that we only get the order we request from page_cache_ra_order() because
> the size is limited to a single folio. If the size were bigger, that function
> would actually expand the requested order by 2 (although the parameter is
> called "new_order", it's actually interpreted as "old_order").
>
> The current behavior is effectively to read 128K in order-2 folios (with smaller
> folios for boundary alignment).
>
> So I see a few options:
>
>   - Continue to allow non-opted-in arches to use the existing behaviour; in this
> case we need a sentinel. This could be -1, UINT_MAX or 0. But in the latter case
> you are preventing an opted-in arch from specifying that they want order-0 -
> its meaning is overridden.
>
>   - Force all arches to use the new approach with a default folio order (and
> readahead size) of order-0. (The default can be overridden per-arch). Personally
> I'd be nervous about making this change.
>
>   - Decouple the read size from the folio order size; continue to use the 128K
> read size and only allow opting-in to a specific folio order. The default order
> would be 2 (or 0). We would need to fix page_cache_async_ra() to call
> page_cache_ra_order() with "order + 2" (the new order) and fix
> page_cache_ra_order() to treat its order parameter as the *new* order.
>
> Perhaps we should do those fixes anyway (and then actually start with a folio
> order of 0 - which I think you said in the past was your original intention?).
>
> Thanks,
> Ryan
>
