Message-ID: <CAGsJ_4wZzjprAs42LMw8s8C_iz4v7m6fiO7-7nBS2BxkU9u8QA@mail.gmail.com>
Date: Sat, 13 Jan 2024 13:11:03 +1300
From: Barry Song <21cnbao@...il.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: Ryan Roberts <ryan.roberts@....com>, Catalin Marinas <catalin.marinas@....com>, 
	Will Deacon <will@...nel.org>, Mark Rutland <mark.rutland@....com>, 
	Andrew Morton <akpm@...ux-foundation.org>, David Hildenbrand <david@...hat.com>, 
	John Hubbard <jhubbard@...dia.com>, linux-arm-kernel@...ts.infradead.org, 
	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org, 
	linux-mm@...ck.org
Subject: Re: [RFC PATCH v1] mm/filemap: Allow arch to request folio size for
 exec memory

On Sat, Jan 13, 2024 at 12:04 PM Matthew Wilcox <willy@...radead.org> wrote:
>
> On Sat, Jan 13, 2024 at 11:54:23AM +1300, Barry Song wrote:
> > > > Perhaps an alternative would be to double ra->size and set ra->async_size to
> > > > (ra->size / 2)? That would ensure we always have 64K aligned blocks but would
> > > > give us an async portion so readahead can still happen.
> > >
> > > this might be worth trying, as the PMD path does exactly this, because the
> > > async portion can decrease the latency of subsequent page faults.
> > >
> > > #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> > >         /* Use the readahead code, even if readahead is disabled */
> > >         if (vm_flags & VM_HUGEPAGE) {
> > >                 fpin = maybe_unlock_mmap_for_io(vmf, fpin);
> > >                 ractl._index &= ~((unsigned long)HPAGE_PMD_NR - 1);
> > >                 ra->size = HPAGE_PMD_NR;
> > >                 /*
> > >                  * Fetch two PMD folios, so we get the chance to actually
> > >                  * readahead, unless we've been told not to.
> > >                  */
> > >                 if (!(vm_flags & VM_RAND_READ))
> > >                         ra->size *= 2;
> > >                 ra->async_size = HPAGE_PMD_NR;
> > >                 page_cache_ra_order(&ractl, ra, HPAGE_PMD_ORDER);
> > >                 return fpin;
> > >         }
> > > #endif
> > >
> >
> > BTW, rather than simply always reading backwards, we did something very
> > "ugly" to simulate "read-around" for CONT-PTE exec before[1]:
> >
> > If a page fault happens in the first half of a cont-pte block, we read this
> > 64KiB and its previous 64KiB; otherwise, we read it and its next 64KiB.
>
> I don't think that makes sense.  The CPU executes instructions forwards,
> not "around".  I honestly think we should treat text as "random access"
> because function A calls function B and functions A and B might well be
> very far apart from each other.  The only time I see you actually
> getting "readahead" hits is if a function is split across two pages (for
> whatever size of page), but that's a false hit!  The function is not,
> generally, 64kB long, so doing readahead is no more likely to bring in
> the next page of text that we want than reading any other random page.
>

It seems you are in favor of Ryan's modification even for filesystems
which don't support large mappings?
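
If so, just to check my own understanding of Ryan's "double ra->size and set
ra->async_size to ra->size / 2", for the 64KiB exec case I read it roughly as
the sketch below (EXEC_FOLIO_NR and EXEC_FOLIO_ORDER are placeholders of mine,
not his actual patch):

	/*
	 * Rough sketch of my reading of Ryan's suggestion, not his patch:
	 * keep the start 64KiB-aligned, read two 64KiB blocks and mark the
	 * second one as the async portion so readahead can still kick in.
	 */
	if (vm_flags & VM_EXEC) {
		ractl._index &= ~((unsigned long)EXEC_FOLIO_NR - 1);
		ra->size = 2 * EXEC_FOLIO_NR;
		ra->async_size = ra->size / 2;
		page_cache_ra_order(&ractl, ra, EXEC_FOLIO_ORDER);
		return fpin;
	}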

> Unless somebody finds the GNU Rope source code from 1998, or recreates it:
> https://lwn.net/1998/1029/als/rope.html
> Then we might actually have some locality.
>
> Did you actually benchmark what you did?  Is there really some locality
> between the code at offset 256-288kB in the file and then in the range
> 192kB-256kB?

I really didn't have benchmark data; at that point I instinctively didn't
want to break the read-around logic, so I made the code work just that way.
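
For reference, the hack was roughly along these lines (a from-memory sketch;
the helper name and CONT_PTE_NR are placeholders, not the exact code we
carried):

#define CONT_PTE_NR	16	/* 16 x 4KiB pages = one 64KiB CONT-PTE block */

static void cont_pte_read_around(struct readahead_control *ractl,
				 struct file_ra_state *ra, pgoff_t index)
{
	pgoff_t block = index & ~((pgoff_t)CONT_PTE_NR - 1);

	/*
	 * Fault in the first half of the 64KiB block: read it plus the
	 * previous block; otherwise read it plus the next block.
	 */
	if (index - block < CONT_PTE_NR / 2 && block >= CONT_PTE_NR)
		ractl->_index = block - CONT_PTE_NR;
	else
		ractl->_index = block;

	ra->size = 2 * CONT_PTE_NR;
	ra->async_size = 0;
	page_cache_ra_order(ractl, ra, 4);	/* order-4 == 64KiB folios */
}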
The info you provide makes me rethink whether the read-around code is
necessary at all, thanks!

We were using filesystems without large-mapping support but worked around
the problem with the following steps (a rough sketch follows the list):
1. preparing 16*n normal pages
2. inserting the normal pages into the xarray
3. letting the filesystem read the 16 normal pages
4. after all the I/O completed, transforming the 16 pages into an mTHP and
reinserting the mTHP into the xarray
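
Purely as an illustration (exec_collapse_to_mthp() and MTHP_NR are names I am
making up here, and all locking, refcounting and error handling are omitted),
it was conceptually something like:

#define MTHP_NR		16	/* 16 x 4KiB pages = one 64KiB mTHP */
#define MTHP_ORDER	4

static void exec_collapse_to_mthp(struct address_space *mapping,
				  struct file *file, pgoff_t index)
{
	struct folio *small[MTHP_NR], *large;
	int i;

	index &= ~((pgoff_t)MTHP_NR - 1);

	/* steps 1 + 2: allocate 16 order-0 folios and insert them into the xarray */
	for (i = 0; i < MTHP_NR; i++) {
		small[i] = filemap_alloc_folio(GFP_KERNEL, 0);
		filemap_add_folio(mapping, small[i], index + i, GFP_KERNEL);
	}

	/* step 3: let the filesystem read each folio through its order-0 path */
	for (i = 0; i < MTHP_NR; i++)
		mapping->a_ops->read_folio(file, small[i]);

	/* step 4: wait for the I/O, copy into one order-4 folio, drop the
	 * small folios and insert the mTHP in their place
	 */
	large = filemap_alloc_folio(GFP_KERNEL, MTHP_ORDER);
	for (i = 0; i < MTHP_NR; i++) {
		folio_wait_locked(small[i]);
		memcpy(folio_address(large) + i * PAGE_SIZE,
		       folio_address(small[i]), PAGE_SIZE);
		folio_lock(small[i]);
		filemap_remove_folio(small[i]);
		folio_unlock(small[i]);
		folio_put(small[i]);
	}
	folio_mark_uptodate(large);
	filemap_add_folio(mapping, large, index, GFP_KERNEL);
	folio_unlock(large);
}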

That was very painful and in the end brought no improvement, probably due to
various synchronization overheads, so I gave up and didn't dig into more data.

Thanks
Barry
