Message-ID: <ZaHFbJ2Osd/tpPqN@casper.infradead.org>
Date: Fri, 12 Jan 2024 23:04:12 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Barry Song <21cnbao@...il.com>
Cc: Ryan Roberts <ryan.roberts@....com>,
	Catalin Marinas <catalin.marinas@....com>,
	Will Deacon <will@...nel.org>, Mark Rutland <mark.rutland@....com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	David Hildenbrand <david@...hat.com>,
	John Hubbard <jhubbard@...dia.com>,
	linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
	linux-fsdevel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [RFC PATCH v1] mm/filemap: Allow arch to request folio size for
 exec memory

On Sat, Jan 13, 2024 at 11:54:23AM +1300, Barry Song wrote:
> > > Perhaps an alternative would be to double ra->size and set ra->async_size to
> > > (ra->size / 2)? That would ensure we always have 64K aligned blocks but would
> > > give us an async portion so readahead can still happen.
> >
> > this might be worth trying, since the PMD path does exactly this:
> > the async portion can decrease the latency of subsequent page faults.
> >
> > #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> >         /* Use the readahead code, even if readahead is disabled */
> >         if (vm_flags & VM_HUGEPAGE) {
> >                 fpin = maybe_unlock_mmap_for_io(vmf, fpin);
> >                 ractl._index &= ~((unsigned long)HPAGE_PMD_NR - 1);
> >                 ra->size = HPAGE_PMD_NR;
> >                 /*
> >                  * Fetch two PMD folios, so we get the chance to actually
> >                  * readahead, unless we've been told not to.
> >                  */
> >                 if (!(vm_flags & VM_RAND_READ))
> >                         ra->size *= 2;
> >                 ra->async_size = HPAGE_PMD_NR;
> >                 page_cache_ra_order(&ractl, ra, HPAGE_PMD_ORDER);
> >                 return fpin;
> >         }
> > #endif
> >
> 
> BTW, rather than simply always reading forward, we did something very
> "ugly" to simulate "read-around" for CONT-PTE exec before[1]:
> 
> if a page fault lands in the first half of a cont-pte block, we read
> that 64KiB block and the previous 64KiB; otherwise, we read it and the
> next 64KiB.

I don't think that makes sense.  The CPU executes instructions forwards,
not "around".  I honestly think we should treat text as "random access"
because function A calls function B and functions A and B might well be
very far apart from each other.  The only time I see you actually
getting "readahead" hits is if a function is split across two pages (for
whatever size of page), but that's a false hit!  The function is not,
generally, 64kB long, so doing readahead is no more likely to bring in
the next page of text that we want than reading any other random page.

If somebody finds the GNU Rope source code from 1998, or recreates it
(https://lwn.net/1998/1029/als/rope.html), then we might actually have
some locality.

Did you actually benchmark what you did?  Is there really some locality
between the code at offset 256-288kB in the file and the code in the
range 192kB-256kB?
