Message-ID: <20070518075943.GB23998@wotan.suse.de>
Date:	Fri, 18 May 2007 09:59:43 +0200
From:	Nick Piggin <npiggin@...e.de>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Linux Memory Management List <linux-mm@...ck.org>,
	linux-arch@...r.kernel.org
Subject: Re: [rfc] increase struct page size?!

On Fri, May 18, 2007 at 12:43:04AM -0700, Andrew Morton wrote:
> On Fri, 18 May 2007 09:32:23 +0200 Nick Piggin <npiggin@...e.de> wrote:
> 
> > On Fri, May 18, 2007 at 12:19:05AM -0700, Andrew Morton wrote:
> > > On Fri, 18 May 2007 06:08:54 +0200 Nick Piggin <npiggin@...e.de> wrote:
> > > 
> > > > Many batch operations on struct page are completely random,
> > > 
> > > But they shouldn't be: we should aim to place physically contiguous pages
> > > into logically contiguous pagecache slots, for all the reasons we
> > > discussed.
> > 
> > For big IO batch operations, pagecache would be more likely to be
> > physically contiguous, as would LRU, I suppose.
> 
> read(), write(), truncate(), writeback, pagefault.  Pretty common stuff.

Of course, but if you're doing them on random-ish ranges, or multiple
files, or continually on the same file while parts of it get reclaimed
and reinstantiated...
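
To make the locality point concrete: with a flat memory model,
pfn_to_page() is just array indexing into mem_map, so physically
contiguous pages have adjacent page descriptors. A minimal userspace
sketch of that relationship (not kernel code; the field layout and
sizes here are made up, the real struct page is ~56 bytes on 64-bit):

	#include <stdio.h>
	#include <stddef.h>

	/*
	 * Toy pageframe descriptor -- roughly the shape, not the exact
	 * layout, of struct page.
	 */
	struct pageframe {
		unsigned long flags;
		int count;
		int mapcount;
		void *mapping;
		unsigned long index;
		void *lru_prev, *lru_next;
	};

	#define CACHE_LINE 64UL

	int main(void)
	{
		static struct pageframe mem_map[16]; /* stand-in for the global array */
		unsigned long pfn;

		/*
		 * With a flat mem_map, pfn_to_page(pfn) is just &mem_map[pfn],
		 * so adjacent pfns share cache lines and an ascending walk
		 * touches each line only once.
		 */
		for (pfn = 0; pfn < 16; pfn++) {
			size_t off = (char *)&mem_map[pfn] - (char *)mem_map;
			printf("pfn %2lu -> offset %3zu, cache line %zu\n",
			       pfn, off, off / CACHE_LINE);
		}
		return 0;
	}

When pagecache order matches physical order, walking a file's pages in
index order walks mem_map in order too and shares cache lines; when it
doesn't (the reclaim/reinstantiate case above), each descriptor touch
is effectively its own cache line.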


> > I'm more thinking of operations where things get reclaimed over time,
> > touched or dirtied in slightly different orderings, interleaved with
> > other allocations, etc.
> 
> Yes, that can happen.  But in such cases we by definition aren't touching
> the pageframes very often.  I'd assert that when the kernel is really
> hitting those pageframes hard, it is commonly doing this in ascending
> pagecache order.

I'm not sure I would always agree. Sure, if there is random *IO*
involved, then it is going to be slow and we won't be hitting page frames
so hard. But random access to pages that are already in pagecache hits
the page frames almost as hard as the sequential case does.


> > > If/when that happens, there will be a *lot* of locality of reference
> > > against the pageframes in a lot of important codepaths.
> > 
> > And when it doesn't happen, we eat 75% more cache misses. And for that
> > matter we eat 75% more cache misses for non-batch operations like
> > allocating or freeing a page by slab, for example.
> 
> "measure twice, cut once"

Definitely agree there.
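
For reference, one way to arrive at a 75% figure like the one quoted
above -- assuming ~56-byte descriptors packed into the array versus
descriptors padded to a 64-byte cache line, sizes this thread doesn't
spell out -- is that 6 of every 8 packed entries straddle a line
boundary, so a random lookup touches 1.75 lines on average instead of
exactly 1. A quick check of that arithmetic:

	#include <stdio.h>

	/*
	 * Reconstruction of a "75% more cache misses" figure, assuming
	 * 56-byte descriptors packed in an array vs. descriptors padded
	 * to a 64-byte cache line. These sizes are assumptions, not
	 * taken from the thread.
	 */
	#define DESC_SIZE  56UL
	#define CACHE_LINE 64UL

	int main(void)
	{
		unsigned long i, n = 8 * 1000, packed = 0, padded = 0;

		for (i = 0; i < n; i++) {
			unsigned long start = i * DESC_SIZE;
			unsigned long end = start + DESC_SIZE - 1;

			/* Packed: a random lookup touches one or two lines. */
			packed += end / CACHE_LINE - start / CACHE_LINE + 1;
			/* Padded to a full line: always exactly one. */
			padded += 1;
		}

		printf("avg lines per random lookup: packed %.2f, padded %.2f (+%lu%%)\n",
		       (double)packed / n, (double)padded / n,
		       100 * (packed - padded) / padded);
		return 0;
	}

The flip side, which is the "measure twice" point: padding costs 8
lines per 8 descriptors on a sequential walk instead of 7, so it only
pays off if random descriptor access really dominates.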

