Date:	Tue, 2 Feb 2010 00:36:08 +1100
From:	Nick Piggin <npiggin@...e.de>
To:	Andi Kleen <andi@...stfloor.org>
Cc:	Al Viro <viro@...IV.linux.org.uk>,
	Christoph Lameter <cl@...ux-foundation.org>,
	Dave Chinner <david@...morbit.com>,
	Alexander Viro <viro@....linux.org.uk>,
	Christoph Hellwig <hch@...radead.org>,
	Christoph Lameter <clameter@....com>,
	Rik van Riel <riel@...hat.com>,
	Pekka Enberg <penberg@...helsinki.fi>,
	akpm@...ux-foundation.org, Miklos Szeredi <miklos@...redi.hu>,
	Nick Piggin <nickpiggin@...oo.com.au>,
	Hugh Dickins <hugh@...itas.com>, linux-kernel@...r.kernel.org
Subject: Re: dentries: dentry defragmentation

On Mon, Feb 01, 2010 at 02:25:27PM +0100, Andi Kleen wrote:
> >
> > > > Right, but as you can see it is complex to do it this way. And I
> > > > think for reclaim-driven targeted reclaim it needn't be so
> > > > inefficient, because you aren't restricted to just one page but
> > > > can take any page which is heavily fragmented (and by definition
> > > > there should be a lot of them in the system).
> > > 
> > > Assuming you can identify them quickly.
> > 
> > Well, because there are a large number of them, you are likely to
> > encounter one very quickly just off the LRU list.
> 
> There were cases in the past where this didn't hold.
> But yes, some up-to-date numbers on this would be good.
> 
> Also, it doesn't address the second case, quoted again here.
> 
> > > There are really two different cases here:
> > > - Out of memory: in this case I just want to find all the objects
> > >   of any page, ideally of pages that have not been used recently.
> > > - Heavy fragmentation, where I want a specific page freed to get
> > >   a 2MB region back, or for hwpoison: the same, but for one
> > >   specific page.
> >
> > I still don't think it adds much weight, especially if you can
> > just try an inefficient scan.
> 
> Also see the second point below.
> >
> > > But soft hwpoison isn't the only user. The other big one would
> > > be large pages or other large allocations.

Well, yes, it's possible that it could help there.
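
To make the two strategies concrete, here is a toy user-space model
of the targeted, per-page pass (the layout and all names are invented
for illustration; this is a sketch, not the kernel API):

	/*
	 * Toy model: NPAGES slab-like pages, each packed with
	 * OBJS_PER_PAGE dentry-like objects.
	 */
	#include <stdbool.h>

	#define NPAGES		8
	#define OBJS_PER_PAGE	4

	struct obj {
		bool live;	/* allocated on its page */
		bool pinned;	/* like d_count > 0: cannot be freed */
	};

	static struct obj objs[NPAGES * OBJS_PER_PAGE];

	/*
	 * Targeted pass: free every unpinned object on one specific
	 * page.  The page only comes back if none of its objects
	 * were pinned.
	 */
	static int reclaim_page(int page)
	{
		int i, freed = 0;

		for (i = page * OBJS_PER_PAGE;
		     i < (page + 1) * OBJS_PER_PAGE; i++) {
			if (objs[i].live && !objs[i].pinned) {
				objs[i].live = false;
				freed++;
			}
		}
		return freed;
	}

The loop itself is trivial; the complexity argued about above is in
doing this safely in the kernel: finding the objects of a given page
and pinning them down against concurrent use.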

But it is always possible to do the same reclaim work via the LRU; in
the worst case it just requires reclaiming most of the objects.  So it
probably doesn't fundamentally enable anything we can't do already.
It's more a matter of performance, so again, numbers are needed.
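
And the LRU-driven alternative, continuing the same toy model as the
sketch above: prune from the cold end of the LRU and stop as soon as
the target page holds no live objects.  In the worst case this frees
most of the list before the target page comes back, which is exactly
the performance question:

	/* True if no live object remains on @page. */
	static bool page_empty(int page)
	{
		int i;

		for (i = page * OBJS_PER_PAGE;
		     i < (page + 1) * OBJS_PER_PAGE; i++)
			if (objs[i].live)
				return false;
		return true;
	}

	/*
	 * @lru holds object indices, coldest first.  Free objects in
	 * LRU order until the target page is empty (or we run out).
	 */
	static int reclaim_via_lru(const int *lru, int nlru, int target)
	{
		int i, freed = 0;

		for (i = 0; i < nlru && !page_empty(target); i++) {
			struct obj *o = &objs[lru[i]];

			if (o->live && !o->pinned) {
				o->live = false;
				freed++;
			}
		}
		return freed;
	}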
