Date:	Tue, 22 Sep 2009 19:56:08 +0100
From:	Mel Gorman <mel@....ul.ie>
To:	Pekka Enberg <penberg@...helsinki.fi>
Cc:	Nick Piggin <npiggin@...e.de>,
	Christoph Lameter <cl@...ux-foundation.org>,
	heiko.carstens@...ibm.com, sachinp@...ibm.com,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	Tejun Heo <tj@...nel.org>,
	Benjamin Herrenschmidt <benh@...nel.crashing.org>
Subject: Re: [PATCH 2/4] slqb: Record what node is local to a kmem_cache_cpu

On Tue, Sep 22, 2009 at 09:54:33PM +0300, Pekka Enberg wrote:
> Hi Mel,
> 
> On Tue, Sep 22, 2009 at 4:54 PM, Mel Gorman <mel@....ul.ie> wrote:
> >> I don't understand how the memory leak happens from the above
> >> description (or reading the code). page_to_nid() returns some crazy
> >> value at free time?
> >
> > Nope, it isn't a leak as such, the allocator knows where the memory is.
> > The problem is that it always frees remote but, on allocation, it sees
> > the per-cpu list is empty and calls the page allocator again. The remote
> > lists just grow.
> >
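
To make the failure mode concrete, the pattern is roughly the
following (a simplified sketch with made-up helper names, not the
actual SLQB code):

	/*
	 * Free path: the object's page is on a node other than
	 * c->local_nid, so it is queued on the remote list instead of
	 * the local freelist.
	 */
	if (page_to_nid(virt_to_page(object)) != c->local_nid)
		queue_on_remote_list(c, object);	/* illustrative */
	else
		queue_on_local_freelist(c, object);	/* illustrative */

	/*
	 * Alloc path: the local freelist is empty because everything
	 * was freed remote, so every allocation falls through to the
	 * page allocator. Nothing is lost, but nothing is reused.
	 */
	object = take_from_local_freelist(c);	/* always NULL here */
	if (!object)
		object = alloc_new_slab(cache);	/* cold page, again */
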
> >> The remote list isn't drained properly?
> >
> > That is another way of looking at it. When the remote lists get to a
> > watermark, they should drain. However, it's worth pointing out that if
> > it's repaired in this fashion, the performance of SLQB will suffer as it'll
> > never reuse the local list of pages and instead always get cold pages
> > from the allocator.
> 
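
If it were repaired purely by draining, the shape would be something
like the following (again with illustrative names, this is not a
patch):

	/*
	 * Once the remote list passes a watermark, flush it so the
	 * now-empty slabs can be returned to the page allocator. That
	 * bounds the growth, but later allocations still find an empty
	 * local freelist and keep getting cold pages.
	 */
	if (remote_list_size(c) > REMOTE_FREE_WATERMARK)
		drain_remote_list(c);
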
> I worry about setting c->local_nid to the node of the allocated struct
> kmem_cache_cpu. It seems like an arbitrary policy decision that's not
> necessarily the best option and I'm not totally convinced it's correct
> when cpusets are configured. SLUB seems to do the sane thing here by
> using page allocator fallback (which respects cpusets AFAICT) and
> recycling one slab at a time.
> 
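
For reference, the SLUB-style behaviour being described amounts to
roughly the following. alloc_pages_node() is the real interface; the
surrounding names are illustrative:

	/*
	 * Ask the page allocator for a slab page on the preferred node
	 * and let its normal zonelist fallback, which honours cpusets,
	 * pick another allowed node if that one cannot satisfy us.
	 */
	page = alloc_pages_node(preferred_nid, gfp_flags, cache_order);
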
> Can I persuade you into sending me a patch that fixes remote list
> draining to get things working on PPC? I'd much rather wait for Nick's
> input on the allocation policy and performance.
> 

It'll be at least next week before I can revisit this. I'm afraid
I'm going offline from tomorrow until Tuesday.

-- 
Mel Gorman
Part-time PhD Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab
