Date:	Wed, 27 Dec 2006 16:08:15 +0100
From:	Ingo Molnar <mingo@...e.hu>
To:	Catalin Marinas <catalin.marinas@...il.com>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2.6.20-rc1 00/10] Kernel memory leak detector 0.13


* Catalin Marinas <catalin.marinas@...il.com> wrote:

> On 18/12/06, Ingo Molnar <mingo@...e.hu> wrote:
> >* Catalin Marinas <catalin.marinas@...il.com> wrote:
> >> I could also use a simple allocator based on alloc_pages [...]
> >> [...] It could be so simple that it would never need to free any
> >> pages, just grow the size as required and reuse the freed memleak
> >> objects from a list.
> >
> >sounds good to me. Please make it a per-CPU pool. We'll have to fix the
> >locking too, to be per-CPU - memleak_lock is quite a scalability problem
> >right now. (Add a memleak_object->cpu pointer so that freeing can be
> >done on any other CPU as well.)
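[Editor's note: the grow-only per-CPU pool suggested above can be sketched as a small userspace model. All names (`pool_alloc`, `pool_grow`, `NR_CPUS`, etc.) are hypothetical, `malloc()` stands in for `alloc_pages()`, and real per-CPU locking is omitted.]

```c
#include <assert.h>
#include <stdlib.h>

/* Userspace model of the grow-only pool discussed above: pages are
 * never returned to the system; freed objects go on a per-CPU free
 * list and are reused before the pool grows.  All names hypothetical. */

#define NR_CPUS     4
#define OBJ_SIZE    64
#define CHUNK_OBJS  32          /* objects carved from one "page" grab */

struct pool_obj {
	int cpu;                /* home pool, so remote frees find it */
	struct pool_obj *next;  /* free-list linkage */
};

struct cpu_pool {
	struct pool_obj *free_list;
};

static struct cpu_pool pools[NR_CPUS];

/* Grow one CPU's pool: grab a chunk (stand-in for alloc_pages())
 * and thread its objects onto that CPU's free list. */
static void pool_grow(int cpu)
{
	char *chunk = malloc(CHUNK_OBJS * OBJ_SIZE);
	for (int i = 0; i < CHUNK_OBJS; i++) {
		struct pool_obj *obj = (struct pool_obj *)(chunk + i * OBJ_SIZE);
		obj->cpu = cpu;
		obj->next = pools[cpu].free_list;
		pools[cpu].free_list = obj;
	}
}

static void *pool_alloc(int cpu)
{
	if (!pools[cpu].free_list)
		pool_grow(cpu);
	struct pool_obj *obj = pools[cpu].free_list;
	pools[cpu].free_list = obj->next;
	return obj;
}

/* Freeing may run on any CPU: the embedded cpu id routes the
 * object back to its home free list, as suggested above. */
static void pool_free(void *p)
{
	struct pool_obj *obj = p;
	obj->next = pools[obj->cpu].free_list;
	pools[obj->cpu].free_list = obj;
}
```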
> 
> I did some simple statistics about allocations happening on one CPU 
> and freeing on a different one. On a 4-CPU ARM system (and without IRQ 
> balancing and without CONFIG_PREEMPT), these seem to happen in about 
> 8-10% of the cases. Do you expect higher figures on other 
> systems/configurations?
> 
> As I mentioned in a different e-mail, a way to remove the global hash 
> table is to create per-cpu hashes. The only problem is that in these 
> 8-10% of the cases, freeing would need to look up the other hashes. 
> This would become a problem with a high number of CPUs but I'm not 
> sure whether it would overtake the performance issues introduced by 
> cacheline ping-ponging in the single-hash case.
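[Editor's note: the per-CPU-hash scheme described above — local lookup first, then a fallback scan of the other CPUs' tables for the ~8-10% cross-CPU frees — can be modelled like this. Names (`hash_insert`, `hash_remove`, table sizes) are hypothetical, and the toy table trades efficiency for simplicity by scanning every slot.]

```c
#include <assert.h>
#include <stddef.h>

/* Userspace model of per-CPU hashes: each CPU tracks the pointers it
 * allocated in its own table; a free probes the local table first and
 * falls back to the other CPUs' tables.  All names hypothetical. */

#define NR_CPUS    4
#define HASH_SIZE  256

static const void *hash_tab[NR_CPUS][HASH_SIZE];

static unsigned hash_ptr(const void *p)
{
	return ((size_t)p >> 4) % HASH_SIZE;    /* toy hash function */
}

static int hash_insert(int cpu, const void *p)
{
	unsigned h = hash_ptr(p);
	for (unsigned i = 0; i < HASH_SIZE; i++) {
		unsigned slot = (h + i) % HASH_SIZE;
		if (!hash_tab[cpu][slot]) {
			hash_tab[cpu][slot] = p;
			return 0;
		}
	}
	return -1;                              /* table full */
}

/* Remove p, trying the local CPU's table first, then the others
 * (the ~8-10% cross-CPU case measured above).  Returns the CPU
 * whose table held the pointer, or -1 if untracked.  Scanning
 * every slot keeps deletion correct in this toy open-addressed
 * table, at the cost of efficiency. */
static int hash_remove(int local_cpu, const void *p)
{
	for (int k = 0; k < NR_CPUS; k++) {
		int cpu = (local_cpu + k) % NR_CPUS;
		for (unsigned i = 0; i < HASH_SIZE; i++) {
			if (hash_tab[cpu][i] == p) {
				hash_tab[cpu][i] = NULL;
				return cpu;
			}
		}
	}
	return -1;
}
```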

I don't think it's worth doing that. So we should either keep the current 
global lock & hash (bad for scalability), or go for a pure per-CPU design. 
The pure per-CPU design would have to embed the CPU ID the object is 
attached to into the allocated object. If that is not feasible then only 
the global hash remains, I think.
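[Editor's note: "embedding the CPU ID into the allocated object" means the tracking metadata lives in a header in front of the allocation itself, so no hash lookup is needed on free. A minimal userspace sketch, with hypothetical names and `malloc()` standing in for the real allocator:]

```c
#include <assert.h>
#include <stdlib.h>

/* Userspace model of the "pure per-CPU" option: the owning CPU id is
 * embedded in a header prepended to the allocation, so freeing on any
 * CPU finds the owner without consulting any hash.  Names hypothetical. */

struct track_hdr {
	int cpu;        /* CPU whose per-CPU state owns this object */
	size_t size;    /* tracked allocation size */
};

static void *tracked_alloc(int cpu, size_t size)
{
	struct track_hdr *hdr = malloc(sizeof(*hdr) + size);
	if (!hdr)
		return NULL;
	hdr->cpu = cpu;
	hdr->size = size;
	return hdr + 1; /* caller sees only the payload */
}

/* On free, step back to the header: the owning CPU is right there.
 * Returns the owner so the caller can update that CPU's state. */
static int tracked_free(void *p)
{
	struct track_hdr *hdr = (struct track_hdr *)p - 1;
	int cpu = hdr->cpu;
	free(hdr);
	return cpu;
}
```

The trade-off Ingo points at: this only works when the tracker can control the object layout; for allocations it merely observes, only the hash approaches remain.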

	Ingo