Message-ID: <20061228094431.GE24765@elte.hu>
Date: Thu, 28 Dec 2006 10:44:31 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Catalin Marinas <catalin.marinas@...il.com>
Cc: linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2.6.20-rc1 00/10] Kernel memory leak detector 0.13
* Catalin Marinas <catalin.marinas@...il.com> wrote:
> >memleak info is at a negative offset from the allocated pointer. I.e.,
> >if kmalloc() returns 'ptr', the memleak info could be at
> >ptr-sizeof(memleak_info). That way you don't have to know the size of
> >the object beforehand and there's absolutely no need for a global hash
> >of any sort.
>
> It would probably need to be just a pointer embedded in the allocated
> block. With the current design, the memleak objects have a lifetime
> longer than the tracked block. This is mainly to avoid long locking
> during memory scanning and reporting.
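To make what I'm proposing concrete, roughly this (only a sketch - the
struct and helper names are made up, payload alignment is ignored, and a
real implementation would have the slab allocator reserve the extra
space itself rather than wrap kmalloc()):

#include <linux/slab.h>
#include <linux/list.h>

#define MEMLEAK_MAGIC	0x4d4c454bUL		/* arbitrary value */

/* hypothetical descriptor - not the posted kmemleak code: */
struct memleak_info {
	unsigned long		magic;
	unsigned long		flags;
	size_t			size;
	struct list_head	free_node;	/* freed-but-used list, see below */
};

static inline void *memleak_alloc(size_t size, gfp_t gfp)
{
	struct memleak_info *info;

	info = kmalloc(sizeof(*info) + size, gfp);
	if (!info)
		return NULL;
	info->magic = MEMLEAK_MAGIC;
	info->flags = 0;
	info->size = size;
	INIT_LIST_HEAD(&info->free_node);

	/* the object starts right after the descriptor: */
	return info + 1;
}

static inline struct memleak_info *memleak_info_of(const void *ptr)
{
	/* the descriptor sits at a negative offset from the object: */
	return (struct memleak_info *)ptr - 1;
}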
This thing has to be /fast/ in the common path. I don't really see the
point in spreading out allocations like that for small objects. Access
to the object has to be refcounted /anyway/ due to scanning. Just move
that refcounting to the object freeing stage: keep freed-but-used
buffers on some sort of per-CPU list that the scanning code flushes
after it's done. (Also, maybe hold the cache_chain_mutex to prevent
slab caches from being destroyed during scanning.)
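I.e., roughly this (again only a sketch with invented names; locking
between the freeing path and the flush, and boot-time initialization of
the per-CPU lists, are omitted):

#include <linux/percpu.h>

static DEFINE_PER_CPU(struct list_head, memleak_free_list);

/* kfree() path: park the object while a scan may still reference it */
static void memleak_defer_free(struct memleak_info *info)
{
	struct list_head *list = &get_cpu_var(memleak_free_list);

	list_add(&info->free_node, list);
	put_cpu_var(memleak_free_list);
}

/*
 * The scanner flushes the lists once it has dropped all references
 * (holding cache_chain_mutex meanwhile, so no slab cache goes away):
 */
static void memleak_flush_free_lists(void)
{
	struct memleak_info *info, *next;
	int cpu;

	for_each_possible_cpu(cpu) {
		list_for_each_entry_safe(info, next,
				&per_cpu(memleak_free_list, cpu), free_node) {
			list_del(&info->free_node);
			kfree(info);	/* really free descriptor+object now */
		}
	}
}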
> > (it gets a bit more complex for page-aligned allocations from the
> > buddy allocator and for vmalloc - but that could be solved by adding
> > one extra pointer into struct page. [...]
>
> This still leaves the issue of marking objects as not being leaks, or
> as being of a different type. This is done by calling memleak_*
> functions at the allocation point (outside the allocator), where only
> the pointer is known. [...]
I don't see the problem: having the pointer, we have access to the
memleak descriptor too.
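E.g., marking an object then needs no lookup at all (again just a
sketch; the flag name is made up):

#define MEMLEAK_FL_NOT_LEAK	0x1	/* hypothetical flag */

/* no hash lookup - the pointer itself locates the descriptor: */
static inline void memleak_not_leak(const void *ptr)
{
	memleak_info_of(ptr)->flags |= MEMLEAK_FL_NOT_LEAK;
}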
> [...] In the vmalloc case, it would need to call find_vm_area. This
> might not be a big problem, except that memory resources would no
> longer be treated in a unified way by kmemleak (and adding support
> for new allocators might not be trivial).
The pretty horrible locking dependencies in the current code are just
as bad. (They could be softened by a simplified allocator - but that
brings in other problems, which can only be solved via allocator
complexity ...)
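(For vmalloc the descriptor could simply hang off the vm_struct, found
via find_vm_area - a sketch, where ->memleak_info is a hypothetical
extra pointer added to struct vm_struct, analogous to the struct page
case above:)

#include <linux/vmalloc.h>

static struct memleak_info *memleak_info_of_vmalloc(const void *ptr)
{
	struct vm_struct *area = find_vm_area((void *)ptr);

	return area ? area->memleak_info : NULL;
}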
If 'unification' means global locking, bad overhead and leak-descriptor
maintenance complexity, then yes, we very much don't want to treat them
in a unified way. Unless there's some killer counter-argument against
embedding the memleak descriptor in the object we allocate, it is
pretty much a must.
Btw., you made the argument before that what matters most is the SLAB
allocator (when I argued that we might want to extend this to other
allocators as well). You can't have it both ways :)
> > [...] That is a far preferable cost compared to the locking/cache
> > overhead of a global hash.)
>
> A global hash would need to be re-built for every scan (and destroyed
> afterwards), making this operation longer, since the pointer values
> together with their aliases (resulting from the use of container_of)
> are added to the hash.
By 'global hash' I mean the current code.
> I understand the benefits but I personally favor simplicity over
> performance, [...]
I think you are trying to cling to a bad design detail. You spent time
on it, and that time was still worthwhile, believe me. I often change
design details dozens of times before a patch hits lkml. It doesn't
matter, really - it's the path you walk that matters. This thing /has/
to be fast and scalable to be used in distributions.
> [...] Global structures are indeed a scalability problem, but for a
> reasonable number of CPUs their overhead might not be that big.
Let's forget you ever said this, ok? ;-)
Ingo