Message-ID: <20091106084041.GA22505@elte.hu>
Date: Fri, 6 Nov 2009 09:40:41 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Tejun Heo <tj@...nel.org>
Cc: Nick Piggin <npiggin@...e.de>, Jiri Kosina <jkosina@...e.cz>,
Peter Zijlstra <peterz@...radead.org>,
Yinghai Lu <yhlu.kernel@...il.com>,
Thomas Gleixner <tglx@...utronix.de>, cl@...ux-foundation.org,
linux-kernel@...r.kernel.org
Subject: Re: irq lock inversion
* Tejun Heo <tj@...nel.org> wrote:
> Hello, Ingo.
>
> Ingo Molnar wrote:
> > I haven't looked deeply, but at first sight I'm not 100% sure that even
> > the lock dance hack is safe - doesn't vfree() do TLB flushes, which must
> > be done with irqs enabled in general? If yes, then the whole notion of
> > using the allocator from irqs-off sections is wrong and the flags
> > save/restore is misguided (or at least incomplete).
>
> The only place where any v*() call is nested under pcpu_lock is in the
> alloc path: specifically, pcpu_extend_area_map() ends up calling
> vfree(). The pcpu_free() path, which can be called from irq context,
> never calls any vmalloc function directly - the reclaiming is deferred
> to a work item. Breaking that single nesting completely decouples the
> two locks, and then nobody would be calling vfree() with irqs
> disabled, so I don't think there will be any problem.
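
( A minimal sketch of the deferred-reclaim pattern Tejun describes above -
  illustrative names only, not the actual mm/percpu.c code; it assumes a
  hypothetical 'struct chunk' carrying a list node and a vmalloc'ed map.
  The irq-safe free path only queues the chunk; the actual vfree() runs
  later from process context: )

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/vmalloc.h>
#include <linux/workqueue.h>

struct chunk {
	struct list_head	list;
	void			*vm_area;	/* vmalloc'ed map */
};

static LIST_HEAD(to_reclaim);
static DEFINE_SPINLOCK(reclaim_lock);

/* runs from the workqueue, i.e. process context with irqs enabled */
static void reclaim_workfn(struct work_struct *work)
{
	struct chunk *c, *tmp;
	LIST_HEAD(todo);

	spin_lock_irq(&reclaim_lock);
	list_splice_init(&to_reclaim, &todo);
	spin_unlock_irq(&reclaim_lock);

	list_for_each_entry_safe(c, tmp, &todo, list) {
		list_del(&c->list);
		vfree(c->vm_area);	/* safe: irqs are on here */
		kfree(c);
	}
}
static DECLARE_WORK(reclaim_work, reclaim_workfn);

/* may be called with irqs disabled - never calls vfree() directly */
static void chunk_free_deferred(struct chunk *c)
{
	unsigned long flags;

	spin_lock_irqsave(&reclaim_lock, flags);
	list_add(&c->list, &to_reclaim);
	spin_unlock_irqrestore(&reclaim_lock, flags);
	schedule_work(&reclaim_work);
}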
My question is: why do we do the flags save/restore in pcpu-alloc at all?
Do we ever call it with irqs disabled? If yes, then the vfree() might be
unsafe, because vfree() can flush TLBs (on all CPUs), and sending those
IPIs requires irqs to be enabled.
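
( To illustrate the concern - a hypothetical pattern, not the actual
  mm/percpu.c code: the problem would be holding an irq-disabled lock
  across a vfree() that ends up issuing TLB-flush IPIs: )

#include <linux/spinlock.h>
#include <linux/vmalloc.h>

static DEFINE_SPINLOCK(demo_lock);	/* stand-in for pcpu_lock */

/* Hypothetical extend path: swap in a bigger map and free the old one
 * while still holding the irq-disabled lock. */
static void suspect_extend(void **map, void *new_map)
{
	unsigned long flags;

	spin_lock_irqsave(&demo_lock, flags);	/* local irqs now off */

	vfree(*map);	/* may reach flush_tlb_kernel_range(), which
			 * sends IPIs (on_each_cpu()) - those must not
			 * be issued with local irqs disabled */
	*map = new_map;

	spin_unlock_irqrestore(&demo_lock, flags);
}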
( Now, Nick has recently optimized vfree() to lazy-free areas, but that
is a statistical optimization: TLB flushes are still possible, just
done more rarely. So we might still end up calling
flush_tlb_kernel_range() from vfree(). I've Cc:-ed Nick. )
Ingo