Message-ID: <1289963005.8719.1238.camel@yhuang-dev>
Date: Wed, 17 Nov 2010 11:03:25 +0800
From: Huang Ying <ying.huang@...el.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Len Brown <lenb@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Andi Kleen <andi@...stfloor.org>,
"linux-acpi@...r.kernel.org" <linux-acpi@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...e.hu>,
Mauro Carvalho Chehab <mchehab@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [PATCH -v4 1/2] lib, Make gen_pool memory allocator lockless
On Wed, 2010-11-17 at 10:35 +0800, Andrew Morton wrote:
> On Wed, 17 Nov 2010 10:18:01 +0800 Huang Ying <ying.huang@...el.com> wrote:
>
> > On Wed, 2010-11-17 at 05:50 +0800, Andrew Morton wrote:
> > > On Tue, 16 Nov 2010 08:53:10 +0800
> > > Huang Ying <ying.huang@...el.com> wrote:
> > >
> > > > This version of the gen_pool memory allocator supports lockless
> > > > operation.
> > > >
> > > > This makes it safe to use in NMI handlers and other special
> > > > unblockable contexts that could otherwise deadlock on locks. This is
> > > > implemented by using atomic operations and retries on any conflicts.
> > > > The disadvantage is that there may be livelocks in extreme cases. For
> > > > better scalability, one gen_pool allocator can be used for each CPU.
> > > >
> > > > The lockless operation only works if there is enough memory available.
> > > > If new memory is added to the pool, a lock still has to be taken. So
> > > > any user relying on locklessness has to ensure that sufficient memory
> > > > is preallocated.
> > > >
> > > > The basic atomic operation of this allocator is cmpxchg on long. On
> > > > architectures that don't support cmpxchg natively a fallback is used.
> > > > If the fallback uses locks it may not be safe to use it in NMI
> > > > contexts on these architectures.
> > >
> > > The code assumes that cmpxchg is atomic wrt NMI. That would be news to
> > > me - at present an architecture can legitimately implement cmpxchg()
> > > with, say, spin_lock_irqsave() on a hashed spinlock. I don't know
> > > whether any architectures _do_ do anything like that. If so then
> > > that's a problem. If not, it's an additional requirement on future
> > > architecture ports.
> >
> > cmpxchg has been used in that way by ftrace and perf for a long time. So
> > I agree to make it a requirement on future architecture ports.
>
> All I was really doing was inviting you to check your assumptions for
> the known architecture ports. Seems that I must do it myself.
Sorry. I should have checked that myself.
> dude, take a look at include/asm-generic/cmpxchg-local.h. Not NMI-safe!
>
> arch/arm/include/asm/atomic.h's atomic_cmpxchg() isn't NMI-safe.
>
> arch/arm/include/asm/system.h uses include/asm-generic/cmpxchg-local.h.
>
> as does avr32
>
> and blackfin
>
> Now go take a look at cris.
>
> h8300 atomic_cmpxchg() isn't NMI-safe.
>
> m32r isn't NMI-safe
>
> go look at m68k, see if you can work it out.
>
> microblaze? Dunno.
>
> mn10300 uniprocessor isn't NMI-safe
>
> score isn't NMI-safe
>
> I stopped looking there.
I discussed the NMI-safety of cmpxchg with Steven Rostedt before, in the
following thread:
http://lkml.org/lkml/2009/6/10/518
It seems that Steven thinks many architectures without an NMI-safe
cmpxchg have no real NMIs either.
The patch description and comments already state that gen_pool cannot be
used safely in NMI handlers on architectures without an NMI-safe
cmpxchg.
Or do you think it would be better to use a spin_trylock-based fallback
when NMI-safe cmpxchg is not available? Or to require that cmpxchg
implementations use spin_trylock instead of spin_lock?
Best Regards,
Huang Ying