Message-ID: <alpine.DEB.2.00.1112220854440.31315@router.home>
Date: Thu, 22 Dec 2011 08:58:43 -0600 (CST)
From: Christoph Lameter <cl@...ux.com>
To: Tejun Heo <tj@...nel.org>
cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Pekka Enberg <penberg@...nel.org>, Ingo Molnar <mingo@...e.hu>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [GIT PULL] slab fixes for 3.2-rc4
On Wed, 21 Dec 2011, Tejun Heo wrote:
> The thing is that irqsafe ones are the "complete" ones. We can use
> irqsafe ones instead of preempt safe ones but not the other way. This
> matters only if flipping irq is noticeably more expensive than
> inc/dec'ing preempt count but I suspect there are enough such
> machines. (cc'ing arch) Does anyone have better insight here? How
> much more expensive are local irq save/restore compared to inc/dec'ing
> preempt count on various archs?
Well, that would be a pretty nice simplification of the API.
Replace the fallback code for the preempt safe ones with the
irqsafe fallbacks, then drop the irqsafe variants from percpu.h.
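(Sketch for reference -- roughly what the two generic fallbacks in
percpu.h look like today, from memory and modulo exact names: the
preempt safe one only inc/decs the preempt count, the irqsafe one
disables interrupts around the operation. Dropping the preempt safe
fallback means every such op pays the irq flip on archs without a
native this_cpu instruction.)

	/* preempt safe generic fallback */
	#define _this_cpu_generic_to_op(pcp, val, op)			\
	do {								\
		preempt_disable();					\
		*__this_cpu_ptr(&(pcp)) op val;				\
		preempt_enable();					\
	} while (0)

	/* irqsafe generic fallback */
	#define irqsafe_cpu_generic_to_op(pcp, val, op)			\
	do {								\
		unsigned long flags;					\
		local_irq_save(flags);					\
		*__this_cpu_ptr(&(pcp)) op val;				\
		local_irq_restore(flags);				\
	} while (0)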
> > The way the cmpxchg things are used is also similar to the transactional
> > memory that is coming in Intel's next generation of processors and is
> > already available in IBM's current generation of powerpc processors. It
> > is a way to avoid locking overhead.
>
> Hmmm... how about removing the ones which aren't currently in use?
Yep. Could easily be done. We can resurrect the stuff as needed when other
variants become necessary. In particular, the _and, _or etc. variants were
only added for backward compatibility with the old per cpu and local_t
interfaces. There may be no use cases left.
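(For reference, the kind of lockless update the cmpxchg variants are
meant for -- a hypothetical per cpu counter, no lock and no irq
disabling on archs with a native percpu cmpxchg. "counter" and "delta"
are made-up names here, with counter declared via DEFINE_PER_CPU:)

	/* hypothetical: lockless read-modify-write of a per cpu value */
	long old, new;

	do {
		old = __this_cpu_read(counter);
		new = old + delta;
	} while (this_cpu_cmpxchg(counter, old, new) != old);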