Message-ID: <1316544841.29966.121.camel@gandalf.stny.rr.com>
Date: Tue, 20 Sep 2011 14:54:01 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Christoph Lameter <cl@...two.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [RFC][PATCH 0/5] Introduce checks for preemptable code for
this_cpu_read/write()
On Tue, 2011-09-20 at 11:10 -0500, Christoph Lameter wrote:
> On Tue, 20 Sep 2011, Steven Rostedt wrote:
>
> > I really mean all other users of this_cpu_*(), including the cmpxchg and
> > friends, still need to have preemption disabled.
>
> This is argument against the basic design of this_cpu_ops. They were
> designed to avoid having to disable preemption for single operations on
> per cpu data. I think this shows a basic misunderstanding of what you are
> dealing with.
>
BTW, can you explain to me where the this_cpu_*() ops were designed to
be used? The only places where this_cpu_*() is used in slub.c and
page_alloc.c already have irqs disabled around the use. I thought this
was for slub and page_alloc?
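
The pattern I'm thinking of looks roughly like this (just a sketch of
the allocator-style usage with made-up names, not the actual code):

	/* made-up per-cpu structure, only to illustrate the pattern */
	struct foo_cpu {
		void *freelist;
	};
	DEFINE_PER_CPU(struct foo_cpu, foo_cpu_data);

	void foo_update(void *obj)
	{
		struct foo_cpu *c;
		unsigned long flags;

		local_irq_save(flags);
		/* irqs off => no preemption, no migration */
		c = this_cpu_ptr(&foo_cpu_data);
		c->freelist = obj;
		local_irq_restore(flags);
	}

If irqs are already disabled around every use, the "safe against
preemption" property of this_cpu_*() isn't buying anything there.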
Is this_cpu_*() meant just for statistics? I see it used in the inode
code for that, and for some accounting in namespace.c.
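
That kind of use I can understand: a lone counter bump where it doesn't
matter which CPU's copy gets hit, e.g. (sketch, with a made-up counter
name):

	DEFINE_PER_CPU(unsigned long, nr_foo_events);

	static inline void count_foo_event(void)
	{
		/* single insn on x86; can't be split by preemption or irqs */
		this_cpu_inc(nr_foo_events);
	}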
Note, there are places all over the kernel that use this_cpu_read()
and assume preemption is disabled. Just look at
arch/x86/mm/tlb.c:
/* Caller has disabled preemption */
sender = this_cpu_read(tlb_vector_offset);
Why the comment?
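
If the caller really has disabled preemption, the raw variant should
already be enough there, i.e. something like:

	/* Caller has disabled preemption */
	sender = __this_cpu_read(tlb_vector_offset);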
My argument is that this_cpu_*() is just confusing. Rename the ops for
your use case, and keep this_cpu_*() meaning what you currently want
__this_cpu_*() to be.
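
To spell out the two flavors as I understand their current semantics
(just a sketch; "counter" stands in for some DEFINE_PER_CPU variable):

	DEFINE_PER_CPU(unsigned long, counter);

	/* "safe" flavor: the whole read-modify-write is protected against
	 * preemption and irqs (a single insn on x86, or protected
	 * internally by the generic fallback), no disabling needed.
	 */
	this_cpu_add(counter, 1);

	/* "raw" flavor: the caller must already guarantee we can't
	 * migrate, e.g. under preempt_disable() or with irqs off.
	 */
	preempt_disable();
	__this_cpu_add(counter, 1);
	preempt_enable();

The names don't make that split obvious at the call site.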
Thanks!
-- Steve