Message-ID: <1331048093.11248.317.camel@twins>
Date: Tue, 06 Mar 2012 16:34:53 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Lai Jiangshan <eag0628@...il.com>
Cc: Lai Jiangshan <laijs@...fujitsu.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
linux-kernel@...r.kernel.org, mingo@...e.hu, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...ymtl.ca,
josh@...htriplett.org, niv@...ibm.com, tglx@...utronix.de,
rostedt@...dmis.org, Valdis.Kletnieks@...edu, dhowells@...hat.com,
eric.dumazet@...il.com, darren@...art.com, fweisbec@...il.com,
patches@...aro.org, tj@...nel.org
Subject: Re: [RFC PATCH 5/6] implement per-cpu&per-domain state machine
call_srcu()
On Tue, 2012-03-06 at 23:12 +0800, Lai Jiangshan wrote:
> On Tue, Mar 6, 2012 at 7:16 PM, Peter Zijlstra <peterz@...radead.org> wrote:
> > On Tue, 2012-03-06 at 17:57 +0800, Lai Jiangshan wrote:
> >> srcu_head is bigger, but it is worth it: it provides more ability and
> >> simplifies the srcu code.
> >
> > Dubious claim.. the memory footprint of these data structures is deemed
> > important. rcu_head is 16 bytes, srcu_head is 32 bytes. I think it would
> > be really nice not to have two different callback structures, and not to
> > grow them this large.
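
For reference, this is roughly what the two layouts look like on 64-bit
(the extra srcu_head fields are only a guess at where the additional 16
bytes go; the actual patch may carve them up differently):

struct rcu_head {			/* 16 bytes on 64-bit */
	struct rcu_head *next;
	void (*func)(struct rcu_head *head);
};

struct srcu_head {			/* 32 bytes on 64-bit */
	struct srcu_head *next;
	void (*func)(struct srcu_head *head);
	struct srcu_struct *sp;		/* guessed: back-pointer to the domain */
	unsigned long state;		/* guessed: per-callback state */
};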
>
> CC: tj@...nel.org
> It would be better if the workqueue also supported 2*sizeof(long) work callbacks.
That's going to be very painful if at all possible.
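
The current work_struct (lockdep/debugobjects left aside) already needs
four words, and neither the flags word nor the list_head is easy to give
up, which is why a 2*sizeof(long) work callback is hard:

struct work_struct {
	atomic_long_t data;		/* workqueue back-pointer packed with flag bits */
	struct list_head entry;		/* doubly linked; needed for cancel/requeue */
	work_func_t func;
};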
> I prefer the extra ability/functionality a little more; it eases the callers' pain.
> Preemptible callbacks also ease the pressure on the whole system.
> But I'm also OK if we limit the srcu callbacks to softirq.
You don't have to use softirq; you could run a complete list from a
single worklet. Just keep the singly linked rcu_head list and enqueue a
static (per-cpu) worker to process the entire list.
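
Something along these lines (a rough sketch, not against any particular
tree; the names are made up):

struct srcu_cpu {
	struct rcu_head *head, **tail;	/* singly linked callback list */
	spinlock_t lock;
	struct work_struct work;	/* static, one per CPU */
};

static void srcu_invoke_callbacks(struct work_struct *work)
{
	struct srcu_cpu *scp = container_of(work, struct srcu_cpu, work);
	struct rcu_head *list, *next;

	spin_lock_irq(&scp->lock);
	list = scp->head;
	scp->head = NULL;
	scp->tail = &scp->head;
	spin_unlock_irq(&scp->lock);

	while (list) {			/* run the entire list from one worklet */
		next = list->next;
		list->func(list);
		list = next;
		cond_resched();		/* process context, so we can be preempted */
	}
}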