Message-ID: <20150625191701.GA5013@mtj.duckdns.org>
Date: Thu, 25 Jun 2015 15:17:01 -0400
From: Tejun Heo <tj@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Nicholas Mc Guire <der.herr@...r.at>, oleg@...hat.com,
paulmck@...ux.vnet.ibm.com, mingo@...hat.com,
linux-kernel@...r.kernel.org, dave@...olabs.net, riel@...hat.com,
viro@...IV.linux.org.uk, torvalds@...ux-foundation.org
Subject: Re: [RFC][PATCH 05/13] percpu-rwsem: Optimize readers and reduce
global impact
Hello,
On Thu, Jun 25, 2015 at 09:08:00PM +0200, Peter Zijlstra wrote:
> > mm/memcontrol.c:mem_cgroup_read_events
> > mm/memcontrol.c:mem_cgroup_read_stat
>
> Those seem to be hotplug challenged. I'm thinking dropping that
> nocpu_base.count[] crap and just iterating all possible CPUs would've
> been much easier.
A patch doing that is already queued for this merge window. IIRC,
it's included as part of cgroup writeback updates.
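For illustration, the direction being described would look roughly like
this (untested sketch, not the queued patch; the memcg->stat->count[]
layout is assumed here):

static unsigned long mem_cgroup_read_stat(struct mem_cgroup *memcg,
					  enum mem_cgroup_stat_index idx)
{
	unsigned long val = 0;
	int cpu;

	/* no get_online_cpus() / nocpu_base[] fixup needed */
	for_each_possible_cpu(cpu)
		val += per_cpu(memcg->stat->count[idx], cpu);
	return val;
}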
> > > +#define per_cpu_sum(var) \
> > > +({ \
> > > + typeof(var) __sum = 0; \
> > > + int cpu; \
> > > + for_each_possible_cpu(cpu) \
> > > + __sum += per_cpu(var, cpu); \
> > > + __sum; \
> > > +})
> > > +
> >
> > so maybe put it into include/linux/percpu.h ?
percpu-defs.h would be the better place for it.
> Yes I can do that.
>
> We can try and use it more after that; there seem to be loads of places
> that could use this: fs/namespace.c, fs/inode.c, etc.
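Just to illustrate the kind of conversion (untested; assumes the
existing per-cpu nr_inodes counter in fs/inode.c):

static DEFINE_PER_CPU(unsigned long, nr_inodes);

static long get_nr_inodes(void)
{
	/* replaces the open-coded for_each_possible_cpu() summing loop */
	long sum = per_cpu_sum(nr_inodes);
	return sum < 0 ? 0 : sum;
}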
Hmmm... the only worry I have about this is people using it on u64 on
32bit machines.  CPU local ops can do split updates on the lower and
upper halves, so the remotely-read value can be surprising.  We have
the same issue with regular per_cpu accesses too, but the summing
function / macro is better at giving a false sense of security.
Probably limiting it to ulong size is a good idea?
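Concretely, that limit could be a compile-time check in the macro
itself, something like (untested sketch; BUILD_BUG_ON() from
<linux/bug.h>):

#define per_cpu_sum(var)						\
({									\
	typeof(var) __sum = 0;						\
	int cpu;							\
	/* 64-bit vars can see torn updates on 32-bit, refuse them */	\
	BUILD_BUG_ON(sizeof(var) > sizeof(unsigned long));		\
	for_each_possible_cpu(cpu)					\
		__sum += per_cpu(var, cpu);				\
	__sum;								\
})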
Thanks.
--
tejun
--