Message-ID: <20150605221111.GY7232@ZenIV.linux.org.uk>
Date: Fri, 5 Jun 2015 23:11:11 +0100
From: Al Viro <viro@...IV.linux.org.uk>
To: Oleg Nesterov <oleg@...hat.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Davidlohr Bueso <dave@...olabs.net>,
Peter Zijlstra <peterz@...radead.org>,
Paul McKenney <paulmck@...ux.vnet.ibm.com>,
Tejun Heo <tj@...nel.org>, Ingo Molnar <mingo@...hat.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
der.herr@...r.at
Subject: Re: [RFC][PATCH 0/5] Optimize percpu-rwsem
On Fri, Jun 05, 2015 at 11:08:57PM +0200, Oleg Nesterov wrote:
> On 06/05, Al Viro wrote:
> >
> > FWIW, I hadn't really looked into stop_machine uses, but fs/locks.c one
> > is really not all that great - there we have a large trashcan of a list
> > (every file_lock on the system) and the only use of that list is /proc/locks
> > output generation. Sure, additions take this CPU's spinlock. And removals
> > take pretty much a random one - losing the timeslice and regaining it on
> > a different CPU is quite likely with the uses there.
> >
> > Why do we need a global lock there, anyway? Why not hold only one for
> > the chain currently being traversed? Sure, we'll need to get and drop
> > them in ->next() that way; so what?
>
> And note that fs/seq_file.c:seq_hlist_next_percpu() has no other users.
>
> And given that locks_delete_global_locks() takes the random lock anyway,
> perhaps the hashed lists/locking makes no sense, I dunno.
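
For context, the scheme being discussed looks more or less like this - a
from-memory sketch of the fs/locks.c side (lglock-based as it is right now;
names approximate, not a verbatim quote of the tree):

#include <linux/fs.h>
#include <linux/lglock.h>
#include <linux/percpu.h>

/* per-cpu global list of every file_lock, plus a per-cpu lock array */
static DEFINE_PER_CPU(struct hlist_head, file_lock_list);
DEFINE_STATIC_LGLOCK(file_lock_lglock);

static void locks_insert_global_locks(struct file_lock *fl)
{
	/* addition: this CPU's spinlock, this CPU's list */
	lg_local_lock(&file_lock_lglock);
	fl->fl_link_cpu = smp_processor_id();
	hlist_add_head(&fl->fl_link, this_cpu_ptr(&file_lock_list));
	lg_local_unlock(&file_lock_lglock);
}

static void locks_delete_global_locks(struct file_lock *fl)
{
	/* removal: whichever CPU the entry went in on - the "random" one */
	lg_local_lock_cpu(&file_lock_lglock, fl->fl_link_cpu);
	hlist_del_init(&fl->fl_link);
	lg_local_unlock_cpu(&file_lock_lglock, fl->fl_link_cpu);
}
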
It's not about making life easier for /proc/locks; it's about not screwing
those who add/remove file_locks... And no, that "random lock" isn't held
when modifying the (per-cpu) lists - it protects the list of blocked locks
hanging off each element of the global list. /proc/locks scans those lists,
so rather than taking/dropping the lock in each ->show(), it's taken once
in ->start()...
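
IOW, the /proc/locks iteration side is roughly this (continuing the sketch
above, again from memory - the iterator fields are approximate):

#include <linux/seq_file.h>
#include <linux/spinlock.h>

/* protects the ->fl_block list hanging off each file_lock */
static DEFINE_SPINLOCK(blocked_lock_lock);

struct locks_iterator {
	int	li_cpu;
	loff_t	li_pos;
};

static void *locks_start(struct seq_file *f, loff_t *pos)
{
	struct locks_iterator *iter = f->private;

	iter->li_pos = *pos + 1;
	lg_global_lock(&file_lock_lglock);	/* every CPU's spinlock */
	spin_lock(&blocked_lock_lock);		/* taken once, not per ->show() */
	return seq_hlist_start_percpu(&file_lock_list, &iter->li_cpu, *pos);
}

static void *locks_next(struct seq_file *f, void *v, loff_t *pos)
{
	struct locks_iterator *iter = f->private;

	++iter->li_pos;
	++*pos;
	/* the one and only user of seq_hlist_next_percpu() Oleg mentions */
	return seq_hlist_next_percpu(v, &file_lock_list, &iter->li_cpu, pos);
}

static void locks_stop(struct seq_file *f, void *v)
{
	spin_unlock(&blocked_lock_lock);
	lg_global_unlock(&file_lock_lglock);
}

->show() then walks each lock's ->fl_block list under blocked_lock_lock.
The alternative upthread would be to lose the lg_global_lock() in ->start()
and instead take/drop each CPU's spinlock as ->next() steps from one
per-cpu chain to the next.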