Message-ID: <20150605233622.GA31034@redhat.com>
Date: Sat, 6 Jun 2015 01:36:22 +0200
From: Oleg Nesterov <oleg@...hat.com>
To: Al Viro <viro@...IV.linux.org.uk>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Davidlohr Bueso <dave@...olabs.net>,
Peter Zijlstra <peterz@...radead.org>,
Paul McKenney <paulmck@...ux.vnet.ibm.com>,
Tejun Heo <tj@...nel.org>, Ingo Molnar <mingo@...hat.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
der.herr@...r.at
Subject: Re: [RFC][PATCH 0/5] Optimize percpu-rwsem
On 06/05, Al Viro wrote:
>
> On Fri, Jun 05, 2015 at 11:08:57PM +0200, Oleg Nesterov wrote:
> > On 06/05, Al Viro wrote:
> > >
> > > FWIW, I hadn't really looked into stop_machine uses, but fs/locks.c one
> > > is really not all that great - there we have a large trashcan of a list
> > > (every file_lock on the system) and the only use of that list is /proc/locks
> > > output generation. Sure, additions take this CPU's spinlock. And removals
> > > take pretty much a random one - losing the timeslice and regaining it on
> > > a different CPU is quite likely with the uses there.
> > >
> > > Why do we need a global lock there, anyway? Why not hold only one for
> > > the chain currently being traversed? Sure, we'll need to get and drop
> > > them in ->next() that way; so what?
> >
> > And note that fs/seq_file.c:seq_hlist_next_percpu() has no other users.
> >
> > And given that locks_delete_global_locks() takes the random lock anyway,
> > perhaps the hashed lists/locking makes no sense, I dunno.
>
> It's not about making life easier for /proc/locks; it's about not screwing
> those who add/remove file_lock...
I meant that seq_hlist_next_percpu() could then be made "static" in fs/locks.c.
> And no, that "random lock" isn't held
> when modifying the (per-cpu) lists - it protects the list hanging off each
> element of the global list, and /proc/locks scans those lists, so rather
> than taking/dropping it in each ->show(), it's taken once in ->start()...
Sure, I understand. I meant that perhaps something like this could work:
	struct {
		spinlock_t		lock;
		struct hlist_head	head;
	} file_lock_list[];

	static void locks_insert_global_locks(struct file_lock *fl)
	{
		int idx = fl->idx = hash(fl);

		spin_lock(&file_lock_list[idx].lock);
		hlist_add_head(&fl->fl_link, &file_lock_list[idx].head);
		spin_unlock(&file_lock_list[idx].lock);
	}
and then the seq_hlist_next_percpu() counterpart could scan file_lock_list[]
and unlock/lock ->lock whenever it moves to the next index.
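Something like this, just to illustrate (FILE_LOCK_HASH_SIZE and
file_lock_hash_next() are made-up names, and ->start()/->stop() would need
to take the first bucket's lock and drop whichever one is still held):

	static struct hlist_node *file_lock_hash_next(struct hlist_node *node, int *idx)
	{
		/* more entries in the current bucket? keep its lock */
		if (node->next)
			return node->next;

		/* bucket exhausted, drop its lock and find the next non-empty one */
		spin_unlock(&file_lock_list[*idx].lock);
		while (++*idx < FILE_LOCK_HASH_SIZE) {
			spin_lock(&file_lock_list[*idx].lock);
			if (!hlist_empty(&file_lock_list[*idx].head))
				return file_lock_list[*idx].head.first;
			spin_unlock(&file_lock_list[*idx].lock);
		}

		/* nothing left; no bucket lock is held on return */
		return NULL;
	}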
But please forget it, this is really minor. My point was just that file_lock_list
is not actually "per-cpu", exactly because every locks_delete_global_locks()
needs lg_local_lock_cpu(fl->fl_link_cpu) as you pointed out.
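From memory, the delete path is roughly this (details may differ):

	static void locks_delete_global_locks(struct file_lock *fl)
	{
		/*
		 * fl_link_cpu is whichever CPU the lock was inserted on;
		 * by the time we delete it we can be running anywhere,
		 * so this can easily take another CPU's lock.
		 */
		lg_local_lock_cpu(&file_lock_lglock, fl->fl_link_cpu);
		hlist_del_init(&fl->fl_link);
		lg_local_unlock_cpu(&file_lock_lglock, fl->fl_link_cpu);
	}

so the "per-cpu" property only really holds on the insert side.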
Oleg.