Message-ID: <54F086CA.5060606@bmw-carit.de>
Date: Fri, 27 Feb 2015 16:01:30 +0100
From: Daniel Wagner <daniel.wagner@...-carit.de>
To: Jeff Layton <jlayton@...chiereds.net>
CC: Andi Kleen <andi@...stfloor.org>, <linux-fsdevel@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, John Kacur <jkacur@...hat.com>,
Alexander Viro <viro@...iv.linux.org.uk>,
"J. Bruce Fields" <bfields@...ldses.org>
Subject: Re: [RFC v1 0/5] fs/locks: Use plain percpu spinlocks instead of
lglock to protect file_lock
Sorry for the late response. Got dragged away.
On 02/24/2015 10:06 PM, Jeff Layton wrote:
> On Tue, 24 Feb 2015 16:58:26 +0100
> Daniel Wagner <daniel.wagner@...-carit.de> wrote:
>
>> On 02/20/2015 05:05 PM, Andi Kleen wrote:
>>> Daniel Wagner <daniel.wagner@...-carit.de> writes:
>>>>
>>>> I am looking at how to get rid of lglock. Reason being -rt is not too
>>>> happy with that lock, especially that it uses arch_spinlock_t and
>>>
>>> AFAIK it could just use normal spinlock. Have you tried that?
>>
>> I have tried it. At least fs/locks.c didn't blow up. The benchmark
>> results (lockperf) indicated that using normal spinlocks is even
>> slightly faster. Simply converting felt like cheating. It might be
>> necessary for the other user (kernel/stop_machine.c). Currently it looks
>> like there is some additional benefit getting lglock away in fs/locks.c.
>>
>
> What would that benefit be?
>
> lglocks are basically percpu spinlocks. Fixing some underlying
> infrastructure that provides that seems like it might be a better
> approach than declaring them "manually" and avoiding them altogether.
>
> Note that you can still do basically what you're proposing here with
> lglocks as well. Avoid using lg_global_* and just lock each one in
> turn.
Yes, that is what I was referring to as the benefit. My main point is that
if only the lg_local_* calls remain, we could just as well use normal
spinlocks. No need to get fancy.
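
To illustrate, here is a rough and untested sketch of what I mean,
keeping the existing per-CPU file_lock_list. The per-CPU lock name and
its initialization in filelock_init() are made up, and picking the CPU
via raw_smp_processor_id() is only a distribution hint:

#include <linux/fs.h>
#include <linux/list.h>
#include <linux/percpu.h>
#include <linux/smp.h>
#include <linux/spinlock.h>

/* both initialized per possible CPU in filelock_init():
 * spin_lock_init() / INIT_HLIST_HEAD() */
static DEFINE_PER_CPU(spinlock_t, file_lock_lock);
static DEFINE_PER_CPU(struct hlist_head, file_lock_list);

static void locks_insert_global_locks(struct file_lock *fl)
{
	/*
	 * The CPU is only a placement hint; correctness comes from
	 * taking the same per-CPU lock as the list the entry lives on.
	 */
	int cpu = raw_smp_processor_id();

	/* was: lg_local_lock(&file_lock_lglock) */
	spin_lock(per_cpu_ptr(&file_lock_lock, cpu));
	fl->fl_link_cpu = cpu;
	hlist_add_head(&fl->fl_link, per_cpu_ptr(&file_lock_list, cpu));
	spin_unlock(per_cpu_ptr(&file_lock_lock, cpu));
}

static void locks_delete_global_locks(struct file_lock *fl)
{
	/* was: lg_local_lock_cpu(&file_lock_lglock, fl->fl_link_cpu) */
	spin_lock(per_cpu_ptr(&file_lock_lock, fl->fl_link_cpu));
	hlist_del_init(&fl->fl_link);
	spin_unlock(per_cpu_ptr(&file_lock_lock, fl->fl_link_cpu));
}

Since spin_lock() already gives us a normal sleeping lock on -rt, there
is nothing arch_spinlock_t specific left in this path.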
> That said, now that I've thought about this, I'm not sure that's really
> something we want to do when accessing /proc/locks. If you lock each
> one in turn, then you aren't freezing the state of the file_lock_list
> percpu lists. Won't that mean that you aren't necessarily getting a
> consistent view of the locks on those lists when you cat /proc/locks?
Maybe I am overlooking something here, but I don't see a consistency
problem. We list a blocker and all its waiters in one go, since only the
blocker is added to file_lock_list and the waiters are added to the
blocker's fl_block list.
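
To make that concrete, this is roughly how I picture the /proc/locks
walk with the per-CPU locks taken one at a time. It reuses
lock_get_status() and blocked_lock_lock from fs/locks.c; the helper
name locks_show_all() and the hard-coded id are only for illustration,
the real code of course keeps the seq_file iterator:

static void locks_show_all(struct seq_file *f)
{
	struct file_lock *fl, *bfl;
	int cpu;

	for_each_possible_cpu(cpu) {
		/* freeze only this CPU's list while we walk it */
		spin_lock(per_cpu_ptr(&file_lock_lock, cpu));

		hlist_for_each_entry(fl, per_cpu_ptr(&file_lock_list, cpu),
				     fl_link) {
			/* the blocker ... */
			lock_get_status(f, fl, 0, "");

			/* ... and all its waiters in the same go */
			spin_lock(&blocked_lock_lock);
			list_for_each_entry(bfl, &fl->fl_block, fl_block)
				lock_get_status(f, bfl, 0, " ->");
			spin_unlock(&blocked_lock_lock);
		}

		spin_unlock(per_cpu_ptr(&file_lock_lock, cpu));
	}
}

So a blocker never shows up without its waiters; what we give up is
only a single atomic snapshot across all CPUs.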
> I think having a consistent view there might trump any benefit to
> performance. Reading /proc/locks is a *very* rare activity in the big
> scheme of things.
I agree, but I hope I got the consistency argument above right and that
there shouldn't be a problem.
> I do however like the idea of moving more to be protected by the
> lglocks, and minimizing usage of the blocked_lock_lock.
Good to hear. I am trying to write a new test case (a variation of the
dining philosophers problem) which benchmarks blocked_lock_lock after
the re-factoring.
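
Just to give an idea of what I mean, a first rough userspace sketch.
The file names, process count and round count are made up; the real
thing will be a proper lockperf test:

/*
 * N "philosophers" (processes) each repeatedly take two fcntl() write
 * locks on neighbouring "fork" files.  The contention on the shared
 * forks should mostly exercise the blocked waiter paths
 * (fl_block / blocked_lock_lock) rather than file_lock_list.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define NPHIL	8
#define ROUNDS	10000

static void lock_fork(int fd, int type)
{
	struct flock fl = { .l_type = type, .l_whence = SEEK_SET };

	if (fcntl(fd, F_SETLKW, &fl) < 0) {
		perror("fcntl");
		exit(1);
	}
}

static void philosopher(int i, int *fds)
{
	int first = i, second = (i + 1) % NPHIL;
	int r;

	if (first > second) {	/* global lock order avoids deadlock */
		first = second;
		second = i;
	}
	for (r = 0; r < ROUNDS; r++) {
		lock_fork(fds[first], F_WRLCK);
		lock_fork(fds[second], F_WRLCK);
		/* "eat" */
		lock_fork(fds[second], F_UNLCK);
		lock_fork(fds[first], F_UNLCK);
	}
	exit(0);
}

int main(void)
{
	int fds[NPHIL];
	char name[32];
	int i;

	for (i = 0; i < NPHIL; i++) {
		snprintf(name, sizeof(name), "fork.%d", i);
		fds[i] = open(name, O_RDWR | O_CREAT, 0644);
		if (fds[i] < 0) {
			perror("open");
			return 1;
		}
	}
	for (i = 0; i < NPHIL; i++)
		if (fork() == 0)
			philosopher(i, fds);
	for (i = 0; i < NPHIL; i++)
		wait(NULL);
	return 0;
}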
cheers,
daniel
--