Message-ID: <20160906015800.GA31162@rhlx01.hs-esslingen.de>
Date: Tue, 6 Sep 2016 03:58:00 +0200
From: Andreas Mohr <andi@...as.de>
To: Peter Zijlstra <peterz@...radead.org>
Cc: der.herr@...r.at, Al Viro <viro@...IV.linux.org.uk>,
mingo@...hat.com, torvalds@...ux-foundation.org, dave@...olabs.net,
Oleg Nesterov <oleg@...hat.com>, riel@...hat.com,
tj@...nel.org, paulmck@...ux.vnet.ibm.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/7] fs/locks: Replace lg_global with a percpu-rwsem
Hi,
[no properly binding In-Reply-To: reference available, thus manually re-creating it, sorry]
https://lkml.org/lkml/2016/9/5/832
Two thoughts:

***multiple locks
I don't have much insight into this
(didn't spend much time thinking about it),
but it is of course unfortunate
that two lock types need to be serviced here
rather than having the atomic handling exchange area/section
be sufficiently guarded by a single lock.
(The question here could possibly be:
what currently existing
structural disadvantage/layer distribution
prevents us from simply servicing things
within one granular lock area only?)

***lock handling
> + percpu_down_read(&file_rwsem);
> spin_lock(&ctx->flc_lock);
> spin_unlock(&ctx->flc_lock);
> + percpu_up_read(&file_rwsem);
These lines are repeated multiple times in this commit, which is error-prone.
A possibly good way to micro-manage this across commits would be:
1. a preparatory commit moves things into a newly created
encapsulation/wrapper helper:
stuff_lock(&flc_lock); /* <---- naming surely can be improved here */
2. [this commit]
extends the encapsulation/wrapper helper
to service file_rwsem, too:
stuff_lock(&flc_lock, &file_rwsem);
That way it is pretty much guaranteed that:
a) neither one nor the other lock type
will get forgotten later at a specific use site
b) lock order will be correctly maintained at all times
(no AB-BA deadlocks)
[or, IOW, you get some symmetry/consistency malus points
for having provided
a relatively large amount of LOC additions
which put a certain amount of stress on
code implementation quality/maintainability ;)]
HTH,
Andreas Mohr