Message-ID: <20161215125747.GB14324@pathway.suse.cz>
Date: Thu, 15 Dec 2016 13:57:48 +0100
From: Petr Mladek <pmladek@...e.com>
To: "Luis R. Rodriguez" <mcgrof@...nel.org>
Cc: shuah@...nel.org, jeyu@...hat.com, rusty@...tcorp.com.au,
ebiederm@...ssion.com, dmitry.torokhov@...il.com, acme@...hat.com,
corbet@....net, martin.wilck@...e.com, mmarek@...e.com,
hare@...e.com, rwright@....com, jeffm@...e.com, DSterba@...e.com,
fdmanana@...e.com, neilb@...e.com, linux@...ck-us.net,
rgoldwyn@...e.com, subashab@...eaurora.org, xypron.glpk@....de,
keescook@...omium.org, atomlin@...hat.com, mbenes@...e.cz,
paulmck@...ux.vnet.ibm.com, dan.j.williams@...el.com,
jpoimboe@...hat.com, davem@...emloft.net, mingo@...hat.com,
akpm@...ux-foundation.org, torvalds@...ux-foundation.org,
linux-kselftest@...r.kernel.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC 06/10] kmod: provide sanity check on kmod_concurrent access
On Thu 2016-12-08 11:48:50, Luis R. Rodriguez wrote:
> Only decrement *iff* we're positive. Warn if we've hit
> a situation where the counter is already 0 after we're done
> with a modprobe call; this would tell us we have an unaccounted
> counter access -- in theory this should not be possible, as
> only one routine controls the counter. However, preemption is
> one case that could trigger this situation. Avoid it
> by disabling preemption while we access the counter.
>
> Signed-off-by: Luis R. Rodriguez <mcgrof@...nel.org>
> ---
> kernel/kmod.c | 20 ++++++++++++++++----
> 1 file changed, 16 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/kmod.c b/kernel/kmod.c
> index ab38539f7e91..09cf35a2075a 100644
> --- a/kernel/kmod.c
> +++ b/kernel/kmod.c
> @@ -113,16 +113,28 @@ static int call_modprobe(char *module_name, int wait)
>
> static int kmod_umh_threads_get(void)
> {
> + int ret = 0;
> +
> + preempt_disable();
> atomic_inc(&kmod_concurrent);
> if (atomic_read(&kmod_concurrent) < max_modprobes)
> - return 0;
> - atomic_dec(&kmod_concurrent);
> - return -EBUSY;
> + goto out;
I thought more about it, and the disabled preemption might make
sense here. It makes sure that we are not rescheduled here
and that kmod_concurrent is not left increased by mistake for too long.
Still, it would make more sense to increment the value
only when it is under the limit, storing the incremented
value using cmpxchg to avoid races.
I mean using a trick similar to the one used by refcount_inc(), see
https://lkml.kernel.org/r/20161114174446.832175072@infradead.org
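Something like this untested sketch (it reuses the names from the
patch above; the exact loop is just my idea of it, not tested code):

static int kmod_umh_threads_get(void)
{
	int cur, old;

	cur = atomic_read(&kmod_concurrent);
	for (;;) {
		/* Never go above the limit, nothing to undo on failure. */
		if (cur >= max_modprobes)
			return -EBUSY;
		/* cmpxchg returns the old value; equal means we won. */
		old = atomic_cmpxchg(&kmod_concurrent, cur, cur + 1);
		if (old == cur)
			return 0;
		/* Someone else changed the counter, retry with its value. */
		cur = old;
	}
}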
> + atomic_dec_if_positive(&kmod_concurrent);
> + ret = -EBUSY;
> +out:
> + preempt_enable();
> + return ret;
> }
>
> static void kmod_umh_threads_put(void)
> {
> - atomic_dec(&kmod_concurrent);
> + int ret;
> +
> + preempt_disable();
> + ret = atomic_dec_if_positive(&kmod_concurrent);
> + WARN_ON(ret < 0);
> + preempt_enable();
Disabling preemption does not make much sense here.
We do not need to tie the atomic operation and the WARN
together so tightly.
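Something like this untested version should be enough;
atomic_dec_if_positive() returns the new value, so a negative
result means that the counter was already zero:

static void kmod_umh_threads_put(void)
{
	/* Warn on an unbalanced put, no need to disable preemption. */
	WARN_ON(atomic_dec_if_positive(&kmod_concurrent) < 0);
}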
Best Regards,
Petr