Date:   Thu, 25 May 2017 08:18:09 -0700
From:   Jessica Yu <jeyu@...hat.com>
To:     Dmitry Torokhov <dmitry.torokhov@...il.com>
Cc:     "Luis R. Rodriguez" <mcgrof@...nel.org>, shuah@...nel.org,
        rusty@...tcorp.com.au, ebiederm@...ssion.com, acme@...hat.com,
        corbet@....net, martin.wilck@...e.com, mmarek@...e.com,
        pmladek@...e.com, hare@...e.com, rwright@....com, jeffm@...e.com,
        DSterba@...e.com, fdmanana@...e.com, neilb@...e.com,
        linux@...ck-us.net, rgoldwyn@...e.com, subashab@...eaurora.org,
        xypron.glpk@....de, keescook@...omium.org, atomlin@...hat.com,
        mbenes@...e.cz, paulmck@...ux.vnet.ibm.com,
        dan.j.williams@...el.com, jpoimboe@...hat.com, davem@...emloft.net,
        mingo@...hat.com, akpm@...ux-foundation.org,
        torvalds@...ux-foundation.org, gregkh@...uxfoundation.org,
        linux-kselftest@...r.kernel.org, linux-doc@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 5/6] kmod: preempt on kmod_umh_threads_get()

+++ Dmitry Torokhov [24/05/17 19:27 -0700]:
>On Thu, May 25, 2017 at 03:00:17AM +0200, Luis R. Rodriguez wrote:
>> On Wed, May 24, 2017 at 05:45:37PM -0700, Dmitry Torokhov wrote:
>> > On Thu, May 25, 2017 at 02:14:52AM +0200, Luis R. Rodriguez wrote:
>> > > On Fri, May 19, 2017 at 03:27:12PM -0700, Dmitry Torokhov wrote:
>> > > > On Thu, May 18, 2017 at 08:24:43PM -0700, Luis R. Rodriguez wrote:
>> > > > > In theory it is possible for multiple concurrent threads to call
>> > > > > kmod_umh_threads_get() and thus atomic_inc(&kmod_concurrent) at
>> > > > > the same time, opening a small window during which we have bumped
>> > > > > kmod_concurrent but have not really enabled work. By disabling
>> > > > > preemption we mitigate this a bit.
>> > > > >
>> > > > > Disabling preemption is not needed in kmod_umh_threads_put().
>> > > > >
>> > > > > Signed-off-by: Luis R. Rodriguez <mcgrof@...nel.org>
>> > > > > ---
>> > > > >  kernel/kmod.c | 24 ++++++++++++++++++++++--
>> > > > >  1 file changed, 22 insertions(+), 2 deletions(-)
>> > > > >
>> > > > > diff --git a/kernel/kmod.c b/kernel/kmod.c
>> > > > > index 563600fc9bb1..7ea11dbc7564 100644
>> > > > > --- a/kernel/kmod.c
>> > > > > +++ b/kernel/kmod.c
>> > > > > @@ -113,15 +113,35 @@ static int call_modprobe(char *module_name, int wait)
>> > > > >
>> > > > >  static int kmod_umh_threads_get(void)
>> > > > >  {
>> > > > > +	int ret = 0;
>> > > > > +
>> > > > > +	/*
>> > > > > +	 * Disabling preemption makes sure that we are not rescheduled here.
>> > > > > +	 *
>> > > > > +	 * Preemption also helps ensure kmod_concurrent is not bumped by
>> > > > > +	 * mistake for too long, given that in theory two concurrent threads
>> > > > > +	 * could race on atomic_inc() before we atomic_read() -- we know
>> > > > > +	 * that's possible, but we don't care: this is not used for object
>> > > > > +	 * accounting and is just a subjective threshold. The alternative is a lock.
>> > > > > +	 */
>> > > > > +	preempt_disable();
>> > > > >  	atomic_inc(&kmod_concurrent);
>> > > > >  	if (atomic_read(&kmod_concurrent) <= max_modprobes)
>> > > >
>> > > > That is a very "fancy" way of basically saying:
>> > > >
>> > > > 	if (atomic_inc_return(&kmod_concurrent) <= max_modprobes)
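
(For concreteness, a minimal sketch of the combined form suggested above;
kmod_concurrent and max_modprobes are the names from the patch, while the
error path and -EBUSY return are illustrative assumptions, not taken from
the actual kmod.c:)

static int kmod_umh_threads_get(void)
{
	/*
	 * atomic_inc_return() performs the increment and returns the new
	 * value as a single atomic operation, so no two threads can ever
	 * observe the same intermediate value of kmod_concurrent.
	 */
	if (atomic_inc_return(&kmod_concurrent) <= max_modprobes)
		return 0;
	/* Over the limit: back out our increment and fail. */
	atomic_dec(&kmod_concurrent);
	return -EBUSY;	/* illustrative error value */
}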
>> > >
>> > > Do you mean to combine the atomic_inc() and atomic_read() into one as
>> > > you noted (as that is not a change in this patch), *or* that using a
>> > > memory barrier here with atomic_inc_return() should suffice to address
>> > > the same and avoid an explicit preemption enable / disable?
>> >
>> > I am saying that atomic_inc_return() will avoid the situation where
>> > more than one thread increments the counter and each believes it is
>> > [not] allowed to start modprobe.
>> >
>> > I have no idea why you think preempt_disable() would help here. It only
>> > ensures that current thread will not be preempted between the point
>> > where you update the counter and where you check the result. It does not
>> > stop interrupts nor does it affect other threads that might be updating
>> > the same counter.
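
(To make that concrete, here is a small userspace demonstration using C11
atomics in place of the kernel's atomic_t; all names, the limit of 1, and
the two-thread setup are invented for the demo:)

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define MAX_RUNNING 1

static atomic_int concurrent;
static int (*get_fn)(void);

/*
 * Racy variant: another thread can increment between our increment and
 * our read, so with one free slot two threads may both read 2 and both
 * back off, admitting nobody. Disabling preemption on the local CPU
 * would not close this window, since the other thread runs elsewhere.
 * (The window is tiny, so many runs may be needed to observe it.)
 */
static int get_racy(void)
{
	atomic_fetch_add(&concurrent, 1);
	if (atomic_load(&concurrent) <= MAX_RUNNING)
		return 0;
	atomic_fetch_sub(&concurrent, 1);
	return -1;
}

/*
 * Race-free variant: atomic_fetch_add() returns the counter's previous
 * value, so each thread sees a distinct result -- the moral equivalent
 * of atomic_inc_return() in the kernel.
 */
static int get_atomic(void)
{
	if (atomic_fetch_add(&concurrent, 1) + 1 <= MAX_RUNNING)
		return 0;
	atomic_fetch_sub(&concurrent, 1);
	return -1;
}

static void *worker(void *result)
{
	*(int *)result = get_fn();
	return NULL;
}

static void run(const char *name, int (*fn)(void))
{
	pthread_t t[2];
	int res[2];

	atomic_store(&concurrent, 0);
	get_fn = fn;
	for (int i = 0; i < 2; i++)
		pthread_create(&t[i], NULL, worker, &res[i]);
	for (int i = 0; i < 2; i++)
		pthread_join(t[i], NULL);
	printf("%s: admitted %d of 2 (limit %d)\n", name,
	       (res[0] == 0) + (res[1] == 0), MAX_RUNNING);
}

int main(void)
{
	run("racy  ", get_racy);
	run("atomic", get_atomic);
	return 0;
}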
>>
>> The preemption use was inspired by __module_get() and try_module_get();
>> was that rather silly?
>
>As far as I can see preempt_disable() was needed in __module_get() when
>modules used per-cpu refcounts: you did not want to move away from the
>CPU while manipulating the refcount.
>
>Now that modules use simple atomics for refcounting I think these
>preempt_disable() and preempt_enable() can be removed.

Yup, preempt_disable/enable was originally used for percpu module
refcounting. AFAIK they are artifacts remaining from commit
e1783a240f4 ("use this_cpu_xx to dynamically allocate counters");
commit 2f35c41f58a ("Replace module_ref with atomic_t refcnt")
subsequently removed the need for them.

Jessica
