Message-ID: <20170622151936.GE1538@pathway.suse.cz>
Date:   Thu, 22 Jun 2017 17:19:36 +0200
From:   Petr Mladek <pmladek@...e.com>
To:     "Luis R. Rodriguez" <mcgrof@...nel.org>
Cc:     akpm@...ux-foundation.org, jeyu@...hat.com, shuah@...nel.org,
        rusty@...tcorp.com.au, ebiederm@...ssion.com,
        dmitry.torokhov@...il.com, acme@...hat.com, corbet@....net,
        josh@...htriplett.org, martin.wilck@...e.com, mmarek@...e.com,
        hare@...e.com, rwright@....com, jeffm@...e.com, DSterba@...e.com,
        fdmanana@...e.com, neilb@...e.com, linux@...ck-us.net,
        rgoldwyn@...e.com, subashab@...eaurora.org, xypron.glpk@....de,
        keescook@...omium.org, atomlin@...hat.com, mbenes@...e.cz,
        paulmck@...ux.vnet.ibm.com, dan.j.williams@...el.com,
        jpoimboe@...hat.com, davem@...emloft.net, mingo@...hat.com,
        alan@...ux.intel.com, tytso@....edu, gregkh@...uxfoundation.org,
        torvalds@...ux-foundation.org, linux-kselftest@...r.kernel.org,
        linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 4/4] kmod: throttle kmod thread limit

On Fri 2017-05-26 14:12:28, Luis R. Rodriguez wrote:
> If we reach the limit of modprobe_limit running threads, the next
> request_module() call will fail. The original reason for adding
> a kill was to do away with possible issues in old circumstances
> which could create a recursive series of request_module() calls.
> Instead of being so aggressive and rejecting calls once we've
> reached the limit, we can do better by simply making pending
> callers wait until the number of running threads drops below the
> threshold.
> 
> The only difference is that this clutch helps avoid making
> request_module() requests fatal as often. On x86_64 qemu with
> 4 cores and 4 GiB of RAM, the two tests take the following run
> times:
> 
> time ./kmod.sh -t 0008
> real    0m12.364s
> user    0m0.704s
> sys     0m5.373s
> 
> time ./kmod.sh -t 0009
> real    0m47.638s
> user    0m1.033s
> sys     0m5.425s
> 
> Signed-off-by: Luis R. Rodriguez <mcgrof@...nel.org>
> ---
>  kernel/kmod.c                        | 16 +++++++---------
>  tools/testing/selftests/kmod/kmod.sh | 24 ++----------------------
>  2 files changed, 9 insertions(+), 31 deletions(-)
> 
> diff --git a/kernel/kmod.c b/kernel/kmod.c
> index 3e346c700e80..46b12fed6fd0 100644
> --- a/kernel/kmod.c
> +++ b/kernel/kmod.c
> @@ -163,14 +163,11 @@ int __request_module(bool wait, const char *fmt, ...)
>  		return ret;
>  
>  	if (atomic_dec_if_positive(&kmod_concurrent_max) < 0) {
> -		/* We may be blaming an innocent here, but unlikely */
> -		if (kmod_loop_msg < 5) {
> -			printk(KERN_ERR
> -			       "request_module: runaway loop modprobe %s\n",
> -			       module_name);
> -			kmod_loop_msg++;
> -		}
> -		return -ENOMEM;
> +		pr_warn_ratelimited("request_module: kmod_concurrent_max (%u) close to 0 (max_modprobes: %u), for module %s, throttling...\n",
> +				    atomic_read(&kmod_concurrent_max),
> +				    50, module_name);

It is weird to pass the constant '50' via the second %u. Also, a
#define should be used to keep it in sync with the
kmod_concurrent_max initialization.
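
I mean something like the following. It is only a sketch, and
MAX_KMOD_CONCURRENT is just a name that I am making up here:

	/* Keep the warning message below in sync with this limit. */
	#define MAX_KMOD_CONCURRENT 50

	static atomic_t kmod_concurrent_max = ATOMIC_INIT(MAX_KMOD_CONCURRENT);

and then:

	pr_warn_ratelimited("request_module: kmod_concurrent_max (%u) close to 0 (max_modprobes: %u), for module %s, throttling...\n",
			    atomic_read(&kmod_concurrent_max),
			    MAX_KMOD_CONCURRENT, module_name);

This way the message cannot get out of sync when someone later
changes the limit.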


> +		wait_event_interruptible(kmod_wq,
> +					 atomic_dec_if_positive(&kmod_concurrent_max) >= 0);
>  	}
>  
>  	trace_module_request(module_name, wait, _RET_IP_);
> @@ -178,6 +175,7 @@ int __request_module(bool wait, const char *fmt, ...)
>  	ret = call_modprobe(module_name, wait ? UMH_WAIT_PROC : UMH_WAIT_EXEC);
>  
>  	atomic_inc(&kmod_concurrent_max);
> +	wake_up_all(&kmod_wq);

Does it make sense to wake up all waiters when we have released only
one slot? IMHO, a simple wake_up() should be used here.
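
I mean:

	atomic_inc(&kmod_concurrent_max);
	/* Only one slot was released, so waking a single waiter is enough. */
	wake_up(&kmod_wq);

Each caller that finishes releases exactly one slot, so the remaining
waiters will be woken one by one as further slots get freed.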

I am sorry for the late review. The month went by really fast.

Best Regards,
Petr
