Date:	Mon, 7 Mar 2011 17:11:52 +0800
From:	Yong Zhang <yong.zhang0@...il.com>
To:	Mike Galbraith <efault@....de>
Cc:	LKML <linux-kernel@...r.kernel.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Ingo Molnar <mingo@...e.hu>
Subject: Re: [patchlet] sched: fix rt throttle runtime borrowing

On Mon, Mar 7, 2011 at 4:21 PM, Mike Galbraith <efault@....de> wrote:
> Greetings,
>
> The RT throttle leaves a bit to be desired as a protection mechanism.
> With default settings, the thing won't save your bacon if you start a
> single RT hog on an SMP box, or if your normally sane app goes nuts.
>
> With the below, my box will limp along so I can kill the RT hog.  It may
> not be the best solution, but it works for me, modulo any bustage I
> haven't noticed yet, of course.
>
> sched: fix rt throttle runtime borrowing
>
> If allowed to borrow up to rt_period, the throttle has no effect on an
> out-of-control RT task, allowing it to consume 100% CPU indefinitely and
> to block system-critical SCHED_NORMAL threads.

Yep.
I think it's helpful.
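
For reference, the reason a full-period borrow defeats the throttle
entirely is the early-out at the top of sched_rt_runtime_exceeded().
Paraphrasing the 2.6-era kernel/sched_rt.c from memory (the exact code
may differ slightly), it looks roughly like:

	static int sched_rt_runtime_exceeded(struct rt_rq *rt_rq)
	{
		u64 runtime = sched_rt_runtime(rt_rq);

		if (rt_rq->rt_throttled)
			return rt_rq_throttled(rt_rq);

		/*
		 * A local runtime equal to (or above) the full period is
		 * treated as "never exceeded", so once borrowing fills
		 * rt_runtime up to rt_period the hog is never throttled
		 * again.  Capping the borrow at rt_period - 1 keeps this
		 * check alive.
		 */
		if (runtime >= sched_rt_period(rt_rq))
			return 0;

		/* ... rest of the throttle logic ... */
	}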

BTW, the comments (above the diff calculation) in do_balance_runtime()
should be updated too :)
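
If memory serves, that is the "take 1/n part of their spare time, but no
more than our period" comment, which presumably should now say "less than
our period".  The block it sits above looks roughly like this (again
paraphrased from the 2.6-era source, details may differ):

		/*
		 * From runqueues with spare time, take 1/n part of their
		 * spare time, but no more than our period.
		 */
		diff = iter->rt_runtime - iter->rt_time;
		if (diff > 0) {
			diff = div_u64((u64)diff, weight);
			if (rt_rq->rt_runtime + diff > rt_period)
				diff = rt_period - rt_rq->rt_runtime;
			iter->rt_runtime -= diff;
			rt_rq->rt_runtime += diff;
			/* ... */
		}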

Thanks,
Yong

>
> Signed-off-by: Mike Galbraith <efault@....de>
>
> ---
>  kernel/sched_rt.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> Index: linux-2.6/kernel/sched_rt.c
> ===================================================================
> --- linux-2.6.orig/kernel/sched_rt.c
> +++ linux-2.6/kernel/sched_rt.c
> @@ -354,7 +354,7 @@ static int do_balance_runtime(struct rt_
>        weight = cpumask_weight(rd->span);
>
>        raw_spin_lock(&rt_b->rt_runtime_lock);
> -       rt_period = ktime_to_ns(rt_b->rt_period);
> +       rt_period = ktime_to_ns(rt_b->rt_period) - 1;
>        for_each_cpu(i, rd->span) {
>                struct rt_rq *iter = sched_rt_period_rt_rq(rt_b, i);
>                s64 diff;
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@...r.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
>



-- 
Only stand for myself
