Message-ID: <00b501d16d30$394e0fd0$abea2f70$@alibaba-inc.com>
Date: Mon, 22 Feb 2016 13:16:42 +0800
From: "Hillf Danton" <hillf.zj@...baba-inc.com>
To: "'Mike Galbraith'" <umgwanakikbuti@...il.com>
Cc: "'Sebastian Andrzej Siewior'" <bigeasy@...utronix.de>,
"'Thomas Gleixner'" <tglx@...utronix.de>,
"'LKML'" <linux-kernel@...r.kernel.org>,
"'linux-rt-users'" <linux-rt-users@...r.kernel.org>
Subject: Re: [patch] sched,rt: __always_inline preemptible_lazy()
>
> sched,rt: __always_inline preemptible_lazy()
>
> Functions called within a notrace function must either also be
> notrace or be inlined, lest recursion blow the stack.
>
> homer: # nm kernel/sched/core.o|grep preemptible_lazy
> 00000000000000b5 t preemptible_lazy
>
> echo wakeup_rt > current_tracer ==> Welcome to infinity.
>
> Signed-off-by: Mike Galbraith <umgwanakikbuti@...il.com>
> ---
Thank you, Mike.
Acked-by: Hillf Danton <hillf.zj@...baba-inc.com>
> kernel/sched/core.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3469,7 +3469,7 @@ static void __sched notrace preempt_sche
> * set by an RT task. Otherwise we try to avoid being scheduled out as long as
> * preempt_lazy_count counter >0.
> */
> -static int preemptible_lazy(void)
> +static __always_inline int preemptible_lazy(void)
> {
> if (test_thread_flag(TIF_NEED_RESCHED))
> return 1;
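
To illustrate the failure the patch closes: with a function tracer active, every
out-of-line function not marked notrace gets an entry hook, so a helper called
from a notrace path can re-enter the tracer and recurse until the stack
overflows. Below is a hedged userspace sketch of that rule, not the kernel
code; the helper names, the notrace_path() stand-in for the notrace caller in
the hunk above, and the attribute macros are made up for illustration.

/* sketch.c -- simplified model of the recursion hazard described above */

/* Rough userspace stand-ins for the kernel annotations. */
#define notrace		__attribute__((no_instrument_function))
#define __always_inline	inline __attribute__((always_inline))

/*
 * A plain static helper may be kept out of line by the compiler, which is
 * exactly what the nm output above shows for preemptible_lazy; an
 * out-of-line, non-notrace function is a valid target for the tracer's
 * entry hook.
 */
static int plain_helper(void)
{
	return 1;
}

/*
 * Forced inline: the body is folded into every caller, so (barring a taken
 * address) no standalone copy remains and there is nothing for the tracer
 * to hook.
 */
static __always_inline int inlined_helper(void)
{
	return 1;
}

/*
 * Model of a notrace scheduling path like the caller in the hunk above:
 * anything it calls must itself be notrace or inlined.  Calling
 * plain_helper() from here while the tracer is live means
 * tracer -> plain_helper() -> tracer -> ... until the stack is gone,
 * i.e. the "Welcome to infinity" in the changelog.
 */
static void notrace notrace_path(void)
{
	inlined_helper();	/* safe: no out-of-line body, no hook */
}

int main(void)
{
	notrace_path();
	return plain_helper() - 1;
}

After the change, nm kernel/sched/core.o should no longer list a
preemptible_lazy symbol, confirming the helper is always folded into its
notrace callers.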