Message-ID: <20230721120040.6ed2c02a@gandalf.local.home>
Date: Fri, 21 Jul 2023 12:00:40 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Alexander Lobakin <aleksander.lobakin@...el.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...hat.com>,
"James E.J. Bottomley" <James.Bottomley@...senPartnership.com>,
Helge Deller <deller@....de>,
Michael Ellerman <mpe@...erman.id.au>,
"Benjamin Herrenschmidt" <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Paul Walmsley <paul.walmsley@...ive.com>,
Palmer Dabbelt <palmer@...belt.com>,
Albert Ou <aou@...s.berkeley.edu>,
Thomas Gleixner <tglx@...utronix.de>,
Borislav Petkov <bp@...en8.de>, <x86@...nel.org>,
"H. Peter Anvin" <hpa@...or.com>,
Josh Poimboeuf <jpoimboe@...hat.com>,
"Jiri Kosina" <jikos@...nel.org>, Miroslav Benes <mbenes@...e.cz>,
Petr Mladek <pmladek@...e.com>,
Joe Lawrence <joe.lawrence@...hat.com>,
Colin Ian King <colin.king@...onical.com>,
Masami Hiramatsu <mhiramat@...nel.org>,
"Nicholas Piggin" <npiggin@...il.com>,
Jisheng Zhang <jszhang@...nel.org>,
<linux-csky@...r.kernel.org>, <linux-parisc@...r.kernel.org>,
<linuxppc-dev@...ts.ozlabs.org>, <linux-riscv@...ts.infradead.org>,
<live-patching@...r.kernel.org>,
王贇 <yun.wang@...ux.alibaba.com>,
Guo Ren <guoren@...nel.org>, Jakub Kicinski <kuba@...nel.org>
Subject: Re: [PATCH] tracing: Have all levels of checks prevent recursion

On Fri, 21 Jul 2023 17:34:41 +0200
Alexander Lobakin <aleksander.lobakin@...el.com> wrote:
> From: Steven Rostedt <rostedt@...dmis.org>
> Date: Fri, 15 Oct 2021 14:25:41 -0400
>
> Sorry for the necroposting :z
> I just wanted to know whether this is a bug (so that I could send a fix)
> or intended behaviour.
>
> > On Fri, 15 Oct 2021 14:20:33 -0400
> > Steven Rostedt <rostedt@...dmis.org> wrote:
> >
> >>> I think having one copy of that in a header is better than having 3
> >>> copies. But yes, something along those lines.
> >>
> >> I was just about to ask you about this patch ;-)
> >
> > Except it doesn't build :-p (need to move the inlined function down a bit)
> >
> > diff --git a/include/linux/preempt.h b/include/linux/preempt.h
> > index 4d244e295e85..b32e3dabe28b 100644
> > --- a/include/linux/preempt.h
> > +++ b/include/linux/preempt.h
> > @@ -77,6 +77,27 @@
> > /* preempt_count() and related functions, depends on PREEMPT_NEED_RESCHED */
> > #include <asm/preempt.h>
> >
> > +/**
> > + * interrupt_context_level - return interrupt context level
> > + *
> > + * Returns the current interrupt context level.
> > + * 0 - normal context
> > + * 1 - softirq context
> > + * 2 - hardirq context
> > + * 3 - NMI context
> > + */
> > +static __always_inline unsigned char interrupt_context_level(void)
> > +{
> > + unsigned long pc = preempt_count();
> > + unsigned char level = 0;
> > +
> > + level += !!(pc & (NMI_MASK));
> > + level += !!(pc & (NMI_MASK | HARDIRQ_MASK));
> > + level += !!(pc & (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET));
>
> This doesn't take into account that we can switch the context manually
> via local_bh_disable() / local_irq_save() etc. During the testing of the

You cannot manually switch interrupt context.

> separate issue[0], I've found that the function returns 1 in both just
> softirq and softirq under local_irq_save().
> Is this intended? Shouldn't that be

That is intended behavior.

local_bh_disable() and local_irq_save() are not context switches. They just
prevent that context from happening. interrupt_context_level() tells us what
context we are running in, not which contexts are disabled.
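
To make that concrete, here is a rough sketch of what preempt_count()
contains in the cases you describe (assuming the usual !RT layout, where
SOFTIRQ_OFFSET is (1 << 8) and HARDIRQ_OFFSET is (1 << 16)):

	/* serving a softirq, interrupts enabled */
	pc = SOFTIRQ_OFFSET;			/* 0x00100 */
	level = 0 + 0 + 1;			/* softirq context */

	/* same softirq, after local_irq_save() */
	pc = SOFTIRQ_OFFSET;			/* still 0x00100 */
	level = 0 + 0 + 1;			/* still softirq context */

	/* a hardirq actually arrives while that softirq runs */
	pc = HARDIRQ_OFFSET | SOFTIRQ_OFFSET;	/* 0x10100 */
	level = 0 + 1 + 1;			/* hardirq context */

local_irq_save() never touches preempt_count(); only a real hardirq entry
adds HARDIRQ_OFFSET, and that is what bumps the level to 2.
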
>
> level += !!(pc & (NMI_MASK));
> level += !!(pc & (NMI_MASK | HARDIRQ_MASK)) || irqs_disabled();
> level += !!(pc & (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET)) ||
> in_atomic();
>
> ?
> Otherwise, the result it returns is not really "context level".

local_bh_disable() used to (and perhaps still does in some configurations)
confuse things. But read the comment in kernel/softirq.c:
/*
* SOFTIRQ_OFFSET usage:
*
* On !RT kernels 'count' is the preempt counter, on RT kernels this applies
* to a per CPU counter and to task::softirqs_disabled_cnt.
*
* - count is changed by SOFTIRQ_OFFSET on entering or leaving softirq
* processing.
*
* - count is changed by SOFTIRQ_DISABLE_OFFSET (= 2 * SOFTIRQ_OFFSET)
* on local_bh_disable or local_bh_enable.
*
* This lets us distinguish between whether we are currently processing
* softirq and whether we just have bh disabled.
*/

Just because you disable interrupts does not mean you are in interrupt
context.
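
The in_softirq() vs in_serving_softirq() helpers encode exactly that
distinction. A minimal sketch:

	local_bh_disable();
	/* count now includes SOFTIRQ_DISABLE_OFFSET (2 * SOFTIRQ_OFFSET) */
	WARN_ON(!in_softirq());		/* softirq_count() is non-zero */
	WARN_ON(in_serving_softirq());	/* SOFTIRQ_OFFSET bit is clear: we
					 * are not processing a softirq */
	local_bh_enable();

interrupt_context_level() tests only the SOFTIRQ_OFFSET bit, so like
in_serving_softirq() it reports softirq context only while a softirq is
actually being processed, not while bh is merely disabled.
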
-- Steve
>
> > +
> > + return level;
> > +}
> > +
> [0]
> https://lore.kernel.org/netdev/b3884ff9-d903-948d-797a-1830a39b1e71@intel.com
>
> Thanks,
> Olek