Message-ID: <20200225003351.vvsrgyta47ciqhvo@ast-mbp>
Date: Mon, 24 Feb 2020 16:33:52 -0800
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: LKML <linux-kernel@...r.kernel.org>,
David Miller <davem@...emloft.net>, bpf@...r.kernel.org,
netdev@...r.kernel.org, Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Sebastian Sewior <bigeasy@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Clark Williams <williams@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
Juri Lelli <juri.lelli@...hat.com>,
Ingo Molnar <mingo@...nel.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Vinicius Costa Gomes <vinicius.gomes@...el.com>,
Jakub Kicinski <kuba@...nel.org>
Subject: Re: [patch V3 06/22] bpf/trace: Remove redundant preempt_disable from trace_call_bpf()

On Mon, Feb 24, 2020 at 09:42:52PM +0100, Thomas Gleixner wrote:
> Alexei Starovoitov <alexei.starovoitov@...il.com> writes:
> > On Mon, Feb 24, 2020 at 03:01:37PM +0100, Thomas Gleixner wrote:
> >> --- a/kernel/trace/bpf_trace.c
> >> +++ b/kernel/trace/bpf_trace.c
> >> @@ -83,7 +83,7 @@ unsigned int trace_call_bpf(struct trace
> >> if (in_nmi()) /* not supported yet */
> >> return 1;
> >>
> >> - preempt_disable();
> >> + cant_sleep();
> >>
> >> if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
> >> /*
> >> @@ -115,7 +115,6 @@ unsigned int trace_call_bpf(struct trace
> >>
> >> out:
> >> __this_cpu_dec(bpf_prog_active);
> >> - preempt_enable();
> >
> > My testing uncovered that the above was too aggressive:
> > [ 41.533438] BUG: assuming atomic context at kernel/trace/bpf_trace.c:86
> > [ 41.534265] in_atomic(): 0, irqs_disabled(): 0, pid: 2348, name: test_progs
> > [ 41.536907] Call Trace:
> > [ 41.537167] dump_stack+0x75/0xa0
> > [ 41.537546] __cant_sleep.cold.105+0x8b/0xa3
> > [ 41.538018] ? exit_to_usermode_loop+0x77/0x140
> > [ 41.538493] trace_call_bpf+0x4e/0x2e0
> > [ 41.538908] __uprobe_perf_func.isra.15+0x38f/0x690
> > [ 41.539399] ? probes_profile_seq_show+0x220/0x220
> > [ 41.539962] ? __mutex_lock_slowpath+0x10/0x10
> > [ 41.540412] uprobe_dispatcher+0x5de/0x8f0
> > [ 41.540875] ? uretprobe_dispatcher+0x7c0/0x7c0
> > [ 41.541404] ? down_read_killable+0x200/0x200
> > [ 41.541852] ? __kasan_kmalloc.constprop.6+0xc1/0xd0
> > [ 41.542356] uprobe_notify_resume+0xacf/0x1d60
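(For context: cant_sleep() asserts that the caller is already in
atomic, non-preemptible context. Below is a simplified sketch of the
check that fires above; the real __cant_sleep() in kernel/sched/core.c
also honors CONFIG_PREEMPT_COUNT, rate-limits, and dumps more state:)

/* Simplified sketch of the cant_sleep() assertion, not the real code. */
static inline void cant_sleep_sketch(void)
{
	if (irqs_disabled())
		return;			/* hard atomic context: fine */
	if (preempt_count() > 0)
		return;			/* preemption disabled: fine */
	printk(KERN_ERR "BUG: assuming atomic context\n");
}

Uprobe handlers run on the way back to user space in fully preemptible
task context ("in_atomic(): 0, irqs_disabled(): 0" in the splat above),
which is exactly what trips the assertion on this path.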
>
> Duh. I missed that particular callchain.
>
> > The following fixes it:
> >
> > commit 7b7b71ff43cc0b15567b60c38a951c8a2cbc97f0 (HEAD -> bpf-next)
> > Author: Alexei Starovoitov <ast@...nel.org>
> > Date: Mon Feb 24 11:27:15 2020 -0800
> >
> > bpf: disable migration for bpf progs attached to uprobe
> >
> > trace_call_bpf() no longer disables preemption on its own.
> > All callers of this function have to do it explicitly.
> >
> > Signed-off-by: Alexei Starovoitov <ast@...nel.org>
> >
> > diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
> > index 18d16f3ef980..7581f5eb6091 100644
> > --- a/kernel/trace/trace_uprobe.c
> > +++ b/kernel/trace/trace_uprobe.c
> > @@ -1333,8 +1333,15 @@ static void __uprobe_perf_func(struct trace_uprobe *tu,
> > int size, esize;
> > int rctx;
> >
> > - if (bpf_prog_array_valid(call) && !trace_call_bpf(call, regs))
> > - return;
> > + if (bpf_prog_array_valid(call)) {
> > + u32 ret;
> > +
> > + migrate_disable();
> > + ret = trace_call_bpf(call, regs);
> > + migrate_enable();
> > + if (!ret)
> > + return;
> > + }
> >
> > But looking at your patch, cant_sleep() seems unnecessarily strong.
> > Should it be cant_migrate() instead?
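(cant_migrate() is the weaker of the two assertions: it only requires
that the task cannot move to another CPU, which is all the per-CPU
bpf_prog_active counter really needs. A sketch of the caller shapes
each assertion accepts; simplified, not the committed code:)

/*
 * migrate_disable() pins the task to its CPU but, on PREEMPT_RT,
 * leaves it preemptible - enough for cant_migrate(), not for
 * cant_sleep().  preempt_disable() satisfies both.
 */
migrate_disable();
ret = trace_call_bpf(call, regs);	/* cant_migrate() would pass */
migrate_enable();

preempt_disable();
ret = trace_call_bpf(call, regs);	/* cant_sleep() passes too */
preempt_enable();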
>
> Yes, if we go with the migrate_disable(). OTOH, having a
> preempt_disable() in that uprobe callsite should work as well, then we
> can keep the cant_sleep() check which covers all other callsites
> properly. No strong opinion though.
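That variant would look roughly like this at the uprobe callsite (a
sketch along the lines of the migrate_disable() hunk above, not
necessarily the exact committed code):

	if (bpf_prog_array_valid(call)) {
		u32 ret;

		/*
		 * Disabling preemption (rather than only migration)
		 * keeps the stronger cant_sleep() check in
		 * trace_call_bpf() valid for this callsite as well.
		 */
		preempt_disable();
		ret = trace_call_bpf(call, regs);
		preempt_enable();
		if (!ret)
			return;
	}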
Ok, I went with preempt_disable() for uprobes; it's simpler.
And I pushed the whole set to bpf-next.
In a few days we'll send it to Dave for net-next, and from there
it's on the way to Linus's next release. IMO it's a big milestone.
Thank you for the hard work to make it happen.