[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20190125100906.GB4500@hirez.programming.kicks-ass.net>
Date: Fri, 25 Jan 2019 11:09:06 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc: Alexei Starovoitov <ast@...nel.org>, davem@...emloft.net,
daniel@...earbox.net, jakub.kicinski@...ronome.com,
netdev@...r.kernel.org, kernel-team@...com, mingo@...hat.com,
will.deacon@....com, Paul McKenney <paulmck@...ux.vnet.ibm.com>,
jannh@...gle.com
Subject: Re: [PATCH v4 bpf-next 1/9] bpf: introduce bpf_spin_lock
On Thu, Jan 24, 2019 at 03:58:59PM -0800, Alexei Starovoitov wrote:
> On Thu, Jan 24, 2019 at 07:01:09PM +0100, Peter Zijlstra wrote:
> > So clearly this map stuff is shared between bpf proglets, otherwise
> > there would not be a need for locking. But what happens if one is from
> > task context and another from IRQ context?
> >
> > I don't see a local_irq_save()/restore() anywhere. What avoids the
> > trivial lock inversion?
>
> > and from NMI ...
>
> progs are not preemptable and map syscall accessors have bpf_prog_active counters.
> So nmi/kprobe progs will not be running when syscall is running.
> Hence dead lock is not possible and irq_save is not needed.
What about the progs that run from SoftIRQ ? Since that bpf_prog_active
thing isn't inside BPF_PROG_RUN() what is to stop say:
reuseport_select_sock()
...
BPF_PROG_RUN()
bpf_spin_lock()
<IRQ>
...
BPF_PROG_RUN()
bpf_spin_lock() // forever more
</IRQ>
Unless you stick that bpf_prog_active stuff inside BPF_PROG_RUN itself,
I don't see how you can fundamentally avoid this happening (now or in
the future).
Powered by blists - more mailing lists