Message-ID: <CAM_iQpWfv59MoEJES1O=FhA4YsrB2nNGGaKzDmqcmXQXzc8gow@mail.gmail.com>
Date: Wed, 2 Dec 2020 17:29:53 -0800
From: Cong Wang <xiyou.wangcong@...il.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Linux Kernel Network Developers <netdev@...r.kernel.org>,
Dongdong Wang <wangdongdong@...edance.com>,
Thomas Graf <tgraf@...g.ch>, bpf@...r.kernel.org,
Cong Wang <cong.wang@...edance.com>
Subject: Re: [Patch net] lwt: disable BH too in run_lwt_bpf()
On Wed, Dec 2, 2020 at 5:10 PM Jakub Kicinski <kuba@...nel.org> wrote:
>
> On Tue, 1 Dec 2020 11:44:38 -0800 Cong Wang wrote:
> > From: Dongdong Wang <wangdongdong@...edance.com>
> >
> > The per-cpu bpf_redirect_info is shared among all skb_do_redirect()
> > and BPF redirect helpers. Callers on the RX path are all in BH context,
> > so disabling preemption alone is not sufficient to prevent BH interruption.
> >
> > In production, we observed strange packet drops because of the race
> > condition between LWT xmit and TC ingress, and we verified this issue
> > is fixed after we disable BH.
> >
> > This bug was technically introduced from the very beginning, that is,
> > in commit 3a0af8fd61f9 ("bpf: BPF for lightweight tunnel infrastructure"),
> > but at that time call_rcu() had to be call_rcu_bh() to match the RCU
> > context, so this patch may not work well on kernels before the RCU
> > flavor consolidation, which was completed around v5.0.
> >
> > Update the comments above the code too, as call_rcu() is now BH friendly.
> >
> > Cc: Thomas Graf <tgraf@...g.ch>
> > Cc: bpf@...r.kernel.org
> > Reviewed-by: Cong Wang <cong.wang@...edance.com>
> > Signed-off-by: Dongdong Wang <wangdongdong@...edance.com>
> > ---
> > net/core/lwt_bpf.c | 8 ++++----
> > 1 file changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/net/core/lwt_bpf.c b/net/core/lwt_bpf.c
> > index 7d3438215f32..4f3cb7c15ddf 100644
> > --- a/net/core/lwt_bpf.c
> > +++ b/net/core/lwt_bpf.c
> > @@ -39,12 +39,11 @@ static int run_lwt_bpf(struct sk_buff *skb, struct bpf_lwt_prog *lwt,
> > {
> > int ret;
> >
> > - /* Preempt disable is needed to protect per-cpu redirect_info between
> > - * BPF prog and skb_do_redirect(). The call_rcu in bpf_prog_put() and
> > - * access to maps strictly require a rcu_read_lock() for protection,
> > - * mixing with BH RCU lock doesn't work.
> > + /* Preempt disable and BH disable are needed to protect per-cpu
> > + * redirect_info between BPF prog and skb_do_redirect().
> > */
> > preempt_disable();
> > + local_bh_disable();
>
> Why not remove the preempt_disable()? Disabling BH must also disable
> preemption AFAIK.
It seems the RT kernel still needs preempt disable:
https://www.spinics.net/lists/kernel/msg3710124.html
but my RT knowledge is not sufficient to tell for sure. So I just followed
the same pattern as in x86 FPU (as of today):
static inline void fpregs_lock(void)
{
        preempt_disable();
        local_bh_disable();
}

static inline void fpregs_unlock(void)
{
        local_bh_enable();
        preempt_enable();
}
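
For what it's worth, my understanding of why both calls are wanted, written
out as a hypothetical helper mirroring fpregs_lock() (the RT half is exactly
the part I am least sure about):

/* Hypothetical lwt_bpf_lock(), not actual kernel code. */
static inline void lwt_bpf_lock(void)
{
        /* On non-RT kernels local_bh_disable() already implies disabled
         * preemption, but on RT it reportedly does not, hence the
         * explicit preempt_disable() to stay pinned to this CPU.
         */
        preempt_disable();
        /* Keep softirqs (e.g. TC ingress running another BPF redirect)
         * off this CPU so they cannot clobber the per-cpu
         * bpf_redirect_info in between.
         */
        local_bh_disable();
}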
There are other similar patterns in the current code base, so if this
needs a cleanup, the RT people can clean them all up together.
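
For reference, with this patch applied run_lwt_bpf() ends up looking
roughly like the sketch below (most of the return-code handling elided):

static int run_lwt_bpf(struct sk_buff *skb, struct bpf_lwt_prog *lwt,
                       struct dst_entry *dst, bool can_redirect)
{
        int ret;

        /* Preempt disable and BH disable are needed to protect per-cpu
         * redirect_info between BPF prog and skb_do_redirect().
         */
        preempt_disable();
        local_bh_disable();
        bpf_compute_data_pointers(skb);
        /* The prog may call bpf_redirect(), which stores its verdict in
         * the per-cpu bpf_redirect_info ...
         */
        ret = bpf_prog_run_save_cb(lwt->prog, skb);
        if (ret == BPF_REDIRECT && can_redirect) {
                skb_reset_mac_header(skb);
                /* ... which skb_do_redirect() consumes here. A softirq
                 * firing in between (e.g. TC ingress doing its own
                 * redirect) would overwrite it; disabling BH closes
                 * that window.
                 */
                ret = skb_do_redirect(skb);
        }
        /* ... BPF_OK / BPF_DROP / error handling elided ... */
        local_bh_enable();
        preempt_enable();

        return ret;
}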
Thanks.