Message-ID: <CAM0EoM=28nsomF9PniYNtpjD4+=+hVi-dA2befgW0OCz+v0y3Q@mail.gmail.com>
Date: Sun, 24 Mar 2024 11:27:42 -0400
From: Jamal Hadi Salim <jhs@...atatu.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: renmingshuai <renmingshuai@...wei.com>, xiyou.wangcong@...il.com, jiri@...nulli.us,
davem@...emloft.net, vladbu@...dia.com, netdev@...r.kernel.org,
yanan@...wei.com, liaichun@...wei.com, caowangbao@...wei.com,
Eric Dumazet <eric.dumazet@...il.com>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, Victor Nogueira <victor@...atatu.com>,
Pedro Tammela <pctammela@...atatu.com>, Davide Caratti <dcaratti@...hat.com>
Subject: Re: [PATCH] net/sched: Forbid assigning mirred action to a filter attached to the egress
On Wed, Mar 20, 2024 at 3:34 PM Jamal Hadi Salim <jhs@...atatu.com> wrote:
>
> On Wed, Mar 20, 2024 at 2:26 PM Eric Dumazet <edumazet@...gle.com> wrote:
> >
> > On Wed, Mar 20, 2024 at 7:13 PM Eric Dumazet <edumazet@...gle.com> wrote:
> > >
> > > On Wed, Mar 20, 2024 at 6:50 PM Jamal Hadi Salim <jhs@...atatu.com> wrote:
> >
> > > Nope, you just have to complete the patch, moving around
> > > dev_xmit_recursion_inc() and dev_xmit_recursion_dec()
> >
> > Untested part would be:
> >
> > diff --git a/net/core/dev.c b/net/core/dev.c
> > index 303a6ff46e4e16296e94ed6b726621abe093e567..dbeaf67282e8b6ec164d00d796c9fd8e4fd7c332
> > 100644
> > --- a/net/core/dev.c
> > +++ b/net/core/dev.c
> > @@ -4259,6 +4259,8 @@ int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
> > */
> > rcu_read_lock_bh();
> >
> > + dev_xmit_recursion_inc();
> > +
> > skb_update_prio(skb);
> >
> > qdisc_pkt_len_init(skb);
> > @@ -4331,9 +4333,7 @@ int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
> > HARD_TX_LOCK(dev, txq, cpu);
> >
> > if (!netif_xmit_stopped(txq)) {
> > - dev_xmit_recursion_inc();
> > skb = dev_hard_start_xmit(skb, dev, txq, &rc);
> > - dev_xmit_recursion_dec();
> > if (dev_xmit_complete(rc)) {
> > HARD_TX_UNLOCK(dev, txq);
> > goto out;
> > @@ -4353,12 +4353,14 @@ int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
> > }
> >
> > rc = -ENETDOWN;
> > + dev_xmit_recursion_dec();
> > rcu_read_unlock_bh();
> >
> > dev_core_stats_tx_dropped_inc(dev);
> > kfree_skb_list(skb);
> > return rc;
> > out:
> > + dev_xmit_recursion_dec();
> > rcu_read_unlock_bh();
> > return rc;
> > }
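
For reference, the helpers being moved here are the per-cpu softnet
recursion wrappers, roughly as they appear in include/linux/netdevice.h
at this point (quoted from memory, not part of the diff):

#define XMIT_RECURSION_LIMIT	8

static inline bool dev_xmit_recursion(void)
{
	return unlikely(__this_cpu_read(softnet_data.xmit.recursion) >
			XMIT_RECURSION_LIMIT);
}

static inline void dev_xmit_recursion_inc(void)
{
	__this_cpu_inc(softnet_data.xmit.recursion);
}

static inline void dev_xmit_recursion_dec(void)
{
	__this_cpu_dec(softnet_data.xmit.recursion);
}

Because the counter is per cpu, bracketing all of __dev_queue_xmit()
with the inc/dec makes any same-cpu re-entry (such as a mirred
redirect) visible to dev_xmit_recursion(), rather than only re-entry
during dev_hard_start_xmit().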
>
> This removed the deadlock, but now every redirected packet will be
> dropped. I was wrong earlier about the tc block: that only applies to
> clsact and ingress, which are fine since they don't need this lock.
> Here's a variation of the earlier patch that may work, at the cost of
> a new per-cpu increment on the qdisc.
>
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -3789,6 +3789,11 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
> if (unlikely(contended))
> spin_lock(&q->busylock);
>
> + if (__this_cpu_read(q->recursion_xmit) > 0) {
> + /* recursion on this cpu: drop the skb here instead of deadlocking on root_lock below */
> + }
> +
> + __this_cpu_inc(q->recursion_xmit);
> spin_lock(root_lock);
> if (unlikely(test_bit(__QDISC_STATE_DEACTIVATED, &q->state))) {
> __qdisc_drop(skb, &to_free);
> @@ -3825,6 +3830,7 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
> }
> }
> spin_unlock(root_lock);
> + __this_cpu_dec(q->recursion_xmit);
> if (unlikely(to_free))
> kfree_skb_list_reason(to_free,
> tcf_get_drop_reason(to_free));
>
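
The q->recursion_xmit used above presumes a new per-cpu member on
struct Qdisc. A minimal sketch of what that declaration could look like
(my assumption; the actual layout is in the attached patch):

/* include/net/sch_generic.h -- hypothetical sketch */
struct Qdisc {
	/* ... existing members ... */
	int __percpu *recursion_xmit;	/* nonzero: this cpu is already
					 * inside __dev_xmit_skb() for
					 * this qdisc */
};

It would be allocated once per qdisc (e.g. alloc_percpu(int) in
qdisc_alloc(), free_percpu() in qdisc_free()); note that with a pointer
member the accessors are normally spelled this_cpu_read(*q->recursion_xmit)
and this_cpu_inc(*q->recursion_xmit).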
And here's a tested version (attached) that fixes both the A->A case (a device whose egress filter redirects back to itself) and the A->B->A case (a redirect loop across two devices).
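
For context, the deadlock being avoided, as a call chain (a sketch of
the A->A case; A->B->A is the same with one extra device in the loop):

/*
 * __dev_xmit_skb(q)             takes q's root_lock
 *   q->enqueue()                e.g. htb_enqueue()
 *     tcf_classify()            filter attached to the egress qdisc
 *       tcf_mirred_act()        egress redirect back to the same device
 *         dev_queue_xmit()
 *           __dev_xmit_skb(q)   spins on root_lock we already hold -> deadlock
 */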
cheers,
jamal
Attachment: "qdisc-rec-xmit-var-fix.patch" (text/x-patch, 2619 bytes)