Message-ID: <ZF1SqomxfPNfccrt@C02FL77VMD6R.googleapis.com>
Date: Thu, 11 May 2023 13:40:10 -0700
From: Peilin Ye <yepeilin.cs@...il.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Paolo Abeni <pabeni@...hat.com>,
Jamal Hadi Salim <jhs@...atatu.com>,
Cong Wang <xiyou.wangcong@...il.com>,
Jiri Pirko <jiri@...nulli.us>,
Peilin Ye <peilin.ye@...edance.com>,
Daniel Borkmann <daniel@...earbox.net>,
John Fastabend <john.fastabend@...il.com>,
Vlad Buslov <vladbu@...lanox.com>,
Pedro Tammela <pctammela@...atatu.com>,
Hillf Danton <hdanton@...a.com>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, Cong Wang <cong.wang@...edance.com>
Subject: Re: [PATCH net 6/6] net/sched: qdisc_destroy() old ingress and
clsact Qdiscs before grafting
On Wed, May 10, 2023 at 04:15:59PM -0700, Jakub Kicinski wrote:
> My thinking was to make sure that dev->miniq_* pointers always point
> to one of the miniqs of the currently attached qdisc. Right now, on
> a quick look, those pointers are not initialized during initial graft,
> only when first filter is added, and may be cleared when filters are
> removed. But I don't think that's strictly required, miniq with no
> filters should be fine.
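(For readers following along: the two-slot scheme Jakub refers to can be modeled in a few lines of user-space C. This is a hypothetical sketch, not the kernel code; RCU, locking, and stats are omitted, and pair_swap() is a simplified stand-in for mini_qdisc_pair_swap().)

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the kernel's mini_Qdisc_pair: each ingress/clsact qdisc
 * owns two miniq slots and flips the device pointer between them. */
struct mini_qdisc {
	void *filter_list;
};

struct mini_qdisc_pair {
	struct mini_qdisc miniq1;
	struct mini_qdisc miniq2;
	struct mini_qdisc **p_miniq;	/* e.g. &dev->miniq_ingress */
};

/* Simplified pair_swap(): publish the pair's inactive slot with the new
 * filter chain head, or clear the device pointer when the chain becomes
 * empty (which is what the current code does on teardown). */
static void pair_swap(struct mini_qdisc_pair *miniqp, void *tp_head)
{
	struct mini_qdisc *old = *miniqp->p_miniq;
	struct mini_qdisc *new;

	if (!tp_head) {
		*miniqp->p_miniq = NULL;
		return;
	}
	new = (old != &miniqp->miniq1) ? &miniqp->miniq1 : &miniqp->miniq2;
	new->filter_list = tp_head;
	*miniqp->p_miniq = new;
}
```

Successive calls alternate the device pointer between miniq1 and miniq2 of the same pair, so (absent the race below) it always points at one of the attached qdisc's own miniqs.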
Ah, I see, thanks for explaining; I hadn't thought of that. BTW, looking
at sch_handle_ingress(), mini Qdisc stats currently aren't updated
(mini_qdisc_bstats_cpu_update()) when there are no filters. Is this
intended? Should I keep this behavior by:
diff --git a/net/core/dev.c b/net/core/dev.c
index 735096d42c1d..9016481377e0 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -5116,7 +5116,7 @@ sch_handle_ingress(struct sk_buff *skb, struct packet_type **pt_prev, int *ret,
* that are not configured with an ingress qdisc will bail
* out here.
*/
- if (!miniq)
+ if (!miniq || !miniq->filter_list)
return skb;
if (*pt_prev) {
On Wed, May 10, 2023 at 04:15:59PM -0700, Jakub Kicinski wrote:
> On Wed, 10 May 2023 13:11:19 -0700 Peilin Ye wrote:
> > On Fri, 5 May 2023 17:16:10 -0700 Peilin Ye wrote:
> > > Thread 1 A's refcnt Thread 2
> > > RTM_NEWQDISC (A, RTNL-locked)
> > > qdisc_create(A) 1
> > > qdisc_graft(A) 9
> > >
> > > RTM_NEWTFILTER (X, RTNL-lockless)
> > > __tcf_qdisc_find(A) 10
> > > tcf_chain0_head_change(A)
> > > mini_qdisc_pair_swap(A) (1st)
> > > |
> > > | RTM_NEWQDISC (B, RTNL-locked)
> > > RCU 2 qdisc_graft(B)
> > > | 1 notify_and_destroy(A)
> > > |
> > > tcf_block_release(A) 0 RTM_NEWTFILTER (Y, RTNL-lockless)
> > > qdisc_destroy(A) tcf_chain0_head_change(B)
> > > tcf_chain0_head_change_cb_del(A) mini_qdisc_pair_swap(B) (2nd)
> > > mini_qdisc_pair_swap(A) (3rd) |
> > > ... ...
> >
> > Looking at the code, I think there is no guarantee that (1st) happens
> > before (2nd), however unlikely the reverse may be. Can RTNL-lockless
> > RTM_NEWTFILTER handlers get preempted?
>
> Right, we need qdisc_graft(B) to update the appropriate dev pointer
> to point to b1. With that the ordering should not matter. Probably
> using the ->attach() callback?
->attach() runs later than dev_graft_qdisc(). We should be ready for
concurrent filter requests (i.e. have the dev pointer pointing to b1)
before grafting (publishing) B. Also, qdisc_graft() currently doesn't
call ->attach() for ingress and clsact Qdiscs.

But I see your point, thanks for the suggestion! I'll try ->init() and
send a v2.
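(FWIW, the hazard in the diagram above, and the "dev pointer always
belongs to the currently attached qdisc" invariant, can be modeled in
user-space C. This is a hypothetical sketch, not the kernel code:
pair_swap() stands in for mini_qdisc_pair_swap(), and pair_swap_guarded()
is only an illustration of the invariant, not an actual kernel function.)

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the ordering hazard: qdisc A is being destroyed while B
 * is already grafted, and both pairs write through the same
 * dev->miniq_ingress slot. No RCU or RTNL locking is modeled. */
struct mini_qdisc {
	int qdisc_id;
};

struct mini_qdisc_pair {
	struct mini_qdisc miniq1;
	struct mini_qdisc miniq2;
	struct mini_qdisc **p_miniq;	/* shared dev->miniq_ingress slot */
};

/* Simplified swap: publish the inactive slot, or clear the device
 * pointer when there are no filters left (the teardown case). */
static void pair_swap(struct mini_qdisc_pair *p, int have_filters)
{
	struct mini_qdisc *old = *p->p_miniq;

	if (!have_filters) {
		*p->p_miniq = NULL;
		return;
	}
	*p->p_miniq = (old != &p->miniq1) ? &p->miniq1 : &p->miniq2;
}

/* Guarded variant illustrating the invariant: only touch the device
 * pointer if it currently points into our own pair, i.e. we are still
 * the attached qdisc. A late teardown swap then becomes a no-op. */
static void pair_swap_guarded(struct mini_qdisc_pair *p, int have_filters)
{
	struct mini_qdisc *old = *p->p_miniq;

	if (old && old != &p->miniq1 && old != &p->miniq2)
		return;		/* another qdisc owns the slot now */
	pair_swap(p, have_filters);
}
```

With the unguarded swap, running B's (2nd) swap and then A's late (3rd)
teardown swap leaves the shared slot NULL even though B has filters; the
guarded variant leaves B's published miniq intact regardless of ordering.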
Thanks,
Peilin Ye