Message-ID: <87bki2xb3d.fsf@nvidia.com>
Date: Tue, 30 May 2023 15:18:19 +0300
From: Vlad Buslov <vladbu@...dia.com>
To: Peilin Ye <yepeilin.cs@...il.com>
CC: Jamal Hadi Salim <jhs@...atatu.com>, Jakub Kicinski <kuba@...nel.org>,
Pedro Tammela <pctammela@...atatu.com>, "David S. Miller"
<davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>, Paolo Abeni
<pabeni@...hat.com>, Cong Wang <xiyou.wangcong@...il.com>, Jiri Pirko
<jiri@...nulli.us>, Peilin Ye <peilin.ye@...edance.com>, Daniel Borkmann
<daniel@...earbox.net>, John Fastabend <john.fastabend@...il.com>, "Hillf
Danton" <hdanton@...a.com>, <netdev@...r.kernel.org>, Cong Wang
<cong.wang@...edance.com>
Subject: Re: [PATCH v5 net 6/6] net/sched: qdisc_destroy() old ingress and
clsact Qdiscs before grafting
On Tue 30 May 2023 at 02:11, Peilin Ye <yepeilin.cs@...il.com> wrote:
> On Mon, May 29, 2023 at 02:50:26PM +0300, Vlad Buslov wrote:
>> After looking very carefully at the code I think I know what the issue
>> might be:
>>
>> Task 1 graft Qdisc              Task 2 new filter
>>          +                              +
>>          |                              |
>>          v                              v
>>     rtnl_lock()                  take q->refcnt
>>          +                              +
>>          |                              |
>>          v                              v
>> Spin while q->refcnt!=1    Block on rtnl_lock() indefinitely
>>                            due to -EAGAIN
>>
>> This will cause a real deadlock with the proposed patch. I'll try to
>> come up with a better approach. Sorry for not seeing it earlier.
>
> Thanks a lot for pointing this out! The reproducers add flower filters to
> ingress Qdiscs so I didn't think of rtnl_lock()'ed filter requests...
>
> On Mon, May 29, 2023 at 03:58:50PM +0300, Vlad Buslov wrote:
>> - Account for such cls_api behavior in sch_api by dropping and
>>   re-taking the lock before replaying. This actually seems to be quite
>> straightforward since 'replay' functionality that we are reusing for
>> this is designed for similar behavior - it releases rtnl lock before
>> loading a sch module, takes the lock again and safely replays the
>> function by re-obtaining all the necessary data.
>
> Yes, I've tested this using that reproducer Pedro posted.
>
> On Mon, May 29, 2023 at 03:58:50PM +0300, Vlad Buslov wrote:
>> If livelock with concurrent filters insertion is an issue, then it can
>> be remedied by setting a new Qdisc->flags bit
>> "DELETED-REJECT-NEW-FILTERS" and checking for it together with
>> QDISC_CLASS_OPS_DOIT_UNLOCKED in order to force any concurrent filter
>> insertion coming after the flag is set to synchronize on rtnl lock.
>
> Thanks for the suggestion! I'll try this approach.
>
> Currently, QDISC_CLASS_OPS_DOIT_UNLOCKED is checked after taking a refcnt
> of the "being-deleted" Qdisc. I'll try forcing "late" requests (those that
> arrive after the Qdisc has been flagged as being deleted) to sync on the
> RTNL lock without (i.e. before) taking the Qdisc refcnt; otherwise I think
> Task 1 will replay for even longer?
Yeah, I see what you mean. Looking at the code, __tcf_qdisc_find()
already returns -EINVAL when q->refcnt is zero, so maybe returning
-EINVAL from that function when the "DELETED-REJECT-NEW-FILTERS" flag is
set is also fine? That would be much easier to implement than moving
rtnl_lock there.
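For illustration, a minimal user-space sketch of the check being discussed
(the struct, the QF_DELETING flag name, and the function name are
hypothetical stand-ins for this thread, not the kernel's real definitions):

```c
#include <stddef.h>

/* Modeled errno value; the kernel returns negative errno codes. */
#define EINVAL 22

/* Hypothetical flag bit standing in for "DELETED-REJECT-NEW-FILTERS". */
#define QF_DELETING 0x1u

struct qdisc {
	int refcnt;
	unsigned int flags;
};

/*
 * Sketch of the idea above: reject a filter request with -EINVAL
 * before taking a Qdisc reference when the Qdisc is marked as being
 * deleted, mirroring how the real __tcf_qdisc_find() already rejects
 * a Qdisc whose refcount has dropped to zero.
 */
static int tcf_qdisc_find_sketch(struct qdisc *q)
{
	if (q == NULL || q->refcnt == 0)
		return -EINVAL;		/* Qdisc already gone */
	if (q->flags & QF_DELETING)
		return -EINVAL;		/* being grafted away: reject new filters */
	q->refcnt++;			/* normal path: take a reference */
	return 0;
}
```

The point of checking the flag before the refcount is taken is that a
rejected "late" filter request never pins the Qdisc, so the grafting task
does not have to spin (or replay) waiting for that reference to drop.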