Message-ID: <878r607m6h.fsf@nvidia.com>
Date: Mon, 11 Dec 2023 18:10:28 +0200
From: Vlad Buslov <vladbu@...dia.com>
To: Pedro Tammela <pctammela@...atatu.com>
CC: <netdev@...r.kernel.org>, <davem@...emloft.net>, <edumazet@...gle.com>,
<kuba@...nel.org>, <pabeni@...hat.com>, <jhs@...atatu.com>,
<xiyou.wangcong@...il.com>, <jiri@...nulli.us>, <marcelo.leitner@...il.com>
Subject: Re: [PATCH net-next 1/2] net/sched: act_api: rely on rcu in tcf_idr_check_alloc

On Fri 08 Dec 2023 at 18:07, Pedro Tammela <pctammela@...atatu.com> wrote:
> On 06/12/2023 06:52, Vlad Buslov wrote:
>>> Ok, so if I'm binding and a free index is observed, which means "try to
>>> allocate", and I get an ENOSPC after jumping to 'new:', I try again but
>>> this time bind to the allocated action.
>>>
>>> In this scenario, when we come back to 'again:' we will wait until -EBUSY is
>>> replaced with the real pointer. That seems like a big enough window that any
>>> race to allocate from the bind path would most probably end up in this
>>> contention loop.
>>>
>>> However, I think that with these two retry mechanisms there's an extremely
>>> small window for an infinite loop if an action delete is timed just right,
>>> between when the action pointer is found and when we grab the tcfa_refcnt.
>>>
>>> idr_find (pointer)
>>> tcfa_refcnt (0) <--------|
>>> again:                   |
>>> idr_find (free index!)   |
>>> new:                     |
>>> idr_alloc_u32 (ENOSPC)   |
>>> again:                   |
>>> idr_find (EBUSY)         |
>>> again:                   |
>>> idr_find (pointer)       |
>>> <evil delete happens>    |
>>> ------->>>>--------------|
>> I'm not sure I'm following. Why would this sequence cause an infinite loop?
>>
>
> Perhaps I was being overly paranoid. Taking another look, it seems that not
> only would an evil delete have to happen, but EBUSY would also have to stay
> in the action idr for a long time. I see it now, it looks like it converges.
>
> I was wondering: instead of looping on 'again:' in either scenario you
> presented, what if we return -EAGAIN and let the filter infrastructure retry
> it under rtnl_lock()? That would at least give enough breathing room for a
> call to schedule() to kick in if needed.

Sounds good, but you will need to ensure that both act and cls API
implementations properly retry on EAGAIN (looks like they do, but I only
gave it a cursory glance).
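
For the sake of discussion, here is a rough sketch of how both contention
cases could be translated into -EAGAIN instead of looping. This is
simplified illustration code only, not the actual act_api implementation or
a proposed patch: the name example_idr_check_alloc is made up, and the
RCU/locking changes from this series are glossed over.

/* Simplified sketch: report both contention cases (placeholder still in
 * the idr, or losing the allocation race and getting -ENOSPC) as -EAGAIN,
 * so the cls/act API callers can replay the request under rtnl_lock()
 * instead of spinning on 'again:'.
 */
static int example_idr_check_alloc(struct tcf_idrinfo *idrinfo, u32 *index,
                                   struct tc_action **a, int bind)
{
        struct tc_action *p;
        int ret;

        mutex_lock(&idrinfo->lock);
        if (*index) {
                p = idr_find(&idrinfo->action_idr, *index);
                if (IS_ERR(p)) {
                        /* Concurrent allocation holds the slot (the EBUSY
                         * placeholder): ask the caller to retry.
                         */
                        ret = -EAGAIN;
                } else if (p) {
                        if (refcount_inc_not_zero(&p->tcfa_refcnt)) {
                                if (bind)
                                        atomic_inc(&p->tcfa_bindcnt);
                                *a = p;
                                ret = 1;
                        } else {
                                /* Concurrent delete won the race (the "evil
                                 * delete" case): also retry via -EAGAIN.
                                 */
                                ret = -EAGAIN;
                        }
                } else {
                        /* Reserve the requested index for a new action. */
                        *a = NULL;
                        ret = idr_alloc_u32(&idrinfo->action_idr, NULL, index,
                                            *index, GFP_KERNEL);
                        /* Lost the allocation race: retry via -EAGAIN. */
                        if (ret == -ENOSPC)
                                ret = -EAGAIN;
                }
        } else {
                /* No index given: reserve the next free one. */
                *index = 1;
                *a = NULL;
                ret = idr_alloc_u32(&idrinfo->action_idr, NULL, index,
                                    UINT_MAX, GFP_KERNEL);
        }
        mutex_unlock(&idrinfo->lock);
        return ret;
}

The callers would then depend on the existing replay handling in the act and
cls request paths, which is exactly the part that needs the double-check
mentioned above.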