Message-ID: <87bkjasmtw.fsf@nvidia.com>
Date: Wed, 26 Apr 2023 17:46:38 +0300
From: Vlad Buslov <vladbu@...dia.com>
To: Pedro Tammela <pctammela@...atatu.com>,
Ivan Vecera <ivecera@...hat.com>
CC: <davem@...emloft.net>, <kuba@...nel.org>, <netdev@...r.kernel.org>,
<jhs@...atatu.com>, <xiyou.wangcong@...il.com>, <jiri@...nulli.us>,
<marcelo.leitner@...il.com>, <paulb@...dia.com>,
<simon.horman@...igine.com>
Subject: Re: [PATCH net 2/2] net/sched: flower: fix error handler on replace
On Wed 26 Apr 2023 at 11:22, Pedro Tammela <pctammela@...atatu.com> wrote:
> On 26/04/2023 09:14, Vlad Buslov wrote:
>> When replacing a filter (i.e. the 'fold' pointer is not NULL) the
>> insertion of the new filter into the idr is postponed until later in
>> the code, since the handle has already been provided by the user.
>> However, the error handling code in fl_change() always assumes that
>> the new filter has been inserted into the idr. If the error handler
>> is reached when replacing an existing filter, it may remove that
>> filter from the idr, making it unreachable for delete or dump
>> afterwards. Fix the issue by verifying that the 'fold' argument
>> wasn't provided by the caller before calling idr_remove().
>>
>> Fixes: 08a0063df3ae ("net/sched: flower: Move filter handle initialization earlier")
>> Signed-off-by: Vlad Buslov <vladbu@...dia.com>
>> ---
>> net/sched/cls_flower.c | 3 ++-
>> 1 file changed, 2 insertions(+), 1 deletion(-)
>> diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
>> index 1844545bef37..a1c4ee2e0be2 100644
>> --- a/net/sched/cls_flower.c
>> +++ b/net/sched/cls_flower.c
>> @@ -2339,7 +2339,8 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
>>  errout_mask:
>>  	fl_mask_put(head, fnew->mask);
>>  errout_idr:
>> -	idr_remove(&head->handle_idr, fnew->handle);
>> +	if (!fold)
>> +		idr_remove(&head->handle_idr, fnew->handle);
>>  	__fl_put(fnew);
>>  errout_tb:
>>  	kfree(tb);
>
> Actually this seems to be fixing the same issue:
> https://lore.kernel.org/all/20230425140604.169881-1-ivecera@redhat.com/
Indeed it does; I missed that patch. However, there seems to be an
issue with Ivan's approach. Consider what happens when fold != NULL &&
in_ht == false and rhashtable_insert_fast() fails here:
if (fold) {
	/* Fold filter was deleted concurrently. Retry lookup. */
	if (fold->deleted) {
		err = -EAGAIN;
		goto errout_hw;
	}
	fnew->handle = handle; // <-- fnew->handle is assigned

	if (!in_ht) {
		struct rhashtable_params params =
			fnew->mask->filter_ht_params;

		err = rhashtable_insert_fast(&fnew->mask->ht,
					     &fnew->ht_node,
					     params);
		if (err)
			goto errout_hw; /* <-- err is set, go to
					 * error handler here */
		in_ht = true;
	}

	refcount_inc(&fnew->refcnt);
	rhashtable_remove_fast(&fold->mask->ht,
			       &fold->ht_node,
			       fold->mask->filter_ht_params);
	/* !!! we never get to insert fnew into the idr here if ht
	 * insertion fails
	 */
	idr_replace(&head->handle_idr, fnew, fnew->handle);
	list_replace_rcu(&fold->list, &fnew->list);
	fold->deleted = true;

	spin_unlock(&tp->lock);

	fl_mask_put(head, fold->mask);
	if (!tc_skip_hw(fold->flags))
		fl_hw_destroy_filter(tp, fold, rtnl_held, NULL);
	tcf_unbind_filter(tp, &fold->res);
	/* Caller holds reference to fold, so refcnt is always > 0
	 * after this.
	 */
	refcount_dec(&fold->refcnt);
	__fl_put(fold);
}
...
errout_ht:
	spin_lock(&tp->lock);
errout_hw:
	fnew->deleted = true;
	spin_unlock(&tp->lock);
	if (!tc_skip_hw(fnew->flags))
		fl_hw_destroy_filter(tp, fnew, rtnl_held, NULL);
	if (in_ht)
		rhashtable_remove_fast(&fnew->mask->ht, &fnew->ht_node,
				       fnew->mask->filter_ht_params);
errout_mask:
	fl_mask_put(head, fnew->mask);
errout_idr:
	/* !!! On the next line we remove a handle that we don't
	 * actually own
	 */
	idr_remove(&head->handle_idr, fnew->handle);
	__fl_put(fnew);
errout_tb:
	kfree(tb);
errout_mask_alloc:
	tcf_queue_work(&mask->rwork, fl_uninit_mask_free_work);
errout_fold:
	if (fold)
		__fl_put(fold);
	return err;
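
To spell out the failure sequence I'm worried about (my step-by-step
reading of the code above):

	/* With fold != NULL and in_ht == false:
	 *
	 * 1. fnew->handle = handle;          fnew now carries fold's handle
	 * 2. rhashtable_insert_fast() fails  -> goto errout_hw, err is set
	 * 3. idr_replace() is never reached  -> fnew was never put into
	 *    the idr
	 * 4. errout_idr: idr_remove(&head->handle_idr, fnew->handle)
	 *    removes fold's entry, even though fold is still alive
	 * 5. fold can no longer be found by its handle, so a subsequent
	 *    delete or dump of that filter fails
	 */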
Also, if I understood the idea behind Ivan's fix correctly, it relies
on the fact that calling idr_remove() with handle==0 is a noop. I
slightly prefer my approach, as it is more explicit, IMO.
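
For reference, a minimal sketch of the noop behavior being relied on
(the example_* name is mine, for illustration only; the idr_remove()
semantics are those of <linux/idr.h>):

	#include <linux/idr.h>

	/* idr_remove() on an id that was never allocated in the idr
	 * removes nothing and returns NULL, so with a zero-initialized
	 * fnew->handle the unconditional idr_remove() in errout_idr
	 * silently does nothing.
	 */
	static void example_noop_remove(struct idr *handle_idr)
	{
		void *removed = idr_remove(handle_idr, 0); /* no entry with id 0 */

		WARN_ON(removed != NULL); /* nothing was removed */
	}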
Thoughts?