Message-ID: <d5c66d86-23e0-b786-5cba-ae9c18a97549@redhat.com>
Date: Fri, 20 Jan 2023 10:09:41 +0100
From: Jesper Dangaard Brouer <jbrouer@...hat.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: brouer@...hat.com, netdev@...r.kernel.org,
Jakub Kicinski <kuba@...nel.org>,
"David S. Miller" <davem@...emloft.net>, pabeni@...hat.com,
syzbot+c8a2e66e37eee553c4fd@...kaller.appspotmail.com
Subject: Re: [PATCH net-next] net: fix kfree_skb_list use of
skb_mark_not_on_list
On 19/01/2023 19.04, Eric Dumazet wrote:
> On Thu, Jan 19, 2023 at 6:50 PM Jesper Dangaard Brouer
> <brouer@...hat.com> wrote:
>>
>> A bug was introduced by commit eedade12f4cb ("net: kfree_skb_list use
>> kmem_cache_free_bulk"). It unconditionally unlinked the SKB list via
>> invoking skb_mark_not_on_list().
>>
>> skb_mark_not_on_list() should only be called if __kfree_skb_reason()
>> returns true, meaning the SKB is ready to be freed, as it calls/checks
>> skb_unref().
>>
>> This is needed as kfree_skb_list() is also invoked on skb_shared_info
>> frag_list. A frag_list can have SKBs with elevated refcnt due to cloning
>> via skb_clone_fraglist(), which takes a reference on all SKBs in the
>> list. This implies the invariant that all SKBs in the list must have the
>> same refcnt when using kfree_skb_list().
>
> Yeah, or more precisely skb_drop_fraglist() calling kfree_skb_list()
>
>>
>> Reported-by: syzbot+c8a2e66e37eee553c4fd@...kaller.appspotmail.com
>> Reported-and-tested-by: syzbot+c8a2e66e37eee553c4fd@...kaller.appspotmail.com
>> Fixes: eedade12f4cb ("net: kfree_skb_list use kmem_cache_free_bulk")
>> Signed-off-by: Jesper Dangaard Brouer <brouer@...hat.com>
>> ---
>> net/core/skbuff.c | 6 +++---
>> 1 file changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
>> index 4e73ab3482b8..1bffbcbe6087 100644
>> --- a/net/core/skbuff.c
>> +++ b/net/core/skbuff.c
>> @@ -999,10 +999,10 @@ kfree_skb_list_reason(struct sk_buff *segs, enum skb_drop_reason reason)
>> while (segs) {
>> struct sk_buff *next = segs->next;
>>
>> - skb_mark_not_on_list(segs);
>> -
>> - if (__kfree_skb_reason(segs, reason))
>> + if (__kfree_skb_reason(segs, reason)) {
>> + skb_mark_not_on_list(segs);
>
> Real question is : Why do we need to set/change/dirt skb->next ?
>
> I would remove this completely, and save extra cache lines dirtying.
First of all, we just read this cacheline via reading segs->next, so it
must at minimum be in Shared (S) state already. Secondly, SLUB will
write into this cacheline anyway, so we know it needs to transition to
Modified (M) or Exclusive (E) state regardless. Thus, writing into it
here should be okay. We could replace the write with a prefetchw() to
help SLUB get the cacheline into Exclusive (E) state early.
> Before your patch, we were not calling skb_mark_not_on_list(segs),
> so why bother ?
To catch potential bugs.
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 4e73ab3482b87d81371cff266627dab646d3e84c..180df58e85c72eaa16f5cb56b56d181a379b8921
> 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -999,8 +999,6 @@ kfree_skb_list_reason(struct sk_buff *segs, enum
> skb_drop_reason reason)
> while (segs) {
> struct sk_buff *next = segs->next;
>
> - skb_mark_not_on_list(segs);
> -
> if (__kfree_skb_reason(segs, reason))
> kfree_skb_add_bulk(segs, &sa, reason);
>