Message-ID: <CANn89iJ8Vd2V6jqVdMYLFcs0g_mu+bTJr3mKq__uXBFg1K0yhA@mail.gmail.com>
Date: Thu, 19 Jan 2023 19:04:57 +0100
From: Eric Dumazet <edumazet@...gle.com>
To: Jesper Dangaard Brouer <brouer@...hat.com>
Cc: netdev@...r.kernel.org, Jakub Kicinski <kuba@...nel.org>,
"David S. Miller" <davem@...emloft.net>, pabeni@...hat.com,
syzbot+c8a2e66e37eee553c4fd@...kaller.appspotmail.com
Subject: Re: [PATCH net-next] net: fix kfree_skb_list use of skb_mark_not_on_list
On Thu, Jan 19, 2023 at 6:50 PM Jesper Dangaard Brouer
<brouer@...hat.com> wrote:
>
> A bug was introduced by commit eedade12f4cb ("net: kfree_skb_list use
> kmem_cache_free_bulk"). It unconditionally unlinked the SKB list via
> invoking skb_mark_not_on_list().
>
> skb_mark_not_on_list() should only be called if __kfree_skb_reason()
> returns true, meaning the SKB is ready to be freed, as
> __kfree_skb_reason() calls/checks skb_unref().
>
> This is needed as kfree_skb_list() is also invoked on skb_shared_info
> frag_list. A frag_list can have SKBs with elevated refcnt due to cloning
> via skb_clone_fraglist(), which takes a reference on all SKBs in the
> list. This implies the invariant that all SKBs in the list must have the
> same refcnt when using kfree_skb_list().
Yeah, or more precisely skb_drop_fraglist() calling kfree_skb_list()
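(For context, the elevated refcounts come from roughly this pair of
helpers in net/core/skbuff.c -- sketched from memory, not verbatim, so
details may differ: cloning takes a reference on every SKB hanging off
frag_list, and dropping the fraglist later hands the whole list to
kfree_skb_list(), where skb_unref() can then see users > 1.)

static void skb_clone_fraglist(struct sk_buff *skb)
{
	struct sk_buff *list;

	/* take a reference on each SKB in the frag_list */
	skb_walk_frags(skb, list)
		skb_get(list);
}

static void skb_drop_fraglist(struct sk_buff *skb)
{
	/* frees the whole chain; each element may still be referenced
	 * by a clone, so __kfree_skb_reason() can return false here
	 */
	kfree_skb_list(skb_shinfo(skb)->frag_list);
	skb_shinfo(skb)->frag_list = NULL;
}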
>
> Reported-by: syzbot+c8a2e66e37eee553c4fd@...kaller.appspotmail.com
> Reported-and-tested-by: syzbot+c8a2e66e37eee553c4fd@...kaller.appspotmail.com
> Fixes: eedade12f4cb ("net: kfree_skb_list use kmem_cache_free_bulk")
> Signed-off-by: Jesper Dangaard Brouer <brouer@...hat.com>
> ---
> net/core/skbuff.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 4e73ab3482b8..1bffbcbe6087 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -999,10 +999,10 @@ kfree_skb_list_reason(struct sk_buff *segs, enum skb_drop_reason reason)
>  	while (segs) {
>  		struct sk_buff *next = segs->next;
>  
> -		skb_mark_not_on_list(segs);
> -
> -		if (__kfree_skb_reason(segs, reason))
> +		if (__kfree_skb_reason(segs, reason)) {
> +			skb_mark_not_on_list(segs);
Real question is: why do we need to set/change/dirty skb->next at all?
I would remove this completely, and avoid dirtying an extra cache line.
Before your patch, we were not calling skb_mark_not_on_list(segs),
so why bother?
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 4e73ab3482b87d81371cff266627dab646d3e84c..180df58e85c72eaa16f5cb56b56d181a379b8921 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -999,8 +999,6 @@ kfree_skb_list_reason(struct sk_buff *segs, enum skb_drop_reason reason)
 	while (segs) {
 		struct sk_buff *next = segs->next;
 
-		skb_mark_not_on_list(segs);
-
 		if (__kfree_skb_reason(segs, reason))
 			kfree_skb_add_bulk(segs, &sa, reason);
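(For reference, skb_mark_not_on_list() is only a store to skb->next --
sketch of the helper as I recall it from include/linux/skbuff.h -- so
dropping the call on this path avoids dirtying that cache line for
segments we are about to free anyway:)

static inline void skb_mark_not_on_list(struct sk_buff *skb)
{
	skb->next = NULL;
}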