Message-ID: <b33625370d39453edf8f981aef13a7a18a747b0a.camel@redhat.com>
Date: Mon, 25 Mar 2019 09:54:14 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Felix Fietkau <nbd@....name>, netdev@...r.kernel.org
Cc: davem@...emloft.net, brouer@...hat.com, fw@...len.de
Subject: Re: [PATCH v2] net: use bulk free in kfree_skb_list
Hi,
On Sun, 2019-03-24 at 17:56 +0100, Felix Fietkau wrote:
> Since we're freeing multiple skbs, we might as well use bulk free to save a
> few cycles. Use the same conditions for bulk free as in napi_consume_skb.
>
> Signed-off-by: Felix Fietkau <nbd@....name>
> ---
> v2: call kmem_cache_free_bulk once the skb array is full instead of
> falling back to kfree_skb
> net/core/skbuff.c | 40 ++++++++++++++++++++++++++++++++++++----
> 1 file changed, 36 insertions(+), 4 deletions(-)
>
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 2415d9cb9b89..1eeaa264d2a4 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -666,12 +666,44 @@ EXPORT_SYMBOL(kfree_skb);
>
> void kfree_skb_list(struct sk_buff *segs)
> {
> - while (segs) {
> - struct sk_buff *next = segs->next;
> + struct sk_buff *next = segs;
> + void *skbs[16];
> + int n_skbs = 0;
>
> - kfree_skb(segs);
> - segs = next;
> + while ((segs = next) != NULL) {
> + next = segs->next;
> +
> + if (!skb_unref(segs))
> + continue;
> +
> + if (segs->fclone != SKB_FCLONE_UNAVAILABLE) {
> + kfree_skb(segs);
> + continue;
> + }
I think you should swap the order of the skb_unref() call and the above
check, otherwise skbs with 'segs->fclone != SKB_FCLONE_UNAVAILABLE' will
go through skb_unref() twice (kfree_skb() calls skb_unref(), too).
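For clarity, a minimal sketch of the reordering being suggested, based
only on the hunk quoted above (the bulk-free tail of the loop is elided,
as in the quote):

	while ((segs = next) != NULL) {
		next = segs->next;

		/* Check the fclone state first: fclone skbs are handed to
		 * kfree_skb(), which does its own skb_unref(), so they must
		 * not be unref'd here as well.
		 */
		if (segs->fclone != SKB_FCLONE_UNAVAILABLE) {
			kfree_skb(segs);
			continue;
		}

		if (!skb_unref(segs))
			continue;

		/* ... collect segs into skbs[] and hand the array to
		 * kmem_cache_free_bulk() once it is full, as in the rest
		 * of the patch ...
		 */
	}
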
Other than that LGTM,
Thanks,
Paolo