Message-ID: <20190325164929.03d67fb6@carbon>
Date: Mon, 25 Mar 2019 16:49:29 +0100
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Felix Fietkau <nbd@....name>, netdev@...r.kernel.org,
davem@...emloft.net, fw@...len.de, pabeni@...hat.com,
brouer@...hat.com
Subject: Re: [PATCH v3] net: use bulk free in kfree_skb_list
On Mon, 25 Mar 2019 02:27:14 -0700
Eric Dumazet <eric.dumazet@...il.com> wrote:
> > +#ifdef CONFIG_SLUB
> > + /* SLUB writes into objects when freeing */
> > + prefetchw(segs);
> > +#endif
>
> This is done too late :
> You should probably either remove this prefetchw()
> or do it before reading segs->next at the beginning of the loop.
Agree. Not sure the prefetchw optimization makes sense here. IMHO just
drop it in this patch.
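
For reference, a rough sketch of the placement you describe, assuming
the v3 loop shape (illustrative only; kfree_skb() below is just a
stand-in for the bulk-free path, not the actual patch code). Issuing
prefetchw(segs) before the segs->next load requests the line in
exclusive state up front, instead of upgrading it after the load has
already pulled it in shared:

void kfree_skb_list(struct sk_buff *segs)
{
	while (segs) {
		struct sk_buff *next;

#ifdef CONFIG_SLUB
		/* SLUB writes into objects when freeing, so ask for the
		 * cache line in exclusive state before the segs->next
		 * load brings it in shared.
		 */
		prefetchw(segs);
#endif
		next = segs->next;
		kfree_skb(segs);	/* stand-in for the bulk-free path */
		segs = next;
	}
}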
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer