Message-ID: <2ee000f803bd1a099aa8fb02ef79c7b25e5f5b08.camel@redhat.com>
Date: Fri, 23 Jun 2023 10:18:24 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: David Howells <dhowells@...hat.com>, netdev@...r.kernel.org
Cc: Alexander Duyck <alexander.duyck@...il.com>, "David S. Miller"
<davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski
<kuba@...nel.org>, Willem de Bruijn <willemdebruijn.kernel@...il.com>,
David Ahern <dsahern@...nel.org>, Matthew Wilcox <willy@...radead.org>,
Jens Axboe <axboe@...nel.dk>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Menglong Dong <imagedong@...cent.com>
Subject: Re: [PATCH net-next v3 02/18] net: Display info about
MSG_SPLICE_PAGES memory handling in proc
On Tue, 2023-06-20 at 15:53 +0100, David Howells wrote:
> Display information about the memory handling MSG_SPLICE_PAGES does to copy
> slabbed data into page fragments.
>
> For each CPU that has a cached folio, it displays the folio pfn, the offset
> pointer within the folio and the size of the folio.
>
> It also displays the number of pages refurbished and the number of pages
> replaced.
>
> Signed-off-by: David Howells <dhowells@...hat.com>
> cc: Alexander Duyck <alexander.duyck@...il.com>
> cc: Eric Dumazet <edumazet@...gle.com>
> cc: "David S. Miller" <davem@...emloft.net>
> cc: David Ahern <dsahern@...nel.org>
> cc: Jakub Kicinski <kuba@...nel.org>
> cc: Paolo Abeni <pabeni@...hat.com>
> cc: Jens Axboe <axboe@...nel.dk>
> cc: Matthew Wilcox <willy@...radead.org>
> cc: Menglong Dong <imagedong@...cent.com>
> cc: netdev@...r.kernel.org
> ---
> net/core/skbuff.c | 42 +++++++++++++++++++++++++++++++++++++++---
> 1 file changed, 39 insertions(+), 3 deletions(-)
>
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index d962c93a429d..36605510a76d 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -83,6 +83,7 @@
> #include <linux/user_namespace.h>
> #include <linux/indirect_call_wrapper.h>
> #include <linux/textsearch.h>
> +#include <linux/proc_fs.h>
>
> #include "dev.h"
> #include "sock_destructor.h"
> @@ -6758,6 +6759,7 @@ nodefer: __kfree_skb(skb);
> struct skb_splice_frag_cache {
> struct folio *folio;
> void *virt;
> + unsigned int fsize;
> unsigned int offset;
>  	/* we maintain a pagecount bias, so that we don't dirty the cache line
> * containing page->_refcount every time we allocate a fragment.
> @@ -6767,6 +6769,26 @@ struct skb_splice_frag_cache {
> };
>
> static DEFINE_PER_CPU(struct skb_splice_frag_cache, skb_splice_frag_cache);
> +static atomic_t skb_splice_frag_replaced, skb_splice_frag_refurbished;
(in case we don't agree to restrict this series to just removing
MSG_SENDPAGE_NOTLAST)

Have you considered percpu counters instead of the above atomics? The
increments are on not-so-unlikely code paths, and the contention there
could hurt performance.
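
Something along these lines, completely untested, just to sketch the
percpu variant (reusing the counter names from your patch; the read
side would sum the per-CPU values, e.g. in the proc show handler):

	#include <linux/percpu.h>

	static DEFINE_PER_CPU(unsigned long, skb_splice_frag_replaced);
	static DEFINE_PER_CPU(unsigned long, skb_splice_frag_refurbished);

	/* hot path: CPU-local increment, no shared cache line bouncing */
	static inline void skb_splice_note_refurbished(void)
	{
		this_cpu_inc(skb_splice_frag_refurbished);
	}

	/* read side: sum over all possible CPUs */
	static unsigned long skb_splice_frag_refurbished_total(void)
	{
		unsigned long sum = 0;
		int cpu;

		for_each_possible_cpu(cpu)
			sum += per_cpu(skb_splice_frag_refurbished, cpu);
		return sum;
	}

The read becomes O(nr_cpus), but that only matters for the proc file,
which is not a hot path.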
Thanks,

Paolo