Message-ID: <91cf832e-ad66-47d0-bf2b-a8c9492d16a9@kernel.org>
Date: Wed, 31 Jan 2024 13:28:27 +0100
From: Jesper Dangaard Brouer <hawk@...nel.org>
To: Lorenzo Bianconi <lorenzo@...nel.org>, netdev@...r.kernel.org
Cc: lorenzo.bianconi@...hat.com, davem@...emloft.net, kuba@...nel.org,
 edumazet@...gle.com, pabeni@...hat.com, bpf@...r.kernel.org,
 toke@...hat.com, willemdebruijn.kernel@...il.com, jasowang@...hat.com,
 sdf@...gle.com, ilias.apalodimas@...aro.org
Subject: Re: [PATCH v6 net-next 1/5] net: add generic per-cpu page_pool
 allocator



On 28/01/2024 15.20, Lorenzo Bianconi wrote:
> Introduce a generic percpu page_pool allocator.
> Moreover, add page_pool_create_percpu() and a cpuid field in the page_pool
> struct in order to recycle the page in the page_pool "hot" cache if
> napi_pp_put_page() is running on the same cpu.
> This is a preliminary patch to add xdp multi-buff support for xdp running
> in generic mode.
> 
> Signed-off-by: Lorenzo Bianconi <lorenzo@...nel.org>
> ---
>   include/net/page_pool/types.h |  3 +++
>   net/core/dev.c                | 40 +++++++++++++++++++++++++++++++++++
>   net/core/page_pool.c          | 23 ++++++++++++++++----
>   net/core/skbuff.c             |  5 +++--
>   4 files changed, 65 insertions(+), 6 deletions(-)
> 
[...]
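
For readers following along: a minimal sketch of the recycling decision this
enables (illustrative only; the cpuid field comes from this patch, but the
helper name and exact placement below are assumptions, not the patch's code):

	/* Sketch: a per-cpu pool records its owning CPU in pool->cpuid
	 * (-1 for ordinary pools). When a page is freed on that same CPU,
	 * e.g. from napi_pp_put_page(), it can be returned to the pool's
	 * lockless "hot" (direct) cache instead of going through the
	 * ptr_ring.
	 */
	static bool pp_put_is_pool_local(const struct page_pool *pool)
	{
		return READ_ONCE(pool->cpuid) == smp_processor_id();
	}
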
> diff --git a/net/core/dev.c b/net/core/dev.c
> index cb2dab0feee0..bf9ec740b09a 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
[...]
> @@ -11686,6 +11690,27 @@ static void __init net_dev_struct_check(void)
>    *
>    */
>   
> +#define SD_PAGE_POOL_RING_SIZE	256
> +static int net_page_pool_alloc(int cpuid)

I don't like the name net_page_pool_alloc(), since it is built on the
page_pool_create APIs.

Let us rename it to net_page_pool_create()?
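
Something along these lines (sketch only; body unchanged from the patch):

	-static int net_page_pool_alloc(int cpuid)
	+static int net_page_pool_create(int cpuid)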


> +{
> +#if IS_ENABLED(CONFIG_PAGE_POOL)
> +	struct page_pool_params page_pool_params = {
> +		.pool_size = SD_PAGE_POOL_RING_SIZE,
> +		.nid = NUMA_NO_NODE,
> +	};
> +	struct page_pool *pp_ptr;
> +
> +	pp_ptr = page_pool_create_percpu(&page_pool_params, cpuid);
> +	if (IS_ERR(pp_ptr)) {
> +		pp_ptr = NULL;
> +		return -ENOMEM;
> +	}
> +
> +	per_cpu(page_pool, cpuid) = pp_ptr;
> +#endif
> +	return 0;
> +}
> +
>   /*
>    *       This is called single threaded during boot, so no need
>    *       to take the rtnl semaphore.
> @@ -11738,6 +11763,9 @@ static int __init net_dev_init(void)
>   		init_gro_hash(&sd->backlog);
>   		sd->backlog.poll = process_backlog;
>   		sd->backlog.weight = weight_p;
> +
> +		if (net_page_pool_alloc(i))
> +			goto out;
>   	}
>   
>   	dev_boot_phase = 0;
> @@ -11765,6 +11793,18 @@ static int __init net_dev_init(void)
>   	WARN_ON(rc < 0);
>   	rc = 0;
>   out:
> +	if (rc < 0) {
> +		for_each_possible_cpu(i) {
> +			struct page_pool *pp_ptr = this_cpu_read(page_pool);
> +
> +			if (!pp_ptr)
> +				continue;
> +
> +			page_pool_destroy(pp_ptr);
> +			per_cpu(page_pool, i) = NULL;
> +		}
> +	}
> +
>   	return rc;
>   }
>   
