Message-ID: <20241205184016.6941f504@kernel.org>
Date: Thu, 5 Dec 2024 18:40:16 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Alexander Lobakin <aleksander.lobakin@...el.com>
Cc: Alexei Starovoitov <ast@...nel.org>, Daniel Borkmann
 <daniel@...earbox.net>, John Fastabend <john.fastabend@...il.com>, Andrii
 Nakryiko <andrii@...nel.org>, "David S. Miller" <davem@...emloft.net>, Eric
 Dumazet <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>, Toke
 Høiland-Jørgensen <toke@...hat.com>, Maciej
 Fijalkowski <maciej.fijalkowski@...el.com>, Stanislav Fomichev
 <sdf@...ichev.me>, Magnus Karlsson <magnus.karlsson@...el.com>,
 nex.sw.ncis.osdt.itp.upstreaming@...el.com, bpf@...r.kernel.org,
 netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH net-next v6 09/10] page_pool: allow mixing PPs within
 one bulk

Very nice in general. I'll apply the previous 8 patches, but I'd like
to offer some alternatives here...

On Tue,  3 Dec 2024 18:37:32 +0100 Alexander Lobakin wrote:
> +void page_pool_put_netmem_bulk(netmem_ref *data, u32 count)
>  {
> -	int i, bulk_len = 0;
> -	bool allow_direct;
> -	bool in_softirq;
> +	bool allow_direct, in_softirq, again = false;
> +	netmem_ref bulk[XDP_BULK_QUEUE_SIZE];
> +	u32 i, bulk_len, foreign;
> +	struct page_pool *pool;
>  
> -	allow_direct = page_pool_napi_local(pool);
> +again:
> +	pool = NULL;
> +	bulk_len = 0;
> +	foreign = 0;
>  
>  	for (i = 0; i < count; i++) {
> -		netmem_ref netmem = netmem_compound_head(data[i]);
> +		struct page_pool *netmem_pp;
> +		netmem_ref netmem;
> +
> +		if (!again) {
> +			netmem = netmem_compound_head(data[i]);
>  
> -		/* It is not the last user for the page frag case */
> -		if (!page_pool_is_last_ref(netmem))
> +			/* It is not the last user for the page frag case */
> +			if (!page_pool_is_last_ref(netmem))
> +				continue;

We check the "again" condition potentially n^2 times, is it written
this way because we expect no mixing? Would it not be fewer cycles
to do a first pass, convert all buffers to heads, filter out all
non-last refs, and delete the "again" check?

A minor benefit is that it removes a few of the long lines, making it
feasible to drop the "goto again" as well and just turn this function
into a while (count) loop.
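
Completely untested sketch of what I have in mind (the flush of bulk[]
into the pool's ptr_ring stays as in your patch, elided here):

void page_pool_put_netmem_bulk(netmem_ref *data, u32 count)
{
	u32 i, kept = 0;

	/* Pass 1: convert everything to heads, keep last refs only */
	for (i = 0; i < count; i++) {
		netmem_ref netmem = netmem_compound_head(data[i]);

		if (page_pool_is_last_ref(netmem))
			data[kept++] = netmem;
	}
	count = kept;

	while (count) {
		netmem_ref bulk[XDP_BULK_QUEUE_SIZE];
		u32 bulk_len = 0, foreign = 0;
		struct page_pool *pool = NULL;
		bool allow_direct;

		for (i = 0; i < count; i++) {
			netmem_ref netmem = data[i];
			struct page_pool *pp = netmem_get_pp(netmem);

			if (!pool) {
				pool = pp;
				allow_direct = page_pool_napi_local(pool);
			} else if (pp != pool) {
				/* Different pool, save for the next round */
				data[foreign++] = netmem;
				continue;
			}

			netmem = __page_pool_put_page(pool, netmem, -1,
						      allow_direct);
			/* Approved for bulk recycling in ptr_ring cache */
			if (netmem)
				bulk[bulk_len++] = netmem;
		}

		/* ... flush bulk[] into pool's ptr_ring as before ... */

		count = foreign;
	}
}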

> +		} else {
> +			netmem = data[i];
> +		}
> +
> +		netmem_pp = netmem_get_pp(netmem);

nit: netmem_pp is not a great name. Ain't nothing especially netmem
about it, it's just the _current_ page pool.
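
E.g. plain "pp" would do:

	struct page_pool *pp = netmem_get_pp(netmem);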

> +		if (unlikely(!pool)) {
> +			pool = netmem_pp;
> +			allow_direct = page_pool_napi_local(pool);
> +		} else if (netmem_pp != pool) {
> +			/*
> +			 * If the netmem belongs to a different page_pool, save
> +			 * it for another round after the main loop.
> +			 */
> +			data[foreign++] = netmem;
>  			continue;
> +		}
>  
>  		netmem = __page_pool_put_page(pool, netmem, -1, allow_direct);
>  		/* Approved for bulk recycling in ptr_ring cache */
>  		if (netmem)
> -			data[bulk_len++] = netmem;
> +			bulk[bulk_len++] = netmem;
>  	}
>  
>  	if (!bulk_len)

You can invert this condition and move all the code from here down to
the out label into a small helper with just 3 params (pool, bulk,
bulk_len). Naming will be the tricky part, but it'd save us a bunch of
gotos.
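
Untested, and the helper name is just a strawman, but something like:

static void page_pool_recycle_ring_bulk(struct page_pool *pool,
					netmem_ref *bulk, u32 bulk_len)
{
	bool in_softirq;
	u32 i;

	/* Bulk producer into ptr_ring page_pool cache */
	in_softirq = page_pool_producer_lock(pool);
	for (i = 0; i < bulk_len; i++) {
		if (__ptr_ring_produce(&pool->ring, (__force void *)bulk[i])) {
			/* ring full */
			recycle_stat_inc(pool, ring_full);
			break;
		}
	}
	page_pool_producer_unlock(pool, in_softirq);

	/* ptr_ring cache full, free remaining pages outside producer lock
	 * since put_page() with refcnt == 1 can be an expensive operation
	 */
	for (; i < bulk_len; i++)
		page_pool_return_page(pool, bulk[i]);
}

with the main loop then ending in:

	if (bulk_len)
		page_pool_recycle_ring_bulk(pool, bulk, bulk_len);

right before the foreign page handling, no gotos needed.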

> -		return;
> +		goto out;
>  
>  	/* Bulk producer into ptr_ring page_pool cache */
>  	in_softirq = page_pool_producer_lock(pool);
>  	for (i = 0; i < bulk_len; i++) {
> -		if (__ptr_ring_produce(&pool->ring, (__force void *)data[i])) {
> +		if (__ptr_ring_produce(&pool->ring, (__force void *)bulk[i])) {
>  			/* ring full */
>  			recycle_stat_inc(pool, ring_full);
>  			break;
> @@ -893,13 +915,22 @@ void page_pool_put_netmem_bulk(struct page_pool *pool, netmem_ref *data,
>  
>  	/* Hopefully all pages were returned into ptr_ring */
>  	if (likely(i == bulk_len))
> -		return;
> +		goto out;
>  
>  	/* ptr_ring cache full, free remaining pages outside producer lock
>  	 * since put_page() with refcnt == 1 can be an expensive operation
>  	 */
>  	for (; i < bulk_len; i++)
> -		page_pool_return_page(pool, data[i]);
> +		page_pool_return_page(pool, bulk[i]);
> +
> +out:
> +	if (!foreign)
> +		return;
> +
> +	count = foreign;
> +	again = true;
> +
> +	goto again;
>  }
>  EXPORT_SYMBOL(page_pool_put_netmem_bulk);
