Message-ID: <292c98e5-e58e-3474-f214-44fe6a83e6c5@huawei.com>
Date: Wed, 17 Apr 2024 21:23:06 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Alexander H Duyck <alexander.duyck@...il.com>, <davem@...emloft.net>,
	<kuba@...nel.org>, <pabeni@...hat.com>
CC: <netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>, Andrew Morton
	<akpm@...ux-foundation.org>, <linux-mm@...ck.org>
Subject: Re: [PATCH net-next v2 10/15] mm: page_frag: reuse existing bit field
 of 'va' for pagecnt_bias

On 2024/4/17 0:33, Alexander H Duyck wrote:
> On Mon, 2024-04-15 at 21:19 +0800, Yunsheng Lin wrote:
>> As 'va' is always aligned to the order of the page allocated, we
>> can reuse its LSB bits for the pagecount bias and remove the space
>> originally needed by 'pagecnt_bias'. Also limit 'fragsz' to be at
>> least the size of 'unsigned int' to match the limited pagecnt_bias.
>>
>> Signed-off-by: Yunsheng Lin <linyunsheng@...wei.com>
> 
> What is the point of this? You are trading off safety for space on a
> data structure that is only something like 24B in size and only
> allocated a few times.

As we are going to replace page_frag with page_frag_cache in patch 13,
it will no longer be allocated only a few times as mentioned above.

> 
>> ---
>>  include/linux/page_frag_cache.h | 20 +++++++----
>>  mm/page_frag_cache.c            | 63 +++++++++++++++++++--------------
>>  2 files changed, 50 insertions(+), 33 deletions(-)
>>
>> diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
>> index 40a7d6da9ef0..a97a1ac017d6 100644
>> --- a/include/linux/page_frag_cache.h
>> +++ b/include/linux/page_frag_cache.h
>> @@ -9,7 +9,18 @@
>>  #define PAGE_FRAG_CACHE_MAX_ORDER	get_order(PAGE_FRAG_CACHE_MAX_SIZE)
>>  
>>  struct page_frag_cache {
>> -	void *va;
>> +	union {
>> +		void *va;
>> +		/* we maintain a pagecount bias, so that we dont dirty cache
>> +		 * line containing page->_refcount every time we allocate a
>> +		 * fragment. As 'va' is always aligned with the order of the
>> +		 * page allocated, we can reuse the LSB bits for the pagecount
>> +		 * bias, and its bit width happens to be indicated by the
>> +		 * 'size_mask' below.
>> +		 */
>> +		unsigned long pagecnt_bias;
>> +
>> +	};
> 
> Both va and pagecnt_bias are frequently accessed items. If pagecnt_bias
> somehow ends up exceeding the alignment of the page we run the risk of
> corrupting data or creating a page fault.
> 
> In my opinion this is not worth the risk, especially since, combined
> with the previous change, your new change results in 0 size savings on
> 64b systems as the structure will be aligned to the size of the pointer.

But aren't we avoiding the use of an extra register and an extra load
by reusing the lower bits of 'va' on 64b systems? An added benefit is
the memory saving on 32b systems, as mentioned in the previous patch.
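
To make that concrete, here is a minimal sketch of the encoding
(illustrative only, assuming an order-0 page for simplicity; the
helper names below are hypothetical, not the ones used in the patch):

	/* With 'va' aligned to the order of the allocated page, its
	 * low bits are guaranteed to be zero, so they can carry the
	 * bias without a separate field.
	 */
	#define ENCODED_BIAS_MASK	(PAGE_SIZE - 1)

	static inline void *encode_va(void *page_addr, unsigned long bias)
	{
		return (void *)((unsigned long)page_addr |
				(bias & ENCODED_BIAS_MASK));
	}

	static inline unsigned long encoded_bias(void *va)
	{
		return (unsigned long)va & ENCODED_BIAS_MASK;
	}

	static inline void *encoded_addr(void *va)
	{
		return (void *)((unsigned long)va & ~ENCODED_BIAS_MASK);
	}

Decrementing the bias then only touches the already-loaded 'va' word,
so the separate load of 'pagecnt_bias' goes away.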

> 
>>  #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
>>  	__u16 offset;
>>  	__u16 size_mask:15;
>> @@ -18,10 +29,6 @@ struct page_frag_cache {
>>  	__u32 offset:31;
>>  	__u32 pfmemalloc:1;
>>  #endif
>> -	/* we maintain a pagecount bias, so that we dont dirty cache line
>> -	 * containing page->_refcount every time we allocate a fragment.
>> -	 */
>> -	unsigned int		pagecnt_bias;
>>  };
>>  
>>  static inline void page_frag_cache_init(struct page_frag_cache *nc)
>> @@ -56,7 +63,8 @@ static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc,
>>  					     gfp_t gfp_mask,
>>  					     unsigned int align)
>>  {
>> -	WARN_ON_ONCE(!is_power_of_2(align) || align >= PAGE_SIZE);
>> +	WARN_ON_ONCE(!is_power_of_2(align) || align >= PAGE_SIZE ||
>> +		     fragsz < sizeof(unsigned int));
> 
> What is the reason for this change? Seems like it is to account for an
> issue somewhere.

If fragsz is as small as one byte, we might not have enough
pagecnt_bias for it, as we are now using only the lower bits of 'va'
to hold the bias.
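
As a back-of-the-envelope check (assuming 4K order-0 pages, numbers
for illustration only):

	unsigned long max_frags = PAGE_SIZE / fragsz;	/* 4096 if fragsz == 1 */
	unsigned long max_bias = 1UL << PAGE_SHIFT;	/* 4096: no headroom */

With fragsz == 1 the fragment count can reach the bias limit exactly,
while fragsz >= sizeof(unsigned int) caps the worst case at
PAGE_SIZE / 4 == 1024, well within the available bit width. That is
what the extended WARN_ON_ONCE() enforces.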

> 
>>  
>>  	return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, align);
