Message-ID: <722b084a-872b-4694-963d-241d00c63487@redhat.com>
Date: Fri, 30 May 2025 22:37:00 +0200
From: David Hildenbrand <david@...hat.com>
To: lizhe.67@...edance.com
Cc: akpm@...ux-foundation.org, jgg@...pe.ca, jhubbard@...dia.com,
 linux-kernel@...r.kernel.org, linux-mm@...ck.org, muchun.song@...ux.dev,
 peterx@...hat.com
Subject: Re: [PATCH] gup: optimize longterm pin_user_pages() for large folio

On 30.05.25 17:02, lizhe.67@...edance.com wrote:
> On Fri, 30 May 2025 15:08:06 +0200, david@...hat.com wrote:
> 
>>>>> diff --git a/mm/gup.c b/mm/gup.c
>>>>> index 84461d384ae2..8c11418036e2 100644
>>>>> --- a/mm/gup.c
>>>>> +++ b/mm/gup.c
>>>>> @@ -2317,6 +2317,25 @@ static void pofs_unpin(struct pages_or_folios *pofs)
>>>>>     		unpin_user_pages(pofs->pages, pofs->nr_entries);
>>>>>     }
>>>>>     
>>>>> +static struct folio *pofs_next_folio(struct folio *folio,
>>>>> +				struct pages_or_folios *pofs, long *index_ptr)
>>>>> +{
>>>>> +	long i = *index_ptr + 1;
>>>>> +	unsigned long nr_pages = folio_nr_pages(folio);
>>>>> +
>>>>> +	if (!pofs->has_folios)
>>>>> +		while ((i < pofs->nr_entries) &&
>>>>> +			/* Is this page part of this folio? */
>>>>> +			(folio_page_idx(folio, pofs->pages[i]) < nr_pages))
>>>>
>>>> passing in a page that does not belong to the folio looks shaky and not
>>>> future-proof.
>>>>
>>>> folio_page() == folio
>>>>
>>>> is cleaner
>>>
>>> Yes, this approach is cleaner. However, when obtaining a folio
>>> corresponding to a page through the page_folio() interface,
>>
>> Right, I meant page_folio().
>>
>>> READ_ONCE() is used internally to read from memory, which results
>>> in the performance of pin_user_pages() being worse than before.
>>
>> See contig_pages in [1] how it can be done using folio_page().
>>
>> [1]
>> https://lore.kernel.org/all/20250529064947.38433-1-lizhe.67@bytedance.com/T/#u
> 
> Thank you for your suggestion. It is indeed a good idea. I
> initially thought along the same lines. However, I found that
> the conditions for optimization here are slightly different
> from those in contig_pages(). Here, it is only necessary to
> ensure that the page is within the folio, rather than
> requiring contiguity.

Yes.

> 
> I have made some preliminary attempts: using the contig_pages()
> approach still yields an optimization of approximately 73%. On the
> other hand, if we use the following code to check whether
> page_to_pfn(pofs->pages[i]) falls within the range
> [folio_pfn(folio), folio_pfn(folio) + folio_nr_pages(folio)),
> the optimization is about 70%. I would like to hear your thoughts
> on which solution you favor.
> 
> +static struct folio *pofs_next_folio(struct folio *folio,
> +		struct pages_or_folios *pofs, long *index_ptr)
> +{
> +	long i = *index_ptr + 1;
> +
> +	if (!pofs->has_folios) {
> +		unsigned long start_pfn = folio_pfn(folio);
> +		unsigned long end_pfn = start_pfn + folio_nr_pages(folio);
> +
> +		for (; i < pofs->nr_entries; i++) {
> +			unsigned long pfn = page_to_pfn(pofs->pages[i]);
> +
> +			/* Is this page part of this folio? */
> +			if ((pfn < start_pfn) || (pfn >= end_pfn))
> +				break;

folio_page() is extremely efficient with CONFIG_SPARSEMEM_VMEMMAP.  I am 
not sure how efficient it will be in the future once "struct folio" is 
no longer an overlay of "struct page".

page_to_pfn() should be slightly more expensive than folio_page() right 
now, but it may end up being more efficient in the future.

I don't particularly care, whatever you prefer :)

-- 
Cheers,

David / dhildenb
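
For readers weighing the trade-off discussed above, here is a minimal,
self-contained sketch of the two "does this page belong to this folio?"
checks being compared: the page_folio() identity test versus the
pfn-range test used in the proposed pofs_next_folio(). The helper names
below are hypothetical and only illustrate the existing kernel APIs
(page_folio(), folio_pfn(), folio_nr_pages(), page_to_pfn()); this is
not code from the patch under review.

#include <linux/mm.h>

/*
 * Variant 1: resolve the page's folio and compare identities.
 * page_folio() goes through the compound head, i.e. a READ_ONCE() of
 * page->compound_head, so it reads from the struct page on every call.
 */
static inline bool page_in_folio_by_head(struct folio *folio,
					 struct page *page)
{
	return page_folio(page) == folio;
}

/*
 * Variant 2: pfn-range test. page_to_pfn() is plain arithmetic with
 * CONFIG_SPARSEMEM_VMEMMAP; in the loop from the patch, folio_pfn()
 * and folio_nr_pages() are hoisted so only page_to_pfn() runs per
 * entry. The unsigned subtraction also rejects pfns below the folio's
 * start.
 */
static inline bool page_in_folio_by_pfn(struct folio *folio,
					struct page *page)
{
	return page_to_pfn(page) - folio_pfn(folio) < folio_nr_pages(folio);
}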

