Message-ID: <6f5e3238-9750-40db-8fe1-88d28655a988@redhat.com>
Date: Fri, 30 May 2025 15:08:06 +0200
From: David Hildenbrand <david@...hat.com>
To: lizhe.67@...edance.com
Cc: akpm@...ux-foundation.org, jgg@...pe.ca, jhubbard@...dia.com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org, muchun.song@...ux.dev,
peterx@...hat.com
Subject: Re: [PATCH] gup: optimize longterm pin_user_pages() for large folio
On 30.05.25 14:20, lizhe.67@...edance.com wrote:
> On Fri, 30 May 2025 13:31:26 +0200, david@...hat.com wrote:
>
>> On 30.05.25 11:23, lizhe.67@...edance.com wrote:
>>> From: Li Zhe <lizhe.67@...edance.com>
>>>
>>> In the current implementation of the longterm pin_user_pages() function,
>>> we invoke the collect_longterm_unpinnable_folios() function. This function
>>> iterates through the list to check whether each folio belongs to the
>>> "longterm_unpinnabled" category. The folios in this list essentially
>>> correspond to a contiguous region of user-space addresses, with each folio
>>> representing a physical address in increments of PAGESIZE. If this
>>> user-space address range is mapped with large folio, we can optimize the
>>> performance of function pin_user_pages() by reducing the number of if-else
>>> branches and the frequency of memory accesses using READ_ONCE. This patch
>>> leverages this approach to achieve performance improvements.
>>>
>>> The performance test results obtained with the gup_test tool from the
>>> kernel source tree are as follows. We achieve an improvement of over 75%
>>> for large folios with a page size of 2M. For normal pages, we observed
>>> only a very slight degradation in performance.
>>>
>>> Without this patch:
>>>
>>> [root@...alhost ~] ./gup_test -HL -m 8192 -n 512
>>> TAP version 13
>>> 1..1
>>> # PIN_LONGTERM_BENCHMARK: Time: get:13623 put:10799 us#
>>> ok 1 ioctl status 0
>>> # Totals: pass:1 fail:0 xfail:0 xpass:0 skip:0 error:0
>>> [root@...alhost ~]# ./gup_test -LT -m 8192 -n 512
>>> TAP version 13
>>> 1..1
>>> # PIN_LONGTERM_BENCHMARK: Time: get:129733 put:31753 us#
>>> ok 1 ioctl status 0
>>> # Totals: pass:1 fail:0 xfail:0 xpass:0 skip:0 error:0
>>>
>>> With this patch:
>>>
>>> [root@...alhost ~] ./gup_test -HL -m 8192 -n 512
>>> TAP version 13
>>> 1..1
>>> # PIN_LONGTERM_BENCHMARK: Time: get:3386 put:10844 us#
>>> ok 1 ioctl status 0
>>> # Totals: pass:1 fail:0 xfail:0 xpass:0 skip:0 error:0
>>> [root@...alhost ~]# ./gup_test -LT -m 8192 -n 512
>>> TAP version 13
>>> 1..1
>>> # PIN_LONGTERM_BENCHMARK: Time: get:131652 put:31393 us#
>>> ok 1 ioctl status 0
>>> # Totals: pass:1 fail:0 xfail:0 xpass:0 skip:0 error:0
>>>
>>> Signed-off-by: Li Zhe <lizhe.67@...edance.com>
>>> ---
>>> mm/gup.c | 31 +++++++++++++++++++++++--------
>>> 1 file changed, 23 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/mm/gup.c b/mm/gup.c
>>> index 84461d384ae2..8c11418036e2 100644
>>> --- a/mm/gup.c
>>> +++ b/mm/gup.c
>>> @@ -2317,6 +2317,25 @@ static void pofs_unpin(struct pages_or_folios *pofs)
>>>  		unpin_user_pages(pofs->pages, pofs->nr_entries);
>>>  }
>>>
>>> +static struct folio *pofs_next_folio(struct folio *folio,
>>> +		struct pages_or_folios *pofs, long *index_ptr)
>>> +{
>>> +	long i = *index_ptr + 1;
>>> +	unsigned long nr_pages = folio_nr_pages(folio);
>>> +
>>> +	if (!pofs->has_folios)
>>> +		while ((i < pofs->nr_entries) &&
>>> +		       /* Is this page part of this folio? */
>>> +		       (folio_page_idx(folio, pofs->pages[i]) < nr_pages))
>>
>> Passing in a page that does not belong to the folio looks shaky and not
>> future-proof.
>>
>> folio_page() == folio
>>
>> is cleaner
>
> Yes, this approach is cleaner. However, when obtaining a folio
> corresponding to a page through the page_folio() interface,
Right, I meant page_folio().
> READ_ONCE() is used internally to read from memory, which results
> in the performance of pin_user_pages() being worse than before.
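Right -- page_folio() has to READ_ONCE() the page's compound_head to find
the owning folio, so doing it once per entry costs an extra memory access.
For reference, the two checks side by side (illustrative sketch only, the
local names are made up rather than taken from the patch):

	/*
	 * Check as posted: compute the page's index relative to a folio it
	 * may not even belong to, then compare against the folio size.
	 * folio_page_idx() is pointer/pfn arithmetic and does not read any
	 * fields of the page's struct page.
	 */
	part_of_folio = folio_page_idx(folio, pofs->pages[i]) < nr_pages;

	/*
	 * Folio-based check: semantically obvious, but page_folio() does
	 * the READ_ONCE() mentioned above for every entry.
	 */
	part_of_folio = page_folio(pofs->pages[i]) == folio;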
See contig_pages in [1] for how it can be done using folio_page().
[1]
https://lore.kernel.org/all/20250529064947.38433-1-lizhe.67@bytedance.com/T/#u
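Roughly, the folio_page() variant boils down to something like the
following (a sketch, not the exact code in [1]; the helper name is made
up):

	/*
	 * Does "page" continue the current folio at index "idx"?
	 * folio_page() only does arithmetic starting from the folio's
	 * first page, so there is no per-entry read of
	 * page->compound_head.
	 */
	static inline bool pofs_page_continues_folio(struct folio *folio,
			struct page *page, unsigned long idx)
	{
		return idx < folio_nr_pages(folio) &&
		       page == folio_page(folio, idx);
	}

The caller just keeps a running index into the folio and bumps it by one
for each entry that matches.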
--
Cheers,
David / dhildenb