Date:   Thu, 4 Feb 2021 11:27:40 +0000
From:   Joao Martins <joao.m.martins@...cle.com>
To:     John Hubbard <jhubbard@...dia.com>
Cc:     linux-kernel@...r.kernel.org, linux-rdma@...r.kernel.org,
        Andrew Morton <akpm@...ux-foundation.org>,
        Jason Gunthorpe <jgg@...pe.ca>,
        Doug Ledford <dledford@...hat.com>,
        Matthew Wilcox <willy@...radead.org>, linux-mm@...ck.org
Subject: Re: [PATCH 2/4] mm/gup: decrement head page once for group of
 subpages



On 2/3/21 11:28 PM, John Hubbard wrote:
> On 2/3/21 2:00 PM, Joao Martins wrote:
>> Rather than decrementing the head page refcount one by one, we
>> walk the page array and check which pages belong to the same
>> compound_head. Later on we decrement the calculated number
>> of references in a single write to the head page. To that
>> end, switching to for_each_compound_head() does most of the work.
>>
>> set_page_dirty() needs no adjustment as it's a nop for
>> non-dirty head pages and it doesn't operate on tail pages.
>>
>> This considerably improves unpinning of pages with THP and
>> hugetlbfs:
>>
>> - THP
>> gup_test -t -m 16384 -r 10 [-L|-a] -S -n 512 -w
>> PIN_LONGTERM_BENCHMARK (put values): ~87.6k us -> ~23.2k us
>>
>> - 16G with 1G huge page size
>> gup_test -f /mnt/huge/file -m 16384 -r 10 [-L|-a] -S -n 512 -w
>> PIN_LONGTERM_BENCHMARK (put values): ~87.6k us -> ~27.5k us
>>
>> Signed-off-by: Joao Martins <joao.m.martins@...cle.com>
>> ---
>>   mm/gup.c | 29 +++++++++++------------------
>>   1 file changed, 11 insertions(+), 18 deletions(-)
>>
>> diff --git a/mm/gup.c b/mm/gup.c
>> index 4f88dcef39f2..971a24b4b73f 100644
>> --- a/mm/gup.c
>> +++ b/mm/gup.c
>> @@ -270,20 +270,15 @@ void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
>>   				 bool make_dirty)
>>   {
>>   	unsigned long index;
>> -
>> -	/*
>> -	 * TODO: this can be optimized for huge pages: if a series of pages is
>> -	 * physically contiguous and part of the same compound page, then a
>> -	 * single operation to the head page should suffice.
>> -	 */
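
For illustration, here is a minimal sketch of the batching pattern the
commit message describes: scan the page array, count how many consecutive
entries share one compound head, then drop that many references with a
single update on the head page. compound_head() and page_ref_sub() are
existing kernel helpers; release_head_refs() and unpin_pages_batched()
are hypothetical names for this sketch, and the real unpin path
(FOLL_PIN accounting via for_each_compound_head()) is more involved
than shown here.

#include <linux/mm.h>

/* Placeholder release: one refcount write covering all @ntails subpages. */
static void release_head_refs(struct page *head, unsigned int ntails)
{
	page_ref_sub(head, ntails);
}

static void unpin_pages_batched(struct page **pages, unsigned long npages)
{
	unsigned long i = 0;

	while (i < npages) {
		struct page *head = compound_head(pages[i]);
		unsigned int ntails = 1;

		/* Count consecutive entries sharing the same compound head. */
		while (i + ntails < npages &&
		       compound_head(pages[i + ntails]) == head)
			ntails++;

		/* One decrement for the whole span of subpages. */
		release_head_refs(head, ntails);
		i += ntails;
	}
}
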
> 
> Great to see this TODO (and the related one below) finally done!
> 
> Everything looks correct here.
> 
> Reviewed-by: John Hubbard <jhubbard@...dia.com>
> 
Thank you!

	Joao
