Date:   Thu, 31 Oct 2019 16:46:45 -0700
From:   John Hubbard <jhubbard@...dia.com>
To:     Ira Weiny <ira.weiny@...el.com>
CC:     Andrew Morton <akpm@...ux-foundation.org>,
        Al Viro <viro@...iv.linux.org.uk>,
        Alex Williamson <alex.williamson@...hat.com>,
        Benjamin Herrenschmidt <benh@...nel.crashing.org>,
        Björn Töpel <bjorn.topel@...el.com>,
        Christoph Hellwig <hch@...radead.org>,
        Dan Williams <dan.j.williams@...el.com>,
        Daniel Vetter <daniel@...ll.ch>,
        Dave Chinner <david@...morbit.com>,
        David Airlie <airlied@...ux.ie>,
        "David S . Miller" <davem@...emloft.net>, Jan Kara <jack@...e.cz>,
        Jason Gunthorpe <jgg@...pe.ca>, Jens Axboe <axboe@...nel.dk>,
        Jonathan Corbet <corbet@....net>,
        Jérôme Glisse <jglisse@...hat.com>,
        Magnus Karlsson <magnus.karlsson@...el.com>,
        Mauro Carvalho Chehab <mchehab@...nel.org>,
        Michael Ellerman <mpe@...erman.id.au>,
        Michal Hocko <mhocko@...e.com>,
        Mike Kravetz <mike.kravetz@...cle.com>,
        Paul Mackerras <paulus@...ba.org>,
        Shuah Khan <shuah@...nel.org>,
        Vlastimil Babka <vbabka@...e.cz>, <bpf@...r.kernel.org>,
        <dri-devel@...ts.freedesktop.org>, <kvm@...r.kernel.org>,
        <linux-block@...r.kernel.org>, <linux-doc@...r.kernel.org>,
        <linux-fsdevel@...r.kernel.org>, <linux-kselftest@...r.kernel.org>,
        <linux-media@...r.kernel.org>, <linux-rdma@...r.kernel.org>,
        <linuxppc-dev@...ts.ozlabs.org>, <netdev@...r.kernel.org>,
        <linux-mm@...ck.org>, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 08/19] mm/process_vm_access: set FOLL_PIN via
 pin_user_pages_remote()

On 10/31/19 4:35 PM, Ira Weiny wrote:
> On Wed, Oct 30, 2019 at 03:49:19PM -0700, John Hubbard wrote:
>> Convert process_vm_access to use the new pin_user_pages_remote()
>> call, which sets FOLL_PIN. Setting FOLL_PIN is now required for
>> code that requires tracking of pinned pages.
>>
>> Also, release the pages via put_user_page*().
>>
>> Also, rename "pages" to "pinned_pages", as this makes for
>> easier reading of process_vm_rw_single_vec().
> 
> Ok...  but it made review a bit harder...
> 

Yes, sorry about that. After dealing with "pages means struct page *[]"
for all this time, having an "int pages" was just a step too far for
me here. :)

Thanks for working through it. 


thanks,

John Hubbard
NVIDIA



> Reviewed-by: Ira Weiny <ira.weiny@...el.com>
> 


>>
>> Signed-off-by: John Hubbard <jhubbard@...dia.com>
>> ---
>>  mm/process_vm_access.c | 28 +++++++++++++++-------------
>>  1 file changed, 15 insertions(+), 13 deletions(-)
>>
>> diff --git a/mm/process_vm_access.c b/mm/process_vm_access.c
>> index 357aa7bef6c0..fd20ab675b85 100644
>> --- a/mm/process_vm_access.c
>> +++ b/mm/process_vm_access.c
>> @@ -42,12 +42,11 @@ static int process_vm_rw_pages(struct page **pages,
>>  		if (copy > len)
>>  			copy = len;
>>  
>> -		if (vm_write) {
>> +		if (vm_write)
>>  			copied = copy_page_from_iter(page, offset, copy, iter);
>> -			set_page_dirty_lock(page);
>> -		} else {
>> +		else
>>  			copied = copy_page_to_iter(page, offset, copy, iter);
>> -		}
>> +
>>  		len -= copied;
>>  		if (copied < copy && iov_iter_count(iter))
>>  			return -EFAULT;
>> @@ -96,7 +95,7 @@ static int process_vm_rw_single_vec(unsigned long addr,
>>  		flags |= FOLL_WRITE;
>>  
>>  	while (!rc && nr_pages && iov_iter_count(iter)) {
>> -		int pages = min(nr_pages, max_pages_per_loop);
>> +		int pinned_pages = min(nr_pages, max_pages_per_loop);
>>  		int locked = 1;
>>  		size_t bytes;
>>  
>> @@ -106,14 +105,15 @@ static int process_vm_rw_single_vec(unsigned long addr,
>>  		 * current/current->mm
>>  		 */
>>  		down_read(&mm->mmap_sem);
>> -		pages = get_user_pages_remote(task, mm, pa, pages, flags,
>> -					      process_pages, NULL, &locked);
>> +		pinned_pages = pin_user_pages_remote(task, mm, pa, pinned_pages,
>> +						     flags, process_pages,
>> +						     NULL, &locked);
>>  		if (locked)
>>  			up_read(&mm->mmap_sem);
>> -		if (pages <= 0)
>> +		if (pinned_pages <= 0)
>>  			return -EFAULT;
>>  
>> -		bytes = pages * PAGE_SIZE - start_offset;
>> +		bytes = pinned_pages * PAGE_SIZE - start_offset;
>>  		if (bytes > len)
>>  			bytes = len;
>>  
>> @@ -122,10 +122,12 @@ static int process_vm_rw_single_vec(unsigned long addr,
>>  					 vm_write);
>>  		len -= bytes;
>>  		start_offset = 0;
>> -		nr_pages -= pages;
>> -		pa += pages * PAGE_SIZE;
>> -		while (pages)
>> -			put_page(process_pages[--pages]);
>> +		nr_pages -= pinned_pages;
>> +		pa += pinned_pages * PAGE_SIZE;
>> +
>> +		/* If vm_write is set, the pages need to be made dirty: */
>> +		put_user_pages_dirty_lock(process_pages, pinned_pages,
>> +					  vm_write);
>>  	}
>>  
>>  	return rc;
>> -- 
>> 2.23.0
>>
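For readers following the API change, the pin/unpin pairing that this patch
switches process_vm_rw_single_vec() over to looks roughly like the sketch
below. Only pin_user_pages_remote(), put_user_pages_dirty_lock() and the
mmap_sem/locked handling are taken from the patch itself; the function name
and other surrounding details are illustrative placeholders, not code from
the kernel tree.

    /*
     * Sketch of the pin -> use -> release pattern established by this
     * patch (illustrative only).
     */
    static int pin_copy_release(struct task_struct *task, struct mm_struct *mm,
                                unsigned long start, unsigned long nr_pages,
                                unsigned int gup_flags, struct page **pages,
                                bool vm_write)
    {
            int locked = 1;
            int pinned;

            down_read(&mm->mmap_sem);
            pinned = pin_user_pages_remote(task, mm, start, nr_pages,
                                           gup_flags, pages, NULL, &locked);
            if (locked)
                    up_read(&mm->mmap_sem);
            if (pinned <= 0)
                    return -EFAULT;

            /* ... copy data to/from the pinned pages here ... */

            /*
             * Pages pinned via pin_user_pages_remote() (FOLL_PIN) must be
             * released with a put_user_page*() call rather than put_page().
             * Passing make_dirty == vm_write also marks them dirty after a
             * write, replacing the old per-page set_page_dirty_lock() call.
             */
            put_user_pages_dirty_lock(pages, pinned, vm_write);
            return 0;
    }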
