Message-ID: <72557537-3d64-7082-11f7-d70b41f7d0e6@nvidia.com>
Date:   Fri, 10 Jul 2020 09:19:56 -0700
From:   Ralph Campbell <rcampbell@...dia.com>
To:     <bharata@...ux.ibm.com>
CC:     <linux-mm@...ck.org>, <kvm-ppc@...r.kernel.org>,
        <linux-kselftest@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
        "Jerome Glisse" <jglisse@...hat.com>,
        John Hubbard <jhubbard@...dia.com>,
        "Christoph Hellwig" <hch@....de>,
        Jason Gunthorpe <jgg@...lanox.com>,
        Shuah Khan <shuah@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 1/2] mm/migrate: optimize migrate_vma_setup() for holes


On 7/9/20 11:35 PM, Bharata B Rao wrote:
> On Thu, Jul 09, 2020 at 09:57:10AM -0700, Ralph Campbell wrote:
>> When migrating system memory to device private memory, if the source
>> address range is a valid VMA but an address in it has no page mapped or
>> maps the zero page, the source PFN array entry is marked as migrating
>> but with no PFN. This lets the device driver allocate device private
>> memory and clear it, then insert the new device private struct page into
>> the CPU's page tables when migrate_vma_pages() is called. However,
>> migrate_vma_pages() only inserts the new page if the VMA is anonymous,
>> so there is no point in telling the device driver to allocate device
>> private memory for a page that will never be migrated. Instead, mark
>> such source PFN array entries as not migrating to avoid this overhead.
>>
>> Signed-off-by: Ralph Campbell <rcampbell@...dia.com>
>> ---
>>   mm/migrate.c | 6 +++++-
>>   1 file changed, 5 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index b0125c082549..8aa434691577 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -2204,9 +2204,13 @@ static int migrate_vma_collect_hole(unsigned long start,
>>   {
>>   	struct migrate_vma *migrate = walk->private;
>>   	unsigned long addr;
>> +	unsigned long flags;
>> +
>> +	/* Only allow populating anonymous memory. */
>> +	flags = vma_is_anonymous(walk->vma) ? MIGRATE_PFN_MIGRATE : 0;
>>   
>>   	for (addr = start; addr < end; addr += PAGE_SIZE) {
>> -		migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE;
>> +		migrate->src[migrate->npages] = flags;
> 
> I see a few other such cases where we directly populate MIGRATE_PFN_MIGRATE
> without a pfn in migrate_vma_collect_pmd(), and I wonder whether the
> vma_is_anonymous() check couldn't help there as well:
> 
> 1. pte_none() check in migrate_vma_collect_pmd()
> 2. is_zero_pfn() check in migrate_vma_collect_pmd()
> 
> Regards,
> Bharata.

For case 1, this seems like a useful addition.
For case 2, the zero page is only inserted if the VMA is read-only and
anonymous, so I don't think the check is needed there.
I'll post a v2 with the change for case 1.
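
Roughly, the pte_none() case in migrate_vma_collect_pmd() could get the same
treatment as the hole path. This is only a sketch, assuming the existing
locals in that function (ptep, mpfn, migrate); the actual v2 may differ:

	pte = *ptep;

	if (pte_none(pte)) {
		/*
		 * As in migrate_vma_collect_hole(): only mark the empty
		 * entry as migrating when the VMA is anonymous, since
		 * migrate_vma_pages() will not insert the new device
		 * private page into any other kind of VMA.
		 */
		if (vma_is_anonymous(walk->vma)) {
			mpfn = MIGRATE_PFN_MIGRATE;
			migrate->cpages++;
		}
		goto next;
	}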

Thanks for the suggestions!
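
For context, here is a rough, driver-agnostic sketch of the flow the patch
description is talking about. It is illustration only: alloc_device_page()
and NPAGES are made up, and mmap_sem locking, data copying, and most error
handling are omitted. Without the change, a hole in a non-anonymous VMA still
shows up below as MIGRATE_PFN_MIGRATE with no source PFN, so the driver
allocates and clears a device page that migrate_vma_pages() then refuses to
insert.

	unsigned long src[NPAGES], dst[NPAGES];
	struct migrate_vma args = {
		.vma	= vma,
		.start	= start,
		.end	= start + NPAGES * PAGE_SIZE,
		.src	= src,
		.dst	= dst,
	};
	unsigned long i;
	int ret;

	/* Collect and isolate the source pages (mmap_sem held for read). */
	ret = migrate_vma_setup(&args);
	if (ret)
		return ret;

	for (i = 0; i < args.npages; i++) {
		struct page *dpage;

		args.dst[i] = 0;

		/* Entries not marked for migration are skipped cheaply... */
		if (!(args.src[i] & MIGRATE_PFN_MIGRATE))
			continue;

		/*
		 * ...but a hole in a non-anonymous VMA used to reach this
		 * point, wasting the allocation below since the new page
		 * is never inserted.
		 */
		dpage = alloc_device_page();	/* hypothetical driver helper */
		if (!dpage)
			continue;
		lock_page(dpage);
		args.dst[i] = migrate_pfn(page_to_pfn(dpage)) |
			      MIGRATE_PFN_LOCKED;
		/* Copy or clear the device page before migrate_vma_pages(). */
	}

	migrate_vma_pages(&args);
	migrate_vma_finalize(&args);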
