Message-ID: <20191129083002.GA1669@richard>
Date:   Fri, 29 Nov 2019 16:30:02 +0800
From:   Wei Yang <richardw.yang@...ux.intel.com>
To:     Matthew Wilcox <willy@...radead.org>
Cc:     Wei Yang <richard.weiyang@...il.com>,
        "Kirill A. Shutemov" <kirill@...temov.name>,
        Wei Yang <richardw.yang@...ux.intel.com>,
        akpm@...ux-foundation.org, kirill.shutemov@...ux.intel.com,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] mm/page_vma_mapped: page table boundary is already
 guaranteed

On Thu, Nov 28, 2019 at 02:39:04PM -0800, Matthew Wilcox wrote:
>On Thu, Nov 28, 2019 at 09:09:45PM +0000, Wei Yang wrote:
>> On Thu, Nov 28, 2019 at 11:31:43AM +0300, Kirill A. Shutemov wrote:
>> >On Thu, Nov 28, 2019 at 09:03:21AM +0800, Wei Yang wrote:
>> >> The check here is to guarantee that the pvmw->address iteration stays
>> >> within one page table boundary. To be specific, here the address range
>> >> should be within one PMD_SIZE.
>> >> 
>> >> If my understanding is correct, this is already covered by the above
>> >> check:
>> >> 
>> >>     address >= __vma_address(page, vma) + PMD_SIZE
>> >> 
>> >> The boundary check here seems unnecessary.
>> >> 
>> >> Signed-off-by: Wei Yang <richardw.yang@...ux.intel.com>
>> >
>> >NAK.
>> >
>> >THP can be mapped with PTE not aligned to PMD_SIZE. Consider mremap().
>> >
>> 
>> Hi, Kirill
>> 
>> Thanks for your comment during the Thanksgiving holiday. Happy holiday :-)
>> 
>> I didn't think about this case before, thanks for the reminder. Then I
>> tried to understand your concern.
>> 
>> mremap() would expand/shrink a memory mapping. In this case, shrinking is
>> probably the concern. Since pvmw->page and pvmw->vma are not changed in
>> the loop, the case you mentioned may be that pvmw->page is the head of a
>> THP but part of it is unmapped.
>
>mremap() can also move a mapping, see MREMAP_FIXED.

Hi, Matthew

Thanks for your comment.

I took a look at the MREMAP_FIXED case, but I am still not clear in which
case it falls into the situation Kirill mentioned.

Per my understanding, moving a mapping is achieved in two steps:

    * unmap some range in old vma if old_len >= new_len
    * move vma

If the length doesn't change, we expect to end up with a "copy" of the old
vma. This doesn't change the THP PMD mapping.

So the change still happens in the unmap step, if I am correct.

Would you mind giving me more hints on the case where we would hit the
situation Kirill mentioned?
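
For concreteness, the call I have in mind looks like the sketch below. The
addresses and sizes are made up for illustration and error handling is
skipped; I am not sure whether this is the case you and Kirill have in mind:

    #define _GNU_SOURCE
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 4UL << 20;     /* two PMD-sized (2MB) chunks */

        /* Anonymous mapping; ask for THP and fault it in. */
        char *old = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        madvise(old, len, MADV_HUGEPAGE);
        memset(old, 0, len);

        /*
         * Move the first 2MB to a fixed destination that is only
         * 4KB-aligned, not PMD-aligned (address is made up).  If I read
         * the code correctly, the huge PMD cannot be moved as-is here,
         * so the THP ends up mapped by PTEs at an address that is not
         * aligned to PMD_SIZE.
         */
        void *dst = mremap(old, 2UL << 20, 2UL << 20,
                           MREMAP_MAYMOVE | MREMAP_FIXED,
                           (void *)0x700000001000UL);
        (void)dst;
        return 0;
    }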

>
>> This means the following conditions stand:
>> 
>>     vma->vm_start <= vma_address(page) 
>>     vma->vm_end <=   vma_address(page) + page_size(page)
>> 
>> Since we have checked address against vm_end, do you think this case is
>> also guarded?
>> 
>> I am not sure my understanding is correct; I look forward to your comments.
>> 
>> >> Test:
>> >>    more than 48 hours of kernel build testing shows this code is not touched.
>> >
>> >Not an argument. I doubt mremap(2) is ever called in a kernel build
>> >workload.
>> >
>> >-- 
>> > Kirill A. Shutemov
>> 
>> -- 
>> Wei Yang
>> Help you, Help me
>> 

-- 
Wei Yang
Help you, Help me
