Message-ID: <caed62ce-242e-a5d6-eb87-88f270f48032@linux.alibaba.com>
Date:   Fri, 29 Jun 2018 09:45:01 -0700
From:   Yang Shi <yang.shi@...ux.alibaba.com>
To:     Michal Hocko <mhocko@...nel.org>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Nadav Amit <nadav.amit@...il.com>,
        Matthew Wilcox <willy@...radead.org>,
        ldufour@...ux.vnet.ibm.com,
        Andrew Morton <akpm@...ux-foundation.org>,
        Ingo Molnar <mingo@...hat.com>, acme@...nel.org,
        alexander.shishkin@...ux.intel.com, jolsa@...hat.com,
        namhyung@...nel.org,
        "open list:MEMORY MANAGEMENT" <linux-mm@...ck.org>,
        linux-kernel@...r.kernel.org
Subject: Re: [RFC v2 PATCH 2/2] mm: mmap: zap pages with read mmap_sem for
 large mapping



On 6/29/18 4:34 AM, Michal Hocko wrote:
> On Thu 28-06-18 12:10:10, Yang Shi wrote:
>>
>> On 6/28/18 4:51 AM, Michal Hocko wrote:
>>> On Wed 27-06-18 10:23:39, Yang Shi wrote:
>>>> On 6/27/18 12:24 AM, Michal Hocko wrote:
>>>>> On Tue 26-06-18 18:03:34, Yang Shi wrote:
>>>>>> On 6/26/18 12:43 AM, Peter Zijlstra wrote:
>>>>>>> On Mon, Jun 25, 2018 at 05:06:23PM -0700, Yang Shi wrote:
>>>>>>>> By looking at this more deeply, we may not be able to cover the whole
>>>>>>>> unmapped range with VM_DEAD, for example, if the start addr is in the
>>>>>>>> middle of a vma. We can't set VM_DEAD on that vma since that would
>>>>>>>> trigger SIGSEGV for the still-mapped area.
>>>>>>>>
>>>>>>>> Splitting can't be done with the read mmap_sem held, so maybe just set
>>>>>>>> VM_DEAD on the non-overlapped vmas. Access to the overlapped vmas
>>>>>>>> (first and last) will still have undefined behavior.
>>>>>>> Acquire mmap_sem for writing, split, mark VM_DEAD, drop mmap_sem. Acquire
>>>>>>> mmap_sem for reading, madv_free, drop mmap_sem. Acquire mmap_sem for
>>>>>>> writing, free everything left, drop mmap_sem.
>>>>>>>
>>>>>>> ?
>>>>>>>
>>>>>>> Sure, you acquire the lock 3 times, but both write instances should be
>>>>>>> 'short', and I suppose you can do a demote between 1 and 2 if you care.
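
If I read this right, the flow is roughly the below. This is just a sketch
to make sure we are talking about the same thing; the function name and
what exactly happens in each phase are placeholders, and phases 1 and 2
could be fused with a downgrade_write() as you say:

#include <linux/mm.h>

/* Rough sketch of the three-phase scheme, not the actual patch. */
static int munmap_huge_range(struct mm_struct *mm, unsigned long start,
			     unsigned long len)
{
	/* Phase 1: short write-locked section: split, mark VM_DEAD. */
	if (down_write_killable(&mm->mmap_sem))
		return -EINTR;
	/* ... split the boundary vmas, set VM_DEAD on the covered ones ... */
	up_write(&mm->mmap_sem);

	/* Phase 2: the expensive zap with only the read lock held. */
	down_read(&mm->mmap_sem);
	/* ... zap the page ranges of the VM_DEAD vmas, MADV_FREE style ... */
	up_read(&mm->mmap_sem);

	/* Phase 3: short write-locked section: free everything left. */
	if (down_write_killable(&mm->mmap_sem))
		return -EINTR;
	/* ... detach the vmas, free the page tables, do the accounting ... */
	up_write(&mm->mmap_sem);

	return 0;
}
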
>>>>>> Thanks, Peter. Yes, after looking at the code and trying two different
>>>>>> approaches, it looks like this approach is the most straightforward one.
>>>>> Yes, you just have to be careful about the max vma count limit.
>>>> Yes, we just need to copy what do_munmap does, as below:
>>>>
>>>> if (end < vma->vm_end && mm->map_count >= sysctl_max_map_count)
>>>>               return -ENOMEM;
>>>>
>>>> If the max map count limit has been reached, it will return failure
>>>> before zapping the mappings.
>>> Yeah, but as soon as you drop the lock and retake it, somebody might
>>> have changed the address space and we might get an inconsistency.
>>>
>>> So I am wondering whether we really need upgrade_read (to promote read
>>> to write lock) and do the
>>> 	down_write
>>> 	split & set up VM_DEAD
>>> 	downgrade_write
>>> 	unmap
>>> 	upgrade_read
>>> 	zap ptes
>>> 	up_write
>> I suppose the address space can only be changed by mmap, mremap and
>> mprotect. If so, we may utilize the new VM_DEAD flag: if the VM_DEAD flag
>> is set for the vma, just return failure since it is being unmapped.
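
To be concrete, the check I have in mind is roughly the below fragment
(illustrative only; the exact error code and where the check sits in each
path are still open):

	/*
	 * In the mmap/mremap/mprotect style paths, after the vma lookup
	 * under mmap_sem, bail out if a concurrent munmap has already
	 * marked the vma with the new VM_DEAD flag.
	 */
	vma = find_vma(mm, start);
	if (vma && (vma->vm_flags & VM_DEAD))
		return -ENOMEM;	/* range is being unmapped */
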
> I am sorry, I do not follow. How does the VM_DEAD flag help for completely
> unrelated vmas? Or maybe it would be better to post the code to see what
> you mean exactly.

I mean we only care about the vmas which have been found/split by this
munmap, right? We have already set VM_DEAD on them. Even if the other
vmas are changed by somebody else, it would not cause any inconsistency
for this munmap call.
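
Since you suggested posting the code: the marking step I have in mind is
roughly the below, modeled on do_munmap's splitting logic (just a sketch;
mark_vmas_dead is a made-up name and error handling is simplified). It
runs under the write mmap_sem before downgrading, and only the vmas fully
covered by [start, end) get VM_DEAD, so the still-mapped head/tail are
never flagged:

#include <linux/mm.h>

static int mark_vmas_dead(struct mm_struct *mm, unsigned long start,
			  unsigned long end)
{
	struct vm_area_struct *vma, *prev, *last, *tmp;

	vma = find_vma(mm, start);
	if (!vma || vma->vm_start >= end)
		return 0;
	prev = vma->vm_prev;

	if (start > vma->vm_start) {
		/* Same map count limit check as do_munmap. */
		if (end < vma->vm_end &&
		    mm->map_count >= sysctl_max_map_count)
			return -ENOMEM;
		if (split_vma(mm, vma, start, 0))
			return -ENOMEM;
		prev = vma;		/* prev keeps the still-mapped head */
	}

	last = find_vma(mm, end);
	if (last && end > last->vm_start) {
		if (split_vma(mm, last, end, 1))
			return -ENOMEM;
	}
	vma = prev ? prev->vm_next : mm->mmap;

	/* Only the fully covered vmas get VM_DEAD. */
	for (tmp = vma; tmp && tmp->vm_start < end; tmp = tmp->vm_next)
		tmp->vm_flags |= VM_DEAD;

	return 0;
}

The zap of the pte ranges would then run with only the read (or
downgraded) mmap_sem held, and the final cleanup retakes the write lock,
as in Peter's scheme above.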
