Message-ID: <3d9a9eba-97a4-f317-5ee9-369349c108df@linux.alibaba.com>
Date:   Tue, 7 Aug 2018 11:06:01 -0700
From:   Yang Shi <yang.shi@...ux.alibaba.com>
To:     Vlastimil Babka <vbabka@...e.cz>, mhocko@...nel.org,
        willy@...radead.org, ldufour@...ux.vnet.ibm.com,
        kirill@...temov.name, akpm@...ux-foundation.org
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC v6 PATCH 1/2] mm: refactor do_munmap() to extract the common
 part



On 8/7/18 7:59 AM, Vlastimil Babka wrote:
> On 07/26/2018 08:10 PM, Yang Shi wrote:
>> Introduces three new helper functions:
>>    * munmap_addr_sanity()
>>    * munmap_lookup_vma()
>>    * munmap_mlock_vma()
>>
>> They will be used by do_munmap() and by the new do_munmap variant,
>> introduced in the later patch, that zaps large mappings early.
>>
>> There is no functional change, just code refactoring.
>>
>> Reviewed-by: Laurent Dufour <ldufour@...ux.vnet.ibm.com>
>> Signed-off-by: Yang Shi <yang.shi@...ux.alibaba.com>
>> ---
>>   mm/mmap.c | 120 ++++++++++++++++++++++++++++++++++++++++++--------------------
>>   1 file changed, 82 insertions(+), 38 deletions(-)
>>
>> diff --git a/mm/mmap.c b/mm/mmap.c
>> index d1eb87e..2504094 100644
>> --- a/mm/mmap.c
>> +++ b/mm/mmap.c
>> @@ -2686,34 +2686,44 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
>>   	return __split_vma(mm, vma, addr, new_below);
>>   }
>>   
>> -/* Munmap is split into 2 main parts -- this part which finds
>> - * what needs doing, and the areas themselves, which do the
>> - * work.  This now handles partial unmappings.
>> - * Jeremy Fitzhardinge <jeremy@...p.org>
>> - */
>> -int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
>> -	      struct list_head *uf)
>> +static inline bool munmap_addr_sanity(unsigned long start, size_t len)
> Since it's returning bool, the proper naming scheme would be something
> like "munmap_addr_ok()". I'm not sure what I would replace the "munmap_"
> prefix with myself, though.

OK, thanks for the suggestion.
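
For example, the rename alone could look like this (a sketch, assuming
the body stays as posted):

static inline bool munmap_addr_ok(unsigned long start, size_t len)
{
	if ((offset_in_page(start)) || start > TASK_SIZE ||
	    len > TASK_SIZE - start)
		return false;

	if (PAGE_ALIGN(len) == 0)
		return false;

	return true;
}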

>
>>   {
>> -	unsigned long end;
>> -	struct vm_area_struct *vma, *prev, *last;
>> -
>>   	if ((offset_in_page(start)) || start > TASK_SIZE || len > TASK_SIZE-start)
>> -		return -EINVAL;
>> +		return false;
>>   
>> -	len = PAGE_ALIGN(len);
>> -	if (len == 0)
>> -		return -EINVAL;
>> +	if (PAGE_ALIGN(len) == 0)
>> +		return false;
>> +
>> +	return true;
>> +}
>> +
>> +/*
>> + * munmap_lookup_vma: find the first overlap vma and split overlap vmas.
>> + * @mm: mm_struct
>> + * @vma: the first overlapping vma
>> + * @prev: vma's prev
>> + * @start: start address
>> + * @end: end address
>> + *
>> + * returns 1 if successful, 0 or errno otherwise
>> + */
>> +static int munmap_lookup_vma(struct mm_struct *mm, struct vm_area_struct **vma,
>> +			     struct vm_area_struct **prev, unsigned long start,
>> +			     unsigned long end)
> Agree with Michal that you could simply return the vma, NULL, or an
> error. The caller can easily find out prev from that; it's not as if we
> have to count every cpu cycle here. It will also be a bit less tricky
> code, which is a plus.
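
A sketch of that shape (hypothetical, loosely following the existing
do_munmap() split logic; not the posted patch):

static struct vm_area_struct *munmap_lookup_vma(struct mm_struct *mm,
						unsigned long start,
						unsigned long end)
{
	struct vm_area_struct *vma, *last;

	/* Find the first potentially overlapping vma. */
	vma = find_vma(mm, start);
	if (!vma || vma->vm_start >= end)
		return NULL;		/* no overlap, nothing to unmap */

	/* Split off the part below 'start', if any. */
	if (start > vma->vm_start) {
		int error;

		if (end < vma->vm_end &&
		    mm->map_count >= sysctl_max_map_count)
			return ERR_PTR(-ENOMEM);

		error = __split_vma(mm, vma, start, 0);
		if (error)
			return ERR_PTR(error);
	}

	/* Split off the part above 'end', if any. */
	last = find_vma(mm, end);
	if (last && end > last->vm_start) {
		int error = __split_vma(mm, last, end, 1);

		if (error)
			return ERR_PTR(error);
	}

	/* After the splits, this is the first vma to unmap. */
	return find_vma(mm, start);
}

The caller would then do something like: vma = munmap_lookup_vma(mm,
start, end); if (IS_ERR(vma)) return PTR_ERR(vma); if (!vma) return 0;
and take prev = vma->vm_prev.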
>
> ...
>> +static inline void munmap_mlock_vma(struct vm_area_struct *vma,
>> +				    unsigned long end)
> This function does munlock, not mlock. You could call it e.g.
> munlock_vmas().

OK

>
>> +{
>> +	struct vm_area_struct *tmp = vma;
>> +
>> +	while (tmp && tmp->vm_start < end) {
>> +		if (tmp->vm_flags & VM_LOCKED) {
>> +			vma->vm_mm->locked_vm -= vma_pages(tmp);
> You keep 'vma' around just for the vm_mm? Better to extract the mm
> pointer first; then you don't need the 'tmp'.

OK
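
Combining both points, the helper could end up something like this (a
sketch, not the posted patch):

static inline void munlock_vmas(struct vm_area_struct *vma,
				unsigned long end)
{
	/* Grab the mm once, so we can iterate with 'vma' directly. */
	struct mm_struct *mm = vma->vm_mm;

	while (vma && vma->vm_start < end) {
		if (vma->vm_flags & VM_LOCKED) {
			mm->locked_vm -= vma_pages(vma);
			munlock_vma_pages_all(vma);
		}
		vma = vma->vm_next;
	}
}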

Thanks,
Yang

>
>> +			munlock_vma_pages_all(tmp);
>> +		}
>> +		tmp = tmp->vm_next;
>> +	}
>> +}
