Message-ID: <20200706100729.y2wbkpc4tyvjojzg@box>
Date:   Mon, 6 Jul 2020 13:07:29 +0300
From:   "Kirill A. Shutemov" <kirill@...temov.name>
To:     Wei Yang <richard.weiyang@...ux.alibaba.com>
Cc:     akpm@...ux-foundation.org, kirill.shutemov@...ux.intel.com,
        yang.shi@...ux.alibaba.com, vbabka@...e.cz, willy@...radead.org,
        thomas_os@...pmail.org, thellstrom@...are.com,
        anshuman.khandual@....com, sean.j.christopherson@...el.com,
        aneesh.kumar@...ux.ibm.com, peterx@...hat.com, walken@...gle.com,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org, digetx@...il.com
Subject: Re: [RESEND Patch v2 3/4] mm/mremap: calculate extent in one place

On Fri, Jun 26, 2020 at 09:52:15PM +0800, Wei Yang wrote:
> Page tables are moved at PMD granularity, which requires both the source
> and destination ranges to meet the PMD alignment requirement.
> 
> The current code works because move_huge_pmd() and move_normal_pmd()
> check old_addr and new_addr again and fall back to move_ptes() if either
> of them is not aligned.
> 
> Instead of calculating the extent separately, it is better to calculate it
> in one place, so we know when it is not necessary to try a PMD move. This
> also makes the logic a little clearer.
> 
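
To check my understanding with concrete (made-up) numbers, assuming 2 MiB
PMDs: if old_addr is PMD-aligned but new_addr is 1 MiB short of its next PMD
boundary, the old-side calculation alone gives extent = 2 MiB, and we would
enter move_huge_pmd()/move_normal_pmd() only for them to bail out on the
unaligned new_addr. Clamping extent against next - new_addr = 1 MiB up front
already makes it clear that only move_ptes() applies for this iteration.
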
> Signed-off-by: Wei Yang <richard.weiyang@...ux.alibaba.com>
> Tested-by: Dmitry Osipenko <digetx@...il.com>
> ---
>  mm/mremap.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/mremap.c b/mm/mremap.c
> index de27b12c8a5a..a30b3e86cc99 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -258,6 +258,9 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
>  		extent = next - old_addr;
>  		if (extent > old_end - old_addr)
>  			extent = old_end - old_addr;
> +		next = (new_addr + PMD_SIZE) & PMD_MASK;

Please use round_up() for both 'next' calculations.

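For illustration only (not a patch): a quick userspace sketch, with PMD_SHIFT
assumed to be 21 (x86-64 with 4K pages) and round_up() written in the spirit
of the kernel macro for power-of-two alignments, showing that
round_up(addr + 1, PMD_SIZE) gives the same boundary as the open-coded
(addr + PMD_SIZE) & PMD_MASK:

#include <assert.h>
#include <stdio.h>

#define PMD_SHIFT	21			/* assumed: x86-64, 4K pages */
#define PMD_SIZE	(1UL << PMD_SHIFT)
#define PMD_MASK	(~(PMD_SIZE - 1))
/* round_up() for a power-of-two 'y', mirroring the kernel macro */
#define round_up(x, y)	((((x) - 1) | ((y) - 1)) + 1)

int main(void)
{
	unsigned long addrs[] = { 0x0, 0x1000, 0x1fffff, 0x200000, 0x7f1234500000 };
	unsigned long i;

	for (i = 0; i < sizeof(addrs) / sizeof(addrs[0]); i++) {
		unsigned long addr = addrs[i];
		/* next PMD boundary strictly above addr, both ways */
		unsigned long open_coded = (addr + PMD_SIZE) & PMD_MASK;
		unsigned long rounded = round_up(addr + 1, PMD_SIZE);

		printf("addr=%#014lx  open-coded=%#014lx  round_up=%#014lx\n",
		       addr, open_coded, rounded);
		assert(open_coded == rounded);
	}
	return 0;
}

So the conversion should be purely a readability change.
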
> +		if (extent > next - new_addr)
> +			extent = next - new_addr;
>  		old_pmd = get_old_pmd(vma->vm_mm, old_addr);
>  		if (!old_pmd)
>  			continue;
> @@ -301,9 +304,6 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
>  
>  		if (pte_alloc(new_vma->vm_mm, new_pmd))
>  			break;
> -		next = (new_addr + PMD_SIZE) & PMD_MASK;
> -		if (extent > next - new_addr)
> -			extent = next - new_addr;
>  		move_ptes(vma, old_pmd, old_addr, old_addr + extent, new_vma,
>  			  new_pmd, new_addr, need_rmap_locks);
>  	}
> -- 
> 2.20.1 (Apple Git-117)
> 

-- 
 Kirill A. Shutemov
