Date: Tue, 7 Jul 2020 09:38:56 +0800
From: Wei Yang <richard.weiyang@...ux.alibaba.com>
To: "Kirill A. Shutemov" <kirill@...temov.name>
Cc: Wei Yang <richard.weiyang@...ux.alibaba.com>,
akpm@...ux-foundation.org, kirill.shutemov@...ux.intel.com,
yang.shi@...ux.alibaba.com, vbabka@...e.cz, willy@...radead.org,
thomas_os@...pmail.org, thellstrom@...are.com,
anshuman.khandual@....com, sean.j.christopherson@...el.com,
aneesh.kumar@...ux.ibm.com, peterx@...hat.com, walken@...gle.com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org, digetx@...il.com
Subject: Re: [RESEND Patch v2 3/4] mm/mremap: calculate extent in one place
On Mon, Jul 06, 2020 at 01:07:29PM +0300, Kirill A. Shutemov wrote:
>On Fri, Jun 26, 2020 at 09:52:15PM +0800, Wei Yang wrote:
>> Page tables are moved at PMD granularity, which requires that both the
>> source and destination ranges meet the alignment requirement.
>>
>> The current code works because move_huge_pmd() and move_normal_pmd()
>> re-check old_addr and new_addr, falling back to move_ptes() if either
>> of them is not aligned.
>>
>> Instead of calculating the extent separately, calculate it in one
>> place, so we know up front when it is not worth trying to move a whole
>> pmd. This makes the logic a little clearer.
>>
>> Signed-off-by: Wei Yang <richard.weiyang@...ux.alibaba.com>
>> Tested-by: Dmitry Osipenko <digetx@...il.com>
>> ---
>> mm/mremap.c | 6 +++---
>> 1 file changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/mremap.c b/mm/mremap.c
>> index de27b12c8a5a..a30b3e86cc99 100644
>> --- a/mm/mremap.c
>> +++ b/mm/mremap.c
>> @@ -258,6 +258,9 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
>> extent = next - old_addr;
>> if (extent > old_end - old_addr)
>> extent = old_end - old_addr;
>> + next = (new_addr + PMD_SIZE) & PMD_MASK;
>
>Please use round_up() for both 'next' calculations.
>
I took another close look at this, and it seems this is not a good
suggestion: round_up(new_addr, PMD_SIZE) evaluates to new_addr when
new_addr is already PMD_SIZE aligned, whereas here we always want the
*next* PMD boundary past new_addr.
>> + if (extent > next - new_addr)
>> + extent = next - new_addr;
>> old_pmd = get_old_pmd(vma->vm_mm, old_addr);
>> if (!old_pmd)
>> continue;
>> @@ -301,9 +304,6 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
>>
>> if (pte_alloc(new_vma->vm_mm, new_pmd))
>> break;
>> - next = (new_addr + PMD_SIZE) & PMD_MASK;
>> - if (extent > next - new_addr)
>> - extent = next - new_addr;
>> move_ptes(vma, old_pmd, old_addr, old_addr + extent, new_vma,
>> new_pmd, new_addr, need_rmap_locks);
>> }
>> --
>> 2.20.1 (Apple Git-117)
>>
>
>--
> Kirill A. Shutemov
--
Wei Yang
Help you, Help me