Message-Id: <20200626135216.24314-1-richard.weiyang@linux.alibaba.com>
Date: Fri, 26 Jun 2020 21:52:12 +0800
From: Wei Yang <richard.weiyang@...ux.alibaba.com>
To: akpm@...ux-foundation.org, kirill.shutemov@...ux.intel.com,
yang.shi@...ux.alibaba.com, vbabka@...e.cz, willy@...radead.org,
thomas_os@...pmail.org, thellstrom@...are.com,
anshuman.khandual@....com, sean.j.christopherson@...el.com,
aneesh.kumar@...ux.ibm.com, peterx@...hat.com, walken@...gle.com
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org, digetx@...il.com,
Wei Yang <richard.weiyang@...ux.alibaba.com>
Subject: [RESEND Patch v2 0/4] mm/mremap: cleanup move_page_tables() a little
move_page_tables() tries to move page tables at either PMD or PTE granularity.

The root reason is that moving at the PMD level requires both the old and the
new range to be PMD aligned, but the current code calculates the old range and
the new range separately. This leads to some redundant checks and calculations.

This cleanup consolidates the range check in one place to reduce the extra
range handling.
v2:
* removed the 3rd patch, which doesn't work on the ARM platform. Thanks to
  Dmitry Osipenko for the report and testing.
Wei Yang (4):
  mm/mremap: format the check in move_normal_pmd() same as move_huge_pmd()
  mm/mremap: it is sure to have enough space when extent meets requirement
  mm/mremap: calculate extent in one place
  mm/mremap: start addresses are properly aligned
include/linux/huge_mm.h | 2 +-
mm/huge_memory.c | 8 +-------
mm/mremap.c | 17 ++++++-----------
3 files changed, 8 insertions(+), 19 deletions(-)
--
2.20.1 (Apple Git-117)