Message-Id: <8fd6b1d8-6dc3-29a8-0377-e4323b74d6af@linux.vnet.ibm.com>
Date: Tue, 17 Oct 2017 15:17:38 +0530
From: Anshuman Khandual <khandual@...ux.vnet.ibm.com>
To: "Huang, Ying" <ying.huang@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Michal Hocko <mhocko@...e.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
David Rientjes <rientjes@...gle.com>,
Arnd Bergmann <arnd@...db.de>, Hugh Dickins <hughd@...gle.com>,
Jérôme Glisse <jglisse@...hat.com>,
Daniel Colascione <dancol@...gle.com>,
Zi Yan <zi.yan@...rutgers.edu>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>
Subject: Re: [PATCH -mm] mm, pagemap: Fix soft dirty marking for PMD migration entry

On 10/17/2017 01:48 PM, Huang, Ying wrote:
> From: Huang Ying <ying.huang@...el.com>
>
> Currently, when the page table is walked in the implementation of
> /proc/<pid>/pagemap, pmd_soft_dirty() is used for both the mapped PMD
> huge page and the PMD migration entry. That is wrong for the migration
> entry: pmd_swp_soft_dirty() should be used instead, because the soft
> dirty state of a migration entry is kept in a different page table
> entry flag.
Yeah, different flags can be used on various archs to represent soft
dirty for a mapped PMD and for a PMD migration entry. Sounds good.
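
For example, on x86 the two bits are entirely unrelated. Sketching from
arch/x86/include/asm/pgtable_types.h (quoting from memory, so double
check the exact bit numbers):

/* Soft dirty for a present entry lives in a software bit (bit 11). */
#define _PAGE_BIT_SOFT_DIRTY	_PAGE_BIT_SOFTW3

/*
 * The swap entry encoding leaves bits 1-4 free on x86, so bit 1
 * (_PAGE_RW) is reused to carry soft dirty for non-present entries.
 */
#define _PAGE_BIT_SWP_SOFT_DIRTY	_PAGE_BIT_RW

So testing the present-entry bit on a migration entry, as the old code
did, just reads an unrelated bit of the swap encoding.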
>
> Cc: Michal Hocko <mhocko@...e.com>
> Cc: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
> Cc: David Rientjes <rientjes@...gle.com>
> Cc: Arnd Bergmann <arnd@...db.de>
> Cc: Hugh Dickins <hughd@...gle.com>
> Cc: "Jérôme Glisse" <jglisse@...hat.com>
> Cc: Daniel Colascione <dancol@...gle.com>
> Cc: Zi Yan <zi.yan@...rutgers.edu>
> Cc: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
> Signed-off-by: "Huang, Ying" <ying.huang@...el.com>
> ---
>  fs/proc/task_mmu.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index 2593a0c609d7..01aad772f8db 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -1311,13 +1311,15 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
>  		pmd_t pmd = *pmdp;
>  		struct page *page = NULL;
>
> -		if ((vma->vm_flags & VM_SOFTDIRTY) || pmd_soft_dirty(pmd))
> +		if (vma->vm_flags & VM_SOFTDIRTY)
>  			flags |= PM_SOFT_DIRTY;
>
>  		if (pmd_present(pmd)) {
>  			page = pmd_page(pmd);
>
>  			flags |= PM_PRESENT;
> +			if (pmd_soft_dirty(pmd))
> +				flags |= PM_SOFT_DIRTY;
>  			if (pm->show_pfn)
>  				frame = pmd_pfn(pmd) +
>  					((addr & ~PMD_MASK) >> PAGE_SHIFT);
> @@ -1329,6 +1331,8 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
>  			frame = swp_type(entry) |
>  				(swp_offset(entry) << MAX_SWAPFILES_SHIFT);
>  			flags |= PM_SWAP;
> +			if (pmd_swp_soft_dirty(pmd))
> +				flags |= PM_SOFT_DIRTY;
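
The user visible effect here, for reference, is bit 55 of each 64-bit
/proc/<pid>/pagemap entry (the soft dirty bit, per
Documentation/vm/pagemap.txt), which could be reported wrongly while a
THP was under migration. A minimal reader for that bit could look like
this (hypothetical test code, not part of the patch):

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* Return bit 55 (soft dirty) of the pagemap entry covering vaddr. */
static int soft_dirty(void *vaddr)
{
	uint64_t entry;
	long pagesize = sysconf(_SC_PAGESIZE);
	int fd = open("/proc/self/pagemap", O_RDONLY);

	if (fd < 0)
		return -1;
	if (pread(fd, &entry, sizeof(entry),
		  ((uintptr_t)vaddr / pagesize) * sizeof(entry))
	    != sizeof(entry)) {
		close(fd);
		return -1;
	}
	close(fd);
	return (entry >> 55) & 1;
}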
I was initially skeptical about whether this would compile on POWER,
because we lack a pmd_swp_soft_dirty() definition there, but it turns
out there is a generic fallback in include/asm-generic/pgtable.h to
rely on, as we don't define ARCH_ENABLE_THP_MIGRATION yet:
#ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
#ifndef CONFIG_ARCH_ENABLE_THP_MIGRATION
static inline pmd_t pmd_swp_mksoft_dirty(pmd_t pmd)
{
	return pmd;
}

static inline int pmd_swp_soft_dirty(pmd_t pmd)
{
	return 0;
}

static inline pmd_t pmd_swp_clear_soft_dirty(pmd_t pmd)
{
	return pmd;
}
#endif /* !CONFIG_ARCH_ENABLE_THP_MIGRATION */
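
For comparison, an arch that selects ARCH_ENABLE_THP_MIGRATION has to
provide real helpers operating on its swap-format soft dirty flag. x86
does it roughly like this in arch/x86/include/asm/pgtable.h (again
quoting from memory):

#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
static inline pmd_t pmd_swp_mksoft_dirty(pmd_t pmd)
{
	/* The swap-format bit, not _PAGE_SOFT_DIRTY. */
	return pmd_set_flags(pmd, _PAGE_SWP_SOFT_DIRTY);
}

static inline int pmd_swp_soft_dirty(pmd_t pmd)
{
	return pmd_flags(pmd) & _PAGE_SWP_SOFT_DIRTY;
}

static inline pmd_t pmd_swp_clear_soft_dirty(pmd_t pmd)
{
	return pmd_clear_flags(pmd, _PAGE_SWP_SOFT_DIRTY);
}
#endif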