Message-ID: <20161012054933.GB20573@suse.de>
Date: Wed, 12 Oct 2016 06:49:33 +0100
From: Mel Gorman <mgorman@...e.de>
To: Andi Kleen <andi@...stfloor.org>
Cc: peterz@...radead.org, linux-mm@...ck.org,
akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
Andi Kleen <ak@...ux.intel.com>
Subject: Re: [PATCH] Don't touch single threaded PTEs which are on the right
node
On Tue, Oct 11, 2016 at 01:28:58PM -0700, Andi Kleen wrote:
> From: Andi Kleen <ak@...ux.intel.com>
>
> We had some problems with pages getting unmapped in single threaded
> affinitized processes. It was tracked down to NUMA scanning.
>
> In this case it doesn't make any sense to unmap pages if the
> process is single threaded and the page is already on the
> node the process is running on.
>
> Add a check for this case into the numa protection code,
> and skip unmapping if true.
>
> In theory the process could be migrated later, but we
> will eventually rescan and unmap and migrate then.
>
> In theory this could be made more fancy: remembering this
> state per process or even whole mm. However that would
> need extra tracking and be more complicated, and the
> simple check seems to work fine so far.
>
> Signed-off-by: Andi Kleen <ak@...ux.intel.com>
> ---
> mm/mprotect.c | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/mm/mprotect.c b/mm/mprotect.c
> index a4830f0325fe..e8028658e817 100644
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -94,6 +94,14 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
> /* Avoid TLB flush if possible */
> if (pte_protnone(oldpte))
> continue;
> +
> + /*
> + * Don't mess with PTEs if page is already on the node
> + * a single-threaded process is running on.
> + */
> + if (atomic_read(&vma->vm_mm->mm_users) == 1 &&
> + cpu_to_node(raw_smp_processor_id()) == page_to_nid(page))
> + continue;
> }
You shouldn't need to check the number of mm_users and the node the task
is running on for every PTE being scanned.
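For illustration only (untested, against the same change_pte_range() loop as the patch above), the loop-invariant parts could be computed once before walking the PTEs:

```c
	/*
	 * Hypothetical sketch: hoist the checks that cannot change
	 * across the PTE walk out of the per-PTE loop.
	 */
	bool prot_numa_skip = prot_numa &&
			      atomic_read(&vma->vm_mm->mm_users) == 1;
	int local_nid = cpu_to_node(raw_smp_processor_id());
	...
		/* inside the loop, only the per-page node test remains */
		if (prot_numa_skip && page_to_nid(page) == local_nid)
			continue;
```

That way only page_to_nid() is evaluated per page.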
A more important corner case is if the VMA is shared with a task running on
another node. By avoiding the NUMA hinting faults here, the hinting faults
trapped by the remote process will appear exclusive and allow migration of
the page. This will happen even if the single-threaded task is continually
using the pages.
When you said "we had some problems", you didn't describe the workload or
what the problems were (I'm assuming latency/jitter). Would restricting
this check to private VMAs be sufficient?
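If restricting to private VMAs turns out to be enough, a minimal form of the check (untested, same names as in the patch above) might look like:

```c
	/*
	 * Hypothetical: only skip for private mappings, so NUMA
	 * hinting faults still fire on VMAs that may be shared
	 * with tasks running on other nodes.
	 */
	if (!(vma->vm_flags & VM_SHARED) &&
	    atomic_read(&vma->vm_mm->mm_users) == 1 &&
	    cpu_to_node(raw_smp_processor_id()) == page_to_nid(page))
		continue;
```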
--
Mel Gorman
SUSE Labs