Message-ID: <4cb536f6-2609-4e3e-b996-4a613c9844ad@nvidia.com>
Date: Thu, 17 Aug 2023 19:29:12 -0700
From: John Hubbard <jhubbard@...dia.com>
To: Yan Zhao <yan.y.zhao@...el.com>,
David Hildenbrand <david@...hat.com>
CC: <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
<kvm@...r.kernel.org>, <pbonzini@...hat.com>, <seanjc@...gle.com>,
<mike.kravetz@...cle.com>, <apopple@...dia.com>, <jgg@...dia.com>,
<rppt@...nel.org>, <akpm@...ux-foundation.org>,
<kevin.tian@...el.com>, "Mel Gorman" <mgorman@...hsingularity.net>,
<alex.williamson@...hat.com>
Subject: Re: [RFC PATCH v2 0/5] Reduce NUMA balance caused TLB-shootdowns in a
VM

On 8/17/23 17:13, Yan Zhao wrote:
...
> But consider for GPUs case as what John mentioned, since the memory is
> not even pinned, maybe they still need flag VM_NO_NUMA_BALANCING ?
> For VMs, we hint VM_NO_NUMA_BALANCING for passthrough devices supporting
> IO page fault (so no need to pin), and VM_MAYLONGTERMDMA to avoid misplace
> and migration.
>
> Is that good?
> Or do you think just a per-mm flag like MMF_NO_NUMA is good enough for
> now?
>
So far, a per-mm setting seems like it would suffice. However, new
hardware is getting creative and large enough that it's not
inconceivable a process might actually want to let NUMA balancing run
in one part of its mm, while turning it off in another part to allow
fault-able device access there.
We aren't seeing that yet, but on the other hand, that may simply be
because there is no practical way to set it up and see how well it
works.
thanks,
--
John Hubbard
NVIDIA