Message-ID: <20190110174159.GD4394@redhat.com>
Date: Thu, 10 Jan 2019 12:42:00 -0500
From: Jerome Glisse <jglisse@...hat.com>
To: Michal Hocko <mhocko@...nel.org>
Cc: Andrea Arcangeli <aarcange@...hat.com>,
Huang Ying <ying.huang@...el.com>,
Zhang Yi <yi.z.zhang@...ux.intel.com>, kvm@...r.kernel.org,
Dave Hansen <dave.hansen@...el.com>,
Liu Jingqi <jingqi.liu@...el.com>,
Yao Yuan <yuan.yao@...el.com>, Fan Du <fan.du@...el.com>,
Dong Eddie <eddie.dong@...el.com>,
LKML <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
Peng Dong <dongx.peng@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Fengguang Wu <fengguang.wu@...el.com>,
Dan Williams <dan.j.williams@...el.com>,
linux-accelerators@...ts.ozlabs.org, Mel Gorman <mgorman@...e.de>
Subject: Re: [RFC][PATCH v2 00/21] PMEM NUMA node and hotness
accounting/migration
On Thu, Jan 10, 2019 at 05:42:48PM +0100, Michal Hocko wrote:
> On Thu 10-01-19 10:53:17, Jerome Glisse wrote:
> > On Tue, Jan 08, 2019 at 03:52:56PM +0100, Michal Hocko wrote:
> > > On Wed 02-01-19 12:21:10, Jonathan Cameron wrote:
> > > [...]
> > > > So ideally I'd love this set to head in a direction that helps me tick off
> > > > at least some of the above use cases and hopefully have some visibility on
> > > > how to address the others moving forwards,
> > >
> > > Is it sufficient to have such memory marked as movable (i.e. only
> > > have ZONE_MOVABLE)? That should rule out most kernel allocations and
> > > it fits the "balance by migration" concept.
> >
> > This would not work for GPUs: GPU drivers really want to be in total
> > control of their memory, yet sometimes they want to migrate some part
> > of the process's memory to their own memory.
>
> But that also means that GPUs don't really fit the model discussed
> here, right? I thought HMM is the way to manage such memory.

HMM provides the plumbing and tools to manage such memory, but right now
the patchset for nouveau exposes the API through the nouveau device file
as nouveau ioctls. This is not a good long-term solution when you want
to mix and match memory from multiple GPUs (possibly from different
vendors). You end up with each device driver implementing its own memory
policy infrastructure, without any coordination between devices/drivers.
While that is _mostly_ ok for the single-GPU case, it is seriously
crippling for the multi-GPU or multi-device cases (for instance when you
chain a network device and a GPU together, or a GPU and storage).
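
Purely as a hypothetical illustration (this is not the actual nouveau
ioctl ABI; the structure and ioctl number below are made up), a
per-driver migration interface tends to look something like the sketch
below, and this is exactly what every vendor would end up duplicating:

#include <stdint.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

/* Hypothetical request: migrate a range of the process address space
 * into the device memory owned by this particular driver. */
struct hypothetical_gpu_migrate {
	uint64_t start;		/* start of virtual address range */
	uint64_t length;	/* length of the range in bytes */
	uint32_t flags;		/* driver-specific placement flags */
	uint32_t pad;
};

#define HYPOTHETICAL_GPU_IOC_MIGRATE \
	_IOW('g', 0x42, struct hypothetical_gpu_migrate)

/* Userspace has to carry one of these wrappers per driver it talks to. */
int migrate_range_to_gpu(int gpu_fd, void *addr, size_t len)
{
	struct hypothetical_gpu_migrate req = {
		.start  = (uint64_t)(uintptr_t)addr,
		.length = len,
		.flags  = 0,
	};

	return ioctl(gpu_fd, HYPOTHETICAL_GPU_IOC_MIGRATE, &req);
}
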
People have been asking for a single common API to manage both regular
memory and device memory, since the common case is that you move data
around depending on which devices/CPUs are working on the dataset.
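
For regular memory that common, driver-agnostic path already exists.
A minimal userspace sketch (assuming libnuma is installed, link with
-lnuma; the target node number below is just a placeholder) of moving a
page with move_pages(2):

#include <numaif.h>	/* move_pages(), MPOL_MF_MOVE */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	long page_size = sysconf(_SC_PAGESIZE);
	void *buf;

	if (posix_memalign(&buf, page_size, page_size))
		return 1;
	((char *)buf)[0] = 1;		/* fault the page in */

	void *pages[1]  = { buf };
	int   nodes[1]  = { 1 };	/* placeholder target node */
	int   status[1] = { -1 };

	/* Ask the kernel to move the page to node 1; status[0] reports
	 * where it actually ended up (or a negative errno). */
	if (move_pages(0, 1, pages, nodes, status, MPOL_MF_MOVE))
		perror("move_pages");
	else
		printf("page now on node %d\n", status[0]);

	free(buf);
	return 0;
}

The point is that device memory would ideally become reachable through
the same kind of interface, instead of N different driver ioctls.
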
Cheers,
Jérôme