Message-ID: <2093258630.3273244.1481229443563.JavaMail.zimbra@redhat.com>
Date: Thu, 8 Dec 2016 15:37:23 -0500 (EST)
From: Jerome Glisse <jglisse@...hat.com>
To: Dave Hansen <dave.hansen@...el.com>
Cc: akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, John Hubbard <jhubbard@...dia.com>,
Dan Williams <dan.j.williams@...el.com>,
Ross Zwisler <ross.zwisler@...ux.intel.com>
Subject: Re: [HMM v14 05/16] mm/ZONE_DEVICE/unaddressable: add support for
un-addressable device memory

> On 12/08/2016 08:39 AM, Jerome Glisse wrote:
> > > On 12/08/2016 08:39 AM, Jérôme Glisse wrote:
> > > > Architectures that wish to support un-addressable device memory
> > > > should make sure to never populate the kernel linear mapping for
> > > > the physical range.
> > >
> > > Does the platform somehow provide a range of physical addresses for
> > > this unaddressable area? How do we know no memory will be hot-added
> > > in a range we're using for unaddressable device memory, for instance?
> > That's one of the big issues. No, platforms do not reserve any range,
> > so there is a possibility that some memory gets hotplugged and assigned
> > this range.
> >
> > I pushed the range decision to a higher level (ie it is the device
> > driver that picks one), so right now for device drivers using HMM (the
> > NVidia closed driver, as we don't have nouveau ready for that yet) it
> > goes from the highest physical address and scans down until it finds an
> > empty range big enough.
>
> I don't think you should be stealing physical address space for things
> that don't and can't have physical addresses. Delegating this to
> individual device drivers and hoping that they all get it right seems
> like a recipe for disaster.
Well, I expected device drivers to use hmm_devmem_add(), which does not
take a physical address but uses the above logic to pick one.
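
A minimal sketch of what that top-down scan could look like (illustrative
only, not the actual HMM v14 code; the helper name devmem_pick_range() and
the 1GB alignment are assumptions):

#include <linux/device.h>
#include <linux/ioport.h>
#include <linux/kernel.h>
#include <linux/sizes.h>

/*
 * Illustrative only: walk down from the top of the iomem resource tree
 * and claim the first hole big enough for the device memory.
 */
static struct resource *devmem_pick_range(struct device *dev,
					   unsigned long size)
{
	resource_size_t addr;
	struct resource *res;

	size = ALIGN(size, SZ_1G);	/* assumed alignment, for illustration */
	addr = iomem_resource.end + 1 - size;

	for (; addr > iomem_resource.start; addr -= size) {
		/* request_mem_region() returns NULL if the range is taken */
		res = request_mem_region(addr, size, dev_name(dev));
		if (res)
			return res;
	}
	return NULL;
}

Scanning from the top down just makes a collision with future RAM hotplug
less likely; nothing reserves the range on the platform side.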
>
> Maybe worth adding to the changelog:
>
> This feature potentially breaks memory hotplug unless every
> driver using it magically predicts the future addresses of
> where memory will be hotplugged.
I will add a debug printk to memory hotplug in case it fails because of
some un-addressable resource. If you really dislike memory hotplug being
broken, then I can go down the path of allowing memory to be hotplugged
above the max physical memory limit. That requires more changes, but I
believe it is doable for some of the memory models (sparsemem and
sparsemem extreme).
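
Roughly the kind of check that debug message could hang off of (a sketch
only: the resource descriptor name and the exact hook point in the hotplug
path are assumptions, not something in this series as posted):

#include <linux/ioport.h>
#include <linux/printk.h>

/*
 * Sketch: refuse to hotplug RAM on top of a range already claimed for
 * un-addressable device memory, and say why.  The descriptor below is
 * an assumed name used here for illustration.
 */
static bool hotplug_range_is_free(u64 start, u64 size)
{
	if (region_intersects(start, size, IORESOURCE_MEM,
			      IORES_DESC_DEVICE_PRIVATE_MEMORY) !=
	    REGION_DISJOINT) {
		pr_warn("memory hotplug: [%#llx-%#llx] overlaps un-addressable device memory\n",
			start, start + size - 1);
		return false;
	}
	return true;
}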
>
> BTW, how many more of these "big issues" does this set have? I didn't
> see any mention of this in the changelogs.
I am not sure what to say here. If you don't use HMM, ie there is no
device that hotplugs such memory, then there is no chance of hitting an
issue. If you have a device that uses it, then someone might try to do
something stupid (try to kmap and access such an un-addressable page, for
instance). So I am not sure where to draw the line.
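
To illustrate that failure mode, a caller that wants to stay safe would
have to filter such pages before mapping them, something along these
lines (sketch only; safe_map_page() is a made-up helper and treating
every ZONE_DEVICE page as unmappable is a simplification):

#include <linux/highmem.h>
#include <linux/mm.h>

/*
 * Sketch: un-addressable device pages have no kernel linear mapping,
 * so kmap()/page_address() cannot give the CPU anything usable.
 * Rejecting all ZONE_DEVICE pages is an over-approximation used here
 * purely for illustration.
 */
static void *safe_map_page(struct page *page)
{
	if (is_zone_device_page(page))
		return NULL;	/* no CPU-accessible address exists */
	return kmap(page);
}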
Cheers,
Jérôme