Message-ID: <7b50c0fa-bbeb-1041-c05c-2667134044b6@redhat.com>
Date: Fri, 30 Oct 2020 07:41:38 +0100
From: David Hildenbrand <david@...hat.com>
To: Sudarshan Rajagopalan <sudaraja@...eaurora.org>,
Anshuman Khandual <anshuman.khandual@....com>,
Mark Rutland <mark.rutland@....com>,
Steven Price <steven.price@....com>,
Mike Rapoport <rppt@...nel.org>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Cc: Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Suren Baghdasaryan <surenb@...gle.com>,
Greg Kroah-Hartman <gregkh@...gle.com>,
Pratik Patel <pratikp@...eaurora.org>
Subject: Re: mm/memblock: export memblock_{start/end}_of_DRAM
On 29.10.20 22:29, Sudarshan Rajagopalan wrote:
> Hello all,
>
Hi!
> We have a usecase where a module driver adds certain memory blocks using
> add_memory_driver_managed(), so that it can perform memory hotplug
> operations on these blocks. In general, these memory blocks aren’t
> something that gets physically added later, but is part of actual RAM
> that system booted up with. Meaning – we set the ‘mem=’ cmdline
> parameter to limit the memory and later add the remaining ones using
> add_memory*() variants.
>
> The basic idea is to have driver have ownership and manage certain
> memory blocks for hotplug operations.
So, in summary, you're still abusing the memory hot(un)plug
infrastructure from your driver - just not in as severe a way as before.
And I'll tell you why, so you might understand why exposing this API is
not really a good idea and why your driver wouldn't - for example - be
upstream material.
Don't get me wrong, what you are doing might be ok in your context, but
it's simply not universally applicable in our current model.
Ordinary system RAM works differently from many other devices (like PCI
devices), where *something* senses the device and exposes it to the
system, and some available driver binds to it and owns the memory.
Memory is detected by a driver and added to the system via e.g.,
add_memory_driver_managed(). Memory devices are created and the memory
is directly handed off to the system, to be used as system RAM as soon
as memory devices are onlined. There is no driver that "binds" memory
like other devices - it's rather the core (buddy) that uses/owns that
memory immediately after device creation.
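For concreteness, the flow described above looks roughly like this from
a driver's perspective (a non-compilable sketch against the in-kernel
API - the signature has varied between kernel versions, and nid/start/
size are placeholders):

```c
/* Sketch only - needs a kernel build environment (mm/memory_hotplug.c).
 * add_memory_driver_managed() creates memory block devices; by convention
 * the resource name has the form "System RAM (driver_name)". Once the
 * memory blocks are onlined, the buddy allocator uses/owns the memory
 * directly - no driver "binds" to it afterwards. */
ret = add_memory_driver_managed(nid, start, size,
                                "System RAM (example_driver)");
if (ret)
        pr_err("adding driver-managed memory failed: %d\n", ret);
```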
>
> For the driver be able to know how much memory was limited and how much
> actually present, we take the delta of ‘bootmem physical end address’
> and ‘memblock_end_of_DRAM’. The 'bootmem physical end address' is
> obtained by scanning the reg values in ‘memory’ DT node and determining
> the max {addr,size}. Since our driver is getting modularized, we won’t
> have access to memblock_end_of_DRAM (i.e. end address of all memory
> blocks after ‘mem=’ is applied).
What you do with "mem=" is force memory detection to ignore some of its
detected memory.
>
> So checking if memblock_{start/end}_of_DRAM() symbols can be exported?
> Also, this information can be obtained by userspace by doing ‘cat
> /proc/iomem’ and grepping for ‘System RAM’. So wondering if userspace can
Not correct: with "mem=", cat /proc/iomem only shows *detected* + added
system RAM, not the unmodified detection.
> have access to such info, can we allow kernel module drivers have access
> by exporting memblock_{start/end}_of_DRAM().
>
> Or are there any other ways where a module driver can get the end
> address of system memory block?
And here is our problem: you disabled *detection* of that memory by the
responsible driver (here: the core). Now your driver wants to know what
would have been detected. Assume there is a memory hole in that region -
simply looking at start/end would not work. Your driver is not the one
doing the detection.
Another issue is: when using such memory for KVM guests, there is no
mechanism that tracks ownership of that memory - imagine another driver
wanting to use that memory. This really only works in special environments.
Yet another issue: you cannot assume that memblock data will stay around
after boot. While we do it right now for arm64, that might change at
some point. This is also one of the reasons why we don't export any real
memblock data to drivers.
When using "mem=" you have to know the exact layout of your system RAM
and communicate the right places how that layout looks like manually:
here, to your driver.
The clean way of doing things today is to allocate RAM and use it for
guests - e.g., using hugetlb/gigantic pages. As I said, there are other
techniques coming up to deal with minimizing struct page overhead - if
that's what you're concerned with (I still don't know why you're
removing the memory from the host when giving it to the guest).
--
Thanks,
David / dhildenb