Message-ID: <20231025025932.GA3953138@tiffany>
Date: Wed, 25 Oct 2023 11:59:32 +0900
From: Hyesoo Yu <hyesoo.yu@...sung.com>
To: Catalin Marinas <catalin.marinas@....com>
Cc: David Hildenbrand <david@...hat.com>,
Alexandru Elisei <alexandru.elisei@....com>, will@...nel.org,
oliver.upton@...ux.dev, maz@...nel.org, james.morse@....com,
suzuki.poulose@....com, yuzenghui@...wei.com, arnd@...db.de,
akpm@...ux-foundation.org, mingo@...hat.com, peterz@...radead.org,
juri.lelli@...hat.com, vincent.guittot@...aro.org,
dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
mgorman@...e.de, bristot@...hat.com, vschneid@...hat.com,
mhiramat@...nel.org, rppt@...nel.org, hughd@...gle.com,
pcc@...gle.com, steven.price@....com, anshuman.khandual@....com,
vincenzo.frascino@....com, eugenis@...gle.com, kcc@...gle.com,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
kvmarm@...ts.linux.dev, linux-fsdevel@...r.kernel.org,
linux-arch@...r.kernel.org, linux-mm@...ck.org,
linux-trace-kernel@...r.kernel.org
Subject: Re: [PATCH RFC 00/37] Add support for arm64 MTE dynamic tag storage
reuse

On Wed, Sep 13, 2023 at 04:29:25PM +0100, Catalin Marinas wrote:
> On Mon, Sep 11, 2023 at 02:29:03PM +0200, David Hildenbrand wrote:
> > On 11.09.23 13:52, Catalin Marinas wrote:
> > > On Wed, Sep 06, 2023 at 12:23:21PM +0100, Alexandru Elisei wrote:
> > > > On Thu, Aug 24, 2023 at 04:24:30PM +0100, Catalin Marinas wrote:
> > > > > On Thu, Aug 24, 2023 at 01:25:41PM +0200, David Hildenbrand wrote:
> > > > > > On 24.08.23 13:06, David Hildenbrand wrote:
> > > > > > > Regarding one complication: "The kernel needs to know where to allocate
> > > > > > > a PROT_MTE page from or migrate a current page if it becomes PROT_MTE
> > > > > > > (mprotect()) and the range it is in does not support tagging.",
> > > > > > > simplified handling would be if it's in a MIGRATE_CMA pageblock, it
> > > > > > > doesn't support tagging. You have to migrate to a !CMA page (for
> > > > > > > example, not specifying GFP_MOVABLE as a quick way to achieve that).
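
A minimal sketch of that idea, for readers following along (the helper
name is hypothetical; the callback shape follows the migrate_pages()
allocation callbacks in recent kernels):

	/* Hypothetical migration target callback: strip __GFP_MOVABLE so
	 * the destination page comes from an unmovable (hence non-CMA,
	 * taggable) pageblock. */
	static struct folio *alloc_taggable_dst(struct folio *src,
						unsigned long private)
	{
		gfp_t gfp = GFP_HIGHUSER_MOVABLE & ~__GFP_MOVABLE;

		return folio_alloc(gfp, folio_order(src));
	}
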
> > > > > >
> > > > > > Okay, I now realize that this patch set effectively duplicates some CMA
> > > > > > behavior using a new migrate-type.
> > > [...]
> > > > I considered mixing the tag storage memory with normal memory and
> > > > adding it to MIGRATE_CMA. But since tag storage memory cannot be tagged,
> > > > this means that it's not enough anymore to have a __GFP_MOVABLE allocation
> > > > request to use MIGRATE_CMA.
> > > >
> > > > I considered two solutions to this problem:
> > > >
> > > > 1. Only allocate from MIGRATE_CMA if the requested memory is not tagged =>
> > > > this effectively means transforming all memory from MIGRATE_CMA into the
> > > > MIGRATE_METADATA migratetype that the series introduces. Not very
> > > > appealing, because that means treating normal memory that is also on the
> > > > MIGRATE_CMA lists as tagged memory.
> > >
> > > That's indeed not ideal. We could try this if it makes the patches
> > > significantly simpler, though I'm not so sure.
> > >
> > > Allocating metadata is the easier part as we know the correspondence
> > > from the tagged pages (32 PROT_MTE pages) to the metadata page (1 tag
> > > storage page), so alloc_contig_range() does this for us. Just adding it
> > > to the CMA range is sufficient.
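
A rough sketch of that correspondence, as I understand it (the base pfns
and the helper are made up for illustration; the real layout comes from
the tag storage ranges described in the DT):

	/* One tag storage page holds the tags for 32 data pages. */
	#define DATA_PAGES_PER_TAG_PAGE	32

	static unsigned long data_base_pfn, tag_base_pfn; /* set at init */

	static int reserve_tag_storage(unsigned long data_pfn)
	{
		unsigned long tag_pfn = tag_base_pfn +
			(data_pfn - data_base_pfn) / DATA_PAGES_PER_TAG_PAGE;

		/* Isolate and claim the single tag storage page. */
		return alloc_contig_range(tag_pfn, tag_pfn + 1,
					  MIGRATE_CMA, GFP_KERNEL);
	}
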
> > >
> > > However, making sure that we don't allocate PROT_MTE pages from the
> > > metadata range is what led us to another migrate type. I guess we could
> > > achieve something similar with a new zone or a CPU-less NUMA node,
> >
> > Ideally, no significant core-mm changes to optimize for an architecture
> > oddity. That implies, no new zones and no new migratetypes -- unless it is
> > unavoidable and you are confident that you can convince core-MM people that
> > the use case (giving back 3% of system RAM at max in some setups) is worth
> > the trouble.
>
> If I was an mm maintainer, I'd also question this ;). But vendors seem
> pretty picky about the amount of RAM reserved for MTE (e.g. 0.5G for a
> 16G platform does look somewhat big). As more and more apps adopt MTE,
> the wastage would be smaller but the first step is getting vendors to
> enable it.
>
> > I also had CPU-less NUMA nodes in mind when thinking about that, but not
> > sure how easy it would be to integrate it. If the tag memory has actually
> > different performance characteristics as well, a NUMA node would be the
> > right choice.
>
> In general I'd expect the same characteristics. However, changing the
> memory designation from tag to data (and vice-versa) requires some cache
> maintenance. The allocation cost is slightly higher (not the runtime
> one), so it would help if the page allocator does not favour this range.
> Anyway, that's an optimisation to worry about later.
>
> > If we could find some way to easily support this either via CMA or CPU-less
> > NUMA nodes, that would be much preferable; even if we cannot cover each and
> > every future use case right now. I expect some issues with CXL+MTE either
> > way, but I am happy to be taught otherwise :)
>
> I think CXL+MTE is rather theoretical at the moment. Given that PCIe
> doesn't have any notion of MTE, more likely there would be some piece of
> interconnect that generates two memory accesses: one for data and the
> other for tags at a configurable offset (which may or may not be in the
> same CXL range).
>
> > Another thought I had was adding something like CMA memory characteristics.
> > Like, asking if a given CMA area/page supports tagging (i.e., flag for the
> > CMA area set?)?
>
> I don't think adding CMA memory characteristics helps much. The metadata
> allocation wouldn't go through cma_alloc() but rather
> alloc_contig_range() directly for a specific pfn corresponding to the
> data pages with PROT_MTE. The core mm code doesn't need to know about
> the tag storage layout.
>
> It's also unlikely for cma_alloc() memory to be mapped as PROT_MTE.
> That's typically coming from device drivers (DMA API) with their own
> mmap() implementation that doesn't normally set VM_MTE_ALLOWED (and
> therefore PROT_MTE is rejected).
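
For reference, the arm64 check that enforces this is roughly the
following (from arch/arm64/include/asm/mman.h, quoted from memory):

	static inline bool arch_validate_flags(unsigned long vm_flags)
	{
		if (!system_supports_mte())
			return true;

		/* only allow VM_MTE if VM_MTE_ALLOWED has been set previously */
		return !(vm_flags & VM_MTE) || (vm_flags & VM_MTE_ALLOWED);
	}

So a driver mmap() that never sets VM_MTE_ALLOWED makes any later
mprotect(PROT_MTE) fail on that mapping.
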
>
> What we need though is to prevent vma_alloc_folio() from allocating from
> a MIGRATE_CMA list if PROT_MTE (VM_MTE). I guess that's basically
> removing __GFP_MOVABLE in those cases. As long as we don't have large
> ZONE_MOVABLE areas, it shouldn't be an issue.
>
How about unsetting ALLOC_CMA if GFP_TAGGED?
Removing __GFP_MOVABLE may cause movable pages to be allocated in an
unmovable migratetype, which may worsen page fragmentation.
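
Something along these lines, as a sketch against the current allocator
(__GFP_TAGGED is the flag this series proposes, as I read it; the exact
hunk placement is only illustrative):

	/* Skip ALLOC_CMA for tagged allocations so PROT_MTE pages are
	 * never taken from MIGRATE_CMA pageblocks, while ordinary
	 * movable allocations keep falling back to CMA. */
	static inline unsigned int gfp_to_alloc_flags_cma(gfp_t gfp_mask,
							  unsigned int alloc_flags)
	{
	#ifdef CONFIG_CMA
		if (gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE &&
		    !(gfp_mask & __GFP_TAGGED))
			alloc_flags |= ALLOC_CMA;
	#endif
		return alloc_flags;
	}

This keeps the allocation itself MIGRATE_MOVABLE (so it does not pollute
unmovable pageblocks); it only steers it away from CMA areas.
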
> > When you need memory that supports tagging and have a page that does not
> > support tagging (CMA && taggable), simply migrate to !MOVABLE memory
> > (eventually we could also try adding !CMA).
> >
> > Was that discussed and what would be the challenges with that? Page
> > migration due to compaction comes to mind, but it might also be easy to
> > handle if we can just avoid CMA memory for that.
>
> IIRC that was because PROT_MTE pages would have to come only from
> !MOVABLE ranges. Maybe that's not such big deal.
>
Could you explain what it means that PROT_MTE pages have to come only
from !MOVABLE ranges? I don't understand this part very well.
Thanks,
Hyesoo.
> We'll give this a go and hopefully it simplifies the patches a bit (it
> will take a while as Alex keeps going on holiday ;)). In the meantime,
> I'm talking to the hardware people to see whether we can have MTE pages
> in the tag storage/metadata range. We'd still need to reserve about 0.1%
> of the RAM for the metadata corresponding to the tag storage range when
> used as data but that's negligible (1/32 of 1/32). So if some future
> hardware allows this, we can drop the page allocation restriction from
> the CMA range.
>
> > > though the latter is not guaranteed to avoid allocating memory from the
> > > range, it only makes that less likely. Both these options are less flexible
> > > in terms of size/alignment/placement.
> > >
> > > Maybe as a quick hack - only allow PROT_MTE from ZONE_NORMAL and
> > > configure the metadata range in ZONE_MOVABLE but at some point I'd
> > > expect some CXL-attached memory to support MTE with additional carveout
> > > reserved.
> >
> > I have no idea how we could possibly cleanly support memory hotplug in
> > virtual environments (virtual DIMMs, virtio-mem) with MTE. In contrast to
> > s390x storage keys, the approach that arm64 with MTE took here (exposing tag
> > memory to the VM) makes it rather hard and complicated.
>
> The current thinking is that the VM is not aware of the tag storage,
> that's entirely managed by the host. The host would treat the guest
> memory similarly to the PROT_MTE user allocations, reserve metadata etc.
>
> Thanks for the feedback so far, very useful.
>
> --
> Catalin
>