Message-ID: <52de3aca-41b1-471e-8f87-1a77de547510@arm.com>
Date: Wed, 29 Nov 2023 16:48:43 +0000
From: Robin Murphy <robin.murphy@....com>
To: Jason Gunthorpe <jgg@...pe.ca>,
Pasha Tatashin <pasha.tatashin@...een.com>
Cc: akpm@...ux-foundation.org, alex.williamson@...hat.com,
alim.akhtar@...sung.com, alyssa@...enzweig.io,
asahi@...ts.linux.dev, baolu.lu@...ux.intel.com,
bhelgaas@...gle.com, cgroups@...r.kernel.org, corbet@....net,
david@...hat.com, dwmw2@...radead.org, hannes@...xchg.org,
heiko@...ech.de, iommu@...ts.linux.dev, jasowang@...hat.com,
jernej.skrabec@...il.com, jonathanh@...dia.com, joro@...tes.org,
kevin.tian@...el.com, krzysztof.kozlowski@...aro.org,
kvm@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
linux-doc@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-rockchip@...ts.infradead.org,
linux-samsung-soc@...r.kernel.org, linux-sunxi@...ts.linux.dev,
linux-tegra@...r.kernel.org, lizefan.x@...edance.com,
marcan@...can.st, mhiramat@...nel.org, mst@...hat.com,
m.szyprowski@...sung.com, netdev@...r.kernel.org,
paulmck@...nel.org, rdunlap@...radead.org, samuel@...lland.org,
suravee.suthikulpanit@....com, sven@...npeter.dev,
thierry.reding@...il.com, tj@...nel.org, tomas.mudrunka@...il.com,
vdumpa@...dia.com, virtualization@...ts.linux.dev, wens@...e.org,
will@...nel.org, yu-cheng.yu@...el.com
Subject: Re: [PATCH 08/16] iommu/fsl: use page allocation function provided by
iommu-pages.h

On 28/11/2023 11:50 pm, Jason Gunthorpe wrote:
> On Tue, Nov 28, 2023 at 06:00:13PM -0500, Pasha Tatashin wrote:
>> On Tue, Nov 28, 2023 at 5:53 PM Robin Murphy <robin.murphy@....com> wrote:
>>>
>>> On 2023-11-28 8:49 pm, Pasha Tatashin wrote:
>>>> Convert iommu/fsl_pamu.c to use the new page allocation functions
>>>> provided in iommu-pages.h.
>>>
>>> Again, this is not a pagetable. This thing doesn't even *have* pagetables.
>>>
>>> Similar to patches #1 and #2 where you're lumping in configuration
>>> tables which belong to the IOMMU driver itself, as opposed to pagetables
>>> which effectively belong to an IOMMU domain's user. But then there are
>>> still drivers where you're *not* accounting similar configuration
>>> structures, so I really struggle to see how this metric is useful when
>>> it's so completely inconsistent in what it's counting :/
>>
>> The whole IOMMU subsystem allocates a significant amount of kernel
>> locked memory that we want to at least observe. The new field in
>> vmstat does just that: it reports ALL buddy allocator memory that
>> IOMMU allocates. However, for accounting purposes, I agree, we need to
>> do better, and separate at least iommu pagetables from the rest.
>>
>> We can separate the metric into two:
>> iommu pagetable only
>> iommu everything
>>
>> or into three:
>> iommu pagetable only
>> iommu dma
>> iommu everything
>>
>> What do you think?
>
> I think I said this at LPC - if you want to have fine grained
> accounting of memory by owner you need to go talk to the cgroup people
> and come up with something generic. Adding ever finer open-coded
> category breakdowns just for iommu doesn't make a lot of sense.
>
> You can make some argument that the pagetable memory should be counted
> because kvm counts its shadow memory, but I wouldn't go into further
> detail than that with hand coded counters..

Right, pagetable memory is interesting since it's something that any
random kernel user can indirectly allocate via iommu_domain_alloc() and
iommu_map(), and some of those users may even be doing so on behalf of
userspace. I have no objection to accounting and potentially applying
limits to *that*.
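
(Purely as a sketch of what accounting that could look like, and not a
claim about how this series does it: pagetable pages allocated on behalf
of a user could be charged to the allocating task's memory cgroup with
GFP_KERNEL_ACCOUNT, at which point the existing cgroup limits apply. The
helper names below are invented for illustration.)

#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Hypothetical helper: allocate one zeroed pagetable page and charge it
 * to the current task's memory cgroup. GFP_KERNEL_ACCOUNT is just
 * GFP_KERNEL | __GFP_ACCOUNT, the existing mechanism for memcg-charged
 * kernel allocations.
 */
static void *example_alloc_pgtable_page(void)
{
	struct page *page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);

	return page ? page_address(page) : NULL;
}

static void example_free_pgtable_page(void *table)
{
	free_page((unsigned long)table);
}

Something along those lines keeps the charging generic (it falls out of
memcg) rather than requiring IOMMU-specific counters.
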
Beyond that, though, there is nothing special about "the IOMMU
subsystem". The amount of memory an IOMMU driver needs to allocate for
itself in order to function is not of interest beyond curiosity; it just
is what it is. Limiting it would only break the IOMMU, and if a user
thinks it's "too much", the only actionable thing that might help is to
physically remove devices from the system. Similar for DMA buffers; it
might be intriguing to account those, but it's not really an actionable
metric - in the overwhelming majority of cases you can't simply tell a
driver to allocate less than what it needs. And that is of course
assuming we were to account *all* DMA buffers, since whether they
happen to have an IOMMU translation or not is irrelevant (we'd have
already accounted the pagetables as pagetables if so).
I bet "the networking subsystem" also consumes significant memory on the
same kind of big systems where IOMMU pagetables would be of any concern.
I believe some of the some of the "serious" NICs can easily run up
hundreds of megabytes if not gigabytes worth of queues, SKB pools, etc.
- would you propose accounting those too?
Thanks,
Robin.