Message-ID: <Za6kuE4KUPuwk7+j@Asurada-Nvidia>
Date: Mon, 22 Jan 2024 09:24:08 -0800
From: Nicolin Chen <nicolinc@...dia.com>
To: Jason Gunthorpe <jgg@...dia.com>
CC: "will@...nel.org" <will@...nel.org>, Robin Murphy <robin.murphy@....com>,
	"joro@...tes.org" <joro@...tes.org>, "jean-philippe@...aro.org"
	<jean-philippe@...aro.org>, Alistair Popple <apopple@...dia.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-arm-kernel@...ts.infradead.org"
	<linux-arm-kernel@...ts.infradead.org>, "iommu@...ts.linux.dev"
	<iommu@...ts.linux.dev>
Subject: Re: [PATCH 1/3] iommu/io-pgtable-arm: Add nents_per_pgtable in
 struct io_pgtable_cfg

Hi Jason,

Thank you for the ideas!

On Mon, Jan 22, 2024 at 09:01:52AM -0400, Jason Gunthorpe wrote:
> On Sat, Jan 20, 2024 at 11:59:45AM -0800, Nicolin Chen wrote:
> > Hi Robin/Will,
> > 
> > On Tue, Aug 29, 2023 at 02:25:10PM -0700, Robin Murphy wrote:
> > > > Also, what we need is actually an arbitrary number for max_tlbi_ops.
> > > > And I think it could be independent of the page size, i.e. either a
> > > > 4K pgsize or a 64K pgsize could use the same max_tlbi_ops number,
> > > > because what eventually impacts the latency is the number of loops
> > > > of building/issuing commands.
> > > 
> > > Although there is perhaps a counter-argument for selective invalidation,
> > > that if you're using 64K pages to improve TLB hit rates vs. 4K, then you
> > > have more to lose from overinvalidation, so maybe a single threshold
> > > isn't so appropriate for both.
> > > 
> > > Yes, ultimately it all comes down to picking an arbitrary number, but
> > > the challenge is that we want to pick a *good* one, and ideally have
> > > some reasoning behind it. As Will suggested, copying what the mm layer
> > > does gives us an easy line of reasoning, even if it's just in the form
> > > of passing the buck. And that's actually quite attractive, since
> > > otherwise we'd then have to get into the question of what really is the
> > > latency of building and issuing commands, since that clearly depends on
> > > how fast the CPU is, and how fast the SMMU is, and how busy the SMMU is,
> > > and how large the command queue is, and how many other CPUs are also
> > > contending for the command queue... and very quickly it becomes hard to
> > > believe that any simple constant can be good for all possible systems.
> > 
> > So, here we have another request to optimize this number further:
> > although the merged arbitrary number copied from the MMU side does
> > fix the soft lockup, the iommu_unmap delay with a 500MB buffer is
> > still not satisfying on our testing chip, since the threshold for
> > max_tlbi_ops now sits at 512MB for a 64K pgsize (8192 * 64KB).
> > 
> > As Robin remarked, this could really be a case-by-case situation.
> > So, I wonder if we could rethink adding a configurable threshold
> > whose default value keeps the current setup matching the MMU side.
> > 
> > If this is acceptable, what would be the preferable way to expose
> > the configuration: a per-SMMU or a per-IOMMU-group sysfs node? I am
> > open to any other option too.
> 
> Maybe it should be more dynamic and you get xx ms to push
> invalidations otherwise it gives up and does invalidate all?
> 
> The busier the system the broader the invalidation?

Yeah, I think this could be a good one.
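
Just to make sure I read it right, maybe something like this quick
userspace model of the loop? (inv_range_one()/inv_all() and the 2ms
budget below are made-up placeholders, not the actual driver code.)

#define _POSIX_C_SOURCE 199309L
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define INV_BUDGET_NS (2 * 1000 * 1000) /* made-up 2ms budget */

/* Placeholders for issuing one TLBI cmd and a TLBI ALL/by-ASID */
static void inv_range_one(uint64_t iova) { (void)iova; }
static void inv_all(void) { }

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

static void inv_range(uint64_t iova, uint64_t size, uint64_t granule)
{
	uint64_t start = now_ns(), end = iova + size;

	for (; iova < end; iova += granule) {
		if (now_ns() - start > INV_BUDGET_NS) {
			/* Out of budget: give up on selective invalidation */
			inv_all();
			return;
		}
		inv_range_one(iova);
	}
}

int main(void)
{
	inv_range(0x40000000ULL, 512ULL << 20, 64 << 10); /* 512MB @ 64K */
	printf("done\n");
	return 0;
}

This way a busier system (slower command pushes) would naturally fall
back to the invalidate-all path sooner, which seems to match "the
busier the system the broader the invalidation".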

> Or do we need to measure at boot time invalidation performance and set
> a threshold that way?

I see. We could run an invalidation at the default max_tlbi_ops to
measure its delay in msec or usec, and then use that as the "xx ms"
threshold for the first idea.

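Roughly like this model, if I follow the boot-time idea correctly?
(issue_one_tlbi() and the 2ms target below are made up for
illustration; the real measurement would presumably live somewhere in
the SMMU probe path.)

#define _POSIX_C_SOURCE 199309L
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define CALIB_CMDS 8192 /* current default max_tlbi_ops for 64K pgsize */
#define TARGET_BUDGET_NS (2 * 1000 * 1000) /* made-up 2ms target */

/* Placeholder for building+issuing a single invalidation command */
static void issue_one_tlbi(void) { }

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/*
 * Time CALIB_CMDS invalidations once, then derive how many commands
 * fit in the target budget and use that as the threshold for
 * switching to invalidate-all.
 */
static uint64_t calibrate_max_tlbi_ops(void)
{
	uint64_t t0 = now_ns(), per_cmd_ns;
	int i;

	for (i = 0; i < CALIB_CMDS; i++)
		issue_one_tlbi();

	per_cmd_ns = (now_ns() - t0) / CALIB_CMDS;
	if (!per_cmd_ns)
		per_cmd_ns = 1;

	return TARGET_BUDGET_NS / per_cmd_ns;
}

int main(void)
{
	printf("calibrated max_tlbi_ops = %llu\n",
	       (unsigned long long)calibrate_max_tlbi_ops());
	return 0;
}
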
> Also, it seems to me that SVA use cases and, say, DMA API cases are
> somewhat different where we may be willing to wait longer for DMA API.

Hmm, the lockup that my patch fixed was for an SVA case that
doesn't seem to involve the DMA API:
https://lore.kernel.org/linux-iommu/20230901203904.4073-1-nicolinc@nvidia.com/

And the other lockup fix from Zhang, for a non-SVA case, doesn't
seem to involve the DMA API either:
https://lore.kernel.org/linux-iommu/e74ea905-d107-4202-97ca-c2c509e7aa1e@huawei.com/

Maybe we can treat the DMA API a bit differently. But I am not sure
about the justification for letting it wait longer. Mind elaborating?

Thanks
Nicolin
