Date: Thu, 25 Jan 2024 09:23:00 -0800
From: Nicolin Chen <nicolinc@...dia.com>
To: Jason Gunthorpe <jgg@...dia.com>
CC: "will@...nel.org" <will@...nel.org>, Robin Murphy <robin.murphy@....com>,
	"joro@...tes.org" <joro@...tes.org>, "jean-philippe@...aro.org"
	<jean-philippe@...aro.org>, Alistair Popple <apopple@...dia.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-arm-kernel@...ts.infradead.org"
	<linux-arm-kernel@...ts.infradead.org>, "iommu@...ts.linux.dev"
	<iommu@...ts.linux.dev>
Subject: Re: [PATCH 1/3] iommu/io-pgtable-arm: Add nents_per_pgtable in
 struct io_pgtable_cfg

On Thu, Jan 25, 2024 at 09:55:37AM -0400, Jason Gunthorpe wrote:
> On Tue, Jan 23, 2024 at 04:11:09PM -0800, Nicolin Chen wrote:
> > > prevented strongly. Broadly speaking if SVA is pushing too high an
> > > invalidation workload then we need to aggressively trim it, and do so
> > > dynamically. Certainly we should not have a tunable that has to be set
> > > right to avoid soft lockup.
> > > 
> > > A tunable to improve performance, perhaps, but not to achieve basic
> > > correctness.
> > 
> > So, should we make an optional tunable only for those who care
> > about performance? Though I think having a tunable would just
> > fix both issues.
> 
> When the soft lockup issue is solved you can consider if a tunable is
> still interesting..

Yeah, it would be on top of the soft lockup fix. I assume we are
still going with your change (arm_smmu_inv_range_too_big), though
I wonder if we should apply it before your rework series so it
can land as a bug fix.
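
For the sake of discussion, a minimal sketch of what that check
might look like. The function name is from this thread, but the
per-device cap, the command-count model, and the fallback below are
all assumptions for illustration, not the actual patch:

	static bool arm_smmu_inv_range_too_big(struct arm_smmu_device *smmu,
					       size_t size, size_t granule)
	{
		/* Assumed model: one TLBI command per granule in the range. */
		unsigned long nr_cmds = size >> ilog2(granule);

		/* max_inv_cmds is a hypothetical per-device cap. */
		return nr_cmds > smmu->max_inv_cmds;
	}

	/* A caller would then degrade to an ASID-wide invalidation: */
	if (arm_smmu_inv_range_too_big(smmu, size, granule))
		arm_smmu_tlb_inv_asid(smmu, asid);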

> > > Maybe it is really just a simple thing - compute how many invalidation
> > > commands are needed, if they don't all fit in the current queue space,
> > > then do an invalidate all instead?
> > 
> > The queue could actually have a large space, but one large
> > invalidation would be divided into batches that have to execute
> > back-to-back. And the batch size is 64 commands in the 64-bit
> > case, which might be too small as a cap.
> 
> Yes, some notable code reorganizing would be needed to implement
> something like this.
> 
> Broadly I'd sketch sort of:
> 
>  - Figure out how fast the HW can execute a lot of commands
>  - The above should drive some XX maximum number of commands, maybe we
>    need to measure at boot, IDK
>  - Strongly time bound SVA invalidation:
>    * No more than XX commands, if more needed then push invalidate
>      all
>    * All commands must fit in the available queue space, if more
>      needed then push invalidate all
>  - The total queue depth must not be larger than YY based on the
>    retire rate so that even a full queue will complete invalidation
>    below the target time.
> 
> A tunable indicating what the SVA time bound target should be might be
> appropriate..
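
(For reference on the "64 commands" figure above: the driver sizes
its command batches by the machine word, so a large invalidation is
emitted as back-to-back batches roughly per the arithmetic below;
nr_cmds here is an assumed input, not a real driver variable.)

	/*
	 * CMDQ_BATCH_ENTRIES is BITS_PER_LONG in arm-smmu-v3.h, i.e. 64
	 * on a 64-bit kernel, so nr_cmds invalidation commands go out as
	 * roughly this many back-to-back batches, regardless of how deep
	 * the command queue itself is:
	 */
	nr_batches = DIV_ROUND_UP(nr_cmds, CMDQ_BATCH_ENTRIES);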

Thanks for listing it out. I will draft something with that (rough
sketch below). Should we confine it to SVA only, or apply it to
non-DMA callers in general?
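
Roughly, I am picturing something along these lines. Every name
below (the max_sva_inv_cmds field, queue_free_space(), and the
range helper) is a placeholder for illustration, not real driver
code:

	static void arm_smmu_sva_tlb_inv(struct arm_smmu_domain *smmu_domain,
					 unsigned long iova, size_t size,
					 size_t granule)
	{
		struct arm_smmu_device *smmu = smmu_domain->smmu;
		unsigned long nr_cmds = size >> ilog2(granule);

		/*
		 * max_sva_inv_cmds: the "XX" bound above, derived from a
		 * measured retire rate and the (tunable) latency target.
		 */
		if (nr_cmds > smmu->max_sva_inv_cmds ||
		    nr_cmds > queue_free_space(&smmu->cmdq.q))
			/* Too many to stay time-bounded: flush the whole ASID. */
			arm_smmu_tlb_inv_asid(smmu, smmu_domain->cd.asid);
		else
			/* Stand-in for the existing per-range invalidation path. */
			arm_smmu_tlb_inv_range(smmu_domain, iova, size, granule);
	}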

Nicolin
