Message-ID: <ZbK8xxdfeuJ7NQ8E@Asurada-Nvidia>
Date: Thu, 25 Jan 2024 11:55:51 -0800
From: Nicolin Chen <nicolinc@...dia.com>
To: Jason Gunthorpe <jgg@...dia.com>
CC: "will@...nel.org" <will@...nel.org>, Robin Murphy <robin.murphy@....com>,
"joro@...tes.org" <joro@...tes.org>, "jean-philippe@...aro.org"
<jean-philippe@...aro.org>, Alistair Popple <apopple@...dia.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>, "iommu@...ts.linux.dev"
<iommu@...ts.linux.dev>
Subject: Re: [PATCH 1/3] iommu/io-pgtable-arm: Add nents_per_pgtable in
struct io_pgtable_cfg

On Thu, Jan 25, 2024 at 01:47:28PM -0400, Jason Gunthorpe wrote:
> On Thu, Jan 25, 2024 at 09:23:00AM -0800, Nicolin Chen wrote:
> > > When the soft lockup issue is solved you can consider if a tunable is
> > > still interesting..
> >
> > Yea, it would be on top of the soft lockup fix. I assume we are
> > still going with your change: arm_smmu_inv_range_too_big, though
> > I wonder if we should apply it before your rework series to make
> > it a bug fix..
>
> It depends what change you settle on..

I mean your arm_smmu_inv_range_too_big patch. Should it be a bug
fix CCing the stable tree? My previous SVA fix was, by the way.

> > > > > Maybe it is really just a simple thing - compute how many invalidation
> > > > > commands are needed, if they don't all fit in the current queue space,
> > > > > then do an invalidate all instead?
> > > >
> > > > The queue could actually have a large space. But one large-size
> > > > invalidation would be divided into batches that have to execute
> > > > back-to-back. And the batch size is 64 commands in 64-bit case,
> > > > which might be too small as a cap.
> > >
> > > Yes, some notable code reorganizing would be needed to implement
> > > something like this
> > >
> > > Broadly I'd sketch sort of:
> > >
> > > - Figure out how fast the HW can execute a lot of commands
> > > - The above should drive some XX maximum number of commands, maybe we
> > > need to measure at boot, IDK
> > > - Strongly time bound SVA invalidation:
> > > * No more than XX commands, if more needed then push invalidate
> > > all
> > > * All commands must fit in the available queue space, if more
> > > needed then push invalidate all
> > > - The total queue depth must not be larger than YY based on the
> > > retire rate so that even a full queue will complete invalidation
> > > below the target time.
> > >
> > > A tunable indicating what the SVA time bound target should be might be
> > > appropriate..
> >
> > Thanks for listing it out. I will draft something with that, and
> > should we just confine it to SVA or non DMA callers in general?
>
> Also, how much of this SVA issue is multithreaded? Will multiple
> command queues improve anything?

From the measurements, the bottleneck is mostly the SMMU consuming
the commands with a single CMDQ HW, so multithreading is unlikely
to help. And VCMDQ only provides a multi-queue interface/wrapper
for VM isolation.
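
For the draft, I am thinking of a decision helper along the lines
below -- just a rough sketch to confirm I read your outline right.
The inv_queue struct, its fields and the retire-rate math are all
placeholders I made up for illustration, not anything from the
driver:

  #include <stdbool.h>
  #include <stddef.h>

  /* Placeholder types/fields, only for this sketch */
  struct inv_queue {
          size_t space_avail;  /* free command slots right now */
          size_t retire_rate;  /* cmds the HW retires per usec, measured at boot */
          size_t target_us;    /* SVA invalidation time bound (the tunable) */
  };

  /* Worst case: one ranged TLBI command per granule-sized step */
  static size_t cmds_for_range(size_t size, size_t granule)
  {
          return (size + granule - 1) / granule;
  }

  /*
   * Fall back to invalidate-all when the ranged invalidation would
   * either blow the time budget (the "XX" cap) or not fit in the
   * space currently available in the queue.
   */
  static bool sva_inv_should_fall_back(const struct inv_queue *q,
                                       size_t size, size_t granule)
  {
          size_t cmds = cmds_for_range(size, granule);
          size_t cap = q->retire_rate * q->target_us;

          return cmds > cap || cmds > q->space_avail;
  }

i.e. if the range needs more commands than either the time-bound
cap or the space left in the queue, we would push an invalidate-all
instead of issuing the ranged commands.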

Thanks
Nic