[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <CAHk-=whogEk1UJfU3E7aW18PDYRbdAzXta5J0ECg=CB5=sCe7g@mail.gmail.com>
Date: Tue, 18 Apr 2023 10:36:34 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: John Garry <john.g.garry@...cle.com>
Cc: Vasant Hegde <vasant.hegde@....com>,
Robin Murphy <robin.murphy@....com>, joro@...tes.org,
will@...nel.org, iommu@...ts.linux.dev,
linux-kernel@...r.kernel.org, Jakub Kicinski <kuba@...nel.org>
Subject: Re: [PATCH v4] iommu: Optimise PCI SAC address trick
On Tue, Apr 18, 2023 at 3:20 AM John Garry <john.g.garry@...cle.com> wrote:
>
> JFYI, Since you are using NVMe, you could also alternatively try
> something like which I did for some SCSI storage controller drivers to
> limit the request_queue max_sectors soft limit, like:
That patch is not only whitespace-damaged, it's randomly missing one
'+' character so it makes no sense even ignoring the whitespace
problems. _and_ it has a nonsensical cast to 'unsigned int' which
makes that 'min()' possibly do crazy and invalid things (ie imagine
dma_opt_mapping_size() returning 4GB).
You can't cast things to the smaller size just to get rid of a
warning, for chrissake!
In fact, even without the cast, it seems entirely broken, since the
fallback for dma_opt_mapping_size() is to return 0 (admittedly _that_
case only happens with HAS_DMA=n).
Finally, doing this inside the
if (ctrl->max_hw_sectors) {
conditional seems entirely wrong, since any dma mapping limits would
be entirely independent of any driver maximum hw size, and in fact
*easier* to hit if the block device itself doesn't have any max
limits.
So please burn that patch in the darkest pits of hell and let's try to
forget it ever existed. Ok?
Also, shouldn't any possible dma mapping size affect not
'max_sectors', but 'max_segment_size'? At least the docs imply that
dma_opt_mapping_size() is about the max size of a _single_ mapping,
not of the whole thing?
Anyway, if this is actually an issue, to the point that it's now being
discussed for a _second_ block driver subsystem, then shouldn't the
queue handling just do this all automatically, instead of adding
random crap to random block driver architectures?
And no, I don't know this code, so maybe I'm entirely missing
something, but that patch just raised my hackles enough that I had to
say something.
Linus
Powered by blists - more mailing lists