Message-ID: <Y/kA1Tp5wIZSiY4q@kbusch-mbp.dhcp.thefacebook.com>
Date: Fri, 24 Feb 2023 11:24:21 -0700
From: Keith Busch <kbusch@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Christoph Hellwig <hch@...radead.org>,
Keith Busch <kbusch@...a.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Bryan O'Donoghue <bryan.odonoghue@...aro.org>
Subject: Re: [PATCH] dmapool: push new blocks in ascending order
On Thu, Feb 23, 2023 at 12:41:37PM -0800, Andrew Morton wrote:
> On Tue, 21 Feb 2023 11:07:32 -0700 Keith Busch <kbusch@...nel.org> wrote:
>
> > On Tue, Feb 21, 2023 at 10:02:34AM -0800, Christoph Hellwig wrote:
> > > On Tue, Feb 21, 2023 at 08:54:00AM -0800, Keith Busch wrote:
> > > > From: Keith Busch <kbusch@...nel.org>
> > > >
> > > > Some users of the dmapool need their allocations to happen in ascending
> > > > order. The recent optimizations pushed the blocks in reverse order, so
> > > > restore the previous behavior by linking the next available block from
> > > > low-to-high.
> > >
> > > Who are those users?
> > >
> > > Also should we document this behavior somewhere so that it isn't
> > > accidentally changed again some time in the future?
> >
> > usb/chipidea/udc.c qh_pool called "ci_hw_qh".
>
> It would be helpful to know why these users need this side-effect. Did
> the drivers break? Or just get slower?
The affected driver was reported to be unusable without this behavior.
> Are those drivers misbehaving by assuming this behavior? Should we
I do think they're using the wrong API. You shouldn't use the dmapool if your
blocks need to be arranged in contiguous address order; they should just use
dma_alloc_coherent() directly instead.
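To illustrate what I mean, here's a rough sketch (the names NUM_QH, QH_SIZE and
struct my_qh_area are made up for this example, not taken from the chipidea
driver): a driver that needs its descriptors laid out contiguously in ascending
address order can allocate one coherent buffer and carve it up itself, instead
of depending on the order in which dma_pool_alloc() happens to hand blocks back:

#include <linux/dma-mapping.h>
#include <linux/errno.h>

#define NUM_QH	32	/* illustrative element count */
#define QH_SIZE	64	/* illustrative per-element size */

struct my_qh_area {
	void		*cpu;	/* kernel virtual address of element 0 */
	dma_addr_t	dma;	/* bus address of element 0 */
};

static int my_qh_area_alloc(struct device *dev, struct my_qh_area *area)
{
	area->cpu = dma_alloc_coherent(dev, NUM_QH * QH_SIZE,
				       &area->dma, GFP_KERNEL);
	if (!area->cpu)
		return -ENOMEM;
	return 0;
}

/* Element i is guaranteed to sit QH_SIZE bytes above element i - 1. */
static void *my_qh_virt(struct my_qh_area *area, unsigned int i)
{
	return area->cpu + i * QH_SIZE;
}

static dma_addr_t my_qh_dma(struct my_qh_area *area, unsigned int i)
{
	return area->dma + i * QH_SIZE;
}

That gives the driver the ordering guarantee explicitly, rather than as a
side-effect of the pool's internal free-list layout.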
> require that they be altered instead of forever constraining the dmapool
> implementation in this fashion?
This change isn't really constraining the dmapool anywhere it matters; it only
pins down the ordering of a one-time page initialization.
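For reference, here's a rough sketch of what "linking the next available block
from low-to-high" amounts to. This is illustrative only, not the actual
mm/dmapool.c code, and struct dma_block here is made up for the example: with a
LIFO free stack, chaining a new page's blocks from the end of the page back
toward its start leaves the lowest address on top, so allocations come back in
ascending order:

#include <linux/types.h>

struct dma_block {
	struct dma_block *next_block;
};

static struct dma_block *chain_blocks_ascending(void *vaddr, size_t page_size,
						size_t block_size,
						struct dma_block *next)
{
	size_t offset = (page_size / block_size) * block_size;

	/* Walk backwards so the first block popped is the lowest address. */
	while (offset) {
		struct dma_block *block;

		offset -= block_size;
		block = vaddr + offset;
		block->next_block = next;
		next = block;
	}

	return next;	/* new head of the free stack: lowest address first */
}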
As for altering those drivers, I'll reach out to someone on that side for
comment (I'm not currently familiar with the affected subsystem).