Message-Id: <20230223124137.e6fe921659e6f6f1c10668b6@linux-foundation.org>
Date: Thu, 23 Feb 2023 12:41:37 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Keith Busch <kbusch@...nel.org>
Cc: Christoph Hellwig <hch@...radead.org>,
Keith Busch <kbusch@...a.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Bryan O'Donoghue <bryan.odonoghue@...aro.org>
Subject: Re: [PATCH] dmapool: push new blocks in ascending order
On Tue, 21 Feb 2023 11:07:32 -0700 Keith Busch <kbusch@...nel.org> wrote:
> On Tue, Feb 21, 2023 at 10:02:34AM -0800, Christoph Hellwig wrote:
> > On Tue, Feb 21, 2023 at 08:54:00AM -0800, Keith Busch wrote:
> > > From: Keith Busch <kbusch@...nel.org>
> > >
> > > Some users of the dmapool need their allocations to happen in ascending
> > > order. The recent optimizations pushed the blocks in reverse order, so
> > > restore the previous behavior by linking the next available block from
> > > low-to-high.
> >
> > Who are those users?
> >
> > Also should we document this behavior somewhere so that it isn't
> > accidentally changed again some time in the future?
>
> usb/chipidea/udc.c qh_pool called "ci_hw_qh".
It would be helpful to know why these users need this side-effect. Did
the drivers break? Or just get slower?
Are those drivers misbehaving by assuming this behavior? Should we
require that they be altered instead of forever constraining the dmapool
implementation in this fashion?