Message-ID: <d768c2b8-1649-6565-0367-a0e07cc01b03@linaro.org>
Date: Fri, 24 Feb 2023 22:28:30 +0000
From: Bryan O'Donoghue <bryan.odonoghue@...aro.org>
To: Keith Busch <kbusch@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Christoph Hellwig <hch@...radead.org>,
Keith Busch <kbusch@...a.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] dmapool: push new blocks in ascending order
On 24/02/2023 18:24, Keith Busch wrote:
> On Thu, Feb 23, 2023 at 12:41:37PM -0800, Andrew Morton wrote:
>> On Tue, 21 Feb 2023 11:07:32 -0700 Keith Busch <kbusch@...nel.org> wrote:
>>
>>> On Tue, Feb 21, 2023 at 10:02:34AM -0800, Christoph Hellwig wrote:
>>>> On Tue, Feb 21, 2023 at 08:54:00AM -0800, Keith Busch wrote:
>>>>> From: Keith Busch <kbusch@...nel.org>
>>>>>
>>>>> Some users of the dmapool need their allocations to happen in ascending
>>>>> order. The recent optimizations pushed the blocks in reverse order, so
>>>>> restore the previous behavior by linking the next available block from
>>>>> low-to-high.
>>>>
>>>> Who are those users?
>>>>
>>>> Also should we document this behavior somewhere so that it isn't
>>>> accidentally changed again some time in the future?
>>>
>>> usb/chipidea/udc.c qh_pool called "ci_hw_qh".
>>
>> It would be helpful to know why these users need this side-effect. Did
>> the drivers break? Or just get slower?
>
> The affected driver was reported to be unusable without this behavior.
>
>> Are those drivers misbehaving by assuming this behavior? Should we
>
> I do think they're using the wrong API. You shouldn't use the dmapool if
> your blocks need to be arranged in contiguous address order. They should just
> use dma_alloc_coherent() directly instead.
>
>> require that they be altered instead of forever constraining the dmapool
>> implementation in this fashion?
>
> This change isn't really constraining dmapool where it matters. It's just an
> unexpected one-time initialization thing.
>
> As far as altering those drivers, I'll reach out to someone on that side for
> comment (I'm currently not familiar with the affected subsystem).
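For anyone joining the thread here, the behavioural difference is easy to miss,
so here is a toy sketch of the two chaining orders (illustrative only, not the
actual mm/dmapool.c code; it assumes each block is at least pointer-sized):

/*
 * Toy illustration: a dmapool-style page carved into fixed-size blocks,
 * with the free blocks chained through their own storage.  The first
 * allocation pops the list head, so the chaining direction decides
 * whether callers see the lowest or the highest address first.
 */
#include <stddef.h>

struct block {
	struct block *next_free;
};

/* Push each block on the head while walking forward: the head ends up at
 * the highest address, so allocations come back in descending order. */
static struct block *chain_descending(char *page, size_t blk_sz, size_t nr)
{
	struct block *head = NULL;
	size_t i;

	for (i = 0; i < nr; i++) {
		struct block *b = (struct block *)(page + i * blk_sz);

		b->next_free = head;
		head = b;
	}
	return head;
}

/* Link each block to the one after it: the head stays at the lowest
 * address, so allocations come back in ascending order. */
static struct block *chain_ascending(char *page, size_t blk_sz, size_t nr)
{
	size_t i;

	for (i = 0; i < nr; i++) {
		struct block *b = (struct block *)(page + i * blk_sz);

		b->next_free = (i + 1 < nr) ?
			(struct block *)(page + (i + 1) * blk_sz) : NULL;
	}
	return (struct block *)page;
}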
We can always change this driver; I'm fine to do that in parallel or instead
(roughly along the lines of the sketch below). The symptom we see without this
change is a silent failure, so I just wonder: are we really the _only_ code
path that would be affected absent the change in this patch?
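To make "use dma_alloc_coherent() directly" concrete, a rough, hypothetical
sketch of what that could look like for a driver that wants its descriptors
contiguous and in ascending address order (the names and sizes below are made
up, this is not the chipidea code):

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/gfp.h>

#define NR_QH	32	/* hypothetical descriptor count */
#define QH_SZ	64	/* hypothetical descriptor size/alignment */

struct qh_area {
	void		*cpu;	/* base CPU address */
	dma_addr_t	dma;	/* base DMA address */
};

static int qh_area_alloc(struct device *dev, struct qh_area *area)
{
	/*
	 * One coherent allocation; descriptor i then lives at
	 * area->cpu + i * QH_SZ and area->dma + i * QH_SZ, so the layout
	 * no longer depends on dma_pool allocation order.
	 */
	area->cpu = dma_alloc_coherent(dev, NR_QH * QH_SZ,
				       &area->dma, GFP_KERNEL);
	return area->cpu ? 0 : -ENOMEM;
}

static void qh_area_free(struct device *dev, struct qh_area *area)
{
	dma_free_coherent(dev, NR_QH * QH_SZ, area->cpu, area->dma);
}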
---
bod