Message-ID: <d7cbbab2-8fe0-4a10-8b06-e47da955865e@redhat.com>
Date: Tue, 3 Jun 2025 18:25:57 +0200
From: David Hildenbrand <david@...hat.com>
To: Jiri Bohac <jbohac@...e.cz>
Cc: Baoquan He <bhe@...hat.com>, Vivek Goyal <vgoyal@...hat.com>,
Dave Young <dyoung@...hat.com>, kexec@...ts.infradead.org,
Philipp Rudo <prudo@...hat.com>, Donald Dutile <ddutile@...hat.com>,
Pingfan Liu <piliu@...hat.com>, Tao Liu <ltao@...hat.com>,
linux-kernel@...r.kernel.org, David Hildenbrand <dhildenb@...hat.com>,
Michal Hocko <mhocko@...e.cz>
Subject: Re: [PATCH v4 4/5] kdump: wait for DMA to finish when using CMA
On 03.06.25 17:59, Jiri Bohac wrote:
> On Tue, Jun 03, 2025 at 03:15:03PM +0200, David Hildenbrand wrote:
>> On 30.05.25 22:29, Jiri Bohac wrote:
>>> When re-using the CMA area for kdump there is a risk of pending DMA into
>>> pinned user pages in the CMA area.
>>>
>>> Pages that are pinned long-term are migrated away from CMA, so these are
>>> not a concern. Pages pinned without FOLL_LONGTERM remain in the CMA and may
>>> possibly be the source or destination of a pending DMA transfer.
>>
>> I'll note that we right now do have an upstream BUG where that is sometimes
>> not the case. I mentioned it previously that such bugs will be a problem :(
>>
>> https://lkml.kernel.org/r/20250523023709epcms1p236d4f55b79adb9366ec1cf6d5792b06b@epcms1p2
>
> I'll just reiterate the whole purpose of this patchset, as
> added to Documentation:
I know, but stating "these are not a concern" when they are currently a
concern upstream is a bit suboptimal. :)
I'd phrase it more like "Pages residing in CMA areas can usually not get
long-term pinned, so long-term pinning is typically not a concern. BUGs
in the kernel might still lead to long-term pinning of such pages if
everything goes wrong."
Or sth like that.
>>> +static void crash_cma_clear_pending_dma(void)
>>> +{
>>> + unsigned int s = cma_dma_timeout_sec;
>>> +
>>> + if (!crashk_cma_cnt)
>>> + return;
>>> +
>>> + while (s--)
>>> + mdelay(1000);
>>
>> Any reason we cannot do it in a single mdelay() invocation?
>>
>> mdelay() already is a loop around udelay on larger values IIUC.
>
> No good reasons ;)
> I just wanted to prevent a totally theoretical overflow (if cma_dma_timeout_sec were ever made configurable).
> I also anticipated someone might want to add some progress printks into the loop (without verifying whether
> that's even possible in this context).
>
> If you want, I have no problem changing this to:
> + mdelay(cma_dma_timeout_sec * 1000);
Probably good enough. Or just hard-code 10s and call it a day. :)
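
For the record, a minimal sketch of that single-invocation variant
(assuming cma_dma_timeout_sec stays a small constant like the current
10 seconds, so the multiplication cannot overflow):

static void crash_cma_clear_pending_dma(void)
{
	if (!crashk_cma_cnt)
		return;

	/* mdelay() already loops over udelay() for large values. */
	mdelay(cma_dma_timeout_sec * 1000);
}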
--
Cheers,
David / dhildenb