Message-ID: <51c11147-4927-4ebc-9737-fd1eebe4e0bd@redhat.com>
Date: Fri, 7 Mar 2025 08:54:52 +0100
From: Jocelyn Falempe <jfalempe@...hat.com>
To: Matthew Wilcox <willy@...radead.org>,
Ryosuke Yasuoka <ryasuoka@...hat.com>, maarten.lankhorst@...ux.intel.com,
mripard@...nel.org, tzimmermann@...e.de, airlied@...il.com, simona@...ll.ch,
kraxel@...hat.com, gurchetansingh@...omium.org, olvaffe@...il.com,
akpm@...ux-foundation.org, urezki@...il.com, hch@...radead.org,
dmitry.osipenko@...labora.com, dri-devel@...ts.freedesktop.org,
linux-kernel@...r.kernel.org, virtualization@...ts.linux.dev,
linux-mm@...ck.org
Subject: Re: [PATCH drm-next 1/2] vmalloc: Add atomic_vmap
On 06/03/2025 16:52, Simona Vetter wrote:
> On Thu, Mar 06, 2025 at 02:24:51PM +0100, Jocelyn Falempe wrote:
>> On 06/03/2025 05:52, Matthew Wilcox wrote:
>>> On Thu, Mar 06, 2025 at 12:25:53AM +0900, Ryosuke Yasuoka wrote:
>>>> Some drivers can use vmap in drm_panic, however, vmap is sleepable and
>>>> takes locks. Since drm_panic will vmap in panic handler, atomic_vmap
>>>> requests pages with GFP_ATOMIC and maps KVA without locks and sleep.
>>>
>>> In addition to the implicit GFP_KERNEL allocations Vlad mentioned, how
>>> is this supposed to work?
>>>
>>>> + vn = addr_to_node(va->va_start);
>>>> +
>>>> + insert_vmap_area(va, &vn->busy.root, &vn->busy.head);
>>>
>>> If someone else is holding the vn->busy.lock because they're modifying the
>>> busy tree, you'll corrupt the tree. You can't just say "I can't take a
>>> lock here, so I won't bother". You need to figure out how to do something
>>> safe without taking the lock. For example, you could preallocate the
>>> page tables and reserve a vmap area when the driver loads that would
>>> then be usable for the panic situation. I don't know that we have APIs
>>> to let you do that today, but it's something that could be added.
>>>
>> Regarding the lock, it should be possible to use the trylock() variant, and
>> fail if the lock is already taken. (In the panic handler, only one CPU
>> remains active, so a lock that is already held will never be released anyway.)
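(For illustration, the trylock idea could look roughly like the sketch below. It assumes the vmap node's busy tree is guarded by a spinlock reachable as vn->busy.lock, which may not match the actual field layout in mm/vmalloc.c; the point is only that the panic path bails out instead of spinning on a lock that can never be released:)

```c
/* Hypothetical sketch: in the panic path, never block on the busy-tree
 * lock. If another CPU held it when the panic fired, give up on the
 * drm_panic output rather than corrupt the tree or deadlock. */
if (!spin_trylock(&vn->busy.lock))
	return NULL;	/* lock holder is stopped; it will never unlock */
insert_vmap_area(va, &vn->busy.root, &vn->busy.head);
spin_unlock(&vn->busy.lock);
```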
>>
>> If we need to pre-allocate the page table and reserve the vmap area, maybe
>> it would be easier to just always vmap() the primary framebuffer, so it can
>> be used in the panic handler?
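(A minimal sketch of that "always vmap the framebuffer" idea, with made-up driver names: the sleeping vmap() and its implicit GFP_KERNEL allocations happen once at probe time, so the panic handler only dereferences an existing mapping and never touches core mm:)

```c
/* Hypothetical sketch: struct and function names are illustrative.
 * Map the primary framebuffer at driver load, keep the mapping for
 * the panic handler. */
struct my_fb {
	struct page **pages;	/* framebuffer backing pages */
	unsigned int nr_pages;
	void *panic_vaddr;	/* kept mapped for the panic path */
};

static int my_fb_map_for_panic(struct my_fb *fb)
{
	/* vmap() may sleep and allocate; fine here, forbidden in panic */
	fb->panic_vaddr = vmap(fb->pages, fb->nr_pages, VM_MAP, PAGE_KERNEL);
	return fb->panic_vaddr ? 0 : -ENOMEM;
}
```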
>
> Yeah I really don't like the idea of creating some really brittle one-off
> core mm code just so we don't have to vmap a buffer unconditionally. I
> think even better would be if drm_panic can cope with non-linear buffers,
> it's entirely fine if the drawing function absolutely crawls and sets each
> individual byte ...
It already supports some non-linear buffers, like NVIDIA's block-linear:
https://elixir.bootlin.com/linux/v6.13.5/source/drivers/gpu/drm/nouveau/dispnv50/wndw.c#L606
And I've also sent some patches to support Intel's 4-tile and Y-tile formats:
https://patchwork.freedesktop.org/patch/637200/?series=141936&rev=5
https://patchwork.freedesktop.org/patch/637202/?series=141936&rev=5
Hopefully color compression can be disabled on Intel's GPUs; otherwise that
would be a bit harder to handle than tiling.
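(The per-pixel drawing such formats need is just address arithmetic. The sketch below is a deliberately simplified generic tiled layout, not the exact NVIDIA block-linear or Intel Y-tile math: pixels are stored as a linear array of tile_w x tile_h tiles, row-major within each tile, cpp bytes per pixel.)

```c
#include <stddef.h>

/* Simplified sketch (not an actual hardware tiling layout): byte offset
 * of pixel (x, y) in a framebuffer laid out as fb_width_tiles tiles per
 * row, each tile tile_w x tile_h pixels, cpp bytes per pixel. */
static size_t tiled_pixel_offset(unsigned int x, unsigned int y,
				 unsigned int fb_width_tiles,
				 unsigned int tile_w, unsigned int tile_h,
				 unsigned int cpp)
{
	/* which tile the pixel falls in, in linear tile order */
	size_t tile = (size_t)(y / tile_h) * fb_width_tiles + x / tile_w;
	/* row-major position inside that tile */
	size_t in_tile = (size_t)(y % tile_h) * tile_w + x % tile_w;

	return (tile * tile_w * tile_h + in_tile) * cpp;
}
```

A panic drawing function can then set one pixel at a time through such an offset helper, however slowly, without needing a linear mapping of the whole buffer.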
>
> The only thing you're allowed to do in panic is try_lock on a raw spinlock
> (plus some really scary lockless tricks), imposing that on core mm sounds
> like a non-starter to me.
>
> Cheers, Sima