Message-ID: <Z8kp9Z9VgTpQmV9d@casper.infradead.org>
Date: Thu, 6 Mar 2025 04:52:05 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Ryosuke Yasuoka <ryasuoka@...hat.com>
Cc: maarten.lankhorst@...ux.intel.com, mripard@...nel.org,
tzimmermann@...e.de, airlied@...il.com, simona@...ll.ch,
kraxel@...hat.com, gurchetansingh@...omium.org, olvaffe@...il.com,
akpm@...ux-foundation.org, urezki@...il.com, hch@...radead.org,
dmitry.osipenko@...labora.com, jfalempe@...hat.com,
dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
virtualization@...ts.linux.dev, linux-mm@...ck.org
Subject: Re: [PATCH drm-next 1/2] vmalloc: Add atomic_vmap
On Thu, Mar 06, 2025 at 12:25:53AM +0900, Ryosuke Yasuoka wrote:
> Some drivers can use vmap in drm_panic; however, vmap is sleepable and
> takes locks. Since drm_panic calls vmap from the panic handler,
> atomic_vmap requests pages with GFP_ATOMIC and maps the KVA without
> taking locks or sleeping.
In addition to the implicit GFP_KERNEL allocations Vlad mentioned, how
is this supposed to work?
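
The GFP_ATOMIC page allocation is only half the story; the mapping half
also allocates.  A rough sketch of where (the call chain is from memory,
so double-check against mm/vmalloc.c; error unwinding omitted):

	#include <linux/gfp.h>
	#include <linux/mm_types.h>

	/* Filling a page array atomically is the straightforward part: */
	static int fill_pages_atomic(struct page **pages, unsigned int count)
	{
		unsigned int i;

		for (i = 0; i < count; i++) {
			pages[i] = alloc_page(GFP_ATOMIC | __GFP_ZERO);
			if (!pages[i])
				return -ENOMEM;
		}
		return 0;
	}

	/*
	 * But wiring those pages into fresh kernel virtual addresses goes
	 * through vmap_pages_range() -> ... -> pte_alloc_kernel(), which
	 * allocates page-table pages with GFP_KERNEL and can sleep.
	 */
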
> + vn = addr_to_node(va->va_start);
> +
> + insert_vmap_area(va, &vn->busy.root, &vn->busy.head);
If someone else is holding the vn->busy.lock because they're modifying the
busy tree, you'll corrupt the tree. You can't just say "I can't take a
lock here, so I won't bother". You need to figure out how to do something
safe without taking the lock. For example, you could preallocate the
page tables and reserve a vmap area when the driver loads, which would
then be usable in the panic situation. I don't know that we have APIs
to let you do that today, but it's something that could be added.
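
Roughly the shape I mean (names are illustrative; get_vm_area() exists
today, but panic_map_pages() below is made up and is exactly the kind of
API that would need adding):

	#include <linux/vmalloc.h>
	#include <linux/mm_types.h>

	static struct vm_struct *panic_vm;

	/* Hypothetical API that would need to be added to mm: map @count
	 * pages into an already-reserved, already-page-table-populated
	 * area, taking no locks and allocating nothing. */
	void *panic_map_pages(struct vm_struct *area, struct page **pages,
			      unsigned int count);

	/* Driver probe: sleeping and locking are fine here, so take the
	 * normal path to reserve KVA (this inserts the vmap_area into the
	 * busy tree under the proper lock).  Page tables for the range
	 * would also be preallocated at this point. */
	static int foo_reserve_panic_mapping(unsigned long size)
	{
		panic_vm = get_vm_area(size, VM_MAP);
		return panic_vm ? 0 : -ENOMEM;
	}

	/* Panic handler: no allocation, no tree insertion, no locks --
	 * only write PTEs into the range reserved above. */
	static void *foo_panic_vmap(struct page **pages, unsigned int count)
	{
		return panic_map_pages(panic_vm, pages, count);
	}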