Message-ID: <9572ba57-5552-4543-a3b0-6097520a12a3@gmail.com>
Date: Fri, 24 Jan 2025 19:42:30 -0500
From: Demi Marie Obenour <demiobenour@...il.com>
To: "Huang, Honglei1" <Honglei1.Huang@....com>, Huang Rui
<ray.huang@....com>, virtualization@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org, Dmitry Osipenko
<dmitry.osipenko@...labora.com>, dri-devel@...ts.freedesktop.org,
David Airlie <airlied@...hat.com>, Gerd Hoffmann <kraxel@...hat.com>,
Gurchetan Singh <gurchetansingh@...omium.org>, Chia-I Wu
<olvaffe@...il.com>, Akihiko Odaki <akihiko.odaki@...nix.com>,
Lingshan Zhu <Lingshan.Zhu@....com>, Simona Vetter <simona.vetter@...ll.ch>
Subject: Re: [RFC PATCH 3/3] drm/virtio: implement blob userptr resource
object
On 1/8/25 12:05 PM, Simona Vetter wrote:
> On Fri, Dec 27, 2024 at 10:24:29AM +0800, Huang, Honglei1 wrote:
>>
>> On 2024/12/22 9:59, Demi Marie Obenour wrote:
>>> On 12/20/24 10:35 AM, Simona Vetter wrote:
>>>> On Fri, Dec 20, 2024 at 06:04:09PM +0800, Honglei Huang wrote:
>>>>> From: Honglei Huang <Honglei1.Huang@....com>
>>>>>
>>>>> A virtio-gpu userptr is based on an HMM notifier.
>>>>> It is used to let the host access guest userspace memory
>>>>> and to be notified of changes to that memory.
>>>>> This patch series is at a very early stage;
>>>>> userspace pages are currently pinned to ensure that
>>>>> host device memory operations are correct.
>>>>> Free and unmap operations on userspace memory can be
>>>>> handled by the MMU notifier; this is a simple and basic
>>>>> SVM feature for this patch series.
>>>>> The physical PFN update operation is split into
>>>>> two ops here. Evicted memory won't be used
>>>>> anymore but is remapped into the host again to achieve
>>>>> the same effect as hmm_range_fault.
>>>>
>>>> So in my opinion there are two ways to implement userptr that make sense:
>>>>
>>>> - pinned userptr with pin_user_pages(FOLL_LONGTERM). there is no mmu
>>>>   notifier
>>>>
>>>> - unpinned userptr where you entirely rely on the mmu notifier and do not
>>>>   hold any page references or page pins at all, for full SVM integration.
>>>>   This should use hmm_range_fault ideally, since that's the version that
>>>>   doesn't ever grab any page reference pins.
>>>>
>>>> All the in-between variants are imo really bad hacks, whether they hold a
>>>> page reference or a temporary page pin (which seems to be what you're
>>>> doing here). In much older kernels there was some justification for them,
>>>> because strange stuff happened over fork(), but with FOLL_LONGTERM this is
>>>> now all sorted out. So there's really only fully pinned, or true svm left
>>>> as clean design choices imo.
>>>>
>>>> With that background, why does pin_user_pages(FOLL_LONGTERM) not work for
>>>> you?
>>>
>>> +1 on using FOLL_LONGTERM. Fully dynamic memory management has a huge cost
>>> in complexity that pinning everything avoids. It also means the host never
>>> has to act on guest memory-reclaim requests, which removes complexity (and
>>> thus attack surface) on the host side. Finally, since this is for ROCm and
>>> not for graphics, I am less concerned about supporting systems that require
>>> swappable GPU VRAM.
>>
>> Hi Sima and Demi,
>>
>> I totally agree that the FOLL_LONGTERM flag is needed; I will add it in the
>> next version.
>>
>> And for the first, pinned variant, I think the MMU notifier is still
>> needed, because the userptr feature in the UMD is generally used like this:
>> registering a userptr is always explicitly invoked by user code, e.g.
>> "registerMemoryToGPU(userptrAddr, ...)", but there is no explicit API for
>> userptr release/free, at least in the hsakmt/KFD stack. The user just calls
>> "free(userptrAddr)", and the kernel driver then releases the userptr via
>> the MMU notifier callback. Virtio-GPU has no way to know that the user has
>> freed the userptr other than the MMU notifier, and in the UMD there is no
>> way to detect that free() was invoked. As far as I can see, the only way is
>> to use an MMU notifier in the virtio-GPU driver and free the corresponding
>> data on the host via virtio commands.
>>
>> And for the second way, using hmm_range_fault, there is a predictable issue
>> as far as I can see, at least in the hsakmt/KFD stack: memory may migrate
>> while the GPU/device is working. On bare metal, when memory is migrating,
>> the KFD driver pauses the device's compute work under mmap_write_lock, uses
>> hmm_range_fault to remap the migrated/evicted memory to the GPU, and then
>> resumes the device's compute work to ensure the data is correct. But with
>> the virtio-GPU driver, the migration happens in the guest kernel and the
>> evict MMU notifier callback fires in the guest. A virtio command can be
>> used to notify the host, but lacking mmap_write_lock protection in the host
>> kernel, the host will hold invalid data for a short period of time, which
>> may lead to problems. And that is hard to fix as far as I can see.
>>
>> I will extract some APIs into helpers as you requested, and I will refactor
>> the whole userptr implementation, using callbacks in the page-acquisition
>> path so that either the pin method or hmm_range_fault can be chosen in this
>> patch series.
>
> Ok, so if this is for svm, then you need full blast hmm, or the semantics
> are buggy. You cannot fake svm with pin(FOLL_LONGTERM) userptr, this does
> not work.
Is this still broken in the virtualized case? Page migration between host
and device memory is completely transparent to the guest kernel, so pinning
guest memory doesn't interfere with the host KMD at all. In fact, the host
KMD is not even aware of it.
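
For concreteness, "full blast hmm" in the guest driver would mean running
the canonical loop from Documentation/mm/hmm.rst, roughly as sketched
below. Only the hmm/mmu_notifier calls are real API; the function name and
virtio_gpu_userptr_remap() are placeholders I made up for illustration:

#include <linux/hmm.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>
#include <linux/sched/mm.h>

/* Hypothetical helper: push the refreshed PFNs to the host. */
int virtio_gpu_userptr_remap(unsigned long *pfns, unsigned long npages);

static int userptr_populate(struct mmu_interval_notifier *notifier,
			    struct mm_struct *mm,
			    unsigned long start, unsigned long end,
			    unsigned long *pfns)
{
	struct hmm_range range = {
		.notifier	= notifier,
		.start		= start,
		.end		= end,
		.hmm_pfns	= pfns,
		.default_flags	= HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
	};
	int ret;

	if (!mmget_not_zero(mm))
		return -EFAULT;
again:
	range.notifier_seq = mmu_interval_read_begin(notifier);
	mmap_read_lock(mm);
	ret = hmm_range_fault(&range);
	mmap_read_unlock(mm);
	if (ret) {
		if (ret == -EBUSY)	/* raced with an invalidation */
			goto again;
		goto out;
	}
	/*
	 * In a real driver this check and the use of the PFNs must sit
	 * under a lock that the invalidate() callback also takes.
	 */
	if (mmu_interval_read_retry(notifier, range.notifier_seq))
		goto again;
	ret = virtio_gpu_userptr_remap(pfns, (end - start) >> PAGE_SHIFT);
out:
	mmput(mm);
	return ret;
}
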
Allowing memory registered with AMDKFD to be pageable *by the guest* seems
like a bad idea to me. Paging would require a guest <=> host round-trip
for _each_ call to mmu_interval_notifier_ops::invalidate(). That’s going
to be _very_ slow if it happens with any regularity. Worse, the userspace
VMM will need to be notified if the GPU writes to the pages while the guest
expects them to be stable. Can this be done with userfaultfd, and if so,
is it even a good idea?
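
Before getting to userfaultfd: to make the per-invalidation cost concrete,
every invalidation in the guest would have to look something like the
sketch below, where virtio_gpu_cmd_userptr_invalidate() is a hypothetical
command that blocks until the host confirms it has stopped using the
pages. Only the mmu_interval_notifier API is real; the struct and names
are illustrative:

#include <linux/mmu_notifier.h>
#include <linux/mutex.h>

struct virtio_gpu_userptr {
	struct mmu_interval_notifier notifier;
	struct mutex lock;	/* serializes against users of the pages */
};

/* Hypothetical blocking virtio command; not a real function. */
void virtio_gpu_cmd_userptr_invalidate(struct virtio_gpu_userptr *ptr,
				       unsigned long start,
				       unsigned long end);

static bool
virtio_gpu_userptr_invalidate(struct mmu_interval_notifier *mni,
			      const struct mmu_notifier_range *range,
			      unsigned long cur_seq)
{
	struct virtio_gpu_userptr *ptr =
		container_of(mni, struct virtio_gpu_userptr, notifier);

	if (mmu_notifier_range_blockable(range))
		mutex_lock(&ptr->lock);
	else if (!mutex_trylock(&ptr->lock))
		return false;	/* core mm retries in a blockable context */

	mmu_interval_set_seq(mni, cur_seq);

	/*
	 * Guest <=> host round-trip on *every* invalidation: tell the
	 * host to drop its mapping and wait for the reply.
	 */
	virtio_gpu_cmd_userptr_invalidate(ptr, range->start, range->end);

	mutex_unlock(&ptr->lock);
	return true;
}

static const struct mmu_interval_notifier_ops virtio_gpu_userptr_ops = {
	.invalidate = virtio_gpu_userptr_invalidate,
};
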
The reason I am not sure that using userfaultfd to notify the guest of
changes is a good idea is that it seems intuitively rather risky. At a
minimum, it allows the guest to stall host accesses for an arbitrarily
long period of time, which I suspect will make exploiting race conditions
easier. Furthermore, this seems very prone to deadlocks. Suppose that
the guest causes a virtual device to access write-protected memory.
The VMM’s virtual device implementation will cause a userfaultfd
write-protect fault, which will then be passed to the guest to handle.
Suppose that resolving the fault requires allocating memory, which in
turn causes memory reclaim that waits for I/O on the same block device.
If the virtual device is single-threaded, you just deadlocked. Even
if it is not single-threaded, operations like live migration might never
complete. It might be possible for userspace to check the cause of a
write-protect fault and break the deadlock, but that is even more
complexity.
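
To be clear, the write-protect mechanism itself is easy to set up; it is
everything downstream of it that worries me. A sketch of what the VMM side
would have to do (userspace C, error handling collapsed; the helper name
and parameters are mine):

#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Arm write-protect notifications over a guest memory region. */
static int arm_uffd_wp(void *guest_base, size_t guest_size)
{
	int uffd = syscall(SYS_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	struct uffdio_api api = {
		.api = UFFD_API,
		.features = UFFD_FEATURE_PAGEFAULT_FLAG_WP,
	};
	struct uffdio_register reg = {
		.range = { .start = (unsigned long)guest_base,
			   .len   = guest_size },
		.mode  = UFFDIO_REGISTER_MODE_WP,
	};

	if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api) < 0 ||
	    ioctl(uffd, UFFDIO_REGISTER, &reg) < 0)
		return -1;
	/*
	 * From now on, any write to a page in this range that has been
	 * write-protected via UFFDIO_WRITEPROTECT blocks until a fault
	 * is read from uffd and resolved -- exactly the stall window
	 * described above.
	 */
	return uffd;
}
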
With FOLL_LONGTERM, this can’t happen. The guest will never try to
make the pages clean, so it never needs to write-protect them. This
means that the host does not need to worry about its device model
stalling forever and that there is no risk of deadlock. The only thing
I know will break is using writable file-backed memory with SVM, but
that seems like a very, _very_ niche thing to do as there is no
consistency guarantee. Read-only access would work fine.
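
By contrast, the pinned setup is a single up-front operation. A minimal
sketch of a guest-side helper (only the GUP calls are real API; the rest
is illustrative):

#include <linux/mm.h>

static long userptr_pin(unsigned long uaddr, int npages,
			struct page **pages)
{
	int pinned = pin_user_pages_fast(uaddr, npages,
					 FOLL_WRITE | FOLL_LONGTERM, pages);

	if (pinned < 0)
		return pinned;		/* e.g. -EFAULT on a bad range */
	if (pinned != npages) {
		/* Partial pin: undo and fail rather than track a hole. */
		unpin_user_pages(pages, pinned);
		return -EFAULT;
	}
	/*
	 * The pages stay pinned until unpin_user_pages() at teardown,
	 * so they cannot be migrated or reclaimed out from under the
	 * device in the meantime.
	 */
	return pinned;
}
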
> The other option is that hsakmt/kfd api is completely busted, and that's
> kinda not a kernel problem.
My understanding is that it _is_ busted, in that it is tied to address
spaces, not contexts. If my understanding is correct, the host-side
device model must create a separate process for each guest process that
wants to use KFD. Otherwise, different guest processes that use the same
GPU virtual address will conflict with each other.
--
Sincerely,
Demi Marie Obenour (she/her/hers)