Message-ID: <5e415baf-94d9-2723-3770-9a35f9fa6278@amd.com>
Date:   Mon, 30 Jan 2023 13:58:32 +0100
From:   Christian König <christian.koenig@....com>
To:     David Airlie <airlied@...hat.com>
Cc:     Danilo Krummrich <dakr@...hat.com>,
        Matthew Brost <matthew.brost@...el.com>, daniel@...ll.ch,
        bskeggs@...hat.com, jason@...kstrand.net, tzimmermann@...e.de,
        mripard@...nel.org, corbet@....net, nouveau@...ts.freedesktop.org,
        linux-kernel@...r.kernel.org, dri-devel@...ts.freedesktop.org,
        linux-doc@...r.kernel.org
Subject: Re: [PATCH drm-next 05/14] drm/nouveau: new VM_BIND uapi interfaces

On 27.01.23 at 21:25, David Airlie wrote:
> [SNIP]
>> What we have inside the kernel is the information what happens if an
>> address X is accessed. On AMD HW this can be:
>>
>> 1. Route to the PCIe bus because the mapped BO is stored in system memory.
>> 2. Route to the internal MC because the mapped BO is stored in local memory.
>> 3. Route to other GPUs in the same hive.
>> 4. Route to some doorbell to kick off other work.
>> ...
>> x. Ignore writes, return 0 on reads (this is what is used for sparse
>> mappings).
>> x+1. Trigger a recoverable page fault. This is used for things like SVA.
>> x+2. Trigger a non-recoverable page fault. This is used for things like
>> unmapped regions where access is illegal.
>>
>> All this is plus some hw specific caching flags.
>>
>> When Vulkan allocates a sparse VKBuffer what should happen is the following:
>>
>> 1. The Vulkan driver somehow figures out a VA region A..B for the
>> buffer. This can be in userspace (libdrm_amdgpu) or kernel (drm_mm), but
>> essentially is currently driver specific.
> There are NO plans to have drm_mm do VA region management, VA region
> management will be in userspace in Mesa. Can we just not bring that up again?

If we are talking about Mesa drivers then yes, that should work, because 
they can then implement all the hw specific quirks you need for VA 
allocation. If the VA allocation should be hw independent then we have a 
major problem here.

At least on AMD hw we have four different address spaces, and even if 
you know offhand which one you want to allocate from, you need to share 
your address space between Vulkan, VA-API and potentially even things 
like ROCm/OpenCL.

If we don't do that properly then the AMD user space tools for debugging 
and profiling (RMV, UMR etc...) won't work anymore.

> This is for GPU VA tracking, not management; if that makes it easier we
> could rename it.
>
>> 2. The kernel gets a request to map the VA range A..B as sparse, meaning
>> that it updates the page tables from A..B with the sparse setting.
>>
>> 3. User space asks kernel to map a couple of memory backings at location
>> A+1, A+10, A+15 etc....
> 3.5?
>
> Userspace asks the kernel to unmap A+1 so it can later map something
> else in there?
>
> What happens in that case, with a set of queued binds, do you just do
> a new sparse mapping for A+1, does userspace decide that?

Yes, exactly that. Essentially there is no unmap operation from the 
kernel's point of view.

You just tell the kernel what should happen when the hw tries to resolve 
address X.

What can happen is that the address potentially resolves to some buffer 
memory, is ignored for sparse bindings, or generates a fault. This is 
stuff which is most likely common to all drivers.
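Just to illustrate the idea (the names here are made up, not taken from 
any existing driver or uapi), you can think of what the kernel tracks 
per range roughly like this:

#include <linux/types.h>

/* Hypothetical sketch, illustrative names only: what should happen
 * when the hw tries to resolve an address inside a given VA range.
 */
enum va_range_behavior {
        VA_RANGE_MAPPED,        /* resolve to backing buffer memory */
        VA_RANGE_SPARSE,        /* ignore writes, return 0 on reads */
        VA_RANGE_FAULT,         /* raise a (non-)recoverable fault */
};

struct va_range {
        u64 start;                      /* first GPU VA covered */
        u64 end;                        /* last GPU VA covered + 1 */
        enum va_range_behavior behavior;
};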

But then, at least on newer AMD hardware, we also have things like 
raising a debug trap on access, or waiting forever until a debugger 
tells you to continue.....

It would be great if we could have the common stuff for a VA update 
IOCTL, e.g. in/out fences, range description (start, offset, end....), 
GEM handle, in a standardized structure shared by all drivers, while 
still being able to handle all the hw specific stuff as well.
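As a strawman only, with made up names and a layout that is of course up 
for discussion, the standardized part could look roughly like this:

#include <linux/types.h>

/* Purely illustrative, none of these names exist today: a possible
 * common VA update request. The fences, range description and GEM
 * handle are generic; hw specific bits go through a driver defined
 * flags field.
 */
struct drm_va_op {
        __u32 op;               /* map, unmap, map sparse, ... */
        __u32 gem_handle;       /* backing GEM object, 0 for none */
        __u64 gem_offset;       /* offset into the GEM object */
        __u64 va_start;         /* start of the GPU VA range */
        __u64 va_length;        /* length of the GPU VA range */
        __u32 in_fence;         /* syncobj to wait on before the update */
        __u32 out_fence;        /* syncobj to signal once it completed */
        __u64 flags;            /* hw specific, driver defined */
};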

Christian.

>
> Dave.
>
>> 4. The VKBuffer is de-allocated, userspace asks kernel to update region
>> A..B to not map anything (usually triggers a non-recoverable fault).
>>
>> When you want to unify this between hw drivers I strongly suggest
>> starting completely from scratch once more.
>>
>> First of all don't think about those mappings as VMAs, that won't work
>> because VMAs are usually something large. Think of this as individual
>> PTEs controlled by the application, similar to how COW mappings and struct
>> pages are handled inside the kernel.
>>
>> Then I would start with the VA allocation manager. You could probably
>> base that on drm_mm. We handle it differently in amdgpu currently, but I
>> think this is something we could change.
>>
>> Then come up with something close to the amdgpu VM system. I'm pretty
>> sure that should work for Nouveau and Intel XA as well. In other words
>> you just have a bunch of very very small structures which represent
>> mappings and a larger structure which combines all mappings of a specific
>> type, e.g. all mappings of a BO or all sparse mappings etc...
>>
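(Purely as an illustration of that last paragraph, with made up names 
rather than the actual amdgpu structures, that layout could look 
roughly like this:

#include <linux/list.h>
#include <linux/types.h>

/* Illustrative only: many tiny entries, one per mapping... */
struct gpu_mapping {
        struct list_head head;  /* linked into the owning set below */
        u64 va_start;           /* GPU VA this mapping starts at */
        u64 size;               /* size of the mapped range */
        u64 bo_offset;          /* offset into the backing BO, if any */
        u64 flags;              /* hw specific PTE flags */
};

/* ...and one larger structure combining all mappings of a specific
 * type, e.g. all mappings of a BO or all sparse mappings.
 */
struct gpu_mapping_set {
        struct list_head mappings;      /* list of struct gpu_mapping */
};
)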
>> Merging of regions is actually not mandatory. We don't do it in amdgpu
>> and can live with the additional mappings pretty well. But I think this
>> can differ between drivers.
>>
>> Regards,
>> Christian.
>>
