Date: Thu, 11 Jan 2024 10:31:37 +0100
From: Christian König <christian.koenig@....com>
To: "Zhang, Julia" <Julia.Zhang@....com>,
 Gurchetan Singh <gurchetansingh@...omium.org>, Chia-I Wu
 <olvaffe@...il.com>, David Airlie <airlied@...hat.com>,
 Gerd Hoffmann <kraxel@...hat.com>,
 "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
 "dri-devel@...ts.freedesktop.org" <dri-devel@...ts.freedesktop.org>,
 "amd-gfx@...ts.freedesktop.org" <amd-gfx@...ts.freedesktop.org>,
 "virtualization@...ts.linux-foundation.org"
 <virtualization@...ts.linux-foundation.org>
Cc: "Deucher, Alexander" <Alexander.Deucher@....com>,
 David Airlie <airlied@...il.com>, Erik Faye-Lund <kusmabite@...il.com>,
 "Olsak, Marek" <Marek.Olsak@....com>,
 "Pelloux-Prayer, Pierre-Eric" <Pierre-eric.Pelloux-prayer@....com>,
 "Huang, Honglei1" <Honglei1.Huang@....com>,
 "Chen, Jiqian" <Jiqian.Chen@....com>, "Huang, Ray" <Ray.Huang@....com>
Subject: Re: [PATCH 1/1] drm/virtio: Implement device_attach

Am 11.01.24 um 09:52 schrieb Zhang, Julia:
>
> On 2024/1/10 18:21, Daniel Vetter wrote:
>> On Wed, Jan 10, 2024 at 05:56:28PM +0800, Julia Zhang wrote:
>>> drm_gem_map_attach() requires drm_gem_object_funcs.get_sg_table to be
>>> implemented, or else it returns -ENOSYS. Virtio does not implement
>>> get_sg_table for vram objects. To fix this, add a new device_attach
>>> callback that calls drm_gem_map_attach() for shmem objects and returns 0
>>> for vram objects, instead of calling drm_gem_map_attach() for both kinds
>>> of object.
>>>
>>> Signed-off-by: Julia Zhang <julia.zhang@....com>
>>> ---
>>>   drivers/gpu/drm/virtio/virtgpu_prime.c | 14 +++++++++++++-
>>>   1 file changed, 13 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c
>>> index 44425f20d91a..f0b0ff6f3813 100644
>>> --- a/drivers/gpu/drm/virtio/virtgpu_prime.c
>>> +++ b/drivers/gpu/drm/virtio/virtgpu_prime.c
>>> @@ -71,6 +71,18 @@ static void virtgpu_gem_unmap_dma_buf(struct dma_buf_attachment *attach,
>>>   	drm_gem_unmap_dma_buf(attach, sgt, dir);
>>>   }
>>>   
>>> +static int virtgpu_gem_device_attach(struct dma_buf *dma_buf,
>>> +				     struct dma_buf_attachment *attach)
>>> +{
>>> +	struct drm_gem_object *obj = attach->dmabuf->priv;
>>> +	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
>>> +
>>> +	if (virtio_gpu_is_vram(bo))
>>> +		return 0;
>> You need to reject attach here because these vram buffer objects cannot be
>> used by any other driver. In that case dma_buf_attach _must_ fail, not
>> silently succeed.
>>
> Do you mean these vram buffer objects should not be imported by other drivers?
>
>> Because if it silently succeeds then the subsequent dma_buf_map_attachment
>> will blow up because you don't have the ->get_sg_table hook implemented.
>>
> I saw that only this call stack would call ->get_sg_table:
>
> #0 ->get_sg_table
> #1 drm_gem_map_dma_buf
> #2 virtgpu_gem_map_dma_buf
> #3 __map_dma_buf
> #4 dma_buf_dynamic_attach
> #5 amdgpu_gem_prime_import
>
> This stack is for shmem objects and requires ->get_sg_table to be implemented.
>
>
> But for a vram object, __map_dma_buf is called like this:
>
> #0 sg_alloc_table
> #1 virtio_gpu_vram_map_dma_buf
> #2 virtgpu_gem_map_dma_buf
> #3 __map_dma_buf
> #4 dma_buf_dynamic_attach
> #5 amdgpu_gem_prime_import
>
> which does not call ->get_sg_table but instead allocates an sg table and fills it from the vram object.

Yeah and exactly that won't work for this use case.

The VRAM of amdgpu is exposed through an MMIO BAR the guest can't access.

> Before __map_dma_buf, the call stack of virtgpu_gem_device_attach is:
>
> #0 virtgpu_gem_device_attach
> #1 virtio_dma_buf_attach
> #2 dma_buf_dynamic_attach
> #3 amdgpu_gem_prime_import
>
> So my problem is that to realize the dGPU prime feature in a VM, I actually want dma_buf_attach to succeed
> for vram objects so that the passthrough dGPU can import these vram objects and blit data into them.

That is a completely futile effort; the dGPU physically can't access that 
memory, or otherwise we would have a major security hole in the VM.

> If we return -EBUSY here, then amdgpu_gem_prime_import will fail and the whole feature will fail.

Yeah and that is completely intentional. Let's discuss that use case 
AMD-internally first.

Regards,
Christian.

>
>> Per the documentation the error code for this case must be -EBUSY, see the
>> section for the attach hook here:
>>
>> https://dri.freedesktop.org/docs/drm/driver-api/dma-buf.html#c.dma_buf_ops
>>
>> Since you're looking into this area, please make sure there are no other
>> similar mistakes in the virtio code.
>>
>> Also can you please make a kerneldoc patch for struct virtio_dma_buf_ops
>> to improve the documentation there? I think it would be good to move those
>> to the inline style and then at least put a kernel-doc hyperlink to struct
>> dma_buf_ops.attach and mention that attach must fail for non-shareable
>> buffers.
>>
>> In general the virtio_dma_buf kerneldoc seems to be on the "too minimal,
>> explains nothing" side of things :-/
> OK, let me take a look and try to do it.
>
> Regards,
> Julia
>
>> Cheers, Sima
>>
>>> +
>>> +	return drm_gem_map_attach(dma_buf, attach);
>>> +}
>>> +
>>>   static const struct virtio_dma_buf_ops virtgpu_dmabuf_ops =  {
>>>   	.ops = {
>>>   		.cache_sgt_mapping = true,
>>> @@ -83,7 +95,7 @@ static const struct virtio_dma_buf_ops virtgpu_dmabuf_ops =  {
>>>   		.vmap = drm_gem_dmabuf_vmap,
>>>   		.vunmap = drm_gem_dmabuf_vunmap,
>>>   	},
>>> -	.device_attach = drm_gem_map_attach,
>>> +	.device_attach = virtgpu_gem_device_attach,
>>>   	.get_uuid = virtgpu_virtio_get_uuid,
>>>   };
>>>   
>>> -- 
>>> 2.34.1
>>>

