Date:   Mon, 3 Apr 2023 22:59:35 +0300
From:   Dmitry Osipenko <dmitry.osipenko@...labora.com>
To:     Emil Velikov <emil.velikov@...labora.com>
Cc:     David Airlie <airlied@...hat.com>,
        Gerd Hoffmann <kraxel@...hat.com>,
        Gurchetan Singh <gurchetansingh@...omium.org>,
        Chia-I Wu <olvaffe@...il.com>, Daniel Vetter <daniel@...ll.ch>,
        Rob Clark <robdclark@...il.com>,
        Marek Olšák <maraeo@...il.com>,
        Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@....com>,
        dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
        kernel@...labora.com, virtualization@...ts.linux-foundation.org
Subject: Re: [PATCH v4 1/2] drm/virtio: Refactor job submission code path

On 3/30/23 18:32, Emil Velikov wrote:
>> +static int virtio_gpu_dma_fence_wait(struct virtio_gpu_submit *submit,
>> +				     struct dma_fence *fence)
>> +{
>> +	struct dma_fence_unwrap itr;
>> +	struct dma_fence *f;
>> +	int err;
>> +
>> +	dma_fence_unwrap_for_each(f, &itr, fence) {
> The dma_fence_unwrap_for_each() change should be a separate patch,
> highlighting why we want it.

Good point. It is indeed a potential optimization for the in-fence
waiting, so it deserves to be split out and explained separately.
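
To illustrate what I mean: unwrapping the container fence lets the wait
loop look at each component fence individually, so for example fences
that are already signalled can be skipped without a CPU wait. A rough
sketch only (the dma_fence_is_signaled() shortcut is just one example of
such an optimization, not necessarily what the separate patch will do):

static int virtio_gpu_dma_fence_wait(struct virtio_gpu_submit *submit,
				     struct dma_fence *in_fence)
{
	struct dma_fence_unwrap itr;
	struct dma_fence *f;
	int err;

	dma_fence_unwrap_for_each(f, &itr, in_fence) {
		/* skip the CPU wait for already-signalled fences */
		if (dma_fence_is_signaled(f))
			continue;

		err = dma_fence_wait(f, true);
		if (err)
			return err;
	}

	return 0;
}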

>> +	ret = virtio_gpu_init_submit(&submit, exbuf, dev, file,
>> +				     fence_ctx, ring_idx);
>> +	if (ret)
>> +		goto cleanup;
>> +
>> +	ret = virtio_gpu_wait_in_fence(&submit);
>> +	if (ret)
>> +		goto cleanup;
>> +
> We have reshuffled the order around in_fence waiting, out_fence install,
> handles, cmdbuf, drm events, etc. Can we get that split up a bit, with
> some comments.
> 
> If it were me, I would keep the wait_in_fence early and inline
> virtio_gpu_init_submit (the nesting/abstraction seems a bit much). This
> means one can omit the virtio_gpu_submit::exbuf all together.

I tried inlining it, and that variant makes the code much less readable
to me.

The point of having wait_in_fence after submit_init is that it makes the
submission code path shorter: if we have to wait for the in-fence, then
once the fence signals there is no init left to do and we can move
directly to the next submission step.

Perhaps it also won't hurt to factor out the wait_fence from parse_deps
in the second patch and do all the waits right before locking the
buflist.
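
Roughly along these lines (virtio_gpu_parse_deps(),
virtio_gpu_parse_post_deps() and virtio_gpu_wait_fences() are
placeholder names, and the submit->buflist layout is assumed; this is
only to show where the waits would move to, not the actual patch):

static int virtio_gpu_submit_deps_and_lock(struct virtio_gpu_submit *submit)
{
	int err;

	/* collect syncobj in-fences without waiting on them */
	err = virtio_gpu_parse_deps(submit);
	if (err)
		return err;

	err = virtio_gpu_parse_post_deps(submit);
	if (err)
		return err;

	/* do all the CPU waits here, right before locking the buflist */
	err = virtio_gpu_wait_fences(submit);
	if (err)
		return err;

	return virtio_gpu_array_lock_resv(submit->buflist);
}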

-- 
Best regards,
Dmitry
