Message-ID: <20230630100252.7ff6421d@collabora.com>
Date:   Fri, 30 Jun 2023 10:02:52 +0200
From:   Boris Brezillon <boris.brezillon@...labora.com>
To:     Danilo Krummrich <dakr@...hat.com>
Cc:     airlied@...il.com, daniel@...ll.ch, tzimmermann@...e.de,
        mripard@...nel.org, corbet@....net, christian.koenig@....com,
        bskeggs@...hat.com, Liam.Howlett@...cle.com,
        matthew.brost@...el.com, alexdeucher@...il.com, ogabbay@...nel.org,
        bagasdotme@...il.com, willy@...radead.org, jason@...kstrand.net,
        dri-devel@...ts.freedesktop.org, nouveau@...ts.freedesktop.org,
        linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
        Donald Robson <donald.robson@...tec.com>,
        Dave Airlie <airlied@...hat.com>
Subject: Re: [PATCH drm-next v6 02/13] drm: manager to keep track of GPUs VA
 mappings

Hi Danilo,

On Fri, 30 Jun 2023 00:25:18 +0200
Danilo Krummrich <dakr@...hat.com> wrote:

> + *	int driver_gpuva_remap(struct drm_gpuva_op *op, void *__ctx)
> + *	{
> + *		struct driver_context *ctx = __ctx;
> + *
> + *		drm_gpuva_remap(ctx->prev_va, ctx->next_va, &op->remap);
> + *
> + *		drm_gpuva_unlink(op->remap.unmap->va);
> + *		kfree(op->remap.unmap->va);
> + *
> + *		if (op->remap.prev) {
> + *			drm_gpuva_link(ctx->prev_va);

I ended up switching to dma_resv-based locking for the GEMs and I
wonder what the locking is supposed to look like in the async-mapping
case, where we insert/remove the VA nodes in the drm_sched::run_job()
path.

What I have right now is something like:

	dma_resv_lock(vm->resv);

	// split done in drm_gpuva_sm_map(), each iteration
	// of the loop is a call to the driver ->[re,un]map()
	// hook
	for_each_sub_op() {
		// Private BOs have their resv field pointing to the
		// VM resv and we take the VM resv lock before calling
		// drm_gpuva_sm_map()
		if (vm->resv != gem->resv)
			dma_resv_lock(gem->resv);

		drm_gpuva_[un]link(va);
		gem_[un]pin(gem);

		if (vm->resv != gem->resv)
			dma_resv_unlock(gem->resv);
	}

	dma_resv_unlock(vm->resv);

In practice, I don't expect things to deadlock, because the VM resv is
not supposed to be taken outside the VM context and the locking order
is always the same (VM lock first, then each shared BO
taken/released independently). Still, I'm not super thrilled by this
nested locking, and I'm wondering if we shouldn't instead have a first
pass collecting the locks in a drm_exec context, and only then have
the operations executed. IOW, something like this:

	drm_exec_init(exec, DRM_EXEC_IGNORE_DUPLICATES);
	drm_exec_until_all_locked(exec) {
		// vm->dummy_gem is a dummy GEM object I use to make
		// the VM participate in the locking without having to
		// teach drm_exec how to deal with raw dma_resv objects.
		ret = drm_exec_lock_obj(exec, vm->dummy_gem);
		drm_exec_retry_on_contention(exec);
		if (ret)
			return ret;

		// Could take the form of drm_gpuva_sm_[un]map_acquire_locks()
		// helpers
		for_each_sub_op() {
			ret = drm_exec_lock_obj(exec, gem);
			drm_exec_retry_on_contention(exec);
			if (ret)
				return ret;
		}
	}

	// each iteration of the loop is a call to the driver
	// ->[re,un]map() hook
	for_each_sub_op() {
		...
		gem_[un]pin_locked(gem);
		drm_gpuva_[un]link(va);
		...
	}

	drm_exec_fini(exec);

Don't know if I got this right, or if I'm just confused again by how
the drm_gpuva API is supposed to be used.
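
For completeness, here's roughly how I imagine the submission path
would use such acquire_locks() helpers. Everything below is
hypothetical: drm_gpuva_sm_map_acquire_locks() doesn't exist, it just
stands for the lock-collection pass sketched above, and the
drm_gpuva_sm_map() arguments are only indicative:

	// Pass 1: collect and take all resv locks (VM dummy GEM +
	// every GEM touched by the sub-ops) through drm_exec.
	ret = drm_gpuva_sm_map_acquire_locks(mgr, exec, addr, range);
	if (ret)
		return ret;

	// Pass 2: locks are held, so the ->[re,un]map() hooks can
	// link/unlink VAs and pin/unpin GEMs without taking any
	// resv lock themselves.
	ret = drm_gpuva_sm_map(mgr, ctx, addr, range, obj, offset);

	drm_exec_fini(exec);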

Regards,

Boris

> + *			ctx->prev_va = NULL;
> + *		}
> + *
> + *		if (op->remap.next) {
> + *			drm_gpuva_link(ctx->next_va);
> + *			ctx->next_va = NULL;
> + *		}
> + *
> + *		return 0;
> + *	}
