Message-ID: <oepspnsohjueyafzk6c2qp2o7vhh4eax73po4byejacujyoiam@7avjrl6alhq2>
Date: Thu, 18 Sep 2025 22:05:21 +0300
From: Dmitry Baryshkov <dmitry.baryshkov@....qualcomm.com>
To: Rob Clark <rob.clark@....qualcomm.com>
Cc: Dmitry Baryshkov <lumag@...nel.org>,
Abhinav Kumar <abhinav.kumar@...ux.dev>,
Jessica Zhang <jessica.zhang@....qualcomm.com>,
Sean Paul <sean@...rly.run>,
Marijn Suijten <marijn.suijten@...ainline.org>,
David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
Sumit Semwal <sumit.semwal@...aro.org>,
Christian König <christian.koenig@....com>,
Konrad Dybcio <konradybcio@...nel.org>, linux-arm-msm@...r.kernel.org,
dri-devel@...ts.freedesktop.org, freedreno@...ts.freedesktop.org,
linux-kernel@...r.kernel.org, linux-media@...r.kernel.org,
linaro-mm-sig@...ts.linaro.org
Subject: Re: [PATCH v5 2/5] drm/msm: split VM_BIND from the rest of GEM VMA code
On Thu, Sep 18, 2025 at 07:46:32AM -0700, Rob Clark wrote:
> On Wed, Sep 17, 2025 at 8:51 PM Dmitry Baryshkov
> <dmitry.baryshkov@....qualcomm.com> wrote:
> >
> > In preparation for disabling GPU functionality, split the VM_BIND-related
> > functions (which are used only for the GPU) from the rest of the GEM VMA
> > implementation.
> >
> > Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@....qualcomm.com>
> > ---
> > drivers/gpu/drm/msm/Makefile | 1 +
> > drivers/gpu/drm/msm/msm_gem_vm_bind.c | 1116 +++++++++++++++++++++++++++++++
> > drivers/gpu/drm/msm/msm_gem_vma.c | 1177 +--------------------------------
> > drivers/gpu/drm/msm/msm_gem_vma.h | 105 +++
> > 4 files changed, 1225 insertions(+), 1174 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile
> > index 0c0dfb25f01b193b10946fae20138caf32cf0ed2..d7876c154b0aa2cb0164c4b1fb7900b1a42db46b 100644
> > --- a/drivers/gpu/drm/msm/Makefile
> > +++ b/drivers/gpu/drm/msm/Makefile
> > @@ -115,6 +115,7 @@ msm-y += \
> > msm_gem_shrinker.o \
> > msm_gem_submit.o \
> > msm_gem_vma.o \
> > + msm_gem_vm_bind.o \
> > msm_gpu.o \
> > msm_gpu_devfreq.o \
> > msm_io_utils.o \
>
> [snip]
>
> > diff --git a/drivers/gpu/drm/msm/msm_gem_vma.h b/drivers/gpu/drm/msm/msm_gem_vma.h
> > new file mode 100644
> > index 0000000000000000000000000000000000000000..f702f81529e72b86bffb4960408f1912bc65851a
> > --- /dev/null
> > +++ b/drivers/gpu/drm/msm/msm_gem_vma.h
> > @@ -0,0 +1,105 @@
> > +/* SPDX-License-Identifier: GPL-2.0-only */
> > +/*
> > + * Copyright (C) 2016 Red Hat
> > + * Author: Rob Clark <robdclark@...il.com>
> > + */
> > +
> > +#ifndef _MSM_GEM_VMA_H_
> > +#define _MSM_GEM_VMA_H_
> > +
> > +#define vm_dbg(fmt, ...) pr_debug("%s:%d: "fmt"\n", __func__, __LINE__, ##__VA_ARGS__)
> > +
> > +/**
> > + * struct msm_vm_map_op - create new pgtable mapping
> > + */
> > +struct msm_vm_map_op {
> > + /** @iova: start address for mapping */
> > + uint64_t iova;
> > + /** @range: size of the region to map */
> > + uint64_t range;
> > + /** @offset: offset into @sgt to map */
> > + uint64_t offset;
> > + /** @sgt: pages to map, or NULL for a PRR mapping */
> > + struct sg_table *sgt;
> > + /** @prot: the mapping protection flags */
> > + int prot;
> > +
> > + /**
> > + * @queue_id: The id of the submitqueue the operation is performed
> > + * on, or zero for (in particular) UNMAP ops triggered outside of
> > + * a submitqueue (ie. process cleanup)
> > + */
> > + int queue_id;
> > +};
> > +
> > +/**
> > + * struct msm_vm_unmap_op - unmap a range of pages from pgtable
> > + */
> > +struct msm_vm_unmap_op {
> > + /** @iova: start address for unmap */
> > + uint64_t iova;
> > + /** @range: size of region to unmap */
> > + uint64_t range;
> > +
> > + /** @reason: The reason for the unmap */
> > + const char *reason;
> > +
> > + /**
> > + * @queue_id: The id of the submitqueue the operation is performed
> > + * on, or zero for (in particular) UNMAP ops triggered outside of
> > + * a submitqueue (ie. process cleanup)
> > + */
> > + int queue_id;
> > +};
> > +
> > +static void
> > +vm_log(struct msm_gem_vm *vm, const char *op, uint64_t iova, uint64_t range, int queue_id)
>
> These would have to be static-inline
>
> But overall I'm not sure how I feel about this.. I guess the goal is
> to reduce the size of a kms-only driver? If so, I think you could do
> better with some ugly ifdef (for ex, you could also remove scheduler
> and other fields not used by kernel managed VMs from msm_gem_vm).
More or less so. I also wanted to separate the complicated parts from
the simple GEM parts, but I see your point too.
I was also trying to trim the dependencies, but this can be #ifdef'd.
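If we go the #ifdef route, I'd imagine something along these lines
(CONFIG_DRM_MSM_GPU is a made-up symbol here, just a rough sketch to
show the shape, not a tested patch):

  # drivers/gpu/drm/msm/Kconfig
  config DRM_MSM_GPU
          bool "MSM GPU (VM_BIND) support"
          depends on DRM_MSM
          default y

  # drivers/gpu/drm/msm/Makefile
  msm-$(CONFIG_DRM_MSM_GPU) += msm_gem_vm_bind.o

plus guarding the scheduler-related fields of msm_gem_vm and the
VM_BIND paths with #ifdef CONFIG_DRM_MSM_GPU / IS_ENABLED() checks.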
> I'm not sure how much the savings would be, or if it is worth the pain
> (ie. extra build configurations to test going forward, etc). Having
> no GPU doesn't seem like a case worth optimizing for, tbh. You could
> still have a single driver which binds to multiple different devices
> (ie. if # of GPUs != # of DPUs) without this patch, with no change in
> footprint.
Counting GPUs and DPUs isn't that easy, because this also makes things
asymmetric: some of the GPU/DPU pairs can be handled natively, while
others need to have buffers exported and then imported. We also have a
use case for splitting the GPU driver: for some workloads it would be
better to load just the GPU driver or just the display driver (possibly
replacing the other one with the proprietary driver).
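As for the static-inline note above: yes, if these helpers stay in the
header they would have to become static inline, e.g. (purely mechanical,
untested):

  static inline void
  vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op)
  {
          const char *reason = op->reason;

          if (!reason)
                  reason = "unmap";

          vm_log(vm, reason, op->iova, op->range, op->queue_id);

          vm->mmu->funcs->unmap(vm->mmu, op->iova, op->range);
  }

otherwise every compilation unit that includes msm_gem_vma.h but doesn't
use them trips the usual -Wunused-function warnings.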
>
> BR,
> -R
>
> > +{
> > + int idx;
> > +
> > + if (!vm->managed)
> > + lockdep_assert_held(&vm->mmu_lock);
> > +
> > + vm_dbg("%s:%p:%d: %016llx %016llx", op, vm, queue_id, iova, iova + range);
> > +
> > + if (!vm->log)
> > + return;
> > +
> > + idx = vm->log_idx;
> > + vm->log[idx].op = op;
> > + vm->log[idx].iova = iova;
> > + vm->log[idx].range = range;
> > + vm->log[idx].queue_id = queue_id;
> > + vm->log_idx = (vm->log_idx + 1) & ((1 << vm->log_shift) - 1);
> > +}
> > +
> > +static void
> > +vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op)
> > +{
> > + const char *reason = op->reason;
> > +
> > + if (!reason)
> > + reason = "unmap";
> > +
> > + vm_log(vm, reason, op->iova, op->range, op->queue_id);
> > +
> > + vm->mmu->funcs->unmap(vm->mmu, op->iova, op->range);
> > +}
> > +
> > +static int
> > +vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op)
> > +{
> > + vm_log(vm, "map", op->iova, op->range, op->queue_id);
> > +
> > + return vm->mmu->funcs->map(vm->mmu, op->iova, op->sgt, op->offset,
> > + op->range, op->prot);
> > +}
> > +
> > +int msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *_arg);
> > +int msm_gem_vm_sm_step_remap(struct drm_gpuva_op *op, void *arg);
> > +int msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void *_arg);
> > +
> > +int msm_gem_vm_sched_init(struct msm_gem_vm *vm, struct drm_device *drm);
> > +void msm_gem_vm_sched_fini(struct msm_gem_vm *vm);
> > +
> > +#endif
> >
> > --
> > 2.47.3
> >
--
With best wishes
Dmitry