Message-ID: <CACSVV00VzOfTDh2sKst+POzkZ-5MH+0BDY-GVB2WKTyONRrHjw@mail.gmail.com>
Date: Tue, 17 Jun 2025 05:48:52 -0700
From: Rob Clark <rob.clark@....qualcomm.com>
To: Danilo Krummrich <dakr@...hat.com>
Cc: dri-devel@...ts.freedesktop.org, linux-arm-msm@...r.kernel.org,
        freedreno@...ts.freedesktop.org,
        Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
        Maxime Ripard <mripard@...nel.org>,
        Thomas Zimmermann <tzimmermann@...e.de>,
        David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
        open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/2] drm/gpuvm: Add locking helpers

On Tue, Jun 17, 2025 at 2:51 AM Danilo Krummrich <dakr@...hat.com> wrote:
>
> On Mon, Jun 16, 2025 at 03:25:08PM -0700, Rob Clark wrote:
> > On Mon, Jun 16, 2025 at 2:39 PM Danilo Krummrich <dakr@...hat.com> wrote:
> > >
> > > On Sat, Jun 14, 2025 at 08:03:20AM -0700, Rob Clark wrote:
> > > > On Sat, Jun 14, 2025 at 3:39 AM Danilo Krummrich <dakr@...hat.com> wrote:
> > > > >
> > > > > On Fri, Jun 13, 2025 at 04:57:03PM -0700, Rob Clark wrote:
> > > > > > For UNMAP/REMAP steps we may need to lock objects that are not
> > > > > > explicitly listed in the VM_BIND ioctl in order to tear down unmapped
> > > > > > VAs.  These helpers handle locking/preparing the needed objects.
> > > > >
> > > > > Yes, that's a common use-case. I think drivers typically iterate through their
> > > > > drm_gpuva_ops to lock those objects.
> > > > >
> > > > > I had a look at your link [1] and it seems that you keep a list of ops as well by
> > > > > calling vm_op_enqueue() with a new struct msm_vm_op from the callbacks.
> > > > >
> > > > > Please note that for exactly this case there is the op_alloc callback in
> > > > > struct drm_gpuvm_ops, such that you can allocate a custom op type (i.e. struct
> > > > > msm_vm_op) that embeds a struct drm_gpuva_op.
> > > >
> > > > I did use drm_gpuvm_sm_xyz_ops_create() in an earlier iteration of my
> > > > VM_BIND series, but it wasn't quite what I was after.  I wanted to
> > > > apply the VM updates immediately to avoid issues with a later
> > > > map/unmap overlapping an earlier map, which
> > > > drm_gpuvm_sm_xyz_ops_create() doesn't really handle.  I'm not even
> > > > sure why this isn't a problem for other drivers unless userspace is
> > > > providing some guarantees.
> > >
> > > The drm_gpuva_ops are usually used in a pattern like this.
> > >
> > >         vm_bind {
> > >                 for_each_vm_bind_operation {
>                             drm_gpuvm_sm_xyz_ops_create();
> > >                         drm_gpuva_for_each_op {
> > >                                 // modify drm_gpuvm's interval tree
> > >                                 // pre-allocate memory
> > >                                 // lock and prepare objects
> > >                         }
> > >                 }
> > >
> > >                 drm_sched_entity_push_job();
> > >         }
> > >
> > >         run_job {
> > >                 for_each_vm_bind_operation {
> > >                         drm_gpuva_for_each_op {
> > >                                 // modify page tables
> > >                         }
> > >                 }
> > >         }
> > >
> > >         run_job {
> > >                 for_each_vm_bind_operation {
> > >                         drm_gpuva_for_each_op {
> > >                                 // free page table structures, if any
> > >                                 // free unused pre-allocated memory
> > >                         }
> > >                 }
> > >         }
> > >
> > > What did you do instead to handle overlapping map/unmap? Even more
> > > interesting, what are you doing now?
> >
> > From what I can tell, the drivers using drm_gpuva_for_each_op()/etc are
> > doing drm_gpuva_remove() while iterating the ops list..
> > drm_gpuvm_sm_xyz_ops_create() itself does not modify the VM.  So this
> > can only really work if you perform one MAP or UNMAP at a time, or at
> > least if you process the VM-modifying part of the ops list before
> > proceeding to the next op.
>
> (Added the drm_gpuvm_sm_xyz_ops_create() step above.)
>
> I went through the code you posted [1] and conceptually you're implementing
> exactly the pattern I described above, i.e. you do:
>
>         vm_bind {
>                 for_each_vm_bind_operation {
>                         drm_gpuvm_sm_xyz_exec_lock();
>                 }
>
>                 for_each_vm_bind_operation {
>                         drm_gpuvm_sm_xyz() {
>                                 // modify drm_gpuvm's interval tree
>                                 // create custom ops
>                         }
>                 }
>
>                 drm_sched_entity_push_job();
>         }
>
>         run_job {
>                 for_each_vm_bind_operation {
>                         for_each_custom_op() {
>                                 // do stuff
>                         }
>                 }
>         }

Close, but by the time we get to run_job there is just a single list
of ops covering all the vm_bind operations:

        run_job {
                for_each_custom_op() {
                        // do stuff
                }
        }

rather than a list of va ops per vm_bind op.
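
To make that concrete, here's a simplified sketch of the single-list
approach (names are illustrative; the real code in [1] carries more
state per op):

        struct msm_vm_op {
                enum { VM_OP_MAP, VM_OP_UNMAP } kind;
                struct list_head node;
                /* iova/range/obj snapshot, taken at submit time */
        };

        struct msm_vm_bind_job {
                struct list_head vm_ops;   /* one flat list per job */
                /* ... */
        };

        /* Called from the drm_gpuvm sm_step_* callbacks at submit
         * time, while the VM's interval tree is being updated:
         */
        static void vm_op_enqueue(struct msm_vm_bind_job *job,
                                  struct msm_vm_op *op)
        {
                list_add_tail(&op->node, &job->vm_ops);
        }

        /* run_job then just walks the flat list, no va-ops needed: */
        static void vm_bind_job_run(struct msm_vm_bind_job *job)
        {
                struct msm_vm_op *op;

                list_for_each_entry(op, &job->vm_ops, node) {
                        /* apply op to the page tables */
                }
        }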

> However, GPUVM intends to solve your use-case with the following, semantically
> identical, approach.
>
>         vm_bind {
>                 for_each_vm_bind_operation {
>                         drm_gpuvm_sm_xyz_ops_create();
>
>                         drm_gpuva_for_each_op {
>                                 // modify drm_gpuvm's interval tree
>                                 // lock and prepare objects (1)

I currently decouple lock+pin from VM modification to avoid an error
path that leaves the VM partially modified.  Once you add that
separation back in, the va-ops approach isn't any simpler, IMHO.
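
For reference, the decoupled flow looks roughly like this.  The sketch
locks the ioctl-listed object with plain drm_exec_prepare_obj(); that
call is exactly where the helpers from this series slot in, so that
objects only reachable via UNMAP/REMAP get locked too
(for_each_vm_bind_op and the op fields are illustrative):

        struct drm_exec exec;
        int ret;

        /* Phase 1: lock + pin everything up front.  A failure here
         * leaves the VM untouched, so unwinding just drops the locks.
         */
        drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 0);
        drm_exec_until_all_locked(&exec) {
                for_each_vm_bind_op(job, op) {
                        ret = drm_exec_prepare_obj(&exec, op->obj, 1);
                        drm_exec_retry_on_contention(&exec);
                        if (ret)
                                goto err_unlock;
                }
        }

        /* Phase 2: only now modify the interval tree; the sm_step_*
         * callbacks enqueue the custom ops that run_job consumes.
         * The failure-prone locking already happened in phase 1.
         */
        for_each_vm_bind_op(job, op)
                drm_gpuvm_sm_map(vm, job, op->iova, op->range,
                                 op->obj, op->offset);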

>                         }
>                 }
>
>                 drm_sched_entity_push_job();
>         }
>
>         run_job {
>                 for_each_vm_bind_operation {
>                         drm_gpuva_for_each_op() {
>                                 // do stuff
>                         }
>                 }
>         }
>
> (Note that GPUVM already supports extending the existing op structures; you
> should take advantage of that.)
>
> Hence, the helper we really want is one that locks and prepares the objects at
> (1), i.e. a helper that takes a pointer to a struct drm_gpuva_op and locks /
> validates the corresponding objects.

I still prefer that we don't _require_ using va-ops.  But if it makes
it more useful for other drivers, I could add a helper that exec_locks
based on a list of va-ops instead; a rough sketch follows below.
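
Something like this, perhaps (illustrative only, not part of this
series; contention retry is left to the caller's
drm_exec_until_all_locked() loop):

        #include <drm/drm_exec.h>
        #include <drm/drm_gpuvm.h>

        static int
        gpuvm_exec_lock_va_ops(struct drm_exec *exec,
                               struct drm_gpuva_ops *ops,
                               unsigned int num_fences)
        {
                struct drm_gpuva_op *op;
                int ret;

                drm_gpuva_for_each_op(op, ops) {
                        struct drm_gem_object *obj = NULL;

                        switch (op->op) {
                        case DRM_GPUVA_OP_MAP:
                                obj = op->map.gem.obj;
                                break;
                        case DRM_GPUVA_OP_REMAP:
                                obj = op->remap.unmap->va->gem.obj;
                                break;
                        case DRM_GPUVA_OP_UNMAP:
                                obj = op->unmap.va->gem.obj;
                                break;
                        default:
                                break;
                        }

                        /* nothing to lock for ops without a backing
                         * obj (e.g. sparse mappings)
                         */
                        if (!obj)
                                continue;

                        ret = drm_exec_prepare_obj(exec, obj, num_fences);
                        if (ret)
                                return ret;
                }

                return 0;
        }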

BR,
-R

> [1] https://gitlab.freedesktop.org/robclark/msm/-/blob/sparse-newer/drivers/gpu/drm/msm/msm_gem_vma.c
>
