Message-Id: <8B1FB608-7D43-4DD9-8737-DCE59ED74CCA@collabora.com>
Date: Tue, 5 Aug 2025 12:18:56 -0300
From: Daniel Almeida <daniel.almeida@...labora.com>
To: Benno Lossin <lossin@...nel.org>
Cc: Onur <work@...rozkan.dev>,
Boqun Feng <boqun.feng@...il.com>,
linux-kernel@...r.kernel.org,
rust-for-linux@...r.kernel.org,
ojeda@...nel.org,
alex.gaynor@...il.com,
gary@...yguo.net,
a.hindborg@...nel.org,
aliceryhl@...gle.com,
tmgross@...ch.edu,
dakr@...nel.org,
peterz@...radead.org,
mingo@...hat.com,
will@...nel.org,
longman@...hat.com,
felipe_life@...e.com,
daniel@...lak.dev,
bjorn3_gh@...tonmail.com,
dri-devel <dri-devel@...ts.freedesktop.org>
Subject: Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
Hi Benno,
> On 2 Aug 2025, at 17:58, Benno Lossin <lossin@...nel.org> wrote:
>
> On Sat Aug 2, 2025 at 4:15 PM CEST, Daniel Almeida wrote:
>> On 2 Aug 2025, at 07:42, Benno Lossin <lossin@...nel.org> wrote:
>>> On Fri Aug 1, 2025 at 11:22 PM CEST, Daniel Almeida wrote:
>>>> One thing I didn’t understand with your approach: is it amenable to loops?
>>>> i.e.: are things like drm_exec() implementable?
>>>
>>> I don't think so, see also my reply here:
>>>
>>> https://lore.kernel.org/all/DBOPIJHY9NZ7.2CU5XP7UY7ES3@kernel.org
>>>
>>> The type-based approach with tuples doesn't handle dynamic number of
>>> locks.
>>>
>>
>> This is probably the default use-case by the way.
>
> That's an important detail. In that case, a type state won't be a good
> idea. Unless it's also common to have a finite number of them, in which
> case we should have two APIs.
>
>>>> /**
>>>> * drm_exec_until_all_locked - loop until all GEM objects are locked
>>>> * @exec: drm_exec object
>>>> *
>>>> * Core functionality of the drm_exec object. Loops until all GEM objects are
>>>> * locked and no more contention exists. At the beginning of the loop it is
>>>> * guaranteed that no GEM object is locked.
>>>> *
>>>> * Since labels can't be defined local to the loops body we use a jump pointer
>>>> * to make sure that the retry is only used from within the loops body.
>>>> */
>>>> #define drm_exec_until_all_locked(exec) \
>>>> __PASTE(__drm_exec_, __LINE__): \
>>>> for (void *__drm_exec_retry_ptr; ({ \
>>>> __drm_exec_retry_ptr = &&__PASTE(__drm_exec_, __LINE__);\
>>>> (void)__drm_exec_retry_ptr; \
>>>> drm_exec_cleanup(exec); \
>>>> });)
>>>
>>> My understanding of C preprocessor macros is not good enough to parse or
>>> understand this :( What is that `__PASTE` thing?
>>
>> This macro is very useful, but also cursed :)
>>
>> This declares a unique label before the loop, so you can jump back to it on
>> contention. It is usually used in conjunction with:
>
> Ahh, I missed the `:` at the end of the line. Thanks for explaining!
> (also Miguel in the other reply!) If you don't mind I'll ask some more
> basic C questions :)
>
> And yeah it's pretty cursed...
>
>> /**
>> * drm_exec_retry_on_contention - restart the loop to grab all locks
>> * @exec: drm_exec object
>> *
>> * Control flow helper to continue when a contention was detected and we need to
>> * clean up and re-start the loop to prepare all GEM objects.
>> */
>> #define drm_exec_retry_on_contention(exec) \
>> do { \
>> if (unlikely(drm_exec_is_contended(exec))) \
>> goto *__drm_exec_retry_ptr; \
>> } while (0)
>
> The `do { ... } while(0)` is used because C doesn't have `{ ... }`
> scopes? (& because you want to be able to have this be called from an if
> without braces?)
do {} while (0) makes this behave as a single statement. It is usually used in
macros to ensure that they can be called correctly from control statements even
when no braces are used, like you said. It also enforces that a semicolon has
to be placed at the end when the macro is invoked, which makes it behave a bit
more like a function call.
There may be other uses that I am not aware of, but it's not something
specific to drm_exec_retry_on_contention.
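For completeness, here is a tiny standalone example (toy macros, not kernel
code) showing why the single-statement property matters under a brace-less if:
```
#include <stdlib.h>

/* Toy macros, purely for illustration. */
#define CLEANUP_BAD(a, b)  free(a); free(b)            /* expands to two statements */
#define CLEANUP_GOOD(a, b) do { free(a); free(b); } while (0)

void f(int err, void *a, void *b)
{
	if (err)
		CLEANUP_BAD(a, b);   /* only free(a) is guarded; free(b) always runs */

	if (err)
		CLEANUP_GOOD(a, b);  /* both calls are guarded, and the trailing ';' is required */
}
```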
>
>> The termination is handled by:
>>
>> /**
>> * drm_exec_cleanup - cleanup when contention is detected
>> * @exec: the drm_exec object to cleanup
>> *
>> * Cleanup the current state and return true if we should stay inside the retry
>> * loop, false if there wasn't any contention detected and we can keep the
>> * objects locked.
>> */
>> bool drm_exec_cleanup(struct drm_exec *exec)
>> {
>> if (likely(!exec->contended)) {
>> ww_acquire_done(&exec->ticket);
>> return false;
>> }
>>
>> if (likely(exec->contended == DRM_EXEC_DUMMY)) {
>> exec->contended = NULL;
>> ww_acquire_init(&exec->ticket, &reservation_ww_class);
>> return true;
>> }
>>
>> drm_exec_unlock_all(exec);
>> exec->num_objects = 0;
>> return true;
>> }
>> EXPORT_SYMBOL(drm_exec_cleanup);
>>
>> The third clause in the loop is empty.
>>
>> For example, in amdgpu:
>>
>> /**
>> * reserve_bo_and_vm - reserve a BO and a VM unconditionally.
>> * @mem: KFD BO structure.
>> * @vm: the VM to reserve.
>> * @ctx: the struct that will be used in unreserve_bo_and_vms().
>> */
>> static int reserve_bo_and_vm(struct kgd_mem *mem,
>> struct amdgpu_vm *vm,
>> struct bo_vm_reservation_context *ctx)
>> {
>> struct amdgpu_bo *bo = mem->bo;
>> int ret;
>>
>> WARN_ON(!vm);
>>
>> ctx->n_vms = 1;
>> ctx->sync = &mem->sync;
>> drm_exec_init(&ctx->exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 0);
>> drm_exec_until_all_locked(&ctx->exec) {
>> ret = amdgpu_vm_lock_pd(vm, &ctx->exec, 2);
>> drm_exec_retry_on_contention(&ctx->exec);
>> if (unlikely(ret))
>> goto error;
>>
>> ret = drm_exec_prepare_obj(&ctx->exec, &bo->tbo.base, 1);
>> drm_exec_retry_on_contention(&ctx->exec);
>> if (unlikely(ret))
>> goto error;
>> }
>> // <-- everything is locked at this point.
>
> Which function call locks the mutexes?
The function below, which is indirectly called from amdgpu_vm_lock_pd() in
this particular example:
```
/**
* drm_exec_lock_obj - lock a GEM object for use
* @exec: the drm_exec object with the state
* @obj: the GEM object to lock
*
* Lock a GEM object for use and grab a reference to it.
*
* Returns: -EDEADLK if a contention is detected, -EALREADY when object is
* already locked (can be suppressed by setting the DRM_EXEC_IGNORE_DUPLICATES
* flag), -ENOMEM when memory allocation failed and zero for success.
*/
int drm_exec_lock_obj(struct drm_exec *exec, struct drm_gem_object *obj)
{
int ret;
ret = drm_exec_lock_contended(exec);
if (unlikely(ret))
return ret;
if (exec->prelocked == obj) {
drm_gem_object_put(exec->prelocked);
exec->prelocked = NULL;
return 0;
}
if (exec->flags & DRM_EXEC_INTERRUPTIBLE_WAIT)
ret = dma_resv_lock_interruptible(obj->resv, &exec->ticket);
else
ret = dma_resv_lock(obj->resv, &exec->ticket);
if (unlikely(ret == -EDEADLK)) {
drm_gem_object_get(obj);
exec->contended = obj;
return -EDEADLK;
}
if (unlikely(ret == -EALREADY) &&
exec->flags & DRM_EXEC_IGNORE_DUPLICATES)
return 0;
if (unlikely(ret))
return ret;
ret = drm_exec_obj_locked(exec, obj);
if (ret)
goto error_unlock;
return 0;
error_unlock:
dma_resv_unlock(obj->resv);
return ret;
}
EXPORT_SYMBOL(drm_exec_lock_obj);
```
And the tracking of locked objects is done at:
```
/* Track the locked object in the array */
static int drm_exec_obj_locked(struct drm_exec *exec,
struct drm_gem_object *obj)
{
if (unlikely(exec->num_objects == exec->max_objects)) {
size_t size = exec->max_objects * sizeof(void *);
void *tmp;
tmp = kvrealloc(exec->objects, size + PAGE_SIZE, GFP_KERNEL);
if (!tmp)
return -ENOMEM;
exec->objects = tmp;
exec->max_objects += PAGE_SIZE / sizeof(void *);
}
drm_gem_object_get(obj);
exec->objects[exec->num_objects++] = obj;
return 0;
}
```
Note that dma_resv_lock() is:
```
static inline int dma_resv_lock(struct dma_resv *obj,
struct ww_acquire_ctx *ctx)
{
return ww_mutex_lock(&obj->lock, ctx);
}
```
Again, this is GEM-specific, but the idea is to generalize it.
>
>> return 0;
>>
>>
>> So, something like:
>>
>> some_unique_label:
>> for (void *retry_ptr;
>> ({ retry_ptr = &&some_unique_label; drm_exec_cleanup(exec); });
>
> Normally this should be a condition, or rather an expression evaluating
> to bool, why is this okay? Or does C just take the value of the last
> function call due to the `({})`?
This is a GNU C statement expression, described here [0]. As per the docs, the
whole ({ ... }) evaluates to the value of its last statement; since
drm_exec_cleanup() is called last and returns bool, the loop condition
evaluates to bool.
>
> Why isn't `({})` used instead of `do { ... } while(0)` above?
I'm not sure I understand what you're trying to ask.
If you're asking why ({ }) is used here, it's because the condition needs to
evaluate to (i.e. "return") a value, and a do { ... } while (0) does not
produce one.
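For example (a standalone toy, not from the kernel):
```
#include <stdio.h>

int main(void)
{
	/*
	 * A GNU C statement expression evaluates to the value of its last
	 * statement, so it can be used where an expression is expected;
	 * that is exactly how drm_exec_cleanup()'s bool return becomes the
	 * loop condition above.
	 */
	int x = ({ int a = 2; int b = 3; a + b; });

	printf("%d\n", x);	/* prints 5 */

	/*
	 * do { ... } while (0) is a statement and has no value, so it could
	 * not be used as the for-loop condition.
	 */
	return 0;
}
```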
>
>> /* empty */) {
>> /* user code here, which potentially jumps back to some_unique_label */
>> }
>
> Thanks for the example & the macro expansion. What I gather from this is
> that we'd probably want a closure that executes the code & reruns it
> when contention is detected.
Yep, I think so, too.
>
>>>> In fact, perhaps we can copy drm_exec, basically? i.e.:
>>>>
>>>> /**
>>>> * struct drm_exec - Execution context
>>>> */
>>>> struct drm_exec {
>>>> /**
>>>> * @flags: Flags to control locking behavior
>>>> */
>>>> u32 flags;
>>>>
>>>> /**
>>>> * @ticket: WW ticket used for acquiring locks
>>>> */
>>>> struct ww_acquire_ctx ticket;
>>>>
>>>> /**
>>>> * @num_objects: number of objects locked
>>>> */
>>>> unsigned int num_objects;
>>>>
>>>> /**
>>>> * @max_objects: maximum objects in array
>>>> */
>>>> unsigned int max_objects;
>>>>
>>>> /**
>>>> * @objects: array of the locked objects
>>>> */
>>>> struct drm_gem_object **objects;
>>>>
>>>> /**
>>>> * @contended: contended GEM object we backed off for
>>>> */
>>>> struct drm_gem_object *contended;
>>>>
>>>> /**
>>>> * @prelocked: already locked GEM object due to contention
>>>> */
>>>> struct drm_gem_object *prelocked;
>>>> };
>>>>
>>>> This is GEM-specific, but we could perhaps implement the same idea by
>>>> tracking ww_mutexes instead of GEM objects.
>>>
>>> But this would only work for `Vec<WwMutex<T>>`, right?
>>
>> I’m not sure if I understand your point here.
>>
>> The list of ww_mutexes that we've managed to currently lock would be something
>> that we'd keep track internally in our context. In what way is a KVec an issue?
>
> I saw "array of the locked objects" and thus thought so this must only
> work for an array of locks. Looking at the type a bit closer, it
> actually is an array of pointers, so it does work for arbitrary data
> structures storing the locks.
>
> So essentially it would amount to storing `Vec<WwMutexGuard<'_, T>>` in
> Rust IIUC. I was under the impression that we wanted to avoid that,
> because it's an extra allocation.
It’s the price to pay for correctness IMHO.
The “exec” abstraction also allocates:
```
/* Track the locked object in the array */
static int drm_exec_obj_locked(struct drm_exec *exec,
struct drm_gem_object *obj)
{
if (unlikely(exec->num_objects == exec->max_objects)) {
size_t size = exec->max_objects * sizeof(void *);
void *tmp;
tmp = kvrealloc(exec->objects, size + PAGE_SIZE, GFP_KERNEL);
if (!tmp)
return -ENOMEM;
exec->objects = tmp;
exec->max_objects += PAGE_SIZE / sizeof(void *);
}
drm_gem_object_get(obj);
exec->objects[exec->num_objects++] = obj;
return 0;
}
```
>
> But maybe that's actually what's done on the C side.
See above.
>
>> Btw, I can also try to implement a proof of concept, so long as people agree that
>> this approach makes sense.
>
> I'm not sure I understand what you are proposing, so I can't give a
> recommendation yet.
>
I am suggesting what you said above and more:
a) run a user closure where the user can indicate which ww_mutexes they want to lock
b) keep track of the objects above
c) keep track of whether a contention happened
d) rollback if a contention happened, releasing all locks
e) rerun the user closure from a clean slate after rolling back
f) run a separate user closure whenever we know that all objects have been locked.
That's a very broad description, but I think it can work; a rough sketch of
what I mean follows below.
Note that the operations above would be implemented by a separate type, not by
the ww_mutex abstraction itself. But users should probably be using the API
above unless there’s a strong reason not to.
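To make (a)-(f) more concrete, here is a rough sketch in C of the control flow
I have in mind, generalized from drm_exec to plain ww_mutexes. None of this is
an existing API: struct ww_exec, ww_exec_lock() and ww_exec_until_all_locked()
are made up for illustration, and the eventual Rust type would do the same
dance with closures instead of function pointers.
```
#include <linux/errno.h>
#include <linux/ww_mutex.h>

/* Hypothetical context, mirroring struct drm_exec but tracking bare ww_mutexes. */
struct ww_exec {
	struct ww_acquire_ctx	ticket;
	struct ww_mutex		**locked;	/* locks taken so far; (re)allocation elided */
	unsigned int		num_locked;
	struct ww_mutex		*contended;	/* lock we backed off from, if any */
};

static void ww_exec_unlock_all(struct ww_exec *exec)
{
	while (exec->num_locked)
		ww_mutex_unlock(exec->locked[--exec->num_locked]);
}

/* (b): called by the user's closure for each lock it wants; cf. drm_exec_lock_obj(). */
static int ww_exec_lock(struct ww_exec *exec, struct ww_mutex *lock)
{
	int ret = ww_mutex_lock(lock, &exec->ticket);

	if (ret == -EDEADLK) {
		exec->contended = lock;		/* (c) remember the contention */
		return ret;
	}
	if (ret)
		return ret;

	exec->locked[exec->num_locked++] = lock;
	return 0;
}

/* (a), (d), (e), (f): rerun the user's closure until everything is locked. */
static int ww_exec_until_all_locked(struct ww_exec *exec, struct ww_class *class,
				    int (*lock_all)(struct ww_exec *exec, void *data),
				    void *data)
{
	int ret;

	ww_acquire_init(&exec->ticket, class);
	do {
		exec->contended = NULL;
		ret = lock_all(exec, data);		/* (a) user closure takes locks via ww_exec_lock() */
		if (ret == -EDEADLK) {
			ww_exec_unlock_all(exec);	/* (d) roll back everything */
			/*
			 * A real version would ww_mutex_lock_slow(exec->contended, ...)
			 * here, like drm_exec_lock_contended(), before (e) rerunning
			 * the closure from a clean slate.
			 */
		}
	} while (ret == -EDEADLK);

	if (ret) {
		ww_exec_unlock_all(exec);
		ww_acquire_fini(&exec->ticket);
		return ret;
	}

	ww_acquire_done(&exec->ticket);
	/* (f) everything is locked; the caller (or a second closure) runs here. */
	return 0;
}
```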
> ---
> Cheers,
> Benno
>
— Daniel
[0]: https://gcc.gnu.org/onlinedocs/gcc/Statement-Exprs.html