Message-ID: <1420ed7c-d3c2-7768-f954-4a88036448a9@linux.intel.com>
Date: Thu, 18 May 2023 10:43:17 +0100
From: Tvrtko Ursulin <tvrtko.ursulin@...ux.intel.com>
To: Rob Clark <robdclark@...il.com>
Cc: dri-devel@...ts.freedesktop.org, freedreno@...ts.freedesktop.org,
Daniel Vetter <daniel@...ll.ch>,
Boris Brezillon <boris.brezillon@...labora.com>,
Christopher Healy <healych@...zon.com>,
Emil Velikov <emil.l.velikov@...il.com>,
Christian König <christian.koenig@....com>,
Rob Clark <robdclark@...omium.org>,
David Airlie <airlied@...il.com>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>,
Thomas Zimmermann <tzimmermann@...e.de>,
Jonathan Corbet <corbet@....net>,
"open list:DOCUMENTATION" <linux-doc@...r.kernel.org>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 8/9] drm/fdinfo: Add comm/cmdline override fields
In case you were waiting for me to look at the rest of the series, there
was this reply from the previous round that I can expand on.
On 02/05/2023 08:50, Tvrtko Ursulin wrote:
>
> On 01/05/2023 17:58, Rob Clark wrote:
>> On Fri, Apr 28, 2023 at 4:05 AM Tvrtko Ursulin
>> <tvrtko.ursulin@...ux.intel.com> wrote:
>>>
>>>
>>> On 27/04/2023 18:53, Rob Clark wrote:
>>>> From: Rob Clark <robdclark@...omium.org>
>>>>
>>>> These are useful in particular for VM scenarios where the process which
>>>> has opened the drm device file is just a proxy for the real user in a VM
>>>> guest.
>>>>
>>>> Signed-off-by: Rob Clark <robdclark@...omium.org>
>>>> ---
>>>> Documentation/gpu/drm-usage-stats.rst | 18 ++++++++++++++++++
>>>> drivers/gpu/drm/drm_file.c | 15 +++++++++++++++
>>>> include/drm/drm_file.h | 19 +++++++++++++++++++
>>>> 3 files changed, 52 insertions(+)
>>>>
>>>> diff --git a/Documentation/gpu/drm-usage-stats.rst b/Documentation/gpu/drm-usage-stats.rst
>>>> index 58dc0d3f8c58..e4877cf8089c 100644
>>>> --- a/Documentation/gpu/drm-usage-stats.rst
>>>> +++ b/Documentation/gpu/drm-usage-stats.rst
>>>> @@ -73,6 +73,24 @@ scope of each device, in which case `drm-pdev` shall be present as well.
>>>> Userspace should make sure to not double account any usage statistics by using
>>>> the above described criteria in order to associate data to individual clients.
>>>>
>>>> +- drm-comm-override: <valstr>
>>>> +
>>>> +Returns the client executable override string. Some drivers support letting
>>>> +userspace override this in cases where the userspace is simply a "proxy".
>>>> +Such as is the case with virglrenderer drm native context, where the host
>>>> +process is just forwarding command submission, etc, from guest userspace.
>>>> +This allows the proxy to make visible the executable name of the actual
>>>> +app in the VM guest.
>>>> +
>>>> +- drm-cmdline-override: <valstr>
>>>> +
>>>> +Returns the client cmdline override string. Some drivers support letting
>>>> +userspace override this in cases where the userspace is simply a "proxy".
>>>> +Such as is the case with virglrenderer drm native context, where the host
>>>> +process is just forwarding command submission, etc, from guest userspace.
>>>> +This allows the proxy to make visible the cmdline of the actual app in the
>>>> +VM guest.
>>>
>>> Perhaps it would be okay to save space here by not repeating the
>>> description, like:
>>>
>>> drm-comm-override: <valstr>
>>> drm-cmdline-override: <valstr>
>>>
>>> Long description blah blah...
>>> This allows the proxy to make visible the _executable name *and* command
>>> line_ blah blah..
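For illustration, with such an override set by the proxy, the fdinfo blob
for that client would gain lines along these lines. The key names and the
tab separator are what the patch emits; the values below are of course
made up:

    drm-driver:             msm
    drm-client-id:          42
    drm-comm-override:      some-guest-app
    drm-cmdline-override:   /usr/bin/some-guest-app --fullscreen
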
>>>
>>>> +
>>>> Utilization
>>>> ^^^^^^^^^^^
>>>>
>>>> diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
>>>> index 9321eb0bf020..d7514c313af1 100644
>>>> --- a/drivers/gpu/drm/drm_file.c
>>>> +++ b/drivers/gpu/drm/drm_file.c
>>>> @@ -178,6 +178,8 @@ struct drm_file *drm_file_alloc(struct drm_minor *minor)
>>>> spin_lock_init(&file->master_lookup_lock);
>>>> mutex_init(&file->event_read_lock);
>>>>
>>>> + mutex_init(&file->override_lock);
>>>> +
>>>> if (drm_core_check_feature(dev, DRIVER_GEM))
>>>> drm_gem_open(dev, file);
>>>>
>>>> @@ -292,6 +294,8 @@ void drm_file_free(struct drm_file *file)
>>>> WARN_ON(!list_empty(&file->event_list));
>>>>
>>>> put_pid(file->pid);
>>>> + kfree(file->override_comm);
>>>> + kfree(file->override_cmdline);
>>>> kfree(file);
>>>> }
>>>>
>>>> @@ -995,6 +999,17 @@ void drm_show_fdinfo(struct seq_file *m, struct file *f)
>>>> PCI_SLOT(pdev->devfn),
>>>> PCI_FUNC(pdev->devfn));
>>>> }
>>>>
>>>> + mutex_lock(&file->override_lock);
>>>
>>> You could add a fast unlocked check before taking the mutex, at no risk
>>> apart from a transient false negative. For 99.9999% of userspace it would
>>> mean no pointless lock/unlock cycle.
>>
>> I'm not sure I get your point? This needs to be serialized against
>> userspace setting the override values
>
>         if (file->override_comm || file->override_cmdline) {
>                 mutex_lock(&file->override_lock);
>                 if (file->override_comm)
>                         drm_printf(&p, "drm-comm-override:\t%s\n",
>                                    file->override_comm);
>                 if (file->override_cmdline)
>                         drm_printf(&p, "drm-cmdline-override:\t%s\n",
>                                    file->override_cmdline);
>                 mutex_unlock(&file->override_lock);
>         }
>
> No risk apart from a transient false negative (which is immaterial for
> userspace since fdinfo reads are not ordered versus the override setting
> anyway), and 99.9% of deployments can get by without pointlessly cycling
> the lock.
I think this fast path bypass is worth it, but it is up to you if you are
really opposed. It's just that I don't see the point in cycling the mutex
for nothing in the majority of cases.
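
For completeness, here is roughly how I imagine the setter side pairing
with that check. The helper name and its placement are made up for
illustration (I have not checked where exactly the setter ends up in your
series); the point is only that the pointer goes from NULL to non-NULL
under the lock and never back to NULL:

static int drm_file_override_comm(struct drm_file *file, const char *comm)
{
        /* Hypothetical helper, sketch only. */
        char *new_comm = kstrdup(comm, GFP_KERNEL);

        if (!new_comm)
                return -ENOMEM;

        mutex_lock(&file->override_lock);
        /* Replace any previous override; the pointer never returns to NULL. */
        kfree(file->override_comm);
        file->override_comm = new_comm;
        mutex_unlock(&file->override_lock);

        return 0;
}

With that shape the unlocked pre-check in the read path can at worst miss
an override which is being set concurrently (the transient false negative),
and it can never dereference a freed string because the actual printing
still happens under the lock.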
>>>
>>>> + if (file->override_comm) {
>>>> + drm_printf(&p, "drm-comm-override:\t%s\n",
>>>> + file->override_comm);
>>>> + }
>>>> + if (file->override_cmdline) {
>>>> + drm_printf(&p, "drm-cmdline-override:\t%s\n",
>>>> + file->override_cmdline);
>>>> + }
>>>> + mutex_unlock(&file->override_lock);
>>>> +
>>>> if (dev->driver->show_fdinfo)
>>>> dev->driver->show_fdinfo(&p, file);
>>>> }
>>>> diff --git a/include/drm/drm_file.h b/include/drm/drm_file.h
>>>> index 1339e925af52..604d05fa6f0c 100644
>>>> --- a/include/drm/drm_file.h
>>>> +++ b/include/drm/drm_file.h
>>>> @@ -370,6 +370,25 @@ struct drm_file {
>>>> */
>>>> struct drm_prime_file_private prime;
>>>>
>>>> + /**
>>>> + * @override_comm: Overridden task comm
>>>> + *
>>>> + * Accessed under override_lock
>>>> + */
>>>> + char *override_comm;
>>>> +
>>>> + /**
>>>> + * @override_cmdline: Overridden task cmdline
>>>> + *
>>>> + * Accessed under override_lock
>>>> + */
>>>> + char *override_cmdline;
>>>> +
>>>> + /**
>>>> + * @override_lock: Serialize access to override_comm and override_cmdline
>>>> + */
>>>> + struct mutex override_lock;
>>>> +
>>>
>>> I don't think this should go to drm just yet though. Only one driver can
>>> make use of it so I'd leave it for later and print from msm_show_fdinfo
>>> for now.
>>
>> This was my original approach but danvet asked that it be moved into
>> drm for consistency across drivers. (And really, I want the in-flight
>> amd and intel native-context stuff to motivate adding similar features
>> to amdgpu/i915/xe.)
>
> IMO if the implementation is not shared, not even via helpers, then I don't
> think the data storage should be either, but it's not a deal breaker.
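To illustrate what I mean by keeping it driver-local for now, something
roughly like the below, with the fields and the lock living in
msm_file_private instead of drm_file (member names here are my guesses,
not taken from your series):

static void msm_show_fdinfo(struct drm_printer *p, struct drm_file *file)
{
        struct msm_file_private *ctx = file->driver_priv;

        /* ... existing msm fdinfo output ... */

        mutex_lock(&ctx->override_lock);
        if (ctx->override_comm)
                drm_printf(p, "drm-comm-override:\t%s\n", ctx->override_comm);
        if (ctx->override_cmdline)
                drm_printf(p, "drm-cmdline-override:\t%s\n", ctx->override_cmdline);
        mutex_unlock(&ctx->override_lock);
}

Then if/when amdgpu/i915/xe grow the same need, the storage and printing
can be lifted into the core together with a proper helper.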
To summarise my thoughts on the patch (v4):
I am not really keen on having the data fields in common code with no
common implementation or helpers.

As far as drm-usage-stats.rst is concerned it looks completely fine, and
the feature really will be useful in virtualised stacks. The code in this
patch is also completely fine.

Therefore you can have an r-b on those parts, but with reservations on
whether it makes sense to put the fields under drm_file just yet. That
should be fine under the r-b rules AFAIU. Ideally you can collect an ack
from someone else too.
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@...el.com>
Regards,
Tvrtko
>
> Regards,
>
> Tvrtko
>
>>
>> BR,
>> -R
>>
>>> Regards,
>>>
>>> Tvrtko
>>>
>>>> /* private: */
>>>> #if IS_ENABLED(CONFIG_DRM_LEGACY)
>>>> unsigned long lock_count; /* DRI1 legacy lock count */