Message-ID: <2881a000-b758-444d-a259-3fd909028739@arm.com>
Date: Fri, 20 Dec 2024 11:14:21 +0000
From: Mihail Atanassov <mihail.atanassov@....com>
To: Steven Price <steven.price@....com>,
 Adrián Martínez Larumbe
 <adrian.larumbe@...labora.com>,
 Boris Brezillon <boris.brezillon@...labora.com>,
 Liviu Dudau <liviu.dudau@....com>,
 Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
 Maxime Ripard <mripard@...nel.org>, Thomas Zimmermann <tzimmermann@...e.de>,
 David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>
Cc: nd@....com, kernel@...labora.com, dri-devel@...ts.freedesktop.org,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 1/2] drm/panthor: Expose size of driver internal BO's
 over fdinfo



On 20/12/2024 11:08, Steven Price wrote:
> On 19/12/2024 16:30, Mihail Atanassov wrote:
>>
>>
>> On 18/12/2024 18:18, Adrián Martínez Larumbe wrote:
>>> From: Adrián Larumbe <adrian.larumbe@...labora.com>
>>>
>>> This will display the sizes of kernel BOs bound to an open file, which
>>> are otherwise not exposed to UM through a handle.
>>>
>>> The sizes recorded are as follows:
>>>    - Per group: suspend buffer, protm-suspend buffer, syncobjs
>>>    - Per queue: ringbuffer, profiling slots, firmware interface
>>>    - For all heaps in all heap pools across all VMs bound to an open file,
>>>      record the size of all heap chunks, and for each pool the gpu_context
>>>      BO too.
>>>
>>> This does not record the size of FW regions, as these aren't bound to a
>>> specific open file and remain active through the whole life of the
>>> driver.
>>>
>>> Signed-off-by: Adrián Larumbe <adrian.larumbe@...labora.com>
>>> Reviewed-by: Liviu Dudau <liviu.dudau@....com>
>>> ---
> 
> [...]
> 
>>> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
>>> index c39e3eb1c15d..51f6e66df3f5 100644
>>> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
>>> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
>>> @@ -1941,6 +1941,41 @@ struct panthor_heap_pool *panthor_vm_get_heap_pool(struct panthor_vm *vm, bool c
>>>        return pool;
>>>    }
>>> +/**
>>> + * panthor_vm_heaps_sizes() - Calculate size of all heap chunks across all
>>> + * heaps over all the heap pools in a VM
>>> + * @pfile: File.
>>> + * @status: Memory status to be updated.
>>> + *
>>> + * Calculate all heap chunk sizes in all heap pools bound to a VM. If the VM
>>> + * is active, record the size as active as well.
>>> + */
>>> +void panthor_vm_heaps_sizes(struct panthor_file *pfile, struct drm_memory_stats *status)
>>> +{
>>> +    struct panthor_vm *vm;
>>> +    unsigned long i;
>>> +
>>> +    if (!pfile->vms)
>>> +        return;
>>> +
>>> +    xa_for_each(&pfile->vms->xa, i, vm) {
>>> +        size_t size;
>>> +
>>> +        mutex_lock(&vm->heaps.lock);
>>
>> Use `scoped_guard` instead?
>>
>> #include <linux/cleanup.h>
>>
>> /* ... */
>>
>>      xa_for_each(...) {
>>          size_t size;
>>
>>          scoped_guard(mutex, &vm->heaps.lock) {
>>              if (!vm->heaps.pool)
>>                  continue;
>>
>>              size = panthor_heap_pool_size(vm->heaps.pool);
>>          }
>>          /* ... */
> 
> I don't believe this actually works. The implementation of scoped_guard
> uses a for() loop. So the "continue" will be applied to this (hidden)
> internal loop rather than the xa_for_each() loop intended.

Yikes, good call-out! I ought to have checked... I'll make a mental note 
of that limitation.
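
For reference, the macro's shape (a simplified sketch, not the exact
definition in include/linux/cleanup.h) is roughly:

	#define scoped_guard(_name, args...) \
		for (CLASS(_name, scope)(args), *done = NULL; !done; done = (void *)1)

so after expansion my suggestion above becomes, approximately:

	xa_for_each(&pfile->vms->xa, i, vm) {
		size_t size;

		for (CLASS(mutex, scope)(&vm->heaps.lock), *done = NULL;
		     !done; done = (void *)1) {
			if (!vm->heaps.pool)
				continue; /* binds to the hidden for (), not xa_for_each() */

			size = panthor_heap_pool_size(vm->heaps.pool);
		}
		/* ... */
	}

i.e. the continue just ends the guarded scope (dropping the lock) and
falls through to the accounting below with size uninitialised, rather
than moving on to the next VM.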

> 
> An alternative would be:
> 
> 	xa_for_each(&pfile->vms->xa, i, vm) {
> 		size_t size = 0;
> 
> 		mutex_lock(&vm->heaps.lock);
> 		if (vm->heaps.pool)
> 			size = panthor_heap_pool_size(vm->heaps.pool);
> 		mutex_unlock(&vm->heaps.lock);

Well then you can do a:

		scoped_guard(mutex, &vm->heaps.lock) {
			if (vm->heaps.pool)
				size = panthor_heap_pool_size(vm->heaps.pool);
		}

		/* ;) */
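
i.e. with your restructured version the whole loop would read roughly:

	xa_for_each(&pfile->vms->xa, i, vm) {
		size_t size = 0;

		scoped_guard(mutex, &vm->heaps.lock) {
			if (vm->heaps.pool)
				size = panthor_heap_pool_size(vm->heaps.pool);
		}

		status->resident += size;
		status->private += size;
		if (vm->as.id >= 0)
			status->active += size;
	}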

> 
> 		status->resident += size;
> 		status->private += size;
> 		if (vm->as.id >= 0)
> 			status->active += size;
> 	}
> 
> (relying on size=0 being a no-op for the additions). Although I was
> personally also happy with the original - but perhaps that's just
> because I'm old and still feel anxious when I see scoped_guard() ;)
> 
> Steve
> 

-- 
Mihail Atanassov <mihail.atanassov@....com>

