Message-ID: <8cd39b48-7fb8-40b2-8d6c-e6fc2b48f86d@arm.com>
Date: Wed, 31 Jan 2024 10:10:05 +0530
From: Anshuman Khandual <anshuman.khandual@....com>
To: Alexandru Elisei <alexandru.elisei@....com>
Cc: catalin.marinas@....com, will@...nel.org, oliver.upton@...ux.dev,
 maz@...nel.org, james.morse@....com, suzuki.poulose@....com,
 yuzenghui@...wei.com, arnd@...db.de, akpm@...ux-foundation.org,
 mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
 vincent.guittot@...aro.org, dietmar.eggemann@....com, rostedt@...dmis.org,
 bsegall@...gle.com, mgorman@...e.de, bristot@...hat.com,
 vschneid@...hat.com, mhiramat@...nel.org, rppt@...nel.org, hughd@...gle.com,
 pcc@...gle.com, steven.price@....com, vincenzo.frascino@....com,
 david@...hat.com, eugenis@...gle.com, kcc@...gle.com, hyesoo.yu@...sung.com,
 linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
 kvmarm@...ts.linux.dev, linux-fsdevel@...r.kernel.org,
 linux-arch@...r.kernel.org, linux-mm@...ck.org,
 linux-trace-kernel@...r.kernel.org
Subject: Re: [PATCH RFC v3 06/35] mm: cma: Make CMA_ALLOC_SUCCESS/FAIL count
 the number of pages



On 1/30/24 17:28, Alexandru Elisei wrote:
> Hi,
> 
> On Tue, Jan 30, 2024 at 10:22:11AM +0530, Anshuman Khandual wrote:
>>
>> On 1/29/24 17:21, Alexandru Elisei wrote:
>>> Hi,
>>>
>>> On Mon, Jan 29, 2024 at 02:54:20PM +0530, Anshuman Khandual wrote:
>>>>
>>>> On 1/25/24 22:12, Alexandru Elisei wrote:
>>>>> The CMA_ALLOC_SUCCESS and CMA_ALLOC_FAIL counters are increased by one
>>>>> after each cma_alloc() function call. This is done even though cma_alloc()
>>>>> can allocate an arbitrary number of CMA pages. When looking at
>>>>> /proc/vmstat, the number of successful (or failed) cma_alloc() calls
>>>>> doesn't tell much with regard to how many CMA pages were allocated via
>>>>> cma_alloc() versus via the page allocator (regular allocation request or
>>>>> PCP lists refill).
>>>>>
>>>>> This can also be rather confusing to a user who isn't familiar with the
>>>>> code, since the unit of measurement for nr_free_cma is the number of pages,
>>>>> but cma_alloc_success and cma_alloc_fail count the number of cma_alloc()
>>>>> function calls.
>>>>>
>>>>> Let's make this consistent, and arguably more useful, by having
>>>>> CMA_ALLOC_SUCCESS count the number of successfully allocated CMA pages, and
>>>>> CMA_ALLOC_FAIL count the number of pages the cma_alloc() failed to
>>>>> allocate.
>>>>>
>>>>> For users that wish to track the number of cma_alloc() calls, there are
>>>>> already tracepoints implemented for that.
>>>>>
>>>>> Signed-off-by: Alexandru Elisei <alexandru.elisei@....com>
>>>>> ---
>>>>>  mm/cma.c | 4 ++--
>>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/mm/cma.c b/mm/cma.c
>>>>> index f49c95f8ee37..dbf7fe8cb1bd 100644
>>>>> --- a/mm/cma.c
>>>>> +++ b/mm/cma.c
>>>>> @@ -517,10 +517,10 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
>>>>>  	pr_debug("%s(): returned %p\n", __func__, page);
>>>>>  out:
>>>>>  	if (page) {
>>>>> -		count_vm_event(CMA_ALLOC_SUCCESS);
>>>>> +		count_vm_events(CMA_ALLOC_SUCCESS, count);
>>>>>  		cma_sysfs_account_success_pages(cma, count);
>>>>>  	} else {
>>>>> -		count_vm_event(CMA_ALLOC_FAIL);
>>>>> +		count_vm_events(CMA_ALLOC_FAIL, count);
>>>>>  		if (cma)
>>>>>  			cma_sysfs_account_fail_pages(cma, count);
>>>>>  	}
>>>> Without getting into the merits of this patch - which is actually trying to
>>>> change the semantics of /proc/vmstat - wondering how this is even related to
>>>> this particular series? If required, this could be debated on its own
>>>> separately.
>>> Having the number of CMA pages allocated and the number of CMA pages freed
>>> allows someone to infer how many tagged pages are in use at a given time:
>> That should not be done in CMA, which is a generic multi-purpose allocator.

> Ah, ok. Let me rephrase that: Having the number of CMA pages allocated, the
> number of failed CMA page allocations and the number of freed CMA pages
> allows someone to infer how many CMA pages are in use at a given time.
> That's valuable information for software designers and system
> administrators, as it allows them to tune the number of CMA pages available
> in a system.
> 
> Or put another way: what would you consider to be more useful?  Knowing the
> number of cma_alloc()/cma_release() calls, or knowing the number of pages
> that cma_alloc()/cma_release() allocated or freed?

There is still value in knowing how many times cma_alloc() succeeded or failed,
regardless of the cumulative number of pages involved over time. That count
helps in understanding how cma_alloc() performed overall as an allocator.

But on the cma_release() path there is no chance of failure apart from the case
where the caller itself provides a wrong input. So there are no corresponding
CMA_RELEASE_SUCCESS/CMA_RELEASE_FAIL vmstat counters there - for a reason!
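
For reference, a condensed sketch of cma_release() (paraphrasing mm/cma.c from
around this kernel version - the exact code may differ) shows that the only
failure paths are input validation:

bool cma_release(struct cma *cma, const struct page *pages,
		 unsigned long count)
{
	unsigned long pfn;

	/* Fails only when the caller passes bogus arguments ... */
	if (!cma || !pages)
		return false;

	pfn = page_to_pfn(pages);

	/* ... or pages that do not belong to this CMA area. */
	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
		return false;

	VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);

	/* Past this point the release always succeeds. */
	free_contig_range(pfn, count);
	cma_clear_bitmap(cma, pfn, count);
	trace_cma_release(cma->name, pfn, pages, count);

	return true;
}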

Coming back to CMA-based pages being allocated and freed, there is already an
interface via sysfs (CONFIG_CMA_SYSFS) which gets updated on the cma_alloc()
path via cma_sysfs_account_success_pages() and cma_sysfs_account_fail_pages().

# ls /sys/kernel/mm/cma/<name>
alloc_pages_fail alloc_pages_success
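
Those files are backed by per-area atomic counters. Roughly, paraphrasing
mm/cma_sysfs.c (which may differ slightly between versions):

void cma_sysfs_account_success_pages(struct cma *cma, unsigned long nr_pages)
{
	atomic64_add(nr_pages, &cma->nr_pages_succeeded);
}

void cma_sysfs_account_fail_pages(struct cma *cma, unsigned long nr_pages)
{
	atomic64_add(nr_pages, &cma->nr_pages_failed);
}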

Why could these counters not meet your requirements? Also, 'struct cma' could
be updated with a new element 'nr_pages_freed' to be tracked in cma_release(),
providing a freed pages count as well.
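
A minimal sketch of that suggestion - 'nr_pages_freed' and the helper below are
hypothetical, just mirroring the pattern of the existing CONFIG_CMA_SYSFS
counters:

/* mm/cma.h: new field next to the existing sysfs counters */
 #ifdef CONFIG_CMA_SYSFS
 	/* the number of CMA page successful allocations */
 	atomic64_t nr_pages_succeeded;
 	/* the number of CMA page allocation failures */
 	atomic64_t nr_pages_failed;
+	/* the number of CMA pages released via cma_release() */
+	atomic64_t nr_pages_freed;
 #endif

/* mm/cma_sysfs.c: accounting helper, mirroring the existing two */
+void cma_sysfs_account_freed_pages(struct cma *cma, unsigned long nr_pages)
+{
+	atomic64_add(nr_pages, &cma->nr_pages_freed);
+}

/* mm/cma.c: call it from cma_release() once the pages are freed */
 	free_contig_range(pfn, count);
 	cma_clear_bitmap(cma, pfn, count);
+	cma_sysfs_account_freed_pages(cma, count);
 	trace_cma_release(cma->name, pfn, pages, count);

With something like this exposed via sysfs, the number of CMA pages in use
could then be inferred as nr_pages_succeeded - nr_pages_freed.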

There are additional debugfs-based entries (CONFIG_CMA_DEBUGFS) available.

# ls /sys/kernel/debug/cma/<name>
alloc  base_pfn  bitmap  count  free  maxchunk  order_per_bit  used
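
If memory serves, the 'alloc' and 'free' files there are write-only triggers
meant for testing - writing a page count allocates or frees that many pages
from the given CMA area - so they can be used for quick experiments, e.g.:

# echo 16 > /sys/kernel/debug/cma/<name>/alloc
# cat /sys/kernel/debug/cma/<name>/used
# echo 16 > /sys/kernel/debug/cma/<name>/free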
