Date: Mon, 29 Jan 2024 11:51:30 +0000
From: Alexandru Elisei <alexandru.elisei@....com>
To: Anshuman Khandual <anshuman.khandual@....com>
Cc: catalin.marinas@....com, will@...nel.org, oliver.upton@...ux.dev,
	maz@...nel.org, james.morse@....com, suzuki.poulose@....com,
	yuzenghui@...wei.com, arnd@...db.de, akpm@...ux-foundation.org,
	mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
	vincent.guittot@...aro.org, dietmar.eggemann@....com,
	rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
	bristot@...hat.com, vschneid@...hat.com, mhiramat@...nel.org,
	rppt@...nel.org, hughd@...gle.com, pcc@...gle.com,
	steven.price@....com, vincenzo.frascino@....com, david@...hat.com,
	eugenis@...gle.com, kcc@...gle.com, hyesoo.yu@...sung.com,
	linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
	kvmarm@...ts.linux.dev, linux-fsdevel@...r.kernel.org,
	linux-arch@...r.kernel.org, linux-mm@...ck.org,
	linux-trace-kernel@...r.kernel.org
Subject: Re: [PATCH RFC v3 06/35] mm: cma: Make CMA_ALLOC_SUCCESS/FAIL count
 the number of pages

Hi,

On Mon, Jan 29, 2024 at 02:54:20PM +0530, Anshuman Khandual wrote:
> 
> 
> On 1/25/24 22:12, Alexandru Elisei wrote:
> > The CMA_ALLOC_SUCCESS and CMA_ALLOC_FAIL counters are each increased by one
> > after every cma_alloc() call, even though cma_alloc() can allocate an
> > arbitrary number of CMA pages. When looking at /proc/vmstat, the number of
> > successful (or failed) cma_alloc() calls says little about how many CMA
> > pages were allocated via cma_alloc() versus via the page allocator (a
> > regular allocation request or a PCP list refill).
> > 
> > This can also be rather confusing to a user who isn't familiar with the
> > code, since the unit of measurement for nr_free_cma is the number of pages,
> > but cma_alloc_success and cma_alloc_fail count the number of cma_alloc()
> > function calls.
> > 
> > Let's make this consistent, and arguably more useful, by having
> > CMA_ALLOC_SUCCESS count the number of successfully allocated CMA pages, and
> > CMA_ALLOC_FAIL count the number of pages cma_alloc() failed to
> > allocate.
> > 
> > For users who wish to track the number of cma_alloc() calls, tracepoints
> > for that are already implemented.
> > 
> > Signed-off-by: Alexandru Elisei <alexandru.elisei@....com>
> > ---
> >  mm/cma.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/mm/cma.c b/mm/cma.c
> > index f49c95f8ee37..dbf7fe8cb1bd 100644
> > --- a/mm/cma.c
> > +++ b/mm/cma.c
> > @@ -517,10 +517,10 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
> >  	pr_debug("%s(): returned %p\n", __func__, page);
> >  out:
> >  	if (page) {
> > -		count_vm_event(CMA_ALLOC_SUCCESS);
> > +		count_vm_events(CMA_ALLOC_SUCCESS, count);
> >  		cma_sysfs_account_success_pages(cma, count);
> >  	} else {
> > -		count_vm_event(CMA_ALLOC_FAIL);
> > +		count_vm_events(CMA_ALLOC_FAIL, count);
> >  		if (cma)
> >  			cma_sysfs_account_fail_pages(cma, count);
> >  	}
> 
> Without getting into the merits of this patch - which is actually trying to
> change the semantics of /proc/vmstat - I am wondering how this is even
> related to this particular series? If required, this could be debated on its
> own separately.

Having the number of CMA pages allocated and the number of CMA pages freed
allows someone to infer how many tagged pages are in use at a given time:
(allocated CMA pages - CMA pages allocated by drivers* - CMA pages
released) * 32, since one tag storage page holds the tags for 32 data pages.
That is valuable information for software and hardware designers.

Besides that, for every iteration of the series, this has proven invaluable
for discovering bugs with freeing and/or reserving tag storage pages.

*That would require userspace to read cma_alloc_success and
cma_release_success before any tagged allocations are performed.
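
To make the arithmetic above concrete, here is a minimal userspace sketch. It
assumes the page-based cma_alloc_success counter from this patch and the
cma_release_success counter from elsewhere in the series show up in
/proc/vmstat under those names; "driver_baseline" is only an illustrative
placeholder for the CMA pages held by drivers, sampled before any tagged
allocations are performed:

/* Sketch only, not part of the patch. */
#include <stdio.h>
#include <string.h>

/* Return the value of a /proc/vmstat counter, or -1 if not found. */
static long vmstat_read(const char *name)
{
	char line[128];
	long val = -1;
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f)
		return -1;

	while (fgets(line, sizeof(line), f)) {
		char key[64];
		long v;

		if (sscanf(line, "%63s %ld", key, &v) == 2 &&
		    !strcmp(key, name)) {
			val = v;
			break;
		}
	}
	fclose(f);
	return val;
}

int main(void)
{
	/* Illustrative placeholder: CMA pages allocated by drivers, taken
	 * from a snapshot made before any tagged allocations. */
	long driver_baseline = 0;
	long alloc = vmstat_read("cma_alloc_success");
	long release = vmstat_read("cma_release_success");

	if (alloc < 0 || release < 0) {
		fprintf(stderr, "CMA counters not available\n");
		return 1;
	}

	/* One tag storage page holds the tags for 32 data pages. */
	printf("tagged pages in use (approx): %ld\n",
	       (alloc - driver_baseline - release) * 32);
	return 0;
}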

Thanks,
Alex
