Date:   Fri, 5 Feb 2021 08:15:30 -0800
From:   Minchan Kim <minchan@...nel.org>
To:     John Hubbard <jhubbard@...dia.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        gregkh@...uxfoundation.org, surenb@...gle.com, joaodias@...gle.com,
        LKML <linux-kernel@...r.kernel.org>,
        linux-mm <linux-mm@...ck.org>
Subject: Re: [PATCH] mm: cma: support sysfs

On Thu, Feb 04, 2021 at 10:41:14PM -0800, John Hubbard wrote:
> On 2/4/21 10:24 PM, Minchan Kim wrote:
> > On Thu, Feb 04, 2021 at 09:49:54PM -0800, John Hubbard wrote:
> > > On 2/4/21 9:17 PM, Minchan Kim wrote:
> ...
> > > # cat vmstat | grep -i cma
> > > nr_free_cma 261718
> > > 
> > > # cat meminfo | grep -i cma
> > > CmaTotal:        1048576 kB
> > > CmaFree:         1046872 kB
> > > 
> > > OK, given that CMA is already in those two locations, maybe we should put
> > > this information in one or both of those, yes?
> > 
> > Do you suggest something like this, for example?
> > 
> > 
> > cat vmstat | grep -i cma
> > cma_a_success	125
> > cma_a_fail	25
> > cma_b_success	130
> > cma_b_fail	156
> > ..
> > cma_f_fail	xxx
> > 
> 
> Yes, approximately. I was wondering if this would suffice at least as a baseline:
> 
> cma_alloc_success   125
> cma_alloc_failure   25

IMO, regardless of my patch, it would be good to have such statistics:
CMA was born to replace carved-out memory with dynamic allocation,
ideally for memory efficiency, so an allocation failure should be
regarded as critical and an admin should be able to notice how the
system is affected.

Anyway, it's not enough for me and is orthogonal to my goal.

> 
> ...and then, to see if more is needed, some questions:
> 
> a)  Do you know of an upper bound on how many cma areas there can be
> (I think Matthew also asked that)?

There is no upper bound since it's configurable.

> 
> b) Is tracking the cma area really as valuable as other possibilities? We can put
> "a few" to "several" items here, so really want to get your very favorite bits of
> information in. If, for example, there can be *lots* of cma areas, then maybe tracking

At this moment, I want to track allocation success/failure for each CMA
area, since each area has its own particular use case, which makes it
easy to tell which module will be affected. I think per-CMA statistics
are very useful for a minimal code change, so I want to enable them by
default under CONFIG_CMA && CONFIG_SYSFS.

> by a range of allocation sizes is better...

I take your suggestion to be something like this.

[alloc_range] could be a single order, or a range of orders grouped by interval

/sys/kernel/mm/cma/cma-A/[alloc_range]/success
/sys/kernel/mm/cma/cma-A/[alloc_range]/fail
..
..
/sys/kernel/mm/cma/cma-Z/[alloc_range]/success
/sys/kernel/mm/cma/cma-Z/[alloc_range]/fail

I agree it would also be useful, but I'd like to enable it under
CONFIG_CMA_SYSFS_ALLOC_RANGE as a separate patchset.
