Date:   Fri, 5 Feb 2021 12:25:52 -0800
From:   John Hubbard <jhubbard@...dia.com>
To:     Minchan Kim <minchan@...nel.org>
CC:     Andrew Morton <akpm@...ux-foundation.org>,
        <gregkh@...uxfoundation.org>, <surenb@...gle.com>,
        <joaodias@...gle.com>, LKML <linux-kernel@...r.kernel.org>,
        linux-mm <linux-mm@...ck.org>
Subject: Re: [PATCH] mm: cma: support sysfs

On 2/5/21 8:15 AM, Minchan Kim wrote:
...
>> Yes, approximately. I was wondering if this would suffice at least as a baseline:
>>
>> cma_alloc_success   125
>> cma_alloc_failure   25
> 
> IMO, regardless of my patch, it would be good to have such statistics:
> CMA was born to replace carved-out memory with dynamic allocation,
> ideally for memory efficiency, so a failure should be regarded as
> critical so that an admin can notice how the system is hurt.

Right. So CMA failures are useful for the admin to see, understood.

> 
> Anyway, it's not enough for me, and it's orthogonal to my goal.
> 

OK. But...what *is* your goal, and why is this useless (that's what
orthogonal really means here) for your goal?

Also, would you be willing to try out something simple first,
such as providing an indication that CMA is active, and its overall
success rate, like this:

/proc/vmstat:

cma_alloc_success   125
cma_alloc_failure   25

...or is the only way forward to provide the more detailed items,
complete with per-CMA breakdowns, in a non-debugfs location?
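
To be concrete about the simple option: here is a minimal sketch (only a
sketch; the event names are my invention, and the existing allocation
logic is elided) of how those two counters could ride on the stock
vm_event machinery, so they show up in /proc/vmstat for free:

/*
 * Sketch: add CMA_ALLOC_SUCCESS / CMA_ALLOC_FAIL to enum vm_event_item
 * in include/linux/vm_event_item.h, and matching "cma_alloc_success" /
 * "cma_alloc_fail" strings to vmstat_text[] in mm/vmstat.c. Then:
 */

/* mm/cma.c: count each allocation attempt at the single exit point. */
struct page *cma_alloc(struct cma *cma, size_t count,
		       unsigned int align, bool no_warn)
{
	struct page *page = NULL;

	/*
	 * ... existing bitmap search and alloc_contig_range() logic,
	 * which sets page on success, elided ...
	 */

	if (page)
		count_vm_event(CMA_ALLOC_SUCCESS);
	else
		count_vm_event(CMA_ALLOC_FAIL);

	return page;
}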


>>
>> ...and then, to see if more is needed, some questions:
>>
>> a)  Do you know of an upper bound on how many cma areas there can be
>> (I think Matthew also asked that)?
> 
> There is no upper bound since it's configurable.
> 

OK, thanks, so that pretty much rules out putting per-CMA details into
anything other than a directory or something like it.

>>
>> b) Is tracking the CMA area really as valuable as other possibilities? We can put
>> "a few" to "several" items here, so we really want to get your very favorite bits of
>> information in. If, for example, there can be *lots* of CMA areas, then maybe tracking
> 
> At this moment, allocation/failure counts for each CMA area, since each
> area has its own particular use case, which makes it easy for me to track
> which module will be affected. I think per-CMA statistics are very useful
> for a minimal code change, so I want to enable them by default under
> CONFIG_CMA && CONFIG_SYSFS.
> 
>> by a range of allocation sizes is better...
> 
> I take your suggestion to be something like this.
> 
> [alloc_range] could be an order, or a range defined by an interval:
> 
> /sys/kernel/mm/cma/cma-A/[alloc_range]/success
> /sys/kernel/mm/cma/cma-A/[alloc_range]/fail
> ..
> ..
> /sys/kernel/mm/cma/cma-Z/[alloc_range]/success
> /sys/kernel/mm/cma/cma-Z/[alloc_range]/fail

Actually, I meant "ranges instead of CMA areas", like this:

/<path-to-cma-data>/[alloc_range_1]/success
/<path-to-cma-data>/[alloc_range_1]/fail
/<path-to-cma-data>/[alloc_range_2]/success
/<path-to-cma-data>/[alloc_range_2]/fail
...
/<path-to-cma-data>/[alloc_range_max]/success
/<path-to-cma-data>/[alloc_range_max]/fail

The idea is that knowing which allocation sizes succeeded and which
failed may be even more interesting and useful than knowing which CMA
area contains them.
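
If we went that route, I'd imagine something along these lines (all
names here are invented, and locking/error handling are mostly omitted;
this is a sketch, not a proposal): one kobject per allocation order
under /sys/kernel/mm/cma/, each exposing success/fail via plain
kobj_attributes:

#include <linux/kobject.h>
#include <linux/mm.h>		/* mm_kobj: /sys/kernel/mm */
#include <linux/mmzone.h>	/* MAX_ORDER */
#include <linux/slab.h>
#include <linux/sysfs.h>

struct cma_range_stat {
	struct kobject kobj;
	unsigned long success;
	unsigned long fail;
};

#define to_range_stat(k) container_of(k, struct cma_range_stat, kobj)

static ssize_t success_show(struct kobject *kobj,
			    struct kobj_attribute *attr, char *buf)
{
	return sysfs_emit(buf, "%lu\n", to_range_stat(kobj)->success);
}

static ssize_t fail_show(struct kobject *kobj,
			 struct kobj_attribute *attr, char *buf)
{
	return sysfs_emit(buf, "%lu\n", to_range_stat(kobj)->fail);
}

static struct kobj_attribute success_attr = __ATTR_RO(success);
static struct kobj_attribute fail_attr = __ATTR_RO(fail);

static struct attribute *cma_range_attrs[] = {
	&success_attr.attr,
	&fail_attr.attr,
	NULL,
};
ATTRIBUTE_GROUPS(cma_range);

static void cma_range_release(struct kobject *kobj)
{
	kfree(to_range_stat(kobj));
}

static struct kobj_type cma_range_ktype = {
	.release	= cma_range_release,
	.sysfs_ops	= &kobj_sysfs_ops,
	.default_groups	= cma_range_groups,
};

/*
 * One directory per allocation order; cma_alloc() would then bump the
 * bucket matching the order of each request (that hook is not shown).
 */
static int __init cma_range_sysfs_init(void)
{
	struct kobject *root = kobject_create_and_add("cma", mm_kobj);
	int order;

	if (!root)
		return -ENOMEM;

	for (order = 0; order < MAX_ORDER; order++) {
		struct cma_range_stat *s = kzalloc(sizeof(*s), GFP_KERNEL);

		if (!s)
			return -ENOMEM;
		if (kobject_init_and_add(&s->kobj, &cma_range_ktype, root,
					 "alloc_range_order_%d", order))
			kobject_put(&s->kobj);
	}
	return 0;
}
subsys_initcall(cma_range_sysfs_init);

That would keep the file count bounded by MAX_ORDER rather than by the
number of CMA areas, which is the point.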

> 
> I agree it would also be useful, but I'd like to enable it under
> CONFIG_CMA_SYSFS_ALLOC_RANGE as a separate patchset.
> 

I will stop harassing you very soon; I just want to bottom out on
understanding the real goals first. :)

thanks,
-- 
John Hubbard
NVIDIA
