Date:   Thu, 4 Feb 2021 16:25:51 -0800
From:   John Hubbard <jhubbard@...dia.com>
To:     Suren Baghdasaryan <surenb@...gle.com>
CC:     Minchan Kim <minchan@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        John Dias <joaodias@...gle.com>,
        LKML <linux-kernel@...r.kernel.org>,
        linux-mm <linux-mm@...ck.org>
Subject: Re: [PATCH] mm: cma: support sysfs

On 2/4/21 3:45 PM, Suren Baghdasaryan wrote:
...
>>>>> 2) The overall CMA allocation attempts/failures (first two items above) seem
>>>>> an odd pair of things to track. Maybe that is what was easy to track, but I'd
>>>>> vote for just omitting them.
>>>>
>>>> Then how would we know how often the CMA API failed?
>>>
>>> Why would you even need to know that, *in addition* to knowing specific
>>> page allocation numbers that failed? Again, there is no real-world motivation
>>> cited yet, just "this is good data". Need more stories and support here.
>>
>> IMHO it would be very useful to see whether there are multiple
>> small-order allocation failures or a few large-order ones, especially
>> for CMA where large allocations are not unusual. For that I believe
>> both alloc_pages_attempt and alloc_pages_fail would be required.
> 
> Sorry, I meant to say "both cma_alloc_fail and alloc_pages_fail would
> be required".

So if you want to know that, the existing items are still a little too indirect
to really capture it. They only let you infer the average allocation size, by
dividing. Instead, we should report the allocation size along with each count.

The limited interface makes this a little awkward, but using zones/ranges could
work: "for this range of allocation sizes, there were the following stats". Or,
some other technique that I haven't thought of (maybe two items per file?) would
be better.

On the other hand, there's an argument for keeping this minimal and simple. That
would probably lead us to putting in a couple of items into /proc/vmstat, as I
just mentioned in my other response, and calling it good.


thanks,
-- 
John Hubbard
NVIDIA