Message-ID: <96bc11de-fe47-c7d3-6e61-5a5a5b6d2f4c@nvidia.com>
Date: Thu, 4 Feb 2021 16:34:17 -0800
From: John Hubbard <jhubbard@...dia.com>
To: Suren Baghdasaryan <surenb@...gle.com>
CC: Minchan Kim <minchan@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
John Dias <joaodias@...gle.com>,
LKML <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>
Subject: Re: [PATCH] mm: cma: support sysfs
On 2/4/21 4:25 PM, John Hubbard wrote:
> On 2/4/21 3:45 PM, Suren Baghdasaryan wrote:
> ...
>>>>>> 2) The overall CMA allocation attempts/failures (first two items above) seem
>>>>>> an odd pair of things to track. Maybe that is what was easy to track, but I'd
>>>>>> vote for just omitting them.
>>>>>
>>>>> Then, how to know how often CMA API failed?
>>>>
>>>> Why would you even need to know that, *in addition* to knowing specific
>>>> page allocation numbers that failed? Again, there is no real-world motivation
>>>> cited yet, just "this is good data". Need more stories and support here.
>>>
>>> IMHO it would be very useful to see whether there are multiple
>>> small-order allocation failures or a few large-order ones, especially
>>> for CMA where large allocations are not unusual. For that I believe
>>> both alloc_pages_attempt and alloc_pages_fail would be required.
>>
>> Sorry, I meant to say "both cma_alloc_fail and alloc_pages_fail would
>> be required".
>
> So if you want to know that, the existing items are still a little too indirect
> to really get it right. You can only infer the average allocation size by
> dividing. Instead, we should provide the allocation size, for each count.
>
> The limited interface makes this a little awkward, but using zones/ranges could
> work: "for this range of allocation sizes, there were the following stats". Or,
> some other technique that I haven't thought of (maybe two items per file?) would
> be better.
>
> On the other hand, there's an argument for keeping this minimal and simple. That
> would probably lead us to putting in a couple of items into /proc/vmstat, as I
> just mentioned in my other response, and calling it good.
...and remember: if we keep it nice and minimal and clean, we can put it into
/proc/vmstat and monitor it.
And then if a problem shows up, the more complex and advanced debugging data can
go into debugfs's CMA area. And you're all set.
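
To make the "stats per range of allocation sizes" idea above a bit more concrete, here is a rough userspace sketch. The counter names are hypothetical (invented for illustration, not from the patch): the point is just that pairing a per-bucket attempt counter with a per-bucket failure counter answers the "many small failures vs. a few large ones" question directly, without dividing totals:

```python
# Sketch only: these counter names are hypothetical, invented to illustrate
# the "stats per allocation-size range" idea; they are not real vmstat items.
SAMPLE_VMSTAT = """\
cma_alloc_attempt_order_0_3 120
cma_alloc_fail_order_0_3 2
cma_alloc_attempt_order_4_9 40
cma_alloc_fail_order_4_9 7
cma_alloc_attempt_order_10_plus 5
cma_alloc_fail_order_10_plus 4
"""

def parse_vmstat(text):
    """Parse "name value" lines into a dict, matching /proc/vmstat's format."""
    stats = {}
    for line in text.splitlines():
        name, value = line.split()
        stats[name] = int(value)
    return stats

def failure_rates(stats):
    """Pair each hypothetical per-bucket attempt counter with its failures."""
    prefix = "cma_alloc_attempt_"
    rates = {}
    for name, attempts in stats.items():
        if name.startswith(prefix):
            bucket = name[len(prefix):]
            fails = stats.get("cma_alloc_fail_" + bucket, 0)
            rates[bucket] = (attempts, fails)
    return rates

if __name__ == "__main__":
    for bucket, (attempts, fails) in failure_rates(parse_vmstat(SAMPLE_VMSTAT)).items():
        print(f"{bucket}: {fails}/{attempts} failed")
```

With only the two aggregate counters, all of the per-bucket structure above collapses into a single ratio, which is the indirection being objected to.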
If Android made up some policy not to use debugfs, then:
a) that probably won't prevent engineers from using it anyway, for advanced debugging,
and
b) If (a) somehow falls short, then we need to talk about what Android's plans are to
fill the need. And "fill up sysfs with debugfs items, possibly duplicating some of them,
and generally making an unnecessary mess, to compensate for not using debugfs" is not
my first choice. :)
thanks,
--
John Hubbard
NVIDIA