Message-ID: <87d7ec1f-d892-0491-a2de-3d0feecca647@nvidia.com>
Date: Thu, 4 Feb 2021 16:24:20 -0800
From: John Hubbard <jhubbard@...dia.com>
To: Minchan Kim <minchan@...nel.org>
CC: Andrew Morton <akpm@...ux-foundation.org>,
<gregkh@...uxfoundation.org>, <surenb@...gle.com>,
<joaodias@...gle.com>, LKML <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>
Subject: Re: [PATCH] mm: cma: support sysfs
On 2/4/21 4:12 PM, Minchan Kim wrote:
...
>>> Then, how do we know how often the CMA API failed?
>>
>> Why would you even need to know that, *in addition* to knowing specific
>> page allocation numbers that failed? Again, there is no real-world motivation
>> cited yet, just "this is good data". Need more stories and support here.
>
> Let me give an example.
>
> Let's assume we use memory buffer allocation via CMA to enable
> bluetooth on a device.
> If the user taps the bluetooth button on the phone but the memory
> allocation from CMA fails, the button stays grayed out.
> The user assumes the tap didn't register and taps again; this time
> the CMA allocation happens to succeed, the bluetooth button turns on,
> and they can listen to music.
>
> Here, the product team needs to monitor how often CMA allocation
> fails, so that if the failure ratio climbs steadily above the
> acceptable bar, engineers know they need to investigate.
>
> Make sense?
>
Yes, except that it raises more questions:

1) Isn't this just a standard allocation failure? Don't you already have a way
to track that?

Presumably, having the source code, you can easily deduce that a bluetooth
allocation failure can be traced directly to a CMA allocation failure, right?

Anyway, even though the above is still a little murky, I expect you're right
that it's good to have *some* indication, somewhere, about CMA behavior...

Thinking about this some more, I wonder if this is really /proc/vmstat-style
data that we're talking about. It seems to fit right in there, yes?
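
To make the /proc/vmstat idea concrete, here is a minimal userspace sketch of
the kind of monitoring the product-team use case above implies. It assumes,
purely for illustration, that counters named "cma_alloc_success" and
"cma_alloc_fail" show up in /proc/vmstat; the actual names would depend on how
the patch (or a vmstat-based variant of it) ends up exposing them:

/*
 * Hypothetical monitor: read CMA success/failure counters from
 * /proc/vmstat and report the failure ratio. The counter names
 * "cma_alloc_success" and "cma_alloc_fail" are assumptions for
 * illustration, not guaranteed kernel names.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *fp = fopen("/proc/vmstat", "r");
	char name[64];
	unsigned long long val, ok = 0, fail = 0;

	if (!fp) {
		perror("fopen /proc/vmstat");
		return 1;
	}

	/* Each /proc/vmstat line is "<name> <value>". */
	while (fscanf(fp, "%63s %llu", name, &val) == 2) {
		if (!strcmp(name, "cma_alloc_success"))		/* assumed name */
			ok = val;
		else if (!strcmp(name, "cma_alloc_fail"))	/* assumed name */
			fail = val;
	}
	fclose(fp);

	if (ok + fail)
		printf("CMA alloc failure ratio: %.2f%%\n",
		       100.0 * fail / (ok + fail));
	else
		printf("no CMA allocation events recorded\n");

	return 0;
}

Something like this run periodically (or just scraping the two lines with a
shell one-liner) would give the failure-ratio trend without adding any new
sysfs surface.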
thanks,
--
John Hubbard
NVIDIA