Message-ID: <YByi/gdaGJeV/+8b@google.com>
Date: Thu, 4 Feb 2021 17:44:30 -0800
From: Minchan Kim <minchan@...nel.org>
To: John Hubbard <jhubbard@...dia.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
gregkh@...uxfoundation.org, surenb@...gle.com, joaodias@...gle.com,
LKML <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>
Subject: Re: [PATCH] mm: cma: support sysfs
On Thu, Feb 04, 2021 at 04:24:20PM -0800, John Hubbard wrote:
> On 2/4/21 4:12 PM, Minchan Kim wrote:
> ...
> > > > Then, how do we know how often the CMA API fails?
> > >
> > > Why would you even need to know that, *in addition* to knowing specific
> > > page allocation numbers that failed? Again, there is no real-world motivation
> > > cited yet, just "this is good data". Need more stories and support here.
> >
> > Let me give an example.
> >
> > Let's assume the device allocates a memory buffer via CMA to enable
> > bluetooth.
> > If the user taps the bluetooth button on the phone but the allocation
> > from CMA fails, the user will still see the bluetooth button grayed
> > out.
> > The user would think the tap wasn't firm enough, so they tap it
> > again; this time the CMA allocation happens to succeed, the bluetooth
> > button shows as enabled, and they can listen to music.
> >
> > Here, the product team needs to monitor how often CMA allocation
> > fails: if the failure ratio steadily rises above some bar, it means
> > engineers need to investigate.
> >
> > Make sense?
> >
>
> Yes, except that it raises more questions:
>
> 1) Isn't this just standard allocation failure? Don't you already have a way
> to track that?
>
> Presumably, having the source code, you can easily deduce that a bluetooth
> allocation failure goes directly to a CMA allocation failure, right?
>
> Anyway, even though the above is still a little murky, I expect you're
> right that it's good to have *some* indication, somewhere, about CMA
> behavior...
>
> Thinking about this some more, I wonder if this is really /proc/vmstat sort
> of data that we're talking about. It seems to fit right in there, yes?
The thing is that there are multiple CMA instances (cma-A, cma-B,
cma-C), and each CMA heap has its own specific scenario. /proc/vmstat
could become quite bloated as the number of CMA instances grows.
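
To make it concrete, suppose each instance exported counters under a
per-instance sysfs directory, e.g.
/sys/kernel/mm/cma/<name>/alloc_pages_success and
/sys/kernel/mm/cma/<name>/alloc_pages_fail (the paths and file names
here are just illustrative, not a final ABI). A monitoring daemon could
then compute a per-heap failure ratio with a minimal userspace sketch
like this:

#include <stdio.h>

/* Read one (hypothetical) per-instance counter file, e.g.
 * /sys/kernel/mm/cma/cma-A/alloc_pages_fail. Returns 0 on any error. */
static unsigned long read_counter(const char *cma_name, const char *stat)
{
	char path[256];
	unsigned long val = 0;
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/kernel/mm/cma/%s/%s", cma_name, stat);
	f = fopen(path, "r");
	if (!f)
		return 0;
	if (fscanf(f, "%lu", &val) != 1)
		val = 0;
	fclose(f);
	return val;
}

int main(int argc, char **argv)
{
	const char *name = argc > 1 ? argv[1] : "cma-A";
	unsigned long ok = read_counter(name, "alloc_pages_success");
	unsigned long fail = read_counter(name, "alloc_pages_fail");
	unsigned long total = ok + fail;

	/* Per-heap failure ratio; the product team would alarm on a
	 * sustained rise of this number. */
	printf("%s: %lu/%lu allocations failed (%.1f%%)\n",
	       name, fail, total,
	       total ? 100.0 * fail / total : 0.0);
	return 0;
}

Run it as e.g. "./cma_mon cma-A" from whatever periodic monitoring job
the product team already has, once per heap of interest.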