Message-Id: <20220428161644.84921-1-sj@kernel.org>
Date: Thu, 28 Apr 2022 16:16:44 +0000
From: sj@...nel.org
To: Barry Song <21cnbao@...il.com>
Cc: sj@...nel.org, "Andrew Morton" <akpm@...ux-foundation.org>,
Linux-MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
"Matthew Wilcox" <willy@...radead.org>, shuah@...nel.org,
brendanhiggins@...gle.com, foersleo@...zon.de, sieberf@...zon.com,
"Shakeel Butt" <shakeelb@...gle.com>, sjpark@...zon.de,
tuhailong@...il.com, "Song Jiang" <sjiang88@...il.com>,
"Simon Zhang (张诗明)" <zhangshiming@...o.com>,
"Peifeng Li (李培锋, wink)" <lipeifeng@...o.com>, linux-damon@...zon.com
Subject: Re: DAMON VA regions don't split on a large Android APP
On Thu, 28 Apr 2022 13:27:59 +1200 Barry Song <21cnbao@...il.com> wrote:
[...]
>
> Thanks for the clarification.
>
> I hardcoded min_nr_regions to 200 with the following diff:
> diff --git a/_damon.py b/_damon.py
> index 1306ea1..82342a5 100644
> --- a/_damon.py
> +++ b/_damon.py
> @@ -344,7 +344,7 @@ def set_attrs_argparser(parser):
> parser.add_argument('-u', '--updr', metavar='<interval>', type=int,
> default=1000000, help='regions update interval (us)')
> parser.add_argument('-n', '--minr', metavar='<# regions>', type=int,
> - default=10, help='minimal number of regions')
> + default=200, help='minimal number of regions')
> parser.add_argument('-m', '--maxr', metavar='<# regions>', type=int,
> default=1000, help='maximum number of regions')
>
>
> Now wss seems to make more sense:
>
> ~/damo # ./damo monitor --report_type=wss --count=20 2551
> # <percentile> <wss>
> # target_id 0
> # avr: 235.394 MiB
>   0         0 B |                                                           |
>  25   2.164 MiB |                                                           |
>  50 129.875 MiB |*********                                                  |
>  75 430.547 MiB |******************************                             |
> 100 844.238 MiB |***********************************************************|
>
> # <percentile> <wss>
> # target_id 0
> # avr: 352.501 MiB
>   0   8.781 MiB |                                                           |
[...]
> 100 664.480 MiB |***********************************************************|
>
> Regions are like:
>
> monitoring_start: 2.250 s
> monitoring_end: 2.350 s
> monitoring_duration: 100.425 ms
> target_id: 0
> nr_regions: 488
> 000012c00000-00002c14a000( 405.289 MiB): 0
> 00002c14a000-000044f05000( 397.730 MiB): 0
> 000044f05000-00005d106000( 386.004 MiB): 0
> 00005d106000-0000765f9000( 404.949 MiB): 0
> 0000765f9000-0000867b8000( 257.746 MiB): 0
> 0000867b8000-00009fb18000( 403.375 MiB): 0
[...]
> 007f74a66000-007f8caaf000( 384.285 MiB): 0
> 007f8caaf000-007fa423b000( 375.547 MiB): 0
> 007fa423b000-007fb9fb6000( 349.480 MiB): 0
> 007fb9fb6000-007fd29ae000( 393.969 MiB): 0
> 007fd29ae000-007fdbd6e000( 147.750 MiB): 0
>
> Though I am not quite sure if it is accurate enough :-) so a
> fixed-granularity mode would be a nice feature.
Totally agreed. Thank you for voicing this! I will use it to re-prioritize
my TODO list items.
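By the way, if the subcommand you used wires up the 'set_attrs_argparser()'
shown in your diff, you should be able to get the same effect without
patching the default, by passing the option on the command line, e.g.:

    $ ./damo monitor -n 200 --report_type=wss --count=20 2551

(Whether 'damo monitor' itself accepts '-n'/'--minr' is an assumption on my
part here; the option definition comes from the parser in your diff.)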
[...]
> > >
> > > And I have a question: what do percentiles 0, 25, 50, and 75 mean here?
> > > Why are they so different from percentile 100?
> > > For example, 0/25/50/75 have only KiB but 100 has GiB.
> >
> > For each aggregation interval, we get one snapshot. So, if we have
> > monitoring results recorded for, say, 100 aggregation intervals, we have
> > 100 snapshots. 'damo' calculates the working set size of each snapshot by
> > summing the sizes of the regions assumed to be accessed at least once.
> > So, in this example, we get 100 wss values. Then, 'damo' sorts the values
> > and provides the smallest one as the 0th percentile, the 25th smallest
> > value as the 25th percentile, and so on.
> >
> > The 100th percentile wss is usually noisy, as the DAMON regions wouldn't
> > have converged well at the beginning of the record. I believe that could
> > be the reason why the 100th percentile wss is so unexpectedly big.
> >
> > I personally use the 50th percentile as a reliable value.
>
> Thanks, it seems you mean that if we get 100 snapshots with values exactly
> 2, 4, 6, 8, 10, ..., 198, 200 (just an example),
>
> then for 25% we will get 50; for 50% we will get 100; for 75% we will get
> 150; and for 100% we will get 200. Right?
Yes, you're understanding my point perfectly.
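In code, the calculation is roughly the following (an illustrative sketch;
the names are hypothetical, not damo's actual internals):

    def snapshot_wss(regions):
        # wss of one snapshot: total size of the regions that saw at
        # least one access during the aggregation interval.
        # 'regions' is a list of (start, end, nr_accesses) tuples.
        return sum(end - start
                   for start, end, nr_accesses in regions
                   if nr_accesses > 0)

    def wss_percentiles(snapshots, points=(0, 25, 50, 75, 100)):
        wss_values = sorted(snapshot_wss(s) for s in snapshots)
        n = len(wss_values)
        # Following the description above: the p-th percentile is the
        # (p * n / 100)-th smallest value; p == 0 maps to the smallest.
        return {p: wss_values[max(0, p * n // 100 - 1)] for p in points}

With your example of 100 snapshots valued 2, 4, ..., 200, this indeed gives
50, 100, 150, and 200 for the 25/50/75/100th percentiles.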
>
> I am not quite sure I understand "as the DAMON regions wouldn't have
> converged well at the beginning of the record". In case we are monitoring
> with --count=2000, I suppose regions are not split very well only at the
> beginning? Once the monitor has run for a while, the regions should have
> become relatively stable? I mean, I don't quite understand why 100% is
> noise and 50% is more reliable.
'damo monitor' simply repeats 'damo record' and 'damo report'. That is, it
starts recording, stops recording, generates a report, and repeats.
Therefore every 'damo monitor' result is a fresh one, not a snapshot of an
ongoing record, and the regions have to converge from scratch for every
'damo monitor' output. Sorry for the ugly implementation. It should be
improved in the near future.
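In pseudo-Python, the current behavior is roughly this (a sketch only; the
exact subcommand names and flags are assumptions based on this thread, not
a copy of damo's implementation):

    import subprocess

    def monitor(target_pid, count):
        for _ in range(count):
            # Each iteration starts a fresh record, so the DAMON regions
            # have to re-converge from scratch every time; the early
            # snapshots of every iteration are therefore noisy.
            subprocess.run(['./damo', 'record', str(target_pid)],
                           check=True)
            subprocess.run(['./damo', 'report', 'wss'], check=True)

Keeping one long-running record and reporting periodic snapshots of it
would let the regions converge once and stay converged; that is the
improvement implied above.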
Thanks,
SJ
[...]