Message-ID: <20211130172754.GS3366@techsingularity.net>
Date: Tue, 30 Nov 2021 17:27:54 +0000
From: Mel Gorman <mgorman@...hsingularity.net>
To: Alexey Avramov <hakavlad@...ox.lv>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.com>,
Vlastimil Babka <vbabka@...e.cz>,
Rik van Riel <riel@...riel.com>,
Mike Galbraith <efault@....de>,
Darrick Wong <djwong@...nel.org>, regressions@...ts.linux.dev,
Linux-fsdevel <linux-fsdevel@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/1] mm: vmscan: Reduce throttling due to a failure to
make progress
On Wed, Dec 01, 2021 at 01:03:48AM +0900, Alexey Avramov wrote:
> I tested this [1] patch on top of 5.16-rc2. It's the same test with 10 tails.
>
> - with noswap
>
> Summary:
>
> 2021-11-30 23:32:36,890: Stall times for the last 548.6s:
> 2021-11-30 23:32:36,890: -----------
> 2021-11-30 23:32:36,891: some cpu 3.7s, avg 0.7%
> 2021-11-30 23:32:36,891: -----------
> 2021-11-30 23:32:36,891: some io 187.6s, avg 34.2%
> 2021-11-30 23:32:36,891: full io 178.3s, avg 32.5%
> 2021-11-30 23:32:36,891: -----------
> 2021-11-30 23:32:36,892: some memory 392.2s, avg 71.5%
> 2021-11-30 23:32:36,892: full memory 390.7s, avg 71.2%
>
> full psi:
> https://raw.githubusercontent.com/hakavlad/cache-tests/main/516-reclaim-throttle/516-rc2/patch5/noswap/psi
>
> mem:
> https://raw.githubusercontent.com/hakavlad/cache-tests/main/516-reclaim-throttle/516-rc2/patch5/noswap/mem
>
Ok, taking just noswap in isolation, this is what I saw when running
firefox + a youtube video and running tail /dev/zero 10 times in a row:
2021-11-30 17:10:11,817: =================================
2021-11-30 17:10:11,817: Peak values: avg10 avg60 avg300
2021-11-30 17:10:11,817: ----------- ------ ------ ------
2021-11-30 17:10:11,817: some cpu 1.00 0.96 0.56
2021-11-30 17:10:11,817: ----------- ------ ------ ------
2021-11-30 17:10:11,817: some io 0.24 0.06 0.04
2021-11-30 17:10:11,817: full io 0.24 0.06 0.01
2021-11-30 17:10:11,817: ----------- ------ ------ ------
2021-11-30 17:10:11,817: some memory 2.48 0.51 0.38
2021-11-30 17:10:11,817: full memory 2.48 0.51 0.37
2021-11-30 17:10:11,817: =================================
2021-11-30 17:10:11,817: Stall times for the last 53.7s:
2021-11-30 17:10:11,817: -----------
2021-11-30 17:10:11,817: some cpu 0.4s, avg 0.8%
2021-11-30 17:10:11,817: -----------
2021-11-30 17:10:11,817: some io 0.1s, avg 0.2%
2021-11-30 17:10:11,817: full io 0.1s, avg 0.2%
2021-11-30 17:10:11,817: -----------
2021-11-30 17:10:11,817: some memory 0.3s, avg 0.6%
2021-11-30 17:10:11,817: full memory 0.3s, avg 0.6%
Obviously a fairly different experience, most likely due to the
underlying storage.
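
For anyone reproducing this, the workload boils down to roughly the
sketch below; the actual PSI logging harness may differ:

  #!/bin/sh
  # Rough reproducer sketch, not the exact harness used above: sample
  # the PSI files once a second while the memory hog runs.
  while sleep 1; do
          cat /proc/pressure/cpu /proc/pressure/io /proc/pressure/memory
  done > psi.log &
  monitor=$!

  # tail buffers an endless stream of NULs while looking for the last
  # lines of /dev/zero, so each invocation grows until the OOM killer
  # intervenes.
  for i in $(seq 1 10); do
          tail /dev/zero
  done

  kill $monitor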
Can you run the same test, but after doing this:

$ echo 1 > /sys/kernel/debug/tracing/events/vmscan/mm_vmscan_throttled/enable
$ cat /sys/kernel/debug/tracing/trace_pipe > trace.out

and send me the trace.out file, please?
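
For completeness, those writes need root and assume tracefs is
reachable under /sys/kernel/debug/tracing; a rough sketch of the whole
capture is

  # Run as root: the enable file is not writable by ordinary users.
  cd /sys/kernel/debug/tracing
  echo 1 > events/vmscan/mm_vmscan_throttled/enable
  cat trace_pipe > trace.out &
  catpid=$!

  # ... run the tail /dev/zero workload here ...

  # Each emitted event should record the node that throttled, the
  # timeout, how long the task was actually delayed and the reason.
  echo 0 > events/vmscan/mm_vmscan_throttled/enable
  kill $catpid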
--
Mel Gorman
SUSE Labs