Message-ID: <20140318063822.GS29270@yliu-dev.sh.intel.com>
Date: Tue, 18 Mar 2014 14:38:22 +0800
From: Yuanhan Liu <yuanhan.liu@...ux.intel.com>
To: Hugh Dickins <hughd@...gle.com>
Cc: Mel Gorman <mgorman@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
Suleiman Souhlal <suleiman@...gle.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Yuanhan Liu <yuanhan.liu@...ux.intel.com>
Subject: Re: performance regression due to commit e82e0561("mm: vmscan: obey
proportional scanning requirements for kswapd")
On Sat, Mar 15, 2014 at 08:56:10PM -0700, Hugh Dickins wrote:
> On Fri, 14 Mar 2014, Mel Gorman wrote:
> > On Thu, Mar 13, 2014 at 05:44:57AM -0700, Hugh Dickins wrote:
> > > On Wed, 12 Mar 2014, Mel Gorman wrote:
> > > > On Tue, Feb 18, 2014 at 04:01:22PM +0800, Yuanhan Liu wrote:
... snip ...
> > > I missed Yuanhan's mail, but seeing your reply reminds me of another
> > > issue with that proportionality patch - or perhaps more thought would
> > > show them to be two sides of the same issue, with just one fix required.
> > > Let me throw our patch into the cauldron.
> > >
> > > [PATCH] mm: revisit shrink_lruvec's attempt at proportionality
> > >
> > > We have a memcg reclaim test which exerts a certain amount of pressure,
> > > and expects to see a certain range of page reclaim in response. The
> > > allowed range is very wide, but the test repeatably failed from v3.11
> > > onwards, because reclaim goes wild and frees up almost everything.
> > >
> > > This wild behaviour bisects to Mel's "scan_adjusted" commit e82e0561dae9
> > > "mm: vmscan: obey proportional scanning requirements for kswapd". That
> > > attempts to achieve proportionality between anon and file lrus: to the
> > > extent that once one of those is empty, it then tries to empty the other.
> > > Stop that.
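(A minimal user-space sketch of the scan_adjusted behaviour described above
may make it concrete -- simplified logic and made-up numbers, not the real
mm/vmscan.c code: once the reclaim target is met, the loop zeroes the
smaller list's remaining scan target and keeps scanning the larger one
until its target runs out.)

#include <stdio.h>

#define BATCH 32        /* stand-in for SWAP_CLUSTER_MAX */

static unsigned long min_ul(unsigned long a, unsigned long b)
{
        return a < b ? a : b;
}

int main(void)
{
        /* made-up scan targets: small anon list, large file list */
        unsigned long nr_anon = 64, nr_file = 4096;
        unsigned long nr_to_reclaim = 128, nr_reclaimed = 0;
        int scan_adjusted = 0;

        while (nr_anon || nr_file) {
                unsigned long scan;

                /* scan both lists in small batches, roughly as shrink_lruvec() does */
                scan = min_ul(nr_anon, BATCH);
                nr_anon -= scan;
                nr_reclaimed += scan;   /* pretend every scanned page is reclaimed */

                scan = min_ul(nr_file, BATCH);
                nr_file -= scan;
                nr_reclaimed += scan;

                if (nr_reclaimed < nr_to_reclaim || scan_adjusted)
                        continue;

                /*
                 * Reclaim target met.  Instead of breaking out, stop scanning
                 * only the smaller list and keep scanning the larger one to
                 * preserve the anon:file ratio -- this is what can end up
                 * emptying the larger list.
                 */
                if (nr_file > nr_anon)
                        nr_anon = 0;
                else
                        nr_file = 0;
                scan_adjusted = 1;
        }

        printf("reclaimed %lu pages for a target of %lu\n",
               nr_reclaimed, nr_to_reclaim);
        return 0;
}

With these illustrative numbers it reclaims 4160 pages against a target of
128 -- the "frees up almost everything" effect in miniature.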
> > >
> > > Signed-off-by: Hugh Dickins <hughd@...gle.com>
> > > ---
> > >
> > > We've been running happily with this for months; but all that time it's
> > > been on my TODO list with a "needs more thought" tag before we could
> > > upstream it, and I never got around to that. We also have a somewhat
> > > similar, but older and quite independent, fix to get_scan_count() from
> > > Suleiman, which I'd meant to send along at the same time: I'll dig that
> > > one out tomorrow or the day after.
>
> I've sent that one out now in a new thread
> https://lkml.org/lkml/2014/3/15/168
> and also let's tie these together with Hannes's
> https://lkml.org/lkml/2014/3/14/277
>
> > >
> >
> > I ran a battery of page reclaim related tests against it on top of
> > 3.14-rc6. Workloads showed small improvements in their absolute performance
> > but actual IO behaviour looked much better in some tests. This is the
> > iostats summary for the test that showed the biggest difference -- dd of
> > a large file on ext3.
> >
> >                   3.14.0-rc6         3.14.0-rc6
> >                      vanilla  proportional-v1r1
> > Mean sda-avgqz       1045.64             224.18
> > Mean sda-await       2120.12             506.77
> > Mean sda-r_await       18.61              19.78
> > Mean sda-w_await    11089.60            2126.35
> > Max  sda-avgqz       2294.39             787.13
> > Max  sda-await       7074.79            2371.67
> > Max  sda-r_await      503.00             414.00
> > Max  sda-w_await    35721.93            7249.84
> >
> > Not all workloads benefitted. The same workload on ext4 showed no useful
> > difference. btrfs looks like
> >
> >                   3.14.0-rc6         3.14.0-rc6
> >                      vanilla  proportional-v1r1
> > Mean sda-avgqz        762.69             650.39
> > Mean sda-await       2438.46            2495.15
> > Mean sda-r_await       44.18              47.20
> > Mean sda-w_await     6109.19            5139.86
> > Max  sda-avgqz       2203.50            1870.78
> > Max  sda-await       7098.26            6847.21
> > Max  sda-r_await       63.02             156.00
> > Max  sda-w_await    19921.70           11085.13
> >
> > Better but not as dramatically so. I didn't analyse why. A workload that
> > had a large anonymous mapping with large amounts of IO in the background
> > did not show any regressions, so based on that and the fact that the patch
> > looks ok, here goes nothing:
> >
> > Acked-by: Mel Gorman <mgorman@...e.de>
>
> Big thank you, Mel, for doing so much work on it, and so very quickly.
> I get quite lost in the numbers myself: I'm much more convinced of it
> by your numbers and ack.
>
> >
> > You say it's already been tested for months but it would be nice if the
> > workload that generated this thread was also tested.
>
> Yes indeed: Yuanhan, do you have time to try this patch for your
> testcase? I'm hoping it will prove at least as effective as your
> own suggested patch, but please let us know what you find - thanks.
Hi Hugh,
Sure, and sorry to tell you that this patch introduced another roughly 50%
performance decrease in this testcase, from an average of 60 MB/s down to
30 MB/s. Moreover, the dd throughput for each process was steady before,
but with this patch applied it is quite bumpy, swinging between 20 MB/s and
40 MB/s, which works out to an average of about 30 MB/s:
11327188992 bytes (11 GB) copied, 300.014 s, 37.8 MB/s
1809373+0 records in
1809372+0 records out
7411187712 bytes (7.4 GB) copied, 300.008 s, 24.7 MB/s
3068285+0 records in
3068284+0 records out
12567691264 bytes (13 GB) copied, 300.001 s, 41.9 MB/s
1883877+0 records in
1883876+0 records out
7716356096 bytes (7.7 GB) copied, 300.002 s, 25.7 MB/s
1807674+0 records in
1807673+0 records out
7404228608 bytes (7.4 GB) copied, 300.024 s, 24.7 MB/s
1796473+0 records in
1796472+0 records out
7358349312 bytes (7.4 GB) copied, 300.008 s, 24.5 MB/s
1905655+0 records in
1905654+0 records out
7805558784 bytes (7.8 GB) copied, 300.016 s, 26.0 MB/s
2819168+0 records in
2819167+0 records out
11547308032 bytes (12 GB) copied, 300.025 s, 38.5 MB/s
1848381+0 records in
1848380+0 records out
7570964480 bytes (7.6 GB) copied, 300.005 s, 25.2 MB/s
3023133+0 records in
3023132+0 records out
12382748672 bytes (12 GB) copied, 300.024 s, 41.3 MB/s
1714585+0 records in
1714584+0 records out
7022936064 bytes (7.0 GB) copied, 300.011 s, 23.4 MB/s
1835132+0 records in
1835131+0 records out
7516696576 bytes (7.5 GB) copied, 299.998 s, 25.1 MB/s
1733341+0 records in
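(Averaging the twelve complete per-process figures above -- trivial
user-space arithmetic, nothing kernel related -- gives the ~30 MB/s quoted:)

#include <stdio.h>

int main(void)
{
        /* the twelve complete dd throughputs (MB/s) listed above */
        const double mbps[] = { 37.8, 24.7, 41.9, 25.7, 24.7, 24.5,
                                26.0, 38.5, 25.2, 41.3, 23.4, 25.1 };
        const int n = sizeof(mbps) / sizeof(mbps[0]);
        double sum = 0.0;

        for (int i = 0; i < n; i++)
                sum += mbps[i];

        /* prints: avg = 29.9 MB/s over 12 dd processes */
        printf("avg = %.1f MB/s over %d dd processes\n", sum / n, n);
        return 0;
}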
--yliu