Message-ID: <20181018081552.GZ18839@dhcp22.suse.cz>
Date: Thu, 18 Oct 2018 10:15:52 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Chris Wilson <chris@...is-wilson.co.uk>
Cc: Kuo-Hsin Yang <vovoy@...omium.org>, linux-kernel@...r.kernel.org,
intel-gfx@...ts.freedesktop.org, linux-mm@...ck.org,
akpm@...ux-foundation.org, peterz@...radead.org,
dave.hansen@...el.com, corbet@....net, hughd@...gle.com,
joonas.lahtinen@...ux.intel.com, marcheu@...omium.org,
hoegsberg@...omium.org
Subject: Re: [PATCH 2/2] drm/i915: Mark pinned shmemfs pages as unevictable
On Thu 18-10-18 07:56:45, Chris Wilson wrote:
> Quoting Chris Wilson (2018-10-16 19:31:06)
> > Fwiw, the shmem_unlock_mapping() call feels quite expensive, almost
> > nullifying the advantage gained from not walking the lists in reclaim.
> > I'll have better numbers in a couple of days.
>
> Using a test ("igt/benchmarks/gem_syslatency -t 120 -b -m" on kbl)
> consisting of cycletest with a background load of trying to allocate +
> populate 2MiB (to hit thp) while catting all files to /dev/null, the
> result of using mapping_set_unevictable is mixed.
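(A minimal userspace sketch of that kind of background load, assuming a
2MiB-aligned anonymous allocation plus MADV_HUGEPAGE is enough to be
backed by a THP; the loop below is illustrative, is not the actual
gem_syslatency code, and leaves out the parallel "cat all files to
/dev/null" part:)

#define _DEFAULT_SOURCE
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define THP_SIZE (2UL << 20)	/* 2MiB, the x86-64 huge page size */

int main(void)
{
	/* runs until killed, like the background load in the test */
	for (;;) {
		void *p;

		/* 2MiB-aligned anonymous allocation so a THP can back it */
		if (posix_memalign(&p, THP_SIZE, THP_SIZE))
			continue;
		madvise(p, THP_SIZE, MADV_HUGEPAGE);
		memset(p, 0xaa, THP_SIZE);	/* populate, faulting the pages in */
		free(p);
	}
}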
I haven't really read through your report completely yet, but I wanted to
point out that the above test scenario is unlikely to show the real effect
of the LRU scanning overhead, because shmem pages live on the anonymous
LRU list. With plenty of file page cache available we do not even scan the
anonymous LRU lists. You would have to generate a swapout workload to
test this properly.
On the other hand, if mapping_set_unevictable really has a measurably bad
performance impact, then this is probably not worth much, because most
workloads are swap modest.
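For reference, a minimal sketch of the pin/unpin pattern being weighed
here, i.e. the mapping_set_unevictable() call on pin and the
shmem_unlock_mapping() walk on unpin that Chris measures above; the
helper names are illustrative and this is not the actual i915 patch code:

#include <linux/pagemap.h>
#include <linux/shmem_fs.h>

/* Sketch only: 'mapping' is the shmem file's address_space. */
static void pin_shmem_pages(struct address_space *mapping)
{
	/*
	 * Newly faulted pages go straight to the unevictable LRU;
	 * pages already on an evictable LRU are typically only moved
	 * lazily, when reclaim next isolates them.
	 */
	mapping_set_unevictable(mapping);
}

static void unpin_shmem_pages(struct address_space *mapping)
{
	mapping_clear_unevictable(mapping);
	/*
	 * Walk the whole mapping and put its pages back on the
	 * evictable LRU lists -- the call measured as expensive above.
	 */
	shmem_unlock_mapping(mapping);
}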
--
Michal Hocko
SUSE Labs