Message-ID: <20181101130910.GI23921@dhcp22.suse.cz>
Date: Thu, 1 Nov 2018 14:09:10 +0100
From: Michal Hocko <mhocko@...nel.org>
To: Vovo Yang <vovoy@...omium.org>
Cc: dave.hansen@...el.com, linux-kernel@...r.kernel.org,
intel-gfx@...ts.freedesktop.org, linux-mm@...ck.org,
Chris Wilson <chris@...is-wilson.co.uk>,
Joonas Lahtinen <joonas.lahtinen@...ux.intel.com>,
peterz@...radead.org, akpm@...ux-foundation.org
Subject: Re: [PATCH v3] mm, drm/i915: mark pinned shmemfs pages as unevictable
On Thu 01-11-18 19:28:46, Vovo Yang wrote:
> On Thu, Nov 1, 2018 at 12:42 AM Michal Hocko <mhocko@...nel.org> wrote:
> > On Wed 31-10-18 07:40:14, Dave Hansen wrote:
> > > Didn't we create the unevictable lists in the first place because
> > > scanning alone was observed to be so expensive in some scenarios?
> >
> > Yes, that is the case. I might have just misunderstood the code: I thought
> > those pages were already on the LRU when the unevictable flag was set and
> > that we would only move them to the unevictable list lazily during
> > reclaim. If the flag is set at the time the page is added to the LRU, then
> > it should get to the proper LRU list right away. But then I do not
> > understand the test results from the previous run at all.
>
> "gem_syslatency -t 120 -b -m" allocates a lot of anon pages, it consists of
> these looping threads:
> * ncpu threads to alloc i915 shmem buffers, these buffers are freed by i915
> shrinker.
> * ncpu threads to mmap, write, munmap an 2 MiB mapping.
> * 1 thread to cat all files to /dev/null
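> 
> Roughly, each mmap thread body looks like this (a simplified sketch, not
> the actual igt source; the MAP_ANONYMOUS mapping and the memset are my
> shorthand for how the anon pages get allocated and dirtied):
> 
>     #include <string.h>
>     #include <sys/mman.h>
> 
>     #define SZ (2UL << 20)  /* 2 MiB */
> 
>     static void *mmap_loop(void *arg)
>     {
>             for (;;) {
>                     void *p = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
>                                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>                     if (p == MAP_FAILED)
>                             continue;
>                     memset(p, 0, SZ);  /* touch every page -> anon LRU */
>                     munmap(p, SZ);
>             }
>             return NULL;
>     }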
>
> Without the unevictable patch, after rebooting and running
> "gem_syslatency -t 120 -b -m", I got these custom vmstat:
> pgsteal_kswapd_anon 29261
> pgsteal_kswapd_file 1153696
> pgsteal_direct_anon 255
> pgsteal_direct_file 13050
> pgscan_kswapd_anon 14524536
> pgscan_kswapd_file 1488683
> pgscan_direct_anon 1702448
> pgscan_direct_file 25849
>
> And meminfo shows a large anon LRU size during the test.
> # cat /proc/meminfo | grep -i "active("
> Active(anon): 377760 kB
> Inactive(anon): 3195392 kB
> Active(file): 19216 kB
> Inactive(file): 16044 kB
>
> With this patch, the custom vmstat after test:
> pgsteal_kswapd_anon 74962
> pgsteal_kswapd_file 903588
> pgsteal_direct_anon 4434
> pgsteal_direct_file 14969
> pgscan_kswapd_anon 2814791
> pgscan_kswapd_file 1113676
> pgscan_direct_anon 526766
> pgscan_direct_file 32432
>
> The anon pgscan count is reduced.
OK, so that explains my question about the test case. Even though you
generate a lot of page cache, the amount is still too small to trigger
mostly-pagecache reclaim, so the anon LRUs are scanned as well.

Now to the difference from the previous version, which simply set the
UNEVICTABLE flag on the mapping. Am I right in assuming that the pages are
already on the LRU at that point? Is there any reason the mapping cannot
have the flag set before they are added to the LRU?
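
Something along these lines is what I have in mind (only a sketch to
illustrate the ordering, not a tested patch; the exact i915 call sites
are yours to judge):

	/* when populating/pinning the object's backing pages */
	struct address_space *mapping = file_inode(obj->base.filp)->i_mapping;

	/*
	 * Mark the mapping unevictable before any page is instantiated, so
	 * shmem_read_mapping_page_gfp() adds the pages directly to the
	 * unevictable LRU instead of reclaim moving them there lazily.
	 */
	mapping_set_unevictable(mapping);
	for (i = 0; i < page_count; i++)
		pages[i] = shmem_read_mapping_page_gfp(mapping, i, gfp);

	/* and when the pages are unpinned/released again */
	mapping_clear_unevictable(mapping);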
--
Michal Hocko
SUSE Labs