Message-ID: <CAEHM+4rvBmFWhzPXZrwxXvMEmVdkpsgRg26wVNYSA8HKF_8AwQ@mail.gmail.com>
Date: Fri, 2 Nov 2018 20:35:11 +0800
From: Vovo Yang <vovoy@...omium.org>
To: mhocko@...nel.org
Cc: Dave Hansen <dave.hansen@...el.com>, linux-kernel@...r.kernel.org,
intel-gfx@...ts.freedesktop.org, linux-mm@...ck.org,
Chris Wilson <chris@...is-wilson.co.uk>,
Joonas Lahtinen <joonas.lahtinen@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH v3] mm, drm/i915: mark pinned shmemfs pages as unevictable
On Thu, Nov 1, 2018 at 9:10 PM Michal Hocko <mhocko@...nel.org> wrote:
> OK, so that explains my question about the test case. Even though you
> generate a lot of page cache, the amount is still too small to trigger
> mostly-pagecache reclaim, so the anon LRUs are scanned as well.
>
> Now to the difference from the previous version, which simply set the
> UNEVICTABLE flag on the mapping. Am I right in assuming that the pages
> are already on the LRU at that time? Is there any reason the mapping
> cannot have the flag set before they are added to the LRU?

I checked again. When I run gem_syslatency, the unevictable flag is
set first and the pages are added to the LRU afterwards, so my
explanation of the previous test result was wrong: it should not be
necessary to explicitly move these pages to the unevictable list for
this test case. The performance improvement of this patch on KBL
might instead come from no longer calling shmem_unlock_mapping().
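
For context, the ordering matters because a page added to the LRU
while its mapping is already marked unevictable goes straight to the
unevictable list. A minimal sketch of that idea, not the actual i915
patch (mapping_set_unevictable() is the real helper in
<linux/pagemap.h>; i915_pin_shmem_backing() is a hypothetical caller
name):

#include <linux/pagemap.h>

/*
 * Minimal sketch: mark the shmemfs mapping unevictable before any of
 * its pages are faulted in. Each page then lands directly on the
 * unevictable LRU when first added, so no later
 * shmem_unlock_mapping()/check_move_unevictable_pages() rescan is
 * needed to pin the backing store.
 */
static void i915_pin_shmem_backing(struct address_space *mapping)
{
	/* Must run before the first shmem_read_mapping_page() call. */
	mapping_set_unevictable(mapping);
}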

The perf result of a shmem lock test shows that find_get_entries() is
the most expensive part of shmem_unlock_mapping():

  85.32%--ksys_shmctl
           shmctl_do_lock
            --85.29%--shmem_unlock_mapping
                      |--45.98%--find_get_entries
                      |           --10.16%--radix_tree_next_chunk
                      |--16.78%--check_move_unevictable_pages
                      |--16.07%--__pagevec_release
                      |           --15.67%--release_pages
                      |                      --4.82%--free_unref_page_list
                      |--4.38%--pagevec_remove_exceptionals
                       --0.59%--_cond_resched
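
For reference, each symbol in the profile corresponds to a call in the
shmem_unlock_mapping() loop in mm/shmem.c, which walks the whole radix
tree of the mapping one pagevec at a time (reproduced roughly from the
4.19-era source; details may differ):

void shmem_unlock_mapping(struct address_space *mapping)
{
	struct pagevec pvec;
	pgoff_t indices[PAGEVEC_SIZE];
	pgoff_t index = 0;

	pagevec_init(&pvec);
	/* Stop early if someone else SHM_LOCKs the mapping again. */
	while (!mapping_unevictable(mapping)) {
		/* The radix tree walk: the 45.98% hot spot above. */
		pvec.nr = find_get_entries(mapping, index,
					   PAGEVEC_SIZE, pvec.pages, indices);
		if (!pvec.nr)
			break;
		index = indices[pvec.nr - 1] + 1;
		pagevec_remove_exceptionals(&pvec);
		check_move_unevictable_pages(pvec.pages, pvec.nr);
		pagevec_release(&pvec);
		cond_resched();
	}
}

find_get_entries() dominates because it must walk the radix tree over
the entire mapping even when, as for a mapping that was unevictable
from the start, there is nothing to move.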