Message-ID: <20160608131831.GJ22570@dhcp22.suse.cz>
Date: Wed, 8 Jun 2016 15:18:31 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Johannes Weiner <hannes@...xchg.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Rik van Riel <riel@...hat.com>, Mel Gorman <mgorman@...e.de>,
Andrea Arcangeli <aarcange@...hat.com>,
Andi Kleen <andi@...stfloor.org>,
Tim Chen <tim.c.chen@...ux.intel.com>, kernel-team@...com
Subject: Re: [PATCH 09/10] mm: only count actual rotations as LRU reclaim cost
On Mon 06-06-16 15:48:35, Johannes Weiner wrote:
> Noting a reference on an active file page but still deactivating it
> represents a smaller reclaim cost than noting a referenced anonymous
> page and physically rotating it back to the head of the active list.
> The file page *might* refault later on, but deactivating it is
> definite progress toward freeing pages, whereas rotating the
> anonymous page costs real time without making progress toward the
> reclaim goal.
>
> Don't treat both events as equal. The following patch will hook up LRU
> balancing to cache and swap refaults, which are a much more concrete
> cost signal for reclaiming one list over the other. Remove the
> maybe-IO cost bias from page references, and only note the CPU cost
> for actual rotations that prevent the pages from getting reclaimed.
The changelog was quite hard to digest for me, but I think I got your
point. The change itself makes sense to me: noting the LRU cost only
for pages which we intentionally keep on the active list because they
are really precious is reasonable. That is not the case for referenced
pages in general, because we only find out whether they are really
needed when we encounter them on the inactive list.
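
Just to make sure I follow: with the patch applied, the relevant loop
in shrink_active_list() effectively reads as below (condensed sketch;
the unevictable and buffer_heads_over_limit checks are elided):

	while (!list_empty(&l_hold)) {
		cond_resched();
		page = lru_to_page(&l_hold);
		list_del(&page->lru);

		if (page_referenced(page, 0, sc->target_mem_cgroup,
				    &vm_flags)) {
			/*
			 * Referenced executable file pages get one more
			 * trip around the active list. Only these true
			 * rotations are counted as LRU cost now.
			 */
			if ((vm_flags & VM_EXEC) &&
			    page_is_file_cache(page)) {
				nr_rotated += hpage_nr_pages(page);
				list_add(&page->lru, &l_active);
				continue;
			}
		}

		/*
		 * Everything else is deactivated. A referenced file
		 * page may refault later, but deactivating it is
		 * progress toward reclaim, so it is not counted.
		 */
		ClearPageActive(page);
		list_add(&page->lru, &l_inactive);
	}
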
> Signed-off-by: Johannes Weiner <hannes@...xchg.org>
Acked-by: Michal Hocko <mhocko@...e.com>
> ---
> mm/vmscan.c | 8 +++-----
> 1 file changed, 3 insertions(+), 5 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 06e381e1004c..acbd212eab6e 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1821,7 +1821,6 @@ static void shrink_active_list(unsigned long nr_to_scan,
>
> if (page_referenced(page, 0, sc->target_mem_cgroup,
> &vm_flags)) {
> - nr_rotated += hpage_nr_pages(page);
> /*
> * Identify referenced, file-backed active pages and
> * give them one more trip around the active list. So
> @@ -1832,6 +1831,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
> * so we ignore them here.
> */
> if ((vm_flags & VM_EXEC) && page_is_file_cache(page)) {
> + nr_rotated += hpage_nr_pages(page);
> list_add(&page->lru, &l_active);
> continue;
> }
> @@ -1846,10 +1846,8 @@ static void shrink_active_list(unsigned long nr_to_scan,
> */
> spin_lock_irq(&zone->lru_lock);
> /*
> - * Count referenced pages from currently used mappings as rotated,
> - * even though only some of them are actually re-activated. This
> - * helps balance scan pressure between file and anonymous pages in
> - * get_scan_count.
> + * Rotating pages costs CPU without actually
> + * progressing toward the reclaim goal.
> */
> lru_note_cost(lruvec, file, nr_rotated);
>
> --
> 2.8.3
--
Michal Hocko
SUSE Labs