Message-ID: <20200508215122.GB226164@cmpxchg.org>
Date: Fri, 8 May 2020 17:51:22 -0400
From: Johannes Weiner <hannes@...xchg.org>
To: Shakeel Butt <shakeelb@...gle.com>
Cc: Mel Gorman <mgorman@...e.de>, Roman Gushchin <guro@...com>,
Michal Hocko <mhocko@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Yafang Shao <laoar.shao@...il.com>, linux-mm@...ck.org,
cgroups@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/3] mm: swap: fix update_page_reclaim_stat for huge pages
On Fri, May 08, 2020 at 02:22:15PM -0700, Shakeel Butt wrote:
> Currently update_page_reclaim_stat() updates lruvec.reclaim_stat only
> once per page, irrespective of whether the page is huge. Fix that by
> passing hpage_nr_pages(page) to it.
>
> Signed-off-by: Shakeel Butt <shakeelb@...gle.com>
https://lore.kernel.org/patchwork/patch/685703/
Laughs, then cries.
> @@ -928,7 +928,7 @@ void lru_add_page_tail(struct page *page, struct page *page_tail,
> }
>
> if (!PageUnevictable(page))
> - update_page_reclaim_stat(lruvec, file, PageActive(page_tail));
> + update_page_reclaim_stat(lruvec, file, PageActive(page_tail), 1);
The change to __pagevec_lru_add_fn() below already accounts for all the
tail pages when the compound page goes on the LRU, so this hunk would
count them a second time.
> #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
> @@ -973,7 +973,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
> if (page_evictable(page)) {
> lru = page_lru(page);
> update_page_reclaim_stat(lruvec, page_is_file_lru(page),
> - PageActive(page));
> + PageActive(page), nr_pages);
> if (was_unevictable)
> __count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
> } else {