Message-ID: <20160718235849.GB9161@bbox>
Date: Tue, 19 Jul 2016 08:58:49 +0900
From: Minchan Kim <minchan@...nel.org>
To: Mel Gorman <mgorman@...hsingularity.net>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
Vlastimil Babka <vbabka@...e.cz>,
Linux-MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/3] mm, vmscan: Release/reacquire lru_lock on pgdat change

On Mon, Jul 18, 2016 at 03:50:25PM +0100, Mel Gorman wrote:
> With node-lru, the locking is based on the pgdat. As Minchan pointed
> out, there is an opportunity to reduce LRU lock release/acquire in
> check_move_unevictable_pages by only changing lock on a pgdat change.
>
> Signed-off-by: Mel Gorman <mgorman@...hsingularity.net>
> ---
> mm/vmscan.c | 22 +++++++++++-----------
> 1 file changed, 11 insertions(+), 11 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 45344acf52ba..a6f31617a08c 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -3775,24 +3775,24 @@ int page_evictable(struct page *page)
> void check_move_unevictable_pages(struct page **pages, int nr_pages)
> {
> struct lruvec *lruvec;
> - struct zone *zone = NULL;
> + struct pglist_data *pgdat = NULL;
> int pgscanned = 0;
> int pgrescued = 0;
> int i;
>
> for (i = 0; i < nr_pages; i++) {
> struct page *page = pages[i];
> - struct zone *pagezone;
> + struct pglist_data *pagepgdat = page_pgdat(page);
No need to initialize it here.
>
> pgscanned++;
> - pagezone = page_zone(page);
> - if (pagezone != zone) {
> - if (zone)
> - spin_unlock_irq(zone_lru_lock(zone));
> - zone = pagezone;
> - spin_lock_irq(zone_lru_lock(zone));
> + pagepgdat = page_pgdat(page);
This is initialized twice; please remove one of the two assignments.
> + if (pagepgdat != pgdat) {
> + if (pgdat)
> + spin_unlock_irq(&pgdat->lru_lock);
> + pgdat = pagepgdat;
> + spin_lock_irq(&pgdat->lru_lock);
> }
> - lruvec = mem_cgroup_page_lruvec(page, zone->zone_pgdat);
> + lruvec = mem_cgroup_page_lruvec(page, pgdat);
>
> if (!PageLRU(page) || !PageUnevictable(page))
> continue;
> @@ -3808,10 +3808,10 @@ void check_move_unevictable_pages(struct page **pages, int nr_pages)
> }
> }
>
> - if (zone) {
> + if (pgdat) {
> __count_vm_events(UNEVICTABLE_PGRESCUED, pgrescued);
> __count_vm_events(UNEVICTABLE_PGSCANNED, pgscanned);
> - spin_unlock_irq(zone_lru_lock(zone));
> + spin_unlock_irq(&pgdat->lru_lock);
> }
> }
> #endif /* CONFIG_SHMEM */
> --
> 2.6.4
>
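For what it's worth, with one of the two assignments dropped (keeping the
initializer at the declaration seems the natural choice), the loop would
read something like this untested sketch:

	for (i = 0; i < nr_pages; i++) {
		struct page *page = pages[i];
		struct pglist_data *pagepgdat = page_pgdat(page);

		pgscanned++;
		/* Only release/reacquire the lock when the node changes. */
		if (pagepgdat != pgdat) {
			if (pgdat)
				spin_unlock_irq(&pgdat->lru_lock);
			pgdat = pagepgdat;
			spin_lock_irq(&pgdat->lru_lock);
		}
		lruvec = mem_cgroup_page_lruvec(page, pgdat);
		/* ... rest of the unevictable handling unchanged ... */
	}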