Message-ID: <CAKgT0Udo=DSH76YF9L_qmWFNSCJW22UQaL57jHWnKstdB2wngg@mail.gmail.com>
Date: Wed, 5 Aug 2020 14:18:28 -0700
From: Alexander Duyck <alexander.duyck@...il.com>
To: Alex Shi <alex.shi@...ux.alibaba.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Tejun Heo <tj@...nel.org>, Hugh Dickins <hughd@...gle.com>,
Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
Daniel Jordan <daniel.m.jordan@...cle.com>,
Yang Shi <yang.shi@...ux.alibaba.com>,
Matthew Wilcox <willy@...radead.org>,
Johannes Weiner <hannes@...xchg.org>,
kbuild test robot <lkp@...el.com>,
linux-mm <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>, cgroups@...r.kernel.org,
Shakeel Butt <shakeelb@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Wei Yang <richard.weiyang@...il.com>,
"Kirill A. Shutemov" <kirill@...temov.name>,
Rong Chen <rong.a.chen@...el.com>
Subject: Re: [PATCH v17 11/21] mm/lru: move lru_lock holding in func lru_note_cost_page

On Sat, Jul 25, 2020 at 6:00 AM Alex Shi <alex.shi@...ux.alibaba.com> wrote:
>
> It's a cleanup patch with no functional changes.
>
> Signed-off-by: Alex Shi <alex.shi@...ux.alibaba.com>
> Cc: Johannes Weiner <hannes@...xchg.org>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: linux-mm@...ck.org
> Cc: linux-kernel@...r.kernel.org

Reviewed-by: Alexander Duyck <alexander.h.duyck@...ux.intel.com>

> ---
>  mm/memory.c     | 3 ---
>  mm/swap.c       | 2 ++
>  mm/swap_state.c | 2 --
>  mm/workingset.c | 2 --
>  4 files changed, 2 insertions(+), 7 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 87ec87cdc1ff..dafc5585517e 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3150,10 +3150,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  				 * XXX: Move to lru_cache_add() when it
>  				 * supports new vs putback
>  				 */
> -				spin_lock_irq(&page_pgdat(page)->lru_lock);
>  				lru_note_cost_page(page);
> -				spin_unlock_irq(&page_pgdat(page)->lru_lock);
> -
>  				lru_cache_add(page);
>  				swap_readpage(page, true);
>  			}
> diff --git a/mm/swap.c b/mm/swap.c
> index dc8b02cdddcb..b88ca630db70 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -298,8 +298,10 @@ void lru_note_cost(struct lruvec *lruvec, bool file, unsigned int nr_pages)
>
>  void lru_note_cost_page(struct page *page)
>  {
> +	spin_lock_irq(&page_pgdat(page)->lru_lock);
>  	lru_note_cost(mem_cgroup_page_lruvec(page, page_pgdat(page)),
>  		      page_is_file_lru(page), hpage_nr_pages(page));
> +	spin_unlock_irq(&page_pgdat(page)->lru_lock);
>  }
> 
>  static void __activate_page(struct page *page, struct lruvec *lruvec)
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 05889e8e3c97..080be52db6a8 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -440,9 +440,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
>  	}
> 
>  	/* XXX: Move to lru_cache_add() when it supports new vs putback */
> -	spin_lock_irq(&page_pgdat(page)->lru_lock);
>  	lru_note_cost_page(page);
> -	spin_unlock_irq(&page_pgdat(page)->lru_lock);
> 
>  	/* Caller will initiate read into locked page */
>  	SetPageWorkingset(page);
> diff --git a/mm/workingset.c b/mm/workingset.c
> index 50b7937bab32..337d5b9ad132 100644
> --- a/mm/workingset.c
> +++ b/mm/workingset.c
> @@ -372,9 +372,7 @@ void workingset_refault(struct page *page, void *shadow)
>  	if (workingset) {
>  		SetPageWorkingset(page);
>  		/* XXX: Move to lru_cache_add() when it supports new vs putback */
> -		spin_lock_irq(&page_pgdat(page)->lru_lock);
>  		lru_note_cost_page(page);
> -		spin_unlock_irq(&page_pgdat(page)->lru_lock);
>  		inc_lruvec_state(lruvec, WORKINGSET_RESTORE);
>  	}
>  out:
> --
> 1.8.3.1
>
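
For anyone skimming the thread, the net effect is simply that the
lock/unlock pair moves out of the three call sites and into the helper
itself. A condensed before/after sketch, using only the identifiers
already present in the diff above:

Before, each caller wrapped the helper in the node's lru_lock:

	spin_lock_irq(&page_pgdat(page)->lru_lock);
	lru_note_cost_page(page);
	spin_unlock_irq(&page_pgdat(page)->lru_lock);

After, callers invoke lru_note_cost_page() bare, and the helper takes
and releases the lock internally:

	void lru_note_cost_page(struct page *page)
	{
		/* lru_lock protects the per-node LRU cost accounting */
		spin_lock_irq(&page_pgdat(page)->lru_lock);
		lru_note_cost(mem_cgroup_page_lruvec(page, page_pgdat(page)),
			      page_is_file_lru(page), hpage_nr_pages(page));
		spin_unlock_irq(&page_pgdat(page)->lru_lock);
	}

The locking points are unchanged; the pattern is just consolidated in
one place instead of repeated at three call sites.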