Message-ID: <20191112143844.GB7934@bombadil.infradead.org>
Date: Tue, 12 Nov 2019 06:38:44 -0800
From: Matthew Wilcox <willy@...radead.org>
To: Alex Shi <alex.shi@...ux.alibaba.com>
Cc: cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, akpm@...ux-foundation.org,
mgorman@...hsingularity.net, tj@...nel.org, hughd@...gle.com,
khlebnikov@...dex-team.ru, daniel.m.jordan@...cle.com,
yang.shi@...ux.alibaba.com, Johannes Weiner <hannes@...xchg.org>,
Roman Gushchin <guro@...com>,
Shakeel Butt <shakeelb@...gle.com>,
Chris Down <chris@...isdown.name>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH v2 6/8] mm/lru: remove rcu_read_lock to fix performance
regression
On Tue, Nov 12, 2019 at 10:06:26PM +0800, Alex Shi wrote:
> Intel 0day reported a performance regression on this patchset. The
> detailed report points to rcu_read_lock + PROVE_LOCKING causing
> queued_spin_lock_slowpath to wait too long to get the lock.
> Removing rcu_read_lock is safe here since we already hold a spinlock.
Argh. You have not sent these patches in a properly reviewable form!
I wasted all that time reviewing the earlier patch in this series only to
find out that you changed it here. FIX THE PATCH, don't send a fix-patch
on top of it!
> Reported-by: kbuild test robot <lkp@...el.com>
> Signed-off-by: Alex Shi <alex.shi@...ux.alibaba.com>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Johannes Weiner <hannes@...xchg.org>
> Cc: Roman Gushchin <guro@...com>
> Cc: Shakeel Butt <shakeelb@...gle.com>
> Cc: Chris Down <chris@...isdown.name>
> Cc: Tejun Heo <tj@...nel.org>
> Cc: Thomas Gleixner <tglx@...utronix.de>
> Cc: linux-mm@...ck.org
> Cc: linux-kernel@...r.kernel.org
> ---
> include/linux/memcontrol.h | 29 ++++++++++++-----------------
> 1 file changed, 12 insertions(+), 17 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 2421b720d272..f869897a68f0 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -1307,20 +1307,18 @@ static inline struct lruvec *relock_page_lruvec_irq(struct page *page,
> struct pglist_data *pgdat = page_pgdat(page);
> struct lruvec *lruvec;
>
> - rcu_read_lock();
> + if (!locked_lruvec)
> + goto lock;
> +
> lruvec = mem_cgroup_page_lruvec(page, pgdat);
>
> - if (locked_lruvec == lruvec) {
> - rcu_read_unlock();
> + if (locked_lruvec == lruvec)
> return lruvec;
> - }
> - rcu_read_unlock();
>
> - if (locked_lruvec)
> - spin_unlock_irq(&locked_lruvec->lru_lock);
> + spin_unlock_irq(&locked_lruvec->lru_lock);
>
> +lock:
> lruvec = lock_page_lruvec_irq(page, pgdat);
> -
> return lruvec;
> }
>
> @@ -1331,21 +1329,18 @@ static inline struct lruvec *relock_page_lruvec_irqsave(struct page *page,
> struct pglist_data *pgdat = page_pgdat(page);
> struct lruvec *lruvec;
>
> - rcu_read_lock();
> + if (!locked_lruvec)
> + goto lock;
> +
> lruvec = mem_cgroup_page_lruvec(page, pgdat);
>
> - if (locked_lruvec == lruvec) {
> - rcu_read_unlock();
> + if (locked_lruvec == lruvec)
> return lruvec;
> - }
> - rcu_read_unlock();
>
> - if (locked_lruvec)
> - spin_unlock_irqrestore(&locked_lruvec->lru_lock,
> - locked_lruvec->flags);
> + spin_unlock_irqrestore(&locked_lruvec->lru_lock, locked_lruvec->flags);
>
> +lock:
> lruvec = lock_page_lruvec_irqsave(page, pgdat);
> -
> return lruvec;
> }
>
> --
> 1.8.3.1
>
>