Message-ID: <20200319140305.GC187654@cmpxchg.org>
Date: Thu, 19 Mar 2020 10:03:05 -0400
From: Johannes Weiner <hannes@...xchg.org>
To: Yang Shi <yang.shi@...ux.alibaba.com>
Cc: shakeelb@...gle.com, vbabka@...e.cz, willy@...radead.org,
akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [v4 PATCH 2/2] mm: swap: use smp_mb__after_atomic() to order LRU bit set
On Wed, Mar 18, 2020 at 11:02:21AM +0800, Yang Shi wrote:
> A memory barrier is needed after setting the LRU bit, but smp_mb() is
> stronger than necessary. Some architectures, e.g. x86, imply a memory
> barrier with atomic operations, so replacing it with
> smp_mb__after_atomic() is a better fit: it is a no-op on strongly
> ordered machines and a full memory barrier on the rest. With this
> change the vm-scalability cases perform better on x86; I saw a total
> 6% improvement with this patch together with the previous inline fix.
>
> The test data (lru-file-readtwice throughput) against v5.6-rc4:
>   mainline    w/ inline fix    w/ both (adding this)
>   150MB       154MB            159MB
>
> Fixes: 9c4e6b1a7027 ("mm, mlock, vmscan: no more skipping pagevecs")
> Acked-by: Vlastimil Babka <vbabka@...e.cz>
> Reviewed-and-Tested-by: Shakeel Butt <shakeelb@...gle.com>
> Signed-off-by: Yang Shi <yang.shi@...ux.alibaba.com>
Acked-by: Johannes Weiner <hannes@...xchg.org>
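
For anyone following along, the ordering being discussed looks roughly
like this. This is a simplified sketch of __pagevec_lru_add_fn() in
mm/swap.c after the change, not the exact diff; the reclaim-stat
updates and the body of each branch are elided:

	static void __pagevec_lru_add_fn(struct page *page,
					 struct lruvec *lruvec, void *arg)
	{
		/* Atomic RMW that sets PG_lru. */
		SetPageLRU(page);

		/*
		 * Order the PG_lru store above before the PG_mlocked
		 * read in page_evictable() below. A no-op on strongly
		 * ordered machines such as x86, where the atomic op
		 * already implies a full barrier; a full memory
		 * barrier elsewhere. Pairs with the full barrier
		 * implied by TestClearPageMlocked() on the munlock
		 * side.
		 */
		smp_mb__after_atomic();

		if (page_evictable(page)) {
			/* ... put the page on its proper LRU list ... */
		} else {
			/* ... or on the unevictable list. */
		}
	}

Without the barrier, this CPU could read a stale PG_mlocked while
munlock concurrently clears it and checks PG_lru, and an mlocked page
could end up on an evictable LRU list.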