Message-ID: <20190129225317.GA15515@cmpxchg.org>
Date: Tue, 29 Jan 2019 17:53:17 -0500
From: Johannes Weiner <hannes@...xchg.org>
To: Yang Shi <yang.shi@...ux.alibaba.com>
Cc: mhocko@...e.com, akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC v2 PATCH] mm: vmscan: do not iterate all mem cgroups for
global direct reclaim
On Wed, Jan 30, 2019 at 06:11:17AM +0800, Yang Shi wrote:
> In the current implementation, both kswapd and direct reclaim have to
> iterate all mem cgroups. This was not a problem before offline mem
> cgroups were included in the iteration, but now that they are, the walk
> can be very time consuming. In our workloads, we saw over 400K mem
> cgroups accumulated in some cases, of which only a few hundred were
> online. Although kswapd can help reduce the number of memcgs, direct
> reclaim still gets hit with iterating a large number of offline memcgs
> in some cases. We occasionally experienced responsiveness problems due
> to this.
>
> A simple test with perf shows it may take around 220ms to iterate 8K memcgs
> in direct reclaim:
>   dd 13873 [011] 578.542919: vmscan:mm_vmscan_direct_reclaim_begin
>   dd 13873 [011] 578.758689: vmscan:mm_vmscan_direct_reclaim_end
> So for 400K memcgs, it may take around 11 seconds to iterate them all.
>
> Here, just break the iteration once enough pages have been reclaimed,
> as memcg direct reclaim already does. This may hurt fairness among
> memcgs, but the cached iterator cookie helps preserve it more or less.
>
> Cc: Johannes Weiner <hannes@...xchg.org>
> Cc: Michal Hocko <mhocko@...e.com>
> Signed-off-by: Yang Shi <yang.shi@...ux.alibaba.com>
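
For reference, the change described above amounts to relaxing the
early-break condition in the memcg walk in shrink_node(). The snippet
below is only a rough sketch of that kind of check, not the actual
patch hunk, and the loop around it is heavily abbreviated:

	/*
	 * Rough sketch only -- not the literal mm/vmscan.c hunk.  The
	 * memcg walk in shrink_node() looks roughly like this:
	 */
	memcg = mem_cgroup_iter(root, NULL, &reclaim);
	do {
		/* ... per-memcg LRU and slab reclaim happens here ... */

		/*
		 * Previously only memcg (limit) reclaim bailed out of
		 * the walk early.  The idea described above is to let
		 * global *direct* reclaim stop as well once the target
		 * has been met, while kswapd still visits every memcg
		 * to satisfy the node-wide scan target.
		 */
		if (!current_is_kswapd() &&
		    sc->nr_reclaimed >= sc->nr_to_reclaim) {
			mem_cgroup_iter_break(root, memcg);
			break;
		}
	} while ((memcg = mem_cgroup_iter(root, memcg, &reclaim)));

	/*
	 * Because mem_cgroup_iter() caches its position in the reclaim
	 * cookie, the next direct reclaimer resumes the walk where this
	 * one stopped, which is what limits the fairness impact.
	 */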
Looks sane to me, thanks Yang.
Acked-by: Johannes Weiner <hannes@...xchg.org>