Message-ID: <YhNTcM9XtqA1zUUi@dhcp22.suse.cz>
Date: Mon, 21 Feb 2022 09:55:12 +0100
From: Michal Hocko <mhocko@...e.com>
To: Suren Baghdasaryan <surenb@...gle.com>
Cc: akpm@...ux-foundation.org, hannes@...xchg.org,
peterz@...radead.org, guro@...com, shakeelb@...gle.com,
minchan@...nel.org, timmurray@...gle.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, kernel-team@...roid.com
Subject: Re: [PATCH 1/1] mm: count time in drain_all_pages during direct
reclaim as memory pressure
On Sat 19-02-22 09:49:40, Suren Baghdasaryan wrote:
> When page allocation in direct reclaim path fails, the system will
> make one attempt to shrink per-cpu page lists and free pages from
> high alloc reserves. Draining per-cpu pages into the buddy allocator
> can be a very slow operation because it is done using workqueues and
> the task in direct reclaim waits for all of them to finish before
> proceeding. Currently this time is not accounted as a psi memory stall.
>
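For reference, the drain path being described looks roughly like this
(a simplified sketch of __drain_all_pages() in mm/page_alloc.c): one
work item is queued on mm_percpu_wq for every CPU that holds pcp pages,
and the allocating task then waits for each of them to finish.

	for_each_cpu(cpu, &cpus_with_pcps) {
		struct pcpu_drain *drain = per_cpu_ptr(&pcpu_drain, cpu);

		/* queue a drain on each CPU that has pcp pages */
		drain->zone = zone;
		INIT_WORK(&drain->work, drain_local_pages_wq);
		queue_work_on(cpu, mm_percpu_wq, &drain->work);
	}
	/* ... then wait for all of them; this is where the time goes */
	for_each_cpu(cpu, &cpus_with_pcps)
		flush_work(&per_cpu_ptr(&pcpu_drain, cpu)->work);
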
> While testing mobile devices under extreme memory pressure, when
> allocations were failing during direct reclaim, we noticed that psi
> events which would be expected in such conditions were not triggered.
> After profiling these cases it was determined that the psi events were
> missing because a big chunk of the time spent in direct reclaim is not
> accounted as a memory stall, so psi would not reach the levels at
> which an event is generated. Further investigation revealed that the
> bulk of that unaccounted time was spent inside the drain_all_pages
> call.
It would be cool to have some numbers here.
> Annotate drain_all_pages and unreserve_highatomic_pageblock during
> page allocation failure in the direct reclaim path so that delays
> caused by these calls are accounted as a memory stall.
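
For readers without the patch at hand, the annotation presumably looks
something along these lines in __alloc_pages_direct_reclaim() (a sketch
built on the existing psi_memstall_enter()/psi_memstall_leave() API,
not the actual diff):

	if (!page && !drained) {
		unsigned long pflags;

		/*
		 * Account the slow drain/unreserve retry path as a psi
		 * memory stall (sketch, not the actual patch).
		 */
		psi_memstall_enter(&pflags);
		unreserve_highatomic_pageblock(ac, false);
		drain_all_pages(NULL);
		psi_memstall_leave(&pflags);
		drained = true;
		goto retry;
	}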
If the draining is too slow and dependent on the current CPU/WQ
contention, then we should address that. The original intention was that
having a dedicated WQ with WQ_MEM_RECLAIM would help to isolate the
operation from the rest of the WQ activity. Maybe we need to fine-tune
mm_percpu_wq. If that doesn't help, then we should revise the WQ model
and use something else. Memory reclaim shouldn't really get stuck behind
other, unrelated work.
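
For context, mm_percpu_wq is created in init_mm_internals()
(mm/vmstat.c) as:

	mm_percpu_wq = alloc_workqueue("mm_percpu_wq", WQ_MEM_RECLAIM, 0);

One conceivable tuning (purely a sketch, untested) would be to also mark
it WQ_HIGHPRI, so the drain work runs from high-priority kworkers
instead of queueing behind normal-priority work on a busy CPU:

	/* sketch: run drain work from high-priority kworkers */
	mm_percpu_wq = alloc_workqueue("mm_percpu_wq",
				       WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);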
--
Michal Hocko
SUSE Labs