Message-ID: <20190927074803.GB26848@dhcp22.suse.cz>
Date: Fri, 27 Sep 2019 09:48:03 +0200
From: Michal Hocko <mhocko@...nel.org>
To: David Rientjes <rientjes@...gle.com>
Cc: Andrea Arcangeli <aarcange@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...e.de>, Vlastimil Babka <vbabka@...e.cz>,
"Kirill A. Shutemov" <kirill@...temov.name>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [patch for-5.3 0/4] revert immediate fallback to remote hugepages
On Thu 26-09-19 12:03:37, David Rientjes wrote:
[...]
> Your patch is setting __GFP_THISNODE for __GFP_DIRECT_RECLAIM: this
> allocation will fail in the fastpath for both my case (fragmented local
> node) and Andrea's case (out of memory local node). The first
> get_page_from_freelist() will then succeed in the slowpath for both cases;
> compaction is not tried for either.
>
> In my case, that results in a perpetual remote access latency that we
> can't tolerate. If Andrea's remote nodes are fragmented or low on memory,
> his case encounters swap storms over both the local node and remote nodes.
>
> So I'm not really sure what is solved by your patch?
There are two aspects the patch is targeting. The first is that the
fast path targets a higher watermark (WMARK_LOW), so it might fall back
to a remote node more easily. The second is that the fast path doesn't
wake up kcompactd, so there is no pro-active compaction going on to
help future allocations.
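To make that distinction concrete, here is a rough, condensed sketch of
the flow I am describing. It is not the real mm/page_alloc.c code; only
get_page_from_freelist() and __alloc_pages_slowpath() are the actual
entry points, the wrapper name and the rest are simplified for
illustration:

/*
 * Condensed sketch of the allocator flow under discussion; not the
 * real mm/page_alloc.c code. alloc_pages_sketch() is a made-up name.
 */
static struct page *alloc_pages_sketch(gfp_t gfp_mask, unsigned int order,
				       struct alloc_context *ac)
{
	struct page *page;

	/*
	 * Fast path: checks against the higher WMARK_LOW watermark and
	 * never wakes kcompactd.  A local node that is fragmented or
	 * sitting near its min watermark fails here, so the allocation
	 * falls back to a remote node right away.
	 */
	page = get_page_from_freelist(gfp_mask, order, ALLOC_WMARK_LOW, ac);
	if (page)
		return page;

	/*
	 * Slow path: retries with relaxed (WMARK_MIN based) alloc_flags
	 * and wakes kswapd/kcompactd, so background compaction gets a
	 * chance to help future high-order requests on the local node.
	 */
	return __alloc_pages_slowpath(gfp_mask, order, ac);
}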
You are right that a node which is fragmented or at the min watermark
would fall back to a remote node even with this patch. I wanted to see
how much kcompactd can change the overall picture. If this is not
sufficient then maybe we need to drop the first optimistic attempt as
well and go right into the light compaction. Something like this on
top:
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ff5484fdbdf9..61284e7f01ee 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4434,7 +4434,8 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * The adjusted alloc_flags might result in immediate success, so try
 	 * that first
 	 */
-	page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
+	if (!order)
+		page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
 	if (page)
 		goto got_pg;
The whole point of handling this in the page allocator directly is to
have a unified solution rather than have each specific caller invent
its own way to achieve higher locality.
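Just as an illustration of the kind of caller-side workaround I would
like to avoid, something along these lines (the helper name and the
exact flag handling are made up for the example, not taken from any
existing caller):

/*
 * Hypothetical per-caller locality fallback: a light, local-node-only
 * attempt first, then a full-strength allocation anywhere.  This is
 * the sort of duplicated logic a unified page allocator policy avoids.
 */
static struct page *alloc_huge_local_first(gfp_t gfp, unsigned int order,
					   int nid)
{
	struct page *page;

	/* Stay on nid and skip direct reclaim/compaction entirely. */
	page = __alloc_pages_node(nid,
				  (gfp | __GFP_THISNODE) & ~__GFP_DIRECT_RECLAIM,
				  order);
	if (page)
		return page;

	/* Fall back to the normal policy with the original gfp mask. */
	return alloc_pages(gfp, order);
}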
--
Michal Hocko
SUSE Labs