Message-ID: <20160706093338.GO11498@techsingularity.net>
Date: Wed, 6 Jul 2016 10:33:38 +0100
From: Mel Gorman <mgorman@...hsingularity.net>
To: Minchan Kim <minchan@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Linux-MM <linux-mm@...ck.org>, Rik van Riel <riel@...riel.com>,
Vlastimil Babka <vbabka@...e.cz>,
Johannes Weiner <hannes@...xchg.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 31/31] mm, vmstat: Remove zone and node double accounting by approximating retries

On Wed, Jul 06, 2016 at 09:58:50AM +0100, Mel Gorman wrote:
> On Wed, Jul 06, 2016 at 09:02:52AM +0900, Minchan Kim wrote:
> > On Fri, Jul 01, 2016 at 09:01:39PM +0100, Mel Gorman wrote:
> > > The number of LRU pages, dirty pages and writeback pages must be accounted
> > > for on both zones and nodes because of the reclaim retry logic, compaction
> > > retry logic and highmem calculations all depending on per-zone stats.
> > >
> > > The retry logic is only critical for allocations that can use any zones.
> >
> > Sorry, I cannot follow this assertion.
> > Could you explain?
> >
>
> The patch has been reworked since and I tried clarifying the changelog.
> Does this help?
>
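
To restate the double accounting the changelog is talking about: every
LRU/dirty/writeback event currently has to bump both a per-zone and a
per-node counter, purely so the retry logic can later read per-zone
values. A trivial userspace sketch (not kernel code; the names and
counters here are made up for illustration):

#include <stdio.h>

enum zone_type { ZONE_DMA, ZONE_NORMAL, ZONE_HIGHMEM, MAX_NR_ZONES };

/* Toy stand-ins for the vmstat per-zone and per-node counters. */
static long zone_nr_dirty[MAX_NR_ZONES];
static long node_nr_dirty;

static void account_dirty(enum zone_type zone, long delta)
{
	zone_nr_dirty[zone] += delta;	/* kept only for retry/highmem checks */
	node_nr_dirty += delta;		/* the counter reclaim actually wants */
}

int main(void)
{
	account_dirty(ZONE_NORMAL, 1);
	printf("zone=%ld node=%ld\n",
	       zone_nr_dirty[ZONE_NORMAL], node_nr_dirty);
	return 0;
}

Dropping the per-zone half of that is the point of the patch; the
question is what the retry logic uses instead.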
It occurred to me at breakfast that this should be more consistent with
the OOM killer on both 32-bit and 64-bit, so:

diff --git a/mm/compaction.c b/mm/compaction.c
index dfe7dafe8e8b..640532831b94 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1448,11 +1448,9 @@ bool compaction_zonelist_suitable(struct alloc_context *ac, int order,
 	struct zoneref *z;
 	pg_data_t *last_pgdat = NULL;
 
-#ifdef CONFIG_HIGHMEM
 	/* Do not retry compaction for zone-constrained allocations */
-	if (!is_highmem_idx(ac->high_zoneidx))
+	if (ac->high_zoneidx < ZONE_NORMAL)
 		return false;
-#endif
 
 	/*
 	 * Make sure at least one zone would pass __compaction_suitable if we continue
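
As a side-by-side illustration of what dropping the #ifdef changes, a
rough userspace sketch (simplified: is_highmem_idx() is modelled
crudely and ZONE_MOVABLE is ignored):

#include <stdbool.h>
#include <stdio.h>

enum zone_type { ZONE_DMA, ZONE_DMA32, ZONE_NORMAL, ZONE_HIGHMEM };

/* Old check: compiled out without CONFIG_HIGHMEM, so a 64-bit kernel
 * never refused to retry for a zone-constrained (e.g. GFP_DMA) request,
 * while a 32-bit kernel refused for anything below highmem. */
static bool old_no_retry(enum zone_type high_zoneidx, bool config_highmem)
{
	if (!config_highmem)
		return false;
	return high_zoneidx < ZONE_HIGHMEM;	/* roughly !is_highmem_idx() */
}

/* New check: same answer regardless of CONFIG_HIGHMEM, and it matches
 * how the OOM killer identifies lowmem-constrained requests. */
static bool new_no_retry(enum zone_type high_zoneidx)
{
	return high_zoneidx < ZONE_NORMAL;
}

int main(void)
{
	printf("GFP_DMA-like, 64-bit:    old=%d new=%d\n",
	       old_no_retry(ZONE_DMA, false), new_no_retry(ZONE_DMA));
	printf("GFP_KERNEL-like, 32-bit: old=%d new=%d\n",
	       old_no_retry(ZONE_NORMAL, true), new_no_retry(ZONE_NORMAL));
	return 0;
}
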
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ded48e580abc..194a8162528b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3455,11 +3455,11 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 		return false;
 
 	/*
-	 * Blindly retry allocation requests that cannot use all zones. We do
-	 * not have a reliable and fast means of calculating reclaimable, dirty
-	 * and writeback pages in eligible zones.
+	 * Blindly retry lowmem allocation requests that are often ignored by
+	 * the OOM killer as we do not have a reliable and fast means of
+	 * calculating reclaimable, dirty and writeback pages in eligible zones.
 	 */
-	if (IS_ENABLED(CONFIG_HIGHMEM) && !is_highmem_idx(gfp_zone(gfp_mask)))
+	if (ac->high_zoneidx < ZONE_NORMAL)
 		goto out;
 
 	/*
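
For completeness, the resulting decision flow in should_reclaim_retry()
can be sketched in isolation (userspace model only; the watermark
arithmetic is a stand-in, not the real per-node calculation):

#include <stdbool.h>
#include <stdio.h>

enum zone_type { ZONE_DMA, ZONE_DMA32, ZONE_NORMAL, ZONE_HIGHMEM };

/* Zone-constrained (lowmem) requests are retried blindly because the
 * cheap per-zone reclaimable/dirty/writeback counts no longer exist;
 * everything else is judged from the per-node stats. */
static bool should_retry(enum zone_type high_zoneidx,
			 long node_reclaimable, long watermark_gap)
{
	if (high_zoneidx < ZONE_NORMAL)
		return true;	/* the "goto out" blind-retry path */

	/* Stand-in for the real per-node reclaimable estimate. */
	return node_reclaimable >= watermark_gap;
}

int main(void)
{
	printf("GFP_DMA-like request:               retry=%d\n",
	       should_retry(ZONE_DMA, 0, 128));
	printf("GFP_KERNEL-like, little to reclaim: retry=%d\n",
	       should_retry(ZONE_NORMAL, 16, 128));
	return 0;
}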