Message-Id: <20220925103529.13716-4-yongw.pur@gmail.com>
Date: Sun, 25 Sep 2022 03:35:29 -0700
From: wangyong <yongw.pur@...il.com>
To: gregkh@...uxfoundation.org
Cc: jaewon31.kim@...sung.com, linux-kernel@...r.kernel.org,
mhocko@...nel.org, stable@...r.kernel.org, wang.yong12@....com.cn,
yongw.pur@...il.com, Minchan Kim <minchan@...nel.org>,
Baoquan He <bhe@...hat.com>, Vlastimil Babka <vbabka@...e.cz>,
Johannes Weiner <hannes@...xchg.org>,
Yong-Taek Lee <ytk.lee@...sung.com>, stable@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>
Subject: [PATCH v2 stable-4.19 3/3] page_alloc: fix invalid watermark check on a negative value
From: Jaewon Kim <jaewon31.kim@...sung.com>
[ backport of commit 9282012fc0aa248b77a69f5eb802b67c5a16bb13 ]
There was a report that a task was stuck waiting in
throttle_direct_reclaim, and the pgscan_direct_throttle counter in
vmstat kept increasing.
This is a bug where zone_watermark_fast() returns true even when free
pages are very low. Commit f27ce0e14088 ("page_alloc: consider
highatomic reserve in watermark fast") changed the fast watermark check
to take the highatomic reserve into account, but it did not handle the
case where the value goes negative, which can happen when the
reserved_highatomic pageblocks exceed the actual free pages.
If the watermark check treats such a negative value as OK, order-0
allocating contexts will consume all free pages without entering direct
reclaim, until free pages are depleted except for the highatomic
reserve. Allocating contexts may then get stuck in
throttle_direct_reclaim. This symptom can easily occur on a system
where the min watermark is low and other reclaimers such as kswapd do
not produce free pages quickly.
Handle the negative case by using min().
Link: https://lkml.kernel.org/r/20220725095212.25388-1-jaewon31.kim@samsung.com
Fixes: f27ce0e14088 ("page_alloc: consider highatomic reserve in watermark fast")
Signed-off-by: Jaewon Kim <jaewon31.kim@...sung.com>
Reported-by: GyeongHwan Hong <gh21.hong@...sung.com>
Acked-by: Mel Gorman <mgorman@...hsingularity.net>
Cc: Minchan Kim <minchan@...nel.org>
Cc: Baoquan He <bhe@...hat.com>
Cc: Vlastimil Babka <vbabka@...e.cz>
Cc: Johannes Weiner <hannes@...xchg.org>
Cc: Michal Hocko <mhocko@...nel.org>
Cc: Yong-Taek Lee <ytk.lee@...sung.com>
Cc: <stable@...r.kernel.org>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
---
mm/page_alloc.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 237463d..d6d8a37 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3243,11 +3243,15 @@ static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
 	 * need to be calculated.
 	 */
 	if (!order) {
-		long fast_free;
+		long usable_free;
+		long reserved;
 
-		fast_free = free_pages;
-		fast_free -= __zone_watermark_unusable_free(z, 0, alloc_flags);
-		if (fast_free > mark + z->lowmem_reserve[classzone_idx])
+		usable_free = free_pages;
+		reserved = __zone_watermark_unusable_free(z, 0, alloc_flags);
+
+		/* reserved may over estimate high-atomic reserves. */
+		usable_free -= min(usable_free, reserved);
+		if (usable_free > mark + z->lowmem_reserve[classzone_idx])
 			return true;
 	}
--
2.7.4