Message-Id: <20260104101443.f10264bc9730de884b52c5a2@linux-foundation.org>
Date: Sun, 4 Jan 2026 10:14:43 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: wujing <realwujing@...com>
Cc: Vlastimil Babka <vbabka@...e.cz>, Suren Baghdasaryan
 <surenb@...gle.com>, Michal Hocko <mhocko@...e.com>, Brendan Jackman
 <jackmanb@...gle.com>, Johannes Weiner <hannes@...xchg.org>, Zi Yan
 <ziy@...dia.com>, linux-mm@...ck.org, linux-kernel@...r.kernel.org, Qiliang
 Yuan <yuanql9@...natelecom.cn>
Subject: Re: [PATCH 1/1] mm/page_alloc: auto-tune min_free_kbytes on atomic
 allocation failure

On Sun,  4 Jan 2026 20:26:52 +0800 wujing <realwujing@...com> wrote:

> Introduce a mechanism to dynamically increase vm.min_free_kbytes when
> critical atomic allocations (GFP_ATOMIC, order-0) fail. This prevents
> recurring network packet drops or other atomic failures by proactively
> reserving more memory.

Seems like a good idea; however, it's very likely that the networking
people have looked into this rather a lot.  Can I suggest that you
engage with them?  netdev@...r.kernel.org.

> The adjustment doubles min_free_kbytes upon failure (exponential backoff),
> capped at 1% of total RAM.

But no attempt to reduce it again after the load spike has gone away.
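
Something along these lines might address that (an untested sketch only;
the function names, the halving policy and the 60 second re-arm interval
are all invented for illustration, not part of the posted patch).  The
idea is that the boost path records the pre-boost value and arms a
delayed work which decays the watermark back once failures stop
recurring:

static int saved_min_free_kbytes;

static void decay_min_free_kbytes_workfn(struct work_struct *work);
static DECLARE_DELAYED_WORK(decay_min_free_kbytes_work,
			    decay_min_free_kbytes_workfn);

/*
 * Halve the boosted watermark back toward the value recorded before the
 * first boost, re-arming until we get there.  Would sit in
 * mm/page_alloc.c next to boost_min_free_kbytes_workfn(), which would
 * set saved_min_free_kbytes before the first doubling and kick this
 * work off with schedule_delayed_work().
 */
static void decay_min_free_kbytes_workfn(struct work_struct *work)
{
	if (min_free_kbytes <= saved_min_free_kbytes)
		return;

	min_free_kbytes = max(min_free_kbytes / 2, saved_min_free_kbytes);
	setup_per_zone_wmarks();

	if (min_free_kbytes > saved_min_free_kbytes)
		schedule_delayed_work(&decay_min_free_kbytes_work, 60 * HZ);
}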

> Observed failure logs:
> [38535641.026406] node 0: slabs: 941, objs: 54656, free: 0
> [38535641.037711] node 1: slabs: 349, objs: 22096, free: 272
> [38535641.049025] node 1: slabs: 349, objs: 22096, free: 272
>
> ...
>
> +static void boost_min_free_kbytes_workfn(struct work_struct *work);
> +static DECLARE_WORK(boost_min_free_kbytes_work, boost_min_free_kbytes_workfn);
> +
>  void warn_alloc(gfp_t gfp_mask, nodemask_t *nodemask, const char *fmt, ...)
>  {
>  	struct va_format vaf;
> @@ -4947,6 +4951,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  		goto retry;
>  	}
>  fail:
> +	/* Auto-tuning: trigger boost if atomic allocation fails */
> +	if ((gfp_mask & GFP_ATOMIC) && order == 0)
> +		schedule_work(&boost_min_free_kbytes_work);
> +

Probably this should be selectable and tunable via a kernel boot
parameter or a procfs tunable.  But I suggest you not do that work
until you have discussed the approach with the networking developers.
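
For the boot parameter route, an untested sketch (the parameter name and
the enable flag are made up for illustration):

static bool boost_min_free_enabled __read_mostly;

static int __init setup_boost_min_free(char *str)
{
	/* accept boost_min_free_kbytes=0/1/on/off/... */
	if (kstrtobool(str, &boost_min_free_enabled))
		return 0;
	return 1;
}
__setup("boost_min_free_kbytes=", setup_boost_min_free);

and the check in the fail: path would then become

	if (boost_min_free_enabled && (gfp_mask & GFP_ATOMIC) && order == 0)
		schedule_work(&boost_min_free_kbytes_work);

A procfs/sysctl knob would do just as well; the point is that
administrators should be able to turn the behaviour off.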

>  	warn_alloc(gfp_mask, ac->nodemask,
>  			"page allocation failure: order:%u", order);
>  got_pg:

