Message-Id: <20220407154426.7076e19f5b80d927dd715de9@linux-foundation.org>
Date: Thu, 7 Apr 2022 15:44:26 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Juergen Gross <jgross@...e.com>
Cc: xen-devel@...ts.xenproject.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, stable@...r.kernel.org,
Marek Marczykowski-Górecki
<marmarek@...isiblethingslab.com>, Michal Hocko <mhocko@...e.com>
Subject: Re: [PATCH v2] mm, page_alloc: fix build_zonerefs_node()
On Thu, 7 Apr 2022 14:06:37 +0200 Juergen Gross <jgross@...e.com> wrote:
> Since commit 6aa303defb74 ("mm, vmscan: only allocate and reclaim from
> zones with pages managed by the buddy allocator")
Six years ago!
> only zones with free
> memory are included in a built zonelist. This is problematic when,
> e.g., all memory of a zone has been ballooned out at the time the
> zonelists are being rebuilt.
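
[For context, the zonelist construction being discussed lives in
build_zonerefs_node() in mm/page_alloc.c. A paraphrased sketch of that
function as it looked after commit 6aa303defb74 (the function is not
quoted in the original message):

static int build_zonerefs_node(pg_data_t *pgdat, struct zoneref *zonerefs)
{
	struct zone *zone;
	enum zone_type zone_type = MAX_NR_ZONES;
	int nr_zones = 0;

	do {
		zone_type--;
		zone = pgdat->node_zones + zone_type;
		/*
		 * Only zones with pages managed by the buddy
		 * allocator get a zoneref here.
		 */
		if (managed_zone(zone)) {
			zoneref_set_zone(zone, &zonerefs[nr_zones++]);
			check_highest_zone(zone_type);
		}
	} while (zone_type);

	return nr_zones;
}
]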
>
> The decision whether to rebuild the zonelists when onlining new memory
> is based on populated_zone() returning 0 for the zone the memory will
> be added to. The new zone is added to the zonelists only if it has
> free memory pages (managed_zone() returns a non-zero value) after the
> memory has been onlined. This implies that onlining memory will
> always free the added pages to the allocator immediately, but this is
> not true in all cases: when e.g. running as a Xen guest, the onlined
> new memory will be added only to the ballooned memory list; it will
> be freed only when the guest is being ballooned up afterwards.
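
[The difference between the two predicates is the crux here. Paraphrased
from include/linux/mmzone.h of that era:

/* True if the zone has pages physically present at all. */
static inline bool populated_zone(struct zone *zone)
{
	return zone->present_pages;
}

static inline unsigned long zone_managed_pages(struct zone *zone)
{
	return (unsigned long)atomic_long_read(&zone->managed_pages);
}

/*
 * True only if some of those pages are managed by the buddy
 * allocator. Pages ballooned out of a Xen guest are present but
 * not managed, so this can be false for a populated zone.
 */
static inline bool managed_zone(struct zone *zone)
{
	return zone_managed_pages(zone);
}
]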
>
> Another problem with using managed_zone() to decide whether a zone is
> added to the zonelists is that a zone with all of its memory in use
> will in fact be removed from all zonelists in case the zonelists
> happen to be rebuilt.
>
> Use populated_zone() when building a zonelist, as was done before
> that commit.
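
[The patch body is not quoted in this reply, but the change described
above amounts to a one-line switch in build_zonerefs_node(); a sketch,
with the hunk header abbreviated:

--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ ... @@ static int build_zonerefs_node(pg_data_t *pgdat, struct zoneref *zonerefs)
 	do {
 		zone_type--;
 		zone = pgdat->node_zones + zone_type;
-		if (managed_zone(zone)) {
+		if (populated_zone(zone)) {
 			zoneref_set_zone(zone, &zonerefs[nr_zones++]);
 			check_highest_zone(zone_type);
]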
>
> Cc: stable@...r.kernel.org
Some details, please. Is this really serious enough to warrant
backporting? Is some new workload/usage pattern causing people to hit
this?