Message-ID: <YG8cYCsxwNwszhji@dhcp22.suse.cz>
Date: Thu, 8 Apr 2021 17:08:16 +0200
From: Michal Hocko <mhocko@...e.com>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Hugh Dickins <hughd@...gle.com>,
Shakeel Butt <shakeelb@...gle.com>,
Roman Gushchin <guro@...com>, linux-mm@...ck.org,
cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
kernel-team@...com
Subject: Re: [PATCH] mm: page_counter: mitigate consequences of a
page_counter underflow
On Thu 08-04-21 10:31:55, Johannes Weiner wrote:
> When the unsigned page_counter underflows, even just by a few pages, a
> cgroup will not be able to run anything afterwards and will trigger the
> OOM killer in a loop.
>
> Underflows shouldn't happen, but when they do in practice, we may just
> be off by a small amount that doesn't interfere with the normal
> operation - consequences don't need to be that dire.
Yes, I do agree.
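
For context on why even a small underflow is fatal: usage is kept in a
signed long, but the limit check compares it against an unsigned max, so
a slightly negative usage is promoted to a huge unsigned value and every
subsequent charge appears to be over the limit. A minimal userspace
sketch of that effect (hypothetical numbers, and a simplified comparison
rather than the real try_charge path):

#include <stdio.h>

int main(void)
{
	/* hypothetical numbers: a 4GB limit in 4k pages */
	unsigned long max = 1UL << 20;
	long usage = 2;

	usage -= 6;	/* more uncharges than charges: usage == -4 */

	/* the comparison promotes the signed usage to unsigned long,
	 * so -4 reads as ULONG_MAX - 3 and every charge is rejected */
	if ((unsigned long)(usage + 1) > max)
		printf("charge rejected, OOM killer runs again\n");
	return 0;
}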
> Reset the page_counter to 0 upon underflow. We'll issue a warning that
> the accounting will be off and then try to keep limping along.
I do not remember any reports about the existing WARN_ON, but it is not
hard to imagine a charging imbalance being introduced easily.
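
A side note on the mechanics below: WARN_ONCE() prints the warning and
backtrace only on the first occurrence, but it still evaluates to the
condition on every call, so the corrective reset runs on each underflow.
A simplified userspace emulation of that behavior (not the kernel macro;
it uses GNU statement expressions, like the kernel one does):

#include <stdbool.h>
#include <stdio.h>

/* stand-in for the kernel's WARN_ONCE(): warns only once, but
 * yields the condition's truth value on every invocation */
static bool warned;
#define WARN_ONCE_SIM(cond, fmt, ...) ({		\
	bool __c = (cond);				\
	if (__c && !warned) {				\
		warned = true;				\
		fprintf(stderr, fmt, ##__VA_ARGS__);	\
	}						\
	__c;						\
})

int main(void)
{
	for (long new = -1; new > -4; new--)
		if (WARN_ONCE_SIM(new < 0, "underflow: %ld\n", new))
			printf("reset would run for new=%ld\n", new);
	return 0;
}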
> [ We used to do this with the original res_counter, where it was a
> more straight-forward correction inside the spinlock section. I
> didn't carry it forward into the lockless page counters for
> simplicity, but it turns out this is quite useful in practice. ]
The lack of external synchronization makes this more tricky, because
certain charges might just get lost depending on the ordering. This
sucks, but considering that the accounting is already botched and the
counters cannot be trusted, this is definitely better than a potentially
completely unusable memcg. It would be nice to mention that caveat in
the above paragraph.
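
To make the lost-charge ordering concrete, here is a sequential
simulation of one possible interleaving (hypothetical values; the real
race is between the atomic_long_sub_return() and the atomic_long_set()
in the hunk below, with a concurrent charge landing in between):

#include <stdio.h>

int main(void)
{
	long usage = 2;

	/* CPU0: page_counter_cancel() of 6 pages underflows */
	long new = usage -= 6;		/* new == -4, WARN fires */

	/* CPU1: charges 8 pages before CPU0 stores the correction */
	usage += 8;			/* usage == 4 */

	/* CPU0: atomic_long_set(&usage, 0) based on its stale check */
	usage = 0;			/* CPU1's 8 pages are lost */

	printf("final usage: %ld (8 charged pages lost)\n", usage);
	return 0;
}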
> Signed-off-by: Johannes Weiner <hannes@...xchg.org>
Acked-by: Michal Hocko <mhocko@...e.com>
> ---
> mm/page_counter.c | 8 ++++++--
> 1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/mm/page_counter.c b/mm/page_counter.c
> index c6860f51b6c6..7d83641eb86b 100644
> --- a/mm/page_counter.c
> +++ b/mm/page_counter.c
> @@ -52,9 +52,13 @@ void page_counter_cancel(struct page_counter *counter, unsigned long nr_pages)
> long new;
>
> new = atomic_long_sub_return(nr_pages, &counter->usage);
> - propagate_protected_usage(counter, new);
> /* More uncharges than charges? */
> - WARN_ON_ONCE(new < 0);
> + if (WARN_ONCE(new < 0, "page_counter underflow: %ld nr_pages=%lu\n",
> + new, nr_pages)) {
> + new = 0;
> + atomic_long_set(&counter->usage, new);
> + }
> + propagate_protected_usage(counter, new);
> }
>
> /**
> --
> 2.31.1
--
Michal Hocko
SUSE Labs