Message-ID: <alpine.DEB.2.02.1311291546370.22413@chino.kir.corp.google.com>
Date: Fri, 29 Nov 2013 16:00:09 -0800 (PST)
From: David Rientjes <rientjes@...gle.com>
To: Johannes Weiner <hannes@...xchg.org>
cc: Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.cz>, azurit@...ox.sk,
mm-commits@...r.kernel.org, stable@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [merged] mm-memcg-handle-non-error-oom-situations-more-gracefully.patch removed from -mm tree
On Wed, 27 Nov 2013, Johannes Weiner wrote:
> > None that I am currently aware of, I'll continue to try them out. I'd
> > suggest just dropping the stable@...nel.org from the whole series though
> > unless there is another report of such a problem that people are running
> > into.
>
> The series has long been merged, how do we drop stable@...nel.org from
> it?
>
You said you have informed stable not to merge these patches until further
notice; I'd suggest simply never merging the whole series into a stable
kernel, since the problem isn't serious enough. Changes that do "goto
nomem" seem fine to mark for stable, though.
> > We've had this patch internally since we started using memcg, it has
> > avoided some unnecessary oom killing.
>
> Do you have quantified data that OOM kills are reduced over a longer
> sampling period? How many kills are skipped? How many of them are
> deferred temporarily but the VM ended up having to kill something
> anyway?
On the scale that we run memcg, we would see it daily in automated testing,
primarily because we panic the machine for memcg oom conditions where
there are no killable processes. It would typically manifest with two
processes allocating memory in a memcg: one is oom killed, is allowed to
allocate, handles its SIGKILL, exits, and frees its memory; the second
process, which is oom disabled, races with the uncharge, and the machine
panics because no killable process remains.
The upstream kernel of course doesn't panic in such a condition, but if
the same scenario happened, the second process would be unnecessarily oom
killed: it raced with the uncharge of the first victim, and the victim had
already exited before the memcg oom killer's scan of processes could
detect it and defer the kill. So this patch definitely does prevent
unnecessary oom killing at the scale that we run.
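
To make the race concrete, here is a toy userspace model of the timeline;
everything in it (LIMIT_PAGES, margin(), the numbers) is hypothetical and
only illustrates why re-checking the margin after the victim's uncharge
avoids killing the second task:

/*
 * Toy single-threaded model of the race described above; all names
 * here are hypothetical, none of this is kernel code.
 */
#include <stdio.h>

#define LIMIT_PAGES	512UL
static unsigned long charged;	/* pages currently charged to the memcg */

/* pages still chargeable before the limit, like mem_cgroup_margin() */
static unsigned long margin(void)
{
	return LIMIT_PAGES - charged;
}

int main(void)
{
	int order = 0;			/* single-page charge */

	charged = LIMIT_PAGES;		/* memcg is at its limit */
	printf("charge fails, margin = %lu\n", margin());

	/* first victim is oom killed, handles SIGKILL, exits, uncharges */
	charged -= 64;

	/*
	 * Without the re-check, the second task would be killed here even
	 * though the uncharge just made room for the charge to succeed.
	 */
	if (margin() >= (1UL << order))
		printf("margin = %lu pages, skip the kill and retry\n",
		       margin());
	else
		printf("still no room, kill something\n");

	return 0;
}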
I'll send a formal patch.
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -1836,6 +1836,13 @@ static void mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
> > if (!chosen)
> > return;
> > points = chosen_points * 1000 / totalpages;
> > +
> > + /* One last chance to see if we really need to kill something */
> > + if (mem_cgroup_margin(memcg) >= (1 << order)) {
> > + put_task_struct(chosen);
> > + return;
> > + }
> > +
> > oom_kill_process(chosen, gfp_mask, order, points, totalpages, memcg,
> > NULL, "Memory cgroup out of memory");
> > }
>
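
For reference, mem_cgroup_margin() above returns how many more pages can
be charged to the memcg before it hits its limit. A rough sketch of the
res_counter-era helper from memory (the exact body in any given tree may
differ):

/*
 * Rough sketch from memory of the 3.12-era helper; the real body may
 * differ. Returns the number of pages that can still be charged to
 * @memcg before its memory (or memory+swap) limit is hit.
 */
static unsigned long mem_cgroup_margin(struct mem_cgroup *memcg)
{
	unsigned long long margin;

	margin = res_counter_margin(&memcg->res);
	if (do_swap_account)
		margin = min(margin, res_counter_margin(&memcg->memsw));
	return margin >> PAGE_SHIFT;
}

The point of the added check is that it is cheap: one last counter read
before committing to a kill, in exchange for not killing a task whose
allocation was just made possible by the victim's exit.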