Message-Id: <20140527150100.70f6c7cf93d27d58c8f5eb48@linux-foundation.org>
Date: Tue, 27 May 2014 15:01:00 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Hugh Dickins <hughd@...gle.com>
Cc: Michal Hocko <mhocko@...e.cz>,
Johannes Weiner <hannes@...xchg.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH mmotm/next] memcg-mm-introduce-lowlimit-reclaim-fix2.patch

On Tue, 27 May 2014 14:36:04 -0700 (PDT) Hugh Dickins <hughd@...gle.com> wrote:
> mem_cgroup_within_guarantee() oopses in _raw_spin_lock_irqsave() when
> booted with cgroup_disable=memory. Fix that in the obvious inelegant
> way for now - though I hope we are moving towards a world in which
> almost all of the mem_cgroup_disabled() tests will vanish, with a
> root_mem_cgroup which can handle the basics even when disabled.
>
> I bet there's a neater way of doing this, rearranging the loop (and we
> shall want to avoid spinlocking on root_mem_cgroup when we reach that
> new world), but that's the kind of thing I'd get wrong in a hurry!
>
> ...
>
> @@ -2793,6 +2793,9 @@ static struct mem_cgroup *mem_cgroup_loo
> bool mem_cgroup_within_guarantee(struct mem_cgroup *memcg,
> struct mem_cgroup *root)
> {
> + if (mem_cgroup_disabled())
> + return false;
> +
> do {
> if (!res_counter_low_limit_excess(&memcg->res))
> return true;

This seems to be an awfully late and deep place at which to be noticing
mem_cgroup_disabled(). Should mem_cgroup_within_guarantee() even be called
in this state?
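
For illustration only, a minimal sketch of the caller-side alternative that
question points at, i.e. not calling the helper at all when memcg is
disabled. The call site shown (the memcg iteration loop in shrink_zone())
and the skip logic are assumptions for the sketch, not taken from the patch:

	/*
	 * Hypothetical caller-side guard: with cgroup_disable=memory there
	 * is no low-limit guarantee to honour, so skip the helper entirely
	 * rather than testing mem_cgroup_disabled() deep inside it.
	 */
	if (!mem_cgroup_disabled() &&
	    mem_cgroup_within_guarantee(memcg, root))
		continue;	/* memcg is under its guarantee; skip reclaim */

Either way the res_counter lock inside the helper is never taken on a
disabled hierarchy, which appears to be the path that oopsed in
_raw_spin_lock_irqsave().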