Message-ID: <20081210105000.GC25467@balbir.in.ibm.com>
Date: Wed, 10 Dec 2008 16:20:00 +0530
From: Balbir Singh <balbir@...ux.vnet.ibm.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc: menage@...gle.com,
KAMEZAWA Hiroyuki <kamezawa.hiroyuki@...fujitsu.com>,
Daisuke Miyakawa <dmiyakawa@...gle.com>,
YAMAMOTO Takashi <yamamoto@...inux.co.jp>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [RFC][RFT] memcg fix cgroup_mutex deadlock when cpuset reclaims memory

* KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> [2008-12-10 17:49:06]:
> On Wed, 10 Dec 2008 10:49:47 +0530
> Balbir Singh <balbir@...ux.vnet.ibm.com> wrote:
>
> > Hi,
> >
> > Here is a proposed fix for the reported memory controller cgroup_mutex
> > deadlock. It is only lightly tested and reviewed, so I need help with
> > further review and testing. Is the reported deadlock still reproducible
> > after this patch? A careful review of the cpuset impact would also be
> > highly appreciated.
> >
> > From: Balbir Singh <balbir@...ux.vnet.ibm.com>
> >
> > cpuset_migrate_mm() holds cgroup_mutex throughout the duration of
> > do_migrate_pages(). This causes two problems:
> >
> > 1. It can lead to deadlock with memcg, as do_migrate_pages()
> > enters reclaim
> > 2. It can lead to long latencies, preventing users from creating/
> > destroying other cgroups anywhere else
> >
> > The patch holds callback_mutex through the duration of cpuset_migrate_mm() and
> > gives up cgroup_mutex while doing so.
> >
> > Signed-off-by: Balbir Singh <balbir@...ux.vnet.ibm.com>
> > ---
> >
> > include/linux/cpuset.h | 13 ++++++++++++-
> > kernel/cpuset.c | 23 ++++++++++++-----------
> > 2 files changed, 24 insertions(+), 12 deletions(-)
> >
> > diff -puN kernel/cgroup.c~cpuset-remove-cgroup-mutex-from-update-path kernel/cgroup.c
> > diff -puN kernel/cpuset.c~cpuset-remove-cgroup-mutex-from-update-path kernel/cpuset.c
> > --- a/kernel/cpuset.c~cpuset-remove-cgroup-mutex-from-update-path
> > +++ a/kernel/cpuset.c
> > @@ -369,7 +369,7 @@ static void guarantee_online_mems(const
> > * task has been modifying its cpuset.
> > */
> >
> > -void cpuset_update_task_memory_state(void)
> > +void __cpuset_update_task_memory_state(bool held)
> > {
> > int my_cpusets_mem_gen;
> > struct task_struct *tsk = current;
> > @@ -380,7 +380,8 @@ void cpuset_update_task_memory_state(voi
> > rcu_read_unlock();
> >
> > if (my_cpusets_mem_gen != tsk->cpuset_mems_generation) {
> > - mutex_lock(&callback_mutex);
> > + if (!held)
> > + mutex_lock(&callback_mutex);
> > task_lock(tsk);
> > cs = task_cs(tsk); /* Maybe changed when task not locked */
> > guarantee_online_mems(cs, &tsk->mems_allowed);
> > @@ -394,7 +395,8 @@ void cpuset_update_task_memory_state(voi
> > else
> > tsk->flags &= ~PF_SPREAD_SLAB;
> > task_unlock(tsk);
> > - mutex_unlock(&callback_mutex);
> > + if (!held)
> > + mutex_unlock(&callback_mutex);
> > mpol_rebind_task(tsk, &tsk->mems_allowed);
> > }
> > }
> > @@ -949,13 +951,15 @@ static int update_cpumask(struct cpuset
> > * so that the migration code can allocate pages on these nodes.
> > *
> > * Call holding cgroup_mutex, so current's cpuset won't change
> > - * during this call, as manage_mutex holds off any cpuset_attach()
> > + * during this call, as callback_mutex holds off any cpuset_attach()
> > * calls. Therefore we don't need to take task_lock around the
> > * call to guarantee_online_mems(), as we know no one is changing
> > * our task's cpuset.
> > *
> > * Hold callback_mutex around the two modifications of our tasks
> > - * mems_allowed to synchronize with cpuset_mems_allowed().
> > + * mems_allowed to synchronize with cpuset_mems_allowed(). Give
> > + * up cgroup_mutex to avoid deadlocking with other subsystems
> > + * as we enter reclaim from do_migrate_pages().
> > *
> > * While the mm_struct we are migrating is typically from some
> > * other task, the task_struct mems_allowed that we are hacking
> > @@ -976,17 +980,14 @@ static void cpuset_migrate_mm(struct mm_
> > {
> > struct task_struct *tsk = current;
> >
> > - cpuset_update_task_memory_state();
> > -
> > + cgroup_unlock();
> > mutex_lock(&callback_mutex);
> > + cpuset_update_task_memory_state_locked();
> > tsk->mems_allowed = *to;
> > - mutex_unlock(&callback_mutex);
> > -
> > do_migrate_pages(mm, from, to, MPOL_MF_MOVE_ALL);
> > -
> > - mutex_lock(&callback_mutex);
> > guarantee_online_mems(task_cs(tsk),&tsk->mems_allowed);
> > mutex_unlock(&callback_mutex);
> > + cgroup_lock();
> > }
> >
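(Note: the include/linux/cpuset.h hunk appears to be missing from the
quote above, although the diffstat lists that file. Judging from the
diffstat and the two call sites in the cpuset.c hunks, the new wrappers
presumably look something like the sketch below; the exact form is an
assumption, not the quoted patch:

	/* include/linux/cpuset.h -- assumed wrappers around the new helper */
	extern void __cpuset_update_task_memory_state(bool held);

	/* caller does not hold callback_mutex */
	static inline void cpuset_update_task_memory_state(void)
	{
		__cpuset_update_task_memory_state(false);
	}

	/* caller already holds callback_mutex */
	static inline void cpuset_update_task_memory_state_locked(void)
	{
		__cpuset_update_task_memory_state(true);
	}
)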
>
> Hmm... can't this happen?
>
> Assume there is a task X and cgroups Z1 and Z2. Z1 and Z2 don't need to be in
> the same hierarchy.
> ==
> CPU A attach task X to cgroup Z1
> cgroup_lock()
> for_each_subsys_state()
You mean for_each_subsys(), right?
> => attach(X, Z1)
> => migrate_mm()
> => cgroup_unlock()
> migration
>
> CPU B attach task X to cgroup Z2 at the same time
> cgroup_lock()
> replace css_set.
> ==
>
> Can't the work on CPU B break for_each_subsys_state() on CPU A?
>
for_each_subsys() is hierarchy-aware, so if we try to attach the same task
to different hierarchies, it should not be a problem, right?
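For reference, the attach loop in question (roughly as in kernel/cgroup.c,
trimmed here for clarity) is:

	/* in cgroup_attach_task(), entered with cgroup_mutex held */
	for_each_subsys(root, ss) {
		if (ss->attach)
			ss->attach(ss, cgrp, oldcgrp, tsk);
	}

for_each_subsys(root, ss) walks only root->subsys_list, i.e. the
subsystems bound to that one hierarchy, so an attach in a different
hierarchy iterates over a different list.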
> Sorry if I misunderstand.
I hope I understood your scenario correctly.
--
Balbir