Message-ID: <528AD316.10001@huawei.com>
Date: Tue, 19 Nov 2013 10:55:18 +0800
From: Li Zefan <lizefan@...wei.com>
To: Shawn Bohrer <shawn.bohrer@...il.com>
CC: Hugh Dickins <hughd@...gle.com>, Tejun Heo <tj@...nel.org>,
Michal Hocko <mhocko@...e.cz>, <cgroups@...r.kernel.org>,
<linux-kernel@...r.kernel.org>,
Johannes Weiner <hannes@...xchg.org>,
Markus Blank-Burian <burian@...nster.de>
Subject: Re: 3.10.16 cgroup_mutex deadlock
> Thanks Tejun and Hugh. Sorry for the delay in getting around to
> testing this fix. On the surface it sounds correct, but I'd like to
> test it on top of 3.10.* since that is what we'll likely be running.
> I tried to apply Hugh's patch above on top of 3.10.19, but there
> appear to be a number of conflicts. Looking over the changes and
> my understanding of the problem, I believe that on 3.10 only
> cgroup_free_fn needs to run in a separate workqueue. Below is the
> patch I've applied on top of 3.10.19, which I'm about to start
> testing. If it looks like I botched the backport in any way, please
> let me know so I can test a proper fix on top of 3.10.19.
>
You didn't move the css free_work to the dedicated workqueue as Tejun's
patch does. The css free_work doesn't acquire cgroup_mutex, but when
destroying a lot of cgroups we can queue up a lot of css free_work items
on the system workqueue, so I'd suggest you also use cgroup_destroy_wq
for it.
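
Something along these lines, in the style of Tejun's upstream patch
("cgroup: use a dedicated workqueue for cgroup destruction") -- a sketch
only; the exact function and field names in the 3.10 css free path may
differ from what is shown here:

```c
/* Sketch: route css free work through cgroup_destroy_wq as well,
 * so a burst of cgroup destroys cannot tie up the system workqueue
 * while cgroup_mutex is held elsewhere.  Names follow the upstream
 * patch and may not match 3.10 exactly. */
static void css_free_rcu_fn(struct rcu_head *rcu_head)
{
	struct cgroup_subsys_state *css =
		container_of(rcu_head, struct cgroup_subsys_state, rcu_head);

	INIT_WORK(&css->destroy_work, css_free_work_fn);
	/* was: schedule_work(&css->destroy_work); */
	queue_work(cgroup_destroy_wq, &css->destroy_work);
}
```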