Message-ID: <469CFF2B.1080702@linux.vnet.ibm.com>
Date: Tue, 17 Jul 2007 23:10:59 +0530
From: Balbir Singh <balbir@...ux.vnet.ibm.com>
To: "Paul (??) Menage"
<menage@...gle.com>
CC: dhaval@...ux.vnet.ibm.com, Pavel Emelianov <xemul@...ru>,
linux kernel mailing list <linux-kernel@...r.kernel.org>,
Paul Jackson <pj@....com>,
Linux Containers <containers@...ts.osdl.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: Containers: css_put() dilemma
Paul Menage wrote:
> Because as soon as you do the atomic_dec_and_test() on css->refcnt and
> the refcnt hits zero, then theoretically some other thread (one that
> already holds container_mutex) could check that the refcount is zero
> and free the container structure.
>
Hi, Paul,
That sounds correct. I now wonder whether the solution should be some
form of delegation for deleting unreferenced containers (hint: a work
queue or a kernel thread); a rough sketch of what I mean follows.
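
Purely as an illustration (the structure and function names below are
made up, not from the actual containers patches): css_put() would only
drop the reference and queue a work item, and the work handler, running
in process context, would take container_mutex and do the real teardown.

#include <linux/workqueue.h>
#include <linux/mutex.h>
#include <linux/slab.h>
#include <asm/atomic.h>

static DEFINE_MUTEX(container_mutex);

struct container {
	atomic_t refcnt;
	struct work_struct release_work;	/* INIT_WORK() at creation */
	/* ... */
};

static void container_release_fn(struct work_struct *work)
{
	struct container *cont =
		container_of(work, struct container, release_work);

	/* Process context: taking container_mutex and sleeping is fine. */
	mutex_lock(&container_mutex);
	if (atomic_read(&cont->refcnt) == 0) {
		/* unlink from the hierarchy, run the release agent, ... */
		mutex_unlock(&container_mutex);
		kfree(cont);
		return;
	}
	/* Someone re-acquired a reference in the meantime; do nothing. */
	mutex_unlock(&container_mutex);
}

static void css_put_sketch(struct container *cont)
{
	/* Safe in atomic context: no locks taken, no sleeping. */
	if (atomic_dec_and_test(&cont->refcnt))
		schedule_work(&cont->release_work);
}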
> Adding a synchronize_rcu in container_diput() guarantees that the
> container structure won't be freed while someone may still be
> accessing it.
>
Do we take rcu_read_lock() in the css_put() path, or use call_rcu() to
free the container? The latter would look roughly like the sketch below.
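
Again only a sketch with made-up names: embed an rcu_head in the
container and let call_rcu() defer the actual kfree() until a grace
period has elapsed, so css_put() never has to block.

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct container {
	struct rcu_head rcu;
	/* ... */
};

static void container_free_rcu(struct rcu_head *head)
{
	/*
	 * Runs after a grace period: no rcu_read_lock() reader can
	 * still be looking at this container.
	 */
	kfree(container_of(head, struct container, rcu));
}

/* Called from the release path instead of a direct kfree(cont). */
static void container_free(struct container *cont)
{
	call_rcu(&cont->rcu, container_free_rcu);
}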
>>
>> Could you please elaborate as to why using a release agent is broken
>> when the memory controller is attached to it?
>
> Because then it will try to take container_mutex in css_put() if it
> drops the last reference to a container, which is the thing that you
> said you had to avoid since you called css_put() in contexts that
> couldn't sleep.
>
> Paul
--
Warm Regards,
Balbir Singh
Linux Technology Center
IBM, ISTL