Message-ID: <47A19A48.4000005@bull.net>
Date: Thu, 31 Jan 2008 10:52:08 +0100
From: Pierre Peiffer <pierre.peiffer@...l.net>
To: "Serge E. Hallyn" <serue@...ibm.com>
Cc: linux-kernel@...r.kernel.org, containers@...ts.linux-foundation.org
Subject: Re: [PATCH 2.6.24-rc8-mm1 12/15] (RFC) IPC/semaphores: make use of RCU to free the sem_undo_list

Serge E. Hallyn wrote:
> Quoting pierre.peiffer@...l.net (pierre.peiffer@...l.net):
>> From: Pierre Peiffer <pierre.peiffer@...l.net>
>>
>> Today, the sem_undo_list is freed when the last task using it exits.
>> There is no mechanism in place that allows safe concurrent access to
>> the sem_undo_list of a target task while protecting efficiently
>> against that task exiting.
>>
>> That is okay for now as we don't need this.
>>
>> As I would like to provide a /proc interface to access this data, I need
>> such safe access, without blocking the target task if possible.
>>
>> This patch introduces the use of RCU to delay the actual freeing of
>> these sem_undo_list structures. Any task can then access them safely
>> inside an RCU read-side critical section, this way:
>>
>> 	struct sem_undo_list *undo_list;
>> 	int ret = 0;
>> 	...
>> 	rcu_read_lock();
>> 	undo_list = rcu_dereference(task->sysvsem.undo_list);
>> 	if (undo_list)
>> 		ret = atomic_inc_not_zero(&undo_list->refcnt);
>> 	rcu_read_unlock();
>> 	...
>> 	if (undo_list && ret) {
>> 		/* section where undo_list can be used safely */
>> 		...
>> 	}
>> 	...
>
> And of course then
>
> 	if (atomic_dec_and_test(&undo_list->refcnt))
> 		free_semundo_list(undo_list);
>
> by that task.
>
I will make this explicit too.
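Something along these lines, as a minimal sketch of the release side
(the struct layout and helper names below are illustrative, they may not
match the patch exactly): the last put defers the real kfree() through
call_rcu(), so that a reader still inside its rcu_read_lock() section
never dereferences freed memory.

	#include <linux/kernel.h>	/* container_of() */
	#include <linux/rcupdate.h>	/* rcu_head, call_rcu() */
	#include <linux/slab.h>		/* kfree() */
	#include <asm/atomic.h>		/* atomic_t, atomic_dec_and_test() */

	/* Sketch only: fields and helpers are illustrative. */
	struct sem_undo_list {
		atomic_t	refcnt;	/* tasks/readers holding a reference */
		struct rcu_head	rcu;	/* used to defer the final kfree() */
		/* ... per-semaphore undo entries ... */
	};

	static void free_semundo_list_rcu(struct rcu_head *head)
	{
		struct sem_undo_list *undo_list =
			container_of(head, struct sem_undo_list, rcu);

		/* Runs after a grace period: no reader can still see it. */
		kfree(undo_list);
	}

	static void put_undo_list(struct sem_undo_list *undo_list)
	{
		/*
		 * Last user gone: delay the actual free until all current
		 * rcu_read_lock() sections have completed.
		 */
		if (atomic_dec_and_test(&undo_list->refcnt))
			call_rcu(&undo_list->rcu, free_semundo_list_rcu);
	}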
>> Signed-off-by: Pierre Peiffer <pierre.peiffer@...l.net>
>
> Looks correct in terms of locking/refcounting.
>
> Signed-off-by: Serge Hallyn <serue@...ibm.com>
>
Thanks!
--
Pierre