Message-Id: <20070325125025.b6e8f0d4.pj@sgi.com>
Date: Sun, 25 Mar 2007 12:50:25 -0700
From: Paul Jackson <pj@....com>
To: vatsa@...ibm.com
Cc: akpm@...ux-foundation.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] Fix race between attach_task and cpuset_exit
> + task_lock(tsk);
> cs = tsk->cpuset;
> tsk->cpuset = &top_cpuset; /* the_top_cpuset_hack - see above */
> + atomic_dec(&cs->count);
> + task_unlock(tsk);
>
> if (notify_on_release(cs)) {
> char *pathbuf = NULL;
>
> mutex_lock(&manage_mutex);
> - if (atomic_dec_and_test(&cs->count))
> + if (!atomic_read(&cs->count))
> check_for_release(cs, &pathbuf);
Is there perhaps another race here?  Could it happen (roughly as
sketched below) that:
1) the atomic_dec() lowers the count to, say, one (any value > zero),
2) after we drop the task lock, some other task or tasks decrement
   the count to zero, and
3) we catch that zero when we atomic_read() the count, and issue a
   spurious check_for_release()?
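To spell out the interleaving I'm worried about, here is the patched
path again with the window marked (annotations mine, not compiled or
tested):

    task_lock(tsk);
    cs = tsk->cpuset;
    tsk->cpuset = &top_cpuset;      /* the_top_cpuset_hack - see above */
    atomic_dec(&cs->count);         /* say this drops the count 2 -> 1 */
    task_unlock(tsk);

    /*
     * Window: between here and the atomic_read() below, other tasks
     * (exiting, or being moved off this cpuset by attach_task) can
     * bring the count down to zero.
     */

    if (notify_on_release(cs)) {
            char *pathbuf = NULL;

            mutex_lock(&manage_mutex);
            if (!atomic_read(&cs->count))   /* reads 0, though our own
                                             * dec only got it to 1 */
                    check_for_release(cs, &pathbuf);
            ...
    }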
I'm thinking that we should use the same oldcs_tobe_released logic
here as we used in attach_task: do the atomic_dec_and_test() inside
the task lock, and if that hit zero, then we know that our pointer to
this cpuset is the last remaining reference, so we can release it at
our convenience, knowing no one else can reference or mess with that
cpuset any more.
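Very roughly, I'm thinking of something like the following (untested
sketch; "dropped_last_ref" is just an illustrative name, and the
unlock / release-agent tail is from memory):

    void cpuset_exit(struct task_struct *tsk)
    {
            struct cpuset *cs;
            int dropped_last_ref;

            task_lock(tsk);
            cs = tsk->cpuset;
            tsk->cpuset = &top_cpuset;  /* the_top_cpuset_hack - see above */
            dropped_last_ref = atomic_dec_and_test(&cs->count);
            task_unlock(tsk);

            /*
             * If our dec took the count to zero while we held the task
             * lock, then ours was the last remaining reference, no one
             * else can catch the count at zero here, and we can do the
             * release work at our convenience.
             */
            if (dropped_last_ref && notify_on_release(cs)) {
                    char *pathbuf = NULL;

                    mutex_lock(&manage_mutex);
                    check_for_release(cs, &pathbuf);
                    mutex_unlock(&manage_mutex);
                    cpuset_release_agent(pathbuf);
            }
    }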
--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@....com> 1.925.600.0401