Message-ID: <20090327212933.GA5325@redhat.com>
Date: Fri, 27 Mar 2009 22:29:33 +0100
From: Oleg Nesterov <oleg@...hat.com>
To: Markus Metzger <markus.t.metzger@...glemail.com>
Cc: "Metzger, Markus T" <markus.t.metzger@...el.com>,
"Kleen, Andi" <andi.kleen@...el.com>, Ingo Molnar <mingo@...e.hu>,
Roland McGrath <roland@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [rfc] x86, bts: fix crash
On 03/27, Markus Metzger wrote:
>
> Regarding the race on task->thread.ds_ctx between ds_release_bts() and
> ds_switch_to(), how would I prevent a task from being rescheduled for
> a small amount of time?
I don't see how we can do this. We could split wait_task_inactive() into
2 functions, the first of which returns with task_rq_lock() held and
interrupts disabled. But this is nasty, and in any case wait_task_inactive(p)
can't force "p" to be deactivated.
Can't we do something different?
For simplicity, let's suppose that we have only task_struct->bts and it
is just a blob of memory which can be used by the CPU somehow.
First, we add "struct rcu_head" into task_struct->bts, and then

void free_bts(struct rcu_head *rcu)
{
	struct bts_tracer *bts = container_of(rcu, struct bts_tracer, rcu);
	...
	kfree(bts);
}
void ds_release_bts(struct bts_tracer *tracer)
{
	struct task_struct *child = tracer->ds.context->task;
	struct bts_tracer *bts = child->bts;

	child->bts = NULL;

	// make sure child will NOT use ->bts
	// after the next context switch,
	// clear TIF_DS_AREA_MSR or something
	...

	call_rcu_sched(&bts->rcu, free_bts);
}
Now we can call ds_release_bts() from atomic context (as we do now).
Once again, the pseudo code above has nothing to do with reality,
just for illustration.
(As for the memory accounting, that is another issue, let's forget about
it for now.)
Oleg.