Message-ID: <1402294928.6316.38.camel@marge.simpson.net>
Date: Mon, 09 Jun 2014 08:22:08 +0200
From: Mike Galbraith <umgwanakikbuti@...il.com>
To: Lai Jiangshan <laijs@...fujitsu.com>
Cc: RT <linux-rt-users@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: Re: [RFC PATCH] rt/aio: fix rcu garbage collection might_sleep() splat
On Mon, 2014-06-09 at 05:17 +0200, Mike Galbraith wrote:
> On Mon, 2014-06-09 at 10:08 +0800, Lai Jiangshan wrote:
> > Hi, rt-people
> >
> > I don't think it is the correct direction.
>
> Yup, it's a band-aid, ergo RFC.
Making aio play by the rules is safe. Another option is to bend the
rules a little. With the below, the ltp aio-stress testcases haven't yet
griped or exploded either... which somehow isn't quite as comforting as
"is safe" :)
---
fs/aio.c | 6 ++++++
1 file changed, 6 insertions(+)
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -509,7 +509,9 @@ static void free_ioctx_reqs(struct percpu_ref *ref)
 	struct kioctx *ctx = container_of(ref, struct kioctx, reqs);
 
 	INIT_WORK(&ctx->free_work, free_ioctx);
+	preempt_enable_rt();
 	schedule_work(&ctx->free_work);
+	preempt_disable_rt();
 }
 
 /*
@@ -522,7 +524,9 @@ static void free_ioctx_users(struct percpu_ref *ref)
 	struct kioctx *ctx = container_of(ref, struct kioctx, users);
 	struct kiocb *req;
 
+	preempt_enable_rt();
 	spin_lock_irq(&ctx->ctx_lock);
+	local_irq_disable_rt();
 
 	while (!list_empty(&ctx->active_reqs)) {
 		req = list_first_entry(&ctx->active_reqs,
@@ -536,6 +540,8 @@ static void free_ioctx_users(struct percpu_ref *ref)
 
 	percpu_ref_kill(&ctx->reqs);
 	percpu_ref_put(&ctx->reqs);
+	preempt_disable_rt();
+	local_irq_enable_rt();
 }
 
 static int ioctx_add_table(struct kioctx *ctx, struct mm_struct *mm)
--
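FWIW, for anyone reading along without an -rt tree handy: the *_rt()
helpers used above only do anything on an RT kernel. The sketch below is
from memory of how the -rt patch queue defines them (exact spellings and
the CONFIG_PREEMPT_RT_FULL guard are an assumption, check your tree); the
point is that on !RT they compile away, so mainline behavior is untouched
and the rule bending only happens where -rt needs the preemptible window:

/*
 * Rough sketch, not copied from a tree: RT-only helpers that are
 * no-ops on !RT and plain preempt/irq operations on RT.
 */
#ifdef CONFIG_PREEMPT_RT_FULL
# define preempt_disable_rt()		preempt_disable()
# define preempt_enable_rt()		preempt_enable()
# define local_irq_disable_rt()		local_irq_disable()
# define local_irq_enable_rt()		local_irq_enable()
#else
# define preempt_disable_rt()		barrier()
# define preempt_enable_rt()		barrier()
# define local_irq_disable_rt()		do { } while (0)
# define local_irq_enable_rt()		do { } while (0)
#endif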