Message-ID: <20140320163004.GE28970@kvack.org>
Date: Thu, 20 Mar 2014 12:30:04 -0400
From: Benjamin LaHaise <bcrl@...ck.org>
To: Dave Jones <davej@...hat.com>, Gu Zheng <guz.fnst@...fujitsu.com>,
Al Viro <viro@...iv.linux.org.uk>, jmoyer@...hat.com,
kosaki.motohiro@...fujitsu.com,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Yasuaki Ishimatsu <isimatu.yasuaki@...fujitsu.com>,
tangchen <tangchen@...fujitsu.com>, miaox@...fujitsu.com,
linux-aio@...ck.org, fsdevel <linux-fsdevel@...r.kernel.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 2/2] aio: fix the confliction of read events and migrating ring page

On Thu, Mar 20, 2014 at 10:32:07AM -0400, Dave Jones wrote:
> On Thu, Mar 20, 2014 at 01:46:25PM +0800, Gu Zheng wrote:
>
> > diff --git a/fs/aio.c b/fs/aio.c
> > index 88ad40c..e353085 100644
> > --- a/fs/aio.c
> > +++ b/fs/aio.c
> > @@ -319,6 +319,9 @@ static int aio_migratepage(struct address_space *mapping, struct page *new,
> > ctx->ring_pages[old->index] = new;
> > spin_unlock_irqrestore(&ctx->completion_lock, flags);
> >
> > + /* Ensure read event is completed before putting old page */
> > + mutex_lock(&ctx->ring_lock);
> > + mutex_unlock(&ctx->ring_lock);
> > put_page(old);
> >
> > return rc;
>
> This looks a bit weird. Would using a completion work here?

Nope. This is actually the most elegant fix I've seen for this problem,
as every other attempt has relied on adding extra spin locks (which are
only needed in the migration case) around the reader-side accesses to
ring_pages. The empty lock/unlock pair works because readers hold
ring_lock for the duration of their access to the ring, so the
migration path waits for any in-flight reader to finish before putting
the old page. That said, this patch is not a complete solution, as a
reader could still update the ring's head pointer in the old page
while its contents are being copied to the new page, losing that
update. I think the right fix is to take the ring_lock mutex over the
entire page migration operation. That should be safe, as nowhere else
is the ring_lock mutex nested with any other locks.
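
Something along these lines is what I have in mind (completely
untested, and the existing validation and migrate_page_move_mapping()
call are elided, so treat it as a sketch rather than a real patch):

    static int aio_migratepage(struct address_space *mapping, struct page *new,
                               struct page *old, enum migrate_mode mode)
    {
            struct kioctx *ctx = mapping->private_data;
            unsigned long flags;
            int rc;

            /* aio_read_events_ring() holds ring_lock while it copies
             * events out and updates the head pointer, so taking it
             * here keeps both the ring_pages[] swap and the copy of
             * the ring contents from racing with readers.
             */
            mutex_lock(&ctx->ring_lock);

            /* ... existing checks of old->index against ctx->ring_pages
             * and the migrate_page_move_mapping() call go here,
             * setting rc ...
             */

            spin_lock_irqsave(&ctx->completion_lock, flags);
            ctx->ring_pages[old->index] = new;
            spin_unlock_irqrestore(&ctx->completion_lock, flags);

            put_page(old);

            mutex_unlock(&ctx->ring_lock);
            return rc;
    }
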
-ben
--
"Thought is the essence of where you are now."