Message-ID: <B41635854730A14CA71C92B36EC22AAC83A809@mssmsx411>
Date: Thu, 15 Feb 2007 12:16:23 +0300
From: "Ananiev, Leonid I" <leonid.i.ananiev@...el.com>
To: "Ken Chen" <kenchen@...gle.com>
Cc: <suparna@...ibm.com>, "Andrew Morton" <akpm@...ux-foundation.org>,
<linux-kernel@...r.kernel.org>, "linux-aio" <linux-aio@...ck.org>,
"Zach Brown" <zach.brown@...cle.com>,
"Chris Mason" <chris.mason@...cle.com>
Subject: RE: [PATCH] aio: fix kernel bug when page is temporally busy
Ken Chen wrote:
> It might shut up kernel
> panic by eliminate double calls to aio_complete(), but it will
> silently introduce data corruption.
I got a kernel panic after an hour of running aio-stress.
With the patch applied, I have not seen the aio-stress message
"verify error, file %s offset %Lu contents (offset:bad:good):\n"
during five hours of running aio-stress with the 'verify' option.
Looking closely at aio-stress.c
ftp://ftp.suse.com/pub/people/mason/utils/aio-stress.c
we can see that in random mode this program may write THE SAME
contents to the same file offset asynchronously from different
buffers, and it does not care about that. Does Ken consider a
kernel panic the best way to prevent data corruption in such
programs?
> So any error value returned from invalidate_inode_pages2_range() has
> to be taken seriously in the direct IO submit path instead of dropping
> it to the floor.
If invalidate_inode_pages2_range() returns -EIOCBRETRY, as the patch
"aio: fix kernel bug when page is temporally busy"
arranges, then do_sync_read()/do_sync_write() will not drop the IO
submission but will retry it:
	for (;;) {
		ret = filp->f_op->aio_read(&kiocb, &iov, 1, kiocb.ki_pos);
		if (ret != -EIOCBRETRY)
			break;
		wait_on_retry_sync_kiocb(&kiocb);
	}
So do_sync_read()/do_sync_write() will no longer return -EIO when a
page is temporarily busy, as they do now, before the patch.
Ken Chen wrote:
> I also think the original patch is wrong.
What do you mean by 'also'?
Leonid