Message-ID: <20081204145810.GR6703@one.firstfloor.org>
Date: Thu, 4 Dec 2008 15:58:10 +0100
From: Andi Kleen <andi@...stfloor.org>
To: Mikulas Patocka <mpatocka@...hat.com>
Cc: Andi Kleen <andi@...stfloor.org>, linux-kernel@...r.kernel.org,
xfs@....sgi.com, Alasdair G Kergon <agk@...hat.com>,
Andi Kleen <andi-suse@...stfloor.org>,
Milan Broz <mbroz@...hat.com>
Subject: Re: Device loses barrier support (was: Fixed patch for simple barriers.)
> You start pvmove. The filesystem doesn't know about pvmove.
>
> The next time the filesystem does something, it submits these 3 requests and
> the 2nd will unexpectedly fail.
Again, the file systems handle failing barriers; they expect it.
>
> So the fact that pvmove drains the request queue won't help you.
Help you against what?
>
> > > finished after the 2nd write) and you are in an interrupt context, where
> > > you can't reissue the -EOPNOTSUPP request. So what do you want to do?
> >
> > The barrier aware file systems I know of just resubmit synchronously when
> > a barrier fails.
>
> ... and produce structure corruption for a certain period of time, because
> the writes meant to be ordered are submitted unordered.
No, there is nothing unordered. The file system path typically looks like this:
	commit of a transaction:

	if (I have never seen a barrier failing) {
		write block with barrier
		if (EOPNOTSUPP) {
			record failure
			submit synchronously
		}
	} else {
		submit synchronously
	}
So if a pvmove barrier fails, the file system will just submit synchronously.
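
In buffer head terms the pattern is roughly the following. This is only a
sketch of the idea, not copied from any particular file system; struct
my_sb_info and its barriers_ok flag are made-up stand-ins for the
"record failure" state:

#include <linux/buffer_head.h>

struct my_sb_info {				/* made-up per-fs state */
	int barriers_ok;			/* "record failure" flag */
};

static int write_commit_block(struct my_sb_info *sbi, struct buffer_head *bh)
{
	int ret;

	if (!sbi->barriers_ok) {
		/* a barrier already failed earlier: plain synchronous write */
		set_buffer_dirty(bh);
		return sync_dirty_buffer(bh);
	}

	set_buffer_dirty(bh);
	set_buffer_ordered(bh);			/* submit as a barrier write */
	ret = sync_dirty_buffer(bh);
	clear_buffer_ordered(bh);

	if (ret == -EOPNOTSUPP) {
		sbi->barriers_ok = 0;		/* record the failure */

		/* the failed write may have clobbered the buffer state */
		set_buffer_uptodate(bh);
		set_buffer_dirty(bh);

		/* ... and resubmit the same block synchronously */
		ret = sync_dirty_buffer(bh);
	}
	return ret;
}

The second sync_dirty_buffer() reissues the very same block, so nothing
gets reordered; only the barrier semantics are lost from that point on.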
The "write block with barrier" step varies: jbd/gfs2 do it synchronously
too and xfs does it asynchronously (with io done callbacks), but in both
cases they handle an EOPNOTSUPP coming out in the final io done.
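
For the asynchronous case the check simply moves into the io done
callback. Something along these lines (again just a sketch; my_io and
the my_*() helpers are made up, this is not the actual xfs code):

#include <linux/bio.h>

struct my_io;					/* made-up per-io state */
extern void my_disable_barriers(struct my_io *io);
extern void my_resubmit_without_barrier(struct my_io *io);
extern void my_complete_io(struct my_io *io, int error);

static void my_ordered_end_io(struct bio *bio, int error)
{
	struct my_io *io = bio->bi_private;

	if (error == -EOPNOTSUPP && bio_barrier(bio)) {
		/*
		 * The barrier bounced in the completion path: remember
		 * that the device has no barrier support and reissue
		 * the same data as a plain (non-barrier) write.
		 */
		my_disable_barriers(io);
		my_resubmit_without_barrier(io);
		return;
	}

	my_complete_io(io, error);
}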
When pvmove migrates from no barrier support to barrier support, there
won't be any barriers on the file system for the remainder of the
current mount, but that's also fine.
-Andi