Date:	Wed, 7 Sep 2011 16:16:20 -0400
From:	Christoph Hellwig <hch@...radead.org>
To:	Vivek Goyal <vgoyal@...hat.com>
Cc:	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>, hch@...radead.org,
	Jeremy Fitzhardinge <jeremy@...p.org>, jbeulich@...ell.com,
	linux-kernel@...r.kernel.org, JBeulich@...e.com
Subject: Re: Help with implementing some form of barriers in 3.0 kernels.

[Hmm, for some reason I never manage to receive Konrad's mails directly,
 but only get the replies, or copies via the list]

On Wed, Sep 07, 2011 at 02:17:40PM -0400, Vivek Goyal wrote:
> On Wed, Sep 07, 2011 at 01:48:32PM -0400, Konrad Rzeszutek Wilk wrote:
> > Hey Christoph,
> > 
> > I was wondering what you think is the proper way of implementing a
> > backend to support the 'barrier' type requests? We have this issue where
> > there are 2.6.36 type guests that still use barriers and we would like
> > to support them properly. But in 3.0 there are no barriers - hence
> > the question whether WRITE_FLUSH_FUA would be equal to WRITE_BARRIER?
> 
> I think WRITE_FLUSH_FUA is not the same as WRITE_BARRIER, because it
> does not ensure request ordering. A request rq2 issued after rq1 (with
> WRITE_FLUSH_FUA) can still finish before rq1. In the past WRITE_BARRIER
> would not have allowed that.
> 
> So AFAIK, WRITE_FLUSH_FUA is not WRITE_BARRIER.

Indeed.  And while most guests won't care, some will.  E.g. reiserfs,
which is the standard filesystem in most SuSE guests, and those guests
happen to be fairly popular with Xen.
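
To spell the ordering point out, here is just an illustration (not code
from any actual driver; bio1/bio2 are placeholders):

	/*
	 * Illustration only: in 3.0 the block layer gives no ordering
	 * guarantee between these two bios, so bio2 may reach the disk
	 * before bio1.  The old WRITE_BARRIER would have prevented that.
	 */
	submit_bio(WRITE_FLUSH_FUA, bio1);
	submit_bio(WRITE, bio2);	/* may complete before bio1 */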

I'd suggest you look at the pre-2.6.36 barrier implementation and see
if you can move that into xen-blkfront.
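
Very roughly, the backend side would need to do something like the
following untested sketch; handle_guest_barrier(), drain_inflight() and
submit_guest_write() are made-up names here, not actual xen-blkback code:

	/*
	 * Untested sketch: emulate an old-style guest barrier request in
	 * a 3.0 backend by draining and then using FLUSH+FUA.
	 */
	static void handle_guest_barrier(struct pending_req *req)
	{
		/*
		 * Stop pulling new requests off the ring and wait for
		 * every bio already submitted to complete, so nothing
		 * issued before the barrier can be reordered past it.
		 */
		drain_inflight();

		/*
		 * Then issue the barrier payload with FLUSH+FUA so it and
		 * everything before it are on stable storage before the
		 * guest sees the completion.
		 */
		submit_guest_write(req, WRITE_FLUSH_FUA);

		/* Only afterwards resume processing the ring. */
	}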

For the qemu side doing this is a bit easier, as you'll just have to wait
for all pending aio requests to complete.  The current qemu xen disk
code gets this horribly wrong, though.
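
As a rough sketch, assuming the qemu_aio_flush()/bdrv_flush() interfaces
and the xen_disk helpers of current qemu (blk_handle_barrier() is a
made-up name, and real code would need to check for errors):

	/*
	 * Rough sketch only, for hw/xen_disk.c.
	 */
	static void blk_handle_barrier(struct XenBlkDev *blkdev,
				       struct ioreq *ioreq)
	{
		/* Wait for every AIO request already submitted to
		 * complete, so nothing gets reordered past the barrier. */
		qemu_aio_flush();

		/* Flush the host-side write cache so the earlier writes
		 * are actually stable before we complete anything back to
		 * the guest. */
		bdrv_flush(blkdev->bs);

		/* Only then submit the barrier payload itself. */
		ioreq_runio_qemu_aio(ioreq);
	}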

