Message-ID: <CAMM=eLcvpQRef34-xoxg8qpJiexqesdhcauhU+dRgBw5wNVoag@mail.gmail.com>
Date:	Wed, 14 Sep 2011 11:34:55 -0400
From:	Mike Snitzer <snitzer@...hat.com>
To:	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
Cc:	Jan Beulich <JBeulich@...e.com>,
	Jeremy Fitzhardinge <jeremy@...p.org>, hch@...radead.org,
	Jan Kara <jack@...e.cz>, linux-kernel@...r.kernel.org
Subject: Re: Help with implementing some form of barriers in 3.0 kernels.

On Wed, Sep 14, 2011 at 10:32 AM, Konrad Rzeszutek Wilk
<konrad.wilk@...cle.com> wrote:
>
> > > +   if (drain) {
> > > +           struct request_queue *q = bdev_get_queue(preq.bdev);
> > > +           unsigned long flags;
> > > +
>
> > > +           /* Emulate the original behavior of write barriers */
> > > +           spin_lock_irqsave(q->queue_lock, flags);
> > > +           elv_drain_elevator(q);
> > > +           __blk_run_queue(q);
> > > +           spin_unlock_irqrestore(q->queue_lock, flags);
> > > +   }
> I also had to add:
>
> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> index eaf49d1..20fddbc 100644
> --- a/drivers/block/xen-blkback/blkback.c
> +++ b/drivers/block/xen-blkback/blkback.c
> @@ -679,6 +679,10 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
>                struct request_queue *q = bdev_get_queue(preq.bdev);
>                unsigned long flags;
>
> +               if (!q->elevator) {
> +                       __end_block_io_op(pending_req, -EOPNOTSUPP);
> +                       return -EOPNOTSUPP;
> +               }
>                /* Emulate the original behavior of write barriers */
>                spin_lock_irqsave(q->queue_lock, flags);
>                elv_drain_elevator(q);
> diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
> index 52d8893..722714a 100644
> --- a/drivers/block/xen-blkback/common.h
> +++ b/drivers/block/xen-blkback/common.h
> @@ -157,6 +157,7 @@ struct xen_vbd {
>        /* Cached size parameter. */
>        sector_t                size;
>        bool                    flush_support;
> +       bool                    barrier_support;
>  };
>
>  struct backend_info;
> diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
> index da1e27a..7189ecd 100644
> --- a/drivers/block/xen-blkback/xenbus.c
> +++ b/drivers/block/xen-blkback/xenbus.c
> @@ -384,6 +384,9 @@ static int xen_vbd_create(struct xen_blkif *blkif, blkif_vdev_t handle,
>        if (q && q->flush_flags)
>                vbd->flush_support = true;
>
> +       if (q && q->elevator && q->elevator->ops)
> +               vbd->barrier_support = true;
> +
>        DPRINTK("Successful creation of handle=%04x (dom=%u)\n",
>                handle, blkif->domid);
>        return 0;
> @@ -728,7 +731,7 @@ again:
>        if (err)
>                goto abort;
>
> -       err = xen_blkbk_barrier(xbt, be, be->blkif->vbd.flush_support);
> +       err = xen_blkbk_barrier(xbt, be, be->blkif->vbd.barrier_support);
>
>        err = xen_blkbk_discard(xbt, be);
>
>
> Otherwise it would crash. Though I am not sure why the elevator is not
> set (the guest is on an LVM LV).

Bio-based DM devices do not have an elevator.  But that doesn't mean
barriers (or FLUSH/FUA) aren't supported or needed by the
underlying devices.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
