Message-ID: <CAE=gft4KDC0r3S-5p-oHz0cBiwpPqJ8mYVJ2JP7ghnPdaR_u6w@mail.gmail.com>
Date: Tue, 12 Nov 2019 09:22:51 -0800
From: Evan Green <evgreen@...omium.org>
To: "Darrick J. Wong" <darrick.wong@...cle.com>
Cc: Jens Axboe <axboe@...nel.dk>,
Martin K Petersen <martin.petersen@...cle.com>,
Gwendal Grignou <gwendal@...omium.org>,
Ming Lei <ming.lei@...hat.com>,
Alexis Savery <asavery@...omium.org>,
Douglas Anderson <dianders@...omium.org>,
Bart Van Assche <bvanassche@....org>,
Chaitanya Kulkarni <chaitanya.kulkarni@....com>,
linux-block <linux-block@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v6 2/2] loop: Better discard support for block devices

Thanks for replying and taking a look, Darrick. I didn't see your
patch in Jens' tree when I looked just before sending this, but maybe
I missed it.

On Mon, Nov 11, 2019 at 5:37 PM Darrick J. Wong <darrick.wong@...cle.com> wrote:
>
> On Mon, Nov 11, 2019 at 10:50:30AM -0800, Evan Green wrote:
> > If the backing device for a loop device is a block device,
> > then mirror the "write zeroes" capabilities of the underlying
> > block device into the loop device. Copy this capability into both
> > max_write_zeroes_sectors and max_discard_sectors of the loop device.
> >
> > The reason for this is that REQ_OP_DISCARD on a loop device translates
> > into blkdev_issue_zeroout(), rather than blkdev_issue_discard(). This
> > presents a consistent interface for loop devices (that discarded data
> > is zeroed), regardless of the backing device type of the loop device.
> > There should be no behavior change for loop devices backed by regular
> > files.
> >
> > While in there, differentiate between REQ_OP_DISCARD and
> > REQ_OP_WRITE_ZEROES, which are different for block devices,
> > but which the loop device had just been lumping together, since
> > they're largely the same for files.
> >
> > This change fixes blktest block/003, and removes an extraneous
> > error print in block/013 when testing on a loop device backed
> > by a block device that does not support discard.
> >
> > Signed-off-by: Evan Green <evgreen@...omium.org>
> > Reviewed-by: Gwendal Grignou <gwendal@...omium.org>
> > Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@....com>
> > ---
> >
> > Changes in v6: None
> > Changes in v5:
> > - Don't mirror discard if lo_encrypt_key_size is non-zero (Gwendal)
> >
> > Changes in v4:
> > - Mirror blkdev's write_zeroes into loopdev's discard_sectors.
> >
> > Changes in v3:
> > - Updated commit description
> >
> > Changes in v2: None
> >
> > drivers/block/loop.c | 57 ++++++++++++++++++++++++++++----------------
> > 1 file changed, 37 insertions(+), 20 deletions(-)
> >
> > diff --git a/drivers/block/loop.c b/drivers/block/loop.c
> > index d749156a3d88..236f6deb0772 100644
> > --- a/drivers/block/loop.c
> > +++ b/drivers/block/loop.c
> > @@ -417,19 +417,14 @@ static int lo_read_transfer(struct loop_device *lo, struct request *rq,
> > return ret;
> > }
> >
> > -static int lo_discard(struct loop_device *lo, struct request *rq, loff_t pos)
> > +static int lo_discard(struct loop_device *lo, struct request *rq,
> > + int mode, loff_t pos)
> > {
> > - /*
> > - * We use punch hole to reclaim the free space used by the
> > - * image a.k.a. discard. However we do not support discard if
> > - * encryption is enabled, because it may give an attacker
> > - * useful information.
> > - */
> > struct file *file = lo->lo_backing_file;
> > - int mode = FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE;
> > + struct request_queue *q = lo->lo_queue;
> > int ret;
> >
> > - if ((!file->f_op->fallocate) || lo->lo_encrypt_key_size) {
> > + if (!blk_queue_discard(q)) {
> > ret = -EOPNOTSUPP;
> > goto out;
> > }
> > @@ -599,8 +594,13 @@ static int do_req_filebacked(struct loop_device *lo, struct request *rq)
> > case REQ_OP_FLUSH:
> > return lo_req_flush(lo, rq);
> > case REQ_OP_DISCARD:
> > + return lo_discard(lo, rq,
> > + FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, pos);
> > +
> > case REQ_OP_WRITE_ZEROES:
> > - return lo_discard(lo, rq, pos);
> > + return lo_discard(lo, rq,
> > + FALLOC_FL_ZERO_RANGE | FALLOC_FL_KEEP_SIZE, pos);
>
> Yes, this more or less reimplements what's already in -next...

Agreed, this part would disappear if I rebased on top of your patch.
This series has been around for a while, you see :)
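
For anyone skimming the thread, the split above is the same
distinction you'd express from userspace against the backing file. A
minimal sketch (the path and range are made up for illustration):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
            /* hypothetical backing file for a loop device */
            int fd = open("/tmp/backing.img", O_RDWR);
            off_t off = 0, len = 1 << 20;

            /* REQ_OP_DISCARD: deallocate the range, keep the file size */
            fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, off, len);

            /* REQ_OP_WRITE_ZEROES: guarantee zeroes without shrinking the file */
            fallocate(fd, FALLOC_FL_ZERO_RANGE | FALLOC_FL_KEEP_SIZE, off, len);

            close(fd);
            return 0;
    }
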
>
> > +
> > case REQ_OP_WRITE:
> > if (lo->transfer)
> > return lo_write_transfer(lo, rq, pos);
> > @@ -854,6 +854,21 @@ static void loop_config_discard(struct loop_device *lo)
> > struct file *file = lo->lo_backing_file;
> > struct inode *inode = file->f_mapping->host;
> > struct request_queue *q = lo->lo_queue;
> > + struct request_queue *backingq;
> > +
> > + /*
> > + * If the backing device is a block device, mirror its zeroing
> > + * capability. REQ_OP_DISCARD translates to a zero-out even when backed
> > + * by block devices to keep consistent behavior with file-backed loop
> > + * devices.
> > + */
> > + if (S_ISBLK(inode->i_mode) && !lo->lo_encrypt_key_size) {
> > + backingq = bdev_get_queue(inode->i_bdev);
>
> What happens if the inode is from a filesystem that can have multiple
> backing devices (like btrfs)?

Then I would expect S_ISBLK(inode->i_mode) would not be true. This is
only for the case where you've created a loop device directly on top
of a block device (i.e. you pointed the loop device at /dev/sda). We
use this in our Chrome OS installer because it keeps the logic the
same whether you're installing to a real disk or to a file image.
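
To make that concrete, here is a userspace analogue of the S_ISBLK()
check the patch performs on the backing inode (paths are made up for
illustration):

    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
            struct stat st;

            /* loop device pointed directly at a disk: the new branch runs */
            if (stat("/dev/sda", &st) == 0 && S_ISBLK(st.st_mode))
                    printf("block-backed: mirror the device's zeroing limits\n");

            /* loop device backed by a file (on btrfs or anywhere else) is
             * S_ISREG, so the existing fallocate-based path is used */
            if (stat("/image.img", &st) == 0 && S_ISREG(st.st_mode))
                    printf("file-backed: use the filesystem's fallocate\n");

            return 0;
    }
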
>
> > + blk_queue_max_discard_sectors(q,
> > + backingq->limits.max_write_zeroes_sectors);
> > +
> > + blk_queue_max_write_zeroes_sectors(q,
> > + backingq->limits.max_write_zeroes_sectors);
>
> Also, seeing as filesystems tend to implement PUNCH_HOLE and ZERO_RANGE
> on their own independent of the hardware capabilities of the underlying
> device, it doesn't make much sense to forward the blockdev limits to the
> loop device.
>
> (Put another way, XFS's ZERO_RANGE implementation can zero hundreds of
> gigabytes at a time even if the underlying device is a spinning rust.)

Hopefully my comment above addresses this too (there is no filesystem
in the scenario I'm coding for).
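
To spell the use case out: the installer attaches a loop device
straight to the target disk and then relies on a discard meaning
"reads back as zeroes" no matter what the backing is. A rough sketch
of such a caller (the device path and range are invented):

    #include <fcntl.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>

    int main(void)
    {
            /* hypothetical loop device attached to a raw disk */
            int fd = open("/dev/loop0", O_RDWR);
            uint64_t range[2] = { 0, 1 << 20 };  /* start, length in bytes */

            /* With this patch, the discarded range reads back as zeroes
             * whether the loop device is file- or block-backed. */
            ioctl(fd, BLKDISCARD, &range);

            close(fd);
            return 0;
    }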