Message-ID: <20200403025757.GL10737@dread.disaster.area>
Date: Fri, 3 Apr 2020 13:57:57 +1100
From: Dave Chinner <david@...morbit.com>
To: "Martin K. Petersen" <martin.petersen@...cle.com>
Cc: Chaitanya Kulkarni <chaitanya.kulkarni@....com>, hch@....de,
darrick.wong@...cle.com, axboe@...nel.dk, tytso@....edu,
adilger.kernel@...ger.ca, ming.lei@...hat.com, jthumshirn@...e.de,
minwoo.im.dev@...il.com, damien.lemoal@....com,
andrea.parri@...rulasolutions.com, hare@...e.com, tj@...nel.org,
hannes@...xchg.org, khlebnikov@...dex-team.ru, ajay.joshi@....com,
bvanassche@....org, arnd@...db.de, houtao1@...wei.com,
asml.silence@...il.com, linux-block@...r.kernel.org,
linux-ext4@...r.kernel.org
Subject: Re: [PATCH 0/4] block: Add support for REQ_OP_ASSIGN_RANGE
On Thu, Apr 02, 2020 at 09:34:43PM -0400, Martin K. Petersen wrote:
>
> Hi Dave!
>
> > Ok, so ext4 has a very limited max allocation size for an extent, so
> > I expect this won't cause huge latency problems. However, what
> > happens when we use XFS, have a 64kB block size, and fallocate() is
> > allocating disk space in contiguous 100GB extents and passing those
> > down to the block device?
>
> Depends on the device.
Great. :(
> > How does this get split by dm devices? Are raid stripes going to dice
> > this into separate stripe-unit-sized bios, so instead of single large
> > requests we end up with hundreds or thousands of tiny allocation
> > requests being issued?
>
> There is nothing special about this operation. It needs to be handled
> the same way as all other splits. I.e. ideally coalesced at the bottom
> of the stack so we can issue larger, contiguous commands to the
> hardware.
>
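To put a number on the 100GB case: a quick back-of-the-envelope sketch
of how many pieces a single extent becomes if a stacked driver splits
it at stripe boundaries and nothing re-merges them. The 512kB stripe
unit is an assumed figure, purely for illustration, not something from
this thread.

/*
 * Illustrative only: how many stripe-unit-sized pieces one large
 * REQ_OP_ASSIGN_RANGE bio becomes if it is split naively at stripe
 * boundaries and never coalesced lower down the stack.
 */
#include <stdio.h>

int main(void)
{
	unsigned long long extent_bytes = 100ULL << 30;	/* 100GB extent */
	unsigned long long stripe_unit  = 512ULL << 10;	/* assumed 512kB */

	printf("one fallocate() -> up to %llu split bios\n",
	       (extent_bytes + stripe_unit - 1) / stripe_unit);
	return 0;
}

That's a couple of hundred thousand bios from a single fallocate()
call if nothing coalesces them back together.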
> > How are we expecting hardware to behave here? Is this a queued
> > command in the scsi/nvme/sata protocols? Or is this, for the moment,
> > just a special snowflake that we can't actually use in production
> > because the hardware just can't handle what we throw at it?
>
> For now it's SCSI and queued. Only found in high-end thinly provisioned
> storage arrays and not in your average SSD.
So it's a special snowflake :)
> The performance expectation for REQ_OP_ALLOCATE is that it is faster
> than a write to the same block range since the device potentially needs
> to do less work. I.e. the device simply needs to decrement the free
> space and mark the LBAs reserved in a map. It doesn't need to write all
> the blocks to zero them. If you want zeroed blocks, use
> REQ_OP_WRITE_ZEROES.
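For clarity, the filesystem-facing distinction being drawn here amounts
to something like the sketch below. blkdev_issue_zeroout() is the
existing WRITE_ZEROES helper; blkdev_issue_allocate() is a hypothetical
name standing in for whatever helper this series ends up exposing for
REQ_OP_ALLOCATE.

/*
 * Sketch only: blkdev_issue_allocate() is a hypothetical name for the
 * REQ_OP_ALLOCATE helper, blkdev_issue_zeroout() is the existing
 * REQ_OP_WRITE_ZEROES helper.
 */
#include <linux/blkdev.h>

static int prealloc_range(struct block_device *bdev, sector_t sector,
			  sector_t nr_sects, bool need_zeroes)
{
	if (need_zeroes)
		/* The range must read back as zeroes, so the device may
		 * have to touch every block. */
		return blkdev_issue_zeroout(bdev, sector, nr_sects,
					    GFP_KERNEL, 0);

	/* Only pin the LBAs down: the device decrements free space and
	 * marks the range reserved in its map. */
	return blkdev_issue_allocate(bdev, sector, nr_sects, GFP_KERNEL);
}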
I suspect that the implications of wiring filesystems directly up to
this haven't been thought through entirely....
> > IOWs, what sort of latency issues is this operation going to cause
> > on real hardware? Is this going to be like discard? i.e. where we
> > end up not using it at all because so few devices actually handle
> > the massive stream of operations the filesystem will end up sending
> > the device(s) in the course of normal operations?
>
> The intended use case, from a SCSI perspective, is that on a thinly
> provisioned device you can use this operation to preallocate blocks so
> that future writes to the LBAs in question will not fail due to the
> device being out of space. I.e. you would use this to pin down block
> ranges where you cannot tolerate write failures. The advantage over
> writing the blocks individually is that dedup won't apply and that the
> device doesn't actually have to go write all the individual blocks.
.... because when backed by thinp storage, plumbing user-level
fallocate() straight through from the filesystem introduces a
trivial, user-level storage DoS vector....
i.e. a user can just fallocate a bunch of files and, because the
filesystem can do that instantly, can also run the back end array
out of space almost instantly. Storage admins are going to love
this!
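The "pin down ranges where you cannot tolerate write failures" use case
and the abuse case are literally the same system call. A sketch of the
latter, with made-up file names and sizes:

/*
 * Illustration only, names and sizes made up.  With preallocation
 * passed straight through to a thin device, an unprivileged user
 * pinning down real array space is just:
 */
#define _GNU_SOURCE
#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	off_t len = 100ULL << 30;		/* 100GB per file */
	char name[64];
	int i;

	for (i = 0; i < 100; i++) {		/* ~10TB of thin pool, instantly */
		snprintf(name, sizeof(name), "bloat.%d", i);
		int fd = open(name, O_CREAT | O_WRONLY, 0600);
		if (fd < 0 || fallocate(fd, 0, 0, len) < 0) {
			perror(name);
			exit(1);
		}
		close(fd);
	}
	return 0;
}

No data is ever written, but the array's free space is gone.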
Cheers,
Dave.
--
Dave Chinner
david@...morbit.com