Message-ID: <20200902162007.GB5513@redhat.com>
Date: Wed, 2 Sep 2020 12:20:07 -0400
From: Mike Snitzer <snitzer@...hat.com>
To: Christoph Hellwig <hch@....de>
Cc: Jens Axboe <axboe@...nel.dk>, linux-block@...r.kernel.org,
martin.petersen@...cle.com, Hans de Goede <hdegoede@...hat.com>,
Song Liu <song@...nel.org>,
Richard Weinberger <richard@....at>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-raid@...r.kernel.org, Minchan Kim <minchan@...nel.org>,
dm-devel@...hat.com, linux-mtd@...ts.infradead.org,
linux-mm@...ck.org, drbd-dev@...n.linbit.com,
cgroups@...r.kernel.org
Subject: Re: [PATCH 06/14] block: lift setting the readahead size into the
 block layer

On Wed, Sep 02 2020 at 11:11am -0400,
Christoph Hellwig <hch@....de> wrote:
> On Wed, Aug 26, 2020 at 06:07:38PM -0400, Mike Snitzer wrote:
> > On Sun, Jul 26 2020 at 11:03am -0400,
> > Christoph Hellwig <hch@....de> wrote:
> >
> > > Drivers shouldn't really mess with the readahead size, as that is a VM
> > > concept. Instead set it based on the optimal I/O size by lifting the
> > > algorithm from the md driver when registering the disk. Also set
> > > bdi->io_pages there as well by applying the same scheme based on
> > > max_sectors.
> > >
> > > Signed-off-by: Christoph Hellwig <hch@....de>
> > > ---
> > > block/blk-settings.c | 5 ++---
> > > block/blk-sysfs.c | 1 -
> > > block/genhd.c | 13 +++++++++++--
> > > drivers/block/aoe/aoeblk.c | 2 --
> > > drivers/block/drbd/drbd_nl.c | 12 +-----------
> > > drivers/md/bcache/super.c | 4 ----
> > > drivers/md/dm-table.c | 3 ---
> > > drivers/md/raid0.c | 16 ----------------
> > > drivers/md/raid10.c | 24 +-----------------------
> > > drivers/md/raid5.c | 13 +------------
> > > 10 files changed, 16 insertions(+), 77 deletions(-)
> >
> >
> > In general these changes need a solid audit relative to stacking
> > drivers.  That is, the limits stacking methods (blk_stack_limits)
> > vs. the lower-level allocation methods (__device_add_disk).
> >
> > You optimized for the low-level __device_add_disk() establishing the
> > bdi's ra_pages and io_pages.  That happens at the beginning of disk
> > allocation, well before a stacking driver has built up its
> > queue_io_opt() -- whereas previously these values were set in
> > disk_stack_limits() or in driver-specific methods
> > (e.g. dm_table_set_restrictions) that are called _after_ all the
> > limits stacking occurs.
> >
> > Inverting things so that the bdi's ra_pages and io_pages are set
> > this early in __device_add_disk() will break properly setting these
> > values for at least DM, AFAICT.
>
> ra_pages never got inherited by stacking drivers, check it by modifying
> it on an underlying device and then creating a trivial dm or md one.

Sure, I'm not saying that it did.  But if the goal is to set ra_pages
based on io_opt, then to do that correctly for stacking drivers it must
be done in terms of limits stacking, right?  Or at least at a point
after the limits stacking has occurred?  So should DM just open-code
setting ra_pages, like it did for io_pages?
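
To make this concrete, here's a rough sketch of the kind of helper I
mean (the name and exact scaling are illustrative, not from this
patch): derive the bdi fields purely from the final queue limits, so
it is safe to call at any point after the limits have been stacked:

	/*
	 * Illustrative helper, not from this patch: recompute the
	 * bdi's readahead and io size from the queue limits.  Since
	 * it only reads the (already stacked) limits, it can be
	 * called after blk_stack_limits() has folded in all the
	 * underlying devices.
	 */
	static void blk_update_readahead_sketch(struct request_queue *q)
	{
		struct backing_dev_info *bdi = q->backing_dev_info;

		/* read ahead at least twice the optimal I/O size */
		bdi->ra_pages = max(queue_io_opt(q) * 2 / PAGE_SIZE,
				    VM_READAHEAD_PAGES);
		bdi->io_pages = queue_max_sectors(q) >> (PAGE_SHIFT - 9);
	}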

Because setting ra_pages in __device_add_disk() is way too early for DM
-- given that DM uses device_add_disk_no_queue_reg (via
add_disk_no_queue_reg) at DM device creation, before all of the
underlying devices' limits have been stacked.
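
With a helper like that, DM could do the update at the end of
dm_table_set_restrictions(), once blk_stack_limits() has folded in
every table device's limits -- e.g. (again just a sketch, using the
illustrative helper above):

	/*
	 * Sketch: last step of dm_table_set_restrictions(), after all
	 * of the underlying devices' limits are stacked in q->limits.
	 */
	blk_update_readahead_sketch(q);
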
> And I think that is a good thing - in general we shouldn't really mess
> with this thing from drivers if we can avoid it. I've kept the legacy
> aoe and md parity raid cases, out of which the first looks pretty weird
> and the md one at least remotely sensible.

I don't want drivers, like DM, to have to worry about these settings.
So I agree with that goal ;)

> ->io_pages is still inherited in disk_stack_limits, just like before
> so no change either.

I'm missing where, but I've only looked closely at this 06/14 patch.
In it I see that io_pages is no longer adjusted in disk_stack_limits().
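
For reference, this is roughly the assignment I mean; pre-patch
disk_stack_limits() ended with (quoting from memory, so double-check
me):

	t->backing_dev_info->io_pages =
		t->limits.max_sectors >> (PAGE_SHIFT - 9);
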
Mike