Message-Id: <20080919164524.295e1f9a.akpm@linux-foundation.org>
Date: Fri, 19 Sep 2008 16:45:24 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Kiyoshi Ueda <k-ueda@...jp.nec.com>
Cc: James.Bottomley@...senPartnership.com,
linux-kernel@...r.kernel.org, linux-scsi@...r.kernel.org,
dm-devel@...hat.com, j-nomura@...jp.nec.com, k-ueda@...jp.nec.com
Subject: Re: [PATCH 1/2] lld busy status exporting interface
On Fri, 19 Sep 2008 19:11:22 -0400 (EDT)
Kiyoshi Ueda <k-ueda@...jp.nec.com> wrote:
> > Back in the days when we first did the backing_dev_info.congested_fn()
> > logic it was decided that there basically was no single place in which
> > the congested state could be stored.
> >
> > So we ended up deciding that whenever a caller wants to know a
> > backing_dev's congested status, it has to call in to the
> > ->congested_fn() and that congested_fn would then call down into all
> > the constituent low-level drivers/queues/etc asking each one if it is
> > congested.
>
> bdi_lld_congested() also does that using bdi_congested(), which calls
> ->congested_fn().
> And only real device drivers (e.g. scsi, ide) set/clear the flag.
> Stacking drivers like request-based dm don't.
umm, OK, that should work.
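
For illustration, a rough sketch of that division of labour against the
2.6.27-era congestion interfaces; BDI_lld_congested comes from the patch
under discussion, not mainline, and the helper names are mine:

#include <linux/backing-dev.h>
#include <linux/bitops.h>

/* A real low-level driver (scsi, ide) flips the bit directly. */
static void lld_set_busy(struct backing_dev_info *bdi, int busy)
{
	/* BDI_lld_congested is the bit the patch adds; only leaf
	 * drivers touch it. */
	if (busy)
		set_bit(BDI_lld_congested, &bdi->state);
	else
		clear_bit(BDI_lld_congested, &bdi->state);
}

/*
 * A caller asks via bdi_congested(), which routes the query through
 * ->congested_fn() when one is registered, so a stacking driver can
 * fan the question out to its constituent queues; a leaf device just
 * has its flag read.  This mirrors the inline bdi_congested() helper:
 */
static int query_congested(struct backing_dev_info *bdi, int bdi_bits)
{
	if (bdi->congested_fn)
		return bdi->congested_fn(bdi->congested_data, bdi_bits);
	return bdi->state & bdi_bits;
}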
> So stacking drivers always check the BDI_lld_congested flag of
> the bottom device of the device stack.
How does a stacking driver know that the backing_device which it is
looking at is a "lowest level" device?
I don't think it does - only the code which implements that device
knows this, so the stacking driver has to call into that device's
congested_fn(), yes?
In which case one wonders why the state was stored in the
backing_dev_info at all. Why not store it in the device-private data
to avoid confusion and abuse?
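
Something like this, roughly - the mydev names are hypothetical:

struct mydev {
	struct backing_dev_info bdi;
	int busy;		/* private; never stored in bdi->state */
};

static int mydev_congested_fn(void *data, int bdi_bits)
{
	struct mydev *dev = data;

	/* Congestion is reported only through ->congested_fn(), so no
	 * stacking driver can misread a flag in bdi->state. */
	return dev->busy ? bdi_bits : 0;
}

/* registered at init time:
 *	dev->bdi.congested_fn = mydev_congested_fn;
 *	dev->bdi.congested_data = dev;
 */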
> The BDI_[write|read]_congested flags have been used for the queue's
> congestion, so that I/O queueing/merging can proceed even if
> the lld is congested. So I added a new flag.
iirc, BDI_read/write_congested predated the introduction of the
congested_fn() and perhaps should have been removed once we went to the
congested_fn approach. But it's been a while since I spent a lot of
time looking in there.
>
> > I mean, as a simple example which is probably wrong - what happens if a
> > single backing_dev is implemented via two different disks and
> > controllers, and they both become congested and then one of them comes
> > uncongested. Is there no way in which the above implementation can
> > incorrectly flag the backing_dev as being uncongested?
>
> Do you mean that "a single backing_dev via two disks/controllers" is
> a dm device (e.g. a dm-multipath device having 2 paths, a dm-mirror
> device having 2 disks)?
Something along those lines, sure.
> If so, dm doesn't set/clear the flag, and whether the dm device
> itself is congested depends on dm's target driver.
>
> As an example with dm-multipath (sketched below):
> o call bdi_lld_congested() for each path.
> o if one of the paths is uncongested, dm-multipath will decide
> the dm device is uncongested and dispatch incoming I/Os to
> the uncongested path.
hm, OK.
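
So, a sketch of that decision, with a hypothetical path list standing
in for the real dm-multipath internals (bdi_lld_congested() is again
the interface from the patch):

struct path_info {		/* hypothetical stand-in */
	struct backing_dev_info *bdi;
	struct path_info *next;
};

static int mpath_congested(struct path_info *paths)
{
	struct path_info *p;

	/* Uncongested as soon as any one path is uncongested; incoming
	 * I/O is then dispatched down that path. */
	for (p = paths; p; p = p->next)
		if (!bdi_lld_congested(p->bdi))
			return 0;

	return 1;		/* every path is congested */
}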
> As an example with dm-mirror (also sketched below):
> o call bdi_lld_congested() for each disk.
> o if the incoming I/O is a READ, the logic is the same as for
> dm-multipath above.
> if the incoming I/O is a WRITE, dm-mirror will decide the dm
> device is uncongested only when all disks are uncongested.
>
> Thanks,
> Kiyoshi Ueda
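
And the dm-mirror variant of the same sketch (same hypothetical
structures; READ/WRITE as in linux/fs.h): a READ behaves like
multipath, while a WRITE has to reach every mirror leg, so all legs
must be uncongested:

static int mirror_congested(struct path_info *disks, int rw)
{
	struct path_info *p;

	if (rw == READ) {
		/* Any one uncongested disk can service the read. */
		for (p = disks; p; p = p->next)
			if (!bdi_lld_congested(p->bdi))
				return 0;
		return 1;
	}

	/* A WRITE goes to every disk: congested if any disk is. */
	for (p = disks; p; p = p->next)
		if (bdi_lld_congested(p->bdi))
			return 1;

	return 0;
}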