Message-ID: <alpine.DEB.1.10.1007301434080.9875@uplift.swm.pp.se>
Date: Fri, 30 Jul 2010 14:40:46 +0200 (CEST)
From: Mikael Abrahamsson <swmike@....pp.se>
To: linux-kernel@...r.kernel.org
Subject: Re: MD raid and different elevators (disk i/o schedulers) (fwd)
Hi, this might be more appropriate for lkml (or is there a better place?),
since people who know how these layers interact are more likely to be here
than on the linux-raid list.
If block caching and readahead are done at every level, then quite a lot of
redundant block data is going to sit in memory across all these layers. I
can understand that it might make sense to keep a block cache for the fs
and perhaps for the drive layer, but for the intermediate
md->dm(crypto)->lvm layers this might make less sense?
What about the default readahead for these devices? Doing readahead on the
dm device might be bad in some situations and good in others? (A quick way
to inspect the current per-layer settings is sketched below.)
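
For reference, here's a minimal sketch of how one could check the readahead
each layer currently has configured, using the /sys/block/<dev>/queue/read_ahead_kb
sysfs attribute. The device names (sda, sdb, md0, dm-0, dm-1) are just
example assumptions for a stack like the one above; adjust for your system:

#!/usr/bin/env python3
# Print the configured readahead (in KiB) for each layer of a
# hypothetical (drives)->md->dm->lvm stack. Device names are examples
# only; they will differ per system. Requires Linux sysfs at /sys/block.

from pathlib import Path

# Assumed example stack, bottom to top.
DEVICES = ["sda", "sdb", "md0", "dm-0", "dm-1"]

def read_ahead_kb(dev: str) -> str:
    """Read the readahead setting for one block device, or 'n/a'."""
    path = Path("/sys/block") / dev / "queue" / "read_ahead_kb"
    try:
        return path.read_text().strip()
    except OSError:
        return "n/a"

for dev in DEVICES:
    print(f"{dev}: read_ahead_kb={read_ahead_kb(dev)}")

Comparing the numbers across layers shows directly whether readahead is
configured (and potentially duplicated) at more than one level of the stack.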
---------- Forwarded message ----------
Date: Thu, 29 Jul 2010 12:53:35 +0200 (CEST)
From: Mikael Abrahamsson <swmike@....pp.se>
To: Fabio Muzzi <liste@...gan.org>
Cc: linux-raid@...r.kernel.org
Subject: Re: MD raid and different elevators (disk i/o schedulers)
On Thu, 29 Jul 2010, Fabio Muzzi wrote:
> Is this true? Are there compatibility issues using different i/o schedulers
> with software raid?
I'd actually like to raise this one level further:
In the case of (drives)->md->dm(crypto)->lvm->fs, how do the schedulers,
readahead settings, block sizes, barriers etc. interact through all these
layers? Is block caching done on all layers? Is readahead done on all
layers?
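
To make the scheduler part of the question concrete, here is a small sketch
that prints the elevator in effect at each layer via
/sys/block/<dev>/queue/scheduler (the active one appears in [brackets]).
The device names are again example assumptions; stacked bio-based devices
such as md and dm may report "none" or lack the file entirely, since only
the bottom request queues run an elevator:

#!/usr/bin/env python3
# Show which I/O scheduler (elevator) is active at each layer of an
# assumed (drives)->md->dm stack. Device names are illustrative only.

from pathlib import Path

DEVICES = ["sda", "sdb", "md0", "dm-0"]  # assumed stack, bottom to top

for dev in DEVICES:
    sched = Path("/sys/block") / dev / "queue" / "scheduler"
    try:
        # Output looks like e.g. "noop deadline [cfq]".
        print(f"{dev}: {sched.read_text().strip()}")
    except OSError:
        print(f"{dev}: no scheduler file (bio-based stacked device?)")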
--
Mikael Abrahamsson email: swmike@....pp.se