Message-Id: <5BC8670C-E7B0-4827-BB74-B96AF327CF1D@rudd-o.com>
Date: Fri, 3 Sep 2010 10:38:20 -0700
From: "Manuel Amador (Rudd-O)" <rudd-o@...d-o.com>
To: Chris Friesen <chris.friesen@...band.com>
Cc: "fuse-devel@...ts.sourceforge.net" <fuse-devel@...ts.sourceforge.net>,
"axboe@...nel.dk" <axboe@...nel.dk>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [fuse-devel] ionice and FUSE-based filesystems?
I also wanna know this!!
Manuel Amador (Rudd-O)
Cloud.com, Inc. -- http://www.cloud.com
On Sep 2, 2010, at 13:37, Chris Friesen <chris.friesen@...band.com> wrote:
>
> I'm curious about the limits of using ionice with multiple layers of
> filesystems and devices.
>
> In particular, we have a scenario with a FUSE-based filesystem running
> on top of xfs on top of LVM, on top of software RAID, on top of
> spinning
> disks. (Something like that, anyways.) The IO scheduler is CFQ.
>
> In the above scenario, would you expect the IO nice value of the
> writes done by a task to be propagated all the way down to the disk
> writes? Or would it get stripped off at some point?
>
> Thanks,
> Chris
>
> --
> Chris Friesen
> Software Developer
> GENBAND
> chris.friesen@...band.com
> www.genband.com
>
> _______________________________________________
> fuse-devel mailing list
> fuse-devel@...ts.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/fuse-devel
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/