Message-ID: <4C814B8D.7050100@genband.com>
Date: Fri, 03 Sep 2010 13:25:01 -0600
From: Chris Friesen <chris.friesen@...band.com>
To: Jens Axboe <axboe@...nel.dk>
CC: fuse-devel@...ts.sourceforge.net,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: ionice and FUSE-based filesystems?
On 09/03/2010 12:57 PM, Jens Axboe wrote:
> On 09/02/2010 10:37 PM, Chris Friesen wrote:
>>
>> I'm curious about the limits of using ionice with multiple layers of
>> filesystems and devices.
>>
>> In particular, we have a scenario with a FUSE-based filesystem running
>> on top of xfs on top of LVM, on top of software RAID, on top of spinning
>> disks. (Something like that, anyways.) The IO scheduler is CFQ.
>>
>> In the above scenario would you expect the IO nice value of the writes
>> done by a task to be propagated all the way down to the disk writes? Or
>> would they get stripped off at some point?
>
> Miklos should be able to expand on what fuse does, but at least on
> the write side priorities will only be carried through for non-buffered
> writes with the current design (since actual write out happens out of
> context of the submitting application).
So we're talking about only O_SYNC or O_DIRECT writes? That seems like
an unfortunate limitation, given that it forces the app to block. Has
any thought been given to somehow associating the priority with the
actual I/O operation itself, so that it would affect buffered writes as
well?
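(For concreteness, here's roughly the case I understand does work
today -- a minimal sketch only, with the IOPRIO_* constants copied from
include/linux/ioprio.h since glibc doesn't wrap the syscalls, and error
handling omitted:)

#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* from include/linux/ioprio.h */
#define IOPRIO_WHO_PROCESS	1
#define IOPRIO_CLASS_BE		2
#define IOPRIO_CLASS_SHIFT	13
#define IOPRIO_PRIO_VALUE(c, d)	(((c) << IOPRIO_CLASS_SHIFT) | (d))

int main(void)
{
	void *buf;
	int fd;

	/* best-effort class, level 0 -- same as "ionice -c2 -n0" */
	syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
		IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 0));

	/* O_DIRECT needs an aligned buffer */
	posix_memalign(&buf, 4096, 4096);
	memset(buf, 0, 4096);

	/* the write is submitted in our context, so CFQ sees our prio */
	fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
	write(fd, buf, 4096);
	close(fd);
	return 0;
}

With a plain buffered write the flusher threads do the actual
submission later, so whatever priority they run at is what CFQ sees.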
As for fuse... I was concerned that the addition of userspace tasks to
handle the filesystem operations would result in the I/O operations
taking on the priority of the fuse tasks rather than that of the
originating task. Or does fuse adjust its I/O nice level according to
that of the incoming requests?
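(Something like the following is what I had in mind -- purely
hypothetical, I'm not claiming any existing daemon does this. The
libfuse write handler looks up the requester's I/O priority and adopts
it on the daemon thread before touching the backing store:)

#define FUSE_USE_VERSION 26
#include <errno.h>
#include <fuse.h>
#include <sys/syscall.h>
#include <unistd.h>

#define IOPRIO_WHO_PROCESS	1

static int myfs_write(const char *path, const char *buf, size_t size,
		      off_t off, struct fuse_file_info *fi)
{
	/* pid of the task that issued the request */
	pid_t caller = fuse_get_context()->pid;
	ssize_t ret;

	/* adopt the caller's I/O priority on this daemon thread */
	int prio = syscall(SYS_ioprio_get, IOPRIO_WHO_PROCESS, caller);
	if (prio >= 0)
		syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0, prio);

	ret = pwrite((int)fi->fh, buf, size, off);
	return ret < 0 ? -errno : (int)ret;
}

Even then, per your point above, it would only help when the daemon
itself writes to the backing store non-buffered.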
Chris
--
Chris Friesen
Software Developer
GENBAND
chris.friesen@...band.com
www.genband.com