Message-ID: <4C879CCF.5080703@kernel.dk>
Date: Wed, 08 Sep 2010 16:25:19 +0200
From: Jens Axboe <axboe@...nel.dk>
To: Chris Friesen <chris.friesen@...band.com>
CC: fuse-devel@...ts.sourceforge.net,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: ionice and FUSE-based filesystems?
On 2010-09-03 21:25, Chris Friesen wrote:
> On 09/03/2010 12:57 PM, Jens Axboe wrote:
>> On 09/02/2010 10:37 PM, Chris Friesen wrote:
>>>
>>> I'm curious about the limits of using ionice with multiple layers of
>>> filesystems and devices.
>>>
>>> In particular, we have a scenario with a FUSE-based filesystem running
>>> on top of xfs on top of LVM, on top of software RAID, on top of spinning
>>> disks. (Something like that, anyways.) The IO scheduler is CFQ.
>>>
>>> In the above scenario would you expect the IO nice value of the writes
>>> done by a task to be propagated all the way down to the disk writes? Or
>>> would they get stripped off at some point?
>>
>> Miklos should be able to expand on what fuse does, but at least on
>> the write side priorities will only be carried through for non-buffered
>> writes with the current design (since actual write out happens out of
>> context of the submitting application).
>
> So we're talking either O_SYNC or O_DIRECT only? That seems an
> unfortunate limitation given that it then forces the app to block. Has
> any thought been given to somehow associating the priority with the
> actual operation so that it would affect buffered writes as well?
Yes. Work is in progress to track dirty pages, which would then allow
ionice to take effect for buffered writes as well.
--
Jens Axboe
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/