Message-ID: <49D68631.4030706@garzik.org>
Date: Fri, 03 Apr 2009 17:57:05 -0400
From: Jeff Garzik <jeff@...zik.org>
To: Janne Grunau <j@...nau.net>
CC: Mark Lord <lkml@....ca>,
Lennart Sorensen <lsorense@...lub.uwaterloo.ca>,
Jens Axboe <jens.axboe@...cle.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Ingo Molnar <mingo@...e.hu>,
Andrew Morton <akpm@...ux-foundation.org>, tytso@....edu,
drees76@...il.com, jesper@...gh.cc,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: Linux 2.6.29

Janne Grunau wrote:
> On Fri, Apr 03, 2009 at 03:57:52PM -0400, Jeff Garzik wrote:
>> Mark Lord wrote:
>>> Grab that file and try it out. Instructions are included within.
>>> Report back again and let us know if it makes any difference.
>>>
>>> Someday I may try and chase down the exact bug that causes mythbackend
>>> to go fsyncing berserk like that, but for now this workaround is fine.
>
> that sounds as if it indeed syncs every 100ms instead of once per
> second over the whole recording. It's intended behaviour for the
> first 64K.
>
>> mythtv/libs/libmythtv/ThreadedFileWriter.cpp is a good place to start
>> (Sync method... uses fdatasync if available, fsync if not).
>>
>> mythtv is definitely a candidate for sync_file_range() style output, IMO.
> yeah, I'm on it.
Just curious, does MythTV need fsync(), or merely to tell the kernel to
begin asynchronously writing data to storage?

sync_file_range(..., SYNC_FILE_RANGE_WRITE) might be enough if you do
not need to actually wait for completion.

This may be the case if the idea behind MythTV's fsync(2) is simply to
prevent the kernel from building up a huge number of dirty pages in the
pagecache [which, in turn, produces bursty write-out behavior].
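
Something like this untested sketch is what I have in mind (not
MythTV's actual code; the helper name is made up):

#define _GNU_SOURCE
#include <fcntl.h>

/* Hypothetical helper: begin asynchronous write-out of the pages
 * backing [off, off+len) without waiting for completion.
 * Needs Linux >= 2.6.17 and _GNU_SOURCE for sync_file_range(2).
 */
static int start_writeback(int fd, off_t off, off_t len)
{
	/* SYNC_FILE_RANGE_WRITE initiates write-out of any dirty
	 * pages in the range; unlike fsync()/fdatasync() it does
	 * not wait for the data to reach the disk.
	 */
	return sync_file_range(fd, off, len, SYNC_FILE_RANGE_WRITE);
}

Calling that every so often as data is appended should keep the dirty
page count bounded without ever stalling the writer thread.
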
Jeff