Message-ID: <49CA2F41.8030804@themaw.net>
Date: Wed, 25 Mar 2009 22:18:57 +0900
From: Ian Kent <raven@...maw.net>
To: Jeff Layton <jlayton@...hat.com>
CC: Wu Fengguang <fengguang.wu@...el.com>,
Dave Chinner <david@...morbit.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"jens.axboe@...cle.com" <jens.axboe@...cle.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"hch@...radead.org" <hch@...radead.org>,
"linux-nfs@...r.kernel.org" <linux-nfs@...r.kernel.org>
Subject: Re: [PATCH] writeback: reset inode dirty time when adding it back
to empty s_dirty list

Jeff Layton wrote:
> On Wed, 25 Mar 2009 20:17:43 +0800
> Wu Fengguang <fengguang.wu@...el.com> wrote:
>
>> On Wed, Mar 25, 2009 at 07:51:10PM +0800, Jeff Layton wrote:
>>> On Wed, 25 Mar 2009 10:50:37 +0800
>>> Wu Fengguang <fengguang.wu@...el.com> wrote:
>>>
>>>>> Given the right situation though (or maybe the right filesystem), it's
>>>>> not too hard to imagine this problem occurring even in current mainline
>>>>> code with an inode that's frequently being redirtied.
>>>> My reasoning with recent kernels is: for kupdate, s_dirty enqueues
>>>> only happen in __mark_inode_dirty() and redirty_tail(). Newly dirtied
>>>> inodes are parked in s_dirty for 30s. During that time, actively
>>>> redirtied inodes whose dirtied_when is an old stuck value will be
>>>> retried for writeback, re-inserted into a non-empty s_dirty queue,
>>>> and have their dirtied_when refreshed.
>>>>
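For context, redirty_tail() in kernels of that era read roughly as
follows (a paraphrased sketch from memory, not an exact quote of
fs/fs-writeback.c): the timestamp is refreshed only when keeping the
old value would put the inode out of order with the head of s_dirty.

	static void redirty_tail(struct inode *inode)
	{
		struct super_block *sb = inode->i_sb;

		if (!list_empty(&sb->s_dirty)) {
			struct inode *tail;

			/* the most recently dirtied inode sits at the head */
			tail = list_entry(sb->s_dirty.next, struct inode, i_list);
			if (time_before(inode->dirtied_when, tail->dirtied_when))
				inode->dirtied_when = jiffies;
		}
		list_move(&inode->i_list, &sb->s_dirty);
	}

So a redirtied inode with a stale dirtied_when gets a fresh stamp only
when s_dirty is non-empty, which is the case Wu describes above.
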
>>> Doesn't that assume that there are new inodes that are being dirtied?
>>> If you only have the same inodes being redirtied and never any new
>>> ones, the problem still occurs, right?
>> Yes. But will a production server run for months without dirtying a
>> single new inode? (Just out of curiosity; not that I'm unwilling to
>> fix this possible issue. :)
>>
>
> Yes. It's not that the box will run that long without creating a
> single new dirtied inode, but rather that it won't necessarily create
> one on all of its mounts. It's often the case that someone has a
> mountpoint for a dedicated purpose.
>
> Consider a host with a mountpoint that contains logfiles that are
> being heavily written. There's nothing that says they must rotate
> those logs over any particular period (assuming the fs has enough
> space, etc). If the same inodes are constantly being redirtied and no
> new ones are created, then I think this problem can easily happen.
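Concretely, the hazard is in the age test the writeback path applies.
A sketch of the 2.6.29-era check in generic_sync_sb_inodes() (from
memory, so treat the details as approximate):

	/* 'start' is jiffies sampled when this writeback pass began */
	if (time_after(inode->dirtied_when, start))
		break;	/* inode looks too recently dirtied; stop here */

If dirtied_when is stuck at an ancient value, then once jiffies has
advanced more than LONG_MAX ticks past it, the wrapped comparison
reports the inode as dirtied in the future, and it is skipped from
then on.
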
>
>>>>>> ...I see no obvious reasons against unconditionally resetting dirtied_when.
>>>>>>
>>>>>> (a) Delaying an inode's writeback for 30s may be too long - its
>>>>>> blocking condition may well go away within 1s. (b) And it would be
>>>>>> very undesirable if one big file were repeatedly redirtied and hence
>>>>>> had its writeback delayed considerably.
>>>>>>
>>>>>> However, redirty_tail() currently only tries to speed up
>>>>>> writeback-after-redirty in a _best effort_ way. At best it partially
>>>>>> hides the above issues, if there are any. In particular, if (b) is
>>>>>> possible, the bug should already show up in at least some situations.
>>>>>>
>>>>>> For XFS, immediate sync of a redirtied inode is actually discouraged:
>>>>>>
>>>>>> http://lkml.org/lkml/2008/1/16/491
>>>>>>
>>>>>>
>>>>> Ok, those are good points that I need to think about.
>>>>>
>>>>> Thanks for the help so far. I'd welcome any suggestions you have on
>>>>> how best to fix this.
>>>> For NFS, is it desirable to retry a redirtied inode after 30s, after
>>>> a shorter 5s, or after 0.1~5s? Or does the exact timing simply not
>>>> matter?
>>>>
>>> I don't really consider NFS to be a special case here. It just happens
>>> to be where we saw the problem originally. Some of its characteristics
>>> might make it easier to hit this, but I'm not certain of that.
>> There are now two possible solutions:
>> - unconditionally update dirtied_when in redirty_tail();
>> - keep dirtied_when and move redirtied inodes to a new dedicated queue.
>> The first involves less code; the second allows more flexible timing.
>>
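A rough sketch of the first option (hypothetical, untested):

	static void redirty_tail(struct inode *inode)
	{
		/* always refresh the stamp on redirty */
		inode->dirtied_when = jiffies;
		list_move(&inode->i_list, &inode->i_sb->s_dirty);
	}

This guarantees dirtied_when can never get stuck, at the cost of
pushing a frequently redirtied inode another full expiry interval
into the future each time.
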
>> NFS/XFS could be a good starting point for discussing the
>> requirements, so that we can reach a suitable solution.
>>
>
> It sounds like it, yes. I saw that you posted some patches in January
> (including your s_more_io_wait patch). I'll give those a closer look.
> Adding the new s_more_io_wait queue is interesting and might sidestep
> this problem nicely.
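For reference, the dedicated-queue idea from those January patches
amounts to roughly this (queue and helper names as posted then, but
paraphrased from memory): a redirtied inode is parked on
s_more_io_wait with its timestamp untouched, and that queue is spliced
back in for another pass later.

	static void requeue_io_wait(struct inode *inode)
	{
		/* keep dirtied_when intact; just defer to a later pass */
		list_move(&inode->i_list, &inode->i_sb->s_more_io_wait);
	}
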
>
Yes, I was looking at that bit of code, but so far I think it won't be
called for the case we are trying to describe.
Ian