Message-ID: <483628B7.1080704@np.css.fujitsu.com>
Date: Fri, 23 May 2008 11:15:19 +0900
From: Kentaro Makita <k-makita@...css.fujitsu.com>
To: linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Cc: akpm@...ux-foundation.org, dgc@....com, viro@...IV.linux.org.uk,
harvey.harrison@...il.com
Subject: Re: [PATCH][RFC]fix soft lock up at NFS mount by per-SB LRU-list
of unused dentries
Hi David,
Thank you for reviewing the patch.
I'll fix the coding style issues in the next post. And...
David Chinner wrote:
> On Thu, May 22, 2008 at 11:22:18AM +0900, Kentaro Makita wrote:
>> + }
>> + }
>> + }
>
> I'm wondering if this loop holds the dcache_lock for an excessively long
> time. I guess the hold time is limited by the size of *count being
> passed in. I think we could also do a:
>
> cond_resched_lock(&dcache_lock);
>
> in the loop here to prevent this from occurring....
Did you mean:
 - scan sb->s_dentry_lru and move dentries to a temporary list,
   with dcache_lock held
 - cond_resched_lock(&dcache_lock);
 - prune the dentries on the temporary list
Is that right? In code, roughly like the sketch below.
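(A rough sketch only, not the actual patch; sb and count are as in the
patch, and the per-dentry pruning / d_lock details are hidden behind
prune_one_dentry().)

	LIST_HEAD(tmp);

	spin_lock(&dcache_lock);
	while (*count > 0 && !list_empty(&sb->s_dentry_lru)) {
		struct dentry *dentry = list_entry(sb->s_dentry_lru.prev,
						   struct dentry, d_lru);

		/* collect victims on a private list first */
		list_move(&dentry->d_lru, &tmp);
		(*count)--;
		/* drop and retake dcache_lock if a reschedule is due */
		cond_resched_lock(&dcache_lock);
	}
	/* now prune what was collected */
	while (!list_empty(&tmp)) {
		struct dentry *dentry = list_entry(tmp.prev,
						   struct dentry, d_lru);

		prune_one_dentry(dentry);
	}
	spin_unlock(&dcache_lock);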
....
>> + spin_lock(&sb_lock);
>> + list_for_each_entry(sb, &super_blocks, s_list) {
>
> Question on lock ordering of sb_lock vs dcache_lock - which is the inner
> lock? Are the two of them held together anywhere else? (/me doesn't
> have time to search the code right now).
>
The sb_lock is the inner lock. I searched the whole tree and found that only
prune_dcache() holds both sb_lock and dcache_lock.
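Just to spell out the nesting I use (a sketch of the lock ordering only;
the per-SB pruning work itself is elided):

	struct super_block *sb;

	spin_lock(&dcache_lock);	/* outer lock */
	spin_lock(&sb_lock);		/* inner lock */
	list_for_each_entry(sb, &super_blocks, s_list) {
		/* per-SB pruning work goes here */
	}
	spin_unlock(&sb_lock);
	spin_unlock(&dcache_lock);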
Best Regards,
Kentaro Makita
>
> Otherwise it's looking good.
>
> Cheers,
>
> Dave.