Message-ID: <464B2605.9040200@gmail.com>
Date: Wed, 16 May 2007 17:40:53 +0200
From: Tejun Heo <htejun@...il.com>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: Clemens Schwaighofer <cs@...uila.co.jp>,
linux-kernel <linux-kernel@...r.kernel.org>,
Maneesh Soni <maneesh@...ibm.com>,
Dipankar Sarma <dipankar@...ibm.com>, Greg KH <greg@...ah.com>
Subject: Re: Oops and Panics in 2.6.21.1, 2.6.20.6 and 2.6.19.2

Andrew Morton wrote:
>>> a number of people have hit that, on and off.
>> Yeah, I've been seeing that one. It should have been fixed with the big
>> fat patchset.
>
> Great - fingers crossed.
>
>>> We were close to having a fix, I think, but then we decided that great
>>> chunks of sysfs needed rewriting and I believe that we believe that this
>>> great rewrite will fix this bug.
>> How were we gonna fix it? If it isn't too complex, I can cook up a
>> patch for -stable series.
>
> Do we actually understand the causes?

Yeah, I think I do. Basically, the problem is that on-demand attach and
reclamation update sd->s_dentry, but accesses to it aren't synchronized
properly. In the big fat patchset, I first tried to fix it by removing
sd->s_dentry completely, which didn't work because of shadow nodes, so
the second try was to fix the synchronization; that one is in -mm now.
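
To make the failure mode concrete, here's a minimal userspace sketch of
that pattern. The names (dentry_stub, attach(), reclaim()) are made up
for illustration; this is not the actual sysfs code or the -mm patch,
just the shape of it: a pointer that gets populated on demand and torn
down later, with every access funneled through one lock.

/*
 * Minimal userspace model of the race described above (illustrative
 * only; the names and layout are hypothetical, not the real sysfs
 * code).
 *
 * Build: gcc -Wall -pthread -o sd_race sd_race.c
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct dentry_stub {
	char name[32];
};

struct sysfs_dirent_stub {
	pthread_mutex_t lock;		/* guards 'cached' below */
	struct dentry_stub *cached;	/* plays the role of sd->s_dentry */
};

/* on-demand attach: create the cached object only if it's missing */
static struct dentry_stub *attach(struct sysfs_dirent_stub *sd,
				  const char *name)
{
	struct dentry_stub *d;

	pthread_mutex_lock(&sd->lock);
	if (!sd->cached) {
		sd->cached = calloc(1, sizeof(*sd->cached));
		snprintf(sd->cached->name, sizeof(sd->cached->name),
			 "%s", name);
	}
	d = sd->cached;		/* real code would also take a ref here */
	pthread_mutex_unlock(&sd->lock);
	return d;
}

/* reclamation: drop the cached object. Without the lock this can run
 * while attach() (or any reader) is still looking at the old pointer,
 * which is the kind of use-after-free the oopses point at. */
static void reclaim(struct sysfs_dirent_stub *sd)
{
	pthread_mutex_lock(&sd->lock);
	free(sd->cached);
	sd->cached = NULL;
	pthread_mutex_unlock(&sd->lock);
}

int main(void)
{
	struct sysfs_dirent_stub sd = { .lock = PTHREAD_MUTEX_INITIALIZER };

	printf("attached:    %s\n", attach(&sd, "uevent")->name);
	reclaim(&sd);
	printf("re-attached: %s\n", attach(&sd, "uevent")->name);
	reclaim(&sd);
	return 0;
}

Drop the two mutex calls in either function and you get the racy
variant: a reader can pick up the cached pointer right as reclaim()
frees it.
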
>> The safest approach I can think of is making
>> dentries for attributes unreclaimable but those are made reclaimable for
>> good reasons. :-(
>
> Yeah, that was the google workaround. It's OK unless you happen to have
> thousands of disks on an ia32 box.

I see. I thought there was a different approach to fixing the problem.
I'll try to backport the synchronization fix, but I'm afraid it may be
too risky for -stable. If it seems too risky, I'll send a patch to
disable reclamation instead.
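
Roughly, that fallback would be the opposite trade-off in the model
above (again hypothetical, not the actual patch): never let reclamation
free the cached object, so there is nothing left to race against, at
the cost of keeping every attribute's cached object pinned in memory,
which is why it hurts with thousands of disks on an ia32 box.

/* Fallback sketch, reusing the stub types from the sketch above
 * (hypothetical, not the actual disable-reclamation patch): the cached
 * object is created once under the lock and then never freed, so no
 * reader can ever see it disappear, but the memory stays pinned for as
 * long as the device lives. */
static struct dentry_stub *attach_pinned(struct sysfs_dirent_stub *sd,
					 const char *name)
{
	pthread_mutex_lock(&sd->lock);
	if (!sd->cached) {
		sd->cached = calloc(1, sizeof(*sd->cached));
		snprintf(sd->cached->name, sizeof(sd->cached->name),
			 "%s", name);
	}
	pthread_mutex_unlock(&sd->lock);
	return sd->cached;
}

/* reclamation becomes a deliberate no-op */
static void reclaim_pinned(struct sysfs_dirent_stub *sd)
{
	(void)sd;
}
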
--
tejun