Message-ID: <520270DB.2060909@roeck-us.net>
Date: Wed, 07 Aug 2013 09:07:55 -0700
From: Guenter Roeck <linux@...ck-us.net>
To: Jan Kara <jack@...e.cz>
CC: Davidlohr Bueso <davidlohr@...com>, Theodore Ts'o <tytso@....edu>,
LKML <linux-kernel@...r.kernel.org>, linux-ext4@...r.kernel.org
Subject: Re: WARNING: CPU: 26 PID: 93793 at fs/ext4/inode.c:230 ext4_evict_inode+0x4c9/0x500 [ext4]() still in 3.11-rc3
On 08/07/2013 08:33 AM, Jan Kara wrote:
> On Wed 07-08-13 08:27:32, Guenter Roeck wrote:
>> On 08/07/2013 08:20 AM, Jan Kara wrote:
>>> On Thu 01-08-13 20:58:46, Davidlohr Bueso wrote:
>>>> On Thu, 2013-08-01 at 22:33 +0200, Jan Kara wrote:
>>>>> Hi,
>>>>>
>>>>> On Thu 01-08-13 13:14:19, Davidlohr Bueso wrote:
>>>>>> FYI I'm seeing loads of the following messages with Linus' latest
>>>>>> 3.11-rc3 (which includes 822dbba33458cd6ad)
>>>>> Thanks for notice. I see you are running reaim to trigger this. What
>>>>> workload?
>>>>
>>>> After re-running the workloads one by one, I finally hit the issue again
>>>> with 'dbase'. FWIW I'm using ramdisks + ext4.
>>> Hum, I'm not able to reproduce this with current Linus' kernel - commit
>>> e4ef108fcde0b97ed38923ba1ea06c7a152bab9e - I've tried with ramdisk but no
>>> luck. Are you using some special mount options?
>>>
>> I don't see this commit in the upstream kernel?
> It is Linus's merge of Tejun's libata fix from Tuesday...
>
>> I tried reproducing the problem on the same system I had seen 822dbba33458cd6ad on,
>> with the same workload. It has now been running since last Friday, but I have
>> not seen any problems.
> Ah, OK, so it may be fixed after all. If you happen to see it again,
> please let me know. Thanks!
>
At least the problem I found, yes. The problem Davidlohr found may be a different one.
Guenter
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/