Message-ID: <4E37BFEA.7020601@jp.fujitsu.com>
Date: Tue, 02 Aug 2011 18:14:18 +0900
From: Toshiyuki Okajima <toshi.okajima@...fujitsu.com>
To: Jan Kara <jack@...e.cz>
CC: akpm@...ux-foundation.org, adilger.kernel@...ger.ca,
linux-ext4@...r.kernel.org
Subject: Re: [PATCH] ext3: fix message in ext3_remount for rw-remount case
Hi.
(2011/08/01 18:57), Jan Kara wrote:
> On Mon 01-08-11 18:45:58, Toshiyuki Okajima wrote:
>> (2011/08/01 17:45), Jan Kara wrote:
>>> On Mon 01-08-11 13:54:51, Toshiyuki Okajima wrote:
>>>> If there are some inodes in orphan list while a filesystem is being
>>>> read-only mounted, we should recommend that people umount and then
>>>> mount it when they try to remount with read-write. But the current
>>>> message/comment recommends that they umount and then remount it.
>>>>
>>>> ext3_remount:
>>>> /*
>>>> * If we have an unprocessed orphan list hanging
>>>> * around from a previously readonly bdev mount,
>>>> * require a full umount/remount for now.
>>>> ^^^^^^^^^^^^^^
>>>> */
>>>> if (es->s_last_orphan) {
>>>> printk(KERN_WARNING "EXT3-fs: %s: couldn't "
>>>> "remount RDWR because of unprocessed "
>>>> "orphan inode list. Please "
>>>> "umount/remount instead.\n",
>>>> ^^^^^^^^^^^^^^
>>>> sb->s_id);
>>
>>> OK, so how about using "umount & mount"? The '/' is what would confuse me
>> OK. I modify it like your comment.
>>
>> umount/mount => umount & mount
> Thanks.
>
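FYI, a trivial standalone sketch (not kernel code; "loop0" is a made-up
device name standing in for sb->s_id) that just prints the warning with
the proposed "umount & mount" wording, so the final text can be checked
at a glance:

```shell
#!/bin/sh
# Sketch only: print the remount warning with the proposed wording.
# "loop0" is a hypothetical device name standing in for sb->s_id.
sb_id="loop0"
printf "EXT3-fs: %s: couldn't remount RDWR because of unprocessed orphan inode list. Please umount & mount instead.\n" "$sb_id"
```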
>>> the most... BTW, I guess you didn't really see this message in practice, did
>>> you?
>> No.
>> I have seen this message in practice while the quotacheck command was
>> being executed repeatedly, once an hour.
> Interesting. Are you able to reproduce this? Quotacheck does remount
> read-only + remount read-write but you cannot really remount the filesystem
> read-only when it has orphan inodes and so you should not see those when
> you remount read-write again. Possibly there's race between remounting and
> unlinking...
Yes. I can reproduce it. However, it is not frequently reproduced
by the original procedure (quotacheck once an hour), so I made a
reproducer.
It is:
[go.sh]
#!/bin/sh
dd if=/dev/zero of=./img bs=1k count=1 seek=100k > /dev/null 2>&1
/sbin/mkfs.ext3 -Fq ./img
/sbin/tune2fs -c 0 ./img
mkdir -p mnt
LOOP=10000
for ((i=0; i<LOOP; i++));
do
	mount -o loop ./img ./mnt
	sh ./writer.sh ./mnt &
	PID=$!
	j=0
	while [ 1 ];
	do
		sleep 1
		# remount
		if ((j%2 == 0));
		then
			mount -o loop,remount,ro ./mnt
		else
			mount -o loop,remount,rw ./mnt
		fi
		tail -n 1 /var/log/messages | grep "remount RDWR"
		if [ $? -eq 0 ];
		then
			break
		fi
		j=$((j+1))
	done
	kill -9 $PID
	umount ./mnt
	/sbin/e2fsck -pf ./img
done
exit
[writer.sh]
#!/bin/sh
i=0
path=$1
num=0
stride=64
while [ 1 ];
do
	for ((j=0;j<stride;j++));
	do
		num=$((i*stride + j))
		dd if=/dev/zero of=${path}/file${num} bs=8k count=1 > /dev/null 2>&1 &
	done
	for ((j=0;j<stride;j++));
	do
		num=$((i*stride + j))
		rm -f ${path}/file${num} &
	done
	wait
	i=$((i+1))
done
# vi go.sh
# vi writer.sh
# sh go.sh
(It may take a while to reproduce...)
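As a side note, since go.sh ends each iteration with e2fsck -pf, a small
helper like this can make the result easier to read. It is only a sketch
(the function name is made up); the exit-status bits are the ones
documented in e2fsck(8):

```shell
#!/bin/sh
# Hypothetical helper: decode e2fsck's exit status, whose bits are
# documented in e2fsck(8): 1 = errors corrected, 2 = reboot needed,
# 4 = errors left uncorrected; 0 means the filesystem was clean.
decode_fsck_rc() {
	rc=$1
	[ "$rc" -eq 0 ] && { echo "clean"; return; }
	[ $((rc & 1)) -ne 0 ] && echo "errors corrected"
	[ $((rc & 2)) -ne 0 ] && echo "reboot required"
	[ $((rc & 4)) -ne 0 ] && echo "errors left uncorrected"
}
# Usage after a reproducer run: /sbin/e2fsck -pf ./img; decode_fsck_rc $?
decode_fsck_rc 0
```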
Thanks,
Toshiyuki Okajima