Message-ID: <20090305094623.GA17815@wotan.suse.de>
Date:	Thu, 5 Mar 2009 10:46:23 +0100
From:	Nick Piggin <npiggin@...e.de>
To:	"Jorge Boncompte [DTI2]" <jorge@...2.net>
Cc:	ext-adrian.hunter@...ia.com, LKML <linux-kernel@...r.kernel.org>
Subject: Re: Error testing ext3 on brd ramdisk

On Thu, Mar 05, 2009 at 10:19:46AM +0100, Jorge Boncompte [DTI2] wrote:
> Nick Piggin wrote:
> >>------------
> >>mount -no remount,ro /dev/ram0
> >>dd if=/dev/ram0 of=config.bin bs=1k count=1000
> >>mount -no remount,rw /dev/ram0
> >>md5sum config.bin
> >>dd if=config.bin of=/dev/hda1
> >>echo $md5sum | dd of=/dev/hda1 bs=1k seek=1100 count=32
> >>------------
> >>
> >>on system boot
> >>
> >>------------
> >>CHECK MD5SUM
> >>dd if=/dev/hda1 of=/dev/ram0 bs=1k count=1000
> >>fsck.minix -a /dev/ram0
> >>mount -nt minix /dev/ram0 /etc -o rw
> >>------------
> >>
> >>	I have never seen an MD5 failure on boot, just sometimes the 
> >>	filesystem is corrupted. Kernel config attached.
> >
> > From your description, it suggests that the corrupted image is being
> >read from /dev/ram0 (because the md5sum passes).
> 
> 	No, it is read from /dev/hda1.

No I mean when it is first read from /dev/ram0 when you create the
image. Can you put some fsck.minix checks on the image file to try
to narrow down exactly when it is getting corrupted?
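For reference, the checksum round-trip in the original script can be exercised on plain files, with no ramdisk or root needed. This is only a sketch of the same dd/md5sum bookkeeping: `backing.img` here is a hypothetical stand-in for /dev/hda1, and `conv=notrunc` is added because dd truncates regular files on seek, which it does not do on a block device:

```shell
#!/bin/sh
# Work in a scratch directory so nothing real is touched.
cd "$(mktemp -d)" || exit 1

# Stand-in for the 1000 KiB image dd'd off /dev/ram0.
dd if=/dev/urandom of=config.bin bs=1k count=1000 2>/dev/null
sum_before=$(md5sum config.bin | cut -d' ' -f1)

# Stand-in for /dev/hda1: image at offset 0, checksum at 1100 KiB,
# mirroring the seek=1100 in the original script.
dd if=/dev/zero of=backing.img bs=1k count=1200 2>/dev/null
dd if=config.bin of=backing.img conv=notrunc 2>/dev/null
printf '%s' "$sum_before" | \
    dd of=backing.img bs=1k seek=1100 conv=notrunc 2>/dev/null

# "On boot": read the image and the stored checksum back and compare.
dd if=backing.img of=restored.bin bs=1k count=1000 2>/dev/null
sum_stored=$(dd if=backing.img bs=1k skip=1100 count=1 2>/dev/null | head -c 32)
sum_after=$(md5sum restored.bin | cut -d' ' -f1)

[ "$sum_after" = "$sum_stored" ] && echo "md5 OK" || echo "md5 MISMATCH"
```

On regular files this always prints "md5 OK"; in Jorge's setup the md5 also always passes, which is what points the suspicion at the /dev/ram0 read rather than the /dev/hda1 round trip.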


> >In your script, can you run fsck.minix on config.bin when you first
> >create it? What if you unmount /dev/ram0 before copying the image?
> 
> 	Yesterday I did some tests and found that doing...
> 
> -----------
> umount /etc (/etc is what is mounted from /dev/ram0)
> dd if=/dev/zero of=/dev/ram0 bs=1k count=1000
> mount /dev/ram0 /etc -t minix -o rw
> -----------
> ...succeeds and mounts a corrupted filesystem with the old content. Doing 
> the same with the old ramdisk driver fails on mount with "no filesystem 
> found".
> 
> If I do...
> -----------
> umount /etc (/etc is what is mounted from /dev/ram0)
> echo 3 > /proc/sys/vm/drop_caches
> dd if=/dev/zero of=/dev/ram0 bs=1k count=1000
> mount /dev/ram0 /etc -t minix -o rw
> ----------
> ... then the mount fails with no filesystem found as it should.
> 
> 	Does this ring any bell? :-)

Humph. It seems like a problem with the buffercache layer rather
than brd itself. I'll dig some more.
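The sequence Jorge found to behave correctly can be wrapped in a small helper. This is a sketch only, using the paths from his report; it requires root, a minix-formatted ramdisk, and the drop_caches knob (Linux 2.6.16+), and the explicit cache flush is the workaround, not a fix for the underlying buffercache staleness:

```shell
#!/bin/sh
# Re-zero a ramdisk and remount it, flushing caches first so the mount
# sees the device's real contents rather than stale buffercache pages.
refresh_ramdisk() {
    dev=$1   # e.g. /dev/ram0
    mnt=$2   # e.g. /etc in Jorge's setup
    [ "$(id -u)" -eq 0 ] || { echo "refresh_ramdisk: needs root" >&2; return 1; }
    umount "$mnt" || return 1
    # Without this, the subsequent mount can find old cached data.
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/dev/zero of="$dev" bs=1k count=1000
    mount "$dev" "$mnt" -t minix -o rw
}
```

With the flush in place the mount fails with "no filesystem found", as it should after zeroing; without it, the old (possibly corrupted) filesystem reappears.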
