Message-ID: <3993afa00711090609j7ba8dee0t8c1772f8654eb2f0@mail.gmail.com>
Date:	Fri, 9 Nov 2007 12:09:24 -0200
From:	"Fabiano Silva" <fabiano@...l.ufpr.br>
To:	"Justin Piszcz" <jpiszcz@...idpixels.com>
Cc:	"Carlos Carvalho" <carlos@...ica.ufpr.br>,
	"Jeff Lessem" <Jeff@...sem.org>, root@...l.ufpr.br,
	"Dan Williams" <dan.j.williams@...el.com>,
	"BERTRAND Joël" <joel.bertrand@...tella.fr>,
	"Neil Brown" <neilb@...e.de>, linux-kernel@...r.kernel.org,
	linux-raid@...r.kernel.org, xfs@....sgi.com
Subject: Re: 2.6.23.1: mdadm/raid5 hung/d-state

On Nov 9, 2007 7:14 AM, Justin Piszcz <jpiszcz@...idpixels.com> wrote:
>
>
>
> On Thu, 8 Nov 2007, Carlos Carvalho wrote:
>
> > Jeff Lessem (Jeff@...sem.org) wrote on 6 November 2007 22:00:
> > >Dan Williams wrote:
> > > > The following patch, also attached, cleans up cases where the code looks
> > > > at sh->ops.pending when it should be looking at the consistent
> > > > stack-based snapshot of the operations flags.
> > >
> > >I tried this patch (against a stock 2.6.23), and it did not work for
> > >me.  Not only did I/O to the affected RAID5 & XFS partition stop, but
> > >also I/O to all other disks.  I was not able to capture any debugging
> > >information, but I should be able to do that tomorrow when I can hook
> > >a serial console to the machine.
> > >
> > >I'm not sure if my problem is identical to these others, as mine only
> > >seems to manifest with RAID5+XFS.  The RAID rebuilds with no problem,
> > >and I've not had any problems with RAID5+ext3.
> >
> > Us too! We're stuck trying to build a disk server with several disks
> > in a raid5 array, and the rsync from the old machine stops writing to
> > the new filesystem. It only happens under heavy IO. We can make it
> > lock without rsync, using 8 simultaneous dd's to the array. All IO
> > stops, including the resync after a newly created raid or after an
> > unclean reboot.
> >
> > We could not trigger the problem with ext3 or reiser3; it only happens
> > with xfs.

In our case, all processes using md4, including md4_resync, stay in D state.
Call Trace:
  [<ffffffff803615ac>] __generic_unplug_device+0x13/0x24
  [<ffffffff803622cf>] generic_unplug_device+0x18/0x28
  [<ffffffff803f2cf7>] get_active_stripe+0x22b/0x472
...
see dmesg (sysrq t) attached.
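
For reference, this is roughly how such a trace is collected (a minimal
sketch; the exact invocations we used may have differed):

  # list tasks stuck in uninterruptible sleep (D state)
  ps axo stat,pid,comm | awk '$1 ~ /^D/'
  # dump all task stacks to the kernel log (may need: echo 1 > /proc/sys/kernel/sysrq)
  echo t > /proc/sysrq-trigger
  dmesg > dmesg_sysrq_t.txt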

We can reproduce this problem on two machines with the same configuration:
  - 2 x Dual-Core Opteron 2.8GHz
  - 8GB memory
  - 3ware 9000 with 10 x 750GB sata disks
  - Debian Etch x86_64
  - raid5 + xfs (/dev/md4)
with all of these stock kernels:
  - 2.6.22.11, 2.6.22.12, 2.6.23.1, 2.6.24-rc2
running:
  - for i in f{0..7}; do (dd bs=1M count=100000 if=/dev/zero of=$i &); done

If we increase /sys/block/md4/md/stripe_cache_size, the device and the
processes go back to work.
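
For example (the value we used isn't recorded here; 1024 is only an
illustrative increase over the md raid5 default of 256):

  cat /sys/block/md4/md/stripe_cache_size    # show the current value (default 256)
  echo 1024 > /sys/block/md4/md/stripe_cache_size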

[Attachment: dmesg_sysrq_t.txt.gz (application/x-gzip, 114342 bytes)]
