Message-ID: <BAY125-DAV14DB3B267D410F37358DFA93AE0@phx.gbl>
Date:	Mon, 22 Jan 2007 09:10:39 -0700
From:	"Liang Yang" <multisyncfe991@...mail.com>
To:	"Justin Piszcz" <jpiszcz@...idpixels.com>,
	"kyle" <kylewong@...tha.com>
Cc:	<linux-raid@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: Re: change strip_cache_size freeze the whole raid

Do we need to consider the chunk size when we adjust the value of
stripe_cache_size for the MD RAID5 array?

Liang
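
For reference, the RAID5 stripe cache works in page-sized units per
member device, so the memory it pins is roughly stripe_cache_size x
PAGE_SIZE x nr_disks, independent of the chunk size. A rough sketch of
checking the numbers (sysfs paths assume md sysfs support as in recent
2.6 kernels; md2, the 512k chunk and the 8 members come from the mdstat
output quoted below):

    # Chunk size of the array, in bytes (524288 = 512k here):
    cat /sys/block/md2/md/chunk_size

    # Current cache depth, in stripe entries (one page per member each):
    cat /sys/block/md2/md/stripe_cache_size

    # Approximate pinned memory: stripe_cache_size x 4 KiB x 8 disks.
    # At 16384 entries that is 16384 x 4096 x 8 = 512 MiB of the 2 GiB RAM.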

----- Original Message ----- 
From: "Justin Piszcz" <jpiszcz@...idpixels.com>
To: "kyle" <kylewong@...tha.com>
Cc: <linux-raid@...r.kernel.org>; <linux-kernel@...r.kernel.org>
Sent: Monday, January 22, 2007 5:18 AM
Subject: Re: change strip_cache_size freeze the whole raid


>
>
> On Mon, 22 Jan 2007, kyle wrote:
>
>> Hi,
>>
>> Yesterday I tried increasing the value of stripe_cache_size to see
>> whether I could get better performance. I increased the value from 2048
>> to something like 16384. After I did that, the RAID5 array froze: any
>> process reading from or writing to it got stuck in D state. I tried
>> changing it back to 2048, reading stripe_cache_active, cat /proc/mdstat,
>> mdadm --stop, etc. None of them came back. I couldn't even shut down the
>> machine; in the end I had to press the reset button to regain control.
>>
>> The kernel is 2.6.17.8 x86-64, running on an AMD Athlon 3000+ with 2 GB
>> RAM, 8 x Seagate 7200.10 250 GB HDDs, nvidia chipset.
>>
>> cat /proc/mdstat (after reboot):
>> Personalities : [raid1] [raid5] [raid4]
>> md1 : active raid1 hdc2[1] hda2[0]
>>      6144768 blocks [2/2] [UU]
>>
>> md2 : active raid5 sdf1[7] sde1[6] sdd1[5] sdc1[4] sdb1[3] sda1[2] hdc4[1] hda4[0]
>>      1664893440 blocks level 5, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
>>
>> md0 : active raid1 hdc1[1] hda1[0]
>>      104320 blocks [2/2] [UU]
>>
>> Kyle
>>
>
> Yes, I noticed this bug too. If you change it too many times, or change
> it at the 'wrong' time, the array hangs when you echo a number >
> /sys/block/mdX/md/stripe_cache_size.
>
> Basically, don't change it more than once and don't change it at the
> 'wrong' time, and it works. Not sure where the bug lies, but yeah, I've
> seen it on 3 different machines!
>
> Justin.
>
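
For anyone who wants to experiment anyway, a minimal sketch of poking
the sysfs knob (stepping up gradually and watching stripe_cache_active
is an untested workaround, not a confirmed fix for the hang reported in
this thread; md2 is the array from the mdstat above):

    # Raise the cache in steps instead of jumping 2048 -> 16384 at once,
    # and only while the array is idle:
    for size in 4096 8192 16384; do
        echo $size > /sys/block/md2/md/stripe_cache_size
        cat /sys/block/md2/md/stripe_cache_active   # entries currently in use
    done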

