Date:	Thu, 23 Dec 2010 13:51:13 -0500
From:	Greg Freemyer <greg.freemyer@...il.com>
To:	Jeff Moyer <jmoyer@...hat.com>
Cc:	Rogier Wolff <R.E.Wolff@...wizard.nl>,
	Bruno Prémont <bonbons@...ux-vserver.org>,
	linux-kernel@...r.kernel.org, linux-ide@...r.kernel.org
Subject: Re: Slow disks.

On Thu, Dec 23, 2010 at 12:47 PM, Jeff Moyer <jmoyer@...hat.com> wrote:
> Rogier Wolff <R.E.Wolff@...Wizard.nl> writes:
>
>> On Thu, Dec 23, 2010 at 09:40:54AM -0500, Jeff Moyer wrote:
>>> > In my performance calculations, 10ms average seek (should be around
>>> > 7), 4ms average rotational latency for a total of 14ms. This would
>>> > degrade for read-modify-write to 10+4+8 = 22ms. Still 10 times better
>>> > than what we observe: service times on the order of 200-300ms.
>>>
>>> I didn't say it would account for all of your degradation, just that it
>>> could affect performance.  I'm sorry if I wasn't clear on that.
>>
>> We can live with a "2x performance degradation" due to stupid
>> configuration. But not with the 10x -30x that we're seeing now.
>
> Wow.  I'm not willing to give up any performance due to
> misconfiguration!

I suspect a mail server on RAID 5 with a large chunk size could be a lot
worse than 2x slower, but most of the blame lies with RAID 5 itself, not
the chunk size.

i.e.:

userspace: write 4K

kernel:
  read the old data block, wait for the data to actually arrive
  read the old parity block, wait again
  compute the new data and new parity from what was just read
  queue the data write to the drive
  queue the parity write to the drive

userspace: fsync
kernel: flush the queued writes out to the drives (requires a wait)
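
For a rough feel of the numbers, here is a minimal latency sketch in
Python.  It assumes the single-request figures quoted earlier in the
thread (10ms average seek, 4ms rotational latency, ~8ms for a full
revolution); those are illustrative assumptions, not measurements of
the array in question.

def raid5_small_write_ms(seek_ms=10.0, rot_ms=4.0, full_rev_ms=8.0):
    """Latency of one 4K read-modify-write on an otherwise idle RAID 5."""
    # Read the old data and old parity blocks.  They sit on two
    # different spindles, so the two reads can overlap: one seek plus
    # rotational latency covers both.
    read_phase = seek_ms + rot_ms
    # Write the new data and new parity back to the same sectors.  The
    # heads are already on-track, but each sector has to come around
    # again, which costs roughly one full revolution.
    write_phase = full_rev_ms
    return read_phase + write_phase

print(raid5_small_write_ms())   # 22.0 ms, the thread's 10 + 4 + 8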


I'm guessing RAID 1 or RAID 10 would be several times faster, and either
is at least as robust as RAID 5.

i.e.:

userspace: write 4K

kernel:
  queue the 4K write to the first mirror
  queue the 4K write to the second mirror
  done

userspace: fsync
kernel: flush the queued writes out to the drives (requires a wait)
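
Continuing the same toy model (same assumed 10ms/4ms figures as above,
illustrative only):

def raid1_write_ms(seek_ms=10.0, rot_ms=4.0):
    # Both mirrors service the same 4K write in parallel, so the
    # latency seen at fsync time is just one random write: a seek
    # plus rotational latency.  No reads stand in the way.
    return seek_ms + rot_ms

print(raid1_write_ms())   # 14.0 ms vs ~22 ms for the RAID 5 path

That is roughly 14ms vs 22ms per isolated write, and under a busy queue
the gap presumably widens, since every RAID 5 small write also injects
two dependent reads that everything behind it has to wait for.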

Good Luck
Greg
