Message-ID: <20080215223224.50102e0a@olorin>
Date:	Fri, 15 Feb 2008 22:32:24 +0100
From:	FD Cami <francois.cami@...e.fr>
To:	Zan Lynx <zlynx@....org>
Cc:	Prakash Punnoor <prakash@...noor.de>,
	Jan Engelhardt <jengelh@...putergmbh.de>,
	Lukas Hejtmanek <xhejtman@....muni.cz>,
	linux-kernel@...r.kernel.org, megaraidlinux@....com
Subject: Re: Disk schedulers

On Fri, 15 Feb 2008 10:11:26 -0700
Zan Lynx <zlynx@....org> wrote:

> 
> On Fri, 2008-02-15 at 15:57 +0100, Prakash Punnoor wrote:
> > On the day of Friday 15 February 2008 Jan Engelhardt hast written:
> > > On Feb 14 2008 17:21, Lukas Hejtmanek wrote:
> > > >Hello,
> > > >
> > > >whom should I blame about disk schedulers?
> > >
> > > Also consider
> > > - DMA (e.g. only UDMA2 selected)
> > > - aging disk
> > 
> > Nope, I also reported this problem _years_ ago, but till now much hasn't 
> > changed. Large writes lead to read starvation.
> 
> Yes, I see this often myself.  It's like the disk IO queue (I set mine
> to 1024) fills up, and pdflush and friends can stuff write requests into
> it much more quickly than any other programs can provide read requests.
> 
> CFQ and ionice work very well up until iostat shows average IO queuing
> above 1024 (where I set the queue number).

I can confirm that as well.
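
For reference, the queue depth and scheduler Zan mentions live in sysfs
and can be checked from the shell; a rough sketch, assuming the device
is sda:

	# current I/O scheduler and request queue depth
	cat /sys/block/sda/queue/scheduler
	cat /sys/block/sda/queue/nr_requests
	# raise the queue depth, as in Zan's setup
	echo 1024 > /sys/block/sda/queue/nr_requests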

This is easily reproducible with, for example,
dd if=/dev/zero of=somefile bs=2048. After a short while, reading from
the disks takes an awfully long time, even if the dd process is ionice'd.
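
Roughly how I trigger and measure it, with file names that are just an
example (somefile on the busy disk, otherfile a cold file on the same
disk):

	# flood the queue with writes, even at idle I/O priority
	ionice -c3 dd if=/dev/zero of=somefile bs=2048 &
	# meanwhile, time an uncached read from the same disk
	sync
	echo 3 > /proc/sys/vm/drop_caches
	time cat otherfile > /dev/null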

What is worse is that other drives attached to the same controller become
unresponsive as well.
I use a Dell Perc 5/i (megaraid_sas) with:
* 2 SAS 15000 RPM drives, RAID1 => sda
* 4 SAS 15000 RPM drives, RAID5 => sdb
* 2 SATA 7200 RPM drives, RAID1 => sdc
Using dd or mkfs on sdb or sdc makes sda unresponsive as well.
Is this expected?
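
The cross-drive effect is easy to see with iostat while a write runs on
another array; device names as above, the mount point is just an example:

	# write to the RAID5 array...
	dd if=/dev/zero of=/mnt/raid5/somefile bs=1M &
	# ...and watch await climb on sda even though sda itself is idle
	iostat -x 1 sda sdb sdc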

Cheers

Francois
