Message-Id: <20080221235049.3742484f.pochini@shiny.it>
Date: Thu, 21 Feb 2008 23:50:49 +0000
From: Giuliano Pochini <pochini@...ny.it>
To: Lukas Hejtmanek <xhejtman@....muni.cz>
Cc: Pavel Machek <pavel@....cz>, linux-kernel@...r.kernel.org,
zdenek.kabelac@...il.com
Subject: Re: Disk schedulers
On Wed, 20 Feb 2008 19:48:42 +0100
Lukas Hejtmanek <xhejtman@....muni.cz> wrote:
> On Sat, Feb 16, 2008 at 05:20:49PM +0000, Pavel Machek wrote:
> > Is cat /dev/zero > file enough to reproduce this?
>
> yes.
>
>
> > ext3 filesystem?
>
> yes.
>
> > Will cat /etc/passwd work while machine is unresponsive?
>
> yes.
>
> while find does not work:
> time find /
> /
> /etc
> /etc/manpath.config
> /etc/update-manager
> /etc/update-manager/release-upgrades
> /etc/gshadow-
> /etc/inputrc
> /etc/openalrc
> /etc/bonobo-activation
> /etc/bonobo-activation/bonobo-activation-config.xml
> /etc/gnome-vfs-2.0
> /etc/gnome-vfs-2.0/modules
> /etc/gnome-vfs-2.0/modules/obex-module.conf
> /etc/gnome-vfs-2.0/modules/extra-modules.conf
> /etc/gnome-vfs-2.0/modules/theme-method.conf
> /etc/gnome-vfs-2.0/modules/font-method.conf
> /etc/gnome-vfs-2.0/modules/default-modules.conf
> ^C
>
> real 0m7.982s
> user 0m0.003s
> sys 0m0.000s
>
>
> i.e., it took 8 seconds to list just 17 dir entries.
It also happens when I'm writing to a slow external disk.
Documentation/block/biodoc.txt says:
"Per-queue granularity unplugging (still a Todo) may help reduce some of
the concerns with just a single tq_disk flush approach. Something like
blk_kick_queue() to unplug a specific queue (right away ?) or optionally,
all queues, is in the plan."
If I understand correctly, there is only one "plug" shared by all
devices. That would explain why, when one device's queue is full, access
to the other devices is blocked as well.
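To make that hypothesis concrete, here is a tiny user-space toy model
(plain C; the struct, the function names and the numbers are all made up
for illustration and have nothing to do with the real block-layer API).
It contrasts a single global flush, where the reader waits for every
queue to drain, with a per-queue kick in the spirit of the
blk_kick_queue() mentioned in biodoc.txt:

/*
 * Toy model, not kernel code.  global_flush() mimics a single
 * tq_disk-style unplug that drains every queue before the caller's
 * request completes; kick_queue() mimics a per-queue unplug that drains
 * only the queue the reader actually needs.
 */
#include <stdio.h>

struct toy_queue {
	const char *name;
	int pending;		/* queued requests */
	int ms_per_request;	/* simulated device service time */
};

/* Drain every queue; return total simulated milliseconds spent. */
static int global_flush(struct toy_queue *queues, int nr)
{
	int i, total = 0;

	for (i = 0; i < nr; i++) {
		total += queues[i].pending * queues[i].ms_per_request;
		queues[i].pending = 0;
	}
	return total;
}

/* Drain only the queue we care about. */
static int kick_queue(struct toy_queue *q)
{
	int cost = q->pending * q->ms_per_request;

	q->pending = 0;
	return cost;
}

int main(void)
{
	/* Big writeback backlog on the slow external disk, one small
	 * read (think of find(1) walking /etc) on the fast root disk. */
	struct toy_queue a[2] = {
		{ "root disk",     1,    1 },
		{ "external disk", 4000, 2 },
	};
	struct toy_queue b[2] = {
		{ "root disk",     1,    1 },
		{ "external disk", 4000, 2 },
	};

	printf("global flush:   read on root disk waits ~%d ms\n",
	       global_flush(a, 2));
	printf("per-queue kick: read on root disk waits ~%d ms\n",
	       kick_queue(&b[0]));
	return 0;
}

With these made-up numbers the global flush makes the root-disk read
wait roughly 8 seconds while the external disk's backlog drains, whereas
the per-queue kick returns almost immediately. It is only a sketch of
the behaviour I suspect, not a measurement of the actual block layer.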
--
Giuliano.