Message-ID: <Pine.LNX.4.61.0705180829120.3231@yvahk01.tjqt.qr>
Date: Fri, 18 May 2007 08:33:47 +0200 (MEST)
From: Jan Engelhardt <jengelh@...ux01.gwdg.de>
To: DervishD <lkml@...vishd.net>
cc: Stefan Richter <stefanr@...6.in-berlin.de>,
linux-kernel@...r.kernel.org
Subject: Re: usb-storage nice value
On May 18 2007 08:15, DervishD wrote:
Try ionice.
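A quick sketch of what I mean (the paths are only placeholders; note that
I/O priorities are honoured by the CFQ scheduler, so this will not help
under deadline):

# ionice -c3 cp /mnt/usbdisk/bigfile /home/user/

An already-running copy can be reprioritised with ionice -p <pid>.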
>> DervishD wrote:
>> > I'm having problems when reading/writing to external USB harddisks:
>> > my *internal* harddisk stalls from time to time, so watching a movie
>> > while copying data is a PITA (well, if the movie is bad, the leaps help
>> > a bit...).
>>
>> If the internal harddisk is source or target of these copy operations...
>
> Not...
>
>> ...
>> > is there any way of modifying the pdflush behaviour so
>> > large buffered writes are less "aggressive" and don't block apps which
>> > are just reading?
>>
>> ...then switching to another IO scheduler might perhaps help.
>>
>> # cat /sys/block/hda/queue/scheduler
>>
>> # echo deadline > /sys/block/hda/queue/scheduler
>
Or try echo 10 >/proc/sys/vm/dirty_ratio
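
Lowering the background threshold as well makes pdflush start writeback
earlier instead of in big bursts; a rough sketch (the values are only
illustrative, and the defaults differ between kernel versions):

# cat /proc/sys/vm/dirty_ratio /proc/sys/vm/dirty_background_ratio
# echo 10 >/proc/sys/vm/dirty_ratio
# echo 5 >/proc/sys/vm/dirty_background_ratio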
> That's another thing I wanted to test, but I'm already using
>"deadline" scheduler. How about changing "read_ahead_kb" to a much
>higher value so there's plenty of data buffered? The problem seems to
>be, according to Alan Cox, that my VIA mobo completely fills the PCI bus
>when doing IO bursts, so I can take advantage of that "bug" and fill the
>bus when reading.
>
> My only doubt is whether it will affect performance negatively in other
>usage patterns (e.g. while writing CDs, since the hard disk will take
>the bus on each read operation, and will read more than the current 128
>kB).
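read_ahead_kb can be changed per device to test that; a quick sketch
(512 is just an illustrative value, and whether a bigger readahead
actually helps on your VIA board is untested):

# cat /sys/block/hda/queue/read_ahead_kb
# echo 512 >/sys/block/hda/queue/read_ahead_kb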
Jan