Date:	Tue, 16 Apr 2013 15:23:06 +0400
From:	Michael Tokarev <mjt@....msk.ru>
To:	lkml@...usoft.pl
CC:	linux-kernel@...r.kernel.org
Subject: Re: Very poor latency when using hard drive (raid1)

On 15.04.2013 13:59, lkml@...usoft.pl wrote:
> There are two hard drives (normal, magnetic) in software RAID 1
> on a 3.2.41 kernel.
> 
> When I write to them, e.g. using dd from /dev/zero to a local file
> (ext4 with default settings), running two dd processes at once
> (writing two files) starves all other programs that try to use the disk.
> 
> Running ls on any directory on the same disk (same fs, btw) takes over
> half a minute to execute, and the same goes for any other disk-touching
> action.
> 
> Has anyone seen such a problem? Where to look, what to test?

This is a typical issue, known for many years.

Your dd runs go through the buffer cache, the same cache used by all
other regular accesses.  So once it fills up, cached directories and
the like are thrown away to make room for the new data.  Then, when
you need something else, it has to be read back from a disk that is
already busy with the writeback.
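
You can watch this happen while the dd runs; for instance (a rough
illustration, not from the original report):

    watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'

Dirty and Writeback will keep climbing as the cache fills, while
everything else waits on the disk.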

> What could solve it (other than ionice on applications that I expect
> to use the hard drive)?

Just don't mix these two workloads.  Or, if you really need to transfer
a large amount of data, use direct I/O (O_DIRECT) -- for dd that is
iflag=direct or oflag=direct (depending on the I/O direction).
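
For example (the file name is illustrative):

    # write ~10 GB bypassing the cache; O_DIRECT wants block-aligned
    # transfer sizes, so keep bs a multiple of the sector size
    dd if=/dev/zero of=/mnt/data/bigfile bs=1M count=10000 oflag=direct

    # the reading direction, likewise uncached
    dd if=/mnt/data/bigfile of=/dev/null bs=1M iflag=direct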

ionice won't help much: the damage is done at the cache level, before
the I/O scheduler ever sees the requests.
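
For completeness, the idle-class form would be

    ionice -c3 dd if=/dev/zero of=/mnt/data/bigfile bs=1M count=10000

but that only reorders requests at the I/O scheduler, and the idle
class is only honored by the CFQ scheduler anyway.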

Thanks,

/mjt
