Date:	Mon, 30 Jan 2012 22:05:38 +0800
From:	Wu Fengguang <wfg@...ux.intel.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Jens Axboe <axboe@...nel.dk>, Tejun Heo <tj@...nel.org>,
	Li Shaohua <shaohua.li@...el.com>,
	Herbert Poetzl <herbert@...hfloor.at>
Subject: Re: Bad SSD performance with recent kernels

> - IO size is 256KB (which is not a problem in itself)
> 
> - The dispatch/complete pattern is
> 
>         submit IO for range 1
>         complete IO for range 1
>         <dd busy, disk idle>
>         submit IO for range 2
>         complete IO for range 2
> 
>   So we have periods in which no IO is in flight at all, which leads to
>   an under-utilized disk (this should show up in iostat as <100% disk util)
> 
> # grep '[DC]' blktrace
> 
>   8,16   1   770606     9.990039807  4580  D   R 4379392 + 512 (   14826) [dd]
>   8,16   1   770607     9.991083069     0  C   R 4379392 + 512 ( 1043262) [0]
> 
>   8,16   1   770739     9.991647434  4580  D   R 4379904 + 512 (   14433) [dd]
>   8,16   1   770740     9.992693317     0  C   R 4379904 + 512 ( 1045883) [0]
> 
>   8,16   1   770872     9.993256451  4580  D   R 4380416 + 512 (   14539) [dd]
>   8,16   1   770873     9.994299156     0  C   R 4380416 + 512 ( 1042705) [0]
> 
>   8,16   1   771005     9.994863680  4580  D   R 4380928 + 512 (   14344) [dd]
>   8,16   1   771006     9.995909291     0  C   R 4380928 + 512 ( 1045611) [0]
> 
>   8,16   1   771138     9.996470460  4580  D   R 4381440 + 512 (   14043) [dd]
>   8,16   1   771139     9.997514205     0  C   R 4381440 + 512 ( 1043745) [0]
> 
>   8,16   1   771271     9.998077269  4580  D   R 4381952 + 512 (   14928) [dd]
>   8,16   1   771272     9.999120396     0  C   R 4381952 + 512 ( 1043127) [0]
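
For reference, the quoted pattern above is what any plain synchronous
reader produces when readahead isn't keeping a second request in
flight: each read() must complete before the next one is submitted, so
the device sees exactly one request at a time and idles between
completions. A minimal user-space sketch (my illustration, not taken
from the trace) that behaves like dd does here:

/* one 256KB read at a time: the device idles between completions */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define BLOCK (256 * 1024)	/* matches the observed 256KB IO size */

int main(int argc, char **argv)
{
	static char buf[BLOCK];
	ssize_t n;
	int fd;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <file-or-device>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* submit range N, wait for its completion, only then submit range N+1 */
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		;
	close(fd);
	return 0;
}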

A better pattern, for comparison:

  8,0    1    42940     0.990199812  4084  D   R 1432808 + 256 [dd]
  8,0    1    42941     0.990858326     0  C   R 1432552 + 256 [0]
  8,0    1    43009     0.991192107  4084  D   R 1433064 + 256 [dd]
  8,0    3    14691     0.991853319     0  C   R 1432808 + 256 [0]
  8,0    3    14759     0.992189451  4084  D   R 1433320 + 256 [dd]
  8,0    1    43010     0.994473159     0  C   R 1433064 + 256 [0]

Here the lifetimes of requests 1432808, 1432552, 1433064, ...
are *interleaved*, so the disk always has something to do.
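
In user space, one way to approximate that interleaving (assuming
buffered IO, so the reads go through the page cache) is to kick off
readahead for the next range before copying the current one, e.g. with
posix_fadvise(POSIX_FADV_WILLNEED). A rough sketch, not the kernel's
readahead code:

/* start IO for range N+1 while range N is still being read */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define BLOCK (256 * 1024)

int main(int argc, char **argv)
{
	static char buf[BLOCK];
	off_t off = 0;
	ssize_t n;
	int fd;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <file-or-device>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	for (;;) {
		/* hint the kernel to start reading the next 256KB now */
		posix_fadvise(fd, off + BLOCK, BLOCK, POSIX_FADV_WILLNEED);
		n = pread(fd, buf, BLOCK, off);
		if (n <= 0)
			break;
		off += n;
	}
	close(fd);
	return 0;
}

With two 256KB ranges in flight, the dispatch of one range can overlap
the completion of the previous one, which is exactly the interleaving
visible in the trace above.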

Thanks,
Fengguang
