Message-Id: <20081009.211414.193713198.ryov@valinux.co.jp>
Date:	Thu, 09 Oct 2008 21:14:14 +0900 (JST)
From:	Ryo Tsuruta <ryov@...inux.co.jp>
To:	baramsori72@...il.com
Cc:	linux-kernel@...r.kernel.org, dm-devel@...hat.com,
	containers@...ts.linux-foundation.org,
	virtualization@...ts.linux-foundation.org,
	xen-devel@...ts.xensource.com, agk@...rceware.org,
	fernando@....ntt.co.jp, xemul@...nvz.org, balbir@...ux.vnet.ibm.com
Subject: Re: [PATCH 0/2] dm-ioband: I/O bandwidth controller v1.7.0:
 Introduction

Hi Dong-Jae,

> So, after your reply, I tested the dm-ioband and bio-cgroup patches with
> another IO testing tool, xdd ver6.5 (http://www.ioperformance.com/).
> Xdd supports O_DIRECT mode and time limit options.
> I personally think it is a proper tool for testing the IO controllers
> discussed on the Linux Container ML.

Xdd is really useful for me. Thanks for letting me know.

> And I found some strange points in the test results. In fact, they may
> not be strange to other people^^
> 
> 1. dm-ioband can control IO bandwidth well in O_DIRECT mode (read and
> write); I think that result is very reasonable. But it can't control it
> in buffered mode, judging only from the output of xdd. I think the
> bio-cgroup patches are meant to solve this problem, is that right? If so,
> how can I check or confirm the role of the bio-cgroup patches?
>
> 2. As shown in the test results, the IO performance in buffered IO mode
> is very low compared with O_DIRECT mode. In my opinion, the reverse
> would be more natural in real life.
> Can you give me an answer about this?

Your results show that all the xdd programs belong to the same cgroup.
Could you explain your test procedure to me in detail?
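
In case it helps, a quick way to double-check which cgroup a shell is in
before a run (just a sketch, assuming the bio-cgroup hierarchy is mounted
at /cgroup as in my results below):

  # Verify that the current shell, and therefore the xdd process it will
  # spawn, is listed in the intended cgroup before starting the test.
  grep -w "$$" /cgroup/1/tasks && echo "shell $$ is in cgroup1"

Each xdd instance has to be started from a shell that has been moved into
its own cgroup, otherwise they all end up in the same bandwidth group.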

To know how many I/Os are actually issued to the physical device in
buffered mode within a measurement period, you should check the
/sys/block/<dev>/stat file just before starting the test program and
just after it ends. The contents of the stat file are described in the
following document:
   kernel/Documentation/block/stat.txt
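
For example, something along these lines (a rough sketch; sdb is just a
placeholder for your device, and the field positions follow the stat.txt
document above):

  # Fields 5 and 7 of /sys/block/<dev>/stat are writes completed and
  # sectors written; snapshot them just before and just after the run.
  before=$(awk '{print $5, $7}' /sys/block/sdb/stat)
  # ... run the xdd command here ...
  after=$(awk '{print $5, $7}' /sys/block/sdb/stat)
  echo "writes/sectors before: $before"
  echo "writes/sectors after:  $after"

The difference between the two snapshots is what was actually issued to
the disk, regardless of what the page cache reported back to xdd.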

> 3. Compared with the physical bandwidth (measured with a single
> process and without a dm-ioband device), the sum of the bandwidth
> through dm-ioband shows a very considerable gap. I wonder why.
> Is it overhead from dm-ioband or the bio-cgroup patches, or are there
> other reasons?

The following are the results on my PC with a SATA disk, and there is
no big difference between with and without dm-ioband. Please try the
same thing if you have time.

without dm-ioband
=================
# xdd.linux -op write -queuedepth 16 -targets 1 /dev/sdb1 \
  -reqsize 8 -numreqs 128000 -verbose -timelimit 30 -dio -randomize

T  Q       Bytes      Ops      Time      Rate     IOPS   Latency   %CPU  OP_Type  ReqSize
0 16   140001280    17090    30.121     4.648   567.38    0.0018   0.01    write     8192

with dm-ioband
==============
* cgroup1 (weight 10)
# cat /cgroup/1/bio.id
1
# echo $$ > /cgroup/1/tasks
# xdd.linux -op write -queuedepth 16 -targets 1 /dev/mapper/ioband1 \
  -reqsize 8 -numreqs 128000 -verbose -timelimit 30 -dio -randomize
T  Q       Bytes      Ops      Time      Rate     IOPS   Latency   %CPU  OP_Type  ReqSize
0 16    14393344     1757    30.430     0.473    57.74    0.0173   0.00    write     8192

* cgroup2 (weight 20)
# cat /cgroup/2/bio.id
2
# echo $$ > /cgroup/2/tasks
# xdd.linux -op write -queuedepth 16 -targets 1 /dev/mapper/ioband1 \
  -reqsize 8 -numreqs 128000 -verbose -timelimit 30 -dio -randomize
T  Q       Bytes      Ops      Time      Rate     IOPS   Latency   %CPU  OP_Type  ReqSize
0 16    44113920     5385    30.380     1.452   177.25    0.0056   0.00    write     8192

* cgroup3 (weight 60)
# cat /cgroup/3/bio.id
3
# echo $$ > /cgroup/3/tasks
# xdd.linux -op write -queuedepth 16 -targets 1 /dev/mapper/ioband1 \
  -reqsize 8 -numreqs 128000 -verbose -timelimit 30 -dio -randomize
T  Q       Bytes      Ops      Time      Rate     IOPS   Latency   %CPU  OP_Type  ReqSize
0 16    82485248    10069    30.256     2.726   332.79    0.0030   0.00    write     8192

Total
=====
                  Bytes        Ops   Rate (MB/s)    IOPS
  w/o dm-ioband  140001280    17090     4.648      567.38
  w/  dm-ioband  140992512    17211     4.651      567.78
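
As a rough sanity check on how the bandwidth was divided (just a sketch;
this assumes the three cgroups above are the only active groups on the
ioband device), each cgroup's measured share of the total can be compared
with the share its weight suggests:

  # Compare measured shares of the total bytes written with the shares
  # implied by the weights (10, 20 and 60 sum to 90).
  awk 'BEGIN {
      b1 = 14393344; b2 = 44113920; b3 = 82485248;   # Bytes from the runs above
      total = b1 + b2 + b3;
      printf "cgroup1: measured %.1f%%, weight suggests %.1f%%\n", 100*b1/total, 100*10/90;
      printf "cgroup2: measured %.1f%%, weight suggests %.1f%%\n", 100*b2/total, 100*20/90;
      printf "cgroup3: measured %.1f%%, weight suggests %.1f%%\n", 100*b3/total, 100*60/90;
  }'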

> > Could you give me the O_DIRECT patch?
> >
> Of course, if you want, but it is nothing special.
> The tiobench tool has very simple and light source code, so I just added
> an O_DIRECT option to tiotest.c of the tiobench testing tool.
> Anyway, once I make a patch file, I will send it to you.

Thank you very much!

Ryo Tsuruta
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
