Message-ID: <2891419e0810142313p1aa16295ne2aa8ac3a87be491@mail.gmail.com>
Date:	Wed, 15 Oct 2008 15:13:28 +0900
From:	"Dong-Jae Kang" <baramsori72@...il.com>
To:	righi.andrea@...il.com
Cc:	"Balbir Singh" <balbir@...ux.vnet.ibm.com>,
	"Paul Menage" <menage@...gle.com>, agk@...rceware.org,
	akpm@...ux-foundation.org, axboe@...nel.dk,
	"Carl Henrik Lunde" <chlunde@...g.uio.no>, dave@...ux.vnet.ibm.com,
	"Divyesh Shah" <dpshah@...gle.com>, eric.rannaud@...il.com,
	fernando@....ntt.co.jp, "Hirokazu Takahashi" <taka@...inux.co.jp>,
	"Li Zefan" <lizf@...fujitsu.com>,
	"Marco Innocenti" <m.innocenti@...eca.it>, matt@...ehost.com,
	ngupta@...gle.com, randy.dunlap@...cle.com, roberto@...it.it,
	"Ryo Tsuruta" <ryov@...inux.co.jp>,
	"Satoshi UCHIDA" <s-uchida@...jp.nec.com>,
	subrata@...ux.vnet.ibm.com, yoshikawa.takuya@....ntt.co.jp,
	containers@...ts.linux-foundation.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH -mm 0/6] cgroup: block device i/o controller (v11)

Hi Andrea,
Thank you for your comments.

> Dong-Jae Kang wrote:
>> Hi, Andrea
>>
>> thank you for your contribution to the community.
>> These days, I am testing several IO controllers discussed on the containers
>> ML: dm-ioband by Ryo Tsuruta (v1.7.0), 2-Layer CFQ by Satoshi, and your
>> io-throttle (v11).
>
> Thanks! This is surely a valuable task.
>
>>
>> I have several questions about io-throttle.
>> Below is my test result of io-throttle (v11) with xdd 6.5.
>> But I think something went wrong, as shown in the result:
>> in direct IO mode, only the read operation was controlled by io-throttle.
>> Could you check my test procedure and result, and comment on them?
>
> Your procedure is correct. Anyway, you found a known bug in io-throttle
> v11. If you want to properly use it you need to mount the memory
> controller together with blockio, since currently blockio depends on it
> to retrieve the owner of a page during writes in submit_bio().
>
> As reported in:
>
> [PATCH -mm 4/6] memcg: interface to charge the right cgroup of asynchronous i/o activity
>
> this is no more than a hack; in the long term a more generic framework
> able to provide this functionality should be used (i.e. bio-cgroup).

But, in my opinion, it is somewhat strange that the bandwidth of write
operations in direct IO mode was not controlled by io-throttle.
I think [PATCH -mm 4/6] is for the control of buffered IO.
Do I misunderstand it?
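
For reference, the numbers below bear this out: with direct IO, the read
rates land almost exactly on the configured limits (31522816 bytes / 30.005 s
is about 1.05 MB/s against the 1 MiB/s limit of cgroup-1, and likewise about
2.1 and 3.1 MB/s for cgroup-2 and cgroup-3), while the direct writes all run
at about 21 MB/s regardless of the cgroup.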

> I'll fix this issue in the next version of io-throttle (probably I'll
> try to rewrite io-throttle on top of bio-cgroup), but for now the
> workaround is to mount the cgroupfs using -o blockio,memory (at least).
>

Oh, that sounds good.
I look forward to your next io-throttle release.
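
For reference, a minimal sketch of the workaround described above, reusing
the /dev/blockioctl mount point from my procedure below (whether an existing
hierarchy must be unmounted first is an assumption):

     # Remount the cgroup hierarchy with both controllers co-mounted, so
     # blockio can look up the owner of a page in submit_bio() via memcg.
     umount /dev/blockioctl
     mount -t cgroup -o blockio,memory cgroup /dev/blockioctl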

>>
>> Additionally, your testing shell script (run_io_throttle_test.sh) was not
>> updated for the new io-throttle, so it only worked after I fixed it.
>
> The testing of iops limiting is not yet implemented and I don't have a
> very good testcase for this, but I can share with you a small script that
> I'm using to check if iops limiting is working or not, if you're interested.

Thank you, Andrea.
I would like to check whether the iops limiting of io-throttle is working well.
Have a nice day...
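
Until then, a rough sketch of the kind of cross-check I have in mind (this is
my own construction, not Andrea's script; it counts small direct reads
completed per second from a shell already attached to a limited cgroup):

     # Measure the iops achieved by the current cgroup with 512-byte
     # direct reads; attach the shell first: echo $$ > .../cgroup-1/tasks
     DEV=/dev/sdb
     SECS=30
     START=$(date +%s)
     OPS=0
     while [ $(( $(date +%s) - START )) -lt $SECS ]; do
         dd if=$DEV of=/dev/null bs=512 count=1 iflag=direct 2>/dev/null
         OPS=$((OPS + 1))
     done
     echo "approx $((OPS / SECS)) iops"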
>
>>
>> -----------------------------------------------------------------------------------
>> - Test System Information
>>
>> Computer Name, localhost.localdomain, User Name, root
>> OS release and version, Linux 2.6.27-rc5-mm1 #1 SMP Thu Oct 9 18:27:09 KST 2008
>> Machine hardware type, i686
>> Number of processors on this system, 1
>> Page size in bytes, 4096
>> Number of physical pages, 515885
>> Megabytes of physical memory, 2015
>> Target[0] Q[0], /dev/sdb
>> Per-pass time limit in seconds, 30
>> Blocksize in bytes, 512
>> Request size, 128, blocks, 65536, bytes
>> Number of Requests, 16384
>> Number of MegaBytes, 512 or 1024
>> Direct I/O, disabled or enabled
>> Seek pattern, sequential
>> Queue Depth, 1
>>
>> - Test Procedure
>>
>>      mkdir /dev/blockioctl
>>      mount -t cgroup -o blockio cgroup /dev/blockioctl
>>      mkdir /dev/blockioctl/cgroup-1
>>      mkdir /dev/blockioctl/cgroup-2
>>      mkdir /dev/blockioctl/cgroup-3
>>      echo /dev/sdb:$((1024*1024)):0:0 > /dev/blockioctl/cgroup-1/blockio.bandwidth-max
>>      echo /dev/sdb:$((2*1024*1024)):0:0 > /dev/blockioctl/cgroup-2/blockio.bandwidth-max
>>      echo /dev/sdb:$((3*1024*1024)):0:0 > /dev/blockioctl/cgroup-3/blockio.bandwidth-max
>>      in terminal 1, echo $$ > /dev/blockioctl/cgroup-1/tasks
>>      in terminal 2, echo $$ > /dev/blockioctl/cgroup-2/tasks
>>      in terminal 3, echo $$ > /dev/blockioctl/cgroup-3/tasks
>>      in each terminal, xdd.linux -op write (or read) -targets 1 /dev/sdb -blocksize 512 -reqsize 128 -mbytes 1024 (or 512) -timelimit 30 -verbose -dio (enabled or disabled)
>>
>> - setting status information
>>
>> [root@...alhost blockioctl]# cat ./cgroup-1/blockio.bandwidth-max
>> 8 16 1048576 0 0 0 13016
>> [root@...alhost blockioctl]# cat ./cgroup-2/blockio.bandwidth-max
>> 8 16 2097152 0 0 0 11763
>> [root@...alhost blockioctl]# cat ./cgroup-3/blockio.bandwidth-max
>> 8 16 3145728 0 0 0 11133
>>
>> - Test Result
>> xdd.linux -op read -targets 1 /dev/sdb -blocksize 512 -reqsize 128 -mbytes 512 -timelimit 30 -dio -verbose
>>
>> cgroup-1
>>
>> T  Q  Bytes      Ops    Time(s)  Rate(MB/s)  IOPS    Latency(s)  %CPU  OP_Type  ReqSize
>> 0  1  31522816   481    30.005   1.051       16.03   0.0624      0.00  read     65536
>> 0  1  31522816   481    30.005   1.051       16.03   0.0624      0.00  read     65536
>> 1  1  31522816   481    30.005   1.051       16.03   0.0624      0.00  read     65536
>>
>> cgroup-2
>>
>> T  Q  Bytes      Ops    Time(s)  Rate(MB/s)  IOPS    Latency(s)  %CPU  OP_Type  ReqSize
>> 0  1  62980096   961    30.001   2.099       32.03   0.0312      0.00  read     65536
>> 0  1  62980096   961    30.001   2.099       32.03   0.0312      0.00  read     65536
>> 1  1  62980096   961    30.001   2.099       32.03   0.0312      0.00  read     65536
>>
>> cgroup-3
>>
>> T  Q  Bytes      Ops    Time(s)  Rate(MB/s)  IOPS    Latency(s)  %CPU  OP_Type  ReqSize
>> 0  1  94437376   1441   30.003   3.148       48.03   0.0208      0.00  read     65536
>> 0  1  94437376   1441   30.003   3.148       48.03   0.0208      0.00  read     65536
>> 1  1  94437376   1441   30.003   3.148       48.03   0.0208      0.00  read     65536
>>
>> xdd.linux -op write -targets 1 /dev/sdb -blocksize 512 -reqsize 128 -mbytes 512 -timelimit 30 -dio -verbose
>>
>> cgroup-1
>>
>> T  Q  Bytes      Ops    Time(s)  Rate(MB/s)  IOPS    Latency(s)  %CPU  OP_Type  ReqSize
>> 0  1  640221184  9769   30.097   21.272      324.58  0.0031      0.00  write    65536
>> 0  1  640221184  9769   30.097   21.272      324.58  0.0031      0.00  write    65536
>> 1  1  640221184  9769   30.097   21.272      324.58  0.0031      0.00  write    65536
>>
>> cgroup-2
>>
>> T  Q  Bytes      Ops    Time(s)  Rate(MB/s)  IOPS    Latency(s)  %CPU  OP_Type  ReqSize
>> 0  1  633798656  9671   30.001   21.126      322.36  0.0031      0.00  write    65536
>> 0  1  633798656  9671   30.001   21.126      322.36  0.0031      0.00  write    65536
>> 1  1  633798656  9671   30.001   21.126      322.36  0.0031      0.00  write    65536
>>
>> cgroup-3
>>
>> T  Q  Bytes      Ops    Time(s)  Rate(MB/s)  IOPS    Latency(s)  %CPU  OP_Type  ReqSize
>> 0  1  630652928  9623   30.001   21.021      320.76  0.0031      0.00  write    65536
>> 0  1  630652928  9623   30.001   21.021      320.76  0.0031      0.00  write    65536
>> 1  1  630652928  9623   30.001   21.021      320.76  0.0031      0.00  write    65536
>>
>> xdd.linux -op read -targets 1 /dev/sdb -blocksize 512 -reqsize 128 -mbytes 1024 -timelimit 30 -verbose
>>
>> cgroup-1
>>
>> T  Q  Bytes      Ops    Time(s)  Rate(MB/s)  IOPS    Latency(s)  %CPU  OP_Type  ReqSize
>> 0  1  70123520   1070   30.150   2.326       35.49   0.0282      0.00  read     65536
>> 0  1  70123520   1070   30.150   2.326       35.49   0.0282      0.00  read     65536
>> 1  1  70123520   1070   30.150   2.326       35.49   0.0282      0.00  read     65536
>>
>> cgroup-2
>>
>> T  Q  Bytes      Ops    Time(s)  Rate(MB/s)  IOPS    Latency(s)  %CPU  OP_Type  ReqSize
>> 0  1  70844416   1081   30.063   2.357       35.96   0.0278      0.00  read     65536
>> 0  1  70844416   1081   30.063   2.357       35.96   0.0278      0.00  read     65536
>> 1  1  70844416   1081   30.063   2.357       35.96   0.0278      0.00  read     65536
>>
>> cgroup-3
>>
>> T  Q  Bytes      Ops    Time(s)  Rate(MB/s)  IOPS    Latency(s)  %CPU  OP_Type  ReqSize
>> 0  1  72155136   1101   30.204   2.389       36.45   0.0274      0.00  read     65536
>> 0  1  72155136   1101   30.204   2.389       36.45   0.0274      0.00  read     65536
>> 1  1  72155136   1101   30.204   2.389       36.45   0.0274      0.00  read     65536
>>
>> xdd.linux -op write -targets 1 /dev/sdb -blocksize 512 -reqsize 128 -mbytes 1024 -timelimit 30 -verbose
>>
>> cgroup-1
>>
>> T  Q  Bytes      Ops    Time(s)  Rate(MB/s)  IOPS    Latency(s)  %CPU  OP_Type  ReqSize
>> 0  1  818610176  12491  30.031   27.258      415.93  0.0024      0.00  write    65536
>> 0  1  818610176  12491  30.031   27.258      415.93  0.0024      0.00  write    65536
>> 1  1  818610176  12491  30.031   27.258      415.93  0.0024      0.00  write    65536
>>
>> cgroup-2
>>
>> T  Q  Bytes      Ops    Time(s)  Rate(MB/s)  IOPS    Latency(s)  %CPU  OP_Type  ReqSize
>> 0  1  848494592  12947  30.066   28.221      430.62  0.0023      0.00  write    65536
>> 0  1  848494592  12947  30.066   28.221      430.62  0.0023      0.00  write    65536
>> 1  1  848494592  12947  30.066   28.221      430.62  0.0023      0.00  write    65536
>>
>> cgroup-3
>>
>> T  Q  Bytes      Ops    Time(s)  Rate(MB/s)  IOPS    Latency(s)  %CPU  OP_Type  ReqSize
>> 0  1  786563072  12002  30.078   26.151      399.03  0.0025      0.00  write    65536
>> 0  1  786563072  12002  30.078   26.151      399.03  0.0025      0.00  write    65536
>> 1  1  786563072  12002  30.078   26.151      399.03  0.0025      0.00  write    65536
>>
>> Best Regards,
>> Dong-Jae Kang
>>
>>
>> 2008/10/7 Andrea Righi <righi.andrea@...il.com>:
>>> The objective of the i/o controller is to improve i/o performance
>>> predictability of different cgroups sharing the same block devices.
>>>
>>> Compared to other priority/weight-based solutions, the approach used by this
>>> controller is to explicitly choke applications' requests that directly (or
>>> indirectly) generate i/o activity in the system.
>>>
>>> The direct bandwidth and/or iops limiting method has the advantage of improving
>>> the performance predictability at the cost of reducing, in general, the overall
>>> performance of the system (in terms of throughput).
>>>
>>> Detailed information about the design, its goals and usage is described in
>>> the documentation.
>>>
>>> Patchset against 2.6.27-rc5-mm1:
>>>
>>>  [PATCH 0/6] cgroup: block device i/o controller (v11)
>>>  [PATCH 1/6] i/o controller documentation
>>>  [PATCH 2/6] introduce ratelimiting attributes and functionality to res_counter
>>>  [PATCH 3/6] i/o controller infrastructure
>>>  [PATCH 4/6] memcg: interface to charge the right cgroup of asynchronous i/o activity
>>>  [PATCH 5/6] i/o controller instrumentation: accounting and throttling
>>>  [PATCH 6/6] export per-task i/o throttling statistics to userspace
>>>
>>> The all-in-one patch (and previous versions) can be found at:
>>> http://download.systemimager.org/~arighi/linux/patches/io-throttle/
>>>
>>> There are no significant changes with respect to v10; I've only
>>> implemented/fixed some suggestions I received.
>>>
>>> Changelog: (v10 -> v11)
>>>
>>> * report per block device i/o statistics (total bytes read/written and iops)
>>>  in blockio.stat for i/o limited cgroups
>>> * distinct bandwidth and iops statistics: both in blockio.throttlecnt and
>>>  /proc/PID/io-throttle-stat (suggested by David Radford)
>>> * merge res_counter_ratelimit functionality into res_counter, to avoid code
>>>  duplication (suggested by Paul Menage)
>>> * use kernel-doc style for documenting struct res_counter attributes
>>>  (suggested by Randy Dunlap)
>>> * updated documentation
>>>
>>> Thanks to all for the feedback!
>>> -Andrea
>



-- 
-------------------------------------------------------------------------------------------------
   DONG-JAE, KANG
   Senior Member of Engineering Staff
   Internet Platform Research Dept, S/W Content Research Lab
   Electronics and Telecommunications Research Institute(ETRI)
   138 Gajeongno, Yuseong-gu, Daejeon, 305-700 KOREA
   Phone : 82-42-860-1561 Fax : 82-42-860-6699
   Mobile : 82-10-9919-2353 E-mail : djkang@...i.re.kr (MSN)
-------------------------------------------------------------------------------------------------
