Message-ID: <48D0C800.30207@oss.ntt.co.jp>
Date:	Wed, 17 Sep 2008 18:04:00 +0900
From:	Takuya Yoshikawa <yoshikawa.takuya@....ntt.co.jp>
To:	Andrea Righi <righi.andrea@...il.com>
CC:	Balbir Singh <balbir@...ux.vnet.ibm.com>,
	Paul Menage <menage@...gle.com>, agk@...rceware.org,
	akpm@...ux-foundation.org, axboe@...nel.dk, baramsori72@...il.com,
	Carl Henrik Lunde <chlunde@...g.uio.no>,
	dave@...ux.vnet.ibm.com, Divyesh Shah <dpshah@...gle.com>,
	eric.rannaud@...il.com, fernando@....ntt.co.jp,
	Hirokazu Takahashi <taka@...inux.co.jp>,
	Li Zefan <lizf@...fujitsu.com>,
	Marco Innocenti <m.innocenti@...eca.it>, matt@...ehost.com,
	ngupta@...gle.com, randy.dunlap@...cle.com, roberto@...it.it,
	Ryo Tsuruta <ryov@...inux.co.jp>,
	Satoshi UCHIDA <s-uchida@...jp.nec.com>,
	subrata@...ux.vnet.ibm.com, containers@...ts.linux-foundation.org,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCH -mm 0/5] cgroup: block device i/o controller (v9)

Hi,

Andrea Righi wrote:
> 
> TODO:
> 
> * Try to push down the throttling and implement it directly in the I/O
>   schedulers, using bio-cgroup (http://people.valinux.co.jp/~ryov/bio-cgroup/)
>   to keep track of the right cgroup context. This approach could lead to more
>   memory consumption and increase the number of dirty pages (pages that are
>   hard/slow to reclaim) in the system, since the dirty-page ratio in memory
>   is not limited. This could even lead to potential OOM conditions, but these
>   problems can be resolved directly within the memory cgroup subsystem
> 
> * Handle I/O generated by kswapd: at the moment there's no control over the
>   I/O generated by kswapd; try to use the page_cgroup functionality of the
>   memory cgroup controller to track this kind of I/O and charge the right
>   cgroup when pages are swapped in/out

Could you explain which cgroup should be charged when a swap-in or swap-out
occurs? Is there any difference between the following cases?

The target page is:
1. used as page cache and not mapped into any address space
2. used as page cache and mapped into some address space
3. not used as page cache but mapped into some address space

I do not think it is fair to charge the process for this kind of I/O; am I wrong?

> 
> * Improve fair throttling: distribute the time to sleep among all the tasks of
>   a cgroup that exceeded the I/O limits, depending on the amount of I/O
>   activity generated in the past by each task (see task_io_accounting)
> 
> * Try to reduce the cost of calling cgroup_io_throttle() on every submit_bio();
>   this is not very expensive, but the call to task_subsys_state() certainly
>   has a cost. A possible solution could be to temporarily account I/O in the
>   current task_struct and call cgroup_io_throttle() only every X MB of I/O,
>   or every Y I/O requests. Better if X and/or Y can be tuned at runtime by a
>   userspace tool
> 
> * Think about an alternative design for general-purpose usage; special-purpose
>   usage right now is restricted to improving I/O performance predictability
>   and evaluating more precise response timings for applications doing I/O. To
>   a large degree the block I/O bandwidth controller should implement more
>   complex logic to better evaluate the real cost of I/O operations, depending
>   also on the particular block device profile (i.e. USB stick, optical drive,
>   hard disk, etc.). This would also make it possible to appropriately account
>   the I/O cost of seeky workloads with respect to large streaming workloads.
>   Instead of looking at the request stream and trying to predict how expensive
>   the I/O will be, a totally different approach could be to collect request
>   timings (start time / elapsed time) and, based on the collected information,
>   try to estimate the I/O cost and usage
> 
> -Andrea
> 

Thanks,
Takuya Yoshikawa
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
