Message-ID: <49E8679D.8010405@oss.ntt.co.jp>
Date:	Fri, 17 Apr 2009 20:27:25 +0900
From:	Fernando Luis Vázquez Cao 
	<fernando@....ntt.co.jp>
To:	Ryo Tsuruta <ryov@...inux.co.jp>
CC:	kamezawa.hiroyu@...fujitsu.com, yoshikawa.takuya@....ntt.co.jp,
	righi.andrea@...il.com, menage@...gle.com,
	balbir@...ux.vnet.ibm.com, guijianfeng@...fujitsu.com,
	agk@...rceware.org, akpm@...ux-foundation.org, axboe@...nel.dk,
	baramsori72@...il.com, chlunde@...g.uio.no,
	dave@...ux.vnet.ibm.com, dpshah@...gle.com, eric.rannaud@...il.com,
	taka@...inux.co.jp, lizf@...fujitsu.com, matt@...ehost.com,
	dradford@...ehost.com, ngupta@...gle.com, randy.dunlap@...cle.com,
	roberto@...it.it, s-uchida@...jp.nec.com,
	subrata@...ux.vnet.ibm.com, containers@...ts.linux-foundation.org,
	linux-kernel@...r.kernel.org
Subject: Block I/O tracking (was Re: [PATCH 3/9] bio-cgroup controller)

Ryo Tsuruta wrote:
> Hi,
> 
> From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
> Date: Fri, 17 Apr 2009 11:24:33 +0900
> 
>> On Fri, 17 Apr 2009 10:49:43 +0900
>> Takuya Yoshikawa <yoshikawa.takuya@....ntt.co.jp> wrote:
>>
>>> Hi,
>>>
>>> I have a few questions.
>>>    - I have not yet fully understood how your controller is using
>>>      bio_cgroup. If my view is wrong, please tell me.
>>>
>>> o In my view, bio_cgroup's implementation strongly depends on
>>>    page_cgroup's. Could you explain for what purpose this
>>>    functionality itself should be implemented as a cgroup subsystem?
>>>    Would using page_cgroup and implementing tracking APIs not be enough?
>> I'll definitely NACK adding the full bio-cgroup members to page_cgroup.
>> page_cgroup is currently 40 bytes (on 64-bit arches), and all of it is
>> allocated at boot time, just like the memmap (and adding a member to
>> struct page is much harder ;).
>>
>> IIUC, the "bio tracking" feature is only necessary for pages doing I/O,
>> so I think it's much better to add that information to struct bio, not
>> to the page. But if people want to add a "small hint" to struct page or
>> struct page_cgroup for tracking buffered I/O, I'll help as much as I can.
>> Using the unused bits in page_cgroup->flags may be a choice with no
>> overhead.
> 
> In the case where the bio-cgroup data is allocated dynamically,
>    - Sometimes quite a large amount of memory gets marked dirty.
>      In this case it requires more kernel memory than the
>      current implementation does.
>    - The operation is expensive due to memory allocations and
>      exclusive control such as spinlocks.
> 
> In the case where the bio-cgroup data is allocated lazily,
>   - It makes the operation complicated and expensive, because
>     sometimes a bio has to be created in the context of another
>     process, such as during aio or swap-out.
> 
> I'd prefer a simple and lightweight implementation. Unlike the
> memory controller, bio-cgroup only needs 4 bytes per page. The
> reason bio-cgroup chose this approach is to minimize the overhead.

Elaborating on Yoshikawa-san's comment, I would like to propose a
generic I/O tracking mechanism that is not tied to all the cgroup
paraphernalia. This approach has several advantages:

- By using this functionality, existing I/O schedulers (with some
relatively minor changes) would be able to schedule buffered I/O
properly.

- The amount of memory consumed to do the tracking could be
optimized according to the kernel configuration (do we really
need struct page_cgroup when the cgroup memory controller or all
of the cgroup infrastructure has been configured out?).

The I/O tracking functionality would look something like the following:

- Create an API to acquire the I/O context of a certain page, which is
cgroup independent. For discussion purposes, I will assume that the
I/O context of a page is the io_context of the task that dirtied the
page (this can be changed if deemed necessary, though).
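
Roughly, the API could look like this (a sketch only; the names below
are made up for illustration and do not exist in the kernel today):

/*
 * Hypothetical tracking API: record which io_context dirtied a
 * page, and let the block layer query it later when the page is
 * written back.
 */
struct io_context *get_page_io_context(struct page *page);
void set_page_io_context(struct page *page, struct io_context *ioc);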

- When cgroups are not being used, pages would be tracked using a
pfn-indexed array of struct io_context (à la memcg's array of
struct page_cgroup).
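
As a simplified sketch (in reality the table would be allocated per
memory section, the way memcg handles page_cgroup; names are made up):

/* one entry per pfn, sized at boot */
static struct io_context **page_ioc_map;

static inline struct io_context *page_io_context(unsigned long pfn)
{
	return page_ioc_map[pfn];
}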

- When cgroups are activated but the memory controller is not, we
would have a pfn-indexed array of struct blkio_cgroup, which would
have both a pointer to the corresponding io_context of the page and a
reference to the cgroup it belongs to (most likely using css_id). The
API offered by the I/O tracking mechanism would be extended so that
the kernel can easily obtain not only the per-task io_context but also
the cgroup a certain page belongs to. Please note that with this we
have all the information we need to schedule buffered I/O at both the
cgroup level and the task level. From the memory-usage point of view,
the memory-controller-specific bits would be gone, and on top of that
we would save one level of indirection (since struct page_cgroup would
be out of the picture).
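
The tracking entry for this case could be as small as this (field
names are made up; the ID would come from css_id()):

struct blkio_cgroup {
	struct io_context *ioc;	/* io_context of the dirtying task */
	unsigned short css_id;	/* cgroup the page belongs to */
};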

- When the memory controller is active we would have the
pfn-indexed array of struct page_cgroup we have now, plus a
reference to the corresponding cgroup and io_context (yes, I
still want to do proper scheduling of buffered I/O within a
cgroup).
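
That is, struct page_cgroup would simply grow the two tracking fields,
along these lines (the first four members are what page_cgroup already
contains, hence Kamezawa-san's 40 bytes on 64-bit):

struct page_cgroup {
	unsigned long flags;
	struct mem_cgroup *mem_cgroup;
	struct page *page;
	struct list_head lru;
	/* proposed additions for I/O tracking: */
	struct io_context *ioc;
	unsigned short css_id;
};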

- Finally, since a bio entering the block layer can generate
additional bios, it is necessary to pass the I/O context information
of the original bio down to the new bios. For that, stacking devices
such as dm and others of that ilk will have to be modified. To improve
performance, the I/O context information would be cached in the bios
themselves (to achieve this we have to ensure that all bios entering
the block layer have the right I/O context information attached to
them).
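
For example, a stacking driver cloning a bio would carry the context
over along these lines (bi_io_context is a proposed field, not an
existing member of struct bio):

static void bio_copy_io_context(struct bio *dst, struct bio *src)
{
	/* io_context reference counting omitted for brevity */
	dst->bi_io_context = src->bi_io_context;
}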

Yoshikawa-san and I have been working on a patch set that implements
just this, and we have reached the point where the kernel does not
panic right after booting :), so we will be sending patches soon
(hopefully this weekend).

Any thoughts?

Regards,

Fernando