Message-ID: <BANLkTimQsZYTb=n4B-U2BZJcjBbL+DwR8Q@mail.gmail.com>
Date:	Thu, 24 Mar 2011 18:51:24 -0700
From:	Justin TerAvest <teravest@...gle.com>
To:	Vivek Goyal <vgoyal@...hat.com>
Cc:	jaxboe@...ionio.com, m-ikeda@...jp.nec.com, ryov@...inux.co.jp,
	taka@...inux.co.jp, kamezawa.hiroyu@...fujitsu.com,
	righi.andrea@...il.com, guijianfeng@...fujitsu.com,
	balbir@...ux.vnet.ibm.com, ctalbott@...gle.com,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC] [PATCH v2 0/8] Provide cgroup isolation for buffered writes.

On Thu, Mar 24, 2011 at 6:56 AM, Vivek Goyal <vgoyal@...hat.com> wrote:
> On Wed, Mar 23, 2011 at 03:32:51PM -0700, Justin TerAvest wrote:
>
> [..]
>> > Ok, in the past I tried it with 2 cgroups (running dd inside these
>> > cgroups) and had no success. I am wondering what has changed.
>>
>> It could just be a difference in workload, or dd size, or filesystem?
>
> Once you have sorted out the bug and I can boot my system, I will test
> it to see what's happening.

Thanks. I'll send out version 3 of the patch (and organize my
current results better in the cover letter), as soon as I have the
problem resolved. I see a very similar panic during fsck that I'm
trying to track down.
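
For reference, the two-cgroup dd test being discussed is roughly the
following; a sketch assuming the cgroup v1 blkio controller, with the
mount point, group names, device, and sizes purely illustrative:

  # Mount the blkio controller and create two groups with asymmetric
  # proportional weights (the CFQ weight range is 100-1000)
  mount -t cgroup -o blkio none /cgroup/blkio
  mkdir /cgroup/blkio/grp1 /cgroup/blkio/grp2
  echo 1000 > /cgroup/blkio/grp1/blkio.weight
  echo 100  > /cgroup/blkio/grp2/blkio.weight

  # One buffered writer per group ($$ inside sh -c is that shell's own
  # PID, so each dd lands in the right group); with working isolation,
  # grp1 should see proportionally more of the disk time
  sh -c 'echo $$ > /cgroup/blkio/grp1/tasks;
         exec dd if=/dev/zero of=/mnt/test/f1 bs=1M count=1024' &
  sh -c 'echo $$ > /cgroup/blkio/grp2/tasks;
         exec dd if=/dev/zero of=/mnt/test/f2 bs=1M count=1024' &
  wait; sync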

>
>>
>> >
>> > In the past, a high-priority throttled process could very well try to
>> > pick up an inode from a low-priority cgroup, start writing it, and get
>> > blocked. I believe a similar thing should happen now.
>>
>> You're right that it's very dependent on which inodes get picked up
>> by writeback, and when.
>
> So then the above results should not be reproducible consistently? Are you
> using additional patches internally to make this work reliably? My
> concern is that it will not make sense to have half-baked pieces
> in the upstream kernel. That's why I was hoping that this piece could go in
> once we have sorted out the following:

I am not using additional patches to make this work reliably. However,
I've noticed the behavior is better with some filesystems than others.
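
A rough way to probe the inversion Vivek describes above: saturate and
hard-throttle one group's buffered writes, then watch the latency of a
small synced write from the high-weight group. The device number,
limit, and paths are illustrative, and whether the stall reproduces
will depend on kernel version and filesystem, per the above:

  # Throttle grp2's writes to the disk (format "MAJ:MIN bytes_per_sec";
  # 8:0 is illustrative) and dirty a lot of data from inside it
  echo "8:0 1048576" > /cgroup/blkio/grp2/blkio.throttle.write_bps_device
  sh -c 'echo $$ > /cgroup/blkio/grp2/tasks;
         exec dd if=/dev/zero of=/mnt/test/low bs=1M count=2048' &

  # From the high-weight group, do a tiny buffered write plus fdatasync;
  # dd's transfer statistics report the elapsed time. If this task ends
  # up writing (or waiting behind) grp2's throttled inodes, its latency
  # blows up despite grp1's weight
  sh -c 'echo $$ > /cgroup/blkio/grp1/tasks;
         exec dd if=/dev/zero of=/mnt/test/high bs=4k count=1 conv=fdatasync'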

>
> - IO-less throttling
> - Per-cgroup dirty ratio
> - Some work w.r.t. cgroup-aware writeback
>
> In fact, cgroup-aware writeback can be part of this patch series once
> the first two pieces are in the kernel.
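
For context, the global limits live in /proc/sys/vm/dirty_ratio and
/proc/sys/vm/dirty_background_ratio today; a per-cgroup dirty ratio
along the lines of Andrea Righi's proposed memcg dirty-memory patches
would add per-group knobs. The file names below follow that proposal
and are not an upstream interface:

  # Global dirty limits that apply system-wide today
  cat /proc/sys/vm/dirty_ratio /proc/sys/vm/dirty_background_ratio

  # Proposed per-cgroup equivalents (not merged; names may change)
  echo 10 > /cgroup/memory/grp1/memory.dirty_ratio
  echo 5  > /cgroup/memory/grp1/memory.dirty_background_ratio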

I understand your preference for a complete, reliable solution for all
of this. Right now, I'm concerned that it's hard to tie all these
systems together, and I haven't seen an overall plan for how they fit
together. If this patchset works reliably for enough workloads, we'll
see isolation improve further as writeback becomes more cgroup-aware.

>
> Thanks
> Vivek
>