Message-ID: <641cca8561405679780a7afa4442e2a5.squirrel@webmail-b.css.fujitsu.com>
Date: Mon, 31 Aug 2009 23:36:12 +0900 (JST)
From: "KAMEZAWA Hiroyuki" <kamezawa.hiroyu@...fujitsu.com>
To: balbir@...ux.vnet.ibm.com
Cc: "KAMEZAWA Hiroyuki" <kamezawa.hiroyu@...fujitsu.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"nishimura@....nes.nec.co.jp" <nishimura@....nes.nec.co.jp>
Subject: Re: [RFC][PATCH 2/5] memcg: uncharge in batched manner
Balbir Singh wrote:
> * KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> [2009-08-31
> 21:14:10]:
>
>> Balbir Singh wrote:
>> >> > Does this affect deleting of a group and delay it by a large amount?
>> >> >
>> >> Please see what cgroup_release_and_xxxx fixed. This is not for delay
>> >> but for a race condition, which makes rmdir sleep permanently.
>> >>
>> >
>> > I've seen those patches, where rmdir() can hang. My concern was the time
>> > elapsed between css_get() and cgroup_release_and_wake_rmdir().
>> >
>> Please read the unmap() and truncate() code.
>> The number of pages handled without cond_resched() is limited.
>>
>>
>
> I understand that part; I was referring to tasks stuck doing rmdir()
> while we do batched uncharge. Will it be very visible to the end user?
truncate/invalidate etc. are done in chunks of pagevec size.
At the moment that is 14, so batched uncharge is done per 14 pages, IIUC.
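
(For illustration, a minimal sketch of what "uncharge in a batched manner"
per such a chunk could look like. This is not the code of the RFC patch;
the uncharge_batch structure and helper names are hypothetical, and the
direct use of mem_cgroup->res assumes code living inside mm/memcontrol.c.)

/*
 * Hypothetical sketch only -- not the code from this RFC patch.
 * Idea: while one truncate/unmap chunk is processed, accumulate the
 * uncharges and hit the res_counter once per chunk (14 pages with the
 * current PAGEVEC_SIZE) instead of once per page.
 */
struct uncharge_batch {
	struct mem_cgroup *memcg;	/* group the pages belonged to */
	unsigned long nr_pages;		/* pages accumulated in this chunk */
};

/* flush: one res_counter update covers the whole accumulated chunk */
static void uncharge_batch_flush(struct uncharge_batch *b)
{
	if (b->memcg && b->nr_pages)
		res_counter_uncharge(&b->memcg->res, b->nr_pages * PAGE_SIZE);
	b->nr_pages = 0;
}

/* called for each page instead of uncharging it immediately */
static void uncharge_batch_add(struct uncharge_batch *b,
			       struct mem_cgroup *memcg)
{
	if (b->memcg != memcg) {	/* pages from another group: flush first */
		uncharge_batch_flush(b);
		b->memcg = memcg;
	}
	b->nr_pages++;
}

Started before a chunk and flushed after it, such a batch touches the
res_counter once per 14-page chunk rather than once per page, which is
presumably the intent of batching the uncharges.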
> cond_resched() is bad in this case, since it means we'll stay longer
> before we release the cgroup.
cond_resched() is the caller's matter. It is not related to memcg, because
we don't call it.
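
(For context, a sketch of the caller-side pattern referred to above,
loosely modeled on truncate_inode_pages_range() in mm/truncate.c of that
era; locking and writeback handling are omitted, and `mapping` and `start`
are assumed to be in scope. Pages are walked in pagevec-sized chunks, and
it is this caller that invokes cond_resched() between chunks, not the
memcg uncharge path.)

	struct pagevec pvec;
	pgoff_t next = start;
	int i;

	pagevec_init(&pvec, 0);
	while (pagevec_lookup(&pvec, mapping, next, PAGEVEC_SIZE)) {
		for (i = 0; i < pagevec_count(&pvec); i++) {
			struct page *page = pvec.pages[i];

			next = page->index + 1;
			/* per-page work; the memcg uncharge happens in here */
			truncate_complete_page(mapping, page);
		}
		pagevec_release(&pvec);
		/* rescheduling point between 14-page chunks, provided by
		 * the caller, not by memcg */
		cond_resched();
	}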
Thanks,
-Kame
>
>
> --
> Balbir
>