Message-ID: <4E764259.5070209@parallels.com>
Date: Sun, 18 Sep 2011 16:11:21 -0300
From: Glauber Costa <glommer@...allels.com>
To: "Kirill A. Shutemov" <kirill@...temov.name>
CC: <linux-kernel@...r.kernel.org>, <paul@...lmenage.org>,
<lizf@...fujitsu.com>, <kamezawa.hiroyu@...fujitsu.com>,
<ebiederm@...ssion.com>, <davem@...emloft.net>,
<gthelen@...gle.com>, <netdev@...r.kernel.org>,
<linux-mm@...ck.org>
Subject: Re: [PATCH v2 1/7] Basic kernel memory functionality for the Memory
Controller
On 09/18/2011 04:05 PM, Kirill A. Shutemov wrote:
> On Sun, Sep 18, 2011 at 12:39:12AM -0300, Glauber Costa wrote:
>>> No kernel memory accounting for root cgroup, right?
>> Not sure. Maybe kernel memory accounting is useful even for the root
>> cgroup, same as normal memory accounting... what we want to avoid is
>> kernel memory limits. OTOH, if we are not limiting it anyway, accounting
>> it is just useless overhead... Even then, the statistics could be
>> gathered through the proc files that show slab usage, I guess?
>
> It's better to leave root cgroup without accounting. At least for now.
> We can add it later if needed.
Fair.
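
Just to make the intent concrete, a rough sketch of what the charge side
could look like if we skip root entirely (memcg_kmem_charge is a made-up
name here; mem_cgroup_is_root() and res_counter_charge() are the existing
helpers):

static int memcg_kmem_charge(struct mem_cgroup *mem, unsigned long size)
{
	struct res_counter *fail;

	/* root cgroup: no limit is enforced, so skip the accounting
	 * overhead altogether and leave the counter untouched. */
	if (mem_cgroup_is_root(mem))
		return 0;

	return res_counter_charge(&mem->kmem, size, &fail);
}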
>>>
>>>> @@ -3979,6 +3999,10 @@ static u64 mem_cgroup_read(struct cgroup *cont, struct cftype *cft)
>>>>  		else
>>>>  			val = res_counter_read_u64(&mem->memsw, name);
>>>>  		break;
>>>> +	case _KMEM:
>>>> +		val = res_counter_read_u64(&mem->kmem, name);
>>>> +		break;
>>>> +
>>>
>>> Always zero in root cgroup?
>>
>> Yes, if we're not accounting, it should be zero. WARN_ON, maybe?
>
> -ENOSYS?
>
I'd personally prefer WARN_ON. From the userspace PoV it keeps good
symmetry to always be able to read a value out of the file. Also, if
something goes wrong and it is not zero for some reason, the warning will
help us find it.
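
Something like this on top of the hunk above, just as a sketch
(mem_cgroup_is_root() is the existing helper):

	case _KMEM:
		val = res_counter_read_u64(&mem->kmem, name);
		/* root is not accounted, so its counter should stay zero */
		WARN_ON(mem_cgroup_is_root(mem) && val != 0);
		break;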