Message-ID: <CAKTCnznMGsrpqeRCRTPQApB84GD8FGUPL0fSK9JGysgYKtwVnA@mail.gmail.com>
Date:	Tue, 15 May 2012 16:37:17 +0530
From:	Balbir Singh <bsingharora@...il.com>
To:	Andre Nathan <andre@...irati.com.br>
Cc:	linux-kernel@...r.kernel.org, balbir@...ux.vnet.ibm.com
Subject: Re: About cgroup memory limits

On Tue, May 15, 2012 at 4:36 PM, Balbir Singh <bsingharora@...il.com> wrote:
>
>
> On Thu, May 10, 2012 at 12:07 AM, Andre Nathan <andre@...irati.com.br>
> wrote:
>>
>> Hello
>>
>> I'm doing some tests with LXC and how it interacts with the memory
>> cgroup limits, more specifically the memory.limit_in_bytes control file.
>>
>> Am I correct in my understanding of the memory cgroup documentation[1]
>> that the limit set in memory.limit_in_bytes is applied to the sum of the
>> fields 'cache', 'rss' and 'mapped_file' in the memory.stat file?
>>
>> I am also trying to understand the values reported in memory.stat when
>> compared to the statistics in /proc/$PID/statm.
>>
>> Below is the sum of each field in /proc/$PID/statm for every process
>> running inside a test container, converted to bytes:
>>
>>       size  resident     share     text  lib       data  dt
>>  897208320  28741632  20500480  1171456    0  170676224   0
>>
>> Compare this with the usage reports from memory.stat (fields total_*,
>> hierarchical_* and pg* omitted):
>>
>> cache                     16834560
>> rss                       8192000
>> mapped_file               3743744
>> swap                      0
>> inactive_anon             0
>> active_anon               8192000
>> inactive_file             13996032
>> active_file               2838528
>> unevictable               0
>>
>> Is there a way to reconcile these numbers somehow? I understand that the
>> fields from the two files represent different things. What I'm trying to
>> do is to combine, for example, the fields from memory.stat to
>> approximately reach what is displayed by statm.
>>
>
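The per-process statm sums quoted above can be reproduced with a short helper. A minimal sketch in Python — the field layout follows Documentation/filesystems/proc.txt; the 4 KiB page size and the sample input values are assumptions for illustration:

```python
# Sketch: parse /proc/$PID/statm lines (seven page counts per process)
# and sum them across a container's processes, converted to bytes.
# PAGE_SIZE of 4096 is an assumption; the real value comes from
# sysconf(_SC_PAGESIZE).
PAGE_SIZE = 4096

FIELDS = ("size", "resident", "share", "text", "lib", "data", "dt")

def parse_statm(line):
    """Parse one statm line into a field -> bytes mapping."""
    pages = [int(tok) for tok in line.split()]
    return {name: n * PAGE_SIZE for name, n in zip(FIELDS, pages)}

def sum_statm(lines):
    """Sum the statm fields of several processes, in bytes."""
    total = dict.fromkeys(FIELDS, 0)
    for line in lines:
        for name, val in parse_statm(line).items():
            total[name] += val
    return total

# Illustrative input for two processes (values are made up):
sample = ["1000 200 50 10 0 300 0", "2000 400 100 20 0 600 0"]
print(sum_statm(sample))
```

In practice the input lines would be read from /proc/$PID/statm for each PID listed in the cgroup's tasks file.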

Resending; the previous copy had plain-text formatting issues (sorry).

> cgroups accounting is different (sorry for that) from statm. From
> Documentation/filesystems/proc.txt
>
> Table 1-3: Contents of the statm files (as of 2.6.8-rc3)
>
> ..............................................................................
>  Field    Content
>  size     total program size (pages)            (same as VmSize in status)
>  resident size of memory portions (pages)       (same as VmRSS in status)
>  shared   number of pages that are shared       (i.e. backed by a file)
>  trs      number of pages that are 'code'       (not including libs; broken,
>                                                  includes data segment)
>  lrs      number of pages of library            (always 0 on 2.6)
>  drs      number of pages of data/stack         (including libs; broken,
>                                                  includes library text)
>  dt       number of dirty pages                 (always 0 on 2.6)
>
> ..............................................................................
>
> VmRSS accounting is different from RSS accounting in cgroups. I presume
> you acquired this data from processes running in the cgroup? What does a
> cat of the tasks file within the cgroup show you? Ideally you want to make
> sure there is exactly one task inside the cgroup to compare against
> /proc/$PID/statm, and that the data is collected from a task outside that
> cgroup.
>
> Balbir
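The single-task comparison suggested above could be sketched as follows. This is a hypothetical illustration: the parsing helpers mirror the file formats of the cgroup tasks file, /proc/$PID/statm, and memory.stat, and the sample file contents (taken loosely from the numbers in this thread) are assumptions:

```python
# Sketch of the suggested check: confirm the cgroup holds exactly one
# task, then compare that task's statm resident size (in pages) against
# the cgroup's rss counter (already in bytes).
def read_single_task(tasks_text):
    """Return the lone PID from a cgroup tasks file, or raise."""
    pids = tasks_text.split()
    if len(pids) != 1:
        raise ValueError("expected exactly one task, got %d" % len(pids))
    return int(pids[0])

def rss_bytes_from_statm(statm_line, page_size=4096):
    """Field 2 of statm is 'resident', counted in pages."""
    return int(statm_line.split()[1]) * page_size

def rss_from_memory_stat(stat_text):
    """Extract the 'rss' counter (in bytes) from memory.stat text."""
    for line in stat_text.splitlines():
        key, _, val = line.partition(" ")
        if key == "rss":
            return int(val)
    raise KeyError("rss not found in memory.stat")

# Illustrative comparison; file contents are made up:
pid = read_single_task("4242\n")
statm_rss = rss_bytes_from_statm("219000 7017 5005 286 0 41670 0")
cgroup_rss = rss_from_memory_stat("cache 16834560\nrss 8192000\n")
print(pid, statm_rss, cgroup_rss)
```

With a single task in the cgroup, any remaining gap between the two rss values reflects the accounting differences described above (e.g. shared pages charged to whichever cgroup touched them first), rather than aggregation across processes.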