Date:	Fri, 14 Jan 2011 21:28:09 +0900
From:	Hiroyuki Kamezawa <kamezawa.hiroyuki@...il.com>
To:	Johannes Weiner <hannes@...xchg.org>
Cc:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"nishimura@....nes.nec.co.jp" <nishimura@....nes.nec.co.jp>,
	"balbir@...ux.vnet.ibm.com" <balbir@...ux.vnet.ibm.com>,
	Greg Thelen <gthelen@...gle.com>, aarcange@...hat.com,
	"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>
Subject: Re: [PATCH 4/4] [BUGFIX] fix account leak at force_empty, rmdir with THP

2011/1/14 Johannes Weiner <hannes@...xchg.org>:
> On Fri, Jan 14, 2011 at 07:15:35PM +0900, KAMEZAWA Hiroyuki wrote:
>>
>> Now, when THP is enabled, memcg's rmdir() function is broken
>> because move_account() for THP pages is not supported.
>>
>> This causes an account leak or an -EBUSY failure at rmdir().
>> This patch fixes the issue by supporting move_account() for THP pages.
>>
>> With this, account information is moved to the parent at rmdir().
>>
>> How to test:
>>    mount -t cgroup none /cgroup/memory/ -o memory
>>    mkdir /cgroup/memory/A
>>    mkdir /cgroup/memory/A/B
>>    cgexec -g memory:A/B ./malloc 128 &
>>    grep anon /cgroup/memory/A/B/memory.stat
>>    grep rss /cgroup/memory/A/B/memory.stat
>>    echo 1728 > /cgroup/memory/A/tasks
>>    grep rss /cgroup/memory/A/memory.stat
>>    rmdir /cgroup/memory/A/B/
>>    grep rss /cgroup/memory/A/memory.stat
>>
>> - Create a 2-level directory hierarchy and exec a task that calls malloc() with a big chunk.
>> - Move the task elsewhere (to its parent cgroup in the above).
>> - rmdir /A/B
>> - Check that the memory.stat accounting of /A/B is moved to /A after rmdir, and confirm
>>   that the RSS/LRU information includes the usage that was charged against /A/B.
>>
>> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
>> ---
>>  mm/memcontrol.c |   32 ++++++++++++++++++++++----------
>>  1 file changed, 22 insertions(+), 10 deletions(-)
>>
>> Index: mmotm-0107/mm/memcontrol.c
>> ===================================================================
>> --- mmotm-0107.orig/mm/memcontrol.c
>> +++ mmotm-0107/mm/memcontrol.c
>> @@ -2154,6 +2154,10 @@ void mem_cgroup_split_huge_fixup(struct
>>       smp_wmb(); /* see __commit_charge() */
>>       SetPageCgroupUsed(tpc);
>>       VM_BUG_ON(PageCgroupCache(hpc));
>> +     /*
>> +      * Note: if dirty ratio etc. are supported,
>> +      * other flags may need to be copied.
>> +      */
>
> That's a good comment, but it should go in the patch that introduces
> this function; it is a bit out of place in this one.
>
Ok, I'll remove this. It was meant as a heads-up for Greg ;)

>>  }
>>  #endif
>>
>> @@ -2175,8 +2179,11 @@ void mem_cgroup_split_huge_fixup(struct
>>   */
>>
>>  static void __mem_cgroup_move_account(struct page_cgroup *pc,
>> -     struct mem_cgroup *from, struct mem_cgroup *to, bool uncharge)
>> +     struct mem_cgroup *from, struct mem_cgroup *to, bool uncharge,
>> +     int charge_size)
>>  {
>> +     int pagenum = charge_size >> PAGE_SHIFT;
>
> nr_pages?
>
Ok, I'll rename pagenum to nr_pages.


>> +
>>       VM_BUG_ON(from == to);
>>       VM_BUG_ON(PageLRU(pc->page));
>>       VM_BUG_ON(!page_is_cgroup_locked(pc));
>> @@ -2190,14 +2197,14 @@ static void __mem_cgroup_move_account(st
>>               __this_cpu_inc(to->stat->count[MEM_CGROUP_STAT_FILE_MAPPED]);
>>               preempt_enable();
>>       }
>> -     mem_cgroup_charge_statistics(from, PageCgroupCache(pc), -1);
>> +     mem_cgroup_charge_statistics(from, PageCgroupCache(pc), -pagenum);
>>       if (uncharge)
>>               /* This is not "cancel", but cancel_charge does all we need. */
>> -             mem_cgroup_cancel_charge(from, PAGE_SIZE);
>> +             mem_cgroup_cancel_charge(from, charge_size);
>>
>>       /* caller should have done css_get */
>>       pc->mem_cgroup = to;
>> -     mem_cgroup_charge_statistics(to, PageCgroupCache(pc), 1);
>> +     mem_cgroup_charge_statistics(to, PageCgroupCache(pc), pagenum);
>>       /*
>>        * We charges against "to" which may not have any tasks. Then, "to"
>>        * can be under rmdir(). But in current implementation, caller of
>> @@ -2212,7 +2219,8 @@ static void __mem_cgroup_move_account(st
>>   * __mem_cgroup_move_account()
>>   */
>>  static int mem_cgroup_move_account(struct page_cgroup *pc,
>> -             struct mem_cgroup *from, struct mem_cgroup *to, bool uncharge)
>> +             struct mem_cgroup *from, struct mem_cgroup *to,
>> +             bool uncharge, int charge_size)
>>  {
>>       int ret = -EINVAL;
>>       unsigned long flags;
>> @@ -2220,7 +2228,7 @@ static int mem_cgroup_move_account(struc
>>       lock_page_cgroup(pc);
>>       if (PageCgroupUsed(pc) && pc->mem_cgroup == from) {
>>               move_lock_page_cgroup(pc, &flags);
>> -             __mem_cgroup_move_account(pc, from, to, uncharge);
>> +             __mem_cgroup_move_account(pc, from, to, uncharge, charge_size);
>>               move_unlock_page_cgroup(pc, &flags);
>>               ret = 0;
>>       }
>> @@ -2245,6 +2253,7 @@ static int mem_cgroup_move_parent(struct
>>       struct cgroup *cg = child->css.cgroup;
>>       struct cgroup *pcg = cg->parent;
>>       struct mem_cgroup *parent;
>> +     int charge_size = PAGE_SIZE;
>>       int ret;
>>
>>       /* Is ROOT ? */
>> @@ -2256,16 +2265,19 @@ static int mem_cgroup_move_parent(struct
>>               goto out;
>>       if (isolate_lru_page(page))
>>               goto put;
>> +     /* The page is isolated from LRU and we have no race with splitting */
>> +     if (PageTransHuge(page))
>> +             charge_size = PAGE_SIZE << compound_order(page);
>
> The same as in the previous patch: compound_order() implicitly
> handles order-0 pages and should do the right thing without an extra
> check.
>
Sure.

> The comment is valuable, though!
>
> Nitpicks aside:
> Acked-by: Johannes Weiner <hannes@...xchg.org>

Thank you for the quick review!
An updated version will be posted next week after some more testing.

Regards,
-Kame
