Date:	Wed, 4 Jul 2012 20:24:27 +0800
From:	Wanpeng Li <liwp.linux@...il.com>
To:	Johannes Weiner <hannes@...xchg.org>
Cc:	Michal Hocko <mhocko@...e.cz>,
	Balbir Singh <bsingharora@...il.com>,
	AndrewMorton <akpm@...ux-foundation.org>,
	Eric Dumazet <eric.dumazet@...il.com>,
	Mike Frysinger <vapier@...too.org>,
	Arun Sharma <asharma@...com>, linux-kernel@...r.kernel.org,
	cgroups@...r.kernel.org, Wanpeng Li <liwp.linux@...il.com>
Subject: Re: [PATCH 1/5 v2] memcg: replace unsigned long by u64 to avoid
 overflow

On Mon, Jun 25, 2012 at 05:30:11PM +0900, Kamezawa Hiroyuki wrote:
>(2012/06/25 16:52), Johannes Weiner wrote:
>>On Mon, Jun 25, 2012 at 02:04:20PM +0800, Wanpeng Li wrote:
>>>Changelog:
>>>
>>>V1 -> V2:
>>>* fix: zone_page_state() in include/linux/vmstat.h returns 'unsigned long'
>>>
>>>From: Wanpeng Li <liwp@...ux.vnet.ibm.com>
>>>
>>>The local variables holding the return values in mem_cgroup_zone_nr_lru_pages
>>>and mem_cgroup_node_nr_lru_pages are u64, so change the return type of these
>>>functions to u64 to avoid overflow.
>>>
>>>Signed-off-by: Wanpeng Li <liwp.linux@...il.com>
>>>---
>>>  include/linux/vmstat.h |    2 +-
>>>  mm/memcontrol.c        |    5 ++---
>>>  2 files changed, 3 insertions(+), 4 deletions(-)
>>>
>>>diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
>>>index 65efb92..6a14291 100644
>>>--- a/include/linux/vmstat.h
>>>+++ b/include/linux/vmstat.h
>>>@@ -106,7 +106,7 @@ static inline unsigned long global_page_state(enum zone_stat_item item)
>>>  	return x;
>>>  }
>>>
>>>-static inline unsigned long zone_page_state(struct zone *zone,
>>>+static inline u64 zone_page_state(struct zone *zone,
>>>  					enum zone_stat_item item)
>>>  {
>>>  	long x = atomic_long_read(&zone->vm_stat[item]);
>>
>>We established that there is no known reason to use u64 for page
>>counters and that IF YOU HAD ONE, you should obviously say so and then
>>do a wholesale conversion.  But I don't think you have one.
>>
>>This patch makes absolutely no sense.
>>
>I agree. Then, Nack from me.
>
>Thanks,
>-Kame

static unsigned long
mem_cgroup_node_nr_lru_pages(struct mem_cgroup *memcg,
			int nid, unsigned int lru_mask)
{
	u64 total = 0;
	int zid;

	for (zid = 0; zid < MAX_NR_ZONES; zid++)
		total += mem_cgroup_zone_nr_lru_pages(memcg,
					    nid, zid, lru_mask);

	return total;
}

Since you use unsigned long to calculate nr_pages and unsigned long long
to calculate bytes, the u64 in mem_cgroup_node_nr_lru_pages should be
replaced by unsigned long to save kernel stack space, right?
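
If so, a minimal sketch of that change would look like the following (same
body as the function quoted above, with only the accumulator type adjusted;
not compile-tested, just to illustrate the idea):

static unsigned long
mem_cgroup_node_nr_lru_pages(struct mem_cgroup *memcg,
			int nid, unsigned int lru_mask)
{
	/* Page counts fit in unsigned long; u64 is only needed once the
	 * value is converted to bytes (nr_pages * PAGE_SIZE). */
	unsigned long total = 0;
	int zid;

	for (zid = 0; zid < MAX_NR_ZONES; zid++)
		total += mem_cgroup_zone_nr_lru_pages(memcg,
					    nid, zid, lru_mask);

	return total;
}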

Regards,
Wanpeng Li

