Message-Id: <1222098450.8533.41.camel@nimitz>
Date:	Mon, 22 Sep 2008 08:47:30 -0700
From:	Dave Hansen <dave@...ux.vnet.ibm.com>
To:	kamezawa.hiroyu@...fujitsu.com
Cc:	linux-mm@...ck.org, balbir@...ux.vnet.ibm.com,
	nishimura@....nes.nec.co.jp, xemul@...nvz.org,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: Re: [PATCH 9/13] memcg: lookup page cgroup (and remove pointer
	from struct page)

On Tue, 2008-09-23 at 00:14 +0900, kamezawa.hiroyu@...fujitsu.com wrote:
> >Basing it on max_pfn makes me nervous because of what it will do on
> >machines with very sparse memory.  Is this like sparsemem where the
> >structure can be small enough to actually span all of physical memory,
> >or will it be a large memory user?
> >
> I admit this calculation is too simplistic. Hmm, basing it on
> totalram_pages would be better. OK.

No, I was setting a trap. ;)

If you use totalram_pages, I'll just complain that it doesn't work if a
memory hotplug machine drastically changes its size.  You'll end up with
pretty darn big hash buckets.

You basically can't get away from the fact that you (potentially) have
really sparse physical addresses to deal with here.  Using a hash table
is essentially the same as using an array the way sparsemem does, except
that you index into it with a hash instead of straight arithmetic.

My gut says that you'll need to do exactly what sparsemem did here,
which is to at *least* have a two-level lookup before you get to any
linear search.  The two-level lookup also makes the hotplug problem
easier.

As I look at this, I always have to bounce between these tradeoffs:

1. deal with sparse address spaces (keeps you from using max_pfn)
2. scale as that sparse address space has memory hotplugged into it
   (keeps you from using boot-time present_pages)
3. deal with performance impacts from new data structures created to
   deal with the other two :)
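
For 1 and 2 together, the nice property of the two-level sketch above is
that memory hotplug only has to populate (or free) one second-level
block.  Something like this (again, invented names, error handling and
locking skipped):

#include <linux/vmalloc.h>
#include <linux/string.h>

/* called from the memory hotplug notifier for a newly-added section */
static int pc_section_add(unsigned long start_pfn)
{
	unsigned long idx = start_pfn >> PC_SECTION_SHIFT;
	size_t sz = sizeof(struct page_cgroup) << PC_SECTION_SHIFT;

	if (pc_section[idx])		/* already populated */
		return 0;

	pc_section[idx] = vmalloc(sz);
	if (!pc_section[idx])
		return -ENOMEM;
	memset(pc_section[idx], 0, sz);
	return 0;
}

Nothing global has to be resized or rehashed when the machine grows,
which is exactly what bites a hash table sized at boot.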

> >Can you lay out how much memory this will use on a machine like Dave
> >Miller's which has 1GB of memory at 0x0 and 1GB of memory at 1TB up in
> >the address space?
> 
> >Also, how large do the hash buckets get in the average case?
> >
> On my 48GB box, the hash table was 16384 bytes (from the dmesg log).
> (Section size was 128MB.)

I'm wondering how long the linear searches of those hlists get.
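
i.e. if the lookup boils down to something like the following (I'm
guessing at the shape of your patch here, all names invented, and using
the same stand-in struct page_cgroup as in my sketch above), then every
pfn that hashes into a fat bucket pays a pointer chase per entry:

#include <linux/hash.h>
#include <linux/list.h>

#define PCG_CHUNK_SHIFT	15				/* one chunk per 128MB section */
#define PCG_CHUNK_MASK	((1UL << PCG_CHUNK_SHIFT) - 1)

struct pcg_chunk {					/* invented; covers one section */
	struct hlist_node	hash;
	unsigned long		start_pfn;
	struct page_cgroup	*map;
};

static struct hlist_head *pcg_hash;			/* the 16384-byte table */
static unsigned int pcg_hash_bits;

static struct page_cgroup *hashed_lookup(unsigned long pfn)
{
	struct hlist_head *head =
		&pcg_hash[hash_long(pfn >> PCG_CHUNK_SHIFT, pcg_hash_bits)];
	struct pcg_chunk *chunk;
	struct hlist_node *n;

	/* linear in the number of chunks that landed in this bucket */
	hlist_for_each_entry(chunk, n, head, hash) {
		if (chunk->start_pfn == (pfn & ~PCG_CHUNK_MASK))
			return &chunk->map[pfn & PCG_CHUNK_MASK];
	}
	return NULL;
}

With 48GB and 128MB sections that's only ~384 chunks, so your chains are
short today; the worry is what the average chain looks like when the
table is sized small relative to a much bigger or sparser box.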

> I'll rewrite this based on totalram_pages.
> 
> BTW, do you know the difference between num_physpages and totalram_pages?

num_physpages appears to track the size of the physical address space,
while totalram_pages looks like the amount of RAM actually present:
kinda like spanned_pages versus present_pages.  But who knows how
consistent they are these days. :)
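
To put numbers on that with the sparse box from above (1GB at 0x0 plus
1GB at 1TB, 4K pages): an address-space-sized count covers the whole
span out past 1TB, roughly 268 million pfns, while a present-memory
count is just the 2GB that's really there, about 524,288 pages.  That's
a factor of ~512, and anything sized off the first number is mostly
wasted on a machine like that.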

-- Dave

