Date:	Tue, 9 Sep 2008 15:00:10 +1000
From:	Nick Piggin <nickpiggin@...oo.com.au>
To:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc:	balbir@...ux.vnet.ibm.com,
	Andrew Morton <akpm@...ux-foundation.org>, hugh@...itas.com,
	menage@...gle.com, xemul@...nvz.org, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org
Subject: Re: [RFC][PATCH] Remove cgroup member from struct page

On Tuesday 09 September 2008 14:53, KAMEZAWA Hiroyuki wrote:
> On Tue, 9 Sep 2008 13:58:27 +1000
>
> Nick Piggin <nickpiggin@...oo.com.au> wrote:
> > On Tuesday 09 September 2008 13:57, KAMEZAWA Hiroyuki wrote:
> > > On Mon, 8 Sep 2008 20:58:10 +0530
> > >
> > > Balbir Singh <balbir@...ux.vnet.ibm.com> wrote:
> > > > Sorry for the delay in sending out the new patch; I am traveling and
> > > > thus a little less responsive. Here is the updated patch.
> > >
> > > Hmm.. I've considered this approach for a while, and my answer is that
> > > this is not what you really want.
> > >
> > > Because you just move the pointer from the memmap to the radix_tree,
> > > both allocated with GFP_KERNEL, total kernel memory usage is not
> > > changed. So, at least, you have to add some address calculation (as I
> > > did in March) to get the address of a page_cgroup. But page_cgroup
> > > itself consumes 32 bytes per page. Then.....
> >
> > Just keep in mind that an important point is to make it more attractive
> > to configure cgroup into the kernel, but have it disabled or unused at
> > runtime.
>
> Hmm.. kicking out 4 bytes per 4096 bytes if disabled?

Yeah, of course. 4 or 8 bytes. Everything adds up. There is nothing special
about cgroups that says they are allowed to use fields in struct page where
others cannot. To put it in perspective: we try very hard not to allocate
new *bits* in page flags, and a single flag bit costs only 4 bytes per
131072 bytes of memory.
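
To make the arithmetic concrete, here is a standalone check of those
figures (assuming the usual 4096-byte page size; just illustration, not
kernel code):

==
/* Back-of-the-envelope check of the per-page overhead figures above.
 * Assumes a 4096-byte page; compile and run as an ordinary C program. */
#include <stdio.h>

int main(void)
{
	const double page_size = 4096.0;

	/* one pointer field in struct page */
	printf("32-bit pointer: %.3f%% of RAM\n", 100.0 * 4 / page_size);
	printf("64-bit pointer: %.3f%% of RAM\n", 100.0 * 8 / page_size);

	/* one page-flag bit: 1 bit per page, so 4 bytes (32 bits)
	 * cover 32 pages = 32 * 4096 = 131072 bytes of RAM */
	printf("one flag bit: 4 bytes per %.0f bytes of RAM\n",
	       32 * page_size);
	return 0;
}
==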


> Maybe an approach like SPARSEMEM is a choice.
>
> The following is pointer pre-allocation (just the pointers, not
> page_cgroup itself):
> ==
> #define PCG_SECTION_SHIFT	(10)
> #define PCG_SECTION_SIZE	(1 << PCG_SECTION_SHIFT)
>
> /* one pointer per pfn; the page_cgroup objects live elsewhere */
> struct pcg_section {
> 	struct page_cgroup *map[PCG_SECTION_SIZE];	/* array of pointers */
> };
>
> extern struct pcg_section *pcg_section[];	/* indexed by section number */
>
> struct page_cgroup *get_page_cgroup(unsigned long pfn)
> {
> 	struct pcg_section *sec;
>
> 	sec = pcg_section[pfn >> PCG_SECTION_SHIFT];
> 	return sec->map[pfn & (PCG_SECTION_SIZE - 1)];
> }
> ==
> If we go to extremes, we can use kmap_atomic() for the pointer array.
>
> The overhead of the pointer walk is not so bad, maybe.
>
> For 64-bit systems, we can find a way like SPARSEMEM_VMEMMAP.

Yes, I too think that would be the ideal way to go to get the best
performance in the enabled case. However, Balbir, I believe, is interested
in memory savings when not all pages have cgroups... I don't know; I don't
care so much about the "enabled" case, so I'll leave you two to fight it
out :)