Message-ID: <20080716195237.GA9127@csn.ul.ie>
Date: Wed, 16 Jul 2008 20:52:37 +0100
From: Mel Gorman <mel@....ul.ie>
To: Christoph Lameter <cl@...ux-foundation.org>
Cc: Richard Kennedy <richard@....demon.co.uk>, penberg@...helsinki.fi,
mpm@...enic.com, linux-mm <linux-mm@...ck.org>,
lkml <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH][RFC] slub: increasing order reduces memory usage of some key caches
On (16/07/08 08:21), Christoph Lameter didst pronounce:
> Richard Kennedy wrote:
>
>
> > on my amd64 3 gb ram desktop typical numbers :-
> >
> > kernel   objects  pages/slab  slabs  total pages   diff
> > radix_tree_node
> > 2.6.26     33922           2   2423         4846
> > +patch     33541           4   1165         4660   -186
> > dentry
> > 2.6.26     82136           1   4323         4323
> > +patch     79482           2   2038         4076   -247
> > the extra dentries would use 136 pages but that still leaves a saving of
> > 111 pages.
>
> Good numbers....
>
Indeed. Clearly internal fragmentation is a problem.
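As a rough illustration of the arithmetic involved (a throwaway userspace
sketch, not kernel code; the 4096-byte page size and 208-byte object size
below are just example values, not taken from your configuration):

/*
 * Illustrative sketch only: print how many objects fit in a slab and how
 * much of the slab is wasted at different page orders for a given object
 * size.  Page size and object size are assumed values.
 */
#include <stdio.h>

int main(void)
{
	unsigned long page_size = 4096;	/* assumed page size */
	unsigned long size = 208;	/* example object size in bytes */
	int order;

	for (order = 0; order <= 3; order++) {
		unsigned long slab = page_size << order;
		unsigned long objects = slab / size;
		unsigned long waste = slab % size;

		printf("order %d: %lu objects/slab, %lu bytes wasted per slab\n",
		       order, objects, waste);
	}
	return 0;
}

Running that for an awkward object size shows how a single order bump can
roughly halve the per-slab waste, which is the effect the numbers above hint at.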
> > Can anyone suggest any other tests that would be useful to run?
> > & Is there any way to measure what impact this is having on
> > fragmentation?
>
> Mel would be able to tell you that but I think we better figure out what went wrong first.
>
For internal fragmentation, there is this crappy script:
http://www.csn.ul.ie/~mel/intfrag_stat
run it as intfrag_stat -a and it should tell you what percentage of
memory is being wasted for dentries. The patch should show a difference
for the dentries.
How it would affect external fragmentation is harder to guess. It will
put more pressure on high-order allocations but, at a glance, dentries
are using GFP_KERNEL so it should not be a major problem.
/proc/pagetypeinfo is the file to watch. If the count for "reclaimable"
pageblocks is higher and keeps climbing over time, it will indicate that
external fragmentation would eventually become a problem.
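If it helps, a sketch like the following (userspace only; it assumes the
summary section of /proc/pagetypeinfo starts at the "Number of blocks type"
header line) will dump the per-migrate-type pageblock counts so the
Reclaimable column can be compared between boots or watched over time:

/*
 * Illustrative sketch: print the "Number of blocks type" summary from
 * /proc/pagetypeinfo so the Reclaimable pageblock count can be tracked.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *fp = fopen("/proc/pagetypeinfo", "r");
	char line[512];
	int in_summary = 0;

	if (!fp) {
		perror("/proc/pagetypeinfo");
		return 1;
	}

	while (fgets(line, sizeof(line), fp)) {
		/* the per-pageblock summary runs from this header to EOF */
		if (strstr(line, "Number of blocks type"))
			in_summary = 1;
		if (in_summary)
			fputs(line, stdout);
	}

	fclose(fp);
	return 0;
}

Diffing its output before and after a test run should show whether the
reclaimable pageblock count is steadily growing.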
>
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 315c392..c365b04 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -2301,6 +2301,14 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
> > if (order < 0)
> > return 0;
> >
> > +	if (order < slub_max_order) {
> > +		unsigned long waste = (PAGE_SIZE << order) % size;
> > +		if (waste * 2 >= size) {
> > +			order++;
> > +			printk(KERN_INFO "SLUB: increasing order %s->[%d] [%ld]\n", s->name, order, size);
> > +		}
> > +	}
> > +
> > s->allocflags = 0;
> > if (order)
> > s->allocflags |= __GFP_COMP;
>
> The order and waste calculation occurs in slab_order(). If modifications are needed then they need to occur in that function.
>
> Looks like the existing code is not doing the best thing for dentries on your box?
>
> On my 64 bit box dentries are 208 bytes long, 39 objects per page and 84 bytes
> are lost per order 1 page. So this would not trigger your patch at all. There must be something special to your configuration.
>
>
> /linux-2.6$ slabinfo dentry
>
> Slabcache: dentry Aliases: 0 Order : 1 Objects: 554209
> ** Reclaim accounting active
>
> Sizes (bytes) Slabs Debug Memory
> ------------------------------------------------------------------------
> Object : 208 Total : 14215 Sanity Checks : Off Total: 116449280
> SlabObj: 208 Full : 14179 Redzoning : Off Used : 115275472
> SlabSiz: 8192 Partial: 32 Poisoning : Off Loss : 1173808
> Loss : 0 CpuSlab: 4 Tracking : Off Lalig: 0
> Align : 8 Objects: 39 Tracing : Off Lpadd: 1137200
>
>
> Can you post the slabinfo information about the caches that you are concerned with? Please a before and after state.
>
>
--
Mel Gorman
Part-time Phd Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab