Date:	Mon, 15 Aug 2011 15:05:46 +0400
From:	Pavel Emelyanov <xemul@...allels.com>
To:	Pekka Enberg <penberg@...nel.org>
CC:	Dave Chinner <david@...morbit.com>,
	Glauber Costa <glommer@...allels.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
	"containers@...ts.linux-foundation.org" 
	<containers@...ts.linux-foundation.org>,
	Al Viro <viro@...iv.linux.org.uk>,
	Hugh Dickins <hughd@...gle.com>,
	Nick Piggin <npiggin@...nel.dk>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Rik van Riel <riel@...hat.com>,
	Dave Hansen <dave@...ux.vnet.ibm.com>,
	James Bottomley <jbottomley@...allels.com>,
	Eric Dumazet <eric.dumazet@...il.com>,
	Christoph Lameter <cl@...ux.com>,
	David Rientjes <rientjes@...gle.com>
Subject: Re: [PATCH v3 3/4] limit nr_dentries per superblock

On 08/15/2011 02:58 PM, Pekka Enberg wrote:
> Hi Dave,
> 
> On Mon, Aug 15, 2011 at 1:46 PM, Dave Chinner <david@...morbit.com> wrote:
>> That's usage for the entire slab, though, and we don't have a dentry
>> slab per superblock so I don't think that helps us. And with slab
>> merging, I think that even if we did have a slab per superblock,
>> they'd end up in the same slab context anyway, right?
> 
> You could add a flag to disable slab merging but there's no sane way
> to fix the per-superblock thing in slab.
> 
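[Illustrative sketch, not part of the original mail: what the "flag to disable slab merging" idea might look like from the dcache side, assuming a new SLAB_NO_MERGE cache flag and a per-superblock s_dentry_cache field, neither of which existed at the time.]

  /* Hypothetical: one dentry cache per superblock, opted out of merging
   * via an assumed new SLAB_NO_MERGE flag. */
  static int sb_create_dentry_cache(struct super_block *sb)
  {
          sb->s_dentry_cache = kmem_cache_create("dentry",
                                                 sizeof(struct dentry),
                                                 __alignof__(struct dentry),
                                                 SLAB_RECLAIM_ACCOUNT | SLAB_NO_MERGE,
                                                 NULL);
          return sb->s_dentry_cache ? 0 : -ENOMEM;
  }
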
> On Mon, Aug 15, 2011 at 1:46 PM, Dave Chinner <david@...morbit.com> wrote:
>> Ideally what we need is a slab, LRU and shrinkers all rolled into a
>> single infrastructure handle so we can simply set them up per
>> object, per context etc and not have to re-invent the wheel for
>> every single slab cache/LRU/shrinker setup we have in the kernel.
>>
>> I've got a rough node-aware generic LRU/shrinker infrastructure
>> prototype that is generic enough for most of the existing slab
>> caches with shrinkers, but I haven't looked at what is needed to
>> integrate it with the slab cache code. That's mainly because I don't
>> like the idea of having to implement the same thing 3 times in 3
>> different ways and debug them all before anyone would consider it
>> for inclusion in the kernel.
>>
>> Once I've sorted out the select_parent() use-the-LRU-for-disposal
>> abuse and have a patch set that survives a 'rm -rf *' operation,
>> maybe we can then talk about what is needed to integrate stuff into
>> the slab caches....
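[Illustrative sketch, not part of the original mail: one way a rolled-together LRU + shrinker handle along these lines could look. All names are hypothetical; per-node awareness and locking subtleties are omitted, and the dispose() callback stands in for the per-cache isolate/free logic.]

  #include <linux/list.h>
  #include <linux/spinlock.h>

  /* One handle rolling an LRU and its shrink logic together (hypothetical). */
  struct generic_lru {
          spinlock_t lock;
          struct list_head list;                    /* coldest objects at the head */
          long nr_items;
          void (*dispose)(struct list_head *item);  /* supplied by the cache owner */
  };

  static void generic_lru_add(struct generic_lru *lru, struct list_head *item)
  {
          spin_lock(&lru->lock);
          list_add_tail(item, &lru->list);          /* newest at the tail */
          lru->nr_items++;
          spin_unlock(&lru->lock);
  }

  /* A shrinker callback would call this to reclaim up to nr_to_scan cold objects. */
  static long generic_lru_shrink(struct generic_lru *lru, long nr_to_scan)
  {
          long freed = 0;

          spin_lock(&lru->lock);
          while (nr_to_scan-- > 0 && !list_empty(&lru->list)) {
                  struct list_head *item = lru->list.next;

                  list_del_init(item);
                  lru->nr_items--;
                  spin_unlock(&lru->lock);
                  lru->dispose(item);               /* free outside the lock */
                  freed++;
                  spin_lock(&lru->lock);
          }
          spin_unlock(&lru->lock);
          return freed;
  }
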
> 
> Well, now that I really understand what you're trying to do here, it's
> probably best to keep slab as-is and implement "slab accounting" on
> top of it.
> 
> You'd have something like you do now but in slightly more generic form:
> 
>   struct kmem_accounted_cache {
>           struct kmem_cache *cache;
>           /* ... statistics ... */
>   };
> 
>   void *kmem_accounted_alloc(struct kmem_accounted_cache *c, gfp_t gfp)
>   {
>           if (/* within limits */)
>                   return kmem_cache_alloc(c->cache, gfp);
> 
>           return NULL;
>   }
> 
> Does something like that make sense to you?
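[Illustrative sketch, not part of the original mail: one way the /* within limits */ check above could be filled in with a plain counter and limit. The field and function names, and the gfp parameter, are assumptions.]

  #include <linux/atomic.h>
  #include <linux/slab.h>

  struct kmem_accounted_cache {
          struct kmem_cache *cache;
          atomic_long_t nr_objects;       /* currently allocated objects */
          long limit;                     /* 0 means unlimited */
  };

  void *kmem_accounted_alloc(struct kmem_accounted_cache *c, gfp_t gfp)
  {
          void *obj;

          if (c->limit && atomic_long_read(&c->nr_objects) >= c->limit)
                  return NULL;            /* over the limit: fail the allocation */

          obj = kmem_cache_alloc(c->cache, gfp);
          if (obj)
                  atomic_long_inc(&c->nr_objects);
          return obj;
  }

  void kmem_accounted_free(struct kmem_accounted_cache *c, void *obj)
  {
          kmem_cache_free(c->cache, obj);
          atomic_long_dec(&c->nr_objects);
  }
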

This makes sense, since per-cgroup kernel memory management is one of the
things we'd love to have, but this particular idea will definitely not work in
the case where we keep the containers' files on one partition, with each
container in its own chroot environment.

>                         Pekka
> .
> 

