Date:	Thu, 16 Aug 2012 22:47:04 +0900
From:	JoonSoo Kim <js1304@...il.com>
To:	Christoph Lameter <cl@...ux.com>
Cc:	Pekka Enberg <penberg@...nel.org>, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org, David Rientjes <rientjes@...gle.com>
Subject: Re: [PATCH] slub: try to get cpu partial slab even if we get enough
 objects for cpu freelist

>> I think that s->cpu_partial is for cpu partial slab, not cpu slab.
>
> Ummm... Not entirely. s->cpu_partial is the minimum number of objects to
> "cache" per processor. This includes the objects available in the per cpu
> slab and the other slabs on the per cpu partial list.

Hmm..
When we do the test for unfreezing in put_cpu_partial(), we only compare
how many objects are in the "cpu partial slab" with s->cpu_partial, and
even that is just an approximation of the number of objects kept there.
We do not consider the number of objects kept in the cpu slab at that
point. This makes me think "s->cpu_partial is only for the cpu partial
slab, not the cpu slab".
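
For reference, the test I mean looks roughly like this (simplified from
put_cpu_partial() in mm/slub.c; the cmpxchg retry loop and irq handling
are trimmed, and the comments are mine):

	oldpage = this_cpu_read(s->cpu_slab->partial);
	pobjects = oldpage ? oldpage->pobjects : 0;
	if (drain && pobjects > s->cpu_partial) {
		/* cpu partial list considered full: move it back to
		 * the node partial list */
		unfreeze_partials(s);
		pobjects = 0;
	}
	/* rough count: taken when the page is added, not updated as
	 * objects are freed back into it */
	pobjects += page->objects - page->inuse;
	/* objects sitting in the cpu slab itself (c->page/c->freelist)
	 * are never added to pobjects here */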

We can't easily count the number of objects kept in the cpu slab.
Therefore, it is more consistent for s->cpu_partial to always refer to
the cpu partial slab.

But if you prefer that s->cpu_partial covers both the cpu slab and the
cpu partial slab, then get_partial_node() needs another minor fix:
we should add the number of objects in the cpu slab when we refill the
cpu partial slab. The following is my suggestion.

@@ -1546,7 +1546,7 @@ static void *get_partial_node(struct kmem_cache *s,
        spin_lock(&n->list_lock);
        list_for_each_entry_safe(page, page2, &n->partial, lru) {
                void *t = acquire_slab(s, n, page, object == NULL);
-               int available;
+               int available, nr = 0;

                if (!t)
                        break;
@@ -1557,10 +1557,10 @@ static void *get_partial_node(struct kmem_cache *s,
                        object = t;
                        available =  page->objects - page->inuse;
                } else {
-                       available = put_cpu_partial(s, page, 0);
+                       nr = put_cpu_partial(s, page, 0);
                        stat(s, CPU_PARTIAL_NODE);
                }
-               if (kmem_cache_debug(s) || available > s->cpu_partial / 2)
+               if (kmem_cache_debug(s) || (available + nr) > s->cpu_partial / 2)
                        break;

        }

If you agree with this suggestion, I will send a patch for it.


> If object == NULL then we have so far nothing allocated and c->page ==
> NULL. The first allocation refills the cpu_slab (by freezing a slab) so
> that we can allocate again. If we go through the loop again then we refill
> the per cpu partial lists with more frozen slabs until we have a
> sufficient number of objects that we can allocate without obtaining any
> locks.
>
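
Just to check that I follow, the loop you describe is this (current code,
simplified from get_partial_node(), with my notes on what "available"
means in each branch):

	list_for_each_entry_safe(page, page2, &n->partial, lru) {
		void *t = acquire_slab(s, n, page, object == NULL);
		int available;

		if (!t)
			break;

		if (!object) {
			/* first pass: this slab becomes the cpu slab,
			 * available = objects left in the cpu slab */
			object = t;
			available = page->objects - page->inuse;
		} else {
			/* later passes: freeze more slabs onto the per
			 * cpu partial list, available = objects on that
			 * list */
			available = put_cpu_partial(s, page, 0);
		}
		/* stop once we think enough objects are cached */
		if (kmem_cache_debug(s) || available > s->cpu_partial / 2)
			break;
	}
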
>> This patch is for correcting this.
>
> There is nothing wrong with this. The name c->cpu_partial is a bit
> awkward. Maybe rename that to c->min_per_cpu_objects or so?

Okay.
It looks better.
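Something like this, I guess (just a sketch of the rename; the field
stays in struct kmem_cache and the comment wording is mine):

	struct kmem_cache {
		...
		/* Minimum number of objects to cache per cpu: counts
		 * the objects in the cpu slab plus the slabs on the
		 * per cpu partial list */
		int min_per_cpu_objects;	/* was: cpu_partial */
		...
	};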

Thanks!