Message-ID: <0e0616f2-6c5a-8911-7d37-6f2027c2930b@suse.cz>
Date: Wed, 20 May 2020 15:51:45 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Roman Gushchin <guro@...com>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>, linux-mm@...ck.org,
kernel-team@...com, linux-kernel@...r.kernel.org,
Christoph Lameter <cl@...ux.com>
Subject: Re: [PATCH v3 04/19] mm: slub: implement SLUB version of
obj_to_index()
On 4/22/20 10:46 PM, Roman Gushchin wrote:
> This commit implements SLUB version of the obj_to_index() function,
> which will be required to calculate the offset of obj_cgroup in the
> obj_cgroups vector to store/obtain the objcg ownership data.
>
> To make it faster, let's repeat the SLAB's trick introduced by
> commit 6a2d7a955d8d ("[PATCH] SLAB: use a multiply instead of a
> divide in obj_to_index()") and avoid an expensive division.
>
> Signed-off-by: Roman Gushchin <guro@...com>
> Acked-by: Christoph Lameter <cl@...ux.com>
> Acked-by: Johannes Weiner <hannes@...xchg.org>
There's already a slab_index() doing the same without the trick, with only
SLUB_DEBUG callers. Maybe just improve it and perhaps rename? (obj_to_index()
seems more descriptive.) The difference is that it takes the result of
page_address() instead of calling it itself, as it's called in a loop on
objects from a single page, so you'd perhaps have to split it into
obj_to_index(page) and __obj_to_index(addr) or something.
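Roughly, a split along these lines is what I mean (just an untested sketch on
top of this patch; the __obj_to_index() name is only a suggestion):

static inline unsigned int __obj_to_index(const struct kmem_cache *cache,
					  void *addr, void *obj)
{
	/* addr is the caller's already-computed page_address(page) */
	return reciprocal_divide(kasan_reset_tag(obj) - addr,
				 cache->reciprocal_size);
}

static inline unsigned int obj_to_index(const struct kmem_cache *cache,
					const struct page *page, void *obj)
{
	return __obj_to_index(cache, page_address(page), obj);
}

The SLUB_DEBUG loops could then call __obj_to_index() directly with the
page_address() result they already have.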
> ---
> include/linux/slub_def.h | 9 +++++++++
> mm/slub.c | 1 +
> 2 files changed, 10 insertions(+)
>
> diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
> index d2153789bd9f..200ea292f250 100644
> --- a/include/linux/slub_def.h
> +++ b/include/linux/slub_def.h
> @@ -8,6 +8,7 @@
> * (C) 2007 SGI, Christoph Lameter
> */
> #include <linux/kobject.h>
> +#include <linux/reciprocal_div.h>
>
> enum stat_item {
> ALLOC_FASTPATH, /* Allocation from cpu slab */
> @@ -86,6 +87,7 @@ struct kmem_cache {
> unsigned long min_partial;
> unsigned int size; /* The size of an object including metadata */
> unsigned int object_size;/* The size of an object without metadata */
> + struct reciprocal_value reciprocal_size;
> unsigned int offset; /* Free pointer offset */
> #ifdef CONFIG_SLUB_CPU_PARTIAL
> /* Number of per cpu partial objects to keep around */
> @@ -182,4 +184,11 @@ static inline void *nearest_obj(struct kmem_cache *cache, struct page *page,
> return result;
> }
>
> +static inline unsigned int obj_to_index(const struct kmem_cache *cache,
> + const struct page *page, void *obj)
> +{
> + return reciprocal_divide(kasan_reset_tag(obj) - page_address(page),
> + cache->reciprocal_size);
> +}
> +
> #endif /* _LINUX_SLUB_DEF_H */
> diff --git a/mm/slub.c b/mm/slub.c
> index 03071ae5ff07..8d16babe1829 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3660,6 +3660,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
> */
> size = ALIGN(size, s->align);
> s->size = size;
> + s->reciprocal_size = reciprocal_value(size);
> if (forced_order >= 0)
> order = forced_order;
> else
>
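As an aside, for anyone not familiar with the trick from commit 6a2d7a955d8d:
reciprocal_value() precomputes a fixed-point inverse of the object size once
at cache creation, so reciprocal_divide() later needs only a multiply and a
shift instead of a division. A simplified standalone illustration of the idea
(not the kernel's exact lib/reciprocal_div.c implementation, which handles the
full u32 range; this naive variant is only exact for small dividends such as
offsets within a slab page):

#include <assert.h>
#include <stdint.h>

struct recip { uint64_t m; };	/* analogous to struct reciprocal_value */

static struct recip recip_value(uint32_t d)
{
	/* precompute ceil(2^32 / d) once, like reciprocal_value() */
	struct recip r = { .m = ((1ULL << 32) + d - 1) / d };
	return r;
}

static uint32_t recip_divide(uint32_t a, struct recip r)
{
	/* one multiply + shift instead of a division */
	return (uint32_t)(((uint64_t)a * r.m) >> 32);
}

int main(void)
{
	uint32_t size = 192;	/* some object size */
	struct recip r = recip_value(size);
	uint32_t off;

	/* offsets of objects within a slab page match plain division */
	for (off = 0; off < (1U << 16); off += size)
		assert(recip_divide(off, r) == off / size);
	return 0;
}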