Message-Id: <20231016004356.b5f3f815cb8d7c0994934332@kernel.org>
Date: Mon, 16 Oct 2023 00:43:56 +0900
From: Masami Hiramatsu (Google) <mhiramat@...nel.org>
To: "wuqiang.matt" <wuqiang.matt@...edance.com>
Cc: linux-trace-kernel@...r.kernel.org, davem@...emloft.net,
anil.s.keshavamurthy@...el.com, naveen.n.rao@...ux.ibm.com,
rostedt@...dmis.org, peterz@...radead.org,
akpm@...ux-foundation.org, sander@...nheule.net,
ebiggers@...gle.com, dan.j.williams@...el.com, jpoimboe@...nel.org,
linux-kernel@...r.kernel.org, lkp@...el.com, mattwu@....com
Subject: Re: [PATCH v10 1/5] lib: objpool added: ring-array based lockless
MPMC
On Sun, 15 Oct 2023 13:32:47 +0800
"wuqiang.matt" <wuqiang.matt@...edance.com> wrote:
> objpool is a scalable implementation of a high-performance queue for
> object allocation and reclamation, such as kretprobe instances.
>
> By leveraging a percpu ring-array to mitigate hot spots of memory
> contention, it delivers near-linear scalability for highly parallel
> scenarios. The objpool is best suited for the following cases:
> 1) Memory allocation or reclamation is prohibited or too expensive
> 2) Consumers are of different priorities, such as irqs and threads
>
> Limitations:
> 1) Maximum objects (capacity) is fixed after objpool creation
> 2) All pre-allocated objects are managed in a percpu ring array,
>    which consumes more memory than linked lists
>
Thanks for updating! This looks good to me except for two points.
[...]
> +
> +/* initialize object pool and pre-allocate objects */
> +int objpool_init(struct objpool_head *pool, int nr_objs, int object_size,
> + gfp_t gfp, void *context, objpool_init_obj_cb objinit,
> + objpool_fini_cb release)
> +{
> + int rc, capacity, slot_size;
> +
> + /* check input parameters */
> + if (nr_objs <= 0 || nr_objs > OBJPOOL_NR_OBJECT_MAX ||
> + object_size <= 0 || object_size > OBJPOOL_OBJECT_SIZE_MAX)
> + return -EINVAL;
> +
> + /* align up to unsigned long size */
> + object_size = ALIGN(object_size, sizeof(long));
> +
> + /* calculate capacity of percpu objpool_slot */
> + capacity = roundup_pow_of_two(nr_objs);
This must be 'roundup_pow_of_two(nr_objs + 1)', because if nr_objs is a
power of 2 and all objects are pushed onto the same slot, tail == head,
which makes a full slot indistinguishable from an empty one.
> + if (!capacity)
> + return -EINVAL;
> +
> + /* initialize objpool pool */
> + memset(pool, 0, sizeof(struct objpool_head));
> + pool->nr_cpus = nr_cpu_ids;
> + pool->obj_size = object_size;
> + pool->capacity = capacity;
> + pool->gfp = gfp & ~__GFP_ZERO;
> + pool->context = context;
> + pool->release = release;
> + slot_size = pool->nr_cpus * sizeof(struct objpool_slot);
> + pool->cpu_slots = kzalloc(slot_size, pool->gfp);
> + if (!pool->cpu_slots)
> + return -ENOMEM;
> +
> + /* initialize per-cpu slots */
> + rc = objpool_init_percpu_slots(pool, nr_objs, context, objinit);
> + if (rc)
> + objpool_fini_percpu_slots(pool);
> + else
> + refcount_set(&pool->ref, pool->nr_objs + 1);
> +
> + return rc;
> +}
> +EXPORT_SYMBOL_GPL(objpool_init);
> +
[...]
> +
> +/* drop unused objects and deref objpool for releasing */
> +void objpool_fini(struct objpool_head *pool)
> +{
> + void *obj;
> +
> + do {
> + /* grab object from objpool and drop it */
> + obj = objpool_pop(pool);
> +
> + /*
> + * drop reference of objpool anyway even if
> + * the obj is NULL, since one extra ref upon
> + * objpool was already grabbed during pool
> + * initialization in objpool_init()
> + */
> + if (refcount_dec_and_test(&pool->ref))
> + objpool_free(pool);
Nit: you can call objpool_drop() instead of open-coding the same thing here.
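Something like this (a sketch only; it assumes objpool_drop(obj, pool)
releases one object's pool reference and frees the pool on the last one,
so the extra reference taken in objpool_init() still needs its own drop):

```c
	void *obj;

	/* grab remaining objects from the pool and drop them */
	while ((obj = objpool_pop(pool)))
		objpool_drop(obj, pool);

	/* drop the extra reference grabbed in objpool_init() */
	if (refcount_dec_and_test(&pool->ref))
		objpool_free(pool);
```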
Thank you,
> + } while (obj);
> +}
> +EXPORT_SYMBOL_GPL(objpool_fini);
> --
> 2.40.1
>
--
Masami Hiramatsu (Google) <mhiramat@...nel.org>