Message-Id: <20231016082659.6ca94a5dff368783698753f9@kernel.org>
Date:   Mon, 16 Oct 2023 08:26:59 +0900
From:   Masami Hiramatsu (Google) <mhiramat@...nel.org>
To:     "wuqiang.matt" <wuqiang.matt@...edance.com>
Cc:     linux-trace-kernel@...r.kernel.org, davem@...emloft.net,
        anil.s.keshavamurthy@...el.com, naveen.n.rao@...ux.ibm.com,
        rostedt@...dmis.org, peterz@...radead.org,
        akpm@...ux-foundation.org, sander@...nheule.net,
        ebiggers@...gle.com, dan.j.williams@...el.com, jpoimboe@...nel.org,
        linux-kernel@...r.kernel.org, lkp@...el.com, mattwu@....com
Subject: Re: [PATCH v10 1/5] lib: objpool added: ring-array based lockless
 MPMC

On Mon, 16 Oct 2023 00:06:11 +0800
"wuqiang.matt" <wuqiang.matt@...edance.com> wrote:

> On 2023/10/15 23:43, Masami Hiramatsu (Google) wrote:
> > On Sun, 15 Oct 2023 13:32:47 +0800
> > "wuqiang.matt" <wuqiang.matt@...edance.com> wrote:
> > 
> >> objpool is a scalable implementation of high performance queue for
> >> object allocation and reclamation, such as kretprobe instances.
> >>
> >> By leveraging a percpu ring-array to mitigate memory-contention hot
> >> spots, it delivers near-linear scalability in highly parallel
> >> scenarios. The objpool is best suited for the following cases:
> >> 1) Memory allocation or reclamation is prohibited or too expensive
> >> 2) Consumers are of different priorities, such as irqs and threads
> >>
> >> Limitations:
> >> 1) The maximum number of objects (capacity) is fixed after objpool creation
> >> 2) All pre-allocated objects are managed in a percpu ring array,
> >>     which consumes more memory than a linked list
> >>
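(A side note for the archive: typical usage of this API, as I read the
series, would look like the sketch below. 'struct my_obj', 'my_objinit'
and 'my_objpool_demo' are made-up names for illustration, and
objpool_push() is taken from elsewhere in this patch.)

---
struct my_obj {
	int id;
};

/* called once per pre-allocated object during objpool_init() */
static int my_objinit(void *obj, void *context)
{
	struct my_obj *o = obj;

	o->id = 0;
	return 0;
}

static void my_objpool_demo(void)
{
	struct objpool_head pool;
	struct my_obj *o;

	/* pre-allocate 16 objects of sizeof(struct my_obj) bytes each */
	if (objpool_init(&pool, 16, sizeof(struct my_obj), GFP_KERNEL,
			 NULL, my_objinit, NULL))
		return;

	o = objpool_pop(&pool);		/* lockless grab; NULL if empty */
	if (o)
		objpool_push(o, &pool);	/* return it to its percpu slot */

	objpool_fini(&pool);		/* drop objects and the pool ref */
}
---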
> > 
> > Thanks for updating! This looks good to me except 2 points.
> > 
> > [...]
> >> +
> >> +/* initialize object pool and pre-allocate objects */
> >> +int objpool_init(struct objpool_head *pool, int nr_objs, int object_size,
> >> +		gfp_t gfp, void *context, objpool_init_obj_cb objinit,
> >> +		objpool_fini_cb release)
> >> +{
> >> +	int rc, capacity, slot_size;
> >> +
> >> +	/* check input parameters */
> >> +	if (nr_objs <= 0 || nr_objs > OBJPOOL_NR_OBJECT_MAX ||
> >> +	    object_size <= 0 || object_size > OBJPOOL_OBJECT_SIZE_MAX)
> >> +		return -EINVAL;
> >> +
> >> +	/* align up to unsigned long size */
> >> +	object_size = ALIGN(object_size, sizeof(long));
> >> +
> >> +	/* calculate capacity of percpu objpool_slot */
> >> +	capacity = roundup_pow_of_two(nr_objs);
> > 
> > This must be 'roundup_pow_of_two(nr_objs + 1)', because if nr_objs is a
> > power of 2 and all objects are pushed on the same slot, tail == head.
> > That would make empty and full indistinguishable.
> 
> That won't happen. tail and head would wrap only when they reach 2^32.
> When all objects are pushed to the same slot, tail will be (head + capacity).

Ah, indeed. OK.
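
To spell out the invariant for the archive: a minimal sketch, assuming
head and tail are free-running 32-bit counters as in this patch, with
entries indexed by 'counter & (capacity - 1)':

---
/*
 * At most 'capacity' objects exist, so tail - head <= capacity always
 * holds and the two states below stay distinguishable, even when
 * nr_objs == capacity is a power of two. Ambiguity would require the
 * counters to drift 2^32 apart, which the capacity bound rules out.
 */
static inline bool slot_empty(u32 head, u32 tail)
{
	return tail == head;		/* tail - head == 0 */
}

static inline bool slot_full(u32 head, u32 tail, u32 capacity)
{
	return tail - head == capacity;	/* slot holds all its objects */
}
---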

> 
> > 
> >> +	if (!capacity)
> >> +		return -EINVAL;
> >> +
> >> +	/* initialize objpool pool */
> >> +	memset(pool, 0, sizeof(struct objpool_head));
> >> +	pool->nr_cpus = nr_cpu_ids;
> >> +	pool->obj_size = object_size;
> >> +	pool->capacity = capacity;
> >> +	pool->gfp = gfp & ~__GFP_ZERO;
> >> +	pool->context = context;
> >> +	pool->release = release;
> >> +	slot_size = pool->nr_cpus * sizeof(struct objpool_slot);
> >> +	pool->cpu_slots = kzalloc(slot_size, pool->gfp);
> >> +	if (!pool->cpu_slots)
> >> +		return -ENOMEM;
> >> +
> >> +	/* initialize per-cpu slots */
> >> +	rc = objpool_init_percpu_slots(pool, nr_objs, context, objinit);
> >> +	if (rc)
> >> +		objpool_fini_percpu_slots(pool);
> >> +	else
> >> +		refcount_set(&pool->ref, pool->nr_objs + 1);
> >> +
> >> +	return rc;
> >> +}
> >> +EXPORT_SYMBOL_GPL(objpool_init);
> >> +
> > 
> > [...]
> > 
> >> +
> >> +/* drop unused objects and deref objpool for releasing */
> >> +void objpool_fini(struct objpool_head *pool)
> >> +{
> >> +	void *obj;
> >> +
> >> +	do {
> >> +		/* grab object from objpool and drop it */
> >> +		obj = objpool_pop(pool);
> >> +
> >> +		/*
> >> +		 * drop reference of objpool anyway even if
> >> +		 * the obj is NULL, since one extra ref upon
> >> +		 * objpool was already grabbed during pool
> >> +		 * initialization in objpool_init()
> >> +		 */
> >> +		if (refcount_dec_and_test(&pool->ref))
> >> +			objpool_free(pool);
> > 
> > Nit: you can call objpool_drop() instead of repeating the same thing here.
> 
> objpool_drop() won't deref the objpool if the given obj is NULL. But here
> we need to drop the objpool's reference anyway, even when obj is NULL.
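
(For readers of the archive: the objpool_drop() behavior being referenced
would look roughly like the sketch below; the body is inferred from this
discussion, not quoted from the patch.)

---
/* drop one object's reference; fails without dropping when obj is NULL */
static int objpool_drop_sketch(void *obj, struct objpool_head *pool)
{
	if (!obj || !pool)
		return -EINVAL;

	if (refcount_dec_and_test(&pool->ref)) {
		objpool_free(pool);	/* last ref gone: release the pool */
		return 0;
	}

	return -EAGAIN;
}
---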

I guess you decrement the reference for the 'objpool' itself when obj is
NULL, but I think it is a bit hacky (hence the comment you added).
e.g. rethook is doing something like below.

---
/* extra count for this pool itself */
count = 1;
/* make the pool empty */
while (objpool_pop(pool))
	count++;

if (refcount_sub_and_test(count, &pool->ref))
	objpool_free(pool);
---

> 
> > Thank you,
> > 
> >> +	} while (obj);
> >> +}
> >> +EXPORT_SYMBOL_GPL(objpool_fini);
> >> -- 
> >> 2.40.1
> >>
> 
> Thanks for your time
> 
> 


-- 
Masami Hiramatsu (Google) <mhiramat@...nel.org>
