Message-ID: <b3e5d119-8a29-3345-8074-ad1b47ca9cce@huawei.com>
Date: Tue, 3 Sep 2024 20:06:33 +0800
From: "Leizhen (ThunderTown)" <thunder.leizhen@...wei.com>
To: Thomas Gleixner <tglx@...utronix.de>, Andrew Morton
	<akpm@...ux-foundation.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 3/5] debugobjects: Don't start fill if there are remaining
 nodes locally



On 2024/9/3 17:52, Thomas Gleixner wrote:
> On Mon, Sep 02 2024 at 22:05, Zhen Lei wrote:
> 
>> If the conditions for starting fill are met, it means that all cores that
>> call fill() later are blocked until the first core completes the fill
>> operation. But obviously, for a core that has free nodes locally, it does
>> not need to be blocked. This is good in stress situations.
> 
> Sure it's good, but is it correct? You need to explain why this can't
> cause a pool depletion. The pool is filled opportunistically.

In the case of no nesting, a core uses only one node at a time.
Even if nesting occurs and a core has only one local node,
256 / (16 + 1) = 15, so the current parameter definitions tolerate
15 such cores, which should be sufficient. In fact, many cores may
see obj_pool_free >= 256 at the same time and skip filling. Therefore,
to eliminate the probabilistic problem completely, an additional
mechanism would be needed.

#define ODEBUG_POOL_MIN_LEVEL	256
#define ODEBUG_BATCH_SIZE	16

> 
> Aside of that the lock contention in fill_pool() is minimal. The heavy
> lifting is the allocation of objects.

I'm optimizing that too. However, a new hlist helper function needs to
be added. Now that you've mentioned it, I'll send it in v2 as well!

> 
>> diff --git a/lib/debugobjects.c b/lib/debugobjects.c
>> index aba3e62a4315f51..fc8224f9f0eda8f 100644
>> --- a/lib/debugobjects.c
>> +++ b/lib/debugobjects.c
>> @@ -130,10 +130,15 @@ static void fill_pool(void)
>>  	gfp_t gfp = __GFP_HIGH | __GFP_NOWARN;
>>  	struct debug_obj *obj;
>>  	unsigned long flags;
>> +	struct debug_percpu_free *percpu_pool;
> 
> Please keep variables in reverse fir tree order.
> 
> https://www.kernel.org/doc/html/latest/process/maintainer-tip.html#variable-declarations

OK, I will correct it.

>   
>>  	if (likely(READ_ONCE(obj_pool_free) >= debug_objects_pool_min_level))
>>  		return;
>>  
>> +	percpu_pool = this_cpu_ptr(&percpu_obj_pool);
> 
> You don't need the pointer
> 
>> +	if (likely(obj_cache) && percpu_pool->obj_free > 0)
> 
> 	if (likely(obj_cache) && this_cpu_read(percpu_pool.obj_free) > 0)

Nice, thanks

> 
> This lacks a comment explaining the rationale of this check.

OK, I'll add.

> 
> Thanks,
> 
>         tglx

-- 
Regards,
  Zhen Lei
