Date:   Fri, 22 Mar 2019 10:35:18 -0400
From:   Waiman Long <longman@...hat.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Christoph Lameter <cl@...ux.com>,
        Pekka Enberg <penberg@...nel.org>,
        David Rientjes <rientjes@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        selinux@...r.kernel.org, Paul Moore <paul@...l-moore.com>,
        Stephen Smalley <sds@...ho.nsa.gov>,
        Eric Paris <eparis@...isplace.org>,
        Oleg Nesterov <oleg@...hat.com>
Subject: Re: [PATCH 4/4] mm: Do periodic rescheduling when freeing objects in
 kmem_free_up_q()

On 03/21/2019 06:00 PM, Peter Zijlstra wrote:
> On Thu, Mar 21, 2019 at 05:45:12PM -0400, Waiman Long wrote:
>> If the freeing queue has many objects, freeing all of them consecutively
>> may cause soft lockup especially on a debug kernel. So kmem_free_up_q()
>> is modified to call cond_resched() if running in the process context.
>>
>> Signed-off-by: Waiman Long <longman@...hat.com>
>> ---
>>  mm/slab_common.c | 11 ++++++++++-
>>  1 file changed, 10 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/slab_common.c b/mm/slab_common.c
>> index dba20b4208f1..633a1d0f6d20 100644
>> --- a/mm/slab_common.c
>> +++ b/mm/slab_common.c
>> @@ -1622,11 +1622,14 @@ EXPORT_SYMBOL_GPL(kmem_free_q_add);
>>   * kmem_free_up_q - free all the objects in the freeing queue
>>   * @head: freeing queue head
>>   *
>> - * Free all the objects in the freeing queue.
>> + * Free all the objects in the freeing queue. The caller cannot hold any
>> + * non-sleeping locks.
>>   */
>>  void kmem_free_up_q(struct kmem_free_q_head *head)
>>  {
>>  	struct kmem_free_q_node *node, *next;
>> +	bool do_resched = !in_irq();
>> +	int cnt = 0;
>>  
>>  	for (node = head->first; node; node = next) {
>>  		next = node->next;
>> @@ -1634,6 +1637,12 @@ void kmem_free_up_q(struct kmem_free_q_head *head)
>>  			kmem_cache_free(node->cachep, node);
>>  		else
>>  			kfree(node);
>> +		/*
>> +		 * Call cond_resched() every 256 objects freed when in
>> +		 * process context.
>> +		 */
>> +		if (do_resched && !(++cnt & 0xff))
>> +			cond_resched();
> Why not just: cond_resched() ?

cond_resched() calls ___might_sleep(), so it is prudent to check for
process context first to avoid a spurious warning. Yes, I could call
cond_resched() after every free; I added the count just to avoid calling
it too frequently.
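
For illustration, the rate-limiting boils down to the mask test: (cnt & 0xff)
is zero exactly when cnt is a multiple of 256. A minimal userspace sketch of
that part alone (not the actual slab code, just the counting pattern):

#include <stdio.h>

int main(void)
{
	int cnt = 0;
	int yields = 0;

	for (int i = 0; i < 1000; i++) {
		/* stand-in for freeing one queued object */
		if (!(++cnt & 0xff))	/* fires on cnt == 256, 512, 768, ... */
			yields++;	/* stand-in for cond_resched() */
	}
	printf("freed %d objects, yielded %d times\n", cnt, yields);
	return 0;
}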

Cheers,
Longman
