Message-ID: <alpine.DEB.2.20.1801180957230.1847@nanos>
Date:   Thu, 18 Jan 2018 10:02:25 +0100 (CET)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Yang Shi <yang.s@...baba-inc.com>
cc:     longman@...hat.com, LKML <linux-kernel@...r.kernel.org>,
        Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH 2/2 v3] lib: debugobjects: touch watchdog to avoid
 softlockup when !CONFIG_PREEMPT

On Thu, 18 Jan 2018, Yang Shi wrote:
> On 1/17/18 4:21 AM, Thomas Gleixner wrote:
> > There are two things which can be done here:
> > 
> >   1) The collected objects can be put on a global free list and work
> >      scheduled to free them piecewise.
> 
> I don't get your point here. Freeing the objects is already done in a work
> item: free_object() -> schedule_work()

But it's doing that in a loop, once for each freed object.

> Do you mean freeing all of them in one batch outside of the for loop, i.e.
> not calling free_object() inside the for loop?

Something like this:

diff --git a/lib/debugobjects.c b/lib/debugobjects.c
index 2f5349c6e81a..d36940cdc658 100644
--- a/lib/debugobjects.c
+++ b/lib/debugobjects.c
@@ -42,6 +42,7 @@ static struct debug_obj		obj_static_pool[ODEBUG_POOL_SIZE] __initdata;
 static DEFINE_RAW_SPINLOCK(pool_lock);
 
 static HLIST_HEAD(obj_pool);
+static HLIST_HEAD(obj_to_free);
 
 static int			obj_pool_min_free = ODEBUG_POOL_SIZE;
 static int			obj_pool_free = ODEBUG_POOL_SIZE;
@@ -714,13 +715,12 @@ EXPORT_SYMBOL_GPL(debug_object_active_state);
 static void __debug_check_no_obj_freed(const void *address, unsigned long size)
 {
 	unsigned long flags, oaddr, saddr, eaddr, paddr, chunks;
-	struct hlist_node *tmp;
-	HLIST_HEAD(freelist);
 	struct debug_obj_descr *descr;
 	enum debug_obj_state state;
 	struct debug_bucket *db;
+	struct hlist_node *tmp;
 	struct debug_obj *obj;
-	int cnt;
+	int cnt, freed = 0;
 
 	saddr = (unsigned long) address;
 	eaddr = saddr + size;
@@ -751,21 +751,22 @@ static void __debug_check_no_obj_freed(const void *address, unsigned long size)
 				goto repeat;
 			default:
 				hlist_del(&obj->node);
-				hlist_add_head(&obj->node, &freelist);
+				/* Put them on the free list; irqs already off */
+				raw_spin_lock(&pool_lock);
+				hlist_add_head(&obj->node, &obj_to_free);
+				raw_spin_unlock(&pool_lock);
+				freed++;
 				break;
 			}
 		}
 		raw_spin_unlock_irqrestore(&db->lock, flags);
 
-		/* Now free them */
-		hlist_for_each_entry_safe(obj, tmp, &freelist, node) {
-			hlist_del(&obj->node);
-			free_object(obj);
-		}
-
 		if (cnt > debug_objects_maxchain)
 			debug_objects_maxchain = cnt;
 	}
+
+	if (freed)
+		schedule_work(.....);

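The work function behind that schedule_work() call would move obj_to_free to
a local list under pool_lock and then free the objects without holding the
lock. A minimal sketch, ignoring the pool refill logic which free_object()
has today; free_obj_work and debug_obj_free_work are placeholder names, not
part of the patch above:

/* Sketch only: free_obj_work is a made-up name for the deferred freeing */
static void free_obj_work(struct work_struct *work)
{
	struct hlist_node *tmp;
	struct debug_obj *obj;
	unsigned long flags;
	HLIST_HEAD(tofree);

	/* Detach the global free list under pool_lock */
	raw_spin_lock_irqsave(&pool_lock, flags);
	hlist_move_list(&obj_to_free, &tofree);
	raw_spin_unlock_irqrestore(&pool_lock, flags);

	/* Free the objects piecewise, outside of pool_lock */
	hlist_for_each_entry_safe(obj, tmp, &tofree, node) {
		hlist_del(&obj->node);
		kmem_cache_free(obj_cache, obj);
	}
}
static DECLARE_WORK(debug_obj_free_work, free_obj_work);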
The allocation side can look at the free list as well and grab objects from
there when the pool level is low, if that happens before the work has drained
the list.
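Something along these lines, where fill_pool_from_free_list() is a made-up
helper name purely for illustration:

/* Sketch only: refill obj_pool from obj_to_free before hitting the slab */
static void fill_pool_from_free_list(void)
{
	unsigned long flags;
	struct debug_obj *obj;

	raw_spin_lock_irqsave(&pool_lock, flags);
	while (obj_pool_free < ODEBUG_POOL_MIN_LEVEL &&
	       !hlist_empty(&obj_to_free)) {
		obj = hlist_entry(obj_to_free.first, typeof(*obj), node);
		hlist_del(&obj->node);
		hlist_add_head(&obj->node, &obj_pool);
		obj_pool_free++;
	}
	raw_spin_unlock_irqrestore(&pool_lock, flags);
}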


> > 
> >   2) We can do a cond_resched() if not in atomic context and interrupts are
> >      enabled.
> 
> I did try this before I went with the touch-softlockup-watchdog approach.
> The problem is that in_atomic() can't tell whether we are in atomic context
> on a non-preempt kernel. For a preempt kernel, it is easy.
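The check in question would presumably look something like this (illustrative
only):

	/* Unreliable with !CONFIG_PREEMPT_COUNT, see below */
	if (!in_atomic() && !irqs_disabled())
		cond_resched();

but without CONFIG_PREEMPT_COUNT the spinlock code does not touch
preempt_count, so in_atomic() cannot see that we are inside a spinlocked
region.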

Peter, can we do anything about that?

Thanks,

	tglx
