Message-ID: <20160129121545.GH31266@X58A-UD3R>
Date:	Fri, 29 Jan 2016 21:15:46 +0900
From:	Byungchul Park <byungchul.park@....com>
To:	Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Cc:	Peter Hurley <peter@...leysoftware.com>,
	Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
	akpm@...ux-foundation.org, mingo@...nel.org,
	linux-kernel@...r.kernel.org, akinobu.mita@...il.com, jack@...e.cz,
	torvalds@...ux-foundation.org
Subject: Re: [PATCH v4] lib/spinlock_debug.c: prevent a recursive cycle in
 the debug code

On Fri, Jan 29, 2016 at 01:05:00PM +0900, Sergey Senozhatsky wrote:
> then this will explode:
> 
> printk
>  spin_lock
>   >> coding error <<
>  spin_unlock
>   printk
>    spin_lock
>     printk
>      spin_lock
>       printk
>        spin_lock
>         ... boom
> 
> vprintk_emit()'s recursion detection code will not work for logbuf_lock
> here, because the only criterion by which vprintk_emit() can detect a
> recursion is the static `logbuf_cpu', which is set back to UINT_MAX right
> before the raw_spin_unlock(&logbuf_lock). So from vprintk_emit()'s POV
> logbuf_lock is already unlocked, which is not true.
> 
> 
> in case of memory corruption I don't think we need to care; the 'coding
> error' case is _probably/maybe_ something that can be improved, but I'm
> not really 100% sure... and this still doesn't explain your
> console_sem.lock case.

Hello, I found a case where this bad thing can happen, so it occurred to me
that we need a patch similar to my v3 patch, even though the logbuf_lock
handling in the v3 patch may not be necessary now.
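
For context, the logbuf_cpu-based detection mentioned above lives in
vprintk_emit() and works roughly like this (a simplified sketch from memory,
not the exact kernel code):

	/* simplified sketch of vprintk_emit() -- not the exact code */
	static unsigned int logbuf_cpu = UINT_MAX;
	int this_cpu = smp_processor_id();

	if (unlikely(logbuf_cpu == this_cpu)) {
		/* printk() recursed while logbuf_lock was held on this cpu:
		 * bail out instead of deadlocking on logbuf_lock */
		return 0;
	}

	raw_spin_lock(&logbuf_lock);
	logbuf_cpu = this_cpu;
	/* ... copy the message into the log buffer ... */
	logbuf_cpu = UINT_MAX;		/* cleared right before the unlock */
	raw_spin_unlock(&logbuf_lock);

Since logbuf_cpu is already back to UINT_MAX by the time console_trylock()
takes console_sem.lock, a printk() issued by the spinlock debug code at that
point is not detected as recursion at all: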

cpu0
====
printk
  console_trylock
  console_unlock
    up_console_sem
      up
        raw_spin_lock_irqsave(&sem->lock, flags)
        __up
          wake_up_process
            try_to_wake_up
              raw_spin_lock_irqsave(&p->pi_lock)
                __spin_lock_debug
                  spin_dump // suppose this happens once
                    printk
                      console_trylock
                        raw_spin_lock_irqsave(&sem->lock, flags)

                        <=== DEADLOCK

cpu1
====
printk
  console_trylock
    raw_spin_lock_irqsave(&sem->lock, flags)
    __spin_lock_debug
      spin_dump
        printk
          ...

          <=== repeat the recursive cycle infinitely

This was my v3 patch.
-----8<-----
From 92c84ea45ac18010804aa09eeb9e03f797a4b2b0 Mon Sep 17 00:00:00 2001
From: Byungchul Park <byungchul.park@....com>
Date: Wed, 27 Jan 2016 13:33:24 +0900
Subject: [PATCH v3] lib/spinlock_debug.c: prevent an infinite recursive cycle
 in spin_dump()

With CONFIG_DEBUG_SPINLOCK enabled, an infinite recursive cycle can occur
in spin_dump(): the backtrace shows printk() -> console_trylock() ->
do_raw_spin_lock() -> spin_dump() -> printk() ... repeating forever.

When spin_dump() is called from printk(), we should prevent the debug
spinlock code from calling printk() again in that context. It is
reasonable to skip printing "lockup suspected", which is only a warning
message, when printing it would certainly cause a real lockup.

However, this patch does not touch spin_bug(), since suppressing the
printk() there would not help at all: a call to spin_bug() almost
certainly means a real lockup has already happened, so avoiding the
message gains nothing in that case.

Signed-off-by: Byungchul Park <byungchul.park@....com>
---
 kernel/locking/spinlock_debug.c | 16 +++++++++++++---
 kernel/printk/printk.c          |  6 ++++++
 2 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/kernel/locking/spinlock_debug.c b/kernel/locking/spinlock_debug.c
index 0374a59..fefc76c 100644
--- a/kernel/locking/spinlock_debug.c
+++ b/kernel/locking/spinlock_debug.c
@@ -103,6 +103,8 @@ static inline void debug_spin_unlock(raw_spinlock_t *lock)
 	lock->owner_cpu = -1;
 }
 
+extern int is_printk_lock(raw_spinlock_t *lock);
+
 static void __spin_lock_debug(raw_spinlock_t *lock)
 {
 	u64 i;
@@ -113,11 +115,19 @@ static void __spin_lock_debug(raw_spinlock_t *lock)
 			return;
 		__delay(1);
 	}
-	/* lockup suspected: */
-	spin_dump(lock, "lockup suspected");
+
+	/*
+	 * If this function is called from printk(), then we should
+	 * not call printk() more. Or it will cause an infinite
+	 * recursive cycle!
+	 */
+	if (likely(!is_printk_lock(lock))) {
+		/* lockup suspected: */
+		spin_dump(lock, "lockup suspected");
 #ifdef CONFIG_SMP
-	trigger_all_cpu_backtrace();
+		trigger_all_cpu_backtrace();
 #endif
+	}
 
 	/*
 	 * The trylock above was causing a livelock.  Give the lower level arch
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index 2ce8826..657f8dd 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -1981,6 +1981,12 @@ asmlinkage __visible void early_printk(const char *fmt, ...)
 }
 #endif
 
+int is_printk_lock(raw_spinlock_t *lock)
+{
+	return	(lock == &console_sem.lock) ||
+		(lock == &logbuf_lock)      ;
+}
+
 static int __add_preferred_console(char *name, int idx, char *options,
 				   char *brl_options)
 {
-- 
1.9.1
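
With this applied, the cpu1 case above should no longer recurse:
__spin_lock_debug() sees that is_printk_lock(&sem->lock) is true, skips
spin_dump() and trigger_all_cpu_backtrace(), and just falls through to the
lower-level arch lock code mentioned in the comment above, so it keeps
waiting instead of calling printk() again. Roughly:

printk
  console_trylock
    raw_spin_lock_irqsave(&sem->lock, flags)
      __spin_lock_debug
        // is_printk_lock(&sem->lock) == 1: no spin_dump(), no printk()
        <wait in the arch-specific lock code until sem->lock is released>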
