Date:   Thu, 2 Nov 2017 10:49:51 -0400
From:   Steven Rostedt <rostedt@...dmis.org>
To:     Petr Mladek <pmladek@...e.com>
Cc:     Vlastimil Babka <vbabka@...e.cz>,
        Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
        akpm@...ux-foundation.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, Cong Wang <xiyou.wangcong@...il.com>,
        Dave Hansen <dave.hansen@...el.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Mel Gorman <mgorman@...e.de>, Michal Hocko <mhocko@...nel.org>,
        Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
        "yuwang.yuwang" <yuwang.yuwang@...baba-inc.com>
Subject: Re: [PATCH] mm: don't warn about allocations which stall for too
 long

On Thu, 2 Nov 2017 12:46:50 +0100
Petr Mladek <pmladek@...e.com> wrote:

> On Wed 2017-11-01 11:36:47, Steven Rostedt wrote:
> > On Wed, 1 Nov 2017 14:38:45 +0100
> > Petr Mladek <pmladek@...e.com> wrote:  
> > > My current main worry with Steven's approach is a risk of deadlocks
> > > that Jan Kara saw when he played with similar solution.  
> > 
> > And if there exists such a deadlock, then the deadlock exists today.  
> 
> The patch is going to effectively change console_trylock() to
> console_lock() and this might add problems.
> 
> The most simple example is:
> 
>        console_lock()
>          printk()
>            console_trylock() was SAFE.
> 
>        console_lock()
>          printk()
>            console_lock() causes DEADLOCK!
> 
> Sure, we could detect this and avoid waiting when
> console_owner == current. But does this cover all

Which I will do.

> situations? What about?
> 
> CPU0			CPU1
> 
> console_lock()          func()
>   console->write()        take_lockA()
>     func()		    printk()
> 			      busy wait for console_lock()
> 
>       take_lockA()

How does this not deadlock without my changes?

 func()
   take_lockA()
     printk()
       console_lock()
         console->write()
             func()
                take_lockA()

DEADLOCK!
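For the simpler recursive case, the check Petr suggests (skip the busy wait when the current task already owns the console) could look roughly like this userland sketch; `struct task`, `console_owner`, and `can_spin_for_console()` are illustrative names, not the actual kernel API:

```c
/* Hypothetical userland sketch of the "console_owner == current" check:
 * a nested printk() must not busy-wait for a console lock that its own
 * task already holds.  All names are illustrative. */
#include <stddef.h>
#include <stdbool.h>

struct task { int pid; };

struct task *console_owner;     /* task currently writing to the console */

/* Return true when it is safe to busy-wait for the console lock. */
bool can_spin_for_console(struct task *current_task)
{
        /* Spinning on ourselves would deadlock: we would wait forever
         * for a lock that only we can release. */
        if (console_owner == current_task)
                return false;
        return true;
}
```

This covers only the recursive case on one CPU; it does nothing for the cross-CPU lock-ordering scenario shown above.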


> 
> In other words, it used to be safe to call printk() from
> console->write() functions because printk() used console_trylock().

I still don't see how this can be safe now.

> Your patch is going to change this. It is even worse because
> you probably will not use console_lock() directly and therefore
> this might be hidden for lockdep.

And no, my patch adds lockdep annotations for the spinner. And if I get
that wrong, I'm sure Peter Zijlstra will help.

> 
> BTW: I am still not sure how to make the busy waiter preferred
> over console_lock() callers. I mean that the busy waiter has
> to get console_sem even if there are some tasks in the wait queue.

I started struggling with this, then realized that console_sem is just
that: a semaphore, which doesn't have a concept of ownership. I can
simply hand off the semaphore without ever letting it go. My RFC patch
is almost done; you'll see it soon.
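A minimal, hypothetical model of that hand-off idea (not the actual patch; `sem_owner`, `waiter`, and `console_unlock_or_handoff()` are invented names): instead of up()-ing console_sem and racing for it again, the holder transfers ownership directly to a registered spinner, so the semaphore is never observed free by sleeping waiters.

```c
/* Hypothetical userland model of handing console_sem to a busy waiter
 * without ever releasing it.  All names are illustrative. */
#include <stddef.h>
#include <stdbool.h>

struct task { int pid; };

struct task *sem_owner;         /* current console_sem holder */
struct task *waiter;            /* task busy-waiting for a hand-off */

/* A spinner registers itself while the semaphore is held. */
void register_spinner(struct task *t)
{
        waiter = t;
}

/* Instead of releasing the semaphore, hand it straight to the spinner
 * when one exists; sleeping console_lock() callers never see the
 * semaphore free, so the spinner is always preferred.  Returns true
 * when ownership was handed off rather than released. */
bool console_unlock_or_handoff(void)
{
        if (waiter) {
                sem_owner = waiter;     /* ownership moves; lock stays held */
                waiter = NULL;
                return true;
        }
        sem_owner = NULL;               /* normal release: up(&console_sem) */
        return false;
}
```

In this model the "preferred over console_lock() callers" property falls out for free: the semaphore count never goes up, so nobody sleeping on it can win the race.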

> 
> 
> > > But let's wait for the patch. It might look and work nicely
> > > in the end.  
> > 
> > Oh, I need to write a patch? Bah, I guess I should. Where are all those
> > developers dying to do kernel programming that I can pass this off to?  
> 
> Yes, where are those days when my primary task was to learn kernel
> hacking? This would have been great training material.

:)

> 
> I still have to invest time into fixing printk. But I personally
> think that lazy offloading to kthreads is the more promising
> way to go. It is pretty straightforward. The only problem is
> guaranteeing the takeover. But there must be a reasonable
> way to detect that the system's heart is still beating
> and that we are not the only working CPU.
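One hypothetical way to phrase that heartbeat check (purely a sketch; the sequence counters and `kthread_is_alive()` are invented names, not anything from Petr's patches): treat the offload kthread as alive as long as it keeps making visible progress through the log, and fall back to direct flushing when pending records stop being consumed.

```c
/* Hypothetical progress check for lazy offloading of console flushing
 * to a kthread: keep offloading only while the kthread demonstrably
 * makes progress.  Counters and names are illustrative. */
#include <stdbool.h>

unsigned long log_seq;          /* last log record produced */
unsigned long kthread_seq;      /* last record the kthread flushed */
unsigned long last_seen_seq;    /* kthread_seq at the previous check */

/* Return true when offloading still looks safe. */
bool kthread_is_alive(void)
{
        if (kthread_seq != last_seen_seq) {
                /* The kthread flushed something since we last looked. */
                last_seen_seq = kthread_seq;
                return true;
        }
        /* No progress since the last check: only safe if there is
         * nothing pending for it to flush. */
        return log_seq == kthread_seq;
}
```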

My patch isn't that big. Let's talk more after I post it.

-- Steve
