Message-ID: <56BA04AD.7090903@redhat.com>
Date:	Tue, 09 Feb 2016 16:24:29 +0100
From:	Denys Vlasenko <dvlasenk@...hat.com>
To:	Steven Rostedt <rostedt@...dmis.org>
CC:	linux-kernel@...r.kernel.org, srostedt@...hat.com,
	Tejun Heo <tj@...nel.org>,
	Peter Hurley <peter@...leysoftware.com>
Subject: Re: [PATCH] printk: avoid livelock if another CPU printks continuously

On 02/09/2016 04:17 PM, Steven Rostedt wrote:
> On Tue, 09 Feb 2016 15:59:48 +0100
> Denys Vlasenko <dvlasenk@...hat.com> wrote:
> 
>>> First, console_seq needs logbuf_lock protection. On some archs, this may
>>> hit 9999 every time as the console_seq is most likely in cache and isn't
>>> updating.  
>>
>> We end up here only in one case:
>>
>>                if (console_seq == log_next_seq)
>>                        break;
>>
>>                if (--cnt == 0)
>>                        break;  /* Someone else printk's like crazy */
>>
>> - if this second "break" triggers.
>> In this case, "console_seq == log_next_seq" can't be true.
> 
> We released all locks, why can't it be true? What prevents another task
> on another CPU from coming into this section and updating everything?

If we see that happening, it means another CPU has started serving the
printk backlog, in which case we (this CPU) no longer have the obligation
to service it.
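
In sketch form, the wait loop looks roughly like this (an illustration
with placeholder names, not the exact patch; "cnt" and the "out" label
stand in for whatever the real code uses):

	/*
	 * Spin without taking any locks. If the sequence numbers are
	 * ever observed equal, another CPU must have entered the
	 * console-servicing loop and drained the backlog, so we can
	 * stop waiting and bail out.
	 */
	while (--cnt) {
		cpu_relax();
		if (console_seq == log_next_seq)
			goto out;	/* another CPU took over */
	}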

>> Therefore, if we later see that console_seq becomes
>> equal to log_next_seq (even if momentarily), then another CPU definitely
>> entered the printk servicing loop. Which means that we are off the hook
>> - that other CPU is responsible now. We can bail out.
>>
>> I saw this as a cheap (no locking ops, no writes) way to check whether
>> we can exit the cpu_relax() loop early. It is not reliable, yes.
>> The definitive check for the case where no one took the job is later,
>> "if (console_trylock())". If that trylock succeeds, we must iterate
>> over the buffer once again - there might be unserviced printk messages.
>>
>> (Why is the early-exit check effective?
>> IIRC cpu_relax() is a memory barrier. The compiler is not allowed to cache
>> memory variables across it, thus the "if (console_seq == log_next_seq)..."
>> check always re-reads these variables from memory.)
> 
> cpu_relax() is a compiler barrier, not a memory barrier. Yes, the
> compiler won't cache the variables, but nothing stops the CPU from doing
> so. All archs use this code.

We expect caches to be coherent. If another CPU modifies these variables,
our CPU will (eventually) see modified values.

My code does not depend on always catching the "console_seq != log_next_seq"
==> "console_seq == log_next_seq" transition. It's okay to miss it.

> spin_locks() are memory barriers, and will sync everything.

They are also much heavier than plain memory reads.
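
For comparison, the fully synchronized version of the same peek would
look roughly like this (a sketch; logbuf_lock is the lock you mentioned
above as protecting console_seq):

	/*
	 * Correct, but every waiting CPU now bounces logbuf_lock's
	 * cache line and serializes against every in-flight printk().
	 */
	raw_spin_lock_irqsave(&logbuf_lock, flags);
	caught_up = (console_seq == log_next_seq);
	raw_spin_unlock_irqrestore(&logbuf_lock, flags);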

>>>> +				/* Good, other CPU entered "for(;;)" loop */
>>>> +				goto out;
>>>> +			}
>>>> +		}
>>>> +		/* No one seems to be willing to take it... */
>>>> +		if (console_trylock())
>>>> +			goto again; /* we took it */  
>>>
>>> Perhaps add a few loops to the taking of the console sem.  
>>
>> Why?
>>
>> If we fail to take the lock, another CPU took it.
>> There is no need to try harder: as soon as we know that any
>> other CPU took that lock, we can safely exit this function.
> 
> Because if this CPU is the one spamming the other CPU, retrying will
> widen the window for it to be the one that takes the lock.

If we reached this code, we aren't the spamming CPU. We are the CPU
that is being spammed (we are in the loop that services the backlog).

> I still think this is very hacky.

I'm not insisting on my patch. Propose a different solution.
