Message-ID: <20150529105607.GG3135@pathway.suse.cz>
Date:	Fri, 29 May 2015 12:56:07 +0200
From:	Petr Mladek <pmladek@...e.cz>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Frederic Weisbecker <fweisbec@...il.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Dave Anderson <anderson@...hat.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Kay Sievers <kay@...y.org>, Jiri Kosina <jkosina@...e.cz>,
	Michal Hocko <mhocko@...e.cz>, Jan Kara <jack@...e.cz>,
	linux-kernel@...r.kernel.org, Wang Long <long.wanglong@...wei.com>,
	peifeiyue@...wei.com, dzickus@...hat.com, morgan.wang@...wei.com,
	sasha.levin@...cle.com
Subject: Re: [PATCH 02/10] printk: Try harder to get logbuf_lock on NMI

On Thu 2015-05-28 13:09:44, Andrew Morton wrote:
> On Thu, 28 May 2015 15:50:54 +0200 Petr Mladek <pmladek@...e.cz> wrote:
> 
> > > > +{
> > > > +	u64 start_time, current_time;
> > > > +	int this_cpu = smp_processor_id();
> > > > +
> > > > +	/* no way if we are already locked on this CPU */
> > > > +	if (logbuf_cpu == this_cpu)
> > > > +		return 0;
> > > > +
> > > > +	/* try hard to get the lock but do not wait forever */
> > > > +	start_time = cpu_clock(this_cpu);
> > > > +	current_time = start_time;
> > > > +	while (current_time - start_time < TRY_LOCKBUF_LOCK_MAX_DELAY_NS) {
> > > > +		if (raw_spin_trylock(&logbuf_lock))
> > > > +			return 1;
> > > > +		cpu_relax();
> > > > +		current_time = cpu_clock(this_cpu);
> > > > +	}
> > > 
> > > (Looks at the read_seqcount_retry() in
> > > kernel/time/sched_clock.c:sched_clock())
> > >
> > > Running cpu_clock() in NMI context seems a generally bad idea.
> > 
> > I am sorry but this is too cryptic for me :-)
> > read_seqcount_retry() looks safe to me under NMI.
> 
> hmpf.  If you guys say so...
> 
> Note that it's not just a matter of "safe to call from NMI context". 
> The above loop also assumes that cpu_clock() is *being updated* within
> the context of a single NMI.  Is that true/safe now and in the future?
> Probably.  I didn't check all architectures but ARM looks OK at present.
> 
> We should at least update Documentation/timers/timekeeping.txt: "a sane
> value" becomes "the correct value", no alternatives.
> 
> > > There are many sites in kernel/printk/printk.c which take logbuf_lock,
> > > but this patch only sets logbuf_cpu in one of those cases:
> > > vprintk_emit().  I suggest adding helper functions to take/release
> > > logbuf_lock.  And rename logbuf_lock to something else to ensure that
> > > nobody accidentally takes the lock directly.
> > 
> > IMHO, vprintk_emit() is special. It is the only location where the
> > lock is taken in NMI context. The other functions are used to dump
> > @logbuf and are called in normal context.
> > 
> > try_logbuf_lock_in_nmi() could fail and we need to handle the error
> > path. We do not need to do this in the other locations.
> > 
> > Note that we do not want to get the console in NMI because
> > there are even more locks that might cause a deadlock.
> 
> Consider the case where a CPU has taken logbuf_lock within
> devkmsg_read() and then receives an NMI, from which it calls
> try_logbuf_lock_in_nmi():

I am not sure that I understand. My point is that we do not call
devkmsg_read() from NMI context, so we do not need to use
try_logbuf_lock_in_nmi() there. IMHO, the same is true for
all other locations except for vprintk_emit().
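
To make the difference explicit, here is a rough sketch of what I mean
(simplified, not the exact code from the patch):

	/* Normal context, e.g. devkmsg_read(): a plain spin lock is fine. */
	raw_spin_lock_irq(&logbuf_lock);
	/* ... copy a record out of @logbuf ... */
	raw_spin_unlock_irq(&logbuf_lock);

	/* NMI context, reachable only via vprintk_emit(): never spin forever. */
	if (in_nmi()) {
		if (!try_logbuf_lock_in_nmi())
			return;		/* give up instead of deadlocking */
	} else {
		raw_spin_lock(&logbuf_lock);
	}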


> > +/* We must be careful in NMI when we managed to preempt a running printk */
> > +static int try_logbuf_lock_in_nmi(void)
> > +{
> > +	u64 start_time, current_time;
> > +	int this_cpu = smp_processor_id();
> > +
> > +	/* no way if we are already locked on this CPU */
> > +	if (logbuf_cpu == this_cpu)
> > +		return 0;

Or do you have this check in mind? It will detect the deadlock
immediately but @logbuf_cpu is set only in vprintk_emit(). We
will still spin when an NMI comes inside one of the other functions,
e.g. devkmsg_read().
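
For reference, the sequence in vprintk_emit() looks roughly like this
(heavily simplified), which is why the check above can only ever catch
recursion through vprintk_emit() itself:

	/* vprintk_emit(), simplified sketch */
	this_cpu = smp_processor_id();
	raw_spin_lock(&logbuf_lock);
	logbuf_cpu = this_cpu;		/* the only place it is ever set */
	/* ... store the message into @logbuf ... */
	logbuf_cpu = UINT_MAX;		/* cleared again before unlock */
	raw_spin_unlock(&logbuf_lock);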


> > +	/* try hard to get the lock but do not wait forever */
> > +	start_time = cpu_clock(this_cpu);
> > +	current_time = start_time;
> > +	while (current_time - start_time < TRY_LOCKBUF_LOCK_MAX_DELAY_NS) {
> > +		if (raw_spin_trylock(&logbuf_lock))
> > +			return 1;
> > +		cpu_relax();
> > +		current_time = cpu_clock(this_cpu);
> > +	}
> > +
> > +	return 0;
> > +}
> 
> That CPU is now going to spin around for 100us and then time out.

Yes, there was a deadlock without the patch. So, limited spinning is
still a win.

Or would you like to detect the deadlock immediately in all cases?
I mean adding the proposed wrappers around the lock take/release calls
and setting/testing some cpu-specific variable there?

It sounds interesting. Well, the detection will not be 100% correct
because there is a small race window between taking @logbuf_lock
and setting @logbuf_cpu. I wonder if it is worth doing. But I will
do it if you want.
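
Something like the following sketch is what I have in mind; the helper
names are made up, but it shows where the race window is:

	static int logbuf_owner_cpu = -1;

	static void logbuf_lock_take(void)
	{
		raw_spin_lock(&logbuf_lock);
		/*
		 * The race window: an NMI that hits right here sees
		 * logbuf_owner_cpu != this CPU even though we already
		 * hold the lock, so the immediate detection is missed
		 * and the NMI has to fall back to the timed trylock.
		 */
		logbuf_owner_cpu = smp_processor_id();
	}

	static void logbuf_lock_release(void)
	{
		logbuf_owner_cpu = -1;
		raw_spin_unlock(&logbuf_lock);
	}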

Best Regards,
Petr
