Date:	Wed, 6 Jan 2016 19:10:42 +0100 (CET)
From:	Thomas Gleixner <tglx@...utronix.de>
To:	Prarit Bhargava <prarit@...hat.com>
cc:	John Stultz <john.stultz@...aro.org>,
	lkml <linux-kernel@...r.kernel.org>,
	Ingo Molnar <mingo@...nel.org>,
	Xunlei Pang <pang.xunlei@...aro.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Baolin Wang <baolin.wang@...aro.org>,
	Arnd Bergmann <arnd@...db.de>
Subject: Re: [PATCH 1/2] kernel, timekeeping, add trylock option to ktime_get_with_offset()

On Wed, 6 Jan 2016, Prarit Bhargava wrote:
> On 01/06/2016 12:33 PM, John Stultz wrote:
> > On Wed, Jan 6, 2016 at 9:28 AM, John Stultz <john.stultz@...aro.org> wrote:
> >> On Wed, Jan 6, 2016 at 5:00 AM, Prarit Bhargava <prarit@...hat.com> wrote:
> >>> -ktime_t ktime_get_with_offset(enum tk_offsets offs)
> >>> +ktime_t ktime_get_with_offset(enum tk_offsets offs, int trylock)
> >>>  {
> >>>         struct timekeeper *tk = &tk_core.timekeeper;
> >>>         unsigned int seq;
> >>>         ktime_t base, *offset = offsets[offs];
> >>>         s64 nsecs;
> >>> +       unsigned long flags = 0;
> >>> +
> >>> +       if (unlikely(!timekeeping_initialized))
> >>> +               return ktime_set(0, 0);
> >>>
> >>>         WARN_ON(timekeeping_suspended);
> >>>
> >>> +       if (trylock && !raw_spin_trylock_irqsave(&timekeeper_lock, flags))
> >>> +               return ktime_set(KTIME_MAX, 0);
> >>
> >> Wait.. this doesn't make sense. The timekeeper lock is only for reading.
> > 
> > Only for writing.. sorry.. still drinking my coffee.
> > 
> >> What I was suggesting to you offline is to have something that avoids
> >> spinning on the seqcounter, so that if a bug occurs and we IPI all the
> >> cpus, we don't deadlock or block any printk messages.
> > 
> > And more clearly here, if a cpu takes a write on the seqcounter in
> > update_wall_time() and at that point another cpu hits a bug, and IPIs
> > the cpus, the system would deadlock. That's really what I want to
> > avoid.
> 
> Right -- but the only time that the seq_lock is taken for writing is when the
> timekeeper_lock is acquired (including update_wall_time()).  This means that
> 
> if (!raw_spin_trylock_irqsave(&timekeeper_lock, flags))
> 
> is equivalent to
> 
> if (tk_core.seq & 1) // the seqcount is odd while a write is in progress
> 
> The problem with the latter is that there is no protection against a
> writer setting tk_core.seq odd AFTER I've read it; AFAICT that
> protection comes from the timekeeper_lock.
> 
> That means I need to check whether the timekeeper_lock is held.  And
> the patch does exactly that -- it checks whether the lock is available,
> and if not avoids spinning on the seqcount.
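
A minimal sketch of how a caller might consume the trylock variant proposed
in the patch above. The two-argument ktime_get_with_offset() and the
KTIME_MAX sentinel come from the patch, not mainline; the helper name
printk_safe_realtime() is hypothetical.

#include <linux/ktime.h>
#include <linux/timekeeping.h>

/* Hypothetical caller of the proposed trylock variant. */
static ktime_t printk_safe_realtime(void)
{
	/* Second argument is the patch's new trylock flag. */
	ktime_t kt = ktime_get_with_offset(TK_OFFS_REAL, 1);

	/* KTIME_MAX means timekeeper_lock was write-held: bail out, don't spin. */
	if (ktime_to_ns(kt) == KTIME_MAX)
		return ktime_set(0, 0);

	return kt;
}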

And no, we don't want that in every code path.

We already have the concept of the fast timekeeper, which is lockless and NMI
safe. It's usable for tracing and perf, so it can be used for printk as well.

It supports clock monotonic today, which is good enough for printk, but it
could be extended to other clocks if really required.
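
As a sketch of what using the fast timekeeper for printk could look like
(ktime_get_mono_fast_ns() is the existing lockless, NMI-safe accessor for
CLOCK_MONOTONIC; the helper name printk_get_ts_ns() is made up):

#include <linux/timekeeping.h>
#include <linux/types.h>

/* Hypothetical printk timestamp helper built on the fast timekeeper. */
static u64 printk_get_ts_ns(void)
{
	/*
	 * The fast timekeeper never blocks behind the timekeeping writer,
	 * so this cannot deadlock even if another CPU is stopped in the
	 * middle of update_wall_time().
	 */
	return ktime_get_mono_fast_ns();
}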

Thanks,

	tglx
