Date:   Sun, 4 Jun 2017 20:52:07 +0200 (CEST)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     John Stultz <john.stultz@...aro.org>
cc:     lkml <linux-kernel@...r.kernel.org>,
        Ingo Molnar <mingo@...nel.org>,
        Miroslav Lichvar <mlichvar@...hat.com>,
        Richard Cochran <richardcochran@...il.com>,
        Prarit Bhargava <prarit@...hat.com>,
        Stephen Boyd <stephen.boyd@...aro.org>,
        Daniel Mentz <danielmentz@...gle.com>,
        stable <stable@...r.kernel.org>
Subject: Re: [PATCH 1/3 v2] time: Fix clock->read(clock) race around clocksource
 changes

On Wed, 31 May 2017, John Stultz wrote:

> In some testing on arm64 platforms, I was seeing null ptr
> crashes in the kselftest/timers clocksource-switch test.
> 
> This was happening in a read function like:
> u64 clocksource_mmio_readl_down(struct clocksource *c)
> {
>     return ~(u64)readl_relaxed(to_mmio_clksrc(c)->reg) & c->mask;
> }
> 
> Where the callers enter the seqlock, and then call something
> like:
>     cycle_now = tkr->read(tkr->clock);
> 
> The problem seeming to be that since the ->read() and ->clock
> pointer references are happening separately, it's possible the
> clocksource change happens in between and we end up calling the
> old ->read() function with the new clocksource, (or vice-versa)
> which causes the to_mmio_clksrc() in the read function to run
> off into space.
> 
> This patch tries to address the issue by providing a helper
> function that atomically reads the clock value and then calls
> the clock->read(clock) function so that we always call the read
> function with the appropriate clocksource and don't accidentally
> mix them.

This changelog is still horrible to read. This really wants proper
explanations and not 'seeming to be', 'tries to address' ....

Something like this:

  "In tests, which excercise switching of clocksources, a NULL pointer
   dereference can be observed on AMR64 platforms in the clocksource read()
   function:

   u64 clocksource_mmio_readl_down(struct clocksource *c)
   {
	return ~(u64)readl_relaxed(to_mmio_clksrc(c)->reg) & c->mask;
   }

   This is called from the core timekeeping code via:

    	cycle_now = tkr->read(tkr->clock);

   tkr->read is the cached tkr->clock->read() function pointer. When the
   clocksource is changed then tkr->clock and tkr->read are updated
   sequentially. The code above results in a sequential load operation of
   tkr->read and tkr->clock as well.

   If the store to tkr->clock hits between the loads of tkr->read and
   tkr->clock, then the old read() function is called with the new clock
   pointer. As a consequence the read() function dereferences a different data
   structure and the resulting 'reg' pointer can point anywhere including
   NULL.

   This problem was introduced when the timekeeping code was switched over to
   use struct tk_read_base. Before that, it was theoretically possible as well
   when the compiler decided to reload clock in the code sequence:

     now = tk->clock->read(tk->clock);

   Add a helper function which avoids the issue by reading tk_read_base->clock
   once into a local variable clk and then issuing the read function via
   clk->read(clk). This guarantees that the read() function always gets the
   proper clocksource pointer handed in."
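
For illustration, a minimal sketch of such a helper (tk_clock_read() is the
name used in the patch; the exact signature and the use of READ_ONCE() to
make the single load explicit are assumptions, not necessarily the exact
code of the patch):

   static inline u64 tk_clock_read(const struct tk_read_base *tkr)
   {
	struct clocksource *clock = READ_ONCE(tkr->clock);

	/* clock is loaded exactly once, so read() always gets the matching pointer */
	return clock->read(clock);
   }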

The whole problem was introduced by me, when I (over)optimized the cache
line footprint of the timekeeping stuff and wanted to avoid touching the
clocksource cache line when the clocksource does not need it, like TSC on
x86. The above race did not come to my mind at all when I wrote that
code. Bummer..

> The one exception where this helper isn't necessary is for the
> fast-timekeepers which use their own locking and update logic
> to the tkr structures.

That's simply wrong. The fast time keepers have exactly the same issue.

   seq = tkf->seq;
   tkr = tkf->base + (seq & 0x01);
   now = tkr->read(tkr->clock);

So this is exactly the same because this decomposes to

   rd = tkr->read;
   cl = tkr->clock;
   now = rd(cl);

So if you put the update in context:

CPU0  	      	  	CPU1
   rd = tkr->read;
			update_fast_timekeeper()
			write_seqcount_latch(tkf->seq);
			memcpy(tkf->base[0], newtkr);
			write_seqcount_latch(tkf->seq);
			memcpy(tkf->base[1], newtkr);
   cl = tkr->clock;
   now = rd(cl);

Then you end up with the very same problem as with the general timekeeping
itself.

The two bases and the seqcount_latch() magic are there to allow using the
fast timekeeper in NMI context, which can interrupt the update
sequence. That guarantees that the reader which interrupted the update will
always use a consistent tkr->base. But in no way does it protect against
the read -> clock inconsistency caused by a concurrent or interrupting
update.
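
So the fast timekeeper read path needs the very same single-load treatment,
e.g. in the simplified notation from above (assuming the tk_clock_read()
helper sketched earlier):

   seq = tkf->seq;
   tkr = tkf->base + (seq & 0x01);
   now = tk_clock_read(tkr);	/* loads tkr->clock once, then calls clock->read(clock) */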

> +/*
> + * tk_clock_read - atomic clocksource read() helper
> + *
> + * This helper is necessary to use in the read paths because, while the
> + * seqlock ensures we don't return a bad value while structures are updated,
> + * it doesn't protect from potential crashes. There is the possibility that
> + * the tkr's clocksource may change between the read reference, and the
> + * clock reference passed to the read function.  This can cause crashes if
> + * the wrong clocksource is passed to the wrong read function.

Come on. The problem is not that it can cause crashes.

The problem is that it hands in the wrong pointer. Even if it does not
crash, it can still read from a location which has other, far harder to
debug side effects.

Comments and changelogs should be written in a factual manner not like
fairy tales.

Thanks,

	tglx
