Message-ID: <875xv5h7y9.ffs@tglx>
Date: Wed, 22 May 2024 23:07:10 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: Maxim Levitsky <mlevitsk@...hat.com>, kvm@...r.kernel.org,
 linux-kernel@...r.kernel.org
Cc: Paolo Bonzini <pbonzini@...hat.com>, Sean Christopherson
 <seanjc@...gle.com>, Marc Zyngier <maz@...nel.org>, Vitaly Kuznetsov
 <vkuznets@...hat.com>, Anna-Maria Behnsen <anna-maria@...utronix.de>,
 Frederic Weisbecker <frederic@...nel.org>
Subject: Re: RFC: NTP adjustments interfere with KVM emulation of TSC
 deadline timers

On Thu, Dec 21 2023 at 18:51, Maxim Levitsky wrote:
> The test usually fails because L2 observes TSC after the 
> preemption timer deadline, before the VM exit happens.

That's an arguably silly failure condition.

Timer interrupt delivery can be late even on bare metal, so observing
TSC ahead of the expected timer event is not really wrong.

Btw, the kernel also handles it nicely when the timer event arrives
_before_ the expected time. It simply reprograms the timer and is done
with it. That's actually required because clocksource (which determines
time) and clockevent (which expires timers) can be on different clocks
which might drift against each other.
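
Roughly like this (an illustrative sketch with made-up helper names,
not the actual hrtimer_interrupt() code):

   /*
    * The clocksource (ktime_get()) decides whether the timer has
    * really expired; if the clockevent fired early, it is simply
    * rearmed for the remaining delta. reprogram_clockevent() and
    * expire_timer() are made-up names, 'expires' is the timer's
    * expiry time on the clocksource.
    */
   ktime_t now = ktime_get();

   if (ktime_before(now, expires))
           reprogram_clockevent(ktime_sub(expires, now));
   else
           expire_timer();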

>     In particular, NTP performing a forward correction will result in
>     a timer expiring sooner than expected from a guest point of view.
>     Not a big deal, we kick the vcpu anyway.
>
>     But on wake-up, the vcpu thread is going to perform a check to
>     find out whether or not it should block. And at that point, the
>     timer check is going to say "timer has not expired yet, go back
>     to sleep". This results in the timer event being lost forever.

That's obviously a real problem.

>     There are multiple ways to handle this. One would be record that
>     the timer has expired and let kvm_cpu_has_pending_timer return
>     true in that case, but that would be fairly invasive. Another is
>     to check for the "short sleep" condition in the hrtimer callback,
>     and restart the timer for the remaining time when the condition
>     is detected.

:)

> So to solve this issue there are two options:

There is a third option:

   3. Unconditionally inject the timer interrupt into the guest when the
      underlying hrtimer has expired

      That's fine because timer interrupts can be early (see above) and
      any sane OS has to be able to handle it.
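
      In KVM terms that would boil down to something like the sketch
      below (purely illustrative, with made-up struct and helper names,
      not actual KVM code): the hrtimer callback treats the guest timer
      as expired unconditionally and kicks the vCPU, with no deadline
      re-check against the guest's clock anywhere on that path.

      static enum hrtimer_restart guest_timer_fn(struct hrtimer *hrt)
      {
              struct guest_timer *gt = container_of(hrt, struct guest_timer, timer);

              gt->irq_pending = true;  /* inject on the next VM entry */
              wake_up_vcpu(gt->vcpu);  /* made-up helper: kick the vCPU */

              return HRTIMER_NORESTART;
      }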

> 1. Have another go at implementing support for CLOCK_MONOTONIC_RAW timers.
>    I don't know if that is feasible and I would be very happy to hear
>    feedback from you.

That's a non-trivial exercise.

The charm of having all clocks related to CLOCK_MONOTONIC is that there
is zero requirement to take NTP frequency adjustments into account,
which makes the implementation reasonably simple and robust.

Changing everything over in that area (hrtimers, clockevents, NOHZ) to
be raw hardware frequency based would be a Herculean task and just a
huge pile of horrors.

So the only realistic way to do that is to correlate a
CLOCK_MONOTONIC_RAW timer to CLOCK_MONOTONIC, which obviously has the
same problem you are trying to solve :)

But we could be smart about it. Let's look at the math:

    mraw  = base_mraw + (tsc - base_r) * factor_r;
    mono  = base_mono + (tsc - base_m) * factor_m;

So converting a MONOTONIC_RAW time into MONOTONIC would be:

   tsc = (mraw - base_mraw)/factor_r + base_r

   mono = base_mono + ((mraw - base_mraw)/factor_r + base_r - base_m) * factor_m;

It's guaranteed that base_r == base_m, so:

   mono = base_mono + (mraw - base_mraw) * factor_m / factor_r;

The conversion factors are actually implemented with scaled math:

   mono = base_mono + (((delta_raw * mult_m) >> sft_m) << sft_r) / mult_r;

As sft_m and sft_r are guaranteed to be identical:

   mono = base_mono + (delta_raw * mult_m) / mult_r;

That obviously only works correctly when mult_m is constant between the
time the timer is enqueued and the time it expires, as you figured out.
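
In code the conversion could then look roughly like this (an
illustrative sketch; the function and parameter names are made up, and
a real implementation would need 128-bit intermediate math, e.g.
mul_u64_u32_div(), to avoid overflowing the multiplication for large
deltas):

   static u64 raw_to_mono(u64 mraw, u64 base_mraw, u64 base_mono,
                          u32 mult_m, u32 mult_r)
   {
           u64 delta_raw = mraw - base_mraw;

           /* mono = base_mono + delta_raw * mult_m / mult_r
            * (sft_m == sft_r, so the shifts cancel)
            */
           return base_mono + div64_u64(delta_raw * mult_m, mult_r);
   }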

But even if mult_m changes, this will still be correct if we take NOHZ
out of the picture for a moment. Why?

In a NOHZ=n scenario the next expiring timer is reevaluated at least
once every tick. As mult_m is stable between ticks, any MONOTONIC_RAW
timer which expires before the next tick will be mapped back onto
MONOTONIC pretty accurately and therefore expire at the expected time.

Now NOHZ comes into play and ruins everything under the following
condition:

   1) The CPU takes an idle nap for a longer period of time

   2) Time synchronization (NTP/PTP/PPS) adjusts mult_m during that
      idle period

That's the only condition where the conversion fails. If NTP slows down
the conversion then the timer is going to be late. If it speeds it up
then the hrtimer core will take care of it and guarantee that the timer
callback is never invoked early.
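
To put a rough scale on it with made-up numbers: a 100 ppm frequency
adjustment applied during a one second nap shifts the mapping by at
most 100 microseconds, and only a MONOTONIC_RAW timer which happens to
expire right at the end of that nap sees the full error.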

But that's going to be a rare problem because it requires:

    1) the CPU to be in idle for a longer period

    2) the MONOTONIC_RAW timer to be the effective first timer to fire
       after that idle period

    3) Time synchronization adjusting right during that idle period

Sure that can happen, but the question is whether it's really a
problem. As I said before, timer events coming late are to be expected
even on bare metal (think SMI, NMI, long interrupt-disabled regions).

So the main benefit of such a change would be to spare the various
architecture-specific implementations the stupid exercise of
implementing half-baked workarounds which will suffer from
the very same problems.

If done right then the extra overhead of the division will not really
be noticeable and only take effect when there is a MONOTONIC_RAW timer
queued. IOW, it's a penalty on virtualization hosts, but not for
everyone. The facility will introduce some extra cycles due to
MONOTONIC_RAW conditionals in a few places, but that's probably
something which can't even be measured.

Thanks,

        tglx
