Message-ID: <875xdd8oag.ffs@tglx>
Date: Sat, 20 Sep 2025 11:29:43 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: Peter Zijlstra <peterz@...radead.org>
Cc: arnd@...db.de, anna-maria@...utronix.de, frederic@...nel.org,
 peterz@...radead.org, luto@...nel.org, mingo@...hat.com,
 juri.lelli@...hat.com, vincent.guittot@...aro.org,
 dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
 mgorman@...e.de, vschneid@...hat.com, linux-kernel@...r.kernel.org,
 oliver.sang@...el.com
Subject: Re: [RFC][PATCH 7/8] entry,hrtimer: Push reprogramming timers into
 the interrupt return path

On Thu, Sep 18 2025 at 09:52, Peter Zijlstra wrote:
> Currently hrtimer_interrupt() runs expired timers, which can re-arm
> themselves, after which it computes the next expiration time and
> re-programs the hardware.
>
> However, timers like HRTICK, the highres timer driving preemption,
> cannot re-arm themselves at the point of running, since the next task
> has not been determined yet. The schedule() in the interrupt return
> path will switch to the next task, which then causes a new hrtimer to
> be programmed.
>
> This then results in reprogramming the hardware at least twice, once
> after running the timers, and once upon selecting the new task.
>
> Notably, *both* events happen in the interrupt.
>
> By pushing the hrtimer reprogram all the way into the interrupt return
> path, it runs after schedule() and this double reprogram can be
> avoided.
>
> XXX: 0-day is unhappy with this patch -- it is reporting lockups that
> very much look like a timer going missing. I am unable to reproduce.
> Notably, the lockup goes away when the workloads are run without perf
> monitors.
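
For reference, the intended flow as I read the patch, with placeholder
names (set(TIF_REARM), handle_tif_rearm() and hrtimer_reprogram() are
mine, not the patch's actual identifiers):

      hrtimer_interrupt()
        expire_timers();            /* timers may re-arm; HRTICK cannot */
        set(TIF_REARM);             /* defer the hardware write */

      exit_to_user_mode_prepare()
        schedule();                 /* next task arms its HRTICK */
        handle_tif_rearm()
          hrtimer_reprogram();      /* one hardware write covers both */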

After staring at it for a while, I have two observations.

1) In the 0-day report the lockup detector triggers on spinlock
   contention in futex_wait_setup().

   I'm not really seeing how that's related to a missing timer.

   Without knowing what the other CPUs are doing and what holds the
   lock, it's pretty much impossible to tell what the hell is going on.

   So that might need a backtrace triggered on all CPUs and perhaps
   some debug output about the hrtimer state alongside it.

   On the CPU where the lockup is detected, the timer is working.
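
   Something along the lines of the below, wired into the watchdog,
   might help (untested sketch; trigger_all_cpu_backtrace() exists in
   linux/nmi.h, but hrtimer_bases is file local to
   kernel/time/hrtimer.c, so the dump has to live there or behind a
   helper):

      /* sketch: dump this CPU's hrtimer state, then all-CPU backtraces */
      struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);

      pr_emerg("hrtimer: expires_next=%lld hang_detected=%u\n",
               ktime_to_ns(cpu_base->expires_next),
               cpu_base->hang_detected);
      trigger_all_cpu_backtrace();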


2) I came up with the following scenario, which breaks with this
   delayed rearm.

   Assume this happens on the timekeeping CPU.

      hrtimer_interrupt()
        expire_timers();
        set(TIF_REARM);

      exit_to_user_mode_prepare()
        handle_tif_muck()
          ...
          /* wait up to two jiffies for cond() to become true */
          to = jiffies + 2;
          while (!cond() && time_before(jiffies, to))
                  cpu_relax();

   If cond() does not become true for whatever reason, this will never
   make progress, because the tick hrtimer which increments jiffies is
   not firing: its hardware reprogram is still pending behind this very
   loop.

   The blocker can also be a wait on a remote CPU which indirectly
   prevents progress, or a subtle dependency on a timer (timer_list or
   hrtimer) expiring.
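
   Spelled out as a sequence (simplified; tick_sched_timer() is the
   hrtimer which increments jiffies on the timekeeping CPU, the TIF
   names are the placeholders from above):

      hrtimer_interrupt()
        tick_sched_timer();         /* jiffies incremented here */
        set(TIF_REARM);             /* next tick NOT programmed yet */

      exit_to_user_mode_prepare()
        handle_tif_muck()
          /* spins waiting for jiffies to advance, but the rearm
             which would fire the next tick_sched_timer() only
             happens after this loop completes */
        handle_tif_rearm()          /* never reached */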

   I have no idea whether that's related to the reported 0-day fallout,
   but it definitely is a real problem lurking in the dark.

Thanks,

        tglx
