Message-ID: <20210930163858.orndmu5xfxue3zck@linutronix.de>
Date:   Thu, 30 Sep 2021 18:38:58 +0200
From:   Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
        Jiri Olsa <jolsa@...nel.org>
Subject: Re: [PATCH 4/5] irq_work: Handle some irq_work in SOFTIRQ on
 PREEMPT_RT

On 2021-09-30 16:39:51 [+0200], Peter Zijlstra wrote:
> > > I think the problem was something Jolsa found a while ago, where perf
> > > defers to an irq_work (from NMI context) and that irq_work wants to
> > > deliver signals, which it can't on -RT, so the whole thing gets punted
> > > to softirq. With the end-result that if you self-profile RT tasks,
> > > things come apart or something.
> > 
> > For signals (at least on x86) we have this ARCH_RT_DELAYS_SIGNAL_SEND thingy
> > where the signal is delayed until exit_to_user_mode_loop().
> 
> Yeah, I think that is what started much of the entry rework.. the signal
> rework is still pending.

posix timers were also guilty here :)
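
(For reference, the delayed send is roughly the sketch below. The
forced_info slot in task_struct only exists with the RT patches applied
and the checks are simplified, so treat this as an illustration of the
idea rather than the actual code:)

#include <linux/sched.h>
#include <linux/sched/signal.h>
#include <linux/preempt.h>

/*
 * The signal code takes spinlock_t which is a sleeping lock on RT, so a
 * signal forced from atomic context is stashed in the task and delivered
 * later in exit_to_user_mode_loop().
 */
static int force_sig_info_delayed(struct kernel_siginfo *info)
{
	struct task_struct *t = current;

	if (!in_atomic())
		return force_sig_info(info);		/* regular path */

	if (WARN_ON_ONCE(t->forced_info.si_signo))	/* RT-only member */
		return 0;

	t->forced_info = *info;
	set_tsk_thread_flag(t, TIF_NOTIFY_RESUME);
	return 0;
}

/* and on the way back to user space: */
static void deliver_delayed_signal(void)
{
	if (unlikely(current->forced_info.si_signo)) {
		force_sig_info(&current->forced_info);
		current->forced_info.si_signo = 0;
	}
}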

> > perf_pending_event() is the only non-HARD on RT (on the perf side). I
> > think that is due to perf_event_wakeup() where we have wake_up_all() and
> 
> Right, and that is exactly the problem, that needs to run at a higher
> prio than the task that needs it, but softirq makes that 'difficult'.
> 
> One possible 'solution' would be to, instead of softirq, run the thing
> as a kthread (worker or otherwise) such that userspace can at least set
> the priority and has a small chance of making it work.
>
> Running them all at the same prio still sucks (much like the single
> net-RX thing), but at least a kthread is somewhat controllable.
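
(Background on the HARD vs non-HARD wording above: an irq_work can be
flagged so that it always runs from the self-IPI, even on PREEMPT_RT;
everything else is what currently gets punted to softirq there. The
callback names below are made up, the init macros are the existing ones
from <linux/irq_work.h>:)

#include <linux/irq_work.h>

static void nmi_safe_cb(struct irq_work *work)
{
	/* short, non-sleeping work: fine in hard interrupt context */
}

static void wakeup_cb(struct irq_work *work)
{
	/* takes locks which are sleeping locks on RT (rwlock_t, ...),
	 * so on PREEMPT_RT this must not run from the bare IPI */
}

/* always runs from the irq_work IPI, even on PREEMPT_RT */
static struct irq_work hard_work = IRQ_WORK_INIT_HARD(nmi_safe_cb);

/* default: deferred on PREEMPT_RT (today via SOFTIRQ) */
static struct irq_work deferred_work = IRQ_WORK_INIT(wakeup_cb);

Both are queued with irq_work_queue() as usual; only where the callback
ends up running differs.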

I could replace the softirq processing with a per-CPU thread. This
should work. But I would (still) have to delay the wake-up of the thread
to the timer tick - or - we try the wake-up from the irqwork self-IPI. I
just don't know how many of those will arrive back-to-back. The RCU
callback (rcu_preempt_deferred_qs_handler()) pops up a lot. By my naive
guesswork I would say that the irqwork is not needed since a
preempt-enable somewhere should do the needed scheduling. But then commit
  0864f057b050b ("rcu: Use irq_work to get scheduler's attention in clean context")

claims it is not enough.
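
Something like the sketch below is what I have in mind. The names
(irq_workd, lazy_list) are made up, thread creation (kthread_create_on_cpu()
or smpboot) and the actual list walking are left out; the wake-up would be
issued from the irq_work self-IPI instead of raising the SOFTIRQ:

#include <linux/kthread.h>
#include <linux/llist.h>
#include <linux/percpu.h>
#include <linux/sched.h>

static DEFINE_PER_CPU(struct llist_head, lazy_list);
static DEFINE_PER_CPU(struct task_struct *, irq_workd);

/* invoked from the irq_work self-IPI instead of raise_softirq() */
static void irq_work_wake_thread(void)
{
	struct task_struct *tsk = __this_cpu_read(irq_workd);

	if (tsk && !llist_empty(this_cpu_ptr(&lazy_list)))
		wake_up_process(tsk);
}

static int irq_workd_fn(void *unused)
{
	while (!kthread_should_stop()) {
		struct llist_node *list;

		list = llist_del_all(this_cpu_ptr(&lazy_list));
		if (list) {
			/* run the ->func() of each queued item here, like
			 * irq_work_run_list() does for the softirq case */
		}

		set_current_state(TASK_INTERRUPTIBLE);
		if (llist_empty(this_cpu_ptr(&lazy_list)))
			schedule();
		__set_current_state(TASK_RUNNING);
	}
	return 0;
}

That keeps the per-CPU semantics of the softirq variant but would let
user space set the priority of those threads.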

> > read_lock_irqsave().
> 
> That one is really vexing, that really is just signal delivery to self
> but even when signal stuff is fixed, we're stuck behind that fasync
> rwlock :/

Yeah. We are already in an RCU section and then this.

Sebastian
