Date:   Wed, 30 Mar 2022 22:47:05 +0000
From:   "Zhang, Qiang1" <qiang1.zhang@...el.com>
To:     "paulmck@...nel.org" <paulmck@...nel.org>
CC:     "frederic@...nel.org" <frederic@...nel.org>,
        "rcu@...r.kernel.org" <rcu@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH] rcu: Put the irq work into hard interrupt context for
 execution

On Wed, Mar 30, 2022 at 02:00:12PM +0800, Zqiang wrote:
> In a PREEMPT_RT kernel, an irq work whose IRQ_WORK_HARD_IRQ flag is not
> set is executed in the per-CPU irq_work kthread. Set the IRQ_WORK_HARD_IRQ
> flag on this irq work so that it runs in hard interrupt context, letting
> the scheduler re-evaluate sooner.
> 
> Signed-off-by: Zqiang <qiang1.zhang@...el.com>
> ---
>  kernel/rcu/tree.c        | 2 +-
>  kernel/rcu/tree_plugin.h | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index e2ffbeceba69..a69587773a85 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -678,7 +678,7 @@ static void late_wakeup_func(struct irq_work *work)
>  }
>  
>  static DEFINE_PER_CPU(struct irq_work, late_wakeup_work) =
> -	IRQ_WORK_INIT(late_wakeup_func);
> +	IRQ_WORK_INIT_HARD(late_wakeup_func);

>This is used only by rcu_irq_work_resched(), which is invoked only by rcu_user_enter(), which is never invoked until userspace is enabled, by which time all of the various kthreads will have been spawned, correct?
>
>Either way, please show me the exact sequence of events that lead to a problem with the current IRQ_WORK_INIT().
>
>  /*
>   * If either:
> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> index 3037c2536e1f..cf7bd28af8ef 100644
> --- a/kernel/rcu/tree_plugin.h
> +++ b/kernel/rcu/tree_plugin.h
> @@ -661,7 +661,7 @@ static void rcu_read_unlock_special(struct task_struct *t)
>  			    expboost && !rdp->defer_qs_iw_pending && cpu_online(rdp->cpu)) {
>  				// Get scheduler to re-evaluate and call hooks.
>  				// If !IRQ_WORK, FQS scan will eventually IPI.
> -				init_irq_work(&rdp->defer_qs_iw, rcu_preempt_deferred_qs_handler);
> +				rdp->defer_qs_iw = IRQ_WORK_INIT_HARD(rcu_preempt_deferred_qs_handler);
>  				rdp->defer_qs_iw_pending = true;
>  				irq_work_queue_on(&rdp->defer_qs_iw, rdp->cpu);
>  			}
>
>OK, in theory, rcu_read_unlock() could get to this point before all of the various kthreads were spawned.  In practice, the next time that the boot CPU went idle, the end of the quiescent state would be noticed.

As I understand it, the irq_work is used here so that the quiescent state is noticed earlier: because irq_work normally executes in interrupt context, it runs promptly. In an RT kernel, however, the irq_work is handed off to a kthread for execution, so by the time it runs it has been subject to scheduling delay.
Is there anything I missed?

Thanks
Zqiang	

>
>Or has this been failing in some other manner?  If so, please let me know the exact sequence of events.
>
>							Thanx, Paul
