Date:   Mon, 11 Jan 2021 18:04:36 -0500
From:   Steven Rostedt <rostedt@...dmis.org>
To:     Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc:     linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [RFC PATCH] tracing: Merge irqflags + preemt counter, add RT
 bits

On Wed, 16 Dec 2020 18:22:05 +0100
Sebastian Andrzej Siewior <bigeasy@...utronix.de> wrote:

> PREEMPT_RT never reported "serving softirq". I took a look to see if it
> could be changed. The tracing infrastructure examinates the preemtion
                                                           "preemption"

> counter for that. PREEMPT_RT does not change the preemption counter
> while disabling the bottom half or serving the softirqs in order to
> remain preemptible. The in_serving_softirq() macro and the SOFTIRQ_OFFSET
> define are still working but not on the preempt-counter.
> I started to look how to integrate the RT bits regarding softirq.
> 
> The state of the interrupts (irqflags) and the preemption counter are
> passed down to tracing_generic_entry_update(). However only one bit of
> irqflags is actually required: The on/off state.
> The irqflags and the preemption counter could be evaluated early and the
> information stored in an integer `trace_ctx'.
> tracing_generic_entry_update() would use the upper bits as the
> TRACE_FLAG_* and the lower 16bit as the preemption counter (considering
> that 1 must be substracted from the counter in some cases).
> 
> Whith this change the preemption counter is read in one place and the
  "With"

> relevant RT bits for softirq can be set there.
> 
> The actual preemption value is not used except for the tracing record.
> The `irqflags' is also not used except for the _irqsave() locking in a
> few spots.

Which spots?

> As part of the patch I added __ to trace_event_buffer_commit() while
> evaluating trace_event_buffer() for the struct trace_event_buffer usage
> regarding the `pc' and `flags' members. It appears that those two can
> also be merged into the `trace_ctx' integer.

Looks like you did change the trace_event_buffer struct. But I don't
understand why the "__" was added.


> With this change the callchain passes one argument less and evaluates
> the flags early. A build with all tracers enabled on x86-64 with and
> without the patch:
> 
>     text     data      bss      dec      hex    filename
> 24301717 22148594 13996284 60446595  39a5783 vmlinux.old
> 24301248 22148850 13996284 60446382  39a56ae vmlinux.new
> 
> data increased by 256 bytes, text shrank by 469 bytes.
> 

> -void
> -tracing_generic_entry_update(struct trace_entry *entry, unsigned short type,
> -			     unsigned long flags, int pc)
> +static unsigned int __tracing_gen_ctx_flags(unsigned long irqflags)
>  {
> -	struct task_struct *tsk = current;
> +	unsigned int trace_flags = 0;
> +	unsigned int pc;
> +
> +	pc = preempt_count();
>  
> -	entry->preempt_count		= pc & 0xff;
> -	entry->pid			= (tsk) ? tsk->pid : 0;
> -	entry->type			= type;
> -	entry->flags =
>  #ifdef CONFIG_TRACE_IRQFLAGS_SUPPORT
> -		(irqs_disabled_flags(flags) ? TRACE_FLAG_IRQS_OFF : 0) |
> +	if (irqs_disabled_flags(irqflags))
> +		trace_flags |= TRACE_FLAG_IRQS_OFF;
>  #else
> -		TRACE_FLAG_IRQS_NOSUPPORT |
> +		trace_flags |= TRACE_FLAG_IRQS_NOSUPPORT;
>  #endif
> -		((pc & NMI_MASK    ) ? TRACE_FLAG_NMI     : 0) |
> -		((pc & HARDIRQ_MASK) ? TRACE_FLAG_HARDIRQ : 0) |
> -		((pc & SOFTIRQ_OFFSET) ? TRACE_FLAG_SOFTIRQ : 0) |
> -		(tif_need_resched() ? TRACE_FLAG_NEED_RESCHED : 0) |
> -		(test_preempt_need_resched() ? TRACE_FLAG_PREEMPT_RESCHED : 0);

Note, the code above was the result of playing around to see how the
compiler optimized it. Have you looked at what the logic below compiles
to as assembly? This is a very hot path.
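[To make the codegen concern concrete, here is a minimal user-space
comparison of the two shapes being discussed. The masks and flag values
below are illustrative stand-ins, not the kernel's actual defines;
compiling both functions at -O2 and diffing the assembly is essentially
the experiment being asked about.]

```c
#include <assert.h>

/* Illustrative stand-ins for the kernel's flag and mask defines. */
#define F_NMI		0x04u
#define F_HARDIRQ	0x08u
#define NMI_MASK_X	0x00100000u
#define HARDIRQ_MASK_X	0x000f0000u

/*
 * Old shape: one expression of conditional ORs, which compilers can
 * often lower to branch-free bit tests.
 */
static unsigned int flags_expr(unsigned int pc)
{
	return ((pc & NMI_MASK_X)     ? F_NMI     : 0) |
	       ((pc & HARDIRQ_MASK_X) ? F_HARDIRQ : 0);
}

/*
 * New shape: separate if statements. Semantically identical, but it
 * may or may not produce the same machine code.
 */
static unsigned int flags_ifs(unsigned int pc)
{
	unsigned int f = 0;

	if (pc & NMI_MASK_X)
		f |= F_NMI;
	if (pc & HARDIRQ_MASK_X)
		f |= F_HARDIRQ;
	return f;
}
```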

> +
> +	if (pc & NMI_MASK)
> +		trace_flags |= TRACE_FLAG_NMI;
> +	if (pc & HARDIRQ_MASK)
> +		trace_flags |= TRACE_FLAG_HARDIRQ;
> +
> +	if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
> +		if (in_serving_softirq())
> +			trace_flags |= TRACE_FLAG_SOFTIRQ;
> +	} else {
> +		if (pc & SOFTIRQ_OFFSET)
> +			trace_flags |= TRACE_FLAG_SOFTIRQ;
> +	}
> +	if (tif_need_resched())
> +		trace_flags |= TRACE_FLAG_NEED_RESCHED;
> +	if (test_preempt_need_resched())
> +		trace_flags |= TRACE_FLAG_PREEMPT_RESCHED;
> +	return (trace_flags << 16) | (pc & 0xff);
> +}
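[For reference, the packing in the return statement above — TRACE_FLAG_*
bits in the upper half of trace_ctx, the preempt count in the low byte —
can be sketched in user space like this. The flag values are illustrative,
not the kernel's defines, and the helper names are made up for the sketch.]

```c
#include <assert.h>

/* Illustrative stand-ins for two TRACE_FLAG_* values. */
#define TF_IRQS_OFF	0x01u
#define TF_SOFTIRQ	0x10u

/* Pack flags and preempt count into a single trace_ctx integer. */
static unsigned int pack_trace_ctx(unsigned int trace_flags, unsigned int pc)
{
	return (trace_flags << 16) | (pc & 0xff);
}

/* Recover the flag bits on the consumer side. */
static unsigned int ctx_flags(unsigned int trace_ctx)
{
	return trace_ctx >> 16;
}

/* Recover the preempt count on the consumer side. */
static unsigned int ctx_preempt_count(unsigned int trace_ctx)
{
	return trace_ctx & 0xff;
}
```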

Can you break this into two patches? One that adds the trace_ctx and
merges the flags and pc, and another that only adds the RT bits.

Thanks!

-- Steve

> +
> +unsigned int _tracing_gen_ctx_flags(unsigned long irqflags)
> +{
> +	return __tracing_gen_ctx_flags(irqflags);
> +}
> +
> +unsigned int tracing_gen_ctx_flags(void)
> +{
> +	unsigned long irqflags;
> +
>
