Date:   Thu, 23 Mar 2017 19:35:42 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Daniel Lezcano <daniel.lezcano@...aro.org>
Cc:     tglx@...utronix.de, linux-kernel@...r.kernel.org,
        nicolas.pitre@...aro.org, rafael@...nel.org,
        vincent.guittot@...aro.org
Subject: Re: [PATCH V8 2/3] irq: Track the interrupt timings

On Thu, Mar 23, 2017 at 06:42:02PM +0100, Daniel Lezcano wrote:
> +/*
> + * record_irq_time() is called from a single place in the interrupt
> + * handler. We want it always inlined so that its body is embedded in
> + * the caller and the static key branch can take effect at that level.
> + * Without the explicit __always_inline we can end up with a function
> + * call and a small, pointless overhead in the hot path.
> + */
> +static __always_inline void record_irq_time(struct irq_desc *desc)
> +{
> +	if (static_key_enabled(&irq_timing_enabled)) {

I think you meant to use either static_branch_likely() or
static_branch_unlikely() here. Those are runtime code-patched;
static_key_enabled() generates a regular load and test.

Also, if you do something like:

	if (!static_branch_likely(&irq_timing_enabled))
		return;

you can save one level of indent.

> +		if (desc->istate & IRQS_TIMINGS) {
> +			struct irq_timings *timings = this_cpu_ptr(&irq_timings);
> +			unsigned int index = timings->count & IRQ_TIMINGS_MASK;
> +
> +			timings->values[index].ts = local_clock();
> +			timings->values[index].irq = irq_desc_get_irq(desc);
> +			timings->count++;
> +		}
> +	}
> +}
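
IOW, with both of the above applied it could look something like this;
completely untested sketch, reusing the field names from the quoted
hunk:

	static __always_inline void record_irq_time(struct irq_desc *desc)
	{
		/* Patched at runtime; near-zero cost when timings are off */
		if (!static_branch_likely(&irq_timing_enabled))
			return;

		if (desc->istate & IRQS_TIMINGS) {
			struct irq_timings *timings = this_cpu_ptr(&irq_timings);
			unsigned int index = timings->count & IRQ_TIMINGS_MASK;

			timings->values[index].ts = local_clock();
			timings->values[index].irq = irq_desc_get_irq(desc);
			timings->count++;
		}
	}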



> +DEFINE_STATIC_KEY_FALSE(irq_timing_enabled);
> +
> +DEFINE_PER_CPU(struct irq_timings, irq_timings);
> +
> +void irq_timings_enable(void)
> +{
> +	static_branch_inc(&irq_timing_enabled);

Do you really need counting, or do you want static_branch_enable() here?

> +}
> +
> +void irq_timings_disable(void)
> +{
> +	static_branch_dec(&irq_timing_enabled);

idem.

> +}
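
That is, unless you actually need enable/disable calls to nest, the
non-counting variant would be (again untested):

	void irq_timings_enable(void)
	{
		/* Flip the key once; not reference counted */
		static_branch_enable(&irq_timing_enabled);
	}

	void irq_timings_disable(void)
	{
		static_branch_disable(&irq_timing_enabled);
	}
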
> -- 
> 1.9.1
> 
