Date:	Fri, 19 Jun 2015 14:21:30 +0200 (CEST)
From:	Thomas Gleixner <tglx@...utronix.de>
To:	Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
cc:	Jiang Liu <jiang.liu@...ux.intel.com>,
	Borislav Petkov <bp@...en8.de>, linux-kernel@...r.kernel.org,
	Sergey Senozhatsky <sergey.senozhatsky@...il.com>
Subject: Re: [-next] !irqd_can_balance() WARNINGs at irq_move_masked_irq()

On Fri, 19 Jun 2015, Thomas Gleixner wrote:
> On Fri, 19 Jun 2015, Sergey Senozhatsky wrote:
> > [    0.412291] WARNING: CPU: 0 PID: 0 at kernel/irq/migration.c:21 irq_move_masked_irq+0x57/0xc4()
> > [    0.412371] Can't balance irq 0 [edge]
> 
> Yuck.
> 
> > Do you guys want to replace WARN_ON() with WARN_ONCE(), perhaps? This,
> > of course, doesn't fix anything, but at least one can boot the system.
> > (Not really a patch, just an idea.)
> 
> Indeed. We really want to clear the move pending bit before the can
> balance check. Patch below. But that does not explain why this happens
> in the first place.
> 
> Can you please send me a full dmesg, kernel config and output of
> /proc/interrupts ? (Private mail is fine, or upload it to some place)

Thanks for providing the data. I think I know what happens.

Something in the kernel (not yet clear what) tries to move the hpet
irq 0 by calling irq_set_affinity(). That's a kernel-internal
interface which does not check whether the NO BALANCE flag is set for
the irq. So the call goes through and arms the move-from-next-interrupt
machinery, which ends up calling irq_move_masked_irq(), and that trips
over the flag and yells.
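
For reference, the check that fires looks roughly like the sketch
below. This is a simplified reconstruction based on the usual layout
of kernel/irq/migration.c around that time, not the exact code in
-next:

#include <linux/irq.h>
#include "internals.h"	/* irqd_clr_move_pending() */

/* Sketch (assumption), not the exact -next code. */
void irq_move_masked_irq(struct irq_data *idata)
{
	/* irq_set_affinity() on a moving irq only queues a pending move. */
	if (likely(!irqd_is_setaffinity_pending(idata)))
		return;

	/*
	 * NO BALANCE makes irqd_can_balance() return false here, even
	 * though the pending move was queued by a kernel-internal
	 * irq_set_affinity() call which never looked at the flag. And
	 * because the pending bit is still set, it yells again on the
	 * next interrupt.
	 */
	if (!irqd_can_balance(idata)) {
		WARN(1, "Can't balance irq %d [%s]\n", idata->irq,
		     irqd_is_level_type(idata) ? "level" : "edge");
		return;
	}

	irqd_clr_move_pending(idata);
	/* ... mask the irq if needed and apply the pending affinity ... */
}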

That's why I changed the WARN to a pr_warn() because we already know
the call stack.

So the core behaviour is inconsistent. We let the call to
irq_set_affinity() succeed and then yell about it later because we
think it's wrong.

I'm pretty sure that we must drop the check for NO BALANCE in
irq_move_masked_irq() and only check for the per_cpu bit, but at the
same time I really want to know where that call to irq_set_affinity(irq0)
is coming from.
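
In code, that direction would look something like the sketch below
(again just an illustration against the assumed layout above, not a
patch; same includes as in the previous sketch):

void irq_move_masked_irq(struct irq_data *idata)
{
	if (likely(!irqd_is_setaffinity_pending(idata)))
		return;

	/* Clear the pending bit up front so a bogus request cannot retrigger. */
	irqd_clr_move_pending(idata);

	/*
	 * Only genuinely per-cpu interrupts must never be moved. NO BALANCE
	 * merely keeps the irq away from balancing via the proc interface;
	 * kernel-internal irq_set_affinity() callers may ignore it.
	 */
	if (irqd_is_per_cpu(idata)) {
		WARN_ON(1);
		return;
	}

	/* ... mask the irq if needed and apply the pending affinity ... */
}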

Can you please collect the output of /proc/timer_list with the
previous patch applied, then replace that patch with the one below and
gather all the data again?

Thanks,

	tglx