Message-ID: <49020B39.6080805@redhat.com>
Date:	Fri, 24 Oct 2008 13:51:53 -0400
From:	Chris Snook <csnook@...hat.com>
To:	Kumar Gala <galak@...nel.crashing.org>
CC:	maxk@...lcomm.com, LinuxPPC-dev list <linuxppc-dev@...abs.org>,
	linux-kernel Kernel <linux-kernel@...r.kernel.org>,
	tglx@...utronix.de
Subject: Re: default IRQ affinity change in v2.6.27 (breaking several SMP
 PPC based systems)

Kumar Gala wrote:
> 
> On Oct 24, 2008, at 11:09 AM, Chris Snook wrote:
> 
>> Kumar Gala wrote:
>>> On Oct 24, 2008, at 10:17 AM, Chris Snook wrote:
>>>> Kumar Gala wrote:
>>>>> It appears the default IRQ affinity changed from just CPU 0 to
>>>>> all CPUs.  This breaks several SMP PPC systems in which only a
>>>>> single processor may be selected as the destination of an IRQ.
>>>>> What is the right way to fix this?  Should we use:
>>>>>   cpumask_t irq_default_affinity = 1;
>>>>> instead of
>>>>>   cpumask_t irq_default_affinity = CPU_MASK_ALL?
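
For reference, cpumask_t is a structure wrapping a bitmap, so the
bare-integer form above would not actually compile; with the cpumask
macros of that kernel era, a CPU-0-only default would read roughly
(a sketch of the intent, not an actual patch):

	cpumask_t irq_default_affinity = CPU_MASK_CPU0;	/* bit 0 only */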
>>>>
>>>> On those systems, perhaps, but not universally.  There's plenty of 
>>>> hardware where the physical topology of the machine is abstracted 
>>>> away from the OS, and you need to leave the mask wide open and let 
>>>> the APIC figure out where to map the IRQs.  Ideally, we should 
>>>> probably make this decision based on the APIC, but if there's no PPC 
>>>> hardware that uses this technique, then it would suffice to make 
>>>> this arch-specific.
>>> What did those systems do before this patch?  It's one thing to
>>> expose the ability to change the default mask via
>>> /proc/irq/default_smp_affinity.  It's another (and a regression, in
>>> my opinion) to change the default mask value itself.
>>
>> Before the patch they took an extremely long time to boot if they had 
>> storage attached to each node of a multi-chassis system, performed 
>> poorly unless special irqbalance hackery or manual assignment was 
>> used, and imposed artificial restrictions on the granularity of 
>> hardware partitioning to ensure that CPU 0 would always be a CPU that 
>> could service all interrupts necessary to boot the OS.
>>
>>> As for making it arch-specific, that doesn't really help, since not
>>> all PPC hardware has the limitation I spoke of.  Not even all MPICs
>>> (in our case) have the limitation.
>>
>> What did those systems do before this patch? :)
>>
>> Making it arch-specific is an extremely simple way to solve your 
>> problem without making trouble for the people who wanted this patch in 
>> the first place.  If PPC needs further refinement to handle particular 
>> *PICs, you can implement that without touching any arch-generic code.
> 
> 
> So why not just have x86 startup code set irq_default_affinity = 
> CPU_MASK_ALL then?

It's an issue on Itanium as well, and potentially on any SMP
architecture with a non-trivial interconnect.

-- Chris
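
For illustration, a minimal sketch of the arch-specific route suggested
above, using the cpumask macros of that kernel era.  The init hook
arch_init_irq_default_affinity() and the capability check are
hypothetical names for this sketch, not actual kernel interfaces:

	/* Generic code keeps the wide-open v2.6.27 default: */
	cpumask_t irq_default_affinity = CPU_MASK_ALL;

	/*
	 * Hypothetical arch hook, run during early init: a platform
	 * whose interrupt controller can deliver each IRQ to only one
	 * CPU narrows the default before any IRQs are requested.
	 */
	void __init arch_init_irq_default_affinity(void)
	{
		if (!pic_can_target_multiple_cpus())	/* hypothetical */
			irq_default_affinity = CPU_MASK_CPU0;
	}

Doing the inverse (having x86 widen the mask to CPU_MASK_ALL from its
own startup code) would push the same override into every architecture
with a non-trivial interconnect, Itanium included, which is why the
less invasive choice is to keep the generic default wide open and
narrow it only where the hardware requires.  Either way, the shipped
default remains adjustable at runtime through
/proc/irq/default_smp_affinity.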