Message-ID: <f3a403ed-6a32-504e-7e57-a761b8db83c3@gmail.com>
Date:   Sat, 15 Oct 2016 15:23:27 +0800
From:   Cheng Chao <cs.os.kernel@...il.com>
To:     Marc Zyngier <marc.zyngier@....com>
Cc:     tglx@...utronix.de, jason@...edaemon.net,
        linux-kernel@...r.kernel.org, cs.os.kernel@...il.com
Subject: Re: [PATCH] irqchip/gic: Enable gic_set_affinity set more than one
 cpu

On 10/15/2016 01:33 AM, Marc Zyngier wrote:
>> on 10/13/2016 11:31 PM, Marc Zyngier wrote:
>>> On Thu, 13 Oct 2016 18:57:14 +0800
>>> Cheng Chao <cs.os.kernel@...il.com> wrote:
>>>
>>>> GIC can distribute an interrupt to more than one cpu,
>>>> but now, gic_set_affinity sets only one cpu to handle interrupt.
>>>
>>> What makes you think this is a good idea? What purpose does it serves?
>>> I can only see drawbacks to this: You're waking up more than one CPU,
>>> wasting power, adding jitter and clobbering the cache.
>>>
>>> I assume you see a benefit to that approach, so can you please spell it
>>> out?
>>>
>>
>> Ok, you are right, but performance is another point we should consider.
>>
>> We use an E1 device to transmit/receive video streams. We find that the
>> E1 interrupt is handled by only one cpu, whose usage is almost 100%,
>> while the other cpus have a much lower load, so performance is poor.
>> The cpu is 4-core.
>
> It looks to me like you're barking up the wrong tree. We have
> NAPI-enabled network drivers for this exact reason, and adding more
> interrupts to an already overloaded system doesn't strike me as going in
> the right direction. May I suggest that you look at integrating NAPI
> into your E1 driver?
>

Great, NAPI may be a good option; I can try using NAPI. Thank you.

On the other hand, gic_set_affinity sets only one cpu to handle an
interrupt, which really confuses me a little: why doesn't the GIC driver
support multiple cpus handling an interrupt, as the others (MPIC, APIC,
etc.) do?

It seems that the GIC driver is too restrictive.

We can use /proc/irq/xx/smp_affinity to set what we expect:
echo 1 > /proc/irq/xx/smp_affinity routes the interrupt to the first cpu.
echo 2 > /proc/irq/xx/smp_affinity routes the interrupt to the second cpu.

But:
echo 3 > /proc/irq/xx/smp_affinity leaves the interrupt on the first cpu,
with no interrupt on the second cpu.
Why does the second cpu get no interrupts?


Regardless of:
 >>> What makes you think this is a good idea? What purpose does it serves?
 >>> I can only see drawbacks to this: You're waking up more than one CPU,
 >>> wasting power, adding jitter and clobbering the cache.


I think it is more reasonable to let the user decide what to do.

If I care about power etc., then I echo only a single cpu to
/proc/irq/xx/smp_affinity; but if I expect more than one cpu to handle
a particular interrupt, I can echo the cpus I expect to
/proc/irq/xx/smp_affinity.


>> So is adding CONFIG_ARM_GIC_AFFINITY_SINGLE_CPU better? That way we
>> can make a trade-off between performance and power etc.
>
> No, that's pretty horrible, and I'm not even going to entertain the
> idea.

Yes, in fact /proc/irq/xx/smp_affinity is enough.

> I suggest you start investigating how to mitigate your interrupt
> rate instead of just taking more of them.
>

Ok, thanks again.

> Thanks,
>
> 	M.
>
