Message-ID: <d6936b80-25ad-5e06-5fcc-c211adb70ceb@huawei.com>
Date: Fri, 7 May 2021 15:30:06 +0800
From: He Ying <heying24@...wei.com>
To: Marc Zyngier <maz@...nel.org>
CC: <vincent.guittot@...aro.org>, <Valentin.Schneider@....com>,
<andrew@...n.ch>, <catalin.marinas@....com>,
<f.fainelli@...il.com>, <gregory.clement@...tlin.com>,
<jason@...edaemon.net>, <kernel-team@...roid.com>,
<linux-arm-kernel@...ts.infradead.org>,
<linux-kernel@...r.kernel.org>, <linux@....linux.org.uk>,
<saravanak@...gle.com>, <sumit.garg@...aro.org>,
<tglx@...utronix.de>, <will@...nel.org>
Subject: Re: [PATCH v3 03/16] arm64: Allow IPIs to be handled as normal
interrupts
On 2021/5/6 19:44, Marc Zyngier wrote:
> On Thu, 06 May 2021 08:50:42 +0100,
> He Ying <heying24@...wei.com> wrote:
>> Hello Marc,
>>
>> We have faced a performance regression for handling ipis since this
>> commit. I think it's the same issue reported by Vincent.
> Can you share more details on what regression you have observed?
> What's the workload, the system, the performance drop?
OK. We measured the PMU cycle count from the entry of gic_handle_irq
to the entry of do_handle_ipi. Here is some more information about our test:
CPU: Hisilicon hip05-d02
With the patch series applied: 1115 cycles
With the patch series reverted: 599 cycles
>
>> I found you pointed out the possible two causes:
>>
>> (1) irq_enter/exit on the rescheduling IPI means we reschedule much
>> more often.
> It turned out to be a red herring. We don't reschedule more often, but
> we instead suffer from the overhead of irq_enter()/irq_exit().
> However, this only matters for silly benchmarks, and no real-life
> workload showed any significant regression. Have you identified such
> realistic workload?
I'm afraid not. We just ran some benchmarks and read the PMU cycle
counters. But we have observed that the running time from the entry of
gic_handle_irq to the entry of do_handle_ipi almost doubles. Doesn't that
affect realistic workloads?
>
>> (2) irq_domain lookups add some overhead.
> While this is also a potential source of overhead, it turned out not
> to be the case.
OK.
>
>> But I don't see any following patches in mainline. So, are you still
>> working on this issue? Looking forward to your reply.
> See [1]. However, there is probably better things to do than this
> low-level specialisation of IPIs, and Thomas outlined what needs to be
> done (see v1 of the patch series).
OK, I see the patch series. Will it be applied to mainline someday?
I notice that more than five months have passed since you sent it.
Thanks.
>
> Thanks,
>
> M.
>
> [1] https://lore.kernel.org/lkml/20201124141449.572446-1-maz@kernel.org/
>