Message-ID: <46db52fe-b69e-3ea9-4581-858658be8a2c@huawei.com>
Date: Fri, 7 May 2021 17:31:23 +0800
From: He Ying <heying24@...wei.com>
To: Marc Zyngier <maz@...nel.org>
CC: <vincent.guittot@...aro.org>, <Valentin.Schneider@....com>,
<andrew@...n.ch>, <catalin.marinas@....com>,
<f.fainelli@...il.com>, <gregory.clement@...tlin.com>,
<kernel-team@...roid.com>, <linux-arm-kernel@...ts.infradead.org>,
<linux-kernel@...r.kernel.org>, <linux@....linux.org.uk>,
<saravanak@...gle.com>, <sumit.garg@...aro.org>,
<tglx@...utronix.de>, <will@...nel.org>
Subject: Re: [PATCH v3 03/16] arm64: Allow IPIs to be handled as normal
interrupts

On 2021/5/7 16:56, Marc Zyngier wrote:
> On Fri, 07 May 2021 08:30:06 +0100,
> He Ying <heying24@...wei.com> wrote:
>>
>> On 2021/5/6 19:44, Marc Zyngier wrote:
>>> On Thu, 06 May 2021 08:50:42 +0100,
>>> He Ying <heying24@...wei.com> wrote:
>>>> Hello Marc,
>>>>
>>>> We have faced a performance regression for handling ipis since this
>>>> commit. I think it's the same issue reported by Vincent.
>>> Can you share more details on what regression you have observed?
>>> What's the workload, the system, the performance drop?
>> OK. We have just measured the PMU cycles from the entry of gic_handle_irq
>> to the entry of do_handle_ipi. Here is some more information about our test:
>>
>> CPU: Hisilicon hip05-d02
>>
>> With the patch series applied: 1115 cycles
>> With the patch series reverted: 599 cycles
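
For reference, one way such a latency can be sampled is by snapshotting the
arm64 PMU cycle counter at both points. Below is a minimal sketch; the helper
names and their placement are hypothetical, and it assumes the cycle counter
has already been enabled (e.g. by perf):

#include <linux/percpu.h>
#include <linux/types.h>
#include <asm/sysreg.h>

/*
 * Hypothetical instrumentation: snapshot PMCCNTR_EL0 at the entry of
 * gic_handle_irq() and read it again at the entry of the IPI demux,
 * then accumulate the per-CPU delta.
 */
static DEFINE_PER_CPU(u64, ipi_entry_cycles);

/* call first thing in gic_handle_irq() */
static inline void ipi_lat_mark_entry(void)
{
        __this_cpu_write(ipi_entry_cycles, read_sysreg(pmccntr_el0));
}

/* call first thing in the IPI demux (do_handle_ipi above) */
static inline u64 ipi_lat_delta(void)
{
        return read_sysreg(pmccntr_el0) - __this_cpu_read(ipi_entry_cycles);
}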
> And? How is that meaningful? Interrupts are pretty rare compared to
> everything that happens in the system. How does it affect the
> behaviour of the system as a whole?
OK.
>
>>>> I found you pointed out the two possible causes:
>>>>
>>>> (1) irq_enter/exit on the rescheduling IPI means we reschedule much
>>>> more often.
>>> It turned out to be a red herring. We don't reschedule more often, but
>>> we instead suffer from the overhead of irq_enter()/irq_exit().
>>> However, this only matters for silly benchmarks, and no real-life
>>> workload showed any significant regression. Have you identified such
>>> a realistic workload?
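
For reference, point (1) can be made concrete with a simplified sketch of the
old arm64 IPI demux, which special-cased the rescheduling IPI so that it ran
without irq_enter()/irq_exit(). The enum values are copied locally only to
keep the sketch self-contained; this is not the exact mainline code:

#include <linux/hardirq.h>
#include <linux/sched.h>
#include <linux/smp.h>

enum { IPI_RESCHEDULE, IPI_CALL_FUNC };        /* local copy for the sketch */

static void old_handle_IPI_sketch(int ipinr)
{
        switch (ipinr) {
        case IPI_RESCHEDULE:
                scheduler_ipi();        /* no irq_enter()/irq_exit() here */
                break;
        case IPI_CALL_FUNC:
                irq_enter();
                generic_smp_call_function_interrupt();
                irq_exit();
                break;
        /* ... other IPI types ... */
        }
}

/*
 * With the series applied, every IPI is delivered as a normal per-CPU
 * interrupt, so the generic entry code brackets the handler with
 * irq_enter()/irq_exit() before the demux runs, including the
 * rescheduling IPI; that bracketing is the extra per-IPI cost.
 */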
>> I'm afraid not. We just ran some benchmarks and collected PMU cycle
>> counts. But we have observed that the running time from the entry of
>> gic_handle_irq to the entry of do_handle_ipi almost doubles. Doesn't
>> that affect realistic workloads?
> Then I'm not that interested. Show me an actual regression in a real
> workload that affects people, and I'll be a bit more sympathetic to
> your complaint. But quoting raw numbers does not help.
>
> There are a number of advantages to having IPIs as IRQs, as it allows us
> to deal with proper allocation (other subsystems want to use IPIs), and
> eventually NMIs. There is a trade-off, and if that means wasting a few
> cycles, so be it.
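
As a rough illustration of the allocation point, here is a hedged sketch of
how another subsystem could claim an IPI through the generic IPI API once
IPIs are modelled as IRQs. The driver-side names are hypothetical, and it
assumes the caller can obtain the IPI irq_domain from the interrupt
controller:

#include <linux/cpumask.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(int, my_ipi_cookie);        /* dummy per-CPU dev_id */
static int my_ipi_virq;

static irqreturn_t my_ipi_handler(int irq, void *dev_id)
{
        /* runs on the target CPU in hard-IRQ context */
        return IRQ_HANDLED;
}

static int my_ipi_init(struct irq_domain *ipi_domain)
{
        my_ipi_virq = irq_reserve_ipi(ipi_domain, cpu_possible_mask);
        if (my_ipi_virq <= 0)
                return -ENODEV;

        /* each CPU must also call enable_percpu_irq(my_ipi_virq, 0) */
        return request_percpu_irq(my_ipi_virq, my_ipi_handler,
                                  "my-ipi", &my_ipi_cookie);
}

static void my_ipi_kick(const struct cpumask *targets)
{
        ipi_send_mask(my_ipi_virq, targets);
}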
OK. I see.
>
>>>> (2) irq_domain lookups add some overhead.
>>> While this is also a potential source of overhead, it turned out not
>>> to be the case.
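
For context, the lookup in question is essentially the hwirq-to-virq
translation done on every interrupt. A minimal sketch follows (helper name
hypothetical); for SGIs the hwirq numbers are small, so a linear irq_domain
resolves this with an array lookup rather than the radix tree, which is
consistent with it not being the dominant cost:

#include <linux/irqdomain.h>

/*
 * Illustrative hot-path step: translate a hardware SGI number into a
 * Linux virq via the domain's reverse map (returns 0 if unmapped).
 */
static unsigned int sgi_to_virq(struct irq_domain *d, irq_hw_number_t hwirq)
{
        return irq_find_mapping(d, hwirq);
}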
>> OK.
>>>> But I don't see any following patches in mainline. So, are you still
>>>> working on this issue? Looking forward to your reply.
>>> See [1]. However, there is probably better things to do than this
>>> low-level specialisation of IPIs, and Thomas outlined what needs to be
>>> done (see v1 of the patch series).
>> OK. I see the patch series. Will it be applied to mainline
>> someday? I notice that more than 5 months have passed since you sent
>> the patch series.
> I have no plan to merge these patches any time soon, given that nobody
> has shown a measurable regression using something other than a trivial
> benchmark. If you come up with such an example, I will of course
> reconsider this position.
OK. Thanks a lot for all your replies. If I come up with a measurable
regression with a realistic workload, I'll contact you again.
Thanks.
>
> Thanks,
>
> M.
>