Message-ID: <aa19a0e1-2742-d74f-50b2-e81ba1fed7a6@raspberrypi.com>
Date: Mon, 23 May 2022 12:01:03 +0100
From: Phil Elwell <phil@...pberrypi.com>
To: Stefan Wahren <stefan.wahren@...e.com>, paulmck@...nel.org
Cc: Marcelo Tosatti <mtosatti@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Nicolas Saenz Julienne <nsaenzju@...hat.com>,
Borislav Petkov <bp@...en8.de>,
Minchan Kim <minchan@...nel.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Juri Lelli <juri.lelli@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Linux ARM <linux-arm-kernel@...ts.infradead.org>,
regressions@...ts.linux.dev, riel@...riel.com,
viro@...iv.linux.org.uk
Subject: Re: vchiq: Performance regression since 5.18-rc1
Hi Stefan,
On 23/05/2022 11:48, Stefan Wahren wrote:
> Hi Phil,
>
> Am 23.05.22 um 11:29 schrieb Phil Elwell:
>> Hi Stefan,
>>
>> On 23/05/2022 07:19, Stefan Wahren wrote:
>>> Hi Paul,
>>>
>>> Am 23.05.22 um 06:48 schrieb Paul E. McKenney:
>>>> On Sun, May 22, 2022 at 05:11:36PM +0200, Stefan Wahren wrote:
>>>>> Hi Paul,
>>>>>
>>>>> Am 22.05.22 um 01:46 schrieb Paul E. McKenney:
>>>>>> On Sun, May 22, 2022 at 01:22:00AM +0200, Stefan Wahren wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> while testing the staging/vc04_services/interface/vchiq_arm driver with my
>>>>>>> Raspberry Pi 3 B+ (multi_v7_defconfig) I noticed a huge performance
>>>>>>> regression since [ff042f4a9b050895a42cae893cc01fa2ca81b95c] mm:
>>>>>>> lru_cache_disable: replace work queue synchronization with synchronize_rcu
>>>>>>>
>>>>>>> Usually I run "vchiq_test -f 1" to check that the driver is still working [1].
>>>>>>>
>>>>>>> Before commit:
>>>>>>>
>>>>>>> real 0m1,500s
>>>>>>> user 0m0,068s
>>>>>>> sys 0m0,846s
>>>>>>>
>>>>>>> After commit:
>>>>>>>
>>>>>>> real 7m11,449s
>>>>>>> user 0m2,049s
>>>>>>> sys 0m0,023s
>>>>>>>
>>>>>>> Best regards
>>>>>>>
>>>>>>> [1] - https://github.com/raspberrypi/userland
>>>>>> Please feel free to try the patch shown below. Or the pair of patches
>>>>>> from Rik here:
>>>>>>
>>>>>> https://lore.kernel.org/lkml/20220218183114.2867528-2-riel@surriel.com/
>>>>>> https://lore.kernel.org/lkml/20220218183114.2867528-3-riel@surriel.com/
>>>>> I tried your patch and Rik's patches, but in both cases vchiq_test runs for
>>>>> ~7 minutes instead of ~1 second.
>>>> That is surprising. Do you boot with rcupdate.rcu_normal=1?
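>>>> (If in doubt, the current value should be readable at what I believe is the
>>>> usual sysfs path for that boot parameter:
>>>>
>>>> cat /sys/module/rcupdate/parameters/rcu_normal
>>>>
>>>> and the boot command line itself is in /proc/cmdline.)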
>>> No, not explicitly.
>>>> That would
>>>> nullify my patch, but I would expect that Rik's patch would still provide
>>>> increased performance even in that case.
>>> I will retest with a fresh SD card image.
>>>>
>>>> Could you please characterize where the slowdown is occurring?
>>>
>>> Unfortunately I don't have deep insight into the driver or the vchiq_test tool,
>>> just a user's view.
>>>
>>> Do you think an strace would be a good starting point?
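>>>
>>> Something like the following might be a useful start, just to get per-syscall
>>> timings written to a log file:
>>>
>>> strace -f -T -o vchiq_trace.log vchiq_test -f 1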
>>>
>>> @Phil Any advice on how to analyse this issue?
>>
>> Sending many small control packets:
>>
>> vchiq_test -c 1 10000
>>
>> essentially tests interrupt latency. Using a small number of large bulk
>> transfers:
>>
>> vchiq_test -b 10000 1
>>
>> becomes a test of how long it takes to lock down pages. It also tests DMA
>> transfer speeds, but since the DMA is run by the firmware (which you aren't
>> changing), I think you can rule that out.
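>> For example, timing the two modes separately (just as a rough comparison)
>> should show which path has regressed:
>>
>> time vchiq_test -c 1 10000
>> time vchiq_test -b 10000 1
>>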
> Thanks, I will try.
>>
>> You may also find it helpful to include "force_turbo=1" in config.txt for more
>> predictable results.
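>> For example (force_turbo=1 pins the clocks, so the timings don't wander with
>> frequency scaling):
>>
>> # /boot/config.txt
>> force_turbo=1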
>>
>> By the way, running our 5.18-rc7-based branch on a 3B+ I'm not seeing any
>> performance problems:
> I assume you are using arm/bcm2709_defconfig and not arm/multi_v7_defconfig like me?
That's correct. Simply switching to multi_v7_defconfig breaks vchiq completely,
presumably because it doesn't define CONFIG_BCM2835_VCHIQ.
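(Untested, but to try vchiq on top of multi_v7_defconfig I'd expect you would need
to switch the staging option on yourself, something along the lines of

CONFIG_BCM2835_VCHIQ=m

plus whatever it depends on in drivers/staging/vc04_services/Kconfig.)
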
Phil
>>
>> pi@...pberrypi:~$ time vchiq_test -f 1
>> Functional test - iters:1
>> ======== iteration 1 ========
>> Testing bulk transfer for alignment.
>> Testing bulk transfer at PAGE_SIZE.
>>
>> real 0m0.512s
>> user 0m0.042s
>> sys 0m0.165s
>>
>> Phil