Date:   Mon, 25 Sep 2017 19:02:03 +0800
From:   qiaozhou <qiaozhou@...micro.com>
To:     Vikram Mulukutla <markivx@...eaurora.org>,
        Will Deacon <will.deacon@....com>
CC:     Thomas Gleixner <tglx@...utronix.de>,
        John Stultz <john.stultz@...aro.org>, <sboyd@...eaurora.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Wang Wilbur <wilburwang@...micro.com>,
        "Marc Zyngier" <marc.zyngier@....com>,
        Peter Zijlstra <peterz@...radead.org>,
        <linux-kernel-owner@...r.kernel.org>, <sudeep.holla@....com>
Subject: Re: [Question]: try to fix contention between expire_timers and
 try_to_del_timer_sync

Hi Will,

Will this bodging patch be merged? It solves the livelock issue on 
arm64 platforms (or at least improves things a lot).

I suspected that the CCI frequency might affect the contention between 
the little and big cores, but on my platform it makes little difference. 
In fact it is the frequency of the external DDR controller that affects 
the contention (my last reply has detailed data). The lock's cache line 
might be evicted after the core enters WFE and then has to be reloaded 
from DDR when the core is woken up, which I guess is why the external 
DDR frequency matters.

Even at the lowest DDR frequency (78 MHz) on my platform, the maximum 
delay for the little core to acquire the lock drops to ~10 ms with this 
bodging patch, while without the patch the delay can be on the order of 
10 s in my testing, as discussed previously. So I'm wondering whether it 
will be pushed into mainline, or whether more data is still needed?
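
For anyone reading this without the earlier thread context, the bodge 
being tested is roughly of the following shape. This is only my sketch 
of the idea for illustration: apart from CPU_RELAX_WFE_THRESHOLD = 10000 
taken from Will's diff, the per-CPU counter and its name are made up 
here and are not the actual patch.

#define CPU_RELAX_WFE_THRESHOLD	10000

/*
 * Illustrative per-CPU spin counter; the name is made up for this
 * sketch, not taken from the real diff.
 */
static DEFINE_PER_CPU(u32, __cpu_relax_count);

static inline void cpu_relax(void)
{
	if (this_cpu_inc_return(__cpu_relax_count) >= CPU_RELAX_WFE_THRESHOLD) {
		this_cpu_write(__cpu_relax_count, 0);
		/*
		 * Back off with WFE instead of hammering the contended
		 * cache line; the event stream (or a SEV from the
		 * unlocker) bounds how long we sleep here.
		 */
		wfe();
	} else {
		asm volatile("yield" ::: "memory");	/* usual arm64 relax */
	}
}

The point of the threshold is that lightly contended paths never reach 
the WFE, while a core that has been spinning for a long time backs off 
and stops stealing the cache line from the CPU trying to release the 
lock.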

Thanks a lot.

Best Regards
Qiao

On 2017-08-29 07:12, Vikram Mulukutla wrote:
> Hi Will,
> 
> On 2017-08-25 12:48, Vikram Mulukutla wrote:
>> Hi Will,
>>
>> On 2017-08-15 11:40, Will Deacon wrote:
>>> Hi Vikram,
>>>
>>> On Thu, Aug 03, 2017 at 04:25:12PM -0700, Vikram Mulukutla wrote:
>>>> On 2017-07-31 06:13, Will Deacon wrote:
>>>> >On Fri, Jul 28, 2017 at 12:09:38PM -0700, Vikram Mulukutla wrote:
>>>> >>On 2017-07-28 02:28, Will Deacon wrote:
>>>> >>>On Thu, Jul 27, 2017 at 06:10:34PM -0700, Vikram Mulukutla wrote:
>>>>
>>>> >>>
>>>> >>This does seem to help. Here's some data after 5 runs with and
>>>> >>without the patch.
>>>> >
>>>> >Blimey, that does seem to make a difference. Shame it's so ugly!
>>>> >Would you be able to experiment with other values for
>>>> >CPU_RELAX_WFE_THRESHOLD? I had it set to 10000 in the diff I posted,
>>>> >but that might be higher than optimal.
>>>> >It would be interesting to see if it correlates with
>>>> >num_possible_cpus() for the highly contended case.
>>>> >
>>>> >Will
>>>>
>>>> Sorry for the late response - I should hopefully have some more
>>>> data with different thresholds before the week is finished or on
>>>> Monday.
>>>
>>> Did you get anywhere with the threshold heuristic?
>>>
>>> Will
>>
>> Here's some data from experiments that I finally got to today. I
>> decided to recompile for every value of the threshold. I was doing a
>> binary search of sorts and then started reducing by orders of
>> magnitude. There are pairs of rows here:
>>
> 
> Well here's something interesting. I tried a different platform and
> found that the workaround doesn't help much at all, similar to Qiao's
> observation on his b.L chipset. Something to do with the WFE
> implementation or event-stream?
