Date:   Thu, 10 Nov 2022 15:07:53 +0800
From:   Yunfeng Ye <yeyunfeng@...wei.com>
To:     Catalin Marinas <catalin.marinas@....com>
CC:     <will@...nel.org>, <wangkefeng.wang@...wei.com>,
        <linux-arm-kernel@...ts.infradead.org>,
        <linux-kernel@...r.kernel.org>, <linfeilong@...wei.com>
Subject: Re: [PATCH 4/5] arm64: mm: Support ASID isolation feature



On 2022/11/9 20:43, Catalin Marinas wrote:
> On Mon, Oct 17, 2022 at 04:32:02PM +0800, Yunfeng Ye wrote:
>> After a rollover, the global generation is flushed, which causes the
>> mm->context.id of processes on all CPUs to stop matching the current
>> generation. Each process then competes for the global spinlock to
>> reallocate a new ASID and refreshes the TLBs of all CPUs on context
>> switch. This increases scheduling delay and TLB misses.
>>
>> In some delay-sensitive scenarios, for example, when part of the CPUs
>> are isolated and only a limited number of processes are deployed to
>> run on the isolated CPUs, we do not want these key processes to be
>> affected by an ASID rollover.
> 
> Part of this commit log should also go in the cover letter and it would
> help to back this up by some numbers, e.g. what percentage improvement
> you get with this patchset by running hackbench on an isolated CPU.
> 
> In theory it looks like CPU isolation would benefit from this patchset
> but we try not to touch this code often, so any modification should come
> with proper justification, backed by numbers.
> 
Yes, CPU isolation will benefit from this patchset. We used the
cyclictest tool to measure the maximum scheduling and interrupt
latencies, and found that a sched_switch sometimes takes several
microseconds. The analysis shows that this delay is caused by the
ASID refresh.

We used simple test cases to consume ASIDs quickly, which increases the
frequency of ASID rollovers and the contention on the global ASID
spinlock. In this case, the delay between sched_switch and tlb_flush
can reach 63 us (a minimal sketch of such an ASID-churn loop follows
the trace log). The following is the trace log:

    stress-ng-2864907 [012] dN.. 17006.430048: sched_stat_runtime: comm=stress-ng pid=2864907 runtime=859130 [ns] vruntime=9015202524211 [ns]
    stress-ng-2864907 [012] d... 17006.430048: sched_switch: prev_comm=stress-ng prev_pid=2864907 prev_prio=120 prev_state=R ==> next_comm=cyclictest next_pid=2866344 next_prio=19
    stress-ng-2864907 [012] d... 17006.430111: tlb_flush: pages:-1 reason:flush on task switch (0)
// 17006.430111 - 17006.430048 = 63 us

    cyclictest-2866344 [012] .... 17006.430112: kfree: call_site=__audit_syscall_exit+0x210/0x250 ptr=0000000000000000
    cyclictest-2866344 [012] .... 17006.430112: sys_exit: NR 115 = 0
    cyclictest-2866344 [012] .... 17006.430112: sys_clock_nanosleep -> 0x0
    cyclictest-2866344 [012] d... 17006.430113: user_enter:
    cyclictest-2866344 [012] d... 17006.430126: user_exit:
    cyclictest-2866344 [012] .... 17006.430126: sys_enter: NR 64 (4, ffffa451c4d0, 1f, 0, 3b, 0)
    cyclictest-2866344 [012] .... 17006.430126: sys_write(fd: 4, buf: ffffa451c4d0, count: 1f)
    cyclictest-2866344 [012] .... 17006.430129: tracing_mark_write: hit latency threshold (72 > 30)
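
For reference, the quick ASID consumption can be reproduced with
something as simple as a fork loop. The sketch below is the idea only,
not the exact stress-ng invocation used here: each short-lived child
gets its own mm, and every new mm is assigned a fresh ASID the first
time it is scheduled in, so looping fork()/exit() walks through the
16-bit ASID space and forces periodic rollovers.

    /* Sketch: churn address spaces to force ASID rollovers. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
            for (;;) {
                    pid_t pid = fork();

                    if (pid == 0)
                            _exit(0);       /* child: new mm, fresh ASID */
                    if (pid < 0) {
                            perror("fork");
                            return 1;
                    }
                    waitpid(pid, NULL, 0);  /* parent: reap and repeat */
            }
    }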

The delay caused by ASID interference is variable: it may be several
nanoseconds or several microseconds, depending on the concurrent
competition for the lock. With this patch series applied, the delay
caused by ASID interference on the isolated CPU can be reduced.
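
To illustrate where the variable delay comes from, below is a small
userspace sketch of the generation-tagged ASID scheme (names such as
asid_gen_match() and cpu_asid_lock mirror arch/arm64/mm/context.c, but
this is a simplified simulation, not the kernel code). The fast path is
lock-free; after a rollover, every cached id misses the live generation
and each task serializes on the one global lock:

    /* Simplified simulation of generation-tagged ASID allocation. */
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    #define ASID_BITS 16
    #define ASID_MASK ((1ULL << ASID_BITS) - 1)
    #define ASID_GEN  (1ULL << ASID_BITS)          /* generation step */

    static pthread_mutex_t cpu_asid_lock = PTHREAD_MUTEX_INITIALIZER;
    static uint64_t asid_generation = ASID_GEN;    /* gen in high bits */
    static uint64_t next_asid = 1;

    /* Fast path check: is the cached id from the live generation? */
    static int asid_gen_match(uint64_t id)
    {
            return (id & ~ASID_MASK) == asid_generation;
    }

    /* Called with cpu_asid_lock held. */
    static uint64_t new_context(void)
    {
            if (next_asid > ASID_MASK) {           /* rollover */
                    asid_generation += ASID_GEN;   /* all ids go stale */
                    next_asid = 1;                 /* TLBs need flushing */
            }
            return asid_generation | next_asid++;
    }

    static uint64_t check_and_switch_context(uint64_t cached_id)
    {
            if (asid_gen_match(cached_id))
                    return cached_id;       /* no lock, no TLB flush */

            /* Slow path after a rollover: every task on every CPU
             * lands here and contends on the one global lock, which
             * is the sched_switch -> tlb_flush latency seen above. */
            pthread_mutex_lock(&cpu_asid_lock);
            cached_id = new_context();
            pthread_mutex_unlock(&cpu_asid_lock);
            return cached_id;
    }

    int main(void)
    {
            uint64_t id = 0;                     /* new mm: no gen yet */

            id = check_and_switch_context(id);   /* slow path */
            printf("gen=%llu asid=%llu\n",
                   (unsigned long long)(id >> ASID_BITS),
                   (unsigned long long)(id & ASID_MASK));
            id = check_and_switch_context(id);   /* fast path now */
            return 0;
    }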

Thanks.

> Note that I haven't reviewed the algorithm you are proposing in detail,
> only had a brief look.
> 
