Message-ID: <7f15f835-cf73-be5b-8bb0-cabb6e4eeed2@huawei.com>
Date: Tue, 2 Jun 2020 20:06:08 +0800
From: Zhenyu Ye <yezhenyu2@...wei.com>
To: <catalin.marinas@....com>, <will@...nel.org>,
<suzuki.poulose@....com>, <maz@...nel.org>, <steven.price@....com>,
<guohanjun@...wei.com>, <olof@...om.net>
CC: <linux-arm-kernel@...ts.infradead.org>,
<linux-kernel@...r.kernel.org>, <linux-arch@...r.kernel.org>,
<linux-mm@...ck.org>, <arm@...nel.org>, <xiexiangyou@...wei.com>,
<prime.zeng@...ilicon.com>, <zhangshaokun@...ilicon.com>,
<kuhn.chenqun@...wei.com>
Subject: Re: [RFC PATCH v4 2/2] arm64: tlb: Use the TLBI RANGE feature in arm64

Hi all,

Here are some optimizations to the code:
On 2020/6/1 22:47, Zhenyu Ye wrote:
> - start = __TLBI_VADDR(start, asid);
> - end = __TLBI_VADDR(end, asid);
> + /*
> + * The minimum size of TLB RANGE is 2 pages;
> + * Use normal TLB instruction to handle odd pages.
> + * If the stride != PAGE_SIZE, this will never happen.
> + */
> + if (range_pages % 2 == 1) {
> + addr = __TLBI_VADDR(start, asid);
> + __tlbi_last_level(vale1is, vae1is, addr, last_level);
> + start += 1 << PAGE_SHIFT;
> + range_pages >>= 1;
> + }
>
We flush a single page here, and the loop below does the same thing
when the CPU does not support the TLBI range feature, so we could use
a goto statement to simplify the code.
> + while (range_pages > 0) {
> + if (cpus_have_const_cap(ARM64_HAS_TLBI_RANGE) &&
> + stride == PAGE_SIZE) {
> + num = (range_pages & TLB_RANGE_MASK) - 1;
> + if (num >= 0) {
> + addr = __TLBI_VADDR_RANGE(start, asid, scale,
> + num, 0);
> + __tlbi_last_level(rvale1is, rvae1is, addr,
> + last_level);
> + start += __TLBI_RANGE_SIZES(num, scale);
> + }
> + scale++;
> + range_pages >>= TLB_RANGE_MASK_SHIFT;
> + continue;
> }
> +
> + addr = __TLBI_VADDR(start, asid);
> + __tlbi_last_level(vale1is, vae1is, addr, last_level);
> + start += stride;
> + range_pages -= stride >> 12;
> }
> dsb(ish);
> }
>
Like this:
--8<---
	if (range_pages % 2 == 1)
		goto flush_single_tlb;

	while (range_pages > 0) {
		if (cpus_have_const_cap(ARM64_HAS_TLBI_RANGE) &&
		    stride == PAGE_SIZE) {
			num = ((range_pages >> 1) & TLB_RANGE_MASK) - 1;
			if (num >= 0) {
				addr = __TLBI_VADDR_RANGE(start, asid, scale,
							  num, 0);
				__tlbi_last_level(rvale1is, rvae1is, addr,
						  last_level);
				start += __TLBI_RANGE_SIZES(num, scale);
			}
			scale++;
			range_pages >>= TLB_RANGE_MASK_SHIFT;
			continue;
		}

flush_single_tlb:
		addr = __TLBI_VADDR(start, asid);
		__tlbi_last_level(vale1is, vae1is, addr, last_level);
		start += stride;
		range_pages -= stride >> PAGE_SHIFT;
	}
--8<---
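
As background for the 2-page minimum mentioned in the comment above: in
the ARMv8.4 TLBI range encoding, one range operation covers
(num + 1) << (5 * scale + 1) pages, which is always a multiple of two,
so an odd page count always leaves one page for a conventional TLBI.
Below is a minimal standalone sketch of that arithmetic; the helper name
and mask constants only mirror the patch's __TLBI_RANGE_* macros for
illustration and are not the kernel code itself:

	/*
	 * Standalone illustration of the ARMv8.4 TLBI range encoding.
	 * A range TLBI covers (num + 1) << (5 * scale + 1) pages, i.e.
	 * always a multiple of 2 pages -- hence the single-page flush
	 * for odd range_pages in the patch above.
	 */
	#include <stdio.h>

	#define TLB_RANGE_MASK_SHIFT	5	/* num field is 5 bits wide */
	#define TLB_RANGE_MASK		0x1fUL	/* largest encodable num */

	/* Pages covered by one range operation with the given num/scale. */
	static unsigned long tlbi_range_pages(int num, int scale)
	{
		return (unsigned long)(num + 1) << (5 * scale + 1);
	}

	int main(void)
	{
		/* Smallest and largest ranges for each 2-bit scale value. */
		for (int scale = 0; scale <= 3; scale++)
			printf("scale=%d: %lu .. %lu pages\n", scale,
			       tlbi_range_pages(0, scale),
			       tlbi_range_pages(TLB_RANGE_MASK, scale));
		return 0;
	}

With 4KB pages this prints ranges from 2 pages (8KB) at scale 0 up to
2097152 pages (8GB) at scale 3, which is why the loop above consumes
range_pages five bits (TLB_RANGE_MASK_SHIFT) at a time while bumping
scale.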