Message-ID: <20200520170759.GE18302@gaia>
Date:   Wed, 20 May 2020 18:08:00 +0100
From:   Catalin Marinas <catalin.marinas@....com>
To:     Zhenyu Ye <yezhenyu2@...wei.com>
Cc:     linux-arch@...r.kernel.org, suzuki.poulose@....com, maz@...nel.org,
        linux-kernel@...r.kernel.org, xiexiangyou@...wei.com,
        steven.price@....com, zhangshaokun@...ilicon.com,
        linux-mm@...ck.org, arm@...nel.org, prime.zeng@...ilicon.com,
        guohanjun@...wei.com, olof@...om.net, kuhn.chenqun@...wei.com,
        will@...nel.org, linux-arm-kernel@...ts.infradead.org
Subject: Re: [RFC PATCH v3 2/2] arm64: tlb: Use the TLBI RANGE feature in
 arm64

On Mon, May 18, 2020 at 08:21:02PM +0800, Zhenyu Ye wrote:
> On 2020/5/14 23:28, Catalin Marinas wrote:
> > On Tue, Apr 14, 2020 at 07:28:35PM +0800, Zhenyu Ye wrote:
> >> +		}
> >> +		scale++;
> >> +		range_size >>= TLB_RANGE_MASK_SHIFT;
> >> +	}
> > 
> > So, you start from scale 0 and increment it until you reach the maximum.
> > I think (haven't done the maths on paper) you could also start from the
> > top with something like scale = ilog2(range_size) / 5. Not sure it's
> > significantly better though, maybe avoiding the loop 3 times if your
> > range is 2MB (which happens with huge pages).
> 
> This optimization is only effective when the range is a multiple of 256KB
> (when the page size is 4KB), and I'm worried about the performance
> of ilog2().  I traced __flush_tlb_range() last year and found that in
> most cases the range is less than 256KB (see details in [1]).

THP or hugetlbfs would exercise bigger strides but I guess it depends on
the use-case. ilog2() should be reduced to a few instructions on arm64
AFAICT (haven't tried but it should use the CLZ instruction).
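
(Purely for illustration, not part of the patch: a self-contained
user-space sketch of a CLZ-based ilog2() picking the starting scale;
the divisor of 5 and the example range_size are assumptions here.)

#include <stdio.h>

/* __builtin_clzll compiles down to a single CLZ on arm64 */
static inline int ilog2_ull(unsigned long long x)
{
	return 63 - __builtin_clzll(x);
}

int main(void)
{
	unsigned long long range_size = 512;	/* example: pages to invalidate */
	int scale = range_size ? ilog2_ull(range_size) / 5 : 0;

	printf("start scale = %d\n", scale);
	return 0;
}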

> > Anyway, I think it would be more efficient if we combine the
> > __flush_tlb_range() and the _directly one into the same function with a
> > single loop for both. For example, if the stride is 2MB already, we can
> > handle this with a single classic TLBI without all the calculations for
> > the range operation. The hardware may also handle this better since the
> > software already told it there can be only one entry in that 2MB range.
> > So each loop iteration could figure which operation to use based on
> > cpucaps, TLBI range ops, stride and reduce range_size accordingly.
> 
> To summarize your suggestion in one sentence: use the 'stride' to optimize
> the performance of TLBI.  This can also be done by splitting the code into
> two functions, and it should indeed be taken into account in the TLBI RANGE
> feature.
> 
> But if we decide which operation to use based on cpucaps in each loop
> iteration, then cpus_have_const_cap() will be called frequently, which
> may affect the performance of TLBI.  In my opinion, we should do as few
> checks as possible inside the loop, so checking the cpucaps outside the
> loop may be a better choice.

cpus_have_const_cap() is backed by a static key, so it should be patched
to a branch or a nop. My point was that in the classic __flush_tlb_range()
loop, instead of an addr += stride we could have something more dynamic
depending on whether the CPU supports range TLBI ops or not. But we
would indeed have more (static) branches in the loop, so possibly some
performance degradation.
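
(Rough user-space sketch of the shape such a merged loop could take;
the capability flag, the printed mnemonics and the simplified range
step are assumptions for illustration, not the actual kernel code.)

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT	12

int main(void)
{
	bool have_range_tlbi = true;		/* stand-in for the cpucap check */
	unsigned long stride = 1UL << 21;	/* 2MB stride, e.g. huge pages */
	unsigned long addr   = 0x40000000UL;
	unsigned long pages  = 3 * (stride >> PAGE_SHIFT);

	while (pages > 0) {
		if (!have_range_tlbi || pages == (stride >> PAGE_SHIFT)) {
			/* classic path: one TLBI per stride */
			printf("TLBI VALE1IS addr=%#lx\n", addr);
			addr  += stride;
			pages -= stride >> PAGE_SHIFT;
		} else {
			/* range path: cover the remaining pages in one op
			 * (the real code would split this by scale/num) */
			printf("TLBI RVALE1IS addr=%#lx pages=%lu\n", addr, pages);
			addr  += pages << PAGE_SHIFT;
			pages  = 0;
		}
	}
	return 0;
}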

If the code looks ok, I'd favour this and we can look at the
optimisation later. But I can't really tell how the code would look
without attempting to merge the two.

Anyway, a first step would be to add the range and stride to the
decision (i.e. (end - start) / stride > 1) before jumping to the range
operations. You can avoid the additional checks in the new TLBI
functions since we know we have at least two (huge) pages.
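
(A rough sketch of that pre-check; the helper names and values below
are hypothetical, only meant to show where the (end - start) / stride > 1
test would sit.)

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the classic and range-based paths. */
static void flush_classic(unsigned long start, unsigned long end,
			  unsigned long stride)
{
	printf("classic loop: [%#lx, %#lx) stride=%#lx\n", start, end, stride);
}

static void flush_range(unsigned long start, unsigned long end,
			unsigned long stride)
{
	printf("range ops: [%#lx, %#lx) stride=%#lx\n", start, end, stride);
}

int main(void)
{
	bool have_range_tlbi = true;		/* stand-in for the cpucap check */
	unsigned long stride = 1UL << 21;	/* 2MB */
	unsigned long start = 0x40000000UL, end = start + 2 * stride;

	/* Only take the range path when there is more than one entry. */
	if (have_range_tlbi && (end - start) / stride > 1)
		flush_range(start, end, stride);
	else
		flush_classic(start, end, stride);
	return 0;
}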

-- 
Catalin
