Message-ID: <CAGsJ_4wtkSfH2DTDg10qTbUkxD1QTNBD09nx_H+S6H_-tBPQBw@mail.gmail.com>
Date:   Wed, 21 Sep 2022 19:15:10 +1200
From:   Barry Song <21cnbao@...il.com>
To:     Anshuman Khandual <anshuman.khandual@....com>
Cc:     Yicong Yang <yangyicong@...wei.com>, akpm@...ux-foundation.org,
        linux-mm@...ck.org, linux-arm-kernel@...ts.infradead.org,
        x86@...nel.org, catalin.marinas@....com, will@...nel.org,
        linux-doc@...r.kernel.org, corbet@....net, peterz@...radead.org,
        arnd@...db.de, linux-kernel@...r.kernel.org,
        darren@...amperecomputing.com, yangyicong@...ilicon.com,
        huzhanyuan@...o.com, lipeifeng@...o.com, zhangshiming@...o.com,
        guojian@...o.com, realmz6@...il.com, linux-mips@...r.kernel.org,
        openrisc@...ts.librecores.org, linuxppc-dev@...ts.ozlabs.org,
        linux-riscv@...ts.infradead.org, linux-s390@...r.kernel.org,
        wangkefeng.wang@...wei.com, xhao@...ux.alibaba.com,
        prime.zeng@...ilicon.com, Barry Song <v-songbaohua@...o.com>,
        Nadav Amit <namit@...are.com>, Mel Gorman <mgorman@...e.de>
Subject: Re: [PATCH v3 4/4] arm64: support batched/deferred tlb shootdown
 during page reclamation

On Wed, Sep 21, 2022 at 6:53 PM Anshuman Khandual
<anshuman.khandual@....com> wrote:
>
>
> On 8/22/22 13:51, Yicong Yang wrote:
> > +static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
> > +                                     struct mm_struct *mm,
> > +                                     unsigned long uaddr)
> > +{
> > +     __flush_tlb_page_nosync(mm, uaddr);
> > +}
> > +
> > +static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
> > +{
> > +     dsb(ish);
> > +}
>
> Just wondering if arch_tlbbatch_add_mm() could also detect contiguous mapping
> TLB invalidation requests on a given mm and try to generate a range-based TLB
> invalidation such as flush_tlb_range().
>
> struct arch_tlbflush_unmap_batch via task->tlb_ubc->arch could track contiguous
> ranges as they are queued up via arch_tlbbatch_add_mm(), and any range formed
> could later be flushed in the subsequent arch_tlbbatch_flush()?
>
> OR
>
> It might not be worth the effort and complexity, compared to the performance
> improvement a TLB range flush brings in?
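
Just to make the idea concrete, a rough sketch of that range tracking could look
like the below. This is purely hypothetical and not part of this patch: the mm/
start/end fields and the flush_tlb_range_nosync() helper are invented here for
illustration.

struct arch_tlbflush_unmap_batch {
	struct mm_struct *mm;		/* mm of the pending contiguous range */
	unsigned long start;		/* inclusive start of pending range */
	unsigned long end;		/* exclusive end of pending range */
};

static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
					struct mm_struct *mm,
					unsigned long uaddr)
{
	unsigned long addr = uaddr & PAGE_MASK;

	if (batch->mm == mm && addr == batch->end) {
		/* extends the current run: just grow the pending range */
		batch->end = addr + PAGE_SIZE;
		return;
	}

	/* discontiguous address or different mm: flush what is queued so far */
	if (batch->mm)
		flush_tlb_range_nosync(batch->mm, batch->start, batch->end);

	batch->mm = mm;
	batch->start = addr;
	batch->end = addr + PAGE_SIZE;
}

static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
{
	if (batch->mm) {
		flush_tlb_range_nosync(batch->mm, batch->start, batch->end);
		batch->mm = NULL;
	}
	dsb(ish);			/* single wait for all queued invalidations */
}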

Probably it is not worth the complexity, given what perf annotate shows:

"Further perf annotate shows 95% cpu time of ptep_clear_flush
is actually used by the final dsb() to wait for the completion
of tlb flush."

So any further optimization before the dsb(ish) might bring some improvement,
but it seems minor.
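
To put it another way, the win of batching comes from turning one dsb() wait
per page into a single wait per batch, so shaving cycles off the per-page TLBI
work barely moves the total. A simplified sketch of the reclaim-side flow
(the reclaim_pages_example() wrapper and the uaddrs array are made up for
illustration; the two hooks are the ones from the patch hunk quoted above):

/* hypothetical illustration only -- not actual mm/ code */
static void reclaim_pages_example(struct arch_tlbflush_unmap_batch *batch,
				  struct mm_struct *mm,
				  unsigned long *uaddrs, int nr)
{
	int i;

	/* each call issues a TLBI but does not wait (nosync) */
	for (i = 0; i < nr; i++)
		arch_tlbbatch_add_mm(batch, mm, uaddrs[i]);

	/* one dsb(ish) waits for all of the invalidations above */
	arch_tlbbatch_flush(batch);
}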

Thanks
Barry
