Message-ID: <CAGsJ_4yC0i6MYwvosRSrdQ1iT7n88ypmK3aOQJkuusqNKtddtg@mail.gmail.com>
Date:   Sun, 8 Jan 2023 18:48:41 +0800
From:   Barry Song <21cnbao@...il.com>
To:     Catalin Marinas <catalin.marinas@....com>
Cc:     Yicong Yang <yangyicong@...wei.com>, akpm@...ux-foundation.org,
        linux-mm@...ck.org, linux-arm-kernel@...ts.infradead.org,
        x86@...nel.org, will@...nel.org, anshuman.khandual@....com,
        linux-doc@...r.kernel.org, corbet@....net, peterz@...radead.org,
        arnd@...db.de, punit.agrawal@...edance.com,
        linux-kernel@...r.kernel.org, darren@...amperecomputing.com,
        yangyicong@...ilicon.com, huzhanyuan@...o.com, lipeifeng@...o.com,
        zhangshiming@...o.com, guojian@...o.com, realmz6@...il.com,
        linux-mips@...r.kernel.org, openrisc@...ts.librecores.org,
        linuxppc-dev@...ts.ozlabs.org, linux-riscv@...ts.infradead.org,
        linux-s390@...r.kernel.org, wangkefeng.wang@...wei.com,
        xhao@...ux.alibaba.com, prime.zeng@...ilicon.com,
        Barry Song <v-songbaohua@...o.com>,
        Nadav Amit <namit@...are.com>, Mel Gorman <mgorman@...e.de>
Subject: Re: [PATCH v7 2/2] arm64: support batched/deferred tlb shootdown
 during page reclamation

On Fri, Jan 6, 2023 at 2:15 AM Catalin Marinas <catalin.marinas@....com> wrote:
>
> On Thu, Nov 17, 2022 at 04:26:48PM +0800, Yicong Yang wrote:
> > It was tested on 4-, 8-, and 128-CPU platforms and shows a benefit on
> > large systems, but may not bring an improvement on small systems such
> > as a 4-CPU platform. So make ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH depend
> > on CONFIG_EXPERT for this stage and keep it disabled on systems
> > with fewer than 8 CPUs. Users can adjust this threshold for
> > their own platforms via CONFIG_NR_CPUS_FOR_BATCHED_TLB.
>
> What's the overhead of such batching on systems with 4 or fewer CPUs? If
> it isn't noticeable, I'd rather have it always on than some number
> chosen on whichever SoC you tested.

On the one hand, a TLB flush is cheap on a small system, so batching TLB
flushes brings only a minor benefit there.

On the other hand, since we have batched the TLB flush, new PTEs might be
invisible to other CPUs before the final broadcast is done and acknowledged.
Thus, there is a risk that someone else might do mprotect or similar things
on those deferred pages, which requires a read-modify-write on those deferred
PTEs. In this case, mm will do an explicit flush via
flush_tlb_batched_pending(), which is not required if the TLB flush is not
deferred. The code is in:

static unsigned long change_pte_range(struct mmu_gather *tlb,
                struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
                unsigned long end, pgprot_t newprot, unsigned long cp_flags)
{
        ...

        pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);

        flush_tlb_batched_pending(vma->vm_mm);
        arch_enter_lazy_mmu_mode();
        do {
                oldpte = *pte;
                if (pte_present(oldpte)) {
                        pte_t ptent;
        ...
}

Since we don't have a mechanism to record which pages should be flushed,
flush_tlb_batched_pending() flushes the TLB for the whole process:

void flush_tlb_batched_pending(struct mm_struct *mm)
{
        int batch = atomic_read(&mm->tlb_flush_batched);
        int pending = batch & TLB_FLUSH_BATCH_PENDING_MASK;
        int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;

        if (pending != flushed) {
                flush_tlb_mm(mm);
                /*
                 * If the new TLB flushing is pending during flushing, leave
                 * mm->tlb_flush_batched as is, to avoid losing flushing.
                 */
                atomic_cmpxchg(&mm->tlb_flush_batched, batch,
                               pending | (pending << TLB_FLUSH_BATCH_FLUSHED_SHIFT));
        }
}
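
To illustrate the pending/flushed encoding above: conceptually,
mm->tlb_flush_batched packs a count of deferred (pending) flushes in the low
bits and a count of completed (flushed) ones in the high bits; the mm is only
flushed when the two disagree, and the cmpxchg backs off if a new deferral
raced in, so that flush is not lost. A minimal userspace sketch of the same
idea (the constants and function names here are mine for illustration, not
the kernel's):

#include <stdatomic.h>
#include <stdio.h>

/* Illustrative values only; the kernel's real mask/shift live in mm/rmap.c. */
#define BATCH_FLUSHED_SHIFT     16
#define BATCH_PENDING_MASK      ((1 << BATCH_FLUSHED_SHIFT) - 1)

static atomic_int tlb_flush_batched;

/* Reclaim path: a TLB flush for this mm has been deferred (batched). */
static void note_deferred_flush(void)
{
        atomic_fetch_add(&tlb_flush_batched, 1);
}

/* mprotect-like path: flush only if some deferred flush is still pending. */
static void flush_batched_pending(void)
{
        int batch = atomic_load(&tlb_flush_batched);
        int pending = batch & BATCH_PENDING_MASK;
        int flushed = batch >> BATCH_FLUSHED_SHIFT;

        if (pending != flushed) {
                printf("flush whole mm\n");     /* stand-in for flush_tlb_mm() */
                /*
                 * Record that everything seen so far has been flushed. If a
                 * new deferral raced in, this cmpxchg fails and the pending
                 * state is kept, so the newly deferred flush is not lost.
                 */
                atomic_compare_exchange_strong(&tlb_flush_batched, &batch,
                                pending | (pending << BATCH_FLUSHED_SHIFT));
        }
}

int main(void)
{
        note_deferred_flush();          /* reclaim defers one flush */
        flush_batched_pending();        /* mprotect path flushes it */
        flush_batched_pending();        /* nothing pending now: no flush */
        return 0;
}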

I guess mprotect won't happen that often in a running process, especially
once the system has begun to reclaim its memory; it is probably more frequent
during the initialization of a process. And x86 has had this feature enabled
for a long time, so this concurrency probably doesn't matter too much.

But it is still case by case. That is why we have decided to be more
conservative about globally enabling this feature, and why it also depends
on CONFIG_EXPERT.
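
For what it's worth, the CPU-count gating described in the commit message
could look roughly like the sketch below on the arm64 side. The function
name arch_tlbbatch_should_defer() and its placement are my assumptions for
illustration; only CONFIG_NR_CPUS_FOR_BATCHED_TLB and the "fewer than N CPUs"
behaviour come from the quoted commit message:

/*
 * Illustrative sketch only: gate batched/deferred TLB shootdown on the
 * number of online CPUs. Function name and location are assumptions;
 * CONFIG_NR_CPUS_FOR_BATCHED_TLB is from the commit message (default 8).
 */
static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
{
        /* On a small system a broadcast TLBI is cheap, so don't defer. */
        if (num_online_cpus() < CONFIG_NR_CPUS_FOR_BATCHED_TLB)
                return false;

        return true;
}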

I believe Anshuman has contributed many points on this in those previous
discussions.

Thanks
Barry
