Date:   Mon, 13 May 2019 10:36:06 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Nadav Amit <namit@...are.com>
Cc:     Yang Shi <yang.shi@...ux.alibaba.com>,
        "jstancek@...hat.com" <jstancek@...hat.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        "stable@...r.kernel.org" <stable@...r.kernel.org>,
        Linux-MM <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>,
        "Aneesh Kumar K . V" <aneesh.kumar@...ux.vnet.ibm.com>,
        Nick Piggin <npiggin@...il.com>,
        Minchan Kim <minchan@...nel.org>, Mel Gorman <mgorman@...e.de>,
        Will Deacon <will.deacon@....com>
Subject: Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force
 flush

On Thu, May 09, 2019 at 09:21:35PM +0000, Nadav Amit wrote:

> >>> And we can fix that by having tlb_finish_mmu() sync up. Never let a
> >>> concurrent tlb_finish_mmu() complete until all concurrent mmu_gathers
> >>> have completed.
> >>> 
> >>> This should not be too hard to make happen.
> >> 
> >> This synchronization sounds much more expensive than what I proposed. I
> >> agree that cache-lines that move from one CPU to another might become an
> >> issue, but I think the scheme I suggested would minimize that overhead.
> > 
> > Well, it would have a lot more unconditional atomic ops. My scheme only
> > waits when there is actual concurrency.
> 
> Well, something has to give. I didn’t think the atomic op would be too
> expensive if the same core does it.

They're still at least 20 cycles a pop, uncontended.
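
As a rough illustration of that figure (nothing the kernel itself does),
here is a minimal userspace sketch that times a batch of uncontended
lock-prefixed atomics on one core with rdtsc; x86-only, all names made
up for the example, and the exact number varies by microarchitecture:

	#include <stdatomic.h>
	#include <stdio.h>
	#include <x86intrin.h>	/* __rdtsc(), x86 only */

	int main(void)
	{
		atomic_int v = 0;
		enum { N = 1000000 };

		unsigned long long t0 = __rdtsc();
		for (int i = 0; i < N; i++)
			atomic_fetch_add(&v, 1);	/* uncontended lock xadd */
		unsigned long long t1 = __rdtsc();

		printf("~%.1f TSC cycles per atomic op\n",
		       (double)(t1 - t0) / N);
		return 0;
	}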

> > I _think_ something like the below ought to work, but it's not even been
> > near a compiler. The only problem is the unconditional wakeup; we can
> > play games to avoid that if we want to continue with this.
> > 
> > Ideally we'd only do this when there's been actual overlap, but I've not
> > found a sensible way to detect that.
> > 
> > diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> > index 4ef4bbe78a1d..b70e35792d29 100644
> > --- a/include/linux/mm_types.h
> > +++ b/include/linux/mm_types.h
> > @@ -590,7 +590,12 @@ static inline void dec_tlb_flush_pending(struct mm_struct *mm)
> > 	 *
> > 	 * Therefore we must rely on tlb_flush_*() to guarantee order.
> > 	 */
> > -	atomic_dec(&mm->tlb_flush_pending);
> > +	if (atomic_dec_and_test(&mm->tlb_flush_pending)) {
> > +		wake_up_var(&mm->tlb_flush_pending);
> > +	} else {
> > +		wait_var_event(&mm->tlb_flush_pending,
> > +			       !atomic_read_acquire(&mm->tlb_flush_pending));
> > +	}
> > }
> 
> It still seems very expensive to me, at least for certain workloads (e.g.,
> Apache with multithreaded MPM).

Is that Apache-MPM workload triggering this a lot? Having a known
benchmark for this stuff is useful for when someone has time to play
with things.

> It may be possible to avoid false-positive nesting indications (when the
> flushes do not overlap) by creating a new struct mmu_gather_pending, with
> something like:
> 
>   struct mmu_gather_pending {
>   	u64 start;
>   	u64 end;
>   	struct mmu_gather_pending *next;
>   };
> 
> tlb_finish_mmu() would then iterate over the mm->mmu_gather_pending
> (pointing to the linked list) and find whether there is any overlap. This
> would still require synchronization (acquiring a lock when allocating and
> deallocating or something fancier).
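
A minimal sketch of that walk, purely as illustration; the helper name
is invented here, and the locking Nadav mentions is left out:

	#include <linux/types.h>

	struct mmu_gather_pending {
		u64 start;
		u64 end;
		struct mmu_gather_pending *next;
	};

	/* true if [start, end) overlaps any in-flight gather on the list */
	static bool
	mmu_gather_pending_overlaps(struct mmu_gather_pending *head,
				    u64 start, u64 end)
	{
		struct mmu_gather_pending *p;

		for (p = head; p; p = p->next) {
			/* ranges overlap iff each starts before the other ends */
			if (p->start < end && start < p->end)
				return true;
		}
		return false;
	}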

We have an interval_tree for this, and yes, that's how far I got :/
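
A rough sketch of what that could look like; the per-mm bookkeeping
structure, lock and field names are invented for the example (the
kernel's interval_tree works on closed intervals, hence the end - 1):

	#include <linux/interval_tree.h>
	#include <linux/spinlock.h>
	#include <linux/types.h>

	/* hypothetical bookkeeping, not existing mm_struct fields */
	struct mm_gather_ranges {
		struct rb_root_cached	root;
		spinlock_t		lock;
	};

	/* true if [start, end) overlaps any range currently in the tree */
	static bool
	gather_range_overlaps(struct mm_gather_ranges *gr,
			      unsigned long start, unsigned long end)
	{
		bool overlap;

		spin_lock(&gr->lock);
		overlap = interval_tree_iter_first(&gr->root,
						   start, end - 1) != NULL;
		spin_unlock(&gr->lock);

		return overlap;
	}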

The other thing I was thinking of is trying to detect overlap through
the page-tables themselves, but we have a distinct lack of storage
there.

The thing is, if this threaded monster runs on all CPUs (busy front-end
server) and does a ton of invalidation due to all the short-lived
request crud, then all the extra invalidations will add up too. Having
to do process-wide (machine-wide, in this case) invalidations is
expensive, and having to do more of them surely isn't cheap either.

So there might be something to win here.
