Message-ID: <Pine.LNX.4.64.0706301406001.12517@blonde.wat.veritas.com>
Date:	Sat, 30 Jun 2007 14:16:44 +0100 (BST)
From:	Hugh Dickins <hugh@...itas.com>
To:	Martin Schwidefsky <schwidefsky@...ibm.com>
cc:	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [patch 1/5] avoid tlb gather restarts.

On Fri, 29 Jun 2007, Martin Schwidefsky wrote:
> On Fri, 2007-06-29 at 19:56 +0100, Hugh Dickins wrote:
> > I don't dare comment on your page_mkclean_one patch (5/5),
> > that dirty page business has grown too subtle for me.
> 
> Oh yes, the dirty handling is tricky....

I'll move that discussion over to 5/5 and Cc Peter
(sorry I was too lazy to do so in the first place).

> > On Fri, 29 Jun 2007, Martin Schwidefsky wrote:
> > You think you're just moving the finish/gather to where they're
> > actually necessary; but the thing is, that per-cpu struct mmu_gather
> > is liable to accumulate a lot of unpreemptible work for the future
> > tlb_finish_mmu, particularly when anon pages are associated with swap.
> 
> Hmm, ok, so you are saying that we should do a flush at the end of each
> vma.

I think of it as doing a flush every ZAP_BLOCK_SIZE, with the imperfect
structure of the loop forcing perhaps an early flush at the end of each
vma: I seem to assume large vmas, and you to assume small ones.
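
For reference, the shape I have in mind is the inner loop of unmap_vmas()
in mm/memory.c, roughly like this (heavily simplified and from memory, with
hugetlb and the i_mmap_lock break-out left out, so don't trust the details):

	while (start != end) {
		/* zap up to ZAP_BLOCK_SIZE worth of ptes from this vma */
		start = unmap_page_range(*tlbp, vma, start, end,
					 &zap_work, details);
		if (zap_work > 0)
			break;	/* vma finished before the block: next vma */

		/* a full block gathered: flush, reschedule point, restart */
		tlb_finish_mmu(*tlbp, tlb_start, start);
		cond_resched();
		*tlbp = tlb_gather_mmu(vma->vm_mm, fullmm);
		tlb_start_valid = 0;
		zap_work = ZAP_BLOCK_SIZE;
	}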

IIRC, the common case for doing multiple vmas here is exit, when it
turns out that the TLB flush can often be skipped because it has already
been done by the switch away from the exiting task; so the premature
flush per vma doesn't matter much.  But treat that claim with maximum
scepticism: I've not rechecked it, and several aspects may be wrong.
What I do remember is that (at least on i386) there's a lot less actual
TLB flushing done here than it appears from the outside.

> > So although there may be no need to resched right now, if we keep on
> > gathering more and more without flushing, we'll be very unresponsive
> > when a resched is needed later on.  Hence Ingo's ZAP_BLOCK_SIZE to
> > split it up, small when CONFIG_PREEMPT, more reasonable but still
> > limited when not.
> 
> Would it be acceptable to call tlb_flush_mmu instead of the
> tlb_finish_mmu / tlb_gather_mmu pair if the condition around
> cond_resched evaluates to false?

That sounds like a good idea, yes, that should be fine.  But beware:
tlb_flush_mmu is an internal detail of the asm-generic/tlb.h method
(and perhaps some others); it currently doesn't exist on some arches.
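
(From memory, the asm-generic one is roughly the flush-and-free half of
tlb_finish_mmu, without handing the per-cpu mmu_gather back:

	static inline void
	tlb_flush_mmu(struct mmu_gather *tlb,
		      unsigned long start, unsigned long end)
	{
		if (!tlb->need_flush)
			return;
		tlb->need_flush = 0;
		tlb_flush(tlb);			/* arch-specific TLB flush */
		if (!tlb_fast_mode(tlb)) {
			/* safe to free the gathered pages now */
			free_pages_and_swap_cache(tlb->pages, tlb->nr);
			tlb->nr = 0;
		}
	}

so whatever the other arches grow would want much the same effect.)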

I think you just need to add a simple one to arm & arm26, and take
the "ia64_" off the ia64 one.  powerpc and sparc64 go about it all 
a bit differently, but it should be easy to give them one too.
There may be some others missing.
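
In the loop above, I take it your change would come out something like
this (just a sketch, to be sure we mean the same thing; again the
i_mmap_lock break-out is left out):

	if (need_resched()) {
		/* full restart: hand the gather back across the resched */
		tlb_finish_mmu(*tlbp, tlb_start, start);
		cond_resched();
		*tlbp = tlb_gather_mmu(vma->vm_mm, fullmm);
	} else {
		/* no resched needed: just flush and free, keep the gather */
		tlb_flush_mmu(*tlbp, tlb_start, start);
	}
	tlb_start_valid = 0;
	zap_work = ZAP_BLOCK_SIZE;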

> The background for this change is that I'm working on another patch that
> will change the tlb flushing for s390 quite a bit. We won't have
> anything to flush with tlb_finish_mmu because we will either flush all
> tlbs with tlb_gather_mmu or each pte separately. The pages will always be
> freed immediately. If we are forced to restart the tlb gather then we'll
> do multiple flush_tlb_mm because the information that we already flushed
> everything is lost with tlb_finish_mmu.

Thanks for the info.  Sounds like we may have trouble ahead when
rearranging this stuff; it's easy to forget s390 in our assumptions:
keep watch!

Hugh
