Message-ID: <20150610170700.GG26425@suse.de>
Date:	Wed, 10 Jun 2015 18:07:00 +0100
From:	Mel Gorman <mgorman@...e.de>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Andi Kleen <andi@...stfloor.org>,
	Dave Hansen <dave.hansen@...el.com>,
	Ingo Molnar <mingo@...nel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Rik van Riel <riel@...hat.com>,
	Hugh Dickins <hughd@...gle.com>,
	Minchan Kim <minchan@...nel.org>,
	H Peter Anvin <hpa@...or.com>, Linux-MM <linux-mm@...ck.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 0/3] TLB flush multiple pages per IPI v5

On Wed, Jun 10, 2015 at 09:17:15AM -0700, Linus Torvalds wrote:
> On Wed, Jun 10, 2015 at 6:13 AM, Andi Kleen <andi@...stfloor.org> wrote:
> >
> > Assuming the page tables are cache-hot... And hot here does not mean
> > L3 cache, but higher. But a memory intensive workload can easily
> > violate that.
> 
> In practice, no.
> 
> You'll spend all your time on the actual real data cache misses, the
> TLB misses won't be any more noticeable.
> 
> And if your access patterns are even *remotely* cache-friendly (ie
> _not_ spending all your time just waiting for regular data cache
> misses), then a radix-tree-like page table like Intel's will have much
> better locality in the page tables than in the actual data. So again,
> the TLB misses won't be your big problem.
> 
> There may be pathological cases where you just look at one word per
> page, but let's face it, we don't optimize for pathological or
> unrealistic cases.
> 

It's concerns like this that have me avoiding any microbenchmarking
approach that tries to measure the indirect costs of refills. No matter
what the microbenchmark does, there will be other cases that render it
irrelevant.
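To illustrate: the pathological pattern mentioned above is trivial to
construct, which is exactly why a microbenchmark can be made to say
whatever you want. A userspace sketch (illustrative only, not from the
series):

/* Illustrative only: one load per page maximises TLB pressure relative
 * to useful work done per page. A real measurement would prefault the
 * buffer and time a second pass to separate refills from page faults.
 */
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE	4096UL
#define NR_PAGES	(1UL << 18)	/* 1GB of 4K pages */

int main(void)
{
	char *buf = malloc(NR_PAGES * PAGE_SIZE);
	unsigned long i, sum = 0;

	if (!buf)
		return 1;

	/* Touch one byte per page: almost every access misses the TLB */
	for (i = 0; i < NR_PAGES; i++)
		sum += buf[i * PAGE_SIZE];

	printf("%lu\n", sum);
	free(buf);
	return 0;
}

Stream within each page instead and the same machine will "prove" that
refills are nearly free.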

> And the thing is, you need to look at the costs. Single-page
> invalidation taking hundreds of cycles? Yeah, we definitely need to
> take the downside of trying to be clever into account.
> 
> If the invalidation was really cheap, the rules might change. As it
> is, I really don't think there is any question about this.
> 
> That's particularly true when the single-page invalidation approach
> has lots of *software* overhead too - not just the complexity, but
> even "obvious" costs feeding the list of pages to be invalidated
> across CPUs. Think about it - there are cache misses there too, and
> because we do those across CPUs those cache misses are *mandatory*.
> 
> So trying to avoid a few TLB misses by forcing mandatory cache misses
> and extra complexity, and by doing lots of 200+ cycle operations?
> Really? In what universe does that sound like a good idea?
> 
> Quite frankly, I can pretty much *guarantee* that you didn't actually
> think about any real numbers, you've just been taught that fairy-tale
> of "TLB misses are expensive". As if TLB entries were somehow sacred.
> 

Everyone has been taught that one. Papers I've read from the last two
years on TLB implementations or page reclaim management bring it up as
a supporting point for whatever they are proposing. It is partly why I
kept the PFN tracking, and why I tried to put most of the cost on the
reclaimer and minimise interference on the recipient of the IPI. I still
think it was a rational concern, but I'll assume that refills are cheaper
than smart invalidations until proven otherwise.
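For anyone following along without reading the branch, the shape of the
tracking is roughly this paraphrase (names and details are illustrative,
not the actual code):

/* Paraphrased sketch of the idea: the reclaimer accumulates the
 * unmapped addresses plus the CPUs that may hold stale entries, then
 * sends one IPI for the whole batch.
 */
struct tlbflush_unmap_batch {
	struct cpumask cpumask;		/* CPUs that may have stale entries */
	unsigned long nr_pages;		/* may exceed the array on overflow */
	unsigned long addrs[BATCH_TLBFLUSH_SIZE];
};

/* On the IPI recipient: invalidate individual entries while the batch
 * is small, fall back to a full flush once per-page invalidations
 * would cost more than the refills they save.
 */
static void flush_tlb_batch(struct tlbflush_unmap_batch *batch)
{
	unsigned long i;

	if (batch->nr_pages > BATCH_TLBFLUSH_SIZE) {
		local_flush_tlb();
		return;
	}

	for (i = 0; i < batch->nr_pages; i++)
		__flush_tlb_one(batch->addrs[i]);
}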

> If somebody can show real numbers on a real workload, that's one
> thing.

The last adjustments made today to the series are at
http://git.kernel.org/cgit/linux/kernel/git/mel/linux-balancenuma.git/log/?h=mm-vmscan-lessipi-v7r5
I'll redo it on top of 4.2-rc1 whenever that happens so it gets a full
round in linux-next. Patch 4 can be revisited if a real workload is found
that is not deliberately pathological and is running on a CPU that matters.
The forward port of patch 4 for testing will be trivial.

The series also separates out the dynamic allocation of the structure so
that it can be excluded if it's deemed an unnecessary complication.
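i.e. something along these lines, with the field name purely
illustrative:

/* Illustrative sketch: allocate the batch lazily so that tasks which
 * never enter reclaim pay nothing, and so the whole thing is easy to
 * compile out if it's judged not worth the complexity.
 */
static bool alloc_tlb_ubc(void)
{
	if (current->tlb_ubc)
		return true;

	current->tlb_ubc = kzalloc(sizeof(struct tlbflush_unmap_batch),
				   GFP_KERNEL);
	return current->tlb_ubc != NULL;
}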

> So anyway, I like the patch series. I just think that the final patch
> - the one that actually saves the addresses, and limits things to
> BATCH_TLBFLUSH_SIZE, should be limited.
> 

I see your logic, but if the batch is limited then we send more IPIs and
the tradeoffs are all crappy. If a real workload complains, it'll be far
easier to work with.
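Back-of-envelope, with made-up but plausible numbers:

  unmap 512 pages with the batch capped at 32:  512/32 = 16 IPIs
  16 IPIs at ~2000 cycles round-trip each      ~= 32000 cycles
  uncapped: 1 IPI plus a full flush            ~= 2000 cycles + refills

The refills would have to be spectacularly expensive before the capped
version wins.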

-- 
Mel Gorman
SUSE Labs
