Message-ID: <20160713013711.GA65916@ast-mbp.thefacebook.com>
Date:	Tue, 12 Jul 2016 18:37:14 -0700
From:	Alexei Starovoitov <alexei.starovoitov@...il.com>
To:	Jesper Dangaard Brouer <brouer@...hat.com>
Cc:	Alexander Duyck <alexander.duyck@...il.com>,
	Netdev <netdev@...r.kernel.org>, kafai@...com,
	Daniel Borkmann <daniel@...earbox.net>,
	Tom Herbert <tom@...bertland.com>,
	Brenden Blanco <bblanco@...mgrid.com>,
	john fastabend <john.fastabend@...il.com>,
	Or Gerlitz <gerlitz.or@...il.com>,
	Hannes Frederic Sowa <hannes@...essinduktion.org>,
	rana.shahot@...il.com, Thomas Graf <tgraf@...g.ch>,
	"David S. Miller" <davem@...emloft.net>, as754m@....com,
	Saeed Mahameed <saeedm@...lanox.com>, amira@...lanox.com,
	tzahio@...lanox.com, Eric Dumazet <eric.dumazet@...il.com>
Subject: Re: [net-next PATCH RFC] mlx4: RX prefetch loop

On Tue, Jul 12, 2016 at 09:52:52PM +0200, Jesper Dangaard Brouer wrote:
> > 
> > >> Also, unconditionally doing a batch of 8 may hurt depending on what
> > >> is happening with the stack, bpf afterwards, or even the cpu version.  
> > >
> > > See this as software DDIO: in the unlikely case that the data gets
> > > evicted, it will still exist in the L2 or L3 cache (like DDIO). Notice,
> > > only 1024 bytes are getting prefetched here.  
> > 
> > I disagree.  DDIO only pushes received frames into the L3 cache.  What
> > you are potentially doing is flooding the L2 cache.  The difference in
> > size between the L3 and L2 caches is very significant.  L3 cache size
> > is in the MB range while the L2 cache is only 256KB or so for Xeon
> > processors and such.  In addition, DDIO is really meant for an
> > architecture that has a fairly large cache region to spare, and it
> > limits itself to that cache region; the approach taken in this code
> > could potentially prefetch a fairly significant chunk of memory.
> 
> No matter how you slice it, reading this memory is needed, as I'm
> making sure only to prefetch packets that are "ready" and are within
> the NAPI budget.  (eth_type_trans/eth_get_headlen)

When compilers insert prefetches, the code typically looks like:
for (int i;...; i += S) {
  prefetch(data + i + N);
  access data[i]
}
The N is calculated based on the weight of the loop, and there is
no check that i + N stays within the loop bounds.
A prefetch is by definition speculative. Too many prefetches hurt.
A wrong prefetch distance N hurts too.
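Spelled out a bit more (just an illustration, not from any driver; the
struct/function names and the distance of 4 are made up):

struct item { unsigned long v; };

static unsigned long sum;

static void consume(struct item *it)
{
	sum += it->v;		/* stand-in for the real per-item work */
}

#define PREFETCH_DIST 4		/* the "N": how far ahead to prefetch */

static void walk(struct item *data, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		/* speculative: may reach past the end of data[] */
		__builtin_prefetch(&data[i + PREFETCH_DIST]);
		consume(&data[i]);
	}
}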
Modern cpus compute the stride in hw and prefetch automatically, so
compilers rarely emit sw prefetches anymore, but the same logic
still applies. The ideal packet processing loop looks like:
for (...) {
  prefetch(packet + i + N);
  access packet + i
}
If there is no loop, there is no value in prefetching, since there
is no deterministic way to figure out the exact time when the packet
data will be accessed.
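To make that loop concrete, a driver-style sketch (illustrative only;
the ring layout and names are invented and do not match the mlx4 code,
prefetch() is from <linux/prefetch.h>):

#include <linux/prefetch.h>

#define RX_PREFETCH_DIST 2	/* assumed distance, would need tuning */

/* hypothetical ring layout, not the real mlx4 structures */
struct rx_ring {
	void **pkt_data;	/* packet data pointer per descriptor */
	unsigned int size;	/* ring size, power of two */
};

static void process_packet(void *data)
{
	/* stand-in for eth_type_trans()/bpf/stack hand-off */
}

static int rx_poll(struct rx_ring *ring, unsigned int head, int budget)
{
	int done = 0;

	while (done < budget) {
		unsigned int cur = (head + done) & (ring->size - 1);
		unsigned int nxt = (head + done + RX_PREFETCH_DIST) &
				   (ring->size - 1);

		/* pull in the packet we will look at a couple of
		 * iterations from now while processing the current one;
		 * real code would also check that descriptor nxt has
		 * actually completed, as discussed above */
		prefetch(ring->pkt_data[nxt]);

		process_packet(ring->pkt_data[cur]);
		done++;
	}
	return done;
}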
In the case of bpf, the program author can tell us the 'weight' of
the program, and since the program processes packets mostly through
the same branches and lookups, we can issue the prefetch based on the
author's hint.
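Purely hypothetically (no such interface exists; the names and the
200-cycle miss cost below are made up), the driver could derive the
prefetch distance from that hint the same way a compiler derives N:

#include <linux/kernel.h>

#define CACHE_MISS_CYCLES	200	/* rough guess for one cache miss */

static unsigned int weight_to_prefetch_dist(unsigned int prog_weight)
{
	unsigned int dist;

	if (!prog_weight)
		return 0;	/* no hint: leave it to the hw prefetcher */

	/* same math a compiler uses to pick N: prefetch far enough
	 * ahead that the data arrives by the time we reach it */
	dist = DIV_ROUND_UP(CACHE_MISS_CYCLES, prog_weight);

	return min(dist, 8u);	/* cap it so we don't flood the cache */
}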
Compilers never do:
prefetch data + i
prefetch data + i + 1
prefetch data + i + 2
access data + i
access data + i + 1
access data + i + 2
because by the time the access happens, the prefetched data
may already have been evicted.
