Date:	Tue, 12 Jul 2016 14:45:21 +0200
From:	Jesper Dangaard Brouer <brouer@...hat.com>
To:	Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc:	netdev@...r.kernel.org, kafai@...com, daniel@...earbox.net,
	tom@...bertland.com, bblanco@...mgrid.com,
	john.fastabend@...il.com, gerlitz.or@...il.com,
	hannes@...essinduktion.org, rana.shahot@...il.com, tgraf@...g.ch,
	"David S. Miller" <davem@...emloft.net>, as754m@....com,
	saeedm@...lanox.com, amira@...lanox.com, tzahio@...lanox.com,
	Eric Dumazet <eric.dumazet@...il.com>, brouer@...hat.com
Subject: Re: [net-next PATCH RFC] mlx4: RX prefetch loop

On Mon, 11 Jul 2016 16:05:11 -0700
Alexei Starovoitov <alexei.starovoitov@...il.com> wrote:

> On Mon, Jul 11, 2016 at 01:09:22PM +0200, Jesper Dangaard Brouer wrote:
> > > -	/* Process all completed CQEs */
> > > +	/* Extract and prefetch completed CQEs */
> > >  	while (XNOR(cqe->owner_sr_opcode & MLX4_CQE_OWNER_MASK,
> > >  		    cq->mcq.cons_index & cq->size)) {
> > > +		void *data;
> > >  
> > >  		frags = ring->rx_info + (index << priv->log_rx_info);
> > >  		rx_desc = ring->buf + (index << ring->log_stride);
> > > +		prefetch(rx_desc);
> > >  
> > >  		/*
> > >  		 * make sure we read the CQE after we read the ownership bit
> > >  		 */
> > >  		dma_rmb();
> > >  
> > > +		cqe_array[cqe_idx++] = cqe;
> > > +
> > > +		/* Base error handling here, free handled in next loop */
> > > +		if (unlikely((cqe->owner_sr_opcode & MLX4_CQE_OPCODE_MASK) ==
> > > +			     MLX4_CQE_OPCODE_ERROR))
> > > +			goto skip;
> > > +
> > > +		data = page_address(frags[0].page) + frags[0].page_offset;
> > > +		prefetch(data);  
> 
> that's probably not correct in all cases, since doing prefetch on the address
> that is going to be evicted soon may hurt performance.
> We need to dma_sync_single_for_cpu() before doing a prefetch or
> somehow figure out that dma_sync is a nop, so we can omit it altogether
> and do whatever prefetches we like.

Sure, DMA can be synced first (actually already played with this).

> Also unconditionally doing batch of 8 may also hurt depending on what
> is happening either with the stack, bpf afterwards or even cpu version.

See this as a software DDIO: in the unlikely case that the data gets
evicted, it will still exist in the L2 or L3 cache (like DDIO). Notice,
only 1024 bytes are being prefetched here.

> Doing single prefetch of Nth packet is probably ok most of the time,
> but asking cpu to prefetch 8 packets at once is unnecessary especially
> since single prefetch gives the same performance.

No, an unconditional prefetch of the Nth packet will be wrong most of
the time for real workloads, as Eric Dumazet already pointed out.

This patch does NOT unconditionally prefetch 8 packets.  Prefetching
_only_ happens when it is known that packets are ready in the RX ring.
We know this prefetched data will be used/touched within the NAPI cycle.
Even if processing a packet flushes it from the L1 cache, it will still
be in L2 or L3 (like DDIO).

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer
