Open Source and information security mailing list archives
 
Date:	Wed, 03 Nov 2010 21:31:40 +0100
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Chris Metcalf <cmetcalf@...era.com>
Cc:	Stephen Hemminger <shemminger@...tta.com>,
	linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [PATCH] drivers/net/tile/: on-chip network drivers for the
 tile architecture

On Wednesday, 03 November 2010 at 15:39 -0400, Chris Metcalf wrote:

> I read it and internalized it long ago, and re-read it when I got Stephen's
> original email.  I should have said that explicitly instead of a comment
> with a smiley -- email is a tricky communication medium sometimes.
> 
> Several uses of "*(volatile int *)ptr" in that file are intended as
> performance hints.  A more obvious way to state this, for our compiler, is
> to say "prefetch_L1(ptr)".  This generates essentially the same code, but
> avoids the red flag for "volatile" and also reads more clearly, so it's a
> good change.
> 
> The other use is part of a very precise dance that involves detailed
> knowledge of the Tile memory subsystem micro-architecture.  This doesn't
> really belong in the network device driver code, so I've moved it to
> <asm/cacheflush.h>, and cleaned it up, with detailed comments.  The use
> here is that our network hardware's DMA engine can be used in a mode where
> it reads directly from memory, in which case you must ensure that any
> cached values have been flushed.
> 

This kind of thing really must be discussed before it is used in a
network driver.

Because an skb can be built by one CPU and queued on a qdisc queue with
no guarantee that its "cached values have been flushed" ...

It can then be dequeued by another CPU and given to the device.
What happens then?

> /*
>  * Flush & invalidate a VA range that is homed remotely on a single core,
>  * waiting until the memory controller holds the flushed values.
>  */
> static inline void finv_buffer_remote(void *buffer, size_t size)
> {
> 	char *p;
> 	int i;
> 
> 	/*
> 	 * Flush and invalidate the buffer out of the local L1/L2
> 	 * and request the home cache to flush and invalidate as well.
> 	 */
> 	__finv_buffer(buffer, size);
> 
> 	/*
> 	 * Wait for the home cache to acknowledge that it has processed
> 	 * all the flush-and-invalidate requests.  This does not mean
> 	 * that the flushed data has reached the memory controller yet,
> 	 * but it does mean the home cache is processing the flushes.
> 	 */
> 	__insn_mf();
> 
> 	/*
> 	 * Issue a load to the last cache line, which can't complete
> 	 * until all the previously-issued flushes to the same memory
> 	 * controller have also completed.  If we weren't striping
> 	 * memory, that one load would be sufficient, but since we may
> 	 * be, we also need to back up to the last load issued to
> 	 * another memory controller, which would be the point where
> 	 * we crossed an 8KB boundary (the granularity of striping
> 	 * across memory controllers).  Keep backing up and doing this
> 	 * until we are before the beginning of the buffer, or have
> 	 * hit all the controllers.
> 	 */
> 	for (i = 0, p = (char *)buffer + size - 1;
> 	     i < (1 << CHIP_LOG_NUM_MSHIMS()) && p >= (char *)buffer;
> 	     ++i) {
> 		const unsigned long STRIPE_WIDTH = 8192;
> 
> 		/* Force a load instruction to issue. */
> 		*(volatile char *)p;
> 
> 		/* Jump to end of previous stripe. */
> 		p -= STRIPE_WIDTH;
> 		p = (char *)((unsigned long)p | (STRIPE_WIDTH - 1));
> 	}
> 
> 	/* Wait for the loads (and thus flushes) to have completed. */
> 	__insn_mf();
> }
> 

