Message-ID: <94F013E7935FF44C83EBE7784D62AD3F0571F5FE@039-SN2MPN1-022.039d.mgd.msft.net>
Date:	Thu, 9 Feb 2012 10:44:54 +0000
From:	Li Yang-R58472 <r58472@...escale.com>
To:	Paul Gortmaker <paul.gortmaker@...driver.com>
CC:	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"linuxppc-dev@...ts.ozlabs.org" <linuxppc-dev@...ts.ozlabs.org>
Subject: RE: [RFC] Multi queue support in ethernet/freescale/ucc_geth.c

> -----Original Message-----
> From: Paul Gortmaker [mailto:paul.gortmaker@...driver.com]
> Sent: Friday, February 03, 2012 6:42 AM
> To: Li Yang-R58472
> Cc: netdev@...r.kernel.org; linuxppc-dev@...ts.ozlabs.org
> Subject: [RFC] Multi queue support in ethernet/freescale/ucc_geth.c
> 
> Hi Li,

Hi Paul,

Sorry for the late response due to the holidays.

> 
> A while back DaveM mentioned that it would be good to break out the ring
> allocations[1] in this driver.
> 
> I was looking at it, and in the process noticed this:
> 
> $ grep 'numQueues.*=' drivers/net/ethernet/freescale/ucc_geth.c
>         .numQueuesTx = 1,
>         .numQueuesRx = 1,
> $
> 
> My interpretation of the above is that there is no way (aside from a code
> edit) to enable multi-queue support: these fields are only ever assigned
> once, to a value of one.
> 
> Assuming I'm not missing something obvious, is the multi-queue support
> functional and tested, or is it just old code that was never tested or
> enabled?

Previously the device was only used on single-core CPUs, so we didn't have an incentive to enable multi-queue.  It is currently not tested on Linux.

> 
> The reason I ask is that the ring allocation code can drop the loop
> wrapping it if the driver is really only ever meant to have single
> queues for Rx/Tx, and other areas of the driver can be simplified
> accordingly as well.

Well, I would prefer the other way, which is to add multi-queue support, as we are now using the QE in multi-core SoCs and the current driver already has almost all of the code needed for multi-queue except the interface to the protocol layer.
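
For reference, below is a minimal sketch of what that interface to the protocol layer usually looks like in a Linux netdev driver.  The kernel helpers used (alloc_etherdev_mqs(), netif_set_real_num_tx_queues(), skb_get_queue_mapping(), netif_stop_subqueue()) are standard APIs; the driver-side names (uge_probe_mq(), struct uge_priv, NUM_TX_QUEUES/NUM_RX_QUEUES) are placeholders for illustration, not the actual ucc_geth code.

/*
 * Illustrative sketch only -- not the actual ucc_geth code.  It shows the
 * stack-facing hooks a multi-queue netdev driver needs: allocating the
 * net_device with several queues, telling the stack how many are really
 * in use, and doing per-queue flow control in the xmit path.
 */
#include <linux/etherdevice.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

#define NUM_TX_QUEUES	4	/* hypothetical; would come from ug_info */
#define NUM_RX_QUEUES	4

struct uge_priv {		/* placeholder for the driver private data */
	struct net_device *ndev;
};

static netdev_tx_t uge_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	u16 q = skb_get_queue_mapping(skb);	/* queue chosen by the stack */
	bool ring_full = false;			/* placeholder condition */

	/* ... enqueue skb on the hardware Tx ring for queue 'q' ... */

	if (ring_full)
		netif_stop_subqueue(dev, q);	/* per-queue flow control */

	return NETDEV_TX_OK;
}

static const struct net_device_ops uge_netdev_ops = {
	.ndo_start_xmit	= uge_start_xmit,
};

static int uge_probe_mq(void)
{
	struct net_device *ndev;
	int err;

	/* Allocate a net_device with room for multiple Tx/Rx queues. */
	ndev = alloc_etherdev_mqs(sizeof(struct uge_priv),
				  NUM_TX_QUEUES, NUM_RX_QUEUES);
	if (!ndev)
		return -ENOMEM;

	ndev->netdev_ops = &uge_netdev_ops;

	/* Tell the stack how many queues are actually usable. */
	netif_set_real_num_tx_queues(ndev, NUM_TX_QUEUES);
	netif_set_real_num_rx_queues(ndev, NUM_RX_QUEUES);

	err = register_netdev(ndev);
	if (err)
		free_netdev(ndev);
	return err;
}

On the receive side, the stack-facing part is mostly calling skb_record_rx_queue() on each received skb (and typically running one NAPI context per queue); the ring setup itself is the code the driver already has.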

- Leo

