Message-ID: <016a01d8ce69$ed6a6350$c83f29f0$@trustnetic.com>
Date:   Thu, 22 Sep 2022 17:58:55 +0800
From:   Jiawen Wu <jiawenwu@...stnetic.com>
To:     "'Andrew Lunn'" <andrew@...n.ch>
Cc:     <netdev@...r.kernel.org>, <mengyuanlou@...-swift.com>
Subject: RE: [PATCH net-next v2 03/16] net: txgbe: Set MAC address and register netdev

On Wednesday, August 31, 2022 9:03 AM, Andrew Lunn wrote:
> > +struct txgbe_ring {
> > +	u8 reg_idx;
> > +} ____cacheline_internodealigned_in_smp;
> 
> Am I right in thinking that this one byte actually takes up one L3 cache
> line?
> 
> >  struct txgbe_adapter {
> >  	u8 __iomem *io_addr;    /* Mainly for iounmap use */
> > @@ -18,11 +35,33 @@ struct txgbe_adapter {
> >  	struct net_device *netdev;
> >  	struct pci_dev *pdev;
> >
> > +	/* Tx fast path data */
> > +	int num_tx_queues;
> > +
> > +	/* TX */
> > +	struct txgbe_ring *tx_ring[TXGBE_MAX_TX_QUEUES]
> > +____cacheline_aligned_in_smp;
> > +
> 
> I assume this causes tx_ring to be aligned to a cache line. Have you used
> pahole to see how much space you are wasting? Can some of the other members
> be moved around to reduce the waste? Generally, try to arrange everything
> for RX on one cache line, everything for TX on another cache line.
> 

More members will be added to 'struct txgbe_ring' later, but this patch only
adds the one-byte 'reg_idx'. I will postpone adding them until they are
actually needed.
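
As a side note, here is a minimal user-space sketch (not the driver code) of
why a one-byte struct carrying a cache-line alignment attribute occupies a
whole line; the 64-byte line size and the plain __attribute__((aligned)) are
assumptions standing in for the kernel's ____cacheline_aligned_in_smp /
____cacheline_internodealigned_in_smp macros:

#include <stdio.h>

#define CACHE_LINE 64	/* assumed line size */

struct ring_plain {
	unsigned char reg_idx;
};

struct ring_aligned {
	unsigned char reg_idx;
} __attribute__((aligned(CACHE_LINE)));	/* size rounded up to 64 */

int main(void)
{
	printf("plain:   %zu bytes\n", sizeof(struct ring_plain));	/* 1  */
	printf("aligned: %zu bytes\n", sizeof(struct ring_aligned));	/* 64 */
	return 0;
}

Once more members land, something like 'pahole -C txgbe_ring txgbe.ko'
(module name assumed) can show the holes and padding directly.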

> > +void txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u64 pools,
> > +		   u32 enable_addr)
> > +{
> > +	u32 rar_entries = hw->mac.num_rar_entries;
> > +	u32 rar_low, rar_high;
> > +
> > +	/* Make sure we are using a valid rar index range */
> > +	if (index >= rar_entries) {
> > +		txgbe_info(hw, "RAR index %d is out of range.\n", index);
> > +		return;
> > +	}
> > +
> > +	/* select the MAC address */
> > +	wr32(hw, TXGBE_PSR_MAC_SWC_IDX, index);
> > +
> > +	/* setup VMDq pool mapping */
> > +	wr32(hw, TXGBE_PSR_MAC_SWC_VM_L, pools & 0xFFFFFFFF);
> > +	wr32(hw, TXGBE_PSR_MAC_SWC_VM_H, pools >> 32);
> > +
> > +	/* HW expects these in little endian so we reverse the byte
> > +	 * order from network order (big endian) to little endian
> 
> And what happens when the machine is already little endian?
> 

The host byte order is not evaluated here, so it does not matter whether the
machine is little endian or big endian.
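
To illustrate the point, a hedged sketch (the exact byte placement is
illustrative, not necessarily the driver's layout): assembling the address
with byte-indexed loads and explicit shifts defines the register values
arithmetically, so the same result is produced on big- and little-endian
hosts alike.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint8_t addr[6] = { 0x00, 0x1b, 0x21, 0xaa, 0xbb, 0xcc };
	uint32_t rar_low, rar_high;

	/* addr[] is read byte by byte; the shifts do not depend on how
	 * the host stores multi-byte integers in memory.
	 */
	rar_low  = (uint32_t)addr[5]
		 | ((uint32_t)addr[4] << 8)
		 | ((uint32_t)addr[3] << 16)
		 | ((uint32_t)addr[2] << 24);
	rar_high = (uint32_t)addr[1]
		 | ((uint32_t)addr[0] << 8);

	printf("rar_low=0x%08x rar_high=0x%08x\n", rar_low, rar_high);
	return 0;
}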

