Message-ID: <1307770931.2872.70.camel@edumazet-laptop>
Date: Sat, 11 Jun 2011 07:42:11 +0200
From: Eric Dumazet <eric.dumazet@...il.com>
To: Jeff Kirsher <jeffrey.t.kirsher@...el.com>
Cc: davem@...emloft.net, Vasu Dev <vasu.dev@...el.com>,
netdev@...r.kernel.org, gospo@...hat.com
Subject: Re: [net-next 13/13] ixgbe: use per NUMA node lock for FCoE DDP
On Saturday, 11 June 2011 at 07:18 +0200, Eric Dumazet wrote:
> > /**
> > diff --git a/drivers/net/ixgbe/ixgbe_fcoe.h b/drivers/net/ixgbe/ixgbe_fcoe.h
> > index d876e7a..8618892 100644
> > --- a/drivers/net/ixgbe/ixgbe_fcoe.h
> > +++ b/drivers/net/ixgbe/ixgbe_fcoe.h
> > @@ -69,6 +69,7 @@ struct ixgbe_fcoe {
> > struct pci_pool **pool;
> > atomic_t refcnt;
> > spinlock_t lock;
> > + struct spinlock **node_lock;
>
> Won't this read-mostly pointer sit in an often-modified cache line?
>
> > struct ixgbe_fcoe_ddp ddp[IXGBE_FCOE_DDP_MAX];
> > unsigned char *extra_ddp_buffer;
> > dma_addr_t extra_ddp_buffer_dma;
>
This patch seems like overkill to me. Have you tried the simpler approach I
used in commit 79640a4ca6955e3ebdb7038508fa7a0cd7fa5527
("net: add additional lock to qdisc to increase throughput")?
(Remember that ->busylock must be placed in a separate cache line, so as
not to slow down the two CPUs that hold and release ->lock.)

struct ixgbe_fcoe could probably be reordered more carefully to reduce
false sharing.
I kindly ask that you guys provide actual perf numbers for:

1) before any patch
2) after your multilevel per-NUMA-node locks
3) the simpler way (my suggestion of adding a single 'busylock')
Thanks