[<prev] [next>] [<thread-prev] [day] [month] [year] [list]
Message-ID: <20100821033024.GC8616@verge.net.au>
Date: Sat, 21 Aug 2010 12:30:25 +0900
From: Simon Horman <horms@...ge.net.au>
To: Julian Anastasov <ja@....bg>
Cc: lvs-devel@...r.kernel.org, netdev@...r.kernel.org,
netfilter-devel@...r.kernel.org,
Stephen Hemminger <shemminger@...tta.com>,
Wensong Zhang <wensong@...ux-vs.org>
Subject: Re: [rfc] IPVS: convert scheduler management to RCU
On Fri, Aug 20, 2010 at 09:03:03PM +0300, Julian Anastasov wrote:
>
> Hello,
>
> On Fri, 20 Aug 2010, Simon Horman wrote:
>
> > Signed-off-by: Simon Horman <horms@...ge.net.au>
> >
> > ---
> >
> > I'm still getting my head around RCU, so review would be greatly appreciated.
> >
> > It occurs to me that this code is not performance critical, so
> > perhaps simply replacing the rwlock with a spinlock would be better?
>
> This specific code does not need RCU conversion, see below
Agreed.
> > Index: nf-next-2.6/net/netfilter/ipvs/ip_vs_sched.c
> > ===================================================================
> > --- nf-next-2.6.orig/net/netfilter/ipvs/ip_vs_sched.c 2010-08-20 22:21:01.000000000 +0900
> > +++ nf-next-2.6/net/netfilter/ipvs/ip_vs_sched.c 2010-08-20 22:21:51.000000000 +0900
> > @@ -35,7 +35,7 @@
> > static LIST_HEAD(ip_vs_schedulers);
> >
> > /* lock for service table */
> > -static DEFINE_RWLOCK(__ip_vs_sched_lock);
> > +static DEFINE_SPINLOCK(ip_vs_sched_mutex);
>
> Here is what I got as list of locking points:
>
> __ip_vs_conntbl_lock_array:
> - can benefit from RCU, main benefits come from here
>
> - ip_vs_conn_unhash() followed by ip_vs_conn_hash() is tricky with RCU,
> needs more thinking, eg. when cport is changed
>
> cp->lock, cp->refcnt:
> - not a problem
>
> tcp_app_lock, udp_app_lock, sctp_app_lock:
> - can benefit from RCU (once per connection)
>
> svc->sched_lock:
> - only 1 read_lock, mostly writers that need exclusive access
> - so, not suitable for RCU, can be switched to spin_lock for speed
>
> __ip_vs_sched_lock:
> - not called by packet handlers, no need for RCU
> - used only by one ip_vs_ctl user (configuration) and the
> scheduler modules
> - can remain RWLOCK, no changes in locking are needed
>
> __ip_vs_svc_lock:
> - spin_lock, use RCU
> - restrictions for schedulers with .update_service method
> because svc->sched_lock is write locked, see below
>
> __ip_vs_rs_lock:
> - spin_lock, use RCU
>
> Schedulers:
> - every .schedule method has its own locking, two examples:
> - write_lock: to protect the scheduler state (can be
> changed to spin_lock), see WRR. Difficult for RCU.
> - no lock: relies on IP_VS_WAIT_WHILE, no state
> is protected explicitly, fast like RCU, see WLC
>
> Scheduler state, eg. mark->cl:
> - careful RCU assignment, may be all .update_service methods
> should use copy-on-update (WRR). OTOH, ip_vs_wlc_schedule (WLC)
> has no locks at all, thanks to the IP_VS_WAIT_WHILE, so
> it is fast as RCU.
>
> Statistics:
> dest->stats.lock, svc->stats.lock, ip_vs_stats.lock:
> - called for every packet, BAD for SMP, see ip_vs_in_stats(),
> ip_vs_out_stats(), ip_vs_conn_stats()
>
> curr_sb_lock:
> - called for every packet depending on conn state
> - No benefits from RCU, should be spin_lock
>
> To summarize:
>
> - the main problem remains stats:
> dest->stats.lock, svc->stats.lock, ip_vs_stats.lock
>
> - RCU benefits when connection processes many packets per connection, eg.
> for TCP, SCTP, not much for UDP. No gains for the 1st
> packet in connection.
>
> - svc: no benefits from RCU, some schedulers protect state and
> need exclusive access, others have no state (and they do not use
> locks even now)
Thanks for the list. It looks like a good basis for some conversion work.
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Powered by blists - more mailing lists