Message-ID: <20130627215411.GC5936@sbohrermbp13-local.rgmadvisors.com>
Date: Thu, 27 Jun 2013 16:54:11 -0500
From: Shawn Bohrer <sbohrer@...advisors.com>
To: Rick Jones <rick.jones2@...com>
Cc: netdev@...r.kernel.org
Subject: Re: Understanding lock contention in __udp4_lib_mcast_deliver
On Thu, Jun 27, 2013 at 01:46:58PM -0700, Rick Jones wrote:
> On 06/27/2013 01:20 PM, Shawn Bohrer wrote:
> >On Thu, Jun 27, 2013 at 12:58:39PM -0700, Rick Jones wrote:
> >>Are there other processes showing _raw_spin_lock time? It may be
> >>more clear to add a --sort symbol,dso or some such to your perf
> >>report command. Because what you show there suggests less than 1%
> >>of the active cycles are in _raw_spin_lock.
> >
> >You think I'm wasting time going after small potatoes huh?
>
> Perhaps. I also find it difficult to see the potatoes' (symbols')
> big picture in perf's default sorting :)
>
> >On a normal system it looks like it is about .12% total which is
> >indeed small but my thinking was that I should be able to make that
> >go to 0 easily by ensuring we use unique ports and only have one
> >socket per multicast addr:port. Now that I've failed at making it go
> >to 0 I would mostly like to understand what part of my thinking was
> >flawed. Or perhaps I can make it go to zero if I do ...
>
> How do you know that time is actually contention and not simply
> acquire and release overhead?
Excellent point, and that could be the flaw in my thinking.  I just
now tried (unsuccessfully) to use lock_stat to see whether any
contention was reported.  I read Documentation/lockstat.txt and
followed the instructions, but the lock in question did not appear in
the output.  I think I'll have to go with the assumption that this is
just acquire and release overhead.
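For the archives, this is roughly the workflow I followed from
Documentation/lockstat.txt.  It's only a sketch: it needs a kernel
built with CONFIG_LOCK_STAT=y and root, the sleep stands in for
whatever workload you want to measure, and "slock" is just an example
pattern -- substitute the name of the lock you're chasing.

```shell
# Sketch of the lock_stat workflow from Documentation/lockstat.txt.
# Assumes CONFIG_LOCK_STAT=y and root; "slock" is a placeholder pattern.
if [ -f /proc/lock_stat ]; then
    echo 0 > /proc/lock_stat             # clear any old statistics
    echo 1 > /proc/sys/kernel/lock_stat  # start collecting
    sleep 10                             # ...run the workload here...
    echo 0 > /proc/sys/kernel/lock_stat  # stop collecting
    grep "slock" /proc/lock_stat         # look for the lock of interest
else
    echo "lock_stat not available (CONFIG_LOCK_STAT not set)"
fi
```

Contention shows up in the con-bounces/contentions columns of
/proc/lock_stat; a lock that is taken but never contended simply
won't accumulate counts there, which is consistent with what I saw.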
Thanks,
Shawn