Message-Id: <20070426.141847.10298516.davem@davemloft.net>
Date: Thu, 26 Apr 2007 14:18:47 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: hadi@...erus.ca
Cc: netdev@...r.kernel.org
Subject: Re: [PATCH][XFRM] export SAD info
From: jamal <hadi@...erus.ca>
Date: Thu, 26 Apr 2007 09:10:10 -0400
> I would have liked to just do a read_lock_bh when retrieving the table
> metadata; however, the state table lock is defined as DEFINE_SPINLOCK
> unlike the policy table which is defined as DEFINE_RWLOCK.
> Any objection to changing the state lock to be RW?
I wouldn't mind if it actually helped anything.
The SMP cache line transactions are more expensive than the
execution of the code blocks they are protecting. rwlocks
rarely help, and when they do (i.e. the protected code path
costs more than the SMP atomic operations) you're holding
the lock too long :-)
> One other angle is to start rejecting additions to the table after
> some point. To test, I wrote a little DOS tool that just kept adding
> entries until an OOM hit. It is a lot of fun to watch when you reach
> the point where swap is guzzling 2G or more. The add latency starts
> going up exponentially.
I would prefer a dynamic algorithm that reacts to system memory
pressure over yet-another-knob that people will get wrong and
for which there is no sane default.
I plan to do away with all the GC threshold madness in the
routing cache, for example, and just let the MM layer call
back into us when there is memory pressure to trigger GC.
See set_shrinker() and friends. The MM calls into these
handlers in response to memory pressure. There is no
reason the networking code can't hook into this and do things
properly instead of the ad-hoc manner we currently use.