Message-ID: <4BB07BF6.6000505@iki.fi>
Date: Mon, 29 Mar 2010 13:07:50 +0300
From: Timo Teräs <timo.teras@....fi>
To: Herbert Xu <herbert@...dor.apana.org.au>
CC: David Miller <davem@...emloft.net>, netdev@...r.kernel.org
Subject: Re: [PATCH RFC 1/2] flow: virtualize get and entry deletion methods
Herbert Xu wrote:
> On Mon, Mar 29, 2010 at 12:00:38PM +0300, Timo Teräs wrote:
>>> Timo, what was the reason for getting rid of the synchronous
>>> flush again?
>> No, just having the flush call is not enough to guarantee
>> liveness. The flushing happens in delayed work, and the flows
>> might be in use before the flush has finished or even started.
>>
>> I was also hoping to move the "delayed" deletion part to the
>> flow cache core so the code would be shared between policies
>> and bundles.
>>
>> As stated before, we don't really need a lock for the 'dead'
>> check. It's only written once, and it is actually read without
>> the lock in some other places too. And all the other places
>> that do take the lock release it right away, making the
>> resulting dead-check result stale anyway.
>
> No, that's not the point.
>
> The lock is not there to protect reading ->dead, which is atomic
> anyway.
No, as far as I can tell, it's pretty much there for reading
->dead. That's how ->dead is accessed in multiple other places
too, and that's the only reason I added the locking in my new
code.
But yes, it's pointless. ->dead access is atomic, except where
it's read and written in xfrm_policy_kill. It's trivial to
change it to an atomic_t, and I have a patch for this.
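
For reference, the conversion I have in mind is roughly this
small (an illustrative sketch with made-up names, not the actual
patch; the real struct xfrm_policy has many more fields):

	#include <asm/atomic.h>

	/* Illustrative only: ->dead as an atomic_t, so readers need no lock. */
	struct xfrm_policy_sketch {
		atomic_t	refcnt;
		atomic_t	dead;	/* was: int dead; */
		/* ... rest of the policy ... */
	};

	/* Lock-free read: a single atomic load. */
	static inline int policy_is_dead(struct xfrm_policy_sketch *pol)
	{
		return atomic_read(&pol->dead);
	}

	/* Written once, from xfrm_policy_kill(), again with no lock. */
	static inline void policy_mark_dead(struct xfrm_policy_sketch *pol)
	{
		atomic_set(&pol->dead, 1);
	}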
> It's there to guarantee that you don't increment the ref count
> on a dead policy.
The previous code did not take any lock before adding a
reference, so the lock is not needed for that.
> For the flow cache we didn't need this because the policy code
> would flush the cache synchronously so we can always grab a ref
> count safely as long as BH is still off.
The old code could return a policy object that had already been
killed. It relied solely on the fact that the policy GC would
flush the flow cache. Between the time the policy was killed and
the time the GC ran, the old code would return a policy object
that was marked dead.

The change is an improvement in this regard, as the flow cache
entries are refreshed immediately after a policy is marked dead.
The policy GC called flush for two reasons:
- to purge the stale entries
- to make sure that the policy's refcount could not drop to zero
  after the flow cache released its reference (because the flow
  cache only did an atomic_dec and did not call the destructor)
Both issues are handled differently in the patch: stale entries
are refreshed immediately, and the virtual destructor is called
when an object is deleted. Thus the slow flush is not needed as
often now.
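
To make this concrete, the shape I'm after is roughly the
following (a sketch based on the patch description, with
illustrative names, not the exact code):

	/* Cache entries embed a small object carrying virtual methods,
	 * so policies and bundles can each define what "get" and
	 * "delete" mean for them. */
	struct flow_cache_object;

	struct flow_cache_ops {
		/* Return the object with a reference held, or NULL if
		 * it has gone stale (e.g. a dead policy) and the caller
		 * must re-resolve. */
		struct flow_cache_object *(*get)(struct flow_cache_object *obj);
		/* Virtual destructor: the owner drops the cache's
		 * reference properly instead of a bare atomic_dec. */
		void (*delete)(struct flow_cache_object *obj);
	};

	struct flow_cache_object {
		const struct flow_cache_ops *ops;
	};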
> So if you leave the flow cache flushing as is, then we should
> still be able to do it without holding the lock, or checking
> for deadness.
We can still drop the locking, as ->dead can be made an
atomic_t. Checking ->dead improves how quickly the cache reacts
to policy object changes. And the virtual ->get is especially
needed for bundles, as they go stale much more often.
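
As an example, the dead check would live in the policy's ->get
hook, along these lines (again an illustrative sketch; it assumes
the sketched policy struct above also embeds the cache object as
a member named flo):

	#include <linux/kernel.h>	/* container_of() */

	/* A dead policy makes the cached entry stale, so the lookup
	 * re-resolves instead of handing out a reference to a killed
	 * object. The check is best-effort: it narrows, rather than
	 * closes, the window in which a dead policy can be returned. */
	static struct flow_cache_object *
	policy_get_sketch(struct flow_cache_object *obj)
	{
		struct xfrm_policy_sketch *pol =
			container_of(obj, struct xfrm_policy_sketch, flo);

		if (policy_is_dead(pol))
			return NULL;		/* stale: refresh now */

		atomic_inc(&pol->refcnt);	/* take the cache ref */
		return obj;
	}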
>> It looks to me like the whole policy locking is not up-to-date
>> anymore. Apparently the only place that actually needs it is
>> the migration code (which happens to be broken anyway, since
>> bundle creation does not take the read lock). But it could
>> relatively easily be changed to replace the policy with new
>> templates... and the whole policy stuff converted to RCU.
>
> I wouldn't be surprised if the migration code is buggy. But
> that's orthogonal to your patch.
Yeah. Just a note that the road map to RCU policies is trivial
and would fix some races in the current locking.
- Timo