Message-Id: <20100321.183656.173864141.davem@davemloft.net>
Date: Sun, 21 Mar 2010 18:36:56 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: herbert@...dor.apana.org.au
Cc: timo.teras@....fi, netdev@...r.kernel.org
Subject: Re: [PATCH] xfrm: cache bundle lookup results in flow cache
From: Herbert Xu <herbert@...dor.apana.org.au>
Date: Mon, 22 Mar 2010 09:32:57 +0800
> On Sun, Mar 21, 2010 at 06:28:46PM -0700, David Miller wrote:
>> From: Herbert Xu <herbert@...dor.apana.org.au>
>> Date: Sat, 20 Mar 2010 23:17:51 +0800
>>
>> > Actually I just realised that the other way we can fix this is
>> > to make xfrm_dst objects per-cpu just like IPv4 routes. That
>> > is, when you fail to find an xfrm_dst object in the per-cpu
>> > cache, you don't bother calling xfrm_find_bundle but just make
>> > a new bundle.
>>
>> How are ipv4 routing cache entries per-cpu? That would screw up route
>> metrics for TCP sockets quite a lot if they were.
>
> You're right of course, s/just like IPv4 routes// :)
And as a consequence, making the xfrm_dst objects per-cpu would mess
up route metrics for TCP.
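To make the trade-off concrete, here is a minimal userspace sketch of
the per-cpu bundle cache Herbert describes: on a miss you just build a
fresh bundle rather than searching a global list. All of the names
(flow_key, bundle, bundle_lookup) are hypothetical stand-ins, not the
real xfrm data structures, and the sketch also shows the downside
being discussed, namely that each CPU ends up with its own duplicate
bundle for the same flow:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define NCPUS 4
#define CACHE_SLOTS 8

/* Hypothetical flow key and bundle; the real xfrm_dst carries
 * far more state (policies, routes, metrics). */
struct flow_key { unsigned int daddr; unsigned int saddr; };
struct bundle { struct flow_key key; int id; };

static struct bundle *percpu_cache[NCPUS][CACHE_SLOTS];
static int next_bundle_id;

static unsigned int hash_key(const struct flow_key *k)
{
	return (k->daddr * 2654435761u ^ k->saddr) % CACHE_SLOTS;
}

/* On a per-cpu miss we create a new bundle instead of calling
 * anything like a global xfrm_find_bundle() -- that is the
 * proposal under discussion. */
static struct bundle *bundle_lookup(int cpu, const struct flow_key *k)
{
	unsigned int slot = hash_key(k);
	struct bundle *b = percpu_cache[cpu][slot];

	if (b && !memcmp(&b->key, k, sizeof(*k)))
		return b;		/* per-cpu cache hit */

	b = malloc(sizeof(*b));
	b->key = *k;
	b->id = next_bundle_id++;	/* each CPU gets its own copy */
	free(percpu_cache[cpu][slot]);
	percpu_cache[cpu][slot] = b;
	return b;
}
```

Note how two CPUs handling the same flow end up with bundles that have
distinct ids: any state hung off the bundle (such as route metrics) is
no longer shared, which is exactly the objection above.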
If we do something like that, there is simply no reason any longer to
keep such fine-grained routing metrics, since the one thing that would
use them heavily (ipsec) would stop doing so completely.
At that point we can go to a host cache for metrics, just like BSD,
and pull all of the metrics out of struct dst (an enormous win, since
it makes every route significantly smaller).
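The host-cache idea can be sketched in a few lines: metrics are keyed
by destination host in one shared table, so every route to the same
host sees the same values and struct dst no longer needs to carry a
metrics array at all. This is a loose userspace illustration of the
BSD hostcache concept; the structure names and fields are invented for
the example, not taken from any actual kernel API:

```c
#include <assert.h>
#include <stdlib.h>

#define HOST_SLOTS 16

/* Hypothetical per-host metrics, shared by every route (and every
 * ipsec bundle) that targets this host. */
struct host_metrics {
	unsigned int daddr;
	unsigned int rtt_us;
	unsigned int ssthresh;
	struct host_metrics *next;
};

static struct host_metrics *host_cache[HOST_SLOTS];

/* Find-or-create the metrics entry for a destination host. */
static struct host_metrics *host_metrics_get(unsigned int daddr)
{
	unsigned int slot = daddr % HOST_SLOTS;
	struct host_metrics *hm;

	for (hm = host_cache[slot]; hm; hm = hm->next)
		if (hm->daddr == daddr)
			return hm;

	hm = calloc(1, sizeof(*hm));	/* metrics start zeroed */
	hm->daddr = daddr;
	hm->next = host_cache[slot];
	host_cache[slot] = hm;
	return hm;
}
```

Because the cache is keyed by host rather than hung off each dst
entry, a metric saved through one route is visible through any other
path to the same host, and per-cpu or per-bundle dst duplication stops
mattering for TCP's metrics.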
I'm willing to consider this seriously, to be honest.
--