Date:	Fri, 19 Mar 2010 08:21:02 +0200
From:	Timo Teräs <timo.teras@....fi>
To:	Herbert Xu <herbert@...dor.apana.org.au>
CC:	netdev@...r.kernel.org
Subject: Re: [PATCH] xfrm: cache bundle lookup results in flow cache

Herbert Xu wrote:
> On Fri, Mar 19, 2010 at 07:48:57AM +0200, Timo Teräs wrote:
>> But it always matches. The caching happens using the inner
>> flow. Inner flow always matches with the same bundle unless
>> the bundle expires or goes stale. What happens is that I get
>> multiple cache entries per-inner flow each referencing to the
>> same bundle.
> 
> Sorry for being slow, but if it always matches, doesn't that mean
> you'll only have a single bundle in the policy bundle list? IOW
> why do we need this at all?

No. A bundle created for a specific flow always matches that
same flow later.

With a transport mode wildcard policy, e.g. a single policy saying
"encrypt traffic with protocol X to all IP addresses", you get a
separate bundle per public IP destination. Each bundle matches
only its specific IP, since it gets a separate xfrm_state. But you
can talk to every host on the internet using the same policy, so
you can end up with a whole lot of valid bundles in the same
policy.
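
To make the shape of the problem concrete, here is a toy
user-space model (the types and function names here are mine for
illustration, not the kernel's): the policy selector is a
wildcard, but a bundle only matches the exact flow it was built
for, so every new destination grows the list.

#include <stdint.h>

struct flow   { uint32_t daddr; uint8_t proto; };
struct bundle { struct flow fl; struct bundle *next; };

struct policy {
	uint8_t proto;          /* wildcard: any daddr, fixed proto */
	struct bundle *bundles; /* one entry per destination seen */
};

/* The policy selector matches every destination... */
static int policy_match(const struct policy *p, const struct flow *fl)
{
	return p->proto == fl->proto;
}

/* ...but a bundle only matches the exact flow it was created
 * for, so each new daddr misses here and forces a new bundle. */
static struct bundle *find_bundle(struct policy *p, const struct flow *fl)
{
	struct bundle *b;

	for (b = p->bundles; b; b = b->next)
		if (b->fl.daddr == fl->daddr && b->fl.proto == fl->proto)
			return b;
	return NULL;
}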

I'm not sure how this works in tunnel mode. It might be that a
single bundle can be reused for all packets. But I think the same
applies to tunnel mode, since afinfo->fill_dst() puts the
inner flow into the bundle's xfrm_dst->u.rt.fl, which is later
compared against the inner flow by afinfo->find_bundle(). I think
this implies that each flow traveling inside the tunnel gets its
own separate xfrm_dst, so again you end up with a whole lot of
valid bundles in the same policy.
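
Paraphrasing the 2.6.33-era IPv4 find_bundle from memory (not
verbatim, treat it as a sketch), the comparison against the
stored inner flow looks roughly like:

	/* afinfo->fill_dst() stored the *inner* flow in xdst->u.rt.fl,
	 * so this walk compares bundles against the inner flow too. */
	for (dst = policy->bundles; dst; dst = dst->next) {
		struct xfrm_dst *xdst = (struct xfrm_dst *)dst;

		if (xdst->u.rt.fl.fl4_dst == fl->fl4_dst &&
		    xdst->u.rt.fl.fl4_src == fl->fl4_src &&
		    xfrm_bundle_ok(policy, xdst, fl, AF_INET, 0))
			return dst;	/* same inner flow: reuse bundle */
	}
	/* a miss here means a new inner flow gets its own xfrm_dst */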

> Or have I misread your patch? You *are* proposing to cache the last
> used bundle in the policy, right?

Yes and no. The bundle used is cached on a per-flow basis.
The flow cache can have a lot of entries, each referring to
the same policy but a separate bundle.
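
In other words, the cached relationship is roughly this
(illustrative layout, not the patch's actual struct):

	struct flow_cache_entry {
		struct flowi		key;	/* the (inner) flow */
		struct xfrm_policy	*pol;	/* many entries can share one policy */
		struct xfrm_dst		*xdst;	/* per-flow bundle; may be NULL */
	};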

>> True. But if we go and prune a bundle due to it being bad or
>> needing garbage collection we need to invalidate all bundles
>> pointers, and we cannot access the back-pointer. Alternatively
> 
> Why can't you access the back-pointer? You should always have
> a reference held on the policy, either explicit or implicit.
> 
>> we need to keep xfrm_dst references again in the flow cache
>> requiring an expensive iteration of all flow cache entries
>> whenever a xfrm_dst needs to be deleted (which happens often).
> 
> So does the IPv4 routing cache.  I think what this reflects is
> just that the IPsec garbage collection mechanism is broken.
> 
> There is no point in doing a GC on every dst_alloc if we know
> that it isn't going to go below the threshold.  It should gain
> a minimum GC interval like IPv4.  Or perhaps we can move the
> minimum GC interval check into the dst core.

Yes, I reported the xfrm_dst GC being broken in an earlier mail.
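
For what it's worth, the minimum-interval check you suggest could
look something like this (XFRM_GC_MIN_INTERVAL and gc_last are
hypothetical names, not existing code):

	static unsigned long gc_last;

	static int xfrm_gc_should_run(int entries, int gc_thresh)
	{
		if (entries < gc_thresh)
			return 0;	/* under threshold, nothing to do */
		if (time_before(jiffies, gc_last + XFRM_GC_MIN_INTERVAL))
			return 0;	/* GC ran too recently */
		gc_last = jiffies;
		return 1;
	}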

But keeping both the policy and the bundle in the cache is still
a win. If we kill the xfrm_dst due to GC, we also lose the
policy the flow matched. We might need to kill the xfrm_dst
because the inner dst goes stale, but the flow cache would still
give a hit with the policy info (but no bundle) the next time a
packet comes in on the same flow.
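
Roughly, the fast path I have in mind (function names
illustrative, not the patch verbatim):

	entry = flow_cache_lookup(fl);
	if (entry) {
		pol = entry->pol;	/* policy resolution survives bundle death */
		if (!entry->xdst)	/* bundle was GC'd or went stale */
			entry->xdst = xfrm_bundle_create(pol, fl);
		dst = &entry->xdst->u.dst;
	}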

- Timo

