Message-ID: <20100320151751.GB2950@gondor.apana.org.au>
Date:	Sat, 20 Mar 2010 23:17:51 +0800
From:	Herbert Xu <herbert@...dor.apana.org.au>
To:	Timo Teräs <timo.teras@....fi>
Cc:	netdev@...r.kernel.org, "David S. Miller" <davem@...emloft.net>
Subject: Re: [PATCH] xfrm: cache bundle lookup results in flow cache

On Fri, Mar 19, 2010 at 11:53:44AM +0200, Timo Teräs wrote:
>
> But now, thinking more, it would probably make more sense to
> just cache xfrm_dst's and keep a ref to the policy on them. So
> in general I agree with your recommendation. The only immediate
> problem I can think of now is that the resolver would need to
> atomically check whether the xfrm_dst is valid and, if not,
> resolve a new one. But creating a new xfrm_dst involves taking
> locks and can sleep, so it cannot be inside the main resolver.

OK this brings out my favourite topic :)

The reason we have to sleep is to resolve the template.  So if
we had queueing for pending xfrm_dst objects, we wouldn't have
to sleep at all when creating the top-level xfrm_dst.

Packets using that xfrm_dst can wait in the queue until it is
fully resolved.  Now obviously this code doesn't exist, so this
is all just a wet dream.
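
To make that concrete, here is a rough sketch of the shape I have
in mind; the struct and the output hook are entirely made up,
nothing like this exists in the tree:

/* Hypothetical: a half-resolved bundle parks packets instead of
 * making the caller sleep. */
struct xfrm_dst_pending {
	struct dst_entry	dst;		/* embedded dst, as usual */
	struct sk_buff_head	uncommitted;	/* packets awaiting resolution */
};

static int xfrm_pending_output(struct sk_buff *skb)
{
	struct xfrm_dst_pending *xdst =
		(struct xfrm_dst_pending *)skb_dst(skb);

	/* Queue instead of sleeping; the queue is flushed (or dropped
	 * on error) once the templates have been resolved. */
	skb_queue_tail(&xdst->uncommitted, skb);
	return 0;
}

The top-level xfrm_dst could then be created atomically in the
resolver, with the slow template resolution happening asynchronously.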

Setting my favourite topic aside, I have come to the conclusion
that your patch still doesn't fully resolve the problem you set out
to fix.

The crux of the issue is the linked list of all bundles in a
policy, and the obvious problems stemming from walking an
unbounded linked list.

The reason I think it doesn't fully resolve this is the flow
cache.  Since the cache is per-cpu, when you create the xfrm_dst
the first time around, you'll put it in at most one CPU's cache.

The next CPU that comes along will still have to walk that same
bundle linked list.  So we're back to square one.
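
For reference, the walk in question, condensed from the current
__xfrm4_find_bundle (I've folded the match conditions into a
hypothetical predicate):

static struct dst_entry *find_bundle(struct flowi *fl,
				     struct xfrm_policy *policy)
{
	struct dst_entry *dst;

	read_lock_bh(&policy->lock);
	/* Unbounded walk over every cached bundle of this policy. */
	for (dst = policy->bundles; dst; dst = dst->next) {
		struct xfrm_dst *xdst = (struct xfrm_dst *)dst;
		if (xdst_matches(xdst, fl))	/* hypothetical predicate */
			break;
	}
	if (dst)
		dst_hold(dst);
	read_unlock_bh(&policy->lock);
	return dst;
}

Every CPU whose flow cache hasn't seen this flow yet ends up in
that loop.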

Now Dave, my impression is that we picked the per-cpu design
because it was the best data structure we had back in 2002,
right?

If so I'd like us to think about the possibility of switching
over to a different design, in particular, an RCU-based hash
table, similar to the one I just used for bridge multicasting.

This would eliminate the need for walking the bundle list apart
from the case when we're destroying the policy, which can be
done in process context.
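
The lookup side would then be lockless in the fast path, roughly
like this (the table, the hnode member, and the helpers are all
invented names):

static struct xfrm_dst *bundle_lookup(struct bundle_htable *ht,
				      struct flowi *fl)
{
	struct xfrm_dst *xdst;
	struct hlist_node *p;
	u32 h = flow_hash(fl) & (ht->size - 1);	/* hypothetical hash */

	rcu_read_lock();
	hlist_for_each_entry_rcu(xdst, p, &ht->buckets[h], hnode) {
		if (bundle_matches(xdst, fl)) {	/* hypothetical predicate */
			dst_hold(&xdst->u.dst);	/* take ref under RCU */
			rcu_read_unlock();
			return xdst;		/* no list walk, no lock */
		}
	}
	rcu_read_unlock();
	return NULL;
}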

Actually I just realised that the other way we can fix this is
to make xfrm_dst objects per-cpu just like IPv4 routes.  That
is, when you fail to find an xfrm_dst object in the per-cpu
cache, you don't bother calling xfrm_find_bundle but just make
a new bundle.
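
So the miss path degenerates to something like this (the helper
names are made up):

static struct xfrm_dst *bundle_get_percpu(struct xfrm_policy *pol,
					  struct flowi *fl)
{
	struct xfrm_dst *xdst;

	xdst = flow_cache_lookup_this_cpu(pol, fl);	/* hypothetical */
	if (xdst)
		return xdst;				/* fast path: hit */

	/* Miss: don't search other CPUs' bundles at all, just build a
	 * duplicate and let the usual dst GC reap the extras. */
	xdst = xfrm_bundle_create(pol, fl);		/* hypothetical */
	flow_cache_insert_this_cpu(pol, fl, xdst);
	return xdst;
}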

This is probably much easier than replacing the whole flow cache.
Can anyone think of any problems with duplicate xfrm_dst objects?

Cheers,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@...dor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
