Date:	Wed, 25 Dec 2013 16:44:26 +0800
From:	Fan Du <fan.du@...driver.com>
To:	Timo Teras <timo.teras@....fi>
CC:	Steffen Klassert <steffen.klassert@...unet.com>,
	Eric Dumazet <eric.dumazet@...il.com>, <davem@...emloft.net>,
	<netdev@...r.kernel.org>
Subject: Re: [PATCHv4 net-next] xfrm: Namespacify xfrm_policy_sk_bundles



On 2013-12-25 16:11, Timo Teras wrote:
> On Wed, 25 Dec 2013 14:40:36 +0800
> Fan Du <fan.du@...driver.com> wrote:
>
>> ccing Timo
>>
>> On 2013-12-24 18:35, Steffen Klassert wrote:
>>> On Fri, Dec 20, 2013 at 11:34:41AM +0800, Fan Du wrote:
>>>>
>>>> Subject: [PATCHv4 net-next] xfrm: Namespacify
>>>> xfrm_policy_sk_bundles
>>>>
>>>> xfrm_policy_sk_bundles, protected by
>>>> net->xfrm.xfrm_policy_sk_bundle_lock, should be put into the netns
>>>> xfrm structure, otherwise xfrm_policy_sk_bundles can be corrupted
>>>> from different net namespaces.
>>>
>>> I'm ok with this patch, but I wonder where we use these cached
>>> socket bundles. After a quick look I see where we add and where we
>>> delete them, but I can't see how we use these cached bundles.
>>
>> Interesting.
>>
>> The per-socket bundles were introduced by Timo in commit 80c802f3
>> ("xfrm: cache bundles instead of policies for outgoing flows")
> Those existed even before. I just did a systematic transformation of the
> caching code to work on bundle level instead of policy level.

Apologies, and thanks for your quick reply :)
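
For context, the change under discussion boils down to roughly the sketch
below. This is not the literal diff, and the helper name is made up; the
idea is simply to move the global chain next to the per-namespace lock
that already guards it:

	/* Before: the chain head is a single global, yet the lock guarding
	 * it already lives in struct netns_xfrm, so two namespaces can
	 * corrupt the shared list while each holds only its own lock.
	 */
	static struct dst_entry *xfrm_policy_sk_bundles;

	/* After (sketch): keep the list and the lock together, per namespace. */
	struct netns_xfrm {
		/* ... existing fields ... */
		struct dst_entry	*xfrm_policy_sk_bundles;
		spinlock_t		xfrm_policy_sk_bundle_lock;
	};

	/* Call sites then chain the cached per-socket bundle onto the
	 * namespace-local list (helper name is illustrative only).
	 */
	static void xfrm_chain_sk_bundle(struct net *net, struct xfrm_dst *xdst)
	{
		spin_lock_bh(&net->xfrm.xfrm_policy_sk_bundle_lock);
		xdst->u.dst.next = net->xfrm.xfrm_policy_sk_bundles;
		net->xfrm.xfrm_policy_sk_bundles = &xdst->u.dst;
		spin_unlock_bh(&net->xfrm.xfrm_policy_sk_bundle_lock);
	}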

>> But one fundamental question is: why not use the existing flow cache
>> for per-socket bundles as well? Then there would be no need to create
>> such a per-socket xdst for every packet, and they could also share the
>> same flow cache flush mechanism.
> It was needed when the flow cache cached policies. They explicitly
> needed to check the socket for a per-socket policy, so it made no sense
> to have anything socket related in the cache.

I understand your concern.

Per-socket bundles could be distinguished by putting the per-socket policy
pointer into struct flow_cache_entry, and then comparing the cached policy
against the socket's current policy.
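
Something along these lines, as a sketch only (the extra field and the
match helper are hypothetical, just to show the idea):

	/* Sketch: teach the common flow cache about per-socket policies by
	 * making the socket policy part of the cache key.
	 */
	struct flow_cache_entry {
		/* ... existing hash linkage, net pointer, genid ... */
		u16			  family;
		u8			  dir;
		struct flowi		  key;
		struct xfrm_policy	  *sk_policy;	/* hypothetical new field; NULL when no socket policy */
		struct flow_cache_object  *object;
	};

	/* A lookup would then reuse a cached bundle only when the cached
	 * socket policy matches the one attached to the current socket.
	 */
	static bool flow_entry_matches(const struct flow_cache_entry *fle,
				       const struct flowi *key, u16 family, u8 dir,
				       const struct xfrm_policy *sk_policy)
	{
		return fle->family == family && fle->dir == dir &&
		       memcmp(key, &fle->key, sizeof(*key)) == 0 &&	/* simplified key compare */
		       fle->sk_policy == sk_policy;
	}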

I also notice that the flow cache is global across namespaces, while a
flow cache flush is a per-cpu (and therefore also global) operation.
That's not fair to a slim netns compared with a fat netns that floods the
flow cache. Maybe it's time to make the flow cache namespace aware as well.
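
Rough idea only, nothing like this exists yet: the cache and its flush
work would hang off struct netns_xfrm, so a flush is confined to the
namespace that asked for it:

	/* Sketch: one flow cache instance per namespace instead of a single
	 * global one shared by everybody.
	 */
	struct netns_xfrm {
		/* ... existing fields ... */
		struct flow_cache	flow_cache_global;
		struct work_struct	flow_cache_flush_work;
	};

	/* Flush would take the namespace and only touch its own per-cpu
	 * tables, so a flood in one netns no longer penalises the others.
	 */
	void flow_cache_flush(struct net *net);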

Let's see what Steffen thinks about it.

-- 
Rising and falling with the waves, remembering only today's laughter

--fan
