Date:	Tue, 7 Jan 2014 10:43:38 +0800
From:	Fan Du <fan.du@...driver.com>
To:	Steffen Klassert <steffen.klassert@...unet.com>
CC:	Timo Teras <timo.teras@....fi>,
	Eric Dumazet <eric.dumazet@...il.com>, <davem@...emloft.net>,
	<netdev@...r.kernel.org>
Subject: Re: [PATCHv4 net-next] xfrm: Namespacify xfrm_policy_sk_bundles



On 2014-01-06 18:35, Steffen Klassert wrote:
> On Wed, Dec 25, 2013 at 04:44:26PM +0800, Fan Du wrote:
>>
>>
>> On 2013-12-25 16:11, Timo Teras wrote:
>>> On Wed, 25 Dec 2013 14:40:36 +0800
>>> Fan Du <fan.du@...driver.com> wrote:
>>>
>>>> ccing Timo
>>>>
>>>> On 2013-12-24 18:35, Steffen Klassert wrote:
>>>>> On Fri, Dec 20, 2013 at 11:34:41AM +0800, Fan Du wrote:
>>>>>>
>>>>>> Subject: [PATCHv4 net-next] xfrm: Namespacify
>>>>>> xfrm_policy_sk_bundles
>>>>>>
>>>>>> xfrm_policy_sk_bundles, protected by
>>>>>> net->xfrm.xfrm_policy_sk_bundle_lock, should be put into the netns
>>>>>> xfrm structure; otherwise xfrm_policy_sk_bundles can be corrupted
>>>>>> from a different net namespace.
>>>>>
>>>>> I'm ok with this patch, but I wonder where we use these cached
>>>>> socket bundles. After a quick look I see where we add and where we
>>>>> delete them, but I can't see how we use these cached bundles.
>>>>
>>>> Interesting.
>>>>
>>>> The per-socket bundles were introduced by Timo in commit 80c802f3
>>>> ("xfrm: cache bundles instead of policies for outgoing flows").
>>> Those existed even before. I just did a systematic transformation of
>>> the caching code to work at the bundle level instead of the policy level.
>>
>> Apologies, and thanks for your quick reply :)
>>
>>>> But one fundamental question: why not use the existing flow cache
>>>> for per-socket bundles as well? Then there would be no need to create
>>>> such a per-sock xdst for every packet, and they could also share the
>>>> same flow cache flush mechanism.
>>> It was needed when the flow cache cached policies. They explicitly
>>> needed to check the socket for a per-socket policy. So it made no sense
>>> to have anything socket-related in the cache.
>>
>> I understand your concern.
>>
>> Per-sk bundles could be distinguished by putting a per-sk policy pointer
>> into struct flow_cache_entry, and then comparing the cached entry's
>> policy against the socket's policy.

Yes, I tested sk policy with UDP: on transmit, the dst is cached in the sk
by sk_dst_set(). Let's leave the current implementation as it is.
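
(For reference, a minimal sketch of the per-socket dst caching described
above. sk_dst_check(), sk_dst_set() and xfrm_lookup() are real kernel
APIs of that era; the surrounding transmit logic is condensed and
hypothetical.)

	/* Sketch: how a protocol's transmit path caches its route/bundle
	 * on the socket, so later sends skip the lookup entirely. */
	struct dst_entry *dst = sk_dst_check(sk, 0);

	if (!dst) {
		/* No valid cached entry: resolve one. When an xfrm policy
		 * matches, xfrm_lookup() hands back the bundle's dst. */
		dst = xfrm_lookup(sock_net(sk), dst_orig, &fl, sk, 0);
		if (IS_ERR(dst))
			return PTR_ERR(dst);
		sk_dst_set(sk, dst);	/* cache it for subsequent packets */
	}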

Please kindly review v4 and let me know if there are any remaining concerns.
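
(For context, a sketch of the core of the change, with the list handling
condensed from xfrm_policy.c; the exact surrounding code is simplified.)

	/* Before: one file-scope global list, shared by every namespace. */
	static struct dst_entry *xfrm_policy_sk_bundles;

	/* After: the list head and its lock live in struct netns_xfrm, so
	 * each namespace only ever touches its own list. */
	spin_lock_bh(&net->xfrm.xfrm_policy_sk_bundle_lock);
	xdst->u.dst.next = net->xfrm.xfrm_policy_sk_bundles;
	net->xfrm.xfrm_policy_sk_bundles = &xdst->u.dst;
	spin_unlock_bh(&net->xfrm.xfrm_policy_sk_bundle_lock);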

> Most protocols cache the routes they use at the socket, so I'm not sure
> we really need to cache them in xfrm too.
>
> Given that we don't use these cached socket policy bundles, it would
> already be an improvement to simply remove this caching.
> All we are doing here is wasting memory.
>>
>> I also noticed that the flow cache is global across namespaces, while a
>> flow cache flush is a per-cpu (and thus also global) operation. That's
>> unfair to a slim netns compared with a fat netns that floods the flow
>> cache. Maybe it's time to make the flow cache namespace-aware too.
>
> Yes, making the flow cache namespace-aware would be a good thing.
>

I will give it a try :)
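
(A rough sketch of that direction, assuming the flow cache state moves
from file-scope globals into struct netns_xfrm so a flush only walks the
flushing namespace's own per-cpu tables; the member and parameter below
are illustrative.)

	/* Hypothetical shape: the cache becomes per-netns state... */
	struct netns_xfrm {
		/* ... existing members ... */
		struct flow_cache	flow_cache_global;
	};

	/* ...and flush takes the namespace, so a "fat" netns flooding its
	 * cache can no longer trigger global flushes that punish a "slim"
	 * one. */
	void flow_cache_flush(struct net *net);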

-- 
Floating and sinking with the waves, remembering only today's laughter

--fan