Date:	Mon, 27 Jul 2009 15:02:20 -0500
From:	Joe Nall <joe@...l.com>
To:	David Miller <davem@...emloft.net>
Cc:	nhorman@...driver.com, netdev@...r.kernel.org,
	herbert@...dor.apana.org.au, kuznet@....inr.ac.ru,
	pekkas@...core.fi, jmorris@...ei.org, yoshfuji@...ux-ipv6.org,
	kaber@...sh.net
Subject: Re: [PATCH] xfrm: export xfrm garbage collector thresholds via sysctl


On Jul 27, 2009, at 2:40 PM, David Miller wrote:

> From: Neil Horman <nhorman@...driver.com>
> Date: Mon, 27 Jul 2009 15:36:25 -0400
>
>> I think that makes sense, since it means we only keep cache entries
>> for active connections, and clean them up as soon as they close
>> (e.g. I don't really see the advantage to unhashing a xfrm cache
>> entry only to recreate it on the next packet sent).
>
> How is this related to the user's problem?
>
> My impression was that they were forwarding IPsec traffic when
> running up against these limits, and that has no socket-based
> association at all.
>
> Making the XFRM GC limits get computed similarly to how the
> ipv4/ipv6 ones do probably makes sense.

The problem was seen serving TCP connections over IPsec when the
server (a 24-core machine with 32 GB RAM) and the IPsec host were the
same (transport mode, not tunnel mode).

The problem was originally identified in an MLS IPsec thin-client
stress test with 25 clients and 200+ windows per client.

We then duplicated the issue with single-level xclocks on the same
hardware.

I then duplicated the problem with two bone-stock (not MLS) F10 (and
later F11) boxes running ab (the Apache benchmark tool) with 10k
requests over 2k concurrent connections. See https://bugzilla.redhat.com/show_bug.cgi?id=503124

With the gc_thresh raised to 8k, my ab test passes with 5k+
connections and 100k requests. Our test guys ran 11 workstations with
about 2200 concurrent X connections over the weekend, and I'm hoping
for some MLS results before we lose the test suite on Wednesday.
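With the patch from this thread applied, raising the threshold to 8k is a plain sysctl write; a sketch, assuming the sysctl names the patch exports (they may differ on other kernel versions):

```shell
# Raise the xfrm flow-cache GC threshold to 8192 ("8k" above) at runtime.
# These knobs assume the patch in this thread; check your kernel first.
sysctl -w net.ipv4.xfrm4_gc_thresh=8192
sysctl -w net.ipv6.xfrm6_gc_thresh=8192

# To persist across reboots, add to /etc/sysctl.conf:
#   net.ipv4.xfrm4_gc_thresh = 8192
#   net.ipv6.xfrm6_gc_thresh = 8192
```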

joe
