Date:	Sat, 02 Jun 2012 08:56:56 +0200
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Hans Schillström 
	<hans.schillstrom@...csson.com>
Cc:	netdev <netdev@...r.kernel.org>,
	Jesper Dangaard Brouer <brouer@...hat.com>,
	David Miller <davem@...emloft.net>,
	Neal Cardwell <ncardwell@...gle.com>,
	Tom Herbert <therbert@...gle.com>
Subject: RE: [PATCH] tcp: do not create inetpeer on SYNACK message

On Fri, 2012-06-01 at 23:34 +0200, Hans Schillström wrote:

> I think we are on the right track now,
> 
> Some results from one of our testers:
> before applying "reflect SYN queue_mapping into SYNACK"
> 
> "(The latest one from Eric is not included. I am building with
> that one right now.)
> Results were that with the same number of SYN/s, load went down
> 30% on each of the three CPUs that were handling the SYNs.
> Great !!!"
> 

I am not sure reflecting queue_mapping will help your workload, since
you specifically asked your NIC to queue all SYN packets on a single
queue.

Maybe we should not rely on skb->queue_mapping but on skb->rxhash to
choose an outgoing queue for the SYNACKs, so that we do not hammer a
single tx queue?
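
Something like this minimal sketch, assuming the SYN's rxhash is still
available when the SYNACK is sent (synack_pick_tx_queue() is a
hypothetical helper name, not from any posted patch):

#include <linux/skbuff.h>
#include <linux/netdevice.h>

/* Hypothetical helper: derive a SYNACK tx queue from the SYN's rxhash,
 * falling back to plain queue_mapping reflection when no hash is set.
 */
static u16 synack_pick_tx_queue(struct sk_buff *syn_skb,
				const struct net_device *dev)
{
	u32 hash = skb_get_rxhash(syn_skb);	/* rxhash computed on the SYN */

	if (!hash)
		return skb_get_queue_mapping(syn_skb);

	/* Scale the 32-bit hash onto [0, real_num_tx_queues) so SYNACKs
	 * spread over all tx queues instead of hitting a single one. */
	return (u16)(((u64)hash * dev->real_num_tx_queues) >> 32);
}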

Then again, such a change might not be needed if the queue is dedicated
to SYN and SYNACK packets, since net_rx_action/net_tx_action should
each dequeue 64 packets per round, in round-robin fashion.

(I had problems in a standard setup where a single cpu (CPU0 in my
case) services all NAPI interrupts: with 16 queues, the
rx_action/tx_action ratio is 16/1 if all SYNACKs go to a single tx
queue, while the SYNs are distributed over all 16 rx queues.)
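
To make the round-robin servicing point concrete, here is a purely
illustrative pseudo-C loop (toy_napi/toy_rx_round are made-up names,
this is not the real net_rx_action()):

#define TOY_NAPI_WEIGHT 64

struct toy_napi {
	struct toy_napi *next;			/* per-cpu poll list */
	int (*poll)(struct toy_napi *napi, int budget);
};

static void toy_rx_round(struct toy_napi *poll_list)
{
	struct toy_napi *n;

	/* Each instance may consume at most its weight per pass, so even
	 * a lone queue carrying only SYNACKs is serviced once per round,
	 * while 16 busy rx queues can still eat 16 x 64 packets. */
	for (n = poll_list; n; n = n->next)
		n->poll(n, TOY_NAPI_WEIGHT);
}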


> I'm looking forward to seeing the results of the latest patch.
> 
> Then I think conntrack needs a little shaping up, like a "mini-conntrack";
> it is way too expensive to alloc a full conntrack for every SYN.
> 
> I have a bunch of patches and ideas for that...
> 

Cool! The conntrack issue is a real one for sure.


Given conntrack's current requirement of being protected by a central
lock, I guess your best bet would be the following setup:

One single CPU to handle all SYN packets.

And rely on skb->rxhash rather than skb->queue_mapping to choose an
outgoing queue for the SYNACKs, so a single tx queue is not hammered.

> Thanks Eric for a great job
> 

Thanks for the testing results and ideas!

