Date:	Fri, 02 Apr 2010 09:29:53 +0200
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Changli Gao <xiaosuo@...il.com>
Cc:	Tom Herbert <therbert@...gle.com>, davem@...emloft.net,
	netdev@...r.kernel.org
Subject: Re: [PATCH] rfs: Receive Flow Steering

On Friday, 02 April 2010 at 13:04 +0800, Changli Gao wrote:

> For sending packets, how about letting the sender compute the rxhash
> of the packets from the other side if the socket's rxhash hasn't been
> set yet? It is better for client applications.
> 
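
(For illustration, a minimal sketch of what that sender-side
computation might look like, assuming the kernel's jhash_3words()
helper; the function name, argument list and seed handling here are
hypothetical, not taken from any posted patch:)

#include <linux/jhash.h>

/* Hypothetical sketch: hash the reverse direction of a flow by
 * swapping source and destination, so the value computed at send
 * time matches the rxhash of packets arriving from the peer. */
static u32 reverse_flow_hash(__be32 saddr, __be32 daddr,
			     __be16 sport, __be16 dport, u32 seed)
{
	return jhash_3words((__force u32)daddr, (__force u32)saddr,
			    ((u32)(__force u16)dport << 16) |
			    (__force u16)sport, seed);
}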



> For routers and bridges, the current RPS works well, but not for
> server or client applications. So I propose a new socket option to
> get the RPS CPU of the packets received on a socket. It may be like
> this:
> 


Your claim that RPS is not good for applications is wrong; our test
results show an improvement as is. Maybe your applications don't
scale, because of bad habits or colliding heuristics, I don't know.

> int cpu;
> socklen_t len = sizeof(cpu);
> getsockopt(sock, SOL_SOCKET, SO_RPSCPU, &cpu, &len);
> 
> As in Tom's patch, the rxhash is recorded in the socket. When the
> call above is made, the rps_map is looked up to find the RPS CPU for
> that hash. Once we get the CPU of the current connection, a TCP
> server can dispatch the new connection to the processes running on
> that CPU. The server code would look like this:
> 
> cfd = accept(fd, NULL, NULL);
> getsockopt(cfd, SOL_SOCKET, SO_RPSCPU, &cpu, &len);
> asyncq_enqueue(work_queue[cpu], cfd);
> 
> For a client program, the rxhash can be obtained after the first
> packet of the connection is sent. So the client code would be:
> 
> connect(fd, &addr, addr_len);
> getsockopt(fd, SOL_SOCKET, SO_RPSCPU, &cpu, &len);
> asyncq_enqueue(work_queue[cpu], fd);
> 
> I do think this idea is easier to understand. I'll cook up a patch
> later if it is welcome.
> 
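
(To make the proposal above concrete, here is a user-space sketch of
the server-side dispatch, assuming the proposed SO_RPSCPU option
existed; SO_RPSCPU itself, work_queue[] and asyncq_enqueue() are all
hypothetical. On the kernel side, the lookup would resemble the
rps_map indexing from Tom's patch, i.e. something like
cpu = map->cpus[((u64)hash * map->len) >> 32].)

#include <sys/socket.h>

/* Hypothetical helpers: one work queue per CPU, filled by the
 * accept loop and drained by a thread running on that CPU. */
struct asyncq;
extern struct asyncq *work_queue[];
extern void asyncq_enqueue(struct asyncq *q, int fd);

static void dispatch_connection(int listen_fd)
{
	int cfd, cpu = 0;
	socklen_t len = sizeof(cpu);

	cfd = accept(listen_fd, NULL, NULL);
	if (cfd < 0)
		return;

	/* Ask the kernel which CPU RPS steered this flow to;
	 * SO_RPSCPU is only a proposal and does not exist today. */
	if (getsockopt(cfd, SOL_SOCKET, SO_RPSCPU, &cpu, &len) < 0)
		cpu = 0;	/* fall back to a default queue */

	asyncq_enqueue(work_queue[cpu], cfd);
}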

The whole point of Herbert's patches is that you don't need to change
applications and put complex logic in them that knows the exact
machine topology.

Your suggestion is very complex, because you must bind each thread to
a particular CPU, and this is pretty bad for many reasons. We should
allow thread migrations, because the scheduler or the admin knows
better than the application.

Application writers should rely on standard kernel mechanisms and
schedulers, because an application has only a limited view of what
really happens on the machine.




