Message-ID: <20100412171205.561a1aec@nehalam>
Date:	Mon, 12 Apr 2010 17:12:05 -0700
From:	Stephen Hemminger <shemminger@...tta.com>
To:	Tom Herbert <therbert@...gle.com>
Cc:	davem@...emloft.net, netdev@...r.kernel.org,
	eric.dumazet@...il.com, Ingo Molnar <mingo@...e.hu>
Subject: Re: [PATCH v4] rfs: Receive Flow Steering

On Mon, 12 Apr 2010 17:03:39 -0700 (PDT)
Tom Herbert <therbert@...gle.com> wrote:

> The basic idea of RFS is that when an application calls recvmsg
> (or sendmsg), the application's running CPU is stored in a hash
> table indexed by the connection's rxhash, which is stored in
> the socket structure.  The rxhash is carried in skbs received on
> the connection from netif_receive_skb.  For each received packet,
> the associated rxhash is used to look up the CPU in the hash table;
> if a valid CPU is set, the packet is steered to that CPU using
> the RPS mechanism.
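
To restate the mechanism in code, very roughly (a userspace sketch of the
table idea only -- the names below are made up, and this is not the code
from the patch itself):

	/*
	 * Sketch of the RFS idea: a global table maps a flow's rxhash to
	 * the CPU where the consuming application last ran; the receive
	 * path consults it to pick a steering target.
	 */
	#define _GNU_SOURCE
	#include <sched.h>
	#include <stdint.h>
	#include <stdio.h>

	#define FLOW_TABLE_SIZE 4096		/* power of two */

	/* 0 means "no CPU recorded", otherwise cpu + 1. */
	static uint16_t flow_to_cpu[FLOW_TABLE_SIZE];

	/* recvmsg()/sendmsg() path: remember where the application runs. */
	static void record_flow(uint32_t rxhash)
	{
		if (rxhash)
			flow_to_cpu[rxhash & (FLOW_TABLE_SIZE - 1)] =
				sched_getcpu() + 1;
	}

	/* netif_receive_skb() path: -1 means "no hint, fall back". */
	static int steer_flow(uint32_t rxhash)
	{
		uint16_t ent;

		if (!rxhash)
			return -1;
		ent = flow_to_cpu[rxhash & (FLOW_TABLE_SIZE - 1)];
		return ent ? ent - 1 : -1; /* RPS enqueues to this CPU */
	}

	int main(void)
	{
		uint32_t rxhash = 0x5a17c3d9;	/* made-up connection hash */

		record_flow(rxhash);		/* application side */
		printf("steer flow to CPU %d\n", steer_flow(rxhash));
		return 0;
	}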

There are two sometimes conflicting models:

One model is to let the flows be dispersed and have the scheduler
be smarter about running the applications on the CPUs where their
packets arrive.

The other is to redirect the flows to the CPU where the application
previously ran, which is what RFS does.

For benchmarks and private fixed-configuration systems it is tempting
to just nail everything down, i.e. use hard SMP affinity for hardware,
processes, and flows (see the sketch below).  But this is the wrong
solution for general-purpose systems with varying workloads and
requirements.  How well does RFS really work when applications,
processes, and sockets come and go, or get migrated among CPUs by the
scheduler?  My concern is that this overlaps with scheduler design and
might be a step backwards.
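
By "nail everything down" I mean, for example, hard-pinning a process to
a single CPU, with the NIC interrupt and RPS masks set to match.  A
minimal userspace illustration, with the CPU number picked arbitrarily:

	/* Pin the calling process to CPU 2 the "benchmark" way. */
	#define _GNU_SOURCE
	#include <sched.h>
	#include <stdio.h>
	#include <stdlib.h>

	int main(void)
	{
		cpu_set_t set;

		CPU_ZERO(&set);
		CPU_SET(2, &set);		/* hard affinity to CPU 2 */
		if (sched_setaffinity(0, sizeof(set), &set) == -1) {
			perror("sched_setaffinity");
			return EXIT_FAILURE;
		}
		/* IRQ and flow affinity would be nailed down similarly via
		 * /proc/irq/<n>/smp_affinity and
		 * /sys/class/net/<dev>/queues/rx-<n>/rps_cpus. */
		printf("pinned to CPU 2\n");
		return 0;
	}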


