Message-ID: <o2k65634d661004160851wc00c609p7136a22fd07503c1@mail.gmail.com>
Date: Fri, 16 Apr 2010 08:51:28 -0700
From: Tom Herbert <therbert@...gle.com>
To: Stephen Hemminger <shemminger@...tta.com>
Cc: davem@...emloft.net, netdev@...r.kernel.org,
eric.dumazet@...il.com, Ingo Molnar <mingo@...e.hu>,
Paul Turner <pjt@...gle.com>
Subject: Re: [PATCH v4] rfs: Receive Flow Steering
> There are two sometimes conflicting models:
>
> One model is to have the flows be dispersed and let the scheduler
> be smarter about running the applications on the right CPUs where
> the packets arrive.
>
> The other is to have the flows redirected to the CPU where the application
> previously ran which is what RFS does.
>
> For benchmarks and private fixed-configuration systems it is tempting
> to just nail everything down: i.e. use hard SMP affinity for hardware,
> processes, and flows. But this is the wrong solution for general-purpose
> systems with varying workloads and requirements. How well does RFS really
> work when applications, processes, and sockets come and go or get
> migrated among CPUs by the scheduler? My concern is that this overlaps
> with scheduler design and might be a step backwards.
>
This is true. There is a fundamental question of whether the scheduler
should lead networking or vice versa. The advantages of networking
following the scheduler seem to become more apparent on heavily loaded
systems or with threads that handle more than one flow.

I'm not sure these two models have to be mutually exclusive; we are
looking at some ways to build a hybrid model.

The statement about pinning down resources is also true; we are
actively trying to squash any instances of this in our applications!
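
For anyone following along, the knobs involved look roughly like this.
This is a sketch only: rps_sock_flow_entries and rps_flow_cnt are the
paths from the RFS patch, but the device name, queue, IRQ number, and
table sizes below are all illustrative:

```shell
# Sketch: enable RFS via the knobs added by this patch
# (eth0, rx-0, and the sizes are illustrative, not recommendations).

# Global socket flow table; 0 disables RFS:
echo 32768 > /proc/sys/net/core/rps_sock_flow_entries

# Per-receive-queue flow count:
echo 2048 > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt

# By contrast, the "nail everything down" model pins by hand:
echo 1 > /proc/irq/30/smp_affinity   # IRQ 30 illustrative; mask 0x1 = CPU 0
taskset -c 0 ./server                # hard-pin the application to CPU 0
```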
Tom
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html