Message-ID: <CAOrHB_Byo=q2Om1_s+wM3X8m4dbez7WcjF3bXsm_Q6d8YWcAjw@mail.gmail.com>
Date: Tue, 10 Jul 2018 11:31:19 -0700
From: Pravin Shelar <pshelar@....org>
To: Matteo Croce <mcroce@...hat.com>
Cc: Linux Kernel Network Developers <netdev@...r.kernel.org>,
ovs dev <dev@...nvswitch.org>,
Stefano Brivio <sbrivio@...hat.com>,
Jiri Benc <jbenc@...hat.com>, Aaron Conole <aconole@...hat.com>
Subject: Re: [PATCH RFC net-next] openvswitch: Queue upcalls to userspace in
per-port round-robin order
On Wed, Jul 4, 2018 at 7:23 AM, Matteo Croce <mcroce@...hat.com> wrote:
> From: Stefano Brivio <sbrivio@...hat.com>
>
> Open vSwitch sends to userspace all received packets that have
> no associated flow, an operation known as an "upcall". The
> userspace program then creates a new flow and determines the
> actions to apply based on its configuration.
>
> When a single port generates a high rate of upcalls, it can
> prevent other ports from dispatching their own upcalls. vswitchd
> overcomes this problem by creating many netlink sockets for each
> port, but it quickly exceeds any reasonable maximum number of
> open files when dealing with huge numbers of ports.
>
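For a sense of scale (illustrative numbers, not taken from the patch):
with one netlink socket per port per handler thread, the file
descriptor count is roughly ports x handlers, e.g.

    5,000 ports x 32 handler threads = 160,000 open files

which is far beyond typical RLIMIT_NOFILE defaults.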
> This patch queues all the upcalls into a list, ordering them in
> a per-port round-robin fashion, and schedules deferred work to
> queue them to userspace.
>
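For concreteness, the machinery being described could look roughly
like this (all names are invented for illustration; this is not the
actual patch code):

#include <linux/list.h>
#include <linux/skbuff.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

/* One queued upcall, tagged with the originating vport. */
struct rr_upcall {
	struct list_head list;
	u32 port_no;
	struct sk_buff *skb;	/* packet to hand to userspace */
};

static LIST_HEAD(upcall_queue);
static DEFINE_SPINLOCK(upcall_lock);	/* single global lock */

/* Deferred work: splice out the pending upcalls and deliver them. */
static void upcall_flush_fn(struct work_struct *work)
{
	struct rr_upcall *uc, *tmp;
	LIST_HEAD(todo);

	spin_lock_bh(&upcall_lock);
	list_splice_init(&upcall_queue, &todo);
	spin_unlock_bh(&upcall_lock);

	list_for_each_entry_safe(uc, tmp, &todo, list) {
		list_del(&uc->list);
		/* netlink unicast of uc->skb to userspace goes here */
		kfree(uc);
	}
}
static DECLARE_DELAYED_WORK(upcall_flush, upcall_flush_fn);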
> The algorithm to queue upcalls in a round-robin fashion,
> provided by Stefano, is based on these two rules (sketched below):
> - upcalls for a given port must be inserted after all the other
> occurrences of upcalls for the same port already in the queue,
> in order to avoid out-of-order upcalls for a given port
> - insertion happens once the highest upcall count for any given
> port (excluding the one currently at hand) is greater than the
> count for the port we're queuing to -- if this condition is
> never true, the upcall is queued at the tail. This results in a
> per-port round-robin order.
>
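One possible reading of those two rules in code, reusing struct
rr_upcall and upcall_queue from the sketch above (again illustrative:
the fixed seen[] array indexed by port number and the MAX_PORTS bound
are simplifications for brevity, not necessarily how the patch does it):

#define MAX_PORTS 64	/* sketch only; assumes port_no < MAX_PORTS */

/* Insert 'new' so the queue stays in per-port round-robin order.
 * Caller holds upcall_lock.
 */
static void rr_insert(struct rr_upcall *new)
{
	struct rr_upcall *uc;
	struct list_head *start = &upcall_queue;
	unsigned int seen[MAX_PORTS] = { 0 };
	unsigned int ours = 0;

	/* Rule 1: never overtake an upcall already queued for the
	 * same port, so scanning starts after the last such entry.
	 */
	list_for_each_entry(uc, &upcall_queue, list) {
		if (uc->port_no == new->port_no) {
			start = &uc->list;
			ours++;
		}
	}

	/* Rule 2: insert before the first entry whose port already
	 * has more upcalls queued ahead of this point than we do.
	 */
	uc = list_entry(start, struct rr_upcall, list);
	list_for_each_entry_continue(uc, &upcall_queue, list) {
		if (++seen[uc->port_no] > ours) {
			list_add_tail(&new->list, &uc->list);
			return;
		}
	}

	list_add_tail(&new->list, &upcall_queue);	/* else: tail */
}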
> In order to implement a fair round-robin behaviour, a variable
> queueing delay is introduced. This will be zero if the upcall
> rate is below a given threshold, and grows linearly with the
> queue utilisation (i.e. the upcall rate) otherwise.
>
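The delay computation being described might look something like this
(constants and names invented for illustration):

#include <linux/jiffies.h>

#define UPCALL_QUEUE_MAX	200	/* sketch: max queued upcalls */
#define UPCALL_DELAY_THRESH	(UPCALL_QUEUE_MAX / 10)
#define UPCALL_DELAY_MAX_MS	10

/* Zero delay while lightly loaded, then a delay that grows
 * linearly with queue utilisation up to UPCALL_DELAY_MAX_MS.
 */
static unsigned long upcall_delay(unsigned int qlen)
{
	if (qlen <= UPCALL_DELAY_THRESH)
		return 0;

	return msecs_to_jiffies(UPCALL_DELAY_MAX_MS *
				(qlen - UPCALL_DELAY_THRESH) /
				(UPCALL_QUEUE_MAX - UPCALL_DELAY_THRESH));
}

The enqueue path would then end with something like
schedule_delayed_work(&upcall_flush, upcall_delay(qlen));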
> This ensures fairness among ports under load and with few
> netlink sockets.
>
Thanks for the patch.
This patch adds the following overhead to upcall handling:
1. a kmalloc per upcall.
2. a global spin-lock.
3. a context switch to a single worker thread.
I think this could become a bottleneck on most multi-core systems.
You have mentioned issues with the existing fairness mechanism. Can
you elaborate on those? I think we could improve that before
implementing heavyweight fairness in upcall handling.