Message-ID: <20180105172523.GD14358@orbyte.nwl.cc>
Date: Fri, 5 Jan 2018 18:25:23 +0100
From: Phil Sutter <phil@....cc>
To: Chris Mi <chrism@...lanox.com>
Cc: netdev@...r.kernel.org, gerlitz.or@...il.com,
stephen@...workplumber.org, dsahern@...il.com,
marcelo.leitner@...il.com
Subject: Re: [patch iproute2 v6 0/3] tc: Add -bs option to batch mode
Hi Chris,
On Thu, Jan 04, 2018 at 04:34:51PM +0900, Chris Mi wrote:
> Currently in tc batch mode, only one command at a time is read from the
> batch file and sent to the kernel for processing. With this patchset, we
> can accumulate several commands before sending them to the kernel. The
> batch size is specified using the option -bs or -batchsize.
>
> To accumulate the commands in tc, the client should allocate an array of
> struct iovec. If the batch size is bigger than 1, the client calls
> rtnl_talk_msg to send the message containing the iov array only after it
> has accumulated enough commands. The one exception is when there are no
> more commands left in the batch file.
>
> But please note that the kernel still processes the requests one by one.
> Processing the requests in parallel in the kernel is a separate effort.
> What this patchset saves is user-mode/kernel-mode context switches, so
> it works on top of the current kernel.
>
> Using the following script from the kernel tree, we can generate
> 1,000,000 rules:
> tools/testing/selftests/tc-testing/tdc_batch.py
>
> Without this patchset, 'tc -b $file' execution time is:
>
> real 0m15.555s
> user 0m7.211s
> sys 0m8.284s
>
> With this patchset, 'tc -b $file -bs 10' execution time is:
>
> real 0m13.043s
> user 0m6.479s
> sys 0m6.504s
>
> The insertion rate is improved by more than 10%.
Did you measure the effect of increasing batch sizes?
I wonder whether specifying the batch size is necessary at all. Couldn't
batch mode just collect messages until either EOF or an incompatible
command is encountered, which would then trigger a commit to the kernel?
This might simplify the code quite a bit.
Cheers, Phil