Message-ID: <20180108133150.GE14358@orbyte.nwl.cc>
Date: Mon, 8 Jan 2018 14:31:50 +0100
From: Phil Sutter <phil@....cc>
To: Chris Mi <chrism@...lanox.com>
Cc: "dsahern@...il.com" <dsahern@...il.com>,
"marcelo.leitner@...il.com" <marcelo.leitner@...il.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"gerlitz.or@...il.com" <gerlitz.or@...il.com>,
"stephen@...workplumber.org" <stephen@...workplumber.org>
Subject: Re: [patch iproute2 v6 0/3] tc: Add -bs option to batch mode
Hi Chris,
On Mon, Jan 08, 2018 at 02:03:53AM +0000, Chris Mi wrote:
> > On Thu, Jan 04, 2018 at 04:34:51PM +0900, Chris Mi wrote:
> > > The insertion rate is improved more than 10%.
> >
> > Did you measure the effect of increasing batch sizes?
> Yes. Even if we increase the batch size beyond 10, there is no big improvement.
> I think that's because the current kernel doesn't process the requests in parallel.
> If the kernel processed the requests in parallel, I believe specifying a bigger batch size
> would give a better result.
But throughput doesn't regress at some point, right? I think that's the
critical aspect when considering an "unlimited" batch size.
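[Editor's note: for context, the series under discussion adds a -bs option to tc's
existing batch mode, so that several commands read from the batch file are bundled
into a single netlink message per sendmsg() call -- e.g., assuming that syntax
survives review, something like "tc -b flows.batch -bs 128". The exact option name
and the default batch size are what is being negotiated in this thread.]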
On Mon, Jan 08, 2018 at 08:00:00AM +0000, Chris Mi wrote:
> After testing, I found that the message passed to the kernel should not be too big.
> If it is bigger than about 64K, sendmsg returns -1 and errno is 90 (EMSGSIZE).
> That is about 400 commands. So how about setting the batch size to 128, which should be big enough?
If that's the easiest way, why not. At first, I thought one could maybe
send the collected messages in chunks of suitable size, but that's
probably not worth the effort.
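[Editor's note: for illustration, a rough sketch of the "chunks of suitable size"
idea -- this is not the iproute2 code, just an outline under the assumption that
the queued requests are plain netlink messages accumulated into one buffer, which
is flushed via sendmsg() whenever the next request would push it past a limit
chosen well below the ~64K point where EMSGSIZE was observed. Going by the numbers
quoted above, ~400 commands at ~64K works out to roughly 160 bytes per request, so
a batch of 128 would come to about 20K and fit comfortably in a single chunk.

/* Sketch only: accumulate netlink requests and flush them in chunks that
 * stay below a size limit.  Buffer management and error handling are
 * simplified; the real iproute2 batch code differs. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <linux/netlink.h>

#define CHUNK_LIMIT (32 * 1024)   /* well below the ~64K EMSGSIZE point */

struct chunk {
	char   buf[CHUNK_LIMIT];
	size_t len;                   /* bytes currently queued */
};

/* Send everything queued so far in one sendmsg() call. */
static int chunk_flush(int fd, struct chunk *c)
{
	struct sockaddr_nl nladdr = { .nl_family = AF_NETLINK };
	struct iovec iov = { .iov_base = c->buf, .iov_len = c->len };
	struct msghdr msg = {
		.msg_name    = &nladdr,
		.msg_namelen = sizeof(nladdr),
		.msg_iov     = &iov,
		.msg_iovlen  = 1,
	};

	if (c->len == 0)
		return 0;
	if (sendmsg(fd, &msg, 0) < 0) {
		perror("sendmsg");
		return -errno;
	}
	c->len = 0;
	return 0;
}

/* Queue one request; flush first if it would not fit in the current chunk. */
static int chunk_add(int fd, struct chunk *c, const struct nlmsghdr *nlh)
{
	if (c->len + NLMSG_ALIGN(nlh->nlmsg_len) > sizeof(c->buf)) {
		int err = chunk_flush(fd, c);

		if (err)
			return err;
	}
	memcpy(c->buf + c->len, nlh, nlh->nlmsg_len);
	c->len += NLMSG_ALIGN(nlh->nlmsg_len);
	return 0;
}

Whether that extra bookkeeping is worth it over a fixed batch size such as 128 is
exactly the trade-off Phil points at above.]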
Cheers, Phil