Message-ID: <5406BC19.9020009@redhat.com>
Date:	Wed, 03 Sep 2014 14:58:33 +0800
From:	Jason Wang <jasowang@...hat.com>
To:	Peter Zijlstra <peterz@...radead.org>
CC:	"Michael S. Tsirkin" <mst@...hat.com>,
	Mike Galbraith <umgwanakikbuti@...il.com>, davem@...emloft.net,
	netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
	Ingo Molnar <mingo@...e.hu>,
	Eliezer Tamir <eliezer.tamir@...ux.intel.com>
Subject: Re: [PATCH net-next 2/2] net: exit busy loop when another process
 is runnable

On 09/02/2014 06:24 PM, Peter Zijlstra wrote:
> On Tue, Sep 02, 2014 at 12:03:42PM +0800, Jason Wang wrote:
>> On 09/01/2014 06:19 PM, Peter Zijlstra wrote:
>>> OK I suppose that more or less makes sense, the contextual behaviour is
>>> of course tedious in that it makes behaviour less predictable. The
>>> 'other' tasks might not want to generate data and you then destroy
>>> throughput by not spinning.
>>
>> The patch tries to make sure that:
>> - the performance of busy read is not worse than when it is disabled,
>> in any case.
>> - the performance improvement for a single socket is not achieved by
>> sacrificing the total performance (of all other processes) of the
>> system.
>>
>> If the 'other' tasks are also CPU- or I/O-intensive jobs, we switch to
>> them, so the total performance is kept or even increased, and the
>> performance of the current process is guaranteed to be no worse than
>> when busy read is disabled (or even better, since it may still do busy
>> reads sometimes when it is the only runnable process). If the 'other'
>> tasks are not intensive, they just do a little work and sleep soon, so
>> busy read can still work most of the time during future reads and we
>> may still get obvious improvements.
> Not entirely true; the select/poll whatever will now block, which means
> we need a wakeup, which increases the latency immensely.

Not sure I get your meaning. This patch does not change the logic or
dynamics of select/poll, since sock_poll() always calls sk_busy_loop()
with nonblock set to true. This means sk_busy_loop() will only try
ndo_busy_poll() once, whatever the result of the other checks; the busy
polling loop is in fact done by its caller.
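
For reference, a simplified sketch of that path (abbreviated from my
reading of net/socket.c in kernels of that era, not the exact source):

	/* sock_poll() advertises busy polling to its caller and does
	 * at most one nonblocking pass itself. */
	static unsigned int sock_poll(struct file *file, poll_table *wait)
	{
		struct socket *sock = file->private_data;
		unsigned int busy_flag = 0;

		if (sk_can_busy_loop(sock->sk)) {
			/* tell do_select()/do_poll() this fd can busy poll */
			busy_flag = POLL_BUSY_LOOP;

			/* once, only if the syscall asked for it; the
			 * actual polling loop lives in the caller */
			if (wait && (wait->_key & POLL_BUSY_LOOP))
				sk_busy_loop(sock->sk, 1 /* nonblock */);
		}

		return busy_flag | sock->ops->poll(file, sock, wait);
	}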
>>> I'm not entirely sure I see how it's all supposed to work though; the
>>> various poll functions call sk_busy_poll() and do_select() also loops.
>>>
>>> The patch only kills the sk_busy_poll() loop, but then do_select() will
>>> still loop and not sleep, so how is this helping?
>>
>> Yes, the patch only helps processes that do blocking reads (busy
>> read). For select(), maybe we can do the same thing, but that needs
>> more tests and thought.
> What's the blocking read callgraph? How do we end up in sk_busy_poll() there?
>
> But that's another reason the patch is wrong.

The patch only tries to improve the performance of busy read (and the
test results show impressive changes); it does not change anything for
busy poll. Consider two processes on one cpu, one doing busy read and
one doing busy polling: this patch may in fact help the busy polling
performance in that case. (The blocking read callgraph is sketched
below.)
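
For a blocking read, the path into the busy loop is roughly (simplified
from the v3.x sources, from memory):

	tcp_recvmsg()
	  -> sk_busy_loop(sk, nonblock)   /* nonblock == 0 here */
	       -> loops calling ops->ndo_busy_poll(napi) until data
	          arrives, the busy-poll timeout expires, or a
	          reschedule is needed

and the core of this patch is just one extra termination condition in
that loop, something like (a sketch only; the helper name below is
illustrative, not the exact diff):

	do {
		rc = ops->ndo_busy_poll(napi);
		/* ... account packets, check rc ... */
	} while (!nonblock && skb_queue_empty(&sk->sk_receive_queue) &&
		 !need_resched() && !busy_loop_timeout(end_time) &&
		 !other_task_runnable_on_this_cpu() /* new check */);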

It's good to discuss ideas for busy poll together, but that is out of
the scope of this patch. We can try further optimization on top.
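
For context on the select() side mentioned above: the spinning Peter
refers to lives in do_select(), which in fs/select.c of that era looks
roughly like this (simplified sketch, not the exact source):

	/* inside do_select()'s retry loop, after scanning all fds */
	if (can_busy_loop && !need_resched()) {
		if (!busy_end) {
			busy_end = busy_loop_end_time();
			continue;	/* spin: re-poll every fd */
		}
		if (!busy_loop_timeout(busy_end))
			continue;
	}

	/* otherwise fall through and really sleep */
	if (!poll_schedule_timeout(&table, TASK_INTERRUPTIBLE, to, slack))
		timed_out = 1;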