Date:   Mon, 31 Jul 2017 15:26:41 -0700 (PDT)
From:   Stefano Stabellini <sstabellini@...nel.org>
To:     Boris Ostrovsky <boris.ostrovsky@...cle.com>
cc:     Stefano Stabellini <sstabellini@...nel.org>, jgross@...e.com,
        Stefano Stabellini <stefano@...reto.com>,
        linux-kernel@...r.kernel.org, xen-devel@...ts.xen.org
Subject: Re: [Xen-devel] [PATCH v2 09/13] xen/pvcalls: implement recvmsg

On Thu, 27 Jul 2017, Boris Ostrovsky wrote:
> On 07/26/2017 08:08 PM, Stefano Stabellini wrote:
> > On Wed, 26 Jul 2017, Boris Ostrovsky wrote:
> >>>> +			count++;
> >>>> +		else
> >>>> +			wait_event_interruptible(map->active.inflight_conn_req,
> >>>> +						 pvcalls_front_read_todo(map));
> >>>> +	}
> >>> Should we be using PVCALLS_FRONT_MAX_SPIN here? In sendmsg it is
> >>> counting non-sleeping iterations, but here we are sleeping, so
> >>> PVCALLS_FRONT_MAX_SPIN (5000) may take a while.
> >>>
> >>> In fact, why shouldn't this waiting be a function of MSG_DONTWAIT?
> >> err, which it already is. But the question still stands (except for
> >> MSG_DONTWAIT).
> > The code (admittedly unintuitive) is busy-looping (non-sleeping) for
> > 5000 iterations *before* attempting to sleep. So in that regard, recvmsg
> > and sendmsg use PVCALLS_FRONT_MAX_SPIN in the same way: only for
> > non-sleeping iterations.
> >
> 
> OK.
> 
> Why not go directly into wait_event_interruptible()? I see you write in
> the commit message
> 
> If not enough data is available on the ring, rather than returning
> immediately or sleep-waiting, spin for up to 5000 cycles. This small
> optimization turns out to improve performance and latency significantly.
> 
> 
> Is this because of scheduling latency? I think this should be
> mentioned not just in the commit message but also as a comment in
> the code.

It tries to mitigate scheduling latencies on both ends (dom0 and domU)
when the ring buffer is the bottleneck, i.e. on high-bandwidth
connections. But to be honest, the spin is mostly beneficial in the
sendmsg case: in recvmsg it also introduces a busy-wait under normal
circumstances, when no data is available at all. I confirmed this with
a quick iperf test. I'll remove the spin from recvmsg and keep it in
sendmsg.
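
For sendmsg I'll also add the comment you asked for. The loop would
look roughly like this (just a sketch: I'm assuming a write-side
helper pvcalls_front_write_todo() mirroring the read helper quoted
above, and reusing the same wait queue; the actual code may differ):

	count = 0;
	while (!pvcalls_front_write_todo(map)) {
		/*
		 * Spin for up to PVCALLS_FRONT_MAX_SPIN iterations
		 * before sleeping: this mitigates scheduling latencies
		 * on both ends (dom0 and domU) when the ring buffer is
		 * the bottleneck on high-bandwidth connections.
		 */
		if (count < PVCALLS_FRONT_MAX_SPIN)
			count++;
		else
			wait_event_interruptible(map->active.inflight_conn_req,
						 pvcalls_front_write_todo(map));
	}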


> 
> (I also think it's not "not enough data" but rather "no data"?)

You are right, I'll fix the commit message.
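
With the spin removed, the recvmsg wait path becomes simply (again a
sketch, reusing the names from the snippet quoted above):

	while (!(flags & MSG_DONTWAIT) && !pvcalls_front_read_todo(map))
		wait_event_interruptible(map->active.inflight_conn_req,
					 pvcalls_front_read_todo(map));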
