Date:	Mon, 29 Mar 2010 12:24:51 -0500
From:	Brandon Black <blblack@...il.com>
To:	Chris Friesen <cfriesen@...tel.com>
Cc:	Arnaldo Carvalho de Melo <acme@...hat.com>,
	linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: behavior of recvmmsg() on blocking sockets

On Mon, Mar 29, 2010 at 11:18 AM, Chris Friesen <cfriesen@...tel.com> wrote:
>
> prev = current time
> loop forever
>        cur = current time
>        timeout = max_latency - (cur - prev)
>        recvmmsg(timeout)
>        process all received messages
>        prev = cur
>
>
> Basically you determine the max latency you're willing to wait for a
> packet to be handled, then subtract the amount of time you spent
> processing messages from that and pass it into the recvmmsg() call as
> the timeout.  That way no messages will be delayed for longer than the
> max latency. (Not considering scheduling delays.)
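
For concreteness, here is roughly how I'd spell that loop out in C
(sketch only; fd, the mmsghdr/iovec setup, and the 5 ms budget are all
made up for the example):

#define _GNU_SOURCE
#include <sys/socket.h>
#include <time.h>

#define VLEN           64              /* arbitrary batch size */
#define MAX_LATENCY_NS 5000000LL       /* 5 ms budget, arbitrary */

static long long now_ns(void)
{
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

/* msgvec is assumed to have been set up with buffers/iovecs by the caller */
void rx_loop(int fd, struct mmsghdr *msgvec)
{
        long long prev = now_ns();

        for (;;) {
                long long cur = now_ns();
                long long budget = MAX_LATENCY_NS - (cur - prev);

                if (budget < 0)
                        budget = 0;

                struct timespec timeout = {
                        .tv_sec  = budget / 1000000000LL,
                        .tv_nsec = budget % 1000000000LL,
                };

                int n = recvmmsg(fd, msgvec, VLEN, 0, &timeout);
                if (n > 0) {
                        /* process_messages(msgvec, n); -- application code */
                }
                prev = cur;
        }
}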

With a blocking socket, you'd also need to set SO_RCVTIMEO on the
underlying socket to some value that makes sense and is below your max
latency, because recvmmsg()'s timeout argument is only checked between
the underlying recvmsg() calls, not during them.  You're going to spend
a lot of time spinning if max_latency is low and there are any gaps in
the input stream, though.  I guess this must make sense for some uses.
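
Something along these lines (sketch only; the 1 ms value is arbitrary,
the point is just that it has to sit below the overall budget):

#include <stdio.h>
#include <sys/socket.h>
#include <sys/time.h>

/* Cap how long any single blocking receive inside recvmmsg() can stall. */
void set_rcv_timeout(int fd)
{
        struct timeval tv = { .tv_sec = 0, .tv_usec = 1000 };   /* 1 ms */

        if (setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)) < 0)
                perror("setsockopt(SO_RCVTIMEO)");
}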

The other potential usage is with non-blocking sockets, in which case
the timeout argument puts an upper bound on how long recvmmsg() can
spend pulling packets off the queue before it must return, even if
more packets are available.  For a given kernel and hardware it seems
like you could accomplish the same thing by tuning the vlen argument.
In either case, though, if you're hitting your hard latency limit on a
non-blocking fetch and there are already more packets waiting, you're
probably (at least) verging on being unable to meet the latency
requirement for (at least) some of your packets, simply because the
CPU can't keep up with the workload.
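
Roughly this shape (sketch; drain_once() is a made-up name, and vlen is
the knob I mean above):

#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>

/* Non-blocking flavor: MSG_DONTWAIT makes recvmmsg() return with
 * whatever is already queued instead of sleeping, and vlen caps the
 * batch size.
 */
int drain_once(int fd, struct mmsghdr *msgvec, unsigned int vlen)
{
        int n = recvmmsg(fd, msgvec, vlen, MSG_DONTWAIT, NULL);

        if (n < 0 && errno != EAGAIN && errno != EWOULDBLOCK)
                perror("recvmmsg");
        return n;
}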

-- Brandon
