Message-ID: <570FCCC1.6090504@gmail.com>
Date:	Thu, 14 Apr 2016 14:00:49 -0300
From:	Marcelo Ricardo Leitner <marcelo.leitner@...il.com>
To:	Neil Horman <nhorman@...driver.com>,
	David Miller <davem@...emloft.net>
Cc:	netdev@...r.kernel.org, vyasevich@...il.com,
	linux-sctp@...r.kernel.org, David.Laight@...LAB.COM,
	jkbs@...hat.com
Subject: Re: [PATCH v3 0/2] sctp: delay calls to sk_data_ready() as much as
 possible

On 14-04-2016 10:03, Neil Horman wrote:
> On Wed, Apr 13, 2016 at 11:05:32PM -0400, David Miller wrote:
>> From: Marcelo Ricardo Leitner <marcelo.leitner@...il.com>
>> Date: Fri,  8 Apr 2016 16:41:26 -0300
>>
>>> 1st patch is a preparation for the 2nd. The idea is to not call
>>> ->sk_data_ready() for every data chunk processed while processing
>>> packets but only once before releasing the socket.
>>>
>>> v2: patchset re-checked, small changelog fixes
>>> v3: on patch 2, make use of local vars to make it more readable
>>
>> Applied to net-next, but isn't this reduced overhead coming at the
>> expense of latency?  What if that lower latency is important to the
>> application and/or consumer?
> That's a fair point, but I'd make the counter argument that, as it currently
> stands, any latency introduced (or removed) is an artifact of our
> implementation rather than a designed feature of it.  That is to say, we make no
> guarantees at the application level regarding how long it takes to signal data
> readiness from the time we get data off the wire, so I would rather see our
> throughput raised if we can, as that's been sctp's more pressing Achilles' heel.
>
>
> That's not to say I wouldn't like to enable lower latency, but I'd rather have
> this now, and start pondering how to design that in.  Perhaps we can convert
> the pending flag to a counter to count the number of events we enqueue, and
> call sk_data_ready every time we reach a sysctl-defined threshold.

That, and also that there is no chance of the application reading the 
first chunks before all the pending work is performed by either the bh or 
backlog handlers for that packet. The socket lock won't be cycled in between 
chunks, so the application is going to wait for all the processing one way 
or another.

Thanks,
Marcelo
