Date:	Wed, 04 Mar 2009 00:16:46 -0800 (PST)
From:	David Miller <davem@...emloft.net>
To:	dada1@...mosbay.com
Cc:	kchang@...enacr.com, netdev@...r.kernel.org,
	cl@...ux-foundation.org
Subject: Re: Multicast packet loss

From: Eric Dumazet <dada1@...mosbay.com>
Date: Sat, 28 Feb 2009 09:51:11 +0100

> David, this is a preliminary work, not meant for inclusion as is,
> comments are welcome.
> 
> [PATCH] net: sk_forward_alloc becomes an atomic_t
> 
> Commit 95766fff6b9a78d11fc2d3812dd035381690b55d
> (UDP: Add memory accounting) introduced a regression for high-rate UDP flows,
> because of the extra lock_sock() in udp_recvmsg().
> 
> In order to reduce need for lock_sock() in UDP receive path, we might need
> to declare sk_forward_alloc as an atomic_t.
> 
> udp_recvmsg() can avoid a lock_sock()/release_sock() pair.
> 
> Signed-off-by: Eric Dumazet <dada1@...mosbay.com>

This adds new overhead for TCP, which has to hold the socket
lock for other reasons in these paths anyway.

I don't get how an atomic_t operation is cheaper than a
lock_sock()/release_sock() pair.  Is it the case that in many
executions of these paths only atomic_read()s are necessary?

I actually think this scheme is racy.  There is a reason we
have to hold the socket lock when doing memory scheduling.
Two threads can get in there and say "hey I have enough space
already" even though only enough space is allocated for one
of their requests.
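The check-then-act race described above can be sketched with C11 atomics. This is an illustrative user-space sketch, not kernel code: reserve_racy() and reserve_cas() are hypothetical names, and the budget stands in for sk_forward_alloc. A bare atomic_load() followed by a separate subtraction lets two threads both see "enough space" and over-commit; folding the check and the reservation into one compare-and-swap loop closes that window.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative budget of pre-charged bytes; stands in for sk_forward_alloc. */
static atomic_int forward_alloc = 4096;

/* Racy check-then-act: two threads can both pass the check ("hey I have
 * enough space already") before either subtracts, driving the budget
 * negative even though every individual operation is atomic. */
static bool reserve_racy(int size)
{
	if (atomic_load(&forward_alloc) >= size) {
		atomic_fetch_sub(&forward_alloc, size);
		return true;
	}
	return false;
}

/* Safe variant: a CAS loop makes the check and the reservation a single
 * atomic step, so a concurrent reservation forces a re-check. */
static bool reserve_cas(int size)
{
	int old = atomic_load(&forward_alloc);

	do {
		if (old < size)
			return false;
	} while (!atomic_compare_exchange_weak(&forward_alloc,
					       &old, old - size));
	return true;
}
```

This only shows why a plain atomic_read() check is insufficient; whether a CAS loop in the hot path actually beats the existing lock_sock() is exactly the cost question raised above.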

What did I miss? :)

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
