Open Source and information security mailing list archives
 
Date:	Wed, 29 Nov 2006 22:30:55 -0800 (PST)
From:	David Miller <davem@...emloft.net>
To:	mingo@...e.hu
Cc:	wenji@...l.gov, akpm@...l.org, netdev@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [patch 1/4] - Potential performance bottleneck for Linux TCP

From: Ingo Molnar <mingo@...e.hu>
Date: Thu, 30 Nov 2006 07:17:58 +0100

> 
> * David Miller <davem@...emloft.net> wrote:
> 
> > We can make explicitl preemption checks in the main loop of 
> > tcp_recvmsg(), and release the socket and run the backlog if 
> > need_resched() is TRUE.
> > 
> > This is the simplest and most elegant solution to this problem.
> 
> yeah, i like this one. If the problem is "too long locked section", then
> the most natural solution is to "break up the lock", not to "boost the 
> priority of the lock-holding task" (which is what the proposed patch 
> does).

Ingo, you've mis-read the problem :-)

The issue is that we actually don't hold any locks that prevent
preemption, so we can be preempted at points the TCP code wasn't
designed with in mind.

Normally, we control the sleep point very carefully in the TCP
sendmsg/recvmsg code, such that when we sleep we drop the socket
lock and process the backlog packets that accumulated while the
socket was locked.

With pre-emption we can't control that properly.

The problem is that we really do need to run the backlog any time
we give up the cpu in the sendmsg/recvmsg path, or things get really
erratic.  ACKs don't go out as early as we'd like them to, etc.

It perhaps isn't easy to do generically, because we can only drop
the socket lock at certain points, and we have to drop it in order
to run the backlog.

This is why my suggestion is to preempt_disable() as soon as we
grab the socket lock, and explicitly test need_resched() at places
where it is absolutely safe, like this:

	if (need_resched()) {
		/* Run packet backlog... */
		release_sock(sk);
		/* Re-enable preemption before sleeping, since we
		 * disabled it when we took the socket lock, else
		 * schedule() complains about an atomic context.
		 */
		preempt_enable();
		schedule();
		preempt_disable();
		lock_sock(sk);
	}

The socket lock is just a by-hand binary semaphore, so it doesn't
block pre-emption.  We have to be able to sleep while holding it.
