lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-Id: <20071207.173201.95379610.davem@davemloft.net>
Date:	Fri, 07 Dec 2007 17:32:01 -0800 (PST)
From:	David Miller <davem@...emloft.net>
To:	ilpo.jarvinen@...sinki.fi
Cc:	lachlan.andrew@...il.com, netdev@...r.kernel.org,
	quetchen@...tech.edu
Subject: Re: [RFC] TCP illinois max rtt aging

From: "Ilpo_Järvinen" <ilpo.jarvinen@...sinki.fi>
Date: Fri, 7 Dec 2007 15:05:59 +0200 (EET)

> On Fri, 7 Dec 2007, David Miller wrote:
> 
> > From: "Ilpo_Järvinen" <ilpo.jarvinen@...sinki.fi>
> > Date: Fri, 7 Dec 2007 13:05:46 +0200 (EET)
> > 
> > > I guess if you get a large cumulative ACK, the amount of processing is 
> > > still overwhelming (added DaveM if he has some idea how to combat it).
> > > 
> > > Even a simple scenario (this isn't anything fancy at all, will occur all 
> > > the time): Just one loss => rest skbs grow one by one into a single 
> > > very large SACK block (and we do that efficiently for sure) => then the 
> > > fast retransmit gets delivered and a cumulative ACK for whole orig_window 
> > > arrives => clean_rtx_queue has to do a lot of processing. In this case we 
> > > could optimize RB-tree cleanup away (by just blanking it all) but still 
> > > getting rid of all those skbs is going to take a larger moment than I'd 
> > > like to see.
> > > 
> > > That tree blanking could be extended to cover anything which ACK more than 
> > > half of the tree by just replacing the root (and dealing with potential 
> > > recolorization of the root).
> > 
> > Yes, it's the classic problem.  But it ought to be at least
> > partially masked when TSO is in use, because we'll only process
> > a handful of SKBs.  The more effectively TSO batches, the
> > less work clean_rtx_queue() will do.
> 
> No, that's not what is going to happen, TSO won't help at all
> because one-by-one SACKs will fragment every single one of them
> (see tcp_match_skb_to_sack) :-(. ...So we're back in non-TSO
> case, or am I missing something?

You're of course right, and it's ironic that I wrote the SACK
splitting code so I should have known this :-)

A possible approach just occurred to me wherein we maintain
the SACK state external to the SKBs so that we don't need to
mess with them at all.

That would allow us to eliminate the TSO splitting but it would
not remove the general problem of clean_rtx_queue()'s overhead.

I'll try to give some thought to this over the weekend.
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ