Message-ID: <1448380194.22599.303.camel@edumazet-glaptop2.roam.corp.google.com>
Date:	Tue, 24 Nov 2015 07:49:54 -0800
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Florian Westphal <fw@...len.de>
Cc:	David Miller <davem@...emloft.net>, tom@...bertland.com,
	hannes@...essinduktion.org, netdev@...r.kernel.org,
	kernel-team@...com, davewatson@...com, alexei.starovoitov@...il.com
Subject: Re: [PATCH net-next 0/6] kcm: Kernel Connection Multiplexor (KCM)

On Tue, 2015-11-24 at 16:27 +0100, Florian Westphal wrote:
> David Miller <davem@...emloft.net> wrote:
> > From: Tom Herbert <tom@...bertland.com>
> > Date: Mon, 23 Nov 2015 09:33:44 -0800
> > 
> > > The TCP PSH flag is not defined for message delineation (neither is
> > > urgent pointer). We can't change that (many people have tried to add
> > > message semantics to TCP protocol but have always failed miserably).
> >
> > Agreed.
> >
> > My only gripe with kcm right now is a lack of a native sendpage.
> 
> Aside from Hannes comment -- KCM seems to be tied to the TLS work, i.e.
> I have the impression that KCM without ability to do TLS in the kernel
> is pretty much useless for whatever use case Tom has in mind.
> 
> And that ktls thing just gives me the creeps.
> 
> For KCM itself I don't even get the use case -- its in the 'yeah, you
> can do that, but... why?' category 8-/

Note that I also played with a similar idea in the TCP stack, trying to
wake up the receiver only when a full RPC was present in the receive queue.

(When dealing with our internal Google RPC format, we can easily delimit
RPC boundaries)
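
(The internal RPC format referred to above is not shown in this thread. As a
hedged illustration only, a 4-byte big-endian length prefix is one common way
an application-level protocol can delimit message boundaries on a TCP stream;
the function name and framing here are hypothetical, not Google's actual
format.)

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical framing: each RPC is a 4-byte big-endian payload length
 * followed by the payload. Given the bytes buffered so far, return the
 * total size of the first complete message (header + payload), or 0 if
 * more data must arrive before the boundary is known.
 */
static size_t complete_msg_len(const uint8_t *buf, size_t have)
{
	uint32_t payload_len;

	if (have < 4)
		return 0;	/* length header not complete yet */

	payload_len = ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16) |
		      ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];

	if (have < 4 + (size_t)payload_len)
		return 0;	/* payload not complete yet */

	return 4 + (size_t)payload_len;
}
```

A receiver would call this on its buffered bytes after each read() and only
hand a message up once the return value is nonzero.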

But in the end, latencies were higher, because the application had to
copy the full message from kernel to user space (read()) in one go. Whereas if
you wake the application up for every incoming GRO packet, we prefill the cpu
caches, and the last read() only has to copy the remaining part and
benefits from hot caches (up-to-date RFS state, the TCP socket structure, but
also data in the application).

I focused on making TSO/GRO more effective, and had better results.

One nice idea was to set the PSH flag on every TSO packet we send.
(A one-line patch in the TCP sender.)

This had the nice effect of keeping the number of flows in the receiver's
GRO engine as small as possible, avoiding the evictions we perform when
we reach 8 flows per RX queue. (This is because the PSH flag tells GRO to
immediately complete the held packet and deliver it to the upper stacks.)

-> Less cpu spent in the GRO engine, as the gro_list is kept small, and
better aggregation efficiency.
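
(A toy model of the flush behavior described above, nothing like the kernel's
actual GRO code; the struct and function names are invented, and
GRO_MAX_FLOWS just mirrors the "8 flows per RX queue" limit mentioned in the
mail. It shows why segments carrying PSH keep the held-flow list empty, while
segments without PSH fill the list until evictions start.)

```c
#include <stdbool.h>

#define GRO_MAX_FLOWS 8		/* per-RX-queue limit from the mail */

/* Toy aggregator: at most one held packet per flow. */
struct toy_gro {
	int flow_id[GRO_MAX_FLOWS];
	int bytes[GRO_MAX_FLOWS];
	int nflows;
	int flushes;		/* packets completed to the upper stack */
};

/* Complete (and forget) the flow held in slot i. */
static void toy_flush(struct toy_gro *g, int i)
{
	g->flushes++;
	g->nflows--;
	g->flow_id[i] = g->flow_id[g->nflows];
	g->bytes[i] = g->bytes[g->nflows];
}

/* Receive one segment; psh forces immediate completion of its flow. */
static void toy_receive(struct toy_gro *g, int flow, int len, bool psh)
{
	int i;

	for (i = 0; i < g->nflows; i++)
		if (g->flow_id[i] == flow)
			break;

	if (i == g->nflows) {
		if (g->nflows == GRO_MAX_FLOWS)
			toy_flush(g, 0);	/* list full: evict a flow */
		i = g->nflows;
		g->flow_id[i] = flow;
		g->bytes[i] = 0;
		g->nflows++;
	}

	g->bytes[i] += len;
	if (psh)
		toy_flush(g, i);	/* PSH: complete immediately */
}
```

Feeding 9 distinct flows without PSH fills the 8 slots and forces an
eviction; with PSH set on every segment, each one completes at once and the
list never grows.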



