Date:   Tue, 26 Mar 2019 10:01:14 -0700
From:   Alexei Starovoitov <alexei.starovoitov@...il.com>
To:     Eric Dumazet <eric.dumazet@...il.com>
Cc:     brakmo <brakmo@...com>, netdev <netdev@...r.kernel.org>,
        Martin Lau <kafai@...com>, Alexei Starovoitov <ast@...com>,
        Daniel Borkmann <daniel@...earbox.net>,
        Kernel Team <Kernel-team@...com>
Subject: Re: [PATCH bpf-next 0/7] bpf: Propagate cn to TCP

On Tue, Mar 26, 2019 at 08:43:11AM -0700, Eric Dumazet wrote:
> 
> 
> On 03/26/2019 08:07 AM, Alexei Starovoitov wrote:
> 
> > so after 20+ years linux qdisc design is wrong?
> 
> Yes, that is how it is: return values cannot be propagated back to the TCP stack in all cases.
> 
> When a packet is queued to Qdisc 1, there is no way we can return
> a value that can represent what the packet becomes when dequeued later and queued into Qdisc 2.
> 
> Also, some qdiscs take their drop decision later (e.g. codel and fq_codel), so ->enqueue() will
> return success, which might be a lie.

root and children qdiscs propagate the return value already.
different unrelated qdiscs are just like different physical switches.
they can communicate only via an on-the-wire protocol.
nothing wrong with that.
But not everything can or should communicate over the wire.

> > bpf is about choice. We have to give people tools to experiment even
> > when we philosophically disagree on the design.
> 
> Maybe, but I feel that for the moment, the choice is only for FB, and the rest
> of the world has to re-invent private eBPF code in order to benefit from all of this.

Clearly a misunderstanding. Please see samples/bpf/hbm_out_kern.c
The source code of the bpf program is not only public, but the algorithm is also well documented.

> I doubt many TCP users will have the skills/money to benefit from this.

the majority of bpf features are actively used, and often not by the authors
who introduced them.

> Meanwhile, we as a community have to maintain a TCP/IP stack with added hooks and complexity.

your herculean effort to keep the tcp stack in excellent shape
is greatly appreciated. No doubt about that.

> It seems TCP stack became a playground for experiments.

exactly.
The next tcp congestion control algorithm should be implementable in bpf.
Had the kernel been extensible to that degree, there would likely have been
no need to bypass it and invent QUIC.
