Date:   Mon, 19 Sep 2022 17:41:05 -0400
From:   Demi Marie Obenour <>
To:     Elliott Mitchell <>
Cc:     Xen developer discussion <>,
Subject: Re: Layer 3 (point-to-point) netfront and netback drivers

On Mon, Sep 19, 2022 at 01:46:59PM -0700, Elliott Mitchell wrote:
> On Sun, Sep 18, 2022 at 08:41:25AM -0400, Demi Marie Obenour wrote:
> > How difficult would it be to provide layer 3 (point-to-point) versions
> > of the existing netfront and netback drivers?  Ideally, these would
> > share almost all of the code with the existing drivers, with the only
> > difference being how they are registered with the kernel.  Advantages
> > compared to the existing drivers include less attack surface (since the
> > peer is no longer network-adjacent), slightly better performance, and no
> > need for ARP or NDP traffic.
> I've actually been wondering about a similar idea.  How about breaking
> the entire network stack off and placing /that/ in a separate VM?

This is going to be very hard to do without extensive and difficult
changes to applications.  Switching to layer 3 links is a much smaller
change that should be transparent to applications.

> One use for this is a VM could be constrained to *exclusively* have
> network access via Tor.  This would allow a better hidden service as it
> would have no network topology knowledge.

That is great in theory, but in practice programs will expect to use
network protocols to connect to Tor.  Whonix already implements this
with the current Xen netfront/netback.

> The other use is network cards which are increasingly able to handle more
> of the network stack.  The Linux network team have been resistant to
> allowing more offloading, so perhaps it is time to break *everything*
> off.

Do you have any particular examples?  The only one I can think of is
that Linux is not okay with TCP offload engines.

> I'm unsure the benefits would justify the effort, but I keep thinking of
> this as the solution to some interesting issues.  Filtering becomes more
> interesting, but BPF could work across VMs.

Classic BPF perhaps, but eBPF's attack surface is far too large for this
to be viable.  Unprivileged eBPF is already disabled by default.
-- 
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

