Message-Id: <1445284997.621186.414538017.6E35B341@webmail.messagingengine.com>
Date:	Mon, 19 Oct 2015 22:03:17 +0200
From:	Hannes Frederic Sowa <hannes@...essinduktion.org>
To:	Alexei Starovoitov <ast@...mgrid.com>,
	Daniel Borkmann <daniel@...earbox.net>,
	"Eric W. Biederman" <ebiederm@...ssion.com>
Cc:	davem@...emloft.net, viro@...IV.linux.org.uk, tgraf@...g.ch,
	netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
	Alexei Starovoitov <ast@...nel.org>
Subject: Re: [PATCH net-next 3/4] bpf: add support for persistent maps/progs

Hi Alexei,

On Mon, Oct 19, 2015, at 21:34, Alexei Starovoitov wrote:
> On 10/19/15 11:46 AM, Hannes Frederic Sowa wrote:
> > Hi,
> >
> > On Mon, Oct 19, 2015, at 20:15, Alexei Starovoitov wrote:
> >> On 10/19/15 10:37 AM, Daniel Borkmann wrote:
> >>> Loading or destroying an eBPF program or map is *not* by any means to
> >>> be considered fast-path. We currently hold a global mutex during loading.
> >>> So, how can that be considered fast-path? Similarly, socket creation/
> >>> destruction is also not fast-path, etc. Do you expect that applications
> >>> would create/destroy these devices within milliseconds? I'd argue that
> >>> something would be seriously wrong with that application, then. Such
> >>> persistent maps are to be considered rather mid-to-long-lived objects in
> >>> the system. The fast path surely is their data path.
> >>
> >> Consider seccomp, which loads several programs for every browser tab,
> >> and the container use case, where we're loading two for each as well.
> >> Who knows what use cases will come up in the future.
> >> It's obviously not a fast path that is hit a million times a second,
> >> but why add overhead when it's unnecessary?
> >
> > But the browser using seccomp does not need persistent maps at all. It
> 
> today it doesn't, but if it were a lightweight feature, it could
> be used much more broadly.
> Currently we're using a user agent for networking and it's a pain.

I doubt it will stay a lightweight feature, as it should not be the
responsibility of user space to provide those debug facilities.

> >>>> completely unnecessary here. The kernel will consume more memory for
> >>>> no real reason other than that cdevs are used to keep progs/maps around.
> >>>
> >>> I don't consider this a big issue, and well worth the trade-off. You'll
> >>> have an infrastructure that integrates *nicely* into the *existing* kernel
> >>> model *and* tooling with the proposed patch. This is a HUGE plus. The
> >>> UAPI of this is simple and minimal. And to me, these are in-fact special
> >>> files, not regular ones.
> >>
> >> Seriously? A syscall to create/destroy cdevs is a nice API?
> >
> > It is pretty normal to do. They don't use a syscall but an ioctl on a
> > special pseudo device node (lvm2/device-mapper, tun, etc.). Same thing, a
> > very overloaded syscall. Also I don't see a reason not to make the creation
> > possible via sysfs, which would be even nicer, but that is not orthogonal
> > to the current creation of maps, which is nowadays set in stone by uapi.
> 
> Isn't your point going against cdev? ioctls on devs are overloaded,
> whereas here we have a clean bpf syscall with no extra baggage.
> Going old-school to the device model goes against that clean syscall
> approach.

The bpf syscall is still used to create the pseudo nodes. If they are to
be persistent, they are simply registered in the sysfs class hierarchy.
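
To make that concrete, here is a rough user space sketch of both
creation patterns. TUNSETIFF and BPF_MAP_CREATE exist today; the
BPF_MAP_PERSIST step is purely hypothetical and only marks where the
sysfs registration could hook in:

/*
 * Rough sketch of both creation patterns. TUNSETIFF and
 * BPF_MAP_CREATE are existing APIs; BPF_MAP_PERSIST below is
 * hypothetical and only marks where persistence could hook in.
 */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/if.h>
#include <linux/if_tun.h>
#include <linux/bpf.h>

/* Old-school pattern: overloaded ioctl on a pseudo device node. */
static int create_tun(void)
{
	struct ifreq ifr;
	int fd = open("/dev/net/tun", O_RDWR);

	if (fd < 0)
		return -1;
	memset(&ifr, 0, sizeof(ifr));
	ifr.ifr_flags = IFF_TUN | IFF_NO_PI;	/* kernel picks the name */
	if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}

/* bpf syscall pattern: one multiplexed but dedicated syscall. */
static int create_map(void)
{
	union bpf_attr attr;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.map_type    = BPF_MAP_TYPE_HASH;
	attr.key_size    = sizeof(int);
	attr.value_size  = sizeof(long);
	attr.max_entries = 1024;

	fd = syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));

	/* Hypothetical second step: register the fd in the sysfs class
	 * hierarchy so the map outlives the process, e.g.
	 * syscall(__NR_bpf, BPF_MAP_PERSIST, ...) (not a real command). */
	return fd;
}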

> > sysfs already has infrastructure to ensure supportability and
> > debuggability. Dependency graphs can be created to see which programs
> > depend on which bpf_progs. kobjects are singletons in the sysfs
> > representation. If you have multiple ebpf filesystems, maybe even
> > referencing the same hashtable with gigabytes of data multiple times,
> > there needs to be some way to help administrators check resource
> > usage and statistics, and to tweak and tune the rhashtable. All this
> > needs to be handled in the future as well. It doesn't really fit the
> > filesystem model, but representing a kobject seems to be a good fit to me.
> 
> quite the opposite. The way the cdev patch is done now, there is no way
> to extend it to create a hierarchy without breaking users,
> whereas the fs style in Daniel's initial patch can be extended.
> Doing 'resource stats' via sysfs requires bpf to be added to sysfs, which
> is not what this cdev approach does.

This is not yet part of the patch, but I think this would be added.
Daniel?

> > Policy can be done by user space with the help of udev. SELinux policies
> > can easily be extended to allow specific domains access. Namespace
> > "passthrough" is already defined for devices.
> 
> that's an interesting point, but isn't it better done with fs?
> You can go much more fine-grained with directory permissions in fs,
> with different users/namespaces having their own hierarchies and mounts,
> whereas cdev dumps everything into one spot.

Policy can already be defined in terms of cgroups:
<http://lxr.free-electrons.com/source/Documentation/cgroups/devices.txt>
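
As a minimal sketch following that document (10:236 and the cgroup
name are placeholders, not real assignments), whitelisting such a
device for one group is a single write to devices.allow:

/*
 * Sketch: whitelist a bpf char device for one devices cgroup, as per
 * Documentation/cgroups/devices.txt. The major:minor 10:236 and the
 * cgroup name are placeholders, not real assignments.
 */
#include <stdio.h>

static int allow_bpf_cdev(const char *cgroup)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/fs/cgroup/devices/%s/devices.allow", cgroup);
	f = fopen(path, "w");
	if (!f)
		return -1;
	/* Format is "type major:minor access", here read/write on a
	 * character device: */
	fprintf(f, "c 10:236 rw\n");
	return fclose(f);
}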

I don't think there are broad differences. But in case a namespace uses a
huge number of maps with tons of data, the admin in the initial
namespace might want to debug that without searching all mountpoints and
finding dependencies between processes, etc. IMHO the sysfs approach can
be extended better here.
 
> >> where do you see 'over-design' in fs? It's straightforward code and
> >> most of it is boilerplate like in all other filesystems.
> >> Kill the rmdir/mkdir ops, the term bit, and the options, and the fs
> >> diff will shrink by another 200+ lines.
> >
> > I don't think that only a filesystem will do it in the foreseeable
> > future. You want tools like lsof to report which map and which
> > program hold references to each other. Those file nodes will need more
> > metadata attached in the future anyway, so just comparing lines of
> > code does not seem to be a good way of arguing.
> 
> my point was that both have roughly the same number, but the lines of
> code will grow with either approach, and since cdev starts as a hack, I
> can only see more hacks added in the future, whereas fs gives us
> full flexibility to do any type of file access and user-visible
> representation.
> Also I don't buy the point about reinventing sysfs. bpffs is not doing
> sysfs. I don't want to see _every_ bpf object in sysfs. It's way too
> much overhead. Classic doesn't have sysfs and everyone has been
> using it just fine.

But classic bpf does not have persistence for maps and data. ;) There is
a 1:1 relationship between a socket and a bpf_prog, for example.
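
For comparison, a classic filter is attached with setsockopt() and is
torn down together with its socket, which is exactly why persistence
never came up there. A minimal "accept everything" example:

/*
 * Classic BPF for comparison: the filter below is attached to exactly
 * one socket and is destroyed with it, hence the 1:1 lifetime and no
 * need for persistence.
 */
#include <sys/socket.h>
#include <linux/filter.h>

static int attach_classic_filter(int sock)
{
	struct sock_filter code[] = {
		BPF_STMT(BPF_RET | BPF_K, 0xffffffff),	/* accept packet */
	};
	struct sock_fprog prog = {
		.len	= sizeof(code) / sizeof(code[0]),
		.filter	= code,
	};

	return setsockopt(sock, SOL_SOCKET, SO_ATTACH_FILTER,
			  &prog, sizeof(prog));
}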

> bpffs is solving the need to 'pin_fd' when user process exits.
> That's it. Let's solve that and not go into any of this sysfs stuff.

But how can the filesystem be extended in terms of tunables and
information? File attributes? Wouldn't it otherwise need the same
infrastructure as sysfs? Some third-party lookup filesystem or ioctls?
This char dev approach also pins maps and progs, while putting more
policy into the hands of the central user space programs we are
currently using (udev, systemd, whatever, etc.).
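
For reference, 'pin_fd' from user space might look roughly like this
under the fs approach; the command number and attr layout below are
invented for illustration and are not taken from any posted patch:

/*
 * Purely hypothetical sketch of a 'pin_fd' style call under the fs
 * approach. Command number and attr layout are invented here.
 */
#include <stdint.h>
#include <unistd.h>
#include <sys/syscall.h>

#define BPF_PIN_FD_HYPOTHETICAL	100	/* invented command number */

struct pin_attr {			/* invented attr layout */
	uint32_t fd;			/* map or prog fd to keep alive */
	uint64_t pathname;		/* e.g. pointer to "/sys/fs/bpf/map0" */
};

static int pin_fd(int fd, const char *path)
{
	struct pin_attr attr = {
		.fd		= (uint32_t)fd,
		.pathname	= (uint64_t)(uintptr_t)path,
	};

	/* Would bind 'fd' to 'path' so the object survives process exit. */
	return syscall(__NR_bpf, BPF_PIN_FD_HYPOTHETICAL, &attr,
		       sizeof(attr));
}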

Bye,
Hannes