Message-ID: <562545AA.2080207@plumgrid.com>
Date:	Mon, 19 Oct 2015 12:34:02 -0700
From:	Alexei Starovoitov <ast@...mgrid.com>
To:	Hannes Frederic Sowa <hannes@...essinduktion.org>,
	Daniel Borkmann <daniel@...earbox.net>,
	"Eric W. Biederman" <ebiederm@...ssion.com>
Cc:	davem@...emloft.net, viro@...IV.linux.org.uk, tgraf@...g.ch,
	netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
	Alexei Starovoitov <ast@...nel.org>
Subject: Re: [PATCH net-next 3/4] bpf: add support for persistent maps/progs

On 10/19/15 11:46 AM, Hannes Frederic Sowa wrote:
> Hi,
>
> On Mon, Oct 19, 2015, at 20:15, Alexei Starovoitov wrote:
>> On 10/19/15 10:37 AM, Daniel Borkmann wrote:
>>> An eBPF program or map loading/destruction is *not* by any means to be
>>> considered fast-path. We currently hold a global mutex during loading.
>>> So, how can that be considered fast-path? Similarly, socket creation/
>>> destruction is also not fast-path, etc. Do you expect that applications
>>> would create/destroy these devices within milliseconds? I'd argue that
>>> something would be seriously wrong with that application, then. Such
>>> persistent maps are to be considered rather mid-long living objects in
>>> the system. The fast-path surely is the data-path of them.
>>
>> Consider seccomp, which loads several programs for every tab, then the
>> container use case where we're loading two for each as well.
>> Who knows what use cases will come up in the future.
>> It's obviously not a fast path that is being hit million times a second,
>> but why add overhead when it's unnecessary?
>
> But the browser using seccomp does not need persistent maps at all. It

Today it doesn't, but if it were a lightweight feature, it could
have been used much more broadly.
Currently we're using a user agent for networking and it's a pain.

>>>> completely unnecessary here. The kernel will consume more memory for
>>>> no real reason other than that cdevs are used to keep progs/maps around.
>>>
>>> I don't consider this a big issue, and well worth the trade-off. You'll
>>> have an infrastructure that integrates *nicely* into the *existing* kernel
>>> model *and* tooling with the proposed patch. This is a HUGE plus. The
>>> UAPI of this is simple and minimal. And to me, these are in-fact special
>>> files, not regular ones.
>>
>> Seriously? A syscall to create/destroy cdevs is a nice api?
>
> It is pretty normal to do. They don't use syscall but ioctl on a special
> pseudo device node (lvm2/device-mapper, tun, etc.). Same thing, very
> overloaded syscall. Also I don't see a reason to not make the creation
> possible via sysfs, which would be even nicer but not orthogonal to the
> current creation of maps, which is nowadays set in stone by uapi.

Isn't that point an argument against cdev? ioctls on devs are overloaded,
whereas here we have a clean bpf syscall with no extra baggage.
Going old-school with the device model works against that clean syscall
approach.

> sysfs already has infrastructure to ensure supportability and
> debuggability. Dependency graphs can be created to see which programs
> depend on which bpf_progs. kobjects are singletons in sysfs
> representation. If you have multiple ebpf filesystems, even maybe
> referencing the same hashtable with gigabytes of data multiple times,
> there needs to be some way to help administrators to check resource
> usage, statistics, tweak and tune the rhashtable. All this needs to be
> handled as well in the future. It doesn't really fit the filesystem
> model, but representing a kobject seems to be a good fit to me.

Quite the opposite. The way the cdev patch is done now, there is no way
to extend it into a hierarchy without breaking users,
whereas the fs style of Daniel's initial patch can be extended.
Doing 'resource stats' via sysfs would require bpf to hook into sysfs,
which is not what this cdev approach does.

> Policy can be done by user space with help of udev. SELinux policies can
> easily be extended to allow specific domains access. Namespace
> "passthrough" is defined for devices already.

That's an interesting point, but isn't it better done with an fs?
You can get much more fine-grained with directory permissions in an fs,
with different users/namespaces having their own hierarchies and mounts,
whereas cdev dumps everything into one spot.

>> where do you see 'over-design' in fs? It's a straightforward code and
>> most of it is boilerplate like all other fs.
>> Kill rmdir/mkdir ops, term bit and options and the fs diff will shrink
>> by another 200+ lines.
>
> I don't think that only a filesystem will do it in the foreseeable
> future. You want to have tools like lsof reporting which map and which
> program has references to each other. Those file nodes will need more
> metadata attached in the future anyway, so just comparing lines of
> code does not seem to be a good way to argue right now.

My point was that both have roughly the same line count today, but the
lines of code will grow with either approach, and since cdev starts as a
hack I can only see more hacks added in the future, whereas an fs gives
us full flexibility to do any type of file access and user-visible
representation.
Also, I don't buy the point about reinventing sysfs. bpffs is not doing
sysfs. I don't want to see _every_ bpf object in sysfs; that's way too
much overhead. Classic BPF doesn't have sysfs and everyone has been
using it just fine.
bpffs is solving the need to 'pin_fd' when the user process exits.
That's it. Let's solve that and not go into any of this sysfs stuff.

