Message-ID: <56253335.9000206@plumgrid.com>
Date: Mon, 19 Oct 2015 11:15:17 -0700
From: Alexei Starovoitov <ast@...mgrid.com>
To: Daniel Borkmann <daniel@...earbox.net>,
Hannes Frederic Sowa <hannes@...essinduktion.org>,
"Eric W. Biederman" <ebiederm@...ssion.com>
Cc: davem@...emloft.net, viro@...IV.linux.org.uk, tgraf@...g.ch,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
Alexei Starovoitov <ast@...nel.org>
Subject: Re: [PATCH net-next 3/4] bpf: add support for persistent maps/progs
On 10/19/15 10:37 AM, Daniel Borkmann wrote:
> An eBPF program or map loading/destruction is *not* by any means to be
> considered fast-path. We currently hold a global mutex during loading.
> So, how can that be considered fast-path? Similarly, socket creation/
> destruction is also not fast-path, etc. Do you expect that applications
> would create/destroy these devices within milliseconds? I'd argue that
> something would be seriously wrong with that application, then. Such
> persistent maps are to be considered rather mid-long living objects in
> the system. The fast-path surely is the data-path of them.
Consider seccomp, which loads several programs for every browser tab,
and then the container use case, where we're loading two programs for
each container as well. Who knows what use cases will come up in the
future. It's obviously not a fast path that is being hit a million
times a second, but why add overhead when it's unnecessary?
>> completely unnecessary here. The kernel will consume more memory for
>> no real reason other than cdev are used to keep prog/maps around.
>
> I don't consider this a big issue, and well worth the trade-off. You'll
> have an infrastructure that integrates *nicely* into the *existing* kernel
> model *and* tooling with the proposed patch. This is a HUGE plus. The
> UAPI of this is simple and minimal. And to me, these are in-fact special
> files, not regular ones.
Seriously? A syscall to create/destroy cdevs is a nice API?
Not by any means. We can argue in circles, but it doesn't fit.
Using a cdev to hold maps is a hack.
Telling the kernel that a fake device was created only to hold a map?
This fake device doesn't have any of the properties of a device.
Look at fops->open: sure, it's a smart trick, but it's not behaving
like a device.
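To make that concrete, the pattern being discussed is roughly the
following. This is a hypothetical sketch, not the actual patch;
bpf_map_of_cdev() is an invented helper, while replace_fops() and
bpf_map_fops are existing kernel symbols:

  /* The cdev's open() never exposes real device semantics: it digs
   * the bpf_map out of the containing structure and swaps the file
   * ops, so every subsequent operation goes straight to the map's
   * own fops instead of a driver's.
   */
  static int bpf_map_cdev_open(struct inode *inode, struct file *filp)
  {
          struct bpf_map *map = bpf_map_of_cdev(inode->i_cdev); /* invented */

          filp->private_data = map;
          replace_fops(filp, &bpf_map_fops);  /* now it's just the map fd */
          return 0;
  }

In other words, the "device" exists only so that its open() can hand
back something that isn't a device.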
The fs uapi adds only two commands, whereas cdev needs three.
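The two commands boil down to a pin and a get. A minimal userspace
sketch; the command names BPF_OBJ_PIN/BPF_OBJ_GET and the
pathname/bpf_fd attr layout are illustrative here, since the exact
naming is still open in this thread:

  #include <linux/bpf.h>
  #include <string.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  /* pin an existing map/prog fd at a path inside the bpf fs */
  static int bpf_obj_pin(int fd, const char *pathname)
  {
          union bpf_attr attr;

          memset(&attr, 0, sizeof(attr));
          attr.pathname = (__u64)(unsigned long)pathname;
          attr.bpf_fd = fd;
          return syscall(__NR_bpf, BPF_OBJ_PIN, &attr, sizeof(attr));
  }

  /* get a fresh fd back from a previously pinned path */
  static int bpf_obj_get(const char *pathname)
  {
          union bpf_attr attr;

          memset(&attr, 0, sizeof(attr));
          attr.pathname = (__u64)(unsigned long)pathname;
          return syscall(__NR_bpf, BPF_OBJ_GET, &attr, sizeof(attr));
  }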
>> imo fs is cleaner and we can tailor it to be similar to cdev style.
>
> Really, IMHO I think this is over-designed, and much much more hacky. We
> design a whole new file system that works *exactly* like cdevs, takes
> likely more than twice the code and complexity to realize but just to
> save a few bytes ...? I don't understand that.
Let's argue with facts: your fs patch is 758 lines vs. 483 for cdev.
In fs, the cost is a single alloc_inode(), whereas with cdev the
struct cdev and mutex are added to every map and prog (even when
they're never going to be pinned), plus a kobject allocation, plus the
gigantic struct device, plus sysfs inodes, etc. That's way more than a
few bytes of difference.
Where do you see 'over-design' in the fs? It's straightforward code,
and most of it is boilerplate like any other fs.
Kill the mkdir/rmdir ops, the term bit and the mount options, and the
fs diff will shrink by another 200+ lines.
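And the fs-side pin path really is just that one inode. A rough
sketch of the idea, with invented helper names (bpf_get_inode() and
bpf_map_inc() stand in for whatever the patch actually calls):

  /* Pinning parks a reference to the map in a freshly allocated
   * inode; maps and progs that are never pinned carry no extra
   * state at all.
   */
  static int bpf_mkmap(struct inode *dir, struct dentry *dentry,
                       struct bpf_map *map)
  {
          struct inode *inode = bpf_get_inode(dir->i_sb, S_IFREG | 0600);

          if (IS_ERR(inode))
                  return PTR_ERR(inode);

          bpf_map_inc(map);       /* inode now holds a reference */
          inode->i_private = map; /* the single allocation: one inode */

          d_instantiate(dentry, inode);
          dget(dentry);
          return 0;
  }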