Message-ID: <20201231064728.x7vywfzxxn3sqq7e@kafai-mbp.dhcp.thefacebook.com>
Date: Wed, 30 Dec 2020 22:47:28 -0800
From: Martin KaFai Lau <kafai@...com>
To: Song Liu <song@...nel.org>
CC: Stanislav Fomichev <sdf@...gle.com>,
Networking <netdev@...r.kernel.org>, bpf <bpf@...r.kernel.org>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>
Subject: Re: [PATCH bpf-next 1/2] bpf: try to avoid kzalloc in
cgroup/{s,g}etsockopt
On Mon, Dec 21, 2020 at 02:22:41PM -0800, Song Liu wrote:
> On Thu, Dec 17, 2020 at 9:24 AM Stanislav Fomichev <sdf@...gle.com> wrote:
> >
> > When we attach a bpf program to cgroup/getsockopt, any other getsockopt()
> > syscall starts incurring kzalloc/kfree cost. While, in general, it's
> > not an issue, sometimes it is, like in the case of TCP_ZEROCOPY_RECEIVE.
> > TCP_ZEROCOPY_RECEIVE (ab)uses the getsockopt system call to implement a
> > fastpath for incoming TCP, and we don't want to have extra allocations
> > in there.
> >
> > Let's add a small buffer on the stack and use it for small (the majority
> > of) {s,g}etsockopt values. I've started with 128 bytes to cover
> > the options we care about (TCP_ZEROCOPY_RECEIVE, which is 32 bytes
> > currently, with some planned extension to 64, plus some headroom
> > for the future).
>
> I don't really know the rule of thumb, but 128 bytes on the stack feels too
> big to me. I would like to hear others' opinions on this. Can we solve the
> problem with some other mechanism, e.g. a mempool?
It seems do_tcp_getsockopt() also has "struct tcp_zerocopy_receive"
on the stack. I think the buf here is mimicking
"struct tcp_zerocopy_receive", so it should not cause any
new problem.
However, "struct tcp_zerocopy_receive" is only 40 bytes now. I think it
is better to have a smaller buf for now and increase it later when the
the future needs in "struct tcp_zerocopy_receive" is also upstreamed.
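For reference, a rough sketch of the stack-buffer idea being discussed
(the struct/helper names and the 64-byte size below are only my
illustrative assumptions, not the posted patch):

	/* Small on-stack buffer owned by the {s,g}etsockopt caller. */
	struct bpf_sockopt_buf {
		u8 data[64];	/* illustrative size, not the patch's 128 */
	};

	/*
	 * Use the caller's stack buffer for small optlen and only fall
	 * back to kzalloc() for larger values, so the common case (e.g.
	 * TCP_ZEROCOPY_RECEIVE) avoids the allocation entirely.
	 */
	static void *sockopt_alloc_buf(struct bpf_sockopt_kern *ctx,
				       int max_optlen,
				       struct bpf_sockopt_buf *buf)
	{
		if (max_optlen < 0)
			return ERR_PTR(-EINVAL);

		if (max_optlen <= sizeof(buf->data)) {
			/* Common case: no allocation at all. */
			ctx->optval = buf->data;
		} else {
			/* Large value: keep the existing kzalloc() path. */
			ctx->optval = kzalloc(max_optlen, GFP_USER);
			if (!ctx->optval)
				return ERR_PTR(-ENOMEM);
		}
		ctx->optval_end = ctx->optval + max_optlen;
		return ctx->optval;
	}

	static void sockopt_free_buf(struct bpf_sockopt_kern *ctx,
				     struct bpf_sockopt_buf *buf)
	{
		/* Only free what was actually heap-allocated. */
		if (ctx->optval != buf->data)
			kfree(ctx->optval);
	}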