Message-ID: <CAGXJAmzdr1dBZb4=TYscXtN66weRvsO6p74K-K3aa_7UJ=sEuQ@mail.gmail.com>
Date: Sat, 12 Nov 2022 22:25:50 -0800
From: John Ousterhout <ouster@...stanford.edu>
To: Jiri Pirko <jiri@...nulli.us>
Cc: Andrew Lunn <andrew@...n.ch>,
Stephen Hemminger <stephen@...workplumber.org>,
netdev@...r.kernel.org
Subject: Re: Upstream Homa?
On Fri, Nov 11, 2022 at 11:53 PM Jiri Pirko <jiri@...nulli.us> wrote:
>
> Fri, Nov 11, 2022 at 08:25:44PM CET, andrew@...n.ch wrote:
> >On Fri, Nov 11, 2022 at 10:59:58AM -0800, John Ousterhout wrote:
> >> The netlink and 32-bit kernel issues are new for me; I've done some digging to
> >> learn more, but still have some questions.
> >>
> >
> >> * Is the intent that netlink replaces *all* uses of /proc and ioctl? Homa
> >> currently uses ioctls on sockets for I/O (its APIs aren't sockets-compatible).
>
> Why exactly isn't it sockets-compatible?
Homa implements RPCs rather than streams like TCP or messages like
UDP. An RPC consists of a request message sent from client to server,
followed by a response message from server back to client. This requires
additional information in the API beyond what is provided in the arguments to
sendto and recvfrom. For example, when sending a request message, the
kernel returns an RPC identifier back to the application; when waiting for
a response, the application can specify that it wants to receive the reply for
a specific RPC identifier (or it can specify that it will accept any reply,
any request, or both).
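
To make that concrete, here is a rough, hypothetical sketch of the extra
per-RPC information such an API has to carry beyond sendto()/recvfrom().
The struct, field, and ioctl names below are purely illustrative and are
not Homa's actual interface:

/* Hypothetical sketch only: shows the extra information an RPC-oriented
 * socket API needs beyond sendto()/recvfrom().  None of these names are
 * Homa's real interface. */
#include <stddef.h>
#include <stdint.h>
#include <netinet/in.h>

struct rpc_send_args {
        const void          *request;   /* request message buffer            */
        size_t               length;
        struct sockaddr_in6  dest;      /* server address                    */
        uint64_t             rpc_id;    /* OUT: identifier assigned by kernel */
};

struct rpc_recv_args {
        void                *buffer;    /* where the incoming message lands  */
        size_t               length;
        uint64_t             rpc_id;    /* IN: 0 = any; else wait for this
                                         * RPC's reply                       */
        int                  flags;     /* accept requests, replies, or both */
        struct sockaddr_in6  peer;      /* OUT: sender of the message        */
};

/* Usage would look roughly like:
 *     struct rpc_send_args s = { .request = buf, .length = len, .dest = srv };
 *     ioctl(fd, RPC_IOCTL_SEND, &s);    // kernel fills in s.rpc_id
 *
 *     struct rpc_recv_args r = { .buffer = out, .length = sizeof(out),
 *                                .rpc_id = s.rpc_id };
 *     ioctl(fd, RPC_IOCTL_RECV, &r);    // blocks for that specific reply
 */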
> >> It looks like switching to netlink would double the number of system calls that
> >> have to be invoked, which would be unfortunate given Homa's goal of getting the
> >> lowest possible latency. It also looks like netlink might be awkward for
> >> dumping large volumes of kernel data to user space (potential for buffer
> >> overflow?).
>
> Netlink is slow, you should not use it for fast path. It is for
> configuration and stats.
>
>
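
(For context on the syscall-count concern quoted above: a typical netlink
operation needs at least one send and one or more receives per request,
whereas an ioctl on an already-open socket is a single call. Below is a
rough illustrative sketch, not from this thread, using the standard
rtnetlink RTM_GETLINK dump; error handling is omitted.)

#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

int dump_links(void)
{
        int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);

        struct {
                struct nlmsghdr  nlh;
                struct ifinfomsg ifm;
        } req = {
                .nlh = {
                        .nlmsg_len   = NLMSG_LENGTH(sizeof(struct ifinfomsg)),
                        .nlmsg_type  = RTM_GETLINK,
                        .nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP,
                },
                .ifm = { .ifi_family = AF_UNSPEC },
        };

        send(fd, &req, req.nlh.nlmsg_len, 0);            /* syscall #1 */

        char buf[8192];
        for (;;) {
                ssize_t len = recv(fd, buf, sizeof(buf), 0); /* syscall #2, #3, ... */
                if (len <= 0)
                        break;
                struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
                for (; NLMSG_OK(nlh, len); nlh = NLMSG_NEXT(nlh, len)) {
                        if (nlh->nlmsg_type == NLMSG_DONE)
                                goto out;
                        /* ... parse struct ifinfomsg + attributes here ... */
                }
        }
out:
        close(fd);
        return 0;
}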