Message-ID: <CAGXJAmzzmNrp-7OANK1yQX+xJcGAYjTWDFuyhdD5pvjN28CtXQ@mail.gmail.com>
Date: Mon, 9 Dec 2024 09:03:08 -0800
From: John Ousterhout <ouster@...stanford.edu>
To: "D. Wythe" <alibuda@...ux.alibaba.com>
Cc: netdev@...r.kernel.org, linux-api@...r.kernel.org
Subject: Re: [PATCH net-next v2 11/12] net: homa: create homa_plumbing.c homa_utils.c

A follow-up question on this, if I may. Is it OK to vmap a large
region of user address space (say, 64 MB) and leave this mapped for an
extended period of time (say, the life of the application), or would
this have undesirable consequences? In other words, if I do this,
would I need to monitor how actively the memory is being used and
release the vmap for space that is inactive?

-John-
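
(For readers following the thread: the kind of long-lived mapping being
asked about would look roughly like the sketch below. This is not Homa
code; the helper name is invented, error handling is simplified, and
FOLL_LONGTERM pinning is one way, as io_uring does for registered
buffers, to keep such a mapping valid for an extended period.)

    #include <linux/mm.h>
    #include <linux/vmalloc.h>

    /* Pin a user region once and vmap it into kernel space, so the
     * kernel can later memcpy() into it instead of copy_to_user().
     */
    static void *map_user_region(unsigned long uaddr, int npages,
                                 struct page **pages)
    {
            int pinned;

            /* FOLL_LONGTERM: pages stay pinned until unpin_user_pages(). */
            pinned = pin_user_pages_fast(uaddr, npages,
                                         FOLL_WRITE | FOLL_LONGTERM, pages);
            if (pinned != npages) {
                    if (pinned > 0)
                            unpin_user_pages(pages, pinned);
                    return NULL;
            }

            /* Contiguous kernel view of the (possibly scattered) pages. */
            return vmap(pages, npages, VM_MAP, PAGE_KERNEL);
    }

    /* Teardown: vunmap(kaddr); unpin_user_pages(pages, npages); */

For a 64 MB region this pins 16384 4 KB pages for the life of the
mapping, which is why long-lived pins are normally charged against
pinned-memory accounting (e.g. RLIMIT_MEMLOCK), as io_uring does for
its registered buffers.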


On Mon, Dec 9, 2024 at 8:53 AM John Ousterhout <ouster@...stanford.edu> wrote:
>
> Thanks for the additional information; I'll put this on my list of
> things to consider for performance optimization.
>
> -John-
>
>
> On Sun, Dec 8, 2024 at 10:56 PM D. Wythe <alibuda@...ux.alibaba.com> wrote:
> >
> >
> >
> > On 12/6/24 3:49 AM, John Ousterhout wrote:
> > > On Sun, Dec 1, 2024 at 7:51 PM D. Wythe <alibuda@...ux.alibaba.com> wrote:
> > >>> +int homa_setsockopt(struct sock *sk, int level, int optname, sockptr_t optval,
> > >>> +                 unsigned int optlen)
> > >>> +{
> > >>> +     struct homa_sock *hsk = homa_sk(sk);
> > >>> +     struct homa_set_buf_args args;
> > >>> +     int ret;
> > >>> +
> > >>> +     if (level != IPPROTO_HOMA || optname != SO_HOMA_SET_BUF ||
> > >>> +         optlen != sizeof(struct homa_set_buf_args))
> > >>> +             return -EINVAL;
> > >>
> > >> SO_HOMA_SET_BUF is a bit odd here; maybe HOMA_RCVBUF? That could
> > >> also be implemented in getsockopt.
> > >
> > > I have changed it to HOMA_RCVBUF (and renamed struct homa_set_buf_args
> > > to struct homa_rcvbuf_args). I also implemented getsockopt for
> > > HOMA_RCVBUF.
> > >
> > >>> +
> > >>> +     if (copy_from_sockptr(&args, optval, optlen))
> > >>> +             return -EFAULT;
> > >>> +
> > >>> +     /* Do a trivial test to make sure we can at least write the first
> > >>> +      * page of the region.
> > >>> +      */
> > >>> +     if (copy_to_user((__force void __user *)args.start, &args, sizeof(args)))
> > >>> +             return -EFAULT;
> > >>
> > >> To share a buffer between kernel and userspace, maybe you should
> > >> refer to the implementation of io_pin_pbuf_ring().
> > >
> > > I'm not sure what you mean here. Are you suggesting that I look at the
> > > code of io_pin_pbuf_ring to make sure I've done everything needed to
> > > share buffers? I don't believe that Homa needs to do anything special
> > > (e.g. it doesn't need to pin the user's buffers); it just saves the
> > > address and makes copy_to_user calls later when needed (and these
> > > calls are all done at syscall level in the context of the
> > > application). Is there something I'm missing?
> > >
> >
> > I just thought that since the receive buffer is shared between kernel
> > and user space, you could use vmap() to map that memory; then the
> > kernel wouldn't need copy_to_user() to transfer data to user space
> > and could use memcpy() instead. That should be faster, but I have no
> > data to prove it.
> >
> > So I'm not going to insist on it; it's up to you.
> >
> > D. Wythe
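
(To tie the thread together: from user space, the HOMA_RCVBUF option
discussed above would presumably be used something like the sketch
below. IPPROTO_HOMA and the struct's 'start' field appear in the
quoted patch; the 'length' field, the header name, and the field types
are assumptions for illustration only.)

    #include <stdlib.h>
    #include <sys/socket.h>
    #include "homa.h"   /* assumed to define IPPROTO_HOMA, HOMA_RCVBUF,
                           and struct homa_rcvbuf_args */

    /* Hand Homa a page-aligned receive-buffer region that lives for
     * the rest of the application's lifetime.
     */
    static int set_homa_rcvbuf(int fd, size_t length)
    {
            struct homa_rcvbuf_args args;
            void *buf;

            if (posix_memalign(&buf, 4096, length))
                    return -1;
            args.start = buf;        /* field from the quoted patch */
            args.length = length;    /* assumed field */
            return setsockopt(fd, IPPROTO_HOMA, HOMA_RCVBUF,
                              &args, sizeof(args));
    }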
