Message-ID: <CAGXJAmwA-aEdEWezOxWhHB8tdsB6aaBYjwYCo+=Hnhh0j8up4Q@mail.gmail.com>
Date: Mon, 9 Dec 2024 21:50:03 -0800
From: John Ousterhout <ouster@...stanford.edu>
To: "D. Wythe" <alibuda@...ux.alibaba.com>
Cc: netdev@...r.kernel.org, linux-api@...r.kernel.org
Subject: Re: [PATCH net-next v2 11/12] net: homa: create homa_plumbing.c homa_utils.c

Thanks for the additional information. My concern was whether the
available kernel virtual address space for vmapping is a scarce
resource, in which case it might not be good for Homa to lock up large
amounts of it for long periods of time. I'm not as worried about
physical memory usage (that will happen regardless of whether the
buffers get vmapped into the kernel).
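
(For concreteness, the pin-and-vmap pattern being discussed looks roughly
like the sketch below; the names are illustrative and not taken from the
Homa patches, and it is only similar in spirit to what io_pin_pbuf_ring
does. For scale: a 64 MB region is 16384 4-KiB pages, so each such mapping
ties up 64 MB of the vmalloc address range until vunmap(); that range is
tens of terabytes on x86-64 with 4-level paging, but typically only a few
hundred megabytes on 32-bit kernels.)

/* Illustrative sketch only: pin a user buffer and give the kernel a
 * contiguous mapping of it.  Assumes uaddr is page-aligned; error
 * handling is abbreviated.
 */
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

static void *demo_map_user_region(unsigned long uaddr, size_t len,
				  struct page ***pagesp)
{
	int nr_pages = DIV_ROUND_UP(len, PAGE_SIZE);
	struct page **pages;
	void *kaddr;
	int pinned;

	pages = kvmalloc_array(nr_pages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return NULL;

	/* Pin the user pages so they cannot be reclaimed or migrated. */
	pinned = pin_user_pages_fast(uaddr, nr_pages, FOLL_WRITE, pages);
	if (pinned != nr_pages)
		goto err;

	/* This is the step that consumes kernel virtual address space:
	 * len bytes of the vmalloc range stay occupied until vunmap().
	 */
	kaddr = vmap(pages, nr_pages, VM_MAP, PAGE_KERNEL);
	if (!kaddr)
		goto err;

	*pagesp = pages;
	return kaddr;	/* kernel code can now memcpy() into kaddr */

err:
	if (pinned > 0)
		unpin_user_pages(pages, pinned);
	kvfree(pages);
	return NULL;
}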

-John-


On Mon, Dec 9, 2024 at 9:14 PM D. Wythe <alibuda@...ux.alibaba.com> wrote:
>
> On Mon, Dec 09, 2024 at 09:03:08AM -0800, John Ousterhout wrote:
> > A follow-up question on this, if I may. Is it OK to vmap a large
> > region of user address space (say, 64 MB) and leave this mapped for an
> > extended period of time (say, the life of the application), or would
> > this have undesirable consequences? In other words, if I do this,
> > would I need to monitor how actively the memory is being used and
> > release the vmap for space that is inactive?
> >
> > -John-
> >
> >
>
> I am not an expert in this field, so the following is just my personal
> opinion.
>
> When users call setsockopt(HOMA_RCVBUF), they should be aware that this
> memory will be occupied by the kernel, no matter how large it is, until
> user space explicitly tells the kernel to release it (Homa can only do
> this through close?).
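> 
> A rough sketch of what this looks like from user space (the struct layout,
> option constants, and error handling below are assumptions for
> illustration, not the final UAPI):
> 
> /* Register a 64 MB receive-buffer region on a Homa socket.  After this
>  * call the kernel may write into the region at any time until the
>  * socket is closed.
>  */
> #include <stdio.h>
> #include <sys/mman.h>
> #include <sys/socket.h>
> 
> #define RCVBUF_BYTES (64UL << 20)
> 
> struct homa_rcvbuf_args {	/* assumed layout */
> 	void *start;
> 	size_t length;
> };
> 
> int register_rcvbuf(int fd)
> {
> 	struct homa_rcvbuf_args args;
> 
> 	args.start = mmap(NULL, RCVBUF_BYTES, PROT_READ | PROT_WRITE,
> 			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> 	if (args.start == MAP_FAILED) {
> 		perror("mmap");
> 		return -1;
> 	}
> 	args.length = RCVBUF_BYTES;
> 
> 	/* IPPROTO_HOMA and HOMA_RCVBUF come from Homa's UAPI header. */
> 	return setsockopt(fd, IPPROTO_HOMA, HOMA_RCVBUF, &args,
> 			  sizeof(args));
> }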
>
> Therefore, my understanding is that the kernel does not need to be
> responsible for the lifecycle of this memory. For example, if user space
> forgets that this registered memory has already been freed, then a write
> from the kernel could of course corrupt user-space data, but the kernel
> does not need to be responsible for that.
>
> If you believe that this could waste memory, perhaps you should provide
> a sparse data structure instead of a fixed memory interface.
>
> D. Wythe
>
> > On Mon, Dec 9, 2024 at 8:53 AM John Ousterhout <ouster@...stanford.edu> wrote:
> > >
> > > Thanks for the additional information; I'll put this on my list of
> > > things to consider for performance optimization.
> > >
> > > -John-
> > >
> > >
> > > On Sun, Dec 8, 2024 at 10:56 PM D. Wythe <alibuda@...ux.alibaba.com> wrote:
> > > >
> > > >
> > > >
> > > > On 12/6/24 3:49 AM, John Ousterhout wrote:
> > > > > On Sun, Dec 1, 2024 at 7:51 PM D. Wythe <alibuda@...ux.alibaba.com> wrote:
> > > > >>> +int homa_setsockopt(struct sock *sk, int level, int optname, sockptr_t optval,
> > > > >>> +                 unsigned int optlen)
> > > > >>> +{
> > > > >>> +     struct homa_sock *hsk = homa_sk(sk);
> > > > >>> +     struct homa_set_buf_args args;
> > > > >>> +     int ret;
> > > > >>> +
> > > > >>> +     if (level != IPPROTO_HOMA || optname != SO_HOMA_SET_BUF ||
> > > > >>> +         optlen != sizeof(struct homa_set_buf_args))
> > > > >>> +             return -EINVAL;
> > > > >>
> > > > >> SO_HOMA_SET_BUF is a bit odd here; maybe HOMA_RCVBUF? That could also
> > > > >> be implemented in getsockopt.
> > > > >
> > > > > I have changed it to HOMA_RCVBUF (and renamed struct homa_set_buf_args
> > > > > to struct homa_rcvbuf_args). I also implemented getsockopt for
> > > > > HOMA_RCVBUF.
> > > > >
> > > > >>> +
> > > > >>> +     if (copy_from_sockptr(&args, optval, optlen))
> > > > >>> +             return -EFAULT;
> > > > >>> +
> > > > >>> +     /* Do a trivial test to make sure we can at least write the first
> > > > >>> +      * page of the region.
> > > > >>> +      */
> > > > >>> +     if (copy_to_user((__force void __user *)args.start, &args, sizeof(args)))
> > > > >>> +             return -EFAULT;
> > > > >>
> > > > >> To share buffer between kernel and userspace, maybe you should refer to the implementation of
> > > > >> io_pin_pbuf_ring()
> > > > >
> > > > > I'm not sure what you mean here. Are you suggesting that I look at the
> > > > > code of io_pin_pbuf_ring to make sure I've done everything needed to
> > > > > share buffers? I don't believe that Homa needs to do anything special
> > > > > (e.g. it doesn't need to pin the user's buffers); it just saves the
> > > > > address and makes copy_to_user calls later when needed (and these
> > > > > calls are all done at syscall level in the context of the
> > > > > application). Is there something I'm missing?
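> > > > >
> > > > > For illustration, that path amounts to something like the following
> > > > > (the helper and the buffer_start field are made-up names, not Homa's
> > > > > actual ones):
> > > > >
> > > > > #include <linux/uaccess.h>
> > > > >
> > > > > /* Copy one received chunk into the registered buffer region.  This
> > > > >  * runs at syscall level, so the application's address space is
> > > > >  * current and plain copy_to_user() works; nothing needs pinning.
> > > > >  */
> > > > > static int homa_copy_chunk_to_user(struct homa_sock *hsk, size_t offset,
> > > > > 				   void *data, size_t len)
> > > > > {
> > > > > 	void __user *dst = (void __user *)hsk->buffer_start + offset;
> > > > >
> > > > > 	return copy_to_user(dst, data, len) ? -EFAULT : 0;
> > > > > }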
> > > > >
> > > >
> > > > I just thought that since the receive buffer is shared between the kernel
> > > > and user space, using vmap() to map that memory would let us replace the
> > > > copy_to_user() calls that transfer data from kernel to user space with
> > > > plain memcpy(). That should be faster, but I have no data to prove it.
> > > >
> > > > So I'm not going to insist on it; it's up to you.
> > > >
> > > > D. Wythe
