Message-ID: <20241210061335.GA22834@j66a10360.sqa.eu95>
Date: Tue, 10 Dec 2024 14:13:35 +0800
From: "D. Wythe" <alibuda@...ux.alibaba.com    >
To: John Ousterhout <ouster@...stanford.edu>
Cc: "D. Wythe" <alibuda@...ux.alibaba.com>, netdev@...r.kernel.org,
	linux-api@...r.kernel.org
Subject: Re: [PATCH net-next v2 11/12] net: homa: create homa_plumbing.c
 homa_utils.c

On Mon, Dec 09, 2024 at 09:50:03PM -0800, John Ousterhout wrote:

It seems I misunderstood your point... I'm indeed not familiar with this
area, so perhaps other members of the community can help you.

D. Wythe

> Thanks for the additional information. My concern was whether the
> available kernel virtual address space for vmapping is a scarce
> resource, in which case it might not be good for Homa to lock up large
> amounts of it for long periods of time. I'm not as worried about
> physical memory usage (that will happen regardless of whether the
> buffers get vmapped into the kernel).
> 
> -John-
> 
> 
> On Mon, Dec 9, 2024 at 9:14 PM D. Wythe <alibuda@...ux.alibaba.com> wrote:
> >
> > On Mon, Dec 09, 2024 at 09:03:08AM -0800, John Ousterhout wrote:
> > > A follow-up question on this, if I may. Is it OK to vmap a large
> > > region of user address space (say, 64 MB) and leave this mapped for an
> > > extended period of time (say, the life of the application), or would
> > > this have undesirable consequences? In other words, if I do this,
> > > would I need to monitor how actively the memory is being used and
> > > release the vmap for space that is inactive?
> > >
> > > -John-
> > >
> > >
> >
> > I am not an expert in this field, so the following is just my personal
> > opinion.
> >
> > When users call setsockopt(HOMA_RCVBUF), they should be aware that
> > this memory will be occupied by the kernel, no matter how large it
> > is, until they explicitly notify the kernel to release it (Homa can
> > only do this through close?).
> >
> > Therefore, my understanding is that the kernel does not need to be
> > responsible for the lifecycle of this memory. For example, if user
> > space forgets that this registered memory has already been freed, a
> > write from the kernel could of course corrupt the user-space data,
> > but the kernel is not responsible for that.
> >
> > If you believe that this could waste memory, perhaps you should provide
> > a sparse data structure instead of a fixed memory interface.
> >
> > D. Wythe
> >
> > > On Mon, Dec 9, 2024 at 8:53 AM John Ousterhout <ouster@...stanford.edu> wrote:
> > > >
> > > > Thanks for the additional information; I'll put this on my list of
> > > > things to consider for performance optimization.
> > > >
> > > > -John-
> > > >
> > > >
> > > > On Sun, Dec 8, 2024 at 10:56 PM D. Wythe <alibuda@...ux.alibaba.com> wrote:
> > > > >
> > > > >
> > > > >
> > > > > On 12/6/24 3:49 AM, John Ousterhout wrote:
> > > > > > On Sun, Dec 1, 2024 at 7:51 PM D. Wythe <alibuda@...ux.alibaba.com> wrote:
> > > > > >>> +int homa_setsockopt(struct sock *sk, int level, int optname, sockptr_t optval,
> > > > > >>> +                 unsigned int optlen)
> > > > > >>> +{
> > > > > >>> +     struct homa_sock *hsk = homa_sk(sk);
> > > > > >>> +     struct homa_set_buf_args args;
> > > > > >>> +     int ret;
> > > > > >>> +
> > > > > >>> +     if (level != IPPROTO_HOMA || optname != SO_HOMA_SET_BUF ||
> > > > > >>> +         optlen != sizeof(struct homa_set_buf_args))
> > > > > >>> +             return -EINVAL;
> > > > > >>
> > > > > >> SO_HOMA_SET_BUF is a bit odd here; maybe HOMA_RCVBUF? That could
> > > > > >> also be implemented in getsockopt.
> > > > > >
> > > > > > I have changed it to HOMA_RCVBUF (and renamed struct homa_set_buf_args
> > > > > > to struct homa_rcvbuf_args). I also implemented getsockopt for
> > > > > > HOMA_RCVBUF.
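> > > > > >
> > > > > > From user space, the two calls would then look roughly like this
> > > > > > (just a sketch, assuming the usual <sys/socket.h> and <err.h>
> > > > > > includes: .start matches the patch, while the length field name
> > > > > > here is illustrative):
> > > > > >
> > > > > >     struct homa_rcvbuf_args args = {
> > > > > >         .start = buf_region,   /* receive region allocated by app */
> > > > > >         .length = buf_size,    /* illustrative field name */
> > > > > >     };
> > > > > >     socklen_t len = sizeof(args);
> > > > > >
> > > > > >     /* Register the buffer region with the Homa socket. */
> > > > > >     if (setsockopt(fd, IPPROTO_HOMA, HOMA_RCVBUF, &args,
> > > > > >                    sizeof(args)) < 0)
> > > > > >         err(1, "setsockopt(HOMA_RCVBUF)");
> > > > > >
> > > > > >     /* Read back the currently registered region. */
> > > > > >     if (getsockopt(fd, IPPROTO_HOMA, HOMA_RCVBUF, &args, &len) < 0)
> > > > > >         err(1, "getsockopt(HOMA_RCVBUF)");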
> > > > > >
> > > > > >>> +
> > > > > >>> +     if (copy_from_sockptr(&args, optval, optlen))
> > > > > >>> +             return -EFAULT;
> > > > > >>> +
> > > > > >>> +     /* Do a trivial test to make sure we can at least write the first
> > > > > >>> +      * page of the region.
> > > > > >>> +      */
> > > > > >>> +     if (copy_to_user((__force void __user *)args.start, &args, sizeof(args)))
> > > > > >>> +             return -EFAULT;
> > > > > >>
> > > > > >> To share a buffer between kernel and user space, maybe you should
> > > > > >> refer to the implementation of io_pin_pbuf_ring().
> > > > > >
> > > > > > I'm not sure what you mean here. Are you suggesting that I look at the
> > > > > > code of io_pin_pbuf_ring to make sure I've done everything needed to
> > > > > > share buffers? I don't believe that Homa needs to do anything special
> > > > > > (e.g. it doesn't need to pin the user's buffers); it just saves the
> > > > > > address and makes copy_to_user calls later when needed (and these
> > > > > > calls are all done at syscall level in the context of the
> > > > > > application). Is there something I'm missing?
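> > > > > >
> > > > > > To be concrete, the receive path boils down to something like the
> > > > > > following (simplified sketch with illustrative names, not the
> > > > > > exact Homa code):
> > > > > >
> > > > > >     /* hsk->buffer_start was saved from homa_rcvbuf_args at
> > > > > >      * setsockopt(HOMA_RCVBUF) time.  This runs in the
> > > > > >      * application's syscall context, so copy_to_user() can
> > > > > >      * fault pages in as needed.
> > > > > >      */
> > > > > >     static int homa_copy_to_app(struct homa_sock *hsk, size_t offset,
> > > > > >                                 void *data, size_t length)
> > > > > >     {
> > > > > >         void __user *dst = (void __user *)hsk->buffer_start + offset;
> > > > > >
> > > > > >         if (copy_to_user(dst, data, length))
> > > > > >             return -EFAULT;
> > > > > >         return 0;
> > > > > >     }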
> > > > > >
> > > > >
> > > > > I just thought that since the receive buffer is shared between the
> > > > > kernel and user space, we could use vmap() to map that memory into
> > > > > the kernel; then instead of copy_to_user() we could transfer data
> > > > > with a plain memcpy(), roughly like the sketch below. That should
> > > > > be faster, but I have no data to prove it.
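> > > > >
> > > > > Roughly what I had in mind (untested sketch; error handling is
> > > > > trimmed, and teardown at close would need vunmap() plus
> > > > > unpin_user_pages()):
> > > > >
> > > > >     /* Pin the registered user pages once and vmap them, so the
> > > > >      * kernel can later memcpy() into the region directly.
> > > > >      */
> > > > >     static void *homa_map_rcvbuf(void __user *start, size_t length,
> > > > >                                  struct page ***pagesp, long *npagesp)
> > > > >     {
> > > > >         long npages = DIV_ROUND_UP(length, PAGE_SIZE);
> > > > >         struct page **pages;
> > > > >         void *vaddr = NULL;
> > > > >         long got;
> > > > >
> > > > >         pages = kvmalloc_array(npages, sizeof(*pages), GFP_KERNEL);
> > > > >         if (!pages)
> > > > >             return NULL;
> > > > >         got = pin_user_pages_fast((unsigned long)start, npages,
> > > > >                                   FOLL_WRITE | FOLL_LONGTERM, pages);
> > > > >         if (got == npages)
> > > > >             vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
> > > > >         if (!vaddr) {
> > > > >             if (got > 0)
> > > > >                 unpin_user_pages(pages, got);
> > > > >             kvfree(pages);
> > > > >             return NULL;
> > > > >         }
> > > > >         *pagesp = pages;
> > > > >         *npagesp = npages;
> > > > >         return vaddr;
> > > > >     }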
> > > > >
> > > > > So I'm not going to insist on it; it's up to you.
> > > > >
> > > > > D. Wythe
