Message-ID: <CAGXJAmyNuzX9DpZFvaWFbY95_sAC0gy9AfOT7gToYamnU0RZRQ@mail.gmail.com>
Date: Wed, 30 Oct 2024 13:17:59 -0700
From: John Ousterhout <ouster@...stanford.edu>
To: Andrew Lunn <andrew@...n.ch>
Cc: netdev@...r.kernel.org
Subject: Re: [PATCH net-next 04/12] net: homa: create homa_pool.h and homa_pool.c

On Wed, Oct 30, 2024 at 9:03 AM Andrew Lunn <andrew@...n.ch> wrote:
>
> On Wed, Oct 30, 2024 at 08:46:33AM -0700, John Ousterhout wrote:
> > On Wed, Oct 30, 2024 at 5:54 AM Andrew Lunn <andrew@...n.ch> wrote:
> > > > I think this is a different problem from what page pools solve. Rather
> > > > than the application providing a buffer each time it calls recvmsg, it
> > > > provides a large region of memory in its virtual address space in
> > > > advance;
> > >
> > > Ah, O.K. Yes, page pool is for kernel memory. However, is the virtual
> > > address space mapped to pages and pinned? Or do you allocate pages
> > > into that VM range as you need them? And then free them once the
> > > application says it has completed? If you are allocating and freeing
> > > pages, the page pool might be useful for these allocations.
> >
> > Homa doesn't allocate or free pages for this: the application mmap's a
> > region and passes the virtual address range to Homa. Homa doesn't need
> > to pin the pages. This memory is used in a fashion similar to how a
> > buffer passed to recvmsg would be used, except that Homa maintains
> > access to the region for the lifetime of the associated socket. When
> > the application finishes processing an incoming message, it notifies
> > Homa so that Homa can reuse the message's buffer space for future
> > messages; there's no page allocation or freeing in this process.
>
> I clearly don't know enough about memory management! I would have
> expected the kernel to do lazy allocation of pages to VM addresses as
> needed. Maybe it is, and when you actually access one of these missing
> pages, you get a page fault and the MM code kicks in to put an
> actual page there? This could all be hidden inside the copy_to_user()
> call.

Yes, this is all correct. MM code gets called during copy_to_user to
allocate pages as needed (I should have been clearer: Homa doesn't
allocate or free pages directly). Homa tries to be clever about using
the buffer region to minimize the number of physical pages that
actually need to be allocated: it allocates at the beginning of the
region and uses higher addresses only when the lower ones are in use.
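
To illustrate the idea, here is a toy sketch of the "lowest free chunk
first" policy (a simplification for discussion, not the actual
homa_pool.c code, which manages the region in larger units):

/* The region is divided into fixed-size chunks tracked by a bitmap;
 * allocation always takes the lowest free chunk, so under light load
 * the kernel only ever faults in physical pages near the start of
 * the region. */
#include <stdbool.h>

#define NCHUNKS 1024

static bool busy[NCHUNKS];   /* true if chunk holds a live message */

/* Return the index of the lowest free chunk, or -1 if none. */
static int chunk_alloc(void)
{
        for (int i = 0; i < NCHUNKS; i++) {
                if (!busy[i]) {
                        busy[i] = true;
                        return i;
                }
        }
        return -1;
}

/* Called when the application says it is done with a message. */
static void chunk_free(int i)
{
        busy[i] = false;
}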
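
And to make the buffer-region setup described earlier concrete, the
application side looks roughly like this (IPPROTO_HOMA's value, the
option name, and the argument struct below are hypothetical
placeholders for discussion, not Homa's actual API):

#include <stddef.h>
#include <sys/mman.h>
#include <sys/socket.h>

#define REGION_SIZE (64 * 1024 * 1024)
#define IPPROTO_HOMA 0xFD            /* hypothetical */
#define HOMA_SO_RCVBUF 10            /* hypothetical */

struct homa_rcvbuf_args {            /* hypothetical */
        void *start;
        size_t length;
};

int setup_buffer_region(int fd)
{
        /* Reserve virtual address space only; physical pages are
         * faulted in lazily the first time copy_to_user() touches
         * them. */
        void *region = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (region == MAP_FAILED)
                return -1;

        struct homa_rcvbuf_args args = { region, REGION_SIZE };

        /* Hand the whole region to the socket once, up front; incoming
         * messages are then placed in it, and the application releases
         * each message's space when it has finished processing it. */
        return setsockopt(fd, IPPROTO_HOMA, HOMA_SO_RCVBUF,
                          &args, sizeof(args));
}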

> > > Taking a step back here, the kernel already has a number of allocators
> > > and ideally we don't want to add yet another one unless it is really
> > > required. So it would be good to get some reviews from the MM people.
> >
> > I'm happy to do that if you still think it's necessary; how do I do that?
>
> Reach out to Andrew Morton <akpm@...ux-foundation.org>, the main
> Memory Management Maintainer. Ask who a good person would be to review
> this code.

I have started this process in a separate email.

-John-
