Message-ID: <CAGXJAmyOAEC+d6aM1VQ=w2EYZXB+s4RwuD6TeDiyWpo1bnGE4w@mail.gmail.com>
Date: Wed, 30 Oct 2024 08:48:41 -0700
From: John Ousterhout <ouster@...stanford.edu>
To: Andrew Lunn <andrew@...n.ch>
Cc: netdev@...r.kernel.org
Subject: Re: [PATCH net-next 04/12] net: homa: create homa_pool.h and homa_pool.c

(resending... forgot to cc netdev in the original response)

On Wed, Oct 30, 2024 at 5:54 AM Andrew Lunn <andrew@...n.ch> wrote:

> > I think this is a different problem from what page pools solve. Rather
> > than the application providing a buffer each time it calls recvmsg, it
> > provides a large region of memory in its virtual address space in
> > advance;
>
> Ah, O.K. Yes, page pool is for kernel memory. However, is the virtual
> address space mapped to pages and pinned? Or do you allocate pages
> into that VM range as you need them? And then free them once the
> application says it has completed? If you are allocating and freeing
> pages, the page pool might be useful for these allocations.

Homa doesn't allocate or free pages for this: the application mmaps a
region and passes the virtual address range to Homa. Homa doesn't need
to pin the pages. This memory is used in a fashion similar to how a
buffer passed to recvmsg would be used, except that Homa maintains
access to the region for the lifetime of the associated socket. When
the application finishes processing an incoming message, it notifies
Homa so that Homa can reuse the message's buffer space for future
messages; there's no page allocation or freeing in this process.
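(For anyone skimming the thread: the scheme above can be modeled with a tiny
chunk pool. This is an illustrative sketch, not Homa's actual code; the names
`buf_pool`, `pool_alloc`, and `pool_release` are made up. The point is that
the app hands over one mmap'ed region up front, chunks of it are handed out
as messages arrive, and a "done" notification just puts the chunk back on a
free list -- no pages are allocated or freed per message.)

```c
#include <stddef.h>
#include <stdint.h>

#define CHUNK_SIZE 4096
#define NUM_CHUNKS 16

/* Hypothetical model of a per-socket receive-buffer pool. The region
 * pointer is the app-provided virtual address range (e.g. from mmap),
 * registered once for the lifetime of the socket. */
struct buf_pool {
    char *region;                     /* start of app-provided region */
    uint32_t free_list[NUM_CHUNKS];   /* indices of reusable chunks */
    int free_count;
};

/* Register the region; every chunk starts out free. */
void pool_init(struct buf_pool *p, char *region)
{
    p->region = region;
    p->free_count = NUM_CHUNKS;
    for (int i = 0; i < NUM_CHUNKS; i++)
        p->free_list[i] = i;
}

/* "Kernel side": pick buffer space for an incoming message.
 * Returns NULL if the region is exhausted (message must wait). */
char *pool_alloc(struct buf_pool *p)
{
    if (p->free_count == 0)
        return NULL;
    return p->region + (size_t)p->free_list[--p->free_count] * CHUNK_SIZE;
}

/* App notifies that it has finished with a message; its chunk is
 * immediately reusable for future messages. */
void pool_release(struct buf_pool *p, char *buf)
{
    p->free_list[p->free_count++] =
        (uint32_t)((buf - p->region) / CHUNK_SIZE);
}
```

In this model the only setup cost is the app's one-time mmap of the region;
after that, arrival and completion are just free-list pushes and pops on
virtual addresses, which is why page_pool-style page allocation doesn't come
into play.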

> Taking a step back here, the kernel already has a number of allocators
> and ideally we don't want to add yet another one unless it is really
> required. So it would be good to get some reviews from the MM people.

I'm happy to do that if you still think it's necessary; how do I do that?

-John-
