Open Source and information security mailing list archives
 
Message-ID: <CAGXJAmzwk87WCjxrxQbTn3bM8nemKcnzHzOeFTBJiKWABRf+Nw@mail.gmail.com>
Date: Tue, 26 Aug 2025 16:10:56 -0700
From: John Ousterhout <ouster@...stanford.edu>
To: Paolo Abeni <pabeni@...hat.com>
Cc: netdev@...r.kernel.org, edumazet@...gle.com, horms@...nel.org, 
	kuba@...nel.org
Subject: Re: [PATCH net-next v15 03/15] net: homa: create shared Homa header files

On Tue, Aug 26, 2025 at 2:06 AM Paolo Abeni <pabeni@...hat.com> wrote:
>
> On 8/18/25 10:55 PM, John Ousterhout wrote:
> > +/**
> > + * struct homa_net - Contains Homa information that is specific to a
> > + * particular network namespace.
> > + */
> > +struct homa_net {
> > +     /** @net: Network namespace corresponding to this structure. */
> > +     struct net *net;
> > +
> > +     /** @homa: Global Homa information. */
> > +     struct homa *homa;
>
> It's not clear why the above 2 fields are needed. You could access
> the global struct homa instance directly, and 'struct net' is usually
> available wherever struct homa_net is available.

I have eliminated net but would like to retain homa. I have tried very
hard to avoid global variables in Homa, both for general pedagogical
reasons and because it simplifies unit testing. Right now there is no
need for a global homa except in a couple of places in homa_plumbing.c,
and I'd like to maintain that encapsulation.
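To illustrate the pattern (a userspace sketch, not the actual Homa code; the 'dead' field and homa_is_dead() helper are hypothetical examples of global state and an accessor):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical piece of global Homa state, for illustration only. */
struct homa {
	int dead;
};

/* Per-network-namespace state carries a back-pointer to the global
 * struct homa instead of code referencing a file-scope variable. */
struct homa_net {
	struct homa *homa;
};

/* Callers reach global state through the homa_net they were handed;
 * unit tests can then substitute their own struct homa instance
 * without touching any global variable. */
static int homa_is_dead(const struct homa_net *hnet)
{
	return hnet->homa->dead;
}
```

With this shape, only the code that creates each homa_net (e.g. in homa_plumbing.c) ever needs to name the global instance.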

> > +/**
> > + * homa_clock() - Return a fine-grain clock value that is monotonic and
> > + * consistent across cores.
> > + * Return: see above.
> > + */
> > +static inline u64 homa_clock(void)
> > +{
> > +     /* As of May 2025 there does not appear to be a portable API that
> > +      * meets Homa's needs:
> > +      * - The Intel X86 TSC works well but is not portable.
> > +      * - sched_clock() does not guarantee monotonicity or consistency.
> > +      * - ktime_get_mono_fast_ns and ktime_get_raw_fast_ns are very slow
> > +      *   (27 ns to read, vs 8 ns for TSC)
> > +      * Thus we use a hybrid approach that uses TSC (via get_cycles) where
> > +      * available (which should be just about everywhere Homa runs).
> > +      */
> > +#ifdef CONFIG_X86_TSC
> > +     return get_cycles();
> > +#else
> > +     return ktime_get_mono_fast_ns();
> > +#endif /* CONFIG_X86_TSC */
> > +}
>
> The ktime_get*() variants are fast enough to let e.g. pktgen handle
> millions of packets per second. Both the TSC and ktime_get_mono_fast_ns()
> suffer from various inconsistencies which will cause the most unexpected
> issues in the most dangerous situations. I strongly advise against this
> premature optimization.

Which ktime_get variant do you recommend instead of ktime_get_mono_fast_ns?

I feel pretty strongly about retaining the use of TSC on Intel
platforms. As I have said before, Homa is attempting to operate in a
much more aggressive latency domain than Linux is used to, and
nanoseconds matter. I have been using TSC on Intel and AMD platforms
for more than 15 years and I have never had any problems. Is there a
specific inconsistency you know of that will cause "unexpected issues
in the most dangerous situations"? If not, I would prefer to retain
the use of TSC until someone can identify a real problem. Note that
the choice of clock is now well encapsulated, so if a change should
become necessary it will be very easy to make.
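For reference, the hybrid in the patch can be sketched as a userspace analogue (illustrative only: clock_sketch() is a made-up name, __rdtsc() stands in for the kernel's get_cycles(), and clock_gettime(CLOCK_MONOTONIC) stands in for ktime_get_mono_fast_ns()):

```c
#include <stdint.h>
#include <time.h>
#if defined(__x86_64__) || defined(__i386__)
#include <x86intrin.h>
#endif

/* Userspace analogue of the hybrid homa_clock(): read the TSC where
 * available, fall back to a monotonic clock elsewhere. The compile-time
 * switch mirrors the patch's #ifdef CONFIG_X86_TSC. */
static uint64_t clock_sketch(void)
{
#if defined(__x86_64__) || defined(__i386__)
	return __rdtsc();	/* cycle counter; a few ns per read */
#else
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
#endif
}
```

Because every caller goes through the one inline function, swapping the implementation later is a one-line change.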

For all of your comments that I have not responded to explicitly
above, I have implemented the changes you recommended.

-John-
