Message-ID: <6d99c24c-a327-471b-964f-cfe02aef7ce2@redhat.com>
Date: Wed, 27 Aug 2025 09:21:18 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: John Ousterhout <ouster@...stanford.edu>
Cc: netdev@...r.kernel.org, edumazet@...gle.com, horms@...nel.org,
 kuba@...nel.org
Subject: Re: [PATCH net-next v15 03/15] net: homa: create shared Homa header
 files

On 8/27/25 1:10 AM, John Ousterhout wrote:
> On Tue, Aug 26, 2025 at 2:06 AM Paolo Abeni <pabeni@...hat.com> wrote:
>> On 8/18/25 10:55 PM, John Ousterhout wrote:
>>> +/**
>>> + * struct homa_net - Contains Homa information that is specific to a
>>> + * particular network namespace.
>>> + */
>>> +struct homa_net {
>>> +     /** @net: Network namespace corresponding to this structure. */
>>> +     struct net *net;
>>> +
>>> +     /** @homa: Global Homa information. */
>>> +     struct homa *homa;
>>
>> It's not clear why the above 2 fields are needed. You could directly
>> access the global struct homa instance, and 'struct net' is usually
>> available wherever struct homa_net is available.
> 
> I have eliminated net but would like to retain homa. I have tried very
> hard to avoid global variables in Homa, both for general pedagogical
> reasons and because it simplifies unit testing. Right now there is no
> need for a global homa except a couple of places in homa_plumbing.c,
> and I'd like to maintain that encapsulation.

Note that there is no kernel convention against global per-protocol
variables, as long as they do not prevent scaling.
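
For illustration, this is roughly what the conventional pernet pattern
looks like (a minimal sketch, not taken from the patch: homa_net_id,
homa_net_init, homa_net_ops and global_homa are all illustrative names):

    #include <net/net_namespace.h>
    #include <net/netns/generic.h>

    static unsigned int homa_net_id __read_mostly;

    static int __net_init homa_net_init(struct net *net)
    {
            struct homa_net *hnet = net_generic(net, homa_net_id);

            /* The pernet core allocated and zeroed hnet because
             * homa_net_ops.size is set; wire it up here.
             * global_homa is a hypothetical global instance.
             */
            hnet->homa = &global_homa;
            return 0;
    }

    static struct pernet_operations homa_net_ops = {
            .init = homa_net_init,
            .id   = &homa_net_id,
            .size = sizeof(struct homa_net),
    };

After register_pernet_subsys(&homa_net_ops), any code that holds a
struct net * can recover the per-netns state with net_generic(net,
homa_net_id), so no back-pointer from homa_net to net is required.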

> 
>>> +/**
>>> + * homa_clock() - Return a fine-grain clock value that is monotonic and
>>> + * consistent across cores.
>>> + * Return: see above.
>>> + */
>>> +static inline u64 homa_clock(void)
>>> +{
>>> +     /* As of May 2025 there does not appear to be a portable API that
>>> +      * meets Homa's needs:
>>> +      * - The Intel X86 TSC works well but is not portable.
>>> +      * - sched_clock() does not guarantee monotonicity or consistency.
>>> +      * - ktime_get_mono_fast_ns and ktime_get_raw_fast_ns are very slow
>>> +      *   (27 ns to read, vs 8 ns for TSC)
>>> +      * Thus we use a hybrid approach that uses TSC (via get_cycles) where
>>> +      * available (which should be just about everywhere Homa runs).
>>> +      */
>>> +#ifdef CONFIG_X86_TSC
>>> +     return get_cycles();
>>> +#else
>>> +     return ktime_get_mono_fast_ns();
>>> +#endif /* CONFIG_X86_TSC */
>>> +}
>>
>> ktime_get*() variants are fast enough to let e.g. pktgen handle
>> millions of packets per second. Both the TSC and
>> ktime_get_mono_fast_ns() suffer from various inconsistencies which
>> will cause the most unexpected issues in the most dangerous
>> situations. I strongly advise against this premature optimization.
> 
> Which ktime_get variant do you recommend instead of ktime_get_mono_fast_ns?
> 
> I feel pretty strongly about retaining the use of TSC on Intel
> platforms. As I have said before, Homa is attempting to operate in a
> much more aggressive latency domain than Linux is used to, and
> nanoseconds matter. I have been using TSC on Intel and AMD platforms
> for more than 15 years and I have never had any problems. Is there a
> specific inconsistency you know of that will cause "unexpected issues
> in the most dangerous situations"? 

The TSC raw value depends on the current CPU. According to the relevant
documentation, ktime_get_mono_fast_ns() is allowed to jump under certain
conditions: with either of them you can see sudden, unexpected tick
increases.
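
For comparison, a minimal sketch of the consistent alternative, using
ktime_get_ns() (whether its cost is acceptable for Homa's latency goals
is of course your call):

    #include <linux/types.h>
    #include <linux/timekeeping.h>

    static inline u64 homa_clock(void)
    {
            /* ktime_get_ns() reads CLOCK_MONOTONIC under the
             * timekeeping seqcount, so it is monotonic and consistent
             * across cores, unlike raw TSC reads or
             * ktime_get_mono_fast_ns().
             */
            return ktime_get_ns();
    }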

> If not, I would prefer to retain
> the use of TSC until someone can identify a real problem. Note that
> the choice of clock is now well encapsulated, so if a change should
> become necessary it will be very easy to make.

AFAICS, in the current revision there are several points that could
cause much greater latency - e.g. the long loops under BH lock with no
reschedule point. I'm surprised they don't show up as ms-latency
bottlenecks under stress tests.
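
Something like the following shape would bound the BH-lock hold time,
assuming process context (the lock and the homa_work_pending() /
homa_do_one_unit() helpers are illustrative names, not from the patch):

    spin_lock_bh(&homa->lock);
    while (homa_work_pending(homa)) {
            homa_do_one_unit(homa);
            if (need_resched()) {
                    /* Drop the lock so pending softirqs run and the
                     * scheduler gets a chance; the iteration must be
                     * restart-safe across this window.
                     */
                    spin_unlock_bh(&homa->lock);
                    cond_resched();
                    spin_lock_bh(&homa->lock);
            }
    }
    spin_unlock_bh(&homa->lock);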

I suggest removing such issues before doing micro-optimizations that,
at the very least, use APIs that are explicitly discouraged.

/P

