Date:   Wed, 29 Sep 2021 17:00:56 +0200
From:   Sebastien Laveze <sebastien.laveze@....nxp.com>
To:     Richard Cochran <richardcochran@...il.com>
Cc:     netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
        yangbo.lu@....com, yannick.vignon@....nxp.com,
        rui.sousa@....nxp.com
Subject: Re: [PATCH net-next] ptp: add vclock timestamp conversion IOCTL

On Tue, 2021-09-28 at 06:31 -0700, Richard Cochran wrote:
> On Tue, Sep 28, 2021 at 01:50:23PM +0200, Sebastien Laveze wrote:
> > Yes that would do it. Only drawback is that ALL rx and tx timestamps
> > are converted to the N domains instead of a few as needed.
> 
> No, the kernel would provide only those that are selected by the
> program via the socket option API.

But _all_ timestamps (rx and tx) are converted when a domain is
selected.

If we consider gPTP:
-using the ioctl, you only need to convert the sync receive timestamps.
PDelay (rx, tx, fup), sync (tx and fup) and signalling don't need to be
converted. So for the default sync interval of 125 ms, that's 8 ioctls /
second / domain.
-doing the conversion in the kernel means it will necessarily be done
for every timestamp handled by the socket. In addition, the vclock
device lookup is not free either, and is done for _each_ conversion.

In the end, if the new ioctl is not accepted, I think opening multiple
sockets with BPF filtering and a vclock bind is the most appropriate
solution for us. (more details below)

> Okay, so I briefly reviewed IEEE Std 802.1AS-2020 Clause 11.2.17.
> 
> I think what it boils down to is this:  (please correct me if I'm wrong)
> 
> When running PTP over multiple domains on one port, the P2P delay
> needs only to be measured in one domain.  It would be wasteful to
> measure the peer delay in multiple parallel domains.

From a high-level view, I understand that you would have N
instances/processes of linuxptp to support N domains? CMLDS performed
by one of them, and then some signalling to the other instances?

For our own stack (we work on a multi-OS 802.1AS-2020 implementation),
we took the approach of handling everything in a single process with a
single socket per port (our implementation also supports Bridge mode
with multiple ports).

What we currently miss in the kernel for better multi-domain usage, and
would like to find a solution for:
-allow PHC adjustment with virtual clocks. Otherwise scheduled traffic
cannot be used... (I've read your comments on this topic; we are
experimenting on MCUs and need to assess measurements)
-timer support for virtual clocks (likely nanosleep, as you suggested
IIRC).

Thanks,
Seb
