Message-ID: <9f132922-2bf7-4749-b8c7-4c57445f9cde@opensynergy.com>
Date: Tue, 16 Jul 2024 13:54:52 +0200
From: Peter Hilber <peter.hilber@...nsynergy.com>
To: David Woodhouse <dwmw2@...radead.org>, linux-kernel@...r.kernel.org,
virtualization@...ts.linux.dev, linux-arm-kernel@...ts.infradead.org,
linux-rtc@...r.kernel.org, "Ridoux, Julien" <ridouxj@...zon.com>,
virtio-dev@...ts.linux.dev, "Luu, Ryan" <rluu@...zon.com>,
"Chashper, David" <chashper@...zon.com>
Cc: "Christopher S . Hall" <christopher.s.hall@...el.com>,
Jason Wang <jasowang@...hat.com>, John Stultz <jstultz@...gle.com>,
"Michael S . Tsirkin" <mst@...hat.com>, netdev@...r.kernel.org,
Richard Cochran <richardcochran@...il.com>, Stephen Boyd <sboyd@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>, Xuan Zhuo
<xuanzhuo@...ux.alibaba.com>, Marc Zyngier <maz@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Daniel Lezcano <daniel.lezcano@...aro.org>,
Alessandro Zummo <a.zummo@...ertech.it>,
Alexandre Belloni <alexandre.belloni@...tlin.com>,
qemu-devel <qemu-devel@...gnu.org>, Simon Horman <horms@...nel.org>
Subject: Re: [RFC PATCH v4] ptp: Add vDSO-style vmclock support
On 11.07.24 09:50, David Woodhouse wrote:
> On Thu, 2024-07-11 at 09:25 +0200, Peter Hilber wrote:
>>
>> IMHO this phrasing is better, since it directly refers to the state of the
>> structure.
>
> Thanks. I'll update it.
>
>> AFAIU if there would be abnormal delays in store buffers, causing some
>> driver to still see the old clock for some time, the monotonicity could be
>> violated:
>>
>> 1. device writes new, much slower clock to store buffer
>> 2. some time passes
>> 3. driver reads old, much faster clock
>> 4. device writes store buffer to cache
>> 5. driver reads new, much slower clock
>>
>> But I hope such delays do not occur.
>
> For the case of the hypervisor←→guest interface this should be handled
> by the use of memory barriers and the seqcount lock.
>
> The guest driver reads the seqcount, performs a read memory barrier,
> then reads the contents of the structure. Then performs *another* read
> memory barrier, and checks the seqcount hasn't changed:
> https://git.infradead.org/?p=users/dwmw2/linux.git;a=blob;f=drivers/ptp/ptp_vmclock.c;hb=vmclock#l351
>
> The converse happens with write barriers on the hypervisor side:
> https://git.infradead.org/?p=users/dwmw2/qemu.git;a=blob;f=hw/acpi/vmclock.c;hb=vmclock#l68
My point is that, mapping the barriers onto steps 1. - 5. above, the
problematic interleaving is still permitted:

3. read HW counter, smp_rmb, read seqcount
4. store seqcount, smp_wmb, stores, smp_wmb, store seqcount all become
   effective
5. read seqcount, smp_rmb, read HW counter

AFAIU this remains a theoretical problem, which suggests using stronger
barriers.
>
> Do we need to think harder about the ordering across a real PCI bus? It
> isn't entirely unreasonable for this to be implemented in hardware if
> we eventually add a counter_id value for a bus-visible counter like the
> Intel Always Running Timer (ART). I'm also OK with saying that device
> implementations may only provide the shared memory structure if they
> can ensure memory ordering.
Sounds good to me. This statement would then also address the above.