Message-ID: <DD886A0D-B8E2-4749-AB21-7B26A4B70374@infradead.org>
Date: Tue, 16 Jul 2024 13:32:23 +0100
From: David Woodhouse <dwmw2@...radead.org>
To: Peter Hilber <peter.hilber@...nsynergy.com>, linux-kernel@...r.kernel.org,
virtualization@...ts.linux.dev, linux-arm-kernel@...ts.infradead.org,
linux-rtc@...r.kernel.org, "Ridoux, Julien" <ridouxj@...zon.com>,
virtio-dev@...ts.linux.dev, "Luu, Ryan" <rluu@...zon.com>,
"Chashper, David" <chashper@...zon.com>
CC: "Christopher S . Hall" <christopher.s.hall@...el.com>,
Jason Wang <jasowang@...hat.com>, John Stultz <jstultz@...gle.com>,
"Michael S . Tsirkin" <mst@...hat.com>, netdev@...r.kernel.org,
Richard Cochran <richardcochran@...il.com>, Stephen Boyd <sboyd@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>, Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
Marc Zyngier <maz@...nel.org>, Mark Rutland <mark.rutland@....com>,
Daniel Lezcano <daniel.lezcano@...aro.org>,
Alessandro Zummo <a.zummo@...ertech.it>,
Alexandre Belloni <alexandre.belloni@...tlin.com>,
qemu-devel <qemu-devel@...gnu.org>, Simon Horman <horms@...nel.org>
Subject: Re: [RFC PATCH v4] ptp: Add vDSO-style vmclock support

On 16 July 2024 12:54:52 BST, Peter Hilber <peter.hilber@...nsynergy.com> wrote:
>On 11.07.24 09:50, David Woodhouse wrote:
>> On Thu, 2024-07-11 at 09:25 +0200, Peter Hilber wrote:
>>>
>>> IMHO this phrasing is better, since it directly refers to the state of the
>>> structure.
>>
>> Thanks. I'll update it.
>>
>>> AFAIU if there would be abnormal delays in store buffers, causing some
>>> driver to still see the old clock for some time, the monotonicity could be
>>> violated:
>>>
>>> 1. device writes new, much slower clock to store buffer
>>> 2. some time passes
>>> 3. driver reads old, much faster clock
>>> 4. device writes store buffer to cache
>>> 5. driver reads new, much slower clock
>>>
>>> But I hope such delays do not occur.
>>
>> For the case of the hypervisor←→guest interface, this should be handled
>> by the use of memory barriers and the seqcount lock.
>>
>> The guest driver reads the seqcount, performs a read memory barrier,
>> then reads the contents of the structure. Then performs *another* read
>> memory barrier, and checks the seqcount hasn't changed:
>> https://git.infradead.org/?p=users/dwmw2/linux.git;a=blob;f=drivers/ptp/ptp_vmclock.c;hb=vmclock#l351
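
To spell that read side out, here is a minimal standalone sketch, with C11
acquire fences standing in for the read barriers and purely illustrative
names (vmclock_shm, seq_count, counter_value, period) rather than the
actual vmclock ABI:

#include <stdatomic.h>
#include <stdint.h>

struct vmclock_shm {
	_Atomic uint32_t seq_count;	/* odd while the device is updating */
	_Atomic uint64_t counter_value;	/* reference counter reading */
	_Atomic uint64_t period;	/* nominal counter period */
	/* ... rest of the shared structure ... */
};

/*
 * Guest side: take a consistent snapshot of the shared structure.
 * Relaxed atomic accesses play the role of READ_ONCE() here.
 */
static void vmclock_read(struct vmclock_shm *shm,
			 uint64_t *value, uint64_t *period)
{
	uint32_t seq;

	do {
		seq = atomic_load_explicit(&shm->seq_count,
					   memory_order_relaxed);
		atomic_thread_fence(memory_order_acquire);	/* rmb */

		*value  = atomic_load_explicit(&shm->counter_value,
					       memory_order_relaxed);
		*period = atomic_load_explicit(&shm->period,
					       memory_order_relaxed);

		atomic_thread_fence(memory_order_acquire);	/* rmb */

		/* Retry if the device was mid-update or updated meanwhile. */
	} while ((seq & 1) ||
		 atomic_load_explicit(&shm->seq_count,
				      memory_order_relaxed) != seq);
}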
>>
>> The converse happens with write barriers on the hypervisor side:
>> https://git.infradead.org/?p=users/dwmw2/qemu.git;a=blob;f=hw/acpi/vmclock.c;hb=vmclock#l68
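
And a matching sketch of the write side, reusing the illustrative
vmclock_shm layout above, with release fences in place of the write
barriers (this is not the actual QEMU code behind the link):

/*
 * Device/hypervisor side: mark the structure as being updated (odd
 * seq_count), store the payload, then mark it valid again (even), with
 * a write barrier on each side of the payload stores.
 */
static void vmclock_update(struct vmclock_shm *shm,
			   uint64_t value, uint64_t period)
{
	uint32_t seq = atomic_load_explicit(&shm->seq_count,
					    memory_order_relaxed);

	atomic_store_explicit(&shm->seq_count, seq + 1,
			      memory_order_relaxed);	/* now odd */
	atomic_thread_fence(memory_order_release);	/* wmb */

	atomic_store_explicit(&shm->counter_value, value,
			      memory_order_relaxed);
	atomic_store_explicit(&shm->period, period,
			      memory_order_relaxed);

	atomic_thread_fence(memory_order_release);	/* wmb */
	atomic_store_explicit(&shm->seq_count, seq + 2,
			      memory_order_relaxed);	/* even again */
}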
>
>My point is that, looking at the above steps 1. - 5.:
>
>3. read HW counter, smp_rmb, read seqcount
>4. store seqcount, smp_wmb, stores, smp_wmb, store seqcount become effective
>5. read seqcount, smp_rmb, read HW counter
>
>AFAIU this would still be a theoretical problem, suggesting the use of
>stronger barriers.

This seems like a bug on the guest side. The HW counter needs to be read *within* the (paired, matching) seqcount reads, not before or after.
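
Concretely (a sketch only, reusing the illustrative vmclock_shm layout
from above, with a hypothetical read_counter() helper standing in for
the real counter read):

/*
 * Guest side: sample the hardware counter *between* the two seqcount
 * reads, so the counter value and the clock snapshot are guaranteed to
 * describe the same clock definition.
 */
static void vmclock_sample(struct vmclock_shm *shm,
			   uint64_t (*read_counter)(void),
			   uint64_t *cycles, uint64_t *value,
			   uint64_t *period)
{
	uint32_t seq;

	do {
		seq = atomic_load_explicit(&shm->seq_count,
					   memory_order_relaxed);
		atomic_thread_fence(memory_order_acquire);	/* rmb */

		*cycles = read_counter();	/* inside the seq reads */
		*value  = atomic_load_explicit(&shm->counter_value,
					       memory_order_relaxed);
		*period = atomic_load_explicit(&shm->period,
					       memory_order_relaxed);

		atomic_thread_fence(memory_order_acquire);	/* rmb */
	} while ((seq & 1) ||
		 atomic_load_explicit(&shm->seq_count,
				      memory_order_relaxed) != seq);
}

If the device switches to a much slower clock while this runs, the final
seq_count check fails and the whole sample (counter *and* snapshot) is
retried, so the caller never combines a counter value with a stale clock
definition.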