Message-ID: <10db46e9-b753-43bb-a826-14d4c11026bd@opensynergy.com>
Date: Tue, 16 Jul 2024 15:20:37 +0200
From: Peter Hilber <peter.hilber@...nsynergy.com>
To: David Woodhouse <dwmw2@...radead.org>, linux-kernel@...r.kernel.org,
 virtualization@...ts.linux.dev, linux-arm-kernel@...ts.infradead.org,
 linux-rtc@...r.kernel.org, "Ridoux, Julien" <ridouxj@...zon.com>,
 virtio-dev@...ts.linux.dev, "Luu, Ryan" <rluu@...zon.com>,
 "Chashper, David" <chashper@...zon.com>
Cc: "Christopher S . Hall" <christopher.s.hall@...el.com>,
 Jason Wang <jasowang@...hat.com>, John Stultz <jstultz@...gle.com>,
 "Michael S . Tsirkin" <mst@...hat.com>, netdev@...r.kernel.org,
 Richard Cochran <richardcochran@...il.com>, Stephen Boyd <sboyd@...nel.org>,
 Thomas Gleixner <tglx@...utronix.de>, Xuan Zhuo
 <xuanzhuo@...ux.alibaba.com>, Marc Zyngier <maz@...nel.org>,
 Mark Rutland <mark.rutland@....com>,
 Daniel Lezcano <daniel.lezcano@...aro.org>,
 Alessandro Zummo <a.zummo@...ertech.it>,
 Alexandre Belloni <alexandre.belloni@...tlin.com>,
 qemu-devel <qemu-devel@...gnu.org>, Simon Horman <horms@...nel.org>
Subject: Re: [RFC PATCH v4] ptp: Add vDSO-style vmclock support

On 16.07.24 14:32, David Woodhouse wrote:
> On 16 July 2024 12:54:52 BST, Peter Hilber <peter.hilber@...nsynergy.com> wrote:
>> On 11.07.24 09:50, David Woodhouse wrote:
>>> On Thu, 2024-07-11 at 09:25 +0200, Peter Hilber wrote:
>>>>
>>>> IMHO this phrasing is better, since it directly refers to the state of the
>>>> structure.
>>>
>>> Thanks. I'll update it.
>>>
>>>> AFAIU, if there were abnormal delays in store buffers, causing some
>>>> driver to still see the old clock for some time, monotonicity could be
>>>> violated:
>>>>
>>>> 1. device writes new, much slower clock to store buffer
>>>> 2. some time passes
>>>> 3. driver reads old, much faster clock
>>>> 4. device writes store buffer to cache
>>>> 5. driver reads new, much slower clock
>>>>
>>>> But I hope such delays do not occur.
>>>
>>> For the case of the hypervisor←→guest interface this should be handled
>>> by the use of memory barriers and the seqcount lock.
>>>
>>> The guest driver reads the seqcount, performs a read memory barrier,
>>> then reads the contents of the structure. Then performs *another* read
>>> memory barrier, and checks the seqcount hasn't changed:
>>> https://git.infradead.org/?p=users/dwmw2/linux.git;a=blob;f=drivers/ptp/ptp_vmclock.c;hb=vmclock#l351
>>>
>>> The converse happens with write barriers on the hypervisor side:
>>> https://git.infradead.org/?p=users/dwmw2/qemu.git;a=blob;f=hw/acpi/vmclock.c;hb=vmclock#l68
>>
>> My point is that, looking at the above steps 1. - 5.:
>>
>> 3. read HW counter, smp_rmb, read seqcount
>> 4. store seqcount, smp_wmb, stores, smp_wmb, store seqcount become effective
>> 5. read seqcount, smp_rmb, read HW counter
>>
>> AFAIU this would still be a theoretical problem suggesting the use of
>> stronger barriers.
> 
> This seems like a bug on the guest side. The HW counter needs to be read *within* the (paired, matching) seqcount reads, not before or after.
> 
> 

There would be paired reads:

1. device writes new, much slower clock to store buffer
2. some time passes
3. read seqcount, smp_rmb, ..., read HW counter, smp_rmb, read seqcount
4. store seqcount, smp_wmb, stores, smp_wmb, store seqcount all become
   effective only now
5. read seqcount, smp_rmb, read HW counter, ..., smp_rmb, read seqcount

I just omitted the parts which do not necessarily need to happen close to
step 4 for monotonicity to be violated. My point is that the write in step 1
could become visible to other cores long after it happened on the local core
(namely, only during step 4).
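For reference, the seqcount pattern under discussion can be sketched in
portable C11 atomics (acquire/release fences standing in for the kernel's
smp_rmb()/smp_wmb()). The structure fields and function names here are
illustrative only, not the actual vmclock ABI; the sketch shows the paired
seqcount reads bracketing the data reads, which is what the argument above
assumes:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical simplified shared clock structure (illustrative only). */
struct vmclock_shm {
	_Atomic uint32_t seq_count;	/* odd while the device is updating */
	uint64_t counter_period_ns;	/* example clock fields */
	uint64_t time_sec;
};

/* Writer (device/hypervisor) side: make seqcount odd, update the fields,
 * make it even again. Release fences stand in for smp_wmb(). */
static void vmclock_write(struct vmclock_shm *s, uint64_t period, uint64_t sec)
{
	uint32_t seq = atomic_load_explicit(&s->seq_count,
					    memory_order_relaxed);

	atomic_store_explicit(&s->seq_count, seq + 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_release);	/* smp_wmb() */
	s->counter_period_ns = period;
	s->time_sec = sec;
	atomic_thread_fence(memory_order_release);	/* smp_wmb() */
	atomic_store_explicit(&s->seq_count, seq + 2, memory_order_relaxed);
}

/* Reader (guest driver) side: retry until a stable, even seqcount brackets
 * the data reads. Acquire fences stand in for smp_rmb(). */
static void vmclock_read(struct vmclock_shm *s, uint64_t *period, uint64_t *sec)
{
	uint32_t seq1, seq2;

	do {
		seq1 = atomic_load_explicit(&s->seq_count,
					    memory_order_relaxed);
		atomic_thread_fence(memory_order_acquire);  /* smp_rmb() */
		*period = s->counter_period_ns;
		*sec = s->time_sec;
		atomic_thread_fence(memory_order_acquire);  /* smp_rmb() */
		seq2 = atomic_load_explicit(&s->seq_count,
					    memory_order_relaxed);
	} while (seq1 != seq2 || (seq1 & 1));
}
```

Note that this protocol only guarantees the reader sees a consistent
snapshot; it does not by itself bound how long the writer's stores may sit
in a store buffer before becoming globally visible, which is exactly the
delay the interleaving above relies on.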
