Message-ID: <CAJfpegtbwfioa0gv4j1SoRu3X8vX3kZUQC=QJo=dcfOD2OjKzw@mail.gmail.com>
Date:   Wed, 7 Nov 2018 15:45:55 +0100
From:   Miklos Szeredi <miklos@...redi.hu>
To:     Kirill Tkhai <ktkhai@...tuozzo.com>
Cc:     linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 6/6] fuse: Verify userspace asks to requeue interrupt that
 we really sent

On Wed, Nov 7, 2018 at 3:25 PM, Kirill Tkhai <ktkhai@...tuozzo.com> wrote:
> On 07.11.2018 16:55, Miklos Szeredi wrote:
>> On Tue, Nov 6, 2018 at 10:31 AM, Kirill Tkhai <ktkhai@...tuozzo.com> wrote:
>>> When queue_interrupt() is called from fuse_dev_do_write(),
>>> the request id comes directly from userspace. Userspace may
>>> pass any request id, even the id of a request we have not
>>> interrupted (or even of a background request). This patch
>>> adds a sanity check to make the kernel safe against that.
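
(For illustration only: a minimal sketch of the kind of check being
described. The function shape and error handling here are assumptions,
not necessarily the literal patch.)

static int queue_interrupt(struct fuse_iqueue *fiq, struct fuse_req *req)
{
        spin_lock(&fiq->waitq.lock);
        /* Reject ids userspace made up: only requeue an interrupt
         * that the kernel really sent for this request. */
        if (!test_bit(FR_INTERRUPTED, &req->flags)) {
                spin_unlock(&fiq->waitq.lock);
                return -EINVAL;
        }
        if (list_empty(&req->intr_entry)) {
                list_add_tail(&req->intr_entry, &fiq->interrupts);
                wake_up_locked(&fiq->waitq);
        }
        spin_unlock(&fiq->waitq.lock);
        return 0;
}
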
>>
>> Okay, I understand this far.
>>
>>> The problem is that a real interrupt may be queued and requeued
>>> by two tasks on two CPUs. In this case, the requeuer has no
>>> guarantee that it sees the FR_INTERRUPTED bit on its CPU (since
>>> we know nothing about the way userspace manages requests
>>> between its threads and whether it uses SMP barriers).
>>
>> This sounds like BS. There's an explicit smp_mb__after_atomic()
>> after the set_bit(FR_INTERRUPTED,...), which means FR_INTERRUPTED is
>> going to be visible on all CPUs after this, no need to fool around
>> with setting FR_INTERRUPTED again, etc...
>
> Hm, but how does it make the bit visible on all CPUs?
>
> The problem is that an smp_mb_xxx() barrier on one CPU only makes
> sense when paired with an appropriate barrier on the second CPU.
> There is no guarantee of visibility if the second CPU does not
> have a barrier:
>
>   CPU 1                  CPU2                        CPU3                       CPU4                        CPU5
>
>   set FR_INTERRUPTED     set FR_SENT
>   <smp mb>               <smp mb>
>   test FR_SENT (== 0)    test FR_INTERRUPTED (==1)
>                          list_add[&req->intr_entry]  read[req by intr_entry]
>                                                      <place to insert a barrier>
>                                                      goto userspace
>                                                      write in userspace buffer
>                                                                                 read from userspace buffer
>                                                                                 write to userspace buffer
>                                                                                                              read from userspace buffer
>                                                                                                              enter kernel
>                                                                                                              <place to insert a barrier>
>                                                                                                              test FR_INTERRUPTED <- Not visible
>
> The sequence:
>
> set_bit(FR_INTERRUPTED, ...)
> smp_mb__after_atomic();
> test_bit(FR_SENT, &req->flags)
>
> just guarantees the expected ordering on CPU2, which uses <smp mb>,
> but gives CPU5 no such guarantee.
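
(For reference, the barrier pairing under discussion, written out as a
sketch. The flag names and barrier calls follow the thread; the
surrounding control flow is illustrative only.)

/* CPU1: request is being interrupted */
set_bit(FR_INTERRUPTED, &req->flags);
smp_mb__after_atomic();         /* orders the set_bit() before the test */
if (test_bit(FR_SENT, &req->flags))
        queue_interrupt(fiq, req);

/* CPU2: request has just been copied to userspace */
set_bit(FR_SENT, &req->flags);
smp_mb__after_atomic();         /* pairs with the barrier on CPU1 */
if (test_bit(FR_INTERRUPTED, &req->flags))
        queue_interrupt(fiq, req);

A CPU that only learns the request id from a userspace buffer takes part
in neither pairing, which is the gap the diagram above points at.
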

What you are missing is that there are other things that synchronize
memory access besides the memory barrier.  In your example the
ordering will be guaranteed by the fiq->waitq.lock in
queue_interrupt() on CPU2: the set_bit() cannot move past the one-way
barrier provided by the spin_unlock().
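
(A minimal sketch of that point, with hypothetical function names and
simplified bodies: the unlock in the queueing path and the lock in the
requeueing path form a release/acquire pair on fiq->waitq.lock, so the
requeuer does not need an explicit smp_mb() of its own.)

/* CPU2: first queueing of the interrupt */
static void queue_side(struct fuse_iqueue *fiq, struct fuse_req *req)
{
        spin_lock(&fiq->waitq.lock);
        list_add_tail(&req->intr_entry, &fiq->interrupts);
        spin_unlock(&fiq->waitq.lock);  /* release: the list_add_tail() and
                                           everything before it cannot move
                                           past the unlock */
}

/* CPU5: requeue requested by userspace via fuse_dev_do_write() */
static void requeue_side(struct fuse_iqueue *fiq, struct fuse_req *req)
{
        spin_lock(&fiq->waitq.lock);    /* acquire: pairs with the unlock
                                           above, so what the queueing CPU
                                           did before releasing the lock is
                                           ordered before this section */
        if (test_bit(FR_INTERRUPTED, &req->flags) &&
            list_empty(&req->intr_entry))
                list_add_tail(&req->intr_entry, &fiq->interrupts);
        spin_unlock(&fiq->waitq.lock);
}
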

Thanks,
Miklos
