Message-ID: <CACGkMEv2kB9J1qGYkGkywk1YHV2gU2fMr7qx4vEv9L5f6qL5mg@mail.gmail.com>
Date: Wed, 31 May 2023 16:17:27 +0800
From: Jason Wang <jasowang@...hat.com>
To: Oleg Nesterov <oleg@...hat.com>
Cc: Mike Christie <michael.christie@...cle.com>, linux@...mhuis.info,
nicolas.dichtel@...nd.com, axboe@...nel.dk, ebiederm@...ssion.com,
torvalds@...ux-foundation.org, linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org, mst@...hat.com,
sgarzare@...hat.com, stefanha@...hat.com, brauner@...nel.org
Subject: Re: [PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
On Wed, May 31, 2023 at 3:25 PM Oleg Nesterov <oleg@...hat.com> wrote:
>
> On 05/31, Jason Wang wrote:
> >
> > On 2023/5/23 20:15, Oleg Nesterov wrote:
> > >
> > >         /* make sure flag is seen after deletion */
> > >         smp_wmb();
> > >         llist_for_each_entry_safe(work, work_next, node, node) {
> > >                 clear_bit(VHOST_WORK_QUEUED, &work->flags);
> > >
> > > I am not sure about smp_wmb + clear_bit. Once we clear VHOST_WORK_QUEUED,
> > > vhost_work_queue() can add this work again and change work->node->next.
> > >
> > > That is why we use _safe, but we need to ensure that llist_for_each_safe()
> > > completes LOAD(work->node->next) before VHOST_WORK_QUEUED is cleared.
> >
> > This should be fine since the store is not speculated, so work->node->next
> > needs to be loaded before VHOST_WORK_QUEUED is cleared in order to evaluate
> > the loop condition.
>
> I don't understand you. OK, to simplify, suppose we have 2 global vars
>
>     void *PTR = something_non_null;
>     unsigned long FLAGS = -1ul;
>
> Now I think this code
>
>     CPU_0                               CPU_1
>
>     void *ptr = PTR;                    if (!test_and_set_bit(0, FLAGS))
>     clear_bit(0, FLAGS);                        PTR = NULL;
>     BUG_ON(!ptr);
>
> is racy and can hit the BUG_ON(!ptr).
This seems different from the above case? And you can hit the BUG_ON() with
the following execution sequence:
[cpu 0] clear_bit(0, FLAGS);
[cpu 1] if (!test_and_set_bit(0, FLAGS))
[cpu 1] PTR = NULL;
[cpu 0] BUG_ON(!ptr)
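To map this back to vhost: CPU_1 corresponds to vhost_work_queue(), which
sets the bit and then rewrites work->node.next via llist_add(); CPU_0
corresponds to the worker loop quoted above, which loads work->node.next and
then clears the bit. Roughly, the queueing side looks like this (a
simplified sketch, struct and field names approximated):

        /* vhost_work_queue(), simplified sketch */
        if (!test_and_set_bit(VHOST_WORK_QUEUED, &work->flags)) {
                /* llist_add() rewrites work->node.next */
                llist_add(&work->node, &worker->work_list);
                /* ... and then the worker task is woken up */
        }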
In the vhost code, there's a condition before the clear_bit(), and that
condition sits inside llist_for_each_entry_safe():
#define llist_for_each_entry_safe(pos, n, node, member)                       \
        for (pos = llist_entry((node), typeof(*pos), member);                 \
             member_address_is_nonnull(pos, member) &&                        \
             (n = llist_entry(pos->member.next, typeof(*n), member), true);   \
             pos = n)
The clear_bit() is a store, and stores are not speculated, so there is a
control dependency: the store can't be executed until the condition
expression is evaluated, which requires pos->member.next (work->node.next)
to be loaded.
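On the worker side that gives (again a simplified sketch, not the exact
vhost_worker() code):

        node = llist_del_all(&worker->work_list);
        /* make sure flag is seen after deletion */
        smp_wmb();
        llist_for_each_entry_safe(work, work_next, node, node) {
                /*
                 * Reaching this point required evaluating the loop
                 * condition, which loads work->node.next (into work_next).
                 * The clear_bit() store below is control-dependent on that
                 * evaluation, so it cannot be performed before the load.
                 */
                clear_bit(VHOST_WORK_QUEUED, &work->flags);
                __set_current_state(TASK_RUNNING);
                work->fn(work);
        }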
>
> I guess it is fine on x86, but in general you need smp_mb__before_atomic()
> before clear_bit(), or clear_bit_unlock().
>
> > >         __set_current_state(TASK_RUNNING);
> > >
> > > Why do we set TASK_RUNNING inside the loop? Does this mean that work->fn()
> > > can return with current->state != RUNNING ?
> >
> > It is because the state was set to TASK_INTERRUPTIBLE at the beginning of
> > the loop; otherwise, staying in that state might have side effects while
> > executing work->fn().
>
> Again, I don't understand you. So let me repeat: can work->fn() return with
> current->__state != TASK_RUNNING ? If not (and I'd say it should not), you can
> do __set_current_state(TASK_RUNNING) once, before llist_for_each_entry_safe().
>
Ok, that should be fine.
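Something like this, I guess (a rough, untested sketch, field names
approximated):

        node = llist_del_all(&worker->work_list);
        if (!node) {
                schedule();
        } else {
                node = llist_reverse_order(node);
                /* make sure flag is seen after deletion */
                smp_wmb();
                __set_current_state(TASK_RUNNING);
                llist_for_each_entry_safe(work, work_next, node, node) {
                        clear_bit(VHOST_WORK_QUEUED, &work->flags);
                        work->fn(work);
                        if (need_resched())
                                schedule();
                }
        }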
Thanks
> > > Now the main question. Whatever we do, SIGKILL/SIGSTOP/etc can come right
> > > before we call work->fn(). Is it "safe" to run this callback with
> > > signal_pending() or fatal_signal_pending() ?
> >
> > It looks safe since:
> >
> > 1) vhost holds a refcnt of the mm
> > 2) release will sync with the worker
>
> Well, that's not what I asked... nevermind, please forget.
>
> Thanks.
>
> Oleg.
>