Message-ID: <538DD3E1.8000805@redhat.com>
Date: Tue, 03 Jun 2014 15:55:45 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Vlad Yasevich <vyasevich@...il.com>,
Eric Dumazet <eric.dumazet@...il.com>,
"Michael S. Tsirkin" <mst@...hat.com>
CC: netdev@...r.kernel.org, virtualization@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
David Miller <davem@...emloft.net>
Subject: Re: [PULL 2/2] vhost: replace rcu with mutex

On 03/06/2014 15:35, Vlad Yasevich wrote:
> > Yes, vhost_get_vq_desc must be called with the vq mutex held.
> >
> > The rcu_read_lock/unlock in translate_desc is unnecessary.
>
> If that's true, then does dev->memory really need to be rcu protected?
> It appears to always be read under mutex.
It's always read under one of many mutexes, yes.
However, it's still RCU-like in the sense that you separate the removal
and reclamation phases, so you still need
rcu_dereference()/rcu_assign_pointer().

With this mechanism, readers do not contend for the mutexes with the
VHOST_SET_MEMORY ioctl, except for the very short lock-and-unlock
sequence at the end of it.  They also never contend for the mutexes
among themselves (which would be the case if VHOST_SET_MEMORY locked
all the mutexes).
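
For concreteness, here is a minimal userspace sketch of that pattern
(all names hypothetical, not the actual vhost code): per-virtqueue
pthread mutexes stand in for vq->mutex, and C11 atomics stand in for
rcu_assign_pointer()/rcu_dereference().  The writer publishes the new
table, then briefly takes and drops every mutex, so no reader can still
hold a pointer to the old table by the time it is freed.

#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

#define NVQS 2                           /* two virtqueues, as in vhost-net */

struct memory_table { int nregions; /* regions would follow */ };

struct dev {
	struct memory_table *_Atomic memory; /* published table */
	pthread_mutex_t vq_mutex[NVQS];      /* one mutex per virtqueue */
};

static struct dev the_dev = {
	.vq_mutex = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER },
};

/* Reader: runs with the vq mutex held, like vhost_get_vq_desc(). */
static void process_vq(struct dev *d, int vq)
{
	pthread_mutex_lock(&d->vq_mutex[vq]);
	struct memory_table *mem =
		atomic_load_explicit(&d->memory, memory_order_consume);
	/* ... translate descriptors against 'mem' ... */
	(void)mem;
	pthread_mutex_unlock(&d->vq_mutex[vq]);
}

/* Writer: the SET_MEMORY path.  Publishing the new table is the removal
 * phase; cycling every vq mutex is the reclamation barrier: once each
 * mutex has been taken and dropped, no reader can still be using the
 * old table, so it is safe to free. */
static void set_memory(struct dev *d, struct memory_table *newmem)
{
	struct memory_table *old =
		atomic_exchange_explicit(&d->memory, newmem, memory_order_release);

	for (int i = 0; i < NVQS; i++) {
		pthread_mutex_lock(&d->vq_mutex[i]);
		pthread_mutex_unlock(&d->vq_mutex[i]);
	}
	free(old);                           /* free(NULL) on the first call is fine */
}

int main(void)
{
	set_memory(&the_dev, calloc(1, sizeof(struct memory_table)));
	process_vq(&the_dev, 0);
	return 0;
}

The lock-and-unlock loop is the "very short lock-and-unlock sequence at
the end" mentioned above; it plays the role of the grace period that
separates removal from reclamation.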

You could also wrap all virtqueue processing in an rwsem and take the
rwsem for write in VHOST_SET_MEMORY (a sketch follows the list below).
That would simplify some things, but:
- it would unnecessarily complicate the code for all users of vhost_get_vq_desc
- suppose the reader-writer lock is fair, and VHOST_SET_MEMORY places a
writer in the queue. Then a long-running reader R1 could still block
another reader R2, because the writer would be served before R2.
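
Roughly, that alternative would look like this (again a userspace sketch
with a pthread rwlock and hypothetical names): every reader brackets its
virtqueue processing with the read lock, and the VHOST_SET_MEMORY path
takes the lock for write, so with a fair lock reader R2 queues behind
the writer and therefore behind R1.

#include <pthread.h>
#include <stdlib.h>

struct memory_table;                         /* same table type as above */

static pthread_rwlock_t memory_rwsem = PTHREAD_RWLOCK_INITIALIZER;
static struct memory_table *memory;

/* Reader: every vhost_get_vq_desc() caller would do this. */
static void process_vq_rwsem(void)
{
	pthread_rwlock_rdlock(&memory_rwsem);    /* R1 may hold this for a long time */
	/* ... translate descriptors against 'memory' ... */
	pthread_rwlock_unlock(&memory_rwsem);
}

/* Writer: a queued writer here also delays reader R2 if the lock is fair. */
static void set_memory_rwsem(struct memory_table *newmem)
{
	pthread_rwlock_wrlock(&memory_rwsem);
	struct memory_table *old = memory;
	memory = newmem;
	pthread_rwlock_unlock(&memory_rwsem);
	free(old);
}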

The RCU-like approach avoids both problems, which matters because the
code stays generally simpler and because VHOST_SET_MEMORY is the only
vhost ioctl that can happen in the hot path.
Paolo