Date:   Fri, 28 Oct 2022 11:29:00 +0200
From:   Niklas Schnelle <schnelle@...ux.ibm.com>
To:     Jason Gunthorpe <jgg@...dia.com>
Cc:     Matthew Rosato <mjrosato@...ux.ibm.com>, iommu@...ts.linux.dev,
        Joerg Roedel <joro@...tes.org>, Will Deacon <will@...nel.org>,
        Robin Murphy <robin.murphy@....com>,
        Gerd Bayer <gbayer@...ux.ibm.com>,
        Pierre Morel <pmorel@...ux.ibm.com>,
        linux-s390@...r.kernel.org, borntraeger@...ux.ibm.com,
        hca@...ux.ibm.com, gor@...ux.ibm.com,
        gerald.schaefer@...ux.ibm.com, agordeev@...ux.ibm.com,
        svens@...ux.ibm.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/5] iommu/s390: Use RCU to allow concurrent domain_list
 iteration

On Thu, 2022-10-27 at 11:03 -0300, Jason Gunthorpe wrote:
> On Thu, Oct 27, 2022 at 03:35:57PM +0200, Niklas Schnelle wrote:
> > On Thu, 2022-10-27 at 09:56 -0300, Jason Gunthorpe wrote:
> > > On Thu, Oct 27, 2022 at 02:44:49PM +0200, Niklas Schnelle wrote:
> > > > On Mon, 2022-10-24 at 13:26 -0300, Jason Gunthorpe wrote:
> > > > > On Mon, Oct 24, 2022 at 05:22:24PM +0200, Niklas Schnelle wrote:
> > > > > 
> > > > > > Thanks for the explanation, I'd still like to grok this a bit
> > > > > > more if you don't mind. If I read things correctly,
> > > > > > synchronize_rcu() should run in the context of the VFIO ioctl in
> > > > > > this case and shouldn't block anything else in the kernel,
> > > > > > correct? At least that's how I understand the synchronize_rcu()
> > > > > > comments and the fact that e.g.
> > > > > > net/vmw_vsock/virtio_transport.c:virtio_vsock_remove() also does
> > > > > > a synchronize_rcu() and can be triggered from user-space too.
> > > > > 
> > > > > Yes, but I wouldn't look in the kernel to understand if things are OK
> > > > >  
> > > > > > So we're more worried about user-space getting slowed down than
> > > > > > about a Denial-of-Service against other kernel tasks.
> > > > > 
> > > > > Yes, functionally it is OK, but for something like vfio with a
> > > > > vIOMMU you could be looking at several domains that have to be
> > > > > detached sequentially, and with grace periods > 1s it can take
> > > > > multiple seconds to complete something like a close() system
> > > > > call. Generally it should be weighed carefully.
> > > > > 
> > > > > Jason
> > > > 
> > > > Thanks for the detailed explanation. Then let's not put a
> > > > synchronize_rcu() in detach; as I said, as long as the I/O
> > > > translation tables are still there, an IOTLB flush after
> > > > zpci_unregister_ioat() should only result in an ignorable error.
> > > > That said, I think that if we don't have the synchronize_rcu() in
> > > > detach we need it in s390_domain_free() before freeing the I/O
> > > > translation tables.
> > > 
> > > Yes, it would be appropriate to free those using one of the RCU
> > > freers (e.g. kfree_rcu()), not synchronize_rcu()
> > > 
> > > Jason
> > 
> > They are allocated via kmem_cache_alloc() from caches shared by all
> > IOMMUs, so we can't use kfree_rcu() directly. Also, we only free the
> > entire I/O translation table of an IOMMU at once, after it is no
> > longer used; before that it only grows. So I think synchronize_rcu()
> > is the obvious and simple choice, since we only need a single grace
> > period.
> 
> It has the same issue as doing it for the other reason: adding
> synchronize_rcu() to the domain free path is undesirable.
> 
> The best thing is to do as kfree_rcu() does now, basically:
> 
> rcu_head = kzalloc(sizeof(*rcu_head), GFP_NOWAIT | __GFP_NOWARN);
> if (!rcu_head)
> 	synchronize_rcu();
> else
> 	call_rcu(rcu_head, free_cb);
> 
> And then call kmem_cache_free() from the rcu callback
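
If I follow, a bare rcu_head wouldn't let the callback find the table
again, so this would need a small wrapper struct around it. Roughly
like the following sketch (all names here are mine, just to check that
I understand the pattern):

	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct table_free_rcu {
		struct rcu_head rcu;
		struct kmem_cache *cache;
		void *table;
	};

	static void table_free_rcu_cb(struct rcu_head *head)
	{
		struct table_free_rcu *tf =
			container_of(head, struct table_free_rcu, rcu);

		/* a grace period has elapsed, safe to free the table */
		kmem_cache_free(tf->cache, tf->table);
		kfree(tf);
	}

	static void defer_table_free(struct kmem_cache *cache, void *table)
	{
		struct table_free_rcu *tf;

		tf = kzalloc(sizeof(*tf), GFP_NOWAIT | __GFP_NOWARN);
		if (!tf) {
			/* no memory for deferral, wait synchronously */
			synchronize_rcu();
			kmem_cache_free(cache, table);
			return;
		}
		tf->cache = cache;
		tf->table = table;
		call_rcu(&tf->rcu, table_free_rcu_cb);
	}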

Hmm, maybe a stupid question, but why can't I just put the rcu_head in
struct s390_domain and then do a call_rcu() on it with a callback that
does:

	dma_cleanup_tables(s390_domain->dma_table);
	kfree(s390_domain);

I.e. the rest of the current s390_domain_free(). Then I don't have to
worry about failing to allocate the rcu_head, and it's simple enough:
the actual freeing of the s390_domain just happens via call_rcu().
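
Concretely, I'm thinking of something like the following sketch (the
rcu member is new; everything else is meant to match the existing
driver code as I understand it):

	struct s390_domain {
		/* ... existing members ... */
		unsigned long *dma_table;
		struct rcu_head rcu;
	};

	static void s390_domain_free_rcu(struct rcu_head *rcu)
	{
		struct s390_domain *s390_domain =
			container_of(rcu, struct s390_domain, rcu);

		dma_cleanup_tables(s390_domain->dma_table);
		kfree(s390_domain);
	}

	static void s390_domain_free(struct iommu_domain *domain)
	{
		struct s390_domain *s390_domain = to_s390_domain(domain);

		/* defer the actual freeing until after a grace period */
		call_rcu(&s390_domain->rcu, s390_domain_free_rcu);
	}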

> 
> But this is getting very complicated; you might be better off
> refcounting the domain itself and acquiring the refcount under RCU.
> This turns the locking problem into a per-domain-object lock instead
> of a global lock, which is usually good enough and simpler to
> understand.
> 
> Jason

Sorry, I might be a bit slow as I'm new to RCU, but I don't understand
this yet, especially the last part. Before this patch we do have a
per-domain lock, but I'm sure that's not the kind of "per-domain-object
lock" you're talking about, or else we wouldn't need RCU at all. Is
this maybe a different way of expressing the above idea, using the
analogy with reference counting from whatisRCU.rst? Meaning we treat
the fact that there may still be RCU readers as "there are still
references to s390_domain"?

Or do you mean using a kref that is taken by RCU readers together with
rcu_read_lock() and dropped at rcu_read_unlock(), such that during the
RCU read-side critical sections the refcount can't fall below 1, and
the domain is actually freed once we have a) put the initial reference
during s390_domain_free() and b) put all temporary references on
exiting the RCU read-side critical sections?
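
To make my question concrete, I imagine something like this sketch,
assuming struct s390_domain gains a struct kref ref and a struct
rcu_head rcu member (helper names are made up, and the release
presumably still has to defer the final kfree() by a grace period so
that a reader that loses the kref_get_unless_zero() race never touches
freed memory):

	#include <linux/kref.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	static void s390_domain_free_rcu(struct rcu_head *rcu)
	{
		kfree(container_of(rcu, struct s390_domain, rcu));
	}

	static void s390_domain_release(struct kref *ref)
	{
		struct s390_domain *s390_domain =
			container_of(ref, struct s390_domain, ref);

		/* last reference gone, but readers may still be looking */
		call_rcu(&s390_domain->rcu, s390_domain_free_rcu);
	}

	/* reader side: pin the domain while using it */
	rcu_read_lock();
	s390_domain = rcu_dereference(zdev->s390_domain);
	if (s390_domain && !kref_get_unless_zero(&s390_domain->ref))
		s390_domain = NULL;	/* final put already happened */
	rcu_read_unlock();

	if (s390_domain) {
		/* ... use the domain, pinned by our reference ... */
		kref_put(&s390_domain->ref, s390_domain_release);
	}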
