Message-ID: <20190905060627.GA1753@lst.de>
Date: Thu, 5 Sep 2019 08:06:27 +0200
From: Christoph Hellwig <hch@....de>
To: David Rientjes <rientjes@...gle.com>
Cc: Tom Lendacky <thomas.lendacky@....com>,
Brijesh Singh <brijesh.singh@....com>,
Christoph Hellwig <hch@....de>, Jens Axboe <axboe@...nel.dk>,
Ming Lei <ming.lei@...hat.com>,
Peter Gonda <pgonda@...gle.com>,
Jianxiong Gao <jxgao@...gle.com>, linux-kernel@...r.kernel.org,
x86@...nel.org, iommu@...ts.linux-foundation.org
Subject: Re: [bug] __blk_mq_run_hw_queue suspicious rcu usage

On Wed, Sep 04, 2019 at 02:40:44PM -0700, David Rientjes wrote:
> Hi Christoph, Jens, and Ming,
>
> While booting a 5.2 SEV-enabled guest we have encountered the following
> WARNING that is followed up by a BUG because we are in atomic context
> while trying to call set_memory_decrypted:

Well, this really is an x86 / DMA API issue, unfortunately.  Drivers
are allowed to do GFP_ATOMIC DMA allocations under locks / RCU critical
sections and from interrupts, and it seems like the SEV case can't
handle that, because set_memory_decrypted() may sleep.  We have some
semi-generic code in kernel/dma that keeps a fixed-size pool for
non-coherent platforms with similar issues, which we could try to wire
up here, but I wonder if there is a better way to handle the issue, so
I've added Tom and the x86 maintainers.
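
To illustrate what I mean, here is the kind of pattern the DMA API
allows today (untested sketch, the foo_* names are made up); on SEV
the dma_alloc_coherent() call below ends up in set_memory_decrypted()
while we are still inside the spinlock:

#include <linux/dma-mapping.h>
#include <linux/spinlock.h>

struct foo_dev {
        struct device   *dev;
        spinlock_t      lock;
};

static int foo_queue_cmd(struct foo_dev *foo, size_t len)
{
        unsigned long flags;
        dma_addr_t dma_handle;
        void *buf;

        spin_lock_irqsave(&foo->lock, flags);   /* atomic context */

        /* legal per the DMA API, but ends up sleeping on SEV */
        buf = dma_alloc_coherent(foo->dev, len, &dma_handle, GFP_ATOMIC);
        if (!buf) {
                spin_unlock_irqrestore(&foo->lock, flags);
                return -ENOMEM;
        }

        /* ... fill buf, post the command; buf is freed on completion ... */

        spin_unlock_irqrestore(&foo->lock, flags);
        return 0;
}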

Now, independent of that issue, using DMA coherent memory for the nvme
PRPs/SGLs doesn't actually feel very optimal.  We could do it with
normal kmalloc allocations and just sync them to the device and back.
I wonder if we should create some general mempool-like helpers for that.
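
Just as a rough idea of what that could look like (untested, and the
foo_* names are placeholders, not the actual nvme structures): allocate
the buffer with kmalloc, map it once with the streaming API, and then
transfer ownership back and forth around each command:

#include <linux/dma-mapping.h>
#include <linux/slab.h>

struct foo_prp_buf {
        __le64          *entries;
        dma_addr_t      dma_addr;
        size_t          size;
};

/* allocate with kmalloc and map once, e.g. at queue setup time */
static int foo_prp_buf_init(struct device *dev, struct foo_prp_buf *buf,
                size_t size)
{
        buf->entries = kmalloc(size, GFP_KERNEL);
        if (!buf->entries)
                return -ENOMEM;

        buf->dma_addr = dma_map_single(dev, buf->entries, size,
                        DMA_TO_DEVICE);
        if (dma_mapping_error(dev, buf->dma_addr)) {
                kfree(buf->entries);
                return -ENOMEM;
        }
        buf->size = size;
        return 0;
}

/* per command: take ownership back, fill the entries, sync to the device */
static void foo_prp_buf_fill(struct device *dev, struct foo_prp_buf *buf,
                const dma_addr_t *pages, int nr)
{
        int i;

        dma_sync_single_for_cpu(dev, buf->dma_addr, buf->size,
                        DMA_TO_DEVICE);
        for (i = 0; i < nr; i++)
                buf->entries[i] = cpu_to_le64(pages[i]);
        dma_sync_single_for_device(dev, buf->dma_addr, buf->size,
                        DMA_TO_DEVICE);
}

A mempool-like helper would then just keep a small set of these
pre-allocated, pre-mapped buffers per queue.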