Message-ID: <alpine.DEB.2.22.394.2004171126240.88048@chino.kir.corp.google.com>
Date: Fri, 17 Apr 2020 11:41:59 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Christoph Hellwig <hch@....de>
cc: Tom Lendacky <thomas.lendacky@....com>,
Brijesh Singh <brijesh.singh@....com>,
Jon Grimm <jon.grimm@....com>, Joerg Roedel <joro@...tes.org>,
linux-kernel@...r.kernel.org, iommu@...ts.linux-foundation.org,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86@...nel.org
Subject: Re: [patch 0/7] unencrypted atomic DMA pools with dynamic
expansion
On Fri, 17 Apr 2020, Christoph Hellwig wrote:
> So modulo a few comments that I can fix up myself this looks good. Unless
> you want to resend for some reason I'm ready to pick this up once I open
> the dma-mapping tree after -rc2.
>
Yes, please do, and thanks to both you and Thomas for the guidance and
code reviews.
Once these patches take on their final form in your branch, how supportive
would you be of stable backports going back to 4.19 LTS?
There have been several changes to this area over time, so there are
varying levels of rework that need to be done for each stable kernel back
to 4.19. But I'd be happy to do that work if you are receptive to it.
For rationale: without these fixes, all SEV-enabled guests warn of
blocking in RCU read-side critical sections when using drivers that
allocate atomically through the DMA API, which calls set_memory_decrypted().
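As a rough illustration (this is not code from the series, and the
function and pool names are made up), a driver fragment like the
following is enough to hit that path on an SEV guest: the allocation
itself is atomic, but the backing page still has to be decrypted.

	/*
	 * Hypothetical driver fragment showing the problematic pattern:
	 * an atomic-context DMA allocation in an SEV guest.  Without
	 * unencrypted atomic pools, dma_direct_alloc() must call
	 * set_memory_decrypted(), which can sleep, underneath a context
	 * that may not (e.g. blk-mq's ->queue_rq() under rcu_read_lock()).
	 */
	#include <linux/dma-mapping.h>
	#include <linux/dmapool.h>

	static void *alloc_prp_list(struct dma_pool *pool, dma_addr_t *dma)
	{
		/*
		 * GFP_ATOMIC means "may not sleep", but on an SEV guest
		 * the allocation still reaches set_memory_decrypted() ->
		 * vm_unmap_aliases(), triggering the might_sleep() splat
		 * shown below.
		 */
		return dma_pool_alloc(pool, GFP_ATOMIC, dma);
	}

This is exactly the path nvme takes for its PRP list pool in the trace
below.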
Users can see warnings such as these in the guest:
BUG: sleeping function called from invalid context at mm/vmalloc.c:1710
in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 3383, name: fio
2 locks held by fio/3383:
#0: ffff93b6a8568348 (&sb->s_type->i_mutex_key#16){+.+.}, at: ext4_file_write_iter+0xa2/0x5d0
#1: ffffffffa52a61a0 (rcu_read_lock){....}, at: hctx_lock+0x1a/0xe0
CPU: 0 PID: 3383 Comm: fio Tainted: G W 5.5.10 #14
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
dump_stack+0x98/0xd5
___might_sleep+0x175/0x260
__might_sleep+0x4a/0x80
_vm_unmap_aliases+0x45/0x250
vm_unmap_aliases+0x19/0x20
__set_memory_enc_dec+0xa4/0x130
set_memory_decrypted+0x10/0x20
dma_direct_alloc_pages+0x148/0x150
dma_direct_alloc+0xe/0x10
dma_alloc_attrs+0x86/0xc0
dma_pool_alloc+0x16f/0x2b0
nvme_queue_rq+0x878/0xc30 [nvme]
__blk_mq_try_issue_directly+0x135/0x200
blk_mq_request_issue_directly+0x4f/0x80
blk_mq_try_issue_list_directly+0x46/0xb0
blk_mq_sched_insert_requests+0x19b/0x2b0
blk_mq_flush_plug_list+0x22f/0x3b0
blk_flush_plug_list+0xd1/0x100
blk_finish_plug+0x2c/0x40
iomap_dio_rw+0x427/0x490
ext4_file_write_iter+0x181/0x5d0
aio_write+0x109/0x1b0
io_submit_one+0x7d0/0xfa0
__x64_sys_io_submit+0xa2/0x280
do_syscall_64+0x5f/0x250