Date:   Fri, 15 Sep 2023 08:54:01 -0700
From:   Brett Creeley <bcreeley@....com>
To:     Alex Williamson <alex.williamson@...hat.com>,
        Brett Creeley <brett.creeley@....com>
Cc:     jgg@...pe.ca, yishaih@...dia.com,
        shameerali.kolothum.thodi@...wei.com, kevin.tian@...el.com,
        dan.carpenter@...aro.org, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, shannon.nelson@....com
Subject: Re: [PATCH vfio 3/3] pds/vfio: Fix possible sleep while in atomic
 context

On 9/14/2023 3:38 PM, Alex Williamson wrote:
> 
> On Thu, 14 Sep 2023 12:15:40 -0700
> Brett Creeley <brett.creeley@....com> wrote:
> 
>> The driver can sleep while in atomic context, resulting in the
>> following call trace when CONFIG_DEBUG_ATOMIC_SLEEP=y is set:
>>
>> [  227.229806] BUG: sleeping function called from invalid context at kernel/locking/mutex.c:283
>> [  227.229818] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 2817, name: bash
>> [  227.229824] preempt_count: 1, expected: 0
>> [  227.229827] RCU nest depth: 0, expected: 0
>> [  227.229832] CPU: 5 PID: 2817 Comm: bash Tainted: G S         OE      6.6.0-rc1-next-20230911 #1
>> [  227.229839] Hardware name: HPE ProLiant DL360 Gen10/ProLiant DL360 Gen10, BIOS U32 01/23/2021
>> [  227.229843] Call Trace:
>> [  227.229848]  <TASK>
>> [  227.229853]  dump_stack_lvl+0x36/0x50
>> [  227.229865]  __might_resched+0x123/0x170
>> [  227.229877]  mutex_lock+0x1e/0x50
>> [  227.229891]  pds_vfio_put_lm_file+0x1e/0xa0 [pds_vfio_pci]
>> [  227.229909]  pds_vfio_put_save_file+0x19/0x30 [pds_vfio_pci]
>> [  227.229923]  pds_vfio_state_mutex_unlock+0x2e/0x80 [pds_vfio_pci]
>> [  227.229937]  pci_reset_function+0x4b/0x70
>> [  227.229948]  reset_store+0x5b/0xa0
>> [  227.229959]  kernfs_fop_write_iter+0x137/0x1d0
>> [  227.229972]  vfs_write+0x2de/0x410
>> [  227.229986]  ksys_write+0x5d/0xd0
>> [  227.229996]  do_syscall_64+0x3b/0x90
>> [  227.230004]  entry_SYSCALL_64_after_hwframe+0x6e/0xd8
>> [  227.230017] RIP: 0033:0x7fb202b1fa28
>> [  227.230023] Code: 89 02 48 c7 c0 ff ff ff ff eb b3 0f 1f 80 00 00 00 00 f3 0f 1e fa 48 8d 05 15 4d 2a 00 8b 00 85 c0 75 17 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 58 c3 0f 1f 80 00 00 00 00 41 54 49 89 d4 55
>> [  227.230028] RSP: 002b:00007fff6915fbd8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
>> [  227.230036] RAX: ffffffffffffffda RBX: 0000000000000002 RCX: 00007fb202b1fa28
>> [  227.230040] RDX: 0000000000000002 RSI: 000055f3834d5aa0 RDI: 0000000000000001
>> [  227.230044] RBP: 000055f3834d5aa0 R08: 000000000000000a R09: 00007fb202b7fae0
>> [  227.230047] R10: 000000000000000a R11: 0000000000000246 R12: 00007fb202dc06e0
>> [  227.230050] R13: 0000000000000002 R14: 00007fb202dbb860 R15: 0000000000000002
>> [  227.230056]  </TASK>
>>
>> This can happen if pds_vfio_put_restore_file() and/or
>> pds_vfio_put_save_file() take mutex_lock(&lm_file->lock) while
>> spin_lock(&pds_vfio->reset_lock) is held, which can happen while
>> calling pds_vfio_state_mutex_unlock().
>>
>> Fix this by calling spin_unlock(&pds_vfio->reset_lock) before
>> calling pds_vfio_put_restore_file() and pds_vfio_put_save_file(),
>> and re-acquiring spin_lock(&pds_vfio->reset_lock) after those
>> functions return so the subsequent state/deferred reset updates
>> remain protected.
>>
>> The only possible concern is another thread calling
>> pds_vfio_put_restore_file() and/or pds_vfio_put_save_file()
>> concurrently. However, those paths are already protected by the
>> state mutex_lock().
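
[ Illustration only, not driver code: the nesting the splat above is
  reporting is

      spin_lock(&pds_vfio->reset_lock);   /* enters atomic context */
      mutex_lock(&lm_file->lock);         /* may sleep -> "BUG: sleeping
                                             function called from invalid
                                             context" */

  i.e. a sleeping lock taken while a spinlock is held. ]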
> 
> Is changing reset_lock to a mutex another viable solution?
> 
> I think this is the origin of this algorithm:
> 
> https://lore.kernel.org/all/20211019191025.GA4072278@nvidia.com/
> 
> But it's not clear to me why Jason chose an example with a spinlock, or
> whether some subtlety here requires it.  Thanks,
> 
> Alex

It would be good to get some feedback from Jason on this before thinking 
about a different solution.
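
For discussion, here is a rough, untested sketch of what that alternative
could look like, assuming nothing on the reset paths actually requires a
spinlock (names follow the existing driver; the rest of
pds_vfio_state_mutex_unlock() is unchanged and elided here):

        /* Untested sketch only: reset_lock becomes a struct mutex (with a
         * matching mutex_init() at setup time), so taking the lm_file mutex
         * under it is legal and no unlock/relock dance is needed.  Assumes
         * every reset_lock user is allowed to sleep.
         */
        mutex_lock(&pds_vfio->reset_lock);              /* was spin_lock()   */
        if (pds_vfio->deferred_reset) {
                pds_vfio->deferred_reset = false;
                if (pds_vfio->state == VFIO_DEVICE_STATE_ERROR) {
                        pds_vfio_put_restore_file(pds_vfio);    /* may sleep */
                        pds_vfio_put_save_file(pds_vfio);       /* may sleep */
                        pds_vfio_dirty_disable(pds_vfio, false);
                }
                pds_vfio->state = pds_vfio->deferred_reset_state;
        }
        mutex_unlock(&pds_vfio->reset_lock);            /* was spin_unlock() */

The tradeoff would be that any other reset_lock user then sleeps while
waiting for it, which is exactly the kind of subtlety it would be good to
hear from Jason about.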

Thanks,

Brett

> 
>> Reported-by: Dan Carpenter <dan.carpenter@...aro.org>
>> Closes: https://lore.kernel.org/kvm/1f9bc27b-3de9-4891-9687-ba2820c1b390@moroto.mountain/
>> Signed-off-by: Brett Creeley <brett.creeley@....com>
>> Reviewed-by: Shannon Nelson <shannon.nelson@....com>
>> ---
>>   drivers/vfio/pci/pds/vfio_dev.c | 2 ++
>>   1 file changed, 2 insertions(+)
>>
>> diff --git a/drivers/vfio/pci/pds/vfio_dev.c b/drivers/vfio/pci/pds/vfio_dev.c
>> index 9db5f2c8f1ea..6e664cb05dd1 100644
>> --- a/drivers/vfio/pci/pds/vfio_dev.c
>> +++ b/drivers/vfio/pci/pds/vfio_dev.c
>> @@ -33,8 +33,10 @@ void pds_vfio_state_mutex_unlock(struct pds_vfio_pci_device *pds_vfio)
>>        if (pds_vfio->deferred_reset) {
>>                pds_vfio->deferred_reset = false;
>>                if (pds_vfio->state == VFIO_DEVICE_STATE_ERROR) {
>> +                     spin_unlock(&pds_vfio->reset_lock);
>>                        pds_vfio_put_restore_file(pds_vfio);
>>                        pds_vfio_put_save_file(pds_vfio);
>> +                     spin_lock(&pds_vfio->reset_lock);
>>                        pds_vfio_dirty_disable(pds_vfio, false);
>>                }
>>                pds_vfio->state = pds_vfio->deferred_reset_state;
> 
