Message-ID: <tencent_B683AC1146DB6A6ABB4D73697C0D6A1D7608@qq.com>
Date: Tue, 12 Apr 2022 15:04:09 +0800
From: "zhangfei.gao@...mail.com" <zhangfei.gao@...mail.com>
To: Dave Hansen <dave.hansen@...el.com>,
Joerg Roedel <joro@...tes.org>,
Fenghua Yu <fenghua.yu@...el.com>,
jean-philippe <jean-philippe@...aro.org>
Cc: Ravi V Shankar <ravi.v.shankar@...el.com>,
Tony Luck <tony.luck@...el.com>,
Ashok Raj <ashok.raj@...el.com>,
Peter Zijlstra <peterz@...radead.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
x86 <x86@...nel.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
iommu <iommu@...ts.linux-foundation.org>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Andy Lutomirski <luto@...nel.org>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH v4 05/11] iommu/sva: Assign a PASID to mm on PASID
allocation and free it on mm exit
On 2022/4/11 10:52 PM, Dave Hansen wrote:
> On 4/11/22 07:44, zhangfei.gao@...mail.com wrote:
>> On 2022/4/11 10:36 PM, Dave Hansen wrote:
>>> On 4/11/22 07:20, zhangfei.gao@...mail.com wrote:
>>>>> Is there nothing before this call trace? Usually there will be at least
>>>>> some warning text.
>>>> I added dump_stack() in ioasid_free.
>>> Hold on a sec, though...
>>>
>>> What's the *problem* here? Did something break or are you just saying
>>> that something looks weird to _you_?
>> After this, nginx does not work at all, and the hardware reports errors.
>> Presumably the master process uses the ioasid it set up during init, but
>> that ioasid has already been freed.
>>
>> hardware reports:
>> [ 152.731869] hisi_sec2 0000:76:00.0: qm_acc_do_task_timeout [error status=0x20] found
>> [ 152.739657] hisi_sec2 0000:76:00.0: qm_acc_wb_not_ready_timeout [error status=0x40] found
>> [ 152.747877] hisi_sec2 0000:76:00.0: sec_fsm_hbeat_rint [error status=0x20] found
>> [ 152.755340] hisi_sec2 0000:76:00.0: Controller resetting...
>> [ 152.762044] hisi_sec2 0000:76:00.0: QM mailbox operation timeout!
>> [ 152.768198] hisi_sec2 0000:76:00.0: Failed to dump sqc!
>> [ 152.773490] hisi_sec2 0000:76:00.0: Failed to drain out data for stopping!
>> [ 152.781426] hisi_sec2 0000:76:00.0: QM mailbox is busy to start!
>> [ 152.787468] hisi_sec2 0000:76:00.0: Failed to dump sqc!
>> [ 152.792753] hisi_sec2 0000:76:00.0: Failed to drain out data for stopping!
>> [ 152.800685] hisi_sec2 0000:76:00.0: QM mailbox is busy to start!
>> [ 152.806730] hisi_sec2 0000:76:00.0: Failed to dump sqc!
>> [ 152.812017] hisi_sec2 0000:76:00.0: Failed to drain out data for stopping!
>> [ 152.819946] hisi_sec2 0000:76:00.0: QM mailbox is busy to start!
>> [ 152.825992] hisi_sec2 0000:76:00.0: Failed to dump sqc!
> That would have been awfully handy information to have in an initial bug report. :)
> Is there a chance you could dump out that ioasid alloc *and* free information in ioasid_alloc/free()? This could be some kind of problem with the allocator, or with copying the ioasid at fork.
The issue is that the nginx master process initializes the resource, starts the
daemon process, and then quits and frees the ioasid.
The daemon nginx process is not the original master process.
master process: initializes the resource:
    driver -> iommu_sva_bind_device -> ioasid_alloc
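Roughly, the bind side looks like this (a minimal sketch; dev/handle/pasid are
placeholder names and the real call site is in our accelerator driver, so take
the details as illustrative only):

    handle = iommu_sva_bind_device(dev, current->mm, NULL); /* allocates the PASID */
    pasid  = iommu_sva_get_pasid(handle);                   /* programmed into the
                                                                hardware queue      */
    ...
    iommu_sva_unbind_device(handle);                         /* driver expects the
                                                                PASID to stay valid
                                                                until this point    */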
nginx: ngx_daemon() forks the daemon process without taking a reference on the
master's mm:
src/os/unix/ngx_daemon.c:

    ngx_daemon(ngx_log_t *log)
    {
        int  fd;

        switch (fork()) {
        case -1:
            ngx_log_error(NGX_LOG_EMERG, log, ngx_errno, "fork() failed");
            return NGX_ERROR;

        case 0:
            /* child: this becomes the daemon process */
            break;

        default:
            /* parent: the master process exits here and its mm will be released */
            exit(0);
        }

        /* from here on the daemon process takes control */
        ngx_parent = ngx_pid;
        ngx_pid = ngx_getpid();
kernel/fork.c, copy_mm():

    if (clone_flags & CLONE_VM) {
        mmget(oldmm);
        mm = oldmm;
    } else {
        /* the daemon takes this path: its mm is a duplicate, so no
           reference is taken on the master's mm */
        mm = dup_mm(tsk, current->mm);
    }
When the master process quits: mmput -> mm_pasid_drop -> ioasid_free.
But this bypasses the driver's iommu_sva_unbind_device():
iommu_sva_bind_device() and iommu_sva_unbind_device() are no longer paired, so
the driver does not know that the ioasid has been freed.
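To make the sequence explicit, the failing timeline is roughly (a sketch that
just follows the call chain above; the PASID number is only for illustration):

    master:  iommu_sva_bind_device() -> ioasid_alloc()    /* PASID X             */
             /* PASID X is programmed into the device queues */
    master:  ngx_daemon() -> fork()                       /* daemon gets dup_mm(),
                                                              no ref on master mm */
    master:  exit(0)
               mmput(master_mm)
                 mm_pasid_drop(master_mm)
                   ioasid_free(X)       /* PASID X freed while the hardware still
                                           uses it; iommu_sva_unbind_device()
                                           never ran, so the driver is unaware    */
    daemon:  keeps submitting work with PASID X -> the hardware errors above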
Any suggestions?
Or can we still use the original ioasid refcount mechanism?
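For comparison, my understanding of the original refcount flow (the
ioasid_get()/ioasid_put() mechanism in iommu-sva-lib before this series;
sketched from memory, so the details may be off):

    /* first iommu_sva_bind_device() for an mm */
    pasid = ioasid_alloc(&iommu_sva_pasid, min, max, mm);  /* refcount = 1 */

    /* each additional bind of the same mm */
    ioasid_get(pasid);                                      /* refcount++   */

    /* each iommu_sva_unbind_device() */
    ioasid_put(pasid);   /* refcount--; the PASID is only freed when the last
                            reference drops, so it stays valid as long as any
                            device is still bound to the mm */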
Thanks