Message-ID: <a0665d21-a390-6263-0018-09b4cb57e87b@ozlabs.ru>
Date: Tue, 3 Aug 2021 23:13:57 +1000
From: Alexey Kardashevskiy <aik@...abs.ru>
To: Paolo Bonzini <pbonzini@...hat.com>,
Greg KH <gregkh@...uxfoundation.org>
Cc: "Kernel Mailing List, Linux" <linux-kernel@...r.kernel.org>,
kvm <kvm@...r.kernel.org>
Subject: Re: [RFC PATCH kernel] KVM: Stop leaking memory in debugfs
On 03/08/2021 22:52, Paolo Bonzini wrote:
> On Tue, Aug 3, 2021 at 1:16 PM Greg KH <gregkh@...uxfoundation.org> wrote:
>> On Fri, Jul 30, 2021 at 02:32:17PM +1000, Alexey Kardashevskiy wrote:
>>>          snprintf(dir_name, sizeof(dir_name), "%d-%d", task_pid_nr(current), fd);
>>>          kvm->debugfs_dentry = debugfs_create_dir(dir_name, kvm_debugfs_dir);
>>> +        if (IS_ERR_OR_NULL(kvm->debugfs_dentry)) {
>>> +                pr_err("Failed to create %s\n", dir_name);
>>> +                return 0;
>>> +        }
>>
>> It should not matter if you fail a debugfs call at all.
>>
>> If there is a larger race at work here, please fix that root cause, do
>> not paper over it by attempting to have debugfs catch the issue for you.
>
> I don't think it's a race, it's really just a bug that is intrinsic in how
> the debugfs files are named. You can just do something like this:
>
> #include <unistd.h>
> #include <stdio.h>
> #include <fcntl.h>
> #include <sys/wait.h>
> #include <sys/ioctl.h>
> #include <linux/kvm.h>
> #include <stdlib.h>
> int main() {
>         int kvmfd = open("/dev/kvm", O_RDONLY);
>         int fd = ioctl(kvmfd, KVM_CREATE_VM, 0);
>         if (fork() == 0) {
>                 printf("before: %d\n", fd);
>                 sleep(2);
>         } else {
>                 close(fd);
>                 sleep(1);
>                 int fd = ioctl(kvmfd, KVM_CREATE_VM, 0);
>                 printf("after: %d\n", fd);
>                 wait(NULL);
>         }
> }
Oh, nice demo :) Although I still think there was a race when I saw it:
there was no fork() in the picture, just continuous VM create/destroy in
16 threads on 16 VCPUs with no KVM_RUN in between, roughly the workload
sketched below.
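A minimal sketch of that shape of workload (hypothetical; only the
thread count is taken from the real test, the iteration count and the
rest are made up):

#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define NTHREADS 16

/* Each thread hammers VM create/destroy on a shared /dev/kvm fd. */
static void *hammer(void *arg)
{
        int kvmfd = *(int *)arg;

        for (int i = 0; i < 10000; i++) {
                int fd = ioctl(kvmfd, KVM_CREATE_VM, 0);

                if (fd >= 0)
                        close(fd); /* fd number becomes reusable at once */
        }
        return NULL;
}

int main(void)
{
        pthread_t threads[NTHREADS];
        int kvmfd = open("/dev/kvm", O_RDONLY);

        for (int i = 0; i < NTHREADS; i++)
                pthread_create(&threads[i], NULL, hammer, &kvmfd);
        for (int i = 0; i < NTHREADS; i++)
                pthread_join(threads[i], NULL);
        close(kvmfd);
        return 0;
}

If a closed fd number is handed out again before the previous VM's
debugfs directory is gone, the new "%d-%d" name collides the same way.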
>
> So Alexey's patch is okay and I've queued it, though with pr_warn_ratelimited
> instead of pr_err.
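Presumably the queued hunk then just becomes something like this (a
sketch with only the print call swapped; the actual commit may differ):

        snprintf(dir_name, sizeof(dir_name), "%d-%d", task_pid_nr(current), fd);
        kvm->debugfs_dentry = debugfs_create_dir(dir_name, kvm_debugfs_dir);
        if (IS_ERR_OR_NULL(kvm->debugfs_dentry)) {
                pr_warn_ratelimited("Failed to create %s\n", dir_name);
                return 0;
        }
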
Makes sense with your reproducer. Thanks,
--
Alexey