 
Message-ID: 
 <SJ0PR18MB52168AC4B874C0B99BD37039DB61A@SJ0PR18MB5216.namprd18.prod.outlook.com>
Date: Tue, 2 Jan 2024 08:27:37 +0000
From: Suman Ghosh <sumang@...vell.com>
To: Markus Elfring <Markus.Elfring@....de>,
        "linux-s390@...r.kernel.org"
	<linux-s390@...r.kernel.org>,
        "netdev@...r.kernel.org"
	<netdev@...r.kernel.org>,
        "kernel-janitors@...r.kernel.org"
	<kernel-janitors@...r.kernel.org>,
        Alexandra Winter <wintera@...ux.ibm.com>,
        "David S. Miller" <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>, Wenjia
 Zhang <wenjia@...ux.ibm.com>
CC: LKML <linux-kernel@...r.kernel.org>
Subject: RE: [EXT] Re: [PATCH 1/2] net/iucv: Improve unlocking in
 iucv_enable()

>>> 	if (cpumask_empty(&iucv_buffer_cpumask))
>>> 		/* No cpu could declare an iucv buffer. */
>>> 		goto out;
>>> +
>>> +	rc = 0;
>>> +unlock:
>>> 	cpus_read_unlock();
>>> -	return 0;
>>> +	return rc;
>>> +
>>> out:
>>> 	kfree(iucv_path_table);
>>> 	iucv_path_table = NULL;
>>> -	cpus_read_unlock();
>>> -	return rc;
>>> +	goto unlock;
>> [Suman] This looks confusing. What is the issue with retaining the
>original change?
>
>I propose to reduce the number of cpus_read_unlock() calls (in the
>source code).
>
>Regards,
>Markus
[Suman] Then I think we should do something like this. Moving the control flow back and forth with "goto" does not seem correct.

static int iucv_enable(void)
{
        size_t alloc_size;
        int cpu, rc = 0;

        cpus_read_lock();
        alloc_size = iucv_max_pathid * sizeof(struct iucv_path);
        iucv_path_table = kzalloc(alloc_size, GFP_KERNEL);
        if (!iucv_path_table) {
                rc = -ENOMEM;
                goto out;
        }

        /* Declare per cpu buffers. */
        for_each_online_cpu(cpu)
                smp_call_function_single(cpu, iucv_declare_cpu, NULL, 1);
        if (cpumask_empty(&iucv_buffer_cpumask))
                /* No cpu could declare an iucv buffer. */
                rc = -EIO;

out:
        if (rc) {
                /* kfree() is NULL-safe, so this also covers the kzalloc() failure path. */
                kfree(iucv_path_table);
                iucv_path_table = NULL;
        }

        cpus_read_unlock();
        return rc;
}
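The shape of the function above (take the lock, bail out to a single label on error, do conditional cleanup, and unlock exactly once on every path) can be exercised outside the kernel. The sketch below is a hypothetical userspace stand-in, not the real iucv code: `fake_lock()`/`fake_unlock()` and the test knobs `fail_alloc`/`buffers_ok` are invented here purely to demonstrate that each exit path keeps lock and unlock balanced and frees the table only on failure.

```c
#include <stdlib.h>

/* Hypothetical stand-ins for cpus_read_lock()/cpus_read_unlock() and
 * kzalloc()/kfree(), so the control flow can be checked in userspace. */
static int lock_depth;
static void fake_lock(void)   { lock_depth++; }
static void fake_unlock(void) { lock_depth--; }

static void *table;
static int fail_alloc;  /* test knob: force the allocation to fail */
static int buffers_ok;  /* test knob: did any CPU declare a buffer? */

/* Same structure as the proposed iucv_enable(): one exit label,
 * conditional cleanup, exactly one unlock on every path. */
static int enable(void)
{
        int rc = 0;

        fake_lock();
        table = fail_alloc ? NULL : calloc(16, sizeof(int));
        if (!table) {
                rc = -12;               /* -ENOMEM */
                goto out;
        }

        if (!buffers_ok)
                rc = -5;                /* -EIO */
out:
        if (rc) {
                free(table);            /* free(NULL) is a no-op, like kfree() */
                table = NULL;
        }
        fake_unlock();
        return rc;
}
```

Whatever the error, the function falls through the same `out:` cleanup and the single `fake_unlock()`, which is exactly the property Markus wanted (one `cpus_read_unlock()` call site) without the forward-and-backward `goto` of the original diff.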
