Open Source and information security mailing list archives
Message-ID: <alpine.DEB.2.21.1811282228570.1879@nanos.tec.linutronix.de>
Date:   Wed, 28 Nov 2018 23:03:14 +0100 (CET)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Dou Liyang <douliyangs@...il.com>
cc:     LKML <linux-kernel@...r.kernel.org>, kashyap.desai@...adcom.com,
        shivasharan.srikanteshwara@...adcom.com, sumit.saxena@...adcom.com,
        ming.lei@...hat.com, Christoph Hellwig <hch@....de>,
        bhelgaas@...gle.com, linux-pci@...r.kernel.org
Subject: Re: [RFC PATCH v3] genirq/affinity: Create and transfer more irq
 desc info by a new structure

Dou,

On Thu, 29 Nov 2018, Dou Liyang wrote:
> +/**
> + * struct irq_affinity_desc - Description for kinds of irq assignements
> + *		which will be transferred to irqdesc core

Please align this properly:

 * struct irq_affinity_desc - Description for kinds of irq assignements
 *			      which will be transferred to irqdesc core

Aside from that, it's not interesting where these structs are transferred
to today, because that might change tomorrow. So something like this:

 * struct irq_affinity_desc - Interrupt affinity descriptor


> + * @masks:	cpumask of automatic irq affinity assignments

@mask: please. It's one cpumask per descriptor.

> + * @flags:	flags to differentiate between managed and
> + *		unmanaged interrupts

Again, that's the purpose today.

 * @flags:	Flags to convey complementary information

But see further down.

> + */
> +struct irq_affinity_desc {
> +	struct cpumask masks;
> +	unsigned int	flags;
> +};

Please align the member names vertically with tabs

struct irq_affinity_desc {
	struct cpumask	masks;
	unsigned int	flags;
};

> +/**
> + * irq_create_affinity_desc - Create affinity desc for multiqueue spreading
> + * @nvec:	The total number of vectors
> + * @affd:	Description of the affinity requirements
> + *
> + * Returns the irq_affinity_desc pointer or NULL if allocation failed.
> + */
> +struct irq_affinity_desc *
> +irq_create_affinity_desc(int nvec, const struct irq_affinity *affd)
> +{
> +	struct irq_affinity_desc *cur_affi_desc, *affi_desc = NULL;
> +	struct cpumask *curmask, *masks = NULL;
> +	int i;
> +
> +	masks = irq_create_affinity_masks(nvec, affd);
> +	if (masks) {
> +		affi_desc = kcalloc(nvec, sizeof(*affi_desc), GFP_KERNEL);

Why do you want to do a separate allocation here? Just let
irq_create_affinity_masks() allocate an array of affinity descriptors and
use that. There is no point in copying the masks over and over. Setting
the flag field can be done in the existing function as well.
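The consolidation suggested above could look roughly like the sketch below. This is a userspace illustration only: `struct cpumask`, the allocation and the function name are simplified stand-ins, not the real kernel API.

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-ins for the kernel types, for illustration only. */
struct cpumask { unsigned long bits; };

struct irq_affinity_desc {
	struct cpumask	mask;
	unsigned int	managed : 1;
};

/*
 * Sketch of the suggested shape: the spreading function allocates the
 * descriptor array itself and fills in both the mask and the managed
 * bit in one pass, instead of returning a cpumask array which a second
 * function then copies into freshly allocated descriptors.
 */
static struct irq_affinity_desc *
create_affinity_descs(int nvec, const struct cpumask *spread)
{
	struct irq_affinity_desc *descs;
	int i;

	descs = calloc(nvec, sizeof(*descs));	/* kcalloc() in the kernel */
	if (!descs)
		return NULL;

	for (i = 0; i < nvec; i++) {
		descs[i].mask = spread[i];	/* cpumask_copy() in the kernel */
		descs[i].managed = 1;		/* no separate flag-setting loop */
	}
	return descs;
}
```

With this shape the caller gets a ready-to-use descriptor array and the second allocation plus copy loop in irq_create_affinity_desc() disappears entirely.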

>  /**
>   * irq_calc_affinity_vectors - Calculate the optimal number of vectors
>   * @minvec:	The minimum number of vectors available
> diff --git a/kernel/irq/devres.c b/kernel/irq/devres.c
> index 6a682c229e10..2335b89d9bde 100644
> --- a/kernel/irq/devres.c
> +++ b/kernel/irq/devres.c
> @@ -181,14 +181,33 @@ int __devm_irq_alloc_descs(struct device *dev, int irq, unsigned int from,
>  			   unsigned int cnt, int node, struct module *owner,
>  			   const struct cpumask *affinity)
>  {
> +	struct irq_affinity_desc *cur_affi_desc, *affi_desc = NULL;
> +	const struct cpumask *curmask;
>  	struct irq_desc_devres *dr;
> -	int base;
> +	int base, i;
>  
>  	dr = devres_alloc(devm_irq_desc_release, sizeof(*dr), GFP_KERNEL);
>  	if (!dr)
>  		return -ENOMEM;
>  
> -	base = __irq_alloc_descs(irq, from, cnt, node, owner, affinity);
> +	if (affinity) {
> +		affi_desc = kcalloc(cnt, sizeof(*affi_desc), GFP_KERNEL);
> +		if (!affi_desc)
> +			return -ENOMEM;
> +
> +		curmask = affinity;
> +		cur_affi_desc = affi_desc;
> +		for (i = 0; i < cnt; i++) {
> +			cpumask_copy(&cur_affi_desc->masks, curmask);
> +			cur_affi_desc->flags = 1;

No magic constants for a flags field, please. You really want proper named
constants for that, but I'd rather avoid the whole flags fiddling
completely. See below.

> +			curmask++;
> +			cur_affi_desc++;
> +		}

Can you please change the function signature and fix up the callers, if
there are any? Copying this over and over is horrible.

>  	for (i = 0; i < cnt; i++) {
> -		if (affinity) {
> -			node = cpu_to_node(cpumask_first(affinity));
> -			mask = affinity;
> -			affinity++;
> +		if (affi_desc && affi_desc->flags ) {

Please don't check flags for being != 0. Once we add information other
than the managed/non-managed decision to the flags field, this will fall
apart.

To avoid issues like that please use explicit bits in the structure instead
of the opaque flags field:

struct irq_affinity_desc {
	struct cpumask	masks;
	unsigned int	managed : 1;
};

So if we add new things later they will have an explicit name and there is
absolutely no conflict with existing code.
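As a small illustration of why the named bit ages better than an opaque flags word: once a second bit is added (the `no_balance` bit below is purely hypothetical), code that tested `flags != 0` would start treating unmanaged interrupts as managed, while code testing the named bit stays correct.

```c
#include <assert.h>

struct cpumask { unsigned long bits; };	/* simplified stand-in */

struct irq_affinity_desc {
	struct cpumask	mask;
	unsigned int	managed : 1;
	unsigned int	no_balance : 1;	/* hypothetical later addition */
};

/* Correct: test the named bit, not the whole word. */
static int is_managed(const struct irq_affinity_desc *d)
{
	return d->managed;
}
```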

> +			flags =  IRQD_AFFINITY_MANAGED |
> +				IRQD_MANAGED_SHUTDOWN;
> +		} else
> +			flags = 0;

If you need braces for the if branch, then the else path wants them as well.
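Put together with the named bit suggested above, the quoted hunk would read like the sketch below; the `IRQD_*` values here are stand-ins for the real kernel definitions, and the helper wrapping is only there to make the fragment self-contained.

```c
#include <assert.h>

/* Stand-in values; the real IRQD_* constants live in the kernel. */
#define IRQD_AFFINITY_MANAGED	0x1
#define IRQD_MANAGED_SHUTDOWN	0x2

struct irq_affinity_desc {
	unsigned int	managed : 1;
};

static unsigned int desc_flags(const struct irq_affinity_desc *affi_desc)
{
	unsigned int flags;

	/* Multi-line if body needs braces, so the else gets them too. */
	if (affi_desc && affi_desc->managed) {
		flags = IRQD_AFFINITY_MANAGED |
			IRQD_MANAGED_SHUTDOWN;
	} else {
		flags = 0;
	}
	return flags;
}
```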

> +
> +		if (affi_desc) {
> +			mask = &affi_desc->masks;
> +			node = cpu_to_node(cpumask_first(mask));
> +			affi_desc++;
>  		}

>  int irq_domain_alloc_descs(int virq, unsigned int cnt, irq_hw_number_t hwirq,
> -			   int node, const struct cpumask *affinity)
> +			   int node, const struct irq_affinity_desc *affi_desc)

You can spare a lot of pointless churn by just keeping the 'affinity' name
and only changing the struct type. The compiler will catch all places which
need to be fixed and 'affinity' is generic enough to be used with the new
struct type as well. As Bjorn said, even 'masks' is fine.

Other than the things I pointed out, the whole thing starts to look
palatable. Please split it up in the way Bjorn suggested and avoid the
extra hoops and loops where possible.

Thanks,

	tglx
