Date:   Thu, 4 Apr 2019 08:03:57 +0100
From:   Lee Jones <lee.jones@...aro.org>
To:     Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
Cc:     linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1] mfd: Add support for Merrifield Basin Cove PMIC

On Thu, 04 Apr 2019, Lee Jones wrote:

> On Tue, 02 Apr 2019, Andy Shevchenko wrote:
> 
> > On Tue, Apr 02, 2019 at 06:12:11AM +0100, Lee Jones wrote:
> > > On Mon, 18 Mar 2019, Andy Shevchenko wrote:
> > > 
> > > > Add an mfd driver for Intel Merrifield Basin Cove PMIC.
> > > 
> > > Nit: s/mfd/MFD/
> > 
> > Noted. And changed for v2.
> > 
> > > > +static const struct mfd_cell bcove_dev[] = {
> > > > +	{
> > > > +		.name = "mrfld_bcove_pwrbtn",
> > > > +		.num_resources = 1,
> > > > +		.resources = &irq_level2_resources[0],
> > > > +	}, {
> > > > +		.name = "mrfld_bcove_tmu",
> > > > +		.num_resources = 1,
> > > > +		.resources = &irq_level2_resources[1],
> > > > +	}, {
> > > > +		.name = "mrfld_bcove_thermal",
> > > > +		.num_resources = 1,
> > > > +		.resources = &irq_level2_resources[2],
> > > > +	}, {
> > > > +		.name = "mrfld_bcove_bcu",
> > > > +		.num_resources = 1,
> > > > +		.resources = &irq_level2_resources[3],
> > > > +	}, {
> > > > +		.name = "mrfld_bcove_adc",
> > > > +		.num_resources = 1,
> > > > +		.resources = &irq_level2_resources[4],
> > > > +	}, {
> > > > +		.name = "mrfld_bcove_charger",
> > > > +		.num_resources = 1,
> > > > +		.resources = &irq_level2_resources[5],
> > > > +	}, {
> > > > +		.name = "mrfld_bcove_extcon",
> > > > +		.num_resources = 1,
> > > > +		.resources = &irq_level2_resources[5],
> > > > +	}, {
> > > > +		.name = "mrfld_bcove_gpio",
> > > > +		.num_resources = 1,
> > > > +		.resources = &irq_level2_resources[6],
> > > > +	},
> > > > +	{	.name = "mrfld_bcove_region", },
> > > > +};
> > 
> > > > +static int regmap_ipc_byte_reg_read(void *context, unsigned int reg,
> > > 
> > > Prefixing these with regmap is pretty confusing, since it is not
> > > part of the Regmap API.  Better to provide them with local names
> > > instead.
> > > 
> > >   bcove_ipc_byte_reg_read()
> > 
> > Good point. And changed for v2.
> > 
> > > > +	for (i = 0; i < ARRAY_SIZE(irq_level2_resources); i++) {
> > > > +		ret = platform_get_irq(pdev, i);
> > > > +		if (ret < 0)
> > > > +			return ret;
> > > > +
> > > > +		irq_level2_resources[i].start = ret;
> > > > +		irq_level2_resources[i].end = ret;
> > > > +	}
> > > 
> > > Although succinct, dragging values from one platform device into
> > > another doesn't sound that neat.
> > 
> > So, how should we split resources given in one _physical_ multi-functional device
> > across several of them?  Isn't that what the MFD framework is for?
> > 
> > Any other approach here? I'm all ears!
> 
> From the child:
> 
>   platform_get_irq(dev->parent, CLIENT_ID);

If you set the .id of the cell properly you could do:

  platform_get_irq(dev->parent, dev->id);

> > > Also, since the ordering of the
> > > devices is critical in this implementation, it comes across as
> > > fragile.
> > 
> > How fragile? In ACPI we don't have an IRQ labeling scheme; the index is used for that.
> > 
> > > Any reason why ACPI can't register all of the child devices, or for
> > > the child devices to obtain their IRQ directly from the tables?
> > 
> > And how are we supposed to enumerate them, taking into consideration the single
> > ACPI ID given?
> 
> This question was a little whimsical, since I have no idea how the
> ACPI tables you're working with are laid out.
> 

-- 
Lee Jones [李琼斯]
Linaro Services Technical Lead
Linaro.org │ Open source software for ARM SoCs
Follow Linaro: Facebook | Twitter | Blog
