Message-ID: <YicsXm9JboW2b+5f@smile.fi.intel.com>
Date:   Tue, 8 Mar 2022 12:13:50 +0200
From:   Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
To:     Rob Herring <robh@...nel.org>
Cc:     shruthi.sanil@...el.com,
        Daniel Lezcano <daniel.lezcano@...aro.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        devicetree@...r.kernel.org, Mark Gross <mgross@...ux.intel.com>,
        srikanth.thokala@...el.com,
        "Raja Subramanian, Lakshmi Bai" 
        <lakshmi.bai.raja.subramanian@...el.com>,
        mallikarjunappa.sangannavar@...el.com
Subject: Re: [PATCH v8 1/2] dt-bindings: timer: Add bindings for Intel Keem
 Bay SoC Timer

On Mon, Mar 07, 2022 at 04:33:23PM -0600, Rob Herring wrote:
> On Wed, Feb 23, 2022 at 5:31 AM Andy Shevchenko
> <andriy.shevchenko@...ux.intel.com> wrote:
> >
> > On Tue, Feb 22, 2022 at 05:13:41PM -0600, Rob Herring wrote:
> > > On Tue, Feb 22, 2022 at 03:26:53PM +0530, shruthi.sanil@...el.com wrote:
> > > > From: Shruthi Sanil <shruthi.sanil@...el.com>
> > > >
> > > > Add Device Tree bindings for the Timer IP, which can be used as
> > > > clocksource and clockevent device in the Intel Keem Bay SoC.
> >
> > ...
> >
> > > > +    soc {
> > > > +        #address-cells = <0x2>;
> > > > +        #size-cells = <0x2>;
> > > > +
> > > > +        gpt@...31000 {
> > > > +            compatible = "intel,keembay-gpt-creg", "simple-mfd";
> > >
> > > It looks like you are splitting things based on Linux implementation
> > > details. Does this h/w block have different combinations of timers and
> > > counters? If not, then you don't need the child nodes at all. There are
> > > plenty of h/w blocks that get used as both a clocksource and a clockevent.
> > >
> > > Maybe I already raised this, but assume I don't remember and this patch
> > > needs to address any questions I already asked.
> >
> > I dunno if I mentioned that the hardware seems to have 5 or so devices behind
> > the block, so ideally it should be one device node that represents the global
> > register space and several child nodes.
> 
> Is it 5 devices or 9 devices?

5 devices, one of which is a timer block consisting of 8 timers.
If you count the timers individually, that makes 12 devices altogether.
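
To illustrate what I mean, a rough sketch only (the unit addresses,
register sizes and child compatibles below are made up for the example,
not taken from any datasheet):

    gpt@20331000 {
        /* Parent node covering the global register space */
        compatible = "intel,keembay-gpt-creg", "simple-mfd";
        reg = <0x0 0x20331000 0x0 0xc>;
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;

        /* One child per sub-device behind the block */
        counter@20330000 {
            compatible = "intel,keembay-counter";
            reg = <0x0 0x20330000 0x0 0xc>;
        };

        timer@20330010 {
            compatible = "intel,keembay-timer";
            reg = <0x0 0x20330010 0x0 0xc>;
        };

        /* remaining sub-devices omitted for brevity */
    };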

> > However, I am not familiar with the established practices in DT world, but
> > above seems to me the right thing to do since it describes the hardware as
> > is (without any linuxisms).
> 
> The Linuxism in these cases is defining 1 node per driver because that's
> what is convenient for automatic probing. That appears to be exactly
> the case here. The red flag is nodes with a compatible and nothing
> else. The next question is whether the sub-devices are blocks that
> will be assembled in varying combinations and quantities. If not, then
> there's not much point in subdividing the h/w blocks.

AFAIU the hardware architecture, the number of timers depends on the IP
synthesis configuration. On this platform it's 8, but it could be 1 or 2,
for example.
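
So presumably the binding should allow a variable number of timer
children rather than a fixed set. A synthesis with only 2 timers would
then simply describe fewer child nodes, along the lines of (again,
addresses and child compatibles are illustrative only):

    gpt@20331000 {
        compatible = "intel,keembay-gpt-creg", "simple-mfd";
        reg = <0x0 0x20331000 0x0 0xc>;
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;

        timer@20330010 {
            compatible = "intel,keembay-timer";
            reg = <0x0 0x20330010 0x0 0xc>;
        };

        timer@20330020 {
            compatible = "intel,keembay-timer";
            reg = <0x0 0x20330020 0x0 0xc>;
        };
    };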

> There are also many cases of having multiple 'identical' timers and
> wanting to encode which timer gets assigned to clocksource vs.
> clockevent. But those 'identical' timers aren't if you care about
> which timer gets assigned where. I *think* that's not the case here
> unless you are trying to pick the timer for the clockevent by not
> defining the other timers.
> 
> Without having a complete picture of what's in 'gpt-creg', I can't
> give better advice.

I guess they need to share the TRM (technical reference manual), if
possible, to show what this block is.

-- 
With Best Regards,
Andy Shevchenko

