Message-ID: <CAAhSdy17NphDM=bgyvM-eNA9DAGku6p56HSXv5hgZh5VmMfE7Q@mail.gmail.com>
Date: Mon, 12 Jan 2026 17:28:39 +0530
From: Anup Patel <anup@...infault.org>
To: Rahul Pathak <rahul@...mations.net>
Cc: Joshua Yeong <joshua.yeong@...rfivetech.com>, leyfoon.tan@...rfivetech.com, 
	robh@...nel.org, krzk+dt@...nel.org, conor+dt@...nel.org, pjw@...nel.org, 
	palmer@...belt.com, aou@...s.berkeley.edu, alex@...ti.fr, rafael@...nel.org, 
	viresh.kumar@...aro.org, sboyd@...nel.org, jms@....tenstorrent.com, 
	darshan.prajapati@...fochips.com, charlie@...osinc.com, 
	dfustini@....tenstorrent.com, michal.simek@....com, cyy@...self.name, 
	jassisinghbrar@...il.com, andriy.shevchenko@...ux.intel.com, 
	linux-riscv@...ts.infradead.org, devicetree@...r.kernel.org, 
	linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org
Subject: Re: [PATCH 4/5] cpufreq: Add cpufreq driver for the RISC-V RPMI
 performance service group

On Mon, Jan 12, 2026 at 4:18 PM Rahul Pathak <rahul@...mations.net> wrote:
>
> > +
> > +static int rpmi_cpufreq_probe(struct platform_device *pdev)
> > +{
> > +       struct device *dev = &pdev->dev;
> > +       struct rpmi_perf *mpxy_perf;
> > +       struct rpmi_ctx *mpxy_ctx;
> > +       int num_domains = 0;
> > +       int ret, i;
> > +
> > +       mpxy_ctx = devm_kzalloc(&pdev->dev, sizeof(*mpxy_ctx), GFP_KERNEL);
> > +       if (!mpxy_ctx)
> > +               return -ENOMEM;
> > +
> > +       /* Setup mailbox client */
> > +       mpxy_ctx->client.dev            = dev;
> > +       mpxy_ctx->client.rx_callback    = NULL;
> > +       mpxy_ctx->client.tx_block       = false;
> > +       mpxy_ctx->client.knows_txdone   = true;
> > +       mpxy_ctx->client.tx_tout        = 0;
> > +
> > +       /* Request mailbox channel */
> > +       mpxy_ctx->chan = mbox_request_channel(&mpxy_ctx->client, 0);
> > +       if (IS_ERR(mpxy_ctx->chan))
> > +               return PTR_ERR(mpxy_ctx->chan);
> > +
> > +       ret = rpmi_cpufreq_attr_setup(dev, mpxy_ctx);
> > +       if (ret) {
> > +               dev_err(dev, "failed to verify RPMI attribute - err:%d\n", ret);
> > +               goto fail_free_channel;
> > +       }
> > +
> > +       /* Get number of performance domains */
> > +       ret = rpmi_perf_get_num_domains(mpxy_ctx, &num_domains);
> > +       if (ret) {
> > +               dev_err(dev, "invalid number of perf domains - err:%d\n", ret);
> > +               goto fail_free_channel;
> > +       }
>
> The domain space in RPMI performance is not separate for CPUs and
> devices: a domain can be either a CPU or a device. How will the
> driver make sure that the domains which are returned are CPU
> domains only and not device domains?
>
> > +MODULE_DEVICE_TABLE(of, rpmi_cpufreq_of_match);
> > +
> > +static struct platform_driver rpmi_cpufreq_platdrv = {
> > +       .driver = {
> > +               .name = "riscv-rpmi-performance",
> > +               .of_match_table = rpmi_cpufreq_of_match,
> > +       },
> > +       .probe = rpmi_cpufreq_probe,
> > +       .remove = rpmi_cpufreq_remove,
> > +};
> > +
> > +module_platform_driver(rpmi_cpufreq_platdrv);
> > +
> > +MODULE_AUTHOR("Joshua Yeong <joshua.yeong@...rfivetech.com>");
> > +MODULE_DESCRIPTION("CPUFreq Driver based on SBI MPXY extension");
>
> NIT: CPUFreq driver based on SBI MPXY extension and RPMI protocol   -
> something like this

Currently, the mailbox controller is based on SBI MPXY, but in the
future a mailbox controller for some other RPMI transport could
also show up.

In reality, the driver is only RPMI-based since it uses the mailbox APIs.

Regards,
Anup
