Open Source and information security mailing list archives
Message-ID: <CAOoeyxWBGrhNSuVVO+Dbi2qwar7=LNXTPkFWgoZTJWcBQaLMag@mail.gmail.com>
Date: Wed, 15 Jan 2025 15:03:06 +0800
From: Ming Yu <a0282524688@...il.com>
To: Vincent Mailhol <mailhol.vincent@...adoo.fr>
Cc: tmyu0@...oton.com, lee@...nel.org, linus.walleij@...aro.org, brgl@...ev.pl, 
	andi.shyti@...nel.org, mkl@...gutronix.de, andrew+netdev@...n.ch, 
	davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org, pabeni@...hat.com, 
	wim@...ux-watchdog.org, linux@...ck-us.net, jdelvare@...e.com, 
	alexandre.belloni@...tlin.com, linux-kernel@...r.kernel.org, 
	linux-gpio@...r.kernel.org, linux-i2c@...r.kernel.org, 
	linux-can@...r.kernel.org, netdev@...r.kernel.org, 
	linux-watchdog@...r.kernel.org, linux-hwmon@...r.kernel.org, 
	linux-rtc@...r.kernel.org, linux-usb@...r.kernel.org
Subject: Re: [PATCH v5 4/7] can: Add Nuvoton NCT6694 CAN support

On Wed, 15 Jan 2025 at 14:44, Vincent Mailhol <mailhol.vincent@...adoo.fr> wrote:
>
> On Wed. 15 Jan 2025 at 14:35, Ming Yu <a0282524688@...il.com> wrote:
> > On Wed, 15 Jan 2025 at 11:36, Vincent Mailhol <mailhol.vincent@...adoo.fr> wrote:
> > > >>>>> +static int nct6694_can_get_berr_counter(const struct net_device *ndev,
> > > >>>>> +                                       struct can_berr_counter *bec)
> > > >>>>> +{
> > > >>>>> +       struct nct6694_can_priv *priv = netdev_priv(ndev);
> > > >>>>> +       struct nct6694_can_event *evt = priv->rx->event;
> > > >>>>> +       struct nct6694_cmd_header cmd_hd;
> > > >>>>> +       u8 mask = NCT6694_CAN_EVENT_REC | NCT6694_CAN_EVENT_TEC;
> > > >>>>> +       int ret;
> > > >>>>> +
> > > >>>>> +       guard(mutex)(&priv->lock);
> > > >>>>> +
> > > >>>>> +       cmd_hd = (struct nct6694_cmd_header) {
> > > >>>>> +               .mod = NCT6694_CAN_MOD,
> > > >>>>> +               .cmd = NCT6694_CAN_EVENT,
> > > >>>>> +               .sel = NCT6694_CAN_EVENT_SEL(priv->can_idx, mask),
> > > >>>>> +               .len = cpu_to_le16(sizeof(priv->rx->event))
> > > >>>>> +       };
> > > >>>>> +
> > > >>>>> +       ret = nct6694_read_msg(priv->nct6694, &cmd_hd, evt);
> > > >>>>> +       if (ret < 0)
> > > >>>>> +               return ret;
> > > >>>>
> > > >>>> You are holding the priv->lock mutex before calling
> > > >>>> nct6694_read_msg(). But nct6694_read_msg() then holds the
> > > >>>> nct6694->access_lock mutex. Why do you need a double mutex here? What
> > > >>>> kind of race scenario are you trying to prevent here?
> > > >>>>
> > > >>>
> > > >>> I think priv->lock needs to be held here to prevent priv->rx from
> > > >>> being reassigned by other functions, while nct6694->access_lock ensures
> > > >>> that the nct6694_read_msg() transaction completes as a whole.
> > > >>> But in this case, cmd_hd does not need to be inside priv->lock's scope.
> > > >>
> > > >> So, the only reason for holding priv->lock is that priv->rx is shared
> > > >> between functions.
> > > >>
> > > >> struct nct6694_can_event is only 8 bytes, and you only need it for the
> > > >> lifetime of the function, so it can simply be declared on the stack:
> > > >>
> > > >>         struct nct6694_can_event evt;
> > > >>
> > > >> and with this, no more need to hold the lock. And the same thing also
> > > >> applies to the other functions.
> > > >>
> > > >> Here, by trying to optimize the memory for only a few bytes, you are
> > > >> taking a huge performance penalty by putting locks on all the
> > > >> functions. This is not a good tradeoff.
> > > >>
> > > >
> > > > Since nct6694_read_msg()/nct6694_write_msg() process URBs via
> > > > usb_bulk_msg(), the transferred data must not be located on the stack.
> > > > For more details about allocating buffers for transmitting data,
> > > > please refer to the link:
> > > > https://lore.kernel.org/linux-can/20241028-observant-gentle-doberman-0a2baa-mkl@pengutronix.de/
> > >
> > > Ack, I forgot that you cannot use stack memory with usb_bulk_msg().
> > >
> > > Then, instead, you can either:
> > >
> > >   - do a dynamic memory allocation directly in the function (good for
> > >     when you are outside of the hot path, for example struct
> > >     nct6694_can_setting)
> > >
> > >   - and for the other structures which are part of the hot path
> > >     (typically struct nct6694_can_frame) continue to use a dynamically
> > >     allocated buffer stored in your priv but change the type of
> > >     nct6694_can_tx and nct6694_can_rx from union to structures.
> > >
> > > And no more overlaps, thus no more need for the mutex.
> > >
> >
> > Understood, I will remove the unions and add the members to the private
> > structure in the next patch, e.g.:
> > struct nct6694_can_priv {
> >     struct can_priv can;
> >     ...
> >     struct nct6694_can_frame tx;
> >     struct nct6694_can_frame rx;
> > };
> > And do dynamic memory allocation for struct nct6694_can_setting and
> > struct nct6694_can_information.
> >
> > In addition, I would like to know your thoughts on how struct
> > nct6694_can_event[2] should be handled.
> > It is utilized in both nct6694_can_get_berr_counter() and
> > nct6694_can_irq(), with the latter being called more frequently at
> > runtime.
>
> For the nct6694_can_event in nct6694_can_irq(), I would say it is part
> of the hot path and thus you can have it in your struct
> nct6694_can_priv.
>
> For the nct6694_can_get_berr_counter(), the easiest is actually to
> just add the error counter structure to your nct6694_can_priv:
>
>           struct can_berr_counter berr_cnt;
>
> Each time you receive an event, you update this local error counter
> copy, and this way, in your nct6694_can_get_berr_counter(), no more
> need to query your device, you just return the berr_cnt which is saved
> locally.
>
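
That would simplify things a lot. A rough sketch of what I have in mind
(the helper name nct6694_can_update_bec() and the evt->rec/evt->tec field
names are tentative and may differ in the actual patch):

```c
/* Sketch only: cache the error counters in the private structure and
 * refresh them whenever an event arrives, so .do_get_berr_counter no
 * longer needs to query the device over USB. */
struct nct6694_can_priv {
	struct can_priv can;
	/* ... */
	struct can_berr_counter bec;	/* cached copy, updated on events */
};

/* Called from the event path (e.g. nct6694_can_irq()) whenever an
 * event carrying REC/TEC is received: */
static void nct6694_can_update_bec(struct nct6694_can_priv *priv,
				   const struct nct6694_can_event *evt)
{
	priv->bec.rxerr = evt->rec;
	priv->bec.txerr = evt->tec;
}

/* .do_get_berr_counter then becomes a trivial copy: no device query,
 * no priv->lock: */
static int nct6694_can_get_berr_counter(const struct net_device *ndev,
					struct can_berr_counter *bec)
{
	struct nct6694_can_priv *priv = netdev_priv(ndev);

	*bec = priv->bec;

	return 0;
}
```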

Understood! I will make these modifications in the next patch.
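
For the cold-path commands, the per-call allocation would look roughly
like this; the function name, the NCT6694_CAN_SETTING command value, and
the way the result is consumed are illustrative, not final:

```c
/* Sketch only: the cold-path buffer is allocated per call, so nothing
 * is shared between functions and priv->lock is no longer needed. */
static int nct6694_can_get_setting(struct nct6694_can_priv *priv)
{
	struct nct6694_can_setting *setting;
	struct nct6694_cmd_header cmd_hd;
	int ret;

	/* Heap memory is DMA-safe for usb_bulk_msg(); stack memory is not. */
	setting = kzalloc(sizeof(*setting), GFP_KERNEL);
	if (!setting)
		return -ENOMEM;

	cmd_hd = (struct nct6694_cmd_header) {
		.mod = NCT6694_CAN_MOD,
		.cmd = NCT6694_CAN_SETTING,
		.sel = priv->can_idx,
		.len = cpu_to_le16(sizeof(*setting))
	};

	ret = nct6694_read_msg(priv->nct6694, &cmd_hd, setting);
	if (ret < 0)
		goto out;

	/* ... consume *setting ... */

out:
	kfree(setting);

	return ret;
}
```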

Best regards,
Ming
