Date:   Fri, 21 Oct 2022 08:58:18 -0700
From:   Davidlohr Bueso <dave@...olabs.net>
To:     Jonathan Cameron <Jonathan.Cameron@...wei.com>
Cc:     Ira Weiny <ira.weiny@...el.com>, dan.j.williams@...el.com,
        dave.jiang@...el.com, alison.schofield@...el.com,
        bwidawsk@...nel.org, vishal.l.verma@...el.com,
        a.manzanares@...sung.com, linux-cxl@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] cxl/pci: Add generic MSI-X/MSI irq support

On Fri, 21 Oct 2022, Jonathan Cameron wrote:

>> FWIW I did this for the event stuff and did not find it so distasteful...  :-/
>>
>> However, the information I am stashing in the cxlds is all interrupt
>> information.  So I think it is different from what I see in the CPMU stuff.
>
>Right now I'm just stashing the max interrupt number to squirt into a callback
>a few lines later. That feels like a hack to get around parsing the structures
>4 times.  If it's an acceptable hack, then fair enough.
>
>>
>> > 2. The callback below to find those numbers
>> > 3. Registration of the cpmu devices.
>> >
>> > The reality is that it is cleaner to more or less ignore the infrastructure
>> > proposed in this patch:
>> >
>> > 1. Query how many CPMU devices there are. Whilst there, stash the maximum
>> >    CPMU vector number in the cxlds.
>> > 2. Run a stub in this infrastructure that does max(irq, cxlds->irq_num);
>> > 3. Carry on as before.
>> >
>> > Thus destroying the point of this infrastructure for that use case at least,
>> > and leaving an extra bit of state in the cxl_dev_state that exists just
>> > to squirt a value into the callback...
>>
>> I'm not sure I follow?  Do you mean this?
>>
>> static int cxl_cpmu_get_max_msgnum(struct cxl_dev_state *cxlds)
>> {
>>	return cxlds->cpmu_max_vector;
>> }
>
>Yup. That state is of no relevance to the cxl_dev_state outside of this tiny
>block of code.  Hence I really don't like putting it in there.

Oh absolutely, this is ugly as sin. And if there is anything even worth stashing,
the max would only be mbox, as Ira suggested earlier in v1, iirc. So no,
we should not be doing this sort of thing. And if pass one were done in the
callback itself, the need for this would disappear.
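
Something like the below is what I'd picture. Untested sketch, and note that
cxl_count_regblocks(), cxl_cpmu_msgnum() and CXL_REGLOC_RB_CPMU are made-up
names for whatever the register block walk ends up looking like:

static int cxl_cpmu_get_max_msgnum(struct cxl_dev_state *cxlds)
{
	int i, vec, max_vec = -1;
	/* pass one: count the CPMU register blocks */
	int nr_cpmu = cxl_count_regblocks(cxlds, CXL_REGLOC_RB_CPMU);

	/* pass two: take the max message number across the instances */
	for (i = 0; i < nr_cpmu; i++) {
		vec = cxl_cpmu_msgnum(cxlds, i);
		if (vec > max_vec)
			max_vec = vec;
	}

	return max_vec;
}

With that, nothing CPMU-specific has to live in cxl_dev_state at all.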

>>
>> >
>> > So with that in mind I'm withdrawing the RB above.  This looks to be
>> > an idea that, with hindsight, doesn't necessarily pan out.
>> > The long-hand equivalent, with the specific handling needed for each case,
>> > is probably going to be neater than walking a table of much more
>> > restricted callbacks.
>>
>> I'm not married to the idea of the array of callbacks, but I'm not sure how
>> this solves having to iterate over the CPMU devices twice?
>
>Laid that out in the other branch of the thread, but basically either:
>1) We stash irrelevant information in cxl_dev_state just to get it into the
>   callback.  It's not used for anything else, and this makes for a fiddly and
>   non-obvious tie-up between different registration steps that appear somewhat
>   independent.

Yeah anything _but_ this.

>
>2) We do the whole double parse twice (so 4 times in total), which is the right
>   option to keep the layering if using this array-of-callbacks approach, but
>   really ugly.  If we flatten it to straight-line code there is no implication
>   of layering, and the state being parsed is right there in a local variable.

If we are keeping this patch then, as mentioned before, I would prefer this.
IMO it is better than both option 1 above and the open-coding approach.
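
For contrast, the flattened straight-line version would be something like the
below (same made-up helpers as above, and untested):

	int irq = -1, i, nr_cpmu, rc;

	/* walk the CPMU register blocks inline and track the max vector */
	nr_cpmu = cxl_count_regblocks(cxlds, CXL_REGLOC_RB_CPMU);
	for (i = 0; i < nr_cpmu; i++)
		irq = max(irq, cxl_cpmu_msgnum(cxlds, i));
	/* fold in the mailbox interrupt as well */
	irq = max(irq, cxl_mbox_msgnum(cxlds));

	rc = pci_alloc_irq_vectors(pdev, 1, irq + 1,
				   PCI_IRQ_MSIX | PCI_IRQ_MSI);

Readable enough, but it throws away any notion of layering.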

>I can live with it either way, but it's definitely not as pretty as it looks
>for the mailbox case.

Agreed.

Thanks,
Davidlohr
