Date:	Fri, 5 Dec 2008 17:59:17 +0100
From:	"Miguel Ángel Álvarez" <gotzoncabanes@...il.com>
To:	"Krzysztof Halasa" <khc@...waw.pl>
Cc:	netdev@...r.kernel.org
Subject: Re: qmgr for ixp4xx

Hi

On Fri, Dec 5, 2008 at 5:03 PM, Krzysztof Halasa <khc@...waw.pl> wrote:
> "Miguel Ángel Álvarez" <gotzoncabanes@...il.com> writes:
>
>>> The FIFOs are some internal property of the HDLC controller (it isn't
>>> documented, but they probably connect the bus-master DMA controller to
>>> the bit-stuffer and transmitter (and the bit-destuffer and receiver in
>>> the RX path)). You just need to send a message to the HSS to tell it
>>> the correct value.
>>>
>> Is the message the HSS_CONFIG_CORE_CR one, containing the
>> CCR_NPE_HFIFO_3_OR_4HDLC flag?
>
> Well, right, this one too. I missed it.
>
> So it seems the following are needed (and a 128-bit LUT):
>
> #define PKT_NUM_PIPES           1 /* 1, 2 or 4 */
> #define PKT_NUM_PIPES_WRITE             0x52
>
> #define PKT_PIPE_FIFO_SIZEW     4 /* total 4 dwords per HSS */
> (= 1 word for 4E1, 2 words for 2E1, 4 for 1E1)
> #define PKT_PIPE_FIFO_SIZEW_WRITE       0x53
>
> #define CCR_NPE_HFIFO_2_HDLC            0x04000000
> #define CCR_NPE_HFIFO_3_OR_4HDLC        0x08000000
> (to be set in the CORE register).
>

Perfect... Those are the ones I was using.
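
For reference, this is roughly how I am sending them (just a sketch in
the style of your driver; I am assuming the struct msg layout and the
hss_npe_send() / hss_config() helpers from ixp4xx_hss.c):

/* Sketch: configure the packetized pipes for a 4E1 port */
static void hss_config_pkt_pipes(struct port *port)
{
	struct msg msg;

	/* select the 3-or-4-HDLC FIFO split in the core register */
	memset(&msg, 0, sizeof(msg));
	msg.cmd = PORT_CONFIG_WRITE;
	msg.hss_port = port->id;
	msg.index = HSS_CONFIG_CORE_CR;
	msg.data32 = CCR_NPE_HFIFO_3_OR_4HDLC; /* | other CORE_CR bits */
	hss_npe_send(port, &msg, "HSS_SET_CORE_CR");
	/* followed by PORT_CONFIG_LOAD, as in hss_config() */

	/* 4 packetized pipes */
	memset(&msg, 0, sizeof(msg));
	msg.cmd = PKT_NUM_PIPES_WRITE;		/* 0x52 */
	msg.hss_port = port->id;
	msg.data8a = 4;				/* PKT_NUM_PIPES */
	hss_npe_send(port, &msg, "HSS_SET_PKT_PIPES");

	/* 1 dword of FIFO per pipe (4 dwords total per HSS) */
	memset(&msg, 0, sizeof(msg));
	msg.cmd = PKT_PIPE_FIFO_SIZEW_WRITE;	/* 0x53 */
	msg.hss_port = port->id;
	msg.data8a = 1;				/* PKT_PIPE_FIFO_SIZEW */
	hss_npe_send(port, &msg, "HSS_SET_PKT_FIFO");
}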

>> I understand this, but I do not know exactly how to use it. I mean...
>> - I have seen that more queues are added for tx and rxfree, but not rx
>> or txdone... Are they not required?
>
> The HSS uses 4 TX queues (one for each HDLC/packetized stream), and it
> sends all used descriptors back to the TXDONE queue (all 4 streams).
>
> The same with RX: you have 4 RXFREE queues (= "RX empty" descriptors,
> waiting for RX data), but when the data is ready, they go back
> to the CPU in a single RX queue. There is a stream# in the descriptor.
>
OK.
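
So on the RX side I imagine something like this (only a sketch:
queue_get_desc(), queue_put_desc(), queue_ids[], rx_desc_ptr() and
rx_desc_phys() are from your driver, but the stream field in the
descriptor, the rxfreeq array and hss_hdlc_rx_one() are my
assumptions):

	/* Sketch: demux the single RX queue by stream number */
	int n;

	while ((n = queue_get_desc(queue_ids[port->id].rx, port, 0)) >= 0) {
		struct desc *desc = rx_desc_ptr(port, n);
		int stream = desc->stream;	/* assumed ID field */

		/* hand the data to that stream's hdlc device
		   (hss_hdlc_rx_one() is hypothetical) */
		hss_hdlc_rx_one(port, stream, desc);

		/* refill that stream's RXFREE queue with the now
		   empty descriptor */
		queue_put_desc(port->rxfreeq[stream],
			       rx_desc_phys(port, n), desc);
	}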

>> - Must we use different txreadyq for each hdlc?
>
> Yes, otherwise one HSS could grab all descriptors, making the
> other HSS (temporarily) unusable. Actually we need a TXREADY queue
> per HDLC stream (4 per 4E1 port).
>
>> - The values you have chosen for txreadyq are 2 and 19. Do they not
>> conflict with HSS0_PKT_RXFREE1_QUEUE and HSS1_PKT_RXFREE1_QUEUE?
>
> It certainly does conflict. For now there are no problems because MVIP
> isn't supported. I guess we need 64-queue support. Fortunately, Karl
> Hiramoto already has a patch for 64 queues, almost ready for merge.
>
> We also have to make sure the queues don't conflict with the Ethernet
> driver and (if used) with the crypto code.
>
The queues are different for each NPE, aren't they?

I will check for the 64-queue support patch (thanks, Karl). However,
I am not sure it is required. I mean...
- HSS0 uses queues 12-22.
- HSS1 uses queues 0-10.
- That leaves us 10 queues free, doesn't it? Couldn't we use queue 11
for the eth txreadyq, 23-26 for the HSS0 txreadyqs, 27-30 for the HSS1
txreadyqs and 31 for the crypto code (which I do not know)?
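
Something like this (only a sketch; the crypto queue number is a
placeholder, since I do not know what that code needs):

/* Proposed use of the 10 free queues (sketch) */
#define ETH_TXREADY_QUEUE		11
#define HSS0_PKT_TXREADY_QUEUE(n)	(23 + (n))	/* n = 0..3 */
#define HSS1_PKT_TXREADY_QUEUE(n)	(27 + (n))	/* n = 0..3 */
#define CRYPTO_QUEUE			31		/* placeholder */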

>> - I am not sure which documentation you used for this (great)
>> implementation of eth and hss. The Intel manuals lack all information
>> about this, so I am trying to check differences with Intel's software
>> library (a nightmare),
>
> I was using the library, though I processed it first with some custom
> scripts to make it easier to read. Christian Hohnstaedt's code was
> also a great help to understand what's going on.
>
I am trying to follow Intel's code... and it is difficult...

>> and have found that in "ixHssAccPktTxInternal"
>> they add the hdlc port to the entry (which at first made me think
>> they were accessing consecutive memory positions from the entry for
>> hdlc0).
>
> Do you mean this?
> /* Set the least significant 2 bits of the queue entry to the HDLC port number. */
> qEntry |= (hdlcPortId & IX_HSSACC_QM_Q_CHAN_NUM_MASK);
>
> They want it when the packet is back in TXDONE queue.
>
Ummm? To know which queue should be re-fed? How could we do that?
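
Maybe something like this on TXDONE (a sketch; I am assuming the
descriptors are at least 4-byte aligned, so the low 2 bits of the
entry are free, that IX_HSSACC_QM_Q_CHAN_NUM_MASK is 0x3, and that
txreadyq[] is a per-stream array):

	/* Sketch: recover the stream from a TXDONE queue entry */
	u32 entry = qmgr_get_entry(queue_ids[port->id].txdone);
	int stream = entry & 3;		/* the 2 bits the library set */
	u32 phys = entry & ~3;		/* descriptor address again */

	/* free the skb for this descriptor, then re-feed that
	   stream's TXREADY queue */
	qmgr_put_entry(port->txreadyq[stream], phys);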

>> So that is why I thought acc represented the same thing.
>
> No, acc is a private thing of the queue manager. The HSS code doesn't
> know about it.
>
OK.

>> What you are saying is that if I send the data to the correct
>> tx queue, it will reach the correct FIFO... OK.
>
> Yes.
>
Perfect.
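
So in hard_start_xmit it would just be something like this (sketch;
port->txq[stream] and the descriptor index n are my assumptions):

	/* Sketch: each stream has its own TX queue; OR in the stream
	   number, as the Intel library does, so the descriptor comes
	   back identified in the shared TXDONE queue */
	qmgr_put_entry(port->txq[stream],
		       tx_desc_phys(port, n) | stream);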

>> And reception? If I
>> have only one reception queue, how can I know which
>> channel the data was received on?
>
> The descriptor will have an ID in it.
>
Where?

>> - Do you know where more information can be found about this
>> relation between queues, DMAs and FIFOs? You are helping me greatly,
>> but I do not want to make you lose so much time.
>
> We only have the ixp4xx library code. A few minutes of my time are
> not a problem when spent helping to implement a useful feature.
>
> I wonder what I should do now with all this. Perhaps integrate
> Karl's 64-queue patch and get the HDLC part of the HSS driver ready
> to be merged upstream... I will see if that makes sense.
>

I think your code is quite useful and well structured, so it would be
very interesting to have it mainline.

I have been trying to extend your code to make it work on a 4E1 port.
Before making it configurable, I have made a first implementation that
covers only my use case (and is not completely working yet). I will
send you a patch with my changes (too large to post to the list), so
that you can give me your opinion if you want to. I would like to
merge it with your code once it works (and the clean-up is done).

Thanks

Miguel Ángel