Message-Id: <20110711.022403.532062888492669176.davem@davemloft.net>
Date:	Mon, 11 Jul 2011 02:24:03 -0700 (PDT)
From:	David Miller <davem@...emloft.net>
To:	mirq-linux@...e.qmqm.pl
Cc:	netdev@...r.kernel.org
Subject: Re: [PATCH v2 00/46] Clean up RX copybreak and DMA handling

From: Michał Mirosław <mirq-linux@...e.qmqm.pl>
Date: Mon, 11 Jul 2011 11:16:49 +0200

> A packet is indicated on queue N, there's no memory for a new skb, so
> it's dropped and the buffer goes back to the free list. In parallel,
> queue M (!= N) indicates a new packet. Still, there's no memory for a
> new skb, so it's also dropped and its buffer is reused. The effect is
> that all packets are dropped, whichever queue they arrive on.

Why would queue M (!= N) fail just because N did?  They may be
allocating out of different NUMA nodes, and thus succeed.

> Head-of-line blocking does not matter here, because there's only one
> head: the system memory. If I misunderstood this point, please
> explain further.

Multiqueue drivers are moving towards placing the queues on different
NUMA nodes, and in that scenario one queue might succeed even if the
other fails.
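
As a minimal sketch of what per-node allocation looks like (the
wrapper name and the idea that each ring records the NUMA node it was
set up on are assumptions for illustration, not code from any
particular driver):

	#include <linux/skbuff.h>
	#include <linux/gfp.h>

	/* Allocate a replacement RX skb from the queue's own NUMA
	 * node, so an allocation failure on queue N's node says
	 * nothing about queue M's node.
	 */
	static struct sk_buff *rx_alloc_skb_on_node(unsigned int len,
						    int node)
	{
		return __alloc_skb(len, GFP_ATOMIC, 0, node);
	}

The refill path would pass the ring's recorded node here instead of
NUMA_NO_NODE.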

Back to the hardware hanging issue, it's real.  Getting into a
situation where the RX ring lacks any buffers at all is the least
tested path for these chips.

Tempting fate is a really bad idea, which is why I always propose
keeping the hardware supplied with RX buffers under all circumstances.
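
Concretely, the pattern argued for here looks roughly like this (a
sketch only; COPYBREAK, struct rx_desc, and rx_refill() are made-up
names standing in for the driver-specific pieces):

	#include <linux/netdevice.h>
	#include <linux/etherdevice.h>
	#include <linux/skbuff.h>

	#define COPYBREAK 256		/* assumed threshold; driver-specific */

	struct rx_desc {		/* hypothetical per-buffer bookkeeping */
		struct sk_buff *skb;	/* skb the hardware DMAs into */
		unsigned int len;
	};

	/* rx_refill() stands in for "allocate and map a replacement
	 * buffer"; it returns NULL on failure.
	 */
	static struct sk_buff *rx_refill(struct net_device *dev,
					 struct rx_desc *desc);

	static void rx_one(struct net_device *dev, struct rx_desc *desc)
	{
		struct sk_buff *skb;

		if (desc->len <= COPYBREAK) {
			/* Small packet: copy into a fresh skb and leave
			 * the original DMA buffer on the ring untouched.
			 */
			skb = netdev_alloc_skb_ip_align(dev, desc->len);
			if (!skb)
				goto drop;
			skb_copy_to_linear_data(skb, desc->skb->data,
						desc->len);
		} else {
			/* Large packet: hand the buffer up the stack
			 * only if a replacement was allocated first.
			 */
			struct sk_buff *nskb = rx_refill(dev, desc);

			if (!nskb)
				goto drop;
			skb = desc->skb;
			desc->skb = nskb;
		}
		skb_put(skb, desc->len);
		skb->protocol = eth_type_trans(skb, dev);
		netif_receive_skb(skb);
		return;
	drop:
		/* Either way the descriptor goes back to the hardware
		 * with its old buffer attached, so the RX ring never
		 * runs dry.
		 */
		dev->stats.rx_dropped++;
	}

The point is the ordering: the replacement allocation happens before
the old buffer leaves the ring, so an allocation failure costs one
packet rather than a ring slot.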