Message-ID: <B41635854730A14CA71C92B36EC22AAC3AD92D@mssmsx411>
Date:	Sat, 30 Sep 2006 21:26:40 +0400
From:	"Ananiev, Leonid I" <leonid.i.ananiev@...el.com>
To:	"Trond Myklebust" <trond.myklebust@....uio.no>
Cc:	"Linux Kernel Mailing List" <Linux-Kernel@...r.kernel.org>
Subject: RE: Postal 56% waits for flock_lock_file_wait

> On which filesystem were the above results obtained if it was not
> ext2?
The default ext3 fs was used.

> All that the above results tell you is that your test involves
> several processes contending for the same lock, and so all of them
> barring the one process that actually holds the lock are idle.

Yes. It is flock_lock_file_wait.
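
For what it's worth, the same picture can be reproduced outside of
Postal with a trivial contention test. A hypothetical sketch using the
util-linux flock(1) utility (the lock file name is made up, and the
exact wchan name shown varies by kernel version):

    # Spawn 16 processes that all contend for one exclusive flock.
    # Only the current holder runs; the others sit blocked in the
    # flock() syscall and show up as flock_lock_file_wait in wchan.
    for i in $(seq 1 16); do
        flock /tmp/contended.lock sleep 10 &
    done
    sleep 1
    ps -o pid,wchan=WIDE-WCHAN-COLUMN,comm -C flock
    wait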

Leonid
-----Original Message-----
From: Trond Myklebust [mailto:trond.myklebust@....uio.no] 
Sent: Saturday, September 30, 2006 7:06 PM
To: Ananiev, Leonid I
Cc: Linux Kernel Mailing List
Subject: Re: Postal 56% waits for flock_lock_file_wait

On Sat, 2006-09-30 at 09:25 +0400, Ananiev, Leonid I wrote:
> In a benchmark run of
>              'postal -p 16 localhost list_of_1000_users'
> 56% of the run time is spent waiting in flock_lock_file_wait.
> Vmstat reports that 66% of the cpu is idle and bi+bo=3600 (far from
> the max).
> A Postfix server with FD_SETSIZE=2048 was used.
> Similar results were obtained for sendmail.
> Wchan is counted by
>     while :; do
>         ps -o user,wchan=WIDE-WCHAN-COLUMN,comm
>         sleep 1
>     done | awk '/ postfix /{a[$2]++} END{for (i in a) print a[i]"\t"i}'
> If the ext2 fs is used, the Postal throughput is twice as high and
> bi+bo is 50% lower, while flock_lock_file_wait is still at 60%.

On which filesystem were the above results obtained if it was not ext2?

> Is flock_lock_file_wait considered a performance-limiting wait for
> similar applications on SMP?

All that the above results tell you is that your test involves several
processes contending for the same lock, and so all of them barring the
one process that actually holds the lock are idle.

As for the throughput issue, that really depends on the filesystem you
are measuring. For remote filesystems like NFS, locks can _really_ slow
down performance because they are often required to flush all dirty data
to disk prior to releasing the lock (so that it becomes visible to
processes on other clients that might subsequently obtain the lock).
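
As a rough illustration of that cost, one could time the same
lock/append/unlock loop on a local filesystem and on an NFS mount. A
hypothetical sketch (the path is a placeholder, bash is assumed, and
whether flock(1) maps to a lock that forces a flush on release depends
on the kernel and mount options):

    # Run once with f on ext3 and once on an NFS mount. If releasing
    # the lock flushes dirty data, the NFS run will be dominated by
    # synchronous writes rather than by lock contention.
    f=/tmp/lock_cost_test
    time for i in $(seq 1 100); do
        flock "$f" sh -c "echo line >> '$f'"
    done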

Cheers,
  Trond
