Message-ID: <55B252B0.7010301@unitedstack.com>
Date: Fri, 24 Jul 2015 22:58:56 +0800
From: juncheng bai <baijuncheng@...tedstack.com>
To: unlisted-recipients:; (no To-header on input)
CC: linux-fsdevel@...r.kernel.org,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Performance degradation on a big cluster
Hi all,
We have met a performance degradation in one of our clusters: our
randwrite latency degraded from 1ms to 5ms (fio, ioengine=rbd,
iodepth=1).
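For reference, a fio invocation roughly along these lines reproduces
the measurement (assuming fio is built with rbd support; the pool,
image and client names below are only placeholders, not the ones we
actually use):

    fio --name=randwrite-lat --ioengine=rbd --clientname=admin \
        --pool=rbd --rbdname=test-image \
        --rw=randwrite --bs=4k --iodepth=1 \
        --time_based --runtime=60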
The cluster has about 200 OSDs running on Intel 3500 SSDs; we run
both qemu and ceph-osd on the same hosts, and the network for ceph is
10GbE. When the cluster was smaller and there were not so many qemu
processes, the I/O latency was about 1ms, but now it is 5ms.
I used strace to measure syscall times: all the syscalls involved
(writev, io_submit, recvfrom, sendmsg, lseek, fgetxattr, etc.) take
300us to 600us, while on a small and idle cluster the same syscalls
take close to 0us.
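In case it helps to reproduce, per-syscall timings can be collected
with something like the following (the pid being whichever ceph-osd
process is under inspection):

    # wall time spent in each syscall (-T), following all threads (-f)
    strace -f -tt -T -p <osd-pid> \
           -e trace=writev,io_submit,recvfrom,sendmsg,lseek,fgetxattr

    # or a per-syscall time summary
    strace -c -f -p <osd-pid>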
After checking several clusters, I came to the following:

    num_of_osds   num_of_threads_on_host   time_of_syscall(us)
    200           10000                    300-600
    100           5000                     200-500
    70            2500                     100-300
    9             750                      20-60
The thread counts on one host of the 200-OSD cluster look like this:

    name        num_of_processes   num_of_threads   num_of_threads_per_process
    qemu-kvm    49                 9748             198
    ceph-osd    6                  5707             951
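Per-process thread counts like these can be gathered with ps; a rough
sketch, assuming a procps-style ps:

    # thread count (nlwp) and process count per command name
    ps -e -o comm,nlwp --no-headers | \
        awk '{ procs[$1]++; thr[$1]+=$2 }
             END { for (c in procs) printf "%-10s %5d %8d\n", c, procs[c], thr[c] }'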
I think too many threads on the host lead to high latency in the
ceph-osd processes, and this causes the high I/O latency seen on the
client side.
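If that is the case, the extra time should show up as run-queue wait
rather than actual work. One way to check (just a suggestion on my
side, not something I have verified on this cluster) is
/proc/<pid>/schedstat, whose second field is the time that task spent
waiting on a run queue in nanoseconds (per-thread values are under
/proc/<pid>/task/<tid>/schedstat), or perf sched:

    # fields: time on cpu (ns), time waiting on a runqueue (ns), timeslices
    cat /proc/<osd-pid>/schedstat

    # record scheduling latency for a few seconds, then report it
    perf sched record -- sleep 10
    perf sched latency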
Anyone's help is welcome.
Thanks!