path: root/usr.sbin/diskinfo/diskinfo.8
author     Alexander Motin <mav@FreeBSD.org>   2017-07-05 16:20:22 +0000
committer  Alexander Motin <mav@FreeBSD.org>   2017-07-05 16:20:22 +0000
commit     1a01f934bf7573108f352683ee59a3b7771c113e (patch)
tree       74a093ec92b0ccb932253de7eb5c5485543191ac /usr.sbin/diskinfo/diskinfo.8
parent     cb503ae22d4f98f85d62f18cd353c91ee26feb1b (diff)
Add naive benchmark for SSDs in ZFS SLOG role.
ZFS SLOGs have a very specific access pattern with many cache flushes, which none of the benchmarks I know can simulate. Since SSD vendors rarely specify cache flush time, this measurement can be useful to explain why some ZFS pools are slower than expected. This test writes data chunks of different sizes, each followed by a cache flush, similar to what a ZFS SLOG does, and measures the average time.

To illustrate, here is the result for a 6-year-old SATA Intel 710 Series SSD:

Synchronous random writes:
         0.5 kbytes:    138.3 usec/IO =      3.5 Mbytes/s
           1 kbytes:    137.7 usec/IO =      7.1 Mbytes/s
           2 kbytes:    151.1 usec/IO =     12.9 Mbytes/s
           4 kbytes:    158.2 usec/IO =     24.7 Mbytes/s
           8 kbytes:    175.6 usec/IO =     44.5 Mbytes/s
          16 kbytes:    210.1 usec/IO =     74.4 Mbytes/s
          32 kbytes:    274.2 usec/IO =    114.0 Mbytes/s
          64 kbytes:    416.5 usec/IO =    150.1 Mbytes/s
         128 kbytes:    776.6 usec/IO =    161.0 Mbytes/s
         256 kbytes:   1503.1 usec/IO =    166.3 Mbytes/s
         512 kbytes:   2968.7 usec/IO =    168.4 Mbytes/s
        1024 kbytes:   5866.8 usec/IO =    170.5 Mbytes/s
        2048 kbytes:  11696.6 usec/IO =    171.0 Mbytes/s
        4096 kbytes:  23329.6 usec/IO =    171.5 Mbytes/s
        8192 kbytes:  46779.5 usec/IO =    171.0 Mbytes/s

and a much newer and supposedly much faster NVMe Samsung 950 PRO SSD:

Synchronous random writes:
         0.5 kbytes:   2092.9 usec/IO =      0.2 Mbytes/s
           1 kbytes:   2013.1 usec/IO =      0.5 Mbytes/s
           2 kbytes:   2014.8 usec/IO =      1.0 Mbytes/s
           4 kbytes:   2090.7 usec/IO =      1.9 Mbytes/s
           8 kbytes:   2044.5 usec/IO =      3.8 Mbytes/s
          16 kbytes:   2084.8 usec/IO =      7.5 Mbytes/s
          32 kbytes:   2137.1 usec/IO =     14.6 Mbytes/s
          64 kbytes:   2173.4 usec/IO =     28.8 Mbytes/s
         128 kbytes:   2923.9 usec/IO =     42.8 Mbytes/s
         256 kbytes:   3085.3 usec/IO =     81.0 Mbytes/s
         512 kbytes:   3112.2 usec/IO =    160.7 Mbytes/s
        1024 kbytes:   2430.6 usec/IO =    411.4 Mbytes/s
        2048 kbytes:   3788.9 usec/IO =    527.9 Mbytes/s
        4096 kbytes:   6198.0 usec/IO =    645.4 Mbytes/s
        8192 kbytes:  10764.9 usec/IO =    743.2 Mbytes/s

While the first one obviously has maximum throughput limitations, the second one has such high cache flush latency (about 2 milliseconds) that it is almost useless in the SLOG role, despite its good throughput numbers. Power loss protection is out of scope of this test, but I suspect it can be related.

MFC after:	2 weeks
Sponsored by:	iXsystems, Inc.
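For context, a minimal sketch of the kind of measurement loop the new -S test performs is shown below: write a sector-aligned block to the raw device, force the drive's write cache to flush with the DIOCGFLUSH ioctl, and report the average per-operation latency. This is an illustrative approximation, not the diskinfo implementation; the block size, iteration count, and output format are arbitrary choices for the sketch, and running it overwrites the start of the given device.

/*
 * Illustrative sketch only (not the actual diskinfo -S code): time
 * synchronous writes each followed by a disk cache flush, as a ZFS
 * SLOG workload would.
 * WARNING: destructive -- overwrites the beginning of the given device.
 */
#include <sys/disk.h>		/* DIOCGFLUSH */
#include <sys/ioctl.h>

#include <err.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
	struct timespec t0, t1;
	const size_t size = 4096;	/* one example block size */
	const int iters = 100;
	double usec, per_io;
	char *buf;
	int fd, i;

	if (argc != 2)
		errx(1, "usage: slogsketch <device>");
	if ((fd = open(argv[1], O_RDWR)) < 0)
		err(1, "open %s", argv[1]);
	if ((buf = aligned_alloc(4096, size)) == NULL)
		err(1, "aligned_alloc");
	memset(buf, 0xa5, size);

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < iters; i++) {
		/* Sector-aligned write followed by a write-cache flush. */
		if (pwrite(fd, buf, size, (off_t)i * size) != (ssize_t)size)
			err(1, "pwrite");
		if (ioctl(fd, DIOCGFLUSH) < 0)
			err(1, "DIOCGFLUSH");
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	/* Average latency per write+flush pair, and resulting throughput. */
	usec = (t1.tv_sec - t0.tv_sec) * 1e6 +
	    (t1.tv_nsec - t0.tv_nsec) / 1e3;
	per_io = usec / iters;
	printf("%4zu kbytes: %8.1f usec/IO = %6.1f Mbytes/s\n",
	    size / 1024, per_io, size / per_io * 1e6 / (1024.0 * 1024.0));
	return (0);
}

With the change below, the equivalent measurement is run with the new -S option, presumably together with -w since the man page documents -w as the switch that allows disruptive write tests.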
Notes: svn path=/head/; revision=320683
Diffstat (limited to 'usr.sbin/diskinfo/diskinfo.8')
-rw-r--r--  usr.sbin/diskinfo/diskinfo.8  |  12
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/usr.sbin/diskinfo/diskinfo.8 b/usr.sbin/diskinfo/diskinfo.8
index e65633e6779f..a30337a7c020 100644
--- a/usr.sbin/diskinfo/diskinfo.8
+++ b/usr.sbin/diskinfo/diskinfo.8
@@ -1,5 +1,6 @@
.\"
.\" Copyright (c) 2003 Poul-Henning Kamp
+.\" Copyright (c) 2017 Alexander Motin <mav@FreeBSD.org>
.\" All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
@@ -28,7 +29,7 @@
.\"
.\" $FreeBSD$
.\"
-.Dd July 1, 2017
+.Dd July 4, 2017
.Dt DISKINFO 8
.Os
.Sh NAME
@@ -36,7 +37,7 @@
.Nd get information about disk device
.Sh SYNOPSIS
.Nm
-.Op Fl citv
+.Op Fl citSvw
.Ar disk ...
.Nm
.Op Fl p
@@ -64,9 +65,16 @@ This is a string that identifies the physical path to the disk in the
storage enclosure.
.It Fl s
Return the disk serial number
+.It Fl S
+Perform synchronous random write test (ZFS SLOG test),
+measuring time required to write data blocks of different size and
+flush disk cache.
+Blocks of more than 128KB are written with multiple parallel operations.
.It Fl t
Perform a simple and rather naive benchmark of the disks seek
and transfer performance.
+.It Fl w
+Allow disruptive write tests.
.El
.Pp
If given no arguments, the output will be a single line per specified device