We spun up some new primaries on AWS m3.2xlarge instances.
These had enough drive capacity for our data.
Yes, we would have liked to use "i2" instance types, but they weren't available to us (don't ask).
We ran mongoperf on them, and here is a sampling of the results (an example invocation is sketched after the numbers):
------------------------------------------------------------------------------
EBS
- 986 ops/sec 3 MB/sec
- 1154 ops/sec 4 MB/sec
- 1139 ops/sec 4 MB/sec
- 1268 ops/sec 4 MB/sec
- 1119 ops/sec 4 MB/sec
- 930 ops/sec 3 MB/sec
- 929 ops/sec 3 MB/sec
- 1330 ops/sec 5 MB/sec
- 1341 ops/sec 5 MB/sec
- 946 ops/sec 3 MB/sec
- 892 ops/sec 3 MB/sec
- 1131 ops/sec 4 MB/sec
- 1153 ops/sec 4 MB/sec
- 1073 ops/sec 4 MB/sec
- 1071 ops/sec 4 MB/sec
- 1316 ops/sec 5 MB/sec
SSD (ephemeral)
- 8790 ops/sec 34 MB/sec
- 8879 ops/sec 34 MB/sec
- 8986 ops/sec 35 MB/sec
- 8976 ops/sec 35 MB/sec
- 8109 ops/sec 31 MB/sec
- 8610 ops/sec 33 MB/sec
- 8942 ops/sec 34 MB/sec
- 8574 ops/sec 33 MB/sec
- 8565 ops/sec 33 MB/sec
- 8292 ops/sec 32 MB/sec
- 8357 ops/sec 32 MB/sec
- 8722 ops/sec 34 MB/sec
- 7123 ops/sec 27 MB/sec
- 7863 ops/sec 30 MB/sec
------------------------------------------------------------------------------------------
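For reference, here's a rough sketch of the kind of mongoperf invocation behind numbers like these. mongoperf reads a small JSON config on stdin; the values below (thread count, file size, read/write flags) are illustrative, not our exact test settings.

------------------------------------------------------------------------------
# Sketch of a mongoperf run; config values here are illustrative.
# nThreads:   concurrent I/O threads
# fileSizeMB: test file size (bigger than RAM so we actually hit the disk)
# r / w:      issue random reads / writes
# mmf: false  means plain file I/O rather than memory-mapped files
echo "{ nThreads: 16, fileSizeMB: 10000, r: true, w: true, mmf: false }" | mongoperf
------------------------------------------------------------------------------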
So... somewhere around 7x-8x the performance.
Needless to say, when we ran our performance tests again, we crushed our old timings.
With EBS, our slowest 4 API calls were taking (90th percentile): 2.5, 2.4, 2.4 and 2.4 seconds.
With SSD, our slowest 4 API calls were taking (90th percentile): 0.8, 0.8, 0.4 and 0.4 seconds.
Oh... and why did we think we were I/O bound in the first place?
Because iostat said this about our EBS volumes:
And maybe I'll get around to posting some iostat graphs with the SSD measurements...
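In the meantime, this is the sort of iostat invocation we keep an eye on while the tests run (assuming the standard Linux sysstat iostat; the 5-second interval is arbitrary):

------------------------------------------------------------------------------
# Extended per-device stats (-x), throughput in MB (-m), refreshed every 5 seconds.
# When %util hovers near 100 and await keeps climbing, the device is the bottleneck.
iostat -xm 5
------------------------------------------------------------------------------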