Question:

I have 4 identical Supermicro nodes, each with 2x 6-core CPUs and 144 GB RAM:

1st node) 8x 146 GB SAS HDD, 2x 10GbE, IB, IT-mode RAID -> 8 OSDs
2nd node) 8x 146 GB SAS HDD, 2x 10GbE, IB, IT-mode RAID -> 8 OSDs
3rd node) 8x 146 GB SAS HDD, 2x 10GbE, IB, IT-mode RAID -> 8 OSDs
4th node) 8x 64 GB Patriot Flare SSD, 2x 10GbE, IB, IT-mode RAID, SATA3 -> 8 journals

All servers are placed in the same rack, with a redundant 10GbE backend, jumbo frames, and IB for connectivity to the ESXi blades. (Actually I have 3 nodes running for tests at the moment, each with 144 GB RAM, 2x 6-core CPUs, and 8x 146 GB SAS OSDs, with a 2x 10GbE backend and IB for iSCSI; all working OK.)

Does that mean I have to have 2 SSDs for 8 HDDs in each node?
- Is there any recommendation for SSD capacity per HDD in a node?
- Does that mean that if I have one node in the cluster without an SSD, I will destroy the performance of the whole cluster?
- Does the SSD journal still need to be assigned to a particular OSD (I mean via the CLI), or can I just add it through the PetaSAN web interface?

Quote from admin on February 17, 2018, 6:02 pm:

The max is about 4 OSDs per SSD, and a journal SSD does not have to be assigned to a particular OSD. You should make your storage nodes as symmetric as possible. One slow node will not "destroy" performance: if you have 1 slow node out of 10, it will slow roughly 10% of read requests and about 30% of write requests, but again, avoid this. We use a 20 GB journal partition for each HDD OSD, so the SSD does not need to be large.

Reply:

Thank you very much for your time. Would it not be good, in the future, to jointly create some performance recommendations, i.e. how to achieve a reasonable result?
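The admin's sizing rule (a 20 GB journal partition per HDD OSD, and at most 4 OSDs sharing one journal SSD) can be sketched as a quick calculation. This is just an illustration of the arithmetic, not a PetaSAN tool; the function name and parameters are made up for the example:

```python
import math

def journal_ssds_needed(num_hdd_osds, ssd_capacity_gb,
                        journal_gb=20, max_osds_per_ssd=4):
    """How many journal SSDs a node needs under the rule of thumb above:
    one 20 GB journal partition per HDD OSD, at most 4 OSDs per SSD."""
    per_ssd_by_space = ssd_capacity_gb // journal_gb   # journal partitions that physically fit
    per_ssd = min(per_ssd_by_space, max_osds_per_ssd)  # apply the 4-OSDs-per-SSD cap
    if per_ssd == 0:
        raise ValueError("SSD too small for even one journal partition")
    return math.ceil(num_hdd_osds / per_ssd)

# A node with 8 HDD OSDs and 64 GB SSDs (as in the question):
# 64 // 20 = 3 journals fit per SSD, so 8 OSDs need 3 such SSDs.
print(journal_ssds_needed(8, 64))    # -> 3

# With SSDs of 80 GB or more, the 4-OSDs-per-SSD cap is the limit:
# 8 OSDs then need 2 SSDs.
print(journal_ssds_needed(8, 120))   # -> 2
```

Note that with the 64 GB Patriot Flare drives mentioned in the question, capacity (3 x 20 GB partitions per drive) becomes the limit before the 4-OSD cap does, so "2 SSDs per 8 HDDs" only holds for SSDs of at least 80 GB.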