Reddit reviews Cable Matters Internal Mini SAS to SATA Cable (SFF-8087 to SATA Forward Breakout) 1.6 Feet
We found 20 Reddit comments about Cable Matters Internal Mini SAS to SATA Cable (SFF-8087 to SATA Forward Breakout) 1.6 Feet. Here are the top ones, ranked by their Reddit score.
* INTERNAL MINI SAS DATA CABLE connects a RAID or PCIe controller with an SFF-8087 port to 4 discrete SATA drives; Mini SAS to SATA adapter provides reliable internal connectivity between a Serial Attached SCSI controller card in a computer system and direct attached storage devices with a SATA connector
* LEVERAGE HARDWARE RAID PERFORMANCE with this SATA multi-lane cable; two cables can connect up to 8 SATA drives to span RAID controller arrays and share performance across two PCIe 2.0 x8 lanes with compatible host bus adapters; supports up to 6Gbps data transfer rate per drive
* DIY OR PRO INSTALLERS both appreciate the convenience of a forward fan-out cable with an internal mSAS connector when expanding storage; 3 foot cable harness of SAS to SATA cable provides sufficient length for internal cable management; slim ribbon cables minimize airflow impact in a computer case
* FLEXIBLE DESIGN of the SAS breakout cable includes acetate cloth tape over slim ribbon cables for strain relief to protect cables without rigidity; a woven mesh sheath covers half of the cable for easy routing; P1 to P4 markers provide easy ID after installation; low-profile SATA connectors have easy-grip treads with stainless steel latches to prevent accidental disconnection and reduce vibration disconnection
* SFF-8087 COMPATIBLE with popular RAID cards such as 3Ware 9650SE-8LPML RAID, Adaptec ASR-5805/512MB SAS RAID / 2258200-R 5405 RAID / 2258500-R 51645 RAID, Dell PERC H700 RAID / PERC H200, Dell PowerEdge 8-port SAS SATA Controller / PERC H310, HighPoint RocketRAID 2720 / RocketRAID 620 / RocketRAID 4520 / RocketRAID 4522, LSI Logic SAS9211-8I HBA Card, StarTech.com PCI Express RAID, Syba 4-Port RAID HyperDuo / PCI Express SATA II 4 x Ports RAID, Supermicro AOC-SAS2LP-MV8 8-Channel SAS/SATA
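A quick sanity check on the listing's throughput claims (the lane count and 6 Gb/s figure come from the listing above; the ~500 MB/s per PCIe 2.0 lane is the commonly quoted usable rate, not something the listing states):

```python
# Sanity-check the breakout cable's aggregate numbers.
LANES_PER_PORT = 4       # one SFF-8087 port fans out to 4 SATA drives
SATA_GBPS = 6            # listing: up to 6 Gb/s link rate per drive
PCIE2_LANE_MBPS = 500    # assumed ~500 MB/s usable per PCIe 2.0 lane

# Raw SATA link rate of one fully populated breakout cable, in Gb/s
port_gbps = LANES_PER_PORT * SATA_GBPS

# Usable bandwidth of a PCIe 2.0 x8 HBA slot, in MB/s
slot_mbps = 8 * PCIE2_LANE_MBPS

print(port_gbps)   # 24 Gb/s of raw link rate per port
print(slot_mbps)   # 4000 MB/s of slot bandwidth
```

So even two fully loaded ports (8 drives at a nominal 48 Gb/s) sit near the x8 slot's ceiling only on paper; spinning disks sustain a small fraction of their 6 Gb/s link rate, so in practice the slot is not the bottleneck.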
Raw storage:
Total 108TB (18 drives)
Actual storage:
Total 72TiB
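The gap between the two totals is partly units, partly redundancy. Drive vendors sell decimal terabytes while filesystems report binary tebibytes, so 108 TB is only about 98 TiB before ZFS parity takes its share (the conversion below is straight arithmetic; the redundancy split is not spelled out in the post):

```python
# Decimal TB (vendor marketing) vs binary TiB (what the filesystem reports).
TB = 10**12
TiB = 2**40

raw_bytes = 108 * TB
raw_tib = raw_bytes / TiB

print(round(raw_tib, 1))   # ~98.2 TiB of raw capacity before any redundancy
```

The remaining drop from ~98 TiB raw to 72 TiB usable is ZFS redundancy and overhead.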
Case:
Used the two-bay 3.5" cage and the three-bay 2.5" cage from the Deep Silence 3 case.
Fans:
Used two 120mm case fans from the Deep Silence 3 case between the two stacks of drives.
Motherboard: Supermicro X10SRA-F
CPU: Intel Xeon E5-1620 v3 3.5GHz
Heatsink: Noctua NH-D15
RAM:
Total 48gb
PSU: Corsair AX1500i
Controllers:
Total 20 ports
NIC: Mellanox Connectx-2 10g
OS Disks: 2 x Intel 330 60GB, mdadm RAID1
Storage Disks:
Seven shucked from Best Buy WD easystore externals and two from Amazon as internals.
I originally shucked the Seagates from externals. I have replaced the Seagates as they fail, and I had one fail during this upgrade. Yes, I have had five Seagate failures.
SATA/SAS cables:
OS: Fedora 25 with ZFS on Linux
Cost:
The cost was spread across years. This is more like two builds in one. My old build with the motherboard, memory, heatsink, CPU, and 4tb drives combined with my new 8tb build. With the 4tb drives I have replaced five of nine drives over time, which has driven up the real total cost.
The case is huge, but all the space is nice. You don't feel like you are cramming anything in. I used a Fractal Design R5 for my previous build, and prefer Fractal Design cases to Nanoxia cases. But the biggest Fractal Design case wouldn't quite suit my needs. Even this was a stretch for the Deep Silence 6 case. I wish the Deep Silence 6 had spots to mount 2.5" drives on the back side like the R5. It is a feature I miss.
I have a few issues. The trays and the screw holes on the WD 8tb drives don't match; the WD drives are missing the middle bottom screw holes. My temporary workaround is strong 3M double-sided foam tape plus two screws. I may drill holes in the sides of the trays. I had to tape down the 2.5" cage, but the drives are so light it is not a big deal.
After building this beast I had the window closed, the door shut, and no room fan for one day. The room was quite warm. I have since opened the window, turned on the fan, and left the door open.
My Kill-a-watt peaked at 450 watts during boot. It idles between 200-220 watts. So I could go back to my AX760 from my previous build with SATA power splitters.
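The Kill-A-Watt numbers above support the point about dropping back to the AX760 (the PSU rating and wattages come from the build notes; the daily-energy figure is just arithmetic on the quoted idle draw):

```python
# Check PSU headroom and idle energy use from the Kill-A-Watt readings.
PEAK_W = 450    # peak draw during boot, all drives spinning up
IDLE_W = 220    # high end of the quoted 200-220 W idle range
PSU_W = 760     # Corsair AX760 rating

headroom = PSU_W - PEAK_W
print(headroom)          # 310 W of margin even at boot-time peak

# Idle energy use per day, in kWh
kwh_per_day = IDLE_W * 24 / 1000
print(kwh_per_day)       # 5.28 kWh/day at the top of the idle range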
I still have one tray free, but no extra drive or SATA port.
I was originally going to move the four bay 3.5" cage from the Deep Silence 3, but it was just too integrated into the case. I tried adapting it, and it didn't come out well. Even if it had, the bottom tray was going to sit below the lip of the side of the case. So that tray would have been less accessible.
I am currently copying 18tb from the old array to the new array as a burn-in test.
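For a rough sense of how long that burn-in copy runs: at an assumed sustained array-to-array rate (the 400 MB/s below is a guess for a multi-disk ZFS-to-ZFS copy, not a measurement from the post):

```python
# Hypothetical copy-time estimate for the 18 TB burn-in transfer.
data_bytes = 18 * 10**12   # 18 TB, decimal
rate_mb_s = 400            # assumed sustained MB/s (illustrative only)

seconds = data_bytes / (rate_mb_s * 10**6)
hours = seconds / 3600
print(round(hours, 1))     # ~12.5 hours at that assumed rate
```

Halve the rate and the copy roughly doubles to a day, which is why this doubles nicely as a burn-in test.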
I got the original idea to build with this case from someone else's post. I probably would have just bought another Fractal Design R5 and run two systems otherwise. I have run two systems for storage before, connected them with 10g, and used iSCSI. When I did, I used https://romanrm.net/mhddfs to merge the filesystems together. I am considering doing the same again.
With the right cages you could probably fit around 26 3.5" drives in this case.
Over time I have gone from 250gb to 500gb to 1tb to 1.5tb to 2tb to 4tb to 8tb drives. I didn't think I would be upgrading to 8tb anytime soon, until the Best Buy easystore deal. In the past I mostly purchased on Black Fridays. In more recent years externals from Costco.
TLDR: I built a new server combining an existing 24TiB ZFS array with a new 36TiB ZFS array for the win!
You don’t. You find SAS controllers like the H200 or LSI 9200-8e if you need external connections.
Internally you can use one of these if you need SATA.
Cable Matters Internal Mini SAS to SATA Cable (SFF-8087 to SATA Forward Breakout) 1.6 Feet https://www.amazon.com/dp/B018YHS8BS/ref=cm_sw_r_cp_api_i_uvl8CbJM89PS0
The LSI 9211-8i will suit you nicely. Try to get one already flashed with the P20 firmware, or just flash it yourself if you're comfortable with that.
EDIT: Here's a link to one that's already pre-flashed and from a reputable seller. And you'll need two of these cables to go with it.
It has a SAS connection, so you use a breakout cable; 1 port becomes 4 SATA connections.
https://www.amazon.com/Cable-Matters-Internal-SFF-8087-Breakout/dp/B018YHS8BS?th=1
You can get SAS->SATA "fan" cables (example) to use this with regular SATA drives, like if you were building a large NAS array in a huge PC case. This would support up to 24 SATA drives off a single controller.
LSI LOGIC SAS 9207-8i Storage Controller LSI00301
Mini-SAS to 4x SATA Forward Breakout Cable
I picked up one of these cards and the breakout cables and it handles 8TB drives, easy to install. Works great
For future reference when you have more money, I recently got one of these HBA cards from this vendor and it works fine and was properly updated/flashed: https://www.ebay.com/itm/Dell-H310-6Gbps-SAS-HBA-LSI-9211-8i-P20-IT-Mode-ZFS-FreeNAS-unRAID-High-Air-Flow/162834671120
I use this for a cable: https://www.amazon.com/gp/product/B018YHS8BS/
And use this for cooling the heatsink: https://www.amazon.com/gp/product/B009NQLT0M/
There aren't really 6gbps or 3gbps cables; SAS cables are SAS cables.
Sideband signal is for connections to backplanes.
Yes, the breakout direction (forward or reverse) does matter; don't buy anything that doesn't specify.
Here's a good one: https://www.amazon.com/dp/B018YHS8BS
The thing to do is go to eBay and get a used Dell H310/9211-8i that’s been flashed with the latest IT firmware, then you use SAS to SATA breakout cables. Like these ones: Cable Matters Internal Mini SAS to SATA Cable (SFF-8087 to SATA Forward Breakout) 1.6 Feet https://www.amazon.com/dp/B018YHS8BS/
Best to get slightly longer than you think you’ll need.
It’s also good to take off the heat sink, drill two holes in the corners, twist-tie a small fan to it, and then repaste. That’ll keep it from overheating.
Then you’ll have something far more reliable than the chinesium SATA expanders.
> Do you mind sending me a link to that memory? I can only find PC sticks for twice the price.
Sorry, in my haste to reply I overlooked the fact that the T20 wants unbuffered ECC ram, which - holy crap - jacks up the prices. The registered stuff is cheap, and would be awesome for, say, a Dell PowerEdge server.
My apologies yo.
You say you have 24 gigs of RAM - so you're running 2x8gb & 2x4gb (I'm assuming ECC unbuffered here)? If that's the case, then while it may not be $100 for 32gb, Newegg has a 3rd party seller showing $62 per 8GB ECC UDIMM here.
>Also have you looked into running SSDs? My dilemma is do I get the Samsung 850 Evo or the 950 with a PCI adapter
It looks like you can remove the optical drive and place 2 x 2.5" drives in its place. Me personally, for what you have listed above, I would just install 1 or 2 256/500gb 850 Evos in its place and call it a day.
My home server runs all my VM's save for 1 on multiple 120/80/256gb SSD's (basically whatever we had laying around from work after upgrades - that 80 is an old Intel SSD from 2008 or 2009 I think).
So, what I personally would do is:
For our T30 server on Oahu, we used a 500gb Evo SSD for the 3 VM's, an LSI SAS9260-8i RAID adapter, these cables, and 2 6TB Seagate Ironwolf drives in a mirrored config, and 16gb of NON ECC DDR4 memory (it's not a super mission critical server).
According to this thread you don't need ECC ram, and if your data isn't suuuuuuuper important (like life-threatening important), then off to eBay, where you can find 32GB of non-ECC ram for $145.
FWIW I don't run ECC ram at home, but my home server is mainly for Plex, a single Active Directory server, pi-hole, and pfSense. Not super mission critical, and if one of my Linux ISOs gets corrupted, no big deal.
Our servers in our main office, they get the ECC ram, because that shit's critical - we do electrical engineering w/ AutoCAD, I don't need hours of work down the drain.
Errrrr shit sorry I kinda rambled on and brain dumped. I hope something in that wall of text is useful. Aloha :D
EDIT: forgot a word and a letter :/
Use your on-board SATA controller for the drive(s) where the VMs will be stored. I have two SSDs in mine since I have a dozen VMs. These drives will show up in the ESXi storage module.
For "raw access" you will need a separate drive controller (aka HBA=host bus adapter) for the disks you want Xpenology to use for its storage pool. This separate drive controller will show up in ESXi and you enable "passthrough" in the hardware configuration screen. After you do this, the separate drive controller can be added during configuration of the XPE VM as an additional "PCI Device" and all drives connected to this controller will show up in XPE after DSM boots. ESXi will have no visibility to these drives at all. Configured this way the drives behave as if they were in an actual Synology box.
There are caveats however since not all drive controllers can be passed and not all seem to be compatible with Jun's bootloader. There are various LSI models that most people use, with 9211-8i being one of the more popular ones. There are third party cards (such as Dell and IBM) that are the same as the 9211-8i and can use its firmware. Secondly, the card needs to be flashed from IR mode to IT mode which basically disables the built-in RAID function and presents the drives as a JBOD. Here is one example of how to flash an LSI 9211-8i into IT mode: https://nguvu.org/freenas/Convert-LSI-HBA-card-to-IT-mode/
You can also purchase preflashed cards on ebay. Do a search for "9211 it mode" and you'll find many listed. You should be able to grab one for $35-40. I personally use a Dell H310.
The 9211-8i has two SFF-8087 ports, each of which supports 4 drives. Use a cable like this, which has standard SATA connectors: https://www.amazon.com/Cable-Matters-Internal-SFF-8087-Breakout/dp/B018YHS8BS
Go here for info on how to configure XPE on ESXi: https://xpenology.com/forum/forum/50-dsm-6x/
There are also many YouTube videos available that show how to configure XPE in ESXi.
Side note: Now that I've done some searches on RDM, I recall that my issue was that RDM was greyed out as an option for me. Maybe it will work fine for you, but the passthrough method is the recommended way. It's also used for other platforms like FreeNAS and unRAID.
Here's a trustworthy cable to try: https://www.amazon.com/Cable-Matters-Internal-SFF-8087-Breakout/dp/B018YHS8BS?th=1
I use several of these cables. This will net you 8 drives. You can always use a SAS Expander for more.
https://www.amazon.com/Cable-Matters-Internal-Mini-SAS-Breakout/dp/B018YHS8BS/ref=sr_1_7?ie=UTF8&qid=1494600842&sr=8-7&keywords=sas+to+sata
/u/clickwir has basically summed it up. The "header cable" that you are describing is actually a board known as a backplane, which your HDDs slot into; on the back are the SATA connections. Hot swap is fancy terminology describing hard drives that can be easily accessed and replaced without shutting down or stopping the machine.
The reason why a SAS addon card is good is because each SAS port can take on 4 SATA connections making your wiring look very sleek. The downside is that you will likely have to buy the card and won't be able to take advantage of all your motherboard's SATA ports.
> The motherboard in my supermicro has a SAS2 controller onboard so I just have that 1 red sata-to-8087 miniSAS reverse breakout that connects the motherboard to the case and all 24 bays work.
I had no idea you could do that. So you just need this cable and you can connect all 24 drives? How does this work with the motherboard? Do you need some special controller built in?
You've been gracious, please double check my setup for me?
I have acquired the nVIDIA Tesla K10's. Now for the rest of the shopping list:
* cheap enough to not be affected by changes in budget
From what I have seen thus far, I should be able to finish options 1 and 2 (because I start counting at 0) next month, leaving the server itself to be purchased in November or December. Might leave either the DAS or the sound card as an afterthought...
You need these: https://www.amazon.com/Cable-Matters-Internal-SFF-8087-Breakout/dp/B018YHS8BS
Something like this would probably work. Alternatively, you could get this which has room for 4 more drives so you would be able to connect all your drives to the card instead of using the built in sata ports. Either way if you do get one of these host-bus adapter cards, you will need a breakout cable. I bought the second card and two cables and I have my drives running through the card. It should be plug and play, and your system should recognize them with no problem.
You could white box it, but the Xeon CPU brand new is gonna be at least $200.
I went with a TS140. Something like this
Spend another couple hundred on RAM (example RAM here) and an M1015 + breakout cables for passthrough to FreeNAS. (That's what I do.)
EDIT
I'd suggest using ECC memory, especially with a ZFS file system.