

🚀 Unlock RAID speeds that leave Gen5 in the dust!
The ASUS Hyper M.2 X16 PCIe 4.0 Expansion Card transforms your workstation or server with four NVMe M.2 slots supporting up to 256Gbps transfer speeds via PCIe 4.0 x16 interface. Designed for AMD 3rd Gen Ryzen sTRX40, AM4 sockets, and Intel VROC NVMe RAID, it features a premium low-loss PCB, supports high-power 14W SSDs, and includes an integrated blower fan with heatsink to prevent thermal throttling—delivering rock-solid, enterprise-grade RAID performance for professionals who demand cutting-edge speed and reliability.


| Specification | Value |
| --- | --- |
| ASIN | B084HMHGSP |
| Best Sellers Rank | #139 in Computer Motherboards; #1,224 in Data Storage |
| Brand | ASUS |
| Compatible Devices | Motherboards with AMD TRX40/X570 Series PCIe 4.0 support and Intel VROC NVMe Raid |
| Customer Reviews | 4.3 out of 5 stars (754 ratings) |
| Hardware Interface | PCI Express 4.0 |
| Item Dimensions L x W x H | 1.04"L x 1.2"W x 0.2"H |
| Item Type Name | ASUS Hyper M.2 X16 PCIe 4.0 X4 Expansion Card, supports 4 NVMe M.2 (2242/2260/2280/22110) up to 256Gbps, for AMD 3rd Gen Ryzen sTRX40, AM4 Socket and Intel VROC NVMe RAID |
| Item Weight | 2 Pounds |
| Manufacturer | ASUS |
| Model Number | HYPER M.2 X16 GEN 4 Card |
| Operating System | Windows, macOS, Linux |
| UPC | 192876624920 |
| Warranty Description | 1 Year |
C**N
Absolutely Scorching Speeds If Set Up Properly
Absolutely rock-solid integration with the TRX40 chipset on an ASUS motherboard that has the PCIe bifurcation option (PCIe RAID mode), which you can activate on a lot of boards via the BIOS. Disassembly and reassembly was a breeze, the correct number of screws was included, each slot has a full-size thermal pad ready to go, and the fan seems to work wonders even with four SN850X drives running in RAID 0. It's setting up the AMD RAID drivers properly that is the wild card, and that can be a complete buzzkill if you do one step wrong. For AMD processors, you have to navigate to the support page specific to your CHIPSET, not the processor, and download whatever RAID software package they have there for your AM4 or TRX40 board. You have to enable both SATA and NVMe RAID modes in the BIOS before you can even install the RAID package you just grabbed, and once that is on properly, you will see some unrecognized storage controllers in your Device Manager. From there you right-click each unrecognized item, point it toward the folder with the drivers that came with that package (or that you grabbed separately), and manually update each of them one by one. Then go back into the BIOS, delete the legacy array housing your single NVMe drives, and create whatever RAID array you want; if all is well, it should show up as one single drive in Windows that you can then format as a Simple Volume and manage through AMD RAIDXpert. At each step you might need a few restarts or a full shutdown/power-down for the changes to fully take. Once you clear this hurdle, though, it never reverts and is insanely easy to manage as long as you don't flip back to non-RAID mode in the BIOS. Highly recommend sticking with the 256K allocation size recommended in the BIOS when creating the array and the Windows default value when formatting it in Windows; any other tweaks yielded lesser or spottier performance.
As other reviewers have stated, you must make sure that you have a free x8 or x16 slot for two or four SSDs respectively, that you can split the x16 into four siloed x4 lanes (one per drive) in the BIOS, and that whatever slot you're using isn't sharing lanes with your processor, RAM, or another NVMe slot on your mobo. Threadripper platforms are worry-free in this department because of the gross amount of PCIe lanes they have: as long as you're not running two 5090s in parallel while maxing out every RAM and NVMe slot with some SATA drives thrown in, this should not be a concern. Even I didn't expect quite the eye-watering results I got with four of the fastest 1TB Gen4 drives on the market running in a striped RAID 0 setup. The screenshots speak for themselves. That's basically brushing right up against the theoretical limits of Gen4 NVMe 1.4 drives, and what it should look like when those drives are striped and running free of bottlenecks. For $50 (Used - Like New...came in perfect condition) plus less than $100 each for the drives, this is an absolute no-brainer in terms of bang-for-buck value, and it will give you performance exceeding that of current Gen5 SSDs by about another 8 GB/s. For reference, this is a hair shy of the 2400 MT/s base, non-XMP clock speed of my 3600 MHz RAM. This is madness. If you do video editing, hosting off your main rig, or are looking to trim any other possible system bottlenecks to max out a current-gen graphics card, what are you doing still reading??
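The reviewer's throughput figures line up with a quick back-of-envelope check. A minimal shell sketch; the 16 GT/s rate and 128b/130b encoding are PCIe 4.0 spec values, the rest is arithmetic:

```shell
# Theoretical PCIe 4.0 bandwidth: 16 GT/s per lane with 128b/130b encoding.
# Integer math in MB/s to stay within POSIX shell arithmetic.
per_lane_mb=$((16000 * 128 / 130 / 8))   # ~1969 MB/s usable per lane
x4=$((per_lane_mb * 4))                  # one Gen4 NVMe drive: ~7876 MB/s
x16=$((per_lane_mb * 16))                # four drives striped: ~31504 MB/s
echo "x4:  ${x4} MB/s"
echo "x16: ${x16} MB/s"
```

So a four-drive RAID 0 striped across the full x16 link tops out around 31 GB/s, roughly 8 GB/s per drive, consistent with the "exceeds a single Gen5 SSD" claim above.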
D**.
Solves AER issues on AMD boards with Gen4 NVMe drives
Preface: This card is NOT intended to just be dropped into a commodity PC motherboard. This is a card you install in a server or high-end workstation that has more x16 slots than the number of graphics cards you have. Reviewers complaining that their BIOS doesn't recognize all 4 slots probably plugged it into an x4 or x8 slot on their motherboard. Make sure to check your motherboard specifications before you buy!
Background of why I got this: I have an ASRock Rack ROMED8-2T with an AMD EPYC 7443P processor and 128GB of ECC PC4-25600. I have a pair of Samsung 980 PRO 500GB NVMe drives installed, which are used for the root filesystem (RAID1), ZFS L2ARC, and a ZFS secondary log (SLOG/ZIL) device. Since completing the initial build, I noticed frequent correctable PCIe bus errors in dmesg (see 1st and 2nd screenshots). Communication with the NVMe drive would shut down for a second or two after each AER event, which got *really* bad when I enabled the ZIL on them. Approximately once every 3 months I'd also get a hard lockup necessitating a hard reset of the server over IPMI, which I strongly suspect was related to this. The hard lockups became less frequent with the 6.1 LTS kernel and P3.50 firmware, but I was still able to upset it under heavy write workloads. The PCI device implicated in these messages was "AMD Starship/Matisse GPP Bridge" at 0000:40:01.1, the only child device of which was the NVMe drive at 41:00.0. The other bridge device and NVMe drive did not experience the issue. Research online indicated that an NVMe riser card such as this one would resolve the issue. There are numerous alphabet-soup brand cards for half or a third of the price here on Amazon, but to me $80 is cheap insurance to protect my NVMe drives and the data on them by using a card with a hefty heatsink, a high-quality fan, and a warranty.
I had two hiccups during installation: - This card is quite long, and will interfere with the fan connectors on the ROMED8-2T in PCIE slots 1-4. It will fit in slots 5-7, although 5 and 6 will bring the card quite close to the SAS connectors. - The NVMe drives were not detected until I manually changed the link speed on the slot to "x4x4x4x4" in the BIOS setup. After setting everything correctly, I confirmed that both drives are running at the full Gen4 speed of 16GT/s (see 3rd screenshot).
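The final link-speed check this reviewer describes can be done from a Linux shell. A sketch assuming the PCI address from this review (`41:00.0`), which you would swap for your own device's address:

```shell
# Confirm the drive behind the riser negotiated the full Gen4 link.
# 41:00.0 is the NVMe drive address from this review; find yours with:
#   lspci | grep -i 'non-volatile'
# LnkCap is the advertised maximum; LnkSta is the negotiated state,
# e.g. "Speed 16GT/s, Width x4" for a healthy Gen4 x4 link.
sudo lspci -vv -s 41:00.0 | grep -E 'LnkCap:|LnkSta:'

# Watch for fresh correctable AER events after a heavy write workload:
sudo dmesg | grep -iE 'aer|corrected' | tail -n 5
```

If LnkSta reports a lower speed (e.g. 8GT/s) or a narrower width than LnkCap, recheck the slot's bifurcation setting and physical seating before suspecting the drives.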
C**T
Not all mobos will support this. Make sure you get one that supports Gen 4. Also, you will need to reconfigure your PCIe lanes in the BIOS to x4/x4/x4/x4; if you leave them at the default x16, this won't work.
S**R
Works great on an ASUS motherboard. 4TB M.2 SSDs with no SATA cables needed. Great help on YouTube to get it set up and running.
G**E
I almost didn’t buy this product due to negative ratings, and that would have been a mistake. After use and re-reading, it is clear all negative reviews on this product simply don’t understand the technical limitations of the environment, to wit: each NVMe requires x4 PCIe lanes, and many motherboards have a single x16 slot (which furthermore requires firmware support for 4x4x4x4 bifurcation). Simply check the support, and know that to use all four slots on this card you will likely need to move your graphics card to an x4/x8 slot and/or update the BIOS and/or make configuration changes. These options might not be called “bifurcation” and may be labeled “PCIe RAID”; these firmware and hardware inconsistencies are not ASUS’ fault. Just because your PCIe slot looks like a full-sized x16 slot does not mean this product will work. No, this is not an active RAID controller. That said, for those with the technical aptitude to understand the limitations and requirements, this product is fantastic. There are power-filtering capacitors on the board, and other passive components populated to protect our expensive PCIe 4.0-era NVMe drives. There is a huge solid block of machined aluminium and the correct riser rubbers/thermal adhesives/risers/screws to mount four single- or double-sided NVMe drives. The fan isn’t super useful in a system with above-average cooling, and you can simply turn it off. My NVMe drives went from circa 90 degrees (on the motherboard with no heat sinks, in the random-access RW work I use them for) to barely above ambient. The maximum theoretical throughput of x4 PCIe 4.0 is approaching 8 GB/s, and I see random-access RW speeds approaching 16 GB/s across 4 drives. I don’t know if sequential RW is at full x16 speed, but with the 4x 980 Pros I use, I see ever-so-slightly more than double the speed of 2x 980 Pros, so it definitely scales in a linear fashion IF YOU USE x16 IN 4x4x4x4 BIFURCATION!
T**G
The expansion card is well made. I added 4 more NVMe drives. Now trying to work out how to configure it as RAID 10.
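On Linux, one common route for the RAID 10 setup this reviewer is after is mdadm software RAID rather than firmware RAID. A sketch with assumed device names (confirm yours with `lsblk` first); note that creating an array wipes the member drives:

```shell
# Hypothetical mdadm RAID 10 across the card's four drives (Linux software RAID).
# Device names below are assumptions; confirm with `lsblk` before running.
# WARNING: --create destroys existing data on the member drives.
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/raid10 && sudo mount /dev/md0 /mnt/raid10
# Persist the array definition across reboots:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```

On Windows, the BIOS-level AMD RAID route described in the first review is the usual alternative.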
A**R
Works 100%