• Optane disappointment leads to a ‘Plan B’


    #1856582

    LANGALIST, by Fred Langa. I was in a bit of a pickle: The performance of my brand-new PC was awful — it simply couldn’t handle my peak workloads! Worse,
    [See the full post at: Optane disappointment leads to a ‘Plan B’]

    7 users thanked author for this post.
    • #1856820

      This article contains an accidentally duplicated image in Figs. 6 and 7.

      Here’s the correct Fig 7.

      • This reply was modified 5 years, 11 months ago by Fred Langa.
      3 users thanked author for this post.
    • #1857018

      I looked for a long time and still couldn’t find a good RAM-cache app for my Samsung PM961 256GB M.2 NVMe SSD.

    • #1857265

      It depends on your usage. I mainly use quite a few image editors and several browsers, and for that my Optane-cached HDD is very fast.

    • #1858041

      I’ve been “heavy workloading” PC workstations since forever.

      There’s really no room for the words “performance” and “spinning hard drive” on the same page, let alone in the same sentence.

      If you really want to see what a modern processor can do – or even a set of them, such as what you can find in a modern high-end Xeon workstation – you don’t even want a measly single SSD. You want an array of them.

      Consider what I/O devices modern high-end systems in the $5K to $10K range are sporting. For example, the Dell UltraSpeed card, with 4 slots for high-performance M.2 drives and the ability to tie PCIe lanes directly to those drives via the NVMe protocol and the Xeon Scalable processor’s new low-latency on-chip RAID capabilities. Early this year I got such a workstation with an UltraSpeed card and just 2 of the 4 slots filled. It outpaces the I/O performance of my older system, with its PCIe RAID controller and 8 SATA III SSDs, by 3 to 1, and still has room to grow and get faster. It sustains I/O rates in the gigabytes per second, even with small block sizes.

      As Fred points out, you find that such a system just doesn’t bog down when you pile on the work. I do product testing in VMs while using the main system for engineering work. You wouldn’t know the VMs are even there.

      It turns out modern PC designs built for high workload performance really are no longer limited by I/O capacities or speeds. The current biggest challenge is to get data to and from the RAM fast enough to do big jobs on multiple cores at once.

      Fred, I caution you about one thing: 3rd party caching utilities often seem to provide a performance boost “on paper” (i.e., in artificial benchmarks designed to measure hardware performance). Windows has a VERY effective file system cache right out of the box. You really don’t need an extra 3rd party solution making your I/O interface more complex (and potentially prone to additional faults).
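      To see that built-in cache at work, here is a minimal Python sketch (the path and block size are just placeholders) that times a sequential read of a large file and then an immediate re-read; the second pass is served largely from the Windows file-system cache and should come back far faster, no third-party utility involved:

          import time

          TEST_FILE = r"C:\temp\bigfile.bin"   # placeholder: any multi-GB file not recently read
          BLOCK = 1024 * 1024                  # read in 1 MiB chunks

          def read_rate_mb_s(path):
              """Read the file sequentially and return the throughput in MB/s."""
              total = 0
              start = time.perf_counter()
              with open(path, "rb") as f:
                  while True:
                      chunk = f.read(BLOCK)
                      if not chunk:
                          break
                      total += len(chunk)
              return (total / 1_000_000) / (time.perf_counter() - start)

          print(f"cold read: {read_rate_mb_s(TEST_FILE):7.1f} MB/s")   # mostly from the drive
          print(f"warm read: {read_rate_mb_s(TEST_FILE):7.1f} MB/s")   # mostly from the OS cache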

      -Noel

      1 user thanked author for this post.
      • #1858258

        I had never heard of a Dell UltraSpeed card and did not realise that it was possible to run disks in such a manner!

        Every day is a school day…

        Thanks.

        • #1859450

          Oh yes, parallel / striped was always the thing for more speed. On some systems you used to be able to do EITHER mirror or striped but not both, in the old days… nowadays high-end systems often do things like RAID 1+0 or 0+1, or something even more complicated.

          And yes, there are times when you can get more performance with a third-party caching setup. These tend to be uncommon special cases where you start to run into Windows scheduler algorithm scaling issues… which was somewhere around several dozen CPU cores fully loaded and simultaneous heavy I/O, last time I checked. So more like US$ 100K and over, than 10K… and still uncommon.
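          As a rough illustration of those tradeoffs (the drive count, size, and per-drive speed below are made-up numbers, not from any particular system), a tiny Python sketch of how usable capacity and best-case sequential reads scale for plain striping, plain mirroring, and RAID 1+0:

              def raid_estimate(level, n, size_tb, drive_mb_s):
                  """Very rough usable-capacity and best-case sequential-read estimates."""
                  if level == "RAID 0":      # striping only: full capacity, reads hit every drive
                      return n * size_tb, n * drive_mb_s
                  if level == "RAID 1":      # mirroring only: one drive's capacity, reads can fan out
                      return size_tb, n * drive_mb_s
                  if level == "RAID 1+0":    # mirrored pairs, then striped: half the capacity
                      return (n // 2) * size_tb, n * drive_mb_s
                  raise ValueError(level)

              for level in ("RAID 0", "RAID 1", "RAID 1+0"):
                  cap, rate = raid_estimate(level, n=4, size_tb=1, drive_mb_s=500)
                  print(f"{level:8}  usable ~{cap} TB   best-case reads ~{rate} MB/s")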

    • #1871850

      I recently replaced a desktop’s HD with an SSD, but had some special
      considerations because of the partitioning/multibooting I do. Woody
      suggested that I post my experience as a reply to Fred’s post.

      I have a desktop that runs 24/7/365, doing World Community Grid work
      when I am not on the system. I thought the PC’s 4.5-year-old 2TB hard
      drive had started to make some peculiar sounds. (Its predecessor had
      lasted only two years.) Encouraged by Fred’s recent Crucial SSD
      experience, I bought an MX500 and successfully replaced the old HD.

      For most desktop configurations this is pretty easy to do. I had to get
      a bracket to fit the 2.5″ drive into the space built for 3.5″ drives in
      my Dell XPS 8500 desktop. I also bought a SATA-USB3 cable to clone my
      old drive to the SSD, as shown in the Crucial videos.

      The difficult part was the disk cloning. I had done some pretty
      complicated partitioning on my hard drive, using TeraByte’s BootIt Bare
      Metal (BIBM). I use the BIBM facility to enable more than four primary
      partitions so I can boot from many different partitions for testing. Of
      course nowadays Dell and Microsoft create their own special primary
      partitions for utilities, factory restore, system partition, etc. BIBM
      by default even creates a tiny one of its own. Also I have an extended
      (primary) partition for logical volumes. At last count I have nine
      primary partitions and can multiboot to many of them. To enable all
      this, BIBM creates a special Extended Master Boot Record (EMBR) to
      replace the disk’s standard MBR — a potential stumbling block for disk
      cloning software.
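      For anyone wondering why a standard MBR tops out at four primary partitions: the partition table is just four 16-byte entries at a fixed offset in the disk’s first sector, which is exactly the limit BIBM’s EMBR works around. A small Python sketch (the image path is a placeholder; point it at a raw copy of a disk’s first sector) that dumps those four entries:

          import struct

          IMAGE = "sector0.bin"   # placeholder: raw copy of a disk's first 512-byte sector

          with open(IMAGE, "rb") as f:
              sector0 = f.read(512)

          assert sector0[510:512] == b"\x55\xaa", "missing MBR boot signature"

          # Classic MBR partition table: exactly four 16-byte entries at offset 446.
          for i in range(4):
              entry = sector0[446 + i * 16 : 446 + (i + 1) * 16]
              boot_flag, part_type = entry[0], entry[4]
              lba_start, num_sectors = struct.unpack_from("<II", entry, 8)
              print(f"entry {i}: type=0x{part_type:02x} boot=0x{boot_flag:02x} "
                    f"start LBA={lba_start} size={num_sectors} sectors")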

      Note my desktop is an old, non-GPT, non-UEFI system. BIBM does not
      allow booting from a GPT partition, so I stayed with MBR. There is a
      new BootIt UEFI (BIU) product from TeraByte that removes some prior
      restrictions, but I don’t have any experience with BIU yet. Probably on
      my next new PC. One step at a time.

      Given the complicated disk structure, I was concerned that the Crucial
      (Acronis) copy/clone software would not be able to clone the HD
      properly. I had a chat session with Crucial Support and they agreed. I
      had suggested using BIBM’s Disk Imaging facility instead. They agreed
      with that as well, so that’s what I used. It worked.

      The only problem I had with the BIBM cloning was the time it took. That
      was probably my fault. I could have opened up my desktop and put in the
      SSD as a second SATA drive. That should have made the cloning go very
      quickly (I’m pretty sure). However putting in a second drive in that
      desktop requires some extra steps — you have to remove the HD cage to
      get at the second slot. In contrast, cage removal is not required to
      get to the first slot. So I decided to just use a SATA-USB3 cable to
      clone the drive externally, per the Crucial videos, which are geared
      toward laptops. I knew it would take longer, but it seemed safe.

      I had previous experience when replacing the original hard drive years
      ago. For that I had put the replacement drive in an external drive
      enclosure (an old Thermaltake BlacX ST0005U) and easily did the cloning
      using BIBM in about an hour. Unfortunately I had forgotten that I had
      used an eSATA cable instead of USB3 to connect the BlacX to my PC, which
      has an eSATA port, so the drive was effectively inside the PC directly
      on the SATA bus. Fast.

      In any event, although the BlacX has a 2.5″ slot, I decided not to try
      using it for the HD to SSD cloning since I was concerned I might fry the
      SSD (different voltages?). I think newer BlacXs may support SSDs, but
      I was doubtful that my old one did. I sent a query to Thermaltake Support on
      this, but never got a response. I played it safe and did not really
      consider using the BlacX. Had I thought more about using it, I would
      probably have remembered why that external cloning with BIBM had gone so
      quickly — the eSATA connection.

      Long story short, the cloning through the SATA-USB3 cable with BIBM worked,
      but took almost *nineteen* hours. I suspected the USB3 connection to a
      known-good USB3 port got dropped to USB2, either because of some
      inherent BIBM restriction or because of the options I had selected for
      the cloning. Anyway, it finally completed without any errors. I was
      patient.
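      As a sanity check on those numbers (assuming the clone moved close to the full 2 TB), the implied transfer rate does land in USB 2.0 territory rather than anywhere near USB3 or SATA speeds; a quick back-of-the-envelope in Python:

          size_bytes = 2 * 10**12        # ~2 TB drive (assumes a near-full-disk clone)
          hours = 19                     # reported cloning time
          rate_mb_s = size_bytes / (hours * 3600) / 10**6
          print(f"effective rate: ~{rate_mb_s:.0f} MB/s")   # ~29 MB/s, USB 2.0-class throughput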

      The good news was that I was able to immediately boot Windows on my
      production WIN10 partition. All looked normal. Later I did some BIBM
      tests. I made a copy of the WIN10 partition to my TEST2 partition’s
      space, then booted the TEST2 partition successfully. I next did a
      backup of WIN10 to an external hard drive and then did a restore of that
      external image to my TEST3 partition, then booted TEST3. All went
      normally.

      With the SSD I am now back to where I was with the old hard drive —
      able to copy partitions, back up and restore partitions, and multiboot
      from numerous primary partitions. The SSD is faster and produces no
      worrisome sounds (hopefully it has some way of telling me if it is about
      to die). The cloning just took much longer than I expected. If I were
      doing it again, I would probably just remove the HD cage and install the
      SSD as a second drive. That would require at least two cage removals:
      remove cage, insert the SSD in slot 2, replace cage, clone, remove cage,
      take out the dying drive from slot 1 and put the SSD in its place,
      replace cage. Or maybe use some modern enclosure with SSD and eSATA
      support?

      I later reported my experience to TeraByte Support, asking if there were
      different parameters I should have used. The answer I got was
      essentially that’s the way it is for that type of connection if BIBM
      (actually Image for DOS) is used — a BIOS limitation. However it would
      have gone much faster if I had used their Image for Windows or Image for
      Linux products, which bypass BIOS. It’s not too important to me now,
      unless I have to do this again some day.

      Possible addition: As described in Fred’s post, Crucial has optional
      Momentum Cache software to speed up its SSD even more. I wasn’t sure if
      it would play well with BIBM. I posted a query on the TeraByte BIBM
      forum. TeraByte Support quickly responded that they were unfamiliar
      with Momentum Cache, but said “you’d want to ensure it’s not something
      that lives across reboots and it’s abstracted by the firmware so it
      looks and acts like a regular drive.” I later found this Micron
      writeup, “TN-FD-32: Enhancing SSDs With Momentum Cache – Crucial”,
      which provides some more details (Google search for it). I passed that
      info on to TeraByte, with my thought that as long as you did a full (no
      Fast Startup or Hibernate) shutdown to be sure the MC cache was flushed,
      MC sounded safe in a multiboot environment. (The Crucial PDF doesn’t
      make clear when a “shutdown” flush is done by the MC driver, so I
      assumed the worst case — MC might leave some bits unwritten to the SSD
      until that Windows installation resumed from hibernation, causing
      possible data loss if another OS was booted instead.) However, since
      I’m happy with the SSD speed as it is for what I currently do and don’t
      want to make things more complicated, I’m not planning to use Momentum
      Cache in my multiboot environment for now. Jeff
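      One practical follow-on for anyone in the same multiboot situation: before relying on a “full shutdown flushes the cache” rule, it is worth confirming that Fast Startup (hybrid shutdown) really is off. A small Python sketch using the standard winreg module to read the relevant setting:

          import winreg

          KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Power"

          with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
              value, _ = winreg.QueryValueEx(key, "HiberbootEnabled")

          # 1 means Fast Startup (hybrid shutdown) is enabled; 0 means choosing
          # "Shut down" really does a full shutdown.
          print("Fast Startup is", "ENABLED" if value else "disabled")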

      2 users thanked author for this post.