WD drive "colors"

    #21
    Re: WD drive "colors"

    What in the fuck kind of stuff are you storing?
    Cap Datasheet Depot: http://www.paullinebarger.net/DS/
    ^If you have datasheets not listed PM me



      #22
      Re: WD drive "colors" [wall of text]

      Originally posted by Uranium-235 View Post
      What in the fuck kind of stuff are you storing?
      I'm a business. I store everything that I've ever created, everything that I've ever USED to create those things, all of the data that proves each design, 'scope traces, mathematical analysis, etc. And, I do so in a manner that lets me step BACK into the design to make revisions or further explore "particular operating conditions" that are alleged to be problematic (or, to determine how far a particular design can be stressed in a given direction). E.g., if I want to deploy a particular design in a hotter/colder/wetter environment, how will its performance change from what it was under the original conditions? What aspects of the design are most sensitive to temperature? Moisture? Atmospheric pressure?

      What would you do if you had to tweak a design that you created 10 years ago? Yeah, you may have saved printouts of the source code. And, the actual photoplot films of the artwork. As well as a print of the schematic.

      But, do you still have the TOOLS that you used to create each of those? The same CAD program? PCB layout tool? Compilers? Can you step in and make a change quickly and easily -- or, do you have to manually transcribe a printed schematic into electronic form in your CURRENT schematic editing tool? And, then manually place all of the components (and foils!) as they were placed in the original board layout?

      What about the operating system (and appropriate utilities) under which those tools ran?

      I've designed many products for "regulated" industries (gaming, medical, pharma, etc.). So, I need to be able to accurately reproduce EXACTLY the final product that I originally produced (because THAT product has been "validated"). I can't use today's compiler because it will generate a different (likely better/faster) program binary than the compiler that was used in the original product. I.e., making NO changes to the source code will result in a different end product! One that will need to be re-validated (expensive and time consuming); I can't just drop it in as a replacement for the old product cuz there's no guarantee that the new binary performs the same as the old, in all cases (if it did, then why is it different?).

      Do you take the computer (and all associated peripherals) that you used to design the device and shrink-wrap it and store it on a shelf for that potential future use? How much shelf space do you have? :>

      Do you save all of the databooks that you consulted when selecting components for the design? While some of the data may be available on-line, failing to preserve the original means you'll now have to spend time hunting for that data. And, will the data have changed from that which you used -- and relied upon (think litigation) -- in your design? What do you do about the documents that are NOT "public"? Confidential specifications? Materials released to you under NDAs? (as well as the NDAs themselves!)

      Do you save the pages of notes (lab books) that you accumulated during the design process? And, the pages of test results that you compiled to prove how it operated in specific conditions?

      All of this intellectual property represents the value of a business. Along with the ability to efficiently access it! (if it's costly to make use of it, then its value decreases accordingly)

      Now, repeat this for everything you've designed as well as designs that were abandoned (perhaps not economical in 2005 but may now be more manufacturable due to technological advances or changes in component prices) as well as those that were just "explored" -- over a period of 40+ years.

      Add to that, all of the business records (purchase orders, sales receipts, telephone logs, etc.) over the same time period.

      So, you digitize EVERYTHING and keep it on disk. E.g., my current project has over a million files -- not counting tools, documents, etc. Disks are the modern day equivalent of 'fiche.

      How much "data" do you think your employers generate per person, per year? decade? They obviously keep your "attendance" records, PTO, insurance claims, salary history, employment application, etc. But, what about the WORK that you do for them and the results of that work??

      When I first started in this business, I saved all of the "paper" -- and kept the disk drive from the host machine to preserve the "tools". But, that's a stack of paper a foot thick and a disk for each project, project proposal, abandoned project. I don't have the space to store all of that. Nor the staff to maintain it and ensure it is still in a "viable" form (storing stuff only to discover that it is no longer readable is just a waste of space).

      [Actually, I originally stored disk images on spools of 9 track tape as they were cheaper than $1000 disk drives. But, they eat up a shitload of space! Thankfully, you can copy an image off of a tape and onto a physical disk pretty easily when the disks become more affordable!]

      So, I just have an archive that stores digital versions of everything that can be digitized (physical HARDWARE prototypes I preserve when/if necessary). Then, "catalog" all of the files (because I can't access all of them at the same time, given the number of drives involved) so I can search for some particular project or notes that I remember as being associated with some project, etc. Then, once I know which disk it's on and where within that disk, I can recover it.
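
      Roughly, that catalog pass amounts to something like the following -- a minimal sketch in Python with SQLite, purely for illustration (the "catalog.db" file name, the table layout, and the drive labels are made up, not my actual tooling):

      Code:
# Minimal catalog pass: walk one mounted archive drive and record
# (drive label, relative path, size, SHA-256, last-verified time) per file.
# Database name, table layout, and labels here are illustrative only.
import hashlib, os, sqlite3

def sha256_of(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def catalog_drive(mount_point, drive_label, db="catalog.db"):
    con = sqlite3.connect(db)
    con.execute("""CREATE TABLE IF NOT EXISTS files
                   (drive TEXT, relpath TEXT, size INTEGER,
                    sha256 TEXT, last_verified REAL,
                    PRIMARY KEY (drive, relpath))""")
    for root, _dirs, names in os.walk(mount_point):
        for name in names:
            full = os.path.join(root, name)
            rel = os.path.relpath(full, mount_point)
            con.execute("INSERT OR REPLACE INTO files VALUES (?,?,?,?,0)",
                        (drive_label, rel, os.path.getsize(full),
                         sha256_of(full)))
    con.commit()
    con.close()

# e.g. catalog_drive("/mnt/archive07", "ARCHIVE-07")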

      I keep "virtual machines" for each system configuration used in the development of a particular project so I can "run" those machines -- and the tools they encapsulate -- on TODAY'S computer system. I create a new VM for each machine configuration that I use during a product's development (e.g., if I upgraded the compiler because of a bug that I uncovered in the compiler, I want to be able to run the "buggy" compiler as well as the "bugfree" compiler that replaced it)

      While I am developing a product, I keep all of my "documents" (schematics, source code, layouts, regression tests, etc.) under a VCS (version control system) so I can move forward or backward in time to explore different approaches to the design. E.g., I may discover a flaw in a new implementation of a particular piece of code that I've written. So, I will want to be able to "step backwards" to see where the flaw crept into the design. Is it present in any RELEASED versions of the software? If so, which ones??

      In addition to enabling me to resurrect an old project (for maintenance or revision), this also is a great resource that I can consult. I can dig up the results of an experiment I did on circuit X of project Q and see what I can extrapolate from that for a new circuit to be designed for a different project. Or, review the criteria I used for selecting a particular component. Or, look up the failure rate for a particular product (warranty repairs).

      For example, I wrote a clever little binary-to-packed-BCD conversion routine a few decades ago. When I need a similar capability, I just go looking for the original and relearn what I previously wrote.
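
      (For anyone curious, the usual way to do that is the shift-and-add-3 / "double dabble" method. A Python sketch of the general technique -- not my original routine:)

      Code:
# Shift-and-add-3 ("double dabble"): convert an unsigned binary value to
# packed BCD (one decimal digit per nibble). A generic sketch of the
# technique, not the routine referred to above.
def bin_to_packed_bcd(value, bits=16):
    digits = len(str((1 << bits) - 1))        # decimal digits needed
    bcd = 0
    for i in range(bits - 1, -1, -1):
        # add 3 to every BCD digit >= 5 so the next shift carries properly
        for d in range(digits):
            if ((bcd >> (4 * d)) & 0xF) >= 5:
                bcd += 3 << (4 * d)
        # shift the BCD register left and bring in the next binary bit
        bcd = (bcd << 1) | ((value >> i) & 1)
    return bcd

# e.g. bin_to_packed_bcd(1234) == 0x1234  (nibbles are 1, 2, 3, 4)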

      Or, if I want to implement another CORDIC algorithm, I can dig up the analysis of past versions to remind myself of how to evaluate the accuracy of the various tables that must be constructed to achieve a particular end result.
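
      (Again, just for illustration, a bare-bones rotation-mode CORDIC in Python; the iteration count and the arctangent table are exactly the knobs whose accuracy has to be evaluated:)

      Code:
# Bare-bones rotation-mode CORDIC: returns (cos a, sin a) for |a| <= pi/2
# using only a precomputed arctangent table and power-of-two scaling.
# Iteration count and table precision set the achievable accuracy.
import math

ITER = 16
ATAN = [math.atan(2.0 ** -i) for i in range(ITER)]
GAIN = 1.0
for _i in range(ITER):
    GAIN *= math.sqrt(1.0 + 2.0 ** (-2 * _i))
K = 1.0 / GAIN                    # pre-scale cancels the rotation gain

def cordic_cos_sin(angle):
    x, y, z = K, 0.0, angle
    for i in range(ITER):
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ATAN[i]
    return x, y                   # approximately (cos(angle), sin(angle))

# e.g. cordic_cos_sin(math.pi / 6) -> approx (0.8660, 0.5000)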

      Why would I want to have to start from scratch when I've already learned (and forgotten!) all of this stuff? Who better to remind me than my own notes on the subjects? I already KNOW that it works/bugfree so why risk trying to reinvent it?

      But, this relies on my own personal memory of what I did, when I did it, and for which particular project. Because file names (which is what I catalog) aren't usually very explicit. I probably have 300 files called "Layout" -- and the only way I can know what that layout entails is if I remember what the design for that particular project (which I can get by looking "up" to the enclosing folder(s) until I find a name that describes the project/product) involved.

      If, instead, I could tag files -- or projects -- with more descriptive commentaries and then search for keywords in those commentaries...
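
      Even something as simple as a notes table hung off that same catalog would do it. A hypothetical sketch (the tag_file/search_notes names and the LIKE query are made up, not an existing tool):

      Code:
# Hypothetical extension of the catalog sketched above: attach free-text
# notes to a file (or a whole project folder) and search them by keyword
# rather than by file name. Names and schema are made up for illustration.
import sqlite3

def tag_file(drive, relpath, note, db="catalog.db"):
    con = sqlite3.connect(db)
    con.execute("CREATE TABLE IF NOT EXISTS notes "
                "(drive TEXT, relpath TEXT, note TEXT)")
    con.execute("INSERT INTO notes VALUES (?,?,?)", (drive, relpath, note))
    con.commit()
    con.close()

def search_notes(keyword, db="catalog.db"):
    con = sqlite3.connect(db)
    rows = con.execute("SELECT drive, relpath, note FROM notes "
                       "WHERE note LIKE ?", ("%" + keyword + "%",)).fetchall()
    con.close()
    return rows

# e.g. tag_file("ARCHIVE-07", "projQ/Layout", "motor controller rev B, 4-layer")
#      search_notes("motor controller")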

      Do you keep a log of "problems and repairs" for devices that you've fixed? So, if you encounter another, you have something to remind you what the problem was with the LAST one you encountered? Or, do you start blind, each time?

      Where do you keep that log -- in a notebook? (how do you search it?) Does it include sketches, photos, 'scope traces, etc.?



        #23
        Re: WD drive "colors"

        this is starting to remind me of HP,
        i wonder how long it will be before some beta-male in marketing gets the idea of a "rainbow drive",
        causing the customers to leave and never return??



          #24
          Re: WD drive "colors"

          Originally posted by stj View Post
          this is starting to remind me of HP,
          i wonder how long it will be before some beta-male in marketing gets the idea of a "rainbow drive",
          causing the customers to leave and never return??
          We already have such devices: RAM (as on-drive cache), SSD and HDD in the same package. All that needs to happen is for the firmware to be "opened" so customers can tune the performance to their specific applications.



            #25
            Re: WD drive "colors"

            Originally posted by Curious.George View Post
            "silent" errors
            Looks a lot like I have a PNY CS900 500 GB SSD with silent corruption. SFC kept coming back with a corruption report on the Windows 10 1909 install that's on it. Even when Windows Update didn't error out, SFC came back with the corruption message.

            Then I checked with Crystal Disk Info and saw that its SMART data reported a nonzero raw value for bad blocks, even though I never saw Windows give the dreaded "The device, X, has a bad block." error message in the event log.

            That SSD cannot be trusted!

            Now, since September 9, 2020, a 250 GB Samsung 970 Evo Plus NVMe SSD is in its place!

            That PNY SSD was never used in my B450 rig. That has a Crucial MX500 500 GB SSD, which has been trusty, so far!
            Last edited by RJARRRPCGP; 09-22-2020, 10:07 AM.
            ASRock B550 PG Velocita
            Ryzen 9 "Vermeer" 5900X
            16 GB AData XPG Spectrix D41
            Sapphire Nitro+ Radeon RX 6750 XT
            eVGA Supernova G3 750W
            Western Digital Black SN850 1TB NVMe SSD
            Alienware AW3423DWF OLED




            "¡Me encanta "Me Encanta o Enlistarlo con Hilary Farr!" -Mí mismo

            "There's nothing more unattractive than a chick smoking a cigarette" -Topcat

            "Today's lesson in pissivity comes in the form of a ziplock baggie full of GPU extension brackets & hardware that for the last ~3 years have been on my bench, always in my way, getting moved around constantly....and yesterday I found myself in need of them....and the bastards are now nowhere to be found! Motherfracker!!" -Topcat

            "did I see a chair fly? I think I did! Time for popcorn!" -ratdude747



              #26
              Re: WD drive "colors"

              Originally posted by RJARRRPCGP View Post
              Looks a lot like I have a PNY CS900 500 GB SSD with silent corruption. SFC kept coming back with a corruption report on the Windows 10 1909 install that's on it. Even when Windows Update didn't error out, SFC came back with the corruption message.
              I think (?) SFC only checks system files -- not "user files" (?). Could it be that those particular corrupt files weren't actually USED in most Windows boots?

              And, it obviously won't check a volume that is "offline" :> E.g., 99% of my archive sits "cold".

              Then I checked with Crystal Disk Info and saw that its SMART data reported a nonzero raw value for bad blocks, even though I never saw Windows give the dreaded "The device, X, has a bad block." error message in the event log.
              I think even SMART only reports on parts of the drive that are actively "examined". I.e., if you have a whole sh*tload of stuff that you only reference once a year, I don't think SMART will report errors in the sectors occupied by those files UNTIL you actually start trying to access them.

              In each case, if errors develop that can't be repaired (more errors than the ECC's Hamming distance allows it to correct), then you risk losing data -- without even KNOWING that this is happening! (you go to access the data and get a "read error"; the sector isn't necessarily "bad" but the data that it is hosting has been corrupted to a point where it can't be correctly recovered.)

              That SSD cannot be trusted!
              My fear is that errors in SSDs can quickly cascade. If I lose a sector on a magnetic disk, it's usually just an omen that others MIGHT follow. And, usually not in great numbers and immediately. So, I may lose part of a file. I'm not sure there's enough data on SSD performance to be able to make a similar assumption.

              When I mount a drive from my archive, its catalog is verified against the last noticed catalog (VTOC). This guards against my making changes to its contents "elsewhere" (i.e., mount the drive on some other system) when the catalog wasn't being actively watched.

              Then, I see which file has been verified LEAST recently and read its contents, compute its hash and compare it against the value stored for that file in the catalog. If they disagree, I alert the user that there has been some change to the data that wasn't supervised (it may not be corruption; it may be a change that I deliberately made but failed to record in the catalog).

              Because the database knows where every file is located and what its hash is, I can then locate a duplicate copy of the file *if* I need to restore its previous contents.

              In this way, every file on every drive EVENTUALLY gets explicitly examined to catch errors before they result in data loss (I started this practice when I used to use magtape for my archive medium; disks just make it quicker and easier!)
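
              Roughly, that verification step amounts to something like this (a Python/SQLite sketch building on the illustrative catalog schema in post #22; the last_verified column and the printed alert are made up, not my actual tooling):

              Code:
# Rough sketch of the verification pass described above: pick the file on a
# mounted drive that was verified least recently, re-hash it, and compare
# against the catalog entry (same illustrative schema as in post #22).
import hashlib, os, sqlite3, time

def _sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def verify_next(mount_point, drive_label, db="catalog.db"):
    con = sqlite3.connect(db)
    row = con.execute("""SELECT relpath, sha256 FROM files WHERE drive = ?
                         ORDER BY last_verified ASC LIMIT 1""",
                      (drive_label,)).fetchone()
    if row is None:
        con.close()
        return None
    relpath, recorded = row
    actual = _sha256(os.path.join(mount_point, relpath))
    if actual != recorded:
        # unsupervised change: maybe corruption, maybe a legitimate edit
        # that was never written back to the catalog -- flag it either way
        print("MISMATCH:", drive_label, relpath)
    con.execute("UPDATE files SET last_verified = ? "
                "WHERE drive = ? AND relpath = ?",
                (time.time(), drive_label, relpath))
    con.commit()
    con.close()
    return actual == recorded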

              Now, since September 9, 2020, a 250 GB Samsung 970 Evo Plus NVMe SSD is in its place!

              That PNY SSD was never used in my B450 rig. That has a Crucial MX500 500 GB SSD, which has been trusty, so far!
              Good luck with BOTH! I keep watching for signs as to when the technology is "mature" enough that you don't need to be vigilant of (SSD) firmware updates, etc.

