I've married an expert system to my disk wiping software. It looks at "data" from the current drive, coupled with historical data for previously encountered drives of that make/model (because we encounter drives in large, identical batches).
Initially, the AI gives me an assessment of whether the wipe was exhaustive or risks leaving data behind (in which case physical destruction is required).
[I don't want to have a separate "disk test" activity -- esp as the disk will be hammered on more or less continuously during the wipe... problems SHOULD be pretty obvious!]
But, it is also intended to assess the drive's "future reliability prospects" -- i.e., whether it should be returned to service after wiping, or discarded.
I use lots of "macro" as well as "micro" data in forming this decision (e.g., macro: average wipe rate; micro: number of retries for each write operation).
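To make the macro/micro split concrete, here's a minimal sketch of the kind of bookkeeping involved. All names are hypothetical; in practice the per-write retry counts would have to come from a driver or pass-through interface, since (as noted below) the OS normally hides them:

```python
import time

class WipeStats:
    """Hypothetical accumulator for one wipe pass over a drive."""

    def __init__(self):
        self.bytes_written = 0
        self.start = time.monotonic()
        self.retries_per_write = []   # micro: one entry per write command

    def record_write(self, nbytes, retries):
        self.bytes_written += nbytes
        self.retries_per_write.append(retries)

    def avg_rate_mb_s(self):
        # macro: average wipe rate over the whole pass
        elapsed = time.monotonic() - self.start
        return (self.bytes_written / 1e6) / elapsed if elapsed > 0 else 0.0

    def retry_fraction(self):
        # micro: fraction of writes that needed at least one retry
        n = len(self.retries_per_write)
        return sum(1 for r in self.retries_per_write if r > 0) / n if n else 0.0
```

The point of keeping both views is that a drive can show a perfectly healthy average rate while quietly burning retries on a handful of marginal sectors.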
Most operating systems hide all of this "micro" data from the application/user. E.g., an OS will transparently retry a "write" several times before giving up and reporting "write failed" to the application.
I'm looking to understand what I can reasonably expect from a "working" drive at this level of detail. E.g., why would a good drive ever "fail" a write operation... but succeed when it is retried? (perhaps if a sector remap operation was initiated within the drive as a result of the commanded write operation?)
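A fail-then-succeed write is indeed consistent with the drive remapping a weak sector mid-command. One externally observable proxy is watching SMART attribute 5 (Reallocated_Sector_Ct) grow across the wipe. The sketch below parses `smartctl -A`-style output; the sample lines follow the common smartmontools table layout, which can vary by drive and firmware, and the counts are made up for illustration:

```python
def parse_smart_raw(text, attr_id):
    """Return the raw value of the SMART attribute with the given ID,
    assuming smartmontools' usual table layout (raw value in the last column)."""
    for line in text.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit() and int(fields[0]) == attr_id:
            return int(fields[-1])
    return None

before = """\
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       2
"""
after = """\
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       2
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
"""

# pending sectors that were converted into remaps during the wipe pass:
remapped = parse_smart_raw(after, 5) - parse_smart_raw(before, 5)
```

A drop in Current_Pending_Sector (197) matched by a rise in attribute 5 is the classic signature of writes triggering remaps, which would explain retried writes eventually succeeding.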
I know I can't blindly rely on the "time" required for an operation to complete as I have to expect recals, remaps, autospin-ups, etc. to alter these figures...
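Since recals, remaps and spin-ups legitimately inflate individual operation times, a single slow write proves nothing. One hedge is to flag a drive only when slow writes are *frequent*: compare each latency to the batch's median plus a multiple of the MAD (median absolute deviation), rather than to a fixed threshold. The threshold `k` and the numbers here are illustrative, not tuned values:

```python
from statistics import median

def slow_write_fraction(latencies_ms, k=10.0):
    """Fraction of write latencies that are outliers relative to the batch,
    using median + k*MAD so occasional recal/spin-up spikes don't dominate."""
    med = median(latencies_ms)
    mad = median(abs(x - med) for x in latencies_ms) or 1e-9
    return sum(1 for x in latencies_ms if x > med + k * mad) / len(latencies_ms)
```

A low fraction with a couple of huge spikes reads as "normal housekeeping"; a high fraction is the pattern worth feeding to the expert system as a reliability red flag.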