- Fixed pricing on recovery (you know what you are paying; no nasty surprises).
- Quick recovery turnaround at no extra cost (our average recovery time is 2 days).
- Memory card chip-reading services (the 1st in the UK to offer this service).
- RAID recovery service (a specialist service for our business customers who have suffered a failed server rebuild).
- Our offices are 100% UK-based and we never outsource any recovery work.
- Strict non-disclosure: privacy and security are 100% guaranteed.
Case Studies
Throughout our 25 years of service, we have assisted a great many clients, and we always explain to each client why their data was lost. The case studies below reflect the wide range of data-recovery problems we have solved.
Case Study 1 — MacBook Not Booting (Fusion-style Storage)
Summary
- Asset: MacBook with a two-device “Fusion” storage layout (fast solid-state device + 2.5″ HDD tier).
- Symptoms: On power-up, the Mac shows a “sad face”/prohibitory indicator and never reaches the login window.
- Business impact: ~100 active client projects (graphic design) needed urgently.
Note: Apple “Fusion” exists officially on desktop Macs (HDD + blade SSD) using CoreStorage (OS X 10.8–macOS 10.12) or APFS Fusion (macOS 10.14+). Laptops can have a similar two-device layout if retrofitted, or via external/secondary storage. Our workflow supports both CoreStorage and APFS Fusion.
Tooling & Lab Controls
- Write-blocking & logging: Tableau/TX1-class blockers; chain-of-custody; per-device hashes (XXH3/MD5/SHA-256).
- Hardware imagers: PC-3000, Atola, DeepSpar for SATA; native NVMe acquisition via PCIe HBA.
- Analysis: APFS/CS parsers, GPT inspectors, hex tools; macOS and Linux workstations for analysis/mounts.
- No writes to originals at any stage.
Step-by-Step Recovery Workflow
1) Intake & Evidence Preservation
- Document serials/WWNs; photograph drive labels and connector damage.
- Remove both storage members (HDD and solid-state device).
- Attach through write-blockers; acquire power-on currents and rail resistances to screen for shorts.
2) Topology Discovery (What do we actually have?)
- Read GPT from both devices; record partition GUIDs, sizes, and roles (a sketch of this check follows the list).
- Identify stack type:
  - APFS Fusion: look for NXSB (APFS superblocks) on two Physical Stores with a shared Fusion UUID and roles (Primary/Secondary).
  - CoreStorage Fusion: look for CS PV headers on both devices, one LVG spanning both, with LVF/LV metadata.
- Check for FileVault (APFS crypto flags or CS encrypted LVF); if present, ensure the user password or recovery key is available.
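As an illustration, the GPT check can be scripted against the per-device images. This is a minimal sketch, assuming raw sector-accurate images with hypothetical names; the partition-type GUIDs are Apple's published values for APFS and Core Storage containers.

```python
import struct
import uuid

# Apple's published GPT partition-type GUIDs.
APPLE_APFS = uuid.UUID("7C3457EF-0000-11AA-AA11-00306543ECAC")
APPLE_CORESTORAGE = uuid.UUID("53746F72-6167-6500-AA11-AA1100306543")

def gpt_partition_types(image_path, sector=512):
    """Yield (type_guid, first_lba, last_lba) for each used GPT entry."""
    with open(image_path, "rb") as f:
        f.seek(1 * sector)                      # primary GPT header lives at LBA 1
        hdr = f.read(92)
        if hdr[:8] != b"EFI PART":
            raise ValueError("no GPT header found")
        entries_lba = struct.unpack_from("<Q", hdr, 72)[0]
        n_entries = struct.unpack_from("<I", hdr, 80)[0]
        entry_size = struct.unpack_from("<I", hdr, 84)[0]
        f.seek(entries_lba * sector)
        for _ in range(n_entries):
            e = f.read(entry_size)
            type_guid = uuid.UUID(bytes_le=e[:16])  # GUIDs are mixed-endian on disk
            if type_guid.int == 0:
                continue                            # unused slot
            first, last = struct.unpack_from("<QQ", e, 32)
            yield type_guid, first, last

for path in ("ssd.img", "hdd.img"):                 # hypothetical image names
    for t, first, last in gpt_partition_types(path):
        kind = {APPLE_APFS: "APFS", APPLE_CORESTORAGE: "CoreStorage"}.get(t, "other")
        print(f"{path}: LBA {first}-{last}: {kind}")
```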
3) Media Health Assessment
- HDD: SMART review; head-map test (which heads/read channels are marginal); short servo/seek tests; log pending/reallocated sectors.
- SSD/NVMe: media error counters, SMART/E8/E9 wear indices; controller stability under long reads.
- Decide the imaging strategy (conservative on weak heads; throttled queues on NVMe).
4) Per-Device Imaging (Image First, Fix Later)
- HDD imaging (a simplified sketch of the pass logic follows this list):
  - Start with a soft-read pass (no deep retries; long timeouts disabled).
  - Build a head map; if a head is weak, image the other heads first.
  - Run subsequent hard-read passes to back-fill sparse areas, with limited retries and power-cycle windows.
- SSD/NVMe imaging:
  - Lock to a stable PCIe link speed; capture in fixed-size chunks to mitigate controller stalls.
  - If the drive is DRAM-less with SLC-cache behaviour, use short duty cycles to avoid timeouts.
- Verify images with segment checksums; record bad-block maps (expected near 0 for SSD; variable for HDD).
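This is a simplified illustration of the soft-then-hard pass bookkeeping only: real hardware imagers manage head maps, timeouts, and power cycling in firmware, and the 1 MiB read unit here is an assumed value, not a fixed rule.

```python
import os

BLOCK = 1 << 20   # 1 MiB read unit (assumed; real imagers adapt per head/zone)

def soft_pass(src_fd, dst_fd, size, bad):
    """First pass: fast reads with no retries; failures go straight to `bad`."""
    for off in range(0, size, BLOCK):
        try:
            data = os.pread(src_fd, min(BLOCK, size - off), off)
            os.pwrite(dst_fd, data, off)
        except OSError:
            bad.add(off)             # leave a hole; protect the weak head

def hard_pass(src_fd, dst_fd, size, bad, retries=3):
    """Back-fill pass: limited retries, only over the sparse areas."""
    for off in sorted(bad):
        for _ in range(retries):
            try:
                data = os.pread(src_fd, min(BLOCK, size - off), off)
                os.pwrite(dst_fd, data, off)
                bad.discard(off)
                break
            except OSError:
                continue             # a real workflow power-cycles the drive here
```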
5) Fusion Reconstruction (Virtual, Read-Only)
- APFS Fusion path:
  - Parse both NXSBs; confirm Fusion pairing and Physical Store roles.
  - Construct a virtual Physical Store by honouring the APFS block maps (the tiering metadata records where hot/cold extents live across the SSD/HDD); a minimal extent-map sketch follows this list.
  - Assemble the APFS container → volumes atop the virtual store.
- CoreStorage Fusion path:
  - Parse PV headers on both members; rebuild the LVG and LVF graph.
  - If encrypted, open the Keybag with the user credentials/recovery key to derive the volume key.
  - Assemble the LV into a virtual block device.
- If FileVault is enabled: decrypt after imaging using the provided key/password. No brute force is attempted.
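Both paths end in the same abstraction: a read-only virtual device that maps logical extents onto the two member images. A minimal sketch, assuming the extent list has already been recovered from the tiering metadata (APFS block maps or the CoreStorage LV extent graph); everything else here is illustrative scaffolding, not a parser.

```python
class VirtualStore:
    """Read-only logical device assembled from per-member image files.

    `extents` is a list of (logical_off, length, image_path, phys_off)
    tuples recovered from the tiering metadata. Nothing is ever written.
    """
    def __init__(self, extents):
        self.extents = sorted(extents)

    def read(self, logical_off, length):
        out = bytearray()
        while length > 0:
            for lo, ln, path, po in self.extents:
                if lo <= logical_off < lo + ln:
                    take = min(length, lo + ln - logical_off)
                    with open(path, "rb") as f:       # member image, opened read-only
                        f.seek(po + (logical_off - lo))
                        out += f.read(take)
                    logical_off += take
                    length -= take
                    break
            else:
                raise IOError(f"unmapped logical offset {logical_off:#x}")
        return bytes(out)
```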
6) Filesystem Repair & Extraction
- APFS: parse checkpoints; validate BTrees, snapshots, and extent lists; mount read-only; export /Users and the project working directories (Adobe CC scratch, fonts, plugins).
- HFS+ (if CoreStorage rather than APFS): replay the journal; rebuild the catalog and extent trees; fix orphaned inodes; mount read-only; export data.
- For large media projects, verify that PSD/AI/INDD files open; rebuild previews where needed.
7) Validation & Delivery
- Hash manifests for all exported paths; spot-open representative client files (a manifest sketch follows this list).
- Provide data via secure download (encrypted archive) or encrypted USB disk (preferred for >200 GB).
- Document residual unread LBAs (if any) and demonstrate that they do not affect user data.
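A hash manifest is straightforward to generate once the export tree is final. A short sketch using SHA-256; the paths are hypothetical.

```python
import hashlib
from pathlib import Path

def write_manifest(export_root, manifest_path):
    """SHA-256 every exported file so the client can verify delivery."""
    with open(manifest_path, "w") as out:
        for p in sorted(Path(export_root).rglob("*")):
            if p.is_file():
                h = hashlib.sha256()
                with open(p, "rb") as f:
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        h.update(chunk)
                out.write(f"{h.hexdigest()}  {p.relative_to(export_root)}\n")

write_manifest("/exports/job-1234", "manifest.sha256")   # hypothetical paths
```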
Outcome
- Fusion set reconstructed; volumes mounted read-only; 100% of client projects delivered.
- Root cause: the HDD was showing early media degradation; the SSD was healthy. Proactive backup guidance provided.
Case Study 2 — Dell RAID 5 Server (8× SAS) — Failed Rebuild / Share Offline
Summary
- Asset: Dell rack server with a PERC controller and 8× SAS drives in RAID 5.
- Symptoms: SMB shares disappeared; the admin UI shows Users/Local Users only; a rebuild was reportedly attempted and failed early.
- Impact: office file server offline.
Tooling & Lab Controls
- SAS imaging: PC-3000 SAS, Atola, or HBA pass-through with write-blocking.
- Metadata tools: DDF/LSI/PERC config decoders; mdadm readers (if NAS-style under the hood).
- Virtual RAID builder with parity math and stripe heuristics.
- Filesystem repair: NTFS/ReFS/XFS/EXT toolchain; VMFS if needed.
Step-by-Step Recovery Workflow
1) Intake & Isolation
- Label drives by slot; record serials/WWNs; capture any PERC logs/NVRAM if available.
- Do not power the controller again; transport the drives only.
2) Quick Health & Imaging Plan
- Run non-destructive SMART/SAS log reads; classify drives as healthy, marginal, or failing.
- Define the imaging policy:
  - Marginal drives: soft pass first, then a targeted hard pass for sparse fill.
  - Healthy drives: a single consistent pass with verification.
3) Per-Disk Hardware Imaging
- Image all 8 members independently to sterile targets; generate bad-block maps.
- Where a member stalls: reduce queue depth, increase inter-read delay, and enable power-cycle windows to catch marginal bands.
- Hash all images; freeze the originals.
4) RAID Geometry Discovery
- Extract controller metadata (on-disk superblocks) to get the stripe/block size, start offset, and parity rotation.
- Validate with heuristics: try left/right and synchronous/asynchronous parity; pick the geometry that yields consistent file-system headers, e.g. NTFS $MFT signature alignment (a scoring sketch follows this list).
- Adjust for anomalies:
  - HPA/DCO or capacity drift → normalise image sizes.
  - 512e vs 4Kn sector size → normalise in virtual space.
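The scoring heuristic can be as simple as counting filesystem signatures: NTFS MFT records are 1 KiB and begin with the ASCII magic `FILE`, so the correct geometry produces far more hits than any wrong one. A sketch, assuming a helper `read_virtual_stripe` that materialises one stripe of the candidate virtual volume (the helper and its parameters are placeholders, not a real tool's API):

```python
def score_geometry(images, stripe, order, parity_rot, sample_stripes=4096):
    """Count NTFS MFT record signatures produced by one candidate geometry.

    `images` are the open per-member image files; `order`/`parity_rot`
    describe one left/right, sync/async layout candidate. The candidate
    with the highest score is almost certainly the real geometry.
    """
    hits = 0
    for s in range(sample_stripes):
        # assumed helper: returns the data portion of virtual stripe s
        data = read_virtual_stripe(images, stripe, order, parity_rot, s)
        for off in range(0, len(data), 1024):   # MFT records are 1 KiB-aligned
            if data[off:off + 4] == b"FILE":
                hits += 1
    return hits
```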
5) Virtual Reconstruction (No Controller Writes)
- Assemble a virtual RAID 5 from the images.
- For unreadable sectors on a single member, compute the missing data from the parity of the other seven (a single-stripe example follows this list).
- Where the failed rebuild wrote stale parity (the write hole), prefer majority data across stripes, then reconcile at the filesystem layer.
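The single-member case is plain XOR. A toy example for an 8-member RAID 5 stripe: any 7 readable blocks recover the 8th, whether the missing one is data or parity.

```python
def raid5_reconstruct(known_blocks):
    """Recover the one missing block in a RAID 5 stripe.

    The XOR of all member blocks in a stripe (data and parity alike)
    is zero, so XOR-ing the known blocks yields the missing one.
    """
    missing = bytearray(len(known_blocks[0]))
    for block in known_blocks:
        for i, b in enumerate(block):
            missing[i] ^= b
    return bytes(missing)

# Toy 4-byte blocks for a 7-data + 1-parity stripe.
data = [bytes([d] * 4) for d in range(7)]
parity = raid5_reconstruct(data)            # parity is the XOR of the data blocks
assert raid5_reconstruct(data[1:] + [parity]) == data[0]
```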
6) Filesystem Repair & Export
- Mount the reconstructed volume read-only.
- If NTFS: replay $LogFile; resolve $MFT/$MFTMirr discrepancies; repair indexes/bitmaps; validate ACLs for the shares.
- If ReFS: salvage intact block-cloned objects; export logically consistent trees.
- Export full directory trees to new storage; preserve timestamps/ACLs where possible.
7) Validation & Delivery
- Provide hash manifests; spot-open key files; run path-length and illegal-character reports for the Windows shares (a report sketch follows this list).
- Deliver on an encrypted HDD; include a concise technical report (geometry used, member health, any repaired stripes).
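The share report is a simple scan. A sketch, assuming the classic 260-character MAX_PATH limit and the Windows reserved-character set; adjust for long-path-aware targets.

```python
from pathlib import Path

ILLEGAL = set('<>:"/\\|?*') | {chr(c) for c in range(32)}
MAX_PATH = 260   # classic Windows limit (assumed; long-path-aware shares differ)

def share_report(root):
    """Flag exported paths that a Windows share would reject or truncate."""
    root = Path(root)
    for p in root.rglob("*"):
        rel = str(p.relative_to(root))
        if len(rel) >= MAX_PATH:
            print(f"TOO LONG ({len(rel)}): {rel}")
        bad = ILLEGAL & set(p.name)
        if bad:
            print(f"ILLEGAL {sorted(bad)}: {rel}")
```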
Outcome
- Full logical recovery of the share set.
- Root cause: latent media errors on one disk plus an early rebuild attempt → parity/data divergence. Our image-then-virtual-rebuild approach prevented further loss.
Case Study 3 — LaCie 6big (6-Disk) — Two Physical Failures, Video Archive
Summary
- Asset: LaCie 6big (Thunderbolt/USB-C hardware RAID), 6× HDD.
- Workload: drone company; BBC/Netflix program footage (large ProRes/RAW/MXF/MP4 assets).
- Symptoms: two disks physically failed. Vendor support and a local shop had attempted recovery without success.
The LaCie 6big typically supports RAID 0/5/6/10. With two failed disks a RAID 5 set collapses, while RAID 6 can survive two failures. We assumed the worst case (RAID 5) and planned to extract partial reads from both failed members, reducing the number of unknown blocks per stripe far enough for parity reconstruction (the per-stripe check is sketched below).
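The arithmetic behind "reducing unknown blocks per stripe" is simple to state: a stripe is computable when the number of unknown blocks does not exceed the parity budget (1 for RAID 5, 2 for RAID 6). A sketch of that triage, assuming each member's bad-block map has already been translated into stripe numbers:

```python
def recoverable_stripes(bad_maps, n_stripes, parity_count):
    """Check, stripe by stripe, whether parity can fill the holes.

    `bad_maps[i]` is the set of stripe numbers unreadable on member i.
    A stripe is recoverable when its unknown-block count is <= the parity
    budget (1 for RAID 5, 2 for RAID 6) -- which is why partial images of
    the failed members matter: every sector they yield removes an unknown
    from some stripe.
    """
    ok, lost = [], []
    for s in range(n_stripes):
        unknown = sum(1 for bm in bad_maps if s in bm)
        (ok if unknown <= parity_count else lost).append(s)
    return ok, lost
```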
Tooling & Lab Controls
- Imaging: PC-3000/Atola for SATA; per-disk head-map imaging with power-cycle schedules.
- Mechanical service capability (donor head-stack swaps with ROM/adaptive migration) where justified by ROI and head-crash evidence.
- RAID math: RAID 5/6 P+Q (Reed–Solomon over GF(2^8)) reconstruction tooling.
- Media repair: MP4/MOV atom rebuild, MXF index stitching, ProRes stream validation.
- Filesystem: APFS/HFS+/exFAT, as is typical on creative storage.
Step-by-Step Recovery Workflow
1) Intake & Non-Destructive Baseline
- Label all 6 drives; record serials/WWNs; photograph the chassis and backplane for evidence.
- Do not re-attach to the LaCie controller; work drive-to-imager directly.
2) Health Assessment
- SMART and acoustic scan: two disks show head/preamp faults (spin-up but no read channel / clicking).
- The remaining four show normal spin profiles; one has a small grown-defect list.
3) Stabilisation & Imaging
- Good members (4×): full-pass imaging with verification.
- Marginal/good with defects: soft→hard multi-pass; mark sparse areas.
- Two failed members:
  - Attempt non-invasive channel recovery; if no read, proceed to head-stack replacement using matched donors (ROM/adaptive parameters preserved).
  - Post-service, image with strict duty cycles, short read windows, and long cool-downs to avoid secondary damage.
  - Outcome: both previously dead members yielded substantial partial images (not necessarily 100%).
4) RAID Level & Geometry Identification
- Inspect LaCie metadata and analyse stripe content: detect parity presence and count (RAID 5 vs 6).
- Determine the block size, start offset, parity rotation, and, if RAID 6, the P/Q layout.
- Normalise image capacities; remove any HPA/DCO artifacts.
5) Virtual Array Reconstruction
- If RAID 6: use P/Q parity to reconstruct up to two missing blocks per stripe (a sketch of the GF(2^8) math follows this list).
- If RAID 5 (with two members originally failed): leverage the partially imaged members so that, per stripe, the number of unknown blocks is ≤ 1; then compute the last block via parity.
- Build the virtual array only from the disk images (never on the originals).
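For the RAID 6 case, here is a compact sketch of P/Q recovery of two missing data blocks, using the standard construction (generator g = 2 over GF(2^8) with polynomial 0x11D; P is the XOR of the data blocks, Q the XOR of g^i · D_i). Vendor layouts vary, so the indices x and y come from the geometry identified in step 4.

```python
# GF(2^8) log/exp tables for the RAID 6 generator polynomial 0x11D.
EXP, LOG = [0] * 512, [0] * 256
v = 1
for i in range(255):
    EXP[i], LOG[v] = v, i
    v <<= 1
    if v & 0x100:
        v ^= 0x11D
for i in range(255, 512):
    EXP[i] = EXP[i - 255]          # extended table avoids a modulo in gf_mul

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def raid6_recover_two(known, p, q, x, y):
    """Recover data blocks x and y from P, Q and the readable data blocks.

    `known` maps data-index -> block bytes; solves, per byte:
        dx ^ dy = p'          (P with known data XOR-ed out)
        g^x*dx ^ g^y*dy = q'  (likewise for Q)
    """
    dx, dy = bytearray(len(p)), bytearray(len(p))
    for j in range(len(p)):
        pj, qj = p[j], q[j]
        for i, blk in known.items():
            pj ^= blk[j]
            qj ^= gf_mul(EXP[i], blk[j])
        denom = EXP[x] ^ EXP[y]
        dxj = gf_mul(qj ^ gf_mul(EXP[y], pj), EXP[255 - LOG[denom]])
        dx[j], dy[j] = dxj, dxj ^ pj
    return bytes(dx), bytes(dy)

# Toy check: 4 data disks, lose disks 0 and 2, recover them from P and Q.
data = [bytes([i + 1] * 8) for i in range(4)]
P = bytes(a ^ b ^ c ^ d for a, b, c, d in zip(*data))
Q = bytes(gf_mul(EXP[0], data[0][j]) ^ gf_mul(EXP[1], data[1][j])
          ^ gf_mul(EXP[2], data[2][j]) ^ gf_mul(EXP[3], data[3][j])
          for j in range(8))
d0, d2 = raid6_recover_two({1: data[1], 3: data[3]}, P, Q, 0, 2)
assert (d0, d2) == (data[0], data[2])
```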
6) Filesystem Mount & Content-Level Repair
- Identify the filesystem: APFS/HFS+ (most common) or exFAT.
- Mount read-only; repair metadata (APFS BTrees/checkpoints or the HFS+ catalog/extent trees).
- Export the footage sets; then repair the media containers (an atom-scanner sketch follows this list):
  - MP4/MOV: rebuild moov atoms and time indexes from mdat; validate H.264/H.265 GOP structure.
  - MXF: regenerate KLV indexes; re-stitch spanned clips.
  - ProRes/RAW: continuity checks across stripe boundaries; fix truncated headers where possible.
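Container repair starts with locating the boxes. A minimal sketch that walks the top-level MP4/QuickTime atoms (the clip name is hypothetical); the actual moov rebuild, which re-derives sample tables from the mdat payload, goes well beyond a sketch.

```python
import struct

def walk_atoms(path):
    """List top-level MP4/QuickTime boxes as (offset, size, fourcc).

    A clip whose `moov` box is missing or truncated still carries its
    media payload in `mdat`; finding the boxes is step one of a rebuild.
    """
    atoms = []
    with open(path, "rb") as f:
        off = 0
        while True:
            hdr = f.read(8)
            if len(hdr) < 8:
                break
            size, fourcc = struct.unpack(">I4s", hdr)
            if size == 1:                      # 64-bit largesize follows the header
                size = struct.unpack(">Q", f.read(8))[0]
            elif size == 0:                    # box runs to end of file
                f.seek(0, 2)
                size = f.tell() - off
            atoms.append((off, size, fourcc.decode("latin-1")))
            off += size
            f.seek(off)
    return atoms

for off, size, name in walk_atoms("clip0047.mp4"):   # hypothetical clip name
    print(f"{off:#014x} {size:12d} {name}")
```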
7) Validation & Delivery
- Spot-play long-form clips; verify duration/PTS continuity.
- Provide hash manifests and a clip inventory (names, sizes, durations).
- Deliver on encrypted multi-TB external disks; include a technical report (RAID geometry, member health, recovered fraction per member, and any irrecoverable gaps with clip-level notes).
Outcome
- Complete or near-complete restoration of the program footage; any partial clips clearly annotated.
- Root cause: two member failures; the prior attempts neither imaged per-disk nor accounted for parity/stripe math. Our per-disk imaging, virtual P/Q parity reconstruction, and media-container repairs restored the archive.
General Notes & Best Practices (applies to all three)
- Never rebuild on the originals. Always stabilise and image first.
- Preserve adaptives/ROM when changing PCBs or head-stacks; these are unique to each HDA.
- Normalise geometry (offsets, sector size, HPA/DCO) before virtual assembly.
- Decrypt after imaging (with valid keys) for FileVault/BitLocker/LUKS; do not attempt brute force.
- Mount read-only; repair metadata in copies; produce hash manifests and concise engineering reports.