1 point by sufiyankureshi 5 hours ago | 1 comment
Hi HN,

I’ve been working on a small system that takes already-trained ML models and applies a sequence of stability and robustness interventions (no retraining) to reduce representation drift under stress.

I ran this on a couple of pretrained vision models as a sanity check:

– Models: ResNet-18, MobileNetV2
– Pretrained on ImageNet
– Evaluated on CIFAR-10
– Runtime measured in minutes

Observed results:
– ~43–55% reduction in representation drift
– ~5–6× improvement in finetuning efficiency
– Effects held across architectures, including lightweight models

This isn’t accuracy tuning; accuracy was mostly stable. The gains came from preserving internal representation boundaries under controlled stress.
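The post doesn't specify how drift is measured, so as a point of reference only: one common way to quantify representation drift is the mean cosine distance between a layer's feature vectors on clean vs. stressed inputs. This is purely an illustrative sketch of that style of metric, not the author's actual implementation; the function name and metric choice are my assumptions.

```python
import numpy as np

def representation_drift(clean_feats, stressed_feats):
    """Mean cosine distance between per-sample feature vectors.

    Illustrative metric only (assumed, not the author's method):
    both arrays are (n_samples, n_features) activations from the
    same layer, extracted on clean vs. perturbed/stressed inputs.
    Returns 0.0 when representations are unchanged, up to 2.0 when
    they are maximally flipped.
    """
    a = clean_feats / np.linalg.norm(clean_feats, axis=1, keepdims=True)
    b = stressed_feats / np.linalg.norm(stressed_feats, axis=1, keepdims=True)
    return float(np.mean(1.0 - np.sum(a * b, axis=1)))

# Identical features -> zero drift; orthogonal features -> drift of 1.0.
feats = np.array([[1.0, 0.0], [0.0, 1.0]])
zero_drift = representation_drift(feats, feats)
```

Under a metric like this, a "43–55% reduction" would mean the post-intervention drift score is roughly half the baseline score at the same stress level.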

I’m sharing the results + approach because I haven’t seen many tools focus on post-training stability interventions rather than retraining or data collection.

Happy to answer questions or share more details if useful.

sufiyankureshi 4 hours ago
Author here. Happy to go deeper on methodology, metrics, or limitations if helpful. This is early-stage and I’m mostly sanity-checking whether this problem resonates.