New 'mistake-gating' learning algorithm cuts neural network updates by 50-80% while mimicking human error-correction biology
arXiv cs.AI · April 17, 2026
AI Summary
•Researchers propose 'memorized mistake-gated learning,' a biologically plausible plasticity rule inspired by human negativity bias and error-related negativity
•The method updates synaptic weights only when a classification error occurs, rather than on every sample, significantly reducing computational overhead
•Particularly effective for incremental learning (acquiring new knowledge while retaining existing knowledge) and online learning scenarios with memory constraints
•Reduces storage buffer requirements for data replay, making the approach practical for continual learning tasks with limited energy and memory resources
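The core gating idea, updating weights only on misclassified samples, can be sketched with a toy linear classifier. This is a minimal illustration of mistake-gated updates in general (here a perceptron-style rule on synthetic data), not the authors' implementation; the data, learning rate, and update rule are all assumptions for the example.

```python
import numpy as np

# Sketch of a mistake-gated training loop (illustrative, not the paper's code):
# weights are updated only on samples the model currently misclassifies,
# so correctly classified samples incur no weight write at all.

rng = np.random.default_rng(0)

# Toy linearly separable binary data with labels in {0, 1} (assumed setup).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

w = np.zeros(2)
b = 0.0
lr = 0.1
epochs = 10
updates = 0

for _ in range(epochs):
    for x_i, y_i in zip(X, y):
        pred = int(w @ x_i + b > 0)      # hard classification decision
        if pred != y_i:                  # the gate: update only on mistakes
            sign = 1.0 if y_i == 1 else -1.0
            w += lr * sign * x_i         # perceptron-style correction
            b += lr * sign
            updates += 1

accuracy = float(np.mean((X @ w + b > 0).astype(int) == y))
total = epochs * len(X)
print(f"updates: {updates} / {total} sample presentations, accuracy: {accuracy:.2f}")
```

Because the gate skips every correctly classified sample, the number of weight updates is a small fraction of the total sample presentations, which is the source of the compute savings the paper reports.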