| LinkedIn | https://linkedin.com/in/davidegts |
| Twitter | https://twitter.com/davidegts |
| Podcast | https://dgshow.org |

A/B testing, once heralded as the gold standard for data-driven decision-making, often slows companies down through an overemphasis on statistical significance. Traditional approaches require waiting weeks for results, leading to missed opportunities and delayed growth. This caution, appropriate for fields like drug development, is misaligned with business needs, where the real cost is usually the opportunity lost rather than the risk of a small mistake. A newer framework instead minimizes worst-case expected loss expressed in business terms such as dollars or conversions, rather than abstract probabilities. It enables faster, more strategic decisions by recommending action whenever the estimated impact is positive and justifies the deployment cost, and it reduces the need for excessive data collection, saving both time and money.
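One way to operationalize an expected-loss rule like this is with Bayesian regret estimates: ship a variant as soon as the expected loss of choosing it drops below what a wrong call would cost. A minimal sketch, assuming Beta(1,1) priors and Monte Carlo estimation (the function name, draw count, and priors are illustrative choices, not the framework's actual implementation):

```python
import random

def expected_loss(successes_a, trials_a, successes_b, trials_b,
                  draws=20000, seed=0):
    """Monte Carlo estimate of the expected loss (in conversion-rate
    points) of shipping each variant, under Beta(1,1) priors."""
    rng = random.Random(seed)
    loss_a = loss_b = 0.0
    for _ in range(draws):
        # Sample plausible true conversion rates from each posterior.
        pa = rng.betavariate(1 + successes_a, 1 + trials_a - successes_a)
        pb = rng.betavariate(1 + successes_b, 1 + trials_b - successes_b)
        loss_a += max(pb - pa, 0.0)  # regret if we ship A but B is better
        loss_b += max(pa - pb, 0.0)  # regret if we ship B but A is better
    return loss_a / draws, loss_b / draws

# Example: A converted 120/1000, B converted 90/1000. Multiply each
# loss by traffic and value-per-conversion to get a dollar figure, and
# ship as soon as one variant's regret falls below the deployment cost.
loss_ship_a, loss_ship_b = expected_loss(120, 1000, 90, 1000)
```

The decision threshold is a business input (what a mistake is worth), not a significance level, which is why the rule can stop far earlier than a classical test.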

Many leaders are excited about the promise of AI coding tools that make it easier for novices to write code and, seemingly, make experienced coders less essential. Yet these tools make experience more important, not less, because AI is not a replacement for real engineers. Companies adopting these tools should follow some common rules. Make sure every change the AI makes is double-checked: with automated checks, simple tests that confirm things still work, and at least one human review. Keep access limited: let the AI work only in a safe "practice" environment, never give it the keys to live customer data, and routinely check for basic security mistakes like files or storage left open to the public. Above all, keep experienced engineers in charge of the design, the rules, and the safety checks so AI's speed doesn't turn into costly failures.