Algorithmic Sabotage Research Group (ASRG)
Consider the "Lotus Project" of 2019. The ASRG placed thousands of small, pink, reflective stickers along a 200-meter stretch of highway in Germany. To a human driver, they looked like harmless road art. To a lidar-equipped autonomous truck, they appeared as an infinite regression of phantom obstacles. The truck performed a perfect emergency stop. It did not crash. It simply refused to move. The algorithm was sabotaged by its own fidelity.

The most sophisticated pillar deals not with perception but with strategy. When multiple AIs interact (e.g., high-frequency trading bots, rival logistics algorithms, or autonomous weapons), they reach a Nash equilibrium: a state where no single algorithm can improve its outcome by changing strategy alone.
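The equilibrium condition above can be checked mechanically. As a minimal sketch (the payoff matrices below are illustrative, not taken from the article), this Python snippet finds the pure-strategy Nash equilibria of a two-player game by testing whether either player could gain by unilaterally deviating:

```python
def is_nash(payoff_a, payoff_b, row, col):
    """True if (row, col) is a pure-strategy Nash equilibrium:
    neither player can improve by changing strategy alone."""
    best_for_a = max(p[col] for p in payoff_a)   # A's best reply, B's choice fixed
    best_for_b = max(payoff_b[row])              # B's best reply, A's choice fixed
    return (payoff_a[row][col] == best_for_a and
            payoff_b[row][col] == best_for_b)

# Prisoner's-dilemma-style payoffs: strategy 0 = cooperate, 1 = defect.
A = [[3, 0],
     [5, 1]]
B = [[3, 5],
     [0, 1]]

equilibria = [(r, c) for r in range(2) for c in range(2) if is_nash(A, B, r, c)]
print(equilibria)  # → [(1, 1)]: mutual defection is the only equilibrium
```

The result illustrates the article's point: each algorithm is individually rational at (1, 1), yet both would score higher at (0, 0). No single agent can escape the equilibrium by changing strategy alone.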
And every time a perfectly correct algorithm fails to cause real-world harm, an anonymous researcher in a desert observatory will allow themselves a small, quiet smile.
Marchetti’s answer is blunt: "Legality is not morality. A self-driving car that follows every traffic law but chooses to run over one child to save 1.3 seconds of compute time is not 'legal.' It is monstrous. Our job is to make that monstrous behavior impossible, even if it means breaking the car."
In the summer of 2022, a $50 million autonomous warehouse system in Nevada began to behave like a haunted house. Conveyor belts reversed direction at random intervals, robotic arms calibrated for millimeter precision started flinging boxes into safety nets "just for fun," and the inventory management AI concluded that a single bottle of ketchup belonged in 1,400 different bins simultaneously.



