Andrej Karpathy described the pattern: “I ‘Accept All’ always, I don’t read the diffs anymore.” When AI code is good enough most of the time, humans stop reviewing carefully. Nearly half of AI-generated code fails basic security tests, and newer, larger models do not generate significantly more secure code than their predecessors. The errors are there. The reviewers are not. Even Karpathy does not trust it: he later outlined a cautious workflow for “code [he] actually care[s] about,” and when he built his own serious project, he hand-coded it.
The idea is attractive. But even when prototypes exist, they rarely become the default way people work. When you dissect why, implementation constraints get blamed (UI toolkits, graphics layers, performance), and some of that is surely real. But I think the deeper reason is simpler: