Humor is a release valve for what you can’t say.
Economic “agents” maximize utility under constraint. Sutton and Barto, fathers of reinforcement learning, turn utility into “reward,” giving examples of a chess player making a move or a cleaning robot optimizing its path. In each case, an agent “seeks to achieve a goal despite uncertainty about its environment.”
While a pocket of tech elites has been using “high-agency” since the mid-2010s, it’s no surprise the term has taken off amid the LLM boom. Given access to a system that has memorized all documented human knowledge, what matters is not expertise, but a dogged ability to adapt and win no matter who you are and what you start with.
If agency combines autonomy (“the capacity to formulate goals in life”) plus efficacy (“the ability and willingness to pursue those goals”), AI in 2025 is sorely lacking in both.²