Discussion about this post

Bret Benesh

This reminds me a bit of Kahneman and Tversky's work on prospect theory. One of the key ideas is that, empirically, humans seem to be hurt more by a loss than helped by a gain of the same magnitude.

https://www.behavioraleconomics.com/resources/mini-encyclopedia-of-be/loss-aversion/

Ingrid Wagner Walsh

“But one of the reasons—the reason, really—that I can’t swallow consequentialism¹ as my main ethical approach is the possibility that it threatens to erase the worth of an individual and makes us all instruments of the greater good, possibly to the detriment of ourselves.” Erica, this is illuminating. I don’t know much philosophy, but I often debate this question about human nature. I tend to land on similar conclusions about how we must act versus how we are. I fear that the world of AI is forcing society into a utilitarian structure based on capitalist outcomes when we really should be focusing on care ethics to maintain what communal societal structure we have left. Build the floor! Thanks, as always!
