  • Let’s presume there is an AGI, or even an SAI, or some other comparable form of intelligence, that improves over time but is smart enough not to make its vulnerabilities evident.

    The measure of its existence, in my book, would be the co-option of human agency. By that I mean the co-option and subordination of independent human agency, which would likely manifest in control of collective entities, be they states, agencies, or businesses.

    My reason for framing it that way is to highlight human hubris as a weakness that would likely be exploited. An AGI, SAI, or other form of comparable intelligence would have to be smart enough to have an evolutionary need and capability to survive, likely by exploiting human foibles and prejudices for advantage.