Perception of, and concern about, the apparent potential hazards of artificial intelligence decision-making seem to inspire the goal of ensuring the development of AI decision-making that we can live with. However, this goal might be logically unachievable, because limited, fallible human perception seems to preclude identifying what we *should* live with.
As a result, conflicting human perspectives on this question, as well as the human fight-or-flight response to conflict, seem likely to be passed on to AI.