Monday, October 30, 2017

Potential Danger Of The Attempt To Replace God With AI

6:33:24 PM <@ SIDPMod> To me, the premise "How to root out hidden biases in AI" seems illogical, since it seems to rely upon using a flawed gauge to gauge the gauge's flaws.

6:34:56 PM <@ SIDPMod> For example, the excerpt "I believe the great use for machine learning and AI will be in conjunction with really knowledgeable people who know history and sociology and psychology to figure out who should be treated similarly to whom" seems to overlook the limitations of human perception. If any humans could "figure out who should be treated similarly to whom", social conflict would seem unlikely to exist.

6:47:26 PM <@ SIDPMod> Technology seems to have been developed to help humans determine the factors relevant to decision-making. If, with the factors currently perceivable by humans and their technology, we still haven't rooted out the biases in human intelligence, then human intelligence seems a questionable tool for rooting out the biases in artificial intelligence, since those biases exist via human intelligence in the first place.

6:56:22 PM <@ SIDPMod> Goals that are suggested to have been technologically achieved, such as locomotion and computation, might be presented on AI's behalf to propose that AI can eventually be fool-proofed.

7:11:03 PM <@ SIDPMod> Without suggesting that goal to be irrefutably unreachable, achieving it seems illogical given apparently limited human understanding of what an optimal future circumstance consists of, and perhaps as a result, of what the decision-relevant factors are, and even less so, of the relevant state of those factors.

7:12:34 PM <@ SIDPMod> Re: "How do you know when you have the right model, and when it’s capturing what really happened in society?" To me, therefore, an even more important question might be: "How do you know when your model is capturing what really should happen in society?"