What "values" would we try to bake into AI, if we could? Ethics in AI discussions focus on overcoming various kinds of human inequality (though the inventor also said some people she'd met abroad had told her they wondered why they were helping people whose afflictions they believed were the result of things they had done in their past lives), but a scholar who's written on the history of colonialism, race and religion predicted that the first big ethical issues we face will involve new inequalities - not humans vs. machines but humans vs. cybernetically enhanced humans like those the world's militaries are feverishly working toward: do we let them vote, have children?
Perhaps it's having Ishiguro's Clara and the Sun on my mind, but I started to wonder whether some of the values we'd want AI to have shouldn't include things like the capacity for wonder, for love, for relations with the non-human world, for knowledge of God...