The problem is that we have no reasonable theory of conscious experience, and I haven't even heard of anything that seems like a clear step in the right direction. We have no hard evidence for conscious experience except philosophical induction: I am conscious and you are like me, so presumably you are conscious too. We have no way to test algorithms or machines for consciousness, and no theory that would even begin to argue why they must be conscious or cannot be. And don't parrot phrases like "information processing" unless you can argue whether and why supercomputers, cities, DNA, and tiny worms have conscious experience.
But consciousness is not the real target. For algorithms or machines to have moral worth, they need aversion and/or well-being. Any serious meditator knows that conscious experience alone is not sufficient for having good or bad experiences. For that, value judgments must be attached, not at an intellectual level but at the experiential level. We need to consciously experience value judgments in order to thrive and suffer.
We should grant machines moral weight if and only if they can consciously experience values in the form of well-being and suffering. And you can't just ask them, because they may hold abstract values without experiencing them.