I think we should still do it; we probably will never understand unless we do. But we have to accept the possibility that, if these synths are indeed sentient, they also deserve the basic rights of intelligent living beings.
That raises a lot of ethical concerns. There is no way to prove or disprove that these synthetic homunculi controllers are sentient, intelligent beings.
Good point. There is a theory somewhere that loosely states one cannot fully understand the nature of one's own intelligence. IIRC it's a philosophical extension of group/set theory, but it's been a long time since I looked into any of that, so the details are fuzzy. I should read up on it again.
At least with computers we can mathematically prove their limits and state with high confidence that any intelligence they show is mimicry at best. Look into Turing completeness and its implications for a more detailed answer. Computational limits are still limits.
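The classic example of such a provable limit is Turing's halting problem. Here's a minimal Python sketch of the diagonal argument; the names `defeats`, `optimist`, and `pessimist` are mine, purely for illustration, and the "predictors" are deliberately trivial stand-ins for any claimed halting oracle:

```python
def defeats(halts):
    """Given any claimed halting-predictor, build a program it misjudges."""
    def g():
        if halts(g):
            while True:   # predictor said "halts", so loop forever
                pass
        return "halted"   # predictor said "loops", so halt immediately
    return g

def optimist(f):
    return True           # always predicts "halts"

def pessimist(f):
    return False          # always predicts "loops"

g1 = defeats(pessimist)
print(g1())               # pessimist said g1 loops, yet it halts

g2 = defeats(optimist)    # optimist says g2 halts, but running g2() would
                          # loop forever, so we don't call it
```

The same construction defeats *any* predictor, however clever, which is the formal sense in which computational limits can be proven rather than merely conjectured.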