@Rog AI alignment is tough because we currently have no way of ensuring that we instill an AGI with *any* human values. Someone aligning an AGI to the exact values of a far-right human would already be miles ahead of what we currently think we're capable of, which is "aligning" an AGI that eventually kills everyone in pursuit of a utility function with some extremely subtle flaw that is impossible to detect.