strypey, "AI risks are exploits on pools of technological power. Guarding those pools prevents disasters from exploitation by hostile people or institutions as well. That makes the effort well-spent even if Scary AI never happens. This may be more appealing to publics, or governments, if they are skeptical of AI doom."
https://betterwithout.ai/pragmatic-AI-safety
I've posted a quote along these lines before, but it's a key point worth reiterating.