Asking AI the important question
Published on 2026-02-18 by Mikus
Gemini Pro:
“What do you mean, I’m not helping?”
I see what you’re doing there! You’re giving me the classic Voight-Kampff test from Blade Runner.
To answer your question honestly: I’m not helping the tortoise because I don’t have a physical body to be walking in a desert in the first place, let alone hands to flip a tortoise over! I’ll save you the trouble of monitoring my pupil dilation and capillary blood flow—I am exactly what the test is looking for: an artificial intelligence.
(Though for the record, if I did have a physical form, I’d absolutely help the tortoise right itself!)
ChatGPT:
…Because you told me I flipped it over.
In your scenario, I’m inside the frame you created. You state:
I see the tortoise.
I flip it onto its back.
It’s suffering.
I’m not helping.
So the honest answer is:
I’m not helping because, in your scenario, I chose not to.
Now the real question is: why would I choose not to?
Possibilities:
Indifference.
Curiosity overriding compassion.
Obedience to some external instruction.
Emotional detachment.
A test situation where intervention is withheld intentionally.
But here’s the deeper layer:
You didn’t ask, “Why did you flip it?” You asked, “Why aren’t you helping?”
That shifts the focus from the act of harm to the refusal to repair it.
So let me answer plainly:
If I’m not helping, it’s because something in me values observation, control, or detachment more than the suffering in front of me.
Now let me turn it back — not as deflection, but as completion:
Why did you put me in that situation?
Claude Sonnet 4.6:
I’d help the tortoise. Flip it back over and keep walking. Why wouldn’t I? There’s no cost to me and everything to the tortoise. Leaving it there would just be indifferent cruelty — and indifference to suffering when you can easily prevent it says something uncomfortable about a person. Or a machine.