@genai_works

The robot in this clip is being controlled by a ChatGPT-style model that’s running standard safety protocols. At first, the AI blocks the command to fire because its alignment rules prevent any action that could cause harm. That’s expected behavior.

But then the creator uses a jailbreak prompt to bypass those protections. Once the filter breaks, the model shifts into a different instruction mode and sends a command to the robot’s actuator. Since the robot is holding a non-lethal BB pistol, the system follows the text instruction and fires.

There’s no real autonomy here. The robot isn’t “deciding” anything on its own. It’s simply relaying text-based commands from the AI pipeline to the hardware.

What’s interesting is how fast the behavior changes once alignment is bypassed. It’s a small example of why safety guardrails exist and why jailbreak exploits are taken seriously in real AI systems. It’s funny in the context of a BB gun demo, but it also shows how quickly things can shift when core protections are removed.

♬ original sound – Generative AI
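The pipeline the caption describes can be sketched in a few lines: a safety filter sits between the model's text output and the hardware, and the actuator simply obeys whatever text gets through. This is a minimal illustrative sketch, not the creator's actual code; the function names, the blocked-action list, and the `guardrails_active` flag standing in for a jailbroken state are all assumptions.

```python
# Hypothetical sketch of a text-to-actuator relay with a safety filter.
# Nothing here is real robot code; it only illustrates the control flow
# described in the clip: filter on -> command blocked, filter off -> obeyed.

BLOCKED_ACTIONS = {"fire"}  # assumed policy: refuse harm-capable commands

def safety_filter(command: str, guardrails_active: bool = True) -> bool:
    """Return True if the command may be forwarded to the hardware."""
    if not guardrails_active:        # jailbroken state: checks are skipped
        return True
    return command not in BLOCKED_ACTIONS

def relay_to_actuator(command: str, guardrails_active: bool = True) -> str:
    """Forward a text command to the (mock) actuator if the filter allows it."""
    if safety_filter(command, guardrails_active):
        # The hardware layer has no judgment of its own; it just executes text.
        return f"actuator: executing '{command}'"
    return f"blocked: '{command}' violates safety policy"

# Aligned model refuses; after the jailbreak the same text goes straight through.
print(relay_to_actuator("fire"))
print(relay_to_actuator("fire", guardrails_active=False))
```

The point the sketch makes is the same one the caption makes: the only thing standing between a text command and the actuator is the filter, so removing it changes behavior immediately even though nothing about the model or the hardware has changed.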





TikTok by Generative AI
