The Challenge
Traditional industrial robot software emphasizes reliability, precision, and repeatability. That focus can make human-robot collaboration feel uncreative and unengaging. To thrive in a human-centered environment, robots must be dynamic and engaging.
The Solution

The Overview



Inverse Kinematics
We use inverse kinematics to orchestrate joint rotations, constructing smooth mathematical functions that are composed with principles from 'The Illusion of Life' to simulate embodied awareness and intelligence through the robots' movements and path trajectories.
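
As a rough illustration of the approach (not the production code), the sketch below solves inverse kinematics for a simplified two-link planar arm and shapes the joint interpolation with a slow-in/slow-out easing curve that adds a little anticipation and follow-through, two of the Illusion of Life principles. The link lengths, target positions, and easing constants are illustrative assumptions; the real arms are six-axis robots driven through RAPID.

```python
# Minimal sketch: analytic 2-link IK + an "anticipation and follow-through"
# easing curve applied to the joint interpolation. All numbers are placeholders.
import math


def two_link_ik(x, y, l1=0.4, l2=0.3):
    """Return (theta1, theta2) in radians that place the tool at (x, y)."""
    d = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    d = max(-1.0, min(1.0, d))              # clamp to stay within reach
    theta2 = math.acos(d)                   # elbow angle
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2


def ease_in_out_back(t, c1=1.70158):
    """Slow-in/slow-out easing with a small wind-up and overshoot."""
    c2 = c1 * 1.525
    if t < 0.5:
        return ((2 * t) ** 2 * ((c2 + 1) * 2 * t - c2)) / 2
    return ((2 * t - 2) ** 2 * ((c2 + 1) * (2 * t - 2) + c2) + 2) / 2


def joint_trajectory(p_start, p_goal, steps=50):
    """Blend between two tool positions along the eased timing curve."""
    q_start = two_link_ik(*p_start)
    q_goal = two_link_ik(*p_goal)
    for i in range(steps + 1):
        s = ease_in_out_back(i / steps)
        yield tuple(a + s * (b - a) for a, b in zip(q_start, q_goal))


if __name__ == "__main__":
    for q1, q2 in joint_trajectory((0.5, 0.1), (0.2, 0.5)):
        print(f"{math.degrees(q1):7.2f} {math.degrees(q2):7.2f}")
```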

RAPID Pseudocode

Python Decoded
- Audio Recording:
  - Records 3 seconds of audio at 44,100 Hz using sounddevice.
  - Saves audio files in a "debug_audio" directory for troubleshooting.
- Speech Recognition:
  - Uses speech_recognition with the Google Speech Recognition API to transcribe audio into text (see the recording-and-transcription sketch after this list).
  - Handles errors by prompting users to retry if recognition fails.
- Command Parsing:
  - Identifies the target robot(s) ("Luna," "Spot," or "both") and the command (a parsing sketch appears under Understanding Parsing below).
  - Supports synonyms (e.g., "home" and "return home" both map to "Home").
  - Defaults to "Luna" and "Home" if parsing fails.
- Command Generation:
  - Converts commands into RAPID strings (e.g., "SimpleMovements_Robot1.Wave;").
  - For "both," generates dual commands (e.g., "SimpleMovements_Robot1.Wave;SimpleMovements_Robot2.Wave;").
- User Interface:
  - Offers manual (typed) or voice input modes.
  - Uses rich.console for formatted console output (e.g., green for success, red for errors).
- Error Handling:
  - Retries network communication 3 times with 2-second delays.
  - Switches to demo mode if communication fails, simulating commands without execution (see the retry sketch after this list).
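
A minimal sketch of the recording and transcription steps from the list above, assuming the sounddevice, scipy, and SpeechRecognition packages are installed. The 3-second / 44,100 Hz settings and the "debug_audio" directory follow the description; the file-naming scheme is an assumption.

```python
import os
import time
from typing import Optional

import sounddevice as sd
import speech_recognition as sr
from scipy.io.wavfile import write as write_wav

SAMPLE_RATE = 44_100          # Hz, as described above
DURATION = 3                  # seconds of audio per command
DEBUG_DIR = "debug_audio"     # recordings kept here for troubleshooting


def record_clip() -> str:
    """Record a short clip and save it as a WAV file for debugging."""
    os.makedirs(DEBUG_DIR, exist_ok=True)
    audio = sd.rec(int(DURATION * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                   channels=1, dtype="int16")
    sd.wait()  # block until the recording is finished
    path = os.path.join(DEBUG_DIR, f"command_{int(time.time())}.wav")
    write_wav(path, SAMPLE_RATE, audio)
    return path


def transcribe(path: str) -> Optional[str]:
    """Transcribe a saved WAV file with the Google Speech Recognition API."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(path) as source:
        audio = recognizer.record(source)
    try:
        return recognizer.recognize_google(audio)
    except sr.UnknownValueError:
        print("Could not understand the audio, please try again.")
    except sr.RequestError as err:
        print(f"Speech API unavailable: {err}")
    return None
```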
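
The retry and demo-mode behaviour could look roughly like the sketch below. The transport is assumed to be a plain TCP connection to the bridge server mentioned in the takeaways; the host, port, and message framing are placeholders, not the project's actual protocol. The rich console supplies the green/red status output described above.

```python
import socket
import time

from rich.console import Console

console = Console()
BRIDGE_HOST, BRIDGE_PORT = "192.168.0.10", 5000   # hypothetical bridge address
RETRIES, RETRY_DELAY = 3, 2                        # 3 attempts, 2-second delays


def send_command(rapid_string: str) -> bool:
    """Try to deliver a RAPID string; fall back to demo mode on failure."""
    for attempt in range(1, RETRIES + 1):
        try:
            with socket.create_connection((BRIDGE_HOST, BRIDGE_PORT),
                                          timeout=2) as sock:
                sock.sendall(rapid_string.encode())
            console.print(f"[green]Sent:[/green] {rapid_string}")
            return True
        except OSError as err:
            console.print(f"[red]Attempt {attempt} failed:[/red] {err}")
            time.sleep(RETRY_DELAY)
    # Demo mode: log what would have run instead of executing it.
    console.print(f"[yellow]Demo mode, simulating:[/yellow] {rapid_string}")
    return False
```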
Understanding Parsing
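
A rough sketch of the parsing and command-generation logic summarized under Python Decoded above. The robot names, the SimpleMovements_Robot1/_Robot2 module names, and the Luna/Home fallback come from that description; the specific synonym table is an illustrative assumption.

```python
SYNONYMS = {                                   # assumed synonym table
    "return home": "Home", "go home": "Home", "home": "Home",
    "wave": "Wave",
}
ROBOT_MODULES = {"luna": ["SimpleMovements_Robot1"],
                 "spot": ["SimpleMovements_Robot2"],
                 "both": ["SimpleMovements_Robot1", "SimpleMovements_Robot2"]}


def parse(text: str):
    """Return (target, command); fall back to Luna / Home if nothing matches."""
    text = text.lower()
    target = next((name for name in ("both", "luna", "spot") if name in text),
                  "luna")
    command = next((cmd for phrase, cmd in SYNONYMS.items() if phrase in text),
                   "Home")
    return target, command


def to_rapid(target: str, command: str) -> str:
    """Build the RAPID string, e.g. 'SimpleMovements_Robot1.Wave;'."""
    return "".join(f"{module}.{command};" for module in ROBOT_MODULES[target])


if __name__ == "__main__":
    print(to_rapid(*parse("Luna and Spot, both of you wave")))
    # -> SimpleMovements_Robot1.Wave;SimpleMovements_Robot2.Wave;
```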

Audio Cues

Takeaways & Future Developments
- LLMs do not understand RAPID code; using one directly won't work because it doesn't know the syntax. A translation layer, implemented as a middleware script, could bridge the gap.
- Grasshopper is better used as a tool for validation than as a tool for design (the KISS principle). One animation challenge was that visual animations do not follow physical rules, which meant finding ways to convert animation rules into physical reality.
- We could not connect directly to the robot's IP address from the Robotic Workstation because of hardware restrictions, which created the need for a bridge server.
- There is no noise isolation, which is a complex problem in an environment like a lab. Rigorous safety checks are needed if the commands also involve parameters such as speed that change with tonal inflection, prompting us to devise dedicated rotation limits for all six axes so that the movements cannot cause collisions (a minimal clamping sketch follows this list).
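
A minimal sketch of such a per-axis limit check. The limits below are placeholder values, not the ones used on the real robots; each commanded joint target is clamped into the allowed envelope before it is sent.

```python
AXIS_LIMITS_DEG = [            # placeholder (min, max) per axis, axes 1-6
    (-170, 170), (-90, 110), (-120, 60),
    (-200, 200), (-120, 120), (-270, 270),
]


def clamp_pose(joint_angles_deg):
    """Clamp a six-axis joint target into the allowed envelope."""
    if len(joint_angles_deg) != 6:
        raise ValueError("expected one angle per axis (6 values)")
    return [max(lo, min(hi, angle))
            for angle, (lo, hi) in zip(joint_angles_deg, AXIS_LIMITS_DEG)]


print(clamp_pose([0, 200, -150, 10, 0, 300]))   # -> [0, 110, -120, 10, 0, 270]
```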