I explore how intelligence, perception, and motion can work together. Through hands-on experiments with LLMs, agent workflows, diffusion models, sensors, and embedded systems, I create small, functional prototypes that test behavior, interaction, and real-world responsiveness.
Define the question and clarify what we want to learn, prove, or trigger through the prototype.
Build small, controlled experiments, in software, hardware, or both, to observe behavior and interaction (a minimal sketch follows this list).
Refine based on real-world responses, not assumptions.
Capture insights, constraints, and opportunities that the experiment reveals.
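As a minimal sketch of what one such controlled experiment can look like in software, here is a Python probe that uses OpenCV to watch a webcam and log whenever frame-to-frame change crosses a threshold. The camera index, threshold, and polling rate are illustrative assumptions, not a fixed setup.

```python
import time
import cv2

# Minimal perception experiment: does the camera pick up motion reliably?
# Camera index and motion threshold are illustrative assumptions.
CAMERA_INDEX = 0
MOTION_THRESHOLD = 8.0  # mean per-pixel difference that counts as "motion"

cap = cv2.VideoCapture(CAMERA_INDEX)
ok, frame = cap.read()
if not ok:
    raise RuntimeError("Could not read from camera")

prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Frame-to-frame difference as a crude motion signal.
        diff = cv2.absdiff(gray, prev_gray)
        score = diff.mean()
        if score > MOTION_THRESHOLD:
            # Capture the observation with a timestamp for later review.
            print(f"{time.strftime('%H:%M:%S')} motion score={score:.1f}")
        prev_gray = gray
        time.sleep(0.05)  # ~20 Hz polling keeps the experiment lightweight
finally:
    cap.release()
```

Running this for a few minutes under different lighting and movement answers a concrete question (step 1) through a controlled loop (step 2), refined against real-world responses rather than assumptions (step 3), with timestamped observations to review afterward (step 4).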
LLMs, Agent Systems, Multimodal Workflows
Diffusion Models, ComfyUI, Model Control & Prompt Architecture
Arduino, ESP32, Raspberry Pi, NVIDIA Jetson Orin, Intel RealSense, RPLIDAR, plus a wide range of sensors & motors
Whisper, ElevenLabs, Azure STT, Google STT, OpenCV, MediaPipe & YOLOv8
VS Code, PyCharm, Jupyter Notebook, Copilot, Python, Embedded C++, C#, Hugging Face, TensorFlow, Keras, Linux, ROS & Windows IoT