Building Physical AI with AWS
“The next frontier of AI isn’t just understanding the world, it’s acting within it.”
Inspired by AWS’s vision for Physical AI, this project combines IoT hardware, edge processing and cloud AI to build a system where agents perceive, reason and act in the real world.
The Goal: Introduce ourselves to AI that controls physical hardware through natural language. We keep the hardware simple: an ESP32 board with an LED screen. The more interesting part is the agents in the cloud that understand intent, make decisions, and trigger real-world actions.
We’re building this with AWS IoT Core, AWS IoT Greengrass, and Strands Agents.
How It Works
Tell an AI agent what you want, and it makes hardware do it.
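To make that flow concrete, here is a minimal sketch of what the agent-to-device half might look like. The topic name, payload shape, and function names are illustrative assumptions, not the project's actual protocol; the publish step is left as a plain function so the payload logic stays testable offline.

```python
import json

# Hypothetical topic -- the real project defines its own topic hierarchy.
DEVICE_TOPIC = "devices/m5stick/display"

def build_display_command(text: str, color: str = "white") -> str:
    """Serialize a display instruction the agent could send to the device."""
    payload = {"action": "display", "text": text, "color": color}
    return json.dumps(payload)

def publish(mqtt_client, topic: str, payload: str) -> None:
    """Hand the payload to an MQTT client (e.g. the AWS IoT Device SDK).

    Sketched generically: any client exposing publish(topic, payload, qos)
    would fit here.
    """
    mqtt_client.publish(topic, payload, qos=1)

# The agent decides *what* to show; the device only sees a small JSON command.
command = build_display_command("Hello from the cloud", color="green")
print(command)
```

The point of the split is that the ESP32 stays dumb: it subscribes to one topic and renders whatever JSON arrives, while all the intent-understanding lives in the cloud agent.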
What You’ll Need
- AWS Account: With permissions to create resources (this project will incur charges)
- M5StickC Plus: Or an equivalent ESP32 development board
- AWS CLI: Configured with credentials sufficient to deploy the stack
- Node.js 18+: For AWS CDK
- Docker or Podman: For building container images
- Basic Command Line: some bash/terminal knowledge
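Before starting, it's worth sanity-checking the toolchain. A minimal sketch: the tool names match the list above, and the container check accepts either Docker or Podman since both can build the images.

```shell
#!/bin/sh
# Report which prerequisite commands are on PATH; warn rather than fail.
MISSING=0
for tool in aws node; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool"
    MISSING=$((MISSING + 1))
  fi
done
# Either container runtime works for building images.
if command -v docker >/dev/null 2>&1 || command -v podman >/dev/null 2>&1; then
  echo "ok: container runtime"
else
  echo "missing: docker or podman"
  MISSING=$((MISSING + 1))
fi
echo "missing tools: $MISSING"
```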
Blog Posts
- Part 1: Setting Up the M5Stick Device
- Part 2: MQTT Setup
- Part 3: AWS IoT Setup
- Part 4: Moving to AWS IoT Core with Certificates
- Part 5: Building with Strands Agents
- Part 6: Edge Deployment with CDK and Greengrass
- Part 7: Observability & Troubleshooting (Bonus)
- Part 8: What We’ve Learned & Teardown