Hi everyone, I saw the video in which Heun used his phone to control the car, the lamp, and so on. I was deeply attracted by this project. I browsed the OpenHybrid webpage and decided to make a simple demo following the "sensorAndSlider" example.
Before that, I have several questions to ask.
1. What is the functionality of the Arduino-YUN-Image? Is it the same for all Hybrid Objects?
2. Can you briefly describe the data flow? I mean: what happens from the moment I connect the elements using my iPhone to the moment the elements work together? How is data communicated between the elements? (To make this concrete, I put a sketch of my current mental model after this list.)
3. I am not familiar with AR technology. I saw that some virtual images are shown when the iPhone camera is aimed at the target. Is there some AR modeling code embedded in the Reality Editor app?
4. The laptop is used to create the web-based AR content for the Reality Editor. When I later manipulate the hybrid objects with the Reality Editor, does the laptop still need to run as a server?
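To illustrate question 2, here is a minimal sketch of how I currently imagine the sensor side of the sensorAndSlider example. Note that `sendValueToObjectServer()` is only a placeholder of mine, not the real OpenHybrid Arduino API; I have not studied the actual library calls yet and would be happy to learn what they look like.

```cpp
// Hypothetical sketch of my mental model -- NOT the real OpenHybrid API.

const int SENSOR_PIN = A0;

// Placeholder: I assume the real OpenHybrid library provides some call
// that publishes this value under the object's I/O point, so the
// Reality Editor can link it to other elements.
void sendValueToObjectServer(float normalizedValue) {
  Serial.println(normalizedValue);  // stand-in for the real transport
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(SENSOR_PIN);   // 0..1023 on the Yun's analog pin
  float normalized = raw / 1023.0;    // scale to 0.0..1.0
  sendValueToObjectServer(normalized);
  delay(100);                         // publish roughly 10 times per second
}
```

My guess is that the object server running on the Yun then forwards this value to whatever element the sensor is linked to in the Reality Editor, but that is exactly the part I would like to have confirmed.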
Thanks for taking the time to read this; I'm looking forward to your response.