Open Hybrid LED example: the Yun itself is given as the target image for augmenting the slider. I have used an IR sensor instead of a force sensor; the result is the same.
The more you play with the parts, the more it makes sense. The App has a clever snapshot feature (I don’t know if that’s what they call it, but that’s what it seems to do). Once you acquire an object, push the button and it locks the image; the controls stay in a fixed place, but you can still slide them and such.
The Vuforia software associates an image “marker” with the hybrid object through characterization files (which include the image, the object’s basic info, and the app overlay controls and such). These files live on the server and get loaded by the App when it connects to the server. The server is the “brains” and the “storehouse” for the information, while the App is a clever user interface (UX, really): camera, display, touchscreen, etc.
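Just to make the idea concrete, here is a purely hypothetical sketch of what such a characterization file could contain; the field names and structure are my own guesses for illustration, not the actual Open Hybrid or Vuforia format:

```js
// Hypothetical characterization file (illustrative only, not the real format)
{
  "name": "myYunObject",                            // basic object info
  "targetImage": "yun-photo.jpg",                   // the uploaded marker image
  "targetSize": { "width": 0.07, "height": 0.05 },  // assumed physical size of the marker
  "interface": "index.html"                         // the HTML overlay (slider, buttons, ...)
}
```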
The server also has interfaces to the “real world” on the backside, or that’s how it sits in my head. The server glues together the App interface and the real world, and is the core of the hybrid reality. It also hosts the Vuforia application that builds the hybrid object, but that’s just a “setup” step to build the configuration.
To me, it is helpful to divorce the setup/configuration phase from what happens during usage.
Hi @ivanlin, I see why you are confused. You are right: an image is needed to augment the HTML contents on top of it.
In the video I took a photo of the Arduino Yun, changed its properties (size, RGB scales) to meet the Vuforia requirements, and used (uploaded) it as the target. A QR-code-like image is not always needed, but it makes the target more unique.
So what you see is the HTML slider being augmented on top of the image of the Arduino Yun that I took.
I hope that clears up your doubts.
PS: I was too lazy to go out and get a printout of a marker image, so I ended up doing this.
Now I understand that a QR-code-like image is not necessary.
As long as the ‘features’ rating is high, that would be enough, wouldn’t it?
But another thing I don’t know is how to determine the position, angle, or scale of the virtual buttons and sliders on the iPhone/iPad screen.
Is there any tool or file that describes that information?
Yes, this is all correct.
You need to have the object.js file referenced in your index.html.
It tells the Reality Editor how to scale the interface.
If it is missing, your interface will not be visually presented in the right way.
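For example, a minimal index.html that references object.js might look like the sketch below; the slider element and its id are just placeholders I chose, not required names:

```html
<!-- index.html for the hybrid object's interface -->
<html>
  <head>
    <!-- object.js lets the Reality Editor know how to scale/position this interface -->
    <script src="object.js"></script>
  </head>
  <body>
    <!-- the control(s) that get augmented on top of the marker -->
    <input type="range" id="slider" min="0" max="255">
  </body>
</html>
```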
Imagine you had the marker attached to a brain wave reading device on your head.
You could literally visualize the brain waves right at the spot where they are generated.
Then you could draw a line from the “brain” to objects in the environment and, in doing so, make the human into a hybrid “human”.
Or use something that is just more visually connected with the domain of brain reading, so that the pattern of the marker merges in instead of standing out.