OpenHybrid experiments

OpenHybrid LED example; the Yun itself is used as the target image for augmenting the slider.
I have used an IR sensor instead of a force sensor; the result is the same.

https://www.youtube.com/watch?v=o4nsMlTm0v8
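
In case anyone wants to recreate the interface side: the augmented control is just a small web page served by the hybrid object. Below is a rough sketch of how such a slider page could look; sendSliderValue() is only a placeholder name I made up, so check the sensorAndSlider example in the HybridObject repository for the real object.js call.

```
<!-- Rough sketch of a slider interface page for the LED example (not the exact example code). -->
<input id="ledSlider" type="range" min="0" max="255" value="0">
<script>
  // Placeholder for the real object.js call that pushes the slider value
  // to the sketch running on the Yun (which would dim the LED with analogWrite()).
  function sendSliderValue(value) {
    console.log("would send", value, "to the hybrid object");
  }

  var slider = document.getElementById("ledSlider");
  slider.addEventListener("input", function () {
    // Values are assumed here to be normalized to the range 0..1.
    sendSliderValue(Number(slider.value) / 255);
  });
</script>
```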

Controlling a small obstacle-avoiding robot using OpenHybrid

Line-graph updating using OpenHybrid
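
The graph drawing itself is plain HTML canvas; only the data feed comes from OpenHybrid. A rough, self-contained sketch (with the sensor feed simulated, since the real hookup would go through whatever read callback object.js provides) could look like this:

```
<!-- Sketch of the drawing side of the line graph; the data source is simulated below. -->
<canvas id="graph" width="300" height="100"></canvas>
<script>
  var canvas = document.getElementById("graph");
  var ctx = canvas.getContext("2d");
  var samples = [];

  // Called with each new sensor value (assumed to be normalized to 0..1).
  // In the real interface this would be wired to the object.js read callback.
  function onSensorValue(value) {
    samples.push(value);
    if (samples.length > canvas.width) samples.shift(); // keep one sample per pixel column
    draw();
  }

  function draw() {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.beginPath();
    for (var x = 0; x < samples.length; x++) {
      var y = canvas.height - samples[x] * canvas.height;
      if (x === 0) { ctx.moveTo(x, y); } else { ctx.lineTo(x, y); }
    }
    ctx.stroke();
  }

  // Stand-in data source so the sketch runs on its own.
  setInterval(function () { onSensorValue(Math.random()); }, 100);
</script>
```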

Hi, @V_Mohammed_Ibrahim,

Why are you able to control the slider without the QR-code-like image on the screen?

I thought the image was necessary.

My understanding is that the image we uploaded to Vuforia contains some key information, such as buttons and sliders.

After uploading the image, Vuforia would provide a zip file for developers to download and the zip file should be placed onto Arduino Yun or Pi.

Therefore, we are able to see those buttons and sliders at the positions we want on the iPhone or iPad screen.

But your video demonstrates that my understanding seems wrong.
Could you please explain how it works? Thanks a lot.

Ivan

The more you play with the parts, the more it makes sense. The App has a clever snapshot feature (I don't know if that's what they call it, but that's what it seems to do). Once you acquire an object, push the button and it locks the image, and the controls stay in a fixed place, but you can still slide them and such.

The Vuforia software associates an image "marker" with the hybrid object through characterization files (which include the image, the object's basic info, and the app overlay controls and such). These files live on the server and get loaded by the App when it connects to the server. The server is the "brains" and the "storehouse" for the information, while the App is a clever user interface (UX really): camera, display, touchscreen, etc.

The server also has interfaces to the "real world" on the backside, or that's how it sits in my head. The server glues together the App interface and the real world, and is the core of the hybrid reality. It also hosts the Vuforia application that builds the hybrid object, but that's just a "setup" step to build the configuration.

To me, it is helpful to divorce the setup/configuration phase from what happens during usage.

Hope this helps,
Peter


Hi @ivanlin, I see why you are confused. You are right that an image is needed to augment the HTML contents on top of it.
In the video I took a photo of the Arduino Yun, changed its properties (size, RGB scales) to meet the Vuforia requirements, and used (uploaded) it as the target. A QR-code-like image is not always needed, but it makes the target more unique.
So what you see is the HTML slider being augmented on top of the image of the Arduino Yun I took.
Hope your doubts are cleared now.

PS: I was too lazy to go out and take a printout of a marker image, so I ended up doing this :grin:

Hi, @V_Mohammed_Ibrahim,

Thanks for the quick response.

Now I understand the QR-code-like image is not necessary.
As long as the 'features' rating is high, that would be enough, wouldn't it?

But another thing I don't know is how to determine the position, angle, or scale of the virtual buttons and sliders on the iPhone/iPad screen.

Is there any tool or file that describes that information?


@ivanlin The 3D scaling is taken care of by the object.js file. You can find it in the examples folder of your HybridObject library, or here: https://github.com/openhybrid/HybridObject/tree/master/examples/sensorAndSlider/interface. You will be able to move or resize the HTML elements by switching on the developer function in the settings of the Reality Editor.
@valentin, did I miss anything here?

Yes, this is all correct.
You need to have the object.js file referenced in your index.html.
It is used to communicate with the Reality Editor on how to scale the interface.
If it is missing, your interface will not be visually presented in the right way.
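
For reference, a stripped-down index.html really only needs that one script reference; the full interface is in the sensorAndSlider example folder linked above:

```
<!-- Stripped-down sketch of an interface page; see the sensorAndSlider example for the real thing. -->
<html>
  <head>
    <!-- object.js must be referenced so the Reality Editor knows how to scale and place the interface. -->
    <script src="object.js"></script>
  </head>
  <body>
    <input type="range" min="0" max="255">
  </body>
</html>
```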

OpenHybrid with a Brain-Computer Interface (CC is also added)
https://www.youtube.com/watch?v=KupVqYzv6NU


I love it!

Imagine you would have the marker attached to the brain-wave-reading device on the head.
You could literally visualize the brain waves right at the spot where they are generated.
Then you could draw a line from the "brain" to objects in the environment and as such make the human into a hybrid "human".

Maybe instead of the maze pattern for the target you could use something that looks like brain neurons or PCBs:
https://www.google.com/search?q=brain+neurons&source=lnms&tbm=isch
https://www.google.com/search?q=pcb&espv=2&biw=1330&bih=875&source=lnms&tbm=isch

Or something that is just more visually connected with the domain of brain reading.
So that the pattern of the marker merges in instead of standing out.


It's still at an early stage; I will be refining it into an awesome application of OpenHybrid :smile:


Keep us updated! I am so happy that you find such good use for the Reality Editor.


If you want to record the iPad screen directly, you can do this with QuickTime: connect the iPad to a Mac with its Lightning cable, open QuickTime Player, choose File > New Movie Recording, and select the iPad as the camera source.
