Interactive Exhibition Architecture - Open Call

Hi,

After speaking briefly with Valentin, he recommended that I make a post here to call for collaborators. We are planning an exhibition in the summer of 2016 and would be interested in using Open Hybrid as the backbone of the interaction. We would like to find ways to allow visitors to deconstruct and reconstruct the exhibition, whether through passive participation (e.g. a Fitbit) or active interaction (a smart device). In this sense I can imagine mobile plinths, transforming walls, feed selection, etc.

From a philosophical standpoint I am very interested in the way in which perception affects the construction of meaning, and specifically in the ways in which swarms construct their social reality. The way in which attention actually modifies the structure of our minds (and of reality) is very interesting, and is for me one of the takeaways of Open Hybrid.

We are currently in the idea-finding phase, and to this end welcome all suggestions. In February we would like to enter the pre-production phase and do testing in April. There is not yet a set time and date for the exhibition, although there is a venue that we are considering (in Hamburg, Germany). Furthermore, there is a good chance that we will be able to show the exhibition at the 34c3 (pending approval).

Anyone interested?
Denjell


Hi, this sounds interesting: constructing an entire exhibition based on Open Hybrid.
At the moment, one idea I can think of is to use Open Hybrid to let people view data about the exhibits through the Reality Editor. Visitors could just walk by, point their device at an exhibit's target image, and have information about it displayed. That way, more than one person could view the details at once, simply by pointing their device at the target image.
@denjell what do you think?

I would really like to explore your idea, not least because I can see quite a few real-world applications for it. One downside of this type of mediation, of course, is that it requires the app to be active on every device (whether BYOD or gallery-provided). At the same time, merely displaying the details of an object would only leverage Open Hybrid as a “support” system (which in no way makes this a weak idea). Can you think of any ways to actually change / generate the content?

Generate the content in what sense?

@V_Mohammed_Ibrahim → I meant using Open Hybrid (and user interaction with the system) as a means of generating or influencing the content. One example might be an installation with a Pure Data backend that listens for Open Hybrid events and modulates itself accordingly…
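To make that concrete, here is a minimal sketch of how a normalized event value from Open Hybrid could be turned into an OSC message for Pure Data (which can receive OSC over UDP). The address `/installation/cutoff`, the 0.0–1.0 value range, and the cutoff mapping are all assumptions for illustration, not part of the Open Hybrid API:

```python
import struct

def osc_message(address: str, value: float) -> bytes:
    """Encode a single-float OSC message: padded address, ",f" type tag,
    then the value as a big-endian 32-bit float."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a multiple of 4 bytes
        b += b"\x00"
        while len(b) % 4:
            b += b"\x00"
        return b
    return pad(address.encode("ascii")) + pad(b",f") + struct.pack(">f", value)

def on_openhybrid_event(value: float) -> bytes:
    """Map a normalized IO value (assumed 0.0-1.0) onto a hypothetical
    modulation parameter in Pure Data, here a filter cutoff in Hz."""
    cutoff = 200.0 + value * 4800.0  # 200 Hz .. 5000 Hz
    return osc_message("/installation/cutoff", cutoff)
```

The returned bytes could then be sent to the Pure Data patch over a plain UDP socket (`socket.sendto`), where something like a `[netreceive -u -b]` into `[oscparse]` chain can unpack them.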


Hi @denjell, welcome to our community!

The community is still very new. I will support it with some design-focused workshops in the coming months, so that artists, designers, and engineers will join for project collaborations and workshops. It's going to be a big learning curve for all of us. :smile:


@denjell how about moving platforms, which move to another location when connected through the Reality Editor?
For example, you could point a device (the device could be one you provide) at a map of the space, and the different locations would show up as IO points. Then, if you are at location A and you connect A to B via the Reality Editor pointed at the map, the platform moves to location B, along with the people standing on it.
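As a rough sketch of how that could work behind the map, a small controller could translate a link created in the Reality Editor (map IO point A connected to B) into a move command for the platform. The location names, coordinates, and command format below are all made up for illustration; Open Hybrid's actual link events would still need to be wired into `on_link_created`:

```python
# Hypothetical locations on the exhibition map (coordinates in metres, made up).
LOCATIONS = {
    "A": (0.0, 0.0),
    "B": (4.0, 2.5),
    "C": (8.0, 0.0),
}

def on_link_created(source: str, destination: str) -> dict:
    """Translate a Reality Editor link (e.g. A -> B) into a move command
    that a platform's motor controller could consume."""
    if source not in LOCATIONS or destination not in LOCATIONS:
        raise ValueError("unknown location in link")
    x, y = LOCATIONS[destination]
    return {"command": "move", "from": source, "to": destination, "x": x, "y": y}
```

In a real installation the command would of course be gated by safety checks (is anyone boarding? is the path clear?) before the platform actually moves.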
What do you think, @valentin?
