Philips Hue hardware interface

Continuing the discussion from Openhybrid Raspberry pi - interface and library:

I will look into your code tomorrow and start to add some error handling. I think it is a good idea to synchronize our efforts so the work doesn’t get done twice :wink: . Ah, and about the colour control you were thinking about, @psomdecerff: I am intending to use a colour picker like Tiny Colorpicker (a lightweight cross-browser color picker) as the UI element.


@Carsten and @psomdecerff maybe one of our old examples can help.
In our first Reality Editor demos we had the Philips Hue light working with an AR color picker.
Unfortunately this first demo was built on a completely different foundation and is not compatible with the Reality Editor and Open Hybrid. We separated the color into hue, saturation and brightness. That made it very easy to interact with.

Here is the video from that demo:

Yes, I have seen that video and I’m thinking hard about the right way to implement the colour part of the light. The easiest thing would indeed be to just expose hue, saturation and brightness as I/O points, but I think it would be beneficial to have only one I/O point for the colour. Let’s say I want several lights to have the same colour – then, as a user, I would just like to draw one connection between the lights and not two or three. Another issue is what colour space to use. There might be applications other than lights that provide an input colour for the lights. I could imagine a media center which provides the dominant colour of the currently running movie. Then you could create an ambilight effect. What format would you use as “global colour exchange format”? RGB because it is most common on the web? Hue and saturation because it is the easiest? xy?

What do you think @psomdecerff, @valentin?

First, I’d like to have switches like on that lamp – I assume they are “smart”, and send an on/off to the Editor, rather than being a HW switch. This is an irritant I already find on my system – my wife turns off the lamp manually, and then I have to do the manual “on” before I can use my nifty app!

On the interface questions, I am of two minds, maybe three. At the most basic level, hybrid reality should make things intuitively obvious to use, especially for basic things; once you get past that, you are in the familiar trade between flexibility and simplicity, and I think going too far down the flexibility path will be a BAD THING. Honestly, for a lamp, or a car radio, I don’t think you can get much better than a dimmer switch and volume knob – and that’s basically what a hybrid object would provide. Once you get to more sophisticated control – hue for the bulbs, or fader and equalization for a radio – the interface will either get overly busy, layered, or both. Personally, I think when you get to that point, a “designed” interface is better, and an ability to seamlessly shell into a native app would make perfectly good sense.

For connectivity, I think you’re looking at “grouping” as much as anything. A handful of lights that you want to work in tandem because they’re in a shared fixture or one room, or a bundle of connections you want to hook up at once (hue, brightness, saturation for a light, L/R for a stereo, or whatever). I can see this working pretty easily several ways, but the way we’re all used to with a mouse is drawing a select box or doing click-to-select and then picking “group”.

I haven’t played with multiple objects in the window at once, but I imagine it gets busy fast, and hard to connect. Maybe instead there should be a notion of locality, as with the room-specific BT beacons, where colocated devices all show on an alternate page, and you can do simple things from there? Already, I find myself wanting a recently-used page versus having to re-acquire an object… I have markers sitting next to my chair now to make it easy to pick the lamp across the room without moving.

Perhaps the base set of abilities should include ties to alternate views of “nearby” objects for easy selecting/grouping/controlling/linking? And if an object links to a native app or some other “smart” app, an easy-button to shell to it? For example, on Hue, maybe the page should include a Hue app button that takes you to the factory Philips Hue app, where you can do colors and some sort of grouping already? Or, if you chose to have some sort of Reality Mixer app for colors, or sounds, or whatever, it would go there instead?

I am reminded of how musicians behave. A guy with a guitar has a guitar and his voice…unless he goes one step up and adds an amp (one new object and one new cord), and maybe a microphone (another object and another cord). If you add another guy, he does the same, but by the time you add a third, you add a mixer board and a power amp. And then monitor speakers, and mics for a drum kit, and then an associated lighting board, and so forth. Still, that first guitar has one cord, going somewhere.

Another example is “If This, Then That” – knitting together existing but single-purpose tools to do something more complex. The app doesn’t do everything itself, nor does every interface add much new functionality; the app adds functionality to do simple but useful things.

So, to sum up, I think the biggest value for hybrid reality is to do basic controls on fairly simple but not necessarily well-understood devices, and do simple but clever things with them. For more complex operations, I think some sort of more sophisticated app is going to be needed. I wish I could use the same interface for every TV in my house (volume, channel, on/off, and input would be fine!), and for every media source as well (play, pause, rewind/forward, etc.). Leave the color balance, channel guide, and surround-sound controls for a native app.

As for hue control, I think basic RGB is a good bet. Something that works similarly to Microsoft’s color picker would be familiar to most people, and in the end that seems like the safest choice.

I see what you have in mind. I was thinking a lot about that as well. One solution would be to create some sort of general-purpose concentrator in the GUI, in which you would bundle the color components.
Any solution needs to take care of the foundational principle described here:

Because if we continue that thought to other devices, we might end up with an abstract IoT standard again and incompatibilities down the road.

I know, this is a tricky point. Maybe we can find a good solution together?

With the light shown in the video we experimented with hue, saturation and brightness for some time and across many interactions, and HSB is the color space that makes the most sense.
You can also call it color tone, saturation and brightness.

RGB does not make sense, because it does not allow you to control the brightness separately from the color tone. From an interaction perspective, everything is strangely intertwined in RGB.
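To make this concrete, here is a quick illustration using Python’s standard colorsys module (just a sketch, not part of any actual interface code): dimming a colour expressed in HSB touches only the brightness component, while the very same change seen through RGB moves all three channels at once.

```python
import colorsys

# An orange tone in RGB, all channels in [0, 1].
r, g, b = 1.0, 0.5, 0.0

# In HSB/HSV space, brightness is its own component ...
h, s, v = colorsys.rgb_to_hsv(r, g, b)

# ... so halving the brightness leaves hue and saturation untouched,
dimmed = colorsys.hsv_to_rgb(h, s, v * 0.5)

# ... but seen through RGB, all three channels changed at once.
print(dimmed)  # roughly (0.5, 0.25, 0.0)
```

This is exactly the intertwining described above: a one-knob interaction in HSB needs a coordinated three-channel change in RGB.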

I think eventually some simplistic adapters will be needed. For example, inverters (for on-off to off-on, or proportional to inverse control). Perhaps color palette converters could be part of that as well.

In the physical world, we’re all familiar with adapters (my laptop bag contains an ever-shifting mix as technology moves on). Perhaps in the logical world we could have some too? It would be easy to have a virtual object in the chain of hybrid objects – just map a marker to a virtual object with ins and outs.
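A sketch of what such logical adapters might look like (all names here are hypothetical, just illustrating the idea of small pure functions sitting between two I/O points, both sides staying in the [0, 1] range):

```python
def invert(value):
    """Adapter: turn on/off into off/on, or a proportional
    control into an inverse one, staying inside [0, 1]."""
    return 1.0 - value

def threshold(value, cutoff=0.5):
    """Adapter: collapse a proportional [0, 1] value into a digital 0/1."""
    return 1.0 if value >= cutoff else 0.0

# Adapters can be chained between a source and a destination I/O point:
source_value = 0.3
destination_value = threshold(invert(source_value))  # invert gives 0.7, threshold makes it 1.0
```

Because every adapter maps [0, 1] to [0, 1], any output can feed any input, which is what keeps the chaining trivial.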

I think if we keep simple, obvious, and intuitive at the forefront of our thinking, we will be OK. I’m not saying there can’t be some extensions and options, and probably evolution over time, but making a lot of simple things that work well will be much better than a few complicated things that are hard to make work.


Ok, I totally forgot about that. So just to make it clear, the principles state that every I/O point should expect/emit values between 0 and 1? Is that correct? But what is the mode parameter for then? I’m getting a little bit confused, sorry :smile:.

Yes, I’ve been thinking about that adapter idea as well. We could provide some sort of adapter library. Maybe as some sort of “virtual I/O points” which can simply be added to a hardware interface. Or another idea: the adapters could somehow appear whenever a user draws a connection between two I/O points, so that the user could connect I/O point 1 to an adapter to I/O point 2. But that could get pretty confusing with a growing number of adapters. I think this topic requires some thought…

Ok, I decided to simply create I/O points for hue, saturation and brightness. All I/O points now work with floats in the range [0,1]. I created a pull request @valentin.

I also wrapped all of the hardware interface code into a big if-statement which checks whether the hardware interface is enabled. This avoids errors when packages required by a disabled hardware interface are not present or do not work on the server hardware.
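The idea behind that guard, sketched in Python with made-up names (the actual server code is JavaScript): an interface’s dependencies are only loaded once the interface is known to be enabled, so a missing package can only ever affect an interface you actually use.

```python
# Hypothetical sketch of the "check enabled before touching dependencies"
# pattern; interface names and the lookup table are invented for illustration.
ENABLED_INTERFACES = {"philipsHue": True, "someOtherInterface": False}

def start_interface(name, module_name):
    if not ENABLED_INTERFACES.get(name, False):
        return None  # disabled: its dependencies are never imported

    try:
        # Import lazily, so a missing package only matters when enabled.
        module = __import__(module_name)
    except ImportError as err:
        print(f"interface {name!r} is enabled but a dependency is missing: {err}")
        return None
    return module

# A disabled interface with an uninstalled dependency is simply skipped:
start_interface("someOtherInterface", "package_that_does_not_exist")
```

The key point is the lazy import inside the guard: a plain top-level import would crash the whole server even when the interface is switched off.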


I have to spend some (a lot) more time on documentation.
So far I was only documenting for designers, not for developers working on the code.
But this forum helps a lot to put all of that into writing. :smile:

When I was working on the Audi car, I figured out that there are interfaces that require increments.
Something like a rotary encoder with 10 steps, or a switch with 3 states, and so on.

I was thinking about how I can represent this kind of functionality and still honor the simplicity principle.

So I came up with the following structure:

There are different modes that are sent along with the floating point value.

Mode “f”: floating point between 0 and 1
Mode “d”: boolean (digital) 0 and 1
Mode “p”: positive step adding to the floating point value.
Mode “n”: negative step subtracting from the floating point value.

Resulting in the following API. You can find an already working implementation in the HybridObject.cpp Arduino library file.

1. write(<object>, <IOPoint>,<float>)
2. writeStepUp(<object>, <IOPoint>, optional <increments> default 1)
3. writeStepDown(<object>, <IOPoint>, optional <increments>  default 1)
4. writeDigital(<object>, <IOPoint>,<bool>)

5. <float> read(<object>, <IOPoint>)
6. <int default 0> stepAvailable(<object>, <IOPoint>, optional <increments>  default 1) 
7. <bool>  readDigital(<object>, <IOPoint>, <threshold default 0.5>)


  1. writes a normal floating point value
  2. adds a positive step to the floating point value.
    For example: with 10 increments, one call adds 0.1 to the floating point value until 1 has been reached.
    The mode indicates that a positive step happened.
  3. same as 2, just a negative step, until 0 has been reached.
  4. writes 1 or 0.


  1. reads the normal floating point value.
  2. indicates if a step was sent and whether it is negative (int -1) or positive (int 1).
    For infinite steps this indication continues even after the floating point value has reached 0 or 1.
  3. allows you to read the floating point value as a digital value using a threshold.

Since all modes are sent as a floating point value, they can all be translated into the respective other modes. Programmers can translate between different steps. The fundamental principle still holds.
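A compact way to summarize the described behaviour (a Python sketch of the semantics, not the actual HybridObject.cpp implementation; the class and method names merely mirror the API above):

```python
class IOPoint:
    """One I/O point: a [0, 1] float plus the mode flag of the last write."""

    def __init__(self, increments=10):
        self.value = 0.0
        self.mode = "f"
        self.increments = increments

    def write(self, value):
        """Mode "f": plain floating point value, clamped to [0, 1]."""
        self.value = min(max(value, 0.0), 1.0)
        self.mode = "f"

    def write_digital(self, on):
        """Mode "d": boolean (digital) 0 or 1."""
        self.value = 1.0 if on else 0.0
        self.mode = "d"

    def write_step_up(self):
        """Mode "p": add one step (1/increments) until 1 is reached."""
        self.value = min(self.value + 1.0 / self.increments, 1.0)
        self.mode = "p"

    def write_step_down(self):
        """Mode "n": subtract one step until 0 is reached."""
        self.value = max(self.value - 1.0 / self.increments, 0.0)
        self.mode = "n"

    def read(self):
        return self.value

    def read_digital(self, threshold=0.5):
        """Read the float as a digital value using a threshold."""
        return self.value >= threshold

    def step_available(self):
        """1 for a positive step, -1 for a negative one, otherwise 0."""
        return {"p": 1, "n": -1}.get(self.mode, 0)


point = IOPoint(increments=10)
point.write(0.5)
point.write_step_up()  # one of ten steps: 0.5 -> 0.6, mode "p"
print(point.read(), point.step_available(), point.read_digital())
```

Since every mode ultimately writes a plain float in [0, 1], any reader can interpret any writer, which is the translation property described above.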

It is working. But it adds a lot of complexity to the simple Arduino API.
This is why I have not explained it anywhere so far.
I think it needs a lot more discussion.

For the future I had in mind that we could add a media mode “m” for common MIME types as data load.
This could transport (stream) audio, video and other data.


I was wondering what those functions were while studying the cpp code, this helps :smiley:

So can the HybridObject server process data in byte form now? Like the values needed to control motors connected over serial.

Well, I think there are two answers to that question.

  1. Technically you can use whatever datatype you wish, but conceptually you should follow the design guidelines @valentin outlined in his post.
  2. You can easily convert the byte datatype into the required [0,1] range. A byte can represent the numbers 0 to 255. To map it into the [0,1] range you just divide your byte value by 255. To convert back, multiply by 255 and round the resulting value down.
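That conversion in code (trivial, but it shows one practical wrinkle: because of floating point rounding, b / 255 * 255 may land a hair below b, so rounding to the nearest integer is a little safer than flooring if you need an exact round trip):

```python
def byte_to_unit(b):
    """Map a byte value (0..255) into the [0, 1] range."""
    return b / 255.0

def unit_to_byte(x):
    """Map a [0, 1] float back onto 0..255, rounding to the nearest integer."""
    return int(x * 255.0 + 0.5)

# The round trip is exact for every possible byte value:
assert all(unit_to_byte(byte_to_unit(b)) == b for b in range(256))
```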

In an earlier version we used byte values, but they cannot represent many different types of values.
The 0–1 floating point can have different granularity depending on how many decimal places you use.
I found this very beautiful. :smile: