Targets other than image targets


I thought you could turn the UI with some gesture, but instead you simply touch the UI and turn the iPhone. I think I'm too used to manipulating things on the phone with gestures. And, well, I expected that you had to directly manipulate the UI to position it, but instead you "fix" the UI's position by touching it and move the surrounding "coordinate system" by moving the phone.


Do you think that the interface is intuitive?
If you used gestures, you would need separate gestures for

  1. X and Y translation
  2. Z translation
  3. Rotation around X and Y

By locking the UI to the phone you have all of them at hand in one motion.
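To make the idea concrete, here is a rough sketch of how such a "lock" could work, assuming poses are plain 4x4 homogeneous matrices. The function names (`lock_pose`, `dragged_pose`) are mine for illustration, not from the actual RealityEditor code:

```python
import numpy as np

def lock_pose(camera_world, ui_world):
    # Touch-down: capture the UI's pose relative to the camera.
    return np.linalg.inv(camera_world) @ ui_world

def dragged_pose(camera_world, ui_in_camera):
    # While touched: the UI rides along with the camera, so one
    # phone movement changes X/Y/Z position and rotation together.
    return camera_world @ ui_in_camera

def translation(x, y, z):
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

# Touch the UI (0.5 m in front, 1 m to the side), then move the phone:
rel = lock_pose(np.eye(4), translation(1.0, 0.0, 0.5))
moved = dragged_pose(translation(0.2, 0.0, -1.0), rel)
# moved[:3, 3] is now (1.2, 0.0, -0.5): the UI followed the phone.
```

Because the relative pose is fixed while you touch the UI, a single physical motion of the phone covers all five degrees of freedom at once, which is exactly why no gesture vocabulary is needed.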

(This is actually an interesting discussion we can have about gestures vs. real-world interaction.)


After you have understood that you have to move the phone to position the UI, it's quite intuitive. But first you have to understand that, and I think it is not obvious that you can "lock" the position of the UI by touching it. In the RealityEditor application the camera is used as a kind of "glass pane": when you look at the screen, you see the real world behind it. When I, as a real-world person, move my point of view, a real-world object's position stays fixed, and because I expected the augmented UI elements to behave like real-world objects, I didn't even get the idea to move the phone to position the UI :slightly_smiling: Additionally, I was used to positioning the UI element by touching and moving it, from the time before this new feature was added. This new mode sort of reverses the positioning paradigm: instead of moving the UI, you now have to move the phone.

You are right about the gestures. That wouldn't be intuitive and would be more complicated; you would have to explain the gestures somewhere. So I think your solution is quite good, but it would be even better if you could immediately understand that you have to move the phone to position the UI. I have no idea how to make it more obvious, though :slight_smile:


Maybe some short video explaining it? :smiley:

Other than that, you use the motion of the phone in another context: when you draw a line from one data point to another data point of an object outside the screen, you lock one end on the screen and move the phone to reach the other data point.

I think it is interesting to think about the vocabulary of gestures used and how to keep it minimal.
I think "lock in and move the phone through space" is a powerful one.


Yes, a video would do :slight_smile: But it would be even better if you could understand it right away, without any instructions from outside the actual application.

I've got to admit I don't get this paragraph :wink: Are you saying that you think the "lock feature" (we really have to give it a name :slight_smile:) is similar to drawing a connection between IOPoints? I don't see that.

Yes, I agree. This "lock feature" is really cool; once you have understood it, you can do a lot with it. I will experiment with it over the next couple of days and try to use it for my cylindrical lampshade marker. I'm excited to see whether I can get the UI positioned as I wish. I only gave it a quick try today.

Yeah, and I think it's always good to discuss which gestures we have, which gestures we actually need, and which gestures are missing or could be improved or replaced. I think it would be cool to get some feedback from people who only use the RealityEditor and are not so involved in developing it. That could open a new perspective on things.