
My engineers claim gestures on multi-touch resistive do not work nearly as well as they do on projected capacitive (PCT) used in a variety of consumer devices, like the iPhone – why is this? Have there been any recent breakthroughs with resistive multi-touch? I would appreciate any new input on this subject.

Hi Alex:

When you compare iPhone/iPad projected capacitive (also called PCT or P-Cap) to any other (even identical) projected capacitive sensor, you may not find the performance to be as good as Apple's product. How can this be? Because Apple has had a really big head start (as in years). You and yours are playing catch-up, and it will take a while to integrate the prior art (yes, Apple did not invent multi-touch) with the new to achieve the same thing.

Here is a good example: using a multi-touch demo, you can use the pinch gesture to make a picture really small – so small that you will not be able to "catch" the corners and expand it again; it will stay tiny until you reset the program. Apple has anticipated the picture getting too small, so their software will accept nearby fingers, "guess" that the user wants to expand that photo, and do it. There is a lot of anticipation built into the iXX software that makes it better than your stuff. Touch Guy is a hardware person, so you can guess that he will point the blame finger at the software folks.
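The "anticipation" described above can be sketched in a few lines. This is a hypothetical illustration, not Apple's code: the scale is clamped to a minimum so the photo never shrinks past catchable, and once it is at that minimum, any finger landing near the photo is treated as a request to expand it. The constant values and function names are assumptions for illustration.

```python
# Hypothetical sketch of gesture "anticipation": clamp pinch scale to a
# minimum, and accept nearby fingers as an expand request on a tiny photo.
MIN_SCALE = 0.10          # never let the photo shrink below 10% of full size
NEAR_RADIUS = 40.0        # px; touches this close count as "on" the photo

def apply_pinch(scale, pinch_factor):
    """Apply a pinch gesture, clamping so the photo stays catchable."""
    return max(MIN_SCALE, scale * pinch_factor)

def wants_expand(touch, photo_center, scale):
    """If the photo is already at minimum size, guess that a nearby
    finger means the user wants to expand it."""
    if scale > MIN_SCALE:
        return False
    dx = touch[0] - photo_center[0]
    dy = touch[1] - photo_center[1]
    return (dx * dx + dy * dy) ** 0.5 <= NEAR_RADIUS
```

Without the `wants_expand` guess, a demo that only tests "is the finger inside the photo's bounds?" is exactly the one that gets stuck with an uncatchable thumbnail.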

Now to your original question… MARS (Touch International's multi-touch analog resistive technology) differs only in the pressure required to enable the gestures. Otherwise, the gestures should be the same and the performance the same as projected capacitive – and better, of course, with input from pens and pencils. Keyboard entry is noticeably better with sure-footed MARS than with projected capacitive, which often seems to "guess" wrong about which key you wanted (auto-correct to the rescue).
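The one difference named above – a pressure threshold in front of otherwise identical gesture logic – can be sketched like this. The threshold value and function names are illustrative assumptions, not Touch International's firmware:

```python
# Illustrative sketch: on a resistive sensor, a touch must exceed a
# pressure threshold before it feeds the gesture recognizer; the
# recognizer itself is the same as for projected capacitive.
PRESSURE_THRESHOLD = 0.15   # normalized 0..1; resistive needs real force

def active_touches(raw_touches):
    """Keep only touches pressed hard enough to register on resistive."""
    return [t for t in raw_touches if t["pressure"] >= PRESSURE_THRESHOLD]

def recognize_pinch(touches):
    """Same two-finger gesture logic either way, once touches pass the gate."""
    if len(touches) < 2:
        return None
    (x1, y1), (x2, y2) = touches[0]["pos"], touches[1]["pos"]
    spread = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return {"gesture": "pinch", "spread": spread}
```

A light graze that a capacitive sensor would happily (and sometimes wrongly) report simply never makes it past `active_touches` here – which is also why keyboard entry feels more sure-footed.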

For more info on multi-touch, check out Touch International’s Putting the ‘Touch’ in Multi-Touch White Paper.

Touch Guy

Touchless Gestures – the next frontier of touch technology

Big changes in touch and interactivity are coming quickly (see the Top 5 Touch Trends segment), and the concept of "touchless gestures," or enhanced motion recognition, has the potential to change a lot of what we know about touch. The good news for touch screen manufacturers is that this touchless technology is a long way off from claiming any sizable share of the marketplace, for a few reasons:

1)  Let’s face it, touchless gestures are not yet practical for many touch screen applications.
2)  These emerging technologies are still largely in their infancy.
3)  The market hasn’t found a good place for them yet.

But touchless gestures are a cool idea and are, no doubt, part of our interactive future. The popularity of Nintendo's Wii has demonstrated the need for enhanced motion recognition and digital interaction with display devices. And now Sony, with the Move, and Microsoft, with Kinect, have signaled a substantial response to the Wii, enabling much more sophisticated interactive capabilities [Mark Fihn, Top 5 Touch Trends].

The video below from the Virtopsy Project shows that there is, in fact, huge potential for these motion recognition devices, and demonstrates how Microsoft's Kinect can be used to control a medical PACS (picture archiving and communication) system. I don't think the technology is quite where it needs to be yet, but the Virtopsy Project presents some real food for thought.

See the Virtopsy Project in motion: http://www.youtube.com/watch?v=b6CT-YDChmE

Signing out.

Touch Girl