My engineers claim that gestures on multi-touch resistive screens do not work nearly as well as they do on the projected capacitive (PCT) screens used in a variety of consumer devices, like the iPhone. Why is this? Have there been any recent breakthroughs with resistive multi-touch? I would appreciate any new input on this subject.
When you are comparing iPhone/iPad projected capacitive (also called PCT or P-Cap) to any other (even identical) projected capacitive sensor, you may not find the performance to be as good as Apple’s product. How can this be? It is because Apple has had a really big head start (as in years). You and yours are playing catch-up, and it will take a while for you to integrate the prior art (yes, Apple did not invent multi-touch) with the new to achieve the same result.
Here is a good example: Using a multi-touch demo, you can use the pinch gesture to make the picture really small, so small that you will not be able to “catch” the corners and expand it; it will stay really small until you reset the program. Apple has anticipated the picture getting too small, so their software will accept nearby fingers, “guess” that the user wants to expand that photo, and expand it. There is a lot of anticipation in the iXX software that makes it better than your stuff. Touch Guy is a hardware person, so you can guess that he will point the blame finger at the software folks.
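To make the point concrete, here is a minimal sketch of the kind of “anticipation” described above. All names and threshold values are hypothetical, not Apple’s actual implementation: the idea is simply to clamp pinch-zoom so an image can never shrink past a usable minimum, and to accept touches that land near a tiny image as grabbing it.

```python
# Hypothetical gesture-handling sketch (values are illustrative assumptions).
MIN_SCALE = 0.25   # floor so the image never becomes un-grabbable
MAX_SCALE = 8.0    # ceiling so it never explodes off-screen

def apply_pinch(scale, pinch_factor):
    """Apply a pinch gesture but clamp the result to a usable range."""
    return max(MIN_SCALE, min(MAX_SCALE, scale * pinch_factor))

def hit_test(touch, image_rect, slop=40):
    """Accept touches within `slop` pixels of the image, so a very
    small image can still be 'caught' and expanded."""
    x, y, w, h = image_rect
    tx, ty = touch
    return (x - slop <= tx <= x + w + slop and
            y - slop <= ty <= y + h + slop)
```

With a clamp like this, even an aggressive pinch leaves the picture at a size the hit test (with its generous slop) can still catch — which is the behavior the iPhone demo shows and the generic demo lacks.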
Now to your original question… MARS differs only in the pressure required to enable the gestures. Otherwise, the gestures should be the same and the performance the same as projected capacitive, and better, of course, with input from pens and pencils. Keyboard entry is noticeably better with sure-footed MARS than with projected capacitive, which often seems to “guess” wrong about which key you wanted (auto-correct to the rescue).
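The pressure distinction above can be sketched in a few lines. This is a hypothetical illustration, not MARS firmware: a resistive sensor reports pressure with each point, and a touch only counts toward a gesture once it crosses a threshold; after that filter, the gesture pipeline can be identical to a capacitive one.

```python
# Hypothetical sketch: pressure-gated touch reporting on a resistive
# sensor. The threshold value is an assumption for illustration only.
PRESSURE_THRESHOLD = 0.15  # normalized 0..1

def filter_touches(raw_touches):
    """Keep only touch points pressed hard enough to register.

    raw_touches: list of (x, y, pressure) tuples from the sensor.
    Returns (x, y) points to feed the same gesture code used for
    capacitive input.
    """
    return [(x, y) for (x, y, pressure) in raw_touches
            if pressure >= PRESSURE_THRESHOLD]
```

Everything downstream of this filter — pinch, rotate, two-finger scroll — can be shared with the capacitive path, which is why the gestures themselves should behave the same.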
For more info on multi-touch, check out Touch International’s “Putting the ‘Touch’ in Multi-Touch” white paper.