Short story: I have full multitouch scaling and panning working in specially-developed apps on a stock T-Mobile G1 Android phone with a change to just one system class file (i.e. with no modifications to the kernel whatsoever).
MultiTouch running on the G1 without kernel modification (red and green circles are drawn where touch points are detected)
Long story: read on for full details, including a video of the action, and full source code so that you can run this yourself (assuming you are a developer and understand the risks of doing this — this is NOT yet for end-users).
Shameless plug: if you like or use this multi-touch work for Android, please donate to support continued development of awesome features for Android!
For those with ADD, or who just don't want to read the gory details, you can watch the video on YouTube (it is also embedded below).
THE GORY DETAILS
Touch screens and tinfoil hats
When the T-Mobile G1 / HTC Dream was released, it only supported single-touch rather than iPhone-style multitouch. Theories as to the lack of multitouch included hardware limitations, software support for it not being ready in the Android stack, and the threat of being devoured by Apple's patent lawyers. Dan Morrill, a Google developer advocate for Android, made statements that the device was single-touch and the Android stack had no support yet for multitouch, but that Google would be willing to work together with handset manufacturers to develop multitouch software support when the hardware manufacturers were ready to release a multitouch handset. Eventually even one of HTC's chiefs chimed in that the Dream was only ever designed to be a single-touch device.
Recently though, videos started surfacing on the net showing various experiments people were performing on ListViews with two fingers, which seemed to indicate that the screen supported multiple touch points — however, the results of these tests were still pretty inconclusive. Finally, after the source of the Android stack was released, a developer, Ryan Gardner (a.k.a. RyeBrye), posted on his blog that he had managed to locate some commented-out lines in the kernel driver indicating that multitouch was indeed possible on these devices — and he hacked together a demo of two-fingered drawing that proved it.
To use RyeBrye's solution, you have to recompile your phone's kernel. It works by removing the comments around some debug statements (lines 132-151 of the Synaptics I2C driver, synaptics_i2c_rmi.c) that dump motion events out to a logfile. He then wrote a user interface to read the logfile and draw dots on the screen.
Google, of course, continued to remain silent on the multitouch issue, and conspiracy theories grew thicker…
Enabling multitouch on the G1, the real way
RyeBrye did a great service to the Android hacker community by demonstrating that the screen is multitouch-capable. However there are some real limitations to his approach (which he fully acknowledged), such as having to recompile your kernel and having to get at the events by parsing a logfile. Also it looks like nobody yet has picked up the ball and turned his work into a working system.
Actually, it turns out that if you read a little further down in the driver code (lines 187-200 of synaptics_i2c_rmi.c), you’ll notice that you don’t need to recompile your kernel at all to get multitouch working on the G1 — the kernel driver in fact already emits multitouch information! The driver emits ABS_X, ABS_Y and BTN_TOUCH values for position and up/down information for the first touchpoint, but also emits ABS_HAT0X, ABS_HAT0Y and BTN_2 events for the second touchpoint. Where are these events getting lost then?
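For reference, these are the raw Linux input event codes involved. The hex values are the standard definitions from linux/input.h; the Java class and constant names below are purely my own illustration and do not exist in the Android source tree.

    // Raw Linux input event codes reported by the Synaptics touchscreen driver.
    // Hex values are the standard linux/input.h definitions; this class is purely
    // illustrative and is not part of the Android source.
    public final class SynapticsEventCodes {
        public static final int EV_KEY         = 0x01;   // event type: button up/down
        public static final int EV_ABS         = 0x03;   // event type: absolute axis

        // First touch point (already handled by the stock Android stack)
        public static final int ABS_X          = 0x00;
        public static final int ABS_Y          = 0x01;
        public static final int BTN_TOUCH      = 0x14a;

        // Second touch point (emitted by the driver, but lost further up the stack)
        public static final int ABS_HAT0X      = 0x10;
        public static final int ABS_HAT0Y      = 0x11;
        public static final int BTN_2          = 0x102;

        // Reported "tool width" in [0,15]; hijacked later on to carry multitouch flags
        public static final int ABS_TOOL_WIDTH = 0x1c;

        private SynapticsEventCodes() {}
    }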
I pulled apart the Android stack and scoured it for the location where these events are passed through to Dalvik via JNI. It turned out to be very difficult to pinpoint where input events were received and MotionEvent objects populated (because they are processed on an event queue, the objects are recycled rather than created, and it all happens in non-SDK code — egrep wasn't much help either). The exact point at which the multitouch information is lost turns out to be $ANDROID_HOME/frameworks/base/services/java/com/android/server/KeyInputQueue.java. This class is the only code running on Dalvik that ever gets to see the raw device events — and it promptly discards ABS_HAT0X, ABS_HAT0Y and BTN_2. (It doesn't seem to do so intentionally or maliciously; it just ignores anything it doesn't recognize, and it is not coded to recognize those event symbol types.)
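To give a feel for the kind of change involved, here is a rough sketch of the shape of the patch — it is not the actual KeyInputQueue.java code, and TouchState, RawEventSketch and handleRawEvent are names I made up. The point is simply that the queue must capture the second-finger axes and button instead of ignoring them (constants are from the previous snippet):

    // Rough sketch of the shape of the KeyInputQueue change; not the actual AOSP code.
    class TouchState {
        int x1, y1; boolean down1;   // first finger (already tracked by the stock code)
        int x2, y2; boolean down2;   // second finger (previously thrown away)
    }

    class RawEventSketch {
        static void handleRawEvent(TouchState st, int type, int code, int value) {
            if (type == SynapticsEventCodes.EV_ABS) {
                if      (code == SynapticsEventCodes.ABS_X)     st.x1 = value;
                else if (code == SynapticsEventCodes.ABS_Y)     st.y1 = value;
                else if (code == SynapticsEventCodes.ABS_HAT0X) st.x2 = value;  // was ignored
                else if (code == SynapticsEventCodes.ABS_HAT0Y) st.y2 = value;  // was ignored
            } else if (type == SynapticsEventCodes.EV_KEY) {
                if      (code == SynapticsEventCodes.BTN_TOUCH) st.down1 = (value != 0);
                else if (code == SynapticsEventCodes.BTN_2)     st.down2 = (value != 0);  // was ignored
            }
        }
    }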
Now we’re getting somewhere. I recompiled the whole Android stack and tested detecting these events, and sure enough, I could now detect the second touchpoint — without recompiling the kernel (but, unfortunately, after having to modify part of the Android Java stack).
Two touch points being detected, with blue bars indicating the column and row of each touch point
Implementing functional multitouch on the G1 in a backwards-compatible way
I wanted to find a way to pass multitouch events through to user applications that was as minimally invasive as possible, i.e. that didn't require a major replumbing of the whole MotionEvent system, and that was backwards compatible with single-touch applications. It turns out that there is a field in MotionEvent, "size", that does not appear to be used currently. The size field is actually populated from the ABS_TOOL_WIDTH attribute emitted by the Synaptics driver — however the value seems to be ignored by the Android UI, and the values it reports look pretty chaotic. I suspect the driver actually uses it to represent some attribute of the tool used on similar Wacom-style tablet devices.
Anyway, the driver specifies that ABS_TOOL_WIDTH can be in the range [0,15] (and this is mapped to the range [0.0,1.0] when it is placed in the size field), so we have four spare bits in each motion event that are unused. I modified KeyInputQueue.java to generate either one or two motion events depending on whether or not BTN_2 was down, and marked each event with a bit (bit 0) signifying whether the event was for the first or the second touch point. I then used two more bits to attach the up/down states of BTN_TOUCH and BTN_2 to every motion event, so that the individual touch state of each finger can be read from either event type. Finally, for backwards-compatibility purposes, I set the button-down state of each generated event to (BTN_TOUCH || BTN_2); this keeps the semantics of the button-down status of MotionEvents consistent with what the event pipeline expects, specifically so that the up/down status doesn't alternate between emitted events.
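As a concrete sketch of that bit layout (the class, constant and method names here are mine, not taken from the patch or the demo code), the packing and unpacking look roughly like this, assuming the raw 0..15 width maps linearly onto getSize():

    // Sketch of the flag bits packed into the 4-bit ABS_TOOL_WIDTH value, which the
    // framework maps linearly onto MotionEvent.getSize() in [0.0, 1.0]. Illustrative only.
    class MultiTouchBits {
        static final int BIT_SECOND_POINT = 1 << 0;  // this event describes the second finger
        static final int BIT_BTN_TOUCH    = 1 << 1;  // first finger is currently down
        static final int BIT_BTN_2        = 1 << 2;  // second finger is currently down

        // Stack side: build the synthetic tool-width value (0..15) for one emitted event.
        static int packWidth(boolean secondPoint, boolean btnTouch, boolean btn2) {
            return (secondPoint ? BIT_SECOND_POINT : 0)
                 | (btnTouch    ? BIT_BTN_TOUCH    : 0)
                 | (btn2        ? BIT_BTN_2        : 0);
        }

        // App side: recover the flags from getSize(), assuming the linear 0..15 mapping.
        static int unpackWidth(float size) {
            return Math.round(size * 15.0f) & 0x0f;
        }
    }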
The result is an Android stack that behaves normally for single-touch, generates events that can be separated into two streams by multi-touch-aware applications, and at worst only generates a series of events that appear to jump back and forth between two points on the screen when two fingers are touched to the screen in a single-touch application — e.g. if you are using a standard listview and hold down two fingers, the list will just jump up and down between the two fingers as you move them around.
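For example, a multitouch-aware View might separate the interleaved stream back into two touch points roughly like this. This is only a sketch built on the bit layout above; trackFinger1 and trackFinger2 are hypothetical helpers, and only onTouchEvent, getX, getY and getSize are real Android API calls.

    // Sketch of app-side stream separation; not the demo code.
    import android.content.Context;
    import android.view.MotionEvent;
    import android.view.View;

    class MultiTouchView extends View {
        MultiTouchView(Context context) { super(context); }

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            int flags = MultiTouchBits.unpackWidth(event.getSize());
            if ((flags & MultiTouchBits.BIT_SECOND_POINT) != 0) {
                trackFinger2(event.getX(), event.getY(), (flags & MultiTouchBits.BIT_BTN_2) != 0);
            } else {
                trackFinger1(event.getX(), event.getY(), (flags & MultiTouchBits.BIT_BTN_TOUCH) != 0);
            }
            return true;
        }

        private void trackFinger1(float x, float y, boolean down) { /* update first touch point */ }
        private void trackFinger2(float x, float y, boolean down) { /* update second touch point */ }
    }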
VIDEO OF WORKING MULTITOUCH ON THE G1
Here is a video of a multitouch application that I wrote to exercise the modified Android stack.
The REAL reason for no multitouch on the G1 at time of release
Note that I mention in the video that the multitouch screen for some reason "was disabled at the time of release". I do not at all believe this was an intentional curbing of the phone's functionality — it just (1) was not in the design specs to have this feature for the first phone release, (2) would not have been ready in time (the hardware support for it is not polished, and the software support was not started for the G1), and (3) was not central to the core mission of what Android was trying to achieve. Honestly, having looked through some of the ENORMOUS mass of source code in the Android stack, I have no idea how it was all pulled together in time for release, or how the release happened with so few 1.0 problems. The Android software stack is incredibly well engineered and well integrated — it is a testament to some amazing engineering and project management that all the pieces could be developed separately and finally brought together into a single working product in such a short time.
As is probably clear from the video, there are some technical challenges to making multitouch work on this hardware. The main technical problem is that the Synaptics screen is not a true 2D multitouch device. It is a "2x1D" device: it contains two sets of orthogonal sense wires and firmware that analyzes the resulting two 1D projection histograms of capacitance across the screen. This leads to a number of problems, in approximately decreasing order of severity:
An example of the touch points crossing over each other
An example of "snapping" when two points get too close together horizontally or vertically (regardless of their separation in the other dimension)
These problems, especially the first two, are serious for general multitouch usage. This is almost certainly one of the biggest considerations behind the decision to not support multitouch on the G1. (And there is probably a financial reason, patent worries or other. There’s always money involved in anything you don’t understand…) The biggest problem, the inability to distinguish between forward and reverse diagonal configurations, means that general multitouch gestures involving rotations simply won’t work in the general case. (But see motion estimation workarounds below.)
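To make the diagonal ambiguity concrete, here is a toy illustration (not from the demo code): given the two column readings and two row readings that the panel reports, two different finger placements are equally consistent with the data, so the true pairing simply cannot be recovered.

    // Toy illustration of the 2x1D ambiguity; not part of the demo code.
    class PairingAmbiguity {
        /** Returns both finger placements consistent with the reported projections:
         *  [0] = "forward diagonal" pairing, [1] = "reverse diagonal" pairing.
         *  Nothing in the event stream distinguishes them, which is why general
         *  rotation gestures cannot work on this hardware. */
        static int[][][] candidatePairings(int xA, int xB, int yA, int yB) {
            return new int[][][] {
                { { xA, yA }, { xB, yB } },   // forward diagonal
                { { xA, yB }, { xB, yA } },   // reverse diagonal
            };
        }
    }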
The good news
Actually, it turns out that you don't need rotation gestures for most multitouch operations that people would be interested in, because we work mostly with axis-aligned documents — maps, word-processing docs, web pages… and as long as your fingers are not too close together in either axis, you can get all the information and resolution you need for iPhone-worthy zooming and scrolling from the G1's hardware events.
Scaling a map (at least, the image of a map) -- note that the points have inadvertently swapped, but the scale factor is still chosen correctly
Additionally the G1’s touch screen has a slight advantage for two-fingered (axis-aligned) touch gestures, such as sliding two fingers down or across the screen: if the two touch points are almost aligned in one axis, it locks them into alignment, making two-fingered gesture detection more natural (ok, that’s a stretch 🙂 )
Scaling an image, with points snapped horizontally. Scale factor is not affected too dramatically by point snapping, because the distance between snapped points and actual finger positions is fairly similar.
As is demonstrated in the video, the system should work fine for zooming and panning maps and web pages.
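Here is a minimal sketch of the pinch-zoom arithmetic (the class and method names are mine, not from the demo code): the scale factor depends only on the distance between the two points, which is why even the point-swapping shown above does not break it.

    // Minimal pinch-zoom sketch; not the demo code.
    class PinchTracker {
        private float startDist = -1f;

        /** Feed the latest positions of both touch points; returns the current zoom factor. */
        float update(float x1, float y1, float x2, float y2) {
            float dist = (float) Math.hypot(x2 - x1, y2 - y1);
            if (startDist <= 0f) startDist = dist;  // pinch just started
            return dist / startDist;                // > 1 zoom in, < 1 zoom out
        }

        /** Call when either finger lifts, so the next pinch starts a new baseline. */
        void reset() { startDist = -1f; }
    }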
It turns out that the multitouch events generated by the driver are very noisy (i.e. not well tested or polished). I had to do a lot of complicated smoothing of event noise to get the system usable to this level. In addition to the loss of accuracy around axis crossings described above, quite a number of events report wildly inaccurate X and Y coordinates just before and just after a change in up/down state. There is still a little more tuning and polishing that needs to be done, but the code is below if you want to play with it and improve it.
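To give a flavor of the kind of filtering needed, here is a minimal sketch of one common approach (spike rejection plus an exponential moving average). It is not the actual smoothing in the demo code, and the threshold and smoothing constants are made-up values.

    // Minimal smoothing sketch; not the demo's actual filter. Constants are made up.
    class TouchSmoother {
        private static final float MAX_JUMP_PX = 150f;  // reject single-sample spikes larger than this
        private static final float ALPHA       = 0.5f;  // exponential-moving-average strength
        private float lastX = Float.NaN, lastY = Float.NaN;

        /** Returns a smoothed {x, y}, rejecting implausible jumps around up/down transitions. */
        float[] filter(float x, float y) {
            if (!Float.isNaN(lastX)) {
                if (Math.hypot(x - lastX, y - lastY) > MAX_JUMP_PX) {
                    return new float[] { lastX, lastY };  // discard the spike, hold last position
                }
                x = lastX + ALPHA * (x - lastX);
                y = lastY + ALPHA * (y - lastY);
            }
            lastX = x;
            lastY = y;
            return new float[] { x, y };
        }
    }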
What can be done to fix or work around the remaining problems
The system could be made more natural to use by building in motion estimation (inertia and damping) in the vicinity of the discontinuities where touch points cross over each other's axes, so that if the user is in fact performing a rotation gesture by moving strongly towards the axis-crossing point, events will continue to be generated that smoothly cross that point. Of course there is still the potential for error here if the user stops or reverses direction.
Getting and running the code
So I mentioned that you wouldn't have to recompile your kernel… but you still have to recompile one system class of the Android Java stack, or all you can do with the demo code is operate one touch point as normal (i.e. just drag, not stretch).
Unfortunately the version of the Android stack that made it onto the G1 was derived from a snapshot of the code taken quite a while before Android 1.0 was released, so you can't just patch the one class, recompile that class' .jar file, and re-install a single .jar on your phone — a .jar file built from the publicly-available Android 1.0 source (or, worse, Cupcake/1.1) is unlikely to work with the rest of the .jar files on your phone. So for now you need to build the entire 1.0 stack with the patch and then flash your entire phone.
Note the following:
Steps to follow:
There is considerable work that could be done to polish this and tweak it for optimal usage. A lot of the demo code (event-noise smoothing etc.) could be moved into the Android stack, and motion estimation could be added to make things smoother. There are still occasional glitches when you lift one finger off the screen after a multitouch operation, as well as when one finger hits the edge of the screen (due to some edge logic in the low-level driver, I think).
Getting this patch accepted upstream is probably unlikely, because ultimately this is a hack, especially the hijacking of the MotionEvent size field — but the actual impact on single-touch applications is very low: just some weirdness/jumping when you have two fingers on the screen. Note though that the G1's default software stack has its own weirdness here (as the very first grainy "we think there's multitouch on the G1" YouTube videos showed), partly because of the hardware event noise when you lift one finger during a multitouch gesture.
I suggest someone write a .odex editor tool that can selectively excise one class from a .odex file and replace it with another Dalvik-compiled class — then "all" you would need to do to get multitouch on your phone would be to get root and then patch your system. Everything else should keep working as normal.
Ideally someone would then graft this patched .odex file into JesusFreke's RC30 image, so that all you had to do was reflash your phone and you'd have a phone that is fully working, but with multitouch support too. (At the moment it's either-or…)
I also want to put out a challenge for someone to build a MultiTouch frontend for Google Maps and WebView. In the demo, I just scale static images of a map and a webpage.
You can also use my code if you need a testbed to start developing your own multitouch software, so that you’re ready for the day that multitouch is officially supported by Google.
I am unlikely to do any more with this code myself; I just had to show it could be done 🙂
Final word
Please don't sue me, Apple.
That's it! Have fun. Please discuss among yourselves in this Google Groups thread.
— Luke Hutchison, 2009-01-10
(If you want to know where to send the check 😉 , you can email me at the domain name “mit dot edu” preceded by: my first name, then dot, then the first five letters of my last name, then “at”. Please don’t ask me how to get this installed and working on your phone though, it’s not ready for end-users and I cannot respond to individual queries about installation.)
I just realized that I can patch the event system to work in a way that even Google would probably be happy with, i.e. in a way that will probably allow this patch to make it upstream. All I need to do is send *one* MotionEvent for multitouch events, with the X and Y coords set to the midpoint between the touch points, and the size field of the event set to the distance between the touch points. The size would be zero for single-touch events. This would allow the application writer to simply detect event.getSize()>0.0f, and initiate a scale operation. This also dramatically simplifies application code, because you don’t have to deal with two events, and cuts down on the number of events sent through the event pipeline.
This solution only allows for scaling, not rotation (I would need an additional event field “angle” to pass along rotation information), but it is sufficient to get multitouch working in all applications *today*, in a way that doesn’t break anything at all — and rotation won’t work well on this touch screen anyway, as the video points out.
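Under that scheme, application code would reduce to something like the following sketch. Only onTouchEvent, getX, getY and getSize are real MotionEvent/View calls; zoomAround, panTo and startSeparation are hypothetical names for what the app would do, and the scheme itself is a proposal rather than existing behavior.

    // Sketch of application code under the proposed single-event scheme; not real API behavior yet.
    import android.content.Context;
    import android.view.MotionEvent;
    import android.view.View;

    class MidpointZoomView extends View {
        private float startSeparation = -1f;  // hypothetical field: separation when the pinch began

        MidpointZoomView(Context context) { super(context); }

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            float separation = event.getSize();   // proposed: 0.0f means ordinary single touch
            if (separation > 0.0f) {
                if (startSeparation <= 0f) startSeparation = separation;
                // Two fingers: (x, y) is the midpoint, so zoom around it by the separation ratio.
                zoomAround(event.getX(), event.getY(), separation / startSeparation);
            } else {
                startSeparation = -1f;
                panTo(event.getX(), event.getY());  // plain single-touch drag
            }
            return true;
        }

        private void zoomAround(float cx, float cy, float scale) { /* apply zoom to content */ }
        private void panTo(float x, float y) { /* apply pan to content */ }
    }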
Since it is almost impossible to build a working Android Java stack right now (the source snapshot that made it onto the G1 was never released as a unit to git; it was taken from the state of the source several months before the 1.0 source release and then polished extensively), it turns out that the simplest place to implement this is actually in the C kernel driver. (It is easy to get unmodified Android running on a modified kernel.) Then you would only have to put together a flashable system image with a custom kernel and an *uncustomized* Java stack taken directly from the G1 (i.e. RC30), and the chance of things breaking is much lower. This would also be a lot easier to redistribute, so that we can get lots of people out there with working multitouch on their phones and start pushing multitouch apps out to the market — without waiting for Google to upstream a patch (or for T-Mobile to pick up Google's patch and OTA it, which may take until the next ice age).
I will cook up a kernel patch soon and see if JesusFreke will do a system image release with my patch.