libGDX test automation through Input abuse

I was browsing the source of the newest libGDX update and came across its remote input tools, RemoteInput and RemoteSender. It looks like they were added in early 2011, but they're totally new to me. Anyway, it's a very slick little pair of classes that lets you run a proxy on your phone and forward the phone's events to your libGDX app on your desktop. That lets you keep the very fast edit/compile/crash cycle on your desktop without losing the ability to test phone-only input like the compass, accelerometer, and multi-touch.

Anyway, that isn't (yet) what I want it for. I've been thinking about how to automate testing of my libGDX app, and realized I could use this approach to save and play back input events. At first I thought I might hack the remote end to read events and send them over a loopback network connection, but I quickly realized it would be simpler to build something similar that injects events by reading them from a file instead of off the network.

Both the remote input tools and my new InputGenerator work on the same handy property of libGDX, which lets you replace the input management system trivially: just create a new input manager and stuff it in as the default.

fakeInput = new InputGenerator(new StringReader("1\n0 99\n1 99\n1000\n"));
Gdx.input = fakeInput;

You need a bunch of code in your new input manager to maintain the (fake) input system's state. Thankfully Mario already wrote all of that for the remote input tools, and it was easy to crib from.
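The state to track is mostly a handful of per-pointer arrays and flags. Here's a minimal sketch of that bookkeeping; the class and field names are my own for illustration, not libGDX's actual internals:

```java
// Minimal sketch of the state a fake input system must maintain.
// Names are illustrative, not copied from libGDX.
public class FakeInputState {
    public static final int MAX_POINTERS = 20;

    private final int[] touchX = new int[MAX_POINTERS];
    private final int[] touchY = new int[MAX_POINTERS];
    private final boolean[] touched = new boolean[MAX_POINTERS];

    // Apply a touch-down event: record the position and mark the pointer active.
    public void touchDown(int pointer, int x, int y) {
        touchX[pointer] = x;
        touchY[pointer] = y;
        touched[pointer] = true;
    }

    // Apply a touch-up event: the pointer keeps its last known position.
    public void touchUp(int pointer, int x, int y) {
        touchX[pointer] = x;
        touchY[pointer] = y;
        touched[pointer] = false;
    }

    public boolean isTouched(int pointer) { return touched[pointer]; }
    public int getX(int pointer) { return touchX[pointer]; }
    public int getY(int pointer) { return touchY[pointer]; }
}
```

A real implementation would also feed each event to the registered InputProcessor, but the state-holding part really is this mundane.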

Instead of the current binary format used to send events across the network, I chose a very simplistic text format for the saved event logs:

<event> <eventArgs ...>

where every element is just an integer. The file format isn't very readable, but it's reasonable to edit or fix it manually if necessary.
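For example, a short recorded session might look like this (the event codes are illustrative, not the actual numbering my encoder uses):

```
0 0 120 340
2 0 130 344
1 0 130 344
```

Reading top to bottom, and assuming 0/2/1 encode touch-down/drag/touch-up: pointer 0 goes down at (120, 340), drags to (130, 344), then lifts.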

To generate an event log, I added some log statements to my existing InputProcessor that write the events in the above format to a file. This is an easy way to record a session for later playback. (Or as a basis for manually editing the file.)
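The recording side is really just string formatting. A sketch of the kind of helper I mean (the class name and event codes are hypothetical):

```java
import java.io.PrintWriter;

// Formats input events as "<event> <eventArgs ...>" lines, matching the
// playback format. The numeric event codes are illustrative; use whatever
// your decoder expects.
public class EventLogger {
    public static final int TOUCH_DOWN = 0;
    public static final int TOUCH_UP = 1;

    private final PrintWriter out;

    public EventLogger(PrintWriter out) {
        this.out = out;
    }

    // Call this from your InputProcessor callbacks, e.g.
    // log(TOUCH_DOWN, pointer, x, y) from touchDown().
    public void log(int event, int... args) {
        StringBuilder line = new StringBuilder().append(event);
        for (int a : args) line.append(' ').append(a);
        out.println(line);
    }
}
```

Calling `log(TOUCH_DOWN, 0, 120, 340)` emits the line `0 0 120 340`, which the playback side can read straight back with no translation step.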

Since I just record absolute touch coordinates, the screen size at replay time needs to match the screen size at recording time. This should be possible to enforce (or to work around by recording and replaying screen-size changes, or by scaling the recorded events), but I haven't implemented any of that yet. It should also be possible to play back events that only make sense on a physical device (like accelerometer or compass events), but I haven't needed that yet. Android keys like MENU and BACK are also currently ignored.
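The scaling workaround would be simple integer arithmetic. Something like this hypothetical helper, mapping a coordinate recorded at one screen size onto the current one:

```java
// Rescale a coordinate recorded at one screen size for replay at another.
// E.g. an x of 400 recorded on an 800px-wide screen becomes 200 on a
// 400px-wide screen.
public class EventScaler {
    public static int scale(int recorded, int recordedSize, int currentSize) {
        // Multiply in long arithmetic to avoid overflow before the divide.
        return (int) ((long) recorded * currentSize / recordedSize);
    }
}
```

The catch is that the recorded screen size has to be stored in the log (probably as a header line), which is part of the versioning problem mentioned below.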

Other improvements I'd like to make: entries in the file that change the inter-event delay (currently hardcoded to 7 ms); entries that repeat an event N times (to simulate hammering the system); a tool that generates random streams of "interesting" valid events for stress testing; and better versioning, so I know when the run-time environment is too different from the one that recorded the events.

On the implementation side, instead of the relatively straightforward "giant case statement" approach the remote tools use for encoding and decoding events, I thought I'd try a more "OO" approach. That was a mistake: it took about four times as much code to express the same work (converting a list of integers describing an event into changes to the input state plus callbacks to the registered InputProcessor). On the other hand, using java.util.Scanner to parse the file worked out very nicely.
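The Scanner part is pleasantly short. A self-contained sketch, with the case statement left in (the event codes and per-event argument counts here are assumptions for illustration; the real decoder knows how many arguments each event type carries):

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;

// Reads "<event> <eventArgs ...>" records with java.util.Scanner.
// Event codes and argument counts are illustrative, not the real encoding.
public class EventReader {
    // Returns each event as an int[]: element 0 is the event type,
    // the rest are its arguments.
    public static List<int[]> readAll(Readable source) {
        List<int[]> events = new ArrayList<>();
        try (Scanner in = new Scanner(source)) {
            while (in.hasNextInt()) {
                int type = in.nextInt();
                int argc = argCount(type);
                int[] event = new int[argc + 1];
                event[0] = type;
                for (int i = 1; i <= argc; i++) event[i] = in.nextInt();
                events.add(event);
            }
        }
        return events;
    }

    // Hypothetical mapping from event type to argument count —
    // the "giant case statement", minus the actual dispatch.
    static int argCount(int type) {
        switch (type) {
            case 0: // touch down: pointer, x, y
            case 1: // touch up: pointer, x, y
                return 3;
            default: // e.g. a marker or delay entry with no arguments
                return 0;
        }
    }
}
```

Scanner happily ignores the line breaks and just hands back a stream of ints, which is why the text format costs almost nothing to parse.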

The code is currently bound up with a bunch of stuff specific to my project (especially my logging system), so I need to extract it before sharing. I should probably refactor and build it on top of the remote input code, too.

Comments or Questions?
