Currently, qTrace provides built-in support for submitting defects directly to many defect trackers. "Built-in" means not only that integration adapters for these trackers ship out of the box with qTrace, but also that the whole defect submission workflow can happen entirely inside qTrace. Not only does this improve testers' productivity, it also makes the user experience much more pleasant, seamless, and consistent.
To make this possible, our goal has always been to replicate the defect submission UI and workflow of each supported tracker as faithfully as possible. For example, below are screenshots of our Rally and FogBugz adapters. As you can see, they support a great number of fields, field types, dependencies, and even custom fields. They also populate multi-valued fields dynamically, let testers specify default values, choose which fields to show or hide, and so on.
While this level of support may be taken for granted by now, it took a long road to build it into qTrace. Our beta users may recall the early days when each adapter supported only the 2 or 3 fields absolutely necessary for a submission. A typical workflow went something like this: submit a defect via qTrace, then log into the tracker to fill in fields like Priority and Assignee. Even then, if the defect system was configured with additional required fields, qTrace would fail to submit the defect and testers had to submit it manually and attach the qTrace output themselves.
Yet, as good as it is, this submission approach has its downsides.
First, the capability of each adapter we build depends heavily on the capabilities exposed by the corresponding defect tracker's API. If a tracker exposes a rich service API, we can build an adapter that replicates most of the functionality that tracker provides. Given a limited service API, on the other hand, there is only so much the submission adapter can do. It's worth pointing out that while some tracker APIs are more powerful than others, we have never come across an API powerful enough for us to replicate all of a tracker's functionality.
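To make this concrete, here is a minimal sketch of what such an adapter abstraction might look like. All names here (`FieldSpec`, `TrackerAdapter`, `RichApiAdapter`) are hypothetical illustrations, not qTrace's actual code: the point is that what an adapter can offer, such as dynamically discovered custom fields, is bounded by what the tracker's service API exposes.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field


@dataclass
class FieldSpec:
    """Metadata describing one defect field exposed by a tracker's API."""
    name: str
    field_type: str            # e.g. "text", "choice", "user"
    required: bool = False
    choices: list = field(default_factory=list)


class TrackerAdapter(ABC):
    """Base class for a submission adapter. What each subclass can offer
    is bounded by what the tracker's service API actually exposes."""

    @abstractmethod
    def discover_fields(self) -> list:
        """Query the tracker API for field metadata (names, types, choices)."""

    @abstractmethod
    def submit(self, values: dict) -> str:
        """Create a defect from field values; return the new defect's ID."""


class RichApiAdapter(TrackerAdapter):
    """A tracker with a rich API lets the adapter discover fields, including
    custom ones, dynamically instead of hard-coding them."""

    def discover_fields(self):
        # A real adapter would call the tracker's metadata endpoint here.
        return [
            FieldSpec("Title", "text", required=True),
            FieldSpec("Priority", "choice", choices=["Low", "Medium", "High"]),
            FieldSpec("CustomSeverity", "choice", choices=["S1", "S2"]),
        ]

    def submit(self, values):
        missing = [f.name for f in self.discover_fields()
                   if f.required and f.name not in values]
        if missing:
            raise ValueError(f"Missing required fields: {missing}")
        return "DEF-123"  # placeholder for the ID the tracker would return
```

A tracker with a poor API would force the adapter to hard-code `discover_fields`, which is exactly where the limitations described above come from.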
Given that, when facing limitations caused by gaps in a tracker's service API, we either have to accept them or fight very hard for a solution. For certain critical features, for example, we have to consider extending the targeted tracker's API by building a plugin, if plugins are supported, and having it expose the necessary functionality and metadata for qTrace to consume. Of course, this adds to both the development effort (we now have to learn the tracker's plugin model on top of everything else) and the deployment complexity (qTrace becomes a client-server application!).
Even without going beyond what tracker APIs provide, merely learning about the trackers (their data models, business logic, workflows, and APIs) in order to replicate as much functionality as possible, i.e. what we're doing today, already consumes a large portion of our development effort. As we support more trackers and versions, the share of effort spent on integration will remain significant, if not grow. More development effort also means we will lag behind each new tracker release, unable to offer testers the latest functionality that ships with that new version.
What if, when implementing a new tracker adapter, instead of aiming to rebuild 100% of the tracker's functionality, we aim for only about x% (with x as low as possible) that is likely needed by 90% of testers 90% of the time? There's no guarantee that even this x% is supported by the tracker's API, but because we're talking about the functionality most testers need most of the time, odds are good that it is. With that goal, development and maintenance effort drops significantly (no more reading API documentation end to end, no more wrestling with that tricky script-generated dependency or similar corner cases), we can support more trackers and versions quickly, and testers still enjoy the seamless (and now even simpler) integration for most of their defect submission needs. But what about the remaining submission scenarios where that x% is not sufficient? What can we do about those?
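The idea can be sketched in a few lines. This is a hypothetical illustration, not qTrace's implementation: the particular `CORE_FIELDS` set is an assumption about which fields most submissions need, and the helper simply splits a submission into the part an adapter covers and the part it deliberately leaves out.

```python
# Hypothetical "core" field set: the x% of functionality assumed to cover
# most testers' submissions most of the time.
CORE_FIELDS = {"Title", "Description", "Steps", "Environment", "Attachment"}


def build_core_payload(all_values: dict) -> tuple:
    """Split a submission into the core payload the adapter handles
    and the leftover fields it does not attempt to cover."""
    core = {k: v for k, v in all_values.items() if k in CORE_FIELDS}
    leftover = {k: v for k, v in all_values.items() if k not in CORE_FIELDS}
    return core, leftover
```

When `leftover` is empty, the simplified adapter suffices; when it isn't, we hit exactly the remaining scenarios the next paragraphs wrestle with.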
How about we do nothing? Testers can always fire up the browser and submit the defect there! Let the defect trackers themselves handle the request and take care of the data, logic, and workflow that isn't built into qTrace. Doesn't sound too bad, right? But let me rephrase what testers would have to do: fire up the browser, log into their tracker, copy and paste the defect title, description, trace steps, and environment information, and attach the qTrace output(s). Now, unless you're passionate about copying and pasting, that should sound bad. And besides, who wants to juggle two or more applications just to submit a defect? (Testers who don't yet own qTrace, I'm speaking to you.) So even though we're only talking about the remaining 10% of submission needs here, this approach would likely make our customers frown.
Apparently, we need a better approach: one flexible enough to cover 100% of our users' defect submission requirements while still being implementable with reasonable effort (specifically, less effort than building 100% of a tracker's functionality into qTrace). Fortunately, such a solution does exist. Unfortunately, an in-depth discussion would probably double or triple the length of this post, and the solution is interesting enough to deserve its own post anyway. So let's save it for next week. In the meantime, if you have any thoughts or suggestions about qTrace's defect submission feature, please drop a comment.