I've worked in the field of computer science and software engineering research off and on for almost ten years: first as a software developer and research associate, and then, for the last three years, as a master's student. In that time, I've been able to work on a number of very interesting and innovative projects. That, I think, is a great advantage of academia. At least in theory, academics are free to investigate new ideas, new theories, and new technologies without the obligation to deliver a product that will generate profit.
However, in the time that I've been involved in academia, I've learned enough to become a little critical of it as well. There is a well-known mantra that in academia one must "publish or perish." If you don't run studies and publish papers, you won't make it as an academic. Since grants and fellowships are awarded based, in part, on an academic's curriculum vitae, if the publications dry up, the funding will too. So a kind of "market value" is placed on research: it is based on what can be published, and it is adjudicated by conference chairs and journal editors.
I've reviewed my fair share of papers. One thing that I and other researchers look for is rigor. Science needs rigor. It is important that a paper not claim more than it can actually demonstrate. That means researchers have to work hard to design, implement, and run experiments, and they have to be careful about what they claim.
It's my belief that modest claims in research are good claims. In computer science and software engineering, we often study scenarios that are far too complex, with far too many variables, to support sweeping claims. Unfortunately, modest claims are difficult to publish. They don't seem exciting enough. I see a lot of reviews asking for papers that demonstrate how a new technique or technology is better than all of its predecessors, or that it helps all people in all situations. The average researcher just isn't up to that task. It is an impossible one.
Such claims require experiments that are too large and take too long. To make strong claims, one needs experiments with statistical significance. For a new technology, that means drawing from a large pool of unbiased, randomly selected participants who will try the technology, so that you can measure their usage of it, ask about their reactions, and form some generalization or theory about why the technology does or doesn't work. The problem is that most researchers don't have access to a large pool of participants, and the participants almost never can be randomly selected.
Even when these criteria can be met, gathering and analyzing data from these kinds of studies can become almost insurmountable. For my thesis, my supervisor, several others, and I designed and performed a study involving ten participants in total. Even with only ten, it took us several months to transcribe and review the videos, analyze logged data from the application, and so on. With as many participants as statistical significance requires, the research would have taken years to complete.
In a realm as fast-moving as computer science and software engineering, this kind of long turnaround time is very frustrating. It makes research seem to lag behind the innovations of commercial and open source projects. It also makes future research harder: because academic results lag behind production software, it is difficult to convince professionals to participate in experiments. The cost to them (in time and money) is large for what they see as a small reward. This will make it even more difficult in the future to perform academic experiments that can support strong claims.
It might be time for academics to re-evaluate the goals of computer science and software engineering research. What kinds of problems are they trying to solve, and what is really important? From my perspective, my own research has been a success. I started my master's not as an academic, but as a software developer. I had problems in my day-to-day programming that needed solving. My master's work gave me the opportunity to work with some excellent and smart people and come up with a workable solution. I was able to produce the Diver tool and hand it to the Eclipse community, which has always been a community that fosters innovation and excellent technical solutions.
The results of the work, however, have been difficult to publish. In spite of the recognition Diver received by being named a finalist for "Best Developer Tool of the Year," the results of our research have yet to be published outside of my master's thesis. I get the distinct feeling that I can make a bigger impact by building real solutions for real developers than by trying to convince conference chairs that the solutions are worthwhile.
So, I'm leaving the Diver project in the capable hands of the CHISEL group at the University of Victoria and moving on to a new chapter in my life. I've been offered a job at Microsoft, and I start tomorrow. It has been great working at the University of Victoria; there are some really smart, creative, and inspiring people there. It has also been great working within the Eclipse community. I truly believe that the work done with Eclipse has changed the way people develop software. The community is great, and so are the products.
In the end, I'm really just interested in designing and implementing good solutions to real-world problems. Microsoft has offered me what seems to be an exciting opportunity to do just that. As an added bonus, it pays better than the wage of a master's student :-). So, this will be my last post to Planet Eclipse. Thank you, everyone, for your support over the years, and I wish you all the greatest blessings.
The Diver project is still available on Sourceforge. If you like, you can find my master's thesis here.