Design is not research. Researchers are in essence knowledge builders. As true experts, they usually combine various systematic methods to build knowledge. There are two principal reasons to build knowledge:
(1) to address human curiosity by helping to answer questions and
(2) to advance the human race by helping to develop solutions for problems.
There is an interplay between these two. In order to develop solutions, questions about the problem need to be answered. In order to answer the questions, solutions for problems need to be developed. Therefore, many consider this whole interrelated system of knowledge and solution development to be research. Because I believe that the design of a conceptual or practical solution is a creative task that need not be performed in a systematic way, I consider only parts of this interrelated system to be research. In order to solve a problem, one should investigate the problem, design and investigate the conceptual solution, and design and investigate the actual implementation of that conceptual solution. These investigations are considered research when they are performed in a systematic way. The designs themselves are not research, but they may be required to perform the research and are thus often also carried out by researchers.
This vision is inspired by Roel Wieringa. Read more about it here and here.
There are different types of theories. Usually, researchers build on each other's work to expand knowledge about a certain topic. Therefore, every research project can be considered one iteration in the never-ending, iterative knowledge-building effort of the human race. Knowledge exists in the form of theories that build on fundamental assumptions and on other theories. From (objective) observations and (subjective) impressions, a descriptive theory can be formulated. Next, by relating observations, one can reveal a mechanism and thus build an explanatory theory, or one can quantify causal relations and thus build a predictive theory. Further, explanatory and predictive theories can be extended to a prescriptive theory, which provides directions on how to tackle a certain problem.
This vision is inspired by Shirley Gregor. Read more about it here.
Sometimes, research does not lead to new knowledge. In contrast to most researchers, I perform research under the open-world assumption. This means that a statement can have three states: true, false, or unknown. Many researchers do not consider the unknown state. Hence, when a relation is investigated and no significant results are found, they conclude that the relation probably does not exist and thus that the hypothesized statement is false, whereas I believe that no conclusions about the statement can be drawn from insignificant results. Indeed, there are many potential reasons why results are insignificant: the variables may not be related, the relation may be more complex than initially thought and tested for, the measures for the variables may be imprecise, confounding variables may obscure the relation, etc.
In Claes, et al., 2015 I deliberately did not include a description of the statistical analysis because our analysis yielded almost no significant results. In Claes, et al., 2017 we were able to improve the precision of our metrics and obtained more significant results.
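The open-world reading of an insignificant test can be made concrete in a short sketch. The function and threshold below are my own illustrative assumptions, not taken from the papers: a hypothesis test outcome is mapped to one of three states instead of two, so that a non-significant result stays unknown rather than being read as a refutation.

```python
from enum import Enum

class Conclusion(Enum):
    SUPPORTED = "true"     # evidence for the hypothesized relation
    REFUTED = "false"      # evidence against the hypothesized relation
    UNKNOWN = "unknown"    # no conclusion can be drawn

def open_world_conclusion(p_value, effect_as_hypothesized, alpha=0.05):
    """Interpret a significance test under the open-world assumption.

    A non-significant result yields UNKNOWN rather than REFUTED:
    imprecise measures, confounding variables, or a more complex
    relation could all explain the lack of significance.
    """
    if p_value < alpha:
        return (Conclusion.SUPPORTED if effect_as_hypothesized
                else Conclusion.REFUTED)
    return Conclusion.UNKNOWN

# A closed-world analyst would call this "no relation"; here it stays open.
print(open_world_conclusion(0.30, True))   # Conclusion.UNKNOWN
```

The closed-world alternative would collapse the last branch into `REFUTED`; keeping the third state is exactly what distinguishes the two assumptions.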
Simplification is often necessary, but always challenging. It is generally considered good research practice to study bivariate causal relationships between an independent and a dependent variable. However, when human behavior is studied, it is practically impossible to isolate the effect of one variable on another. The sheer number of influencing variables makes it too hard to rule out their effects. Therefore, I consider it good practice to include as many relevant variables in the research model as practically possible.
In Claes, et al., 2015 I have built and evaluated a theory with a relatively high number of variables.
Researcher mobility in itself should not be a goal. It is not important how many long periods a researcher has spent abroad during her/his career. It is important, however, how many profound collaborations a researcher has experienced to diversify her/his knowledge of research methods, topics and practices.
For example, I believe that my experiences are similar to those of a researcher with a long stay abroad. During my PhD I was part of the IS group at Eindhoven University of Technology, which I have visited on many occasions (and still do). Moreover, my publication list shows that I have successfully collaborated with more than 10 researchers from more than 5 other universities. By inviting a diverse set of international guests for long or short stays in our group (see Collaborations), I have been inspired by many experts in my field. I am also known in my community as an active participant at conferences (see Presentations and Services). This way, I am convinced that I am able to maintain continuous physical and virtual, yet profound, interaction with many research groups in my community. I have built my own network and developed my own research lines, independently from but related to the other research(ers) in our research group. The diversity of applied research methods and investigated research topics (see Research) illustrates that it is not required to stay abroad in order to diversify one's research knowledge.
In my research, I study human behavior. Here are some lessons that I have learned along the way. I use them as principles for future research designs.
Digital treatments are preferred over real-world lectures. I try to optimize as many details of an experiment set-up as possible, because I believe that every small improvement raises the reliability of the study. This is why I prefer digital treatments over intensive training treatments. Digital treatments are exactly reproducible, easy to distribute, and self-documented in detail. Moreover, they sometimes enable the simultaneous execution of the experiment for the control and treatment groups in the same room, which increases the number of factors that can be kept constant between both groups.
In Claes, et al., 2017 I have discussed how an affective digital treatment can be developed.
Cognitive overload is a range, and thus I consider a "potential improvement interval". Cognitive overload is described in the literature as a binary variable: either someone's brain is overloaded or it is not. I believe there are gradations of overload. When working memory is heavily loaded, the retarding and deteriorating effects of overload are already occurring. Techniques have been and are being developed by researchers (including myself) to lower the cognitive load, and thus the chance of cognitive overload, while solving some kind of problem. In my opinion, these techniques will have little effect on tasks that are either too simple or too challenging. For a certain human being, performing a certain task under certain conditions, there is an interval of actual or desired cognitive load where techniques can have a bigger impact. This interval begins at the heavy load where overload effects start to occur and (hopefully) extends beyond the point where cognitive overload is a fact. I would like to call this the potential improvement interval.
For the development of overload-reducing techniques, students are the ideal test subjects. The potential improvement interval depends mainly on three factors (i.e., task, executor, and circumstances). When studying the adoption and effect of a newly developed technique, I perform comparative experiments, which require these factors to be kept constant amongst the compared groups, except for the technique under investigation. For between-subject experiments it is of course not possible to reuse the same participants in the different groups. Because overload is a variable in my research, I try to let every participant perform a task within their potential improvement interval. Further, because they have to perform the same tasks, their potential improvement intervals should overlap. Thus, the ideal test subjects for my research have similar maturity, reasoning skills, prior knowledge, etc. Therefore, I often perform experiments with students as optimal test subjects. With more experienced practitioners it is practically impossible to design a single task that lies in the potential improvement interval of all participants under the same conditions.