Disclaimer. The information below consists of academic paper excerpts on a specific methodology topic. Because I believe I cannot explain it better than these authors, the passages are copied verbatim. Please refer to the original source (cited at the bottom of each block) instead of incorrectly crediting me for their fantastic work.
Types of theories
The method for classifying theory for IS proposed here begins with the primary goals of the theory. Research begins with a problem that is to be solved or some question of interest. The theory that is developed should depend on the nature of this problem and the questions that are addressed. Whether the questions themselves are worth asking should be considered against the state of knowledge in the area at the time. The four primary goals of theory discerned are:
- Analysis and description. The theory provides a description of the phenomena of interest, analysis of relationships among those constructs, the degree of generalizability in constructs and relationships and the boundaries within which relationships and observations hold.
- Explanation. The theory provides an explanation of how, why, and when things happened, relying on varying views of causality and methods for argumentation. This explanation will usually be intended to promote greater understanding or insights by others into the phenomena of interest.
- Prediction. The theory states what will happen in the future if certain preconditions hold. The degree of certainty in the prediction is expected to be only approximate or probabilistic in IS.
- Prescription. A special case of prediction exists where the theory provides a description of the method or structure or both for the construction of an artifact (akin to a recipe). The provision of the recipe implies that the recipe, if acted upon, will cause an artifact of a certain type to come into being.

Combinations of these goals lead to the five types of theory shown in the left-hand column [below]. The distinguishing features of each theory type are shown in the right-hand column. It should be noted that the decision to allocate a theory to one class might not be straightforward. A theory that is primarily analytic, describing a classification system, can have implications of causality. For example, a framework that classifies the important factors in information systems development can imply that these factors are causally connected with successful systems development. Some judgement may be needed to determine what the primary goals of a theory are and to which theory type it belongs.
| Theory type | Distinguishing attributes |
| --- | --- |
| I. Analysis | Says what is. The theory does not extend beyond analysis and description. No causal relationships among phenomena are specified and no predictions are made. |
| II. Explanation | Says what is, how, why, when, and where. The theory provides explanations but does not aim to predict with any precision. There are no testable propositions. |
| III. Prediction | Says what is and what will be. The theory provides predictions and has testable propositions but does not have well-developed justificatory causal explanations. |
| IV. Explanation and prediction (EP) | Says what is, how, why, when, where, and what will be. Provides predictions and has both testable propositions and causal explanations. |
| V. Design and action | Says how to do something. The theory gives explicit prescriptions (e.g., methods, techniques, principles of form and function) for constructing an artifact. |
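Gregor's mapping from goal combinations to the five theory types can be sketched as a small decision function. This is my own illustration, not part of the paper; the boolean encoding of the four primary goals is an assumption made for clarity.

```python
# Sketch: classify a theory by which of Gregor's four primary goals it
# pursues. Encoding and function name are illustrative, not from the paper.

def theory_type(analysis: bool, explanation: bool,
                prediction: bool, prescription: bool) -> str:
    if prescription:                     # a recipe for building an artifact
        return "V. Design and action"
    if explanation and prediction:       # both causal account and forecast
        return "IV. Explanation and prediction (EP)"
    if prediction:                       # forecasts without deep causal account
        return "III. Prediction"
    if explanation:                      # causal account, no precise forecast
        return "II. Explanation"
    if analysis:                         # description and classification only
        return "I. Analysis"
    return "no primary goal identified"

print(theory_type(analysis=True, explanation=False,
                  prediction=False, prescription=False))  # I. Analysis
```

Note that prescription dominates the other goals here, mirroring the table: a design-and-action theory is classified as type V regardless of what else it offers.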
From: Gregor, S. "The nature of theory in information systems", MIS Q. 30(3), 2006, pp. 611-642, http://www.jstor.org/stable/25148742
How to develop metrics and measures?
In general, measurement is defined as the process by which numbers or symbols are assigned to attributes (e.g. complexity) of entities in the real or abstract world (e.g. business process models) in such a way as to describe them according to clearly defined rules. The measurement of process model complexity can be approached from different perspectives, depending on its theoretical foundations (e.g. software engineering, cognitive science or graph theory).
[METRIC DEFINITION] The observable value which results from the measurement is a metric,
[MEASURE DEFINITION] while a measure associates a meaning to that value by applying human judgement.
Metrics researchers mostly agree on a three-step procedure for defining and validating a new metric: (1) metric definition, (2) theoretical validation of the metric and (3) empirical validation of the metric. A fourth, optional step entails the development of an IT tool for automatic metric calculation.
According to Fenton and Pfleeger's model for software metric definition, the metric definition procedure includes three steps: (1) identification of the measured entity (e.g. a program module in programming or a business process model in our case), (2) identification of the desired measurable attributes of the entity (e.g. the size of a process model) and finally (3) definition of the metric itself (e.g. the number of activities in a process model). After a metric is defined, we can assess its quality.
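The three definition steps can be sketched on the running example from the text, a "number of activities" size metric. This is a minimal illustration of mine, assuming a simplified list-of-nodes representation of a process model; none of the names below come from the cited sources.

```python
# Step 1: the measured entity - a business process model, here reduced to
# a list of (label, node_type) pairs (my simplifying assumption).
process_model = [
    ("start", "event"),
    ("check order", "activity"),
    ("ship goods", "activity"),
    ("send invoice", "activity"),
    ("end", "event"),
]

# Step 2: the attribute of interest - the model's size.
# Step 3: the metric definition - the count of activity nodes.
def noa(model):
    """Number of activities (NOA) in a process model."""
    return sum(1 for _, kind in model if kind == "activity")

print(noa(process_model))  # 3
```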
Metrics Quality characteristics
Metrics can be of different quality, depending on how precisely a metric describes an attribute of an entity. Latva-Koivisto defined the following characteristics of a good [complexity] metric:
- Validity - the complexity metric measures what it is supposed to measure.
- Reliability - the measures obtained by different observers of the same process model are consistent.
- Computability - a computer program can calculate the value of the metric in a finite time and preferably quickly.
- Ease of implementation - the difficulty of implementation of the method that computes the complexity metric is within reasonable limits.
- Intuitiveness - it is easy to understand the definition of the metric and see how it relates to the instinctive notion of complexity.
- Independence of other related metrics - ideally, the value of the complexity metric is independent of other properties that are sometimes related to complexity. These include at least size and visual representation of the process.
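To make the "computability" characteristic above concrete, a good complexity metric can be calculated mechanically from the model's structure in finite time. The sketch below is my own illustration of one widely cited process model complexity metric, Cardoso's Control-Flow Complexity (CFC); the list-of-gateways representation is an assumption made for brevity.

```python
# Sketch: Cardoso's Control-Flow Complexity (CFC). Each split gateway
# contributes the number of post-gateway states it can induce.

def cfc(splits):
    """splits: list of (gateway_type, fan_out) pairs for each split node."""
    total = 0
    for kind, fan_out in splits:
        if kind == "XOR":              # exactly one of n branches is taken
            total += fan_out
        elif kind == "OR":             # any non-empty subset of n branches
            total += 2 ** fan_out - 1
        elif kind == "AND":            # all branches are taken: one state
            total += 1
        else:
            raise ValueError(f"unknown gateway type: {kind}")
    return total

print(cfc([("XOR", 3), ("AND", 2), ("OR", 2)]))  # 3 + 1 + 3 = 7
```

The computation is a single pass over the gateways, so the metric is trivially computable and easy to implement; whether it is intuitive and independent of size is exactly what the characteristics above ask us to judge.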
Validation of measurements and metrics is necessary to ensure that the conclusions obtained from the measurement are in fact valid and precise and that a metric actually measures the attribute it was designed to measure. The problem of defining a metric is that we have to convert abstract concepts into measurable definitions. This process can create unwanted discrepancies between the two. Validation of a measurable concept is not a trivial process, as three validation aspects or problems have to be addressed: (1) content validity, (2) criteria validity and (3) construct validity. Reliability is another concept we have to address when evaluating a measurement concept. Reliability fundamentally addresses the question of the correctness and determinism of a measurement concept, as well as its consistency across time and measured entities. A metric can be reliable but not valid, whereas an unreliable metric can never be valid.
In the literature we can mainly find three methods used for theoretical metric validation: (1) properties that result from the type of the metric's measurement scale (i.e. nominal, ordinal, interval, ratio, absolute), (2) metric compliance with the properties of Briand's framework and (3) metric compliance with Weyuker's properties. By applying these methods, we can find out whether the metric is structurally sound and compliant with measurement theory. Some researchers (e.g. Cardoso, Coskun) use Weyuker's properties for theoretical validation of a metric. These properties were designed for the evaluation of complexity metrics for program code, but can also be used for complexity metrics of processes and the corresponding process models. A good complexity metric should satisfy all nine properties defined by Weyuker. Another way to theoretically validate a metric is through the framework of Briand et al., which was primarily intended for program code metrics. According to the framework, metrics are divided into five categories based on what they measure: size, length, complexity, cohesion and coupling, where each category contains specific properties the metric should comply with. These categories are also related to process complexity.
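Individual Weyuker-style properties can at least be spot-checked on toy models. The sketch below (my own illustration, not from the sources above) checks a monotonicity-style property, that composing two models never lowers the metric value, for a trivial node-count metric. Such spot checks can refute compliance but never prove it; the nine properties must be argued analytically over all models.

```python
# Sketch: empirical spot-check of a monotonicity-style Weyuker property
# for a toy node-count metric (representation and names are assumptions).

def size(model):
    """Toy metric: number of nodes in the model."""
    return len(model)

def concat(p, q):
    """Toy composition: sequential concatenation of two models."""
    return p + q

models = [["a"], ["a", "b"], ["a", "b", "c"]]
for p in models:
    for q in models:
        # the combined model should never score below either part
        assert size(concat(p, q)) >= size(p)
        assert size(concat(p, q)) >= size(q)
print("monotonicity holds on all sampled pairs")
```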
Empirical validation of a metric complements the theoretical validation. For this purpose, researchers can use different empirical research methods, e.g. surveys, experiments and case studies. The goal of empirical validation is to find out if a metric actually measures what it was supposed to measure. Both theoretical and empirical validation of a metric are required for a metric to be structurally sound and practically useful.
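A common empirical validation step is to correlate computed metric values with human judgments gathered in a survey or experiment. The sketch below is my own illustration with invented numbers; the data, the 1-5 rating scale and the use of Pearson correlation are all assumptions, not results from the cited review.

```python
# Sketch: correlating metric values with (hypothetical) perceived-complexity
# ratings. A strong correlation is evidence, not proof, of validity.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

metric_values = [3, 5, 8, 12, 20]        # metric computed on five toy models
perceived = [2.1, 2.8, 3.5, 4.0, 4.9]    # invented mean ratings (1-5 scale)

print(round(pearson(metric_values, perceived), 2))  # 0.97
```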
- L. Finkelstein, M.S. Leaning, A review of the fundamental concepts of measurement, Measurement 2 (1) (1984), p. 25-34.
- J. Cugini, et al., Methodology for evaluation of collaboration systems, Evaluation Working Gr. DARPA Intell. Collab. Vis. Progr. Rev. 3 (1997).
- G.M. Muketha, A.A.A. Ghani, M.H. Selamat, R. Atan, A survey of business process complexity metrics, Inf. Technol. J. 9 (7) (2010), p. 1336-1344.
- N.E. Fenton, S.L. Pfleeger, Software Metrics: A Rigorous and Practical Approach, PWS Publishing Co., Boston, MA, USA, 1998.
- A.M. Latva-Koivisto, Finding a complexity measure for business process models, 2001.
- J. Mendling, Metrics for Process Models, Springer, Berlin, Heidelberg, 2008, p. 103-133.
- L.C. Briand, C.M. Differding, H.D. Rombach, Practical Guidelines for Measurement-Based Process Improvement, 1996.
- E.J. Weyuker, Evaluating software complexity measures, IEEE Trans. Softw. Eng. 14 (9) (1988), p. 1357-1365.
- E. Coskun, A New Complexity Measure for Business Process Models, 2014.
From: G. Polancic, B. Cegnar, Complexity metrics for process models - A systematic literature review. Computer Standards & Interfaces 51 (July 2016), p. 104-117, http://doi.org/10.1016/j.csi.2016.12.003