Formal reasoning for analyzing goal models that evolve over time

Original Article · Requirements Engineering

Abstract

In early-phase requirements engineering, modeling stakeholder goals and intentions helps stakeholders understand the problem context and evaluate tradeoffs, by exploring possible “what if” questions. Prior research allows modelers to make evaluation assignments to desired goals and generate possible selections for task and dependency alternatives, but this treats models as static snapshots, where the evaluation of the fulfillment of an intention remains constant once it has been determined. Using these techniques, stakeholders are unable to reason about possible evolutions, leaving questions about project viability unanswered when the fulfillment of goals or availability of components is not guaranteed in the future. In this article, we formalize the Evolving Intentions framework for specifying, modeling, and reasoning about goals that change over time. Using the Tropos language, we specify a set of functions that define how intentions and relationships evolve, and use path-based analysis for asking a variety of “what if” questions about such changes. We illustrate the framework using the Bike Lanes example and prove correctness of the analysis. Finally, we demonstrate scalability and effectiveness, enabling stakeholders to explore model evolution.

Notes

  1. We ignore distinctions between intention types and exclude actor boundaries, as the focus of this article is analysis.

  2. Note the difference between the evidence pair \((\bot , \bot ) \) meaning None and the special value \(\bot \) meaning “no information”.

References

  1. Alkaf H, Hassine J, Binalialhag T, Amyot D (2019) An automated change impact analysis approach for user requirements notation models. J Syst Softw 157:110397. https://doi.org/10.1016/j.jss.2019.110397

  2. Alkaf HS, Hassine J, Hamou-Lhadj A, Alawneh L (2017) An automated change impact analysis approach to GRL models. In: Proceedings of SDL’17, vol 10567. Springer, p 157

  3. Amyot D (2003) Introduction to the user requirements notation: learning by example. Comput Netw 42(3):285–301

  4. Amyot D, Ghanavati S, Horkoff J, Mussbacher G, Peyton L, Yu E (2010) Evaluating goal models within the goal-oriented requirement language. Int J Intell Syst 25(8):841–877

  5. Aprajita (2017) TimedGRL: specifying goal models over time. Master’s thesis, McGill University

  6. Aprajita, Luthra S, Mussbacher G (2017) Specifying evolving requirements models with timedURN. In: Proceedings of the 9th IEEE/ACM international workshop on modelling in software engineering (MiSE’17), p 26–32. https://doi.org/10.1109/MiSE.2017.10

  7. Aprajita, Mussbacher G (2016) TimedGRL: specifying goal models over time. In: Proceedings of the sixth international workshop on model-driven requirements engineering (MoDRE’16)

  8. Baier C, Katoen JP (2008) Principles of model checking. MIT Press, Cambridge

  9. Bresciani P, Perini A, Giorgini P, Giunchiglia F, Mylopoulos J (2004) Tropos: an agent-oriented software development methodology. Auton Agents Multi-Agent Syst 8(3):203–236

  10. Chung L, Nixon BA, Yu E, Mylopoulos J (2000) Non-functional requirements in software engineering. Kluwer Academic Publishers, Norwell, MA

  11. Clarke EM, Emerson EA, Sistla AP (1986) Automatic verification of finite-state concurrent systems using temporal logic specifications. ACM Trans Program Lang Syst (TOPLAS) 8(2):244–263

  12. Dhaouadi M, Spencer KMB, Varnum MH, Grubb AM, Famelis M (2021) Towards a generic method for articulating design-time uncertainty. J Object Technol

  13. Dunn D (2020) Bloor Street bike lane design feasibility study and pilot project, Shaw Street to Avenue Road: public drop-in panels. Tech. rep., City of Toronto (2015). https://www.toronto.ca/wp-content/uploads/2019/01/95dd-Bloor-Street-Bike-Lanes-Pilot-Dec-2-2015-panels-web.pdf. Accessed January

  14. Ernst NA, Borgida A, Jureta I (2011) Finding incremental solutions for evolving requirements. In: Proceedings of the 19th IEEE international requirements engineering conference (RE’11), p 15–24

  15. Forrester JW (1961) Industrial dynamics. Pegasus Communications, California

  16. Giorgini P, Mylopoulos J, Nicchiarelli E, Sebastiani R (2002) Reasoning with goal models. In: Proceedings of the international conference on conceptual modeling (ER’02), p 167–181

  17. Giorgini P, Mylopoulos J, Nicchiarelli E, Sebastiani R (2003) Formal reasoning techniques for goal models. J Data Semant 1:1–20

  18. Giorgini P, Mylopoulos J, Sebastiani R (2005) Goal-oriented requirements analysis and reasoning in the Tropos methodology. Eng Appl Artif Intell 18(2):159–171

  19. Gonçalves R, Knorr M, Leite J, Slota M (2013) Non-monotonic temporal goals. In: Proceedings of the international conference on logic programming and nonmonotonic reasoning (LPNMR’13), p 374–386

  20. Grubb AM (2015) Adding temporal intention dynamics to goal modeling: a position paper. In: Proceedings of the 7th international workshop on modeling in software engineering (MiSE’15)

  21. Grubb AM (2018) Reflection on evolutionary decision making with goal modeling via empirical studies. In: Proceedings of the 26th IEEE international requirements engineering conference (RE’18), p 376–381

  22. Grubb AM (2019) Evolving Intentions: support for modeling and reasoning about requirements that change over time. Ph.D. thesis, University of Toronto

  23. Grubb AM, Chechik M (2016) Looking into the crystal ball: requirements evolution over time. In: Proceedings of the 24th IEEE international requirements engineering conference (RE’16)

  24. Grubb AM, Chechik M (2017) Modeling and reasoning with changing intentions: an experiment. In: Proceedings of the 25th IEEE international requirements engineering conference (RE’17)

  25. Grubb AM, Chechik M (2018) BloomingLeaf: a formal tool for requirements evolution over time. In: Proceedings of the 26th IEEE international requirements engineering conference (RE’18): Posters & Tool Demos, p 490–491

  26. Grubb AM, Chechik M (2020) Reconstructing the past: the case of the Spadina expressway. Requir Eng 25(2):253–272. https://doi.org/10.1007/s00766-019-00321-0

  27. Grubb AM, Song G, Chechik M (2016) GrowingLeaf: supporting requirements evolution over time. In: Proceedings of 9th international I* workshop (iStar’16), p 31–36

  28. Hartmann T, Fouquet F, Nain G, Morin B, Klein J, Barais O, Le Traon Y (2014) A native versioning concept to support historized models at runtime. In: Proceedings of the international conference on model driven engineering languages and systems (MODELS’14), p 252–268

  29. Hassine J, Alshayeb M (2019) Measuring goal-oriented requirements language actor stability. e-Inform Softw Eng J 13(1):203–226. https://doi.org/10.5277/e-Inf190106

  30. Herkert J, Borenstein J, Miller K (2020) The Boeing 737 MAX: lessons for engineering ethics. Sci Eng Ethics 26(6):2957–2974

  31. Hopcroft J, Ullman J (1979) Introduction to automata theory, languages, and computation. Addison-Wesley series in computer science and information processing. Addison-Wesley

  32. Horkoff J, Yu E (2011) Analyzing goal models: different approaches and how to choose among them. In: Proceedings of the 2011 ACM symposium on applied computing (SAC’11), p 675–682

  33. Horkoff J, Yu E (2013) Comparison and evaluation of goal-oriented satisfaction analysis techniques. Requir Eng 18(3):199–222

  34. Islam S, Houmb SH (2010) Integrating risk management activities into requirements engineering. In: Proceedings of the 4th international conference on research challenges in information science (RCIS’10), p 299–310

  35. Jackson M (1995) The world and the machine. In: Proceedings of the 17th international conference on software engineering (ICSE’95), p 283–292. https://doi.org/10.1145/225014.225041

  36. Kuchcinski K, Szymanek R (2016) JaCoP - Java constraint programming solver. http://jacop.osolpro.com. Accessed 21 Feb 2016

  37. van Lamsweerde A (2008) Requirements engineering: from craft to discipline. In: Proceedings of the 16th ACM SIGSOFT international symposium on foundations of software engineering (FSE’08), p 238–249

  38. van Lamsweerde A (2009) Requirements engineering - from system goals to UML models to software specifications. Wiley, Hoboken

  39. Letier E (2001) Reasoning about agents in goal-oriented requirements engineering. Ph.D. thesis, Universite Catholique de Louvain, Belgium

  40. Letier E, van Lamsweerde A (2002) Deriving operational software specifications from system goals. In: Proceedings of the 10th ACM SIGSOFT symposium on foundations of software engineering (FSE’02), p 119–128

  41. Luthra S, Aprajita, Mussbacher G (2018) Visualizing evolving requirements models with timedURN. In: Proceedings of the 10th international workshop on modelling in software engineering (MiSE’18)

  42. Mano MM (2002) Digital design. Prentice Hall, New Jersey

  43. Mens T, Demeyer S (2008) Software evolution, 1st edn. Springer Publishing Company, Incorporated, New York

  44. Nguyen CM, Sebastiani R, Giorgini P, Mylopoulos J (2016) Multi-objective reasoning with constrained goal models. Requir Eng 23(2):189–225

  45. Nguyen CM, Sebastiani R, Giorgini P, Mylopoulos J (2017) Modeling and reasoning on requirements evolution with constrained goal models. In: Proceedings of the 15th international conference software engineering and formal methods (SEFM’17), p 70–86

  46. Rolland C, Souveyet C, Achour CB (1998) Guiding goal modeling using scenarios. IEEE Trans Softw Eng 24(12):1055–1071

  47. Russell SJ, Norvig P (2010) Artificial intelligence: a modern approach, chap. 6: Constraint satisfaction problems. Prentice Hall, New Jersey

  48. Sabetzadeh M, Easterbrook S (2006) View merging in the presence of incompleteness and inconsistency. Requir Eng 11(3):174–193

  49. Sebastiani R, Giorgini P, Mylopoulos J (2004) Simple and minimum-cost satisfiability for goal models. In: Proceedings of the international conference on advanced information systems engineering (CAiSE’04), p 20–35

  50. Sommerville I (2016) Software engineering, 10th edn. Pearson, London

  51. Sterman J (2000) Business dynamics: systems thinking and modeling for a complex world. McGraw-Hill, New York

  52. Yu ES (1997) Towards modeling and reasoning support for early-phase requirements engineering. In: Proceedings of the 3rd IEEE international symposium on requirements engineering (RE’97), p 226–235

  53. Zhang H, Liu L (2010) Modelling environmental impact with a goal and causal relation integrated approach. In: Proceedings of the international congress on international environmental modelling and software (iEMSs’10)

Author information

Correspondence to Alicia M. Grubb.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: Algorithm 1 helper functions

Table 7 Forward propagation of satisfied and denied evidence pairs, adapted from [17]

1.1 Propagation algorithms

As mentioned in Sect. 3, when evidence pairs are propagated from source nodes to target nodes, this is called forward propagation. We first consider an adapted version of the forward propagation algorithm introduced by Giorgini et al. [17]. Recall that an evidence pair for an intention g is a pair \((s, d)\), where \(s \in \{\textit{F}, \textit{P}, \bot \}\) is the level of evidence for, and \(d \in \{\textit{F}, \textit{P}, \bot \}\) is the level of evidence against, the fulfillment of g, with \(\textit{F}> \textit{P} > \bot \). In the following algorithms, we use Sat(g) to refer to the s value of an evidence pair and Den(g) to refer to the d value. The forward propagation rules are listed on Lines 1–32 in Fig. 6. To make the propagation algorithm easier to understand, Table 7 restates these rules in terms of (s, d) pairs (i.e., Sat(g) and Den(g)) for the source and target intentions. For example, the first row gives the Sat and Den rules for decomposition (i.e., \(\textsf {and}\) & \(\textsf {or}\)) and directly corresponds to the rules on Lines 1–4 for \(\textsf {and}\) and Lines 5–8 for \(\textsf {or}\) of Fig. 6.
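To make these rules concrete, the following Python sketch encodes the evidence levels as integers with \(\bot< \textit{P} < \textit{F}\) and implements min/max-style decomposition rules. This formulation is our assumption, modeled on the Giorgini et al. [17] semantics; the adapted Table 7 rules may differ in detail.

# Hedged sketch: evidence levels as integers, with ⊥ < P < F.
BOT, P, F = 0, 1, 2

def and_propagate(sources):
    """(s, d) for an AND target: weakest evidence for, strongest against."""
    return (min(s for s, _ in sources), max(d for _, d in sources))

def or_propagate(sources):
    """(s, d) for an OR target: strongest evidence for, weakest against."""
    return (max(s for s, _ in sources), min(d for _, d in sources))

# Sources (F, ⊥) and (P, ⊥): AND yields (P, ⊥), OR yields (F, ⊥).
assert and_propagate([(F, BOT), (P, BOT)]) == (P, BOT)
assert or_propagate([(F, BOT), (P, BOT)]) == (F, BOT)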

UPDATE_LABEL function (Algorithm 2). Suppose that a target has more than one incoming link. Table 7 alone is insufficient because it does not specify how to combine values from multiple links. The Update_Label function (see Algorithm 2) specifies how the evidence pair for an intention g is updated based on the links for which g is the target intention. Update_Label takes as input an intention g, a time point t, a set of relationships \({\textit{targetRel}}\), and a set of tuples Old mapping intentions to evidence pairs, and returns a new evidence pair assignment for the input goal g at time point t. Line 18 assigns the old evidence pair to sat and den. Then, the main loop (Lines 19–21) iterates through the relationships in \({\textit{targetRel}}\) and invokes Apply_Rules_Sat and Apply_Rules_Den, which return new sat and den values, respectively, for the selected relationship based on the rules in Table 7. After iterating over all incoming relationships, the updated sat and den values for the goal g at time t are the maximum values (i.e., strongest evidence) that were propagated over any of the incoming links.

For example, suppose Old contains the tuples \(\langle \textsf {{\small Have Design}},\) \(0, (\textit{F}, \bot ) \rangle \), \(\langle \textsf {{\small Have Bike Lanes}}, 0, (\bot , \bot ) \rangle \), and \(\langle \textsf {{\small Build Bike Lanes}},\) \(0, (\textit{F}, \textit{F}) \rangle \). When Update_Label is called for Have Bike Lanes, it returns \((\textit{P}, \textit{F}) \), based on the \(\textsf {and}\) relationship between them.
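The max-combination step can be sketched as follows; the apply_rules_sat and apply_rules_den stand-ins below cover only \(\textsf {and}\)/\(\textsf {or}\) decomposition and are our assumption rather than the full adapted Table 7, so their output can differ from the worked example above.

def apply_rules_sat(rel, vals):
    # Stand-in for Table 7's Sat rules: AND takes min, OR takes max.
    kind, sources, _target = rel
    s = [vals[g][0] for g in sources]
    return min(s) if kind == "and" else max(s)

def apply_rules_den(rel, vals):
    # Stand-in for Table 7's Den rules: AND takes max, OR takes min.
    kind, sources, _target = rel
    d = [vals[g][1] for g in sources]
    return max(d) if kind == "and" else min(d)

def update_label(g, target_rel, old):
    sat, den = old[g]                    # Line 18: start from the old pair
    for r in target_rel:                 # Lines 19-21: one pass per link
        sat = max(sat, apply_rules_sat(r, old))
        den = max(den, apply_rules_den(r, old))
    return (sat, den)                    # strongest evidence on each side

# Two links into g: an AND over {a, b} and an OR over {c}.
old = {"a": (F, BOT), "b": (P, BOT), "c": (BOT, P), "g": (BOT, BOT)}
links = [("and", ["a", "b"], "g"), ("or", ["c"], "g")]
print(update_label("g", links, old))     # (P, P): the max over both links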

PROPAGATE function (Algorithm 2). This function propagates values from source nodes to target nodes. The Propagate function (see Algorithm 2) takes as input goals \(G\), relationships \(R\), and a single time point t, as well as two sets, uGoals and aGoals, representing unassigned and assigned values, respectively. Each element of aGoals is of the form \(\langle g, t, \llbracket g[t] \rrbracket \rangle \) and may be specified by the user. Each element of uGoals is of the form \(\langle g, t, \emptyset \rangle \), where the third element in the tuple is a set of possible evidence pairs in \(\mathcal {E}\). Propagate returns an updated copy of aGoals and uGoals. The main loop (see Lines 2–15) iterates until a fix-point is reached, where no additional evidence pairs have been assigned to intentions (i.e., \(|{\textit{aGoals}} | = size\), on Line 15), and then the updated sets are returned. The inner loop (see Lines 7–9) evaluates each relationship \(r \in R\). If the target of r is already assigned, then the relationship is skipped (see Line 5). If the target of r is not already assigned and all of the source intentions of r are assigned (Line 9), then Update_Label is called to assign an evidence pair for the target intention. Finally, the target intention is removed from uGoals and added to aGoals with the assigned evidence pair (Lines 12 and 13).

A value is only assigned to a target intention when all source intentions have an assigned value in \(\mathcal {E}\). For example, consider the Bike Lanes example fragment in Fig. 1 (with R specified in Sect. 3) and the following values for aGoals and uGoals (assuming Weak_Conflict avoidance at time point 0): \({\textit{aGoals}} = \{ \) \(\langle \textsf {{\small Permanent Construction}}, 0, (\textit{F}, \bot ) \rangle ,\) \(\langle \textsf {{\small Bike Lane Usage}}, \) \(0, (\textit{P}, \bot ) \rangle ,\) \(\langle \textsf {{\small Temporary Construction}}, 0, (\bot , \textit{F}) \rangle \}\), \({\textit{uGoals}} = \{ \)

   \(\langle \textsf {{\small Have Design}}, 0, \{(\textit{F}, \bot ), (\textit{P}, \bot ), (\bot , \textit{P}), (\bot , \textit{F}), (\bot , \bot ) \} \rangle ,\)

   \(\langle \textsf {{\small Access to Parking}}, 0, \{(\textit{F}, \bot ), (\textit{P}, \bot ), (\bot , \textit{P}), (\bot , \textit{F}), (\bot , \bot ) \} \rangle ,\)

   \(\langle \textsf {{\small Build Bike Lanes}}, 0, \{(\textit{F}, \bot ), (\textit{P}, \bot ), (\bot , \textit{P}), (\bot , \textit{F}), (\bot , \bot ) \} \rangle ,\)

   \(\langle \textsf {{\small Cyclist Safety}}, 0, \{(\textit{F}, \bot ), (\textit{P}, \bot ), (\bot , \textit{P}), (\bot , \textit{F}), (\bot , \bot ) \} \rangle ,\)

   \(\langle \textsf {{\small Have Bike Lanes}}, 0, \{(\textit{F}, \bot ), (\textit{P}, \bot ), (\bot , \textit{P}), (\bot , \textit{F}), (\bot , \bot ) \} \rangle \}\),

These values are used as inputs to Algorithm 2. The resulting output is \({\textit{aGoals}} = \{\) \(\langle \textsf {{\small Permanent Construction}}, 0, (\textit{F}, \bot ) \rangle ,\) \(\langle \textsf {{\small Bike Lane Usage}}, \) \(0, (\textit{P}, \bot ) \rangle ,\) \(\langle \textsf {{\small Temporary Construction}}, 0, (\bot , \textit{F}) \rangle ,\) \(\langle \textsf {{\small Access to Parking}}, \) \(0, (\textit{P}, \bot ) \rangle ,\) \(\langle \textsf {{\small Build Bike Lanes}}, 0, (\textit{F}, \bot ) \rangle \}\). Have Bike Lanes and Cyclist Safety are not assigned values because Have Design does not have a value and one cannot be generated through forward analysis.
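The fix-point structure of Propagate can be sketched compactly in Python, reusing the update_label stand-in above; the encoding of relationships as (kind, sources, target) triples is illustrative.

def propagate(relationships, a_goals, u_goals):
    # a_goals: goal -> (s, d); u_goals: goals still awaiting a value.
    while True:                                    # Lines 2-15: fix-point loop
        size = len(a_goals)
        for target in list(u_goals):
            t_rels = [r for r in relationships if r[2] == target]
            if t_rels and all(src in a_goals       # Line 9: all sources set
                              for r in t_rels for src in r[1]):
                old = {**a_goals, target: (BOT, BOT)}  # unassigned floor (⊥, ⊥)
                a_goals[target] = update_label(target, t_rels, old)
                u_goals.discard(target)            # Lines 12-13
        if len(a_goals) == size:                   # no new assignments:
            return a_goals, u_goals                # fix point reached

As in the example above, a goal such as Have Design with no fully assigned incoming sources simply remains in u_goals; forward analysis never invents a value for it.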

CHECK_PROP function (Algorithm 3). The Check_Prop function (see Algorithm 3) determines whether any intention evaluations in a goal graph violate the propagation rules for relationships (see Fig. 6). This function is used to verify that assignments made by Propagate (see Algorithm 2) satisfy all propagation rules. The inputs to Check_Prop are \(G\): the set of goals, \(R\): the set of relationships, \(\Pi \): the set of time points, and \({\textit{aGoals}} \): a set of tuples that map intentions to evidence pairs (\(\langle g, t, \llbracket g[t] \rrbracket \rangle \)). The main loop (see Lines 2–16) iterates over each tuple in aGoals and checks the relationships where the goal is a target node. The relationship loop (see Lines 5–14) iterates over each relationship where g is the target goal. For each relationship, the function loops over the source intentions to determine whether they are assigned (see Lines 7–9), meaning that they are elements of aGoals. Evidence pairs for source goals are stored in \({\textit{sEvals}}\). If the source goal has an assignment (i.e., \({\textit{sEvals}} \ne \emptyset \)), then the relationship is added to \({\textit{linkEvalSet}}\) to be checked. When all relationships have been added to \({\textit{linkEvalSet}}\), Sat_Test is called on Line 16, which checks the assignments against the rules in Fig. 6. If a violation is found, Check_Prop returns \({\textit{false}}\). Once all intentions have been checked without discovering a violation, Check_Prop returns \({\textit{true}}\).

For example, Check_Prop would return \({\textit{true}}\) for the output of the Propagate example discussed above, where \({\textit{aGoals}} = \{\langle \textsf {{\small Permanent Construction}}, 0, (\textit{F}, \bot ) \rangle ,\) \(\langle \textsf {{\small Bike Lane Usage}}, 0, (\textit{P}, \bot ) \rangle ,\) \(\langle \textsf {{\small Temporary Construction}}, 0, (\bot , \textit{F}) \rangle ,\) \(\langle \textsf {{\small Access to Parking}}, 0, (\textit{P}, \bot ) \rangle ,\) \(\langle \textsf {{\small Build Bike Lanes}}, 0, (\textit{F}, \bot ) \rangle \}\). However, only the \(\textsf {and}\) relationship for Build Bike Lanes and the + relationship for Access to Parking would have been checked by Sat_Test on Line 16.
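A simplified version of this check recomputes the expected pair with the update_label stand-in above and compares it to the assignment; the real Sat_Test consults the Fig. 6 rules directly, so this equality test is an approximation.

def check_prop(relationships, a_goals):
    # True iff every fully-assigned target matches its propagated pair.
    for g, pair in a_goals.items():
        t_rels = [r for r in relationships if r[2] == g]
        if not t_rels or not all(s in a_goals
                                 for r in t_rels for s in r[1]):
            continue                     # skip goals we cannot check
        expected = update_label(g, t_rels, {**a_goals, g: (BOT, BOT)})
        if pair != expected:
            return False                 # a propagation rule is violated
    return True                          # no violations discovered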

1.2 Evolving function algorithms

In this subsection, we present three helper functions (i.e., Assign_Constant, Check_EI_Func, and Check_NotBoth) used in Algorithm 1 to check the evolving functions in the evolving goal graph.

ASSIGN_CONSTANT function (Algorithm 4). The purpose of the Assign_Constant function (see Algorithm 4) is to increase the efficiency of Algorithm 1 by assigning all evidence pairs that are known prior to reasoning because they are constrained by a Constant function. Assign_Constant iterates over each intention with a User-Defined function (see Line 2) and, within it, over each atomic function (see Line 3). If the function f is a Constant function, then Algorithm 4 assigns evidence pairs over the specified period (see Lines 9–10). Prior to this assignment, Algorithm 4 also checks that the proposed assignment does not conflict with any User Evaluations in UEval. If a conflict is found, the algorithm returns \({\textit{false}}\), meaning the graph is infeasible (see Line 7).

In the bike lanes example, Assign_Constant selects evidence pairs for Have Design over the entire path (\((\bot , \textit{F}) \) at \(t=0\) and \((\textit{F}, \bot ) \) after), and for Bike Lane Usage at \(t=\{17, 32, 45\}\) when it is satisfied \((\textit{F}, \bot ) \) in the summer.
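A sketch of this pre-assignment step follows; the tuple layout mirrors Definition 8, while the dictionary-based representation of UEval is our assumption.

def assign_constant(ef, path, u_eval, a_goals):
    # ef: goal -> [(type, x, start, stop)]; u_eval: (goal, t) -> pair.
    for g, funcs in ef.items():                  # Line 2: each intention
        for ftype, x, start, stop in funcs:      # Line 3: each atomic func
            if ftype != "Constant":
                continue
            for t in (t for t in path if start <= t < stop):
                if u_eval.get((g, t), x) != x:
                    return False                 # Line 7: graph infeasible
                a_goals[(g, t)] = x              # Lines 9-10: pre-assign

    return True

For instance, with a Constant function mapping Bike Lane Usage to (F, ⊥) over the summer time points, a_goals would gain entries for t = 17, 32, and 45 before the main search begins.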

CHECK_EI_FUNC function (Algorithm 5). The purpose of the Check_EI_Func function (see Algorithm 5) is to check that no evolving functions for intentions have been violated while assigning evidence pairs. This Boolean function takes as input a set of goal assignments \({\textit{aGoals}} \), the time point path \(\Pi \), and \(EF\), which consists of mappings between intentions and User-Defined functions (see Definition 12). Each mapping, made up of atomic functions (see Definition 8), takes the form

$$\begin{aligned} \langle g, \{ \langle {type}_0, x_0, {start}_0, {stop}_0 \rangle ,\ldots ,\langle {type}_n, x_n, {start}_n, {stop}_n \rangle \} \rangle . \end{aligned}$$

Check_EI_Func returns \({\textit{true}}\) if no atomic functions have been violated. The main loop (see Lines 2–20) iterates over each item in the set \(EF\). The function loop (see Lines 4–20) then iterates over each atomic function (initializing values on Line 5) and checks each value of the time point path \(\Pi \) within the range \([{start}, {stop})\), depending on the type (i.e., Constant, Increase, Decrease). For example, the check for Constant (see Lines 6–8) verifies that each assigned evidence pair (i.e., one having a value in \({\textit{aGoals}} \)) is equal to \(x\). If all checks pass and the main loop completes, Check_EI_Func returns \({\textit{true}}\) on Line 21; if a single check fails, \({\textit{false}}\) is returned.

As mentioned when discussing Definition 6 (see Sect. 5), if stakeholders add a user evaluation in conjunction with an evolving function, this may result in an inconsistency. Such inconsistencies are found by Check_EI_Func. For example, if Have Bike Lanes has the evolving function \(\langle { \textsc {Constant}}, (\textit{F}, \bot ),\) \(t_{BLOpen}, t_{end} \rangle \) and is accidentally assigned the user evaluation \(\langle \textsf {{\small Have Bike Lanes}}, t_{a_2}, (\bot , \textit{F}) \rangle \), where \(t_{BLOpen}\) is before \(t_{a_2}\), then this inconsistency would be found by Check_EI_Func on Line 8, returning \({\textit{false}}\).
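The three type checks can be sketched as follows; treating Increase and Decrease as monotonicity under Python's tuple ordering is our assumption, since the paper orders evidence pairs by its own relation over \(\mathcal {E}\), and the path is assumed to be sorted.

def check_ei_func(a_goals, path, ef):
    for g, funcs in ef.items():                  # Lines 2-20: each mapping
        for ftype, x, start, stop in funcs:      # Lines 4-20: each function
            vals = [a_goals[(g, t)] for t in path
                    if start <= t < stop and (g, t) in a_goals]
            if ftype == "Constant":              # Lines 6-8: must equal x
                if any(v != x for v in vals):
                    return False
            elif ftype == "Increase":            # evidence must not drop
                if any(a > b for a, b in zip(vals, vals[1:])):
                    return False
            elif ftype == "Decrease":            # evidence must not rise
                if any(a < b for a, b in zip(vals, vals[1:])):
                    return False
    return True                                  # Line 21: all checks passed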

CHECK_NOTBOTH function (Algorithm 6). Similarly, Check_NotBoth inspects the assignments of both intentions in each NotBoth relationship function in the evolving goal graph (see Algorithm 6). Check_NotBoth returns \({\textit{true}}\) iff the values of all assignments are consistent with Definition 15. NotBoth relationships are stored in the set of relationships R; thus, the main loop (see Lines 2–20) iterates over each relationship and only investigates further for NBN and NBD relationships. After initializing relevant values, the time point loop (see Lines 8–11) iterates over each time point in \(\Pi \). For time points before the decision (\(t < t_{Ref}\)), Check_NotBoth verifies that assigned intentions (i.e., intentions having values in \({\textit{aGoals}} \)) are \((\bot , \bot ) \) (see Lines 12–15). For the decision time point \(t_{Ref}\), Check_NotBoth verifies that one intention is \((\textit{F}, \bot ) \) and the other has \({\textit{fVal}}\), the value specified by the type, where \({\textit{fVal}}\) is \((\bot , \bot ) \) for NBN and \((\bot , \textit{F}) \) for NBD (see Lines 12–15). For time points after the decision (\(t > t_{Ref}\)), Check_NotBoth verifies that assigned values are the same as those at the time the decision was made (i.e., at \(t_{Ref}\)) (see Lines 16–20). In the bike lanes example, Check_NotBoth would test the NBN relationship between Permanent Construction and Temporary Construction.
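The three-phase check for a single NotBoth relationship can be sketched as follows; the encoding and helper layout are assumptions, with t_ref standing in for the decision time point \(t_{Ref}\).

BOT, P, F = 0, 1, 2
NONE, FS, FD = (BOT, BOT), (F, BOT), (BOT, F)   # (⊥,⊥), (F,⊥), (⊥,F)

def check_notboth(kind, g1, g2, t_ref, path, a_goals):
    f_val = NONE if kind == "NBN" else FD       # NBD denies the other side
    for t in path:
        v1, v2 = a_goals.get((g1, t)), a_goals.get((g2, t))
        if t < t_ref:                           # before: assigned => (⊥,⊥)
            if v1 not in (None, NONE) or v2 not in (None, NONE):
                return False
        elif t == t_ref:                        # at the decision point:
            if {v1, v2} != {FS, f_val}:         # one (F,⊥), the other fVal
                return False
        else:                                   # after: values persist
            if v1 not in (None, a_goals.get((g1, t_ref))):
                return False
            if v2 not in (None, a_goals.get((g2, t_ref))):
                return False
    return True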


Cite this article

Grubb, A.M., Chechik, M. Formal reasoning for analyzing goal models that evolve over time. Requirements Eng 26, 423–457 (2021). https://doi.org/10.1007/s00766-021-00350-8
