====================
Runtime Verification
====================

Runtime Verification (RV) is a lightweight (yet rigorous) method that
complements classical exhaustive verification techniques (such as *model
checking* and *theorem proving*) with a more practical approach for complex
systems.

Instead of relying on a fine-grained model of a system (e.g., a
re-implementation at the instruction level), RV works by analyzing the trace
of the system's actual execution, comparing it against a formal specification
of the system behavior.

The main advantage is that RV can give precise information about the runtime
behavior of the monitored system, without the pitfalls of developing models
that require a re-implementation of the entire system in a modeling language.

Moreover, given an efficient monitoring method, it is possible to execute an
*online* verification of a system, enabling a *reaction* to unexpected
events, avoiding, for example, the propagation of a failure in safety-critical
systems.

Runtime Monitors and Reactors
=============================

A monitor is the central part of the runtime verification of a system. The
monitor stands in between the formal specification of the desired (or
undesired) behavior and the trace of the actual system.

In Linux terms, the runtime verification monitors are encapsulated inside the
*RV monitor* abstraction. An *RV monitor* includes a reference model of the
system, a set of instances of the monitor (per-cpu monitor, per-task monitor,
and so on), and the helper functions that glue the monitor to the system via
trace, as depicted below::

  Linux  +----- RV Monitor ------------------------------------+ Formal
   Realm |                                                     |  Realm
  +-------------------+     +----------------+     +-----------------+
  |   Linux kernel    |     |     Monitor    |     |     Reference   |
  |     Tracing       | ->  |   Instance(s)  | <-  |       Model     |
  | (instrumentation) |     | (verification) |     | (specification) |
  +-------------------+     +----------------+     +-----------------+
         |                          |                       |
         |                          V                       |
         |                     +----------+                 |
         |                     | Reaction |                 |
         |                     +--+--+--+-+                 |
         |                        |  |  |                   |
         |                        |  |  +-> trace output ?  |
         +------------------------|--|----------------------+
                                  |  +----> panic ?
                                  +-------> <user-specified>

In addition to the verification and monitoring of the system, a monitor can
react to an unexpected event. The forms of reaction can vary from logging the
event occurrence to the enforcement of the correct behavior to the extreme
action of taking a system down to avoid the propagation of a failure.

In Linux terms, a *reactor* is a reaction method available to *RV monitors*.
By default, all monitors should provide a trace output of their actions,
which is already a reaction. In addition, other reactions will be available
so that the user can enable them as needed.

For further information about the principles of runtime verification and
RV applied to Linux:

  Bartocci, Ezio, et al. *Introduction to runtime verification.* In: Lectures
  on Runtime Verification. Springer, Cham, 2018. p. 1-33.

  Falcone, Ylies, et al. *A taxonomy for classifying runtime verification
  tools.* In: International Conference on Runtime Verification. Springer,
  Cham, 2018. p. 241-262.

  De Oliveira, Daniel Bristot. *Automata-based formal analysis and
  verification of the real-time Linux kernel.* Ph.D. Thesis, 2020.

Online RV monitors
==================

Monitors can be classified as *offline* or *online* monitors. *Offline*
monitors process the traces generated by a system after the events, generally
by reading the trace execution from a permanent storage system. *Online*
monitors process the trace during the execution of the system. Online monitors
are said to be *synchronous* if the processing of an event is attached to the
system execution, blocking the system during the event monitoring. On the
other hand, an *asynchronous* monitor has its execution detached from the
system. Each type of monitor has a set of advantages. For example, *offline*
monitors can be executed on different machines but require operations to save
the log to a file. In contrast, *synchronous online* methods can react at the
exact moment a violation occurs.

Another important aspect regarding monitors is the overhead associated with
the event analysis. If the system generates events at a frequency higher than
the monitor's ability to process them on the same system, only the *offline*
methods are viable. On the other hand, if the tracing of the events incurs
higher overhead than the simple handling of an event by a monitor, then a
*synchronous online* monitor will incur lower overhead.

Indeed, the research presented in:

  De Oliveira, Daniel Bristot; Cucinotta, Tommaso; De Oliveira, Romulo Silva.
  *Efficient formal verification for the Linux kernel.* In: International
  Conference on Software Engineering and Formal Methods. Springer, Cham, 2019.
  p. 315-332.

shows that, for Deterministic Automata models, the synchronous processing of
events in-kernel causes lower overhead than saving the same events to the
trace buffer, not even considering collecting the trace for user-space
analysis. This motivated the development of an in-kernel interface for online
monitors.

For further information about modeling of Linux kernel behavior using
automata, see:

  De Oliveira, Daniel B.; De Oliveira, Romulo S.; Cucinotta, Tommaso. *A thread
  synchronization model for the PREEMPT_RT Linux kernel.* Journal of Systems
  Architecture, 2020, 107: 101729.

The user interface
==================

The user interface resembles the tracing interface (on purpose). It is
currently at "/sys/kernel/tracing/rv/".

The following files/folders are currently available:

**available_monitors**

- Reading lists the available monitors, one per line.

For example::

  # cat available_monitors
  wip
  wwnr

**available_reactors**

- Reading shows the available reactors, one per line.

For example::

  # cat available_reactors
  nop
  panic
  printk

**enabled_monitors**

- Reading lists the enabled monitors, one per line
- Writing to it enables a given monitor
- Writing a monitor name with a '!' prefix disables it
- Truncating the file disables all enabled monitors

For example::

  # cat enabled_monitors
  # echo wip > enabled_monitors
  # echo wwnr >> enabled_monitors
  # cat enabled_monitors
  wip
  wwnr
  # echo '!wip' >> enabled_monitors
  # cat enabled_monitors
  wwnr
  # echo > enabled_monitors
  # cat enabled_monitors
  #

Note that it is possible to enable more than one monitor concurrently.

**monitoring_on**

This is an on/off general switcher for monitoring. It resembles the
"tracing_on" switcher in the trace interface.

- Writing "0" stops the monitoring
- Writing "1" continues the monitoring
- Reading returns the current status of the monitoring

Note that it does not disable enabled monitors but stops the per-entity
monitors from monitoring the events received from the system.
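
For example, a possible interaction could look like the following, assuming
monitoring is currently on (the values shown are illustrative)::

  # cat monitoring_on
  1
  # echo 0 > monitoring_on
  # cat monitoring_on
  0
  # echo 1 > monitoring_on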

**reacting_on**

- Writing "0" prevents reactions from happening
- Writing "1" enables reactions
- Reading returns the current status of the reaction
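
For example, a possible interaction could look like the following, assuming
reactions are currently enabled (the values shown are illustrative)::

  # cat reacting_on
  1
  # echo 0 > reacting_on
  # cat reacting_on
  0
  # echo 1 > reacting_on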

**monitors/**

Each monitor will have its own directory inside "monitors/". There the
monitor-specific files will be presented. The "monitors/" directory resembles
the "events" directory on tracefs.

For example::

  # cd monitors/wip/
  # ls
  desc  enable
  # cat desc
  wakeup in preemptive per-cpu testing monitor.
  # cat enable
  0

**monitors/MONITOR/desc**

- Reading shows a description of the monitor *MONITOR*

**monitors/MONITOR/enable**

- Writing "0" disables the *MONITOR*
- Writing "1" enables the *MONITOR*
- Reading returns the current status of the *MONITOR*
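
For example, enabling and checking the "wip" monitor through its own "enable"
file could look like this (the values shown are illustrative)::

  # cat monitors/wip/enable
  0
  # echo 1 > monitors/wip/enable
  # cat monitors/wip/enable
  1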

**monitors/MONITOR/reactors**

- Reading lists the available reactors, with the selected reactor for the
  given *MONITOR* inside "[]". The default one is the nop (no operation)
  reactor.
- Writing the name of a reactor enables it for the given MONITOR.

For example::

  # cat monitors/wip/reactors
  [nop]
  panic
  printk
  # echo panic > monitors/wip/reactors
  # cat monitors/wip/reactors
  nop
  [panic]
  printk