
Matthias Kettl


Software and Computational Systems Lab
Department of Computer Science
Ludwig-Maximilians-Universität München (LMU Munich)
Oettingenstraße 67
80538 Munich (Germany)

Room: F 009
Office Hours: By appointment
Email: firstname.lastname@sosy.ifi.lmu.de
ORCID: 0000-0001-7365-5030

GPG-Key

Please send me encrypted emails!
My GPG key: 0x0C6A6D6A013FD4CF
Fingerprint: DAF8 96DE 3569 D688 C46D BC0E 0C6A 6D6A 013F D4CF

Thesis Mentoring

Available topics
Distributed Value Analysis

CPAchecker is a framework for automatic software verification developed at our chair. As such, it has a broad technology stack that provides everything that is needed for verifying C programs. CPAchecker is able to distribute the analysis of a single task to different workers by decomposing the input program into blocks. Each worker analyzes one block and communicates its results to all other workers via messages to establish the correct verification result. Currently, only the predicate abstraction can be executed in this framework. We want to enable the value analysis for this approach, too. Your task would be to implement the (de)serialization of abstract states and their communication between workers. Additionally, the abstraction used by the value analysis has to work in the distributed setting, which has to be encoded in the framework.
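As a rough, hypothetical illustration of the block idea (this is not CPAchecker code; the block labels, the function square, and the concrete values are made up for this sketch), consider a task decomposed at a function boundary. The value analysis tracks maps from variables to concrete values, and exactly such a map would have to be serialized and sent between workers:

// Hypothetical illustration, not CPAchecker code: a task decomposed into
// two blocks at a function boundary.
extern void reach_error(void);

// Block B2, analyzed by worker 2: it needs the abstract state that
// worker 1 sends for the call site, e.g., the value-analysis map {n: 5}.
int square(int n) {
  return n * n;
}

// Block B1, analyzed by worker 1.
int main(void) {
  int x = 5;           // value analysis tracks the abstract state {x: 5}
  int y = square(x);   // block boundary: serialize {n: 5}, send it to worker 2,
                       // and incorporate the answer, e.g., {return: 25}
  if (y != 25) {
    reach_error();     // unreachable if the exchanged summaries are precise enough
  }
  return 0;
}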

Modular Distributed Summary Synthesis

Due to the rising availability of distributed computational power, distributing a verifier is of great interest. While distributing a single verifier is important, we consider how to use verifiers as building blocks in order to achieve distributed verification. Your goal would be to create a framework that makes this possible, based on existing theory.

Currently assigned topics
Distributed Incremental Verification

CPAchecker is a framework for automatic software verification developed at our chair. As such, it has a broad technology stack that provides everything that is needed for verifying C programs. CPAchecker is able to distribute the analysis of a single task to different workers by decomposing the input program into blocks. Each worker analyzes one block and communicates its results to all other workers via messages to establish the correct verification result. The referenced work uses incremental verification on code blocks. In our setting, the blocks are already available, and we can also handle the updates of the summaries in parallel.
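A hypothetical sketch of the incremental setting (the block labels and the function inc are made up; only the idea of reusing summaries comes from the topic description): when one block changes between two program versions, only its summary needs to be recomputed, while the summaries of unchanged blocks can be reused.

// Hypothetical illustration of the incremental setting; the block structure
// is already fixed, only changed blocks need new summaries.
extern void reach_error(void);

// Block B2: the only block that changes between the two program versions
// (version 1: "return n + 1;", version 2: "return n + 2;"), so only its
// summary has to be recomputed; updates of several changed blocks could
// even be handled in parallel.
int inc(int n) {
  return n + 1;
}

// Block B1: unchanged, so its summary and the messages derived from it can be reused.
int main(void) {
  int x = inc(1);
  if (x < 1) {
    reach_error();
  }
  return 0;
}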

Tubes: Identifying Abstract Error Traces

There might be several reasons for reaching a specific error location in a program. We want to identify all classes of inputs that trigger the same bug; they should be as abstract as possible but as precise as necessary. For a given program, we want to find out which restrictions on nondeterministic variables are necessary to reach an error location. For example, we might reach error location E1 if the nondeterministic variable x equals 1 and the nondeterministic variable y is less than 1024. Error location E2 might be reachable for x > 100 and y < 100. Otherwise, the program does not show faulty behavior. In our approach, we are interested in the most abstract restrictions on all inputs for reaching error locations instead of a single test vector like x = 1 & y = 1023 for E1 and/or x = 101 & y = 99 for E2.
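The example above can be written down as a small verification task. The following hypothetical program uses the common SV-COMP conventions __VERIFIER_nondet_int and reach_error; it reaches E1 exactly for the tube x = 1 and y < 1024, and E2 exactly for the tube x > 100 and y < 100:

// Hypothetical task for the example above: the tubes are
// E1: x = 1 and y < 1024,  E2: x > 100 and y < 100.
extern int __VERIFIER_nondet_int(void);
extern void reach_error(void);

int main(void) {
  int x = __VERIFIER_nondet_int();
  int y = __VERIFIER_nondet_int();

  if (x == 1 && y < 1024) {
    reach_error();               // error location E1
  } else if (x > 100 && y < 100) {
    reach_error();               // error location E2
  }
  // A single test vector such as x = 1, y = 1023 only witnesses one point of
  // the first tube; the tube itself describes all inputs reaching E1 at once.
  return 0;
}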

Reconstruction of Violation and Correctness Witnesses for Distributed Block Summaries

CPAchecker is a framework for automatic software verification developed at our chair. As such, it has a broad technology stack that provides everything that is needed for verifying C programs. CPAchecker is able to distribute the analysis of a single task to different workers by decomposing the input program into blocks. Each worker analyzes one block and communicates its results to all other workers via messages to establish the correct verification result. In case we find a violation, we want to provide the user with a counterexample that proves the reachability of the violation. Currently, neither counterexamples nor invariants can be synthesized from the messages. However, witnesses are important for increasing the trust in verification results.

Implementing and Evaluating a New CPA for Validating and Finding Tubes

Finding and fixing bugs is one of the most expensive parts of the software development process. To expedite it, a good description of all inputs that result in the same error can be helpful. To generate such descriptions, we will use CPAchecker to find constraints that, when fulfilled, result in an error; for example, every even number given as input results in an error. This is done by first finding an error path in the analyzed program and then refining an initial guess for the constraint using a CEGAR-based approach.
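A tiny, hypothetical task illustrating the "all even numbers" example (the program is made up; __VERIFIER_nondet_int and reach_error follow the common SV-COMP conventions):

// Hypothetical task: exactly the even inputs reach the error location, so the
// constraint describing all failing inputs is "n % 2 == 0".
extern int __VERIFIER_nondet_int(void);
extern void reach_error(void);

int main(void) {
  int n = __VERIFIER_nondet_int();
  if (n % 2 == 0) {
    reach_error();   // a CEGAR-style refinement would start from a single error
  }                  // path (e.g., n = 0) and generalize the guess to all even n
  return 0;
}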

Finding Semantically Equivalent Correctness Witnesses

Correctness witnesses are artifacts produced by a verifier that allow a validator to replay the verification process. The task would be to analyze correctness witnesses and find semantically equivalent witnesses from the perspective of a validator.
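As a hypothetical illustration (program and invariants made up for this sketch), two correctness witnesses for the same small program may state the loop invariant in syntactically different but semantically equivalent ways:

// Hypothetical program: two correctness witnesses could annotate the loop head
// with the invariants "0 <= i && i <= 10" and "i >= 0 && 10 - i >= 0".
// They are syntactically different but semantically equivalent, so a validator
// should be able to replay the proof with either of them.
int main(void) {
  int i = 0;
  while (i < 10) {   // loop head: this is where a witness invariant attaches
    i++;
  }
  return 0;
}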

Finished topics
Scaling Formal Verification: Parallel Analysis of Functions in CPAchecker

Verifying complex programs with respect to a given specification (e.g., 'check that function reach_error is never called') may be time-consuming. To make verifiers more applicable in industry, we want to tackle the problem with a divide-and-conquer approach. We decompose the verification task into multiple smaller tasks that are then verified in parallel. As soon as one of the analyses reports a violation, we abort all other analyses and report the violation. This will lead to many false alarms, since we do not know with which input parameters the functions will be called. Return values of functions will be ignored and interpreted as unknown values in a first step. From there we can continue in two directions:

  1. Analyze functions under a user-defined condition (e.g., a user restricts the values of parameter x of function foo to the interval [0;100]); see the sketch after this list.
  2. We identify each function call automatically and try to infer all parameters.
The goals of this topic are:
  1. The verification verdict should be available faster (in terms of wall time).
  2. There should be no wrong proofs: in case a program is safe, the decomposition should be a sound over-approximation.
Finally, we will evaluate this approach on the SV-COMP benchmark set. This topic is inspired by the tool Infer.
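As a minimal, hypothetical sketch of direction 1 (not code from CPAchecker or Infer; the function foo, the generated main, and the assume helper are illustrative assumptions, while __VERIFIER_nondet_int and reach_error follow the common SV-COMP conventions), a harness analyzes a single function in isolation under a condition on its parameter:

#include <stdlib.h>

// Hypothetical harness for analyzing the function foo in isolation (direction 1).
extern int __VERIFIER_nondet_int(void);
extern void reach_error(void);

// Simple assume: executions violating the condition are discarded.
static void assume(int cond) { if (!cond) abort(); }

int foo(int x) {
  if (x > 100) {
    reach_error();    // false alarm if foo is analyzed without any condition
  }
  return x + 1;
}

int main(void) {                 // generated harness, one per analyzed function
  int x = __VERIFIER_nondet_int();
  assume(0 <= x && x <= 100);    // user-defined condition on parameter x
  int r = foo(x);                // in a first step, the return value is ignored
  (void) r;                      // and treated as an unknown value
  return 0;
}

Without the condition, the isolated analysis of foo would report a (possibly false) alarm because x may take any value; with the condition restricting x to [0;100], the error location becomes unreachable.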

Fault Injection: Generating a Benchmark Set for Automated Fault Repair

The SV-COMP benchmark set is one of the largest benchmark sets of C programs for software verification. For all programs in the set, we know whether the program fulfills certain properties, e.g., that the function 'reach_error' is never called. The benchmarks help to evaluate state-of-the-art verification tools and their effectiveness. For fault-localization techniques, there is no large benchmark set that provides sufficient metadata for each task to perform a large-scale evaluation. The idea is to create such tasks by manually crafting them, injecting faults into safe tasks, or identifying fixes for unsafe tasks.
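A hypothetical illustration of fault injection into a safe task (the program and the injected mutation are made up; __VERIFIER_nondet_int and reach_error follow the common SV-COMP conventions):

// Hypothetical example of fault injection into a safe task.
extern int __VERIFIER_nondet_int(void);
extern void reach_error(void);

int main(void) {
  int x = __VERIFIER_nondet_int();
  int y = x;
  // Original (safe) task:    if (y != x) { reach_error(); }
  // Injected fault (unsafe): the comparison operator is flipped, so the error
  // location becomes reachable, and the injected location is exactly the
  // metadata a fault-localization benchmark needs.
  if (y == x) {
    reach_error();
  }
  return 0;
}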

Teaching

Publications, Theses, and Projects