LACL, building P2 (ground floor), council room P2-131, how to get there
Abstract: Time-bounded reachability problems are concerned with assessing whether a model's trajectories traverse a given region of the state space within given time bounds. In the case of stochastic models, reachability is associated with a probability measure that depends on the model's parameters. We propose a methodology that, given a reachability specification for a parametric stochastic model, computes a reachability-related probability distribution on the parameter space, i.e. a distribution that identifies the regions of the parameter space for which there is a non-null probability of satisfying the considered reachability specification. The methodology relies on characterising the distance between a model's trajectory and a reachability specification, which we show can be assessed by using a hybrid automaton as a monitor of the model's trajectory. An automaton-based adaptation of the Approximate Bayesian Computation method is then introduced to estimate the reachability distribution on the parameter space.
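As a rough illustration of the kind of estimation the abstract describes, the sketch below shows plain ABC rejection sampling against a reachability distance: parameter draws are kept when the simulated trajectory comes within `epsilon` of the specification. The toy random-walk model, the `simulate` and `distance` functions, and the threshold-based distance are our own illustrative assumptions; they stand in for the talk's hybrid-automaton monitor, which is not reproduced here.

```python
import random

def abc_rejection(prior_sample, simulate, distance, n_samples, epsilon, rng):
    """Plain ABC rejection sampling: keep parameter draws whose simulated
    trajectory lies within epsilon of the reachability specification
    (distance == 0 means the trajectory satisfies the specification)."""
    accepted = []
    while len(accepted) < n_samples:
        theta = prior_sample(rng)
        if distance(simulate(theta, rng)) <= epsilon:
            accepted.append(theta)
    return accepted

# Toy stochastic model (illustrative assumption): a random walk that
# steps +1 with probability p and -1 otherwise.
def simulate(p, rng, steps=50):
    x, traj = 0, [0]
    for _ in range(steps):
        x += 1 if rng.random() < p else -1
        traj.append(x)
    return traj

# Reachability specification: reach level 5 within the time bound.
# The distance is 0 when the region is reached, else the remaining gap.
def distance(traj, target=5):
    return max(0, target - max(traj))

rng = random.Random(0)
post = abc_rejection(lambda r: r.random(), simulate, distance,
                     n_samples=200, epsilon=0, rng=rng)
# The accepted p values approximate the reachability distribution on the
# parameter space; they concentrate above 0.5, where upward drift makes
# the specification attainable.
print(sum(post) / len(post))
```

In the actual methodology the hand-written `distance` would be replaced by the value computed by the hybrid-automaton monitor running alongside the trajectory.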
Abstract: Process mining techniques use event logs containing real process executions in order to mine, align and extend process models (commonly BPMN models or Petri nets). Partitioning an event log into trace variants facilitates the understanding and analysis of traces, so it is a common pre-processing step in process mining environments. Trace clustering automates this partition; traditionally it has been applied without taking the availability of a process model into consideration. In this work we extend our previous model-based trace clustering by allowing cluster centroids to have a complex structure: centroids can then range from partial orders down to sub-nets of the initial process model. This new clustering framework is able to cluster together traces that are distant only due to concurrency or loop constructs in process models. The method is encoded as a SAT problem, and we present a sampling method to deal with large datasets.
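To make the pre-processing step concrete, here is a minimal sketch of the plain trace-variant partition the abstract starts from (the event log and function name are illustrative assumptions, not the authors' code). It also shows the limitation that motivates the talk: purely sequence-based variants keep apart traces that differ only by the ordering of concurrent activities.

```python
from collections import defaultdict

def trace_variants(event_log):
    """Partition an event log (a list of traces, each a sequence of
    activity labels) into trace variants: traces with identical
    activity sequences fall into the same variant."""
    variants = defaultdict(list)
    for idx, trace in enumerate(event_log):
        variants[tuple(trace)].append(idx)
    return dict(variants)

log = [
    ["a", "b", "c"],
    ["a", "c", "b"],  # differs from the first only in the order of b and c
    ["a", "b", "c"],
]
print(trace_variants(log))
```

Note that `["a", "b", "c"]` and `["a", "c", "b"]` end up in different variants even if `b` and `c` are concurrent in the model; the model-based clustering described in the abstract, with partial orders or sub-nets as centroids, is precisely what allows such traces to be grouped together.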