Artificial Intelligence for the Development of New Materials, Processes and Properties
The design of new materials, microstructures and manufacturing processes has, over millennia, been based on trial and error.
The knowledge acquired through chance discoveries, systematic experiments, and transfer from neighbouring scientific disciplines produced many empirical rules and, later, predictive theories. Combined with computer simulations, these have today matured into the backbone of materials science, enabling the community to discover and improve advanced materials and processes on the basis of detailed understanding. This classical ‘intelligent design’ approach is currently being challenged, and partly displaced, by the increasing use of advanced statistical analysis and artificial intelligence applied to large data sets.
Indeed, not only the automation of material production and testing, but also the rapid evolution of advanced characterization techniques and materials simulations, dramatically increases the volume of relevant information collected by our researchers. But unlike the dragons of ancient tales, who were content to guard the treasures they had collected, we aim to turn this treasure into scientific progress by combining advanced data analysis and artificial intelligence with existing materials science research.
MPIE research addresses in particular the following challenges and opportunities:
1. Experiments such as Atom Probe Tomography or Scanning Transmission Electron Microscopy produce very large data sets (several GB to TB per experiment) that encode the inherent microstructural patterns of the investigated sample. However, such data scatters due to the omnipresent statistical fluctuations in real materials and due to unavoidable noise in the experiment. Similar challenges arise in computer simulations that employ stochastic sampling to cover high-dimensional parameter spaces. Big-data analysis helps to reveal hidden patterns that emerge only once sufficient data has been accumulated.
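A minimal sketch of this last point, using only toy data: a weak periodic "microstructural" signal is buried in measurement noise, and averaging many repeated noisy scans makes the pattern emerge. The signal shape, amplitudes, and grid are invented for illustration and do not correspond to any real experiment.

```python
import math
import random

random.seed(0)

def noisy_signal(x):
    """A weak periodic pattern (amplitude 0.2) buried in strong noise (std 1.0)."""
    return 0.2 * math.sin(x) + random.gauss(0.0, 1.0)

def averaged_profile(n_repeats, n_points=64):
    """Average n_repeats noisy scans taken over the same x grid."""
    xs = [2 * math.pi * i / n_points for i in range(n_points)]
    profile = [
        sum(noisy_signal(x) for _ in range(n_repeats)) / n_repeats for x in xs
    ]
    return xs, profile

def correlation_with_pattern(xs, profile):
    """Normalized correlation between the averaged profile and the true sin(x) pattern."""
    num = sum(p * math.sin(x) for x, p in zip(xs, profile))
    den = math.sqrt(
        sum(p * p for p in profile) * sum(math.sin(x) ** 2 for x in xs)
    )
    return num / den

# With a single scan the pattern is invisible; with many scans the
# correlation with the hidden sin(x) pattern approaches 1.
for n_repeats in (1, 10, 1000):
    xs, profile = averaged_profile(n_repeats)
    print(n_repeats, round(correlation_with_pattern(xs, profile), 2))
```

The same statistical logic, on a vastly larger scale and with far richer models, is what allows patterns to surface from terabytes of scattered experimental or simulated data.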
2. Our experimentalists know how to identify and interpret microstructural and other features, but typically spend far more time (100–1000×) evaluating the data than producing it. We are developing tools for semi-automatic analysis, to free scientists to focus on the exceptional aspects and to reduce human error in routine tasks.
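The core idea of such semi-automatic triage can be sketched in a few lines, assuming hypothetical data: routine measurements are handled automatically, while statistical outliers are routed to an expert. The grain-size values, the z-score cut-off, and the function name are all illustrative choices, not part of any MPIE tool.

```python
from statistics import mean, stdev

def triage(measurements, z_cut=2.0):
    """Split routine measurements from exceptional ones needing a human.

    Values within z_cut standard deviations of the mean are accepted
    automatically; outliers are flagged for expert review.
    """
    mu, sigma = mean(measurements), stdev(measurements)
    routine, flagged = [], []
    for value in measurements:
        (routine if abs(value - mu) <= z_cut * sigma else flagged).append(value)
    return routine, flagged

# Hypothetical grain-size measurements in micrometres; one anomalous grain.
grain_sizes_um = [1.1, 0.9, 1.0, 1.2, 0.95, 1.05, 9.8]
routine, flagged = triage(grain_sizes_um)
print(flagged)  # -> [9.8]: only the anomalous grain reaches the expert
```

Real tools replace the z-score with learned models of what "routine" looks like, but the division of labour is the same: the machine handles the bulk, the scientist handles the exceptions.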
3. Data should be kept only if it is relevant and accessible. We therefore work on algorithms that filter for important data and reduce raw-data storage through preprocessing. Providing proper metadata to track the history of the sample, the measurement details, and the data-processing steps is crucial to ensure data quality and to enable reuse of available data in new contexts. Our goal is to attach and augment metadata automatically through the tools we develop, and to make this metadata accessible to other automated (meta)tools by linking it to materials science ontologies. To provide an infrastructure for creating, managing, and analysing data in a highly automated fashion from day one (i.e. for scientists without prior data-science experience), we developed the pyiron framework.
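To make the idea of automatically attached, machine-readable provenance concrete, here is a stdlib-only sketch (explicitly not the pyiron API): each processing tool appends to the metadata a record of what it did, when, with which parameters, and a checksum of the result. The sample name, instrument, and parameters are invented for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone

def with_provenance(data, metadata, step, **parameters):
    """Return a copy of metadata augmented with one processing step.

    Each tool records what it did, when, with which parameters, and a
    checksum of the resulting data, so the full history of a data set
    stays machine-readable. (Illustrative sketch, not the pyiron API.)
    """
    history = list(metadata.get("history", []))
    history.append(
        {
            "step": step,
            "parameters": parameters,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "sha256": hashlib.sha256(json.dumps(data).encode()).hexdigest(),
        }
    )
    return {**metadata, "history": history}

# Hypothetical sample record and measurements.
meta = {"sample": "Fe-9Cr", "instrument": "APT"}
raw = [0.12, 0.15, 0.11]
meta = with_provenance(raw, meta, "acquisition", detector="example-detector")

filtered = [x for x in raw if x > 0.115]
meta = with_provenance(filtered, meta, "noise_filter", threshold=0.115)

print(json.dumps(meta, indent=2))  # full, machine-readable history
```

Because every step is a structured record rather than free text, other tools can later query, validate, or reproduce the processing chain, which is what linking to ontologies ultimately enables.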
4. Artificial intelligence methods can be trained to make predictions from data without building conventional scientific models. We see great opportunities in this, but not for replacing traditional science: rather, we explore how artificial intelligence may guide theoretical and experimental work towards the most interesting conditions for exciting discoveries – after all, 90–99% of all research efforts turn out to be unsuccessful, scientific dead ends, or simply boring. Reducing that fraction by even a small amount will boost progress!
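One simple way such guidance can work is uncertainty sampling: propose the next experiment where existing data says the least. The toy sketch below uses distance to the nearest measured point as a crude stand-in for model uncertainty; the compositions and the selection rule are illustrative assumptions, not a description of an actual MPIE workflow.

```python
def most_informative(candidates, measured):
    """Pick the candidate condition farthest from anything measured so far.

    Distance to the nearest measured point serves as a crude proxy for
    model uncertainty: the farther a condition is from existing data,
    the more a new experiment there is expected to teach us. (Toy sketch.)
    """
    def uncertainty(x):
        return min(abs(x - m) for m in measured)
    return max(candidates, key=uncertainty)

measured_fractions = [0.10, 0.15, 0.50]      # compositions already tested
candidates = [0.05, 0.12, 0.30, 0.55, 0.90]  # possible next experiments
print(most_informative(candidates, measured_fractions))  # -> 0.9
```

Practical active-learning schemes replace this distance with a trained surrogate model and balance exploration against exploitation, but the principle is the same: let the data decide which of the many possible experiments is worth running next.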