@phdthesis{Alia2023,
  author   = {Alia, Gazmend},
  title    = {Artificial Intelligence Methodologies for Optimization in High Dimensional Spaces with focus in Transistor Models},
  year     = {2023},
  keywords = {Machine learning, Artificial Intelligence, Differential Evolution, Transistor models, Calibration, Parameter extraction, Fitting, Automation},
  abstract = {The huge boost in computing power and speed has opened new horizons for science and technology. One key consequence is the ability to simulate much faster and more accurately than ever before, allowing very realistic modelling of phenomena and opening new paths towards better understanding and better solutions. More complex and more realistic models come hand in hand with many new challenges that require novel approaches. One of the most important challenges is the calibration of the models with respect to given reference data: determining the parameters of the model such that its behavior in simulation is as close as possible to reference data measured in the real world. This is the first and most critical step required to make the models valid, i.e. usable, for further studies and development. This research focuses on models of transistors. They are complex physics-based models that contain hundreds of parameters in order to capture as many real effects as possible. These models are the bridge between the foundry and the designers, enabling the latter to perform hundreds and thousands of simulations before building the hardware system. Faster and more accurate models directly translate into faster development and products with a better performance-over-cost ratio. The calibration in these high dimensional spaces, with at least 50 and up to hundreds of parameters, so that the simulations of the transistor match the real behavior as well as possible, is nowadays performed manually. The state of the art in industry today is to calibrate the models by hand in a cyclic, iterative process of 30 to 50 steps. In each step, the engineers try to fit only a small part of the characteristics of the transistor by tuning a few parameters at a time. This time-consuming process takes 2 to 4 weeks for three main reasons: the large number of parameters, the complex interactions between parameters, and the large amount of measurement data. The most critical point is that when calibrating one part of the characteristic, other parts are highly likely to get worse, because of the strong interactions that exist among model parameters. In this work, methodologies based on machine learning are developed that achieve fast, high-quality, scalable, and general-purpose automatic calibration. Our new approach minimizes human intervention during the calibration process. The automation spans the full calibration process, from input data processing up to the final model generation. Genetic Algorithms form the foundation of the methods. For the first time, Differential Evolution is successfully adapted to the model calibration problem. In addition, a smart sampling algorithm and two novel optimization algorithms, Differential Evolution with Decision Tree Classifier (DE-DTC) and Differential Evolution with Population Prediction (DE-PP), are developed.
They accelerate convergence to the final solution by at least a factor of 5 compared to pure Differential Evolution. Finally, model-to-model learning, i.e. using knowledge from previous calibrations, speeds up the process by at least another factor of 5. The methods are tested on hundreds of real use cases, including BSIM4, HiSIM-HV, BSIMBULK, and custom IGBT transistor models. The automatic calibration is at least 20 times faster for a single model and roughly 250 times faster when hundreds of models are calibrated, and it consistently yields more accurate models than manual calibration. 200 models are calibrated in 48 hours of pure computing time, compared to the 1.5 years that manual calibration takes. The methodologies developed here are widely applicable to many other industrial and academic problems where the challenge is to fit a model to given reference data. Such problems appear often in vehicle aerodynamics, weather forecasting, biological system behavior, price prediction in finance, thermal behavior, electromagnetic behavior, and more. Although these topics look different, at their core they all pose the same optimization problem, which makes our approach very generic.},
  school   = {Universität der Bundeswehr München},
}
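The abstract above builds on classic Differential Evolution, so a minimal sketch of the standard rand/1/bin variant is given below for orientation. It is placed after the entry because BibTeX ignores text outside of entries. This is not the author's implementation, and the thesis-specific DE-DTC and DE-PP extensions are not sketched here; all names and defaults (objective, bounds, popsize, F, CR, generations) are illustrative assumptions.

import numpy as np

def differential_evolution(objective, bounds, popsize=20, F=0.8, CR=0.9,
                           generations=200, rng=None):
    # Classic DE (rand/1/bin): minimize objective(x) within box bounds.
    rng = np.random.default_rng() if rng is None else rng
    dim = len(bounds)
    lo, hi = np.asarray(bounds, dtype=float).T          # per-parameter limits
    pop = lo + rng.random((popsize, dim)) * (hi - lo)   # random initial population
    fitness = np.array([objective(x) for x in pop])
    for _ in range(generations):
        for i in range(popsize):
            # Mutation (rand/1): combine three distinct other individuals.
            idx = rng.choice([j for j in range(popsize) if j != i],
                             size=3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover: mix mutant and current individual.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True             # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection: keep the trial only if it is no worse.
            f_trial = objective(trial)
            if f_trial <= fitness[i]:
                pop[i], fitness[i] = trial, f_trial
    best = int(np.argmin(fitness))
    return pop[best], fitness[best]

# Toy usage in the spirit of model calibration: fit two parameters of a
# synthetic "model" to reference data by minimizing the squared error.
xdata = np.linspace(0.0, 1.0, 50)
ref = 2.5 * np.exp(-1.3 * xdata)                        # pretend measurement data
sse = lambda p: float(np.sum((p[0] * np.exp(-p[1] * xdata) - ref) ** 2))
params, loss = differential_evolution(sse, bounds=[(0.0, 5.0), (0.0, 5.0)])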