
Speaker

Daniel Steinigen

Fraunhofer IAIS

Electrifying AI: Training Large Language Models and Their Impact on Electronics Engineering

This contribution is made by the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS (https://www.iais.fraunhofer.de/de/institut/dresden.html). Fraunhofer IAIS has been one of the leading institutes in the field of artificial intelligence (AI) for over ten years. The institute, and in particular its Dresden site, conducts research on generative AI and trains its own large language models (LLMs) with a focus on European languages within the OpenGPT-X project (https://opengpt-x.de/), funded by the German Federal Ministry for Economic Affairs and Climate Action. Within the scope of this project and the EU-funded TrustLLM project, IAIS cooperates closely with TU Dresden in the field of artificial intelligence. A further focus of the team is the adaptation (fine-tuning) of the models to specific use cases and customer-specific developments that transfer generative AI into real applications. The in-house developed LLM Playground enables specially trained models to be compared with a large number of third-party open- and closed-source models with regard to their suitability (accuracy, response quality) for specific applications in fields such as automotive, public administration, or pharmaceuticals.
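To illustrate the kind of side-by-side comparison such a playground performs, the following is a minimal, hypothetical sketch in Python that runs the same domain prompt through two small open checkpoints from the Hugging Face Hub; the model names and the prompt are placeholders, not the models or evaluation procedure used in the LLM Playground.

```python
# Hypothetical sketch: compare two candidate models on the same prompt.
# The checkpoints below are placeholders for models under evaluation.
from transformers import pipeline

prompt = "Explain in one sentence what a programmable logic controller does."
candidates = ["gpt2", "distilgpt2"]

for name in candidates:
    generator = pipeline("text-generation", model=name)
    result = generator(prompt, max_new_tokens=40, do_sample=False)
    print(f"--- {name} ---")
    print(result[0]["generated_text"])
    print()
```

In practice such a comparison would additionally score the responses, for example against reference answers or with human raters, which is what a suitability assessment of the kind described above ultimately requires.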

Beyond that, generative AI can also be useful for various tasks in electronics engineering. LLMs have demonstrated the ability to understand and generate computer languages such as programming languages or markup languages, and they can be adapted to further computer languages through specific fine-tuning approaches. Computer languages are also used throughout electronics engineering: programmable logic controllers (PLCs), which control machines and plants, can be programmed in structured text; hardware description languages (HDLs) such as VHDL or Verilog describe the structure and behavior of electronic circuits when designing integrated circuits such as application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs); and schematics for electrical circuits can be captured as netlists. The application possibilities are manifold and range from code generation and completion, through the detection, analysis, and fixing of bugs, to code understanding and documentation of an entire program. Moreover, the use of multimodal models that combine text and vision modalities to analyze and interpret complex technical diagrams, such as circuit schematics or piping-and-instrumentation diagrams (P&IDs), is conceivable. Nevertheless, the processing of such complex images with multimodal models has not yet been solved and remains an open research topic.
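To make the adaptation idea concrete, here is a minimal, hypothetical sketch of fine-tuning a small open causal language model on a handful of Verilog snippets using the Hugging Face transformers and datasets libraries; the base checkpoint, the toy corpus, and the training settings are placeholders rather than the actual setup used at IAIS.

```python
# Hypothetical sketch: adapt a small causal LM to Verilog-style code.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy corpus of HDL snippets standing in for a real fine-tuning dataset.
snippets = [
    "module and_gate(input a, input b, output y); assign y = a & b; endmodule",
    "module dff(input clk, input d, output reg q);"
    " always @(posedge clk) q <= d; endmodule",
]
dataset = Dataset.from_dict({"text": snippets})
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="verilog-lm",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The same pattern would apply to structured text for PLCs or to netlist formats; only the training corpus changes.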

The talk will give insights into the training of large language models (large in this case meaning models with at least 7 billion parameters) as well as a comparison of the wide range of available and in-house trained LLMs. A special focus will be put on the use of generative AI for electronics engineering, as described above.
