"Designing Embedded Systems with Large Language Models" by Keeshan Rama and Paul Truong

Date of Award

Spring 2024

Document Type

Thesis

Publisher

Santa Clara : Santa Clara University, 2024

Department

Electrical and Computer Engineering

First Advisor

Hoeseok Yang

Abstract

AI has become increasingly involved in many facets of development, research, and education. Tools are now available to assist in almost every field imaginable: from students asking ChatGPT (GPT) for homework help to full-stack AI development with Devin, the future of generative AI tools is now. Large language models (LLMs) have been developed for a breadth of applications, and this is just the beginning. This research focuses on ChatGPT’s ability to write code, specifically Verilog for FPGA boards. As generative AI such as ChatGPT becomes more powerful and sophisticated, people should learn how to use it to assist them in their work. In this project, we aim to explore the functionality and limitations of coding exclusively with generative AI in order to benchmark how capable the AI is. GPT can generate code in many languages, Verilog among them. However, it often produces skeleton code that must be finished by the user, requiring multiple attempts before a productive answer is reached. To avoid redundant responses, we have been developing a strategy for refining our prompts so that each exchange yields constructive progress toward the end goal. Users should also have basic knowledge of the field in which they are using GPT, not only to identify when the tool has given a wrong response but also to recognize patterns and guide the tool when it becomes stuck. We are continuing development of a hardware accelerator on an FPGA board, written by ChatGPT, with which we hope to measure its ability to improve current image-blurring algorithms.
