The objective of this thesis is to investigate the quality assessment of code
generated by Large Language Models (LLMs), specifically GitHub
Copilot~\cite{githubcopilot}. The study focuses on evaluating the quality of code produced by
Copilot and on optimizing prompts through prompt engineering techniques to
achieve the desired outcomes. The results demonstrate Copilot's
capabilities and limitations in producing quality code and offer new approaches
for improving user-tool interaction through targeted prompting techniques.