Using AI to Help Code

Since the advent of ChatGPT (November 2022) and other Large Language Models (LLMs) capable of writing code from prompts, programmers have been using LLMs to help with their coding tasks.

LLM prompting is far from a science, and we are still learning how to use these models to generate correct and useful code for real projects. The idea of “Vibe Coding”, where a user prompts the AI to write an entire piece of code and then repeatedly re-prompts it to modify and update that code without any human-level debugging or editing, has become popular. Many professional programmers who have investigated vibe coding have concluded that their own programming skills are still important. Here are some references you may want to investigate yourself:

  1. Sarkar, A., & Fourney, A. (2025). “Vibe coding: programming through conversation with artificial intelligence.” arXiv preprint arXiv:2506.23253.

  2. “Vibe Coding in Practice: Motivations, Challenges, and a Future Outlook – a Grey Literature Review.” (2025). arXiv preprint arXiv:2510.00328.

  3. Greenman, S. (2025). “The AI Vibe Coding Paradox: Why Experience Matters More Than Ever.” Medium - Data Science Collective.

For this part of the assignment, we would like you to use your choice of LLM (from the Stanford AI Playground) to try to solve a challenging programming problem that is, in a number of ways, likely too advanced for a student coming directly out of CS106B. Even with the AI’s help, you may find that you can’t create runnable code that solves the problem; if that happens, we hope the experience highlights why understanding the code yourself still matters. In other words: this problem will be difficult for an LLM to solve, and more advanced programming skill from a human would be a real help in completing the code successfully.

Goals and Limits

  • For this part of the assignment, we would like you to use an LLM from the Stanford AI Playground to try to write a JPEG decoder based on your own Huffman code, and then answer a number of questions about your process and the level of success that you achieved.

    • We suggest using either OpenAI’s GPT-4.1 or GPT-5 models, or Anthropic’s Claude Sonnet 4.5 model. But you are welcome to try any of the models on the AI Playground.
  • You are allowed to prompt the AI as many times as you want, and you are allowed to try different models from the AI Playground.

  • Your code must use C++ without any additional libraries, and your own Huffman code must be integrated into the solution. Additionally, your code cannot use the Stanford Library’s GImage class or the Qt library’s QImage class.

  • Your code should be put into the jpegDecoder.cpp file, and you will need to start with the void loadJpeg(GCanvas& img, string filename) function, which should read a .jpg or .jpeg file and convert it into the Stanford Library GCanvas object, img, which has been passed in by reference (see the sketch after this list for one possible shape of that function).

  • Your code can use any helper functions, or even create one or more C++ classes. There isn’t any defined limit to the amount of code you have the AI produce.

  • Your solution must use your code from huffman.cpp in some useful way.

  • To test your code, run the Huffman assignment project, select 0 at the main testing menu (to run no tests), and then select J from the options menu. Finally, select a JPEG from the available files in the res directory.
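
To make that starting point concrete, here is a minimal, non-functional sketch of one way loadJpeg could be shaped. The decodeJpegBytes helper is a hypothetical placeholder (left commented out), and the GCanvas calls shown (setSize, setPixel, createRgbPixel) are our reading of the Stanford Library’s gcanvas.h interface; double-check the exact names against the headers in your project.

    // jpegDecoder.cpp -- a minimal sketch, not a working decoder
    #include <fstream>
    #include <iterator>
    #include <string>
    #include <vector>
    #include "gcanvas.h"
    using namespace std;

    void loadJpeg(GCanvas& img, string filename) {
        // 1. Read the raw bytes of the .jpg / .jpeg file.
        ifstream in(filename, ios::binary);
        vector<unsigned char> bytes((istreambuf_iterator<char>(in)),
                                    istreambuf_iterator<char>());

        // 2. Parse the JPEG headers, entropy-decode the scan data (this is
        //    where your Huffman code comes in), dequantize, apply the inverse
        //    DCT, and convert YCbCr to RGB. decodeJpegBytes is a hypothetical
        //    helper that you and the LLM would need to write.
        // JpegImage decoded = decodeJpegBytes(bytes);

        // 3. Copy the decoded pixels into the GCanvas passed in by reference.
        // img.setSize(decoded.width, decoded.height);
        // for (int y = 0; y < decoded.height; y++) {
        //     for (int x = 0; x < decoded.width; x++) {
        //         img.setPixel(x, y, GCanvas::createRgbPixel(r, g, b));
        //     }
        // }
    }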

The JPEG format

The JPEG format is a widely used lossy compression format for digital images. It was standardized in 1992, and its encoding and decoding algorithm combines many techniques to compress images to a fraction of their original size; Huffman encoding is one of those steps.

You do not need to fully understand the JPEG decoding algorithm for this part of the assignment; it is, indeed, quite a challenging algorithm.
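
To give you a feel for the structure a decoder has to handle, here is a small illustrative fragment (not part of any required interface) that walks the segment structure of a JPEG file: each segment begins with a 0xFF marker byte followed by a marker code, and most segments then carry a two-byte big-endian length. It assumes the file’s bytes have already been read into a vector, as in the loadJpeg sketch above.

    // Illustrative only: list the marker segments in a JPEG file.
    // Marker codes (0xD8 = SOI, 0xDB = DQT, 0xC4 = DHT, 0xC0 = SOF0,
    // 0xDA = SOS, 0xD9 = EOI) come from the JPEG standard.
    #include <cstdio>
    #include <vector>
    using namespace std;

    void listJpegSegments(const vector<unsigned char>& bytes) {
        size_t i = 2;                     // skip the 0xFFD8 SOI marker
        while (i + 3 < bytes.size() && bytes[i] == 0xFF) {
            unsigned char marker = bytes[i + 1];
            if (marker == 0xD9) break;    // EOI: end of image
            // The segment length is big-endian and counts its own two bytes.
            int length = (bytes[i + 2] << 8) | bytes[i + 3];
            printf("marker 0xFF%02X, %d bytes\n", (unsigned) marker, length);
            if (marker == 0xDA) break;    // SOS: entropy-coded data follows
            i += 2 + length;              // jump to the next marker
        }
    }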

Thoughts on LLM prompting for code generation

You have likely used an LLM before, and you may have used LLMs to generate code. Even so, here are some prompting suggestions that may help you get to a solution.

  • LLM prompts are not like search-engine queries – a short prompt is almost never enough. The initial prompts you provide for this assignment will likely be large, and include a lot of code for the LLM to digest.

  • Prompting an LLM to produce code involves clearly and fully describing the problem you need to solve. You aren’t going to need to tell an LLM how the JPEG algorithm works (it either already knows this, or can look it up), but you do need to describe the goal of the program, and give it the specific information it will need.

    • For example, because we are using the Stanford C++ Library, you will need to tell the LLM this, and possibly provide it with details about the library (e.g., about what a GCanvas object is). One way to do this is by providing it with header files or full C++ functions.

    • Because you need to use your own Huffman code, you will need to provide the LLM with your Huffman decompression code and the code from bits.h, bits.cpp, or both. This should be included in an early prompt (likely the first prompt). See the sketch after this list for one way your Huffman code might plug in.

  • You will almost certainly have to provide multiple prompts to the LLM to get to a solution. A single LLM thread is really all of your prompts so far, plus your new prompt, fed back into the LLM. LLMs do not have unlimited memory, and a single prompt thread can quickly degrade as it grows.

    • One way to avoid a memory overload is to start a new conversation, paste in the code that the LLM has already created along with a brief description of what it does, and then give it a new set of instructions. E.g., “Here is code for a JPEG decoder written using the Stanford C++ library and a pre-written Huffman Decoding algorithm. It has the following bug: … Please help fix that bug so that the JPEG decoder works properly. You must not use any external libraries other than the Stanford Library, and you may not use the GImage class or the QImage Qt class.”
  • Some LLMs will want to produce multiple files for you to use in your solution. This is fine, but adding a new file to a Qt project can be a little tricky. The easiest way to do it is to right/ctrl-click on the folder where you want to create the file (e.g., Sources), and then choose “Add New”. Then select “C/C++” and “C/C++ Source File”. Name the file with a .cpp extension. It should then compile into your program.
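
As a hedged illustration of what “using your code from huffman.cpp in some useful way” might look like: JPEG scan data is a stream of bits read most-significant-bit first, so one natural bridge is to unpack the scan bytes into the same Queue<Bit> representation your Huffman code already consumes, and then have the LLM build EncodingTreeNode trees from the JPEG’s Huffman (DHT) tables and reuse your traversal and decoding helpers. The sketch below assumes bits.h defines the Bit type and that the Stanford Queue class is available, as in the standard assignment interface; adjust it to match your own files.

    #include "bits.h"    // Bit, EncodingTreeNode (from your Huffman assignment)
    #include "queue.h"   // Stanford Library Queue

    // Unpack one byte of JPEG scan data into individual bits, MSB first, so
    // it can be fed to your existing Huffman decoding code. (Real scan data
    // also needs byte "unstuffing": a 0x00 byte that follows 0xFF must be
    // skipped; that step is omitted here.)
    void appendByteAsBits(unsigned char b, Queue<Bit>& bits) {
        for (int shift = 7; shift >= 0; shift--) {
            bits.enqueue(Bit((b >> shift) & 1));   // cast each bit to your Bit type
        }
    }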

If you get stuck

This is a challenging problem, designed to show you how difficult it is for a relatively new programmer to use an LLM for an advanced programming task.

We don’t want you to get too frustrated with this part of the assignment, though we do want you to give it a reasonable try (and you need to answer questions below about your process).

If you get stuck, you are welcome to come to the LaIR or Office Hours for assistance, as always.

When we say “we don’t want you to get too frustrated,” what this means in practice is that if you have been working on this part of the assignment for a few hours and can’t seem to make any more forward progress, you are welcome to stop and answer the questions. We will not be grading this part of the assignment on functionality for points, though we do want to see the code that you and the LLM produce. If you are successful, great! If not, please know that getting a working solution is not really the end goal here (though it would be great to do so).

Questions

Answer the following questions about your code in short_answer.txt:

Question 9:

Which AI model did you choose? (If you used multiple models, include all of them.)

Question 10:

What initial prompt did you use for the model (or models)? Paste the model’s response without the code it may or may not have produced.

Question 11:

Were you able to integrate the model’s code responses into your own code and get it to compile successfully?

Question 12:

If you needed to fix bugs, what was your process? What was easy/hard about that process? Did you simply ask the model to fix its errors, or did you attempt to fix them in the code yourself?

Question 13:

What, if anything, was frustrating about the process?

Question 14:

Does your program successfully decode JPEG images with the code that you and the AI produced? If so, how long did it take? If not, how long did you spend, and what made you finally give up trying?

Question 15:

Please reflect on which of the prompts that you used generated the most useful responses, and on whether there was anything specific you included that might have helped elicit good responses.

Please paste all of your prompts (not the responses) at the end of short_answer.txt. You can leave out your own code and the code for the Stanford libraries (just replace it with <code here>).