AI tools have been popping up everywhere lately, from the news to group chats, and in education they have gone from “we’re not sure about this” to being part of everyday workflows for a lot of students. In software engineering especially, tools like ChatGPT, Google Bard, and GitHub Copilot can help with everything from understanding code to generating documentation.
In ICS 314, I experimented with AI mostly as a way to support my learning, not replace it. I used ChatGPT for brainstorming, debugging, and explanations. I tried GitHub Copilot a couple of times, but it didn't always fit my workflow for the WODs, and Bard was hit-or-miss, sometimes giving vague answers that weren't worth digging through. This class made me think a lot more intentionally about when AI is actually helpful and when it's better to figure things out myself.
Experience WODs (e.g., E18)
For E18, I used ChatGPT to ask, “Write a JavaScript function using Underscore to filter an array based on these conditions…” I got a working starting point, but it wasn’t exactly what the WOD needed. The benefit was that it helped me quickly recall the syntax for _.filter() and _.map(); the downside was that I had to rewrite chunks because the AI didn’t follow the problem constraints.
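Here is a minimal sketch of that kind of starting point. The filter conditions in it (an age cutoff and an active flag) are made up for illustration, since the actual WOD constraints were different:

```javascript
// A sketch of the kind of starting point ChatGPT gave me.
// The conditions here are hypothetical, not the real E18 constraints.
const _ = require('underscore');

function filterUsers(users) {
  // _.filter keeps only the elements for which the predicate returns true.
  return _.filter(users, (user) => user.age >= 18 && user.active);
}

const users = [
  { name: 'Ava', age: 22, active: true },
  { name: 'Ben', age: 17, active: true },
  { name: 'Cam', age: 30, active: false },
];

console.log(filterUsers(users)); // [{ name: 'Ava', age: 22, active: true }]
```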
In-class Practice WODs
During practice WODs, I almost never used AI. I wanted to get used to working under time pressure without outside help, since that’s closer to the actual assessment environment.
In-class WODs
As with the practice WODs, I did not use AI, mostly because switching tabs would have been a distraction and I knew the point was to rely on my own skills.
Essays
For essays, I sometimes used ChatGPT as a brainstorming partner. For example, when writing about Agile Project Management, I asked, “Can you list examples of Agile use outside software development?” I didn’t copy the response, but it helped me think of contexts like event planning or creative projects.
Final Project
Our group used AI for quick reference checks. If we forgot a React hook’s exact syntax, someone might drop it into ChatGPT and get a clean example. But the bulk of our project was hand-coded, since AI suggestions didn’t always follow our exact component structure.
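For reference, this is the kind of quick hook example someone would get back (useState here). The component is a made-up illustration, not one from our project:

```javascript
// A hypothetical quick-reference example for useState.
import { useState } from 'react';

function Counter() {
  // useState returns the current value and a setter; the argument is the initial state.
  const [count, setCount] = useState(0);

  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times
    </button>
  );
}

export default Counter;
```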
Learning a Concept / Tutorial
When learning functional programming with Underscore, I asked ChatGPT, “Explain _.pluck in simple terms with an example array of objects.” The explanation was actually clearer than the official docs for me, so I kept it in my notes.
Answering a Question in Class or in Discord
If someone in Discord asked a question I wasn’t sure about, I might check ChatGPT for a refresher before replying, just to make sure I didn’t accidentally give bad advice.
Asking or Answering a Smart Question
When I posted a smart question, AI sometimes helped me phrase it better. For example, I used ChatGPT to rewrite my problem description so it was concise and focused on the actual bug, not the whole backstory.
Coding Example
When we worked with Underscore, I asked ChatGPT, “Give me an example of using _.pluck to extract a property from a list of objects.” The answer matched the docs, but with simpler variable names that made it click for me.
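This is roughly what that example looked like. The data and variable names are my own simplified version, not the exact ones from the chat:

```javascript
// _.pluck pulls one property out of every object in a list.
const _ = require('underscore');

const pets = [
  { name: 'Rex', kind: 'dog' },
  { name: 'Mia', kind: 'cat' },
  { name: 'Sol', kind: 'fish' },
];

console.log(_.pluck(pets, 'name')); // ['Rex', 'Mia', 'Sol']
```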
Explaining Code
If I came across code in a tutorial that confused me, I would paste it into ChatGPT and ask, “Walk me through what this does line by line.” It was especially helpful for async/await examples that weren’t well-commented.
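As a rough illustration, here is the kind of snippet I would paste in, annotated with the sort of line-by-line comments I would get back. The function and URL are hypothetical:

```javascript
// A hypothetical async/await example with ChatGPT-style line-by-line comments.
async function loadUser(id) {
  // await pauses this function (not the whole program) until the fetch resolves.
  const response = await fetch(`https://api.example.com/users/${id}`);

  // fetch only rejects on network errors, so HTTP error statuses must be checked by hand.
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }

  // response.json() also returns a promise, so it needs its own await.
  return await response.json();
}
```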
Writing Code
I sometimes used AI to write small utility functions, like a quick input validator. But I learned quickly that letting AI write big chunks usually caused integration headaches.
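Here is a sketch of the kind of small utility that worked well. The validation rules (non-empty, max length, allowed characters) are hypothetical examples, not from an actual assignment:

```javascript
// A hypothetical input validator of the size AI handled well.
function isValidUsername(input) {
  // Reject non-strings and empty or whitespace-only values.
  if (typeof input !== 'string' || input.trim().length === 0) {
    return false;
  }
  // Keep it short and limited to letters, digits, and underscores.
  return input.length <= 20 && /^[A-Za-z0-9_]+$/.test(input);
}

console.log(isValidUsername('grader_314')); // true
console.log(isValidUsername('   '));        // false
```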
Documenting Code
AI was decent at generating JSDoc comments if I pasted the function in, but I still had to edit them for accuracy. It saved me from starting from scratch, though.
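Roughly what a generated comment looked like after editing; the function and its JSDoc are illustrative, not from the actual project:

```javascript
/**
 * Computes the average of an array of numbers.
 * @param {number[]} values - The numbers to average.
 * @returns {number} The arithmetic mean, or 0 for an empty array.
 */
function average(values) {
  if (values.length === 0) return 0;
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}
```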
Quality Assurance
When I had stubborn ESLint errors, I would paste the snippet into ChatGPT and ask, “Why is ESLint flagging this?” Sometimes it nailed it; other times it guessed wrong.
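As a hypothetical example of that workflow, ESLint’s prefer-const rule flags a `let` binding that is never reassigned, which is the kind of warning I would ask about:

```javascript
// Hypothetical snippet: ESLint's prefer-const rule flags `label` below
// because it is declared with `let` but never reassigned.
function describe(items) {
  let label = 'items'; // flagged: never reassigned, so ESLint wants const
  return `${items.length} ${label}`;
}
```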
Other Uses
Occasionally, I used AI for quick sanity checks before pushing code, basically asking, “Does this logic make sense for [task]?” This was more about catching overlooked cases than finding syntax errors.
Using AI in ICS 314 helped me speed up the “getting unstuck” part of coding. Instead of digging through Stack Overflow for half an hour, I could get a direct explanation in seconds. That said, I noticed that if I leaned on AI too much, I remembered less. I had to balance using it as a guide with forcing myself to work through problems manually.
Outside class, I used ChatGPT when working on a personal project to automate data cleaning for a spreadsheet. I asked it to write a Python script to reformat CSV files, and it saved me hours. The same approach could be used in real-world software engineering for quick prototyping.
The main challenge was accuracy. AI sometimes gave wrong answers that looked convincing. But there is a big opportunity for AI to be more integrated into coding environments so you don’t have to jump between tabs or copy-paste code.
Traditional methods like reading docs force you to fully understand the material, while AI can shortcut to the answer. For me, the best approach was using AI after trying the problem myself. This way, it reinforced my learning instead of replacing it.
I think AI will play a bigger role in software engineering education, especially for personalized explanations. But schools will have to teach students how to fact-check AI, just like they teach how to verify sources for research papers.
AI was a helpful tool in ICS 314, but only when I used it intentionally. It sped up my workflow, gave clearer explanations for tricky concepts, and helped with documentation. At the same time, it could easily become a crutch if I wasn’t careful. For future courses, I think AI should be encouraged as a supplement, not a replacement, for learning.