Notes

Practical notes: working with AI coding tools, student opportunities, and effective prompting.

Working with AI Coding Tools

AI-assisted coding is now part of daily engineering. Used well, it makes you faster. Used badly, it builds you a maze you then have to escape.

Plan first, code second. Instead of telling the model “add this feature,” ask it to write a step-by-step plan: which files to change, and why. Agree on the plan before anything is written.

Work in small pieces. Handing the model a large change at once tends to bury bugs. One function, one test, one refactor at a time.

Read the tests with your own eyes. The model can write tests but not necessarily good ones. Watch for mocks that hide real failures.
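Here is a minimal sketch of the pattern to watch for. The function, the fake API, and the payload shape are all invented for illustration; the point is that a mock shaped around the bug makes the test pass while proving nothing.

```python
from unittest.mock import MagicMock

def fetch_user_name(api, user_id):
    # Real bug: the dictionary key is typo'd ("nme" instead of "name").
    return api.get(user_id)["nme"]

# Over-mocked test: the mock returns whatever keeps the test green,
# so the typo never surfaces.
mocked_api = MagicMock()
mocked_api.get.return_value = {"nme": "Ada"}  # mock shaped around the bug
assert fetch_user_name(mocked_api, 1) == "Ada"  # passes, proves nothing

# Test against a realistic payload: the bug shows up immediately.
class FakeApi:
    def get(self, user_id):
        return {"name": "Ada"}  # what the real API would actually return

try:
    fetch_user_name(FakeApi(), 1)
    bug_caught = False
except KeyError:
    bug_caught = True
```

Both tests were "green" in the mocked version; only the realistic payload exposed the typo. That is the kind of difference you can only see by reading.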

Keep version control alive. Commit after every small success. When something breaks, git reset --hard HEAD~1 becomes your best friend.
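The loop above can be sketched end to end. This is a toy repo driven from Python purely for illustration (it assumes git is on your PATH; the file name and commit messages are made up): commit after each green step, and when a change breaks things, one reset puts you back on solid ground.

```python
import pathlib
import subprocess
import tempfile

def git(*args, cwd):
    # Inline user config so commits work in a fresh, unconfigured repo.
    return subprocess.run(
        ("git", "-c", "user.email=me@example.com", "-c", "user.name=me", *args),
        cwd=cwd, check=True, capture_output=True, text=True,
    ).stdout

repo = pathlib.Path(tempfile.mkdtemp())
git("init", "-q", cwd=repo)

# Small success: commit it immediately.
(repo / "app.py").write_text("def add(a, b):\n    return a + b\n")
git("add", "app.py", cwd=repo)
git("commit", "-q", "-m", "add(): working version", cwd=repo)

# A bad edit lands (say, from an over-eager model)...
(repo / "app.py").write_text("def add(a, b):\n    return a - b\n")
git("add", "app.py", cwd=repo)
git("commit", "-q", "-m", "broken refactor", cwd=repo)

# ...so roll back exactly one commit and you are green again.
git("reset", "--hard", "HEAD~1", cwd=repo)
```

Because each commit is one small step, `HEAD~1` always means "the last thing that worked", which is what makes the reset safe.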

Ask “why?” When the model proposes a solution, ask why this approach is better than another. If you can’t follow the answer, the approach is probably wrong for you.

Student Resources & Opportunities

Being a student means getting professional-grade tools for free or at a large discount. The ones I’ve actually used:

  • GitHub Student Developer Pack — hundreds of subscriptions in one. Most valuable to me: $200 DigitalOcean credit, a free .me domain from Namecheap, a JetBrains IDE subscription, Canva Pro.
  • JetBrains Student License — free IDEs across the whole suite.
  • Google Cloud Free Tier — a genuinely useful always-free layer. Small FastAPI apps run essentially free on Cloud Run.
  • Notion Education — free Personal Pro.
  • Figma Education — free professional plan after student verification.
  • arXiv + Papers with Code — the best paywall-free door into current ML research.

One rule: never pick a service just because it’s free. Pick the one that solves your problem, then apply the student discount.

Effective Prompting

A good prompt is like a good bug report: it tells the reader what you expected, what happened, and the surrounding context.

Give context, then give the ask. Start by telling the model which files, frameworks, and code style matter. Then ask. Without context, the model falls back to its own assumptions.

Say what it shouldn’t do. Negative constraints like “don’t add new dependencies” or “don’t mock anything in tests” often clarify the output more than positive instructions.

One task at a time. Instead of “add X, refactor Y, and write the tests,” just say “add X.” Review, then move on.

Concrete examples beat abstract descriptions. Point to an existing working example. “Follow the same pattern as Button.astro for Input.astro.”

One last thing: always read the output. Copy-paste has a price, and that price is a codebase you can't debug but are still responsible for.