AI-Assisted Development Guidelines

Status: Active
Last Updated: 05-11-2026

Purpose

To define the standards for using AI tools (GitHub Copilot, ChatGPT, etc.) to generate code, ensuring quality, security, and maintainability.


Scope

Applies to: All development teams using AI-assisted coding tools

Does not apply to: Manual code review processes, non-AI development


The Golden Rule

You are the owner of the code. AI is a tool, not a teammate. If AI writes a bug and you commit it, you are responsible for the fix. Never commit code you do not fully understand.


Code Integrity

  • Manual Review: Every line of AI-generated code must be read and understood. No "blind" Tab-completion.
  • Refactoring: AI often writes "verbose" code. You must refactor AI output to match our naming-conventions.md and project architecture.
  • Logic Verification: AI can confidently generate logic that looks correct but fails on edge cases. Check all loops and conditionals, especially around empty inputs and boundary values.
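As a hedged illustration of the logic-verification point above, the sketch below shows a hypothetical AI-suggested `average` function that looks correct but silently returns `NaN` for an empty array, alongside a reviewed version that handles the edge case:

```typescript
// Hypothetical AI-suggested version: looks correct, but divides by
// zero on an empty array and returns NaN instead of a usable value.
function averageNaive(values: number[]): number {
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

// Reviewed version: the empty-array edge case is handled explicitly.
function average(values: number[]): number {
  if (values.length === 0) {
    return 0; // or throw, depending on your domain rules
  }
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}
```

Whether to return a sentinel value or throw is a project decision; the point is that the reviewer, not the AI, must make it.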

Security Practices

  • No PII/Secrets: Never paste proprietary business logic, API keys, or sensitive customer data into external AI prompts (unless using an enterprise-safe instance).
  • Library Check: AI may suggest outdated or non-existent libraries. Verify that any suggested package is secure and actively maintained.
  • License Compliance: Ensure AI-generated snippets do not violate Copyleft licenses.
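To make the no-secrets rule concrete, here is a minimal, hypothetical sketch of a prompt-redaction helper. The patterns are illustrative only (an OpenAI-style key prefix, an AWS access key ID shape, and PEM private-key blocks); they do not replace an enterprise-safe instance or a real secret scanner:

```typescript
// Hypothetical sketch: strip obvious secrets from text before it is
// pasted into an external AI prompt. Patterns are illustrative only.
const SECRET_PATTERNS: RegExp[] = [
  /sk-[A-Za-z0-9]{20,}/g, // OpenAI-style API keys
  /AKIA[0-9A-Z]{16}/g,    // AWS access key IDs
  /-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?-----END [A-Z ]*PRIVATE KEY-----/g,
];

function redactSecrets(prompt: string): string {
  return SECRET_PATTERNS.reduce(
    (text, pattern) => text.replace(pattern, "[REDACTED]"),
    prompt
  );
}
```

A regex pass like this is a last line of defense, not a policy substitute: anything that matters should never reach the clipboard in the first place.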

Pull Request (PR) Standards

  • Transparency: If a significant portion of a PR (>50%) was generated by AI, tag it with a label (e.g., ai-generated).
  • Comments: Use AI to help write documentation (JSDoc/Comments), but ensure the comments describe the intent of the code, not just what the code says.
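The distinction between intent and restatement can be sketched like this (the discount rule and function name are hypothetical examples, not part of our codebase):

```typescript
// Weak, AI-style comment: merely restates what the code says.
/** Multiplies price by 0.9. */

// Better comment: captures the intent and the business rule behind it.
/**
 * Applies the standard 10% loyalty discount.
 * The rate is fixed by the pricing team; do not change it here
 * without a corresponding update to the pricing policy.
 */
function applyLoyaltyDiscount(price: number): number {
  return price * 0.9;
}
```

The second comment tells a future maintainer why the number exists and where its authority lives, which the code alone cannot.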

Examples

Good Practice: Using AI to generate a boilerplate interface, then manually renaming fields to match our PascalCase standards.
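A minimal sketch of that workflow (field names are hypothetical): the raw AI boilerplate arrives in snake_case, and the reviewed version renames the interface and its fields to PascalCase per naming-conventions.md:

```typescript
// Hypothetical raw AI output: boilerplate interface, fields in snake_case.
interface user_profile_raw {
  user_id: number;
  display_name: string;
}

// Reviewed version: interface and fields renamed to PascalCase
// to match naming-conventions.md.
interface UserProfile {
  UserId: number;
  DisplayName: string;
}

// Adapter converting AI-shaped data into the project's convention.
function toUserProfile(raw: user_profile_raw): UserProfile {
  return { UserId: raw.user_id, DisplayName: raw.display_name };
}
```

The renaming step is exactly the kind of mechanical-but-mandatory review the Code Integrity section requires before commit.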


Exceptions

AI may be used more liberally for documentation generation and test case creation, but such output still requires review before it is committed.



Changelog

Version | Date | Author | Change
1.0.0 | 05-11-2026 | Tibin Sunny | Initial version