Farin
1) UI Quality
Expectation
Match the intended UI experience from Figma (visual look, spacing rhythm, interaction states), not just the copied numeric values.
If design interpretation is unclear, ask before marking complete.
Actions
Before development: review design and write a short “UI interpretation note” (2–4 bullets).
Before handoff: run a visual parity checklist (desktop + at least one other viewport).
Attach comparison evidence (screenshots/GIF).
Success criteria
No “looks off” feedback in first review for assigned work.
At least 90% UI parity accepted in first pass.
2) UX Decision Quality
Expectation
Validate whether the flow feels usable and trustworthy for a client.
If behavior is unclear, clarify with lead/QA before implementation.
Actions
For each feature, define:
expected user path,
edge behavior (empty/error/loading),
success/failure messaging.
Confirm behavior with QA/lead when ambiguous.
Success criteria
No major UX behavior mismatch after “done” handoff.
All ambiguous behavior clarified before implementation.
3) Status Update Accuracy
Expectation
Report actual status honestly; never report work as "done" if it is only partially complete.
No stakeholder update should overstate readiness.
Actions
Use fixed status labels:
In Progress
Ready for Internal Review
Ready for QA
Every update must include:
completed scope,
pending scope,
known bugs/risks.
Success criteria
Zero “done but not done” incidents.
Zero incorrect “main flow done” reports.
4) Rework Reduction / Scope Discipline
Expectation
Do not modify restricted areas unless explicitly approved.
Preserve previously fixed/approved sections.
Actions
Before coding, list in-scope files/modules.
Any out-of-scope change requires an approval note.
If adjacent issues are found, log them separately instead of silently modifying them.
Success criteria
Zero unauthorized scope touches.
Significant reduction in rework requests caused by unnecessary edits.
5) Testing Discipline
Expectation
Thoroughly test your own code before handoff (do not simply submit AI-generated output untested).
Verify across different screen sizes and data states.
Actions
Mandatory self-test checklist per task:
happy path,
at least 2 edge cases,
cross-screen/responsive check,
regression check on touched modules.
Include test evidence in handoff (brief checklist + proof).
Success criteria
Majority of defects found before handoff (self-detected).
Critical bugs in first review drop sharply.
6) Debugging & Problem-Finding First
Expectation
Identify root cause before proposing fixes.
Use GPT/Copilot as support, not as a substitute for diagnosis.
Actions
For each bug, submit mini-debug note:
symptom,
root cause hypothesis,
validation steps,
chosen fix + why.
Escalate only after attempting your own investigation.
Success criteria
Clear root-cause reasoning shown on assigned bugs.
Improved independence in troubleshooting.
7) Guidance Follow-through (remote + office)
Expectation
Apply guidance consistently in both office and remote work.
Practice assigned debugging techniques.
Actions
Weekly check-in on:
what guidance was applied,
what was learned,
what still blocks execution.
Maintain a short "learning log" (debug skills practiced this week).
Success criteria
Demonstrable use of suggested debugging methods.
Fewer repeated mistakes across similar tasks.