Goal: Define an initial, low‑to‑medium touch accreditation approach for Go partners, focusing on technical capability (connectors and agents) so internal teams can trust partners with services work.
Key Topics Discussed
Need for Go Partner Accreditation
- Ensure partners have minimum technical capability before Go sends them customers
- Build internal trust (sales, TAMs, product) that certified partners won’t damage the brand
- Use accreditation as a qualifier for which partners get time, leads, and inclusion in the “Go roadshow”
- Level‑1 / “100 level” accreditation only (good enough for now, not expert level)
- Future plan for higher levels (200/300) as product and practices mature
- Same framework potentially reusable for large customers’ internal teams

Constraints & Product Maturity
- No strong, stable consensus yet on the “best way” to build Go agents
- LLMs are non‑deterministic; patterns still emerging
- Agent tech and UX (UI/agent builder) are effectively “pre‑alpha”
- Implications for accreditation:
  - Can’t pretend there is one canonical “right way” to build agents
  - Must design something that can evolve quickly as the product changes
  - Be careful not to encode fragile patterns that will be obsolete in weeks or months

Focus: Connectors vs. Agents
- Clear distinction emerging between connectors and agents:
  - Connectors ≈ today’s Packs/code: sync tables, APIs, schemas, auth, MCP
  - Agents ≈ instructions + set of tools (connectors) in the UI builder; relatively simple configuration
- Where partners add most value:
  - Engineering expectation: most partners (especially SaaS/ISVs) build connectors; relatively few build “true” agents
  - Agents expected to be built mostly by end users in‑product, especially via the new agent builder
- Stronger, more stable patterns on connectors → better suited to certification now
- Agent piece should be lighter‑weight and framed as basic capability, not deep expertise

What Competence the Certification Should Prove
For connectors (primary focus):
- Ability to build a Pack/connector using the SDK
  - Includes: auth, sync table(s), formulas/actions, working against an external API or MCP
- Comfort reading SDK docs and external API docs
- Ability to produce something non‑trivial but realistic

For agents (lighter focus):
- Understand when a Go agent is appropriate in a workflow
- Ability to stand up a simple, useful agent:
  - Choose a use case that fits Go’s real capabilities
  - Configure tools/connectors and write effective instructions/prompts
  - Appreciate the iterative loop: test → adjust prompt/data/skills → test again
  - Recognized as an important but separate skill (not everyone is good at it)
- Existing scoping template/agent is a strong asset
- Open question: how much discovery/scoping to bake into accreditation vs. treat as optional/advanced content

Proposed Structure of Technical Accreditation
1) Connector Capstone
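To make the capstone bar concrete, here is a directional sketch of what a minimal submission might look like, written in the style of the public Coda Packs SDK (`@codahq/packs-sdk`). Treat it as illustrative pseudocode: the external `example.com` API, the `Task` schema, and the endpoints are invented, and the eventual Go connector surface may differ from today’s Packs SDK.

```typescript
// Illustrative only: a minimal connector sketch in the style of the
// public Coda Packs SDK. The "example.com" API, Task schema, and
// endpoints are invented; the real Go connector surface may differ.
import * as coda from "@codahq/packs-sdk";

export const pack = coda.newPack();

// Auth against the partner-chosen external tool (bearer token here).
pack.addNetworkDomain("example.com");
pack.setUserAuthentication({
  type: coda.AuthenticationType.HeaderBearerToken,
});

// Schema for the synced rows.
const TaskSchema = coda.makeObjectSchema({
  properties: {
    id: { type: coda.ValueType.String },
    title: { type: coda.ValueType.String },
  },
  displayProperty: "title",
  idProperty: "id",
});

// Requirement: at least one sync table.
pack.addSyncTable({
  name: "Tasks",
  identityName: "Task",
  schema: TaskSchema,
  formula: {
    name: "SyncTasks",
    description: "Sync tasks from the external API.",
    parameters: [],
    execute: async function (_args, context) {
      const response = await context.fetcher.fetch({
        method: "GET",
        url: "https://example.com/api/tasks",
      });
      return { result: response.body.tasks };
    },
  },
});

// Requirement: at least one formula/action.
pack.addFormula({
  name: "CreateTask",
  description: "Create a task in the external system.",
  parameters: [
    coda.makeParameter({
      type: coda.ParameterType.String,
      name: "title",
      description: "Title of the new task.",
    }),
  ],
  resultType: coda.ValueType.String,
  isAction: true,
  execute: async function ([title], context) {
    const response = await context.fetcher.fetch({
      method: "POST",
      url: "https://example.com/api/tasks",
      body: JSON.stringify({ title }),
    });
    return response.body.id;
  },
});
```

Even as a sketch, this shows the three things the capstone is meant to test: wiring up auth, defining a sync table against an external API, and exposing at least one action.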
- Ask partners to build a connector (current “agent” with sync table/MCP, evolving into a formal “connector”)
- Requirements (directionally):
  - Includes at least one sync table
  - Includes at least one formula/action or similar feature
  - Integrates with an external API or MCP
  - Partner chooses the external tool (Jira, Salesforce, HubSpot, or other tools with a free tier or that they already use)
- They build the connector, then submit a 3–5 minute Loom/Zoom recording walking through what they built, how it works, and why they chose it
- Tests ability to read and apply SDK + external API docs
- Allows creativity and alignment with their real practice and customers
- Seeds internal partner directory metadata: “this partner has real experience with DocuSign/Figma/Outreach/etc.”
- Connector patterns are relatively stable vs. agents

2) Simple Agent Capstone
- Ask partners to build a basic Go agent for one well‑chosen use case
- Requirements (directionally):
  - Identify a use case where a Go agent is actually a good fit
  - Build an agent with at least one tool/connector and appropriate data
  - Provide the key prompt/instructions
  - Demonstrate in a short Loom how the agent works and why this workflow makes sense
- Optional added data for learning:
  - Submit the instructions/prompt text explicitly
  - Store submissions for pattern‑finding and future best‑practice guidance (even if not reviewed immediately)
- Checks the minimum bar: they can reason about when to use Go and wire an agent together
- Provides material for potential coaching moments, if capacity allows

Prescriptive vs. Open‑Ended Projects
Prescriptive option (Eric’s view):
- Easier to grade fairly: everyone implements the same scenario (e.g., “build an agent/connector for Jira”)
- Can design the scenario to intentionally exercise specific skills
- Lower grading overhead (less research into unfamiliar tools)
- Requires access to specific SaaS tools (licensing issues)
- Less creativity, fewer insights into real partner use cases

Open‑ended option (Hector’s and Michelle’s leaning):
- Lets partners choose tools they already use and care about
- Surfaces interesting new APIs and use cases for Go
- Better signal on how they think about their own customers and workflows
- Can feed into a partner‑matching algorithm (e.g., “partner has built a DocuSign connector”)
- Harder to grade consistently; more effort per review
- Risk that partners choose “too easy” scenarios and skip key concepts

Hybrid approach (emerging consensus):
- Let partners choose their tool (Jira, Salesforce, HubSpot, or any SaaS with a free tier / existing access)
- Prescribe what must be included (auth, sync table, action/formula, etc.) rather than which product
- Use a few guiding questions to reduce “blank page” paralysis, e.g.:
  - What is your main product/service?
  - Where do your users have to leave that product and manually go to another system (copy/paste, context switch)?

Level of Touch & Delivery Model
- Overall: low‑to‑medium touch; cannot be deeply bespoke for every partner
- Need a mix of self‑serve content, objective checks, and limited higher‑touch interaction
- Potential delivery structure:
  - Highspot or similar training to cover core concepts (SDK basics, connectors vs. agents, scoping basics)
  - Objective questions (multiple choice, true/false) for discovery/fit: “Is Go a good fit here?” style scenarios
  - Connector build + short Loom
  - Simple agent build + short Loom
- Run periodic “kickoff classes” (e.g., weekly or twice per month)
  - One‑to‑many format to explain expectations, answer common questions, and reduce 1:1 support
- Option A (low touch): quick checklist‑style review of submissions (≈30 min per partner) → pass/fail
- Option B (medium touch): add a 30 min feedback call per partner or per small cohort to discuss what they built
- Need a technical grader/proctor (e.g., solutions architect/engineer, possibly Ted Yu or an upcoming technical hire)
- Must be honest about time cost per partner (rough estimate: ~3 hours per partner if doing Q&A + detailed review)

Partner Time Investment & Platform Volatility
- Developer experience and agent model are evolving rapidly
- Risk that partners invest heavily in patterns/workflows that become obsolete quickly
- Keep partner effort as light as possible while still providing internal confidence
- Put more weight on relatively stable areas (connectors), and lighter requirements on volatile ones (agents)
- Intended audience for accreditation:
  - “Go partners” broadly: solution partners and ISVs doing services
  - Also applies to ISVs with a “minor” in services, or solution partners with lightweight ISV ambitions
  - Same framework could be offered as guidance to large customers’ internal teams (with or without grading)

Next Steps / Plan Forward
- Synthesize today’s discussion into a more concrete accreditation proposal
- Define variants (e.g., very low‑touch vs. medium‑touch with coaching; prescriptive vs. hybrid project choice)
- Align with Lauren, Coe, Victoria’s org (and David where relevant) on:
  - Tolerance for review effort
  - Expected partner time investment
- Return next week with a more baked outline for further review with Hector and Eric
- Michelle to attend tomorrow’s partner builder workshop (1 pm) for more input on partner questions and topics
- Workshop to be recorded for internal learning only (not shared back with partners) to protect partner IP/ideas

Action Items
- Draft a concrete Level‑1 Go Partner Accreditation proposal (structure, connector/agent capstones, quiz concept, touch‑level options) and review with Lauren, Coe, and Victoria’s team.
- Define recommended project requirements for the connector capstone (auth, sync table, actions/formulas, acceptable tool types, submission format).
- Outline a minimal, lightweight agent capstone (requirements, Loom + prompt submission) that acknowledges product volatility.
- Propose a cohort‑based delivery model (cadence, kickoff agenda, review flow, approximate time per partner).
- Identify and propose the grading/proctoring owner for phase 1 (e.g., Ted Yu, the new technical hire, or a solutions architect/SE).
- Join tomorrow’s 1 pm partner builder workshop and capture themes/questions that should feed into accreditation content.
- Design a short objective quiz bank focused on scoping/fit (“Is Go appropriate here?” scenarios) for the knowledge‑check portion of accreditation.
- Decide internally how “open” vs. “prescriptive” partner projects should be, balancing grading overhead vs. learning value.
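One of the action items above is a short scoping/fit quiz bank. A hypothetical sketch of how items could be structured and auto‑graded follows; the scenario text, field names, and the 80% pass bar are all invented for illustration, not agreed decisions.

```typescript
// Hypothetical format for "Is Go a good fit here?" quiz items.
// Scenarios and the pass threshold are invented placeholders.
interface FitScenario {
  prompt: string;
  goodFit: boolean; // answer key
}

const quizBank: FitScenario[] = [
  {
    prompt:
      "A team wants to sync Jira issues into tables and run actions on them.",
    goodFit: true,
  },
  {
    prompt:
      "A team needs hard real-time, fully deterministic output with no iteration.",
    goodFit: false,
  },
];

// Grade a set of true/false answers against the bank; pass at 80%+.
function grade(answers: boolean[]): { correct: number; passed: boolean } {
  const correct = answers.filter((a, i) => a === quizBank[i].goodFit).length;
  return { correct, passed: correct / quizBank.length >= 0.8 };
}
```

A structured bank like this keeps the knowledge check objective and cheap to score, which matches the low‑touch goal; the judgment‑heavy signal still comes from the capstone Looms.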
Questions
- Do Level‑1 builders need to shadow? How do we let partners shadow, resource‑ and planning‑wise?
  - If we are building first‑party connectors, partners could shadow that work without any legal or ethical issues
- Who is going to bring a partner in, and who makes the decision: the SE, the AE, or both?
- What specifically do the SEs need to see/know to agree that a partner has the capability?
- If SH had a new hire who was going to build connectors, who would train them, and what resources do they have?
- Do we have anyone internally who will build agents?
- Does Ted need to know how to build a connector agent?
What could Ted help with?
- Scoping agents
- Building agents?
- Internal POC for finding resources for partners
What are the differences between simple and complex agents?
- For agents, the dividing line might be UI‑built vs. SDK‑built