I Shipped a Feature. I Don't Write Code.
How I used AI to add analytics to our app and what it taught me about who gets to participate in engineering workflows.
TL;DR
You don’t need to write code to ship a feature. I added Google Analytics tracking to our app using Cursor IDE. The AI wrote the code; I made the decisions.
The business person’s contribution is knowing what to track and why. Event names, categories, placements: these are product decisions, not engineering ones.
AI tools are changing who gets to participate in engineering workflows. Not replacing engineers. Expanding who can contribute.
The hardest part wasn’t the code. It was figuring out how to test that the tracking actually worked on my local setup.
How did we get here?
At Talent, our CPO Filipe raised a need: we should track where talent+ subscriptions come from. Which page? Which button? Which plan? Google Analytics was already installed on the app. We just needed to send the right events with the right details.
Our CTO Leal shared a code snippet showing how to fire a custom event. A few lines of code. Simple enough in theory.
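I won't reproduce Leal's exact snippet, but firing a custom event with Google Analytics' gtag API looks roughly like this. The stub at the top just makes it runnable outside a browser, where gtag.js normally defines these globals; the event name is illustrative:

```javascript
// In the browser, gtag.js defines dataLayer and gtag; stubbed here
// so the sketch is self-contained and runnable in Node.
const dataLayer = [];
function gtag() { dataLayer.push(arguments); }

// Fire a custom event with extra parameters.
// 'talent_plus_purchase_start' is an illustrative name, not our real schema.
gtag('event', 'talent_plus_purchase_start', {
  placement: 'pricing_page',
  plan: 'monthly',
});
```

A few lines, like Leal said. The complexity isn't in the call; it's in deciding what goes into it.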
In the past, I would have turned this into a ticket and handed it to a developer. But I’ve been building my own tools and dashboards with AI for a while now. What I hadn’t done was contribute to our team’s actual app. The codebase that goes live. The one engineers work on every day.
I decided it was time.
The setup
I opened our app’s codebase in Cursor IDE and started a conversation with the AI agent. I explained the goal in plain language: we need to track where talent+ subscriptions come from.
The agent did something I’ve come to expect from these tools: it asked me questions before writing anything. Which user actions should count as a subscription event? Do you want to track intent (someone starts the purchase) or just success (someone completes it)? What about the source: pricing page, settings, profile?
I made the decisions:
Track both. A “purchase start” event when someone clicks the confirm button, and a “purchase success” event when the subscription actually goes through.
Track all placements. Pricing page, profile, settings, onboarding: everywhere the subscription flow can begin.
Include three properties on every event: which page triggered it, which plan they picked, and whether it was for themselves or a gift.
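The three decisions above can be sketched as one small helper. Event and property names here are illustrative, not our actual schema, and the gtag stub stands in for the real gtag.js globals:

```javascript
// Stub for gtag.js so the sketch runs outside a browser.
const dataLayer = [];
function gtag() { dataLayer.push(arguments); }

// One helper encodes all three decisions:
// - stage: 'start' (confirm button clicked) or 'success' (subscription completed)
// - placement: which page the flow began on
// - plan and mode: which plan, and self vs. gift
function trackSubscription(stage, { placement, plan, mode }) {
  gtag('event', `talent_plus_purchase_${stage}`, { placement, plan, mode });
}

// Example calls for one flow:
trackSubscription('start', { placement: 'pricing_page', plan: 'annual', mode: 'self' });
trackSubscription('success', { placement: 'pricing_page', plan: 'annual', mode: 'self' });
```

The point of the helper is that the product decisions live in one place: if the schema changes, there's one function to update.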
The agent then showed me exactly where in our codebase these moments happen. It pointed to the specific file, the specific button, the specific function. I didn’t need to search for anything. I just needed to confirm: “Yes, that’s the right place.”
How the build actually works
The AI agent created a working copy of the codebase, made the changes across about eleven files, and ran checks to make sure nothing was broken.
The whole implementation took a few hours in one session. I didn’t write a single line of code. But I made every product decision: what to name the events, what information to include, which user actions matter.
The tension: testing is where I got stuck
Building the feature was the straightforward part. Testing whether it actually worked? That’s where I had no idea what to do.
Google Analytics has tools that let you see events arriving in real time. But when I opened them, nothing was coming through. Zero.
The agent walked me through it. I needed two Chrome extensions to make my local browser visible to the analytics debug tools. I installed them, connected everything, and tried again.
I clicked “Extend Talent+” on the app. And there it was: talent_plus_purchase_start, showing up in the debugger. The event fired. The placement, the plan, the mode: all captured.
That testing phase took more mental effort than the entire implementation. It’s the kind of thing that feels obvious once you’ve done it, but completely opaque when you haven’t.
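For what it's worth, there's also a code-side route to the same place: GA4 lets you flag events for its DebugView by passing a debug_mode parameter, which may save someone else the extension hunt. A sketch, with a placeholder measurement ID and the usual gtag stub:

```javascript
// Stub for gtag.js so the sketch runs outside a browser.
const dataLayer = [];
function gtag() { dataLayer.push(arguments); }

// 'G-XXXXXXXXXX' is a placeholder measurement ID.
// debug_mode: true routes events from this session to GA4's DebugView.
gtag('config', 'G-XXXXXXXXXX', { debug_mode: true });
gtag('event', 'talent_plus_purchase_start', { placement: 'pricing_page' });
```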
What I told the team
Before committing anything, I wrote a message to our tech team explaining what I’d done. The AI helped me draft it, but the structure was mine: what I did, how I tested it, and what the code actually changes.
Leal’s response: “From what you did everything looks OK! So feel free to commit, push, and open a PR.”
That was a good moment.
The git workflow (things I learned along the way)
This was my first time going through the full engineering workflow: create a branch, commit changes, push to GitHub, open a pull request, get it reviewed, and merge.
None of these steps were hard once someone explained them. But each one would have stopped me cold if I'd been working alone, without the AI agent guiding me through it.
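Those steps map to a handful of standard git commands. Here's a runnable sketch: branch and commit names are illustrative, and the throwaway-repo setup at the top just lets the commands run anywhere:

```shell
# Throwaway repo so the commands below can run anywhere.
cd "$(mktemp -d)" && git init -q
git config user.email "demo@example.com" && git config user.name "Demo"
git commit -q --allow-empty -m "init"

# 1. Create a branch for the change.
git checkout -b analytics-tracking
# 2. Make and stage the changes (a stand-in file here).
echo "tracking code" > analytics.js
git add analytics.js
# 3. Commit with a message that says what changed.
git commit -q -m "Add talent+ purchase tracking events"
# 4. Push the branch to GitHub (needs a remote, so shown commented out):
#    git push -u origin analytics-tracking
# 5. Then open a pull request on GitHub, get it reviewed, and merge.
```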
A simple framework
After this experience, I’d say AI tools are changing who gets to participate in engineering workflows in three ways:
Deciding. The business person defines what to track, how to name it, and what matters for reporting. These are product and strategy decisions. The AI can’t make them for you.
Contributing. You can now go beyond prototypes and internal tools. With the right setup and guidance, a non-tech person can make changes to a real codebase, open a pull request, and get it reviewed by engineers.
Communicating. You need to explain what you did: to your CTO, to reviewers, to the team. Understanding the work well enough to describe it clearly is part of the contribution.
What this changes
In my experience, this shift has a few specific implications worth noting.
The bottleneck isn’t coding. For a task like analytics tracking, the hard part was never the code. It was knowing what to track, where to track it, and why it matters for the business. Event names, placements, properties: those are product decisions. The AI handled the syntax. I handled the intent.
More people can participate in more workflows. I’m not suggesting every operator should start committing code. But the barrier has dropped significantly. If you understand the problem and can describe it clearly, AI tools can help you contribute in ways that weren’t possible two years ago.
Engineers become reviewers, not bottlenecks. Leal didn’t build this feature. He reviewed it. That’s a different use of his time: and arguably a better one for a task this size.
Practical takeaways
Start with something well-scoped. Analytics tracking was a good first task because the goal was clear, the risk was low, and the surface area was small.
Let the AI ask questions first. Don’t rush to “build it.” Let the agent ask what you want to track, where, and why. Your answers are the spec.
Write a summary before you commit. Describing what you did, in your own words, forces you to understand it. It also builds trust with the engineering team.
Go through the full workflow. Branch, commit, PR, review, merge. It’s the team’s process for a reason. Don’t skip it just because you’re “non-tech.”
Ask for help when you’re blocked. Some things you simply can’t figure out alone. Knowing when to ask is part of contributing.
What I’m still figuring out
If AI tools keep lowering the barrier to participation, how should teams think about who works on what? Not everyone should be committing code. But more people probably could, and maybe should, for the right kinds of tasks.
I don’t have full answers yet. But I think they’re worth exploring.
Stay curious.
Tolga

