
Vibe Coding: How AI-Powered Design-to-Code Transforms Developer Workflows
This post is a follow-up to Richard's blog on our Vibe Design Flow. In his post, Richard explained how we approach designing and creating blueprints at reconfigured - how he plans things out in Miro with quick sketches, then uses tools like V0 and Claude Code to turn those sketches into design blueprints that he hands over to me.
But how does it feel from my side of things? Let me walk you through how we've evolved this workflow and why I think it's transforming how teams build products.
The Evolution of Our Design-to-Code Flow
Our process has evolved beyond what Richard described in his original post. Instead of implementing blueprints in a separate demo app, Richard now implements them directly in our application codebase. This is huge.
For simple functionality where we already have the right libraries and modules in place, Richard can actually implement the functionality himself. This is both exciting and a bit scary - it's a perfect example of how AI coding tools are making technical product managers and designers increasingly dangerous (in a good way)! 😄
Think about it: if your application has well-designed patterns for actions, data loading, and state management (like TanStack Query and/or Zustand), it becomes insanely fast for technical designers and product managers to ship features without needing a developer to implement every piece.
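To make that concrete, here's a rough sketch of what those patterns can look like: a TanStack Query hook for server state and a small Zustand store for client-side UI state. The names and endpoint are made up for the example, not our actual code - the point is that once pieces like these exist, shipping a feature is mostly composing them inside a component.

```tsx
import { useQuery } from "@tanstack/react-query";
import { create } from "zustand";

// Server state: stashed notes are fetched and cached by TanStack Query.
export function useStashedNotes() {
  return useQuery({
    queryKey: ["stashed-notes"],
    queryFn: async () => {
      const res = await fetch("/api/notes?status=stashed");
      if (!res.ok) throw new Error("Failed to load stashed notes");
      return res.json() as Promise<{ id: string; title: string }[]>;
    },
  });
}

// App state: purely client-side UI state lives in a small Zustand store.
type OrganizeState = {
  selectedNoteId: string | null;
  selectNote: (id: string | null) => void;
};

export const useOrganizeStore = create<OrganizeState>((set) => ({
  selectedNoteId: null,
  selectNote: (id) => set({ selectedNoteId: id }),
}));
```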
When Complexity Requires Handoff
Of course, not everything is simple enough for a non-developer to implement fully. Sometimes functionality is more complex and requires deeper technical knowledge. That's where the handoff becomes so important, and where our approach really shines.
Even when Richard hands over a complex feature, I'm not starting from scratch. Instead of receiving abstract Figma files or sketches, I get functioning React components. This means:
- I don't need to figure out how the UI should look and feel
- I can copy the Tailwind classes directly
- In many cases, I can copy-paste entire components
- I just need to hook up the real data and action logic
This reduces my mental load significantly. I know exactly what the intended look and feel should be, and I can focus purely on the logic layer.
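To illustrate (again with hypothetical names rather than our real code), a handed-off component might look something like this: the markup, Tailwind classes, and interactions are all there, and the only thing left for me is to swap the placeholder data and the stubbed click handler for the real thing.

```tsx
type StashedNote = { id: string; title: string };

// TODO(dev): replace with real data, e.g. from a useStashedNotes() hook
const placeholderNotes: StashedNote[] = [
  { id: "1", title: "Interview notes: onboarding friction" },
  { id: "2", title: "Idea: weekly summary email" },
];

export function StashedNotesList() {
  return (
    <ul className="divide-y divide-gray-200 rounded-lg border border-gray-200">
      {placeholderNotes.map((note) => (
        <li key={note.id} className="flex items-center justify-between p-4">
          <span className="text-sm font-medium text-gray-900">{note.title}</span>
          <button
            className="rounded bg-indigo-600 px-3 py-1 text-sm text-white hover:bg-indigo-500"
            // TODO(dev): wire up the real "turn into quest" action
            onClick={() => console.log("turn into quest", note.id)}
          >
            Turn into quest
          </button>
        </li>
      ))}
    </ul>
  );
}
```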
A Real Example: Turning Stashed Notes into Quests
Let me give you a concrete example. One of our recent features was improving the "Organize" section in reconfigured. We wanted users to be able to turn their stashed notes into full quests.
Here's how the handoff worked:
- Richard built the UI components and basic interactions
- He created buttons that showed the right modals
- He documented what should happen when different actions occurred
When I received the PR, I focused exclusively on:
- Changing our data layer to support this new action
- Updating existing components to work with the new flow
- Ensuring everything was properly connected
This workflow is dramatically faster than if I had to start from scratch. I didn't waste time recreating UI components or figuring out the exact interactions - I could focus on the complex logic that brings everything together.
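For the curious, the logic I add on my side of a handoff like this looks roughly like the sketch below (the endpoint and query keys are made up for the example): a mutation that performs the action and keeps the cached lists in sync.

```tsx
import { useMutation, useQueryClient } from "@tanstack/react-query";

// Turns a stashed note into a quest and keeps the cached lists in sync.
export function useTurnNoteIntoQuest() {
  const queryClient = useQueryClient();

  return useMutation({
    mutationFn: async (noteId: string) => {
      const res = await fetch(`/api/notes/${noteId}/quest`, { method: "POST" });
      if (!res.ok) throw new Error("Failed to turn note into quest");
      return res.json();
    },
    onSuccess: () => {
      // Refetch both lists so the note disappears and the new quest shows up.
      queryClient.invalidateQueries({ queryKey: ["stashed-notes"] });
      queryClient.invalidateQueries({ queryKey: ["quests"] });
    },
  });
}
```

In Richard's component, the placeholder onClick simply becomes a call to this hook's mutate function - no markup changes needed.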
Separating GUI from Logic: Not a New Idea, But Finally Practical
This separation of concerns between UI and logic isn't a new concept. It's been around in the Linux world for decades. There's a quote I can't remember exactly, but it goes something like: "trends in GUIs come and go, so GUI code shouldn't be heavily tested but should be easy to swap out, while the logic parts should be more stable because the logic itself stays the same."
I've tried to design our application architecture with this principle in mind. We separate:
- State management: Different types of state are handled differently (app state, form state, server state)
- GUI layer: Focuses only on how information is displayed and where actions are triggered
- Logic layer: Handles actions, data loading, and state synchronization completely isolated from display components
Side note: I know there are architectural patterns like Model-View-Controller (MVC), Model-View-Presenter (MVP), and Model-View-ViewModel (MVVM). But I'm trying to explain this in my own words and with examples instead of just restating those patterns.
This separation has another huge benefit in the age of AI coding tools: it lets me leverage multiple agents simultaneously. I can have one agent working on updating a piece of action logic while another updates a data loader. These are independent, composable pieces that can be worked on in isolation.
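Here's a condensed sketch of that boundary (hypothetical names again): the GUI layer only receives data and callbacks through props, and the action logic lives in its own hook. One agent can restyle the card while another reworks the archive logic, and neither touches the other's file.

```tsx
import { useMutation, useQueryClient } from "@tanstack/react-query";

// gui/QuestCard.tsx - display only: no data fetching, no business rules
export function QuestCard(props: {
  title: string;
  noteCount: number;
  onArchive: () => void;
}) {
  return (
    <div className="rounded-lg border p-4">
      <h3 className="font-semibold">{props.title}</h3>
      <p className="text-sm text-gray-500">{props.noteCount} notes</p>
      <button className="mt-2 text-sm text-red-600" onClick={props.onArchive}>
        Archive
      </button>
    </div>
  );
}

// logic/useArchiveQuest.ts - action logic: no knowledge of the markup above
export function useArchiveQuest() {
  const queryClient = useQueryClient();
  return useMutation({
    mutationFn: (questId: string) =>
      fetch(`/api/quests/${questId}/archive`, { method: "POST" }),
    onSuccess: () => queryClient.invalidateQueries({ queryKey: ["quests"] }),
  });
}
```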
The New AI-Powered Workflow
Our current workflow looks something like this:
- Richard has an idea for a feature
- He uses AI tools to design and write the UI components
- He opens a pull request with the blueprint implemented
- I take over the PR, hook up the real data and logic
- We finalize and ship to users
This approach has dramatically increased our shipping velocity. And the coding agents and assistants are a massive productivity boost on both sides:
- They make technical designers and product managers more capable
- They reduce the implementation work for developers
- They enable clearer communication of design intent through actual working code
Tools We Use for Vibe Coding
For our team, Claude Code has become the centerpiece of our vibe coding workflow. Richard uses it to convert his design ideas into functional React components, and I use it for implementing the logic layer and complex interactions. Many of these tools have multimodal capabilities, meaning Richard can sketch something on paper, take a photo, and have AI create components from that sketch.
While other AI coding tools like Cursor, Windsurf, and various editor integrations exist, we've found Claude's understanding of context and ability to work with our codebase structure particularly helpful. That said, the specific tool matters less than the workflow - any capable AI coding assistant can enable this type of collaboration with the right approach.
Why This Approach Works So Well
There are a few key reasons why this "vibe coding" approach works so well:
1. Working Code Communicates Intent Perfectly
When Richard hands me a functioning component, there's zero ambiguity about how something should look or behave. I can see it working right in front of me, which eliminates the interpretation errors that happen when translating from Figma to code.
Remember the classic "works on my machine" problem? With this approach, I'm literally receiving a working implementation that I can run, interact with, and inspect. I don't have to guess what happens when a user clicks a button or hovers over an element - I can see it. This dramatically reduces miscommunication and ensures we're aligned on the intended user experience from the start.
The code itself serves as the most precise form of documentation possible. No matter how detailed a written spec might be, it can never capture all the nuances that functioning code demonstrates.
2. Clean Separation of Concerns
By keeping UI code separate from logic code, we create natural boundaries for collaboration. Richard can focus on the UI layer using AI tools, while I can focus on the logic layer. This separation also makes the codebase more maintainable long-term.
Beyond the collaborative benefits, this separation means we can evolve our UI and logic independently. Want to refresh the UI without touching business logic? No problem. Need to refactor how data is processed without changing the user experience? That's straightforward too.
This modular approach also makes our code more testable. We can unit test business logic without worrying about UI rendering, and we can test UI components with mock data without needing the full application state.
3. AI Tools Excel at Pattern Matching
AI coding assistants are excellent at generating UI components based on existing patterns. By focusing them on the UI layer, we play to their strengths while avoiding the areas where they might create more complex bugs.
UI code tends to follow repetitive patterns - buttons, cards, form fields, and layouts all have consistent structures. Once an AI tool understands your component library and styling approach, it can generate new components that match your existing patterns with impressive accuracy.
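As a small illustration of what I mean (a hypothetical component, not our design system): once one component establishes a shape like this, with typed props, a variants map, and Tailwind classes, AI-generated siblings tend to follow the same pattern closely.

```tsx
import type { ReactNode } from "react";

// A typical repetitive UI pattern: a variants map plus a thin wrapper.
const variants = {
  primary: "bg-indigo-600 text-white hover:bg-indigo-500",
  secondary: "bg-white text-gray-900 border border-gray-300 hover:bg-gray-50",
} as const;

export function Button(props: {
  variant?: keyof typeof variants;
  onClick?: () => void;
  children: ReactNode;
}) {
  const { variant = "primary", onClick, children } = props;
  return (
    <button
      className={`rounded px-3 py-2 text-sm font-medium ${variants[variant]}`}
      onClick={onClick}
    >
      {children}
    </button>
  );
}
```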
By contrast, business logic tends to be more context-dependent and requires deeper understanding of the application's domain. By having Richard use AI tools for UI generation while I focus on the logic, we get the best of both worlds - leveraging AI where it's strongest while applying human expertise where it's most needed.
4. Reduced Context Switching
As a developer, I spend less time switching between design tools and code. I can just look at the working component and immediately understand what needs to be built. This mental context switching is often an invisible productivity drain.
Context switching is one of the biggest productivity killers in software development. Every time I have to pause coding to look at design files, interpret requirements, or clarify intentions, I lose my flow state and it takes time to rebuild mental momentum.
With our vibe coding workflow, I'm working directly with code from the start. I don't need to mentally translate between different tools and formats. This keeps me in flow state longer and significantly reduces the cognitive load of implementation.
5. Improved Testing and QA Process
An unexpected benefit we've discovered is how this approach influences our testing process. With working UI prototypes in the codebase early in development, we can:
- Start visual testing earlier in the development cycle
- Identify UX issues before they're deeply integrated with business logic
- Get feedback from stakeholders on functioning prototypes, not just static designs
With our approach, we can validate the UI independently first. Stakeholders can provide feedback on the actual working components, not just mockups. This means we catch design issues before investing in complex logic implementation, saving significant development time.
When it comes to automated testing, this separation allows for more targeted test suites. Component tests can verify that UI elements render correctly with various props, while tests on business logic focus on data flow and state management. This more modular testing approach leads to better test coverage and more reliable tests that don't break when unrelated parts of the application change.
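As a sketch of what that split looks like in practice (assuming Vitest and React Testing Library here; the exact test stack matters less than the separation), a component test for the hypothetical QuestCard from earlier only needs props: no server, no global state. The logic hooks get their own tests with a mocked fetch.

```tsx
import { render, screen } from "@testing-library/react";
import { describe, expect, it } from "vitest";
import { QuestCard } from "./QuestCard";

describe("QuestCard", () => {
  it("renders the title and note count it is given", () => {
    render(
      <QuestCard title="Onboarding research" noteCount={3} onArchive={() => {}} />
    );

    // getByText throws if the element is missing, so these double as assertions.
    expect(screen.getByText("Onboarding research")).toBeDefined();
    expect(screen.getByText("3 notes")).toBeDefined();
  });
});
```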
The Risks: Keeping the Codebase Clean
One challenge with this approach is maintaining code quality. If you're "vibe coding" too much - just throwing AI-generated code into your codebase without structure - things can get messy fast.
That's why we still maintain good principles:
- GUI code stays separate from logic code
- Components follow consistent patterns
- We review all code, especially AI-generated code
- We refactor when necessary to maintain clean architecture (AI helps here as well)
Adoption Challenges and Learning Curve
While this workflow has been transformative for us, I recognize it might not be a plug-and-play solution for every team. There are some challenges to consider:
Architectural Prerequisites
This approach works best when you already have a clean separation between UI and logic layers. If your codebase heavily intermixes these concerns, you'll need to refactor before this workflow becomes truly effective. For us, this wasn't a problem, as the code has been written from the get-go to be as composable as possible, with a clear separation of display logic and business logic.
Team Dynamics and Roles
Traditional roles can become blurred with this approach. Designers who previously worked exclusively in design tools now need to understand component structure and basic React concepts. Developers who might have controlled every aspect of implementation now need to be comfortable building on top of code they didn't write from scratch.
This requires trust and a growth mindset from everyone involved. We found it helpful to start small - having Richard implement a single, simple component - and then gradually expanded from there as confidence grew.
Learning to "Speak AI"
There's definitely a learning curve to effectively using AI coding tools. Richard spent time learning how to craft prompts that would generate clean, maintainable code that fits our codebase patterns. Similarly, I had to learn how to effectively review and augment AI-generated code.
This isn't just about technical skills - it's also about developing judgment about when to use AI and when to write code manually. Some tasks are perfect for AI assistance, while others still benefit from a human touch.
Putting It All Together
The combination of AI coding tools and this separation of concerns has fundamentally changed how we build. Richard can now implement entire UI flows that would have previously required days of my time. I can focus on the complex logic that ties everything together.
This isn't just a minor productivity boost - it's a complete transformation of our workflow. We're shipping features faster than ever, with better communication between design and development, and with less friction in the process.
And I can tell you, as the developer who used to get handed Figma files and requirements docs, this new world is so much better!
Where This Goes Next: The Future of Vibe Coding
As AI tools continue to evolve, I see this workflow becoming even more powerful in several ways:
Deeper Integration Between Design and Code
Right now, Richard still needs to manually translate his design ideas into code (albeit with AI assistance). I imagine a near future where design tools directly generate component code that matches your codebase patterns. Imagine sketching a component in Figma and having it automatically generate React code that follows your team's established patterns. There are already tools like Builder.io that do this kind of thing! I'm expecting to see more in the future.
More Intelligent Logic Assistance
While AI tools currently excel at generating UI components, they're still limited when it comes to complex application logic. As these tools improve, they'll be able to suggest not just UI implementations but also appropriate hooks, data structures, and state management approaches based on your existing patterns. AI app builders, for example, still struggle a lot with backend code (see this YouTube video for an example).
Evolving Team Structures
This workflow is already blurring the lines between designers and developers. In the future, I expect we'll see new roles emerge that sit at this intersection - people who aren't traditional developers but can use AI tools to implement their ideas directly in code, with developers focusing more on architecture, performance, and complex logic.
Multi-Agent Collaboration
A particularly exciting frontier is having multiple AI assistants collaborate on different aspects of development simultaneously. Imagine one agent working on the UI layer while another refines the data model, with a third handling test creation - all coordinated through a shared understanding of the codebase.
If you want to see how this all looks in reality, check out reconfigured and give us feedback. What should we implement next? Our vibe coding approach means we can probably ship it faster than you think!