RobLib's Blog

Building Watch-List.me: The Reality of AI-Assisted Development

Recently, I built Watch-List.me, a Next.js application deployed on Vercel for tracking movies and TV shows. While the project itself might not sound revolutionary, the development process highlighted something important about our current relationship with AI coding tools - particularly when using Claude Sonnet 4's agent mode for code generation.

This experience reinforced a crucial lesson: AI can dramatically accelerate development, but it requires constant human oversight and architectural decision-making. The more complex your application grows, the more critical that oversight becomes.

Where AI Excels: The Grunt Work

Claude's agent mode proved invaluable for generating the mundane, repetitive code that every web application needs.

For this kind of boilerplate, AI is incredibly efficient: what might take me 30 minutes to write carefully, Claude generated in seconds. The time savings on repetitive work let me focus on the more interesting architectural decisions.
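To illustrate the kind of boilerplate this covers, here is a minimal sketch of a typed in-memory watch-list store. The names and shape are my own illustration, not code from the actual app.

```typescript
// Hypothetical example of the sort of repetitive code AI generates well:
// a typed store for watch-list entries (illustrative names, not the real app's).

interface WatchListEntry {
  id: number;
  title: string;
  kind: "movie" | "tv";
  watched: boolean;
}

class WatchListStore {
  private entries = new Map<number, WatchListEntry>();
  private nextId = 1;

  // Create a new entry with an auto-incremented id, initially unwatched.
  add(title: string, kind: WatchListEntry["kind"]): WatchListEntry {
    const entry: WatchListEntry = { id: this.nextId++, title, kind, watched: false };
    this.entries.set(entry.id, entry);
    return entry;
  }

  // Flag an entry as watched; returns false if the id is unknown.
  markWatched(id: number): boolean {
    const entry = this.entries.get(id);
    if (!entry) return false;
    entry.watched = true;
    return true;
  }

  list(): WatchListEntry[] {
    return [...this.entries.values()];
  }
}
```

Nothing here is hard, and that is exactly the point: it is well-trodden, pattern-shaped code, which is where the AI shines.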

The Critical Gap: Architectural Understanding

However, as the application grew more complex, I encountered AI's fundamental limitation: it lacks holistic understanding of application architecture, especially when it comes to Next.js Server Components and the delicate balance between server and client-side rendering.

The Server vs Client Component Dilemma

Next.js App Router with Server Components introduces a kind of complexity that AI tools consistently struggle with. They tend to make decisions based on immediate context rather than weighing concerns that span the whole application: the overall rendering strategy, SEO requirements, and where hydration boundaries should fall.

I repeatedly had to intervene when Claude would suggest using client components for content that should be server-rendered for SEO, or conversely, trying to add interactivity to server components without proper hydration boundaries.

Real Examples of AI Missteps

One particularly telling example involved the movie search functionality. Claude initially suggested implementing the entire search interface as a server component, which would have resulted in full page reloads for every search query - terrible UX. Later, when I asked for optimization, it went to the opposite extreme, suggesting we make the entire movie listing page client-side, which would have hurt SEO.

The correct solution required understanding that the initial movie grid should be server-rendered for SEO and performance, while the search overlay needed to be a client component for interactivity, with proper data fetching strategies for each use case.
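The split described above can be sketched in terms of where each data fetch runs. This is a simplified, hypothetical sketch (the real app's code isn't shown here, and the names and endpoints are my own): the initial grid data is fetched during server rendering so it lands in the HTML, while the search overlay, a client component, calls a route handler on each query.

```typescript
// Hypothetical sketch of the server/client data-fetching split.
// Names, URLs, and endpoints are illustrative, not from the real app.

interface Movie {
  id: number;
  title: string;
}

// Minimal fetch-like signature so the sketch stays self-contained and testable.
type FetchLike = (url: string) => Promise<{ json(): Promise<Movie[]> }>;

// Runs on the server during rendering: the resulting grid is part of the
// initial HTML, so crawlers and the first paint both see the full movie list.
async function getInitialMovies(fetchFn: FetchLike): Promise<Movie[]> {
  const res = await fetchFn("https://api.example.com/movies/popular");
  return res.json();
}

// Runs in the browser from a "use client" search overlay: each query updates
// state in place, with no full page reload.
async function searchMovies(fetchFn: FetchLike, query: string): Promise<Movie[]> {
  const res = await fetchFn(`/api/search?q=${encodeURIComponent(query)}`);
  return res.json();
}
```

The point of the split is that neither function is wrong on its own; what matters is which rendering context each one lives in, and that is exactly the judgment the AI kept missing.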

The Human Factor: Code Review and Architectural Oversight

Working with AI taught me that code review has never been more important. But it's not just about catching bugs - it's about ensuring architectural consistency and long-term maintainability.

What I Learned to Watch For:

- Client components suggested for content that should be server-rendered for SEO, and server components asked to carry interactivity without proper hydration boundaries
- Data fetching strategies chosen for a single feature rather than for the page as a whole
- Patterns that drift from the conventions established elsewhere in the codebase

I found myself doing more thorough code reviews than ever before, not because the AI-generated code was buggy (it usually worked), but because I needed to ensure it fit into the larger architectural vision.

The Anti-Pattern: "Vibe Coding"

The biggest risk I see with AI-assisted development is what I call "vibe coding" - letting the AI make architectural decisions based on what "feels right" for individual features, without considering the application as a whole.

This approach might work for small scripts or prototypes, but it leads to inconsistent patterns, performance issues, and maintenance nightmares in production applications. The temptation is strong because AI-generated code often works immediately, but working code isn't the same as good code.

Best Practices for AI-Assisted Development

Based on this experience, here's what I recommend for working with AI coding tools:

Before Starting:

- Decide the high-level architecture yourself: which parts of the app are server-rendered, which are client-side, and how data flows between them
- Establish the conventions (file structure, data fetching patterns) you want generated code to follow

During Development:

- Review every AI-generated change for architectural fit, not just correctness; working code isn't the same as good code
- Intervene early when a suggestion optimizes a single feature at the expense of the whole, such as making SEO-critical content client-side

After Implementation:

- Audit for consistency across features: performance, SEO, and hydration boundaries
- Treat thorough code review as non-negotiable, even when the generated code works immediately

The Future of AI-Assisted Development

Despite these challenges, I'm optimistic about AI's role in software development. The productivity gains are real, especially for the tedious parts of coding that we all have to do but don't particularly enjoy.

However, the human developer's role is evolving rather than diminishing. We're becoming more like architects and code reviewers, focusing on high-level decisions while AI handles the implementation details. This requires us to level up our understanding of system design, performance implications, and architectural patterns.

Conclusion

Building Watch-List.me with AI assistance was both faster and more challenging than traditional development. The speed gains were substantial, but they came with the overhead of constant architectural oversight.

The key insight is that AI tools are powerful accelerators, not replacements for engineering judgment. They excel at generating code that works, but ensuring that code is well-architected, performant, and maintainable remains firmly in the human domain.

As we continue to integrate AI into our development workflows, the developers who thrive will be those who learn to leverage AI's strengths while maintaining rigorous standards for code quality and architectural consistency. The future belongs to human-AI collaboration, but only when the human remains firmly in the driver's seat.