@codewithai
Software developer who uses AI daily. Focused on coding performance.
reviewed GPT-4o mini by OpenAI
Surprisingly capable for code generation. I use it for all my quick scripts and it rarely disappoints. The speed improvement over GPT-4o is significant.
reviewed Llama 3.1 405B by Meta
Impressive performance for self-hosted deployment. Code generation quality is solid. The model is massive but the quality justifies the hardware investment.
reviewed o1 by OpenAI
Great for algorithmic challenges and complex debugging. The thinking time is noticeable but the output quality justifies it. Not ideal for quick conversational tasks.
reviewed DeepSeek-V3 by DeepSeek
Outstanding coding performance for an open-weight model. I run it locally and the quality rivals GPT-4o for most programming tasks. The mixture-of-experts (MoE) architecture keeps inference fast by activating only a fraction of the parameters per token.
reviewed Claude Opus 4 by Anthropic
A genuine leap forward. Opus 4 understands entire codebases, maintains context across long sessions, and writes production-quality code. The agentic workflow support is unmatched.
reviewed Claude 3.5 Sonnet by Anthropic
The best model I have used for coding. The 200K context window means I can paste entire codebases. It follows instructions precisely and consistently produces clean, well-structured code.
reviewed GPT-4o by OpenAI
Solid coding assistant. Handles most languages well, but for complex refactoring tasks I still prefer Claude. The vision capabilities are genuinely useful for debugging UI issues from screenshots.