DEPT®

Podcasts

Machine Learning Guide
The Ship It Podcast

Dept Podcasts

We’re a group of digital pioneers passionate about building software. If you are too, you’re in the right place. Hear from real-life DEPT® engineers (and some of our non-DEPT® pals too) about what it’s like developing & designing software for clients ranging from exciting startups to Fortune 500 companies.

Machine Learning Guide logo
Machine Learning Guide

Go beyond the fundamentals of machine learning and join creator & host Tyler Renelle as he covers intuition, models, math, languages, frameworks, and more. Machine Learning Guide is an audio course that boasts millions of downloads and has helped a ton of aspiring engineers learn the ropes when it comes to these rapidly evolving technologies.

Recent Episodes

MLA 022 Code AI Tools (2025-02-09)

Try a walking desk while studying ML or working on your projects! https://ocdevel.com/walk

Show notes: https://ocdevel.com/mlg/mla-22

Tools discussed:

  1. Windsurf: https://codeium.com/windsurf
  2. Copilot: https://github.com/features/copilot
  3. Cursor: https://www.cursor.com/
  4. Cline: https://github.com/cline/cline
  5. Roo Code: https://github.com/RooVetGit/Roo-Code
  6. Aider: https://aider.chat/

Other:

  1. Leaderboards: https://aider.chat/docs/leaderboards/
  2. Video of speed-demon: https://www.youtube.com/watch?v=QlUt06XLbJE&feature=youtu.be
  3. Reddit: https://www.reddit.com/r/chatgptcoding/

These tools boost programming productivity by acting as a pair-programming partner. The episode groups them into three categories:

Hands-Off Tools: These are solutions that charge a fixed monthly fee and require minimal user intervention. GitHub Copilot started with simple tab completions and now offers an agent mode similar to Cursor, which stands out for its advanced codebase indexing and intelligent file searching. Windsurf is noted for its simplicity—accepting prompts and performing automated edits—but some users report performance throttling after prolonged use.

Hands-On Tools: Aider is presented as a command-line utility that demands configuration and user involvement. It allows developers to specify files and settings, and it efficiently manages token usage by sending prompts in diff format. Aider also implements an “architect versus edit” approach: a reasoning model (such as DeepSeek R1) first outlines a sequence of changes, then an editor model (like Claude 3.5 Sonnet) produces precise code edits. This dual-model strategy enhances accuracy and reduces token costs, especially for complex tasks.
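The architect/editor split described above maps directly onto aider's command-line flags. A hypothetical invocation (not from the episode; the flag names follow aider's documented options, but the model identifiers and file names are illustrative—check aider's docs for current model availability):

```shell
# Aider reads provider API keys from environment variables.
export OPENROUTER_API_KEY="sk-..."

# --architect enables the two-model workflow: the reasoning model plans
# the change, then the editor model emits precise diff-format edits.
aider --architect \
      --model openrouter/deepseek/deepseek-r1 \
      --editor-model openrouter/anthropic/claude-3.5-sonnet \
      src/app.py
```

Because aider sends edits as diffs rather than whole files, the editor model's token usage stays low even on large files.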

Intermediate Power Tools: Open-source tools such as Cline and its more advanced fork, RooCode, require users to supply their own API keys and pay per token. These tools offer robust, agentic features, including codebase indexing, file editing, and even browser automation. RooCode stands out with its ability to autonomously expand functionality through integrations (for example, managing cloud resources or querying issue trackers), making it particularly attractive for tinkerers and power users.

A decision framework is suggested: for those new to AI coding assistants or with limited budgets, starting with Cursor (or cautiously exploring Copilot’s new features) is recommended. For developers who want to customize their workflow and dive deep into the tooling, RooCode or Cline offer greater control—always paired with Aider for precise and token-efficient code edits.

Also reviews model performance using a coding benchmark leaderboard that updates frequently. The current top-performing combination uses DeepSeek R1 as the architect and Claude 3.5 Sonnet as the editor, with alternatives such as OpenAI’s O1 and O3 Mini available. Tools like Open Router are mentioned as a way to consolidate API key management and reduce token costs.


Show notes: https://ocdevel.com/mlg/33

3Blue1Brown videos: https://3blue1brown.com/


  • Background & Motivation:

    • RNN Limitations: Sequential processing prevents full parallelization—even with attention tweaks—making them inefficient on modern hardware.
    • Breakthrough: “Attention Is All You Need” replaced recurrence with self-attention, unlocking massive parallelism and scalability.
  • Core Architecture:

    • Layer Stack: Consists of alternating self-attention and feed-forward (MLP) layers, each wrapped in residual connections and layer normalization.
    • Positional Encodings: Since self-attention is permutation invariant, add sinusoidal or learned positional embeddings to inject sequence order.
  • Self-Attention Mechanism:

    • Q, K, V Explained:
      • Query (Q): The representation of the token seeking contextual info.
      • Key (K): The representation of tokens being compared against.
      • Value (V): The information to be aggregated based on the attention scores.
    • Multi-Head Attention: Splits Q, K, V into multiple “heads” to capture diverse relationships and nuances across different subspaces.
    • Dot-Product & Scaling: Computes similarity between Q and K (scaled to avoid large gradients), then applies softmax to weigh V accordingly.
  • Masking:

    • Causal Masking: In autoregressive models, prevents a token from “seeing” future tokens, ensuring proper generation.
    • Padding Masks: Ignore padded (non-informative) parts of sequences to maintain meaningful attention distributions.
  • Feed-Forward Networks (MLPs):

    • Transformation & Storage: Post-attention MLPs apply non-linear transformations; many argue they’re where the “facts” or learned knowledge really get stored.
    • Depth & Expressivity: Their layered nature deepens the model’s capacity to represent complex patterns.
  • Residual Connections & Normalization:

    • Residual Links: Crucial for gradient flow in deep architectures, preventing vanishing/exploding gradients.
    • Layer Normalization: Stabilizes training by normalizing across features, enhancing convergence.
  • Scalability & Efficiency Considerations:

    • Parallelization Advantage: Entire architecture is designed to exploit modern parallel hardware, a huge win over RNNs.
    • Complexity Trade-offs: Self-attention’s quadratic complexity with sequence length remains a challenge; spurred innovations like sparse or linearized attention.
  • Training Paradigms & Emergent Properties:

    • Pretraining & Fine-Tuning: Massive self-supervised pretraining on diverse data, followed by task-specific fine-tuning, is the norm.
    • Emergent Behavior: With scale comes abilities like in-context learning and few-shot adaptation, aspects that are still being unpacked.
  • Interpretability & Knowledge Distribution:

    • Distributed Representation: “Facts” aren’t stored in a single layer but are embedded throughout both attention heads and MLP layers.
    • Debate on Attention: While some see attention weights as interpretable, a growing view is that real “knowledge” is diffused across the network’s parameters.
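The Q/K/V mechanics and causal masking described above can be sketched in a few lines of NumPy. This is a minimal single-head illustration under assumed toy dimensions (4 tokens, model width 8), not the episode's code; real implementations batch multiple heads and learn the projection matrices during training:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, causal=False):
    """Compute softmax(Q K^T / sqrt(d_k)) V, optionally with a causal mask."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    if causal:
        # Forbid position i from attending to positions j > i.
        mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
        scores = np.where(mask, -np.inf, scores)
    # Softmax over the key dimension (numerically stabilized).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy example: 4 tokens, d_model = 8, random projection matrices.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, w = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv, causal=True)
print(out.shape)                         # (4, 8)
print(np.allclose(np.triu(w, k=1), 0))   # True: no attention to future tokens
```

The scaling by sqrt(d_k) keeps the pre-softmax scores from growing with dimension, which would otherwise saturate the softmax and shrink gradients.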

In today’s episode we talk with DEPT® Digital Products Software Engineer Ali Halim about his origin story. We talk about late nights in college, what got him hooked, paying your dues in your first job, and seizing opportunities to grow your career. This is a great one for the aspiring technical team leaders out there!

In this episode, we’re talking about Architecture Sprints and why they’re valuable. Our client Bloomerang joins us for this one. Bloomerang is the donor relationship and volunteer management platform for thousands of nonprofits. Their SVP of Payments, Evan DaSilva, talks with us about how an Architecture Sprint with DEPT® helped them get past a challenging situation with technical leadership during the busy end-of-year giving season! We dive into what an Architecture Sprint is and why it’s a valuable investment to get your product moving, or moving again!

In this episode, we talk to DEPT® Project Manager Kate Flynn about her origin story: she went from Architect, to Construction Manager, to Software Project Manager. This leads to a great conversation about the parallels between architecture and software, and what software engineering can learn from architecture. Kate also shares her thoughts on striking a balance between planning too much and planning just enough. Finally, what origin story would be complete without a tale about building a two-story underground bunker and powerlifting? Enjoy!

In this episode, we interview Jonah Jolley, Director of Engineering, and Stephanie Bressan, PSM, CSPO, of DEPT® about the relationship between product and engineering teams. What makes a good partnership? What doesn’t? We also talk about the importance of empathy, trust, keeping priorities visible, and how you must come prepared with snacks! We wrap up with thoughts on Agile: Does it help or hurt? Do velocity metrics matter? Are standups even helpful?

On this episode of Ship It!, we go further into the topic of CMSs (Content Management Systems) by talking about DEPT DASH, a new product from DEPT® that helps you bootstrap applications that use headless CMSs like Strapi and Contentful. Host Matt Merrill is joined by Allan Winterseick, Managing Partner, DEPT US, and John Berger, Principal Engineer on DEPT DASH, to talk about it, as well as why you might use a headless CMS. A screencast of DEPT DASH in action is also available.


Discussing Databricks with Ming Chang from Raybeam (part of DEPT®)


Conversation with Dirk-Jan Verdoorn about Kubeflow (vs. cloud-native solutions like SageMaker)


Dirk-Jan Verdoorn - Data Scientist at Dept Agency

Kubeflow. (From the website:) The Machine Learning Toolkit for Kubernetes. The Kubeflow project is dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable and scalable. Our goal is not to recreate other services, but to provide a straightforward way to deploy best-of-breed open-source systems for ML to diverse infrastructures. Anywhere you are running Kubernetes, you should be able to run Kubeflow.

TensorFlow Extended (TFX). If using TensorFlow with Kubeflow, combine with TFX for maximum power. (From the website:) TensorFlow Extended (TFX) is an end-to-end platform for deploying production ML pipelines. When you're ready to move your models from research to production, use TFX to create and manage a production pipeline.

Alternatives:

MLA 019 DevOps (2022-01-13)


Chatting with co-workers about the role of DevOps in a machine learning engineer's life


Expert coworkers at Dept

DevOps tools

Pictures (funny and serious)


DEPT® Podcasts © 2025