
Test & Code in Python

English, Technology, 1 season, 204 episodes, 4 days, 16 hours
About
Topics include automated testing, testing strategy, software engineering practices, packaging, Python, pytest, data science, TDD, continuous integration, and software methodologies. Also anything I think helps make the daily life of a software developer more fun and rewarding. Hosted by Brian Okken.

Free Your Inner Nonfiction Writer

Learn how to write nonfiction fast and well. Johanna Rothman joins the show to discuss writing nonfiction. Johanna's book: Free Your Inner Nonfiction Writer. If you like Test & Code, I think you'll also like Python People, a new podcast about getting to know the people who make Python and our community awesome. Be sure to check out pythonpeople.fm.
7/18/2023, 37 minutes, 46 seconds

Open Source at Intel

Open Source is important to Intel and has been for a very long time. Joe Curley, vice president and general manager of software products and ecosystem, and Arun Gupta, vice president and general manager for open ecosystems, join the show to discuss open source, oneAPI, and open ecosystems at Intel.
7/17/2023, 43 minutes, 33 seconds

202: Using Towncrier to Keep a Changelog

Hynek joins the show to discuss towncrier. At the top of the towncrier documentation, it says "towncrier is a utility to produce useful, summarized news files (also known as changelogs) for your project." Towncrier is used by "Twisted, pytest, pip, BuildBot, and attrs, among others." This is the last of 3 episodes focused on keeping a CHANGELOG. Episode 200 (https://testandcode.com/200) kicked off the series with keepachangelog.com and Olivier Lacan. In 201 (https://testandcode.com/201) we had Ned Batchelder discussing scriv. Special Guest: Hynek Schlawack.
5/31/2023, 49 minutes, 44 seconds

201: Avoid merge conflicts on your CHANGELOG with scriv

Last week we talked about the importance of keeping a changelog. This week we talk with Ned Batchelder about scriv, a tool to help maintain that changelog. Scriv "is a command-line tool for helping developers maintain useful changelogs. It manages a directory of changelog fragments. It aggregates them into entries in a CHANGELOG file." Special Guest: Ned Batchelder.
5/25/2023, 35 minutes, 22 seconds

200: Keep a CHANGELOG

A changelog is a file which contains a curated, chronologically ordered list of notable changes for each version of a project. This episode is about what a changelog is, with an interview with Olivier Lacan, creator of keepachangelog.com (https://keepachangelog.com). The next two episodes talk about some tools to help software project teams keep changelogs while avoiding merge conflicts. Special Guest: Olivier Lacan.
5/19/2023, 52 minutes, 13 seconds

199: Is Azure Right for a Side Project?

For a web side project to go from "working on desktop" to "live in the cloud", one decision that needs to be made is where to host everything. One option is Microsoft Azure. Lots of corporate sites use it. Is it right for side projects? Pamela Fox, a Cloud Advocate for Python at Microsoft, joins the show to help us with that question. Special Guest: Pamela Fox.
5/4/2023, 51 minutes, 11 seconds

198: Testing Django Web Applications

Django has some built-in ways to test your application. There's also pytest-django and other plugins that help with testing. Carlton Gibson and Will Vincent from the Django Chat Podcast join the show to discuss how to get started testing your Django application.
00:00 Introduction
00:20 Thanks porkbun for sponsoring
01:41 Welcome and podcasting discussion
17:21 Django starter projects
21:35 Testing Django
There should be chapters also, if your podcast player supports them. Special Guests: Carlton Gibson and Will Vincent.
4/27/2023, 1 hour, 2 minutes, 1 second

197: Python project trove classifiers - Do you need this bit of pyproject.toml metadata?

Classifiers are one bit of Python project metadata that predates PyPI. Classifiers are weird. They were around in setuptools days, and are still here with pyproject.toml. What are they? Why do we need them? Do we need them? Which classifiers should I include? Why are they called "trove classifiers" in the Python docs (https://pypi.org/classifiers/)? Brett Cannon joins the show to discuss these wacky bits of metadata. Here's an example, from pytest-crayons (https://github.com/okken/pytest-crayons/blob/main/pyproject.toml):

```toml
[project]
...
classifiers = [
    "License :: OSI Approved :: MIT License",
    "Framework :: Pytest"
]
```

Special Guest: Brett Cannon.
4/5/2023, 32 minutes, 49 seconds

196: I am not a supplier

Should we think of open source components the same way we think of physical parts for manufactured goods? There are problems with the supply chain analogy when applied to software. Thomas Depierre discusses some of those issues in this episode. Special Guest: Thomas Depierre.
3/31/2023, 36 minutes, 1 second

195: What would you change about pytest?

Anthony Sottile and Brian discuss changes that would be cool for pytest, even unrealistic changes. These are changes we'd make to pytest if we didn't have to care about backwards compatibility.
Anthony's list:
* The import system
* Multi-process support out of the box
* Async support
* Changes to the fixture system
* Extend the assert rewriting to make it modular
* Add matchers to the assert mechanism
* Ban test class inheritance
Brian's list:
* Extend assert rewriting for custom rewriting, like check
* pytester matchers available for all tests
* Throw out nose and unittest compatibility plugins
* Throw out setup_module, teardown_module, and other xUnit-style functions
* Remove a bunch of the hook functions
* Documentation improvements for the remaining hook functions, with examples of how to use them
* Start running tests before collection is done
* Split collection and running into two processes
* Have fixtures be able to know the result of the test during teardown
Special Guest: Anthony Sottile.
3/8/2023, 57 minutes, 14 seconds

194: Test & Code Returns

A brief discussion of why Test & Code has been off the air for a bit, and what to expect in upcoming episodes.
3/5/2023, 6 minutes, 3 seconds

193: The Good Research Code Handbook - Patrick Mineault

I don't think it's too much of a stretch to say that software is part of most scientific research now. From astronomy, to neuroscience, to chemistry, to climate models. If you work in research that hasn't been affected by software yet, just wait. But how good is that software? How much of common best practices in software development are making it to those writing software in the sciences? Patrick Mineault has written "The Good Research Code Handbook". It's a website. It's concise. And it will put you on the right path to writing better software. Even if you don't write science based software, and even if you already have a CS degree, there's some good information worth reading. Special Guest: Patrick Mineault.
8/30/2022, 43 minutes, 13 seconds

192: Learn to code through game development with PursuedPyBear - Piper Thunstrom

The first game I remember coding, or at least copying from a magazine, was in Basic. It was Lunar Lander. Learning to code a game is a way that a lot of people get started and excited about programming. Of course, I don't recommend Basic. Now we've got Python. And one of the game engines available for Python is PursuedPyBear, a project started by Piper Thunstrom. Piper joins us this episode and we talk about PursuedPyBear, learning to code, and learning CS concepts with game development. PursuedPyBear, ppb, is a game framework great for learning with, with goals of being fun, education friendly, an example of idiomatic Python, hardware library agnostic, and built on event driven and object oriented concepts. Special Guest: Piper Thunstrom.
8/6/2022, 42 minutes, 27 seconds

191: Running your own site for fun and absolutely no profit whatsoever - Brian Wisti

Having a personal site is a great playground for learning tons of skills. Brian Wisti discusses the benefits of running his own blog over the years. Special Guest: Brian Wisti.
7/1/2022, 46 minutes, 3 seconds

190: Testing PyPy - Carl Friedrich Bolz-Tereick

PyPy is a fast, compliant alternative implementation of Python. CPython is implemented in C. PyPy is implemented in Python. What does that mean? And how do you test something as huge as an alternative implementation of Python? Special Guest: Carl Friedrich Bolz-Tereick.
6/21/2022, 51 minutes, 17 seconds

189: attrs and dataclasses - Hynek Schlawack

In Python, before dataclasses, we had attrs. Before attrs, it wasn't pretty. The story of attrs and dataclasses is actually intertwined. They've built on each other. And in the middle of it all, Hynek. Hynek joins the show today to discuss some history of attrs and dataclasses, and some differences. If you ever need to create a custom class in Python, you should listen to this episode. Full Transcript (https://pythontest.com/testandcode/189-attrs-dataclasses-hynek-schlawack/) Special Guest: Hynek Schlawack.
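To see the kind of boilerplate both libraries remove, here is a minimal sketch using the stdlib side of the pair; the `Point` class is a hypothetical example, not code from the episode, and attrs offers a very similar declaration style:

```python
from dataclasses import dataclass, field

# A minimal sketch (hypothetical example, not from the episode) of what
# dataclasses generate for you from field declarations: __init__,
# __repr__, and __eq__. attrs predates dataclasses and offers a
# similar declarative style.
@dataclass
class Point:
    x: int
    y: int
    tags: list = field(default_factory=list)  # safe mutable default

p = Point(1, 2)
print(p)                  # the generated __repr__: Point(x=1, y=2, tags=[])
print(p == Point(1, 2))   # the generated __eq__ compares field by field
```

Each instance gets its own `tags` list via `default_factory`, which avoids the classic shared-mutable-default bug.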
6/7/2022, 32 minutes, 23 seconds

188: Python's Rich, Textual, and Textualize - Innovating the CLI

Will McGugan has brought a lot of color to Python CLIs with Rich. Then Textual started rethinking full command line applications, including layout with CSS. And now Textualize, a new startup, is bringing CLI apps to the web. Full Transcript (https://pythontest.com/testandcode/188-python-rich-textual-textualize-innovating-cli/) Special Guest: Will McGugan.
5/17/2022, 35 minutes, 36 seconds

187: Teaching Web Development, including Front End Testing

When you are teaching someone web development skills, when is the right time to start teaching code quality and testing practices? Karl Stolley believes it's never too early. Let's hear how he incorporates code quality in his courses. Our discussion includes:
* starting people off with good dev practices and tools
* linting
* html and css validation
* visual regression testing
* using local dev servers, including https
* incorporating testing with git hooks
* testing to aid in css optimization and refactoring
* Backstop
* Nightwatch
* BrowserStack
* the three-legged stool of learning and progressing as a developer: testing, version control, and documentation
Karl is also writing a book on WebRTC, so we jump into that a bit too. Full Transcript (https://pythontest.com/testandcode/187-teaching-web-development-including-end-testing/) Special Guest: Karl Stolley.
5/13/2022, 39 minutes, 48 seconds

186: Developer and Team Productivity

Being productive is obviously a good thing. Can we measure it? Should we measure it? There have been failed attempts in the past, like counting lines of code. Currently, there are new tools to measure productivity, like using git metrics. Nick Hodges joins the show to discuss the good and the bad of developer and team productivity, including how we can improve productivity. Full Transcript (https://pythontest.com/testandcode/186-developer-team-productivity/) Special Guest: Nick Hodges.
5/12/2022, 51 minutes, 7 seconds

185: Python + Django + Rich + Testing == Awesome

Django has a handful of console commands to help manage and develop sites. django-rich (https://pypi.org/project/django-rich/) adds color and nice formatting. Super cool. In a recent release, django-rich also adds nice colorized tracebacks to the Django test runner. Full Transcript (https://pythontest.com/testandcode/185-python-django-rich-testing-awesome/) Special Guests: Adam Johnson and David Smith.
5/11/2022, 20 minutes, 55 seconds

184: Twisted and Testing Event Driven / Asynchronous Applications - Glyph

Twisted has been supporting asynchronous / event driven applications way before asyncio. Twisted, and Glyph, have also been encouraging automated tests for a very long time. Twisted uses a technique that should be usable by other applications, even those using asyncio or other event driven architectures. Full Transcript (https://pythontest.com/testandcode/184-twisted-testing-event-driven-async-apps/) Special Guest: Glyph.
3/21/2022, 41 minutes, 16 seconds

183: Managing Software Teams - Ryan Cheley

Ryan Cheley joins me today to talk about some challenges of managing software teams, and how to handle them. We end up talking about a lot of skills that are excellent for software engineers as well as managers. Some topics discussed:
* handling code reviews
* asking good questions
* being honest about what you can't do with current resources and data
* discussing tradeoffs and offering solutions that can be completed faster than the ideal solution
* balancing engineering and managing
* making sure documentation happens
* remote teams
* encouraging collaboration
* encouraging non-work-related conversations
* watching out for overworking
Full Transcript (https://pythontest.com/testandcode/183-managing-software-teams/) Special Guest: Ryan Cheley.
3/17/2022, 47 minutes, 48 seconds

182: An Unorthodox Technical Interview and Hiring Process - Nathan Aschbacher

Don't you just love technical interviews, with someone who just saw your resume or CV 5 minutes ago asking you to write some code on a whiteboard? Probably code that has nothing to do with anything you've done before or anything you will do at the company. No? Neither does Nathan Aschbacher. So when he started building the team at his company, he decided to do things differently. Hiring is one of the essential processes for building a great team. However, it's a high noise, low signal process. Nathan Aschbacher has a relatively unorthodox tech hiring approach. He's trying to make it very humane, with a better signal to noise ratio. Nathan is not interested in bizarre interview processes where the interviewer doesn't know anything about the interviewee beforehand, all people are asked the same questions, and people are asked to code on whiteboards. Instead, he states, "if the goal is to try to figure out if the person can do the work with your team, and you're trying to build the team that you are adding this person to, they need to know what the team is like, and determine if they want to be part of the team, and the team needs to know what the person is like and if they would be additive to the team."
So what's Nathan's process?
- Screening resumes and CVs, looking for internal motivation to become an expert at something.
- Basic phone screen, very informal.
- A couple of 2-3 hour pairings with someone on the team on whatever they are working on.
- Debriefing both the candidate and the team afterward.
- Giving the candidate an opportunity for a second impression and following up on difficulties during the pairings.
We discuss the process, and also:
- trying to remove the barriers to team integration
- treating people as humans
And of course, there's the story of how Nathan ended up interviewing someone with zoo experience and no technical experience for a technical role. Of course, it was a misunderstanding of a job requirement around experience with ZooKeeper. But it's a good story. Full Transcript (https://pythontest.com/testandcode/182-unorthodox-tech-interview/) Special Guest: Nathan Aschbacher.
3/8/2022, 47 minutes, 50 seconds

181: Boost Your Django DX - Adam Johnson

We talk with Adam Johnson about his new book, "Boost Your Django DX". Developer experience includes tools and practices to make developers more effective and efficient, and just plain make software development more fun and satisfying. One of the things I love about this book is that it's not just for Django devs. I'd guess that about half the book is about topics that all Python developers would find useful, from virtual environments to linters to testing. But of course, also tons of tips and tools for working with Django. Full Transcript (https://pythontest.com/testandcode/181-boost-your-django-dx/) Special Guest: Adam Johnson.
3/1/2022, 27 minutes, 32 seconds

180: Lean TDD

Lean TDD is an attempt to reconcile some conflicting aspects of Test Driven Development and Lean Software Development. I've mentioned Lean TDD on the podcast a few times and even tried to do a quick outline at the end of episode 162 (https://testandcode.com/162). This episode is a more complete outline, or at least a first draft. If you feel you've got a good understanding of TDD, and it's working awesome for you, that's great. Keep doing what you're doing. There are no problems. For me, the normal way TDD is taught just doesn't work. So I'm trying to come up with a spin on some old ideas to make it work for me. I'm hoping it works for you as well. I'm calling the new thing Lean TDD. It's inspired by decades of experience writing software and influence from dozens of sources, including Pragmatic Programmer, Lean Software Development, Test-Driven Development by Example, and many blog posts and wiki articles. The main highlights, however, come from the collision of ideas between Lean and TDD and how I've tried to resolve the seemingly opposing processes. Full Transcript (https://pythontest.com/testandcode/180-lean-tdd/)
2/21/2022, 26 minutes, 5 seconds

179: Exploratory Testing

Exploratory testing is absolutely an essential part of a testing strategy. This episode discusses what exploratory testing is, its benefits, and how it fits within a framework of relying on automated tests for most of our testing. Full Transcript (https://pythontest.com/testandcode/179-exploratory-testing/)
2/9/2022, 11 minutes, 39 seconds

178: The Five Factors of Automated Software Testing

"There are five practical reasons that we write tests. Whether we realize it or not, our personal testing philosophy is based on how we judge the relative importance of these reasons." - Sarah Mei This episode discusses the factors. Sarah's order: Verify the code is working correctly Prevent future regressions Document the code’s behavior Provide design guidance Support refactoring Brian's order: Verify the code is working correctly Prevent future regressions Support refactoring Provide design guidance Document the code’s behavior The episode includes reasons why I've re-ordered them. Full Transcript (https://pythontest.com/testandcode/178-factors-automated-software-testing/)
1/31/2022, 10 minutes, 26 seconds

177: Unit Test vs Integration Test and The Testing Trophy

A recent Twitter thread by Simon Willison reminded me that I've been meaning to do an episode on the testing trophy. This discussion is about the distinction between unit and integration tests, what those terms mean, and where we should spend our testing time. Full Transcript (https://pythontest.com/testandcode/177-unit-test-integration-test-testing-trophy/)
1/28/2022, 21 minutes, 39 seconds

176: SaaS Side Projects

The idea of having a software as a service product sounds great, doesn't it? Solve a problem with software. Have a nice looking landing page and website. Get paying customers. Eventually have it make enough revenue so you can turn it into your primary source of income. There's a lot of software talent out there. We could solve lots of problems. But going from idea to product to first customer is non-trivial. Especially as a side hustle. This episode discusses some of the hurdles from idea to first customer. Brandon Braner is building Released.sh. It's a cool idea, but it's not done yet. Brandon and I talk about building side projects:
- finding a target audience
- limiting scope to something doable by one person
- building a great looking landing page
- finding time to work on things
- prioritizing and planning
- learning while building
- even utilizing third party services to allow you to launch faster
- and last, but not least, having fun
Full Transcript (https://pythontest.com/testandcode/176-saas-side-projects/) Special Guest: Brandon Braner.
1/18/2022, 26 minutes, 8 seconds

175: Who Should Do QA?

Who should do QA? How does that change with different projects and teams? What does "doing QA" mean, anyway? Answering these questions is the goal of this episode. Full Transcript (https://pythontest.com/testandcode/175-who-should-do-qa/)
1/12/2022, 13 minutes, 6 seconds

174: pseudo-TDD - Paul Ganssle

In this episode, I talk with Paul Ganssle about a fun workflow that he calls pseudo-TDD. Pseudo-TDD is a way to keep your commit history clean and your tests passing with each commit. This workflow includes using pytest xfail and some semi-advanced version control features. Some strict forms of TDD include something like this:
- write a failing test that demonstrates a lacking feature or defect
- write the source code to get the test to pass
- refactor if necessary
- repeat
In reality, at least for me, the software development process is way more messy than this, and not so smooth and linear. Paul's workflow allows you to develop non-linearly, but commit cleanly. Full Transcript (https://pythontest.com/testandcode/174-pseudo-tdd/) Special Guest: Paul Ganssle.
12/22/2021, 39 minutes, 24 seconds

173: Why NOT unittest?

In the preface of "Python Testing with pytest" I list some reasons to use pytest, under a section called "why pytest?". Someone asked me recently a different but related question: "why NOT unittest?". unittest is an xUnit style framework. For me, xUnit style frameworks are fatally flawed for software testing. That's what this episode is about, my opinion of:
* "Why NOT unittest?", or more broadly,
* "What are the fatal flaws of xUnit?"
Full Transcript (https://pythontest.com/testandcode/173-why-not-unittest/)
12/17/2021, 23 minutes, 30 seconds

172: Designing Better Software with a Prototype Mindset

A prototype is a preliminary model of something, from which other forms are developed or copied. In software, we think of prototypes as early things, or a proof of concept. We don't often think of prototyping during daily software development or maintenance. I think we should. This episode is about growing better designed software with the help of a prototype mindset. Full Transcript (https://pythontest.com/testandcode/172-designing-better-software-prototype-mindset/)
11/30/2021, 6 minutes, 53 seconds

171: How and why I use pytest's xfail - Paul Ganssle

Paul Ganssle is a software developer at Google, a core Python dev, and an open source maintainer for many projects, and he has some thoughts about pytest's xfail. He was an early skeptic of using xfail, and is now a proponent of the feature. In this episode, we talk about some open source workflows that are possible because of xfail. Full Transcript (https://pythontest.com/testandcode/171-use-pytest-xfail/) Special Guest: Paul Ganssle.
11/22/2021, 38 minutes, 26 seconds

170: pytest for Data Science and Machine Learning - Prayson Daniel

Prayson Daniel, a principal data scientist, discusses testing machine learning pipelines with pytest. Prayson is using pytest for some pretty cool stuff, including:
* unit tests, of course
* testing pipeline stages
* counterfactual testing
* performance testing
All with pytest. So cool. Full Transcript (https://pythontest.com/testandcode/170-pytest-data-science-machine-learning/) Special Guest: Prayson Daniel.
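As a rough sketch of the "testing pipeline stages" idea (the stage function and test here are hypothetical, not Prayson's actual code): if each pipeline stage is a plain function, pytest can exercise it with small, hand-checkable inputs.

```python
import pytest

# Hypothetical pipeline stage (not from the episode): min-max scaling.
def scale_to_unit(values):
    """Scale a list of numbers into the range [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# A stage-level test: tiny inputs whose expected outputs are obvious.
@pytest.mark.parametrize("values, expected", [
    ([0, 5, 10], [0.0, 0.5, 1.0]),
    ([2, 4], [0.0, 1.0]),
])
def test_scale_to_unit(values, expected):
    assert scale_to_unit(values) == expected
```

Testing each stage in isolation like this keeps failures localized, instead of only testing the full pipeline end to end.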
11/18/2021, 45 minutes, 12 seconds

169: Service and Microservice Performance Monitoring - Omri Sass

Performance monitoring and error detection is just as important with services and microservices as with any system, but with added complexity. Omri Sass joins the show to explain telemetry and monitoring of services and of systems with services. Full Transcript (https://pythontest.com/testandcode/169-service-microservice-performance-monitoring/) Special Guest: Omri Sass.
11/11/2021, 30 minutes, 14 seconds

168: Understanding Complex Code by Refactoring into Larger Functions

To understand complex code, it can be helpful to remove abstractions, even if it results in larger functions. This episode walks through a process I use to refactor code that I need to debug and fix, but don't completely understand. Full Transcript (https://pythontest.com/testandcode/168-understanding-complex-code-refactoring/)
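A hypothetical before/after sketch of the idea (not code from the episode): inlining small helpers into one larger function so the whole behavior is visible in one place while debugging.

```python
# Before: the logic is scattered across tiny abstractions.
def _clean(s):
    return s.strip().lower()

def _is_valid(s):
    return bool(s)

def count_valid(lines):
    return sum(1 for line in lines if _is_valid(_clean(line)))

# After: the same behavior inlined into one larger function.
# Easier to read top to bottom, set breakpoints in, and fix.
def count_valid_inlined(lines):
    count = 0
    for line in lines:
        cleaned = line.strip().lower()
        if cleaned:  # non-empty after cleaning
            count += 1
    return count

data = ["  Hello ", "", "   ", "World"]
```

Once the bug is understood and fixed, the abstractions can always be reintroduced; the inlining is a debugging aid, not a final design.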
11/2/2021, 11 minutes, 19 seconds

167: React, TypeScript, and the Joy of Testing - Paul Everitt

Paul has a tutorial on testing and TDD with React and TypeScript. We discuss workflow and the differences and similarities between testing with React/TypeScript and Python. We also discuss what lessons we can bring from front end testing to Python testing. Full Transcript (https://pythontest.com/testandcode/167-react-typescript-joy-testing/) Special Guest: Paul Everitt.
10/22/2021, 36 minutes, 52 seconds

166: unittest expectedFailure and xfail

xfail isn't just for pytest tests. Python's unittest has @unittest.expectedFailure. In this episode, we cover:
- using @unittest.expectedFailure
- the results of passing and failing tests with expectedFailure
- using pytest as a test runner for unittest
- using pytest markers on unittest tests
Docs for expectedFailure: https://docs.python.org/3/library/unittest.html#skipping-tests-and-expected-failures
Some sample code. unittest only:

```python
import unittest


class ExpectedFailureTestCase(unittest.TestCase):
    @unittest.expectedFailure
    def test_fail(self):
        self.assertEqual(1, 0, "broken")

    @unittest.expectedFailure
    def test_pass(self):
        self.assertEqual(1, 1, "not broken")
```

unittest with pytest markers:

```python
import unittest

import pytest


class ExpectedFailureTestCase(unittest.TestCase):
    @pytest.mark.xfail
    def test_fail(self):
        self.assertEqual(1, 0, "broken")

    @pytest.mark.xfail
    def test_pass(self):
        self.assertEqual(1, 1, "not broken")
```

Full Transcript (https://pythontest.com/testandcode/166-unittest-expected-failure-xfail/)
10/14/2021, 6 minutes, 23 seconds

165: pytest xfail policy and workflow

A discussion of how to use the xfail feature of pytest to help with communication on software projects. The episode covers:
* What is xfail
* Why I use it
* Using reason effectively by including issue tracking numbers
* Using xfail_strict
* Adding --runxfail when transitioning from development to feature freeze
* What to do about test failures
* How all of this might help with team communication
Full Transcript (https://pythontest.com/testandcode/165-pytest-xfail-policy-workflow/)
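A minimal sketch of the policy pieces mentioned in the notes, an issue number in the reason plus strict mode; the function and issue number are hypothetical, not from the episode:

```python
import pytest

def add(a, b):
    return a - b  # deliberate bug, "tracked" in a hypothetical issue

# reason carries the issue-tracker number; strict=True means an
# unexpected pass shows up as a failure (XPASS becomes FAILED), so
# the marker gets removed once the bug is actually fixed.
@pytest.mark.xfail(reason="bug #1234 (hypothetical)", strict=True)
def test_add():
    assert add(2, 2) == 4
```

Setting `xfail_strict = true` in the pytest config makes strict the default for every xfail marker, which is the project-wide version of this policy.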
10/7/2021, 9 minutes, 44 seconds

164: Debugging Python Test Failures with pytest

An overview of the pytest flags that help with debugging. From Chapter 13, Debugging Test Failures, of Python Testing with pytest, 2nd edition (https://pythontest.com/pytest-book/). pytest includes quite a few command-line flags that are useful for debugging. We talk about these flags in this episode.
Flags for selecting which tests to run, in which order, and when to stop:
* -lf / --last-failed: Runs just the tests that failed last.
* -ff / --failed-first: Runs all the tests, starting with the last failed.
* -x / --exitfirst: Stops the test session after the first failure.
* --maxfail=num: Stops the tests after num failures.
* -nf / --new-first: Runs all the tests, ordered by file modification time.
* --sw / --stepwise: Stops the tests at the first failure. Starts the tests at the last failure next time.
* --sw-skip / --stepwise-skip: Same as --sw, but skips the first failure.
Flags to control pytest output:
* -v / --verbose: Displays all the test names, passing or failing.
* --tb=[auto/long/short/line/native/no]: Controls the traceback style.
* -l / --showlocals: Displays local variables alongside the stacktrace.
Flags to start a command-line debugger:
* --pdb: Starts an interactive debugging session at the point of failure.
* --trace: Starts the pdb source-code debugger immediately when running each test.
* --pdbcls: Uses alternatives to pdb, such as IPython's debugger with --pdbcls=IPython.terminal.debugger:TerminalPdb.
This list is also found in Chapter 13 of Python Testing with pytest, 2nd edition (https://pythontest.com/pytest-book/). The chapter is "Debugging Test Failures" and covers way more than just debug flags, while walking through debugging 2 test failures. Full Transcript (https://pythontest.com/testandcode/164-debugging-python-test-failures-pytest/)
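A typical combination of these flags during a debugging session might look like this; the command lines are illustrative, not from the book:

```shell
# Re-run only the last failures, stop at the first one,
# with short tracebacks plus local variables:
pytest --lf -x --tb=short -l

# Same selection, but drop into the debugger at the failure point:
pytest --lf -x --pdb
```

Combining a selection flag (--lf), a stop flag (-x), and an output or debugger flag is usually more useful than any one of them alone.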
9/14/2021, 13 minutes, 17 seconds

163: pip install ./local_directory - Stéphane Bidoul

pip : "pip installs packages" or maybe "Package Installer for Python" pip is an invaluable tool when developing with Python. A lot of people know pip as a way to install third party packages from pypi.org You can also use pip to install from other indexes (or is it indices?) You can also use pip to install a package in a local directory. That's the part I want to jump in and explore with Stéphane Bidoul. The way pip installs from a local directory is about to change, and the story is fascinating. Full Transcript (https://pythontest.com/testandcode/163-pip-install-local-directory/) Special Guest: Stéphane Bidoul.
8/20/2021, 29 minutes, 39 seconds

162: Flavors of TDD

What flavor of TDD do you practice? In this episode we talk about:
* Classical vs Mockist TDD
* Detroit vs London (I actually refer to it in the episode as Chicago instead of Detroit. Oh well.)
* Static vs Behavior
* Inside Out vs Outside In
* Double Loop TDD
* BDD
* FDD
* Tracer Bullets
* Rules of TDD
* Team Structure
* Lean TDD
This is definitely an episode I'd like feedback on. Reach out to me @brianokken (https://twitter.com/brianokken) or via the contact form (https://testandcode.com/contact) for further questions or if I missed some crucial variant of TDD that you know and love. Full Transcript (https://pythontest.com/testandcode/162-flavors-tdd/)
8/3/2021, 22 minutes, 44 seconds

161: Waste in Software Development

Software development processes create value, and have waste, in the Lean sense of the word waste. Lean manufacturing and lean software development changed the way we look at value and waste. This episode looks at lean definitions of waste, so we can see it clearly when we encounter it. I'm going to use the term waste and value in future episodes. I'm using waste in a Lean sense, so we can look at software processes critically, see the value chain, and try to reduce waste. Lean manufacturing and lean software development caused people to talk about and examine waste and value, even in fields where we didn't really think about waste that much to begin with. Software is just ones and zeros. Is there waste? When I delete a file, nothing goes into the landfill. The mistake I'm making here is confusing the common English definition of waste when what we're talking about is the Lean definition of waste. This episode tries to clear up the confusion. Full Transcript (https://pythontest.com/testandcode/161-waste-in-software-development/)
7/20/2021, 18 minutes, 49 seconds

160: DRY, WET, DAMP, AHA, and removing duplication from production code and test code

Should your code be DRY or DAMP or something completely different? How about your test code? Do different rules apply? Wait, what do all of these acronyms mean? We'll get to all of these definitions, and then talk about how it applies to both production code and test code in this episode. Full Transcript (https://pythontest.com/testandcode/160-dry-wet-damp-aha/)
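One way the production/test distinction can play out, as a hypothetical sketch (the tax example is mine, not from the episode): production code stays DRY with a single source of truth, while the test stays DAMP by spelling out the expected value literally rather than re-deriving it.

```python
TAX_RATE = 0.08  # hypothetical rate: a single source of truth (DRY)

def price_with_tax(price):
    return round(price * (1 + TAX_RATE), 2)

# DAMP-style test: the expectation is a literal, not
# price * (1 + TAX_RATE), so a bug in the formula cannot
# hide in the test as well.
def test_price_with_tax():
    assert price_with_tax(100.00) == 108.00
```

If the test repeated the production formula, both could be wrong together; the duplication of the concrete value here is deliberate, not waste.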
7/8/2021, 14 minutes, 40 seconds

159: Python, pandas, and Twitter Analytics - Matt Harrison

When learning data science and machine learning techniques, you need to work on a data set. Matt Harrison had a great idea: Why not use your own Twitter analytics data? So, he did that with his own data, and shares what he learned in this episode, including some of his secrets to gaining followers. In this episode we talk about:
* Looking at your own Twitter analytics data
* Using Python, pandas, and Jupyter for data cleaning and exploratory analysis
* Data visualization
* Machine learning, principal component analysis, clustering
* Model drift and re-running analysis
* What kind of tweets perform well
* And much more
Full Transcript (https://pythontest.com/testandcode/159-python-pandas-twitter-analytics/) Special Guest: Matt Harrison.
7/2/202147 minutes, 51 seconds
Episode Artwork

158: TDD in Swift - Gio

Iterative processes that include writing test code and production code together, such as TDD, help make coding fun. All of us that care about developing quality code with the help of testing can learn from each other, regardless of programming language. Today we step outside our normal Python comfort zone and talk with Gio about TDD in Swift. Gio Lodi, author of TDD in Swift, joins the show to discuss Test Driven Development, software workflows, bottom up vs top down, rapid feedback, developer vs customer facing tests, and more. Full Transcript (https://pythontest.com/testandcode/158-tdd-swift/) Special Guest: Gio Lodi.
6/18/202143 minutes, 20 seconds
Episode Artwork

157: pre-commit - Anthony Sottile

pre-commit started as a framework for running linters and code formatters during git actions via git hooks. It's grown and expanded and now supports an extensive list of languages and actions and manual running of actions. But even at its core, it's great for letting computers nitpick about whitespace and formatting, so that code reviews can focus on architecture and design. Anthony Sottile discusses pre-commit, for use locally by developers, and pre-commit.ci, which can run actions during merge requests. "Git hook scripts are useful for identifying simple issues before submission to code review. We run our hooks on every commit to automatically point out issues in code such as missing semicolons, trailing whitespace, and debug statements. By pointing these issues out before code review, this allows a code reviewer to focus on the architecture of a change while not wasting time with trivial style nitpicks." - pre-commit.com (https://pre-commit.com/) "Developers spend a fair chunk of time during their development flow fixing relatively trivial problems in their code. pre-commit.ci both enforces that these issues are discovered, which is opt in for each developer workflow via pre-commit, but also fixes the issues automatically, letting developers focus their time on more valuable problems." - A user of pre-commit.ci (https://pre-commit.ci/) Full Transcript (https://pythontest.com/testandcode/157-pre-commit/) Special Guest: Anthony Sottile.
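For context, the configuration pre-commit reads looks something like this minimal sketch (the `rev` tags are placeholders; pin them to current releases for a real project):

```yaml
# .pre-commit-config.yaml -- a minimal sketch
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0   # placeholder tag
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
  - repo: https://github.com/psf/black
    rev: 23.1.0   # placeholder tag
    hooks:
      - id: black
```

With this file committed, `pre-commit install` wires the hooks into git, and `pre-commit run --all-files` runs every hook against the whole repository.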
6/11/202141 minutes, 40 seconds
Episode Artwork

156: Flake8: Python linting framework with Pyflakes, pycodestyle, McCabe, and more - Anthony Sottile

Flake8 is a command-line tool for linting Python projects. By default, it includes lint checks provided by Pyflakes, pycodestyle, and McCabe. It's also a platform, and allows plugins to extend the checks. Flake8 will run third-party extensions if they are found and installed. But what does all of that mean? Anthony Sottile is a maintainer of flake8 and has kindly offered to explain it to us. Full Transcript (https://pythontest.com/testandcode/156-flake8-python-linting-pyflakes-pycodestyle-mccabe) Special Guest: Anthony Sottile.
6/3/202122 minutes, 54 seconds
Episode Artwork

155: Four Questions to Ask Frequently During Software Projects - Tim Ottinger

Tim Ottinger has four questions that work great in many situations, from doing homework, to cooking, to writing code, to entire software projects. They are actually awesome questions to ask during a software project. We discuss the questions, where they came from, and look at some uses in software. The questions: What is it that needs to be done? What do we need in order to do it? Where can we get what we need? How can we tell if we’re doing it right? Bonus question that can be swapped out for #1: What's the most important thing that it doesn't do yet? Full Transcript (https://pythontest.com/testandcode/155-questions-ask-frequently-software-projects) Special Guest: Tim Ottinger.
5/28/202122 minutes, 48 seconds
Episode Artwork

154: Don't Mock your Database - Jeff Triplett

You need tests for your web app. And it has a database. What do you do with the database during testing? Should you use the real thing? or mock it? Jeff Triplett says don't mock it. In this episode, we talk with Jeff about testing web applications, specifically Django apps, and of course talk about the downsides of database mocking. Full Transcript (https://pythontest.com/testandcode/154-dont-mock-database) Special Guest: Jeff Triplett.
5/21/202131 minutes, 39 seconds
Episode Artwork

153: Playwright for Python: end to end testing of web apps - Ryan Howard

Playwright is an end to end automated testing framework for web apps with Python support and even a pytest plugin. Full Transcript (https://pythontest.com/testandcode/153-playwright-python-testing-web-apps) Special Guest: Ryan Howard.
5/14/202131 minutes, 29 seconds
Episode Artwork

152: Python Packaging - Brett Cannon

I always learn a lot when I talk to Brett, and this episode is no exception. We talk about the packaging workflow, tools, changes, pyproject.toml, flit, setuptools, and so much more. I hope you learn as much as I did in this great discussion. Full Transcript (https://pythontest.com/testandcode/152-python-packaging) Special Guest: Brett Cannon.
5/7/202149 minutes, 41 seconds
Episode Artwork

151: Python Adventure - Brandon Rhodes

Adventure, or Colossal Cave Adventure, was written between 1975 and 1977 in Fortran. Brandon Rhodes ported it to Python 3, with an initial release in 2011, and still maintains it. We talk to Brandon about this wonderful game. ``` YOU ARE STANDING AT THE END OF A ROAD BEFORE A SMALL BRICK BUILDING. AROUND YOU IS A FOREST. A SMALL STREAM FLOWS OUT OF THE BUILDING AND DOWN A GULLY. east A bit later... IT IS NOW PITCH DARK. IF YOU PROCEED YOU WILL LIKELY FALL INTO A PIT. light(lamp) YOUR LAMP IS NOW ON. YOU ARE IN A DEBRIS ROOM FILLED WITH STUFF WASHED IN FROM THE SURFACE. A LOW WIDE PASSAGE WITH COBBLES BECOMES PLUGGED WITH MUD AND DEBRIS HERE, BUT AN AWKWARD CANYON LEADS UPWARD AND WEST. A NOTE ON THE WALL SAYS ... ``` What's happening is that I'm playing adventure, which you can pip install thanks to Brandon Rhodes. Adventure is a faithful port to Python 3 from the original 1977 FORTRAN code by Crowther and Woods that lets you explore Colossal Cave, where others have found fortunes in treasure and gold, ... In this episode, we talk with Brandon Rhodes about this marvelous game. Full Transcript (https://pythontest.com/testandcode/151-python-adventure) Special Guest: Brandon Rhodes.
4/28/202156 minutes, 32 seconds
Episode Artwork

150: A Practical Testing Strategy

Coming up with a testing strategy doesn't have to be stressful. Prioritizing features to test, and generating test cases for each feature can be fairly quick and painless. This episode covers a strategy for both that can be applied to many types of software. Full Transcript (https://pythontest.com/testandcode/150-practical-testing-strategy)
4/15/202110 minutes, 34 seconds
Episode Artwork

149: I don't test my code, "crappy Python" is all I write - Corey Quinn

Corey Quinn is the Chief Cloud Economist at The Duckbill Group. He's also a podcaster and writes a newsletter. And he also automates things with Python. But he doesn't write tests. Let's find out why. Reason for the interview. Rough summary of a twitter conversation: Corey: What podcasts should I try to get an invite onto? ToonArmyCaptain: Python Bytes, Test & Code, Talk Python Corey: But... I don't test my code, "crappy Python" is all I write, and I'd feel like a giant imposter. So yeah, I'd be game. link (https://twitter.com/QuinnyPig/status/1354093298890141697?s=20) So here we are. This diagram is referenced in the show, the Last Week In AWS Newsletter Production Pipeline (https://files.fireside.fm/file/fireside-uploads/images/b/bc7f1faf-8aad-4135-bb12-83a8af679756/FXJsYzOQ.jpg). Full Transcript (https://pythontest.com/testandcode/149-crappy-python) Special Guest: Corey Quinn.
3/31/202149 minutes, 35 seconds
Episode Artwork

148: Coverage.py and testing packages

How do you test installed packages using coverage.py? Also, a couple followups from last week's episode on using coverage for single file applications. Full Transcript (https://pythontest.com/testandcode/148-coverage-testing-packages)
3/12/202114 minutes, 7 seconds
Episode Artwork

147: Testing Single File Python Applications/Scripts with pytest and coverage

Have you ever written a single file Python application or script? Have you written tests for it? Do you check code coverage? This is the topic of this week's episode, spurred on by a listener question. The questions: * For single file scripts, I'd like to have the test code included right there in the file. Can I do that with pytest? * If I can, can I use code coverage on it? The example code discussed in the episode, script.py:
```
def foo():
    return 5


def main():
    x = foo()
    print(x)


def test_foo():
    assert foo() == 5


def test_main(capsys):
    main()
    captured = capsys.readouterr()
    assert captured.out == "5\n"


if __name__ == '__main__':  # pragma: no cover
    main()
```
To test: pip install pytest, then pytest script.py. To test with coverage: put this file (script.py) in a directory by itself, say foo, then from the parent directory of foo: pip install pytest-cov, then pytest --cov=foo foo/script.py. To show missing lines: pytest --cov=foo --cov-report=term-missing foo/script.py. Suggestion by @cfbolz (https://twitter.com/cfbolz/status/1368196960302358528?s=20) if you need to import pytest:
```
if __name__ == '__main__':  # pragma: no cover
    main()
else:
    import pytest
```
Full Transcript (https://pythontest.com/testandcode/147-testing-single-file-python-applications-scripts-pytest-coverage)
3/6/202111 minutes, 24 seconds
Episode Artwork

146: Automation Tools for Web App and API Development and Maintenance - Michael Kennedy

Building any software, including web apps and APIs, requires testing. There's automated testing, and there's manual testing. In between is exploratory testing aided by automation tools. Michael Kennedy joins the show this week to share some of the tools he uses during development and maintenance. We talk about tools used for semi-automated exploratory testing. We also talk about some of the other tools and techniques he uses to keep Talk Python Training, Talk Python, and Python Bytes all up and running smoothly. We talk about: Postman ngrok sitemap link testing scripts for manual processes using failover servers during maintenance, redeployments, etc GitHub webhooks and scripts to switch between failover servers and production during deployments automatically floating IP addresses services to monitor your site: StatusCake, BetterUptime the effect of monitoring on analytics crash reporting: Rollbar, Sentry response times load testing: Locust Full Transcript (https://pythontest.com/testandcode/146-automation-tools-web-app-api-development-maintenance) Special Guest: Michael Kennedy.
2/28/202148 minutes, 33 seconds
Episode Artwork

145: For Those About to Mock - Michael Foord

A discussion about mocking in Python with the original contributor of unittest.mock, Michael Foord. Of course we discuss mocking and unittest.mock. We also discuss: * testing philosophy * unit testing and what a unit is * TDD * where Michael's towel is, and what color it is Michael was instrumental in the building of testing tools for Python, and continues to be a pragmatic source of honest testing philosophy in a field that has a lot of contradictory information. Full Transcript (https://pythontest.com/testandcode/145-for-those-about-to-mock) Special Guest: Michael Foord.
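As a minimal sketch of the kind of mocking discussed here, using the stdlib's unittest.mock (the client and endpoint names are hypothetical):

```python
from unittest import mock

# Hypothetical function that depends on an external service client.
def get_status(client):
    response = client.get("/status")
    return response["ok"]

def test_get_status_with_mock():
    # Stand in a Mock for the real client, so no network is needed.
    fake_client = mock.Mock()
    fake_client.get.return_value = {"ok": True}
    assert get_status(fake_client) is True
    # The mock also records how it was called.
    fake_client.get.assert_called_once_with("/status")

test_get_status_with_mock()
```

The test exercises our logic while the mock both substitutes for the dependency and verifies the interaction with it.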
2/18/202148 minutes, 44 seconds
Episode Artwork

144: TDD in Science - Martin Héroux

Test Driven Development, TDD, is not easy to incorporate in your daily development. Martin and Brian discuss TDD and testing and Martin's experience with testing, TDD, and using it for code involved with scientific research. We discuss lots of topics around this, including: What is TDD? Should research software be tested in order to be trusted? Time pressure and the struggle to get code done quickly. How do you make time for tests also? Is testing worth it for code that will not be reused? Sometimes it's hard to know how to test something. Maybe people should learn to test alongside learning how to code. A desire for a resource of testing concepts for non-CS people. Are the testing needs and testing information needs different in different disciplines? Biology, Physics, Astrophysics, etc. Do they have different testing needs? Do we need a "how to test" resource for each? Full Transcript (https://pythontest.com/testandcode/144-tdd-science) Special Guest: Martin Héroux.
2/13/202153 minutes, 50 seconds
Episode Artwork

143: pytest markers - Anthony Sottile

Completely nerding out about pytest markers with Anthony Sottile. Some of what we talk about: Running a subset of tests with markers. Using marker expressions with and, or, not, and parentheses. Keyword expressions also can use and, or, not, and parentheses. Markers and pytest functionality that use mark, such as parametrize, skipif, etc. Accessing markers with itermarkers and get_closest_marker through item. Passing values, metadata through markers to fixtures or hook functions. Full Transcript (https://pythontest.com/testandcode/143-pytest-markers) Special Guest: Anthony Sottile.
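A small sketch of markers in action, assuming pytest is installed (the `slow` marker is hypothetical and would need registering in config to pass --strict-markers):

```python
import pytest

# A custom marker: select with `pytest -m slow`,
# exclude with `pytest -m "not slow"`.
@pytest.mark.slow
def test_big_computation():
    assert sum(range(1000)) == 499500

# parametrize is itself implemented as a marker.
@pytest.mark.parametrize("value, expected", [(2, 4), (3, 9)])
def test_square(value, expected):
    assert value * value == expected
```

Marker expressions passed to -m support and, or, not, and parentheses, e.g. `pytest -m "slow and not network"`.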
2/7/202140 minutes
Episode Artwork

142: MongoDB - Mark Smith

MongoDB is possibly the most recognizable NoSQL document database. Mark Smith, a developer advocate for MongoDB, answers my many questions about MongoDB. We cover some basics, but also discuss some advanced features that I never knew about before this conversation. Full Transcript (https://pythontest.com/testandcode/142-mongodb) Special Guest: Mark Smith.
1/25/202135 minutes, 6 seconds
Episode Artwork

141: Visual Testing - Angie Jones

Visual Testing has come a long way from the early days of x,y mouse clicks and pixel comparisons. Angie Jones joins the show to discuss how modern visual testing tools work and how to incorporate visual testing into a complete testing strategy. Some of the discussion: Classes of visual testing: problems with pixel to pixel testing DOM comparisons, css, html, etc. AI driven picture level testing, where failures look into the DOM to help describe the problem. Where visual testing fits into a test strategy. Combining "does this look right" visual testing with other test workflows. "A picture is worth a thousand assertions" - functional assertions built into visual testing. Baselining pictures in the test workflow. Also discussed: - automation engineer - Test Automation University Special Guest: Angie Jones.
12/30/202030 minutes, 59 seconds
Episode Artwork

140: Testing in Scientific Research and Academia - Martin Héroux

Scientists learn programming as they need it. Some of them learn it in college, but even if they do, that's not their focus. It's not surprising that sharing the software used for scientific research and papers is spotty, at best. And what about testing? We'd hope that the software behind scientific research is tested. But why would we expect that? We're lucky if CS students get a class or two that even mentions automated tests. Why would we expect other scientists to just know how to test their code? Martin works in research and this discussion is about software and testing in scientific research and academia. Special Guest: Martin Héroux.
12/18/202048 minutes, 1 second
Episode Artwork

139: Test Automation: Shifting Testing Throughout the Software Lifecycle - Nalin Parbhu

Talking with Nalin Parbhu about the software evolution towards more test automation and the creation of Infuse and useMango. We talk about software development and "shift left", where automated tests and quality checks have moved earlier into the software lifecycle. Software approaches and where quality fits in Shift left Test automation Roles of software developers, SDETs (software development engineer in test), testers, QA, etc. Developers doing testing and devops Automated testing vs manual testing Regression testing, UI testing, black box testing Unit testing, white box, API, end to end testing User acceptance testing (UAT) Mullet Methodology (Agile at the front, Waterfall at the back) Waterwheel Methodology (Requirements -> iterative development -> QA) What's an agile team? Developer resistance to testing Manifesto for agile software development Iterative development Adapting to change Agility: being able to change course quickly Special Guests: Nalin Parbhu and Ola Omiyale.
12/4/202036 minutes, 53 seconds
Episode Artwork

138: Mutation Testing in Python with mutmut - Anders Hovmöller

Your test suite tells you about the quality of your code under test. Mutation testing is a way to tell you about the quality of your test suite. Anders Hovmöller wrote mutmut (https://mutmut.readthedocs.io/) for mutation testing in Python, and can be used with pytest, unittest, and others. In this episode, Anders explains mutation testing, how mutation testing with mutmut works, and good workflows. Special Guest: Anders Hovmöller.
11/19/202029 minutes, 58 seconds
Episode Artwork

137: Become an Author - Matt Harrison interviews Brian Okken

Matt Harrison, author of many Python books, is putting together a course, Effective Book Authoring, to help other people write and publish books. As part of this course, he's including interviews with people who have already written books, including me. This is that interview. We discuss: * Why I wrote "Python Testing with pytest" * Self publishing vs working with a publisher * The writing, editing, and publishing process * Writing format * Book promotion * Advice to other writers Special Guest: Matt Harrison.
11/5/202040 minutes, 40 seconds
Episode Artwork

136: Wearable Technology - Sophy Wong

Wearable technology is not just smart consumer devices like watches and activity trackers. Wearable tech also includes one-off projects by designers, makers, and hackers, and there are more and more people producing tutorials on how to get started. Wearable tech is also a great way to get both kids and adults excited about coding, electronics, and, in general, engineering skills. Sophy Wong is a designer who makes really cool stuff using code, technology, costuming, soldering, and even jewelry techniques to get tech onto the human body. Sophy joins the show to answer my many questions about getting started safely with wearable tech. Some of the questions and topics: Can I wash my clothing if I've added tech to it? Is there any danger in wearing technology or building wearable tech? Are there actual wires, cables, or conductive thread in the fabric and textiles of some wearable tech projects? What's a good starter project? Especially if I want to do a wearable tech project with my kids? Dealing with stretch with clothing and non-bendy electronics. Some questions around the Sophy Wong and HackSpace "Wearable Tech Projects" book. How did you get into wearable tech? Do you have a favorite project? Can I get into wearable tech if I don't know how to code or solder? Are these projects accessible to people with limited budgets? Making projects so you can reuse the expensive bits on multiple projects. Special Guest: Sophy Wong.
10/26/202031 minutes, 44 seconds
Episode Artwork

135: Speeding up Django Test Suites - Adam Johnson

All test suites start fast. But as you grow your set of tests, each test adds a little bit of time to the suite. What can you do about it to keep test suites fast? Some things, like parallelization, are applicable to many domains. What about, for instance, Django applications? Well, Adam Johnson has thought about it a lot, and is here to tell us how we can speed up our Django test suites. Topics include: parallelizing tests moving from disk to memory using fake data and factory functions targeted mocking Special Guest: Adam Johnson.
10/20/202023 minutes, 34 seconds
Episode Artwork

134: Business Outcomes and Software Development - Benjamin Harding

Within software projects, there are lots of metrics we could measure. But which ones really matter? Instead of a list, Benjamin Harding shares with us a way of thinking about business outcomes that can help us with every day decision making. We talk about: * Business outcomes vs vanity metrics * As a developer, how do you keep business outcomes in mind * Thinking about customer value all the time * Communicating decisions and options in terms of costs and impact on business outcomes * Company culture and its role in reinforcing a business outcome mindset * And even the role of team lead as impact multiplier I really enjoyed this conversation. But I admit that at first, I didn't realize how important this is to all software development. Metrics are front and center in a web app. But what about a service, or an embedded system with no telemetry? It still matters, maybe even more so. Developers face little and big decisions every day that have an impact on costs and benefits with respect to customer value and business outcomes, even if the impact is difficult to measure. Special Guest: Benjamin Harding.
10/12/202031 minutes, 49 seconds
Episode Artwork

133: Major League Hacking - Jon Gottfried

Hackathons have been spreading around the world; many at university campuses. Major League Hacking, MLH, has been encouraging and helping hackathons. Hacking can be thought of as tinkering: taking things apart and putting them back together as an interesting experience. There's always been some of this as part of software culture. The people at Major League Hacking have taken this to a whole new level, bringing together tech creators who enjoy playing around with and creating new technology, on campuses, and now in virtual spaces, all over the world. Jonathon Gottfried, one of the cofounders of Major League Hacking, joins the show to talk about: hacker meetups and events hackathons what it's like to go to a hackathon how to help out with hackathons as an experienced engineer, even virtually as a mentor hackathons continuing virtually during the pandemic internships and fellowships on open source projects to help students gain experience, even during the pandemic the MLH approach to internships, giving interns a support group, including peers, mentors, project maintainers, and MLH itself Special Guest: Jon Gottfried.
10/5/202028 minutes, 44 seconds
Episode Artwork

132: mocking in Python - Anna-Lena Popkes

Using mock objects during testing in Python. Anna-Lena joins the podcast to teach us about mocks and using unittest.mock objects during testing. We discuss: - the different styles of using mocks - pros and cons of mocks - dependency injection - adapter pattern - mock hell - magical universe - and much more Special Guest: Anna-Lena Popkes.
9/28/202040 minutes, 49 seconds
Episode Artwork

131: Test Smarter, Not Harder

Some people avoid writing tests. Some drudge through it painfully. There is a better way. In this episode, I'm going to share some advice from Luke Plant on how to "Test Smarter, Not Harder" (https://lukeplant.me.uk/blog/posts/test-smarter-not-harder/).
9/21/20208 minutes, 53 seconds
Episode Artwork

130: virtualenv activation prompt consistency across shells - an open source dev and test adventure - Brian Skinn

virtualenv supports six shells: bash, csh, fish, xonsh, cmd, posh. Each handles prompts slightly differently. Although the virtualenv custom prompt behavior should be the same across shells, Brian Skinn noticed inconsistencies. He set out to fix those inconsistencies. That was the start of an adventure in open source collaboration, shell prompt internals, difficult test problems, and continuous integration quirks. Brian Skinn initially noticed that on Windows cmd, a space was added between a prefix defined by --prompt and the rest of the prompt, whereas on bash no space was added. For reference, there were/are three nominal virtualenv prompt modification behaviors, all of which apply to the prompt changes made at the time of virtualenv activation: If the environment variable VIRTUAL_ENV_DISABLE_PROMPT is defined and non-empty at activation time, do not modify the prompt at all. Otherwise: If the --prompt argument was supplied at creation time, use that argument as the prefix to apply to the prompt; or, If the --prompt argument was not supplied at creation time, use the default prefix of "({{ envname }}) " (the environment folder name surrounded by parentheses, with a trailing space after the last paren). Special Guest: Brian Skinn.
9/13/202036 minutes, 18 seconds
Episode Artwork

129: How to Test Anything - David Lord

I asked people on twitter to fill in "How do I test _____?" to find out what people want to know how to test. Lots of responses. David Lord agreed to answer them with me. In the process, we come up with lots of great general advice on how to test just about anything. Specific questions people asked: What makes a good test? How do you test web app performance? How do you test cookie cutter templates? How do I test my test framework? How do I test permission management? How do I test SQLAlchemy models and pydantic schemas in a FastAPI app? How do I test warehouse ETL code? How do I test and mock GPIO pins on hardware for code running MicroPython on a device? How do I test PyQt apps? How do I test web scrapers? Is it the best practice to put static html in your test directory or just snippets stored in string variables? What's the best way to test server/client API contracts? How do I test a monitoring tool? We also talk about: What is the Flask testing philosophy? What do Flask tests look like? Flask and Pallets using pytest Code coverage Some of the resulting testing strategies: Set up some preconditions. Run the function. Get the result. Don't test external services. Do test external service failures. Don't test the frameworks you are using. Do test your use of a framework. Use open source projects to learn how something similar to your project tests things. Focus on your code. Focus on testing your new code. Try to architect your application such that actual GUI testing is minimal. Split up a large problem into smaller parts that are easier to test. Nail down as many parts as you can. Special Guest: David Lord.
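The "set up preconditions, run the function, get the result" strategy can be sketched like this (a hypothetical shopping cart example, not from the episode):

```python
# Hypothetical function under test: adds an item and returns the new total.
def add_item(cart, item, price):
    cart[item] = price
    return sum(cart.values())

def test_add_item_returns_new_total():
    # Set up some preconditions
    cart = {"book": 10}
    # Run the function
    total = add_item(cart, "pen", 2)
    # Get (and check) the result
    assert total == 12

test_add_item_returns_new_total()
```

This arrange/act/assert shape scales from tiny functions up to whole subsystems.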
9/7/202042 minutes, 8 seconds
Episode Artwork

128: pytest-randomly - Adam Johnson

Software tests should be order independent. That means you should be able to run them in any order or run them in isolation and get the same result. However, system state often gets in the way and order dependence can creep into a test suite. One way to fight against order dependence is to randomize test order, and with pytest, we recommend the plugin pytest-randomly to do that for you. The developer that started pytest-randomly and continues to support it is Adam Johnson, who joins us today to discuss pytest-randomly and another plugin he also wrote, called pytest-reverse. Special Guest: Adam Johnson.
8/28/202018 minutes, 12 seconds
Episode Artwork

127: WFH, WTF? - Tips and Tricks for Working From Home - Reuven Lerner & Julian Sequeira

Many people who are not used to working from home are now doing so, or at least working from home more than they ever did before. That's definitely true for me. Even though I've been working from home since March, I wanted some tips from people who have been doing it longer. Julian Sequeira, of PyBites fame, has been working from home for about a year. Reuven Lerner, an amazing Python trainer, has been working from home for much longer. We originally had a big list of WFH topics. But we had so much fun with the tips and tricks part that that's pretty much the whole episode. There are lots of great tips and tricks, so I'm glad we focused on that. Special Guests: Julian Sequeira and Reuven Lerner.
8/24/202041 minutes, 43 seconds
Episode Artwork

126: Data Science and Software Engineering Practices ( and Fizz Buzz ) - Joel Grus

Researchers and others using data science and software need to follow solid software engineering practices. This is a message that Joel Grus has been promoting for some time. Joel joins the show this week to talk about data science, software engineering, and even Fizz Buzz. Topics include: Software engineering practices and data science Difficulties with Jupyter notebooks Code reviews on experiment code Unit tests on experiment code Finding bugs before doing experiments Tests for data pipelines Tests for deep learning models Showing researchers the value of tests by showing the bugs found that wouldn't have been found without them. "Data Science from Scratch" book Showing testing during teaching Data Science "Ten Essays on Fizz Buzz" book Meditations on Python, mathematics, science, engineering, and design Testing Fizz Buzz Different algorithms and solutions to an age old interview question. If not Fizz Buzz, what makes a decent coding interview question? pytest hypothesis Math requirements for data science Special Guest: Joel Grus.
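For instance, one classic Fizz Buzz implementation with a table-driven test might look like this (an illustrative sketch, not a solution from Joel's book):

```python
def fizz_buzz(n):
    # Multiples of both 3 and 5 first, then each alone.
    if n % 15 == 0:
        return "fizzbuzz"
    if n % 3 == 0:
        return "fizz"
    if n % 5 == 0:
        return "buzz"
    return str(n)

def test_fizz_buzz():
    # A table of inputs and expected outputs keeps the test readable.
    cases = {1: "1", 3: "fizz", 5: "buzz", 15: "fizzbuzz", 7: "7"}
    for n, expected in cases.items():
        assert fizz_buzz(n) == expected

test_fizz_buzz()
```

A property-based tool like hypothesis, also mentioned in the episode, could generate the inputs instead of the fixed table.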
8/17/202032 minutes, 17 seconds
Episode Artwork

125: pytest 6 - Anthony Sottile

pytest 6 is out. Specifically, 6.0.1, as of July 31. And there's lots to be excited about. Anthony Sottile joins the show to discuss features, improvements, documentation updates and more. Full release notes / changelog (https://docs.pytest.org/en/stable/changelog.html) Some of what we talk about: How to update (at least, how I do it) Run your test suites with 5.4.3 or whatever the last version you were using Update to 6 Run again. Same output? Probably good. If there are any warnings, maybe fix those. You can also run with pytest -W error to turn warnings into errors. Then find out all the cool stuff you can do now New Features pytest now supports pyproject.toml files for configuration. but remember, toml syntax is different than ini files. mostly quotes are needed pytest now includes inline type annotations and exposes them to user programs. Most of the user-facing API is covered, as well as internal code. New command-line flags --no-header and --no-summary A warning is now shown when an unknown key is read from a config INI file. The --strict-config flag has been added to treat these warnings as errors. New required_plugins configuration option allows the user to specify a list of plugins, including version information, that are required for pytest to run. An error is raised if any required plugins are not found when running pytest. Improvements You can now pass output to things like less and head that close the pipe passed to them. thank you!!! Improved precision of test durations measurement. use --durations=10 -vv to capture and show durations Rich comparison for dataclasses and attrs-classes is now recursive. pytest --version now displays just the pytest version, while pytest --version --version displays more verbose information including plugins. --junitxml now includes the exception cause in the message XML attribute for failures during setup and teardown. 
Improved Documentation Add a note about --strict and --strict-markers and the preference for the latter one. Explain indirect parametrization and markers for fixtures. Bug Fixes Deprecations Trivial/Internal Changes Breaking Changes you might need to care about before upgrading PytestDeprecationWarning are now errors by default. Check the deprecations and removals (https://docs.pytest.org/en/latest/deprecations.html) page if you are curious. -k and -m internals were rewritten to stop using eval(), this results in a few slight changes but overall makes them much more consistent testdir.run().parseoutcomes() now always returns the parsed nouns in plural form. I'd say that's an improvement Special Guest: Anthony Sottile.
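Pulling a few of the items above together, a pyproject.toml fragment for pytest configuration might look like this (the option names come from the release notes; the values are illustrative):

```toml
# pyproject.toml -- pytest config lives under [tool.pytest.ini_options].
# Remember: TOML syntax differs from ini files; strings need quotes.
[tool.pytest.ini_options]
addopts = "--strict-markers --strict-config"
testpaths = ["tests"]
required_plugins = ["pytest-cov"]          # example plugin requirement
markers = ["slow: marks tests as slow"]
```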
8/7/20201 hour, 4 seconds
Episode Artwork

124: pip dependency resolver changes

pip is the package installer for Python. Often, when you run pip, especially the first time in a new virtual environment, you will see something like: WARNING: You are using pip version 20.1.1; however, version 20.2 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command. And you should. Because 20.2 has a new dependency resolver. Get in the habit, until October, of replacing pip install with pip install --use-feature=2020-resolver. This flag is new in the 20.2 release. This new pip dependency resolver is the result of a lot of work. Five of the people involved with this work are joining the show today: Bernard Tyers, Nicole Harris, Paul Moore, Pradyun Gedam, and Tzu-ping Chung. We talk about: * pip dependency resolver changes * user experience research and testing * crafting good error messages * efforts to improve the test suite * testing pip with pytest * some of the difficulties with testing pip * working with a team on a large project * working with a large code base * bringing new developers into a large project Special Guests: Bernard Tyers, Nicole Harris, Paul Moore, Pradyun Gedam, and Tzu-ping Chung.
8/3/2020 - 44 minutes, 15 seconds

123: GitHub Actions - Tania Allard

Lots of Python projects are starting to use GitHub Actions for Continuous Integration & Deployment (CI/CD), as well as other workflows. Tania Allard, a Senior Cloud Developer Advocate at Microsoft, joins the show to answer some of my questions regarding setting up a Python project to use Actions. Some of the topics covered:
* How do you get started with GitHub Actions for a Python project?
* What are workflow files?
* Does it matter what the file is named?
* Can I have / should I have more than one workflow?
Special Guest: Tania Allard.
7/24/2020 - 22 minutes, 53 seconds

122: Better Resumes for Software Engineers - Randall Kanna

A great resume is key to landing a great software job. There's no surprise there. But so many people make mistakes on their resume that can very easily be fixed. Randall Kanna is on the show today to help us understand how to improve our resumes, and in turn, help us have better careers. Special Guest: Randall Kanna.
7/16/2020 - 36 minutes, 12 seconds

121: Industrial 3D Printing & Python, Finite State Machines, and Simulating Hardware - Len Wanger

Len Wanger works on industrial 3D printers. And I was pleased to find out that there's a bunch of Python in those printers as well. In this episode we talk about:
* 3D printers: What are the different types of 3D printers? Where are 3D printed industrial parts being used? Why use one type of additive manufacturing over another?
* Python in 3D printing hardware
* What finite state machines (FSMs) are
* Benefits of FSMs for testing, logging, and breaking a complex behavior into small testable parts
* Benefits of simulation in writing and testing software to control hardware
Special Guest: Len Wanger.
7/10/2020 - 49 minutes, 21 seconds

120: FastAPI & Typer - Sebastián Ramírez

FastAPI is a modern, fast (high-performance) web framework for building APIs with Python, based on standard Python type hints. Typer is a library for building CLI applications, also based on Python type hints. Type hints and many other details are intended to make it easier to develop, test, and debug applications using FastAPI and Typer. The person behind FastAPI and Typer is Sebastián Ramírez. Sebastián is on the show today, and we discuss:
* FastAPI
* REST APIs
* Swagger UI
* Future features of FastAPI
* Starlette
* Typer
* Click
* Testing with Typer and Click
* Typer autocompletion
* Typer CLI
Special Guest: Sebastián Ramírez.
7/3/2020 - 43 minutes, 54 seconds

119: Editable Python Installs, Packaging Standardization, and pyproject.toml - Brett Cannon

There's stuff going on in Python packaging and pyproject.toml. Brett and I talk about some upcoming work on Python packaging, such as:
* editable installs
* the need for standardization
* configuration of other tools in pyproject.toml
And then we get off on tangents and talk about:
* why it's good to have packages like pip, toml, setuptools, wheel, etc. not be part of the standard library
* whether we should remove some stuff from the standard library
* the standard library using unittest for testing the standard library, and why not Hypothesis (I didn't bring up "why not pytest?" but you know I was thinking it.)
* why CPython and not C++Python
* and more
Special Guest: Brett Cannon.
6/26/2020 - 36 minutes, 6 seconds

118: Code Coverage and 100% Coverage

Code coverage, or test coverage, is a way to measure which lines and branches of your code are exercised during testing. Coverage tools are an important part of software engineering. But there are also lots of different opinions about using them.
- Should you try for 100% coverage?
- What code can and should you exclude?
- What about targets?
I've been asked many times what I think about code coverage or test coverage. This episode is a train-of-thought brain dump on what I think about code coverage. We'll talk about:
- how I use code coverage to help me write source code
- line coverage and branch coverage
- behavior coverage
- using tests to ask and answer questions about the system under test
- how to target coverage just to the code you care about
- excluding code, and good reasons and bad reasons to exclude code
And also the Pareto principle, or 80/20 rule, and the law of diminishing returns, and how they apply (or don't) to test coverage.
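Targeting and excluding code usually comes down to coverage.py configuration. Here is a minimal sketch of a .coveragerc; the package name and exclude patterns are illustrative examples, not from the episode.

```ini
# .coveragerc: a minimal sketch. "my_package" is a hypothetical name.
[run]
branch = True          ; measure branch coverage, not just line coverage
source = my_package    ; target coverage to just the code you care about

[report]
; lines matching these regexes are excluded from the report
exclude_lines =
    pragma: no cover
    if __name__ == .__main__.:
```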
6/26/2020 - 42 minutes, 48 seconds

117: Python extension for VS Code - Brett Cannon

The Python extension for VS Code is the most downloaded extension for VS Code. Brett Cannon is the manager for the distributed development team of the Python extension for VS Code. In this episode, Brett and I discuss the Python extension and VS Code, including:
* pytest support
* virtual environment support
* how settings work, including user and workspace settings
* multi-root projects
* testing Python in VS Code
* debugging and pydevd
* the jump-to-cursor feature
* upcoming features
Special Guest: Brett Cannon.
6/18/2020 - 51 minutes, 17 seconds

116: 15 amazing pytest plugins - Michael Kennedy

pytest plugins are an amazing way to supercharge your test suites, leveraging great solutions from people solving test problems all over the world. In this episode Michael and I discuss 15 favorite plugins that you should know about. We also discuss fixtures and plugins and other testing tools that work great with pytest:
* tox
* GitHub Actions
* Coverage.py
* Selenium + splinter with pytest-splinter
* Hypothesis
And then our list of pytest plugins:
1. pytest-sugar
2. pytest-cov
3. pytest-stress
4. pytest-repeat
5. pytest-instafail
6. pytest-metadata
7. pytest-randomly
8. pytest-xdist
9. pytest-flake8
10. pytest-timeout
11. pytest-spec
12. pytest-picked
13. pytest-freezegun
14. pytest-check
15. fluentcheck
That last one isn't a plugin, but we also talked about pytest-splinter at the beginning. So I think it still counts as 15. Special Guest: Michael Kennedy.
6/8/2020 - 51 minutes, 27 seconds

115: Catching up with Nina Zakharenko

One of the great things about attending in-person coding conferences, such as PyCon, is the hallway track, where you can catch up with people you haven't seen for possibly a year, or maybe meet in person for the first time. Nina is starting something like the hallway track, online, on Twitch, and it's already going, so check out the first episode of Python Tea (https://www.twitch.tv/videos/635831555). An interesting coincidence is that this episode is kind of like a hallway track discussion between Nina and Brian. We've had Nina on the show a couple times before, but it's been a while. In 2018, we talked about mentoring on episode 44 (https://testandcode.com/44). In 2019, we talked about giving memorable tech talks in episode 71 (https://testandcode.com/71). In this episode, we catch up with Nina, find out what she's doing, and talk about a bunch of stuff, including:
* Live coding
* Online conferences
* The Microsoft Python team
* Python Tea, an online hallway track
* Q&A with the Python for VS Code team
* Python on hardware
* Adafruit Device Simulator Express
* CircuitPython
* Tricking out your command prompt
* Zsh and Oh My Zsh
* Emacs vs vi key bindings for shells
* Working from home
Special Guest: Nina Zakharenko.
5/30/2020 - 42 minutes, 21 seconds

114: The Python Software Foundation (PSF) Board Elections - Ewa Jodlowska / Christopher Neugebauer

"The mission of the Python Software Foundation is to promote, protect, and advance the Python programming language, and to support and facilitate the growth of a diverse and international community of Python programmers." That's a lot of responsibility, and to that end, the PSF Board Directors help out quite a bit. If you want to be a part of the board, you can. There's an election coming up right around the corner and you gotta get your nomination in by May 31. You can also join the PSF if you want to vote for who gets to be part of the board. But what does it really mean to be on the Board, and what are some of the things the PSF does? To help answer those questions, I've got Ewa Jodlowska, the PSF Executive Director, and Christopher Neugebauer, a current board member, on the show today. I've also got some great links in the show notes if we don't answer your questions and you want to find out more. Special Guests: Christopher Neugebauer and Ewa Jodlowska.
5/24/2020 - 30 minutes, 45 seconds

113: Technical Debt - James Smith

Technical debt has to be dealt with on a regular basis to have a healthy product and development team. The impacts of technical debt include emotional drain on engineers and slower development, and it can adversely affect your hiring ability and retention. But really, what is technical debt? Can we measure it? How do we reduce it, and when? James Smith, the CEO of Bugsnag, joins the show to talk about technical debt and all of these questions. Special Guest: James Smith.
5/15/2020 - 30 minutes, 2 seconds

112: Six Principles of Readable Tests - David Seddon

"Code is read much more often than it is written." - Guido van Rossum. This is true for both production code and test code. When you are trying to understand why a test is failing, you'll be very grateful to the test author if they've taken the care to make it readable. David Seddon came up with six principles to help us write more readable tests. We discuss these, as well as more benefits of readable tests. David's six principles of readable tests:
1. Profit from the work of others
2. Put naming to work
3. Show only what matters
4. Don’t repeat yourself
5. Arrange, act, assert
6. Aim high
Special Guest: David Seddon.
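The arrange-act-assert principle is easy to show in code. A minimal sketch, where the add_item function and the shopping-cart scenario are invented for illustration:

```python
# A tiny test laid out in arrange-act-assert style.
# add_item and the cart example are hypothetical, for illustration only.

def add_item(cart, item):
    """Return a new cart list with item appended."""
    return cart + [item]

def test_add_item_appends():
    # Arrange: set up the objects the test needs
    cart = ["apple"]
    # Act: perform the one behavior under test
    result = add_item(cart, "banana")
    # Assert: check the outcome
    assert result == ["apple", "banana"]
    assert cart == ["apple"]  # the original cart is unchanged

test_add_item_appends()
```

The blank-line separation between the three phases is what makes the test skimmable: a reader can find the "act" step without reading every line.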
5/8/2020 - 45 minutes, 2 seconds

111: Subtests in Python with unittest and pytest - Paul Ganssle

In both unittest and pytest, when a test function hits a failing assert, the test stops and is marked as a failed test. What if you want to keep going, and check more things? There are a few ways. One of them is subtests. Python's unittest introduced subtests in Python 3.4. pytest introduced support for subtests with changes in pytest 4.4 and a plugin, called pytest-subtests. Subtests are still not really used that much. But really, what are they? When could you use them? And more importantly, what should you watch out for if you decide to use them? That's what Paul Ganssle and I will be talking about today. Special Guest: Paul Ganssle.
5/2/2020 - 48 minutes, 34 seconds

110: Testing Django - from unittest to pytest - Adam Parkin

Django supports testing out of the box with some cool extensions to unittest. However, many people are using pytest for their Django testing, mostly using the pytest-django plugin. Adam Parkin, who is known online as CodependentCodr (https://twitter.com/codependentcodr), joins us to talk about migrating an existing Django project from unittest to pytest. Adam tells us just how easy this is. Special Guest: Adam Parkin.
4/25/2020 - 24 minutes, 56 seconds

109: Testing in Financial Services - Eric Bergemann

Financial services have their own unique testing and development challenges. But they also have lots of the same challenges as many other software projects. Eric Bergemann joins Brian Okken to discuss:
* Specific testing challenges in the financial services domain
* CI/CD: continuous integration, continuous deployment
* TDD: test-driven development
* Confidence from testable applications
* Testing strategies to add coverage to legacy systems
* Testing the data and test cases themselves
* DevOps
* Continuous testing
* Manual testing procedures
* BDD & Gherkin
* Hiring in vs training industry knowledge
Special Guest: Eric Bergemann.
4/14/2020 - 29 minutes, 34 seconds

108: PySpark - Jonathan Rioux

Apache Spark is a unified analytics engine for large-scale data processing. PySpark blends the powerful Spark big data processing engine with the Python programming language to provide a data analysis platform that can scale up for nearly any task. Jonathan Rioux, author of "PySpark in Action", joins the show and gives us a great introduction to Spark and PySpark, to help us get started and decide whether or not Spark and PySpark are right for you. Special Guest: Jonathan Rioux.
4/9/2020 - 32 minutes, 1 second

107: Property Based Testing in Python with Hypothesis - Alexander Hultnér

Hypothesis is the Python tool used for property-based testing. Hypothesis claims to combine "human understanding of your problem domain with machine intelligence to improve the quality of your testing process while spending less time writing tests." In this episode Alexander Hultnér introduces us to property-based testing in Python with Hypothesis. Some topics covered:
* What property-based testing is
* Thinking differently for property-based testing
* Using Hypothesis / property-based testing in conjunction with normal testing
* Failures saved and re-run
* What parts of development/testing are best suited for Hypothesis / property-based testing
* Comparing function implementations
* Testing against REST APIs that use OpenAPI / Swagger with schemathesis
* Changing the number of tests in different test environments
* System, integration, end-to-end, and unit tests
Special Guest: Alexander Hultnér.
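Hypothesis generates the example inputs for you; as a rough plain-Python sketch of the underlying idea (this is not Hypothesis's API, just the "property" concept applied to sorting):

```python
import random
from collections import Counter

def is_ordered(xs):
    """True if xs is in non-decreasing order."""
    return all(a <= b for a, b in zip(xs, xs[1:]))

# Property: for ANY list of integers, sorted() returns an ordered list
# containing exactly the same elements. Rather than hand-picking a few
# example inputs, we generate lots of random ones and assert the
# property holds for all of them.
random.seed(0)  # fixed seed only to keep this sketch deterministic
for _ in range(200):
    xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    out = sorted(xs)
    assert is_ordered(out)
    assert Counter(out) == Counter(xs)
```

Hypothesis does this far better: it shrinks failing inputs to minimal counterexamples and, as mentioned in the episode, saves failures so they are re-run first next time.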
3/27/2020 - 36 minutes, 18 seconds

106: Visual Testing : How IDEs can make software testing easier - Paul Everitt

IDEs can help people with automated testing. In this episode, Paul Everitt and Brian discuss ways IDEs can encourage testing and make it easier for everyone, including beginners. We discuss features that exist and are great, as well as what is missing. The conversation also includes topics around being welcoming to new contributors for both open source and professional projects. We talk about a lot of topics, and it's a lot of fun. But it's also important, because IDEs can make testing easier and more approachable. Some topics discussed:
* Making testing more accessible
* Test-first vs teaching testing last
* TDD workflow
* Autorun
* Rerunning last failures
* Different ways to run different levels of tests
* Command-line flags and how to access them in IDEs
* pytest.ini
* Zooming in and out of test levels
* Running parametrizations
* Running tests with coverage and profiling
* parametrize vs parameterize
* Parametrization identifiers
* pytest fixture support
* Global configurations / configuration templates
* Coverage and testing and being inviting to new contributors
* Confidence in changes and confidence in contributions
* Navigating code, tests, fixtures
* Grouping tests in modules, classes, directories
* BDD, behavior-driven development, cucumber, pytest-bdd
* Web development testing
* Parallel testing with xdist and IDE support
* Refactor rename
Special Guest: Paul Everitt.
3/20/2020 - 49 minutes, 58 seconds

105: TAP: Test Anything Protocol - Matt Layman

The Test Anything Protocol, or TAP, is a way to record test results in a language-agnostic way; it predates XML by about 10 years and is still alive and kicking. Matt Layman has contributed to Python in many ways, including his educational newsletter and his Django podcast, Django Riffs. Matt is also the maintainer of tap.py and pytest-tap, two tools that bring the Test Anything Protocol to Python. In this episode, Matt and I discuss TAP, its history, his involvement, and some cool use cases for it. Special Guest: Matt Layman.
3/11/2020 - 30 minutes, 13 seconds

104: Top 28 pytest plugins - Anthony Sottile

pytest is awesome by itself. pytest + plugins is even better. In this episode, Anthony Sottile and Brian Okken discuss the top 28 pytest plugins. Some of the plugins discussed (we also mention a few plugins related to some on this list):
- pytest-cov
- pytest-timeout
- pytest-xdist
- pytest-mock
- pytest-runner
- pytest-instafail
- pytest-django
- pytest-html
- pytest-metadata
- pytest-asyncio
- pytest-split-tests
- pytest-sugar
- pytest-rerunfailures
- pytest-env
- pytest-cache
- pytest-flask
- pytest-benchmark
- pytest-ordering
- pytest-watch
- pytest-pythonpath
- pytest-flake8
- pytest-pep8
- pytest-repeat
- pytest-pylint
- pytest-randomly
- pytest-selenium
- pytest-mypy
- pytest-freezegun
Honorable mentions:
- pytest-black
- pytest-emoji
- pytest-poo
Special Guest: Anthony Sottile.
3/4/2020 - 47 minutes, 13 seconds

103: Django - Lacey Williams Henschel

Django is without a doubt one of the most used web frameworks for Python. Lacey Williams Henschel is a Django consultant and has joined me to talk about Django, the Django community, and so much more. Topics:
* Django
* The Django community
* Django Girls
* The Django Girls Tutorial
* DjangoCon
* Software testing
* Using tests during learning
* pytest-django
* Testing Django
* Wagtail
Special Guest: Lacey Williams Henschel.
3/1/2020 - 27 minutes, 17 seconds

102: Cosmic Python, TDD, testing and external dependencies - Harry Percival

Harry Percival has completed his second book, "Architecture Patterns with Python". So of course we talk about the book, also known as "Cosmic Python". We also discuss lots of testing topics, especially related to larger systems and systems involving third-party interfaces and APIs. Topics:
* Harry's new book, "Architecture Patterns with Python", a.k.a. Cosmic Python
* TDD: test-driven development
* The test pyramid
* Tradeoffs of different architectural choices
* Mocks and their pitfalls
* Avoiding mocks
* Separating conceptual business logic
* Dependency injection
* Dependency inversion
* Identifying external dependencies
* Interface adapters to minimize the exposed surface area of external dependencies
* London school vs classic/Detroit school of TDD
* Testing strategies for testing external REST APIs
Special Guest: Harry Percival.
2/27/2020 - 41 minutes, 44 seconds

101: Application Security - Anthony Shaw

Application security is best designed into a system from the start. Anthony Shaw is doing something about it by creating an editor plugin that actually helps you write more secure application code while you are coding. On today's Test & Code, Anthony and I discuss his security plugin, but also application security in general, as well as other security components you need to consider. Security is something every team needs to think about, whether you are a single-person team, a small startup, or a large corporation. Anthony and I also discuss where to start if it's just a few of you, or even just one of you. Topics include:
* Finding security risks while writing code
* What the risks are for your applications
* Thinking about attack surfaces
* Static and dynamic code analysis
* Securing the environment an app is running in
* Tools for scanning live sites for vulnerabilities
* Secret management
* Hashing algorithms
* Authentication systems
* Anthony's upcoming CPython Internals book
Special Guest: Anthony Shaw.
2/19/2020 - 46 minutes, 16 seconds

100: A/B Testing - Leemay Nassery

Let's say you have a web application and you want to make some changes to improve it. You may want to A/B test it first to make sure you are really improving things. But really, what is A/B testing? That's what we'll find out on this episode with Leemay Nassery. Special Guest: Leemay Nassery.
2/13/2020 - 36 minutes, 30 seconds

99: Software Maintenance and Chess

I play a form of group chess that has some interesting analogies to software development and maintenance of existing systems. This episode explains group chess and explores a few of those analogies.
1/30/2020 - 16 minutes, 8 seconds

98: pytest-testmon - selects tests affected by changed files and methods - Tibor Arpas

pytest-testmon is a pytest plugin which selects and executes only the tests you need to run. It does this by collecting dependencies between tests and all executed code (internally using Coverage.py) and comparing the dependencies against changes. testmon updates its database on each test execution, so it works independently of version control. In this episode, I talk with testmon creator Tibor Arpas about testmon, its use, and how it works. Special Guest: Tibor Arpas.
1/21/2020 - 32 minutes, 58 seconds

97: 2019 Retrospective, 2020 Plans, and an amazing decade

This episode is not just a look back on 2019 and a look forward to 2020. 2019 is also the end of an amazingly transformative decade for me, so I'm going to discuss that as well.
Top 10 episodes of 2019:
10: episode 46 (https://testandcode.com/46), Testing Hard To Test Applications - Anthony Shaw
9: episode 64 (https://testandcode.com/64), Practicing Programming to increase your value
8: episode 70 (https://testandcode.com/70), Learning Software without a CS degree - Dane Hillard
7: episode 75 (https://testandcode.com/75), Modern Testing Principles - Alan Page
6: episode 72 (https://testandcode.com/72), Technical Interview Fixes - April Wensel
5: episode 69 (https://testandcode.com/69), Andy Hunt - The Pragmatic Programmer
4: episode 73 (https://testandcode.com/73), PyCon 2019 Live Recording
3: episode 71 (https://testandcode.com/71), Memorable Tech Talks, The Ultimate Guide - Nina Zakharenko
2: episode 76 (https://testandcode.com/76), TDD: Don’t be afraid of Test-Driven Development - Chris May
1: episode 89 (https://testandcode.com/89), Improving Programming Education - Nicholas Tollervey
Looking back on the last decade: some amazing events, like 2 podcasts, a book, a blog, speaking events, and teaching, have led me to where we're at now.
Looking forward to 2020 and beyond: I discussed what's in store in the next year and moving forward.
A closing quote: Software is a blast. At least, it should be. I want everyone to have fun writing software. Leaning on automated tests is the best way I know to give myself the confidence and freedom to:
- rewrite big chunks of code
- play with the code
- try new things
- have fun without fear
- go home feeling good about what I did
- be proud of my code
I want everyone to have that. That's why I promote and teach automated testing. I hope you had an amazing decade. And I wish you a productive and fun 2020 and upcoming decade. If we work together and help each other reach new heights, we can achieve some pretty amazing things.
12/31/2019 - 24 minutes, 1 second

96: Azure Pipelines - Thomas Eckert

Pipelines are used a lot in software projects to automate much of the work around build, test, deployment, and more. Thomas Eckert talks with me about pipelines, specifically Azure Pipelines: some of the history, and how we can use pipelines for modern Python projects. Special Guest: Thomas Eckert.
12/16/2019 - 26 minutes, 9 seconds

95: Data Science Pipeline Testing with Great Expectations - Abe Gong

Data science and machine learning are affecting more of our lives every day. Decisions based on data science and machine learning are heavily dependent on the quality of the data, and the quality of the data pipeline. Some of the software in the pipeline can be tested to some extent with traditional testing tools, like pytest. But what about the data? The data entering the pipeline, and at various stages along the pipeline, should be validated. That's where pipeline tests come in. Pipeline tests are applied to data. Pipeline tests help you guard against upstream data changes and monitor data quality. Abe Gong and Superconductive are building an open source project called Great Expectations. It's a tool to help you build pipeline tests. This is quite an interesting idea, and I hope it gains traction and takes off. Special Guest: Abe Gong.
11/30/2019 - 22 minutes, 48 seconds

94: The real 11 reasons I don't hire you - Charity Majors

You've applied for a job, maybe lots of jobs. Depending on the company, you've gotta get through:
* a resume review
* a coding challenge
* a phone screen
* maybe another code example
* an in-person interview
If you get the job, and you enjoy the work, awesome, congratulations. If you don't get the job, it'd be really great to know why. Sometimes it isn't because you aren't a skilled engineer. What other reasons are there? Well, that's what we're talking about today. Charity Majors is the cofounder and CTO of Honeycomb.io, and we're going to talk about reasons for not hiring someone. This is a very informative episode, both for people who will job hunt in the future and for hiring managers and people on the interview team. Special Guest: Charity Majors.
11/18/2019 - 34 minutes, 25 seconds

93: Software Testing, Book Writing, Teaching, Public Speaking, and PyCarolinas - Andy Knight

Andy Knight is the Automation Panda. He is passionate about software testing, and shares his passion through public speaking, writing on automationpanda.com, teaching as an adjunct professor, and now also through writing a book and organizing a new regional Python conference. Topics of this episode include:
* Andy's book on software testing
* Being an adjunct professor
* Public speaking and preparing talk proposals, including tips from Andy about proposals and preparing for talks
* PyCarolinas
Special Guest: Andy Knight.
10/31/2019 - 30 minutes, 24 seconds

92: 9 Steps to Crater Quality & Destroy Customer Satisfaction - Cristian Medina

Cristian Medina wrote an article recently called "Test Engineering Anti-Patterns: Destroy Your Customer Satisfaction and Crater Your Quality By Using These 9 Easy Organizational Practices". Of course, it's sarcastic, and aims to highlight many problems with organizational practices that reduce software quality. The article doesn't go out of character, and only promotes the anti-patterns. However, in this interview, we discuss each point, and the corollary of what you really should do. At least, our perspectives. Here's the list of all the points discussed in the article and in this episode:
1. Make the test teams solely responsible for quality
2. Require all tests to be automated before releasing
3. Require 100% code coverage
4. Isolate the test organization from development
5. Measure the success of the process, not the product
6. Metrics, if rewarded, will always be gamed
7. Require granular projections from engineers
8. Reward quick patching instead of solving
9. Plan for today instead of tomorrow
Special Guest: Cristian Medina.
10/20/2019 - 35 minutes, 5 seconds

91: Python 3.8 - there's a lot more new than most people are talking about

Python 3.8.0 final is live and ready to download. On today's episode, we're going to run through what's new, picking out the bits that I think are the most interesting and affect the most people, including:
* new language features
* standard library changes
* optimizations in 3.8
Not just the big stuff everyone's already talking about, but also some little things that will make programming Python even more fun and easy. I'm excited about Python 3.8. And really, this episode is my way of trying to get you excited about it too.
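Two of the widely discussed 3.8 language features fit in a few lines; this is a generic illustration, not code from the episode:

```python
# Python 3.8+: assignment expressions (the "walrus" operator) and
# f-string "=" self-documenting expressions.
values = [1, 2, 3, 4]

# := assigns and yields the value in one expression,
# so len() is called once and the result is reusable.
if (n := len(values)) > 3:
    message = f"{n=}"  # the "=" spec renders the expression and its value

print(message)  # prints "n=4"
```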
10/16/2019 - 21 minutes

90: Dynamic Scope Fixtures in pytest 5.2 - Anthony Sottile

pytest 5.2 was just released, and with it a cool fun feature called dynamic scope fixtures. Anthony Sottile is one of the pytest core developers, so I thought it'd be fun to have Anthony describe this new feature for us. We also talk about parametrized testing, and really what fixture scope is, and then what dynamic scope is. Special Guest: Anthony Sottile.
10/11/2019 - 33 minutes, 59 seconds

89: Improving Programming Education - Nicholas Tollervey

Nicholas Tollervey is working toward better ways of teaching programming. His projects include the Mu Editor, PyperCard, and CodeGrades. Many of us talk about problems with software education. Nicholas is doing something about it. Special Guest: Nicholas Tollervey.
9/28/2019 - 41 minutes, 59 seconds

88: Error Monitoring, Crash Reporting, Performance Monitoring - JD Trask

Error monitoring, crash reporting, and performance monitoring help you create a better user experience and are fast becoming crucial tools for web development and site reliability. But really, what are they? And when do you need them? You've built a cool web app or service, and you want to make sure your customers have a great experience. You know I advocate for utilizing automated tests so you find bugs before your customers do. However, fast development lifecycles and quickly reacting to customer needs are good things, and we all know that complete testing is not possible. That's why I firmly believe that site monitoring tools like logging, crash reporting, performance monitoring, etc. are awesome for maintaining and improving user experience. John-Daniel Trask (JD), the CEO of Raygun, agreed to come on the show and let me ask all my questions about this whole field. Special Guest: John-Daniel Trask.
9/21/2019 - 48 minutes, 16 seconds

87: Paths to Parametrization - from one test to many

There's a cool feature of pytest called parametrization. It's totally one of the superpowers of pytest. It's actually a handful of features, and there are a few ways to approach it. Parametrization is the ability to take one test and send lots of different input datasets into the code under test, and maybe even have different output checks, all within the same test that you developed in the simple test case. Super powerful, but since there are a few approaches to it, a tad tricky to get the hang of.
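The most common starting point is function parametrization with the pytest.mark.parametrize decorator. A minimal sketch; the doubling test is invented for illustration:

```python
import pytest

# One test body, three input/expected datasets. pytest collects and
# runs this as three separate tests, each reported individually.
@pytest.mark.parametrize(
    "value, expected",
    [(1, 2), (2, 4), (3, 6)],
)
def test_double(value, expected):
    assert value * 2 == expected
```

Running pytest -v on a file containing this shows three tests with generated IDs such as test_double[1-2].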
9/11/2019 - 19 minutes, 1 second

86: Teaching testing best practices with 4 testing maxims - Josh Peak

You've incorporated software testing into your coding practices and know from experience that it helps you get your stuff done faster with less headache. Awesome. Now your colleagues want in on that super power and want to learn testing. How do you help them? That's where Josh Peak is. He's helping his team add testing to their workflow to boost their productivity. That's what we're talking about today on Test & Code. Josh walks us through 4 maxims of developing software tests that help grow your confidence and proficiency at test writing. Special Guest: Josh Peak.
9/6/2019 - 22 minutes, 40 seconds

85: Speed Up Test Suites - Niklas Meinzer

Good software testing strategy is one of the best ways to save developer time and shorten the software development delivery cycle. Software test suites grow from small, quick suites at the beginning of a project to larger suites as we add tests, and the time to run the suites grows with it. Fortunately, pytest has many tricks up its sleeve to help shorten those test suite times. Niklas Meinzer is a software developer who recently wrote an article on optimizing test suites. In this episode, I talk with Niklas about the optimization techniques discussed in the article and how they can apply to just about any project. Special Guest: Niklas Meinzer.
8/26/2019 - 26 minutes, 32 seconds

84: CircuitPython - Scott Shawcroft

Adafruit enables beginners to make amazing hardware/software projects. With CircuitPython, these projects can now use Python. The combination of Python's ease of use and Adafruit's super cool hardware and a focus on a successful beginner experience makes learning to write code that controls hardware super fun. In this episode, Scott Shawcroft, the project lead, talks about the past, present, and future of CircuitPython, and discusses the focus on the beginner. We also discuss contributing to the project, testing CircuitPython, and many of the cool projects and hardware boards that can use CircuitPython, and Blinka, a library to allow you to use "CircuitPython APIs for non-CircuitPython versions of Python such as CPython on Linux and MicroPython," including Raspberry Pi. Special Guest: Scott Shawcroft.
8/20/2019 - 35 minutes, 51 seconds

83: PyBites Code Challenges behind the scenes - Bob Belderbos

Bob Belderbos and Julian Sequeira started PyBites (https://pybit.es/) a few years ago. They started doing code challenges along with people around the world and writing about it. Then came the codechalleng.es (https://codechalleng.es/) platform, where you can do code challenges in the browser and have your answer checked by pytest tests. But how does it all work? Bob joins me today to go behind the scenes and share the tech stack running the PyBites Code Challenges platform. We talk about the technology, the testing, and how it went from a cool idea to a working platform. Special Guest: Bob Belderbos.
8/16/201924 minutes, 3 seconds

82: pytest - favorite features since 3.0 - Anthony Sottile

Anthony Sottile is a pytest core contributor, as well as a maintainer and contributor to many other projects. In this episode, Anthony shares some of the super cool features of pytest that have been added since he started using it. We also discuss Anthony's move from user to contributor, and how others can help with the pytest project. Special Guest: Anthony Sottile.
7/31/201936 minutes, 35 seconds

81: TDD with flit

In the last episode, we talked about going from script to supported package. I worked on a project called submark and did the packaging with flit. Today's episode is a continuation where we add new features to a supported package and look at how to develop and test a flit-based package. Covered:
viewing stages of a project with git tags
flit support for editable installs
the flit description entry in pyproject.toml to put the README on PyPI
development dependencies in pyproject.toml
editor layout for optimal TDD-ing
test case grouping
modifications to traditional TDD that help me develop faster
Code and command snippets from the episode. For git checkout of versions:
$ git clone https://github.com/okken/submark.git
$ cd submark
$ python3 -m venv venv --prompt submark
$ source venv/bin/activate
(submark) $ git checkout v0.1
... etc ...
(submark) $ git checkout v0.7
To grab the latest again:
(submark) $ git checkout master
pyproject.toml change for the README to show up on PyPI:
[tool.flit.metadata]
...
description-file = "README.md"
...
Adding dev dependencies to pyproject.toml:
[tool.flit.metadata.requires-extra]
test = ["pytest", "pytest-cov", "tox"]
Installing in editable mode (in the top-level repo directory); works on Mac, Linux, and Windows:
(submark) $ flit install --pth-file
or for Mac/Linux:
(submark) $ flit install -s
7/17/201925 minutes, 20 seconds

80: From Python script to Maintainable Package

This episode is a story about packaging, and flit, tox, pytest, and coverage. And an alternate solution to "using the src". Python makes it easy to build simple tools for all kinds of tasks. And it's great to be able to share small projects with others on your team, in your company, or with the world. When you want to take a script from "just a script" to a maintainable package, there are a few steps, but none of it is hard. Also, the structure of the code layout changes to help with the growth and support. Instead of just talking about this from memory, I thought it'd be fun to create a new project and walk through the steps, and report back in a kind of time-lapse episode. It should be fun. Here are the steps we walk through:
0.1 Initial script and tests
0.2 build wheel with flit
0.3 build and test with tox
0.4 move source module into a package directory
0.5 move tests into tests directory
7/4/201921 minutes, 51 seconds
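For context, a minimal flit configuration of that era (flit 1.x style) looked roughly like this; the module and author values are illustrative, not necessarily the episode's actual file:

```toml
[build-system]
requires = ["flit"]
build-backend = "flit.buildapi"

[tool.flit.metadata]
module = "submark"
author = "Brian Okken"
description-file = "README.md"
```

Modern flit uses the standardized `[project]` table instead, but the shape of the workflow is the same.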

79: Fixing misinformation about software testing

Some information about software testing is just wrong. I'm not talking about opinions. I have lots of opinions, and they differ from other people's opinions. I'm talking about misinformation and old information that is no longer applicable. I've run across a few lately that I want to address. All of the following are wrong:
Integrated tests can't work. I can prove it with wacky math.
Tests have to be blazing fast or they won't get run.
TDD is about design, not about testing.
This episode discusses why these are wrong.
6/27/201922 minutes, 38 seconds

78: I don't write tests because ...

Roadblocks to writing tests, and what to do about them. Some developers either don't write tests or don't like writing tests. Why not? I love writing tests. In this episode we examine lots of roadblocks to testing and start coming up with solutions for them.
6/19/201930 minutes, 41 seconds

77: Testing Complex Systems with Maintainable Test Suites

Creating maintainable test suites for complex systems. The episode describes some complexities involved with hardware testing, then shares techniques for shifting complexity out of the test cases:
a quick overview of what test instruments are
discussion of the API and communication with instruments
techniques for shifting complexity out of test cases
These techniques should apply to all test suites dealing with complex systems:
Creating test cases that are easy to read and debug and that tell a story about what is being tested.
Pushing setup complexity into fixtures.
Pushing lengthy, repetitive API call sets into helper functions.
Using stable, documented interfaces.
6/11/201922 minutes, 59 seconds

76: TDD: Don’t be afraid of Test-Driven Development - Chris May

Test Driven Development, TDD, can be intimidating to try. Why is that? And how can we make it less scary? That's what this episode is about. Chris May is a Python developer and the co-founder of PyRVA, the Richmond Virginia Python group. In this episode, Chris shares his experience with adding testing and TDD to his workflow. I really enjoyed talking with Chris, and I think his story will help lots of people overcome testing anxiety. Special Guest: Chris May.
5/29/201935 minutes, 29 seconds

75: Modern Testing Principles - Alan Page

Software testing, if done right, is done all the time, throughout the whole life of a software project. This is different than the verification and validation of a classical model of QA teams. It's more of a collaborative model that actually tries to help get great software out the door faster and iterate quicker. One of the people at the forefront of this push is Alan Page. Alan and his podcast cohost Brent Jensen tried to boil down what modern testing looks like in the Modern Testing Principles. I've got Alan here today, to talk about the principles, and also to talk about this transition from classical QA to testing specialists being embedded in software teams and then to software teams doing their own testing. But that only barely scratches the surface of what we cover. I think you'll learn a lot from this discussion. The seven principles of Modern Testing (http://moderntesting.org):
1. Our priority is improving the business.
2. We accelerate the team, and use models like Lean Thinking and the Theory of Constraints to help identify, prioritize and mitigate bottlenecks from the system.
3. We are a force for continuous improvement, helping the team adapt and optimize in order to succeed, rather than providing a safety net to catch failures.
4. We care deeply about the quality culture of our team, and we coach, lead, and nurture the team towards a more mature quality culture.
5. We believe that the customer is the only one capable to judge and evaluate the quality of our product.
6. We use data extensively to deeply understand customer usage and then close the gaps between product hypotheses and business impact.
7. We expand testing abilities and knowhow across the team; understanding that this may reduce (or eliminate) the need for a dedicated testing specialist.
Special Guest: Alan Page.
5/23/201940 minutes

74: Technical Interviews: Preparing For, What to Expect, and Tips for Success - Derrick Mar

In this episode, I talk with Derrick Mar, CTO and co-founder of Pathrise. This is the episode you need to listen to to get ready for software interviews. We discuss four aspects of technical interviews that interviewers are looking for:
communication
problem solving
coding
verification
We also discuss how to practice for the interview, techniques for synchronizing with the interviewer and asking for hints, and even how to ask the recruiter or hiring manager how to prepare for the interview. If you or anyone you know has a software interview coming up, this episode will help you both feel more comfortable about the interview before you show up, and give you concrete tips on how to do better during the interview. Special Guest: Derrick Mar.
5/21/201927 minutes, 6 seconds

73: PyCon 2019 Live Recording

This is a "Yay! It's PyCon 2019" episode. PyCon is very important to me. But it's kinda hard to put a finger on why. So I figured I'd ask more people to help explain why it's important. I ask a few simple questions to people about Python and PyCon and get some great insights into both the language popularity and the special place this conference holds to many people.
5/3/201928 minutes

72: Technical Interview Fixes - April Wensel

Some typical technical interview practices can be harmful and get in the way of hiring great people. April Wensel offers advice to help fix the technical interview process. She recommends:
* hire for mindset and attitude
* look for empathy and mentorship skills
* allow candidates to show their strengths instead of hunting for weaknesses
* have the candidate leave feeling good about themselves and your company, regardless of the hiring decision
Some topics discussed:
* interview questions to bring out stories of skills and successes
* stereotype threat
* diversity
* interview hazing
* whiteboards
* coding challenges
* unconscious bias
* emotional intelligence
* the myth of a talent shortage
* pair programming and collaboration during interviews
* mirrortocracy
* cultural add vs cultural fit
* empathy
* mentoring
This episode is important for anyone going into a technical interview, as a candidate, as a hiring manager, or as a member of an interview team. Special Guest: April Wensel.
4/29/201937 minutes, 40 seconds

71: Memorable Tech Talks, The Ultimate Guide - Nina Zakharenko

Nina Zakharenko gives some great advice about giving tech talks. We talk about a blog series that Nina wrote called "The Ultimate Guide To Memorable Tech Talks". This episode is full of great help and encouragement for your own public speaking adventures. Some of what we discuss:
* overcoming the fear of public speaking
* breathing and pausing during talks
* planning your talk as well as planning your time to get ready for the talk
* writing proposals and getting feedback on proposals
* Nina's talk in PyCascades on programming Adafruit chips
* types of talks that are often rejected
* pre-recording demos to avoid live demo problems
* why you should speak, even if you are an introvert
* benefits of public speaking
* a super cool announcement at the end
Special Guest: Nina Zakharenko.
4/5/201948 minutes, 32 seconds

70: Learning Software without a CS degree - Dane Hillard

Dane and Brian discuss skills needed for people that become software developers from non-traditional paths. Dane is also writing a book to address many of these skill gaps, Code Like a Pro (https://www.manning.com/books/code-like-a-pro), that's currently in an early access phase. Use code podtest&code19 to get a discount. And, sign up as a Friend of the Show (https://testandcode.com/friends-of-the-show) to enter for a chance to win a free copy of the eBook version. We also discuss the writing process, testing with a multi-language stack, music, art, photography, and more. Special Guest: Dane Hillard.
3/29/201930 minutes, 36 seconds

69: Andy Hunt - The Pragmatic Programmer

Andy Hunt and Dave Thomas wrote the seminal software development book, The Pragmatic Programmer. Together they founded The Pragmatic Programmers and are well known as founders of the agile movement and authors of the Agile Manifesto. They founded the Pragmatic Bookshelf publishing business in 2003. The Pragmatic Bookshelf published its most important book, in my opinion, in 2017 with the first pytest book (https://pragprog.com/book/bopytest/python-testing-with-pytest) available from any publisher. Topics:
* The Pragmatic Programmer (https://pragprog.com/book/tpp/the-pragmatic-programmer), the book
* The Manifesto for Agile Software Development (https://agilemanifesto.org/)
* Agile methodologies and lightweight methods
* Some issues with "Agile" as it is now
* The GROWS Method (https://growsmethod.com/)
* Pragmatic Bookshelf (https://pragprog.com/), the publishing company
* How Pragmatic Bookshelf is different, and what it's like to be an author (http://write-for-us.pragprog.com/) with them
* Reading and writing sci-fi novels, including Conglommora (https://conglommora.com/), Andy's novels
* Playing music (https://andyhunt.bandcamp.com/)
Special Guest: Andy Hunt.
3/21/201948 minutes, 34 seconds

68: test && commit || revert (TCR) - Thomas Deniffel

With conventional TDD, you write a failing test, get it to pass, then refactor. Then you run the tests again to make sure your refactoring didn't break anything. But what if it did break something? Kent Beck has been recommending committing your code to revision control after every green test run. Oddmund Strømme suggested a symmetrical idea: go ahead and revert the code when a test fails. Kent writes that he hated the idea, but had to try it. He then wrote about it last September. And now we have TCR, "(test && commit) || revert". What does it feel like to actually do this? Well, Thomas Deniffel has been using it since about a month after that article came out. In this episode, we'll hear from Thomas about his experience with it. It's a fascinating idea. Have a listen and let me know what you think. Special Guest: Thomas Deniffel.
3/13/201938 minutes, 23 seconds

67: Teaching Python in Middle School

In today's episode we talk with Kelly Paredes & Sean Tibor. They teach Python in a middle school in Florida, and talk about this experience on the podcast "Teaching Python". I love that they include physical computing right from the start, and everything else they are doing. It's a fun interview. Special Guests: Kelly Paredes and Sean Tibor.
2/28/201934 minutes, 57 seconds

66: Brian is interviewed by Phil Burgess

I was recently interviewed on a podcast called "IT Career Energizer Podcast". Phil Burgess is the host of the podcast, and it was a lot of fun. I think it turned out well, and I wanted to share it with you here, with Phil's permission, of course. Special Guest: Phil Burgess.
2/26/201918 minutes, 26 seconds

65: one assert per test

Is it ok to have more than one assert statement in a test? I've seen articles that say no, you should never have more than one assert. I've also seen some test code made almost unreadable due to trying to avoid more than one assert per test. Where did this recommendation even come from? What are the reasons? What are the downsides to both perspectives? That's what we're going to talk about today.
2/17/201919 minutes, 52 seconds
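As a concrete illustration of the trade-off (a made-up example, not from the episode): several asserts in one test do stop at the first failure, but when they all describe one behavior, the test can still read clearly:

```python
def phone_parts(number):
    """Split a US-style 'NNN-NNN-NNNN' phone number into its parts.
    (A hypothetical function used only for illustration.)"""
    area, exchange, line = number.split("-")
    return area, exchange, line

# Three asserts, one behavior: if the first fails, the rest don't run,
# which is the usual argument against multiple asserts -- but splitting
# this into three tests would triple the boilerplate for little gain.
def test_phone_parts():
    area, exchange, line = phone_parts("503-555-0100")
    assert area == "503"
    assert exchange == "555"
    assert line == "0100"
```

The alternative, one test per assert, pays off mainly when each check can fail independently for different reasons.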

64: Practicing Programming to increase your value

I want you to get the most out of being a software developer, or test engineer, or whatever you do that makes this podcast relevant to your life. By "get the most" I mean:
the most fun
the most value
more career options
probably more responsibility
maybe even more money, that'd be cool
I want you to start (or continue) studying and practicing your skills. But not just random practice; I've got a strategy to help you focus what to study. Why am I talking about this now? Here's some background on how I re-learned how to have fun with code refactoring (https://testandcode.com/pybites) through code challenges. I'm going to write up the whole list as a blog post, which I'll share first with my Patreon supporters (https://www.patreon.com/testpodcast), second with my email list (https://us5.list-manage.com/subscribe?u=1f123c581ab0df2737f3174b9&id=9db722df54) and Slack channel (http://pythontesting.net/slack/), and then as an actual post somewhere.
2/7/201921 minutes, 36 seconds

63: Python Corporate Training - Matt Harrison

I hear and I forget. I see and I remember. I do and I understand. -- Confucius Matt Harrison is an author and instructor of Python and Data Science. This episode focuses on his training company, MetaSnake, and corporate training. Matt's written several books on Python, mostly self published. So of course we talk about that. But the bulk of the conversation is about corporate training, with Brian playing the role of someone considering starting a corporate training role, and asking Matt, an experienced expert in training, how to start and where to go from there. I think you'll learn a lot from this. Special Guest: Matt Harrison.
2/1/201933 minutes, 34 seconds

62: Python Training - Reuven Lerner

There are a lot of learning styles and a lot of ways to learn Python. If you started Python through a class at work, or through an online course, or maybe an email series, it's possible you learned from Reuven Lerner. If your first encounter with pytest was reading an article in Linux Journal recently, that was the writing of Reuven. Reuven Lerner teaches Python. This interview definitely falls into the category of talking with interesting people doing interesting things with Python. We talk about how incorporating testing into teaching can add a level of clarity to the interaction and help people during the learning process. I'm also fascinated by people who teach and train because it's a skill I'm trying to improve. Special Guest: Reuven Lerner.
1/13/201928 minutes, 23 seconds

A retrospective

A look back on 3 years of podcasting, and a bit of a look forward to what to expect in 2019. Top 5 episodes:
2: Pytest vs Unittest vs Nose (https://testandcode.com/2)
33: Katharine Jarmul - Testing in Data Science (https://testandcode.com/33)
18: Testing in Startups and Hiring Software Engineers with Joe Stump (https://testandcode.com/18)
45: David Heinemeier Hansson - Software Development and Testing, TDD, and exploratory QA (https://testandcode.com/45)
27: Mahmoud Hashemi : unit, integration, and system testing (https://testandcode.com/27)
Honorable mention:
32: David Hussman - Agile vs Agility, Dude's Law, and more (https://testandcode.com/32)
This episode also went through lots of:
what went well
what was lacking
what's next
Please listen and let me know where I should take this podcast.
12/31/201831 minutes, 11 seconds

100 Days of Code - Julian Sequeira

Julian Sequeira is Co-Founder of PyBit.es (a blog/platform created to teach and learn Python) and a Python Trainer at Talk Python Training. He's also a survivor of the 100DaysOfCode in Python Challenge. We talk about the 100 days challenge, about learning Python, and about how cool it is to learn within a community. Special Guest: Julian Sequeira.
12/28/201834 minutes, 33 seconds

Genesynth, nox, urllib3, & PyCascades - Thea Flowers

Thea Flowers is a Pythonista and open source advocate. She helps empower developers of all backgrounds and experience levels using Python and open source software and hardware. Thea is the creator of Nox, the co-chair of PyCascades 2019, the lead maintainer of urllib3, and a member of the Python Packaging Authority and Packaging Working Group. Thea works on Google Cloud Platform's wonderful Developer Relations team where she works on API client libraries and community outreach. All of that is definitely cool enough. But she is also building a synthesiser based on Sega Genesis chips. So of course, that's where we'll start the conversation. Special Guest: Thea Flowers.
12/21/201831 minutes, 5 seconds

REST APIs, testing with Docker containers and pytest

Let's say you've got a web application you need to test. It has a REST API that you want to use for testing. Can you use Python for this testing even if the application is written in some other language? Of course. Can you use pytest? duh. yes. what else? What if you want to spin up docker instances, get your app running in that, and run your tests against that environment? How would you use pytest to do that? Well, there, I'm not exactly sure. But I know someone who does. Dima Spivak is the Director of Engineering at StreamSets, and he and his team are doing just that. He's also got some great advice on utilizing code reviews across teams for test code, and a whole lot more. Special Guest: Dima Spivak.
12/14/201828 minutes, 9 seconds
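The pattern Dima describes might be sketched like this; the image name, port, and wait strategy are placeholder assumptions, not StreamSets' actual setup:

```python
import subprocess
import time
from contextlib import contextmanager

@contextmanager
def app_container(image="myapp:latest", port=8080):
    """Start the app under test in a Docker container, yield its base
    URL for REST-API tests, then always clean up afterwards."""
    cid = subprocess.run(
        ["docker", "run", "-d", "-p", f"{port}:{port}", image],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    try:
        time.sleep(2)  # crude; real code should poll a health endpoint
        yield f"http://localhost:{port}"
    finally:
        subprocess.run(["docker", "rm", "-f", cid], check=False)
```

In pytest this would typically live in a fixture, so each test session gets a fresh environment regardless of the language the app itself is written in.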

What is Data Science? - Vicki Boykis

Data science, data engineering, data analysis, and machine learning are part of the recent massive growth of Python. But really, what is data science? Vicki Boykis helps me understand questions like:
No really, what is data science?
What does a data pipeline look like?
What is it like to do data science, data analysis, data engineering?
Can you do analysis on a laptop?
How big does data have to be to be considered big?
What are the challenges in data science?
Does it make sense for software engineers to learn data engineering, data science, pipelines, etc.?
How could someone start learning data science?
Also covered:
A type work (analysis) vs B type work (building)
data lakes and data swamps
predictive models
data cleaning
development vs experimentation
Jupyter Notebooks
Kaggle
ETL pipelines
I learned a lot about the broad field of data science from talking with Vicki. Special Guest: Vicki Boykis.
12/11/201830 minutes, 47 seconds

Being a Guest on a Podcast - Michael Kennedy

Michael Kennedy of Talk Python and Python Bytes fame joins Brian to talk about being a great guest and what to expect. Even if you have never wanted to be on a podcast, you might learn some great tips. A few of the things we talk about will be helpful for other endeavors, like public speaking, guest blog posts, and looking for unsolicited job opportunities. Some people have never been on a podcast before, and are possibly freaked out about some of the unknowns of being on a podcast. That's why we did this episode. Michael and I discuss a bunch of the niggly details so that you can be relaxed and know what to expect. Topics include:
If you want to be on a podcast:
how to stand out and be someone a podcast would want to have on a show
how to suggest yourself as a guest and the topic you want to discuss
picking a topic for a podcast
What to do before the show to prepare:
helping the host out with some information
some hardware (not much)
some software (all free)
sending info like bio, headshot, links, etc.
what to expect the host or show to do before the recording
where to record
sketching out some show topics with the host, maybe on a shared document
What to expect and do:
right before the show
during the conversation
after the recording
when it goes live (help promote it)
Special Guest: Michael Kennedy.
12/7/201837 minutes, 15 seconds

55: When 100% test coverage just isn't enough - Mahmoud Hashemi

What happens when 100% test code coverage just isn't enough? In this episode, we talk with Mahmoud Hashemi about glom, a very cool project in itself, but a project that needs more than 100% coverage. This problem affects lots of projects that use higher-level programming constructs, like domain specific languages (DSLs), sub-languages and mini-languages, compilers, and DB query languages. Also covered:
* awesome Python applications
* versioning: 0-ver vs calver vs semver
Special Guest: Mahmoud Hashemi.
12/3/201834 minutes, 7 seconds
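To make the coverage point concrete, here's a toy path resolver (an invented stand-in, not glom's real API) where one test executes every line, giving 100% line coverage, yet exercises only a sliver of the input language:

```python
def resolve(spec, data):
    """Tiny glom-like resolver: 'a.b' digs into nested dicts.
    (A toy stand-in for illustration, not glom's actual API.)"""
    for key in spec.split("."):
        data = data[key]
    return data

# This single test runs every line above, so line coverage reports 100%.
# But it tries only one spec shape -- deeper paths, missing keys, empty
# specs, and so on are untested. DSL-like code needs coverage of the
# input space, not just the source lines.
def test_resolve_single_path():
    assert resolve("a.b", {"a": {"b": 1}}) == 1
```

This is the sense in which "100% coverage" can be a floor, not a ceiling, for projects like glom.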

54: Python 1994 - Paul Everitt

Paul talks about the beginning years of Python. Talking about Python's beginnings is also talking about the Python community beginnings. Yes, it's reminiscing, but it's fun. Special Guest: Paul Everitt.
11/25/201829 minutes, 24 seconds

53: Seven Databases in Seven Weeks - Luc Perkins

Luc Perkins joins the show to talk about "Seven Databases in Seven Weeks: A guide to modern databases and the NoSQL movement." We discuss a bit about each database: Redis, Neo4J, CouchDB, MongoDB, HBase, Postgres, and DynamoDB. Special Guest: Luc Perkins.
11/19/201854 minutes, 46 seconds

52: pyproject.toml : the future of Python packaging - Brett Cannon

Brett Cannon discusses the changes afoot in Python packaging as a result of PEP 517 and PEP 518, starting with "How did we get here?" and "Where are we going?" Discussed:
flit
Poetry
tox
Continuous Integration
setup.py, MANIFEST.in, etc.
pipenv
what's with lock files
applications (don't go on PyPI) vs libraries (go on PyPI)
workflows
dependency resolution
deployment dependencies vs development dependencies
will lock files be standardized
multiple lock files
requirements.txt
Special Guest: Brett Cannon.
11/5/201850 minutes, 51 seconds
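The core of PEP 518 is declaring your build system in pyproject.toml so pip knows what to install before building. A minimal example (the backend choice here is illustrative; flit, Poetry, and others plug into the same table):

```toml
[build-system]
requires = ["setuptools >= 40.8.0", "wheel"]
build-backend = "setuptools.build_meta"
```

PEP 517 then defines the interface (`build_wheel`, `build_sdist`) that the named backend must provide.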

51: Feature Testing

Andy Knight joins me in discussing the concept of feature testing. A feature test is "a test verifying a service or library as the customer would use it, but within a single process." That was a quote from an article that appeared on the Twitter engineering blog. The article describes a shift away from class tests towards feature tests, the benefits of the shift, and some reactions to it. Feature tests are similar to something I used to call a "functional subcutaneous integration test", but "feature test" is a way better name, and I plan to use it more often. The idea fits well with my testing philosophy. Andy Knight is someone still holding onto the testing pyramid, so I thought it would be fun to ask him to discuss feature testing with me. I think it's a balanced discussion. I hope you enjoy it and learn something. Special Guest: Andy Knight.
10/30/201831 minutes, 35 seconds
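A tiny made-up example of the distinction: instead of a class test poking at internals, a feature test drives the public entry points the way a customer would, still within one process:

```python
# Hypothetical task-list library under test (not from the episode).
def add_task(store, title):
    task_id = len(store) + 1
    store[task_id] = {"title": title, "done": False}
    return task_id

def complete_task(store, task_id):
    store[task_id]["done"] = True

# The feature test: one user-visible scenario, exercised end to end
# through the public API, with no mocking of internals.
def test_user_can_finish_a_task():
    store = {}
    task_id = add_task(store, "write tests")
    complete_task(store, task_id)
    assert store[task_id]["done"]
```

A class test might instead assert on the internal dict layout, which couples the test to implementation details the customer never sees.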

50: Flaky Tests and How to Deal with Them

Anthony Shaw joins Brian to discuss flaky tests and flaky test suites.
What are flaky tests? Are they the same as fragile tests? Why are they bad? How do we deal with them? What causes flakiness? How can we fix them? How can we avoid them?
Also discussed:
proactively rooting out flakiness
test design
GUI tests
sharing solutions
Special Guest: Anthony Shaw.
10/25/201832 minutes, 20 seconds
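One common flakiness fix that comes up in discussions like this is making hidden randomness injectable, so a test can pin it down. A small sketch (the function and its purpose are hypothetical):

```python
import random

def sample_user_ids(population, k, rng=None):
    """Pick k user ids. Hidden module-level randomness makes a test of
    this flaky; letting callers inject the random source fixes it."""
    rng = rng or random.Random()
    return rng.sample(population, k)

def test_sample_user_ids_deterministic():
    # Two identically seeded generators must produce the same sample,
    # so the test can't fail intermittently.
    assert sample_user_ids(range(100), 3, random.Random(42)) == \
           sample_user_ids(range(100), 3, random.Random(42))
```

The same dependency-injection trick applies to clocks, network calls, and other nondeterminism sources the episode covers.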

49: tox - Oliver Bestwalter

tox is a simple yet powerful tool that is used by many Python projects. tox is not just a tool to help you test a Python project against multiple versions of Python. In this interview, Oliver and Brian just scratch the surface of this simple yet powerful automation tool. This is from the tox documentation: tox is a generic virtualenv management and test command line tool you can use for:
checking your package installs correctly with different Python versions and interpreters
running your tests in each of the environments, configuring your test tool of choice
acting as a frontend to Continuous Integration servers, greatly reducing boilerplate and merging CI and shell-based testing
Yet tox is so much more. It can help create development environments, hold all of your admin scripts, ... I hope you enjoy this wonderful discussion of tox with Oliver Bestwalter, one of the core maintainers of tox. Special Guest: Oliver Bestwalter.
10/15/201855 minutes, 41 seconds
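A minimal tox.ini of the kind the docs describe; the Python versions and dependencies here are just an example:

```ini
[tox]
envlist = py37, py38

[testenv]
deps = pytest
commands = pytest
```

Running `tox` builds the package, creates one virtualenv per entry in `envlist`, installs the package plus `deps` into each, and runs `commands` there.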

48: A GUI for pytest

The story of how I came to find a good user interface for running and debugging automated tests is interleaved with a multi-year effort of mine to have a test workflow that works smoothly with product development and actually speeds things up. It's also interleaved with the origins of the blog pythontesting.net, this podcast, and the pytest book I wrote with Pragmatic. It's not a long story. And it has a happy ending. Well, it's not over, but I'm happy with where we are now. I'm also hoping that this tale of my dedication to, or obsession with, quality and developer efficiency helps you in your own efforts to make your daily workflow better, and to extend that to try to increase the efficiency of those you work with.
10/8/201812 minutes, 11 seconds

47: Automation Panda - Andy Knight

Interview with Andy Knight, the Automation Panda.
* Selenium & WebDriver
* Headless Chrome
* Gherkin
* BDD
* Given When Then
* pytest-bdd
* PyCharm
* Writing Good Gherkin
* Overhead of Gherkin and if it's worth it
* When to use pytest vs pytest-bdd
* The art of test automation
Special Guest: Andy Knight.
9/28/201839 minutes

46: Testing Hard To Test Applications - Anthony Shaw

How do you write tests for things that aren't that easy to write tests for? That question is a possibly terrible summary of a question sent to me by a listener. And to help me start answering that question, I asked a friend of mine to help, Anthony Shaw. Of course, different types of applications have different test strategies, so there's not a universal answer. But I know some of you out there have experience and expertise around how to tackle this problem. Listen to the discussion Anthony and I have about it, and let me know if you have some techniques or tips to add. Special Guest: Anthony Shaw.
9/2/201842 minutes, 45 seconds

45: David Heinemeier Hansson - Software Development and Testing, TDD, and exploratory QA

David Heinemeier Hansson is the creator of Ruby on Rails, founder & CTO at Basecamp (formerly 37signals). He's a best-selling author, public speaker, and even a Le Mans class-winning racing driver. All of that, of course, is awesome. But that's not why I asked him on the show. In 2014, during a RailsConf keynote, he started a discussion about damage caused by TDD. This was followed by a few blog posts, and then a series of recorded hangouts with Martin Fowler and Kent Beck. This is what I wanted to talk with David about: this unconventional yet practical and intuitive view of how testing and development work together. It's a great discussion. I think you'll get a lot out of it. Special Guest: David Heinemeier Hansson.
8/13/201840 minutes, 32 seconds

44: Mentoring - Nina Zakharenko

Nina Zakharenko is a cloud developer advocate at Microsoft focusing on Python. She's also an excellent public speaker. We talk about her experience with mentoring, both being a mentor, and utilizing mentors. We also talk about public speaking, her move to Microsoft, and to Portland, and the Microsoft/GitHub merge. Special Guest: Nina Zakharenko.
7/21/201826 minutes, 42 seconds

Preparing for Technical Talks with Kelsey Hightower - bonus episode

After I had wrapped up the interview with Kelsey Hightower for episode 43 (http://testandcode.com/43), I asked him one last question. You see, I admire his presentation style. So I asked him if he would share with me how he prepares for his presentations. His answer is so thoughtful and makes so much sense, I couldn't keep it to myself. I'm releasing this as a bonus mini-episode so that it's easy to refer back to the next time you or I have a chance to do a technical talk. Special Guest: Kelsey Hightower.
7/17/20188 minutes, 30 seconds

43: Kelsey Hightower - End to End & Integration Testing

I first heard Kelsey speak during his 2017 PyCon keynote. He's an amazing speaker, and I knew right then I wanted to hear more about what he does and hear more of his story. We discuss testing, of course, but we take it further and discuss:
tests for large systems, like Kubernetes
testing in real-world scenarios with all the configuration and everything
becoming a complete engineer by thinking about the end-to-end flow from the user's perspective
learning from other roles, and the value of roles to allow focus and expertise
We even get into Chaos Engineering and testing live systems. Special Guest: Kelsey Hightower.
7/5/201841 minutes, 47 seconds

42: Using Automated Tests to Help Teach Python - Trey Hunner

This interview with Trey Hunner discusses his use of automated tests to help teach programming. Automated testing is a huge part of developing great software, but many new developers don't get exposed to testing for quite a while. This is changing, though. New ways to teach programming include automated tests from the beginning. Trey Hunner is one of the PSF directors and a Python and Django team trainer, and he has been using automated tests to help people learn Python. Special Guest: Trey Hunner.
6/28/201858 minutes, 31 seconds

41: Testing in DevOps and Agile - Anthony Shaw

We talk with Anthony Shaw about some of the testing problems facing both DevOps teams, and Agile teams. We also talk about his recent pull request accepted into pytest. Special Guest: Anthony Shaw.
4/18/201844 minutes, 47 seconds
Episode Artwork

40: On Podcasting - Adam Clark

Adam is the host of The Gently Mad (https://thegentlymad.com/) podcast, and teaches the steps in creating and growing a podcast in his course Irresistible Podcasting (https://irresistiblepodcasting.com). He was one of the people who inspired Brian to get the Test & Code podcast started in the first place. Brian took his course in 2015. Adam is in the process of updating the course, and building a community around it. Warning: This may be an episode to listen to with headphones if you have kids around. There is swearing. I wanted to get Adam's help to convince many of you to either come on this show as a guest, or start your own podcast. We did some of that. But we also cover a lot of issues like self doubt and the importance of community. Special Guest: Adam Clark.
4/10/201848 minutes, 47 seconds
Episode Artwork

39: Thorough software testing for critical features

Complete and exhaustive testing is not possible. Nor would it be fun, or maintainable, or a good use of your time. However, some functionality is important enough to make sure the test behavior coverage is thorough enough to have high confidence in its quality. In this episode, we discuss 3 techniques that can be combined to quickly generate test cases. We then talk about how to implement them efficiently in pytest. The techniques covered are: equivalence partitioning boundary value analysis decision tables We discuss how to use these to generate test cases for a new list filter functionality in the cards application. The resulting tests: 1 UI test to make sure the options are able to be passed in correctly. 1 small parametrized test function with 16 single line parameter sets representing the different test cases.
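To illustrate the combination the episode describes, here is a minimal sketch of equivalence partitioning plus boundary value analysis expressed as a single parametrized pytest function. The `in_priority_range` predicate is a hypothetical stand-in for the cards filter logic, not the application's actual code:

```python
import pytest

# Hypothetical filter predicate: keep items whose priority falls in an
# inclusive range. Equivalence partitions: below range, in range, above
# range. Boundary values: low-1, low, high, high+1.
def in_priority_range(priority, low, high):
    return low <= priority <= high

@pytest.mark.parametrize(
    "priority, low, high, expected",
    [
        (0, 1, 5, False),  # below partition (boundary: low - 1)
        (1, 1, 5, True),   # lower boundary
        (3, 1, 5, True),   # interior of the valid partition
        (5, 1, 5, True),   # upper boundary
        (6, 1, 5, False),  # above partition (boundary: high + 1)
    ],
)
def test_in_priority_range(priority, low, high, expected):
    assert in_priority_range(priority, low, high) == expected
```

Each single-line parameter set is one cell of the decision table, which keeps the full case list visible at a glance.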
3/29/201818 minutes, 59 seconds
Episode Artwork

38: Prioritize software tests with RCRCRC

RCRCRC was developed by Karen Nicole Johnson. In this episode we discuss the mnemonic/heuristic and use it to prioritize tests for the cards application. Recent: new features, new areas of code Core: essential functions must continue to work, your product's USPs (Unique Selling Propositions) Risk: some areas of an application pose more risk, perhaps areas important to customers but not used regularly by the development team. Configuration sensitive: code that’s dependent on environment settings or operating system specifics Repaired: tests to reproduce bugs, tests for functionality that has been recently repaired. Chronic: functionality that frequently breaks
3/13/201811 minutes, 13 seconds
Episode Artwork

37: What tests to write first

This episode starts down the path of test strategy with the first tests to write in either a legacy system or a project just getting off its feet. We cover: My approach to testing existing systems. Put names to strategies so we can refer to them later. Explain the strategies in general terms and explain why they are useful. Discuss how these strategies are used in an example project. (The code is available on GitHub). Strategies covered today: Dog Fooding Exploratory Testing Tracer Bullet Tests Act Like A Customer (ALAC) Tests Manual Procedures Initial automated tests at 2 levels, API and UI.
3/8/201820 minutes, 55 seconds
Episode Artwork

36: Stephanie Hurlburt - Mentoring and Open Office Hours

Stephanie is a co-founder and graphics engineer at Binomial. She works on Basis, an image compressor, and has customers in games, video, mapping, and any application that has lots of image data. Stephanie has also been encouraging experienced engineers to open up their twitter DMs to questions from anyone, to help mentor people not only in technical questions, but in career questions as well. She also sets aside some time to mentor people through skype when written form just doesn't cut it. That's the primary reason I have Stephanie on today, to talk about mentoring and open office hours. But we also talk about - Binomial - image compression - texture mapping - the use of both manual and automated testing for complex systems - sane work hours - work life balance - and how long hours have led her to the opinions she holds today Special Guest: Stephanie Hurlburt.
2/13/201831 minutes, 21 seconds
Episode Artwork

35: Continuing Education and Certificate Programs at UW

There are lots of ways to up your skills. Of course, I'm a big fan of learning through reading books, such as upping your testing skills by reading Python Testing with pytest. And then there are online learning systems and MOOCs. At the other end of the spectrum is a full blown university degree. One option kind of in the middle is continuing education programs available through some universities, such as University of Washington. To discuss this option with me in more depth, we've got Andrew Hoover, Senior Director, Program Strategy, University of Washington Continuum College. Special Guest: Andrew Hoover.
2/1/201825 minutes, 18 seconds
Episode Artwork

34: TDD and Test First

An in depth discussion of Test Driven Development (TDD) should include a discussion of Test First. So that's where we start. Why write tests first? How do you know what tests to write? What are the steps for test first? Isn't this just TDD? Functional Tests vs Unit Tests
12/31/201725 minutes
Episode Artwork

33: Katharine Jarmul - Testing in Data Science

A discussion with Katharine Jarmul, aka kjam, about some of the challenges of data science with respect to testing. Some of the topics we discuss: experimentation vs testing testing pipelines and pipeline changes automating data validation property based testing schema validation and detecting schema changes using unit test techniques to test data pipeline stages testing nodes and transitions in DAGs testing expected and unexpected data missing data and non-signals corrupting a dataset with noise fuzz testing for both data pipelines and web APIs datafuzz hypothesis testing internal interfaces documenting and sharing domain expertise to build good reasonableness intermediary data and stages neural networks speaking at conferences Special Guest: Katharine Jarmul.
11/30/201737 minutes, 14 seconds
Episode Artwork

32: David Hussman - Agile vs Agility, Dude's Law, and more

A wonderful discussion with David Hussman. David and Brian look back at what all we've learned in XP, TDD, and other Agile methodologies, where things have gone awry, how to bring the value back, and where testing fits into all of this. How to build the wrong thing faster Agile vs Agility Product vs Process Where testing fits into software development practices. "Integration tests, there's a name that needs to be refactored desperately." Integration tests are "story tests". They tell the story of the product. XP and TDD and the relationship with tests To test for design, use microtests, xUnit style. User Advocacy tests are often lacking, but are needed to learn about the product. "I just keep writing tests until I'm not scared anymore." - Kent Beck Dude's Law: Value = Why/How People often focus so much on the how that they forget about why they are doing something. Subcutaneous Tests "The hardest part of programming is thinking." Refactoring vs Repaving Agility means being able to quickly change direction During experimentation and learning, what matters isn't how much you got done, but how much you learn. "The best way to get automation is to make developers do manual tests." Special Guest: David Hussman.
10/3/201747 minutes, 27 seconds
Episode Artwork

31: I'm so sick of the testing pyramid

What started as a twitter disagreement carries over into this civil discussion of software testing. Brian and Paul discuss testing practices such as the testing pyramid, TDD, unit testing, system testing, and balancing test effort. the Testing Pyramid the Testing Column TDD unit testing balancing unit with system tests, functional tests API testing subcutaneous testing customer facing tests Special Guest: Paul Merrill.
9/27/201739 minutes, 57 seconds
Episode Artwork

30: Legacy Code - M. Scott Ford

M. Scott Ford is the founder and chief code whisperer at Corgibytes, a company focused on helping other companies with legacy code. Topics include: How M. Scott Ford got into forming a company that works on legacy code. Technical debt Process debt Software testing The testing pyramid iterative development kanban readable code and readable test code Special Guest: M. Scott Ford.
8/1/201741 minutes, 47 seconds
Episode Artwork

29: Kobiton & QASymphony - Josh Lieberman

Kobiton is a service to test mobile apps on real devices. QASymphony offers software testing and QA tools. Special Guest: Josh Lieberman.
7/1/201718 minutes, 2 seconds
Episode Artwork

28: Chaos Engineering & Experimentation at Netflix - Casey Rosenthal

Today we have an interview with Casey Rosenthal, one of the people making sure Netflix runs smoothly. He is the manager for the Traffic, Intuition, and Chaos teams at Netflix. He's got a great perspective on quality and large systems. We talk about Chaos Engineering Experimentation vs Testing Testing Strategy Visualization of large amounts of data representing Steady State Special Guest: Casey Rosenthal.
4/7/201732 minutes, 55 seconds
Episode Artwork

27: Mahmoud Hashemi : unit, integration, and system testing

What is the difference between a unit test, an integration test, and a system test? Mahmoud Hashemi helps me to define these terms, as well as discuss the role of all testing variants in software development. What is the difference between a unit test, an integration test, and a system test? TDD testing pyramid vs testing column the role of testing in software development web frameworks listen to wikipedia hatnote the world’s largest photo competition Enterprise Software with Python Links: Mahmoud on twitter: @mhashemi (https://twitter.com/mhashemi) Mahmoud on sedimental (http://sedimental.org/) hatnote (http://blog.hatnote.com/) listen to wikipedia (http://listen.hatnote.com/) Montage (https://blog.wikimedia.org/2016/12/22/montage-platform-wiki-loves-monuments/), the web platform used to help judge the world’s largest photo competition clastic (https://pypi.python.org/pypi/clastic) 10 Myths of Enterprise Python (http://sedimental.org/10_myths_of_enterprise_python.html) Enterprise Software with Python (http://shop.oreilly.com/product/0636920047346.do?code=authd) course Enterprise Software with Python (http://sedimental.org/esp.html) blog post. Special Guest: Mahmoud Hashemi.
2/26/201741 minutes, 56 seconds
Episode Artwork

26: pyresttest – Sam Van Oort

Interview with Sam Van Oort about pyresttest (https://github.com/svanoort/pyresttest), "A REST testing and API microbenchmarking tool" pyresttest A question in the Test & Code Slack channel (http://pythontesting.net/slack) was raised about testing REST APIs. There were answers such as pytest + requests, of course, but there was also a mention of pyresttest, https://github.com/svanoort/pyresttest (https://github.com/svanoort/pyresttest), which I hadn't heard of. I checked out the github repo, and was struck by how user friendly the user facing test definitions were. So I contacted the developer, Sam Van Oort, and asked him to come on the show and tell me about this tool and why he developed it. Here's the "What is it?" section from the pyresttest README: A REST testing and API microbenchmarking tool Tests are defined in basic YAML or JSON config files, no code needed Minimal dependencies (pycurl, pyyaml, optionally future), making it easy to deploy on-server for smoketests/healthchecks Supports generate/extract/validate mechanisms to create full test scenarios Returns exit codes on failure, to slot into automated configuration management/orchestration tools (also supplies parseable logs) Logic is written and extensible in Python Support Special thanks to my wonderful Patreon supporters (http://patreon.com/testpodcast) and those who have supported the show by purchasing Python Testing with unittest, nose, pytest (http://pythontesting.net/book)
12/1/201657 minutes, 55 seconds
Episode Artwork

25: Selenium, pytest, Mozilla – Dave Hunt

Interview with Dave Hunt @davehunt82 (https://twitter.com/davehunt82). We Cover: Selenium Driver (http://www.seleniumhq.org/) pytest (http://docs.pytest.org/) pytest plugins: pytest-selenium (http://pytest-selenium.readthedocs.io/) pytest-html (https://pypi.python.org/pypi/pytest-html) pytest-variables (https://pypi.python.org/pypi/pytest-variables) tox (https://tox.readthedocs.io) Dave Hunt’s “help wanted” list on github (https://github.com/search?utf8=%E2%9C%93&q=author%3Adavehunt+type%3Aissue+label%3A%22help+wanted%22+state%3Aopen+no%3Aassignee) Mozilla (https://www.mozilla.org) Also: fixtures xfail CI and xfail and html reports CI and capturing pytest code sprint working remotely for Mozilla
12/1/201642 minutes, 20 seconds
Episode Artwork

24: pytest with Raphael Pierzina

pytest is an extremely popular test framework used by many projects and companies. In this episode, I interview Raphael Pierzina (@hackebrot (https://twitter.com/hackebrot)), a core contributor to both pytest and cookiecutter. We discuss how Raphael got involved with both projects, his involvement in cookiecutter, pytest, "adopt pytest month", the pytest code sprint, and of course some of the cool new features in pytest 3. Links: Raphael Pierzina on twitter (@hackebrot (https://twitter.com/hackebrot)) pytest - http://doc.pytest.org (http://doc.pytest.org/en/latest/) cookiecutter - https://github.com/audreyr/cookiecutter (https://github.com/audreyr/cookiecutter) cookiecutter-pytest-plugin - https://github.com/pytest-dev/cookiecutter-pytest-plugin (https://github.com/pytest-dev/cookiecutter-pytest-plugin)
11/10/201635 minutes, 15 seconds
Episode Artwork

23: Lessons about testing and TDD from Kent Beck

Kent Beck's twitter profile says "Programmer, author, father, husband, goat farmer". But I know him best from his work on extreme programming, test first programming, and test driven development. He's the one. The reason you know about TDD is because of Kent Beck. I first ran across writings from Kent Beck as I started exploring Extreme Programming in the early 2000's. Although I don't agree with all of the views he's expressed in his long and prolific career, I respect him as one of the best sources of information about software development, engineering practices, and software testing. Along with Test First Programming and Test Driven Development, Kent started an automated test framework that turned into JUnit. JUnit and its model of setup and teardown wrapping test functions, as well as base-test-class-driven test frameworks, became what we know of as xUnit style frameworks now, which includes Python's unittest. He discussed this history and a lot more on episode 167 of Software Engineering Radio. The episode is titled "The History of JUnit and the Future of Testing with Kent Beck", and is from Sept 26, 2010. http://www.se-radio.net/2010/09/episode-167-the-history-of-junit-and-the-future-of-testing-with-kent-beck/ I urge you to download it and listen to the whole thing. It's a great interview, still relevant, and applicable to testing in any language, including Python. What I've done in this podcast is take a handful of clips from the interview (with permission from IEEE and SE Radio), and discuss the clips and my opinions a bit. The lessons are: Your tests should tell a story. Be careful of DRY, inheritance, and other software development practices that might get in the way of keeping your tests easy to understand. All tests should help differentiate good programs from bad programs and not be redundant. Test at multiple levels and multiple scales where it makes sense. Differentiating between TDD, BDD, ATDD, etc. isn't as important as testing your software to learn about it. Who cares what you call it.
9/30/201613 minutes, 46 seconds
Episode Artwork

22: Converting Manual Tests to Automated Tests

How do you convert manual tests to automated tests? This episode looks at the differences between manual and automated tests and presents two strategies for converting manual to automated.
9/24/201610 minutes, 56 seconds
Episode Artwork

21: Terminology: test fixtures, subcutaneous testing, end to end testing, system testing

A listener requested that I start covering some terminology. I think it's a great idea. Covered in this episode: Test Fixtures Subcutaneous Testing End to End Testing (System Testing) I also discuss: A book rewrite Progress on transcripts A story from the slack channel
8/31/201618 minutes, 6 seconds
Episode Artwork

20: Talk Python To Me host Michael Kennedy

I talk with Michael about: Episodes of his show having to do with testing. His transition from employee to podcast host and online training entrepreneur. His Python training courses. The Pyramid Web framework. Courses by Michael Explore Python Jumpstart by Building 10 Apps Explore Write Pythonic Code Like a Seasoned Developer Python for Entrepreneurs Testing related podcast Episodes from Talk Python To Me: episode 10: Harry Percival, TDD for the Web in Python, and PythonAnywhere PythonAnywhere Harry's book, TDD with Python episode 45: Brian Okken, Pragmatic testing and the Testing Column Talk Python To Me podcast episode 63: Austin Bingham, Mutation Testing, Cosmic Ray Cosmic Ray episode 67: David MacIver, Hypothesis Hypothesis
7/29/201647 minutes, 13 seconds
Episode Artwork

19: Python unittest with Robert Collins

Interview with Robert Collins, current core maintainer of Python's unittest module. Some of the topics covered How did Robert become the maintainer of unittest? unittest2 as a rolling backport of unittest test and class parametrization with subtest and testscenarios Which extension to unittest most closely resembles Pytest fixtures? Comparing Pytest and unittest Will unittest ever get assert rewriting? Future changes to unittest I've been re-studying unittest recently and I mostly wanted to ask Robert a bunch of clarifying questions. This is an intermediate to advanced discussion of unittest. Many great features of unittest go by quickly in this talk. Please let me know if there's something you'd like me to cover in more depth as a blog post or a future episode. Links unittest (https://docs.python.org/3.5/library/unittest.html) unittest2 (https://pypi.python.org/pypi/unittest2) pip (https://docs.python.org/3.5/installing/) mock (https://docs.python.org/dev/library/unittest.mock.html) testtools (https://testtools.readthedocs.io/en/latest/) fixtures (https://pypi.python.org/pypi/fixtures) testscenarios (https://pypi.python.org/pypi/testscenarios) subunit (https://pypi.python.org/pypi/python-subunit) pypiserver (https://pypi.python.org/pypi/pypiserver) devpi (https://pypi.python.org/pypi/devpi-server) testresources (https://pypi.python.org/pypi/testresources) TIP (testing in python) mailing list (http://lists.idyll.org/listinfo/testing-in-python)
6/15/201640 minutes, 25 seconds
Episode Artwork

18: Testing in Startups and Hiring Software Engineers with Joe Stump

In this episode, I interview with Joe Stump, cofounder of Sprintly (https://sprint.ly), to give the startup perspective to development and testing. Joe has spent his career in startups. He's also been involved with hiring and talent acquisition for several startups. We talk about testing, continuous integration, code reviews, deployment, tolerance to defects, and how some of those differ between large companies and small companies and startups. Then we get into hiring. Specifically, finding and evaluating good engineers, and then getting them to be interested in working for you. If you ever want to grow your team size, you need to listen to this.
4/20/201653 minutes, 28 seconds
Episode Artwork

17: The Travis Foundation

The Travis Foundation. Interview with Laura Gaetano Links and things we talked about: Travis Foundation (http://foundation.travis-ci.org) Open Source Grants (http://foundation.travis-ci.org/grants/) The Foundation's support of Katrina Owen from exercism.io (http://foundation.travis-ci.org/2016/01/25/exercism/) Exercism.io (http://Exercism.io) Rails Girls summer of code (http://railsgirlssummerofcode.org/campaign/) Diversity Tickets (http://diversitytickets.org) Conference support Speakerinnen (http://speakerinnen.org) Prompt (http://mhprompt.org/)
4/11/201626 minutes, 37 seconds
Episode Artwork

16: Welcome to Test and Code

This is a small episode. I'm changing the name from the "Python Test Podcast" to "Test & Code". I just want to discuss the reasons behind this change, and take a peek at what's coming up in the future for this podcast. Links The Waterfall Model and "Managing the Development of Large Software Systems" (http://testandcode.com/7) Josh Kalderimis from Travis CI (http://testandcode.com/14)
3/31/20168 minutes, 33 seconds
Episode Artwork

15: Lean Software Development

An introduction to Lean Software Development This is a quick intro to the concepts of Lean Software Development. I'm starting a journey of trying to figure out how to apply lean principles to software development in the context of 2016/2017. Links Lean Software Development (http://amzn.to/223fkLo) book by Mary & Tom Poppendieck wikipedia (https://en.wikipedia.org/wiki/Lean_software_development) entry for Lean Software Development Patreon supporters of the show (http://patreon.com/testpodcast) Talk Python to Me (https://talkpython.fm/) Podcast Python Jumpstart by Building 10 Apps - video course (https://www.kickstarter.com/projects/mikeckennedy/python-jumpstart-by-building-10-apps-video-course?ref=card) pytest sprint (http://pytest.org/latest/announce/sprint2016.html) pytest.org (http://pytest.org) pytest/tox indiegogo campaign (https://www.indiegogo.com/projects/python-testing-sprint-mid-2016#/)
3/9/201610 minutes, 58 seconds
Episode Artwork

14: Continuous Integration with Travis CI – Josh Kalderimis

Interview with Josh Kalderimis from Travis CI. Josh is a co-founder and Chief Post-It Officer at Travis CI. Topics What is Continuous Integration, CI What is Travis CI Some history of the company travis-ci.org vs travis-ci.com and merging the two Enterprise and the importance of security Feature questions Travis vs Jenkins Travis notification through Slack Reporting history of Travis results Dealing with pytest results status other than pass/fail Capturing std out and stderr logging from tests Build artifacts Tox and Travis Using Selenium What does a Chief Post-It Officer do Differentiation between Travis and other CI options Using Slack to keep remote teams communicating well Travis team Funding open source projects Travis Foundation Rails Girls Summer of Code Open source grants Mustaches and beards Shite shirts New Zealand What does Team Periwinkle do Links Jeff Knupp's Open Sourcing a Python Project the Right Way (https://www.jeffknupp.com/blog/2013/08/16/open-sourcing-a-python-project-the-right-way/) Sven's blog post when Travis started (http://svenfuchs.com/2011/2/5/travis-a-distributed-build-server-tool-for-the-ruby-community) Sven's mustache and Josh's beard (https://travis-ci.com/about) Travis CI for open source (http://travis-ci.org) Travis CI for private repositories and enterprise (http://travis-ci.com) Slack (https://slack.com/) Travis Foundation (http://foundation.travis-ci.org/) Rails Girls Summer of Code (http://railsgirlssummerofcode.org/) Talk Python to Me Podcast (https://talkpython.fm/episodes/show/45/the-python-testing-column-now-a-thing)
2/25/201658 minutes, 17 seconds
Episode Artwork

13: Ian Cordasco – Betamax

Testing apps that use requests without using mock. Interview with Ian Cordasco (@sigmavirus24 (https://github.com/sigmavirus24)) Topics: Betamax - python library for replaying requests interactions for use in testing. requests github3.py Pycon 2015 talk: Ian Cordasco - Cutting Off the Internet: Testing Applications that Use Requests - PyCon 2015 Pytest and using Betamax with pytest fixtures The utility (or uselessness) of teaching programming with Java (My own rant mainly) Rackspace and Ian’s role at Rackspace and OpenStack Python Code Quality Authority: flake8, pep8, mccabe, pylint, astroid, … Static code analysis and what to use which tool when. Raymond Hettinger - Beyond PEP 8 -- Best practices for beautiful intelligible code - PyCon 2015 Links: Testing Python-Requests with Betamax (https://semaphoreci.com/community/tutorials/testing-python-requests-with-betamax) Cutting Off the Internet: Testing Applications that Use Requests - PyCon 2015 (https://youtu.be/YHbKxFcDltM?t=1m55s) github3.py (https://pypi.python.org/pypi/github3.py) requests (http://docs.python-requests.org/en/master/) Rackspace (https://www.rackspace.com/) Openstack (https://www.openstack.org/) Python Code Quality Authority (https://github.com/PyCQA) and documentation (http://meta.pycqa.org/en/latest/) GitLab (https://about.gitlab.com/) Raymond Hettinger - Beyond PEP 8 -- Best practices for beautiful intelligible code - PyCon 2015 (https://www.youtube.com/watch?v=wf-BqAjZb8M) Other Betamax resources: Betamaxing Boto3 (http://www.roadsi.de/betamaxing-boto3.html) Using Betamax with pytest fixtures (http://www.coglib.com/~icordasc/blog/2015/07/betamax-050-now-with-a-pytest-fixture.html) Isolated @memoize (http://nedbatchelder.com/blog/201601/isolated_memoize.html)
2/17/201620 minutes, 43 seconds
Episode Artwork

12: Coverage.py with Ned Batchelder

In this episode I interview Ned Batchelder. I know that coverage.py is very important to a lot of people to understand how much of their code is being covered by their test suites. Since I'm far from an expert on coverage, I asked Ned to discuss it on the show. I'm also quite a fan of Ned's 2014 PyCon talk "Getting Started Testing", so I definitely asked him about that. We also discuss edX, Python user groups, PyCon talks, and more. Some of what's covered (pun intended) in this episode: coverage.py types of coverage Line coverage branch coverage Behavior coverage Data coverage How Ned became the owner of coverage.py Running tests from coverage.py vs running coverage from test runner. edX what is it what Ned's role is Ned's blog Ned's PyCon 2014 talk "Getting Started Testing" Teaching testing and the difficulty of the classes being part of unittest fixtures package some of the difficulties of teaching unittest because of its class-based system. the history of classes in unittest coming from Java's JUnit implementation Boston's Python Group PyCon in Portland, where Ned plans to do a talk, "Machete mode debugging". Practicing PyCon talks at local group meetings. At the very least, practice it in front of a live audience. Links: Ned Batchelder (http://nedbatchelder.com/) Coverage (https://pypi.python.org/pypi/coverage) Coverage documentation (https://coverage.readthedocs.org) django-nose (https://pypi.python.org/pypi/django-nose) pytest-django (https://pypi.python.org/pypi/pytest-django) edX (https://www.edx.org/) open edX (https://open.edx.org/) Boston Python User Group (http://www.meetup.com/bostonpython/) Portland Python User Group (http://www.meetup.com/pdxpython/) - I need to go to these PyCon 2016 (https://us.pycon.org/2016/) - Planning on attending, it's in Portland. Yay! Getting Started Testing (http://nedbatchelder.com/text/test0.html) - Ned's 2014 Pycon talk
2/10/201640 minutes, 37 seconds
Episode Artwork

11: pytest assert magic

How pytest, unittest, and nose deal with assertions. Telling developers how and why their tests failed is a difficult job for a test framework. In this episode I talk about assert helper functions and the 3 methods pytest uses to avoid requiring users to use assert helper functions.
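As a quick illustration of the contrast the episode covers, here is a sketch of the same check written with a unittest assert helper method and as a plain pytest-style assert. The pytest version relies on pytest's assertion rewriting to reconstruct the sub-expressions in the failure message, so no helper is needed:

```python
import unittest

class TestWithHelpers(unittest.TestCase):
    def test_sum(self):
        # unittest uses a helper method so the framework can report
        # both the actual and expected values on failure.
        self.assertEqual(sum([1, 2, 3]), 6)

# The pytest equivalent: a bare assert statement is enough, because
# pytest rewrites the assert at import time to produce a detailed
# failure report (e.g. showing the computed sum and the expected 6).
def test_sum():
    assert sum([1, 2, 3]) == 6
```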
2/4/201613 minutes, 46 seconds
Episode Artwork

10: Test Case Design using Given-When-Then from BDD

Given-When-Then is borrowed from BDD and is my favorite structure for test case design. It doesn’t matter if you are using pytest, unittest, nose, or something completely different, this episode will help you write better tests. The Given-When-Then structure for test method/function development. How and why to utilize fixtures for your given or precondition code. Similarities with other structure descriptions. Setup-Test-Teardown Setup-Exercise-Verify-Teardown. Arrange-Act-Assert Preconditions-Trigger-Postconditions. Benefits Communicate the purpose of your test more clearly Focus your thinking while writing the test Make test writing faster Make it easier to re-use parts of your test Highlight the assumptions you are making about the test preconditions Highlight what outcomes you are expecting and testing against. Links discussed in the show: Mechanics of pytest, unittest, nose (http://pythontesting.net/start-here) unittest fixture reference (http://pythontesting.net/framework/unittest/unittest-fixtures/) nose fixture reference (http://pythontesting.net/framework/nose/nose-fixture-reference/) pytest fixtures (series of posts starting here) (http://pythontesting.net/framework/pytest/pytest-fixtures/) pytest style fixtures (http://pythontesting.net/framework/pytest/pytest-fixtures-nuts-bolts) pytest parameterized fixtures (http://pythontesting.net/framework/pytest/pytest-fixtures-nuts-bolts/#params)
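Here is a minimal sketch of the Given-When-Then structure in pytest, with the "given" precondition code moved into a fixture as the episode suggests. The `Cart` class is a hypothetical system under test, invented for illustration:

```python
import pytest

# Hypothetical system under test: a tiny shopping cart.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

@pytest.fixture
def cart_with_one_item():
    # Given: a cart that already contains one item.
    # Precondition code lives in the fixture, so other tests can reuse it.
    cart = Cart()
    cart.add("book")
    return cart

def test_adding_second_item(cart_with_one_item):
    # When: a second item is added
    cart_with_one_item.add("pen")
    # Then: the cart holds both items
    assert cart_with_one_item.items == ["book", "pen"]
```

Keeping each section to a comment-delimited block makes the test's assumptions and expected outcome visible at a glance.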
1/31/201620 minutes, 24 seconds
Episode Artwork

9: Harry Percival : Testing Web Apps with Python, Selenium, Django

Intro to Harry Percival, his background and story of how he got into TDD and ended up writing a book (http://amzn.to/1SqW1t3) Comparing using unittest and pytest with applicability to testing django projects. Functional end to end testing with selenium. The django test client for middle level tests. test isolation django and isolated unit tests unit tests vs integration tests Testing done by the development team without an external QA Double loop TDD: Functional test first, then unit tests Spikes: investigations without tests Harry's experience with having a freely available web version of a book that is also intended to be sold. Update: Comment from Harry Percival on 19-Jan-2014 I might have been a bit down on unit tests vs functional tests in that "unit tests never fail comment". Not true at all, particularly as we've just been thru upgrading django on our core system, and the unit tests really saved our bacon on that one... Links Test-Driven Development with Python (http://amzn.to/1SqW1t3) Obey the Testing Goat (http://pythontesting.net/obey-the-testing-goat) - Harry's site dedicated to the book and related posts. Python Testing with unittest, nose, pytest (http://pythontesting.net/book) Gary Bernhardt's talk, Boundaries talk (http://pythontesting.net/boundaries-talk) including a discussion of "Functional Core, Imperative Shell". Video of Boundaries talk on youtube (http://pythontesting.net/boundaries-talk-youtube) Special Guest: Harry Percival.
1/19/201645 minutes, 14 seconds
Episode Artwork

8: Agile vs Agility : Agile Is Dead (Long Live Agility)

In today's podcast, I dodge the question of "What do you think of Agile?" by reading an essay from Dave Thomas (http://pragdave.me/blog/2014/03/04/time-to-kill-agile/)
12/15/20158 minutes, 38 seconds
Episode Artwork

7: The Waterfall Model and “Managing the Development of Large Software Systems”

The waterfall model has been used and modified and changed and rebelled against since before I started programming. Waterfall is such an important character in the story of software development that we should get to know it a bit better.
10/21/201529 minutes, 38 seconds
Episode Artwork

6: Writing software is like nothing else

My perspective on writing software comes from my experience: where I grew up, what eras I lived through, what my economic and geographical circumstances have been, when I learned to code, and what projects I've worked on.
10/20/20156 minutes, 55 seconds
Episode Artwork

5: Test Classes: No OO experience required

Topics:
- Setup and teardown
- Benefits of test fixtures: code reuse, cleanup of resources, errors vs failures, focusing your thinking on what you are testing and what you are not, scoping for efficiency
- Brief look at pytest named fixtures
References:
- pytest fixtures series (http://pythontesting.net/framework/pytest/pytest-fixtures/)
- pytest fixtures nuts & bolts (http://pythontesting.net/framework/pytest/pytest-fixtures-nuts-bolts/)
- pytest session scoped fixtures (http://pythontesting.net/framework/pytest/pytest-session-scoped-fixtures/)
- unittest fixtures (http://pythontesting.net/framework/unittest/unittest-fixtures/)
- nose fixtures mentioned in introduction (http://pythontesting.net/framework/nose/nose-introduction/#fixtures)
- nose fixture reference post (http://pythontesting.net/framework/nose/nose-fixture-reference/)
- how to run a single class (http://pythontesting.net/framework/specify-test-unittest-nosetests-pytest/)
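As a minimal sketch of the episode's topic (this example is mine, not from the show), grouping related tests in a unittest test class gives every test method its own fresh fixture, because setUp and tearDown run around each method; no object-oriented design experience is needed beyond the class syntax:

```python
import unittest

class TestShoppingCart(unittest.TestCase):
    """Hypothetical tests for a list-based 'cart' resource."""

    def setUp(self):
        # Runs before every test method: each test gets a fresh cart,
        # so tests cannot leak state into each other.
        self.cart = []

    def tearDown(self):
        # Runs after every test method: release the resource.
        self.cart = None

    def test_starts_empty(self):
        self.assertEqual(len(self.cart), 0)

    def test_add_item(self):
        self.cart.append("apple")
        self.assertIn("apple", self.cart)
```

Run it with `python -m unittest <file>` or point pytest at the same file; pytest collects and runs unittest-style classes too.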
9/23/20157 minutes, 34 seconds
Episode Artwork

4: Test Fixtures: Setup, Teardown, and so much more

Topics:
- Setup and teardown
- Benefits of test fixtures: code reuse, cleanup of resources, errors vs failures, focusing your thinking on what you are testing and what you are not, scoping for efficiency
- Brief look at pytest named fixtures
References:
- pytest fixtures series (http://pythontesting.net/framework/pytest/pytest-fixtures/)
- pytest fixtures nuts & bolts (http://pythontesting.net/framework/pytest/pytest-fixtures-nuts-bolts/)
- pytest session scoped fixtures (http://pythontesting.net/framework/pytest/pytest-session-scoped-fixtures/)
- unittest fixtures (http://pythontesting.net/framework/unittest/unittest-fixtures/)
- nose fixtures mentioned in introduction (http://pythontesting.net/framework/nose/nose-introduction/#fixtures)
- nose fixture reference post (http://pythontesting.net/framework/nose/nose-fixture-reference/)
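To make the setup/teardown idea concrete, here is a small sketch of a pytest named fixture (my illustration, not code from the episode; `make_sample_db` and `sample_db` are hypothetical names): everything before the `yield` is setup, everything after it is teardown, and any test that names the fixture as a parameter gets the resource:

```python
import pytest

def make_sample_db():
    # Hypothetical setup helper: build the data a test needs.
    return {"users": ["alice", "bob"]}

@pytest.fixture
def sample_db():
    db = make_sample_db()   # setup: runs before the test
    yield db                # hand the resource to the test
    db.clear()              # teardown: runs after the test finishes

def test_has_users(sample_db):
    assert "alice" in sample_db["users"]
```

Because the fixture is named rather than tied to a class, any test in the file (or in conftest.py scope) can reuse it, which is the code-reuse benefit the notes mention.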
9/11/201513 minutes, 47 seconds
Episode Artwork

3: Why test?

Answering a listener question: why testing? What are the benefits? Why automated testing over manual testing? Why test first? Why do automated testing during development? Why test to the user-level API? After describing my ideal test strategy and project, I list:
- Business-related, practical benefits of testing
- Personal reasons to embrace testing
- Pragmatic, day-to-day developer benefits of testing
9/2/201526 minutes
Episode Artwork

2: Pytest vs Unittest vs Nose

I list my requirements for a test framework and discuss how pytest, unittest, and nose measure up to those requirements. Mentioned:
- pytest (http://pythontesting.net/framework/pytest/pytest-introduction/)
- unittest (http://pythontesting.net/framework/unittest/unittest-introduction/)
- nose (http://pythontesting.net/framework/nose/nose-introduction/)
- delayed assert (http://pythontesting.net/strategy/delayed-assert/)
- pytest-expect (http://pythontesting.net/pytest-expect/)
- doctest (http://pythontesting.net/framework/doctest/doctest-introduction/)
I did the audio processing differently for this episode. Please let me know how it sounds, if there are any problems, etc.
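The style difference between the frameworks can be shown in a few lines (my sketch, not from the episode): unittest wants a TestCase class with camelCase assert helpers, while pytest accepts a plain function with a bare `assert` (nose, now unmaintained, also collected plain functions):

```python
import unittest

# unittest style: a class deriving from TestCase, assertEqual helper
class TestAdd(unittest.TestCase):
    def test_add(self):
        self.assertEqual(1 + 2, 3)

# pytest style: a plain function and a bare assert;
# pytest rewrites the assert to report both sides on failure
def test_add():
    assert 1 + 2 == 3
```

pytest will happily collect and run both of these, which makes migrating a unittest suite incremental rather than all-or-nothing.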
8/20/201512 minutes, 17 seconds