Planet Python
Last update: February 18, 2025 09:43 PM UTC
February 18, 2025
PyCoder’s Weekly
Issue #669: Joining Strings, MongoDB in Django, Mobile Wheels, and More (Feb. 18, 2025)
#669 – FEBRUARY 18, 2025
View in Browser »
How to Join Strings in Python
In this tutorial, you’ll learn how to use Python’s built-in .join() method to combine string elements from an iterable into a single string with a specified separator. You’ll also learn about common pitfalls and how CPython makes .join() work efficiently.
REAL PYTHON
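As a taste of what the tutorial covers, a minimal sketch (the example values are our own, not from the tutorial):

```python
# Join a list of strings with a separator string.
words = ["red", "green", "blue"]
print(", ".join(words))  # red, green, blue

# Common pitfall: every element must already be a string,
# so convert non-strings first.
numbers = [1, 2, 3]
print("-".join(str(n) for n in numbers))  # 1-2-3
```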
Creating the MongoDB Database Backend for Django
Django supports a number of relational databases, but to go NoSQL you need third-party tools. This is about to change, as a MongoDB backend is in development. This post covers the history of MongoDB and Django and how the new code is structured.
JIB ADEGUNLOYE
Postgres, Now with Built-in Warehousing
Why manage two databases when one does it all? Crunchy Data Warehouse keeps your transactional database running smoothly while adding warehouse features like querying object storage, BI tool connections, and more. Scale efficiently with the Postgres you trust, without the complexity →
CRUNCHY DATA sponsor
PyPI Now Supports iOS and Android Wheels
PyPI now supports iOS and Android wheels, making it easier for Python developers to distribute mobile packages.
SARAH GOODING • Shared by Sarah Gooding
Python Jobs
Backend Software Engineer (Anywhere)
Articles & Tutorials
Charlie Marsh: Accelerating Python Tooling With Ruff and uv
Are you looking for fast tools to lint your code and manage your projects? How is the Rust programming language being used to speed up Python tools? This week on the show, we speak with Charlie Marsh about his company, Astral, and their tools, uv and Ruff.
REAL PYTHON podcast
Managing Django’s Queue
Carlton is one of the core developers of Django. This post talks about staying on top of the incoming pull requests, bug fixes, and everything else in the development queue.
CARLTON GIBSON
Unify Distributed Data from Edge-to-Cloud
Meet HiveMQ Pulse: Built to organize distributed data into a structured namespace for seamless access from edge-to-cloud. Gain insights from distributed devices and systems, with a single source of truth for your data. Get early access →
HIVEMQ sponsor
Shipping Software on Time and on Budget
This detailed post covers the things you can do to get better at delivering on time and on budget. It includes a lot of good references as well.
CARLTON GIBSON
Great Tables
Talk Python To Me interviews Rich Iannone and Michael Chow from Posit. They discuss the transformative power of data tables with the Great Tables library.
KENNEDY, IANNONE, & CHOW podcast
pytest-mock: Mocking in pytest
pytest-mock is currently the #3 pytest plugin. It is a wrapper around unittest.mock. This episode covers what mocking is and how to do it well in pytest.
BRIAN OKKEN podcast
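Since pytest-mock wraps the standard library’s unittest.mock, the core idea can be sketched with plain unittest.mock (with the plugin you’d request the `mocker` fixture in a test and call `mocker.patch()` instead; the `client` object below is our own illustration):

```python
from unittest.mock import MagicMock

def get_status(client):
    # Code under test: calls out through a client we can replace.
    return client.get("https://example.com").status_code

# Build a stand-in whose return values we control.
fake_client = MagicMock()
fake_client.get.return_value.status_code = 200

print(get_status(fake_client))  # 200
fake_client.get.assert_called_once_with("https://example.com")
```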
Tail-Call Interpreter Added to CPython
New code for a tail-call interpreter has been added to the Python 3.14 alpha. It is an opt-in feature for now, but promises performance improvements.
PYTHON.ORG
Python Free-Threading Guide
This is a centralized collection of documentation and trackers around compatibility with free-threaded CPython for the Python open source ecosystem.
QUANSIGHT
re.Match.groupdict
This quick TIL post shows how you can use the .groupdict() method from a regex match to get a dictionary with all named groups.
RODRIGO GIRÃO SERRÃO
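For illustration, a minimal sketch of the technique (the pattern is our own, not from the post):

```python
import re

# Named groups label the parts of the match; .groupdict() returns
# them as a {name: captured_text} dictionary.
pattern = r"(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2})"
match = re.match(pattern, "2025-02-18")
print(match.groupdict())  # {'year': '2025', 'month': '02', 'day': '18'}
```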
The 10-Step Checklist for Continuous Delivery
Learn how to implement Continuous Delivery with this 10-step guide featuring actionable insights, examples, and best practices.
ANTHONY CAMPOLO
Exploring ICEYE’s Satellite Imagery
This article does a deep dive data-analysis on satellite imagery of an airport. It uses pandas, geopandas, PyTorch, and more.
MARK LITWINTSCHIK
Terminal Colours Are Tricky
Choosing just the right palette for your terminal can be tricky. This article talks about the why and how.
JULIE EVANS
Projects & Code
Validoopsie: Data Validation Made Effortless!
GITHUB.COM/AKMALSOLIEV • Shared by Akmal Soliev
Events
Weekly Real Python Office Hours Q&A (Virtual)
February 19, 2025
REALPYTHON.COM
Workshop: Creating Python Communities
February 20 to February 21, 2025
PYTHON-GM.ORG
PyData Bristol Meetup
February 20, 2025
MEETUP.COM
PyLadies Dublin
February 20, 2025
PYLADIES.COM
Django Girls Koforidua
February 21 to February 23, 2025
DJANGOGIRLS.ORG
Python Weekend Abuja
February 21, 2025
CODECAMPUS.COM.NG
DjangoCongress JP 2025
February 22 to February 23, 2025
DJANGOCONGRESS.JP
PyConf Hyderabad 2025
February 22 to February 24, 2025
PYCONFHYD.ORG
PyCon Namibia
February 24 to February 28, 2025
PYCON.ORG
PyCon APAC 2025
March 1 to March 3, 2025
PYTHON.PH
Happy Pythoning!
This was PyCoder’s Weekly Issue #669.
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
February 18, 2025 07:30 PM UTC
Real Python
Concatenating Strings in Python Efficiently
Python string concatenation is a fundamental operation that combines multiple strings into a single string. In Python, you can concatenate strings using the + operator or append them with +=. For more efficient concatenation, especially when working with lists of strings, the .join() method is recommended. Other techniques include using StringIO for large datasets and the print() function for quick screen output.
By the end of this video course, you’ll understand that you can:
- Concatenate strings in Python using the + and += operators.
- Use += to append a string to an existing string.
- Use the .join() method to combine strings in a list in Python.
- Handle a stream of strings efficiently by using StringIO as a container with a file-like interface.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
February 18, 2025 02:00 PM UTC
PyCharm
Which Is the Best Python Web Framework: Django, Flask, or FastAPI?
Search for Python web frameworks, and three names will consistently come up: Django, Flask, and FastAPI. Our latest Python Developer Survey Results confirm that these three frameworks remain developers’ top choices for backend web development with Python.
All three frameworks are open-source and compatible with the latest versions of Python.
But how do you determine which web framework is best for your project? Here, we’ll look at the pros and cons of each and compare how they stack up against one another.
Django
Django is a “batteries included”, full-stack web framework used by the likes of Instagram, Spotify, and Dropbox, to name but a few. Pitched as “the web framework for perfectionists with deadlines”, the Django framework was designed to make it easier and quicker to build robust web apps.
First made available as an open-source project in 2005, Django is a mature project that remains in active development 20 years later. It’s suitable for many web applications, including social media, e-commerce, news, and entertainment sites.
Django follows a model-view-template (MVT) architecture, where each component has a specific role. Models are responsible for handling the data and defining its structure. The views manage the business logic, processing requests and fetching the necessary data from the models. Finally, templates present this data to the end user – similar to views in a model-view-controller (MVC) architecture.
As a full-stack web framework, Django can be used to build an entire web app (from database to HTML and JavaScript frontend).
Alternatively, you can use the Django REST Framework to combine Django with a frontend framework (such as React) to build both mobile and browser-based apps.
Explore our comprehensive Django guide, featuring an overview of prerequisite knowledge, a structured learning path, and additional resources to help you master the framework.
Django advantages
There are plenty of reasons why Django remains one of the most widely used Python web frameworks, including:
- Extensive functionality: With a “batteries included” approach, Django offers built-in features like authentication, caching, data validation, and session management. Its don’t repeat yourself (DRY) principle speeds up development and reduces bugs.
- Ease of setup: Django simplifies dependency management with its built-in features, reducing the need for external packages. This helps streamline the initial setup and minimizes compatibility issues, so you can get up and running sooner.
- Database support: Django’s ORM (object-relational mapping) makes data handling more straightforward, enabling you to work with databases like SQLite, MySQL, and PostgreSQL without needing SQL knowledge. However, it’s less suitable for non-relational databases like MongoDB.
- Security: Built-in defenses against common vulnerabilities such as cross-site scripting (XSS), SQL injection, and clickjacking help quickly secure your app from the start.
- Scalability: Despite being monolithic, Django allows for horizontal scaling of the application’s architecture (business logic and templates), caching to ease database load, and asynchronous processing to improve efficiency.
- Community and documentation: Django has a vast, active community and detailed documentation, with tutorials and support readily available.
Django disadvantages
Despite its many advantages, there are a few reasons you might want to look at options other than Django when developing your next web app.
- Heavyweight: Its “batteries included” design can be too much for smaller apps, where a lightweight framework like Flask may be more appropriate.
- Learning curve: Django’s extensive features naturally come with a steeper learning curve, though there are plenty of resources available to help new developers.
- Performance: Django is generally slower compared to other frameworks like Flask and FastAPI, but built-in caching and asynchronous processing can help improve the response times.
Flask
Flask is a Python-based micro-framework for backend web development. However, don’t let the term “micro” deceive you. As we’ll see, Flask isn’t limited only to smaller web apps.
Instead, Flask is designed with a simple core based on Werkzeug WSGI (Web Server Gateway Interface) and Jinja2 templates. Well-known users of Flask include Netflix, Airbnb, and Reddit.
Flask was initially created as an April Fools’ Day joke and released as an open-source project in 2010, a few years after Django. The micro-framework’s approach is fundamentally different from Django’s. While Django takes a “batteries included” style and comes with a lot of the functionality you may need for building web apps, Flask is much leaner.
The philosophy behind the micro-framework is that everyone has their preferences, so developers should be free to choose their own components. For this reason, Flask doesn’t include a database, ORM (object-relational mapper), or ODM (object-document mapper).
When you build a web app with Flask, very little is decided for you upfront. This can have significant benefits, as we’ll discuss below.
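As a sketch of how little is decided for you, here is the canonical minimal Flask app (the route and strings are our own illustration):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Flask handles routing and the request/response cycle;
    # everything else (database, auth, ...) is your choice.
    return "Hello from Flask!"

if __name__ == "__main__":
    app.run(debug=True)  # development server only
```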
Flask advantages
We’ve seen usage of Flask grow steadily over the last five years through our State of the Developer Ecosystem survey – it overtook Django for the first time in 2021.
Some reasons for choosing Flask as a backend web framework include:
- Lightweight design: Flask’s minimalist approach offers a flexible alternative to Django, making it ideal for smaller applications or projects where Django’s extensive features may feel excessive. However, Flask isn’t limited to small projects – you can extend it as needed.
- Flexibility: Flask allows you to choose the libraries and frameworks for core functionality, such as data handling and user authentication. This enables you to select the best tools for your project and extend it in unforeseen ways.
- Scalability: Flask’s modular design makes it easy to scale horizontally. Using a NoSQL database layer can further enhance scalability.
- Shallow learning curve: Its simple design makes Flask easy to learn, though you may need to explore extensions for more complex apps.
- Community and documentation: Flask has extensive (if somewhat technical) documentation and a clear codebase. While its community is smaller than Django’s, Flask remains active and is growing steadily.
Flask disadvantages
While Flask has a lot to offer, there are a few things to consider before you decide to use it in your next web development project.
- Bring your own everything: Flask’s micro-framework design and flexibility require you to handle much of that core functionality, including data validation, session management, and caching. While this flexibility can be beneficial, it can also slow the development process, as you’ll need to find existing libraries or build features from scratch. Additionally, dependencies must be managed over time to ensure they remain compatible with Flask.
- Security: Flask has minimal built-in security. Beyond securing client-side cookies, you must implement web security best practices and ensure the security of the dependencies you include, applying updates as needed.
- Performance: While Flask performs slightly better than Django, it lags behind FastAPI. Flask offers some ASGI support (the standard used by FastAPI), but it is more tightly tied to WSGI.
FastAPI
As the name suggests, FastAPI is a micro-framework for building high-performance web APIs with Python. Despite being relatively new – it was first released as an open-source project in 2018 – FastAPI has quickly become popular among developers, ranking third in our list of the most popular Python web frameworks since 2021.
FastAPI is based on Uvicorn, an ASGI (Asynchronous Server Gateway Interface) server, and Starlette, a web micro-framework. FastAPI adds data validation, serialization, and documentation to streamline building web APIs.
When developing FastAPI, the micro-framework’s creator drew on the experiences of working with many different frameworks and tools. Whereas Django was developed before frontend JavaScript web frameworks (such as React or Vue.js) were prominent, FastAPI was designed with this context in mind.
The emergence of OpenAPI (formerly Swagger) as a format for structuring and documenting APIs in the preceding years provided an industry standard that FastAPI could leverage.
Beyond the implicit use case of creating RESTful APIs, FastAPI is ideal for applications that require real-time responses, such as messaging platforms and dashboards. Its high performance and asynchronous capabilities make it a good fit for data-intensive apps, including machine learning models, data processing, and analytics.
FastAPI advantages
FastAPI first received its own category in our State of the Developer Ecosystem survey in 2021, with 14% of respondents using the micro-framework.
Since then, usage has increased to 20%, alongside a slight dip in the use of Flask and Django.
These are some of the reasons why developers are choosing FastAPI:
- Performance: Designed for speed, FastAPI supports asynchronous processing and bi-directional web sockets (courtesy of Starlette). It outperformed both Django and Flask in benchmark tests, making it ideal for high-traffic applications.
- Scalability: Like Flask, FastAPI is highly modular, making it easy to scale and ideal for containerized deployments.
- Adherence to industry standards: FastAPI is fully compatible with OAuth 2.0, OpenAPI (formerly Swagger), and JSON Schema. As a result, you can implement secure authentication and generate your API documentation with minimal effort.
- Ease of use: FastAPI’s use of Pydantic for type hints and validation speeds up development by providing type checks, auto-completion, and request validation.
- Documentation: FastAPI comes with a sizable body of documentation and growing third-party resources, making it accessible for developers at all levels.
FastAPI disadvantages
Before deciding that FastAPI is the best framework for your next project, bear in mind the following:
- Maturity: Being newer, FastAPI lacks the maturity of Django or Flask. Its community is smaller, and the developer experience may be less polished as a result.
- Compatibility: As a micro-framework, FastAPI requires additional functionality for fully featured apps. There are fewer compatible libraries compared to Django or Flask, which may require you to develop your own extensions.
Choosing between Flask, Django, and FastAPI
So, which is the best Python web framework? As with many programming things, the answer is “it depends”.
The right choice hinges on answering a few questions: What kind of app are you building? What are your priorities? How do you expect your project to grow in the future?
All three popular Python web frameworks come with unique strengths, so assessing them in the context of your application will help you make the best decision.
Django is a great option if you need standard web app functionality out of the box, making it suitable for projects that require a more robust structure. It’s particularly advantageous if you’re using a relational database, as its ORM simplifies data management and provides built-in security features. However, the extensive functionality may feel overwhelming for smaller projects or simple applications.
Flask, on the other hand, offers greater flexibility. Its minimalist design enables developers to pick and choose the extensions and libraries they want, making it suitable for projects where you need to customize features. This approach works well for startups or MVPs, where your requirements might change and evolve rapidly. While Flask is easy to get started with, keep in mind that building more intricate applications will mean exploring various extensions.
FastAPI is a strong contender when speed is of the essence, especially for API-first or machine learning projects. It uses modern Python features like type hints to provide automatic data validation and documentation. FastAPI is an excellent choice for applications that need high performance, like microservices or data-driven APIs. Despite this, it may not be as feature-rich as Django or Flask in terms of built-in functionality, which means you might need to implement additional features manually.
For a deeper comparison between Django and the other web frameworks, check out our other guides.
Python web framework overview
|  | Django | Flask | FastAPI |
| --- | --- | --- | --- |
| Design philosophy | Full-stack framework designed for web apps with relational databases. | Lightweight backend micro-framework. | Lightweight micro-framework for building web APIs. |
| Ease of use | “Batteries included” approach means everything you need is in the box, accelerating development. That said, the amount of functionality available can present a steep learning curve. | As Flask is a micro-framework, there is less code to familiarize yourself with upfront. High levels of flexibility to choose your preferred libraries and extensions. However, having less functionality built in requires more external dependencies. | Like Flask, less functionality is built in than with Django. Type hints and validation speed up development and reduce errors. Compatible with OpenAPI for automatic API reference docs. |
| Extensibility | Largest selection of compatible packages of the three. | Large number of compatible packages. | Fewer compatible packages than Flask or Django. |
| Performance | Good, but not as fast as Flask or FastAPI. | Slightly faster than Django but not as performant as FastAPI. | Fastest of the three. |
| Scalability | Monolithic design can limit scalability. Support for async processing can improve performance under high load. | Highly scalable thanks to a lightweight and modular design. | Highly scalable thanks to a lightweight and modular design. |
| Security | Many cybersecurity defenses built in. | Client-side cookies secured by default. Other security protections need to be added, and dependencies should be checked for vulnerabilities. | Support for OAuth 2.0 out of the box. Other security protections need to be added, and dependencies should be checked for vulnerabilities. |
| Maturity | Open source since 2005 and receives regular updates. | Open source since 2010 and receives regular updates. | Open source since 2018 and receives regular updates. |
| Community | Large and active following. | Active and likely to keep growing as Flask remains popular. | Smaller following than Django or Flask. |
| Documentation | The most extensive and robust official documentation. | Extensive official documentation. | The least extensive official documentation, owing to its relative youth. |
Further reading
- The State of Django 2024
- What is the Django Web Framework?
- How to Learn Django
- An Introduction to Django Views
- The Ultimate Guide to Django Templates
- Django Project Ideas
Start your web development project with PyCharm
Regardless of your primary framework, you can access all the essential web development tools in a single IDE. PyCharm provides built-in support for Django, FastAPI, and Flask, while also offering top-notch integration with frontend frameworks like React, Angular, and Vue.js.
February 18, 2025 10:00 AM UTC
Python Software Foundation
Where is the PSF? 2025 Edition
Where to Find the PSF Online
One of the main ways we reach people for news and information about the PSF and Python is on social media. There’s been a lot of uncertainty around X as well as some other platforms popping up, so we wanted to share a brief round-up of other places you can find us:
- Read our blog: It’s here! You found it! You can always find our latest updates here at pyfound.blogspot.com.
- Subscribe to our newsletter: We send out an email newsletter about once every quarter chock full of news about PSF! You can sign up here: https://www.python.org/psf/newsletter/
- Follow us on LinkedIn: https://www.linkedin.com/company/python-software-foundation
- Follow us on Mastodon: https://fosstodon.org/@thepsf
- Follow us on Bluesky: https://bsky.app/profile/python.org
- Follow us on YouTube: https://www.youtube.com/@ThePSF
- We're still on X (for announcements only): https://twitter.com/ThePSF
As always, if you are looking for technical support rather than news about the foundation, we have collected links and resources here for people who are new or looking to get deeper into the Python programming language: https://www.python.org/about/gettingstarted/
You can also ask questions about Python or the PSF on Python’s Discuss forum. The PSF category is the best place to reach us on the forum!
Where to Find PyCon US Online
Here’s where you can go for updates and information specific to PyCon US:
- Read the PyCon US blog: https://pycon.blogspot.com/
- Subscribe to the PyCon US Newsletter. We send out an email newsletter about four times a year, during the run up to PyCon US. You can sign up here: bit.ly/3ElTPzv
- Follow PyCon US on Mastodon: https://fosstodon.org/@pycon
- Follow PyCon US on Bluesky: https://bsky.app/profile/pycon.us
- Follow PyCon US on YouTube: https://www.youtube.com/@PyConUS
- Follow PyCon US on X (for announcements only): https://twitter.com/PyCon
Where to Find PyPI Online
Here’s where you can go for updates and information specific to PyPI:
- Read the PyPI blog: https://blog.pypi.org/
- Follow PyPI on Mastodon: https://fosstodon.org/@pypi
- Follow PyPI on Bluesky: https://bsky.app/profile/pypi.org
- Follow PyPI on X (for announcements only): https://x.com/pypi
Thank you for keeping in touch, and see you around the Internet!
February 18, 2025 06:30 AM UTC
February 17, 2025
Real Python
Python News Roundup: February 2025
The new year has brought a flurry of activity to the Python community. New bugfix releases of Python 3.12 and 3.13 show that the developers seemingly never sleep. A new type of interpreter is slated for the upcoming Python 3.14 as part of ongoing efforts to improve Python’s performance.
Poetry takes a giant leap toward compatibility with other project management tools with the release of version 2. If you’re interested in challenging yourself with some programming puzzles, check out the new season of Coding Quest.
Time to jump in! Enjoy this tour of what’s happening in the world of Python!
Poetry Version 2 Adds Compatibility
Poetry is a trusted and powerful project and dependency manager for Python. Initially created by Sébastien Eustace in 2018, it reached its Version 1 milestone in 2019. Since then, it has grown to be one of the most commonly used tools for managing Python projects.
On January 5, 2025, the Poetry team announced the release of Poetry 2.0.0. This major release comes with many updates. One of the most requested changes is compatibility with PEP 621, which describes how to specify project metadata in pyproject.toml.
Most of the common tools for project management, including setuptools, uv, Hatch, Flit, and PDM, use pyproject.toml and the project table in a consistent way, as defined in PEP 621. With Poetry on board as well, you can more simply migrate your project from one tool to another.
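For reference, a minimal PEP 621-style project table might look like this (the package name, versions, and dependency are illustrative, not from the article):

```toml
[project]
name = "my-package"
version = "0.1.0"
description = "An example package"
requires-python = ">=3.9"
dependencies = [
    "requests>=2.31",
]

[build-system]
requires = ["poetry-core>=2.0"]
build-backend = "poetry.core.masonry.api"
```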
This improved compatibility with the rest of the Python ecosystem comes at a price. There are a few breaking changes in Poetry 2 compared to earlier versions. If you’re already using Poetry, take care when updating to the latest version.
The changelog describes all changes, and you can read the documentation for advice on how to migrate your existing projects to the new style of configuration.
The Python Team Releases Bugfix Versions for 3.12 and 3.13
Read the full article at https://realpython.com/python-news-february-2025/ »
February 17, 2025 02:00 PM UTC
Python Bytes
#420 90% Done in 50% of the Available Time
<strong>Topics covered in this episode:</strong><br> <ul> <li><strong><a href="https://peps.python.org/pep-0772/?featured_on=pythonbytes">PEP 772 – Packaging governance process</a></strong></li> <li><strong><a href="https://www.mongodb.com/blog/post/mongodb-django-backend-now-available-public-preview?utm_source=www.pythonweekly.com&utm_medium=newsletter&utm_campaign=python-weekly-issue-687-february-13-2025&_bhlid=ac970bf5150af48b53b11f639dd520db04c9a2aa&featured_on=pythonbytes">Official Django MongoDB Backend</a> Now Available in Public Preview</strong></li> <li><a href="https://qntm.org/devphilo?featured_on=pythonbytes"><strong>Developer Philosophy</strong></a></li> <li><strong><a href="https://docs.python.org/release/3.13.2/whatsnew/changelog.html#python-3-13-2">Python 3.13.2</a> released</strong></li> <li><strong>Extras</strong></li> <li><strong>Joke</strong></li> </ul><a href='https://www.youtube.com/watch?v=CW4mZ3XNfY8' style='font-weight: bold;'data-umami-event="Livestream-Past" data-umami-event-episode="420">Watch on YouTube</a><br> <p><strong>About the show</strong></p> <p>Sponsored by us! 
Support our work through:</p> <ul> <li>Our <a href="https://training.talkpython.fm/?featured_on=pythonbytes"><strong>courses at Talk Python Training</strong></a></li> <li><a href="https://courses.pythontest.com/p/the-complete-pytest-course?featured_on=pythonbytes"><strong>The Complete pytest Course</strong></a></li> <li><a href="https://www.patreon.com/pythonbytes"><strong>Patreon Supporters</strong></a></li> </ul> <p><strong>Connect with the hosts</strong></p> <ul> <li>Michael: <a href="https://fosstodon.org/@mkennedy"><strong>@mkennedy@fosstodon.org</strong></a> <strong>/</strong> <a href="https://bsky.app/profile/mkennedy.codes?featured_on=pythonbytes"><strong>@mkennedy.codes</strong></a> <strong>(bsky)</strong></li> <li>Brian: <a href="https://fosstodon.org/@brianokken"><strong>@brianokken@fosstodon.org</strong></a> <strong>/</strong> <a href="https://bsky.app/profile/brianokken.bsky.social?featured_on=pythonbytes"><strong>@brianokken.bsky.social</strong></a></li> <li>Show: <a href="https://fosstodon.org/@pythonbytes"><strong>@pythonbytes@fosstodon.org</strong></a> <strong>/</strong> <a href="https://bsky.app/profile/pythonbytes.fm"><strong>@pythonbytes.fm</strong></a> <strong>(bsky)</strong></li> </ul> <p>Join us on YouTube at <a href="https://pythonbytes.fm/stream/live"><strong>pythonbytes.fm/live</strong></a> to be part of the audience. Usually <strong>Monday</strong> at 10am PT. Older video versions available there too.</p> <p>Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to <a href="https://pythonbytes.fm/friends-of-the-show">our friends of the show list</a>, we'll never share it. 
</p> <p><strong>Brian #1:</strong> <a href="https://peps.python.org/pep-0772/?featured_on=pythonbytes">PEP 772 – Packaging governance process</a> </p> <ul> <li>draft, created 21-Jan, by Barry Warsaw, Deb Nicholson, Pradyun Gedam</li> <li>“As Python packaging has matured, several interrelated problems with the current way of managing the technical development, decision making and processes have become apparent.”</li> <li>“This PEP proposes a Python Packaging Council with broad authority over packaging standards, tools, and implementations. Like the Python Steering Council, the Packaging Council seeks to exercise this authority as rarely as possible; instead, they use this power to establish standard processes.”</li> <li>PEP discusses <ul> <li>PyPA, Packaging-WG, Interoperability Standards, Python Steering Council, and Expectations of an elected Packaging Council</li> <li>A specification with <ul> <li>Composition: 5 people</li> <li>Mandate, Responsibilities, Delegations, Process, Terms, etc.</li> </ul></li> </ul></li> </ul> <p><strong>Michael #2:</strong> <a href="https://www.mongodb.com/blog/post/mongodb-django-backend-now-available-public-preview?utm_source=www.pythonweekly.com&utm_medium=newsletter&utm_campaign=python-weekly-issue-687-february-13-2025&_bhlid=ac970bf5150af48b53b11f639dd520db04c9a2aa&featured_on=pythonbytes">Official Django MongoDB Backend</a> Now Available in Public Preview</p> <ul> <li>Over the last few years, Django developers have increasingly used MongoDB, presenting an opportunity for an official MongoDB-built Python package to make integrating both technologies as painless as possible.</li> <li>Features <ul> <li><strong>The ability to use Django models with confidence</strong>. 
Developers can use Django <a href="https://docs.djangoproject.com/en/5.1/topics/db/models/?featured_on=pythonbytes">models</a> to represent MongoDB documents, with support for Django forms, validations, and authentication.</li> <li><strong>Django admin support</strong>. The package allows users to fire up the Django admin page as they normally would, with full support for <a href="https://docs.djangoproject.com/en/5.1/topics/migrations/#module-django.db.migrations">migrations</a> and database schema history.</li> <li><strong>Native connecting from settings.py</strong>. Just as with any other database provider, developers can customize the database engine in settings.py to get MongoDB up and running.</li> <li><strong>MongoDB-specific querying optimizations</strong>. Field lookups have been replaced with aggregation calls (aggregation stages and aggregate operators), JOIN operations are represented through $lookup, and it’s possible to build indexes right from Python.</li> <li><strong>Limited advanced functionality</strong>. While still in development, the package already has support for time series, projections, and XOR operations.</li> <li><strong>Aggregation pipeline support</strong>. Raw querying allows aggregation pipeline operators. 
Since aggregation is a superset of what traditional MongoDB Query API methods provide, it gives developers more functionality.</li> </ul></li> </ul> <p><strong>Brian #3:</strong> <a href="https://qntm.org/devphilo?featured_on=pythonbytes"><strong>Developer Philosophy</strong></a></p> <ul> <li>by qntm</li> <li>Intended as “advice for junior developers about personal dev philosophy”, I think these are just great tips to keep in mind.</li> <li>The items <ul> <li>Avoid, at all costs, arriving at a scenario where the ground-up rewrite starts to look attractive <ul> <li>This is less about “don’t do rewrites”, but about noticing the warning signs ahead of time.</li> </ul></li> <li>Aim to be 90% done in 50% of the available time <ul> <li>Great quote: “The first 90% of the job takes 90% of the time. The last 10% of the job takes the other 90% of the time.”</li> </ul></li> <li>Automate good practices</li> <li>Think about pathological data <ul> <li>“Nobody cares about the golden path. Edge cases are our <em>entire job</em>.”</li> <li>Brian’s note: But also think about the happy path. Documenting and testing what you think of as the happy path is a testing start and helps others understand your idea of how things are supposed to work.</li> </ul></li> <li>There’s usually a simpler way to write it</li> <li>Write code to be testable</li> <li>It is insufficient for code to be provably correct; it should be obviously, visibly, trivially correct <ul> <li>Brian’s note: Even if it’s obviously, visibly, trivially correct, it will still break. So test it anyway.</li> </ul></li> </ul></li> </ul> <p><strong>Michael #4:</strong> <a href="https://docs.python.org/release/3.13.2/whatsnew/changelog.html#python-3-13-2">Python 3.13.2</a> released</p> <ul> <li>Python 3.13’s second maintenance release. </li> <li>About 250 changes went into this update</li> <li>Also Python 3.12.9, Python 3.12’s ninth maintenance release already. 
Just 180 changes for 3.12, but it’s still worth upgrading.</li> <li>For us, it’s simply rebuilding our Docker base (i.e. --no-cache) with these lines: <pre><code>RUN curl -LsSf https://astral.sh/uv/install.sh | sh
RUN --mount=type=cache,target=/root/.cache uv venv --python 3.13 /venv </code></pre></li> </ul> <p><strong>Extras</strong> </p> <p>Brian:</p> <ul> <li>Still thinking about pytest plugins a lot.</li> <li>The <a href="https://pythontest.com/top-pytest-plugins/?featured_on=pythonbytes">top pytest plugin list</a> <ul> <li>Has been updated for Feb</li> <li>Is starting to include things without “pytest” in the name, like Hypothesis and Syrupy. <ul> <li>Eventually I’ll have to add “looking at trove classifiers” as part of the search, but for now, let me know if your favorite is missing.</li> </ul></li> <li>Includes T&C podcast episode links if I’ve covered it on the show. <ul> <li>There are 2 so far</li> </ul></li> </ul></li> </ul> <p>Michael:</p> <ul> <li>There's <a href="https://github.com/pyscript/pyscript/releases/tag/2025.2.1?featured_on=pythonbytes">a new release of PyScript</a> out. All the details are here; the highlight is new PyGame-CE support. Go play!</li> <li><a href="https://peps.python.org/pep-2026/?featured_on=pythonbytes">PEP 2026 – Calendar versioning for Python</a> rejected. :(</li> <li><a href="https://peps.python.org/pep-0759/?featured_on=pythonbytes">PEP 759 – External Wheel Hosting</a> withdrawn</li> </ul> <p><strong>Joke:</strong> </p> <ul> <li><a href="https://bsky.app/profile/bruno.rocha.social/post/3lhhearmiz22v?featured_on=pythonbytes">Pride Versioning</a></li> </ul>
February 17, 2025 08:00 AM UTC
Quansight Labs Blog
Mastering DuckDB when you're used to pandas or Polars
It's not as scary as you think
February 17, 2025 12:00 AM UTC
February 14, 2025
Kay Hayen
Nuitka this week #16
Hey Nuitka users! This started out as the idea of a weekly update, but that cadence hasn’t stuck, so we’re switching to writing these up whenever something interesting happens and pushing them out right away.
Nuitka Onefile Gets More Flexible: --onefile-cache-mode and {PROGRAM_DIR}
We’ve got a couple of exciting updates to Nuitka’s onefile mode that give you more control and flexibility in how you deploy your applications. These enhancements stem from real-world needs and demonstrate Nuitka’s commitment to providing powerful and adaptable solutions.
Taking Control of Onefile Unpacking: --onefile-cache-mode
Onefile mode is fantastic for creating single-file executables, but the management of the unpacking directory where the application expands has sometimes been a bit… opaque. Previously, Nuitka would decide whether to clean up this directory based on whether the path used runtime-dependent variables. This made sense in theory, but in practice, it could lead to unexpected behavior and made debugging onefile issues harder.
Now, you have complete control! The new --onefile-cache-mode option lets you explicitly specify what happens to the unpacking directory:
- --onefile-cache-mode=auto: This is the default behavior. Nuitka will remove the unpacking directory unless runtime-dependent values were used in the path specification. This is the same behavior as previous versions.
- --onefile-cache-mode=cached: The unpacking directory is not removed and becomes a persistent, cached directory. This is useful for debugging, inspecting the unpacked files, or if you have a use case that benefits from persistent caching of the unpacked data. The files will remain available for subsequent runs.
- --onefile-cache-mode=temporary: The unpacking directory is removed after the program exits.
This gives you the power to choose the behavior that best suits your needs. No more guessing!
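For example, to keep the unpacked files around between runs for inspection (a command-line sketch following the nuitka invocation shown later in this post; my_program.py is a placeholder script name):

```shell
# Build a onefile binary whose unpacking directory persists across runs:
nuitka --onefile --onefile-cache-mode=cached my_program.py

# Or force cleanup on exit, regardless of the tempdir spec:
nuitka --onefile --onefile-cache-mode=temporary my_program.py
```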
Relative Paths with {PROGRAM_DIR}
Another common request, particularly from users deploying applications in more restricted environments, was the ability to specify the onefile unpacking directory relative to the executable itself. Previously, you were limited to absolute paths or paths relative to the user’s temporary directory space.
We’ve introduced a new variable, {PROGRAM_DIR}, that you can use in the --onefile-tempdir-spec option. This variable is dynamically replaced at runtime with the full path to the directory containing the onefile executable.
For example:
nuitka --onefile --onefile-tempdir-spec="{PROGRAM_DIR}/.myapp_data" my_program.py
This would create a directory named .myapp_data inside the same directory as my_program.exe (or my_program on Linux/macOS) and unpack the application there. This is perfect for creating truly self-contained applications where all data and temporary files reside alongside the executable.
Nuitka Commercial and Open Source
These features, like many enhancements to Nuitka, originated from a request by a Nuitka commercial customer. This highlights the close relationship between the commercial offerings and the open-source core. While commercial support helps drive development and ensures the long-term sustainability of Nuitka, the vast majority of features are made freely available to all users.
Give it a Try!
This change will be in 2.7.
We encourage you to try out these new features and let us know what you think! As always, bug reports, feature requests, and contributions are welcome on GitHub.
February 14, 2025 11:00 PM UTC
Django Weblog
DjangoCongress JP 2025 Announcement and Live Streaming!
DjangoCongress JP 2025, to be held on Saturday, February 22, 2025 at 10 am (Japan Standard Time), will be broadcast live!
It will be streamed on the following YouTube Live channels:
This year there will be talks not only about Django, but also about FastAPI and other asynchronous web topics. There will also be talks on Django core development, Django Software Foundation (DSF) governance, and other topics from around the world. Simultaneous translation will be provided in both English and Japanese.
Schedule
ROOM1
- Gradually Shifting DRF Toward an Onion Architecture
- The Async Django ORM: Where Is it?
- From the FastAPI Front Lines
- Speed at Scale for Django Web Applications
- Streamlining API Development and Practical Replacement with Django Ninja
- Implementing Agentic AI Solutions in Django from scratch
- Diving into DSF governance: past, present and future
ROOM2
- Can Generative AI Build a Django App? (Let's Try It with FastAPI Too)
- Partial Use of Django in Digital Transformation (DX)
- You Can Do It! Django Testing (2025)
- Design Approaches for Authenticating Multiple User Types in Django
- Getting Knowledge from Django Hits: Using Grafana and Prometheus
- Culture Eats Strategy for Breakfast: Why Psychological Safety Matters in Open Source
- µDjango. The next step in the evolution of asynchronous microservices technology.
A public viewing of the event will be held in Tokyo, along with a reception, so please check the following connpass page if you plan to attend.
Registration (connpass page): DjangoCongress JP 2025 Public Viewing
February 14, 2025 10:12 PM UTC
Eli Bendersky
Decorator JITs - Python as a DSL
Spend enough time looking at Python programs and packages for machine learning, and you'll notice that the "JIT decorator" pattern is pretty popular. For example, this JAX snippet:
import jax.numpy as jnp
import jax
@jax.jit
def add(a, b):
    return jnp.add(a, b)
# Use "add" as a regular Python function
... = add(...)
Or the Triton language for writing GPU kernels directly in Python:
import triton
import triton.language as tl
@triton.jit
def add_kernel(x_ptr,
               y_ptr,
               output_ptr,
               n_elements,
               BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    block_start = pid * BLOCK_SIZE
    offsets = block_start + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    output = x + y
    tl.store(output_ptr + offsets, output, mask=mask)
In both cases, the function decorated with jit doesn't get executed by the Python interpreter in the normal sense. Instead, the code inside is more like a DSL (Domain Specific Language) processed by a special purpose compiler built into the library (JAX or Triton). Another way to think about it is that Python is used as a meta language to describe computations.
In this post I will describe some implementation strategies used by libraries to make this possible.
Preface - where we're going
The goal is to explain how different kinds of jit decorators work by using a simplified, educational example that implements several approaches from scratch. All the approaches featured in this post will be using this flow:
These are the steps that happen when a Python function wrapped with our educational jit decorator is called:
- The function is translated to an "expression IR" - Expr.
- This expression IR is converted to LLVM IR.
- Finally, the LLVM IR is JIT-executed.
Steps (2) and (3) use llvmlite; I've written about llvmlite before, see this post and also the pykaleidoscope project. For an introduction to JIT compilation, be sure to read this and maybe also the series of posts starting here.
First, let's look at the Expr IR. Here we'll make a big simplification - only supporting functions that define a single expression, e.g.:
def expr2(a, b, c, d):
    return (a + d) * (10 - c) + b + d / c
Naturally, this can be easily generalized - after all, LLVM IR can be used to express fully general computations.
Here are the Expr data structures:
class Expr:
    pass

@dataclass
class ConstantExpr(Expr):
    value: float

@dataclass
class VarExpr(Expr):
    name: str
    arg_idx: int

class Op(Enum):
    ADD = "+"
    SUB = "-"
    MUL = "*"
    DIV = "/"

@dataclass
class BinOpExpr(Expr):
    left: Expr
    right: Expr
    op: Op
To convert an Expr into LLVM IR and JIT-execute it, we'll use this function:
def llvm_jit_evaluate(expr: Expr, *args: float) -> float:
    """Use LLVM JIT to evaluate the given expression with *args.

    expr is an instance of Expr. *args are the arguments to the expression,
    each a float. The arguments must match the arguments the expression
    expects. Returns the result of evaluating the expression.
    """
    llvm.initialize()
    llvm.initialize_native_target()
    llvm.initialize_native_asmprinter()
    llvm.initialize_native_asmparser()
    cg = _LLVMCodeGenerator()
    modref = llvm.parse_assembly(str(cg.codegen(expr, len(args))))
    target = llvm.Target.from_default_triple()
    target_machine = target.create_target_machine()
    with llvm.create_mcjit_compiler(modref, target_machine) as ee:
        ee.finalize_object()
        cfptr = ee.get_function_address("func")
        cfunc = CFUNCTYPE(c_double, *([c_double] * len(args)))(cfptr)
        return cfunc(*args)
It uses the _LLVMCodeGenerator class to actually generate LLVM IR from Expr. This process is straightforward and covered extensively in the resources I linked to earlier; take a look at the full code here.
My goal with this architecture is to make things simple, but not too simple. On one hand - there are several simplifications: only single expressions are supported, very limited set of operators, etc. It's very easy to extend this! On the other hand, we could have just trivially evaluated the Expr without resorting to LLVM IR; I do want to show a more complete compilation pipeline, though, to demonstrate that an arbitrary amount of complexity can be hidden behind these simple interfaces.
With these building blocks in hand, we can review the strategies used by jit decorators to convert Python functions into Exprs.
AST-based JIT
Python comes with powerful code reflection and introspection capabilities out of the box. Here's the astjit decorator:
def astjit(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if kwargs:
            raise ASTJITError("Keyword arguments are not supported")
        source = inspect.getsource(func)
        tree = ast.parse(source)
        emitter = _ExprCodeEmitter()
        emitter.visit(tree)
        return llvm_jit_evaluate(emitter.return_expr, *args)
    return wrapper
This is a standard Python decorator. It takes a function and returns another function that will be used in its place (functools.wraps ensures that function attributes like the name and docstring of the wrapper match the wrapped function).
Here's how it's used:
from astjit import astjit
@astjit
def some_expr(a, b, c):
    return b / (a + 2) - c * (b - a)

print(some_expr(2, 16, 3))
After astjit is applied to some_expr, what some_expr holds is the wrapper. When some_expr(2, 16, 3) is called, the wrapper is invoked with *args = [2, 16, 3].
The wrapper obtains the AST of the wrapped function, and then uses _ExprCodeEmitter to convert this AST into an Expr:
class _ExprCodeEmitter(ast.NodeVisitor):
    def __init__(self):
        self.args = []
        self.return_expr = None
        self.op_map = {
            ast.Add: Op.ADD,
            ast.Sub: Op.SUB,
            ast.Mult: Op.MUL,
            ast.Div: Op.DIV,
        }

    def visit_FunctionDef(self, node):
        self.args = [arg.arg for arg in node.args.args]
        if len(node.body) != 1 or not isinstance(node.body[0], ast.Return):
            raise ASTJITError("Function must consist of a single return statement")
        self.visit(node.body[0])

    def visit_Return(self, node):
        self.return_expr = self.visit(node.value)

    def visit_Name(self, node):
        try:
            idx = self.args.index(node.id)
        except ValueError:
            raise ASTJITError(f"Unknown variable {node.id}")
        return VarExpr(node.id, idx)

    def visit_Constant(self, node):
        return ConstantExpr(node.value)

    def visit_BinOp(self, node):
        left = self.visit(node.left)
        right = self.visit(node.right)
        try:
            op = self.op_map[type(node.op)]
            return BinOpExpr(left, right, op)
        except KeyError:
            raise ASTJITError(f"Unsupported operator {node.op}")
When _ExprCodeEmitter finishes visiting the AST it's given, its return_expr field will contain the Expr representing the function's return value. The wrapper then invokes llvm_jit_evaluate with this Expr.
Note how our decorator interjects into the regular Python execution process. When some_expr is called, instead of the standard Python compilation and execution process (code is compiled into bytecode, which is then executed by the VM), we translate its code to our own representation and emit LLVM from it, and then JIT execute the LLVM IR. While it seems kinda pointless in this artificial example, in reality this means we can execute the function's code in any way we like.
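To see what the decorator is working with, here's a quick standalone look at the same function's AST (the source is inlined as a string here for self-containedness; the real astjit recovers it with inspect.getsource):

```python
import ast

source = """\
def some_expr(a, b, c):
    return b / (a + 2) - c * (b - a)
"""

tree = ast.parse(source)
fn = tree.body[0]

# The shape _ExprCodeEmitter expects: a FunctionDef whose body is a single
# Return of a BinOp tree.
print(type(fn).__name__)          # FunctionDef
print(len(fn.body))               # 1
print(type(fn.body[0]).__name__)  # Return
print(ast.dump(fn.body[0].value, indent=2))
```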
AST JIT case study: Triton
This approach is almost exactly how the Triton language works. The body of a function decorated with @triton.jit gets parsed to a Python AST, which then - through a series of internal IRs - ends up in LLVM IR; this in turn is lowered to PTX by the NVPTX LLVM backend. Then, the code runs on a GPU using a standard CUDA pipeline.
Naturally, the subset of Python that can be compiled down to a GPU is limited; but it's sufficient to run performant kernels, in a language that's much friendlier than CUDA and - more importantly - lives in the same file with the "host" part written in regular Python. For example, if you want testing and debugging, you can run Triton in "interpreter mode" which will just run the same kernels locally on a CPU.
Note that Triton lets us import names from the triton.language package and use them inside kernels; these serve as the intrinsics for the language - special calls the compiler handles directly.
Bytecode-based JIT
Python is a fairly complicated language with a lot of features. Therefore, if our JIT has to support some large portion of Python semantics, it may make sense to leverage more of Python's own compiler. Concretely, we can have it compile the wrapped function all the way to bytecode, and start our translation from there.
Here's the bytecodejit decorator that does just this [1]:
def bytecodejit(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if kwargs:
            raise BytecodeJITError("Keyword arguments are not supported")
        expr = _emit_exprcode(func)
        return llvm_jit_evaluate(expr, *args)
    return wrapper

def _emit_exprcode(func):
    bc = func.__code__
    stack = []
    for inst in dis.get_instructions(func):
        match inst.opname:
            case "LOAD_FAST":
                idx = inst.arg
                stack.append(VarExpr(bc.co_varnames[idx], idx))
            case "LOAD_CONST":
                stack.append(ConstantExpr(inst.argval))
            case "BINARY_OP":
                right = stack.pop()
                left = stack.pop()
                match inst.argrepr:
                    case "+":
                        stack.append(BinOpExpr(left, right, Op.ADD))
                    case "-":
                        stack.append(BinOpExpr(left, right, Op.SUB))
                    case "*":
                        stack.append(BinOpExpr(left, right, Op.MUL))
                    case "/":
                        stack.append(BinOpExpr(left, right, Op.DIV))
                    case _:
                        raise BytecodeJITError(f"Unsupported operator {inst.argval}")
            case "RETURN_VALUE":
                if len(stack) != 1:
                    raise BytecodeJITError("Invalid stack state")
                return stack.pop()
            case "RESUME" | "CACHE":
                # Skip nops
                pass
            case _:
                raise BytecodeJITError(f"Unsupported opcode {inst.opname}")
The Python VM is a stack machine; so we emulate a stack to convert the function's bytecode to Expr IR (a bit like an RPN evaluator). As before, we then use our llvm_jit_evaluate utility function to lower Expr to LLVM IR and JIT execute it.
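To see the raw material this translator consumes, it helps to dump the instruction stream of the example function (the exact opcode set varies between CPython versions, so treat the output as illustrative):

```python
import dis

def some_expr(a, b, c):
    return b / (a + 2) - c * (b - a)

# Print the opname/argrepr pairs that _emit_exprcode's match statement
# dispatches on.
for inst in dis.get_instructions(some_expr):
    print(inst.opname, inst.argrepr)
```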
Using this JIT is as simple as the previous one - just swap astjit for bytecodejit:
from bytecodejit import bytecodejit
@bytecodejit
def some_expr(a, b, c):
    return b / (a + 2) - c * (b - a)

print(some_expr(2, 16, 3))
Bytecode JIT case study: Numba
Numba is a compiler for Python itself. The idea is that you can speed up specific functions in your code by slapping a numba.njit decorator on them. What happens next is similar in spirit to our simple bytecodejit, but of course much more complicated because it supports a very large portion of Python semantics.
Numba uses the Python compiler to emit bytecode, just as we did; it then converts it into its own IR, and then to LLVM using llvmlite [2].
By starting with the bytecode, Numba makes its life easier (no need to rewrite the entire Python compiler). On the other hand, it also makes some analyses harder, because by the time we're in bytecode, a lot of semantic information existing in higher-level representations is lost. For example, Numba has to sweat a bit to recover control flow information from the bytecode (by running it through a special interpreter first).
Tracing-based JIT
The two approaches we've seen so far are similar in many ways - both rely on Python's introspection capabilities to compile the source code of the JIT-ed function to some extent (one to AST, the other all the way to bytecode), and then work on this lowered representation.
The tracing strategy is very different. It doesn't analyze the source code of the wrapped function at all - instead, it traces its execution by means of specially-boxed arguments, leveraging overloaded operators and functions, and then works on the generated trace.
The code implementing this for our simple demo is surprisingly compact:
def tracejit(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if kwargs:
            raise TraceJITError("Keyword arguments are not supported")
        argspec = inspect.getfullargspec(func)
        argboxes = []
        for i, arg in enumerate(args):
            if i >= len(argspec.args):
                raise TraceJITError("Too many arguments")
            argboxes.append(_Box(VarExpr(argspec.args[i], i)))
        out_box = func(*argboxes)
        return llvm_jit_evaluate(out_box.expr, *args)
    return wrapper
Each runtime argument of the wrapped function is assigned a VarExpr, and that is placed in a _Box, a placeholder class which lets us do operator overloading:
@dataclass
class _Box:
    expr: Expr
_Box.__add__ = _Box.__radd__ = _register_binary_op(Op.ADD)
_Box.__sub__ = _register_binary_op(Op.SUB)
_Box.__rsub__ = _register_binary_op(Op.SUB, reverse=True)
_Box.__mul__ = _Box.__rmul__ = _register_binary_op(Op.MUL)
_Box.__truediv__ = _register_binary_op(Op.DIV)
_Box.__rtruediv__ = _register_binary_op(Op.DIV, reverse=True)
The remaining key function is _register_binary_op:
def _register_binary_op(opcode, reverse=False):
    """Registers a binary opcode for Boxes.

    If reverse is True, the operation is registered as arg2 <op> arg1,
    instead of arg1 <op> arg2.
    """
    def _op(arg1, arg2):
        if reverse:
            arg1, arg2 = arg2, arg1
        box1 = arg1 if isinstance(arg1, _Box) else _Box(ConstantExpr(arg1))
        box2 = arg2 if isinstance(arg2, _Box) else _Box(ConstantExpr(arg2))
        return _Box(BinOpExpr(box1.expr, box2.expr, opcode))
    return _op
To understand how this works, consider this trivial example:
@tracejit
def add(a, b):
    return a + b
print(add(1, 2))
After the decorated function is defined, add holds the wrapper function defined inside tracejit. When add(1, 2) is called, the wrapper runs:
- For each argument of add itself (that is a and b), it creates a new _Box holding a VarExpr. This denotes a named variable in the Expr IR.
- It then calls the wrapped function, passing it the boxes as runtime parameters.
- When (the wrapped) add runs, it invokes a + b. This is caught by the overloaded __add__ operator of _Box, and it creates a new BinOpExpr with the VarExprs representing a and b as children. This BinOpExpr is then returned [3].
- The wrapper unboxes the returned Expr and passes it to llvm_jit_evaluate to emit LLVM IR from it and JIT execute it with the actual runtime arguments of the call: 1, 2.
This might be a little mind-bending at first, because there are two different executions that happen:
- The first is calling the wrapped add function itself, letting the Python interpreter run it as usual, but with special arguments that build up the IR instead of doing any computations. This is the tracing step.
- The second is lowering this IR our tracing step built into LLVM IR and then JIT executing it with the actual runtime argument values 1, 2; this is the execution step.
This tracing approach has some interesting characteristics. Since we don't have to analyze the source of the wrapped functions but only trace through the execution, we can "magically" support a much richer set of programs, e.g.:
@tracejit
def use_locals(a, b, c):
    x = a + 2
    y = b - a
    z = c * x
    return y / x - z
print(use_locals(2, 8, 11))
This just works with our basic tracejit. Since Python variables are placeholders (references) for values, our tracing step is oblivious to them - it follows the flow of values. Another example:
@tracejit
def use_loop(a, b, c):
    result = 0
    for i in range(1, 11):
        result += i
    return result + b * c
print(use_loop(10, 2, 3))
This also just works! The created Expr will be a long chain of BinOpExpr additions of i's runtime values through the loop, added to the BinOpExpr for b * c.
This last example also leads us to a limitation of the tracing approach; the loop cannot be data-dependent - it cannot depend on the function's arguments, because the tracing step has no concept of runtime values and wouldn't know how many iterations to run through; or at least, it doesn't know this unless we want to perform the tracing run for every runtime execution [4].
The tracing approach is useful in several domains, most notably automatic differentiation (AD). For a slightly deeper taste, check out my radgrad project.
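Stitching the published pieces together, the tracing flow can be exercised end to end. In this sketch I swap llvm_jit_evaluate for a direct tree-walking evaluation so it needs no llvmlite; that substitution is mine, not the article's:

```python
import functools
import inspect
from dataclasses import dataclass
from enum import Enum

class Expr:
    pass

@dataclass
class ConstantExpr(Expr):
    value: float

@dataclass
class VarExpr(Expr):
    name: str
    arg_idx: int

class Op(Enum):
    ADD = "+"
    SUB = "-"
    MUL = "*"
    DIV = "/"

@dataclass
class BinOpExpr(Expr):
    left: Expr
    right: Expr
    op: Op

@dataclass
class _Box:
    expr: Expr

def _register_binary_op(opcode, reverse=False):
    def _op(arg1, arg2):
        if reverse:
            arg1, arg2 = arg2, arg1
        box1 = arg1 if isinstance(arg1, _Box) else _Box(ConstantExpr(arg1))
        box2 = arg2 if isinstance(arg2, _Box) else _Box(ConstantExpr(arg2))
        return _Box(BinOpExpr(box1.expr, box2.expr, opcode))
    return _op

_Box.__add__ = _Box.__radd__ = _register_binary_op(Op.ADD)
_Box.__sub__ = _register_binary_op(Op.SUB)
_Box.__rsub__ = _register_binary_op(Op.SUB, reverse=True)
_Box.__mul__ = _Box.__rmul__ = _register_binary_op(Op.MUL)
_Box.__truediv__ = _register_binary_op(Op.DIV)
_Box.__rtruediv__ = _register_binary_op(Op.DIV, reverse=True)

def _eval(expr, *args):
    # Direct tree walk; stands in for llvm_jit_evaluate in this sketch.
    if isinstance(expr, ConstantExpr):
        return expr.value
    if isinstance(expr, VarExpr):
        return args[expr.arg_idx]
    lv, rv = _eval(expr.left, *args), _eval(expr.right, *args)
    if expr.op is Op.ADD:
        return lv + rv
    if expr.op is Op.SUB:
        return lv - rv
    if expr.op is Op.MUL:
        return lv * rv
    return lv / rv

def tracejit(func):
    @functools.wraps(func)
    def wrapper(*args):
        names = inspect.getfullargspec(func).args
        boxes = [_Box(VarExpr(name, i)) for i, name in enumerate(names)]
        out_box = func(*boxes)             # tracing step: builds the Expr
        return _eval(out_box.expr, *args)  # execution step
    return wrapper

@tracejit
def use_locals(a, b, c):
    x = a + 2
    y = b - a
    z = c * x
    return y / x - z

print(use_locals(2, 8, 11))  # 6/4 - 44 = -42.5
```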
Tracing JIT case study: JAX
The JAX ML framework uses a tracing approach very similar to the one described here. The first code sample in this post shows the JAX notation. JAX cleverly wraps Numpy with its own version which is traced (similar to our _Box, but JAX calls these boxes "tracers"), letting you write regular-feeling Numpy code that can be JIT optimized and executed on accelerators like GPUs and TPUs via XLA. JAX's tracer builds up an underlying IR (called jaxpr) which can then be emitted to XLA ops and passed to XLA for further lowering and execution.
For a fairly deep overview of how JAX works, I recommend reading the autodidax doc.
As mentioned earlier, JAX has some limitations with things like data-dependent control flow in native Python. This won't work, because there's control flow that depends on a runtime value (count):
import jax
@jax.jit
def sum_datadep(a, b, count):
    total = a
    for i in range(count):
        total += b
    return total
print(sum_datadep(10, 3, 3))
When sum_datadep is executed, JAX will throw an exception, saying something like:
This concrete value was not available in Python because it depends on the value of the argument count.
As a remedy, JAX has its own built-in intrinsics from the jax.lax package. Here's the example rewritten in a way that actually works:
import jax
from jax import lax
@jax.jit
def sum_datadep_fori(a, b, count):
    def body(i, total):
        return total + b
    return lax.fori_loop(0, count, body, a)
fori_loop (and many other built-ins in the lax package) is something JAX can trace through, generating a corresponding XLA operation (XLA has support for While loops, to which this lax.fori_loop can be lowered).
The tracing approach has clear benefits for JAX as well; because it only cares about the flow of values, it can handle arbitrarily complicated Python code, as long as the flow of values can be traced. Just like the local variables and data-independent loops shown earlier, but also things like closures. This makes meta-programming and templating easy [5].
Code
The full code for this post is available on GitHub.
[1] Once again, this is a very simplified example. A more realistic translator would have to support many, many more Python bytecode instructions.
[2] In fact, llvmlite itself is a Numba sub-project and is maintained by the Numba team, for which I'm grateful!
[3] For a fun exercise, try adding constant folding to the wrapped _op: when both its arguments are constants (not boxes), instead of placing each in a _Box(ConstantExpr(...)), it could perform the mathematical operation on them and return a single constant box. This is a common optimization in compilers!
[4] In all the JIT approaches shown in this post, the expectation is that compilation happens once, but the compiled function can be executed many times (perhaps in a loop). This means that the compilation step cannot depend on the runtime values of the function's arguments, because it has no access to them. You could say that it does, but that's just for the very first time the function is run (in the tracing approach); it has no way of knowing their values the next times the function will run. JAX has some provisions for cases where a function is invoked with a small set of runtime values and we want to separately JIT each of them.
[5] A reader pointed out that TensorFlow's AutoGraph feature combines the AST and tracing approaches. TF's eager mode performs tracing, but it also uses AST analyses to rewrite Python loops and conditions into builtins like tf.cond and tf.while_loop.
February 14, 2025 09:49 PM UTC
Hugo van Kemenade
Improving licence metadata
What?
PEP 639 defines a spec on how to document licences used in Python projects.
Instead of using a Trove classifier such as “License :: OSI Approved :: BSD License”, which is imprecise (for example, which BSD licence?), the SPDX licence expression syntax is used.
How?
pyproject.toml
Change pyproject.toml
as follows.
I usually use Hatchling as a build backend, and support was added in 1.27:
[build-system]
build-backend = "hatchling.build"
requires = [
"hatch-vcs",
- "hatchling",
+ "hatchling>=1.27",
]
Replace the freeform license
field with a valid SPDX license expression, and add
license-files
which points to the licence files in the repo. There’s often only one,
but if you have more than one, list them all:
[project]
...
-license = { text = "MIT" }
+license = "MIT"
+license-files = [ "LICENSE" ]
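Put together, the resulting [project] entries (a minimal sketch assuming an MIT-licensed project with a single LICENSE file, as in the diff above; the name and version are placeholders) look like:

```toml
[project]
name = "example"
version = "1.2.3"
license = "MIT"
license-files = ["LICENSE"]
```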
Optionally delete the deprecated licence classifier:
classifiers = [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
- "License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
For example, see humanize#236 and prettytable#350.
Upload
Then make sure to use a PyPI uploader that supports this.
I recommend using Trusted Publishing which I use with pypa/gh-action-pypi-publish to deploy from GitHub Actions. I didn’t need to make any changes here, just make a release as usual.
Result
PyPI
PyPI shows the new metadata:
pip
pip can also show you the metadata:
❯ pip install prettytable==3.13.0
❯ pip show prettytable
Name: prettytable
Version: 3.13.0
...
License-Expression: BSD-3-Clause
Location: /Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages
Requires: wcwidth
Required-by: norwegianblue, pypistats
Thank you!
A lot of work went into this. Thank you to PEP authors Philippe Ombredanne for creating the first draft in 2019, to C.A.M. Gerlach for the second draft in 2021, and especially to Karolina Surma for getting the third draft over the finish line and helping with the implementation.
And many projects were updated to support this, thanks to the maintainers and contributors of at least:
- PyPI/Warehouse
- packaging 24.2
- Hatchling 1.27
- Twine 6.1.0
- PyPI publish GitHub Action v1.12.4
- build-and-inspect-python-package v2.12.0
- pip 25.0
Header photo: Amelia Earhart’s 1932 pilot licence in the San Diego Air and Space Museum Archive, with no known copyright restrictions.
February 14, 2025 03:11 PM UTC
Real Python
The Real Python Podcast – Episode #239: Behavior-Driven vs Test-Driven Development & Using Regex in Python
What is behavior-driven development, and how does it work alongside test-driven development? How do you communicate requirements between teams in an organization? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder's Weekly articles and projects.
February 14, 2025 12:00 PM UTC
Daniel Roy Greenfeld
Building a playing card deck
Today is Valentine's Day. That makes it the perfect day for a blog post showing not just how to build a deck of cards, but how to show off cards from the hearts suit.
February 14, 2025 09:50 AM UTC
February 13, 2025
Bojan Mihelac
Prefixed Parameters for Django querystring tag
An overview of Django 5.1's new querystring tag and how to add support for prefixed parameters.
February 13, 2025 09:37 PM UTC
Peter Bengtsson
get in JavaScript is the same as property in Python
Prefix a function in an object or class with `get`, and it acts like a property: you can access it without call brackets, just like Python's `property` decorator.
February 13, 2025 12:41 PM UTC
EuroPython
EuroPython February 2025 Newsletter
Hey ya 👋
Hope you're all having a fantastic February. We sure have been busy, and we've got some exciting updates for you as we gear up for EuroPython 2025, which is taking place once again in the beautiful city of Prague. So let's dive right in!
🗃️ Community Voting on Talks & Workshops
EuroPython 2025 is right around the corner and our programme team is hard at work putting together an amazing lineup. But we need your help to shape the conference! We received 572 fantastic proposals, and now it's time for Community Voting! 🎉 If you've attended EuroPython before or submitted a proposal this year, you're eligible to vote.
📢 More votes = a stronger, more diverse programme! Spread the word and get your EuroPython friends to cast their votes too.
🏃The deadline is Monday next week, so don’t miss your chance!
🗳️ Vote now: https://ep2025.europython.eu/programme/voting
🧐Call for Reviewers
Want to play a key role in building an incredible conference? Join our review team and help select the best talks for EuroPython 2025! Whether you're a Python expert or an enthusiastic community member, your insights matter.
We’d like to also thank the over 100 people who have already signed up to review! For those who haven’t done so yet, please remember to accept your Pretalx link and get your reviews in by Monday 17th February.
You can already start reviewing proposals, and each review takes as little as 5 minutes. We encourage reviewers to go through at least 20-30 proposals, but if you can do more, even better! With almost 600 submissions to pick from, your help ensures we curate a diverse and engaging programme.
If you're passionate about Python and want to contribute, we'd love to have you. Sign up here: forms.gle/4GTJjwZ1nHBGetM18.
🏃The deadline is Monday next week, so don’t delay!
Got questions? Reach out to us at programme@europython.eu
📣 Community Outreach
EuroPython isn’t just present at other Python events—we actively support them too! As a community sponsor, we love helping local PyCons grow and thrive, giving back to the community and strengthening Python events across Europe! 🐍💙
PyCon + Web in Berlin
The EuroPython team had a fantastic time at PyCon + Web in Berlin, meeting fellow Pythonistas, exchanging ideas, and spreading the word about EuroPython 2025. It was great to connect with speakers, organizers, and attendees.
Ever wondered how long it takes to walk from Berlin to Prague? A huge thank you to our co-organizers, Cheuk, Artur, and Cristián, for answering that in their fantastic lightning talk about EuroPython!
FOSDEM 2025
We had some members of the EuroPython team at FOSDEM 2025, connecting with the open-source community and spreading the Python love! 🎉 We enjoyed meeting fellow enthusiasts, sharing insights about the EuroPython Society, and giving away the first EuroPython 2025 stickers. If you stopped by—thank you and we hope to see you in Prague this July.
🦒 Speaker Mentorship Programme
The signups for The Speaker Mentorship Programme closed on 22nd January 2025. We’re excited to have matched 43 mentees with 24 mentors from our community. We had an increase in the number of mentees who signed up and that’s amazing! We’re glad to be contributing to the journey of new speakers in the Python community. A massive thank you to our mentors for supporting the mentees and to our mentees; we’re proud of you for taking this step in your journey as a speaker.
26 mentees submitted at least 1 proposal. Out of this number, 13 mentees submitted 1 proposal, 9 mentees submitted 2 proposals, 2 mentees submitted 3 proposals, 1 mentee submitted 4 proposals and lastly, 1 mentee submitted 5 proposals. We wish our mentees the best of luck. We look forward to the acceptance of their proposals.
In a few weeks, we will host an online panel session with 2–3 experienced community members who will share their advice with first-time speakers. At the end of the panel, there will be a Q&A session to answer all the participants’ questions.
You can watch the recording of the previous year’s workshop here:
💰Sponsorship
EuroPython is one of the largest Python conferences in Europe, and it wouldn’t be possible without our sponsors. We are so grateful for the companies who have already expressed interest. If you’re interested in sponsoring EuroPython 2025 as well, please reach out to us at sponsoring@europython.eu.
🎤 EuroPython Speakers Share Their Experiences
We asked our past speakers to share their experiences speaking at EuroPython. These videos have been published on YouTube as shorts, and we've compiled them into brief clips for you to watch.
A big thanks goes to Sebastian Witowski, Jan Smitka, Yuliia Barabash, Jodie Burchell, Max Kahan, and Cheuk Ting Ho for sharing their experiences.
Why You Should Submit a Proposal for EuroPython? Part 2
Why You Should Submit a Proposal for EuroPython? Part 3
📊 EuroPython Society Board Report
The EuroPython conference wouldn’t be what it is without the incredible volunteers who make it all happen. 💞 Behind the scenes, there’s also the EuroPython Society—a volunteer-led non-profit that manages the fiscal and legal aspects of running the conference, oversees its organization, and works on a few smaller projects like the grants programme. To keep everyone in the loop and promote transparency, the Board is sharing regular updates on what we’re working on.
The January board report is ready: https://europython-society.org/board-report-for-january-2025/.
🐍 Upcoming Events in the Python Community
- GeoPython: Basel, February 24-26, 2025, https://2025.geopython.net/
- PyCon Austria: Eisenstadt, April 6-7, 2025, https://pycon.pyug.at/en/
- PyCon Lithuania: Vilnius, April 23-25, 2025, https://pycon.lt/
- DjangoCon Europe: Dublin, April 23-27, 2025, https://2025.djangocon.eu/
- PyCon DE & PyData: Darmstadt, April 23-25, 2025, https://2025.pycon.de/
- PyCon Italia: Bologna, May 28-31, 2025, https://2025.pycon.it/en
- PyCamp CZ 25 beta: Třeštice, September 12-14, 2025, https://pycamp.cz/
- PyCon UK: Manchester, September 19-22, 2025, https://2025.pyconuk.org/
- PyCon Estonia: Tallinn, October 2-3, 2025, https://pycon.ee/
That's all for now! Keep an eye on your inbox and our website for more news and announcements. We're counting down the days until we can come together in Prague to celebrate our shared love for Python. 🐍❤️
Cheers,
The EuroPython Team
February 13, 2025 08:36 AM UTC
February 12, 2025
Kay Hayen
Nuitka Release 2.6
This is to inform you about the new stable release of Nuitka. It is the extremely compatible Python compiler, “download now”.
This release has all-around improvements, with a lot of effort spent on bug fixes in the memory leak domain, and preparatory actions for scalability improvements.
Bug Fixes
- MSYS2: Path normalization to native Windows format was required in more places for the MinGW variant of MSYS2. The os.path.normpath function doesn’t normalize to native Win32 paths with MSYS2, instead using forward slashes, so manual normalization was needed in additional areas. (Fixed in 2.5.1)
- UI: Give a proper error message when extension modules requested for inclusion cannot be located, instead of an unhelpful one. (Fixed in 2.5.1)
- Fix: Files with illegal module names (containing ".") in their basename were incorrectly considered as potential sub-modules for --include-package. These are now skipped. (Fixed in 2.5.1)
- Stubgen: Improved stability by preventing crashes when stubgen encounters code it cannot handle; exceptions from it are now ignored. (Fixed in 2.5.1)
- Stubgen: Addressed a crash that occurred when encountering assignments to non-variables. (Fixed in 2.5.1)
- Python 3: Fixed a regression introduced in the 2.5 release that could lead to segmentation faults in exception handling for generators. (Fixed in 2.5.2)
- Python 3.11+: Corrected an issue where dictionary copies of large split dictionaries could become corrupted. This primarily affected instance dictionaries, which are created as copies until updated, potentially causing problems when adding new keys. (Fixed in 2.5.2)
- Python 3.11+: Removed the assumption that module dictionaries always contain only strings as keys. Some modules, like Foundation on macOS, use non-string keys. (Fixed in 2.5.2)
- Deployment: Ensured that the --deployment option correctly affects the C compilation process. Previously, only individual disables were applied. (Fixed in 2.5.2)
- Compatibility: Fixed a crash that could occur during compilation when unary operations were used within binary operations. (Fixed in 2.5.3)
- Onefile: Corrected the handling of __compiled__.original_argv0, which could lead to crashes. (Fixed in 2.5.4)
- Compatibility: Resolved a segmentation fault occurring at runtime when calling tensorflow.function with only keyword arguments. (Fixed in 2.5.5)
- macOS: Harmless warnings generated for x64 DLLs on arm64 with newer macOS versions are now ignored. (Fixed in 2.5.5)
- Python 3.13: Addressed a crash in Nuitka’s dictionary code that occurred when copying dictionaries, due to internal changes in Python 3.13. (Fixed in 2.5.6)
- macOS: Improved onefile mode signing by applying --macos-signed-app-name to the signature of binaries, not just app bundles. (Fixed in 2.5.6)
- Standalone: Corrected an issue where too many paths were added as extra directories from the Nuitka package configuration. This primarily affected the win32com package, which currently relies on the package-dirs import hack. (Fixed in 2.5.6)
- Python 2: Prevented crashes on macOS when creating onefile bundles with Python 2 by handling negative CRC32 values. This issue may have affected other versions as well. (Fixed in 2.5.6)
- Plugins: Restored the functionality of code provided in pre-import-code, which was no longer being applied due to a regression. (Fixed in 2.5.6)
- macOS: Suppressed the app bundle mode recommendation when it is already in use. (Fixed in 2.5.6)
- macOS: Corrected path normalization when the output directory argument includes “~”.
- macOS: GitHub Actions Python is now correctly identified as a Homebrew Python to ensure proper DLL resolution. (Fixed in 2.5.7)
- Compatibility: Fixed a reference leak that could occur with values sent to generator objects. Asyncgen and coroutines were not affected. (Fixed in 2.5.7)
- Standalone: The --include-package scan now correctly handles cases where both a package init file and competing Python files exist, preventing compile-time conflicts. (Fixed in 2.5.7)
- Modules: Resolved an issue where handling string constants in modules created for Python 3.12 could trigger assertions, and modules created with 3.12.7 or newer failed to load on older Python 3.12 versions when compiled with Nuitka 2.5.5-2.5.6. (Fixed in 2.5.7)
- Python 3.10+: Corrected the tuple code used when calling certain method descriptors. This issue primarily affected a Python 2 assertion, which was not impacted in practice. (Fixed in 2.5.7)
- Python 3.13: Updated resource readers to accept multiple arguments for importlib.resources.read_text, and to correctly handle encoding and errors as keyword-only arguments.
- Scons: The platform encoding is no longer used to decode ccache logs. Instead, latin1 is used, as it is sufficient for matching filenames across log lines and avoids potential encoding errors. (Fixed in 2.5.7)
- Python 3.12+: Requests to statically link libraries for hacl are now ignored, as these libraries do not exist. (Fixed in 2.5.7)
- Compatibility: Fixed a memory leak affecting the results of functions called via specs. This primarily impacted overloaded hard import operations. (Fixed in 2.5.7)
- Standalone: When multiple distributions for a package are found, the one with the most accurate file matching is now selected. This improves handling of cases where an older version of a package (e.g., python-opencv) is overwritten with a different variant (e.g., python-opencv-headless), ensuring the correct version is used for Nuitka package configuration and reporting. (Fixed in 2.5.8)
- Python 2: Prevented a potential crash during onefile initialization on Python 2 by passing the directory name directly from the onefile bootstrap, avoiding the use of os.dirname, which may not be fully loaded at that point. (Fixed in 2.5.8)
- Anaconda: Preserved necessary PATH environment variables on Windows for packages that require loading DLLs from those locations. Only PATH entries not pointing inside the installation prefix are removed. (Fixed in 2.5.8)
- Anaconda: Corrected the is_conda_package check to function properly when distribution names and package names differ. (Fixed in 2.5.8)
- Anaconda: Improved package name resolution for Anaconda distributions by checking conda metadata when file metadata is unavailable through the usual methods. (Fixed in 2.5.8)
- MSYS2: Normalized the downloaded gcc path to use native Windows slashes, preventing potential compilation failures. (Fixed in 2.5.9)
- Python 3.13: Restored static libpython functionality on Linux by adapting to a signature change in an unexposed API. (Fixed in 2.5.9)
- Python 3.6+: Prevented asyncgen from being resurrected when a finalizer is attached, resolving memory leaks that could occur with asyncio in the presence of exceptions. (Fixed in 2.5.10)
- UI: Suppressed the gcc download prompt that could appear during --version output on Windows systems without MSVC or with an improperly installed gcc.
- Compatibility: Ensured compatibility with monkey-patched os.lstat or os.stat functions, which are used in some testing scenarios.
- Data Composer: Improved the determinism of the JSON statistics output by sorting keys, enabling reliable build comparisons.
- Python 3.6+: Fixed a memory leak in asyncgen with finalizers, which could lead to significant memory consumption when using asyncio and encountering exceptions.
- Scons: Optimized empty generators (an optimization result) to avoid generating unused context code, eliminating C compilation warnings.
- Python 3.6+: Fixed a reference leak affecting the asend value in asyncgen. While typically None, this could lead to observable reference leaks in certain cases.
- Python 3.5+: Improved handling of coroutine and asyncgen resurrection, preventing memory leaks with asyncio and asyncgen, and ensuring correct execution of finally code in coroutines.
- Python 3: Corrected the handling of generator objects resurrecting during deallocation. While not explicitly demonstrated, this addresses potential issues similar to those encountered with coroutines, particularly for old-style coroutines created with the types.coroutine decorator.
- PGO: Fixed a potential crash during runtime trace collection by ensuring timely initialization of the output mechanism.
Package Support
- Standalone: Added inclusion of metadata for jupyter_client to support its own usage of metadata. (Added in 2.5.1)
- Standalone: Added support for the llama_cpp package. (Added in 2.5.1)
- Standalone: Added support for the litellm package. (Added in 2.5.2)
- Standalone: Added support for the lab_lamma package. (Added in 2.5.2)
- Standalone: Added support for docling metadata. (Added in 2.5.5)
- Standalone: Added support for pypdfium on Linux. (Added in 2.5.5)
- Standalone: Added support for using the debian package. (Added in 2.5.5)
- Standalone: Added support for the pdfminer package. (Added in 2.5.5)
- Standalone: Included missing dependencies for the torch._dynamo.polyfills package. (Added in 2.5.6)
- Standalone: Added support for rtree on Linux. The previous static configuration only worked on Windows and macOS; this update detects it from the module code. (Added in 2.5.6)
- Standalone: Added missing pywebview JavaScript data files. (Added in 2.5.7)
- Standalone: Added support for newer versions of the sklearn package. (Added in 2.5.7)
- Standalone: Added support for newer versions of the dask package. (Added in 2.5.7)
- Standalone: Added support for newer versions of the transformers package. (Added in 2.5.7)
- Windows: Placed numpy DLLs at the top level for improved support in the Nuitka VM. (Added in 2.5.7)
- Standalone: Allowed excluding browsers when including playwright. (Added in 2.5.7)
- Standalone: Added support for newer versions of the sqlfluff package. (Added in 2.5.8)
- Standalone: Added support for the opencv conda package, disabling unnecessary workarounds for its dependencies. (Added in 2.5.8)
- Standalone: Added support for newer versions of the soundfile package.
- Standalone: Added support for newer versions of the coincurve package.
- Standalone: Added support for newer versions of the apscheduler package.
- macOS: Removed the error and workaround forcing that required bundle mode for PyQt5 on macOS, as standalone mode now appears to function correctly.
- Standalone: Added support for seleniumbase package downloads.
New Features
- Module: Implemented 2-phase loading for all modules in Python 3.5 and higher. This improves loading modules as sub-packages in Python 3.12+, where the loading context is no longer accessible.
- UI: Introduced the app value for the --mode parameter. This creates an app bundle on macOS and a onefile binary on other platforms, replacing the --macos-create-app-bundle option. (Added in 2.5.5)
- UI: Added a package mode, similar to module, which automatically includes all sub-modules of a package without requiring manual specification with --include-package.
- Module: Added an option to completely disable the use of stubgen. (Added in 2.5.1)
- Homebrew: Added support for tcl9 with the tk-inter plugin.
- Package Resolution: Improved handling of multiple distributions installed for the same package name. Nuitka now attempts to identify the most recently installed distribution, enabling proper recognition of different versions in scenarios like python-opencv and python-opencv-headless.
- Python 3.13.1 Compatibility: Addressed an issue where a workaround introduced for Python 3.10.0 broke standalone mode in Python 3.13.1. (Added in 2.5.6)
- Plugins: Introduced a new feature for absolute source paths (typically derived from variables or relative to constants). This offers greater flexibility compared to the by_code DLL feature, which may be removed in the future. (Added in 2.5.6)
- Plugins: Added support for when conditions in variable sections within Nuitka Package configuration.
- macOS: App bundles now automatically switch to the containing directory when not launched from the command line. This prevents the current directory from defaulting to /, which is rarely correct and can be unexpected for users. (Added in 2.5.6)
- Compatibility: Relaxed the restriction on setting the compiled frame f_trace. Instead of outright rejection, the deployment flag --no-deployment-flag=frame-useless-set-trace can be used to allow it, although it will be ignored.
- Windows: Added the ability to detect extension module entry points using an inline copy of pefile. This enables --list-package-dlls to verify extension module validity on the platform. It also opens possibilities for automatic extension module detection on major operating systems.
- Watch: Added support for using conda packages instead of PyPI packages.
- UI: Introduced --list-package-exe to complement --list-package-dlls for package analysis when creating Nuitka Package Configuration.
- Windows ARM: Removed workarounds that are no longer necessary for compilation. While the lack of dependency analysis might require correction in a hotfix, this configuration should now be supported.
Optimization
- Scalability: Implemented experimental code for more compact code object usage, leading to more scalable C code and constants usage. This is expected to speed up C compilation and code generation in the future once fully validated.
- Scons: Added support for C23 embedding of the constants blob. This will be utilized with Clang 19+ and GCC 15+, except on Windows and macOS where other methods are currently employed.
- Compilation: Improved performance by avoiding redundant path checks in cases of duplicated package directories. This significantly speeds up certain scenarios where file system access is slow.
- Scons: Enhanced detection of static libpython, including for self-compiled, uninstalled Python installations.
Anti-Bloat
- Improved no_docstrings support for the xgboost package. (Added in 2.5.7)
- Avoided unnecessary usage of numpy for the PIL package.
- Avoided unnecessary usage of yaml for the numpy package.
- Excluded tcltest TCL code when using tk-inter, as these TCL files are unused.
- Avoided using IPython from the comm package.
- Avoided using pytest from the pdbp package.
Organizational
- UI: Added categories for plugins in the --help output. Non-package support plugin options are now shown by default. Introduced a dedicated --help-plugins option and highlighted it in the general --help output. This allows viewing all plugin options without needing to enable a specific plugin.
- UI: Improved warnings for onefile and OS-specific options. These warnings are now displayed unless the command originates from a Nuitka-Action context, where users typically build for different modes with a single configuration set.
- Nuitka-Action: The default mode is now app, building an application bundle on macOS and a onefile binary on other platforms.
- UI: The executable path in --version output now uses the report path. This avoids exposing the user’s home directory, encouraging more complete output sharing.
- UI: The Python flavor name is now included in the startup compilation message.
- UI: Improved handling of missing Windows version information. If only partial version information (e.g., product or file version) is provided, an explicit error is given instead of an assertion error during post-processing.
- UI: Corrected an issue where the container argument for run-inside-nuitka-container could not be a non-template file. (Fixed in 2.5.2)
- Release: The PyPI upload sdist creation now uses a virtual environment. This ensures consistent project name casing, as it is determined by the setuptools version. While currently using the deprecated filename format, this change prepares for the new format.
- Release: The osc binary is now used from the virtual environment to avoid potential issues with a broken system installation, as currently observed on Ubuntu.
- Debugging: Added an experimental option to disable the automatic conversion to short paths on Windows.
- UI: Improved handling of external data files that overwrite the original file. Nuitka now prompts the user to provide an output directory to prevent unintended overwrites. (Added in 2.5.6)
- UI: Introduced the alias --include-data-files-external for the external data files option. This clarifies that the feature is not specific to onefile mode and encourages its wider use.
- UI: Allowed none as a valid value for the macOS icon option. This disables the warning about a missing icon when intentionally not providing one.
- UI: Added an error check for icon filenames without suffixes, preventing cases where the file type cannot be inferred.
- UI: Corrected the examples for --include-package-data with file patterns, which used incorrect delimiters.
- Scons: Added a warning about using gcc with LTO when make is unavailable, as this combination will not work. This provides a clearer message than the standard gcc warnings, which can be difficult for Python users to interpret.
- Debugging: Added an option to preserve printing during reference count tests. This can be helpful for debugging by providing additional trace information.
- Debugging: Added a small code snippet for module reference leak testing to the Developer Manual.
Tests
- Temporarily disabled tests that expose regressions in Python 3.13.1 whose behavior Nuitka does not intend to follow.
- Improved test organization by using more common code for package tests. The scanning for test cases and main files now utilizes shared code.
- Added support for testing variations of a test with different extra flags. This is achieved by exposing a NUITKA_TEST_VARIANT environment variable.
- Improved detection of commercial-only test cases by identifying them through their names rather than hardcoding them in the runner. These tests are now removed from the standard distribution to reduce clutter.
- Utilized --mode options in tests for better control and clarity. Standalone mode tests now explicitly check for the application of the mode and error out if it’s missing. Mode options are added to the project options of each test case instead of requiring global configuration.
- Added a test case to ensure comprehensive coverage of external data file usage in onefile mode. This helps detect regressions that may have gone unnoticed previously.
- Increased test coverage for coroutines and async generators, including checks for inspect.isawaitable and testing both function and context objects.
Cleanups
- Unified the code used for generating source archives for PyPI uploads, ensuring consistency between production and standard archives.
- Harmonized the usage of include <...> vs include "..." based on the origin of the included files, improving code style consistency.
- Removed code duplication in the exception handler generator code by utilizing the DROP_GENERATOR_EXCEPTION functions.
- Updated Python version checks to reflect current compatibility. Checks for >=3.4 were changed to >=3, and outdated references to Python 3.3 in comments were updated to simply “Python 3”.
- Scons: Simplified and streamlined the code for the command options. An OrderedDict is now used to ensure more stable build outputs and prevent unnecessary differences in recorded output.
- Improved the executeToolChecked function by adding an argument to indicate whether decoding of returned bytes output to unicode is desired. This eliminates redundant decoding in many places.
Summary
This is a major release that consolidates Nuitka big time.
The scalability work has progressed. Even if there are no immediately visible effects yet, the next releases will show them, as this is the main area of improvement these days.
The memory leaks found were very important and very old. This is the first time that asyncio should work perfectly with Nuitka; it was usable before, but compatibility is now much higher.
Also, this release brings a much nicer help output and handling of plugin help, which no longer requires tricks to see options of a plugin that is not enabled (yet) during --help. The user interface is hopefully cleaner as a result.
February 12, 2025 11:00 PM UTC
Giampaolo Rodola
psutil: drop Python 2.7 support
About dropping Python 2.7 support in psutil, 3 years ago I stated:
Not a chance, for many years to come. [Python 2.7] currently represents 7-10% of total downloads, meaning around 70k / 100k downloads per day.
Only 3 years later, and to my surprise, downloads for Python 2.7 dropped to 0.36%! As such, as of psutil 7.0.0, I finally decided to drop support for Python 2.7!
The numbers
These are downloads per month:
$ pypinfo --percent psutil pyversion
Served from cache: False
Data processed: 4.65 GiB
Data billed: 4.65 GiB
Estimated cost: $0.03
| python_version | percent | download_count |
| -------------- | ------- | -------------- |
| 3.10 | 23.84% | 26,354,506 |
| 3.8 | 18.87% | 20,862,015 |
| 3.7 | 17.38% | 19,217,960 |
| 3.9 | 17.00% | 18,798,843 |
| 3.11 | 13.63% | 15,066,706 |
| 3.12 | 7.01% | 7,754,751 |
| 3.13 | 1.15% | 1,267,008 |
| 3.6 | 0.73% | 803,189 |
| 2.7 | 0.36% | 402,111 |
| 3.5 | 0.03% | 28,656 |
| Total | | 110,555,745 |
According to pypistats.org, Python 2.7 downloads represent 0.28% of the total, around 15,000 downloads per day.
The pain
Maintaining 2.7 support in psutil had become increasingly difficult, but still possible. E.g. I could still run tests by using old PYPI backports. GitHub Actions could still be tweaked to run tests and produce 2.7 wheels on Linux and macOS. Not on Windows though, for which I had to use a separate service (Appveyor). Still, the amount of hacks in psutil source code necessary to support Python 2.7 piled up over the years, and became quite big. Some disadvantages that come to mind:
- Having to maintain a Python compatibility layer like psutil/_compat.py. This translated into extra code and extra imports.
- The C compatibility layer to differentiate between Python 2 and 3 (#if PY_MAJOR_VERSION <= 3, etc.).
- Dealing with the string vs. unicode differences, both in Python and in C.
- Inability to use modern language features, especially f-strings.
- Inability to freely use enums, which created a difference in how CONSTANTS were exposed in terms of API.
- Having to install a specific version of pip and other (outdated) deps.
- Relying on the third-party Appveyor CI service to run tests and produce 2.7 wheels.
- Running 4 extra CI jobs on every commit (Linux, macOS, Windows 32-bit, Windows 64-bit) making the CI slower and more subject to failures (we have quite a bit of flaky tests).
- The distribution of 7 wheels specific for Python 2.7. E.g. in the previous release I had to upload:
psutil-6.1.1-cp27-cp27m-macosx_10_9_x86_64.whl
psutil-6.1.1-cp27-none-win32.whl
psutil-6.1.1-cp27-none-win_amd64.whl
psutil-6.1.1-cp27-cp27m-manylinux2010_i686.whl
psutil-6.1.1-cp27-cp27m-manylinux2010_x86_64.whl
psutil-6.1.1-cp27-cp27mu-manylinux2010_i686.whl
psutil-6.1.1-cp27-cp27mu-manylinux2010_x86_64.whl
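The compatibility layer mentioned above can be pictured with a minimal hedged sketch. This is an illustration in the spirit of psutil/_compat.py, not psutil's actual code; the names PY3 and b() are my own:

```python
import sys

# Hypothetical sketch of a Python 2/3 compatibility shim, illustrating the
# kind of aliasing a module like psutil/_compat.py had to carry.
PY3 = sys.version_info[0] >= 3

if PY3:
    # On Python 3, alias the 2.x-only names to their 3.x equivalents.
    unicode = str
    long = int

    def b(s):
        # Turn a text string into bytes, matching Python 2 str semantics.
        return s.encode("latin-1")
else:
    def b(s):
        # On Python 2, str is already a byte string.
        return s
```

Every call site then uses b(), unicode, and long unconditionally, which is exactly the kind of indirection that dropping Python 2.7 makes unnecessary.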
The removal
The removal was done in PR-2841, which removed around 1500 lines of code (nice!). It felt liberating. In doing so, I still made the promise in the doc that the 6.1.* series will keep supporting Python 2.7 and will receive critical bug fixes only (no new features). It will be maintained in a specific python2 branch. I explicitly kept the setup.py script compatible with Python 2.7 in terms of syntax, so that, when the tarball is fetched from PYPI, it will emit an informative error message on pip install psutil. The user trying to install psutil on Python 2.7 will see:
$ pip2 install psutil
As of version 7.0.0 psutil no longer supports Python 2.7.
Latest version supporting Python 2.7 is psutil 6.1.X.
Install it with: "pip2 install psutil==6.1.*".
As the informative message states, users that are still on Python 2.7 can still use psutil with:
pip2 install psutil==6.1.*
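The setup.py trick described above can be sketched roughly like this. This is a simplified illustration of the technique, not psutil's actual setup.py:

```python
import sys

# Sketch of a setup.py guard that stays Python 2.7 syntax-compatible only so
# that old pip can run it far enough to print a helpful error message.
ERROR_MSG = (
    "As of version 7.0.0 psutil no longer supports Python 2.7.\n"
    "Latest version supporting Python 2.7 is psutil 6.1.X.\n"
    'Install it with: "pip2 install psutil==6.1.*".'
)

if sys.version_info[:2] <= (2, 7):
    # sys.exit() with a string prints it to stderr and exits with status 1.
    sys.exit(ERROR_MSG)

# ...on Python 3, the normal setuptools.setup() call would follow here.
```

The key constraint is that everything up to the version check must parse on Python 2.7, so no f-strings or other 3.x-only syntax can appear in the file.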
Related tickets
February 12, 2025 11:00 PM UTC
EuroPython Society
Board Report for January 2025
The top priority for the board in January was finishing the hiring of our event manager. We’re super excited to introduce Anežka Müller! Anežka is a freelance event manager and a longtime member of the Czech Python community. She’s a member of the Pyvec board, co-organizes PyLadies courses, PyCon CZ, Brno Pyvo, and Brno Python Pizza. She’ll be working closely with the board and OPS team, mainly managing communication with service providers. Welcome onboard!
Our second priority was onboarding teams. We’re happy that we already have the Programme team in place—they started early and launched the Call for Proposals at the beginning of January. We’ve onboarded a few more teams and are in the process of bringing in the rest.
Our third priority was improving our grant programme in order to support more events with our limited budget and to make it more clear and transparent. We went through past data, came up with a new proposal, discussed it, voted on it, and have already published it on our blog.
Individual reports:
Artur
- Updating onboarding/offboarding checklists for Volunteers and Board Members
- Started development of https://github.com/EuroPython/internal-bot
- Event Manager onboarding
- Various infrastructure updates including new website deployment and self-hosted previews for Pull Requests to the website.
- Setting up EPS AWS account.
- Working out the Grant Guidelines update for 2025
- Attending PyConWeb and FOSDEM
- Reviewing updates to the Sponsors setup and packages for 2025
- More documentation, sharing know-how and reviewing new proposals.
Mia
- Brand strategy: Analysis of social media posts from previous years and web analytics. Call with a European open-source maintainer and a call with a local events organizer about EP content.
- Comms & design: Call for proposal announcements, EP 2024 video promotions, speaker mentorship, and newsletter. Video production - gathering videos from speakers, video post-production, and scheduling them on YouTube shorts, and social media.
- Event management coordination: Calls with the event manager and discussions about previous events.
- Grants: Work on new grant guidelines and related comms.
- Team onboarding: Calls with potential comms team members and coordination.
- PR: Delivering a lightning talk at FOSDEM.
Cyril
- Offboarding the old board
- Permission cleanup
- Team selection
- Onboarding new team members
- Administrative work on Grants
Aris
- Worked on the Grants proposal
- Teams selection
- Follow-up with team members
- Board meetings
- Financial updates
- Community outreach: FOSDEM
Ege
- Working on various infrastructure updates, mostly related to the website.
- Reviewing Pull Requests for the website and the internal bot
- Working on the infrastructure team proposal.
Shekhar
- Timeline: Discussion with the Programme Team, and planning to do the same with the other teams.
- Visa Request letter: Setup and Test Visa Request Automation for the current year
- Team selection discussion with past volunteers
- Board Meetings
Anders
- ...
February 12, 2025 03:08 PM UTC
Python Morsels
Avoid over-commenting in Python
When do you need a comment in Python and when should you consider an alternative to commenting?
Documenting instead of commenting
Here is a comment I would not write in my code:
def first_or_none(iterable):
# Return the first item in given iterable (or None if empty).
for item in iterable:
return item
return None
That comment seems to describe what this code does... so why would I not write it?
I do like that comment, but I would prefer to write it as a docstring instead:
def first_or_none(iterable):
"""Return the first item in given iterable (or None if empty)."""
for item in iterable:
return item
return None
Documentation strings are for conveying the purpose of a function, class, or module, typically at a high level.
Unlike comments, they can be read by Python's built-in help function:
>>> help(first_or_none)
Help on function first_or_none in module __main__:
first_or_none(iterable)
Return the first item in given iterable (or None if empty).
Docstrings are also read by other documentation-oriented tools, like Sphinx.
Non-obvious variables and values
Here's a potentially helpful comment:
Read the full article: https://www.pythonmorsels.com/avoid-comments/
February 12, 2025 03:05 PM UTC
Real Python
Python Keywords: An Introduction
Python keywords are reserved words with specific functions and restrictions in the language. Currently, Python has thirty-five keywords and four soft keywords. These keywords are always available in Python, which means you don’t need to import them. Understanding how to use them correctly is fundamental for building Python programs.
By the end of this tutorial, you’ll understand that:
- There are 35 keywords and four soft keywords in Python.
- You can get a list of all keywords using keyword.kwlist from the keyword module.
- Soft keywords in Python act as keywords only in specific contexts.
- print and exec are keywords that have been deprecated and turned into functions in Python 3.
In this article, you’ll find a basic introduction to all Python keywords and soft keywords along with other resources that will be helpful for learning more about each keyword.
Get Your Cheat Sheet: Click here to download a free cheat sheet that summarizes the main keywords in Python.
Take the Quiz: Test your knowledge with our interactive “Python Keywords: An Introduction” quiz. You’ll receive a score upon completion to help you track your learning progress:
Python Keywords
Python keywords are special reserved words that have specific meanings and purposes and can’t be used for anything but those specific purposes. These keywords are always available—you’ll never have to import them into your code.
Python keywords are different from Python’s built-in functions and types. The built-in functions and types are also always available, but they aren’t as restrictive as the keywords in their usage.
An example of something you can’t do with Python keywords is assign something to them. If you try, then you’ll get a SyntaxError. You won’t get a SyntaxError if you try to assign something to a built-in function or type, but it still isn’t a good idea. For a more in-depth explanation of ways keywords can be misused, check out Invalid Syntax in Python: Common Reasons for SyntaxError.
There are thirty-five keywords in Python. Here’s a list of them, each linked to its relevant section in this tutorial:
False | await | else | import | pass
None | break | except | in | raise
True | class | finally | is | return
and | continue | for | lambda | try
as | def | from | nonlocal | while
assert | del | global | not | with
async | elif | if | or | yield
Two keywords have additional uses beyond their initial use cases. The else keyword is also used with loops and with try and except, in addition to conditional statements. The as keyword is most commonly used in import statements, but it’s also used with the with keyword.
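A minimal sketch of the loop-related else (the contains_even helper is illustrative, not from the article): the clause runs only when the loop completes without hitting a break.

```python
def contains_even(numbers):
    for n in numbers:
        if n % 2 == 0:
            found = True
            break
    else:
        found = False  # loop finished without a break
    return found

print(contains_even([1, 3, 4]))  # True
print(contains_even([1, 3, 5]))  # False
```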
The list of Python keywords and soft keywords has changed over time. For example, the await and async keywords weren’t added until Python 3.7. Also, both print and exec were keywords in Python 2.7 but were turned into built-in functions in Python 3 and no longer appear in the keywords list.
Python Soft Keywords
As mentioned above, you’ll get an error if you try to assign something to a Python keyword. Soft keywords, on the other hand, aren’t that strict. They syntactically act as keywords only in certain conditions.
This new capability was made possible thanks to the introduction of the PEG parser in Python 3.9, which changed how the interpreter reads the source code.
Leveraging the PEG parser allowed for the introduction of structural pattern matching in Python. In order to use intuitive syntax, the authors picked match, case, and _ for the pattern matching statements. Notably, match and case are widely used for this purpose in many other programming languages.
To prevent conflicts with existing Python code that already used match, case, and _ as variable or function names, Python developers decided to introduce the concept of soft keywords.
Currently, there are four soft keywords in Python: match, case, type, and _.
You can use the links above to jump to the soft keywords you’d like to read about, or you can continue reading for a guided tour.
Value Keywords: True, False, None
There are three Python keywords that are used as values. These values are singleton values that can be used over and over again and always reference the exact same object. You’ll most likely see and use these values a lot.
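A small illustrative sketch (not from the article) of that singleton behavior:

```python
# True, False, and None are singletons: every reference points to
# the exact same object, so identity checks with `is` are reliable
a = None
b = None
print(a is b)           # True
print(bool(1) is True)  # True: bool() returns the singleton True
```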
There are a few terms used in the sections below that may be new to you. They’re defined here, and you should be aware of their meaning before proceeding:
Read the full article at https://realpython.com/python-keywords/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
February 12, 2025 02:00 PM UTC
EuroPython Society
Changes in the Grants Programme for 2025
TL;DR:
- We are making small changes to the Grant Programme
- We are increasing transparency and reducing ambiguity in the guidelines.
- We would like to support more events with our limited budget
- We’ve introduced per-event caps to make sure grants are distributed fairly and we can support more communities.
- We’ve set aside 10% of our budget for the local community.
Background:
The EPS introduced a Grant Programme in 2017. Since then, we have granted almost EUR 350k through the programme, partly via EuroPython Finaid and by directly supporting other Python events and projects across Europe. In the last two years, the Grant Programme has grown to EUR 100k per year, with even more requests coming in.
With this growth come new challenges in how to distribute funds fairly so that more events can benefit. Looking at data from the past two years, we’ve often been close to or over our budget. The guidelines haven’t been updated in a while. As grant requests become more complex, we’d like to simplify and clarify the process, and better explain it on our website.
We would also like to acknowledge that EuroPython, when traveling around Europe, has an additional impact on the host country, and we’d like to set aside part of the budget for the local community.
The Grant Programme is also a primary funding source for EuroPython Finaid. To that end, we aim to allocate 30% of the total Grant Programme budget to Finaid, an increase from the previous 25%.
Changes:
- We’ve updated the text on our website, and split it into multiple sub-pages to make it easier to navigate. The website now includes a checklist of what we would like to see in a grant application, and a checklist for the Grants Workgroup – so that when you apply for the Grant you already know the steps that it will go through later and when you can expect an answer from us.
- We looked at the data from previous years, and at the size and timing of grant requests. With the growing number and size of grants, and to make the programme more accessible to smaller conferences and conferences happening later in the year, we decided to introduce maximum caps per grant and split the budget equally between the first and second half of the year. We also explicitly split the total budget into three categories: 30% goes to EuroPython Finaid, 10% is reserved for projects in the host country, and the remaining 60% goes to fund other Python conferences. This is similar to the split in previous years, but more explicit and transparent.
Using 2024 data and the budget available for Community Grants (60% of the total), we simulated different budget caps and found a sweet spot at EUR 6,000, where we can support all the requests and most grants remain below that limit. For 2025 we expect a similar or greater number of requests.
| 2024 | 6k cap | 5k cap | 4k cap | 3.5k cap | 3k cap |
Grant #1 | € 4,000.00 | € 4,000.00 | € 4,000.00 | € 4,000.00 | € 3,500.00 | € 3,000.00 |
Grant #2 | € 8,000.00 | € 6,000.00 | € 5,000.00 | € 4,000.00 | € 3,500.00 | € 3,000.00 |
Grant #3 | € 4,000.00 | € 4,000.00 | € 4,000.00 | € 4,000.00 | € 3,500.00 | € 3,000.00 |
Grant #4 | € 5,000.00 | € 5,000.00 | € 5,000.00 | € 4,000.00 | € 3,500.00 | € 3,000.00 |
Grant #5 | € 10,000.00 | € 6,000.00 | € 5,000.00 | € 4,000.00 | € 3,500.00 | € 3,000.00 |
Grant #6 | € 4,000.00 | € 4,000.00 | € 4,000.00 | € 4,000.00 | € 3,500.00 | € 3,000.00 |
Grant #7 | € 1,000.00 | € 1,000.00 | € 1,000.00 | € 1,000.00 | € 1,000.00 | € 1,000.00 |
Grant #8 | € 5,000.00 | € 5,000.00 | € 5,000.00 | € 4,000.00 | € 3,500.00 | € 3,000.00 |
Grant #9 | € 6,000.00 | € 6,000.00 | € 5,000.00 | € 4,000.00 | € 3,500.00 | € 3,000.00 |
Grant #10 | € 2,900.00 | € 2,900.00 | € 2,900.00 | € 2,900.00 | € 2,900.00 | € 2,900.00 |
Grant #11 | € 2,000.00 | € 2,000.00 | € 2,000.00 | € 2,000.00 | € 2,000.00 | € 2,000.00 |
Grant #12 | € 3,000.00 | € 3,000.00 | € 3,000.00 | € 3,000.00 | € 3,000.00 | € 3,000.00 |
Grant #13 | € 450.00 | € 450.00 | € 450.00 | € 450.00 | € 450.00 | € 450.00 |
Grant #14 | € 3,000.00 | € 3,000.00 | € 3,000.00 | € 3,000.00 | € 3,000.00 | € 3,000.00 |
Grant #15 | € 1,000.00 | € 1,000.00 | € 1,000.00 | € 1,000.00 | € 1,000.00 | € 1,000.00 |
Grant #16 | € 2,000.00 | € 2,000.00 | € 2,000.00 | € 2,000.00 | € 2,000.00 | € 2,000.00 |
Grant #17 | € 3,500.00 | € 3,500.00 | € 3,500.00 | € 3,500.00 | € 3,500.00 | € 3,000.00 |
Grant #18 | € 1,500.00 | € 1,500.00 | € 1,500.00 | € 1,500.00 | € 1,500.00 | € 1,500.00 |
SUM | € 66,350.00 | € 60,350.00 | € 57,350.00 | € 52,350.00 | € 48,350.00 | € 43,850.00 |
We are introducing a special 10% pool of money to be used on projects in the host country (in 2025, that’s again the Czech Republic). This pool is set aside at the beginning of the year, with the caveat that we would like to deploy it in the first half of the year. Whatever is left unused goes back to the community pool to be used in the second half of the year.
Expected outcome:
- Fairer Funding: By spreading our grants out during the year, conferences that happen later won’t miss out.
- Easy to Follow: Clear rules and deadlines cut down on confusion about how much you can get and what it’s for.
- Better Accountability: We ask for simple post-event reports so we can see where the money went and what impact it made.
- Stronger Community: Funding more events grows our Python network across Europe, helping everyone learn, connect, and collaborate.
February 12, 2025 01:16 PM UTC
Real Python
Quiz: Python Keywords: An Introduction
In this quiz, you’ll test your understanding of Python Keywords.
Python keywords are reserved words with specific functions and restrictions in the language. These keywords are always available in Python, which means you don’t need to import them. Understanding how to use them correctly is fundamental for building Python programs.
February 12, 2025 12:00 PM UTC
Zato Blog
Modern REST API Tutorial in Python
Modern REST API Tutorial in Python
Great APIs don't win theoretical arguments; they simply work reliably and make developers' lives easier.
Here's a tutorial on what building production APIs is really about: creating interfaces that are practical to use while keeping your systems maintainable for years to come.
Sound intriguing? Read the modern REST API tutorial in Python here.
More resources
➤ Python API integration tutorials
➤ What is a Network Packet Broker? How to automate networks in Python?
➤ What is an integration platform?
➤ Python Integration platform as a Service (iPaaS)
➤ What is an Enterprise Service Bus (ESB)? What is SOA?
➤ Open-source iPaaS in Python
February 12, 2025 08:00 AM UTC
Kushal Das
pass using stateless OpenPGP command line interface
Yesterday I wrote about how I am using a different tool for git signing and verification. Next, I replaced my pass usage. I have a small patch to use the stateless OpenPGP command line interface (SOP). It is an implementation-agnostic standard for handling OpenPGP messages. You can read the whole SPEC here.
Installation
cargo install rsop rsop-oct
Then I copied the bash script from my repository to somewhere on my PATH.
The rsoct binary from rsop-oct follows the same SOP standard but uses the smartcard for signing/decryption. I stored my public key in the ~/.password-store/.gpg-key file, which is in turn used for encryption.
Usage
Nothing has changed in my daily pass usage, except the number of times I type my PIN :)