I am here for the JIT and improved profiling goodies; one day Python will finally be a proper Lisp replacement.
-- https://www.norvig.com/python-lisp.html
What's Python's story for REPL-driven development?
I usually do REPL-driven development in Python via Emacs, but you can tell it's not the natural way to do things, especially if you start doing stuff with async. But I still feel that it makes me way more productive than I would otherwise be.
It has a Tk-based REPL and debugger in the box, and I guess nowadays Jupyter notebooks are the closest to a Lisp Machines/Interlisp-D kind of development.
There are IDE integrations as well.
The pity is the lack of (compile ...) and (decompile ...), or similar.
Which by the way is available in Julia.
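For comparison, the closest CPython gets out of the box is the compile() builtin plus the dis module, which stop at bytecode rather than machine code; a minimal sketch (not from the thread):

    # compile() produces a bytecode object and dis prints its instructions;
    # neither goes to native code, which is the gap being complained about.
    import dis

    code = compile("x * 2 + 1", "<example>", "eval")
    dis.dis(code)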
I had a colleague who was hostile to any language other than Common Lisp. Except Python, which I assume is just because this page exists. What if Norvig had woken up that day and decided to write about Ruby instead?
Which incidentally has a much better history with JIT adoption than Python, where the community has largely ignored PyPy.
Meanwhile Ruby has had MacRuby from Apple, later canceled, but the main developers went on to create RubyMotion.
Sun toyed with JRuby, which was even officially supported in NetBeans, and then Red Hat supported the project for a while. It was also one of the first dynamic languages on GraalVM, with TruffleRuby. The GraalPy effort only came a couple of years later and is still taking baby steps.
As of 2025, the reference implementation can count on YJIT, MJIT, and TenderJIT, and MRuby 4 brings ZJIT to the party.
Exchanging Lisp for Python, we went backwards with regard to performance in dynamic languages, into a dystopian world where C, C++, and Fortran libraries are "Python" libraries.
Nope, they are bindings, and any language with an FFI can have bindings to those same libraries; e.g. PyTorch can also be used from straight C++, or from Java.
Python community -- meet Schrödinger's cat
Without s-exprs or macros? Without a reader? With its stupid statement/expression divide?
...Right.
There is a good compromise with reflection, attributes, metaclasses, one-line lambdas, and comprehensions.
Now, the lack of machine code generation (something Lisp was doing in the 1960s, Smalltalk in the 1980s, and SELF in the 1990s), and having to fall back on C, C++, and Fortran, is bonkers.
Thankfully this is finally becoming a priority for those willing to sponsor the effort, and kudos to those making it happen.
I would rather use Common Lisp, in something like Allegro, but I will hardly find such a job, so arguing only about language features doesn't take us that far.
A lambda that forces you to define a function elsewhere if you want to do anything nontrivial in it defeats the purpose.
PEP 686 makes me smile
https://peps.python.org/pep-0686/
"PEP 686 – Make UTF-8 mode default" for anyone else wondering
In the improved error message [0], how are they able to suggest nested attributes without a big impact on performance? Or maybe this does have a big impact on performance, and then using exceptions for control flow is deprecated?
    ...
    print(container.area)

> AttributeError: 'Container' object has no attribute 'area'. Did you mean: 'inner.area'?

[0] https://docs.python.org/3.15/whatsnew/3.15.html#improved-err...
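For context, a minimal reconstruction of code that would hit this path; the class layout is a guess based on the quoted example, not the exact snippet from the docs:

    # Hypothetical classes: the attribute lives on a nested object,
    # so 3.15 can suggest 'inner.area' instead of just failing.
    class Inner:
        def __init__(self):
            self.area = 42

    class Container:
        def __init__(self):
            self.inner = Inner()

    container = Container()
    print(container.area)   # AttributeError ... Did you mean: 'inner.area'?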
Here's the relevant diff: https://github.com/python/cpython/pull/137968/files#diff-966...
The search is limited to 20 attributes, and to non-descriptors only, to avoid arbitrary code execution.
I assume constructing AttributeErrors isn't highly performance sensitive.
> using exceptions for control flow is deprecated?
Exceptions are for the exceptional cases - the ones that mean normal operations are being suspended and error messages are being generated. Don't use them for control flow.
In Python, an iterator raises a StopIteration exception to indicate to the for loop that the iterator is done iterating.
In Python, the VM raises a KeyboardInterrupt exception when the user hits ctrl+c in order to unwind the stack, run cleanup code and eventually exit the program.
Python is a quite heavy user of exceptions for control flow.
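To make that concrete, here is roughly what the for statement does under the hood (a sketch, not the literal CPython implementation):

    # Drive the iterator manually; StopIteration is the loop-exit signal.
    it = iter([1, 2, 3])
    while True:
        try:
            item = next(it)
        except StopIteration:
            break
        print(item)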
Typically yes, but not in Python. In Python it is quite common and accepted, and sometimes even recommended as Pythonic to use exceptions for control flow. See iterators, for example.
I really dislike this too, but that’s how it is.
Using exceptions for flow control has always been a bad idea, despite what they might have said. Perhaps they are generating that message lazily though?
On the other hand it's not like Python really cares about performance....
All iterators in Python use exceptions for flow control, as do all context managers for the abort/rollback case, and it is generally considered Pythonic to use single-indexing (EAFP) instead of check-then-get (LBYL) - generally with indexing and KeyError though and less commonly with attribute access and AttributeError.
✅
    try:
        data = collection['key']
    except KeyError:
        data = ..try something else..
❌
    if 'key' in collection:
        data = collection['key']
    else:
        data = ..try something else..
The latter form also has the genuine disadvantage that nothing ensures the two keys are the same. I've seen typos there somewhat often in code reviews.
Last time I measured it, handling KeyError was also significantly faster than checking with “key in collection.” Also, as I was surprised to discover, Python threads are preemptively scheduled, GIL notwithstanding, so it’s possible for the key to be gone from the dictionary by the time you use it, even if it was there when you checked it. Although if you’re creating a situation where this is a problem, you probably have bigger issues.
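If you want to re-check that timing claim yourself, a rough micro-benchmark sketch (hit-only case; results depend on the Python version and on how often the key is actually missing):

    import timeit

    d = {'key': 1}

    def eafp():
        try:
            return d['key']          # fast path when the key is present
        except KeyError:
            return None              # the exception machinery only costs on a miss

    def lbyl():
        if 'key' in d:               # extra lookup even when the key is present
            return d['key']
        return None

    print("EAFP:", timeit.timeit(eafp))
    print("LBYL:", timeit.timeit(lbyl))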
I thought I knew enough about python culture but TIL
https://realpython.com/python-lbyl-vs-eafp/#errors-and-excep...
To me something like
is fine and isn't subject to your disadvantage. You should normally do
Wouldn't this be a little cleaner?
If valid `data` can be zero, an empty string, or anything else “falsy”, then your version won’t handle those values correctly. It treats them the same as `None`, i.e. not found.
:facepalm:
No, this would crash with numpy arrays, pandas series and such, with a ValueError: The truth value of an array with more than one element is ambiguous.
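To see that failure mode concretely, a small sketch (assumes numpy is installed; not from the thread):

    import numpy as np

    data = np.array([1, 0, 2])
    try:
        if data:                 # bool() on a multi-element array is ambiguous
            pass
    except ValueError as exc:
        print(exc)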
That behaves differently (e.g. if collection["key"] == 0).
No, truthiness (implicit bool coercion) is another thing you should avoid. This will do weird things if data is a string or a list or whatever.
it depends on what's in the if blocks
The value in the collection could be the actual value None, that’s different from the collection not having the key.
That's why I said "normally".
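One common way around both objections (falsy values and a stored None) is a sentinel default with dict.get(); a sketch with made-up names:

    _MISSING = object()                      # unique object, can't collide with stored values

    collection = {'key': 0}                  # 0 is falsy but genuinely present
    value = collection.get('key', _MISSING)
    if value is _MISSING:
        value = 'fallback'
    print(value)                             # prints 0, not 'fallback'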
I would like to introduce you to StopIteration.
The new profiling.sampling module looks very neat, but I don't see any way to enable/disable the profiler from code. This greatly limits the usefulness, as I am often in control of the code itself but not how it is launched.
Can definitely think of some places I should use bytearray.take_bytes.
> Python now uses UTF-8 as the default encoding, independent of the system’s environment.
Nice, not specifying the encoding is one of the most common issues I need to point out in code reviews.
You mean the coding= comment? Where are you shipping your code that that was actually a problem? I've never been on a project where we did that, let alone needed it.
The comment you mention applies to source code encoding and it's obsolete for Python 3 since the beginning. This is about something else: https://docs.python.org/3.15/whatsnew/3.15.html#whatsnew315-...
Makes sense, my bad, but even that is something I've never seen. I guess this is mostly a Windows thing? I've luckily never had the misfortune of having to deploy Python code on Windows.
It's a Linux thing too. It bit me in particular when running a script in a container that defaulted to ascii rather than utf-8 locale.
Have you considered reducing review noise by using static analysis?
Yep, ruff has a warning for this exact issue.
Pylint has had it too for at least a decade.
Ruff's rule is derived from Pylint: https://docs.astral.sh/ruff/rules/unspecified-encoding/
encode()/decode() have used UTF-8 as the default since Python 3.2 (soon to be 15 years ago). This is about the default encoding for e.g. the "encoding" parameter of open().
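A sketch of the call the linters flag: pass encoding explicitly so the result does not depend on the locale (with PEP 686 the implicit default becomes UTF-8 anyway); the filename is made up:

    with open('example.txt', 'w', encoding='utf-8') as f:
        f.write('héllo\n')                   # explicit encoding, locale-independent

    with open('example.txt', encoding='utf-8') as f:
        print(f.read())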
Worth mentioning that this is the documentation of 3.15 alpha 3. I feel like we’re better off waiting for a release candidate or the final version before posting this page, in case there are any changes. Most people reading this are going to assume it’s final.
> On POSIX platforms, platlib directories will be created if needed when creating virtual environments, instead of using lib64 -> lib symlink. This means purelib and platlib of virtual environments no longer share the same lib directory on platforms where sys.platlibdir is not equal to lib.
Sigh. Why can't they just be the same in virtual environments? Who cares about lib64 in a venv? Just another useless search path.
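To see where the two directories point on a given interpreter or venv, a quick check (after this change they can differ inside a venv when sys.platlibdir is not "lib"):

    import sys
    import sysconfig

    print(sys.platlibdir)                    # e.g. 'lib64' on some distros
    print(sysconfig.get_path('purelib'))
    print(sysconfig.get_path('platlib'))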
What about making Python 5x faster (the faster-cpython project)?
> faster-cpython project
Seems to have died the same death as Unladen Swallow, Pyston, etc:
https://discuss.python.org/t/community-stewardship-of-faster...
I'm the author of the thread you linked. Community stewardship is actually happening in some form or another now.
3.15 has some JIT upgrades that are in progress. This has a non-exhaustive list of them: https://docs.python.org/dev/whatsnew/3.15.html#upgraded-jit-...
Cool, now I just have to wait until my dependencies support this version.
Doesn't look to me like much got removed that was commonly used. What dependencies do you use that wouldn't automatically work on this version?
It's not that uncommon for libraries to declare an overly strict maximum version, even if the code would actually work, because they can't know that at the time the version constraint is set.
Who'd want to be sure it fully breaks with an update, instead of having a small chance that (part of) it breaks with an update?!
The people who are currently doing so, presumably.
Seeing this reminded me of version 3.14, where π is an infinity expressed through its fractional parts.