293: The BEAM as the Universal Runtime
Summary
This episode of the Thinking Elixir Podcast covers several recent developments in the Elixir and Erlang ecosystem, highlighting the BEAM’s expanding role as a versatile runtime.
The hosts first discuss Hackney 3.1, a major release of the longstanding Erlang HTTP client library. The key advancement is the addition of HTTP/3 (QUIC) support implemented purely in Erlang, eliminating the need for a massive external C library. This represents a significant technical achievement, removing 1.3 million lines of C code and improving portability. The conversation contrasts this client-side support with the server-side challenges, noting that web servers like Bandit (used by Phoenix) would require a fundamentally different architecture to support HTTP/3.
Next, they explore Hornbeam, a new project by Benoît Chesneau that acts as an Erlang-powered WSGI/ASGI server for Python applications. This allows traditional (Django, Flask) and modern async (FastAPI) Python web apps to run on the BEAM, leveraging Erlang’s strengths in distribution, concurrency, and resiliency. The hosts discuss the strategic positioning of using Python for its rich ecosystem (especially in AI/ML) while using the BEAM for orchestration and infrastructure. This leads to a related tangent about Pyx, an Elixir library that interprets Python code directly within the BEAM without separate processes, potentially useful for AI agents that generate and execute Python scripts.
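For readers who, like the hosts, had to look up these terms: WSGI and ASGI are small calling conventions rather than frameworks. As an illustrative sketch in plain Python (not Hornbeam code), the two application shapes a server like Hornbeam would invoke look roughly like this:

```python
# Minimal WSGI application: a synchronous callable the server invokes
# once per request (the interface used by Django and Flask).
def wsgi_app(environ, start_response):
    body = b"hello from WSGI"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Minimal ASGI application: an async callable driven by receive/send
# events (the interface used by FastAPI and Starlette).
async def asgi_app(scope, receive, send):
    assert scope["type"] == "http"
    await send({"type": "http.response.start", "status": 200,
                "headers": [(b"content-type", b"text/plain")]})
    await send({"type": "http.response.body", "body": b"hello from ASGI"})
```

A WSGI server calls the synchronous callable once per request and streams the returned iterable; an ASGI server drives the coroutine with receive/send events, which is what makes WebSockets and server-push patterns possible.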
The episode then looks at Easel, a new Elixir library by Jason Stiebs for server-side 2D canvas rendering. Easel provides a drawing API that can target Phoenix LiveView or native WX widgets. The hosts explore the fun demo site featuring a Boids flocking simulation and a pathfinding visualizer (showing algorithms like A*), while also noting practical business applications like generating charts. All rendering logic is computed in Elixir and sent to the client.
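The flocking behavior in the demo comes from Reynolds's three classic steering rules: separation, cohesion, and alignment. A minimal single-step sketch in Python, written for illustration here rather than taken from Easel (the weights are arbitrary tuning values):

```python
import math

# One step of the classic Boids rules (Reynolds, 1986): each boid is a
# (position, velocity) pair of 2D vectors stored as [x, y] lists.
def step(boids, sep_dist=25.0, sep_w=0.05, coh_w=0.005, ali_w=0.05):
    new = []
    for pos, vel in boids:
        others = [(p, v) for p, v in boids if p is not pos]
        ax = ay = 0.0
        # Cohesion: steer toward the average position of the flock.
        cx = sum(p[0] for p, _ in others) / len(others)
        cy = sum(p[1] for p, _ in others) / len(others)
        ax += (cx - pos[0]) * coh_w
        ay += (cy - pos[1]) * coh_w
        # Alignment: match the average velocity of the flock.
        vx = sum(v[0] for _, v in others) / len(others)
        vy = sum(v[1] for _, v in others) / len(others)
        ax += (vx - vel[0]) * ali_w
        ay += (vy - vel[1]) * ali_w
        # Separation: move away from neighbors that are too close.
        for p, _ in others:
            if math.dist(pos, p) < sep_dist:
                ax += (pos[0] - p[0]) * sep_w
                ay += (pos[1] - p[1]) * sep_w
        nvx, nvy = vel[0] + ax, vel[1] + ay
        new.append(([pos[0] + nvx, pos[1] + nvy], [nvx, nvy]))
    return new
```

Each boid reacts only to local information, yet iterating this step produces the emergent swarming behavior the hosts describe.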
Finally, the hosts cover two more updates: Hologram 0.7.0, a milestone release for the Elixir-to-JavaScript transpiler framework that now ports 90% of the Erlang runtime functions thanks to 49 contributors; and TideWave’s new support for OpenCode, an open-source AI coding agent. They discuss the shift away from raw API key usage towards these more feature-rich, vertically integrated coding assistants that offer tools like MCP (Model Context Protocol) integration and codebase awareness.
Recommendations
Concepts
- Boids — An artificial life program developed by Craig Reynolds in 1986 that simulates the flocking behavior of birds using simple rules for individual agents. Used as a fun demo in the Easel library.
- OWASP Top 10 — A list of the top ten most critical web application security risks. Discussed as a useful reference document but not something a developer should try to read cover-to-cover as an introduction to security.
Libraries
- Hackney — A longstanding Erlang HTTP client library. Version 3.1 now supports HTTP/3 (QUIC) using a pure Erlang implementation, removing the need for a large external C library.
- Hornbeam — An Erlang-powered WSGI/ASGI server that allows you to run and supervise Python web applications (like Django, Flask, FastAPI) on the BEAM, leveraging Erlang’s distribution and resiliency.
- Pyx — An Elixir library that provides a Python interpreter running directly within the BEAM, without containers or separate processes. Mentioned as a potential execution environment for AI-generated Python code.
- Easel — An Elixir library for server-side 2D canvas rendering. It provides a drawing API that can target Phoenix LiveView or native WX widgets, useful for visualizations, simulations (like Boids), and charts.
- Hologram — A framework that transpiles Elixir code to JavaScript, allowing you to write LiveView-style components where the logic is in Elixir but executes in the browser. Version 0.7.0 significantly expanded ported Erlang functions.
- TideWave — An AI-powered coding assistant specifically for Elixir development. It now integrates with OpenCode, an open-source AI coding agent CLI.
Tools
- OpenCode — An open-source AI coding agent CLI, similar to Cursor or Cline. It provides a feature-rich environment for AI-assisted coding with tool integration (MCPs) and is now supported by TideWave.
- Potion Shop — An open-source vulnerable Phoenix application, recommended by Paraxial.io’s Michael Lubas as a practical starting point for Elixir developers to learn application security concepts.
Topic Timeline
- 00:00:14 — Introduction and Hackney 3.1 Release — The hosts introduce the episode and immediately dive into the news of Hackney 3.1’s release. They explain that Hackney is a foundational Erlang HTTP client library. The major news is its new support for HTTP/3 (QUIC) implemented in pure Erlang, which removed 1.3 million lines of C code and associated NIFs, making it more portable.
- 00:06:26 — Sponsor Segment: Paraxial.io on OWASP Top 10 — A sponsored segment from Paraxial.io features founder Michael Lubas. He discusses the OWASP Top 10, explaining it’s a list of top web application security risks meant as a reference, not a gospel to be read cover-to-cover. He recommends developers start with practical resources like the vulnerable Phoenix application ‘Potion Shop’ to learn security concepts relevant to Elixir.
- 00:08:09 — Introducing Hornbeam: Python on the BEAM — The hosts introduce Hornbeam, a new project by Benoît Chesneau. It’s an Erlang-powered server that can host and supervise Python web applications (both WSGI like Django/Flask and ASGI like FastAPI). They discuss its value proposition: combining Python’s ecosystem (especially for AI) with the BEAM’s strengths in distribution, concurrency, and resiliency, potentially eliminating the need for tools like Redis or RabbitMQ for clustering.
- 00:13:56 — Pyx: A Python Interpreter in Elixir — Following the interop discussion, David mentions Pyx, a library by the creators of Just Bash. Pyx is a Python interpreter written in Elixir that runs Python code directly within the BEAM, without containers or process isolation. He speculates on a use case where AI agents (like Anthropic’s Sonnet) that generate Python code for task efficiency could use Pyx as their execution environment within an Elixir application.
- 00:17:26 — Easel: Server-Side Canvas Rendering in Elixir — The hosts discuss Easel, a new library by Jason Stiebs for server-side 2D canvas rendering. It provides a drawing API that can target Phoenix LiveView or WX widgets. They explore the fun demo site featuring a Boids flocking simulation and a pathfinding visualizer, noting it’s not just for art but has practical applications like generating charts. All drawing logic is computed in Elixir and sent to the client.
- 00:22:24 — Hologram 0.7.0: Major Transpilation Milestone — The hosts cover Hologram 0.7.0, a major release for the Elixir-to-JavaScript transpiler framework. This release, with help from 49 contributors, ports 150 additional Erlang functions to JavaScript, increasing Erlang runtime coverage from 34% to 90% and overall Elixir standard library readiness to 87%. This is a significant community-driven step towards running more Elixir logic directly in the browser.
- 00:24:07 — TideWave Now Supports OpenCode — The final news item is that TideWave, an AI-powered coding assistant for Elixir, now supports OpenCode. OpenCode is an open-source alternative to AI coding CLIs like Cline, Cursor, and Google’s Gemini CLI. The hosts explain that TideWave is deprecating its simpler ‘bring your own API key’ approach in favor of integration with these more full-featured agents, which offer better experiences with tools like MCPs, planning, and codebase search.
Episode Info
- Podcast: Thinking Elixir Podcast
- Author: ThinkingElixir.com
- Category: Education, How To, News, Tech News, Technology
- Published: 2026-02-24T11:15:00Z
- Duration: 00:28:44
References
- URL PocketCasts: https://pocketcasts.com/podcast/thinking-elixir-podcast/839ad8c0-8685-0138-ee32-0acc26574db2/293-the-beam-as-the-universal-runtime/c5d4edb0-c94b-4602-9cc2-83d051784d91
- Episode UUID: c5d4edb0-c94b-4602-9cc2-83d051784d91
Podcast Info
- Name: Thinking Elixir Podcast
- Type: episodic
- Site: https://podcast.thinkingelixir.com
- UUID: 839ad8c0-8685-0138-ee32-0acc26574db2
Transcript
[00:00:00] hello and welcome to the thinking elixir podcast where we cover the news of the community and
[00:00:07] learn from each other my name is mark erickson and i’m david bernheisel let’s jump into the news
[00:00:14] all right hey you've heard of hackney right it's been around forever yeah it's like it was
[00:00:19] HTTPoison which was like a cover an elixir cover over hackney or HTTPotion which i think is gone
[00:00:26] down not gone gone but it was an ibrowse-backed library i think you know well hackney is still
[00:00:33] alive and kicking and very much so actually so hackney 3.0 has been released and as of this time
[00:00:40] of recording a 3.1 was released that's a major point release let me just put it that way right
[00:00:46] major point release so what's different in hackney 3.0 generally the rule of thumb is just use Req
[00:00:51] and be happy right and you can still do that but hackney now has some pretty differentiated
[00:00:56] like features in it one of the big ones which is what we'll focus on is that it now supports
[00:01:02] QUIC HTTP/3 QUIC is the google name for it but i think it was re-christened as HTTP/3 once it got to
[00:01:10] like the the standards committee right well we should probably first say what hackney is oh true
[00:01:16] right yeah there are gonna be plenty of people who may have heard the term but like i’ve never
[00:01:21] really like used hackney like directly it’s an uh one of the original erlang
[00:01:26] written http clients so yeah if you want to go get and post and you know throw options out there
[00:01:32] you would do that with something like uh hackney you could do that with built-in libraries and
[00:01:37] in otp as well but they just weren’t as ergonomic and that’s why all these other
[00:01:41] http libraries exist and hackney was probably the most well-known one on the erlang side
[00:01:47] and then you have various wrappers in the elixir community and Req you know is probably the
[00:01:52] newest and most ergonomic one of those it’s pipeline based and all but hackney
[00:01:56] is yeah one of the OGs out there it's a library by Benoît Chesneau hopefully i'm not
[00:02:02] pronouncing your name too awfully uh but Benoît has been a staple in the erlang community for a
[00:02:07] long time and uh for more than just hackney i'm sure because of hackney's you know fame and use
[00:02:12] out there what does that mean so http3 that’s a very tough one to support it’s it’s just
[00:02:17] architecturally different than HTTP/1 and HTTP/2 HTTP/3 throws a little udp out there at you so it's just
[00:02:26] different it’s not so tcp i mean it is it does have some tcp in there but i think it’s it has
[00:02:32] udp in there and when you have these kind of fire and forget you know cross your fingers and hope
[00:02:37] that packet makes it kind of like protocols like udp there’s just a lot more logic that needs to
[00:02:42] go into it to make sure that uh you know recovery and error corrections happen that you wait for
[00:02:48] you know packets to come because you got a later one then you have to reshuffle it’s kind of a mess
[00:02:53] i think i already said it but google is the one
[00:02:56] that came up with this they introduced it in their browser first and then brought it to the
[00:03:01] committee the w3c committee i think or whatever the web standards one it's been re-blessed
[00:03:06] as HTTP/3 so both terms are correct QUIC is a little different just as in its earlier form
[00:03:12] but HTTP/3 is the real name now i guess yeah and so generally what it took to get HTTP/3 support
[00:03:18] anywhere was basically bringing in this massive c library that gives you that support in
[00:03:26] erlang and elixir land that means NIFs that means not so portable because now you have
[00:03:31] to compile for a bunch of architectures and all well this is a lot of context sorry i know a little
[00:03:36] bit more about this than maybe i should at the end so so there was a an earlier version of hackney
[00:03:42] that did that that basically brought in one of those libraries i i forget which one it is i
[00:03:47] remember this because i brought it in i needed QUIC support for a project i was doing it brought
[00:03:52] it in and i noticed it started compiling a bunch of c stuff and it took a while i'm like what is
[00:03:56] going on here i had to go you know check what's going on i was like oh wow i gotta
[00:04:00] bring in all this just to get QUIC support well anyway hackney 3.1 was released and now no longer
[00:04:07] needs that c library so it is all built on the erlang QUIC library pure erlang QUIC library
[00:04:14] which is both personally a little a little troubling i don’t know if that’s i don’t know
[00:04:19] if that’s the most uh comprehensive coverage of the entire mess of quick or h3 uh so we’ll
[00:04:26] we’ll find out soon i guess uh but what this means is that they were able to remove quoting
[00:04:31] them here i don’t actually know but it says removed 1.3 million lines of c code through
[00:04:37] ls quick or boring ssl and all the the nif stuff that they had and replaced it with a pure erlang
[00:04:42] quick uh library oh wow so what a i’m sure benoit is very much grayer at this point
[00:04:50] having had to deal with all of that if that proves to all be you know
[00:04:56] working and worthwhile and like yeah like that’s a huge huge accomplishment right um
[00:05:01] for comparison’s sake i know uh phoenix ships with you know bandit by default and bandit does
[00:05:07] not support h3 and because it’s such a massively different architecture it would just it would
[00:05:12] basically be its own library you know i’m not gonna put words in anyone’s mouth on what that
[00:05:17] effort really involves but it would basically be a completely different architecture right
[00:05:21] so i don’t know how long it would take for you know phoenix to see that kind of support to be
[00:05:25] an actual
[00:05:26] server bandit is the server by the way to be an actual h3 you know uh support and it’s unclear
[00:05:32] to me at this very moment if this is just the client side or the server side right because
[00:05:36] hackney is the client side it's not necessarily the server side so those are probably two
[00:05:42] different kinds of efforts uh anyway right i’ll shut up there but that’s the short that’s the
[00:05:48] short version as short as i can get it that hackney 3.1 is released and it now supports
[00:05:54] through pure erlang
[00:05:56] http3 as a client which is great news because most uh browsers and google services all the big
[00:06:04] tech company services all likely support HTTP/3 in their services at this point as well very much
[00:06:10] looking forward to that and if anyone has any more closer experience to how all that stuff works i’d
[00:06:14] love to hear from you you can hit me up on blue sky i’m very curious to see what you think about
[00:06:19] that this episode is sponsored by paraxial.io the only security platform with true
[00:06:26] elixir support i sat down with founder michael lubas to ask this week’s security question
[00:06:31] from multiple security sources i've heard people talk about the OWASP top 10 what is that and how
[00:06:40] does that apply to elixir developers so very broadly speaking i know some OWASP person might
[00:06:44] correct me here broadly speaking the OWASP top 10 is meant to be the top 10 risks for a web
[00:06:51] application security developer hey be aware of this in reality the OWASP top 10 is
[00:06:56] more something like 200 individual problems that they kind of group together in this very
[00:07:01] strange way i think the project is very good and it has a good goal but for a developer that’s
[00:07:08] coming to security this is a common thing you'll see on reddit where they'll say i have no idea
[00:07:11] where to start with security and they'll say read the OWASP top 10 and i don't think that that is what
[00:07:17] you should be doing because it’s it’s very disjointed it’s almost like reading a law
[00:07:21] textbook when you need help with a dispute with your landlord or something
[00:07:26] you should familiarize yourself with security in you know elixir and phoenix uh potion shop
[00:07:32] is the open source vulnerable phoenix application go through that and then try to tie those concepts
[00:07:38] into the OWASP top 10 but don't treat it as something you have to read through all the way
[00:07:43] in one sitting it’s really more of like a reference that’s the intent of the project
[00:07:47] treat it as a helpful tool but it’s not the gospel truth of web application security although it’s
[00:07:53] often treated that way in the industry i definitely see it as a helpful tool
[00:07:56] reach out to michael at paraxial.io to get personalized security feedback for your
[00:08:01] projects and your business that’s at paraxial.io and next up another library update from benoit
[00:08:09] this one is called hornbeam and version 1.0 was released and then shortly after was followed up
[00:08:17] with 1.3 like a couple other iterations so we’re going to talk about what is hornbeam
[00:08:21] there’s a new thing for me i hadn’t heard about this so we’ve got links to the social media
[00:08:26] post the github project and it even has its own little website that’s kind of like a little
[00:08:31] marketing style landing page to explain what this is so what it is is it’s an erlang powered
[00:08:36] wsgi asgi server for python apps so okay i had to go look up some of these terms because i’m
[00:08:44] not fluent in the python space so what this is higher level is this erlang system can host and
[00:08:50] supervise python web applications that implement either wsgi or asgi interface
[00:08:56] which means it can run both traditional synchronous python web apps and the modern async python web
[00:09:03] apps so wsgi stands for web server gateway interface and that’s the classic synchronous
[00:09:09] interface for python web apps used by frameworks like django and flask to communicate with their
[00:09:14] web server asgi is the asynchronous server gateway interface it is the newer asynchronous interface
[00:09:20] that supports concurrent requests and protocols like web sockets used by frameworks like fast api
[00:09:26] and starlette so maybe that's more where you can have asynchronous server push kind of stuff like we're
[00:09:31] familiar with with live view so okay that’s what’s going on in the python space right and now
[00:09:37] hornbeam is saying oh you can host those python applications that have those types of servers
[00:09:43] in erlang it is an erlang powered server for hosting those apps there was a previous version
[00:09:50] of something like this that benoit had also created and worked on called gunicorn this new
[00:09:56] version of hornbeam is nine times faster than that with ten times lower latency and zero failed requests it allows you to run flask fastapi and django on the beam there's another project called uvicorn uv like unicorn with a v instead of an n and that was another thing in this space and so the naming of hornbeam is like horn from unicorn and beam from the beam
[00:10:19] yeah yes it’s like the gunicorn or g unicorn you know it’s like there’s something in there
[00:10:26] so that’s where the name comes from what i think is cool though is you look at the landing page
[00:10:30] kind of the marketing version of this and here’s how they lead with it like how they’re positioning
[00:10:36] this project that says when python meets erlang your python code erlang’s infrastructure mix the
[00:10:43] best of python which is ai web apps with erlang’s distribution concurrency resiliency and safety
[00:10:50] that is an interesting position that elixir and the erlang and the beam are in is that hey we
[00:10:56] really do orchestration and distribution well and sometimes you need libraries that already exist
[00:11:03] and they’re in the python space maybe for doing ml or you know existing projects and like i need
[00:11:10] to leverage that but maybe i can use erlang as the orchestration layer and so i like the points
[00:11:16] that the website made it said why python plus erlang each language excels at different things
[00:11:21] use both under a feature of ai ready it says use python ml
[00:11:26] systems with erlang scale under distributed they're saying you know built-in clustering
[00:11:30] no redis or rabbitmq needed it's resilient you know the whole idea of the beam of let it crash
[00:11:36] supervisors restart failed processes that failed process could be a python app no GIL which is the
[00:11:43] global interpreter lock that erlang handles concurrency python stays simple so yeah it’s a
[00:11:49] very interesting idea for people who are needing to mix different ecosystems like that because i
[00:11:56] talked to these companies and they said oh we’re building this new feature this new thing and we
[00:12:00] want to be able to train our own models to have a competitive advantage for how our users are
[00:12:06] using this data and we want to be able to do our own refinement and tuning of our own models so
[00:12:11] okay you’re going to be needing to do some of those things in python like that’s going to be
[00:12:16] the easy path for that but then maybe i want to have the whole erlang distribution side i can have
[00:12:26] and you know still be able to push these changes over to those other servers that are running back
[00:12:30] there anyway it was interesting another cool thing i saw in there was the idea of using ets
[00:12:36] as a shared space that both the python and the erlang beam could talk and share data and write
[00:12:43] through ets it was interesting stuff huh benoit has been very busy very gray indeed i’m sure
[00:12:50] i’m implying that all this work must be very stressful he could be
[00:12:56] having the time of his of his life i i don’t know yeah he’s doing it all on a beach in a chair
[00:13:00] it’s somewhat related you know you’re talking about a bunch of this interop stuff right so
[00:13:06] you got the server side of it uh which is like this layer in between your actual web server
[00:13:11] which would be uh something like nginx and then the framework which would be like django that’s
[00:13:17] where this kind of sits in between and yeah a lot of interop right so what if you didn’t want
[00:13:23] to interop at all you still needed it to like be
[00:13:26] in the gosh what are all these letters they stick together wsgi yeah the web server gateway
[00:13:32] interface there you go asgi if you want it to be asynchronous which wow sorry to get back to it is
[00:13:38] is that what if you didn't want anything you know to interop with like your elixir your erlang to
[00:13:42] python you know code underneath or you had python or something else was generating
[00:13:47] python but you didn't actually want it to be executed within like the python interpreter
[00:13:53] this is not production code but i did come across this
[00:13:56] library by some of the same folks that did the just bash one that we talked a while back
[00:14:01] about so just bash is another like bash interpreter where it’s not real bash it’s it’s elixir
[00:14:07] interpreted bash you know just like the one i helped create as well so there's
[00:14:13] several interpreters out there right and then there's Pythonx which is about running yeah
[00:14:18] real python code in an isolated you know process and all that is it isolated sorry i said isolated
[00:14:24] i'm not sure that's actually true but anyway it's
[00:14:26] real python code that interops with your elixir code and marshals in between right so they are
[00:14:31] two different execution environments well this other library i came across is called pyx done
[00:14:36] by the same folks that did just bash where it’s running python inside of your elixir app no
[00:14:41] containers no ports no process isolation just regular beam functions right so it's
[00:14:47] an interpreter level kind of stuff on the github they say Pyx.run and you put in some python
[00:14:52] code in there right and it just executes it in real elixir why is this interesting uh
[00:14:56] well it’s not faster you don’t do it for fast you can’t undo years or decades maybe at this point
[00:15:03] of python optimizations and all that stuff right that’s the stuff exists for a reason
[00:15:07] but i came across this article by anthropic by the way sonnet 4.6 yeah just came out
[00:15:12] and one of the ways that they optimized 4.6 with token usage is by not going back and forth so
[00:15:19] much for evaluations uh so for example to accomplish this task instead of
[00:15:26] bunch of back and forth uh with mcp tools for example it will actually just write one bit of
[00:15:32] python code i’ve actually seen this happen it’ll run one bit of python code one go do a script to
[00:15:38] get what it’s looking for on the other side so that way it doesn’t burn as many tokens to do
[00:15:42] what it wants but it’s but it’s being trained to do that in python right so here’s where pyx
[00:15:49] might come in have that be its python uh environment so where it can be connected to
[00:15:56] your actual elixir you know uh environment as well interesting i’m sure i’m missing more here
[00:16:02] to make that a real wow factor here but it is interesting to think that the two major languages
[00:16:07] that these lms are you know debugging in bash and python both now have i’m not going to say
[00:16:13] complete as in 100 as if we’re doing this as a research paper or something and checking all the
[00:16:19] boxes but like a comprehensive let’s just put it that way a comprehensive interpreter that is native
[00:16:26] that can like seriously interop with the rest of your elixir application with all the other nice
[00:16:31] tools that we have you know as well the functional programming the TideWave mcp and the tooling
[00:16:37] out there and you know hex docs now has llms.txt files yeah the llms.txt files like all of these
[00:16:44] things now i’ll just say i don’t know i’ll just say elixir is the best language for lms at this
[00:16:50] point right i think it is i'll shut up about that but it is interesting we'll have a link so go
[00:17:24] ahead and check that out so that was yeah hornbeam that started the whole discussion
[00:17:26] He even has a little demo site, which is pretty fun.
[00:17:29] Quoting him, he says, I've always loved computer art like Boids, and now we can do that in Elixir.
[00:17:35] So what are Boids?
[00:17:36] Say it like you're from Boston.
[00:17:38] It's Boids, right?
[00:17:39] As in like birds, as in a flock of Boids out there, right?
[00:17:43] That actually is some of the origin of the word, like the Eastern pronunciation of birds.
[00:17:49] Yeah.
[00:17:50] Yeah.
[00:17:50] So I’ve got a link to the Wikipedia article about what Boyd’s is.
[00:17:54] It’s B-O-I-D-S.
[00:17:55] But like Jason’s statement that, you know, he’s always loved computer art like Boyd’s.
[00:17:59] And it’s like, that has been true for me, right?
[00:18:01] I loved tinkering with little visual simulations like Boyd’s.
[00:18:07] So just to understand what Boyd’s is, it’s an artificial life program developed by Craig Reynolds back in 1986.
[00:18:14] And it simulates flocking behavior of birds and related group motion.
[00:18:18] Really simply what it is, is you have each of these little entities, right?
[00:18:22] It’s like the little actors.
[00:18:24] Each one of them has its own set of rules.
[00:18:27] Like it has its own orientation of which direction it’s pointing, maybe a current vector of how fast it’s moving.
[00:18:33] And it has rules.
[00:18:35] Like I want to keep at least this much distance from my nearest neighbor.
[00:18:38] If there is a group, I do want to be part of that group.
[00:18:41] So I’m going to move in the direction of the group.
[00:18:43] And, you know, just a couple simple little rules like that.
[00:18:46] And then it’s just putting into a simulator and watch what happens.
[00:18:50] And when they converge and what happens when you put obstacles in there.
[00:18:54] And you’re able to see swarming behavior that feels very natural.
[00:18:59] And it’s kind of the behavior you see like in ant swarms or bird flocking.
[00:19:03] So he was having fun with that.
[00:19:05] But it can do much more than that.
[00:19:06] It’s not just a Boyd simulator.
[00:19:08] His project called Easel was a playground where he could do those kinds of things.
[00:19:12] Yep.
[00:19:13] I’m at a thousand Boyds.
[00:19:15] 34 frames per second.
[00:19:18] It’s pretty fun.
[00:19:19] Yeah, you start at 100.
[00:19:21] So every click is adding 10 more Boids.
[00:19:24] And you get to see that swarming behavior, which is, yeah, it’s kind of interesting to look at.
[00:19:29] Yeah.
[00:19:30] The 2D canvas API, which is probably the most material thing out of here.
[00:19:34] You can draw to a Phoenix live view to maybe native WX, you know, widgets.
[00:19:39] I haven’t looked at that forever now, it seems.
[00:19:42] But that does exist.
[00:19:43] Right.
[00:19:43] So you can do your desktop kind of like window widgets, that kind of stuff.
[00:19:47] That’s what WX helps with.
[00:19:49] So, yeah.
[00:19:49] Any kind of drawing, you know, on a canvas like that’s that’s where Easel comes in.
[00:19:53] Yeah.
[00:19:54] Just focusing on the live view part, you know, for a second, he has like optional like live view components and hooks, you know, that are available.
[00:20:00] So it’s somewhat of a fuller package there, a support for layers and animations and event handling and all that kind of stuff, which is all really cool.
[00:20:08] Not only that, but there’s a demo site.
[00:20:10] So even if you’re not going to like look at it, you can go have fun with it.
[00:20:14] But if you want to put on your business hat, there are some business applications for this.
[00:20:18] You could you could have it do charts, boring bar charts.
[00:20:24] Or you could have it do a smiley face, you know, anything with a 2D canvas and bar charts fit on a 2D canvas.
[00:20:31] Yeah.
[00:20:31] Yeah.
[00:20:32] So what’s fun about it, though, is you can check out the GitHub website and see what does this look like?
[00:20:37] So it’s server side rendered drawing operations where you’re saying here’s this easel is like the expression of a canvas.
[00:20:45] And you define the operations of what types of drawing events you want to have happen.
[00:20:50] And then that gets rendered to the canvas, which would be in live view.
[00:20:54] In the browser or with WX widgets, you know, it could be like in a little application running locally on the user’s computer.
[00:21:02] Yeah.
[00:21:02] So it’s still server rendered.
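To make that flow concrete, here’s a minimal sketch of the idea the hosts describe: drawing operations computed server-side as plain data, then serialized for a client-side canvas to replay. The op names and shapes here are purely illustrative assumptions, not Easel’s actual API.

```python
import json

# Hypothetical sketch of server-side canvas rendering as data.
# The op names below are illustrative, NOT Easel's actual API.

def bar_chart_ops(values, bar_width=40, gap=10, height=200):
    """Turn a list of numbers into a list of canvas drawing operations."""
    peak = max(values)
    ops = [{"op": "clear"}]
    for i, v in enumerate(values):
        bar_height = round(height * v / peak)
        ops.append({
            "op": "fill_rect",
            "x": i * (bar_width + gap),
            "y": height - bar_height,   # canvas origin is top-left
            "w": bar_width,
            "h": bar_height,
        })
    return ops

# The server would push this JSON to the browser (e.g. over a
# LiveView WebSocket), and a small JS hook would replay the ops
# onto a <canvas> element.
payload = json.dumps(bar_chart_ops([3, 7, 5]))
```

The point is that the server owns all the drawing logic; the client is just a dumb playback surface, which is what makes the “still server rendered” framing work.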
[00:21:04] It’s a really interesting experiment.
[00:21:05] I would be curious to find out from Jason if he had some specific need or he’s like, I just want to play with something fun.
[00:21:12] Regardless, I think it’s really cool.
[00:21:13] And it’s definitely worth checking out just to see what you could do and kind of get those brain juices flowing of like, huh, I wonder if I could do this.
[00:21:23] He’s got an example of pathfinding,
[00:21:24] which I always find super interesting.
[00:21:28] And there’s different algorithms for that.
[00:21:31] And this illustrates it.
[00:21:32] So one, I don’t remember what these stand for, but there’s one called DFS.
[00:21:36] Oh, depth first search.
[00:21:37] That’s what it is.
[00:21:38] Depth first search.
[00:21:39] Dijkstra is in there.
[00:21:40] There’s A*.
[00:21:41] I forget what that stands for.
[00:21:42] BFS is breadth-first search, there’s depth-first search, and then greedy, anyway.
[00:21:46] And you can put in random walls in there.
[00:21:48] Again, all server rendered, but you put in your random walls.
[00:21:51] You can even draw your walls, make it nearly impossible for these
[00:21:54] little dots to find each other.
[00:21:57] I mean, you hit start and you can see, yeah, like what the algorithm does to get there.
[00:22:02] The best one out of this is the A star one for sure.
[00:22:05] It’s a widely used intelligent pathfinding and graph traversal algorithm.
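As an aside on what the demo is animating: the core of A* is small. Here’s a minimal, language-agnostic sketch on a grid with walls, using Manhattan distance as the heuristic (this is generic A*, not the demo’s source code).

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* on a 4-connected grid. grid[r][c] == 1 means wall.
    Manhattan distance is an admissible heuristic here, so the
    returned path has optimal length."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    rows, cols = len(grid), len(grid[0])
    # Priority queue entries: (f = g + h, g, node, path so far)
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            nxt = (r, c)
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                if g + 1 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g + 1
                    heapq.heappush(
                        frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt])
                    )
    return None  # walled off: no path exists
```

Swap the heuristic for a constant zero and this degrades to Dijkstra; drop the priority queue for a plain FIFO and you get BFS, which is why the demo can visualize all of them on the same grid.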
[00:22:09] All right.
[00:22:09] That’s easel.
[00:22:10] All of that is computed in Elixir and sent over WebSockets to the canvas, in the case of LiveView, at least.
[00:22:17] And it’s really fun to play with.
[00:22:19] So go check out the demo site and see if there’s a good use that you have for it.
[00:22:23] All right.
[00:22:23] And next up.
[00:22:24] Hologram version 0.7.0 was released.
[00:22:27] So from the social media post, this one had 49 contributors as part of this release, and they have ported 150 Erlang functions to JavaScript.
[00:22:37] So if you remember, Hologram is the framework that says you can take Elixir code and it gets transpiled into JS, which executes in the browser.
[00:22:46] So I’m writing my LiveView-style components of what I want it to look like.
[00:22:50] But all the logic is in Elixir and it turns into JavaScript in the browser.
[00:22:54] And so part of what was required for making that happen is creating all the logic to say, how do we translate this standard library function?
[00:23:03] It’s built into Erlang.
[00:23:04] How do we translate that into JavaScript?
[00:23:06] And so a lot of people jumped in and helped do some of that work.
[00:23:10] So there’s a blog post to go along with this.
[00:23:11] And it says Hologram version 0.7 is a milestone release for the Elixir-to-JavaScript porting initiative with 150 newly ported Erlang functions.
[00:23:21] Erlang runtime coverage has jumped from 34% to 90%,
[00:23:24] and overall Elixir standard library readiness has grown from 74% to 87%.
[00:23:31] So it’s not 100 percent, but you can still do a lot with what’s there.
[00:23:35] Very cool just to see that major milestone.
[00:23:38] And that’s a sign of people saying, this is really interesting.
[00:23:41] I can I can help out.
[00:23:42] I can just do one of these functions and help that get across the line and get a conversion for that.
[00:23:47] And when you go check out the website, it has a whole list of all the functions that they have ported and what’s left to be ported.
[00:23:54] And there’s some discussion stuff on how do we want to handle these types of things?
[00:23:58] So very cool.
[00:23:59] If you want to go check out the hologram release version 0.7, it looks like a good one.
[00:24:02] All right.
[00:24:03] Last one is that TideWave now supports OpenCode.
[00:24:07] So here’s the quote from their blog post about it. It says, with OpenCode support, we would like, and it’s like they’re asking permission,
[00:24:14] we would like to deprecate supporting bring-your-own-key for the Anthropic API, OpenAI API, and OpenRouter.
[00:24:22] And this is because they use,
[00:24:24] and by they, I think they mean TideWave themselves,
[00:24:31] right,
[00:24:32] that’s because they use an in-house custom agent that’s limited in terms of features, such as no MCPs, no skills, no planning, no compaction, et cetera.
[00:24:40] So we recommend that folks migrate to one of Codex, which is OpenAI’s CLI, or Claude Code, or OpenCode instead.
[00:24:48] And that’s the end of the quote.
[00:24:49] What is open code?
[00:24:50] What does all this mean?
[00:24:51] So we’ve talked a lot about Claude Code.
[00:24:54] In particular, it’s kind of a revolutionary AI-assisted coding CLI program meant for the terminal.
[00:25:01] But it has a bunch more logic that it packages up: understanding, prompting, searching for stuff in your code base.
[00:25:09] So it knows how to grep, for example, which is a lot better than literally copying and pasting stuff into something.
[00:25:14] Right. And that’s the most simple, you know, set of examples.
[00:25:17] But there’s all the other things like MCP, the Model Context Protocol, the tools that it can connect to.
[00:25:24] And so, yeah, Claude Code is what helped pioneer that. Codex is OpenAI’s answer to that. And OpenCode, given the word open in there, is kind of a version of all of these CLIs not tied to a particular company.
[00:25:42] It is the open-source version of these AI coding agents.
[00:25:47] It can also connect to, what I forgot to mention,
[00:25:50] Gemini CLI is out there, too.
[00:25:51] Right.
[00:25:52] Google’s.
[00:25:53] So you can be.
[00:25:54] You can be a lot more choosy with what models you want to bring.
[00:25:57] In particular, you could bring your own.
[00:25:59] Let’s say you won the lottery and you got a GPU.
[00:26:06] And a nice one.
[00:26:08] Maybe like where you got multiple GPUs in parallel in one machine.
[00:26:14] And you want to put your own agent on there.
[00:26:17] Maybe not as of this very moment.
[00:26:20] I think you technically can have Claude Code point to anything else,
[00:26:24] as long as it talks in the ways that Anthropic tells it to, API calls and all that kind of stuff.
[00:26:29] But they are free to change that however they want.
[00:26:32] They could try to lock that down.
[00:26:33] Same with all these other ones.
[00:26:35] OpenCode is obviously going to stay open, man.
[00:26:38] Like, we’ll try to support everything we can.
[00:26:40] And it’s open source, you know, as well.
[00:26:42] So, you know, in the simplest way of looking at it, it’s an open-source and open-contribution take on Claude Code.
[00:26:51] Yeah.
[00:26:51] So given that.
[00:26:52] And with the context.
[00:26:54] of what TideWave has posted before about vertical integration being kind of the future of AI-assisted coding, the vertical meaning all the stuff that this agent can help do.
[00:27:07] You simply can’t get away with just raw API keys at this point.
[00:27:11] It’s a very transactional call-response kind of thing.
[00:27:14] There’s so much more that’s now surrounding that to give you a better developer experience.
[00:27:19] And that is what OpenCode, Claude Code, Codex CLI,
[00:27:24] Gemini CLI, that’s what all of those are doing for you at this point.
[00:27:28] And it’s we just can’t ignore that anymore.
[00:27:30] So by providing your own API key, you’re essentially getting a lesser experience.
[00:27:35] Now, that doesn’t mean that you can’t bring your own API key ever.
[00:27:38] It just means that now you will probably want to use that API key with Claude Code specifically, or OpenCode, or anything else.
[00:27:47] They do accept, you know, non-subscription plans.
[00:27:50] They can accept API keys to work.
[00:27:53] So.
[00:27:53] You’re not really losing anything.
[00:27:55] It’s just really guiding people into a better experience.
[00:27:57] So that’s how I’m understanding that, which makes a lot of sense.
[00:28:00] So that’s why they would like to deprecate that bring your own key kind of experience in TideWave.
[00:28:04] So that makes a lot of sense.
[00:28:05] Not to bury what the big feature is here: TideWave now supports OpenCode.
[00:28:10] So if you are using TideWave, go ahead and give that old Hex package a bump in your code base and see what’s new in there.
[00:28:17] I saw the tasks boards, too, which is pretty nice.
[00:28:20] It’s been progressing very nicely.
[00:28:21] I love the steady pace of new things.
[00:28:23] It always keeps you hungry for what’s coming up next.
[00:28:26] And it’s definitely worth paying for, by the way.
[00:28:28] It’s still remarkably better in development for web apps because it’s in the browser and it just has more tools available there.
[00:28:35] Well, that’s all the time we have for today.
[00:28:38] Thank you for listening.
[00:28:39] We hope you’ll join us next time on Thinking Elixir.